Self-identified liberals and conservatives broadly agree that social media companies should add warning labels to posts that contain misleading information, or that could lead to the spread of misinformation, according to a new study by Northeastern researchers in the College of Arts, Media and Design.
Much of the polling on content labeling has been conducted around the U.S. presidential election. But the results of the national survey, published on Wednesday, may speak to new concerns about misinformation during the COVID-19 pandemic, particularly surrounding the use of vaccines and other health protocols, says John Wihbey, associate professor of journalism and media innovation at Northeastern and co-author of the study.
“We’re in a new moment, in a new phase of the pandemic—a moment where we can get a slightly purer sense of what the public thinks about these issues,” Wihbey says.
Over the last several years, social media companies such as Twitter and Facebook have labeled millions of posts as misinformation, including some from former president Donald Trump, who was permanently suspended from the platforms in the aftermath of the Jan. 6 attack on the U.S. Capitol building that was perpetrated by his supporters.
Trump’s claims of widespread voter fraud during last year’s presidential election, which have been debunked, and the insurrection that followed sparked a fierce debate over the responsibility of tech companies in monitoring what sorts of information users can share, including limiting or removing so-called fake news, hate speech, and content otherwise considered problematic.
Over the summer, the team of Northeastern researchers polled more than 1,400 people in the U.S. through Prolific, an academic survey platform. Half of the participants said they use Twitter occasionally or more frequently, and 68% said they use Facebook occasionally or more frequently.
The survey was published jointly with Northeastern’s Ethics Institute as part of a broader effort examining potentially new approaches to content labeling on social media platforms. The study’s co-authors include Garrett Morrow, a doctoral student studying political science; Myojung Chung, assistant professor of journalism and media advocacy; and Mike Peacey, associate professor of economics.
The study found that 92.1% of liberals, 60.1% of conservatives, and 78.4% of moderates “strongly or somewhat agree” that social media platforms should use labels to inform users about posts that contain misleading information. Such labels have been used to identify misinformation, such as Twitter’s “fact check” labels, and warn users about potentially graphic or harmful posts, such as the platform’s sensitive media warnings.
Participants also expressed that they encounter “problematic content”—misleading or incorrect information and hate speech—often while using the social platforms. The researchers don’t attempt to define misinformation or problematic content in the study, Wihbey says, opting instead to rely on participants’ perceptions of such problems in answering the survey questions.
The researchers also note that participants showed a high degree of “overconfidence bias,” meaning they said that they trusted their own abilities to discern misleading statements and misinformation online, but expressed distrust in others’ abilities to do the same.
The significant bipartisan agreement on labeling was slightly surprising, Wihbey says, given how polarizing the issue of content moderation was in the days following the election. Many conservatives opposed the Trump ban—and banning in general—saying it amounts to censorship.
But the study also confirmed some of these partisan differences in opinion over how best to approach content moderation, with 63.2% of conservatives saying that labeling Trump’s posts, as opposed to banning him, was enough to deal with his “violating messages.” That’s compared to the more than 80% of liberals who thought that more severe action was necessary.
The study comes as governments try to exert control over the tech giants’ moderation policies. Just this week, Texas Gov. Greg Abbott, a Republican, signed a bill into law requiring that social media companies disclose their content moderation policies and create an appeals process for banned users. Under the new law, users could sue companies to get their accounts reinstated. Florida approved a similar law earlier this year.
Democrats have also tried to influence the companies’ policies. Over the summer, President Joe Biden urged Facebook to take swifter action against posts that spread COVID-19 misinformation, saying the bad information circulating on the platform about the safety and efficacy of the vaccines was “killing people.”
“There is a huge need to try to figure out what tools and methods we need to use to combat disinformation and misinformation,” Wihbey says, summarizing the survey sentiment. “At the same time, I think people don’t believe that shutting down accounts and disabling share buttons is the [only] way to go.”
Wihbey says the study may indicate that the public is coming to some sort of middle ground.
“We find that people want labels to link them to credible sources for checking, prepare them for thinking critically about misinformation, and slow the spread of misinformation by warning people about the content they may be trying to share,” its authors state.