The primary aim of this project is to provide robust social and ethical analysis and evaluation of different approaches to online content labeling. The project’s central research question is: How can online social media platforms, such as YouTube, Facebook, and Twitter, meet the needs and demands of society in terms of labeling informational content in an ethical and effective way? The project includes a focus on labeling to address scientific misinformation related to COVID-19, as well as a focus on how best to incorporate fact-checking into labeling efforts. The project draws on knowledge gained from current attempts at online content labeling in general, as well as from strong analogs to content labeling, such as in library sciences and food labeling. The project incorporates international, cross-cultural perspectives and the views of global researchers. Overall, this project will provide a framework for identifying best practices in online content labeling, as well as for identifying considerations that need to be addressed in order for different approaches to be successful from both ethical and efficacy perspectives. Support for this project comes from Northeastern’s Office of the Provost and Facebook, Inc.
Prof. John Wihbey, Lead Investigator, Assistant Professor of Journalism and Media Innovation
Prof. Don Fallis, Professor of Philosophy and Computer Science
Prof. Kay Mathiesen, Associate Professor of Philosophy
Prof. Ronald Sandler, Professor of Philosophy, Director, Ethics Institute
Dr. Briony Swire-Thompson, Research Scientist, Network Science Institute
Dania Alnahdi (BS in Computer Science & Design)
Gabriela Compagni (BS in Philosophy, Minor in Data Science)
Garrett Morrow (PhD in Political Science)
Jessica Montgomery Polny (MS in Media Advocacy)
Nicholas Miklaucic (BS in Data Science & Behavioral Neuroscience)
Roberto Patterson (MS in Media Advocacy)
Dr. Matthew Kopec, Associate Director, Ethics Institute
Analyzing how the news media cover the topic of content moderation and labeling, as well as how the public absorbs and spreads this coverage, can provide important insights into whether the interests and concerns of news producers line up with those of the public. In this study, we used the open-source analytical platform Media Cloud to filter thousands of news stories using targeted keywords relating to content moderation, and then searched for common links among articles, such as popular phrases, people, corporate entities, or themes. In tandem with this examination, we assessed the extent to which the news content was viewed by the public and shared on social media, and gauged the public’s interest in the topic over time. Our results should be of interest to corporate policymakers, researchers, and regulators along at least two dimensions. First, there is a substantial difference between which aspects of content moderation and labeling seem important to news producers (measured by media inlinks to articles) and which seem important to news consumers (measured by the number of, e.g., Facebook shares). Second, examining the keywords that appear most often across content moderation news stories, such as “misinformation” and “removal,” provides further insight into which concerns are salient to the public. It is possible that an over-reliance on news coverage has led stakeholders in the content moderation debate to misjudge what really matters to the public, in which case our examination could help content moderators better align their efforts with public concerns.
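The producer-versus-consumer comparison described above can be illustrated with a minimal sketch: rank the same set of articles once by media inlinks (producer salience) and once by social shares (consumer salience), then look at the per-article rank gap. The article records, field names, and numbers below are hypothetical illustrations, not the study’s actual data or code.

```python
# Sketch: compare what seems important to news producers (media inlinks)
# versus news consumers (social shares) by rank-ordering the same
# articles two ways. All records here are hypothetical examples.

def rank_by(articles, key):
    """Return {title: rank}, with rank 1 for the highest value of `key`."""
    ordered = sorted(articles, key=lambda a: a[key], reverse=True)
    return {a["title"]: i + 1 for i, a in enumerate(ordered)}

def rank_gaps(articles):
    """Per-article gap: producer rank (inlinks) minus consumer rank (shares).

    Negative values mean producers ranked the story higher than consumers did;
    positive values mean consumers ranked it higher than producers did.
    """
    by_inlinks = rank_by(articles, "inlinks")
    by_shares = rank_by(articles, "shares")
    return {t: by_inlinks[t] - by_shares[t] for t in by_inlinks}

articles = [
    {"title": "Platform removes posts", "inlinks": 120, "shares": 900},
    {"title": "Fact-check label study", "inlinks": 300, "shares": 150},
    {"title": "Misinformation explainer", "inlinks": 80, "shares": 4000},
]

gaps = rank_gaps(articles)
```

A large spread in these gaps across many articles would correspond to the divergence between producer and consumer interest that the study measures.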
Dr. Briony Swire-Thompson & Nicholas Miklaucic
The backfire effect occurs when a correction increases an individual’s belief in the very misconception it targets, and it is often cited as a reason not to correct misinformation. The current study aimed to test whether correcting misinformation backfires more than a no-correction control, and whether item-level differences in backfire rates were associated with (1) measurement error or (2) theoretically meaningful attributes related to worldview and familiarity. In two near-identical longitudinal pre/post studies with 920 participants from Prolific Academic, participants rated 21 misinformation items and 21 facts and were assigned to either a correction condition or a test-retest control. We found that no item backfired more in the correction condition than in the test-retest control or relative to the initial belief rating. Item backfire rates were strongly correlated with item reliability and did not correlate with importance/worldview. Familiarity and backfire rate were significantly negatively correlated, though familiarity was itself highly correlated with reliability.
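The item-level analysis described above can be sketched as follows: compute, per item, the share of participants whose belief increased after a correction (the backfire rate), then correlate those rates with an item-level reliability estimate. The ratings, helper names, and numbers below are hypothetical illustrations, not the study’s actual data or code.

```python
# Sketch: per-item backfire rate (fraction of participants whose belief in
# a misinformation item increased after a correction) and a plain Pearson
# correlation for relating backfire rates to item reliability.
# All data here are hypothetical.
from math import sqrt

def backfire_rate(pre, post):
    """Fraction of participants whose post-correction belief exceeds pre."""
    return sum(b > a for a, b in zip(pre, post)) / len(pre)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pre/post belief ratings (0-10 scale) for one item in the
# correction condition; one of five participants' belief increased.
pre = [8, 7, 9, 6, 8]
post = [3, 2, 9, 7, 4]
rate = backfire_rate(pre, post)
```

In the study’s framing, item reliability could be estimated from the test-retest control (e.g., the correlation between an item’s two ratings), and the key result is how strongly such reliability tracks the per-item backfire rates.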
Jessica Montgomery Polny, Dania Alnahdi & Gabriela Compagni
This website constitutes a library of content labeling examples and policy methods from eight major social media platforms: Facebook, Instagram, Twitter, YouTube, Reddit, TikTok, WhatsApp, and Snapchat. Content can be filtered within the tool by cross-platform labeling method, such as providing context, publisher information, and warning notices, as well as by topical filters like ‘COVID-19’ and ‘U.S. election.’ Furthermore, integrated qualitative analysis of platform policies provides a basis for evaluating platform effectiveness. This evaluation is based on how platforms (a) consistently apply labeling criteria, (b) uphold the scope and responsibility of how content is evaluated, and (c) operate to fulfill the interests of users and other stakeholders of information. By correlating specific content examples with labeling policies, the library is accessible not only to average social media users who want to understand how their content is moderated, but also to those who wish to conduct further research on content labeling and misinformation.
Prof. Ronald Sandler, Prof. John Wihbey & Dr. Matthew Kopec
As social media companies have sought to navigate the difficult challenges of addressing misinformation or borderline violating content on their platforms, a new suite of strategies that focus on information labeling has emerged. Referred to variously by companies and researchers as “inform,” “disclose,” “context,” or “friction” strategies, operationally these may involve leveraging third-party fact-checking teams; the presentation of contextual information panels, interstitials, or boxes next to the content in question; or a wide variety of other labeling approaches that attempt to provide platform users with more information about questionable content. Often, these “soft” interventions may seem ethically appealing compared with “hard” alternatives such as removal (censorship) or algorithmic reductions in visibility (black box strategies that may lead to controversy and mistrust). Labeling may also be considered more in keeping with free speech traditions that draw on marketplace-of-ideas concepts. Yet, we will argue in this article, there are many unresolved questions about both the ethics of labeling and about audience reception and comprehension – questions of social epistemology. This article will explore these questions and situate these emerging labeling strategies in a wider context, drawing on areas such as nutrition/food/product labeling and the myriad strategies attempted by generations of news media organizations. We will outline a framework for thinking more systematically about issues such as scope, evaluation, authority, and objectivity.
Garrett Morrow, Dr. Briony Swire-Thompson, Jessica Montgomery Polny, Dr. Matthew Kopec &
Prof. John Wihbey
Online platforms have a toolbox of content moderation options, such as labeling, algorithmic sorting, and removal. A content label is a visual and/or textual attachment to a piece of user-generated content intended to contextualize that content for the viewer; examples include fact-checks and additional information. At their essence, content labels are simply information about information. If a social media platform decides to label a piece of content, how does the current body of social science inform the labeling practice? Academic research into content labeling is nascent, but growing quickly; researchers have already made strides toward understanding labeling best practices to deal with issues such as misinformation, conspiracy theories, and misleading content that may affect everything from voting to personal health. We set aside normative or ethical questions of labeling practice, and instead focus on surfacing the literature that can inform and contextualize labeling effects and consequences. This review of a kind of “emerging science” summarizes the labeling literature to date, highlights gaps for future research, and discusses important considerations for social media platforms. Specifically, this paper discusses the particulars of content labels, their presentation, and the effects of various label formats and characteristics. The current literature can help guide the usage and improvement of content labels on social media platforms and inform public debate and policy over platform moderation.
Roberto Patterson & Prof. John Wihbey
Although the political and media environments in the US tend to buffer the effects of misinformation, elsewhere around the globe misinformation can have near-immediate consequences. For example, in Nigeria in June 2018, horrific pictures of a mass grave site were falsely attributed to anti-Christian violence perpetrated by Fulani Muslims, and ten Muslim men were later killed in revenge. Most investigation of misinformation, particularly regarding international markets and populations, has been conducted through news reporting, fact-checking resources, and legislation. What is needed now is to hear from a variety of voices on this very subject in a real and non-static mode. To this end, we will be conducting interviews with academics, journalists, fact-checkers, and activists from countries in Europe, Africa, Asia, and the Americas to create a comprehensive, grounded look at the problem as they currently understand and experience it. We aim to capture the nuances and subtleties that rarely appear in the literature on misinformation. We believe that through a collection of varied voices, we can bring the issue of misinformation and disinformation down to a level that carries significant weight and is palatable to the average listener. To promote public discourse regarding misinformation, disinformation, and content moderation, we aim to produce a digestible conversation bolstered by the voices of a diverse group of interviewees.
Garrett Morrow & Gabriela Compagni
Local news sources in the United States have been dwindling for years. Although newsrooms are shrinking, the American public generally trusts its local news sources. Crisis events like the COVID-19 pandemic are circumstances in which people actively search for information, and some of what they find will inevitably be misinformation, given the volume of misinformation being created and the affordances of social media services that encourage viral spread. It is critical to understand whether local news is spreading misinformation or acting as a cross-cutting information source. This study uses local news data from a media aggregator and mixed methods to analyze the relationship between local news and misinformation. Findings suggest that local news sources serve as cross-cutting information sources but occasionally reinforce misinformation. We also find a worrying increase in anti-mask stories, and an accompanying decrease in pro-mask stories, after a mask mandate is enacted.
Prof. John Wihbey & Garrett Morrow
For over a century, the metaphor of the “Marketplace of Ideas” has been central to Americans’ conceptualization of the First Amendment. While the metaphor expresses an aspirational hope for how Americans want freedom of expression to work, and may have been more applicable to an earlier period in American history, it does not fit the current environment of information exchange through the internet and social media. Technological affordances, network structure, and human behavior shape social conditions online and, in turn, limit the usefulness of treating these interactions as a marketplace. Furthermore, the ill-suited marketplace metaphor poorly frames society’s discussion of how content should be moderated online. In this paper, we describe why Justice Oliver Wendell Holmes Jr.’s metaphor is out of date and argue for its replacement with a new model that reflects the contemporary online, networked informational environment. Additionally, we suggest how to better conceptualize content moderation in light of the new model.
with Dr. Briony Swire-Thompson and Prof. John Wihbey as collaborators