On April 29th, 2024, the National Academies hosted a workshop entitled “Evolving Technological, Legal and Social Solutions to Counter Disinformation in Social Media.” The two-day workshop featured interactive brainstorming and aimed to facilitate new research and collaborations oriented toward countering disinformation. Before the event, a call for solutions was issued to academia, industry, journalism, civil society, policy, and government; over 100 ideas and initiatives were submitted, 14 of which were featured at this collaborative and generative workshop.
NULab faculty member John Wihbey participated in the panel “New Approaches to Content Moderation.” Wihbey spoke about his research evaluating the potential advantages and risks of using AI-powered bots, such as “chatmods” or “modbots,” to address disinformation, an approach he expects many social media platforms to adopt. Chatbots could provide assistance, mediation, warnings, or counter-speech to users, but Wihbey asks, “do we want them doing this?”
Wihbey remarked that “we need to get ahead of this,” highlighting the need to develop ethical frameworks around issues like beneficence, justice, and explicability in these contexts. “I’d love for others to join in, in thinking through this problem,” he said, adding that “maybe we could make a difference in terms of making sure that companies, if they go in this direction in a big way, proceed ethically.”
You can listen to the full panel, “Content Moderation,” here. You can read more about Wihbey’s panel, and the workshop’s other panels, in “Brainstorming Solutions to Disinformation.”