
Breaking ChatGPT to fix Echo Chambers?


This post was originally published on the Blog of the APA by John Basl and Omri Leshem.

Introduction

There are many ways that generative AI, such as ChatGPT or AI image generators, relates to social epistemology, not least of which is that generative AI has the potential to exacerbate the pollution and degradation of our information ecosystem. But could generative AI systems also serve as social epistemic models, helping us better understand the causes of problematic social epistemic structures and evaluate proposed solutions to challenging social epistemic problems? In this post we want to pursue this possibility and gesture at how some recent results in the field of generative AI can be leveraged in social epistemology.

In what follows, we take a close look at the phenomenon of model collapse. When generative AI systems consume their own outputs, they have been shown to deteriorate rapidly, quickly coming to generate nonsensical and incoherent outputs. We want to suggest that this phenomenon might inform our views about so-called epistemic bubbles and echo chambers, and make possible bridges between social epistemology and computer science.
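The dynamic behind model collapse can be illustrated with a toy simulation (a hypothetical sketch for intuition, not the authors' experiment or an actual language model): repeatedly fit a simple statistical model to samples drawn from the previous generation's fitted model. Because each generation estimates its parameters from a finite sample of its predecessor's output, estimation noise compounds, and the distribution's variance tends to collapse over generations, analogous to how a model trained on its own outputs loses the diversity of the original data.

```python
import random
import statistics

def self_training_sim(n_samples=50, generations=2000, seed=0):
    """Toy model-collapse simulation: each generation fits a Gaussian
    to a finite sample drawn from the previous generation's Gaussian.

    Parameter names and values here are illustrative assumptions.
    Returns the history of fitted standard deviations.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real data" distribution
    history = [sigma]
    for _ in range(generations):
        # Sample synthetic "training data" from the current model.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # Refit the model to its own outputs (MLE estimates).
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # biased low on average for finite n
        history.append(sigma)
    return history

hist = self_training_sim()
print(f"initial sigma = {hist[0]:.3f}, final sigma = {hist[-1]:.3f}")
```

Under repeated resampling, the expected variance shrinks by roughly a factor of (1 - 1/n) per generation, so diversity drains away even though no single step looks dramatic. This loss of tails and variety is one way to think about what an echo chamber does to an information ecosystem.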

Read more on Blog of the APA.
