Breaking ChatGPT to fix Echo Chambers?

This publication was originally posted on Blog of the APA by John Basl and Omri Leshem.

1. Introduction

There are many ways that generative AI, such as ChatGPT or AI image generators, relates to social epistemology, not least that generative AI has the potential to exacerbate the pollution and degradation of our information ecosystem. But could generative AI systems also serve as social epistemic models, helping us better understand the causes of problematic social epistemic structures and evaluate proposed solutions to challenging social epistemic problems? In this post we pursue this possibility, gesturing at how some recent results in the field of generative AI can be leveraged in social epistemology.

In what follows, we take a close look at the phenomenon of model collapse: when generative AI systems are trained on their own outputs, they have been shown to deteriorate rapidly, quickly coming to generate nonsensical and incoherent outputs. We want to suggest that this phenomenon might inform our views about so-called epistemic bubbles and echo chambers, and help build bridges between social epistemology and computer science.
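The dynamics behind model collapse can be illustrated with a toy simulation that is not from the post itself: each "generation" of a model is simply a Gaussian refit to samples drawn from the previous generation. The particular numbers (sample size, generation count) are illustrative assumptions; the point is that repeatedly training on one's own outputs tends to shrink the spread of the distribution, a crude analogue of the loss of diversity seen in collapsing generative models.

```python
import random
import statistics

def collapse_demo(generations=50, sample_size=10, seed=0):
    """Toy model-collapse sketch: refit a Gaussian to its own samples.

    Each generation draws a small sample from the current fitted
    distribution and refits mean and standard deviation to that sample.
    Finite-sample noise plus repeated refitting drives the variance
    toward zero -- the 'model' collapses onto a narrow, degenerate
    distribution.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    stds = [sigma]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)      # refit to own outputs
        sigma = statistics.pstdev(samples)
        stds.append(sigma)
    return stds

stds = collapse_demo()
print(f"initial std: {stds[0]:.3f}, final std: {stds[-1]:.3f}")
```

The analogy to an echo chamber is loose but suggestive: a community that consumes only its own outputs loses the variance that outside input would otherwise replenish.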

Read more on Blog of the APA.
