
Breaking ChatGPT to fix Echo Chambers?

This publication was originally posted on Blog of the APA by John Basl and Omri Leshem.

1 Introduction

There are many ways that generative AI, such as ChatGPT or AI image generators, relates to social epistemology, not least of which is that generative AI has the potential to exacerbate the pollution and degradation of our information ecosystem. But could generative AI systems also serve as social epistemic models, helping us better understand the causes of problematic social epistemic structures and evaluate proposed solutions to challenging social epistemic problems? In this post we pursue this possibility, gesturing at how some recent results in the field of generative AI can be leveraged in social epistemology.

In what follows, we take a close look at the phenomenon of model collapse. When generative AI systems consume their own outputs, they have been shown to deteriorate rapidly, quickly degenerating into nonsensical and incoherent outputs. We want to suggest that this phenomenon might inform our views about so-called epistemic bubbles and echo chambers, and build bridges between social epistemology and computer science.
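The dynamic behind model collapse can be illustrated with a deliberately simple toy sketch (our own illustration, not the experiments the post discusses): treat a "model" as nothing more than an empirical distribution over tokens, and "retrain" it each generation on samples drawn from its own previous outputs. Because each new training corpus is a resample of the old one, rare tokens are silently dropped and never recovered, so diversity can only shrink, an analogue of the tail-loss and homogenization observed in real model collapse.

```python
import random

def train_on_own_outputs(vocab_size=20, generations=1000, seed=0):
    """Toy model-collapse sketch (hypothetical illustration).

    The 'model' is just an empirical distribution over tokens; each
    generation it is retrained on samples drawn with replacement from
    its own previous outputs. Returns the number of distinct tokens
    surviving at each generation.
    """
    random.seed(seed)
    corpus = list(range(vocab_size))      # generation 0: maximally diverse
    diversity = [len(set(corpus))]
    for _ in range(generations):
        # Next generation's training data = samples of the current model.
        # Since every sample comes from the current corpus, no lost token
        # can ever reappear: diversity is monotonically non-increasing.
        corpus = random.choices(corpus, k=vocab_size)
        diversity.append(len(set(corpus)))
    return diversity

div = train_on_own_outputs()
```

Running this, the diversity curve never rises and, over enough generations, the corpus collapses toward a handful of repeated tokens. The echo-chamber analogy suggests itself: a community that consumes only its own outputs loses epistemic diversity in much the same one-way fashion.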

Read more on Blog of the APA.
