This post was originally published on the Blog of the APA by John Basl and Omri Leshem.
1 Introduction
There are many ways that generative AI, such as ChatGPT or AI image generators, relates to social epistemology, not least of which is that generative AI has the potential to exacerbate the pollution and degradation of our information ecosystem. But could generative AI systems also serve as social epistemic models, helping us better understand the causes of problematic social epistemic structures and evaluate proposed solutions to challenging social epistemic problems? In this post we pursue this possibility, gesturing at how some recent results in the field of generative AI can be leveraged in social epistemology.
In what follows, we take a close look at the phenomenon of model collapse. When generative AI systems consume their own outputs, they have been shown to deteriorate rapidly, soon producing nonsensical and incoherent results. We want to suggest that this phenomenon might inform our views about so-called epistemic bubbles and echo chambers, and might open new bridges between social epistemology and computer science.
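To make the dynamic concrete, here is a minimal sketch of how such collapse can be simulated; this is our own illustration, not a model from the original post, and it assumes a toy "generative model" that simply fits a Gaussian to its training data. Each generation is trained only on samples produced by the previous one, and because each refit slightly underestimates the data's spread on average, the fitted variance tends to drift toward zero over generations:

```python
# Toy illustration of model collapse: repeatedly fit a Gaussian to
# samples drawn from the previous generation's fitted model.
# (Illustrative assumption only; real generative models collapse
# through analogous but far more complex dynamics.)
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=25)

for gen in range(41):
    # "Train" the model: estimate the mean and spread of its data.
    mu, sigma = data.mean(), data.std()
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
    # The next generation trains only on this model's own outputs.
    data = rng.normal(loc=mu, scale=sigma, size=25)
```

Running this prints a fitted sigma that shrinks across generations: in this toy setting the model's outputs grow progressively less diverse, mirroring in miniature how recursively trained generative models lose the tails of their original training distribution.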
Read more on Blog of the APA.