Deepfakes and AI-generated images are now ubiquitous, making it increasingly difficult to sort fact from fiction. According to one source, the deepfake technology software market is a $73 billion industry. AI-based tools such as DALL-E and Midjourney, which are trained on tens of thousands of actual human faces, can produce startlingly lifelike images of people.
Advances in AI image generation have precipitated a wave of deepfakes that have stirred controversy and targeted pop stars, celebrities and politicians such as Taylor Swift, Scarlett Johansson and Donald Trump. Explicit deepfakes of Swift recently prompted Elon Musk to block some searches for the singer on X, while deepfakes of politicians — including AI-generated voices in robocalls — threaten to undermine the democratic process.
Don Fallis, professor in the Department of Philosophy and Religion and Khoury College of Computer Sciences at Northeastern University, has written about the challenges that deepfakes and AI systems pose to society writ large. Right now, he sees algorithmic tools for detecting deepfakes, and the generative AI tools themselves, as locked in an “arms race.” In an information-rich world, he says, this ratcheting up of AI technology poses a threat to human knowledge.