In mid-March, as the Russian invasion of Ukraine crept into its third week, an unusual video began making the rounds on social media and was even broadcast on the television channel Ukraine 24 after hackers compromised the station. The video appeared to show Ukrainian President Volodymyr Zelenskyy, looking stilted, his head moving while his body remained largely motionless, calling on the citizens of his country to stop fighting Russian soldiers and surrender their weapons. He had already fled Kyiv, the video claimed.
Except those weren’t the words of the real Zelenskyy. The video was a “deepfake”: synthetic content constructed using artificial intelligence. To make a deepfake, creators train computer models on footage of a real person until the models can mimic that person’s face and voice, producing what appears to be an authentic video. Shortly after the deepfake was broadcast, it was debunked by Zelenskyy himself, removed from prominent online platforms like Facebook and YouTube, and ridiculed by Ukrainians for its poor quality, according to the Atlantic Council.
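For readers curious what “training computers to mimic real people” looks like in practice, the sketch below illustrates one well-known design behind classic face-swap deepfakes: a single shared encoder paired with one decoder per identity. This is a minimal, hypothetical PyTorch sketch of that general idea, not the method used to produce the Zelenskyy video; all class names, layer sizes, and the train_step helper are invented for illustration.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
# idea behind classic face-swap deepfakes. Hypothetical and simplified;
# real systems add face detection, alignment, and far larger networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector; one per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's faces
decoder_b = Decoder()  # trained only on person B's faces
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    """One step: each decoder learns to rebuild its own person's faces
    through the shared encoder, which learns identity-agnostic features
    such as pose and expression."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# The "swap": encode a face of person A, then decode with person B's
# decoder, yielding B's likeness driven by A's expression and pose.
with torch.no_grad():
    face_a = torch.rand(1, 3, 64, 64)  # stand-in for a real face crop
    fake_b = decoder_b(encoder(face_a))
```

The key trick is that the two decoders share one encoder: because the encoder must serve both identities, it learns a representation of expression and pose that transfers between them, which is what makes the swap possible.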
But the speed with which the video was discredited doesn’t mean it caused no harm. In an increasingly polarized political climate, where media consumers may believe whatever reinforces their biases regardless of a piece of content’s apparent legitimacy, deepfakes pose a significant threat, warns Don Fallis, a professor of computer science and philosophy at Northeastern University.