Fast Company, August 2024
Earlier this year, the journalist Julia Angwin and Princeton professor Alondra Nelson tested leading AI models' ability to answer questions about elections, such as voter registration requirements. Angwin and Nelson rated GPT-4, Gemini, Mistral, Claude, and Llama 2 on bias, accuracy, completeness, and harmfulness. Overall, the models performed poorly: half of their responses were inaccurate, and more than a third were rated by the researchers as incomplete, if not harmful. And while there have been improvements since, the Washington Post ran similar tests as recently as this summer and found that Alexa couldn't even correctly say who won the 2020 election.
But these important experiments tell only part of the story about AI and civic education. As we enter the homestretch of the presidential election, headlines about deepfakes and disinformation proliferate while we squander the opportunity to use AI to educate voters about national, state, and local races and help them cast the most informed ballots possible.