Earlier this year, the journalist Julia Angwin and Princeton professor Alondra Nelson tested leading AI models’ ability to answer questions about elections, such as voter registration requirements.
Angwin and Nelson rated GPT-4, Gemini, Mistral, Claude, and Llama 2 on bias, accuracy, completeness, and harmfulness. Overall, the models performed poorly: half of their responses were inaccurate, and more than a third were rated incomplete, if not harmful.
While there have been improvements since then, problems persist: as recently as this summer, the Washington Post ran similar tests and found that Alexa couldn’t even correctly say who won the 2020 election.
But these important experiments only tell part of the story about AI and civic education.
Continue reading on Fast Company