
Artificial intelligence can reflect human bias, but it can also help undo it

Algorithmic predictions are ubiquitous these days—think of Amazon recommending a book based on past purchases. More controversial uses arise when algorithms incorporate not just a person's own history but information about people in general, blurring the line between individual behavior and broad, population-level trends.

More and more decisions are made using machine learning algorithms, which, in theory, can be useful and objective. In reality, says Kay Mathiesen, associate professor of philosophy and religion at Northeastern, “data is biased—because it’s data coming from human beings.”

Mathiesen is the lead organizer of the 17th Annual Information Ethics Roundtable, a three-day event that will address the role of artificial intelligence—if it has one at all—in law, employment, and beyond.

Read the full story on News@Northeastern.
