Artificial intelligence can reflect human bias but it can also help undo it
Algorithmic predictions are ubiquitous these days: think of Amazon recommending a book based on past purchases. More controversial uses arise when algorithms incorporate not just personal history but information about people in general, blurring the line between an individual's own behavior and broad, population-level trends.
More and more decisions are made using machine learning algorithms, which, in theory, can be useful and objective. In reality, says Kay Mathiesen, associate professor of philosophy and religion at Northeastern, “data is biased—because it’s data coming from human beings.”
Mathiesen is the lead organizer of the 17th Annual Information Ethics Roundtable, a three-day event that will address the role of artificial intelligence—if it has one at all—in law, employment, and beyond.