Artificial intelligence can reflect human bias, but it can also help undo it

Algorithmic predictions are ubiquitous these days—think of Amazon recommending a book based on past purchases. More controversial uses arise when algorithms incorporate not just personal history but also information about people generally, blurring the line between personal causation and broad, population-level trends.

More and more decisions are made using machine learning algorithms, which, in theory, can be useful and objective. In reality, says Kay Mathiesen, associate professor of philosophy and religion at Northeastern, “data is biased—because it’s data coming from human beings.”

Mathiesen is the lead organizer of the 17th Annual Information Ethics Roundtable, a three-day event that will address the role of artificial intelligence—if it has one at all—in law, employment, and beyond.

Read the full story on News@Northeastern. 
