
Why do companies struggle with ethical artificial intelligence?


PhD student Urbashee Paul poses for a portrait on Oct. 18, 2019. Paul is investigating the barriers to opportunities faced by youth in the U.S. to find ways to mitigate the economic inequality they may face later in life. Photo by Matthew Modoono/Northeastern University

Some of the world’s biggest organizations, from the United Nations to Google to the U.S. Defense Department, proudly proclaim their ethical bona fides when it comes to artificial intelligence. But for many organizations, talking the talk is the easy part. A new report by a pair of Northeastern researchers argues that articulating values, ethical concepts, and principles is just the first step in addressing AI and data ethics challenges. The harder work is moving from vague, abstract promises to substantive commitments that are action-guiding and measurable.

“You see case after case where a company has these mission statements that they fail to live up to,” says John Basl, an associate professor of philosophy and a co-author of the report. “Their attempt to do ethics falls apart.”

Continue reading at News@Northeastern.
