Why do companies struggle with ethical artificial intelligence?

PhD student Urbashee Paul poses for a portrait on Oct. 18, 2019. Paul is investigating the barriers to opportunities faced by youth in the U.S. to find ways to mitigate the economic inequality they may face later in life. Photo by Matthew Modoono/Northeastern University

Some of the world’s biggest organizations, from the United Nations to Google to the U.S. Defense Department, proudly proclaim their bona fides when it comes to the ethical use of artificial intelligence. But talking the talk is the easy part. A new report by a pair of Northeastern researchers argues that articulating values, ethical concepts, and principles is only the first step in addressing AI and data ethics challenges. The harder work is moving from vague, abstract promises to substantive commitments that are action-guiding and measurable.

“You see case after case where a company has these mission statements that they fail to live up to,” says John Basl, an associate professor of philosophy and a co-author of the report. “Their attempt to do ethics falls apart.”

Continue reading at News@Northeastern.
