Why do companies struggle with ethical artificial intelligence?

Some of the world’s biggest organizations, from the United Nations to Google to the U.S. Defense Department, proudly proclaim their bona fides when it comes to their ethical use of artificial intelligence. But for many other organizations, talking the talk is the easy part. A new report by a pair of Northeastern researchers discusses how articulating values, ethical concepts, and principles is just the first step in addressing AI and data ethics challenges. The harder work is moving from vague, abstract promises to substantive commitments that are action-guiding and measurable.

“You see case after case where a company has these mission statements that they fail to live up to,” says John Basl, an associate professor of philosophy and a co-author of the report. “Their attempt to do ethics falls apart.”

Continue reading at News@Northeastern.
