Why do companies struggle with ethical artificial intelligence?

People in this story

PhD student Urbashee Paul poses for a portrait on Oct. 18, 2019. Paul is investigating the barriers to opportunities faced by youth in the U.S. to find ways to mitigate the economic inequality they may face later in life. Photo by Matthew Modoono/Northeastern University

Some of the world’s biggest organizations, from the United Nations to Google to the U.S. Defense Department, proudly proclaim their bona fides when it comes to their ethical use of artificial intelligence. But for many other organizations, talking the talk is the easy part. A new report by a pair of Northeastern researchers discusses how articulating values, ethical concepts, and principles is just the first step in addressing AI and data ethics challenges. The harder work is moving from vague, abstract promises to substantive commitments that are action-guiding and measurable.

“You see case after case where a company has these mission statements that they fail to live up to,” says John Basl, an associate professor of philosophy and a co-author of the report. “Their attempt to do ethics falls apart.”

Continue reading at News@Northeastern.
