On November 5, Professor Michael Kearns of the University of Pennsylvania jumpstarted the NULab's "Information, Algorithms, and Justice" speaker series with his talk "The Ethical Algorithm." Moderated by David Lazer, the event—titled after Kearns' book The Ethical Algorithm (2019), co-authored with Aaron Roth—included a short talk by Kearns on the topics of privacy and fairness, with special emphasis on Artificial Intelligence (AI) and machine learning's role in issues of algorithmic discrimination. Expanding upon the concepts in The Ethical Algorithm, Kearns discussed how the emergence of new definitions of privacy and fairness from researchers in areas such as differential privacy, algorithmic fairness, and algorithmic game theory forges pathways for more socially responsible, conscientious, and ethical design.
Kearns opened his talk with two gradual realizations he came to regarding how understandings of data and privacy have shifted over the last 20 years. The first was straightforward: that "notions of data or algorithmic privacy that are based on anonymization techniques are fundamentally, irretrievably broken," and, given the trajectory of data and information technologies, this realization is well within the discipline's expectations. The second was more complex: he argued that while the field had previously lacked "good semantic definitions" of interpretability and privacy, relying instead on ad hoc notions, it found the "right" definition of privacy in differential privacy. "In my view," Kearns argued, "almost everybody who thinks seriously about algorithmic privacy has eventually come to the conclusion that differential privacy is sort of the right definition of privacy." In short, the movement toward differential privacy marked a palpable change in how researchers and experts conducted their work and in the questions they began to ask about ethics and responsibility; anyone confronting ethics in data-related issues now has to reckon with this definition.
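Differential privacy, the definition Kearns points to, guarantees that an algorithm's answer changes very little whether or not any single person's record is included in the data. As a minimal sketch of the idea—using one standard mechanism, the Laplace mechanism, with hypothetical function names that are not drawn from the talk or the book—a counting query can be made differentially private by adding calibrated noise:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    Adding or removing one person's record changes a count by at
    most 1 (the query's "sensitivity"), so Laplace noise with scale
    sensitivity / epsilon masks any individual's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: privately count how many of 1,000 synthetic ages exceed 65.
ages = [random.randint(18, 90) for _ in range(1000)]
noisy_count = private_count(ages, lambda age: age > 65, epsilon=1.0)
```

Smaller values of epsilon add more noise, hiding individuals more strongly at the cost of accuracy—the kind of precise, tunable trade-off between privacy and utility that ad hoc anonymization schemes cannot offer.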
Following his talk, Kearns was joined in dialogue by members of the Northeastern community, including Professors Tina Eliassi-Rad (Khoury College of Computer Sciences) and H.C. Robinson (School of Law; Center for Law, Innovation and Creativity), and Usama Fayyad, the Inaugural Executive Director of the Institute for Experiential AI at Northeastern University and a Professor of the Practice (Khoury College of Computer Sciences). The ensuing discussion drew important attention to the intersections of privacy and fairness across disciplines and opened up a conversation about opportunities for scientists, engineers, and legal experts to collaborate in a joint effort toward more ethical—and more frequently tested—algorithms and information technologies. The exchange between these scholars was a highlight of the event, generating questions about how to collectively incentivize researchers and experts to take seriously the societal impact of their work, along with other pressing issues in ethics and AI.
The conversation with Northeastern faculty and audience members culminated with Kearns identifying a few action items and overall takeaways. Kearns suggested increasing partnerships with legal experts in order to build stronger laws and regulations for AI and machine learning. He argued that if law students and legal experts start with an understanding of algorithms and algorithmic logic and of how machine learning works, they are better positioned to guide the development of new privacy regulations. Additionally, Kearns identified a pressing need for more empirical data and research to gain a better sense of what works with current algorithms and where they fail. According to Kearns, the outlook for future research on ethical concerns related to privacy, fairness, and algorithm design is good; this area is a popular research topic among graduate students in the field, and the trend is likely to continue. Kearns emphasized gaining more experience with algorithms and frequently re-examining algorithmic definitions as the best next steps.
The event ended with an important reminder that all of these efforts are collective; if researchers and experts continue to work alongside one another and share data, a more ethical future is within reach.
The NULab's "Information, Algorithms, and Justice" speaker series will continue with a talk by Julia Angwin, founder of The Markup, on November 19, from 10am–11am (Eastern). We invite you to join us for Angwin's talk "The Markup and Accountability Journalism" and a group discussion with Northeastern faculty interlocutors and audience members; this event is free and open to the public, but registration is required—please RSVP here. The final event in the series for fall 2021 will be a talk with Martha Minow of Harvard Law School on December 3 at 10am; more information here.