Events
Summer 2025 Workshop
Organizers: John Basl, Kathleen Creel, Ron Sandler
Time Period: June 2nd to July 31st, 2025
Location: RP 909
Information: This summer school is intended for graduate students who have advanced training in applied ethics, ethical theory, philosophy of science, or other areas with potential research applications to AI and big data, and who would like to develop research capacities in the ethics of artificial intelligence (AI), data ethics, and the philosophy of technology. Designing AI and machine learning systems to promote human flourishing in just and sustainable ways will require a robust and diverse AI and data ethics research community.
Link to Program Website.
Summer Training Program on Responsible Computing Education
Organizers: Vance Ricks & Meica Magnani
Time Period: July 20th – July 29th, 2025
Location: RP 310
Information: The Summer Training Program on Responsible Computing Education will spread robust, quickly deployable responsible computing education to a more diverse range of schools. For ten days, a cohort of two-person interdisciplinary teams who teach at minority-serving institutions and are interested in building responsible computing curricula will learn about strategies used at other institutions. The program will provide opportunities to learn, discuss, and exchange educational and institutional know-how. Participants will develop concrete plans for programs suited to their own institutions.
Link to Program Website.
This program is generously funded by the Mozilla Foundation’s Responsible Computing Challenge.
Upcoming Speaker Events:
Ethics Institute Speaker, David Owen
Time: 4pm – 5:30pm
Location: Renaissance Park 4th floor common room
Title: Civil Geopolitics and the Transnational State
Abstract: Recent years have seen a re-emergence of state practices of denationalisation alongside the widespread use of access to citizenship through investment (and other forms of human capital), as well as the ‘passportisation’ of conflicts and, contrastingly, the use of citizenship as reparations for historic injustice. How should we understand these phenomena? The proposal of this paper is that we can map the conditions of intelligibility of such developments as part of what I will call, adapting a term from Alan Gamlen, ‘Civil Geopolitics’, by which I refer to competition between states over establishing civil relations to some kinds of people (and not others) through varied modes of access to, or denial of, a diverse range of civil statuses, up to and including a nonexclusive status of citizenship, against the background of two developments: the transnationalisation of the state and the shift towards human-capital citizenship.
About the Speaker: David Owen is a Professor of Social and Political Philosophy at the University of Southampton.
Ethics Institute Speaker, Kevin Dorst
Time: 12pm – 1:30pm
Location: Renaissance Park 4th floor common room
Title: Ambiguity Rationalizes Confirmation Bias
Abstract: You exhibit confirmation bias when, on average, searching for evidence in favor of a claim leads you to become more confident of it. It’s pervasive, important, and mysterious. Standard irrationalist theories make confirmation bias inexplicable. Standard (Bayesian) rational theories make it impossible. However, those standard theories presuppose clarity—that we always know what our own subjective probabilities are. I show that once we permit ambiguity—uncertainty about what your own opinions are—confirmation bias becomes inevitable, even for rational Bayesians. I show how this theory can explain many of the empirical trends, including why confirmation bias is so hard to eliminate.
About the Speaker: Kevin Dorst is an Assistant Professor in the Department of Linguistics and Philosophy at MIT.
Ethics Institute Speaker, Giovanni Duca
Time: 12pm – 1:30pm
Location: Renaissance Park 4th floor common room
Title: Reasoning by comparing hypotheses: experimental and theoretical considerations
Abstract: When it comes to making inferences or taking decisions, it is common both in everyday and scientific contexts to reason by comparing alternative hypotheses. In the psychological literature, it has been shown that people’s inferences are sensitive to the addition of alternatives. At the same time, under certain circumstances, people fail to take alternatives into account, focusing only on a target hypothesis. An overview of such experimental results constitutes the motivation and the background for a more theoretical study of the phenomenon. With the use of some examples, I will introduce and discuss some of the general inferential advantages obtained when making inferences by comparing alternatives.
About the Speaker: Giovanni Duca is a visiting PhD student from the University of Milan, Italy.
Date and Time: Friday, March 21, 2025, 10am – 5pm
Location: Cabral Center
Title: Pictures, Words, and Lies: Representing Gender & Race in the News Media
Abstract: This year’s annual Northeastern WGSS Women’s History Month Symposium focuses on the news media in its broadest sense and how it reports on gender, race, and marginalized identities. Here, we intend to examine the multi-faceted term “media” as representation, creation, industry, ideology, and collective practice. In curated conversations, panelists will examine how progressive social movements are (mis)represented in mainstream media and news, but also how we attempt to challenge those narrow and false depictions.
Link to Website Here.
Ethics Institute Speaker, Alice Helliwell
Time: 12pm – 1:30pm
Location: Renaissance Park 4th floor common room
Title: Attribution, Responsibility and AI Art
Abstract: This paper will examine attribution for AI art, framed through responsibility. I will present two problems we seem to face with AI generated works – first that some people seem less willing to praise someone when they have used AI to make a work, and second that people seem to be concerned that artists may have had their images used in training and yet receive no credit for this. I will make use of the concept of aesthetic responsibility (Wolf, 2016) to explain the first issue. I will then argue that the concept of aesthetic responsibility as characterised by Wolf is flawed – as it does not overlap in all cases with the responsibility that artists have towards their works. I will propose an additional category of responsibility which I call artistic responsibility. I put forward that this distinction can help us understand a variety of artworks, in particular AI art, where I will argue that aesthetic responsibility and artistic responsibility also do not fully overlap. This, I will suggest, opens the door for data subjects to be attributed with a form of responsibility for AI works that resemble their art, even where they are not involved in (or even aware of) the process of making them.
About the Speaker: Alice Helliwell is an Assistant Professor in Philosophy at Northeastern University London.