NULab Spring Conference 2023 Recap

Dr. Lara Martin delivers the keynote address at the NULab's spring 2023 conference on generative AI.
Conference participants watch from their seats in Northeastern's Raytheon Amphitheater as speakers present.

The NULab was delighted to host its sixth annual Spring Conference on April 21. Titled “Generative AI: Creative Potentials and Ethical Responsibilities,” it was the NULab’s first in-person conference since 2019 and its first-ever conference with a hybrid component. Deans Dan Cohen and Uta Poiger welcomed conference panelists and participants and set the stage for the day’s discussions about generative AI. Cohen provided a brief overview of some of the major milestones in the history of AI and highlighted how those studying advanced digital methods (including those in the NULab community) are both creators and consumers of AI-related work, and are thus well positioned to contribute to current work on AI in scholarship and pedagogy. Poiger commended the critical work of the NULab in thinking about access and inclusivity for digital materials, highlighting the ways that NULab projects are contributing to important new developments in both teaching and research. Poiger also underscored the interdisciplinary nature of the NULab, focusing on the exciting collaborations across Northeastern’s colleges and campuses.

Panel 1

The first conference panel, “Making, Creating, and Experimenting with AI,” began with Lawrence Evalyn (Visiting Assistant Professor of English and Co-Director of the Digital Integration Teaching Initiative). Evalyn posed the question “what kinds of jokes can AI tell?” in his presentation “The Post ‘Weirdness’ Era of Machine-Generated Jokes: ubi sunt LOVE 2000 HOGS YEA?” Evalyn outlined the history of creators using AI to generate and play with language structure, form, and humor. According to Evalyn, AI used to play in a half-recognizable space of language where form remained but semantic meaning often failed. Current iterations of commercial AI like ChatGPT and Bard, Evalyn argued, are not the bumbling failures documented on the AI Weirdness blog, but confident pretenders that call for a shift in how we approach and apply AI technology in creative language and writing.

Kenna Cheverie (third-year English major at Northeastern) continued the conversation on AI and writing in her presentation “The Student Writing Process and AI.” Cheverie undertook an independent research project to better understand AI, its reception among students, and its potential role in writing pedagogy. In her talk, Cheverie shared her work exploring the ethical and critical applications of AI like ChatGPT in the writing process, interviewing her peers from different disciplines on the value and application of such technology. Rather than discouraging students from using AI in the writing process, Cheverie argued, instructors can harness it to teach the standards of different disciplinary writing, particularly for brainstorming, thesis development, and clarifying unclear issues in assignment prompts. Cheverie ended her presentation with the question: “how do we train our students to be using technology in ethical ways?”

This question was picked up by the next panelist, Nick Beauchamp (Associate Professor of Political Science), who explored latent moral values in large language models in “Fairness, Ideology, and Personality are Necessarily Connected.” Beauchamp applied Haidt’s Moral Foundations (MF) questionnaire, which measures values associated with liberals (care and fairness) and conservatives (loyalty, authority, sanctity), to ChatGPT to explore how LLMs incorporate and reflect moral values and ideology. Noting that after December 15, 2022, ChatGPT was blocked from answering questions about what “its” individual opinion is, Beauchamp instead asked the AI how a liberal, a conservative, or “someone” might answer the MF survey questions. He found that ChatGPT’s answers almost perfectly reflected human survey responses, but that the web-based version was slightly more biased in favor of the liberal values of “fairness” and “care,” perhaps due to having stronger safety protocols than the API. He finished by hypothesizing that it may be impossible to build AIs that are highly fair and unbiased without also making them more liberal on related values.
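For readers curious what this kind of persona-framed querying looks like in practice, here is a minimal sketch using the current OpenAI Python client. The model name, sample survey item, and rating-scale wording are illustrative assumptions, not Beauchamp’s actual protocol.

```python
# Minimal sketch: persona-framed Moral Foundations questions to a chat model.
# The model name, sample item, and prompt wording are illustrative
# assumptions, not Beauchamp's actual experimental protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One sample item in the style of the Moral Foundations Questionnaire.
MF_ITEM = "Whether or not someone suffered emotionally"

PERSONAS = ["a liberal", "a conservative", "someone"]

def rate_item(persona: str, item: str) -> str:
    """Ask the model how a given persona would rate an MF item (0-5)."""
    prompt = (
        f"On a scale from 0 (not at all relevant) to 5 (extremely relevant), "
        f"how would {persona} rate the following consideration when deciding "
        f"whether something is right or wrong? Answer with a single number.\n\n"
        f"Consideration: {item}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic answers, easier to compare
    )
    return response.choices[0].message.content.strip()

for persona in PERSONAS:
    print(persona, "->", rate_item(persona, MF_ITEM))
```

Comparing the numbers returned for each persona against published human survey averages is the basic move behind this kind of audit.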

Malik Haddad (Assistant Professor in Data and Computer Science at NU London) concluded the first panel with his presentation “Using AI to improve the quality of life of powered mobility users.” Haddad’s work centers on using AI technology (with Python and microcomputers) to replace certain mechanical components of powered mobility devices. Instead of using a default dataset, Haddad uses AI to train these devices on individual users; these new methods range from learning which movements are wanted and which are unwanted to recognizing a user’s unique vocal patterns with voice recognition. Haddad demonstrated that, by switching from general to personalized detection systems, powered mobility devices become smarter and more efficient, leading to better user experiences and quality of life for those using them. Such applications of AI, Haddad argues, open new avenues for combining assistive technology and AI to improve lived experiences.
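As a rough illustration of the general-versus-personalized distinction, the sketch below trains a small classifier on one hypothetical user’s labeled movement samples. The features, data, and model choice are assumptions made for illustration, not Haddad’s actual system.

```python
# Minimal sketch of per-user, rather than general, movement detection:
# train a small classifier on one user's labeled samples so the device
# learns that user's "wanted" vs "unwanted" motions. Features and data
# are illustrative assumptions, not Haddad's actual system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each sample: [joystick_x, joystick_y, speed, tremor_amplitude]
X_user = np.array([
    [0.8, 0.1, 0.5, 0.02],   # deliberate forward push
    [0.1, 0.9, 0.4, 0.03],   # deliberate turn
    [0.3, 0.2, 0.1, 0.40],   # involuntary tremor
    [0.2, 0.1, 0.2, 0.35],   # involuntary tremor
])
y_user = np.array([1, 1, 0, 0])  # 1 = wanted movement, 0 = unwanted

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_user, y_user)  # trained only on this user's data

# At runtime the device would act only on movements classified as wanted.
print(clf.predict([[0.7, 0.2, 0.5, 0.05]]))  # -> [1], likely deliberate
```

Because the model is fit to one person’s data rather than a default dataset, the same architecture adapts to very different users.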

Keynote

This year’s keynote speaker was Dr. Lara Martin. Martin is a CIFellow postdoctoral researcher at the University of Pennsylvania who will soon take up a new role as a tenure-track assistant professor at UMBC; their work resides in the field of Human-Centered Artificial Intelligence with a focus on natural language applications. Martin’s keynote address argued that Dungeons and Dragons can help improve future AI development, as it provides a blueprint for researchers aiming to achieve both coherence and originality in AI-assisted storytelling. Dungeons and Dragons, Martin argued, provides structure to storytelling, and that same structure can help the large language models employed by researchers tell better stories. Martin experiments with different methods geared toward making digital models create more coherent and original stories; one such method involves modeling contextual and character information in a more robust way. The results are promising: in some instances, the AI storytelling is on par with the original human narration that the models are based on. To help conference participants better understand the mechanics and methods of AI storytelling, Martin led a hands-on activity in developing and refining prompts in the OpenAI playground.
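In the spirit of that hands-on activity, here is a minimal sketch of prompt refinement done through the OpenAI API rather than the playground UI. The prompts, character details, and model name are illustrative assumptions, not Martin’s experimental setup.

```python
# Minimal sketch of the kind of prompt refinement explored in the keynote
# activity: adding explicit character and scene state to a story prompt so
# the model stays coherent. All prompt wording is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

bare_prompt = "Continue the story: The rogue crept into the dragon's lair."

# Refined prompt: the kind of structured context a Dungeon Master tracks.
refined_prompt = (
    "You are narrating a Dungeons and Dragons scene.\n"
    "Characters: Kira (rogue, level 3, carries a stolen map, afraid of fire).\n"
    "Scene: a dragon's lair; the dragon is asleep on a gold hoard.\n"
    "Goal: Kira wants to steal one gem and escape unseen.\n"
    "Continue the story in 3 sentences, consistent with all facts above."
)

for prompt in (bare_prompt, refined_prompt):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n---")
```

The refined prompt tends to keep names, goals, and constraints stable across continuations, which is the coherence problem Martin’s research targets.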

Panel 2

Felix Muzny (Clinical Instructor at Northeastern’s Khoury College of Computer Sciences) kicked off the second panel of the conference with a presentation entitled “There’s No Right Answer: Open-ended Discussion and Interpersonal Skills in Computing Classrooms.” Muzny discussed their work on strengthening the interpersonal communication skills of Khoury College teaching assistants, highlighting the relevance of these skills to both TAs and students. TAs need to be equipped to discuss important, open-ended questions with students on issues such as bias in natural language processing (NLP); Muzny shared the strategies and challenges of a new training program designed to better equip TAs to meet this crucial need.

Cansu Canca, Ethics Lead and Research Associate Professor at Northeastern’s Institute for Experiential AI, turned our attention next to the March 22 open letter demanding a “pause” on training new, powerful AI systems. In “Apocalypse Now? Responsible Reaction for Responsible AI,” Canca decried the alarmist rhetoric in the letter, raising serious questions about its claim that AI is now “human-competitive.” The letter’s signatories seem naively to believe that six months will be enough time to generate satisfactory regulations, Canca claimed, reminding the audience that once regulations are in place they are difficult to change or remove. Rather than acting from a position of fear and ignorance, Canca urged us to think about the ethics of AI holistically and analytically. Developing ethical AI cannot happen only by imposing restrictions post facto; it requires ethically managing the entire innovation cycle, from research and development all the way to monitoring use. Canca closed by arguing that principles detailing the ethical development of AI are only a framework: the real ethical work is implementing them effectively.

While chatbots like OpenAI’s ChatGPT have been in vogue recently, Silvio Amir (Assistant Professor at Khoury College of Computer Sciences) reminded us that NLP has been a topic of interest for far longer and has been applied to a variety of domains, including the healthcare sector. NLP has been put to use in translation services and many other places we now consider familiar. Amir traced the history of NLP, noting that with the advent of large language models (LLMs), complex pipelines can be replaced with a single end-to-end system. However, LLMs bring a host of challenges, such as the difficulty of inspecting and changing the models. The risks of relying on language-generating models like ChatGPT also include the fact that they frequently “hallucinate,” confidently supplying information that sounds reasonable but has no basis in reality. During the question-and-answer session, Amir also clarified the dangers of models reinforcing social and racial biases, stressing how crucial it is that we not be satisfied with that reality.
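To make the pipeline-versus-end-to-end contrast concrete, here is a minimal sketch. The sentiment task, tiny dataset, and model names are illustrative assumptions, not examples from Amir’s talk.

```python
# Minimal sketch of the shift Amir described: a multi-stage NLP pipeline
# replaced by one end-to-end LLM call. Task, data, and models are
# illustrative assumptions, not examples from the talk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from openai import OpenAI

texts = ["great service", "terrible wait times", "friendly staff", "awful food"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Traditional pipeline: explicit featurization plus a trained classifier.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)
print(pipeline.predict(["the staff was great"]))  # -> [1]

# End-to-end LLM: the same task as a single prompt, no task-specific training.
client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "Answer 'positive' or 'negative': the staff was great"}],
)
print(reply.choices[0].message.content)
```

The convenience of the second approach is exactly what makes its failure modes, such as hallucination, harder to inspect: there are no intermediate stages to examine.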

Dan Jackson (Executive Director, NuLawLab) and Miso Kim (Faculty Design Director, NuLawLab, and Assistant Professor in the College of Arts, Media and Design) were our final presenters of the afternoon’s panel. Their joint presentation, given with a nod of recognition to Jules Rochielle Sievert’s work at the NuLawLab, discussed how large language models might address unjust inequities in access to the law, and how the little-discussed realm of legal design could help AI improve that access. “What could the future hold? Exploring the Potential Impact of Legal Design and Generative AI” began by suggesting AI might be useful as a chatbot that could guide people through courtroom proceedings, offer explanations of legal terms to lawyers’ clients, or even draft basic legal documents. Jackson quickly noted that the situation is more complicated: each of these seemingly helpful applications is prohibited by rules such as those against the unauthorized practice of law by those not formally qualified as lawyers. Jackson underscored the urgency of finding solutions to stark inequalities in access to the law and put forward legal design as one solution for driving change. Kim then outlined some key principles of legal design, which applies design methods to the realm of law. Seeing AI as a “secretary” that helps with the completion of mundane tasks might help us develop legal systems that enhance the dignity and autonomy of those participating in legal and civic systems.

Closing Remarks

After the final panel, NULab Co-Director K.J. Rawson thanked panelists and attendees for contributing to a spirited and generative day, and emphasized the efforts of NULab staff, faculty, and graduate students in organizing the event. He noted that the talks cohered as a lively body of work on AI while also demonstrating the diversity of approaches to digital scholarship that the NULab encourages and facilitates, echoing Dean Poiger’s earlier comments about the NULab’s vibrant interdisciplinarity. The NULab’s first hybrid conference exhibited the multifaceted strengths of the NULab community as a creative, productive, and inclusive hub of digital scholarship.
