Partially supported by a NULab Seedling Grant.
Summary: Hashtag activism addressing systemic issues like racial justice, sexual violence against women, and health inequities evokes strong emotions online. However, we do not have reliable and robust ways of measuring those emotions. This often leaves researchers and social media platforms blind to how emotionally charged online ecosystems become around politically sensitive moments, and unable to promote healthy conversations among affected communities. The aim of this project is to create gold-standard references that help us gauge how reliably we can computationally measure the emotions of online political conversations. We will use crowdsourced labor on the Volunteer Science platform to develop better measurement tools and validation sets, so that we may better calibrate our quantitative models for measuring emotion in online hashtag activism.
Studies: The goal of this research is to validate computational methods for measuring collective emotions expressed during focal events of hashtag activism. We plan to measure collective emotion through a dictionary-based approach. In this approach, we start with a dictionary of words and scores for those words across different emotional scales. For example, “love” would have a high “happiness” score while “death” would have a low one. We then calculate the emotion expressed across a collection of tweets as a frequency-weighted average of the words’ scores. We plan to assess the internal and external validity of measuring online collective emotion through this approach.
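The dictionary-based measure described above can be sketched in a few lines; the dictionary entries and the 1–9 happiness scale below are illustrative stand-ins, not the project’s actual lexicon:

```python
from collections import Counter

# Hypothetical happiness scores on a 1-9 scale (illustrative only)
happiness = {"love": 8.4, "happy": 8.3, "death": 1.5, "the": 5.0}

def collection_happiness(tweets):
    """Average happiness of a tweet collection, weighted by word frequency."""
    counts = Counter(
        word for tweet in tweets for word in tweet.lower().split()
        if word in happiness  # words outside the dictionary are skipped
    )
    total = sum(counts.values())
    return sum(happiness[w] * n for w, n in counts.items()) / total

tweets = ["Love love love this", "the death of happy days"]
score = collection_happiness(tweets)
```

Words absent from the dictionary contribute nothing to the average, which is exactly the coverage limitation the internal-validity study targets.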
1. Internal validity: The weighted average calculated from the sentiment dictionary can only be as comprehensive as the words that are included in the dictionary. However, we can use recent techniques from machine learning to estimate scores for words we do not know by using the scores we do know. Our first crowdsourcing task will elicit gold-standard sentiment scores for words across several emotional scales. We aim to show that there is a strong correlation between the estimated scores and the worker-generated scores.
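One way to estimate a score for an out-of-dictionary word is to borrow from its nearest neighbors in an embedding space. The toy 2-d “embeddings” and the simple nearest-neighbor averaging below are stand-ins for whichever word representations and regression model the project ultimately uses:

```python
import math

# Toy 2-d "embeddings" (hand-made for illustration; real work would use
# pretrained word vectors) and known happiness scores for some words
embedding = {"love": (0.9, 0.8), "happy": (0.8, 0.9), "death": (-0.9, -0.8),
             "grief": (-0.8, -0.9), "adore": (0.85, 0.75)}
known_scores = {"love": 8.4, "happy": 8.3, "death": 1.5, "grief": 2.0}

def estimate_score(word, k=2):
    """Estimate a score as the mean score of the k nearest known words."""
    x, y = embedding[word]
    dists = sorted(
        (math.dist((x, y), embedding[w]), w)
        for w in known_scores if w != word
    )
    neighbors = [w for _, w in dists[:k]]
    return sum(known_scores[w] for w in neighbors) / k

est = estimate_score("adore")  # "adore" has no score in known_scores
```

The worker-generated gold-standard scores from the first crowdsourcing task would then be compared against estimates like `est` to measure how well the extrapolation works.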
2. External validity: A weighted average of words is a very coarse measure of how emotion is expressed online. Our second crowdsourcing task will have workers read collections of tweets from the focal hashtag activism events and rate the emotions expressed in those tweets on the same scales as the word scores. We aim to show that the human evaluations of the emotions expressed in the tweets correlate with the dictionary’s weighted-average measure of emotion.
PI: Brooke Foucault Welles, Associate Professor, Communication Studies and Network Science
Co-PI: Ryan J. Gallagher, PhD student, Network Science