Partially supported by a NULab Seedling Grant.
Misinformation spreads through social networks to create ill-informed publics with potentially disastrous effects (e.g. in relation to coronavirus, climate change, or even warfare). Employing increasingly realistic computer simulations of communities of inquirers, this project studies why: why doesn’t true information always win in the marketplace of ideas? In the basic models employed by the research team, agents gather evidence in relation to an open question they wish to settle and share it with their network neighbors. They then update their beliefs on this question in a rational manner. Simulations based on these models explore whether the population as a whole converges on the truth, as well as how long it takes to do so. In ‘testimonials’ models, the effects of introducing some misinformants (or ‘liars’) into the network are investigated. Agents pursue various epistemic strategies for coping with this, modifying their belief update rule and/or the way in which it is applied, to accommodate the fact that some of their neighbors are misreporting their findings.
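The basic model described above can be sketched as follows. This is an illustrative reconstruction in the Bala–Goyal/Zollman network-epistemology style, not the project's actual code: agents test whether a process has success rate 0.6 (the true hypothesis) or 0.4, share their trial outcomes with their neighbors, and update credences by Bayes' rule. All parameter values and function names here are hypothetical.

```python
import random

def bayes_update(credence, successes, trials, p_true=0.6, p_false=0.4):
    """Posterior credence in the true hypothesis after binomial evidence."""
    like_true = (p_true ** successes) * ((1 - p_true) ** (trials - successes))
    like_false = (p_false ** successes) * ((1 - p_false) ** (trials - successes))
    numerator = credence * like_true
    return numerator / (numerator + (1 - credence) * like_false)

def simulate(n_agents=10, trials_per_round=5, rounds=50, seed=0):
    """Run one simulation on a complete network; return final credences."""
    rng = random.Random(seed)
    credences = [rng.random() for _ in range(n_agents)]
    for _ in range(rounds):
        # Agents who lean toward the true hypothesis run trials
        # (the true success rate really is 0.6) and share the results.
        reports = []
        for c in credences:
            if c > 0.5:
                s = sum(rng.random() < 0.6 for _ in range(trials_per_round))
                reports.append((s, trials_per_round))
        # Every agent updates on every shared report.
        for s, t in reports:
            credences = [bayes_update(c, s, t) for c in credences]
    return credences
```

On a complete network every report reaches every agent; sparser topologies, misinformants who invert their reports, and alternative update rules are the kinds of variations the simulations explore.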
‘Simulating Epistemic Injustice’ explores more socially realistic models, taking account of the fact that individuals exhibit biases or prejudices towards certain (e.g. gender or racial) groups, and tracking their effects under various conditions (e.g. when the discrepancy between the accuracy of group members and their perceived or assumed accuracy is greater or lesser). In so doing, it begins to apply relevant sociological knowledge in the context of philosophical simulations, shedding light on the (unjust, epistemic) harms brought about by both credibility deficit and excess. It also provides a proof of concept for a larger-scale approach to combating misinformation using an extension of NetSI’s online observatory to draw on real-world data.
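One simple way to model a credibility deficit or excess of the kind just described is to let a hearer scale the evidential force of a report by a trust weight that may diverge from the speaker's actual accuracy. The sketch below (a hypothetical illustration, not the project's implementation) raises the report's likelihood to a power `weight`: a weight of 1 treats the report at face value, a weight below 1 discounts testimony from a group suffering a credibility deficit, and a weight above 1 would model a credibility excess.

```python
def discounted_update(credence, successes, trials, weight,
                      p_true=0.6, p_false=0.4):
    """Bayes update with the report's evidential force scaled by `weight`.

    weight == 1.0 takes the report at face value; weight < 1.0 models a
    credibility deficit (the evidence counts for less than it should);
    weight > 1.0 models a credibility excess.
    """
    lt = (p_true ** successes * (1 - p_true) ** (trials - successes)) ** weight
    lf = (p_false ** successes * (1 - p_false) ** (trials - successes)) ** weight
    numerator = credence * lt
    return numerator / (numerator + (1 - credence) * lf)

# Identical reports, different perceived credibility of the source:
full = discounted_update(0.5, 5, 5, weight=1.0)   # source trusted fully
deficit = discounted_update(0.5, 5, 5, weight=0.3)  # source discounted
```

Because the discounted hearer extracts less information from the same accurate report, the discrepancy between actual and perceived accuracy slows the community's convergence on the truth, which is the epistemic harm the simulations track.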
Alexandros Koliousis, Computer Science, NCH; Brian Ball, Philosophy, NCH; Jason Radford, Sociology, NetSI