As more and more sectors experiment with artificial intelligence, law enforcement has been one of the quickest to adopt the new technology. That adoption has brought problematic growing pains, from false arrests to concerns around facial recognition. Now, however, a new training tool is being used by law enforcement agencies across the globe to ensure that officers understand the technology and use it more ethically.
Based largely on the work of Cansu Canca, director of responsible AI practice at Northeastern University’s Institute for Experiential AI, and designed in collaboration with the United Nations and Interpol, the Responsible AI Toolkit is one of the first comprehensive training programs for police focused exclusively on AI.

At the core of the toolkit is a simple question, Canca says. “The first thing that we start with is asking the organization, when they are thinking about building or deploying AI, do you need AI?” she says. “Because any time you add a new tool, you are adding a risk. In the case of policing, the goal is to increase public safety and reduce crime, and that requires a lot of resources. There’s a real need for efficiency and betterment, and AI has a significant promise in helping law enforcement, as long as the risks can be mitigated.”