Reason, July 2021
Some Facebook users have recently received warnings about “extremism” and offers of help for those with acquaintances attracted to “extremist” ideas. It’s part of an international push to discourage and restrict communications considered radical and hateful. While often couched in concern about the potential for violence, this effort looks increasingly like a scheme to narrow the boundaries of acceptable discussion and muzzle speech that makes the powers-that-be uncomfortable.
“Are you concerned that someone you know is becoming an extremist?” asks one of the Facebook messages. “We care about preventing extremism on Facebook. Others in your situation have received confidential support.”
Taken by themselves, the messages are somewhat creepy indications that the tech giant doesn't approve of a subset of its users' communications, politics, and associates. But the messages, which send those who click through to the company's Redirect Initiative to "combat violent extremism and dangerous organizations by redirecting hate and violence-related search terms towards resources, education, and outreach groups that can help," are part of a much larger international program involving dozens of governments and tech firms.