Tech platforms asked to fight Islamophobia online
Last updated on: 21 March, 2019 07:08 pm
The Christchurch attack was “a mass shooting of, and for, the internet,” a tech columnist said.
(Web Desk) – Last week, Brenton Tarrant, a terrorist, livestreamed his massacre at two mosques in Christchurch. The violence was staged for an online audience: the attacker was steeped in internet subcultures and knew exactly how the footage would be received there.
Kevin Roose, a technology columnist for The New York Times, said the attack was deliberately designed for social media: “a mass shooting of, and for, the internet.”
It was arguably the first internet-native mass shooting, rooted in modern online extremism. The terrorist broadcast the massacre live on Facebook and promoted it on Twitter and 8chan. Facebook took down copies of the video as quickly as it could, but the volume was enormous: the company said it removed roughly 1.5 million copies in the first 24 hours.
Rep. Bennie Thompson, chairman of the House Homeland Security Committee, called on tech companies to explain themselves in a briefing March 27th:
“Studies have shown that mass killings inspire copycats and you must do everything within your power to ensure that the notoriety garnered by a viral video on your platforms does not inspire the next act of violence,” Thompson wrote.
But even as the platforms come in for another stern lecture from Congress, others are calling for a deeper look at the bigotry that makes such terrorist attacks possible.
Mike Masnick makes a similar point in Techdirt: “The general theme is that the internet platforms don’t care about this stuff, and that they optimize for profits over the good of society. And, while that may have been an accurate description a decade ago, it has not been true in a long, long time. The problem, as we’ve been discussing here on Techdirt for a while, is that content moderation at scale is impossible to get right. It is not just ‘more difficult,’ it is difficult in the sense that it will never be acceptable to the people who are complaining.”
Platform problems include the issues endemic to corporations that grow audiences of billions of users, apply a light layer of content moderation, and allow the most popular content to spread virally using algorithmic recommendations.
Uploads of the attack that collect thousands of views before they can be removed are a platform problem. Rampant Islamophobia on Facebook is a platform problem. Incentives are a platform problem. Subreddits that let you watch people die were a platform problem, until Reddit axed them over the weekend.
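For illustration only, here is a minimal sketch of the kind of hash-based blocking platforms use to stop re-uploads of a video already judged violating. Every name below is hypothetical, and the sketch uses plain SHA-256, which only catches byte-identical copies; real systems rely on perceptual fingerprints (PhotoDNA-style hashes) that survive re-encoding, which is precisely why so many altered copies of the Christchurch video slipped through.

```python
# Sketch of hash-based re-upload blocking (illustrative, not any
# platform's actual system). SHA-256 is a simplified stand-in for a
# perceptual fingerprint; all names here are hypothetical.
import hashlib

# Fingerprints of videos already confirmed violating by human review.
known_violating_hashes: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    """Return a content fingerprint; SHA-256 only matches exact copies."""
    return hashlib.sha256(video_bytes).hexdigest()

def register_violation(video_bytes: bytes) -> None:
    """Add a confirmed violating video to the blocklist."""
    known_violating_hashes.add(fingerprint(video_bytes))

def should_block_upload(video_bytes: bytes) -> bool:
    """Check an incoming upload against the blocklist before it goes live."""
    return fingerprint(video_bytes) in known_violating_hashes

# Once the first copy is reviewed and registered, identical re-uploads
# are stopped at the door instead of after they have gone viral.
register_violation(b"...bytes of the original upload...")
print(should_block_upload(b"...bytes of the original upload..."))  # True
print(should_block_upload(b"...a re-encoded copy..."))  # False: exact hashes
                                                        # miss edits, which is
                                                        # why perceptual
                                                        # hashing matters
```

The weak point the sketch exposes is the gap between exact and perceptual matching: attackers only need to trim a frame or tweak the encoding to defeat an exact-hash blocklist.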
All of this stems from the free and open network that connects everything in today’s world. The fact that it also allows white supremacists to meet, recruit new believers, and coordinate terrorist attacks is an internet problem.
Facebook and Google have invested heavily in AI-based programs to drive ISIS activity off the internet. They removed videos designed to radicalize children and adults at risk, and took down a huge amount of ISIS and al-Qaeda propaganda in the third quarter of 2018 alone.
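The article does not describe how these AI programs work, but a toy sketch can show the general shape of one common ingredient: a text classifier that scores posts and queues likely propaganda for human review. The training examples and threshold below are placeholders; production systems train on millions of labeled items and combine text, image, audio, and network signals.

```python
# Toy propaganda classifier (illustrative sketch, not any platform's
# real system). Training data and the review threshold are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = violating, 0 = benign.
texts = [
    "join our fight spread the caliphate propaganda video",
    "weekend recipe slow cooked vegetable stew",
    "martyrdom operation recruitment join brothers",
    "local football club announces new season schedule",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post; anything above a tuned threshold goes to human review.
post = "recruitment video join the fight"
score = model.predict_proba([post])[0][1]  # probability of "violating"
print(f"violation score: {score:.2f} (queue for review above threshold)")
```

The design point worth noting is that the model does not delete anything on its own in this sketch; it prioritizes content for reviewers, which is roughly how platforms balance scale against the error rates Masnick describes above.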
These AI tools appear to be working. ISIS members and supporters’ pages and groups have almost been completely scrubbed from Facebook. Beheading videos are pulled down from YouTube within hours. The terror group’s formerly vast networks of Twitter accounts have been almost completely erased.
Even the slick propaganda videos, once broadcast on multiple platforms within minutes of publication, have been relegated to private groups on apps like Telegram and WhatsApp.
A similar approach is needed here. Not every problem related to the Christchurch shooting should be laid at the platforms’ feet. But nor can we throw up our hands and say, well, that’s the internet for you. Platforms ought to fight Islamophobia with the same vigor that they fight Islamic extremism. Hatred kills, after all, no matter the form it takes.