From live-streaming his shooting spree at a Christchurch mosque via a helmet-mounted camera, to a post on 8chan linking to his 73-page manifesto, to tweeting pictures of one of the guns used in the attack, to a reference to Swedish YouTuber PewDiePie, the 28-year-old Australian white nationalist behind New Zealand’s worst-ever mass killing wanted the world’s attention.
And by leveraging the Internet, he got exactly that. On Friday, as reports emerged that he had killed at least 49 people in attacks at two mosques in Christchurch, social media platforms scrambled to curtail the viral spread of footage of his violence.
The video he streamed from a helmet-mounted camera shows the gunman moving from room to room, shooting at everybody praying in the mosque. Two days before the attack, the Twitter handle @brentontarrant tweeted pictures of one of the guns later used. It was covered in white lettering featuring the names of others who had committed race- or religion-based killings.
A New Zealand Facebook executive told reporters that the New Zealand police had alerted Facebook soon after the 17-minute video had ended, and the company removed the shooter’s accounts and the video. The Guardian reported that Facebook took about one hour to take the video down.
But that was more than enough time for downloaded copies of the video to make their way across the world on Facebook, Twitter, and YouTube.
The shooter’s 73-page manifesto also made its way onto Twitter and 8Chan – a message board forum infamous for its extremist discussions, according to the New York Times.
A UK government spokeswoman told the Guardian: “Facebook, Twitter, YouTube and other providers have taken action to remove the video and other propaganda related to the attack. The government has been clear that all companies need to act more quickly to remove terrorist content. There should be no safe spaces for terrorists to promote and share their extreme views and radicalise others.”
The platforms prohibit the promotion or display of terrorism on their sites and say they are doing their best to remove such content. Moments before the shooting, the gunman is also heard saying, “Remember lads, subscribe to PewDiePie.” Hours later, PewDiePie, whose real name is Felix Kjellberg, tweeted, “I feel absolutely sickened having my name uttered by this person. My heart and thoughts go out to the victims, families and everyone affected by this tragedy.”
A YouTube spokesperson said: “Our hearts go out to the victims of this terrible tragedy. Shocking, violent and graphic videos have no place on our platform and are removed as soon as we become aware of them. We will work closely with the New Zealand authorities to offer any assistance we can.” Twitter said it had taken down the video, and that the platform uses both technology and human review to take down violating content. The platform suspended a total of 205,156 terrorist-related accounts, 91 per cent of which were internally flagged, between January 1, 2018, and June 30, 2018.
Facebook did not respond to an email query from The Indian Express. Incidentally, this is not the first time that violence has been live-streamed on the platform, fuelling concerns that the platforms may not be doing enough to stop the spread of such content. In 2016, an IS militant of French citizenship broadcast a 12-minute video of his victims near Paris. As with Friday’s attack, the video remained on the platform long enough for users to copy and circulate it.
Two years ago, major advertisers pulled millions of dollars from YouTube after an investigation showed their content was displayed next to extremist videos.
Regulations in the US and India give technology platforms “safe harbour” protections, which means the platforms are not liable for illegal content until they are notified about it. The IT ministry released draft amendments to the IT Act in December that would require these platforms to “proactively” filter “unlawful” content.
The problems are big enough that the platforms have joined forces to combat them. Facebook, Microsoft, Twitter, and YouTube formed the Global Internet Forum to Counter Terrorism in 2017 to develop detection technology and maintain a shared database of videos and images, so that terrorist content can be removed more rapidly.