Updated: NZ ISPs block internet footage of Christchurch shootings
Mon, 18th Mar 2019

The internet can be a place of discovery, but also a place to find harmful and horrifying content that should never be seen by anyone.

After last week's terrorist attack in Christchurch, New Zealand's internet service providers are doing their best to disable and block all footage of the shootings.

2degrees, Spark, Vodafone and Vocus are now blocking any website that shows footage of the mosque shootings.

This kind of move is unprecedented, but it's absolutely necessary to ensure that New Zealanders can't access the harmful content, says New Zealand Telecommunications Forum (TCF) chief executive Geoff Thorn.

He notes that the shooter clearly wanted publicity and for his actions to be seen.

“We do not believe that this is desirable and are doing what we can to prevent this from happening as much as possible,” says Thorn.

The ISPs are now sharing information with each other and the wider industry so that sites hosting the footage can be blocked as soon as they are discovered.

So far ‘a number' of websites have been blacklisted. The ISPs have also made requests to have the footage removed.

“There is the risk that some sites that have legitimate content could have been mistakenly blacklisted, but this will be rectified as soon as possible,” says Thorn.

“The industry has a history of cooperating and putting competitive behaviour to one side for the benefit of New Zealanders, of which this is another good example.”

UPDATE March 19, 2019: The CEOs of Spark, Vodafone and 2degrees have shared an open letter to the CEOs of Facebook, Google, and Twitter expressing their concern about how the video was shared.

"We also accept it is impossible as internet service providers to prevent completely access to this material. But hopefully we have made it more difficult for this content to be viewed and shared - reducing the risk our customers may inadvertently be exposed to it and limiting the publicity the gunman was clearly seeking," they write.

"Although we recognise the speed with which social network companies sought to remove Friday's video once they were made aware of it, this was still a response to material that was rapidly spreading globally and should never have been made available online. We believe society has the right to expect companies such as yours to take more responsibility for the content on their platforms.

"Content sharing platforms have a duty of care to proactively monitor for harmful content, act expeditiously to remove content which is flagged to them as illegal and ensure that such material – once identified – cannot be re-uploaded."

"Technology can be a powerful force for good. The very same platforms that were used to share the video were also used to mobilise outpourings of support. But more needs to be done to prevent horrific content being uploaded. Already there are AI techniques that we believe can be used to identify content such as this video, in the same way that copyright infringements can be identified. These must be prioritised as a matter of urgency.

"For the most serious types of content, such as terrorist content, more onerous requirements should apply, such as proposed in Europe, including take down within a specified period, proactive measures and fines for failure to do so. Consumers have the right to be protected whether using services funded by money or data."