Facebook said it is tightening rules around its livestreaming feature ahead of a meeting of world leaders aimed at curbing online violence in the aftermath of a terror attack in New Zealand.
A lone terrorist killed 51 people at two mosques in the city of Christchurch on March 15 while livestreaming the attacks on Facebook. It was New Zealand’s worst peacetime shooting and spurred calls for tech companies to do more to combat extremism on their services.
Facebook said in a statement it was introducing a “one-strike” policy for use of Facebook Live, temporarily restricting access for people who have faced disciplinary action for breaking the company’s most serious rules anywhere on its site.
First-time offenders will be suspended from using Live for set periods of time, the company said. It is also broadening the range of offenses that will qualify for one-strike suspensions. Facebook did not specify which offenses were eligible for the one-strike policy or how long suspensions would last, but a spokeswoman said it would not have been possible for the shooter to use Live on his account under the new rules.
The company said it plans to extend the restrictions to other areas over coming weeks, beginning with preventing the same people from creating ads on Facebook. It also said it would fund research at three universities on techniques to detect manipulated media, which Facebook’s systems struggled to spot in the aftermath of the attack.
World leaders and tech executives are meeting Wednesday in Paris, where they will spend the day working on the “Christchurch Appeal,” an effort to stop online violence.
New Zealand Prime Minister Jacinda Ardern welcomed Facebook’s pledge to restrict some users from Facebook Live and invest $7.5 million in research to stay ahead of users’ attempts to avoid detection.
She said she herself inadvertently saw the Christchurch attacker’s video when it played automatically in her Facebook feed.
“There is a lot more work to do, but I am pleased Facebook has taken additional steps today … and look forward to a long-term collaboration to make social media safer,” she said in a statement.
Ardern is playing a central role in Wednesday’s meetings in Paris, which she called a significant “starting point” for changes in government and tech industry policy.
Twitter, Google, Microsoft and several other companies are also taking part, along with the leaders of Britain, France, Canada, Ireland, Senegal, Indonesia, Jordan and the European Union.
Officials at Facebook said they support the idea of the Christchurch appeal, but that details acceptable to all parties still need to be worked out. Free speech advocates and some in the tech industry bristle at new restrictions, arguing that violent extremism is a societal problem the tech world cannot solve on its own.
Ardern and Wednesday’s meeting host, French President Emmanuel Macron, insist that any solution must involve joint efforts by governments and tech giants. France has been hit by repeated Islamic extremist attacks carried out by groups that recruited and shared violent images on social networks.
Speaking to reporters ahead of the meetings, Ardern said, “There will be of course those who will be pushing to make sure that they maintain the commercial sensitivity. We don’t need to know their trade secrets, but we do need to know what the impacts might be on our societies around algorithm use.”
She stressed the importance of tackling “coded language” that extremists use to avoid detection.
Before the Christchurch attack, she said, governments took a “traditional approach to terrorism that would not necessarily have picked up the form of terrorism that New Zealand experienced on the 15th of March, and that was white supremacy.”