What the big tech companies need to do after the Christchurch shooting

by Peter Griffin / 29 March, 2019
Photo/Getty Images/Listener illustration


In the week the world wide web turned 30, footage of the Christchurch mosque attacks appeared as a devastating reminder of the failings, indifference and greed of some of its biggest companies.

The high-definition video of the mass murder in Christchurch that went viral last week, sparking a massive effort to remove it from websites and social networks, seemed to represent everything that had gone wrong with the world wide web.

In an open letter marking its 30th birthday, the man who invented it lamented the dysfunction of the system that puts billions of web pages of content at our fingertips in mere seconds.

Yes, wrote Sir Tim Berners-Lee, the web had become a town square, a library, a cinema and a bank. “It has also created opportunity for scammers, given a voice to those who spread hatred, and made all kinds of crime easier to commit.”

He asked for us to come together as a global community to fix the web. But that’s much easier said than done.

Facebook spent nearly US$8 billion on research and development in 2017, much of it on cutting-edge artificial intelligence and data science. But that wasn’t enough to prevent its platform from being used to broadcast mass murder.

The footage recorded on the head-mounted GoPro camera worn by the alleged gunman was live-streamed over his mobile-phone connection to Facebook, a 17-minute horror movie of mayhem and death that was quickly shared, copied and reposted on Twitter, YouTube, Reddit and fringe internet forums such as 8chan.

In the 24 hours after the attack, Facebook claimed it removed 1.5 million copies or altered versions of the video from its Facebook and Instagram social-media platforms, but only 1.2 million were detected and deleted immediately when users around the world attempted to upload them.

At least 300,000 videos slipped through, which explains why they were still able to be viewed on Facebook hours after the alleged gunman had been dragged from his car by police and arrested. YouTube, the world’s largest video platform, also struggled to contain the video’s spread. At one point, an executive revealed, videos of the attack were being uploaded “as quickly as one per second”.

Facebook has refused to explain the flaws in its systems or to answer the Listener’s questions on what it plans to do to improve them. Its preferred approach to public relations is to deliver carefully vetted written statements and backgrounders, rather than answer direct questions from journalists.

But in a disclosure that is rare for Facebook, on Tuesday its Menlo Park, California-based head of global policy management, Monika Bickert, claimed that only 4000 people had viewed the video, with 200 watching live as the shootings took place. We’ll have to take Facebook’s word for that – its systems are notoriously opaque.

But reposting of the video on other online platforms and file-sharing networks has meant it continues to circulate on the web, remixed and edited to suit the entertainment or propaganda purposes of sick individuals.

Lack of understanding: Facebook founder Mark Zuckerberg testifies on Capitol Hill. Photo/Getty Images

Money before moderation

Facebook founder Mark Zuckerberg is yet to comment on the March 15 live streaming his company facilitated. But it is clear that content moderation has become his biggest headache, and Facebook’s repeated failures to prevent illegal and offensive content from circulating, including several previous live-streamed murders and suicides, increasingly look to be the social-media giant’s weak spot.

Admittedly, the numbers the company is dealing with are staggering – 1.52 billion users logging in daily, generating billions of new posts to the site in more than 100 languages. Despite endless privacy and public-relations scandals, the platform continues to grow.

Like every other social network, Facebook has its “community standards”, under which content featuring hate speech, violence, porn or spam may be removed. In addition to its automated filtering systems, the company now employs between 10,000 and 20,000 people worldwide who specialise in moderating content on its Facebook and Instagram sites.

On March 28, Facebook announced a new policy banning “praise, support and representation of white nationalism and separatism” on its platforms. But implementing it will test the limits of these automated and manual content-moderation systems as they attempt to make calls on nuanced and subtly racist material.

But the Facebook Live platform, launched in 2015 and made available to any Facebook account holder with a camera and an internet connection, has proven more problematic to police than the flood of videos, images and text posts that makes up the rest of the network’s uploaded content.

“It’s hard,” admits Alistair Knott, an artificial intelligence expert and associate professor at the University of Otago.

But technical difficulty isn’t the key reason for Facebook Live’s content-moderation shortcomings.

“There’s no money in it for them,” he says. “There’s no commercial incentive for them to do these politically crucial types of filtering, such as getting rid of the Christchurch video.”

Instead, Facebook has focused its developer and computing resources on making its platform a safe place to advertise, a hugely successful strategy that made the company US$55.8 billion in ad revenue last year.

“Pepsi may want to stick an advert in front of a video,” Knott says. “But Pepsi needs to know the video isn’t white-supremacist content or pornography. The video will be scanned by Facebook to make sure it isn’t. That’s a commercial service and they wouldn’t get as much advertising if they didn’t do it.”

Both YouTube and Facebook also employ complex systems to detect copyrighted material that has been unlawfully uploaded, whether it be a Disney movie or an Elvis song. “If a YouTube video plays a song, they’ve got software that can identify that song and either monetise it with advertising [if the copyright holder agrees] or have it taken down,” says Knott.
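To make Knott’s point concrete, here is a minimal sketch, in Python, of the kind of fingerprint-registry lookup he is describing. Everything in it is an illustrative assumption rather than a description of YouTube’s actual Content ID system, which relies on perceptual fingerprints that survive re-encoding, pitch shifts and background noise.

```python
# Toy fingerprint registry: rights holders register their works with a policy,
# and uploads are checked against the registry. Purely illustrative.
import hashlib

REGISTRY = {}  # hypothetical store: fingerprint -> policy chosen by the rights holder

def fingerprint(chunk: bytes) -> str:
    # Stand-in for a robust perceptual fingerprint; a real system must match
    # despite re-encoding and noise, which a plain cryptographic hash cannot do.
    return hashlib.sha256(chunk).hexdigest()

def register_work(chunks: list, policy: str) -> None:
    """Rights holder registers each chunk with a policy: 'monetise' or 'take_down'."""
    for chunk in chunks:
        REGISTRY[fingerprint(chunk)] = policy

def check_upload(chunks: list) -> str:
    """Decide what happens to an upload based on any registered matches."""
    hits = [REGISTRY[fingerprint(c)] for c in chunks if fingerprint(c) in REGISTRY]
    if not hits:
        return "allow"
    # Any demand for removal wins; otherwise the copyright holder gets the ad revenue.
    return "take_down" if "take_down" in hits else "monetise"

# Usage: a home video containing a registered Elvis song gets monetised, not removed.
register_work([b"elvis-song-chunk-1", b"elvis-song-chunk-2"], policy="monetise")
print(check_upload([b"home-video-chunk", b"elvis-song-chunk-2"]))  # -> "monetise"
```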

A different set of algorithms determines what content is “engaging” – as a result of monitoring likes, shares, comments, video plays and time spent on the post. Engaging content attracts advertisers. But that has fed the rise of “clickbait”, where inflammatory opinion pieces on news websites or sprawling, hate-filled discussions on Facebook pages are given prominence.

Alistair Knott. Photo/Supplied

Digital watermarks

News outlets have also defended their decision to run fragments of the killer’s video in their reports – YouTube still has many videos from news outlets featuring slivers of the footage free of graphic content.

The reputational risk of airing violent or offensive content has certainly led to significant investment in automated filtering systems across the board. Facebook doesn’t disclose exactly how it auto-detects dodgy content, but it did reveal that on Friday night, it created a “hashed” version of the alleged gunman’s video “so that other shares that are visually similar to that video are then detected and automatically removed from Facebook and Instagram”.

It clearly didn’t work properly. Knott says there’s a more effective method that may require more computer resources but should be well within the capabilities of the biggest tech companies in the world. It would involve automatically adding a type of digital watermark to every video uploaded to Facebook. The unique identifier, undetectable by viewers, would allow Facebook to instantly compare any video entering the network against flagged material and delete all matching versions.

“It would catch a lot of people and be quite helpful in distinguishing between the people who are casually sharing it without really thinking about it and those who are trying to fool the filters by modifying it, which is much more malicious,” says Knott.

That could prove more effective than hashing, which can be fooled by changing attributes of the video, such as the colour contrast or brightness. A digital watermark system, common to all of the big tech companies, would allow them to more effectively stop such videos from going viral.
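The sketch below, again in Python and again purely illustrative (the hash size, distance threshold and function names are assumptions, not Facebook’s implementation), shows how a simple perceptual hash is matched against a blocklist, and why Knott argues an embedded watermark would be harder to dodge: edits that change the pixels, such as re-colouring or cropping, can push a copy’s hash past the matching threshold, whereas a watermark identifier would travel with the video itself.

```python
# Minimal "average hash" matcher of the sort hash-based filtering relies on.
import numpy as np

def average_hash(frame: np.ndarray, hash_size: int = 8) -> int:
    """Average a greyscale frame into hash_size x hash_size blocks and record
    one bit per block: is the block brighter than the frame's overall mean?"""
    h, w = frame.shape
    frame = frame[: h - h % hash_size, : w - w % hash_size]
    blocks = frame.reshape(hash_size, -1, hash_size, frame.shape[1] // hash_size).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_known_reupload(frame: np.ndarray, blocked_hashes: set, max_distance: int = 5) -> bool:
    """Block an upload whose hash sits close to any hash on the blocklist.
    Straight re-uploads and re-encodes land within the threshold; deliberately
    edited copies (re-coloured, cropped, mirrored, overlaid) often drift outside
    it, which is the weakness Knott describes. A watermark added at upload time
    would instead be a fixed identifier carried in the video and looked up directly."""
    candidate = average_hash(frame)
    return any(hamming(candidate, blocked) <= max_distance for blocked in blocked_hashes)

# Usage: hash the offending video's frames once, then screen every new upload.
rng = np.random.default_rng(0)
blocklist = {average_hash(rng.random((480, 640)) * 255)}
print(is_known_reupload(rng.random((480, 640)) * 255, blocklist))  # unrelated clip -> False
```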

There’s a precedent for it. The Microsoft-developed PhotoDNA software is used by Google, Twitter, Facebook and Adobe to identify child pornography and extremist images, such as Isis beheadings.

Another more complicated technique involves analysing videos to determine their context and moderating based on what is contained in each frame. But these artificial-intelligence tools deliver patchy results.

“We are doing well with recognition of objects in static images, but classifying events or actions in video is harder,” says Knott. “How do you tell the difference between an actual shooting rampage and a shooting rampage in a film?”

More robust systems, backed up by legal requirements, would help, he says. But in most countries, the technology has sped ahead of politics. That has led Knott and colleagues at the University of Otago to set up the Centre for Artificial Intelligence and Public Policy, which now advises several of our government departments on their use of AI and algorithms.

Sarah Roberts. Photo/Stella Kalinina/Supplied

Holding back the tide

Has Facebook’s new army of human content moderators made much difference? It is hard to tell, says Sarah Roberts, an assistant professor at the University of California, Los Angeles, who looks at how commercial content moderation is carried out at tech companies.

The new frontline bodies have been accompanied by a growing mountain of content. “It’s a tall order to ask content moderators to intervene on years of violent invective targeting specific vulnerable groups of people,” says Roberts, who considered Facebook Live a grand social experiment when it launched, with Facebook users its often-unwitting beta testers.

“Putting it squarely on the shoulders of the people or systems doing the moderation is like asking someone to hold back the tide,” Roberts says.

Flagging a clearly illegal and offensive video is one thing, but making a call on hate speech, bigotry and racist comments is another story entirely. It’s the major challenge Facebook, Google, Twitter and others face as they wear criticism that they have fostered an environment for extremism and hatred to flourish.

“Put a group of people from across the world in a room and ask them to give you a definition of hate speech,” says Roberts. “What’s the likelihood that they’d agree on the same things and a shared definition?”

The Facebook content moderators spread across the globe, many of them poorly paid and faced with a daily torrent of violent, racist and pornographic content, are a thin line of defence. Instead, Roberts says we need to look beyond our computer screens to our communities and relationships and to leaders who can protect us from fake news and hate speech.

Part of that is forcing Google and Facebook to properly acknowledge their role in trashing that digital town square that Berners-Lee created.

Unfortunately, such leaders are few and far between in the US at the moment, says Roberts, pointing to President Donald Trump, her “tweeter-in-chief”.

“Trump sent his warmest sympathies and best wishes to New Zealand. I was embarrassed on the part of the United States. It was inappropriate, it was inadequate, it belied his own invective.”

Last year’s congressional hearings, which saw Zuckerberg appear on Capitol Hill to defend Facebook’s inappropriate sharing of user data in the Cambridge Analytica scandal, also left Roberts dismayed. Aged lawmakers who assembled to grill Zuckerberg were confused about how Facebook operated and made money.

“Many of those who may have the will to intervene, even in the anti-regulation climate we’ve been in for the past 40 years, seemed to be stumbling around for any understanding of the social-media ecosystem.”

Profits without responsibility

Our own politicians aren’t necessarily much more web savvy. But the Christchurch massacre seems to have galvanised the coalition government’s senior leadership in a way the country has rarely seen over an issue relating to the internet economy.

“They are the publisher, not just the postman. They cannot have the profits without the responsibility,” the Prime Minister said of Facebook in Parliament this week. That followed a conversation with British Prime Minister Theresa May about efforts in the UK to hold big tech companies to account.

In February, the UK Parliament’s Digital, Culture, Media and Sport Committee concluded an 18-month investigation into disinformation and fake news by calling for a compulsory code of ethics for tech companies, to be overseen by an independent regulator that would have legal powers to launch action against corporates breaching the code.

It also wanted measures introduced to require social-media companies to take down known sources of harmful content, including proven sources of disinformation.

“If anywhere has done a reasonable amount of work already, it’s the likes of the UK,” Ardern told reporters. “They’ve been trying to hold some of these global platforms to account via a select committee process. But we really need a global alliance to deal with some of these issues.”

The UK Parliament still has to turn the recommendations into law, which may prove a tall order given the political division created by Brexit.

Others point to the hate-speech law Germany began enforcing last year, which requires social networks to remove lawbreaking material within 24 hours of being notified of its existence. Aimed squarely at Facebook, Twitter and YouTube, the law can impose fines of up to €50 million for serious breaches. But the Germans have a particularly strong aversion to racist and inflammatory rhetoric, a response to the excesses of fascism during the Nazi era.

The introduction of the European Union’s General Data Protection Regulation, also last year, forced real changes in the tech companies to better protect the privacy of their users’ data across the 28 EU nations.

Due to the global reach of their platforms, many of the tech companies implemented those changes globally, giving stronger data protection to New Zealand, too – and inundating us with reminders to check their new terms and conditions.

Australian Prime Minister Scott Morrison’s withering comments about the Christchurch live-streaming debacle and recent investigations into the market dominance of Google and Facebook in the digital ad market could give some critical mass to Australasian regulatory efforts.

Morrison has called on the G20 international forum of powerful nations, which will meet in Japan in June, to use Christchurch as an excuse to tackle the “ungoverned” area of internet extremism.

Twitter founder Jack Dorsey. Photo/Getty Images

Turning words into deeds

The international spotlight on Christchurch’s terror attacks, and Facebook’s live-streaming nightmare, could certainly be an unprecedented opportunity to translate political rhetoric into regulatory action to improve the social-networking company’s flawed content-moderation systems, says Roberts.

“It doesn’t take multiple instances of something like what happened in your country to feel that all of those interventions are for nought, when we still end up with the deaths of 50 people sent around the world and replicated on other sites.”

She also sees merit in calls from Facebook critics, including New Zealand’s Privacy Commissioner, John Edwards, to suspend the Facebook Live function until more reliable moderation can be implemented.

“Everyone could press ‘pause’ and just get a grasp.”

That’s sort of what executives at the country’s largest internet provider, Spark, thought last Friday night when they saw copies of the video spreading across the web and moved to block access to websites that were hosting it. Some of those sites remained inaccessible to most New Zealanders for days after the attack, until copies of the video were removed.

“At extremely short notice we decided to do what we thought was the right thing,” says Spark’s corporate relations lead, Andrew Pirie. “What we put in place very quickly was a blunt instrument; we were simply trying to block [domain] access.”

Vodafone, Vocus and 2degrees agreed to join the blocking effort, with the blessing of the Department of Internal Affairs and the police.

Google CEO Sundar Pichai. Photo/Getty Images

A collective website-blocking effort like that hadn’t been done before. Internal Affairs operates a filter for restricting New Zealanders’ access to websites hosting child pornography – it’s software that all of our internet service providers (ISPs) voluntarily run across their networks.
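For the technically minded, the sketch below (in Python; the domain names and addresses are hypothetical, as the ISPs’ actual filtering systems are not public) shows why domain-level blocking is the “blunt instrument” Pirie describes: the resolver simply stops answering for a blocked hostname, so the whole site becomes unreachable for customers, not just the offending video.

```python
# Hypothetical resolver-level blocklist, illustrating domain blocking in principle.
BLOCKED_DOMAINS = {"example-video-host.net", "fringe-forum.example"}  # made-up names

def resolve(hostname: str) -> str:
    """Pretend DNS lookup: blocked domains get no answer at all."""
    # Crude base-domain extraction (last two labels); real filters are more careful.
    domain = ".".join(hostname.lower().rstrip(".").split(".")[-2:])
    if domain in BLOCKED_DOMAINS:
        return "NXDOMAIN"      # the customer's browser can't reach any page on the site
    return "203.0.113.10"      # placeholder answer for everything else

print(resolve("files.example-video-host.net"))  # -> NXDOMAIN: the entire site is blocked
print(resolve("www.facebook.com"))              # -> resolves; blocking it was judged too disruptive
```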

Spark and its rivals knew that Facebook was struggling to remove the videos from its own network. So, why didn’t Spark block Facebook.com?

“We didn’t think it was an appropriate step given the importance of Facebook during this crisis as a way for people to stay in touch with each other,” says Pirie.

He says a more effective mechanism for ISPs to come together to block websites in times of crisis is the “missing link”. The ISPs’ chief executives also banded together on Tuesday to write to Zuckerberg, Google boss Sundar Pichai and Twitter founder Jack Dorsey, urging them to do more to detect and remove harmful content from their networks.

“Technology can be a powerful source for good,” wrote Simon Moutter, Jason Paris and Stewart Sherriff from Spark, Vodafone and 2degrees respectively.

“Already there are AI techniques that we believe can be used to identify content such as this video, in the same way that copyright infringements can be identified. These must be prioritised as a matter of urgency.”

The move came as a group of New Zealand companies, Spark among them, said they would be removing their advertising from YouTube and Facebook until content moderation improved. A few weeks previously, Spark had again pulled its advertising from YouTube over concerns about comments from paedophiles beneath videos featuring children.

Not “liking” it

There is no particular love of tech giants such as Google and Facebook in the telecommunications industry. They have vastly lucrative ad-supported business models while the telcos face tight margins and cut-throat competition and bear the full cost of upgrading their networks to handle the ever-increasing load of videos and social-media posts.

That social-media business model lies at the heart of most of the problems that have arisen with these platforms, says InternetNZ chief executive Jordan Carter.

“All of it is designed to give you a little dopamine burst when you click ‘like’, when you’ve made your view heard,” he says. “Because that’s the way they learn about your preferences and can do the micro-targeting to sell the advertising. That’s how they make their money.”

Carter first took to the internet in the mid-1990s, when geeks gathered in Usenet newsgroups to argue about politics, connecting over slow dial-up modems.

As the social networks enter their teenage years, he sees some decidedly teenage behaviour evident in how we behave on those networks. After all, a large number of people thought it appropriate to spread the Christchurch video.

“It seems to me there isn’t always the discrimination you hope people would exercise,” says Carter. “In the past 15 years, we’ve seen a lot of blood and guns and bombs on our screens. I don’t understand the effects of prolonged exposure to violence, but maybe that means people are like, ‘Oh, I’ll take a look.’”

The introduction of the Harmful Digital Communications Act, in 2015, has been useful in tackling online bullying, with Netsafe as the agency responsible for investigating complaints made under the Act. But Carter says that it and other legislation, such as the Human Rights Act, may be less suitable for dealing with hate speech and the methods used to groom online users for recruitment into extremist movements.

InternetNZ’s Jordan Carter. Photo/Supplied

“That connects to a broader set of questions about political speech and what the acceptable boundaries are, and that’s way beyond my pay grade,” he says. “That’s a deeply philosophical discussion about the limits of free expression in liberal societies, and what we should expect, what our laws should provide for and what our norms are for social platforms.”

If Facebook had faced the prospect of being fined several million dollars by our Government over its hosting of the footage of the attack on the mosques, it would certainly have had an incentive to fix its systems and suspend live streaming until it blocked all the coverage.

When it comes to respecting suppression orders in search results, co-operating with our privacy watchdog on investigations and paying fair tax on revenue generated here, the big tech companies have shown themselves to be evasive, aloof and uncommitted to any sort of meaningful action.

Cleaning up the web will involve a lot more than clever technology to improve content moderation to prevent the next murder from being live-streamed. It will require Facebook, Google and Twitter, in particular, to better address the hate speech they have tolerated for too long, including far-right activism masquerading as free speech.

The online fallout over Christchurch offers a glimmer of hope. But the impetus for political change to deal with the growing complexities of the online world shouldn’t be allowed to fade with the memories of those brutal attacks.

This article was first published in the March 30, 2019 issue of the New Zealand Listener.
