Prime Minister Jacinda Ardern in Christchurch, after the mosque terror attacks which were live-streamed around the world. Photo/Kirk Hargreaves/Wikimedia Commons-CC BY 4.0.

Microsoft's president reveals details of his Jacinda Ardern meeting post-Christchurch

Of the tech luminaries that Prime Minister Jacinda Ardern has enlisted in her mission to stamp out the online spread of hate speech following the Christchurch mosque shootings, one has been particularly influential.

Microsoft’s president Brad Smith may not have the name recognition of Twitter founder Jack Dorsey, who visited Ardern at the Beehive on Monday, or Facebook’s Mark Zuckerberg, who drew the ire of many for failing to publicly apologise for the social network’s role in spreading copies of the killer’s live-streamed video.

But it was Smith, also Microsoft’s chief legal officer and the longest-serving member of its leadership team, who met with Ardern and her cabinet just ten days after the attacks, on a long-planned visit. The meeting turned into a brainstorming session for what would become the Christchurch Call.

Microsoft president Brad Smith. Photo/Supplied.

Behind the Call

In his new book Tools and Weapons: The Promise and the Peril of the Digital Age, which he co-wrote with Microsoft’s director of communications, Carol Ann Browne, Smith gives a flavour of the behind-the-scenes discussions that Ardern had with tech company leaders.

“She wanted to use the moment not to score public relations points but to achieve something of more lasting importance,” writes Smith.

“On a late-night phone call that Satya [Nadella, Microsoft’s chief executive] and I had with Ardern, I mentioned how struck I was by the government’s speed. As she replied, ‘When you’re small, you have to be nimble!’ ”

The Christchurch Call drew inspiration from another initiative Smith was also heavily involved in, the Paris Call for Trust and Security in Cyberspace, which calls for more explicit support under international law to protect civilians, civil infrastructure and democratic processes from cyber attacks. By early this year, over 500 organisations and 65 countries had signed the Paris Call.

It was also in Paris, at the Élysée Palace, the French equivalent of the White House, that Smith and Dorsey joined Ardern, French President Emmanuel Macron and representatives of other nations and tech companies to sign the Christchurch Call, two months after the attack. The one sour note was that Zuckerberg was a no-show – he sent a representative instead.

National leader Simon Bridges this week wrote off the Call as “a nebulous, feel-good thing” and a distraction from the Government’s pressing domestic issues. So what has happened since Ardern scored her diplomatic coup in Paris?

“We've focused on the development of the concrete parts of the crisis protocol. That has been one area where there has been a lot of additional work since the 15th of May,” Smith tells NOTED.

“We've been focused on creating the right kind of organisational structure and infrastructure that's needed for tech companies to work together, including on technology development, on sharing technology, best practices and the like.

“It's going to involve additional investment by the tech companies,” he says.

 

Prime Minister Jacinda Ardern and French President Emmanuel Macron at the “Christchurch Call” summit, which delivered an agreement signed by tech companies and world leaders. Photo/EPA/Charles Platiau, CC BY-ND.

Concrete steps?

The crisis protocol was one of nine steps the tech signatories – Amazon, Facebook, Google, Microsoft and Twitter among them – agreed to take. It involves the establishment of “incident management teams” at the tech companies that will work together to share information and coordinate action when a crisis threatens to send extremist content viral.

There has been a smattering of other developments. As the Paris meeting got underway, Facebook said it would introduce a “one strike” policy for video live-streaming, under which a user would be suspended for 30 days or longer after a single policy violation, such as sharing “a link to a statement from a terrorist group with no context”.

But that wouldn’t have stopped the Christchurch killer’s online broadcast. Facebook also said it would invest US$7.5 million in a research partnership with three US universities to “improve image and video analysis technology” to detect extremist content in videos, including those altered to try to evade the social network’s automated content filters.


It was recognition that Facebook’s filtering systems just weren’t able to catch the horrific video beamed from Christchurch and its numerous altered versions.

At Microsoft, Smith called for a review of the company’s vast array of services and found nine, ranging from LinkedIn and Xbox Live to the Bing search engine and the Azure cloud platform, that could potentially be susceptible to abuse.

Terms of use, self-reporting of objectionable content and tighter security settings are relatively easy measures to take. Harder will be the collective effort required to develop better technology to filter and screen billions of videos, posts and messages every day, particularly as new threats such as deepfakes emerge – realistic-looking video or audio recordings, generated with artificial intelligence (AI) tools, that can convincingly mimic real people. Currently, tech companies employ tens of thousands of people to vet content manually.

With deepfakes set to become a powerful weapon for spreading misinformation, AI will also increasingly need to be deployed to identify and block threats, by studying the behaviour of billions of social network users and learning to spot patterns of abuse.

“Artificial intelligence can help, but human beings will continue to be essential,” says Smith.

“That's a reflection of the fact that we're dealing with human beings who, unfortunately, are from time to time tempted to engage in these atrocious acts and these terrorist attacks. We need human creativity to respond to these human challenges.”

Embracing regulation

After becoming Microsoft’s general counsel in 2002, Smith spent much of the next decade dealing with government antitrust cases against Microsoft. It is perhaps that experience, and the fact that Microsoft isn’t reliant on the advertising revenue that is integral to the existence of Google and Facebook and drives some of the internet’s bad behaviour, that explains why Microsoft is now open to greater regulation in some areas.

“I think it's right that laws impose more responsibilities on tech companies,” says Smith.

Barely three weeks after the Christchurch attack, and in direct response to it, the Australian government passed a law making it an offence for companies to fail to remove videos or photographs depicting murder, torture or rape.

Big tech companies that fall foul of the law could face a fine of up to 10 per cent of their annual global turnover, and their executives could face jail terms. The big social media players complained that the law could damage Australia’s security co-operation and usher in mass surveillance of internet users.

Brad Smith’s new book. Photo/Supplied.

Smith is more circumspect.

“I’m less concerned about the amount of the fine and more focused on what today is not yet a set of consistent and clear and precise legal standards around the world, especially if the law is going to have teeth, and the Australian law obviously does,” says Smith.

“It has to be clear to companies what they’re supposed to do and when and how they’re supposed to do it.”

Microsoft was resolutely anti-regulation in the dark days of the late 1990s, when it was accused of wielding monopoly power in the PC industry. Its more recent openness to tighter regulation also underpins its advocacy for US laws governing disruptive new technology, like facial recognition.

As Smith explains in Tools and Weapons, any application driven by AI, such as facial recognition, improves as larger quantities of data are fed into its machine learning algorithms. As a result, ambitious competitors will do as many deals as possible early on to get access to as much data as possible.

“Hence the risk of a commercial race to the bottom, with tech companies forced to choose between social responsibility and market access,” he writes.

“The only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition.”

Facial recognition and employee activism

Microsoft, as a trusted technology vendor, therefore has much to gain from the regulation of facial recognition, something it has recommended to governments around the world. In the US, the technology has been banned for government use in San Francisco and a handful of other cities, and lawmakers have drafted bills to regulate its use.

Microsoft’s view, writes Smith, is that law enforcement agencies should only be able to use facial recognition technology built into cameras to track individuals if they have a court order to do so, just as is currently required to gain a wiretap or to track a person through their phone’s GPS chip.

In the commercial world, where Microsoft is pursuing opportunities to roll out facial recognition, its position is that legislation should require organisations, such as retailers, to provide “conspicuous notice” so people know it is being used. Some companies are pre-empting legislation with such notices. Facebook, which uses facial recognition to tag users in photos posted to the network, last week changed its policy to make it an opt-in feature rather than the default setting.

New laws would also help Microsoft navigate the moral issues around the use of AI being raised by its own employees. A new wave of employee activism in Silicon Valley is seeing workers at the likes of Amazon, Google and Microsoft oppose some of their employers’ contracts, including those with the military and law enforcement agencies.

Microsoft was caught up in a wave of such activism last year, when employees petitioned management to cancel a contract with US Immigration and Customs Enforcement (ICE). In June, the Trump administration made a decision to separate children from parents at the southern US border. Images of migrant children who’d just crossed the border, packed into ICE holding facilities, were splashed across cable news channels.

Some Microsoft staff worried ICE would use Microsoft’s cloud-based facial recognition technology to identify and track migrants. It turned out the contract was to move ICE email, calendar, messaging and document storage to the cloud – bread-and-butter work for Microsoft. The company held its ground. But it was a sign that employee priorities were changing and that Microsoft and other tech giants would dismiss worker activism at their peril.

“What is so noteworthy about activism in the tech sector is that we have employees standing up not for themselves, but for broader societal issues and values,” says Smith.

“We don't necessarily always agree that their answers are the right ones. But what we learned is that their questions are the important ones.”

Tools and Weapons recounts Smith’s role navigating some of the biggest tech-related issues of the last decade, from the revelations in the Edward Snowden leaks of NSA documents to the WannaCry ransomware cyberattack that took millions of computers offline in 2017.

Threats to democracy

One of Smith’s greatest concerns is the threat to democratic processes, epitomised by Russian meddling in the last US presidential election through hacking attacks and social media misinformation campaigns.

With elections due in the US and New Zealand next year, Smith says the issue is a top priority in both countries.

“I think we should all be very focused on the kinds of disinformation threats that we now understand better.

“I think that one can easily make the mistake of just getting good at fighting the last war. We better be good at fighting the next war. Because if you lose the same war twice, well, you know, shame on you.”


Smith avoids direct criticism of President Trump and his administration in the book, though his frustration at the US government’s unwillingness to join international efforts to tackle tech-related issues is clear.

“The political winds among some of the White House staff were not blowing in favour of multilateral initiatives, regardless of the issue,” he writes of his attempts to get the government to endorse the Paris Call.

“It put us in an unusual position, as we had our government affairs teams around the world asking other countries to support the effort.”

The US also failed to officially endorse the Christchurch Call.

The trade war between the US and China turns out to be the most topical issue raised in the book. Microsoft has a large research facility in Beijing, and hundreds of millions of Chinese people use Windows and the Office software suite.

The US-China trade war

Smith is concerned that measures to limit technology transfer to Chinese companies in the name of national security could create a “digital iron curtain down the middle of the Pacific”.

“I think it's likely to take the world backwards,” he says.

“It doesn't mean that there aren't real security issues, there are, they need to be thought through. But the only way to navigate these tensions successfully is to appreciate the way technology is created and the dual uses to which it is put and to, I think, ground ourselves in some of the deeper philosophical, political and historical trends that will continue to shape the relationship between the United States and China.”

For New Zealand, he adds, it is imperative to stay connected and engaged with both countries. Smith plans to continue working with Ardern, who he says has a “sense of moral authority” to make the Christchurch Call count for something.

“One of the things that Prime Minister Jacinda Ardern does very well is keeping us on our toes.”
