
How the Government can improve its use of artificial intelligence


Our use of AI for problem-solving is okay so far, but the Government needs to do better.

Here’s the good news about our government departments’ use of artificial intelligence (AI): they haven’t massively screwed up, so far. But they could if they continue with what researchers describe as a “piecemeal” approach to the use of computer algorithms to automate systems and, more importantly, make predictions that could affect our lives.

There’s a lot of hype about predictive algorithms, but they are nothing new to government. The RoC*RoI predictive model, developed by Department of Justice senior psychologists more than 25 years ago, is used to predict criminals’ risk of reconviction and risk of imprisonment.

It is fairly uncontroversial. A computer algorithm, fed basic social and demographic details and an individual’s criminal history, consistently outperforms the judgments of trained experts when it comes to picking whether a person is likely to reoffend or end up back in prison.
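
To see the general shape of such a tool: most predictive risk models reduce to a weighted score over a handful of inputs. Here is a minimal Python sketch of that idea; the features and coefficients are invented for illustration and are not RoC*RoI's actual inputs or formula.

import math

def risk_of_reconviction(age, prior_convictions, age_at_first_offence):
    # Hypothetical logistic-regression weights: more prior convictions
    # and a younger offender push the score up. Invented numbers only,
    # mirroring the kinds of inputs described above (basic demographics
    # plus criminal history).
    z = (-1.5
         + 0.12 * prior_convictions
         - 0.03 * (age - 20)
         - 0.05 * (age_at_first_offence - 14))
    return 1.0 / (1.0 + math.exp(-z))  # probability between 0 and 1

# Example: a 24-year-old with six prior convictions, first offence at 15.
print(round(risk_of_reconviction(24, 6, 15), 2))  # about 0.28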


But as the mountain of data we generate grows, the Government is naturally tempted to break down the departmental silos in which information has traditionally been kept and put it all together to give a more complete picture of each person.

We shouldn’t necessarily fear that. The data-driven approach, Bill English’s “social investment”, could lead to better provision of taxpayer-funded services, better-functioning health and social welfare systems and the fairer sharing of wealth.

But as a group of University of Otago researchers note in a newly released Law Foundation-funded report, we need a stronger focus on “accuracy, transparency, control and bias” when designing and deploying algorithms across government.

A stocktake by the Department of Internal Affairs and Statistics NZ last year revealed that 32 algorithms were in use across 14 government agencies. They include Neet (Not in Education, Employment or Training), the Ministry of Social Development’s predictive modelling tool for identifying school leavers at greater risk of long-term unemployment, and ACC’s system for automating compensation claims, which leaves staff with only a few to process manually.

So far, so good. But there’s no independent audit of how accurate these algorithms are and whether they are less biased than the humans who would otherwise be making decisions about us. There’s no system to question the outcome of an algorithm-generated decision to see what factors were considered.

We need to get our house in order. But we have an advantage here.

James Maclaurin. Photo/Supplied

“Unlike some countries that use commercial AI products, we’ve tended to build our government AI tools in-house, which means we know how they work,” says the report’s co-author, James Maclaurin, an Otago philosophy of science professor. “That’s a practice we strongly recommend our Government continues.”

There are numerous overseas examples of the troubling results that arise when private companies are contracted by government departments to apply their proprietary algorithms. The most famous is Compas, the system US judges use to inform bail and sentencing decisions.

A 2016 investigation of the system’s automatic risk assessments by journalists at the non-profit news organisation ProPublica found that Compas was biased against African Americans: black defendants who did not go on to reoffend were nearly twice as likely as white defendants to be wrongly flagged as high risk. Northpointe, the company that built Compas, defended its algorithm, but the system’s inner workings are kept secret to protect the company’s commercial interests, so it is still unclear how it works.
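
The audit itself is conceptually simple: compare error rates across groups. Here is a minimal Python sketch of that kind of check, run on invented records rather than the Compas data.

# Each record: (group, flagged_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate(group):
    # Of the people in this group who did NOT reoffend, what share
    # were wrongly flagged as high risk?
    negatives = [r for r in records if r[0] == group and not r[2]]
    return sum(1 for r in negatives if r[1]) / len(negatives)

for group in ("A", "B"):
    print(group, round(false_positive_rate(group), 2))  # A 0.67, B 0.33

This is the measure on which ProPublica found the disparity: the algorithm looked similarly accurate overall, but its mistakes fell much more heavily on one group.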

The Otago researchers argue for transparency around algorithms used in the public arena and propose a register of predictive algorithms used by the Government, complete with layperson summaries of how they work and whether they were bought commercially or developed in-house.
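
What might one entry in such a register look like? A minimal sketch in Python, with field names and summary text assumed from the article’s description rather than taken from the report’s actual proposal:

# A hypothetical register entry; the schema here is an assumption.
entry = {
    "name": "Neet",
    "agency": "Ministry of Social Development",
    "purpose": "identify school leavers at risk of long-term unemployment",
    "provenance": "developed in-house",  # or "bought commercially"
    "layperson_summary": "Uses education and benefit history to flag "
                         "school leavers who may need early support.",
}
print(f"{entry['name']} ({entry['agency']}): {entry['provenance']}")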

But they go further, calling for the formation of an independent regulator, akin to the Privacy Commissioner, to oversee use of algorithms in government. It is a sensible proposal given how much we have to lose if public trust is undermined.

By sorting out the Government’s use of algorithms, we may also identify an approach to the bigger problem: the use of algorithms by Google, Facebook and the plethora of companies that collect and mine our data every day, using methods cloaked in secrecy. That’s where sunlight really needs to be applied.

This article was first published in the June 8, 2019 issue of the New Zealand Listener.