
Decision-making is being blindly outsourced in our online lives

They are opaque and inaccessible, but the workings of algorithms have wide-ranging consequences for the shape and direction of our everyday lives.

Photo/Getty Images

They have benign, if exotic, nicknames: Penguin, Panda, Hummingbird. These are the monikers given to computer algorithms that dictate what results we get when we enter text into Google’s search engine.

We are increasingly subject to algorithms in our online lives, such as when we browse Amazon’s book recommendations or ask Apple’s Siri voice assistant what the weather is going to be like today.

Recently, I bought a T-shirt that appeared in an advert in my Facebook newsfeed. It had a quote on its front from The Martian, one of my favourite sci-fi movies of recent years: “Let’s science the shit out of it!”

The line summed up a stranded astronaut’s approach to getting off Mars and I’d been thinking about how great it would look on a T-shirt when Facebook’s algorithm served it up to me: it knew that I’d “liked” the page of Andy Weir, who wrote the book, and that I’d discussed a review of the movie with my Facebook friends. Click and purchase: $24.95.

Millions of these transactions take place every day, as algorithms offer us things they calculate we want, taking the effort out of information exchanges.

But, as Australian internet expert Michele Willson argues in a new research paper, “Algorithms and the Everyday”, published recently in the journal Information, Communication and Society, we are blindly outsourcing decision-making to algorithms more and more.

Nowhere is that more evident than in the profusion of devices such as Fitbits and smartwatches that track our footsteps, heart rate and sleeping patterns, crunching the data to suggest ways of improving our quantified selves. “The process from biology (heart) and practice (walking) to data becomes unquestioned, normalised and invisible,” Willson writes.

Tesla CEO Elon Musk. Photo/Getty Images

In the next decade, algorithms will power the development of the so-called internet of things, including driverless cars and robots. Elon Musk, chief executive of electric-car maker Tesla, claims that millions of lives will be saved when algorithms, rather than drivers, control cars.

But algorithms are designed by humans and so they are subject to biases, flaws of logic and the social, political and commercial priorities of their designers. This was brought into stark relief recently with reports that in parts of the US, software is being used to predict whether people will commit crimes.

The software collects several data points about a person and calculates a risk score. Judges then use that score to inform sentencing decisions for people convicted of crimes. But a ProPublica investigation revealed that the risk-scoring algorithms were racially biased. An analysis of 7000 risk scores assigned to people in Broward County, Florida, tested their predictive accuracy and found the algorithm was wrong 40% of the time and tended to give black defendants higher risk scores than white defendants.

Most algorithms are proprietary in nature – Google is not going to give away the magic code that makes its search engine so effective – so they are a black box to most of us. They are, writes Willson, “opaque and inaccessible to outside critique … their parameters, intent and assumptions indiscernible. And yet the working of algorithms has wide-ranging consequences for the shape and direction of our everyday.” That will have to change if algorithms are to decide access to state houses or priorities for health programmes.

As is often the case when it comes to moderating the effect of new technology, the Europeans are leading the way. The European Union has adopted a requirement that decisions based solely on automated processing, where they could significantly affect people, be clearly explained. From May 2018, EU citizens will have the right to obtain explanations of automated decision-making and to challenge the decisions.

Some type of algorithm auditing, perhaps overseen by our Privacy Commissioner, will eventually be needed here. It’s possible to opt out of Facebook, even Google. But it’s a different matter when your government is run by algorithms.
