Governments everywhere are announcing new strategies for artificial intelligence. From France, which has pledged $1.85 billion in funding for AI research, to Canada, which plans to cultivate a cohort of AI specialists in its universities, it’s clear political leaders see AI as an area they need to get good at, fast.

AI is an economic opportunity, but it can also be deployed to reform public services. Some caution is needed, though: for every benefit, there seems to be a corresponding danger. So-called “automated decision-making systems” have already been shown, in a number of cases, to have malign effects on citizens’ lives. How government should regulate the adoption of AI, if at all, remains contested.

For public servants about to enter this world, it can all seem a bit overwhelming. Here’s what you need to know about the debate:

What’s the problem with AI?

Public servants have used computational tools — “automated decision systems” — to advise and guide their decisions for years. By processing masses of data to estimate the likelihood of certain events, or to measure performance, these algorithms act as a guide to human judgment.

AI builds on these tools. It allows automated systems to learn as they work, adapting over time to improve their performance. Many governments already use automated decision systems, and some are beginning to experiment with machine learning. These tools can be highly effective, but they come with dangers.
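A minimal sketch in Python can make the distinction concrete. Every feature name, weight and threshold below is invented, not drawn from any real government system. The first function is a classic automated decision system, a fixed formula that advises a caseworker; the second adds the machine-learning step, nudging the weights as real outcomes come in.

```python
# Illustrative only: a toy decision-support score with a simple
# online-learning update. All feature names, weights and thresholds
# are hypothetical, not drawn from any real government system.
import math

# Hypothetical weights the model has "learned" so far.
weights = {"missed_appointments": 0.8, "prior_incidents": 1.2, "months_since_review": 0.1}
bias = -2.0

def risk_score(case: dict) -> float:
    """Logistic score in (0, 1): an advisory estimate, not a decision."""
    z = bias + sum(w * case.get(k, 0.0) for k, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

def learn(case: dict, outcome: int, lr: float = 0.05) -> None:
    """Adapt over time: nudge each weight toward the observed outcome (0 or 1)."""
    error = outcome - risk_score(case)  # positive if the model under-predicted
    for k in weights:
        weights[k] += lr * error * case.get(k, 0.0)

case = {"missed_appointments": 3, "prior_incidents": 1, "months_since_review": 14}
print(f"advisory risk score: {risk_score(case):.2f}")  # a guide for a human, not a verdict
learn(case, outcome=0)  # the predicted event did not occur, so the model adjusts
```

The adaptive step is exactly what makes these systems both powerful and hard to oversee: the weights that produce tomorrow’s recommendation are not the ones that were signed off yesterday.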

“When automation and AI is used in government, the rights of those affected by it should be respected as if a human were making the decision”

For example, the US state of Arkansas used an automated decision-making tool to calculate the number of hours of Medicaid-funded home care that severely disabled people could receive. But a legal investigation found an error in the code, meaning many people had their hours wrongfully reduced.
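The reporting on the Arkansas case does not include the faulty code itself, but a hypothetical Python sketch shows how easily this class of bug arises. All field names, weights and hour counts below are invented; the point is only that a single silently ignored input can shrink someone’s allocation.

```python
# Hypothetical illustration of how a small coding error can cut benefits.
# All field names, weights and hour counts are invented.

BASE_HOURS = 10
NEED_WEIGHTS = {"mobility": 6, "feeding": 5, "medication": 4}

def weekly_care_hours(assessment: dict) -> int:
    """Allocate weekly care hours from an assessment of needs on a 0-3 scale."""
    hours = BASE_HOURS
    for need, weight in NEED_WEIGHTS.items():
        # BUG: a missing or misspelled key silently falls back to 0
        # instead of raising an error, so the need is never counted.
        hours += weight * assessment.get(need, 0)
    return hours

# Upstream, the assessment form exports "medications" (plural)...
assessment = {"mobility": 2, "feeding": 1, "medications": 3}

# ...so the medication need is dropped and the allocation is 27 hours, not 39.
print(weekly_care_hours(assessment))
```

Nothing crashes and no warning is raised; the only visible symptom is a smaller number, which is precisely why such faults can persist until a legal challenge forces an audit.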

The problems with such systems apply equally to AI. When a complex algorithm draws on a convoluted mass of data to reach a decision, or even a recommendation, it can be extremely hard to open the bonnet, dig through the workings and find the fault.

What should governments do?

Some argue that new rights for citizens, tailored to the digital era, need to be enshrined. “As a citizen, as a human being… what are your rights in the context of automated, data-based decision making?” asked Meredith Whittaker of New York University’s AI Now Institute on a panel at a recent Open Government Partnership summit.

“When automation and AI is used in government, the rights of those affected by it should be respected as if a human were making the decision,” she said. Whittaker argued that, too often, these get lost as government strives for efficiency and cost savings.

Melanie Robert, one of the key public officials steering Canada’s AI strategy, who spoke on the same panel, said that now, while governments are drawing up their AI strategies, is the perfect time to enshrine such rights. But, she acknowledged, “it’s very hard right now for public servants who are regulators and legislators, because you don’t want to move too fast to stifle innovation.”

Canada’s approach is to assemble a multidisciplinary team of experts — lawyers, technology specialists, philosophers — to investigate what effects AI can have on people’s lives and rights when used within government. The consultation will inform an ethical framework. Robert feared that pressing down too hard with regulation, before a thorough research process, would stifle any benefits.

Not everyone agrees. “I’ve been fighting on these grounds for twenty-odd years,” said Gus Hosein, executive director of Privacy International, a London-based charity, on the panel. For Hosein, there needs to be far more rigorous testing of the ways in which government collects data, even before we tackle the question of how to approach AI. The stakes are too high, and the danger to rights too significant — for him, innovation and economic development have to be secondary concerns.

What is working already?

In France, Etalab, the Prime Minister’s office in charge of open data and open government, is investigating applications of AI to public services. It is working to enshrine transparency-by-design in the government’s approach, ensuring that, for every algorithm or system in use, the public can access its inner workings. For Etalab’s director, Laure Lucchesi, this approach is widely accepted within government, “but it’s really a challenge to make it effective for citizens.”
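At its simplest, transparency-by-design means a decision function never returns a bare verdict: every outcome carries a machine-readable record of the inputs, the rule applied and the version of the logic, so the workings can be published and audited. The Python sketch below illustrates that pattern with an invented eligibility rule; it is not Etalab’s actual implementation.

```python
# Hypothetical sketch of "transparency by design": every decision carries
# a machine-readable explanation that can be published alongside it.
import json

RULES_VERSION = "2018.1"  # invented version tag
THRESHOLD = 12000         # invented eligibility threshold, in euros

def decide_eligibility(applicant_id: str, annual_income: float) -> dict:
    eligible = annual_income < THRESHOLD
    return {
        "applicant": applicant_id,
        "decision": "eligible" if eligible else "not eligible",
        "explanation": {
            "rules_version": RULES_VERSION,
            "rule": f"annual_income < {THRESHOLD}",
            "inputs": {"annual_income": annual_income},
        },
    }

# The full record, not just the verdict, is what gets logged and disclosed.
print(json.dumps(decide_eligibility("A-1024", 11300.0), indent=2))
```

The design choice is that the explanation is produced by the same code path as the decision, so the published account cannot drift away from what the system actually did.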

That challenge comes down to the complexity of the systems and the scarcity of technical expertise, both within government and in civil society. Sometimes even the engineers who build the automated systems used by government have difficulty understanding how they work.

To deal specifically with the problem of public access and government accountability, the AI Now Institute has developed processes for properly vetting automated systems when they are used to aid decision-making.

The institute argues that government departments should complete “AI impact assessments” before implementing any AI or big-data analysis tool. Experts should be granted access to the algorithms, even proprietary systems developed by for-profit companies, and the systems should be made comprehensible to the people whose lives they affect.
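The impact-assessment idea is procedural rather than technical, but its gist can be sketched as a pre-deployment gate: the system may not go live until every question on a checklist has been signed off. The checklist items below paraphrase the points above; the structure and function names are invented.

```python
# Hypothetical pre-deployment gate inspired by the idea of an
# "AI impact assessment". Checklist items paraphrase the themes above.

IMPACT_CHECKLIST = [
    "assessment completed before implementation",
    "external experts granted access to the algorithm",
    "proprietary components open to vetting",
    "system explained comprehensibly to affected people",
]

def may_deploy(completed: set) -> bool:
    """Block deployment until every checklist item is signed off."""
    missing = [item for item in IMPACT_CHECKLIST if item not in completed]
    for item in missing:
        print(f"blocked: {item!r} not yet signed off")
    return not missing

signed_off = {
    "assessment completed before implementation",
    "external experts granted access to the algorithm",
}
print("deploy" if may_deploy(signed_off) else "do not deploy")
```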

This piece originally appeared on Apolitical, the global network for public servants.