That hypnotically red eye. That eerily meticulous, methodical, and measured voice.

For decades, HAL 9000, the sentient computer and antagonist extraordinaire in Stanley Kubrick’s 2001: A Space Odyssey, and the gun-wielding cyborg assassin the Terminator have been pop culture’s shorthand for Artificial Intelligence (AI), in all its promise and potential malfeasance.

Perhaps because these kinds of depictions of AI are so dominant, this technology maintains an aura of intrigue. People continue to treat it with suspicion.

Yet, real-world usages of AI are increasingly present in our lives. Do you ever check a map to help you avoid traffic while you’re driving? Found your new favourite jam based on a Spotify recommendation? Felt thankful for a tidier inbox because the email client filtered the spam? Said “YES” or “NO” very loudly into your phone so the speech recognition can pick you up? AI technology is underpinning the most everyday of habits and actions.

It is time to build more substantive and nuanced understandings of what this technology is and what it is capable of doing. In the public sector, the potential exists for AI to have a positive impact on governments and ultimately on citizens’ lives.

As with many promising technologies, there is a lot of hype. If we don’t see AI as some kind of threat to humanity, then we often reduce it to a magical “black box”: simply input some data and it will spit out something innovative. With so many (mis)characterisations, AI can seem either like a technology in search of a problem or a technological cure-all.

AI influences government and government influences AI

Fundamentally, AI technology can influence what government does and how it operates.

As a regulator, developer of policy or provider of public services and goods, government is more than a mere user of this technology. It can participate in the development of AI by implementing regulations that protect citizens while encouraging the creative use of technology. It can invest in AI research, sponsoring experimentation and pilot projects. It can foster collaboration with the private sector and with citizens to co-create solutions and drive demand for AI.

The multiple (and potentially simultaneous) roles government can play in relation to AI must be played strategically and with purpose. Because AI initiatives can be both paradigm-shifting and risky, government should have a considered AI management strategy. Around the world, at least 38 countries plus the European Union have developed, or are developing, a national AI strategy to establish a strategic vision and approach to AI and to align the capacities, norms and structures of the relevant actors and ecosystems.

Here at the OECD Observatory of Public Sector Innovation, we’ve reviewed these strategies to understand what elements make for good ones. From this research, we can say that robust strategies are:

1. Systems focused  

AI strategies may be most useful when they are holistic and systems focused.

These strategies encompass all the various levels of government and are attuned to the structures and systems within the public sector that AI can influence or within which it can be trialled. They also take account of interactions with other sectors and stakeholders.

Government would be wise to consider what sort of cross-government and cross-sector councils, networks or communities could best support the diffusion of information and practices related to AI, and to develop guiding principles for the entire system that offer clear direction without stifling flexibility and experimentation.

For example, the UK Government’s AI Council and Centre for Data Ethics and Innovation are independent expert bodies created to advise on how to stimulate the adoption of AI, promote its ethical use and maximise its contribution to economic growth. They bring together leaders from business, academia and civil society to foster cross-sectoral collaboration within the AI community.

2. Trustworthy and fair

Government needs to develop a trustworthy, fair and accountable approach to using AI, and must keep a steady hand on the legal and ethical frameworks surrounding the development and use of this technology, always ensuring that they keep pace with its evolution.

As much as we should always push the boundaries in this emerging area of research and practice, AI technology and AI systems should always be designed in ways that respect human rights, democratic values and diversity. Government plays an important safeguarding role in ensuring that there is still scope for human (and humane) intervention in AI systems, so that fair and just outcomes for society can be maintained.

3. Data-enabled and secure

Data is an asset that needs to be managed well throughout its lifecycle. Secure and ethical access to, and use of, quality data means that people’s privacy and security are protected and that bias is mitigated.

4. Flexible with talent 

Government should build internal capability and capacity so that public servants can procure, use, manage and evaluate AI initiatives.

AI could very well assume functions that people have historically performed. However, its purpose is not to replace human endeavour; rather, it is to enhance, accelerate or augment it. In addition to bolstering internal capacity with AI, government needs to consider how it works with external actors or partners, such as the private sector, in order to balance the strengths of each to achieve public missions.

How to manage AI

Combining all these elements is no small feat.

Government should continually revisit and rework its strategy as the technology evolves. For government to innovate using AI, a portfolio approach may prove useful: government may use different strategies for managing AI depending on the innovation facet being considered, and may also use AI to drive innovation in each facet.

  • Enhancement-oriented: Government may look to improve existing processes and make them more efficient by using AI: for instance, doing the same things faster or with fewer resources.
  • Mission-oriented: Developing AI in itself may be seen as a top priority for government. Alternatively, its development could be seen as a step towards achieving greater objectives such as economic growth or a shift towards more personalised public services.
  • Adaptive: Government may feel the need to invest in the development of AI in reaction to other governments doing so, or may feel compelled by its adoption among businesses and citizens. On the other hand, public sector organisations could also leverage predictive AI to better adapt to changing situations.
  • Anticipatory: Thinking ahead, anticipating the potential long-term developments of AI could help government prepare more adequate responses to mitigate risks and maximise benefits. At the same time, AI could be developed to deal with situations of high uncertainty.

Contribute to the AI primer

We want public servants and their partners in industry and civil society to build their knowledge of this rapidly evolving, emergent technology. That’s why we have developed an AI primer, “Hello, World: Artificial Intelligence and its Use in the Public Sector”.

“Hello, World!” is often the very first computer program written by someone learning how to code, and we want this primer to help public officials take their first steps in exploring AI. The primer aims to help governments understand the definitions and context for AI and some of its technical underpinnings and approaches, explore how governments and their partners are developing strategies for and using AI for the public good, and consider the implications that public leaders and civil servants need to keep in mind.
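For readers who have never seen one, a minimal “Hello, World!” program, sketched here in Python purely as an illustration, really is just a single line:

  print("Hello, World!")  # display the greeting on screen

Running it simply prints the greeting, and that small first step is the spirit in which the primer is meant to be used.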

The primer covers major developments in this field of research and practice and offers many real-life case studies from public sectors around the world.

You can provide comment in our public consultation here until 15 September 2019.

Let’s get beyond images of scary robots and cyborgs, provocative as they are, and start building practical knowledge of how government should understand, plan for, manage and leverage AI for people and the public good.

This piece originally appeared on Apolitical, the global network for public servants. You can find the original here.