CHRISTINA: I think what you’re seeing with things like ChatGPT is that some of the earliest cases against it were filed by data privacy regulators who said, “You’re not respecting, as a private company, things like GDPR in Europe, because you’re training AI models on personal information.”

LORI: Thank you, Christina. There’s a lot of food for thought there. I don’t think I’m alone in finding the transformation to AI integration scary. When you talk about the example of using AI to support voters in an election, I find that very interesting and perhaps a very exciting opportunity, but also kind of scary. Can you talk to us a little more about how that worked? What was the process that led up to it? What were the risks and benefits? Did it help to make citizens more responsive? Was voter turnout better? As a political scientist, those are the sorts of things I think about, because on the one hand, you can see how this could be risky, but on the other hand, it could be the type of thing that helps us combat some of the problems we’re having in terms of voter suppression, declining turnout, and declining interest.

CHRISTINA: I would say that an AI model is just a representation of the underlying data, and AI can unlock value from that data in ways that would have been much harder without the model and the algorithm. AI is good at things like information retrieval, summarization, and, if you have the right governance, integrating data silos. When I talk about election information, what chatbots like the Watson Assistant are good at is helping with the information retrieval and the integration of that data, to answer questions based on a closed set of data. That closed set is part of the government’s database around elections, so the information is about where your polling place is, connecting it with addresses, that type of thing. It’s not answering fundamental questions about what a particular politician’s platform is, necessarily, but integrating data silos, whether it’s your license and where you live, where your polling place might be, and connecting and bringing that to the forefront for citizens, making it easier for them to access the information they need.
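A minimal sketch of the “closed set of data” pattern Christina describes, assuming a hypothetical elections dataset and matching rules (an illustration only, not the actual Watson Assistant implementation):

    # Closed-domain Q&A: answer only from a fixed elections dataset and
    # decline out-of-scope questions instead of generating an answer.
    # All data, keys, and matching rules below are hypothetical.

    POLLING_PLACES = {
        "12 elm st": "Community Centre, 400 Main St (open 9:00-20:00)",
        "88 oak ave": "Lakeside School Gym, 15 Shore Rd (open 9:00-20:00)",
    }

    def answer(question: str, address: str) -> str:
        """Retrieve from the closed dataset; never guess."""
        if "vote" in question.lower() or "polling" in question.lower():
            place = POLLING_PLACES.get(address.strip().lower())
            if place:
                return f"Your polling place is: {place}"
            return "I couldn't find that address in the elections database."
        # Questions outside the closed set (e.g., a candidate's platform)
        # are declined rather than answered.
        return "I can only answer questions about voting logistics."

    print(answer("Where do I vote?", "12 Elm St"))
    print(answer("What is candidate X's platform?", "12 Elm St"))

The point of the sketch is the refusal branch: the value comes from retrieval over governed data, not from open-ended generation.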
LORI: That makes a lot of sense. I wonder if you could talk a bit about the other side of it: the culture in the public sector, and the culture among the public. As we think about the risks and the innovations, and again, whether AI is scary or a great opportunity or both, can you talk a bit about some of the cultural factors that will affect how people respond to AI?

CHRISTINA: I would say that the public sector exists to serve its citizens. Whether you look at the U.S., Canada, or anywhere else, it’s probably the only sort of organization that counts every single member of the country as one of its clients, and it manages some of the most sensitive data, the critical lifeblood of benefits and services and the like. So first and foremost, when it comes to government, when you think about the data that’s managed and the importance of government services to citizens, trust must be part of the culture. I also think a really important point is policymaking versus innovation and adoption of AI. Governments are notoriously slower than the private sector to adopt innovative technology. We’re a little behind on the government side, a little slower to develop new innovations, a little more cautious. And in this AI trend, we’re seeing the technology evolve so quickly that we’ve got this sense that governments need to regulate. That’s what they can do. They need to regulate to keep up with the technology, which is part of the reason, again, coming back to trust and transparency, that the basic principles around the adoption of AI must not be at odds with policymaking.

LORI: I think you can link those points around trust in the process, because people expect that government is doing things behind the scenes to make sure something is safe for them, whether it’s a vaccine, a type of medication, or a type of technology. Yes, we’re six months behind the private sector. But for something like this, where the possibilities seem endless, I think it’s hard for people to trust even with the process behind it.

CHRISTINA: I absolutely agree with that, but I think there are basic principles that, if you start applying them to AI, are true in government and in the private sector. From a government accountability perspective, if something comes from the government, it should be true, it should be accurate, and there’s a real dependency on that. We saw this with COVID-19, which is why the example of the election chatbot and the information regarding COVID vaccines are so well connected right now. Think about government services being completely overwhelmed in the global pandemic, and applying technology to help get services to citizens, like vaccine information and health information about COVID-19, at a time when governments were completely overwhelmed by the needs of citizens. That’s where these technologies could be hugely impactful. But if I’m getting misinformation from something like a chatbot about where to go for my vaccine or how to sign up for it, that could be really dangerous. We must navigate that from a strategic perspective: where are we going to deploy these helpful technologies without eroding trust?

LORI: If I can switch focus, I want to make sure we keep this mindfulness about Canada learning from the U.S. on the regulatory side. Given everything you know about what’s happening in the U.S., what can Canada learn with respect to our own regulatory process?

CHRISTINA: One of the things the U.S. is doing quite well is recognizing what it doesn’t know. If you look at Senator Schumer, he’s hosting a lot of convenings of private companies, government, academia, and researchers in this space, very multidisciplinary convenings to educate policymakers about the implications and risks of AI, the potential of AI, and the like, so a lot of outreach. I would also point to the National Institute of Standards and Technology (NIST), a part of government that sits within the Department of Commerce. NIST has been working for two years on a risk management framework, again collaborating with a multistakeholder, cross-disciplinary group across the private sector, academia, and government.