Canadian Government Executive - Volume 30 - Issue 1

in civil society, to develop a framework for managing risk in AI that is applicable in any country of the world. It's looking at the lifecycle of AI: how do you consider risks from the outset, and how do you manage them over the lifecycle? The point in both examples is that government is partnering with those who are using and deploying AI, partnering with the communities that will be impacted by AI, the end citizens, and educating itself on the technology before acting with prescriptive regulation. I think that's an important lesson, because you can't regulate without having that sense of knowledge.

LORI: I'm a political scientist and come at everything through that lens, and it strikes me that this really has the capacity to change the relationship between state and non-state actors. There are implications for misinformation, democracy, privacy, ethics, service delivery, you name it. This is a complex area in which to gain legitimacy and literacy, and these are people who must keep on top of a range of different public issues, policy issues, problems, pieces of legislation, and more. What would you say is the level of literacy in government on this? It strikes me that this sort of issue has such huge implications and risk-benefit trade-offs, yet the non-state actors who are leaders in this field have a lot of leverage and huge amounts of information and understanding, while governments are figuring this out at the same time as they're figuring out all kinds of other things.

CHRISTINA: It's not just AI. The relationship between the private sector and the public sector has evolved over the years. More and more research is being done in the private sector. The private sector is playing a bigger role in even things like going to space. I do think there is a responsibility on the part of the private sector to help educate the public sector on the technology, and to support research, both publicly and privately funded, opening that up to academia and the like, and bringing in voices beyond just the private sector. That's one of the dangers I want to warn against. It's interesting: I'm sitting here from IBM, and I'm saying don't listen to just private industry. From an AI perspective, because the harms play out in the application and context is so important, you need more than just those who understand the technology helping to inform policymakers. You need those who are subject to the technology. You need their perspective. You need the research perspective. You need the academic perspective, because academia has the freedom to study things like trust in a much deeper manner than, say, a corporation whose commercial interests may not fully align with throwing that much money into trust and safety. I think cross-disciplinary, multi-stakeholder solutions and convenings are what's important.

LORI: I appreciate that, and I agree with you about responsibility. I think the concept of corporate social responsibility is going to have to shift entirely in this light. I also want to ask you: how has ChatGPT affected everything?

CHRISTINA: Let me back up and offer some perspective. I've been responsible for implementing AI governance at IBM for more than four years now. And I've been very actively involved in policy recommendations, having many conversations with regulators around the globe, putting our own principles around artificial intelligence into practice across our company. And it's been a journey.
And it was much more of a push than a pull until ChatGPT. That brought AI to the centre of attention for regular, everyday citizens in a way that just was not happening before. The conversations were certainly happening, companies were adopting AI, the public sector was adopting AI, but not to the degree that it is now. The ability to take a general-purpose AI model to many different downstream uses has significantly transformed what AI is capable of and how it can be used. It will make it much easier to utilize and adopt AI across the globe.

That said, I think it's good that it originally came out in the form of something like a chatbot that everyone can interact with, because you can also see there's no magic. It's wrong a lot. It's essentially predicting the next word in a sentence, which means it can very plausibly produce things that are accurate and things that are wrong. It has ignited those conversations. It's not magic. It needs to be regulated. And the same basic principles that we talked about at IBM four or five years ago are the same basic principles that apply in the context of ChatGPT and foundation models: things like trust and explainability, preserving privacy, having security in your AI models, and eliminating bias from your algorithms. It's all the same principles. It's just more tangible to people.

LORI: Okay. Thank you. I'm going to jump to some other questions. We're sometimes in a rush to implement and enable AI outside our organizations. Is there a benefit to trying to implement and learn internally first, taking it for a spin on the inside before anybody sees how it could affect things on the outside?

CHRISTINA: There absolutely is. One of the benefits of being in the chief privacy office of a technology company is that we get to test our own tech. We have a platform for generative AI, WatsonX. The platform is fully capable in terms of having an AI studio to train AI models, a common data architecture so it works across multiple providers, not just IBM, and, importantly from my perspective, an AI governance capability. In the chief privacy office, we're using that technology internally. We're contributing back to the product team in terms of fact sheets and transparency
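
Christina's earlier remark that a chatbot is "essentially predicting the next word in a sentence" can be made concrete with a small sketch. The example below is illustrative only; it is not taken from the interview or from any IBM product. It assumes the open-source Hugging Face transformers library and the small GPT-2 model, and simply prints the most probable next tokens for a prompt.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative sketch: load a small open language model.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of Canada is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # a score for every vocabulary token at each position

    # Distribution over the single next token after the prompt.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)

    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item())!r:>12}  {prob.item():.3f}")

    # The model only ranks plausible continuations; "plausible" and "true" are
    # not the same thing, which is why such systems can be confidently wrong.

Running this shows a ranked list of likely next words rather than any notion of fact, which is the point Christina makes: the same system can produce accurate and inaccurate statements with equal fluency.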
