Canadian Government Executive - Volume 30 - Issue 1

SPRING 2024 // Canadian Government Executive // INTERVIEW

CHRISTINA: …transparency and the ability to produce auditable documentation. All those capabilities that we think we need from a governance perspective are helping to inform what's in our product. I think that's important.

LORI: How can we leverage AI to enhance the client experience with programs and services? How can we use this to make things better for people, whether it's health services, garbage pickup, or whatever the case may be?

CHRISTINA: It's where we started the conversation, in terms of some of the things that AI is good at, and one of them is information retrieval. As I mentioned, an AI model is really just a representation of data. So the more capable AI models become, and the more governance you put around data, which is necessary to implement AI, the more you can erase data silos and bring relevant information, first and foremost, into the hands of citizens. Some of the best early use cases we're seeing now, in particular with foundation models, are customer service chatbots that can pull relevant information from across a company's many systems, put it into the hands of customer service agents, and enable a more informed and more relevant customer service experience.

LORI: I want to talk a bit more about privacy. How can we guarantee to the public that the information they give will not be collected? And are there issues around government use of data, reasons that government can't use data in the same way that corporations can, so that it may not be able to reflect the same service experience back to a citizen as opposed to a customer?

CHRISTINA: Privacy is obviously fundamental to AI technology, and it's part of the reason why, from my own personal perspective, we're seeing so many privacy teams and privacy professionals also taking on responsibility for AI ethics.
One of the primary focuses of the technology we're developing at IBM is that it must respect privacy rules and it must preserve privacy. I think what you're seeing with things like ChatGPT is that some of the earliest cases against it were filed by data privacy regulators who said, "As a private company, you're not respecting things like GDPR in Europe, because you're training AI models on personal information." We've come a long way from those initial conversations. There absolutely is a need to respect privacy, both in the training of AI models and in their output. I don't think there are any proposals that say we should throw privacy law out the window in the face of AI. If anything, AI is bringing attention to the regulatory environment for personal information, which is why it's so important. When you look at things like Bill C-27 in Canada, it has elements associated with personal information, reinvigorating and bringing privacy law up to date in Canada, but also elements regulating artificial intelligence and data. I think the two are very much interlocked.

LORI: How does the private sector plan to equip the public sector to integrate and implement new technology based on trust dependencies?

CHRISTINA: I'll give you the example of IBM and our principles of trust and transparency, which we published six years ago. Those principles are that AI should be transparent and explainable. And for us, because we're an enterprise company, we don't train our AI on client data; client data is their data. When you think about the pillars that underpin fairness, transparency, and explainability, there are scientific capabilities that relate back to each of them. We don't just articulate principles without providing the tools to help address them.
We deployed some of the first toolkits into open source to help do things like generate fact sheets for AI: sort of nutrition labels that show the data that went into training a model, what it's good for, expected outputs, and the like. This makes a model more transparent and explainable. In bias detection, we have also deployed to open source, and continue to work on and improve, capabilities for detecting bias in AI algorithms. I think what private industry is doing is working on those capabilities to enable companies, the public sector, wherever it might be, to adhere to the principles of transparency, bias detection, and explainability in software.

LORI: Thank you. Before we conclude our conversation, do you have any final thoughts?

CHRISTINA: On the question of private industry not wanting technology regulated, I think that's not correct in general. Sure, there are some pockets of private industry that don't want regulation, but it is very clear that if technology is not trusted, it will not be adopted. And that's very dangerous to technology companies. We obviously want our technologies to be trusted. We've been very actively advocating for AI to be regulated for, as I mentioned, four years now, and helping to deploy technical capabilities that will enable things like transparency, explainability, and fairness.

The issue of sustainability is also critical. When do we use AI and when do we not? Every inquiry you send to ChatGPT costs energy and money. When you think about deploying AI in your operations, I think it's important to have a plan for when it is the optimal technology to use versus many other technological capabilities. That's the first point: have a strategy, have a plan, and be mindful of what you're using this technology for and what you're not. Because there are implications with any technology from a sustainability perspective.
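The fact sheets and bias-detection toolkits Christina describes rest on fairly simple, measurable ideas. As a rough illustration only (the function names, fields, and data below are hypothetical, not IBM's actual FactSheets or open-source APIs), a "nutrition label" is structured metadata attached to a model, and one widely used bias measure is the disparate-impact ratio: the rate of favourable outcomes for an unprivileged group divided by the rate for the privileged group.

```python
# Hypothetical sketch of a model "nutrition label" and a simple bias check.
# All names, fields, and data are illustrative, not any vendor's real API.

def make_fact_sheet(model_name, training_data, intended_use, expected_output):
    """Bundle basic transparency metadata for a model into one record."""
    return {
        "model": model_name,
        "training_data": training_data,
        "intended_use": intended_use,
        "expected_output": expected_output,
    }

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favourable-outcome rates: unprivileged / privileged.
    A common rule of thumb flags values below 0.8 as potentially biased."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# A fact sheet for a hypothetical model.
sheet = make_fact_sheet(
    "loan-approval-v1",
    training_data="2019-2023 loan applications (anonymized)",
    intended_use="Pre-screening applications for human review",
    expected_output="Approval probability between 0 and 1",
)

# Toy data: 1 = approved; group "A" is the privileged group.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact(outcomes, groups, privileged="A")
print(ratio)  # 0.333..., well below the 0.8 rule of thumb
```

The point of publishing such tooling in open source, as the interview notes, is that any organization, public or private, can attach this kind of label and metric to its own models.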
Companies are working on how to make AI more efficient from a sustainability perspective, and that's a huge initiative at IBM. For example, we have offerings in this space that help companies understand their entire real estate portfolio and capture things like how much greenhouse gas is being emitted across a global footprint, how to track that, and how to report it out. I think this comes back to your point around sustainability as well, and ESG, and how important it is for private companies to be involved and be leaders in ESG.

LORI: That is a really important point to end on. Christina, thank you so much. This has been fantastic. We've learned a whole lot from you.

CHRISTINA: Thank you for having me.

"The ability to take a general-purpose AI model to many different downstream uses has significantly transformed what AI is capable of and how to utilize it. It'll make it much easier to utilize and adopt AI across the globe."
