In today’s rapidly evolving technological landscape, the question of how to integrate artificial intelligence (AI) into the public sector has become an important and increasingly urgent conversation. To shed light on it, Professor Lori Turnbull, Editor-in-Chief of Canadian Government Executive, recently hosted Christina Montgomery, Vice President and Chief Privacy and Trust Officer at IBM, at the DX Summit 2023. With her wealth of experience overseeing IBM’s global privacy program and AI ethics initiatives, Christina shares insights into the ethical, regulatory, and societal implications of AI and its adoption by government in Canada.
LORI: I’d like to introduce our first speaker. Christina Montgomery is Vice President and Chief Privacy and Trust Officer at IBM. Christina oversees the company’s privacy program, compliance, and strategy on a global basis and directs all aspects of IBM’s privacy policies. She also chairs IBM’s AI Ethics Board. During her tenure at IBM, Christina has served in a variety of positions, including Managing Attorney, Cybersecurity Counsel, and most recently, Corporate Secretary to the company’s Board of Directors. A global leader in AI ethics and governance, Christina is a member of the US Chamber of Commerce AI Commission and of the United States National AI Advisory Committee, which was established in 2022 to advise the President and the National AI Initiative Office on a range of topics related to AI. Christina is also an advisory board member of the Future of Privacy Forum, an advisory council member of the Center for Information Policy Leadership, and a member of the AI Governance Advisory Board of the International Association of Privacy Professionals. She received a BA from Binghamton University and a JD from Harvard Law School. Welcome, Christina. Thank you so much for being here.
CHRISTINA: Thank you, Lori, for giving me the opportunity to open today’s summit. We’re here today to talk about digital transformation. It’s a fitting theme as we approach one year since the launch of ChatGPT, a moment that truly ignited awareness and adoption of AI worldwide. But these flashy consumer use cases are not where the real transformational power lies. Foundation models are set to radically and quickly change how businesses operate. And as consumers and citizens increasingly test and trial AI, leaders in Canada are deepening their understanding of AI’s potential and where the most value can be derived.
In an IBM study released earlier this year, 78% of Canadian CEOs said they had a clear plan for the role advanced AI will play in their organization’s decision-making five years from now. At the same time, the dramatic surge in public attention around AI has rightfully raised questions, and they are critically important ones: What is the potential impact on society and the workforce? What do we do about challenges with AI around bias and explainability? What about misinformation and the harmful or abusive content that can be generated by AI systems that are misused? So just as this last year ushered in the meteoric rise of capabilities like ChatGPT and other generative AI models, it also brought forth a variety of policy recommendations on AI from around the globe. These are important conversations to be having right now.
Leaders across the U.S. and Canada and around the world are steeped in discussions on how to significantly increase productivity and competitiveness to kickstart a new wave of economic growth. Most of us in this room have likely heard economic figures and projections tied to AI adoption. The CEO of IBM, Arvind Krishna, has said AI could add $10 trillion to global GDP by 2030. And in Canada, experts have predicted that AI could add $210 billion to the nation’s economy and potentially save the average Canadian worker 100 hours a year. AI innovation is absolutely transforming businesses and industries, including in the public sector. And it presents new and creative ways to think about how we might transform governments to modernize digital services, make departments more effective, and enhance services for our constituents.
However, like any technology going through rapid development, AI could also be hazardous. And today there aren’t enough rules of the road. An era of AI cannot be another era of “move fast and break things.” As leaders shape this technology, we play a critical role in ensuring a secure and responsible approach to AI adoption across business, government, and industry. So how do we solve this challenge to ensure that we capture value while managing risk? Importantly, for the public sector, public leadership can set the tone on AI adoption. As AI proliferates in consumer life and in our businesses, it’s critical to maintain the public’s trust. Governments worldwide have a heightened sense of urgency on this topic, including here in Canada, where last month we saw the introduction of interim guidelines to bridge the legislative process with Bill C-27 and the Artificial Intelligence and Data Act. The key for Canada in this legislation will be to find the right balance between regulations that protect Canadians from the risks of AI while letting innovators leverage a technology that will be crucial to solving world issues, reinventing how we do business, and providing services to citizens. It’s encouraging to see the progress here in Canada, and we’re happy to provide our perspective to the federal government on how these policies should be shaped.
While these guidelines and policies are critical for Canada’s long-term success in AI adoption, so too is a plan of action for the public sector to define how AI can transform government and improve digital services for Canadians. Some of this work is already happening today. Last year, for example, IBM worked with the City of Markham, Ontario, to leverage a virtual assistant that helped voters access reliable and accurate information, at any time of day, about the upcoming municipal election. The initiative built on prior work, a Canadian first, in which the City of Markham used the same platform to offer 24-hour customer service for residents looking for COVID-19 information through text, chat, and voice calls.
But in IBM’s view, no discussion of responsible AI in the public sector is complete without emphasizing the importance of the ethical use of technology throughout its life cycle. This includes design, development, use, and maintenance, with humans at the heart of services delivered by government. Given where we are today, with adoption accelerating and governments progressing on AI policy, we have a window now to establish AI frameworks across organizations that support increased productivity while delivering trusted outcomes. Simply put, we believe Canada can benefit from a blueprint to guide responsible AI adoption, and that the government and the public sector can play a key leadership role in defining this plan.
At the foundation of this blueprint is a clear plan and perspective on responsible AI guidelines. This is particularly critical when we consider the potential benefits associated with implementing AI in government operations. For example, AI can help government departments reduce information overload and increase employee productivity by putting vast stores of government data to work to produce contextualized services and create specific guidance for talent and workforce decisions. This can allow government employees to focus on higher-value work. Contextual services for citizens, an area with heightened responsibility and scrutiny, can also be improved. AI and generative AI can be responsibly applied to summarize information and provide personalized responses to citizens’ questions, such as eligibility for particular services: How do I apply? What forms do I use? All of this can be done through AI and generative AI applications. But at IBM, we deeply believe that trustworthy and responsible AI must lie at the heart of these improvements.
To wrap up, IBM has been at the forefront of responsibly introducing groundbreaking technologies for more than a century. Technologies that solve some of the world’s most complex problems, and in many cases lead to a better quality of life for all. For us, responsibility here means we only release technology to the public after understanding its consequences, providing essential guardrails, and ensuring accountability. In short, we believe that addressing the repercussions of these innovations is just as important as the innovations themselves. This approach has never mattered more than it does with AI, given the critical role that AI can play in transforming government if it is trusted. We look forward to doing our part and working with leaders like you here in Canada and worldwide to build an AI future that we can all trust.
LORI: Thank you, Christina. There’s a lot of food for thought there. I don’t think I’m alone in finding the transition to AI integration scary. When you talk about the example of using AI to support voters in an election, I find that very interesting and perhaps a very exciting opportunity, but also kind of scary. Can you talk to us a little more about how that worked? What was the process that led up to that? What were the risks and benefits? Did it help make citizens more engaged? Was voter turnout better? As a political scientist, those are the sorts of things I think about, because on the one hand, you can see how this could be risky, but on the other hand, it could be the type of thing that can help us to combat some of the problems we’re having in terms of voter suppression, declining turnout, and declining interest.
CHRISTINA: I would say that an AI model is just a representation of the underlying data. And because of that, AI can unlock value from data in ways that would have been much harder without the model and the algorithm. AI is good at things like information retrieval, summarization, and integrating data silos, if you have the right governance. When I talk about election information, what chatbots like Watson Assistant are good at is information retrieval and the integration of that data, answering questions based on a closed set of data. That data is part of the government’s database around elections, so the information is about where your polling place is, connecting it with addresses, that type of thing. It’s not answering fundamental questions about what a particular politician’s platform is; it’s more about integrating data silos, whether that’s your licence, where you live, or where your polling place might be, and bringing that to the forefront for citizens, making it easier for them to access the information they need.
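To make the closed-dataset pattern concrete, here is a minimal Python sketch, not the actual Watson Assistant implementation, of an assistant that answers only from a fixed, government-curated table. The ward names, addresses, and records are all hypothetical.

```python
# A minimal sketch of a closed-dataset assistant: it answers only from
# fixed, government-curated tables, so responses stay grounded in that data.
# All names and records here are hypothetical.

POLLING_PLACES = {
    # ward -> polling place (hypothetical records)
    "Ward 1": "Markham Civic Centre, 101 Town Centre Blvd",
    "Ward 2": "Thornhill Community Centre, 7755 Bayview Ave",
}

ADDRESS_TO_WARD = {
    # street -> ward, the kind of silo the assistant integrates (hypothetical)
    "Main Street": "Ward 1",
    "Bayview Avenue": "Ward 2",
}

def where_do_i_vote(street: str) -> str:
    """Answer only from the closed dataset; refuse anything outside it."""
    ward = ADDRESS_TO_WARD.get(street)
    if ward is None:
        return "I don't have that address on file. Please contact the elections office."
    return f"Your polling place for {ward} is {POLLING_PLACES[ward]}."

if __name__ == "__main__":
    print(where_do_i_vote("Main Street"))
    print(where_do_i_vote("Unknown Road"))  # falls back instead of guessing
```

Because the assistant can only return records that exist in the closed dataset, it falls back to a referral rather than guessing, which is what keeps this kind of service grounded.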
LORI: That makes a lot of sense. I wonder if you could talk a bit about the other side of it: the culture in the public sector and the culture among the public. As we think about the risks and innovations, and again, whether AI is scary or a great opportunity or both, can you talk a bit about some of the cultural factors that will affect how people respond to AI?
CHRISTINA: I would say, from a public sector perspective, that government exists to serve its citizens. And whether you look at the U.S., Canada, or anywhere else, it’s probably the only organization that counts every single member of the country as one of its clients, and it manages some of the most sensitive data, the critical lifeblood of benefits and services and the like. So, first and foremost, when it comes to government, when you think about the data that’s managed and the importance of government services to citizens, trust must be part of the culture.
I also think a really important point is policymaking versus innovation and adoption of AI. Governments are notoriously slower to adopt innovative technology than the private sector. On the government side, we’re a little behind, a little slower to develop new innovations, a little more cautious. And with AI, we’re seeing the technology evolve so quickly that there’s a sense that governments need to regulate; that’s what they can do, they need to regulate to keep up with the technology. Which is part of the reason, again, coming back to trust and transparency, that the basic principles around the adoption of AI need to not be at odds with policymaking.
LORI: I think you can link those points around trust in the process, because people expect that government is doing things behind the scenes to make sure that something is safe for them, whether it’s a vaccine, a type of medication, or a type of technology. Yes, we’re six months behind the private sector. But with something like this, where it seems like the possibilities are endless, I think it’s hard for people to trust it even with that process behind it.
CHRISTINA: I absolutely agree with that, but I think there are basic principles that, if you start applying them to AI, hold true in government and in the private sector. From a government accountability perspective, if something comes from the government, it should be true, it should be accurate, and there’s a real dependency on that. We saw this with COVID-19, which is why the example of the election chatbot and the information regarding COVID vaccines are so connected to this moment. Think about government services being completely overwhelmed in the global pandemic, and about applying technology to get services to citizens, like vaccine information and health information about COVID-19, at a time when governments were completely overwhelmed by the needs of their citizens. That’s where these technologies could be hugely impactful. But if I’m getting misinformation from something like a chatbot about where to go for my vaccine or how to sign up for it, that could really be dangerous. We must navigate that from a strategic perspective: where are we going to deploy these helpful technologies without eroding trust?
LORI: If I can switch focus, I want to make sure we touch on Canada learning from the U.S. on the regulatory side. Given everything you know about what’s happening in the U.S., what can Canada learn with respect to our own regulatory process?
CHRISTINA: One of the things the U.S. is doing quite well is recognizing what they don’t know. If you look at Senator Schumer, he’s hosting a lot of convenings of private companies, government, academia, and researchers in this space, very multidisciplinary convenings to educate policymakers about the implications and the risks of AI, the potential of AI, and the like, so a lot of outreach. I would also point to the National Institute of Standards and Technology (NIST), a part of government that sits within the Department of Commerce. They’ve been working for two years on a risk management framework, again collaborating with a multistakeholder, cross-disciplinary group in the private sector, academia, government, and civil society, to develop a framework for managing risk in AI that is applicable in any country of the world. It looks at the lifecycle of AI: how do you consider risks from the outset, and how do you manage them over the lifecycle? The point in both examples is that government is partnering with those who are using and deploying AI and with the communities that will be impacted by AI, the end citizens, and educating itself on the technology before acting with prescriptive regulation. I think that’s an important lesson, because you can’t regulate without having that knowledge.
LORI: I’m a political scientist and come at everything through that lens, and it strikes me that this really has the capacity to change the relationship between state and non-state actors. There are implications for misinformation, democracy, privacy, ethics, service delivery, you name it. This is a complex area in which to gain legitimacy and literacy, and these are people who must keep on top of a bunch of different public issues, policy issues, problems, pieces of legislation, and more. What would you say is the level of literacy in government on this? It strikes me that this sort of issue has such huge implications and such a risk-benefit trade-off, yet the non-state actors who are leaders in this field have a lot of leverage and huge amounts of information and understanding, while governments are figuring this out at the same time as they’re figuring out all kinds of other things.
CHRISTINA: It’s not just AI. The relationship between the private sector and the public sector has evolved over the years. More and more research is being done in the private sector, and the private sector is playing a bigger role in even things like going to space. I do think there is a responsibility on the part of the private sector to help educate the public sector on the technology, and to support research, publicly funded and privately funded, opening it up to academia and the like, bringing in voices beyond just the private sector. That’s one of the dangers I want to warn against. It’s interesting: I’m sitting here from IBM, and I’m saying don’t listen to just private industry. From an AI perspective, because the harms play out in the application and context is so important, you need more than just those who understand the technology to be informing policymakers. You need those who are subject to the technology; you need their perspective. You need the research perspective and the academic perspective, because academia has the freedom to study things like trust in a much deeper manner than, say, a corporation whose commercial interests may not fully align with putting that much money into trust and safety. I think cross-disciplinary, multi-stakeholder solutions and convenings are what’s important.
LORI: I appreciate that, and I agree with you about responsibility. I think the concept of corporate social responsibility is going to have to shift entirely in this light. I also want to ask you: how has ChatGPT affected everything?
CHRISTINA: Let me back up and offer some perspective. I’ve been responsible for implementing AI governance at IBM for more than four years now. And I’ve been very actively involved in policy recommendations, having many conversations with regulators around the globe and putting our own principles around artificial intelligence into practice across our company. It’s been a journey, and it was much more of a push than a pull until ChatGPT. That brought AI to the centre of attention for regular, everyday citizens in a way that was just not happening before. The conversations were certainly happening, companies were adopting AI, the public sector was adopting AI, but not to the degree that it is now. The ability to apply a general-purpose AI model to many different downstream uses has significantly transformed what AI is capable of and how it can be utilized. It will make it much easier to adopt AI across the globe.
That said, I think it’s good that it originally came out in the form of something like a chatbot that everyone can interact with. Because you can also see there’s no magic: it’s wrong a lot. It’s essentially predicting the next word in a sentence, which means it can produce accurate things and wrong things with equal plausibility. It has ignited those conversations. It’s not magic, and it needs to be regulated. And the same basic principles that we talked about at IBM four or five years ago are the same basic principles that apply in the context of ChatGPT and foundation models: trust and explainability, preserving privacy, having security in your AI models, and eliminating bias from your algorithms. It’s all the same principles. It’s just more tangible to people.
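A toy illustration of the “predicting the next word” point, with invented probabilities and no real language model behind it, shows why a continuation can be fluent without being factual:

```python
# A toy illustration (not a real language model) of next-word prediction:
# pick the most probable continuation given the last two words. The
# probabilities are invented; a plausible continuation is not necessarily true.

NEXT_WORD_PROBS = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"canada": 0.6, "ontario": 0.4},
    ("of", "canada"): {"is": 0.95, "was": 0.05},
    ("canada", "is"): {"ottawa": 0.5, "toronto": 0.5},  # equally "plausible", one is wrong
}

def generate(prompt: list[str], steps: int) -> list[str]:
    """Greedily extend the prompt with the most likely next word."""
    words = list(prompt)
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get((words[-2], words[-1]))
        if not probs:
            break
        # greedy choice: the most likely next word, whether or not it is factual
        words.append(max(probs, key=probs.get))
    return words

print(" ".join(generate(["the", "capital"], steps=4)))
```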
LORI: Okay. Thank you. I’m going to jump to some other questions. We’re sometimes in a rush to implement and enable AI outside our organizations. Is there a benefit to first trying to implement and learn internally, taking it for a spin on the inside before anybody sees how it could affect things on the outside?
CHRISTINA: There absolutely is. One of the benefits of being in the chief privacy office of a technology company is that we get to test our own tech. We have a platform for generative AI, watsonx. The platform is fully capable in terms of having an AI studio to train AI models, a common data architecture so it works across multiple providers, not just IBM, and, importantly from my perspective, an AI governance capability. In the chief privacy office, we’re using that technology internally, and we’re contributing back to the product team in terms of fact sheets, transparency, and the ability to produce auditable documentation. All those capabilities that we think we need from a governance perspective, we’re helping to inform what’s in our product. I think that’s important.
LORI: How can we leverage AI to enhance the client experience with programs and services? How can we use this to make things better for people, whether it’s health services, garbage pickup, or whatever the case may be?
CHRISTINA: It’s where we started the conversation, in terms of some of the things that AI is good at, and one of them is information retrieval. As I mentioned, an AI model is really just a representation of data. So the more capable AI models get, and the more governance you put around your data, which is necessary to implement AI, the more you can erase data silos and leverage relevant information to bring it, first and foremost, into the hands of citizens. Some of the earliest and best use cases that we’re seeing now, in particular with foundation models, are around things like customer service chatbots that can pull relevant information from across many systems in a company, put it into the hands of customer service agents, and enable a more informed and more relevant customer service experience.
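As a rough sketch of the pattern described here, the snippet below merges whatever several hypothetical departmental systems know about one citizen into a single record that an agent or assistant could answer from. The system names, identifiers, and fields are all illustrative, not any real government or IBM schema.

```python
# A minimal sketch of erasing data silos: retrieve a citizen's records from
# several departmental systems and assemble one view an agent can answer from.
# All systems, identifiers, and values are hypothetical.

HEALTH_SYSTEM = {"C-1001": {"health_card": "valid", "renewal_due": "2024-06"}}
TAX_SYSTEM = {"C-1001": {"benefits": ["child benefit"], "last_filing": "2022"}}
LICENCE_SYSTEM = {"C-1001": {"licence": "expires 2025-01"}}

def citizen_profile(citizen_id: str) -> dict:
    """Merge whatever each silo knows about the citizen into one record."""
    profile: dict = {"citizen_id": citizen_id}
    for name, silo in [
        ("health", HEALTH_SYSTEM),
        ("tax", TAX_SYSTEM),
        ("licence", LICENCE_SYSTEM),
    ]:
        if citizen_id in silo:
            profile[name] = silo[citizen_id]
    return profile

print(citizen_profile("C-1001"))
```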
LORI: I want to talk a bit more about privacy. How can we guarantee the public that the information they give will not be collected? And are there issues around government use of data, reasons that government can’t use data in the same way that corporations can, so that it may not be able to reflect the same service experience back to a citizen that a company can offer a customer?
CHRISTINA: Privacy is obviously fundamental to AI technology, and it’s part of the reason why, from my own personal perspective, we’re seeing so many privacy teams and privacy professionals also taking on responsibility for AI ethics. One of the primary focuses of the technology we’re developing at IBM is that it must respect privacy rules and it must preserve privacy. And I think what you’re seeing with things like ChatGPT is that some of the earliest actions against it were brought by data privacy regulators who said, “As a private company, you’re not respecting things like the GDPR in Europe, because you’re training AI models on personal information.” We’ve come a long way from those initial conversations. There absolutely is a need to respect privacy, both in the training of AI models and in the output.
I don’t think there are any proposals that say we should throw privacy law out the window in the face of AI. If anything, AI is bringing attention to the personal information regulatory environment, which is why it’s so important. And when you look at things like Bill C-27 in Canada, it has elements associated with personal information and reinvigorating and bringing privacy law up to date in Canada, but also regulating artificial intelligence and data. I think the two are very much interlocked.
LORI: How does the private sector plan to equip the public sector to integrate and implement new technology based on trust dependencies?
CHRISTINA: I’ll give you the example of IBM and our principles of trust and transparency, which we published six years ago. Those principles are that AI should be transparent and explainable, and, because we’re an enterprise company, that we don’t train our AI on client data: client data is their data. And when you think about the pillars of fairness, transparency, and explainability, there are technical capabilities that relate back to each of them. We don’t just articulate principles without providing the tools to help address them. We deployed some of the first toolkits into open source to help do things like generate fact sheets for AI, sort of nutrition labels that show the data that went into training a model, what it’s good for, expected outputs, and the like. This makes models more transparent and explainable. We have also deployed to open source, and continue to work on and improve, capabilities around bias detection in AI algorithms. I think what private industry is doing is working on those capabilities to enable companies, the public sector, wherever it might be, to adhere to the principles around transparency, bias detection, and explainability in software.
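The “nutrition label” idea can be pictured as a small data structure. The sketch below is a generic illustration, not IBM’s actual FactSheets toolkit or its API; the fields and example values are hypothetical.

```python
# A minimal sketch of a model "nutrition label": a structured record of what
# went into a model, what it is for, and what to expect from it. Generic
# illustration only; fields and values are invented for this example.

from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    model_name: str
    intended_use: str
    training_data: list[str]              # datasets the model was trained on
    expected_outputs: str                 # what callers should expect back
    known_limitations: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Produce a human-readable label for review or audit."""
        limits = "; ".join(self.known_limitations) or "none documented"
        return (
            f"Model: {self.model_name}\n"
            f"Intended use: {self.intended_use}\n"
            f"Training data: {', '.join(self.training_data)}\n"
            f"Expected outputs: {self.expected_outputs}\n"
            f"Known limitations: {limits}"
        )

sheet = ModelFactSheet(
    model_name="benefits-eligibility-assistant",
    intended_use="Answer eligibility questions from published program rules",
    training_data=["published program guides", "public FAQ pages"],
    expected_outputs="Short answers with a link to the source document",
    known_limitations=["Does not cover programs added after the last refresh"],
)
print(sheet.render())
```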
LORI: Thank you. Before we conclude our conversation, do you have any final thoughts?
CHRISTINA: On the question of private industry not wanting technology regulated, I think that’s not correct in general. Sure, there are some pockets of private industry that don’t want regulation, but it is very clear that if technology is not trusted, it will not be adopted. And that’s very dangerous for technology companies. We obviously want our technologies to be trusted. We’ve been very actively advocating for AI to be regulated for, as I mentioned, four years now, and helping to deploy technical capabilities that enable things like transparency, explainability, and fairness.
The issue of sustainability is also critical. When do we use AI and when do we not? Each time you send an inquiry to ChatGPT, that costs energy and money. And when you think about deploying AI in your operations, I think it’s important to have a plan for when this is the most optimal technology to use versus the many other technological capabilities available. That’s the first point: have a strategy, have a plan, and be mindful of what you’re using this technology for and what you’re not using it for, because there are sustainability implications with any technology. Companies are working on how to make AI more efficient from a sustainability perspective, and that’s a huge initiative at IBM. For example, we have offerings in this space that help companies understand their footprint across all their real estate and capture things like how much greenhouse gas is being emitted across a global footprint, how to track that, and how to report it out. I think this comes back to your point around sustainability and ESG, and how important it is for private companies to be involved and to be leaders in ESG as well.
LORI: That is a really important point to end on. Christina, thank you so much. This has been fantastic. We’ve learned a whole lot from you.
CHRISTINA: Thank you for having me.