Canadian Government Executive - Volume 30 - Issue 1

www.canadiangovernmentexecutive.ca | Publication Mail Registration Number: 40052410
THE MAGAZINE FOR PUBLIC SECTOR DECISION MAKERS | SPRING 2024 | VOLUME 30, NUMBER 1 | $5.00 | Display until July 10
INSIDE:
AI in the Public Sector: Navigating Risks and Rewards
Preparing for Change: Examining Government Transition Dynamics in Election Cycles
Investment/Project Portfolio Management in Government

More than 50,000 Government of Canada employees have trusted us with their second language evaluation goals.

Get Your Second Language Evaluation (SLE) Levels On Your First Try. We'll help you tackle your Second Language Evaluation test (B & C levels) with confidence. Thanks to our unique methodology and 1-on-1 (or small group) approach, you always get the attention you need to succeed.

"I highly recommend LRDG to all those looking to improve their French language skills. The tailored learning programs allow you to focus on areas of importance to you while improving your overall fluency. This is, by far, the best experience that I have had taking French courses." (Amy Larin)

"Highly recommend LRDG for French language learning. The learn-on-your-own online modules give lots of flexibility for professionals, parents and busy people... and when combined with weekly one-on-one tutoring, you also get the accountability and personal experience that you can't get with an app." (Nancy Brooks)

90% test success rate with a proven track record | 200+ language tutors | 50k successful learners | 250+ organizations

Language Research Development Group (LRDG) is Canada's premier online language training solution for mastering English and French. We're also the only supplier to be awarded the Government of Canada's recent National Master Standing Offer for Virtual Language Training, including SLE Prep and self-study via the LRDG Portal. LRDG's blended approach to one-on-one online tutoring, group training, and self-study empowers learners to pursue their language goals at their own pace and take advantage of personalized learning paths.

MASTER CANADA'S OFFICIAL LANGUAGES: SLE Testing & Training in Part-Time, Full-Time, 1-on-1 & Group Formats.
Language Research Development Group Inc. | #EN578-202723/011/ZF | +1 514.989.1669 | 1 888.989.LRDG | sales@lrdgonline.com | www.lrdgonline.com

CONTENTS

AI in the Public Sector: Navigating Risks and Rewards, by Christina Montgomery
Preparing for Change: Examining Government Transition Dynamics in Election Cycles, by Michael Wernick

PERSPECTIVE
An Investment in Domestic Data: Repatriating the Data Supply Chain with a Sovereign Cloud, by Paul West, Director of Global Public Sector, ThinkOn
Facility Leadership in the Executive Suite, by Jonathan Burbee, Director of Business Development, Gordian
Investment/Project Portfolio Management in Government, by Evan Diamond

MIDDLE MANAGEMENT
Strengthen accountability for results, by John Wilkins

GOVERNING DIGITALLY
The (Not So) Hidden Threat to A.I. in Government, by Jeffrey Roy

THE LAST WORD
The Regulation of Online Harm: It's About Politics, Policy, Governance – and Doing the Right Thing for Real, by Lori Turnbull

CGE ONLINE
Letters: We welcome feedback on articles and story ideas. Email lori@promotivemedia.ca
About the Cover: AI in the Public Sector: Navigating Risks and Rewards
It's in the Archives: Missed an issue? Misplaced an article? Visit www.canadiangovernmentexecutive.ca for a full archive of past CGE issues, as well as online extras from our many contributors.

OUR MISSION IS TO CONTRIBUTE TO EXCELLENCE IN PUBLIC SERVICE MANAGEMENT

EDITORIAL
DEPUTY EDITOR | LORI TURNBULL, lori@promotivemedia.ca
MANAGING EDITOR | TERRI PAVELIC, terri@promotivemedia.ca
CONTRIBUTORS | CHRISTINA MONTGOMERY, MICHAEL WERNICK, EVAN DIAMOND, JEFFREY ROY, JOHN WILKINS

EDITORIAL ADVISORY BOARD
DENISE AMYOT, PENNY BALLANTYNE, JIM CONNELL, MICHAEL FENN, LANA LOUGHEED, JOHN MILLOY, VIC PAKALNIS, ROBERT SHEPHERD, ANDREW TREUSCH, DAVID ZUSSMAN

SALES & EVENTS
DIRECTOR, CONTENT & BUSINESS DEVELOPMENT | DAVID BLONDEAU, 905-727-3875, david@promotivemedia.ca

ART & PRODUCTION
ART DIRECTOR | ELENA PANKOVA, elena@promotivemedia.ca

SUBSCRIPTIONS AND ADDRESS CHANGES
CIRCULATION SERVICES | circulation@promotivemedia.ca

GENERAL INQUIRIES
21374, 2nd Concession Rd, East Gwillimbury, ON, L9N 0H7
Phone 905-727-3875 | Fax 905-727-4428 | canadiangovernmentexecutive.ca

CORPORATE
GROUP PUBLISHER | J. RICHARD JONES, john@promotivemedia.ca

Publisher's Mail Agreement: 40052410 | ISSN 1203-7893

Canadian Government Executive magazine is published 6 times per year by Navatar Press. All opinions expressed herein are those of the contributors and do not necessarily reflect the views of the publisher or any person or organization associated with the magazine. Letters, submissions, comments and suggested topics are welcome, and should be sent to lori@promotivemedia.ca

REPRINT INFORMATION
Reproduction or photocopying is prohibited without the publisher's prior written consent. High quality reprints of articles and additional copies of the magazine are available through circulation@promotivemedia.ca.

Privacy Policy: We do not sell our mailing list or share any confidential information on our subscribers.

We acknowledge the financial support of the Government of Canada through the Canada Periodical Fund (CPF) for our publishing activities.

EDITOR'S NOTE

Welcome to the Spring Edition of Canadian Government Executive magazine! This issue features superb writing on key topics related to public service excellence including:
• The integration of artificial intelligence (AI) into public sector work
• "Hidden threats" with respect to the implementation of AI
• The role of the public service during a transition period
• Portfolio and project management in departments
• The regulation of online harm

There is a lot to unpack and we really hope you enjoy reading this issue. Each article speaks to a specific set of challenges with respect to policy and/or governance and identifies ways in which the public service can respond effectively.

We would like to introduce Evan Diamond as a new regular contributor to CGE. This issue features the first in a series of articles by him on the general topic of public service improvement.

We would also like to send out a special thanks to Christina Montgomery, Vice President and Chief Privacy and Trust Officer at IBM. She was our keynote speaker at the DX Summit held last October. She gave an informative, dynamic, and thought-provoking presentation on the ethical, regulatory, and societal implications of AI and government adoption in Canada. She drew from her experience on these matters in the American context and provided examples that help shed light on the way forward in Canada. We are excited to include her presentation in this issue.

Also, we'd like to extend a special thanks to Michael Wernick, Jarislowsky Chair in Public Sector Management at the University of Ottawa, for being a guest on the CGE Podcast. He is uniquely suited, as a former Clerk of the Privy Council, to comment on the role of the public service during a transition period. We've included the text of that podcast here in this issue.

Happy reading!

Dr. Lori Turnbull
Deputy Editor, Canadian Government Executive

CANADA'S LARGEST DEFENCE, SECURITY AND EMERGING TECH EVENT
MAY 29 & 30, 2024 | EY CENTRE, OTTAWA
DRIVING OVER $1 BILLION IN BUSINESS OPPORTUNITIES ANNUALLY
REGISTRATION OPENS MARCH 2024
PROUDLY CREATED AND PRODUCED BY: canseccanada.ca | @cadsicanada

AI IN THE PUBLIC SECTOR: NAVIGATING RISKS AND REWARDS

INTERVIEW

In today's rapidly evolving technological landscape, the integration of artificial intelligence (AI) into the public sector has become an important and increasingly urgent conversation. To shed light on it, Professor Lori Turnbull, Editor in Chief of Canadian Government Executive, recently hosted Christina Montgomery, Vice President and Chief Privacy and Trust Officer at IBM, at the DX Summit 2023. With her wealth of experience in overseeing IBM's global privacy program and AI ethics initiatives, Christina shares insights into the ethical, regulatory, and societal implications of AI and government adoption in Canada.

LORI: I'd like to introduce our first speaker. Christina Montgomery is Vice President and Chief Privacy and Trust Officer at IBM. Christina is overseeing the company's privacy program, compliance, and strategy on a global basis and directing all aspects of IBM's privacy policies. She also chairs IBM's AI Ethics Board. During her tenure at IBM, Christina has served in a variety of positions, including Managing Attorney, Cybersecurity Counsel, and most recently, Corporate Secretary to the company's Board of Directors. A global leader in AI ethics and governance, Christina is a member of the US Chamber of Commerce AI Commission and a member of the United States National AI Advisory Committee. This was established in 2022 to advise the President and the National AI Initiative Office on a range of topics related to AI. Christina is also an advisory board member of the Future of Privacy Forum, advisory council member of the Center for Information Policy Leadership, and a member of the AI Governance Advisory Board for the International Association of Privacy Professionals. She received a BA from Binghamton University and a JD from Harvard Law School. Welcome, Christina. Thank you so much for being here.

CHRISTINA: Thank you Lori for giving me the opportunity to open today's summit. We're here today to talk about digital transformation. It's a fitting theme as we approach one year since the launch of ChatGPT, a moment which truly ignited awareness and adoption of AI worldwide. But these types of flashy consumer use cases are not where the real transformational powers lie. Foundation models are set to radically and quickly change how businesses operate. And as consumers and citizens increasingly test and trial AI, leaders in Canada are deepening their understanding of AI's potential and where the most value can be derived. In an IBM study released earlier this year, 78% of Canadian CEOs said they had a clear plan for the role advanced AI will play in their organization's decision-making five years from now.

And at the same time, the dramatic surge in public attention around AI has rightfully raised questions. And they're questions that are critically important, like what is the potential impact on society and the workforce? What do we do about challenges with AI around bias and explainability? What about misinformation and harmful and abusive content that can be generated by AI systems that are misused?

So just as this last year ushered in the meteoric rise of capabilities like ChatGPT and other generative AI models, it also brought forth a variety of policy recommendations around AI from around the globe. These are important conversations to be having right now. Leaders across the U.S. and Canada and around the world are steeped in discussions on how to significantly increase productivity and competitiveness to kickstart a new wave of economic growth. Most of us in this room have likely heard economic figures and projections tied to AI adoption. The CEO of IBM, Arvind Krishna, has said AI could add $10 trillion to global GDP by 2030. And in Canada, experts have predicted that AI could increase the nation's economy by $210 billion and potentially save the average Canadian worker 100 hours a year.

AI innovation is absolutely transforming businesses and industries, including in the public sector.
But it presents new and creative ways to think about how we might transform governments to modernize digital services, make departments more effective, and enhance services for our constituents. However, like any technology going through rapid development, AI could also be hazardous. And today there aren't enough rules of the road.

An era of AI cannot be another era of "move fast and break things." As leaders shape this technology, we play a critical role in ensuring a secure and responsible approach to AI adoption across business, government and industry. So how do we solve this challenge to ensure that we capture value while managing risk?

Importantly, for the public sector, public leadership can set the tone on AI adoption. As AI proliferates in consumer life and in our businesses, it's critical to maintain the public's trust. Governments worldwide have a heightened sense of urgency on this topic, including here in Canada, where last month we saw the introduction of interim guidelines to bridge the legislative process with Bill C-27 and the Artificial Intelligence and Data Act. The key for Canada in this legislation will be to find the right balance between regulations that protect Canadians from the risks of AI while letting innovators leverage a technology that will be crucial in our efforts to solve world issues, will reinvent how we do business, and provide services to citizens. It's encouraging to see the progress here in Canada, and we're happy to provide our perspective to the federal government on how these policies should be shaped.

While these guidelines and policies are critical for Canada's long-term success in AI adoption, so too is a plan of action for the public sector to define how AI can transform government and improve digital services for Canadians. Some of this work is already happening here today. Last year, for example, IBM worked with the city of Markham, Ontario, to leverage a virtual assistant to help voters access reliable and accurate information at any time of day about the upcoming municipal election. The initiative built on prior work, which was a Canadian first, where the city of Markham used the same platform to offer 24-hour customer service for residents looking for COVID-19 information through text, chat, and voice calls at any time.

But in IBM's view, no discussion of responsible AI in the public sector is complete without emphasizing the importance of the ethical use of technology throughout its life cycle. This includes design, development and use, and maintenance, with humans at the heart of services delivered by the government. Given where we are today, with adoption accelerating and governments progressing on AI policy, we have a window now to establish AI frameworks across organizations that support increased productivity while delivering trusted outcomes. Simply put, we believe Canada can benefit from a blueprint to guide responsible AI adoption, and that the government and the public sector can play a key leadership role in defining this plan.

At the foundation of this blueprint is a clear plan and perspective on responsible AI guidelines. This is particularly critical when we consider the potential benefits associated with implementing AI in government operations. For example, AI can help government departments reduce information overload and increase employee productivity by putting vast stores of government data to work, to produce contextualized services and create specific guidance for talent and workforce decisions. This can allow government employees to work on higher-value things. Contextual services for citizens, an area with heightened responsibility and scrutiny, can also be improved. AI and generative AI can be responsibly applied to summarize information and provide personalized responses to citizens' questions, such as eligibility for particular services.
How do I apply? What forms do I use? This can all be done through AI applications and generative AI. But at IBM, we deeply believe that trustworthy and responsible AI will lie at the heart of these improvements.

To wrap up, IBM has been at the forefront of responsibly introducing groundbreaking technologies for more than a century. Technologies that solve some of the world's most complex problems, and in many cases lead to a better quality of life for all. For us, responsibility here means we only release technology to the public after understanding its consequences, providing essential guardrails, and ensuring accountability. In short, we believe that addressing the repercussions of these innovations is just as important as the innovations themselves. This approach has never been truer than it is with AI, given the critical role that AI can play in transforming government if it is trusted. We look forward to doing our part and working with leaders like you here in Canada and worldwide to build an AI future that we can all trust.
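Montgomery returns to this point later in the conversation: assistants like the Markham example answer questions only from a closed, governed set of data, and decline anything they cannot ground in that data. The short Python sketch below is purely illustrative of that design choice; the postal codes, polling places, forms and function names are hypothetical and are not drawn from the Watson Assistant deployment she describes.

# Illustrative sketch only: a toy closed-domain assistant that answers
# strictly from a small, governed dataset and declines anything it cannot
# ground in that data. All values, names and rules here are hypothetical.

POLLING_PLACES = {  # hypothetical reference data keyed by postal-code prefix
    "L3P": "Markham Village Library, 6031 Highway 7",
    "L3R": "Milliken Mills Community Centre, 7600 Kennedy Rd",
}

SERVICE_INFO = {  # hypothetical services-and-forms lookup
    "childcare subsidy": "Use Form CS-100; apply online or at a service counter.",
    "property tax deferral": "Use Form PT-22; intended for seniors and persons with disabilities.",
}

def answer(question: str) -> str:
    q = question.lower()
    # Simple keyword matching stands in for real natural-language
    # understanding; the governing principle is the same either way:
    # respond only from known, governed data.
    if "polling" in q or "where do i vote" in q:
        for prefix, place in POLLING_PLACES.items():
            if prefix.lower() in q:
                return "Your polling place is: " + place
        return "Please provide the first three characters of your postal code."
    for service, details in SERVICE_INFO.items():
        if service in q:
            return service.title() + ": " + details
    # Outside the closed dataset: decline rather than guess.
    return "I don't have verified information on that. Please contact a service agent."

print(answer("Where do I vote? My postal code starts with L3R."))
print(answer("Am I eligible for the property tax deferral?"))
print(answer("What is the mayor's position on bike lanes?"))

The design point is the final branch: when a question falls outside the governed dataset, the assistant declines and routes the person to an agent rather than generating a plausible-sounding answer.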

LORI: Thank you, Christina. There's a lot of food for thought there. I don't think I'm alone in finding the transformation to AI integration scary. When you talk about the example of use of AI in supporting voters in an election, I find that very interesting and perhaps a very exciting opportunity, but also kind of scary. Can you talk to us a little more about how that worked? What was the process that led up to that? What were the risks and benefits? Did it help to make citizens more responsive? Was voter turnout better? As a political scientist, those are the sorts of things I think about, because on the one hand, you can see how this could be risky, but on the other hand, it could be the type of thing that can help us to combat some of the problems we're having in terms of voter suppression, declining turnout, and declining interest.

CHRISTINA: I would say that an AI model is just a representation of the underlying data. And because of the way AI can unlock value from data, it can do things that would have been much harder to do before without the model and the algorithm. Things like information retrieval, AI is good at that, summarization, integrating data silos if you have the right governance. When I talk about election information, what chatbots like the Watson Assistant are good at is helping with the information retrieval, the integration of that data, to answer questions based on a closed set of data. That's part of the government's database around elections, so the information is about where your polling place is, connecting it with addresses, that type of thing. It's not answering fundamental questions about what a particular politician's platform is necessarily, but more integrating data silos, whether it's your license and where you live, where your polling place might be, and connecting and bringing that to the forefront for citizens, making it easier for them to access information that they need.

LORI: That makes a lot of sense. I wonder if you could talk a bit about the other side of it, the culture in the public sector, the culture in the public. As we think about the risks and innovations, and again, whether AI is scary or whether it's a great opportunity or both, can you talk a bit about some of the cultural pieces, some of the cultural factors that will affect how people respond to AI?

CHRISTINA: I would say from a public sector perspective, it exists to serve its citizens. And when you look at whether it's the U.S., Canada, or anywhere, it's probably the only sort of organization that counts every single member of the country as one of its clients, and manages some of the most sensitive data, the critical lifeblood of benefits and services and the like. So, I think first and foremost, when it comes to government, when you think about the data that's managed, when you think about the importance of government services to citizens, trust must be part of the culture.

I also think a really important point is policymaking versus innovation and adoption of AI. Governments are notoriously slower to adopt innovative technology than the private sector might be. We're a little behind. We're a little slower on the government side to develop new innovations, a little more cautious.
And in this trend of AI, we're seeing the technology evolve so quickly that we've got this sort of sense that governments need to regulate. That's what they can do. They need to regulate to keep up with the technology, which is part of the reason, again, coming back to trust, coming back to transparency, basic principles around the adoption of AI, making sure they're not at odds with policymaking.

LORI: I think you can link those points around the trust in the process, because people expect that government is doing things behind the scenes to make sure that something is safe for them, whether it is a vaccine, a type of medication or a type of technology. Yes, we're six months behind the private sector. But for something like this, where it seems like the possibilities are endless, I think it's hard for people to trust even with the process behind it.

CHRISTINA: I absolutely agree with that, but I think there are basic principles that if you start applying them to AI, they're true in government and in the private sector. From a government accountability perspective, if something comes from the government, it should be true, it should be accurate, and there's a real dependency. We saw this with COVID-19, which is why that example of the election chatbot and the information regarding COVID vaccines are well connected for this time right now, because you think about government services being completely overwhelmed in the global pandemic, right? And applying technology to help address and get services to citizens, like vaccine information, like health information about COVID-19, in a time when governments were completely overwhelmed by the needs of citizens. So that's where these technologies could be hugely impactful. But if I'm getting misinformation from something like a chatbot about where to go for my vaccine or how to sign up for my vaccine, that could really be dangerous. We must navigate that from a strategic perspective, in terms of where are we going to deploy these helpful technologies without eroding trust.

LORI: If I can switch focus, I want to make sure we keep this mindfulness about Canada learning from the U.S. on the regulatory side. Given everything you know about what's happening in the U.S., what can Canada, on that regulatory side, learn with respect to our own regulatory process?

CHRISTINA: One of the things the U.S. is doing quite well is recognizing what they don't know. If you look at Senator Schumer, he's hosting a lot of convenings of private companies, government, academia, researchers in this space, very multidisciplinary convenings to educate policymakers about the implications and the risks of AI, the potential of AI and the like, so a lot of outreach. I would also point to The National Institute of Standards and Technology (NIST), a part of government that sits within the Department of Commerce. They've been working for two years on a risk management framework. Again, collaborating with a multistakeholder, cross-disciplinary group in the private sector, in academia, in government, and in civil society, to develop a framework for managing risk in AI that is applicable in any country of the world.

It's looking at the lifecycle of AI: how do you consider risks from the outset, how do you manage them over the lifecycle? The point in both examples is that government is partnering with those who are using and deploying AI, they're partnering with the communities that will be impacted by AI, the end citizens, and educating themselves on the technology before acting with prescriptive regulation. I think that's an important lesson because you can't regulate without having that sense of knowledge.

LORI: I'm a political scientist and come at everything through that lens, and it strikes me that this really has the capacity to change the relationship between state and non-state actors. There are implications for misinformation, democracy, privacy, ethics, service delivery, you name it. This is a complex area in which to gain legitimacy and literacy, and these are people who must keep on top of a bunch of different public issues, policy issues, problems, pieces of legislation, and more. What would you say is the level of literacy in government on this? It strikes me that this sort of issue has such huge implications and risk-benefit, but it's something where the non-state actors who are leaders in this field have a lot of leverage and huge amounts of information and understanding, and governments are figuring this out at the same time as they're figuring out all kinds of things.

CHRISTINA: It's not just AI. The relationship between private sector and public sector has evolved over the years. There's more and more research being done in the private sector. Private sector is playing a bigger role in even things like going to space. I do think there is a responsibility on the part of the private sector to help educate public sector on the technology, and to support research, publicly funded and privately funded, and opening that up to academia and the like, bringing in voices from other than just the private sector. That's one of the dangers that I want to warn against. It's interesting. I'm sitting here from IBM, and I'm saying don't listen to just private industry. I think from an AI perspective, because the harms play out in the application and context is so important, that you need more than just those who understand the technology to be helping inform policymakers. You need those who are subject to the technology. You need their perspective. You need research perspective. You need academic perspective, because academia has the freedom to study things like trust in a much deeper manner than, say, a corporation whose commercial interests may not fully align with throwing that much money into trust and safety. I think cross-disciplinary, multi-stakeholder solutions and convenings are what's important.

LORI: I appreciate that, and I agree with you about responsibility. I think the concept of corporate social responsibility is going to have to shift in this light entirely. I also want to ask you how ChatGPT has affected everything.

CHRISTINA: Let me back up and offer some perspective. I've been responsible for implementing AI governance in IBM for more than four years now. And I've been very actively involved in policy recommendations, having many conversations with regulators around the globe, putting our own principles around artificial intelligence into practice across our company. And it's been a journey.
And it was much more of a push than a pull until ChatGPT. That brought AI to the centre of attention for regular, everyday citizens in a way that just was not happening before. The conversations were certainly happening, companies were adopting AI, the public sector was adopting AI, but not to the degree that it is. The ability to take a general-purpose AI model to many different downstream uses has significantly transformed what AI is capable of and how to utilize it. It'll make it much easier to utilize and adopt AI across the globe.

That said, I think it's good that it originally came out in the form of something like a chatbot that everyone can interact with. Because you can also see there's no magic. It's wrong a lot. It's essentially predicting the next word in a sentence, which sometimes means it can very plausibly produce both accurate and wrong things. It has ignited those conversations. It's not magic. It needs to be regulated. And those same basic principles that we talked about in IBM four or five years ago are the same basic principles that apply in the context of ChatGPT and foundational models. Things like trust and explainability and preserving privacy and having security in your AI models and eliminating your bias from algorithms. It's all the same principles. It's just more tangible to people.

LORI: Okay. Thank you. I'm going to jump to some other questions. We're sometimes in a rush to implement and enable AI outside our organizations. Is there a benefit to trying to implement and learn internally first, taking it for a spin on the inside before anybody sees how it could affect things on the outside?

CHRISTINA: There absolutely is. One of the benefits of being in the chief privacy office of a technology company is that we get to test our own tech. We have a platform for generative AI, WatsonX. And the platform is fully capable in terms of having an AI studio to train AI models, a common data architecture so it works across multiple providers, not just IBM. And importantly, from my perspective, an AI governance capability. In the chief privacy office, we're using that technology internally. We're contributing back to the product team in terms of fact sheets and transparency and the ability to produce auditable documentation.

All those capabilities that we think we need from a governance perspective, we're helping to inform what's in our product. I think that's important.

LORI: How can we leverage AI to enhance the client experience with programs and services? How can we use this to make things better for people, whether it's health services, garbage pickup, or whatever the case may be?

CHRISTINA: It's where we started the conversation in terms of some of the things that AI is good at, and one of them is information retrieval. As I mentioned, an AI model is really just a representation of data. So, the more capable AI models can get, the more governance you put around data, which is necessary to implement AI, then the more you can erase data silos, leverage relevant information to bring it, first and foremost, into the hands of citizens. Some of the earliest use cases that we're seeing now, in particular with foundation models and the best use cases, are around things like customer service chatbots that will be able to pull relevant information, put it into the hands of customer service agents from across many systems in a company and enable a more informed and more relevant customer service experience.

LORI: I want to talk a bit more about privacy. How can we guarantee the public that the information they give will not be collected? And are there issues around government use of data, reasons that government can't use data in the same way that corporations can use data, so that they may not be able to reflect the same service experience back to a citizen as opposed to a customer?

CHRISTINA: Privacy is obviously fundamental to AI technology, and it's part of the reason why, from my own personal perspective, we're seeing so many privacy teams and privacy professionals also taking on responsibility for AI ethics. One of the primary focuses of the technology we're developing in IBM is that it must respect privacy rules and it must preserve privacy. And I think what you're seeing with things like ChatGPT, some of the earliest cases against it were filed by data privacy regulators who said, "You're not respecting, as a private company, things like GDPR in Europe, because you're training AI models on personal information." We've come a long way from those initial conversations. There absolutely is a need to respect privacy, both in the training of AI models and in the output. I don't think there are any proposals that say we should throw privacy law out the window in the face of AI. If anything, AI is bringing attention to the personal information regulatory environment, which is why it's so important. And when you look at things like Bill C-27 in Canada, it has elements associated with personal information and reinvigorating and bringing privacy law up to date in Canada, but also regulating artificial intelligence and data. I think the two are very much interlocked.

LORI: How does the private sector plan to equip the public sector to integrate and implement new technology based on trust dependencies?

CHRISTINA: I'll give you the example of IBM and our principles of trust and transparency, which we published six years ago. Those principles are that AI should be transparent and explainable. And for us, because we're an enterprise company, we don't train our AI on client data. Client data is their data.
And when you think about the pillars that underpin fairness, transparency and explainability, there are capabilities that relate back to scientific capabilities. We don't just articulate principles without providing the tools to help address the principles. We deployed some of the first toolkits into open source to help do things like generate fact sheets for AI, sort of nutrition labels that show the data that went into training a model, what it's good for, expected outputs, and the like. This makes it more transparent and explainable. We have also deployed to open source, and continue to work on and improve, capabilities around bias detection in AI algorithms. I think what private industry is doing is working on those capabilities to enable companies, public sector, wherever it might be, to adhere to the principles around transparency and bias detection and explainability in software.

LORI: Thank you. Before we conclude our conversation, do you have any final thoughts?

CHRISTINA: On the question of private industry not wanting technology regulated, I think that's not correct in general. Sure, there are some pockets of private industry that don't want regulation, but it is very clear that if technology is not trusted, it will not be adopted. And that's very dangerous to technology companies. We obviously want our technologies to be trusted. We've been very actively advocating for AI to be regulated for, as I mentioned, four years now, helping to deploy technical capabilities that will enable things like transparency and explainability and fairness.

The issue of sustainability is also critical. When do we use AI and when do we not use it? Each time you send in an inquiry to ChatGPT, that's costing energy and money. And when you think about deploying AI in your operations, I think it's important to have a plan for when this is the most optimal technology to use versus many other technological capabilities. That's the first point. Have a strategy and have a plan and be mindful of what you're using this technology for and what you're not using it for. Because there are implications with any technology from a sustainability perspective. Companies are working on how to make AI more efficient from a sustainability perspective, and that's a huge initiative of IBM. For example, we have offerings in this space that will help companies to understand their footprint across all their real estate and capture things like how much greenhouse gas is being emitted across a global footprint, how to track that, how to report it out. I think this comes back to your point around sustainability as well and ESG and how important it is for private companies to be involved and be leaders in ESG as well.

LORI: That is a really important point to end on. Christina, thank you so much. This has been fantastic. We've learned a whole lot from you.

CHRISTINA: Thank you for having me.
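Two of the governance capabilities Montgomery mentions, fact sheets that act as "nutrition labels" for models and open-source bias detection, can be sketched in a few lines. The Python example below is a generic illustration only: the field names and numbers are hypothetical, and it does not reproduce IBM's AI FactSheets format or its AI Fairness 360 toolkit, just the general shape of the idea.

# Illustrative sketch only: a lightweight model "fact sheet" record and a
# basic group-fairness check. Field names and numbers are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    name: str
    intended_use: str
    training_data: str
    expected_outputs: str
    known_limitations: list = field(default_factory=list)

def selection_rate(outcomes):
    # Share of favourable (positive) decisions in a group.
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    # Ratio of selection rates between two groups. Values far from 1.0 are a
    # common flag for further review; 0.8 is a frequently cited threshold,
    # although the appropriate threshold always depends on context.
    return selection_rate(group_a) / selection_rate(group_b)

sheet = ModelFactSheet(
    name="benefit-triage-demo",
    intended_use="Route citizen benefit inquiries to the right program area",
    training_data="Synthetic inquiry transcripts (hypothetical)",
    expected_outputs="One label from a fixed list of program areas",
    known_limitations=["English only", "Not to be used for eligibility decisions"],
)
print(sheet)

# Hypothetical favourable-outcome flags (1 = favourable) for two groups.
print("Disparate impact:", disparate_impact([1, 0, 1, 1], [1, 0, 0, 1]))

In practice a fact sheet would be generated and versioned alongside the model, and a fairness check would run on real, representative outcome data; the point here is only that both artifacts are concrete and auditable.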

PREPARING FOR CHANGE: Examining Government Transition Dynamics in Election Cycles

CGE LEADERSHIP SERIES

Michael Wernick holds the Jarislowsky Chair in Public Sector Management at the University of Ottawa and is also the former Clerk of the Privy Council. He is uniquely suited to comment on the role of the public service during a time of government transition. Canada does transfers of power very well, largely because the permanent public service provides stability while the political government either changes hands or, in the event of a return of the incumbent, recalibrates after an election. In conversation with Lori Turnbull, he delves into this critical issue.

JOHN JONES: Today, we're going to be talking about the "end times and new times" and the transition period as we head our way to an election. We're going to explore the topic of how governments prepare for these new times. Joining us is our special guest Michael Wernick, Jarislowsky Chair in Public Sector Management at the University of Ottawa, as well as Professor Lori Turnbull, Editor in Chief of Canadian Government Executive. Welcome to you both. This is a great topic. I'm looking forward to today's conversation.

LORI TURNBULL: Thank you, John, and thank you so much Michael, for joining us. We're really excited about this. I'm going to start by giving you the floor Michael so you can share all you know about government transitions. We're not at the transition time yet. It's March 2024. I don't think there's going to be an election until October 2025, but now that I said that I'm sure that I'll be proven wrong.

But we can use that as a possible date. And I think it's a likely date, given the circumstance with the confidence and supply agreement between the Liberals and New Democrats. There is security for the Liberal government, even though they're in a minority. My guess is that they are not in a rush to go to election anytime soon. There seems to be enough in this agreement with the NDP to keep them going. I never want to jump the gun and make assumptions about how Canadians will vote. But if one reads the polls, it looks like we are probably going to be heading towards some kind of change in government. So, without even presupposing that, I want to get your thoughts, Michael. What does it look like when a government is heading towards an election where they are going to be asking for a fourth term, which governments typically don't get? What are the possibilities here? What is going through the minds of public servants who are advising and supporting a government that is in the last part of its third term?

MICHAEL WERNICK: Thanks for the invitation. I'll put in a plug for my book if you don't mind. On some of this, I think I'll leave some of the political science to you. There's an important principle that we only have one government at a time and the main role of the public service in the federal government or any provincial government for that matter, is to deliver the programs and services and provide advice to the government until the next one is sworn in. So, this will play out in more or less three acts. Governments behind in the polls with a high probability of defeat create certain dynamics. And that's the sort of medium-term transition planning that we can delve into. The second act is the actual election period. Obviously campaigns matter and there will be swings in the polls and different senses of who's going to win. And people will be putting out seat projections and issues around the legitimacy of coalitions. And the third act is the pure transition, the handoff of power from one democratically elected group to another. And we do that reasonably well and quickly in Canada, but it will certainly raise a whole bunch of challenges for parts of the public service.

My second theme would be that for most of the federal public service, this doesn't really matter. They come in and deliver their programs, services, transactions, regulations and so on. And until a new government has a major change in machinery or policy, they just keep calm and carry on. So, transition really affects a narrow slice.

LORI TURNBULL: It's such an interesting thing because I think it's a really important point that the public service is the thing that stays the same. It's one of the reasons why Canada is so good at transition, because we have this permanent public service that continues to keep everything going. I want to go through this chronology you've put forward for us. How does the public service prepare? Because there are some countries where it is normal for some people in the public service to reach out to what could potentially be an incoming government even before the writ is dropped, so that there's some kind of coordination around what campaign promises could look like. I wonder if you could give us a little bit of that. Is it common for people to talk openly about the possibility that there could be a change in government even before the writ is dropped?

MICHAEL WERNICK: Well, part of the dynamic is the electoral promises. I mean that's open software and out in the public. If a government says we're going to legalize cannabis or bring an end to first past the post elections or repeal the carbon tax, that's out in the open and it's straightforward for public servants to think about. It's not our job to question whether you're going to do that, but we're going to engage on costing, legal risks, and international obligations. There are all kinds of things that the public service can bring to an early conversation about electoral promises.

The other stream is really all the issues and events that will never come up in the election campaign. I remember this conversation about, "You're going to be elected in late October and guess what? Within a few weeks, you're off to an international summit and you have to lead the Canadian position on climate change." So, there are upcoming court cases and international events and other kinds of milestones that you would want to bring to bear and say to a new department minister or Prime Minister: these are other things you're going to have to focus on and spend some time and energy on. That process of blending election promises and creating a to-do list is how the early days of an agenda get shaped.

The other phenomenon is like sands going through the hourglass. You work backwards. How many days of parliamentary time are left and how many days, how many meetings of Cabinet and Treasury Board can you have? And that number is going to start diminishing. So, the ability for this government to deploy new things is diminishing and their ability to finish things they've already started is diminishing. And that's fine. For many political purposes, they will deploy a piece of legislation and say, "Look at this, if you vote for the other guys, it might get taken away." It's kind of like leaving the table the way you want it when you call the election. In other cases, it's important to finish the job and get the bill passed.

So, I think there are parts of the public service where it will be super busy because they've got an active agenda from the current government. They're also thinking about how this would work out with different scenarios. There could be a blue majority. There could be a minority government, there are coalition possibilities, and given the swings that happen in elections, you have to be agile and think about a variety of scenarios.

LORI TURNBULL: For sure. Because I get worried sometimes that we put so much emphasis on polls. It's not to in any way detract from their methodological soundness, and it's not to suggest that they're not right. But they aren't a vote. They are a measurement of where the public is at the time and, as you say, campaigns matter a lot. We have a lot of people who don't switch on at all in terms of paying attention until the campaign, until the two-week mark. Over the past 10, 20 years we had a lot of voter promiscuity and people who didn't make up their minds sometimes until even the last weekend of the campaign. But it strikes me that we had issues around low turnout in the last Ontario election. Only 43.5% of people who were eligible showed up to vote. Now in that case there were issues around the result looking like a foregone conclusion. People weren't particularly excited about any of the people on the ballots, and a lot of people didn't show up for anybody. And I'm not sure that's necessarily going to happen. I don't think we're going to see the same thing at the federal level. It strikes me that there will be less voter volatility. There used to be a time where you could imagine a person making a choice between voting Progressive Conservative or Liberal. But now I think the iterations of the Liberals and the Conservatives federally, and the polarizing effect of their leaders, are creating a scenario where you get a stark contrast between the two parties. In your experience, is there an additional set of challenges if the ideological positioning of the incoming and outgoing government is particularly stark?

MICHAEL WERNICK: I'll start with a couple of things and then we can come back to this. We are used to watching the Americans fuss about 6 or 7 swing states in the Electoral College, but we're not that different. About 200,000 votes one way or the other would have changed the outcome of the last two elections. In most elections the parties are playing for a very small margin of voters in a very limited set of constituencies. Big majorities are the outliers.

In honour of Brian Mulroney, I'd go back to the 1988 election. He called the election for a mandate for free trade. He led in the polls when he called the election, then they swung towards the Turner Liberals mid-campaign. Then they swung back. You can imagine the convulsions this must have caused for people in trade and economic policy. So, to go back to your question, it's totally normal and legitimate that governments go back and forth from red to blue and then repeal or change direction. We saw the Martin government build a childcare program that the Harper government reversed and so on. Those are legitimate decisions for voters to make and they will decide which platform they prefer.

The fundamental point is that everybody accepts the rules of the game, which is 343 constituency races. What counts is obtaining a majority in the House of Commons. And that brings in issues about coalitions and partnerships. It's not the party that gets the most votes. It wasn't in 2021.

LORI TURNBULL: It wasn't in 2019 either.

MICHAEL WERNICK: It's the party that wins the most seats. Now, so far in every previous election, all the parties have accepted these key principles. It is first past the post. It is confidence of the House of Commons that decides who wins and people have accepted outcomes where the party that got the most votes overall didn't get the most seats. The more interesting outcomes are the 1985 Frank Miller election in Ontario and the British Columbia election in 2017. The party that won the most seats couldn't form a government because there was a workable coalition of other parties. This is commonplace in Europe, as you know, but we haven't really stress tested it in Canada. The acceptance of coalition governments, the dynamic that's going on today in the Netherlands and in Pakistan, is about the legitimacy of combinations of parties to form a government. We've never really had that debate in Canada. Some of your readers will remember the dissolution dispute of 2008, the political accord between the Liberals and other opposition parties to bring down the Harper government, which led to a fight about dissolution of Parliament. But the underlying issue was its legitimacy in the eyes of Canadian voters. For the stability of the country, it would be good if one of the teams wins a clear majority and forms a functional government. But there are scenarios that are not completely impossible.

LORI TURNBULL: We don't know that there's going to be a clear result for anybody, and we could be in a circumstance where the Conservatives come first, but they don't get a majority of seats.

MICHAEL WERNICK: I think for Mr. Poilievre this big lead in the polls cuts both ways. He will be asked by journalists during the campaign if he accepts the basic rules of the game, that whoever commands a majority in the House of Commons is the winner, even if it's the party that didn't win the most seats.
If he starts to question that during the campaign, then we're in a whole different space, right? And the other problem is he is now more and more seen as the government in waiting and has to go through almost 15 months as the government in waiting. He's going to be held to a much higher standard of explaining his policies and his programs. And what is he going to do? It's easier to surge from behind with some slogans and generalities than to be the frontrunner. I think he will have the problem of Canadians starting to see him as a government in waiting. He's going to have to manage that as well, just as the incumbent government does.

LORI TURNBULL: I agree. He won't have the same ability to bob and weave around journalists' questions then as he does now, and it will take a whole different tone.

MICHAEL WERNICK: But I would point out, Doug Ford won his first election with a fairly detailed policy program. And he won his second election with almost none. I think the need for detailed policy programs has diminished over time. But Canadians can engage the parties over the next year in terms of, "OK, what are you going to do if we give you the rings of power?" I would say there are a couple of strengths in Canada we shouldn't forget.

We've just gone through a major redistribution of federal constituencies. We've gone from 338 to 343. Those boundaries were redrawn by independent commissions, and nobody's challenging the legitimacy of those constituencies. We're not having an argument about gerrymandering the way they do in most American states, and we've never had a case provincially or federally where anybody has questioned that Elections Canada delivered an accurate count of a free and fair election. The actual electoral process has never been in question and is unlikely to be. Next year, we've got an interesting issue about foreign interference and dark money, and whether that's going to matter in a very, very narrow election that you can tip with 200,000 votes. Then, you know, foreign interference and dark money will become an issue.

LORI TURNBULL: Yeah. And just thinking about your comments that we don't have people questioning whether ballots were appropriately counted. We do not have the same thing that happened in the U.S., where people are questioning whether the administration of the election was fair.

MICHAEL WERNICK: What we have had, though, and again triggering memories, is we've had a round of discussions about non-citizens voting. There was a wave that you'll recall well about the Fair Elections Act, which Mr. Poilievre took the leadership on, and it was very much a concern about whether there was a margin of non-citizens voting during Canadian elections. What kind of identification and verification was appropriate? They made a few changes which the Trudeau government then repealed in another Fair Elections Act, and so on. We could have a bit of an argument about non-citizen voting in Canada, especially given the surge in immigration numbers, but I don't expect that to be a big part of the fabric of the next election.

"I think Canadians will learn a fair bit about the arithmetic of potential combinations. But I think it's the job of the media and others to draw out our political parties on this." (Michael Wernick)
