In today’s interconnected world, the cyber-based and data-driven dimensions of national security are continually expanding. It is not surprising, then, that Artificial Intelligence (A.I.) has emerged as an important and growing dimension of security and intelligence operations. The OECD defines A.I. as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Three types of harm can potentially stem from the design and deployment of A.I. systems: intended, unintended, and systemic.
Within such a context, and with an eye on the systemic biases inherent in A.I. design and deployment, our third report as the National Security Transparency Advisory Group (NS-TAG) set out to examine the evolving set of inter-relationships between national security authorities and racialized minorities.
From a Canadian vantage point, the Canada Border Services Agency (CBSA) is a particularly revealing case. A recent CBC article quotes an Agency executive acknowledging that the pandemic has enabled the agency to greatly accelerate its digital strategies and break through ‘glass ceilings’ that stood in the way prior to Covid-19. In his own meeting with NS-TAG members, CBSA’s President specifically cited the systemic biases of A.I. systems as one of his primary concerns about this digital evolution, as well as the importance of ensuring appropriate safeguards to mitigate such risks.
Such biases have been shown to be especially harmful to racialized communities. In early 2022, for example, a group of US Senators called upon US federal entities to abandon their use of Clearview AI (an American company specializing in A.I. solutions, notably facial recognition), citing specifically a ‘threat to Black Communities.’ In Canada, the RCMP’s past use of Clearview AI’s technology generated controversy, as well as disagreement between the police service and the federal Privacy Commissioner.
Our report draws upon such examples, as well as our own consultations with racialized communities and other stakeholders, and calls upon the national security community to commit to greater openness and engagement in its usage of A.I. solutions now and going forward. In doing so, we specifically endorse two key recommendations from a 2021 report (Getting Ahead of the Curve), issued jointly by the BC and Yukon Ombudsman and Information and Privacy Commissioners, that examines the public sector’s growing usage of A.I.: first, the need for guiding principles that incorporate transparency, accountability, legality, procedural fairness, and protection of privacy; and second, the need for government to promote capacity-building, co-operation, and public engagement on A.I. Beyond these directions, the Privacy Commissioners eloquently articulate why transparency is critical to the effective governance of automated and algorithmic systems.
As the Government of Canada has sought to make A.I. a pillar of new economic opportunities, working closely with industry in this regard, it is vital to ensure that such collaborative endeavours also proceed with as much shared openness as possible. Accordingly, our report recommends that Public Safety Canada work with Responsible A.I., a leading centre of excellence devoted to developing accredited standards of openness and usage across all sectors.
In order to effectively address the racial and gender biases inherent in A.I. design and usage, national security entities also require an appropriately diverse workforce and an inclusive working culture. Encouragingly, the Minister of Public Safety, Marco Mendicino, acknowledged this point in a June public forum organized by the Centre for International Governance Innovation, further adding that meaningful engagement with racialized communities is essential to cultivating the public trust necessary to underpin collective security.
In an equally encouraging sign, the Canadian Security Intelligence Service (CSIS) has also publicly responded to our report in a detailed and thoughtful manner, with a firm commitment to address the important matters at hand: ‘We know that the voices of racialized communities and Indigenous peoples have not been heard as clearly as they should in conversations around policy, legislative and operational deliberations on national security matters…. We are committed to changing this.’
Hopefully, then, our report can assist in widening dialogue and cultivating trust between racialized communities and national security authorities, a relationship all too often strained by suspicion and controversy. As digital innovation accelerates, transparency, engagement, and accountability are essential enablers of adaptive governance and collective security.