A federal advisory group in Canada is calling for increased transparency from security agencies regarding their use of artificial intelligence systems and software applications. The group recommends that agencies publish detailed descriptions of their current and intended uses of AI, as well as consider amending legislation to ensure oversight of AI use within federal agencies. The government views the group as a way to implement a federal commitment to transparency in national security. While security agencies emphasize the importance of openness, they also acknowledge limitations in what they can disclose publicly due to the nature of their work.

Security agencies in Canada are already using AI for various tasks, such as translation of documents and detection of malware threats. The report predicts a growing reliance on AI technology for analyzing large volumes of data, recognizing patterns, and interpreting trends and behavior. Public awareness of the objectives and activities of national security services is crucial as the use of AI expands within the sector. The report highlights the need for mechanisms to enhance transparency within the government and improve external oversight and review.

The Canadian government has issued guidance on the use of artificial intelligence, requiring agencies to conduct algorithmic impact assessments before implementing AI systems. Additionally, the Artificial Intelligence and Data Act, currently before Parliament, aims to ensure responsible design, development, and rollout of AI systems. However, the act does not cover government institutions like security agencies, prompting the advisory group to recommend extending the law’s jurisdiction to include them. The report emphasizes the importance of collaboration between the government and private sector in national security objectives, noting that secrecy can breed suspicion while openness and engagement are key to innovation and public trust.

The Communications Security Establishment (CSE), Canada’s cyberspy agency, has been a leader in using data science and AI to analyze information. The agency emphasizes that AI is used to enhance human decision-making rather than replace it. The CSE has developed AI tools for tasks such as translation and detecting phishing campaigns. The agency faces unique limitations related to national security that may impact its ability to disclose details of its AI use. The Canadian Security Intelligence Service (CSIS) is also working on formalizing plans and governance around AI use, with transparency as a core consideration, though there are limits on what can be publicly discussed to protect operational integrity.

Concerns about the use of cutting-edge AI technology were raised when the RCMP was found to have breached federal privacy law by using facial recognition software from Clearview AI. In response, the RCMP created the Technology Onboarding Program to assess new tools for compliance with privacy legislation, and plans to publish a transparency blueprint outlining key principles and tools. The transparency advisory group calls for more public reporting on the government’s transparency commitment, including a review of initiatives and their impacts. Public Safety Canada has received and shared the group’s recommendations, but has not indicated agreement or provided a timeline for implementing them. Overall, the report underscores the importance of transparency and oversight in the use of artificial intelligence within Canada’s national security framework.