policy monitor

Canada – Artificial Intelligence and Data Act (AIDA)

The Canadian government introduced the Artificial Intelligence and Data Act (“AIDA”), as part of Bill C-27, for first reading in the House of Commons in June 2022. The AIDA seeks to regulate trade and commerce in AI systems through common requirements for the design, development and use of such systems. The act also prohibits certain uses of AI systems that may result in serious harm to individuals.

What: legislative proposal

Impact score: 2

For whom: policy makers, sector organisations, AI companies and users

URL: https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading


Broad application and risk-based approach

The AIDA broadly defines AI systems as technological systems that, autonomously or partly autonomously, process data related to human activities through genetic algorithms, neural networks, machine learning or other techniques in order to generate content or to make decisions, recommendations or predictions. The Act takes a risk-based approach, applying additional, stricter obligations to “high-impact AI systems”. However, the criteria for when a system qualifies as “high-impact” still have to be defined in additional regulations.

Obligations for all AI systems

The draft Act contains provisions addressing AI systems in general. Firstly, persons designing, developing, making available or managing the operation of an AI system must assess whether their system is a high-impact system. Secondly, if an AI system uses anonymized data, these persons must establish measures governing the manner in which the data is anonymized and the use and management of that anonymized data. This obligation also applies to providers and processors of anonymized data for AI systems. For both of these obligations, the relevant persons must keep general records of their reasons and measures, as well as any additional records required by additional regulations.

Obligations for high-impact AI systems

Persons responsible for high-impact systems must take measures to identify, assess and mitigate the risks of harm or biased output arising from the use of those systems. They must also monitor these measures and their effectiveness, and keep records describing the measures in general terms as well as any records required by additional regulations. “Harm” here means physical or psychological harm to an individual, damage to an individual’s property or economic loss to an individual.

Providers and operators of high-impact systems must publish a plain-language description of the system on a public website, including:

  • explanations of its intended (providers) or actual (operators) use;
  • the type of content, decisions or recommendations it creates (or for providers, intends to create);
  • the risk mitigation measures;
  • and any other information required by additional regulations.

In addition, persons responsible for a high-impact system must notify the Minister if use of the system results, or is likely to result, in material harm.


Enforcement

The AIDA also grants enforcement powers to the competent Minister. The Minister can, for instance, order persons subject to the AIDA to provide the general records described above. The Minister can also conduct an audit, either directly or through an independent auditor, if there are reasonable grounds to believe that the AIDA’s requirements for AI systems have not been complied with. Following the audit, the Minister can require that risk-mitigating measures be implemented to address issues identified in the audit.

For high-impact systems, the Minister has additional enforcement tools at their disposal. They can require additional records to be provided if the high-impact system could result in harm or biased output. If there is a serious risk of imminent harm, the Minister can also order the responsible persons to cease using or making available the high-impact system.

Finally, the Minister may order that these persons publish information relating to any of the requirements under the AIDA, or to the audit, on a publicly available website. Such an order may not, however, require the disclosure of confidential business information.


Confidentiality of business information

The AIDA also contains several provisions that balance its enforcement with the confidentiality of business information. The Minister may disclose confidential business information where required by a subpoena, warrant or court order, or to designated analysts for the administration of the Act, provided measures are taken to preserve its confidentiality. Relevant information may also be shared with other governmental organisations if there are reasonable grounds to believe that a person subject to the AIDA has infringed the law. Such sharing may only include personal information or confidential business information where this is necessary for administering the law and the receiving organisation agrees to keep the information confidential.

Finally, the Minister may publish information on a person’s breach of the AIDA if this is in the public interest, but may not publish confidential business information in doing so. The Minister may also publish, without consent or notification, information on an AI system if there are reasonable grounds to believe that the system poses a serious risk of imminent harm and publication is essential to prevent that harm. Such a publication may, however, not include personal information or confidential business information.


Penalties and offences

A person who violates obligations under the AIDA may be subject to the penalties in the AIDA or face administrative fines. Fines under the AIDA for non-individuals (e.g. organisations) can amount to the greater of $10,000,000 CAD or 3% of their gross global revenue on conviction on indictment, and the greater of $5,000,000 CAD or 2% of their gross global revenue on summary conviction. Fines for individuals are at the discretion of the court, but may not exceed $50,000 CAD on summary conviction.
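
The “greater of” fine ceilings for non-individuals reduce to a simple maximum of a fixed amount and a revenue percentage. As a minimal illustration of that arithmetic (the function name and the revenue figure are hypothetical, not from the Act):

```python
def max_fine_non_individual(gross_global_revenue_cad: float, summary: bool = False) -> float:
    """Maximum AIDA fine ceiling for a non-individual (e.g. an organisation).

    Conviction on indictment: greater of $10,000,000 CAD or 3% of gross global revenue.
    Summary conviction: greater of $5,000,000 CAD or 2% of gross global revenue.
    """
    if summary:
        return max(5_000_000, 0.02 * gross_global_revenue_cad)
    return max(10_000_000, 0.03 * gross_global_revenue_cad)

# Hypothetical organisation with $1 billion CAD gross global revenue:
print(max_fine_non_individual(1_000_000_000))                # 3% exceeds $10M floor
print(max_fine_non_individual(1_000_000_000, summary=True))  # 2% exceeds $5M floor
```

For smaller organisations the fixed floor dominates: at $100,000,000 CAD in revenue, 3% is only $3,000,000, so the indictment ceiling stays at $10,000,000 CAD.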

The AIDA also prohibits persons from possessing or using personal information for the purpose of designing, developing, using or making an AI system knowing or believing that it was illegally obtained. Additionally, a person may not knowingly or recklessly make available an AI system that is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property. It is similarly forbidden to make a system available with the intent to defraud the public and cause substantial economic loss. Penalties for these offences can include fines of up to the greater of $25,000,000 CAD or 5% of gross global revenue (for non-individuals) and fines as well as imprisonment of up to 5 years (for individuals).