Regulation (EU) 2024/1689 (the EU Artificial Intelligence Act) is a risk-based regulation establishing rules for organisations that provide or use artificial intelligence systems within the EU, and for those systems themselves. The Act uses the term ‘operator’ to refer to organisations within the regulatory scope across the entire AI supply chain, and includes extraterritorial provisions applying to operators outside the EU.
Where the EU AI Act applies to a non-EU organisation, particularly a company or organisation whose legal entities are established only outside the EU, significant time and financial resources may be required to establish the requisite systems, policies, and processes.
This checklist outlines key steps for organisations outside the EU to determine whether the EU AI Act applies to them and, where it does, to comply with its obligations. It identifies operational considerations and highlights the critical analyses and reviews organisations should undertake to assess their compliance with the Act's requirements. Additionally, it proposes strategies for leveraging existing operational frameworks to meet those obligations.
1. Determining whether the EU Artificial Intelligence Act applies to the organisation
1.1 Understanding the extraterritorial scope of the EU Artificial Intelligence Act
It must be understood that the EU Artificial Intelligence Act will apply to an organisation located outside the EU if the organisation:
places artificial intelligence systems or general purpose artificial intelligence (GPAI) models on the EU market, or puts them into service within the EU;
or the output generated by its artificial intelligence system is used, or is intended to be used, within the EU.
1.2 Understanding how the EU AI Act defines an ‘artificial intelligence system’
Determine whether the system used or provided by the organisation is a machine-based system that:
is designed to operate with varying levels of autonomy;
may exhibit adaptability after deployment;
and infers, from the input it receives and for explicit or implicit objectives, how to generate outputs such as:
predictions;
content;
recommendations;
or decisions that can influence physical or virtual environments.
(See Article 3(1) of the EU AI Act.)
1.3 Identify AI systems excluded from the Act's scope
Determine whether the EU AI Act does not apply to the organisation's AI system because its use is limited to:
military, defence, or national security purposes (regardless of the entity undertaking these activities);
public authorities of third countries or international organisations acting under international agreements for law enforcement or judicial cooperation with the EU or its Member States (subject to specific safeguards);
pre-market scientific research, development, or testing (excluding testing under real-world conditions);
use by natural persons for purely personal, non-professional activities;
or release under free and open-source licences, unless the system is placed on the market as a high-risk AI system, constitutes a prohibited practice, or interacts directly with humans.
(See Article 2(3) to (12) of the EU AI Act.)
1.4 Understanding General Purpose Artificial Intelligence (GPAI) Terminology
a. Determining whether an organisation is a GPAI model operator
It should be understood that an AI model qualifies as a GPAI model if it meets the following criteria:
It is trained on large amounts of data using self-supervision at scale;
It displays significant generality;
It is capable of competently performing a wide range of distinct tasks;
And it can be integrated into a variety of downstream systems or applications.
(See Article 3(63) of the EU AI Act.)
b. Determining whether an organisation is a GPAI system operator
Consider that an AI system qualifies as a GPAI system if it meets the following criteria:
It is based on a GPAI model;
And it can be used for multiple purposes, either directly or integrated into other AI systems.
c. Determining whether a model constitutes a GPAI model with systemic risk
It must be understood that a GPAI model poses systemic risk if it has high-impact capabilities within the meaning of Article 51 of the EU AI Act. A GPAI model is presumed to have high-impact capabilities where the cumulative amount of computation used for its training exceeds 10^25 floating-point operations (FLOPs).
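By way of illustration only, the sketch below compares an estimated training-compute figure against the 10^25 FLOP presumption threshold. The 6 × parameters × training tokens approximation is a widely used engineering rule of thumb rather than a method prescribed by the Act, and the model figures used are hypothetical.

```python
# Illustrative sketch only. The approximation FLOPs ~= 6 x parameters x
# training tokens is a common engineering rule of thumb, not a calculation
# method prescribed by the EU AI Act.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in Article 51(2)

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute using the 6ND rule of thumb."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # 8.40e+23
print("Presumed high-impact capabilities:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```

On these hypothetical figures the presumption would not be triggered, although designation by the European Commission on other criteria (discussed next) would remain possible.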
Consider that a model may also be designated as a GPAI model posing systemic risk by decision of the European Commission (acting on its own initiative or following a qualified alert from the scientific panel) that it has high-impact capabilities, based on criteria (see Annex XIII) including:
The number of model parameters;
The quality or scale of the dataset;
The computational effort expended during training;
Input and output modalities (e.g., text, image, or biological sequences);
Benchmarks and evaluations of the model's capabilities, including its level of autonomy and its adaptive learning capabilities;
Its impact on the EU internal market by reason of its reach;
And the number of registered end-users.
2. Identifying an Organisation's Role in the AI Supply Chain
Where the EU Artificial Intelligence Act applies, determine whether the organisation operates in any of the following capacities:
a. Provider:
Develops or commissions the development of an AI system or GPAI model;
and places it on the EU market or puts it into service within the EU for the first time under its own name or trademark (whether for payment or free of charge).
b. Deployer:
An entity using an AI system in the course of its professional activities.
c. Distributor:
An entity making AI systems available on the EU market;
and which is neither a provider nor an importer.
d. Importer:
An entity established or located within the EU;
placing on the EU market AI systems bearing the name or trademark of an entity established outside the EU.
e. Product Manufacturer:
The entity responsible for producing the final product (see Recital 87 of the EU AI Act).
2.1 Understanding the Transfer of Provider Obligations
Determine whether the organisation, or a downstream deployer, distributor, or importer, modifies an AI system in any of the following ways:
affixing its name or trademark to a high-risk AI system already placed on the market or put into service in the EU;
making a substantial modification, i.e., a change to the system beyond the scope of the provider's initial conformity assessment that affects its compliance or intended purpose;
or altering the intended purpose of an AI system in a manner that renders it a high-risk AI system.
Determine whether any such modifications alter the organisation’s role within the AI system supply chain.
Determine whether any such modifications change the AI system’s risk classification and the organisation’s corresponding obligations.
2.2 Consider Appointing an Authorised Representative
Understand that an organisation must appoint an authorised representative if it falls under either of the following categories:
A provider of high-risk AI systems;
Or a provider of GPAI models.
Consider that where the Authorised Representative requirement applies, the organisation must:
Complete the appointment of an Authorised Representative prior to placing the AI system or GPAI model on the EU market;
And ensure that the Authorised Representative is established within an EU Member State.
Consider that the organisation must empower the Representative with the authority to verify the organisation's compliance with the following requirements:
Technical documentation requirements;
And conformity assessment requirements.
3. Determining the applicable risk classification for each artificial intelligence system
3.1 Identifying prohibited artificial intelligence practices
Identify any prohibited AI practices that the organisation needs to phase out of the EU market, including:
Deploying subliminal, manipulative or deceptive techniques that substantially distort individual behaviour in a manner reasonably likely to cause significant harm by undermining informed decision-making or autonomy;
Exploiting the vulnerability of individuals or groups (e.g., age, disability, or economic status) to substantially distort their behaviour in a manner reasonably likely to cause significant harm;
Assessing or classifying people based on their social behaviour or personality traits to create social scoring systems that lead to adverse treatment;
Assessing or predicting an individual's risk of committing criminal offences based solely on profiling or on an assessment of personality traits and characteristics;
Untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases;
Inferring emotions in workplaces or educational institutions (except for medical or safety reasons);
Classifying individuals based on biometric data to infer sensitive characteristics, including race, political views, or religious beliefs;
Or employing real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (subject to narrow exceptions).
3.2 Identifying High-Risk Artificial Intelligence Systems
Identify any AI system that:
is itself, or constitutes a safety component of, a product covered by existing EU product safety legislation;
and must undergo third-party conformity assessment under that product safety legislation (e.g., toys and medical devices).
Identify any AI system listed in Annex III of the EU Artificial Intelligence Act that may pose a significant risk of harm to the health, safety, or fundamental rights of natural persons in the EU, including AI systems used in the following areas:
Biometric technologies;
Critical infrastructure;
Education and vocational training;
Employment;
Access to essential private and public services;
Law enforcement;
Immigration, asylum, and border control;
Or the administration of justice and democratic processes.
3.3 Identifying Limited-Risk Artificial Intelligence Systems
Identify any AI system not classified as high-risk but meeting the following conditions:
Directly interacting with individuals within the EU (e.g., chatbots);
Generating synthetic audio, image, video, or text content (including generative large language models);
Used for emotion recognition or biometric classification;
Generating or manipulating image, audio, or video content (e.g., deepfakes);
Or generating or manipulating text published to inform the public about matters of public interest.
3.4 Identifying Low-Risk or No-Risk Artificial Intelligence Systems
Identify any AI system meeting the following conditions:
Performing simple automated tasks without direct human interaction (e.g., email spam filters);
And not falling under any other risk category.
3.5 Understanding Obligations Regarding Prohibited AI Practices
Organisations must understand that entities at any stage of the AI supply chain must not supply or use AI systems that deploy or incorporate prohibited AI practices.
Consider that organisations failing to comply may face fines of up to €35 million or 7% of their global annual turnover, whichever is higher.
3.6 Understanding obligations regarding high-risk artificial intelligence systems
Determine whether high-risk AI systems meet the technical requirements, including:
a risk management system that runs throughout the system's lifecycle and is regularly reviewed;
data governance and management practices for training, validation, and testing datasets;
up-to-date technical documentation;
automated logging (a minimal illustration follows this list);
information and instructions for use provided to deployers;
effective human oversight;
and accuracy, robustness, and cybersecurity.
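To make the automated-logging requirement concrete, the following is a minimal, hypothetical sketch of timestamped event recording for a high-risk AI system. It illustrates the concept behind the record-keeping requirement (Article 12) and is not, on its own, a compliant implementation; the system identifier and event fields are invented for the example.

```python
# A minimal, hypothetical sketch of automated event logging for a high-risk
# AI system. Illustrative only; not a compliant Article 12 implementation.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

def log_inference_event(system_id: str, input_ref: str, output_ref: str) -> None:
    """Record one inference event with a UTC timestamp for later traceability."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,    # a reference to the input, not the data itself
        "output_ref": output_ref,  # a reference to the generated output
    }
    logger.info(json.dumps(event))

# Hypothetical usage for a single automated decision.
log_inference_event("recruitment-screening-v1", "application-1042", "ranking-1042")
```

In practice, logs must cover the system's lifetime and support traceability of results, so append-only storage, retention controls, and access safeguards would also be needed.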
3.7 Understanding obligations regarding limited-risk artificial intelligence systems
Organisations operating as providers or deployers of limited-risk AI systems must recognise their obligation to comply with specific transparency requirements. The precise obligations depend on the organisation's role and the nature of the limited-risk AI system, but in all cases organisations must:
Provide individuals with specific information, which must be:
Provided in a clear and intelligible manner;
And provided before or at the time of the first interaction or engagement with the AI system.
Meet AI literacy requirements.
4. Considering Specific Obligations for GPAI
4.1 Understanding Specific Obligations for General Purpose Artificial Intelligence (GPAI) Models
Where an organisation is classified as a provider of a GPAI model, it must:
Create and maintain up-to-date technical documentation for the model.
Create and maintain sufficient and current information to enable downstream supply chain members to comply with their obligations under the EU Artificial Intelligence Act. This information includes:
Information specified in Annex XI of the EU AI Act (technical documentation for providers of GPAI models);
How the model interacts with external hardware or software, along with any relevant software version information;
Technical details regarding model integration, usage instructions, modalities, data formats, training data, and training data sources.
Establish a policy to comply with EU copyright law.
Publish a sufficiently detailed summary of the content used to train the model.
Consider whether high-risk system obligations similarly apply.
Appoint an authorised representative.
4.2 Understanding Specific Obligations for GPAI Models with Systemic Risk
If an organisation is classified as a provider, it must:
Fulfil all obligations specific to GPAI models.
Notify the European Commission within two weeks of reaching the systemic risk threshold or becoming aware that such threshold is imminent.
Evaluate and test the model using state-of-the-art standardised protocols and tools (including adversarial testing) to identify and mitigate systemic risks.
Assess and mitigate systemic risks that may arise at EU level.
Keep track of and document any serious incidents and possible corrective measures, reporting them without undue delay to the EU AI Office and, as appropriate, national competent authorities.
Ensure the model (including its physical infrastructure) possesses adequate cybersecurity safeguards.
5. Understanding Transparency Requirements
5.1 Determining whether transparency rules apply to the AI system
Identify any AI systems used or provided by the organisation whose intended purpose is to:
Interact directly with individuals;
Or generate content for viewing by individuals.
Determine whether the organisation is a provider or deployer of the relevant AI system.
Provide transparency information no later than when an individual first uses or encounters the content.
5.2 Understanding Transparency Rules for Providers
If the AI system interacts directly with humans (e.g., chatbots), ensure:
The AI system informs individuals that they are interacting with an AI system;
and that transparency information is clear and easily identifiable.
Where an AI system generates synthetic content, ensure:
AI system outputs are marked in a machine-readable format and detectable as artificially generated or manipulated (a minimal illustration follows);
transparency information is clear and easily identifiable;
and that, taking into account technical feasibility and implementation costs, the marking solution employed is as effective, interoperable, robust and reliable as practicable.
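As a minimal, hypothetical illustration of machine-readable labelling, the sketch below attaches a structured disclosure record to generated text. Production systems typically rely on provenance standards such as C2PA or on watermarking; the field names here are invented for the example.

```python
# A minimal, hypothetical sketch of machine-readable labelling of synthetic
# content. Illustrative only; all field names are invented for this example.
import json

def label_synthetic_content(content: str, generator_name: str) -> str:
    """Attach a machine-readable disclosure record to generated text."""
    record = {
        "content": content,
        "disclosure": {
            "ai_generated": True,
            "generator": generator_name,  # hypothetical model identifier
        },
    }
    return json.dumps(record, ensure_ascii=False)

# Hypothetical usage.
print(label_synthetic_content("An example synthetic paragraph.", "example-model-v1"))
```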
5.3 Understanding Transparency Rules for Deployers
Where an AI system performs emotion recognition or biometric classification:
Personal data must be processed in accordance with relevant data protection laws;
Disclosure information must be provided to the relevant data subjects;
And transparency information must be clear and easily identifiable.
Where an AI system generates or manipulates deepfake content:
Disclosure must be made that the content is AI-generated or manipulated;
And transparency information must be clear and easily identifiable.
If an AI system publishes synthetic text concerning matters of public interest:
Disclose that the text is AI-generated or manipulated, unless exceptions apply;
Ensure transparency information is clear and readily identifiable.
6. Conducting Required Assessments
6.1 Conducting Required Conformity Assessments for High-Risk AI Systems (Providers)
a. Identify opportunities to leverage the provider's existing data governance practices for conformity assessments, such as its:
Procedures for assessing data privacy and cybersecurity risks;
Data mapping and record-keeping practices;
Existing risk assessments, including Data Protection Impact Assessments (DPIAs);
And technical documentation.
b. Determine how providers of high-risk AI systems should conduct the conformity assessment:
internally (e.g., for most high-risk AI systems, including those complying with the Act's harmonised standards);
or externally, by an authorised third party known as a notified body (e.g., where the AI system is subject to existing EU product safety or other legal requirements mandating third-party assessment) (see Article 43 of the EU AI Act).
c. Where the provider conducts an internal conformity assessment, it must:
verify that its quality management system complies with Article 17 of the EU AI Act;
confirm the compliance of technical documentation;
and confirm that the design and development process of the AI system, and its post-market monitoring, are consistent with the technical documentation.
d. Where conformity assessment is conducted by an external notified body, the following must be provided to that body:
Technical documentation;
Access to the provider's premises;
Updated information, including any proposed changes to the AI system or quality management system;
Opportunities for periodic audits and testing;
and, in limited circumstances, access to the AI system's training data and trained models.
6.2 Preparing the EU Declaration of Conformity for High-Risk AI Systems
Identify opportunities for the provider to consolidate the Declaration of Conformity with other similar obligations, including other declarations of conformity required under EU law.
Draft the EU Declaration of Conformity to include the following:
The name, model, and other unique reference information for identifying and tracing the AI system;
The name and address of the provider or authorised representative;
A statement that this declaration is issued under the sole responsibility of the provider;
Confirmation that the AI system complies with the EU Artificial Intelligence Act;
Where the AI system processes personal data, confirmation of compliance with EU data protection regulations;
Relevant harmonised standards or other common specifications;
Where applicable, information on the notified body, the conformity assessment procedure, and the certificate of conformity;
The name of the authorised signatory;
And the date and place of signing.
6.3 Conducting the Required Fundamental Rights Impact Assessment (FRIA) for High-Risk AI Systems (Deployers)
Consult the EU AI Office to obtain the FRIA template.
Consider leveraging the organisation's existing governance mechanisms to:
Identify high-risk AI systems within the scope;
Identify and coordinate internal and external stakeholders;
And obtain and review the AI system's technical documentation and user manuals.
Identify opportunities to utilise the organisation's existing Data Protection Impact Assessment (DPIA) processes to conduct the FRIA.
Complete the FRIA prior to the first use of the AI system (systems intended to be used as safety components in the management and operation of critical infrastructure are exempt from the FRIA requirement).
Ensure the FRIA includes:
A description of the deployer's processes in which the high-risk AI system will be used;
An indication of the period and frequency of the intended use;
Identification of the categories of natural persons and groups likely to be affected by the high-risk AI system;
Identification of the specific risks of harm likely to affect those persons or groups;
And a description of human oversight measures and other risk mitigation measures.
Even where not mandatory, conducting an FRIA may be considered a best practice.
7. Determining Compliance Deadlines
It should be recognised that the EU Artificial Intelligence Act formally entered into force on 1 August 2024.
It should be understood that most provisions will apply from 2 August 2026, though certain requirements have earlier compliance dates, including:
Prohibited AI practices, fully banned from 2 February 2025;
AI literacy obligations, applicable from 2 February 2025;
GPAI model rules, applicable from 2 August 2025;
And penalty provisions, applicable from 2 August 2025 (though penalties for GPAI model providers commence on 2 August 2026).
It should be understood that rules concerning high-risk AI systems constituting safety components covered by existing EU product safety legislation will apply from 2 August 2027.
It should be recognised that products already placed on the market may be subject to different compliance dates (see Article 111(1)-(3) of the EU AI Act).
7.1 Consider Voluntary Early Compliance
Consider joining the ‘AI Pact’ to voluntarily comply with the EU AI Act ahead of the planned enforcement date.
Consider adopting voluntary codes of conduct and governance mechanisms, particularly for high-risk compliance requirements.
Special Notice:
This article is an original work by a lawyer of JAVY Law Firm and represents solely the author's personal views. It shall not be construed as formal legal advice or recommendations issued by JAVY Law Firm or its lawyers. Should any content herein be reproduced or referenced, the source must be duly acknowledged.