The EU, as a supranational system of integrated markets, has long taken a relatively aggressive regulatory stance towards new business models. Since the explosion of the internet industry in the 2000s, the EU has not kept pace with market changes as quickly as the US and China, and its local companies have lagged behind in commercialising internet services, big data and AI. It has therefore traditionally favoured assertive legislation in these areas to prevent large multinationals from acquiring a dominant position in the EU market that is difficult to counteract. Since the GDPR took effect in 2018, the EU's legislative exploration of the digital economy has been the principal reference point for other major economies drafting comparable legislation. Given this legislative trend and the rapid development of the AI industry, it is worth becoming familiar with the EU's legislation for the AI industry, both as preparation for multinational enterprises entering that market and as a reference for possible future AI legislation in China.
I. Overview
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes a comprehensive legal framework for the development, supply and use of AI systems in the EU. The framework aims to ensure that AI systems are safe and respect fundamental rights and values, while also supporting AI innovation. The Act broadly consists of the following:
(1) Harmonisation of the rules for the supply and use of AI systems in all Member States, including extraterritorial provisions for operators outside the EU.
(2) Prohibition of certain AI practices that may be particularly hazardous.
(3) Technical requirements for high-risk AI systems with significant risk of harm.
(4) Requirements for the different organisations in the supply chain of high-risk AI systems, from the provider to the deployer of the AI system.
(5) Rules for providers of general-purpose artificial intelligence (GPAI) models, including provisions to protect copyrighted works used in model training.
(6) Transparency requirements for certain AI systems that interact directly with humans or generate certain types of content.
(7) Rules for market oversight, governance, and regulatory enforcement.
(8) Helpful measures to support innovation, particularly to assist small and medium-sized enterprises and start-ups. These measures include sandboxes for safe AI product development.
II. Risk classification of AI systems
Under the AI Act, the risk of an AI system is defined as a combination of the likelihood of harm and its severity, and different rules apply to AI systems at different levels of risk. The AI Act establishes four risk categories: prohibited AI practices, high-risk AI systems, limited-risk AI systems, and minimal- or no-risk AI systems. There are also specific rules for GPAI models.
Most AI systems are likely to be minimal or no risk (e.g., email spam filters) and will not attract any additional compliance obligations under the AI Act, other than the broader obligation of AI literacy that applies to all AI systems. The rules for other AI systems depend on their risk category and, in some cases, the role of the organisations that provide or use them in the supply chain of AI systems.
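For readers who prefer a concrete illustration, the sketch below expresses this four-tier classification as a simple data structure. It is purely illustrative: the AI Act does not define a programmatic taxonomy, and the example systems assigned to each tier are assumptions for illustration (the spam filter comes from the text above), not legal determinations.

from enum import Enum, auto

class RiskCategory(Enum):
    PROHIBITED = auto()          # Article 5 practices, banned outright
    HIGH_RISK = auto()           # Article 6 / Annex I and Annex III systems
    LIMITED_RISK = auto()        # transparency obligations only
    MINIMAL_OR_NO_RISK = auto()  # no obligations beyond general AI literacy

# Assumed, illustrative assignments (not legal determinations).
ASSUMED_EXAMPLES = {
    "social scoring system": RiskCategory.PROHIBITED,
    "CV-screening tool used in recruitment": RiskCategory.HIGH_RISK,
    "customer-service chatbot": RiskCategory.LIMITED_RISK,
    "email spam filter": RiskCategory.MINIMAL_OR_NO_RISK,
}

for system, category in ASSUMED_EXAMPLES.items():
    print(f"{system}: {category.name}")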
The AI Act contains new compliance rules for a wide range of operators in the AI system supply chain. The Act uses the term 'operator' to refer collectively to the different roles in that chain: providers, deployers, distributors, importers, authorised representatives and product manufacturers. Responsibilities and obligations attach to these roles in different ways, so determining exactly which party performs which role is key to meeting compliance obligations.
III. Commencement and Application Dates of the Act
As the new framework imposes significant new requirements on a range of operators in the AI value chain, organisations involved in the development, supply or use of AI systems should assess the extent to which their activities may be affected. Most of the requirements of the Act apply from 2 August 2026, but some apply from 2025 onwards.
Organisations whose business involves the EU should take proactive steps to understand the extent to which the AI Act is likely to apply to them, so that they can assess and implement any changes required to their internal processes to ensure compliance. Most of the requirements of the AI Act (including those applicable to high-risk AI systems) do not apply until 2 August 2026. However, some requirements come into force in 2025, namely those relating to prohibited AI practices, AI literacy and new GPAI models. Certain specific compliance requirements have application dates set at 2027 or 2030 (Article 111 of the Act).
Pending full application of the Act, the European Commission encourages organisations to voluntarily join the AI Pact before the corresponding provisions of the AI Act become applicable. The AI Pact is intended to operate during this transition period and is designed to help organisations manage their transition to full compliance with the Act.
IV. Scope of Application of the Act
Like the GDPR, the AI Act has strong extraterritorial effect: it applies to any organisation or person placing an AI system or GPAI model on the EU market, regardless of where they are established or where their operating entity is located. In addition, even if a product or service is not directly aimed at the EU market, the Act applies to providers and deployers located outside the EU where the output produced by the AI system is used in the EU.
If entities not established in the EU fall within the scope of the AI Act (and their AI systems do not fall within any exclusions; see Excluded AI Systems), the following compliance practices should also be undertaken:
(1) Identify and phase out prohibited AI practices (see Prohibited AI Practices).
(2) Examine the categories of high-risk AI systems (see High-Risk AI Systems) and consider any technical compliance requirements (see Technical Compliance Requirements for High-Risk AI Systems).
(3) Consider their role in the AI system supply chain and the operator's compliance obligations.
(4) Consider whether they are a provider of GPAI models and check the required compliance obligations.
(5) Consider the transparency requirements for AI systems that interact with individuals. These requirements apply to both limited risk AI systems and high risk AI systems.
(6) Check that their business activities do not result in provider obligations being transferred to their own business entities.
(7) For providers of high-risk AI systems or GPAI models, consider meeting the requirement to designate an authorised representative.
(8) Establish internal governance processes (see Evidence of Compliance), including dynamic monitoring of services once they are brought to market.
V. Important definitions in the Bill
1. Artificial intelligence system
The original text of the Act defines an AI system as 'a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' There are seven key components, each of which is broken down below:
(1) Machine-based: the system relies on hardware and software for data processing, learning and automation, and excludes data processing that relies on human beings for decision making. An AI system can be a stand-alone product, a feature embedded in a physical product, or a feature that is not embedded in a physical product.
(2) Designed to operate with varying levels of autonomy: the system can learn and work with some independence from human intervention, such as robot vacuums with intelligent obstacle avoidance and map recognition, self-driving cars, or automated stock-trading software. It does not include systems that require continuous human intervention for every decision, or systems without adaptiveness or self-learning that follow fixed, pre-designed mechanical workflows. A typical example is the common commercial drone, which has some automatic obstacle-avoidance routines but relies mainly on manually issued commands to operate.
(3) May exhibit adaptiveness: a system may have the ability to self-learn after deployment, but not every AI system is required to have this characteristic. This element of the definition can be difficult to distinguish from the previous one; the European Commission's guidance emphasises that adaptiveness is about the system's ability to evolve by itself, so that once deployed and in use, the system may respond to the same input with outputs that differ from those it produced before.
(4) For explicit or implicit objectives: explicit objectives are those the developer embeds directly into the system through its code; implicit objectives are not written into the code directly but arise from rules governing how the system operates. For example, a self-driving system's code may not explicitly state 'avoid traffic accidents', but instead set the rule of stopping at a red light. System goals also need not be fully specified in advance: recommendation systems use reinforcement learning to gradually narrow down each user's preferences, and user prompts can supplement the system's goals during the deployment phase.
(5) Infers from the input how to generate outputs: the system reasons from its input according to some logic to produce results. This clearly distinguishes AI systems from simple data processing, as inference involves learning, deduction, classification and similar operations on the data.
(6) Predictions, content, recommendations or decisions: AI systems generate outputs according to their functions, typically predictions, text, audio or video content, recommendations or decisions.
(7) Influence physical or virtual environments: the system's outputs have a practical effect on the outside world, whether a physical impact or an impact within an electronic or virtual environment.
2. Excluded AI systems
The AI Act is not directed at all AI systems, and under Article 2 of the Act, the following systems are excluded from regulation under the Act:
(1) Systems used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out these activities.
(2) Systems used by public authorities in third countries or by international organisations in compliance with international agreements for law enforcement or judicial cooperation with the EU or Member States.
(3) Systems used solely for scientific research and development or testing (other than testing under real-world conditions) before being placed on the market.
(4) A natural person using an AI system for purely personal, non-professional activities.
(5) Systems released under a free and open-source licence, unless they are placed on the market or put into service as high-risk AI systems, fall within the prohibited AI practices, or are subject to the transparency obligations for systems that interact directly with humans.
3. Artificial intelligence literacy (AI literacy)
AI literacy refers to the skills, knowledge and understanding that enable those deploying or providing AI systems (as well as other affected persons) to use AI systems intelligently and to be aware of the opportunities, risks and potential harms of AI systems. Article 4 of the Act imposes an obligation to take steps to ensure adequate AI literacy, although at present failure to comply does not appear to attract a direct administrative fine. The AI Office expects to publish a voluntary code of conduct by 2 May 2025 at the latest to help organisations promote AI literacy.
4. Deployer
Any individual or organisation using an AI system under its own authority (except where the system is used in the course of a purely personal, non-professional activity). This may include an employer making an AI system available to its employees, or a public body making an AI system available for public use.
5. Operator
A provider, product manufacturer, deployer, authorised representative, importer or distributor of an AI system.
6. Deepfake
A deepfake is defined as 'AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.' The use of deepfakes is always subject to the transparency obligations in Article 50.
7. General-purpose artificial intelligence model (GPAI model)
A general-purpose AI model is an AI model that is trained on large amounts of data using large-scale self-supervision, displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.
VI. AI Practices Prohibited by the Act
Article 5 of the AI Act sets out the categories of prohibited AI practices. These practices are banned outright because they are considered incompatible with fundamental EU values (respect for human dignity, freedom, equality, democracy and the rule of law) and rights (including the rights to non-discrimination, data protection and privacy, and children's rights). Article 5, which is already applicable in practice, prohibits AI practices such as:
(1) Subliminal technology which can cause or is reasonably likely to cause significant harm to a person by impairing their ability to make informed decisions and thereby seriously distorting their behaviour.
(2) Taking advantage of the vulnerability of an individual or a particular group of people (for example, because of their age, disability or financial situation) and thereby seriously distorting their behaviour in a way that causes, or is reasonably likely to cause, significant harm.
(3) Social scoring systems based on known, inferred or predicted personality traits that result in harmful or unfavourable treatment.
(4) Risk assessment systems (other than those supporting human assessments based on verifiable facts) used to assess a person's risk of committing a criminal offence or reoffending.
(5) Indiscriminate crawling of web pages for the purpose of creating or enhancing a facial recognition database.
(6) Emotion recognition systems in the workplace or educational institutions (except for medical or security reasons).
(7) Biometric classification systems used to infer, for example, race, political opinion, or religious identity.
(8) Real-time, remote biometric systems for use in public spaces for law enforcement purposes, with the exception of searching for kidnap victims, preserving life, and locating suspects of certain criminal activities, subject to safeguards and a few exceptions.
VII. High-Risk Artificial Intelligence Systems
The definition of high-risk AI systems is set out in Article 6 of the Act and covers two categories:
1. Product-related high-risk AI systems covered by EU harmonisation legislation and subject to third-party conformity assessment
An AI system intended to be used as a safety component of a product, or an AI system that is itself a product, covered by the EU harmonisation legislation listed in Annex I, where that product must undergo a third-party conformity assessment before being placed on the market or put into service. The main areas listed in Annex I include: machinery, toy safety, recreational craft and personal watercraft, lifts and lift safety components, equipment and protective systems for use in potentially explosive atmospheres, radio equipment, pressure equipment, cableway installations, personal protective equipment, appliances burning gaseous fuels, medical devices, civil aviation safety, two- or three-wheeled vehicles and quadricycles, agricultural and forestry vehicles, marine equipment, and the interoperability of rail systems within the EU.
2. High-risk AI systems listed in Annex III, covering multiple domains and subject to an assessment of the risk of significant harm
AI systems included in Annex III are high-risk unless they are considered not to pose a significant risk of harm. In addition to falling within an Annex III use case, a high-risk AI system must pose a significant risk of harm to the health, safety or fundamental rights of natural persons; Article 6(3) of the Act excludes from the high-risk category those Annex III systems that do not pose such a risk. Annex III mainly covers the following areas:
(1) Biometrics. Remote biometric identification systems (except where the sole purpose is identity verification), biometric categorisation based on sensitive or protected attributes, and AI systems for emotion recognition.
(2) Critical infrastructure. Safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
(3) Education and vocational training. Determining admission or access to learning institutions, assessing learning outcomes, evaluating the appropriate level of education received or acquired, and monitoring and detecting prohibited behaviours of students during examinations.
(4) Employment. Making recruitment or selection decisions (including evaluating candidates or placing targeted job advertisements), making decisions affecting terms and conditions of employment, promotion, termination or task allocation, and monitoring and evaluating performance and behaviour.
(5) Access to essential public services. Decisions affecting an individual's access to essential public services and benefits and to private services in specific areas such as credit scoring (except for the detection of financial fraud), life and health insurance.
(6) Law enforcement. Use as a polygraph or similar, or to assess the reliability of evidence, the risk of reoffending or the risk to victims, or to profile individuals during detection, investigation, or prosecution.
(7) Immigration, asylum and border control. This includes assistance in reviewing applications for asylum or visa clearance.
(8) Judicial and democratic process. For use by judicial authorities to research and interpret facts or to influence voting in elections or referendums.
In addition, under Article 6(4) of the Act, a provider that considers that an AI system listed in Annex III is not high-risk must document its assessment before placing the system on the market or putting it into service, register the system in the EU database, and provide the assessment to the national competent authority on request. Given that high-risk AI systems carry the most onerous technical and operator obligations, providers will want to ensure that their classification judgement is correct. The European Commission will provide guidelines detailing the application of Article 6 in practice, including practical examples of high-risk and non-high-risk AI systems.
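The classification logic of Article 6 described above can be summarised, very roughly, in the following sketch. The field names and the helper function is_high_risk are hypothetical conveniences for illustration; the actual assessment under Article 6(3) is a documented legal analysis, not a boolean check.

from dataclasses import dataclass

@dataclass
class AISystemProfile:
    safety_component_of_annex_i_product: bool     # Article 6(1): Annex I product or safety component
    needs_third_party_conformity_assessment: bool
    annex_iii_use_case: bool                      # Article 6(2): listed in Annex III
    significant_risk_of_harm: bool                # Article 6(3): carve-out if no significant risk

def is_high_risk(p: AISystemProfile) -> bool:
    # Case 1: Annex I product (or its safety component) requiring third-party assessment.
    if p.safety_component_of_annex_i_product and p.needs_third_party_conformity_assessment:
        return True
    # Case 2: Annex III use case, unless the Article 6(3) exclusion applies.
    if p.annex_iii_use_case and p.significant_risk_of_harm:
        return True
    return False

# Example: an Annex III recruitment-screening system assessed as posing a
# significant risk to fundamental rights would be classified as high-risk.
print(is_high_risk(AISystemProfile(False, False, True, True)))  # True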
VIII. Technical Compliance Requirements for High-Risk AI Systems
1. Basic Technical Requirements
If an AI system is categorised as high-risk, it attracts a series of detailed compliance requirements, set out in Articles 8 to 15 of the Act. The key requirements include:
(1) Establishment of a comprehensive risk management system. Key obligations include identifying the risks foreseeable when AI systems are used for their intended purpose, as well as the risks of possible misuse. This should be an ongoing process spanning the full lifecycle of the AI system, from design to post-market monitoring, with the goal of minimising or eliminating risks to an acceptable level, where technically feasible (Article 9 of the Act).
(2) Train the system on training, validation and test datasets that meet defined quality criteria. Given the intended purpose of the AI system, this obligation aims to ensure that the data are relevant, representative, as free of bias as possible and, to the greatest extent possible, free of errors and complete (Article 10 of the Act).
(3) Ensure, where technically feasible, automatic record-keeping (i.e. an event log) over the lifetime of the AI system. The system logs must be kept for at least six months. This makes the operation of the AI system traceable for risk identification and post-market monitoring. At a minimum, each log must record the period of each use, the reference database used, the input data that led to a match, and the identities of the persons involved in verifying the results (Article 12 of the Act; an illustrative log-entry sketch follows this list).
(4) Ensure that information on the operation of high-risk AI systems is made available to deployers. The information must be sufficiently transparent and be accompanied by instructions for use, so that deployers can interpret the system's output and use it appropriately (Article 13 of the Act).
(5) Enable human oversight during use. This is intended to minimise the risks that arise when a high-risk AI system is used in accordance with its intended purpose or under reasonably foreseeable conditions of misuse. It is essential that the AI system allows the person performing the oversight to interrupt it or stop it in a safe state (Article 14 of the Act).
(6) Perform with an appropriate level of accuracy, robustness and cybersecurity and remain so throughout its lifecycle. The European Commission will develop benchmarks to help organisations measure the technical aspects of this obligation. Cyber resilience techniques used by AI systems must address third-party attacks that attempt to manipulate training data, such as data poisoning or model evasion (Article 15 of the Act).
(7) Accompany the system with technical documentation sufficient to demonstrate compliance with the requirements of the AI Act. This documentation is one of the key pillars on which the conformity of a high-risk AI system is established; the requirements are extensive, and the documentation must contain at least the elements listed in Annex IV. Information describing the development of the high-risk AI system and its performance throughout its lifecycle is essential for ensuring the compliance of downstream members of the AI supply chain. The European Commission will provide a simplified technical documentation form for use by SMEs (Article 11 of the Act).
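As a concrete illustration of the minimum logging elements listed in item (3) above, the following sketch defines one possible log-entry record. The field names are assumptions chosen to mirror the elements named in the text; real systems will typically log considerably more.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class UsageLogEntry:
    period_start: datetime                # start of the period of use
    period_end: datetime                  # end of the period of use
    reference_database: str               # database against which the input was checked
    matching_input_data: str              # input data that led to a match
    human_reviewers: List[str] = field(default_factory=list)  # persons verifying the result

entry = UsageLogEntry(
    period_start=datetime(2026, 8, 2, 9, 0),
    period_end=datetime(2026, 8, 2, 9, 5),
    reference_database="internal-reference-db-v3",  # hypothetical name
    matching_input_data="frame_001423.jpg",         # hypothetical input
    human_reviewers=["reviewer_on_duty"],
)
print(entry)  # entries like this would be retained for at least six months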
2. Technical Documentation Requirements
The technical documentation required by Article 11 and Annex IV of the Act, referred to in item (7) above, must include the following (an illustrative skeleton follows this list):
(1) A general description of the AI system, including: a. the intended purpose, the name of the provider and version details reflecting changes from previous versions; b. how the AI system interacts with hardware or software; c. the relevant software or firmware versions and the requirements relating to version updates; d. a description of all forms in which the AI system is placed on the market or put into service (e.g. software packages, embedded hardware, downloads or APIs); e. the intended hardware environment; f. if part of a product, details of the external features, markings and internal layout of the product; g. basic details of the user interface and the instructions for use provided to deployers.
(2) A detailed description of the elements of the AI system and its development process, including: a. the development steps, any pre-trained systems or tools used, and how they were used, integrated, or modified; b. the general logic of the AI system and algorithms, design principles and assumptions about their mode of use, the main classification choices, and a description of the expected output and quality of the outputs; c. the development, testing, and training of the system architecture and computational resources; d. training datasets and their sources; e. human oversight measures; f. predetermined changes to the AI system and its performance to ensure continued compliance of the AI system with the AI Act; g. validation and testing procedures; and h. cybersecurity measures.
(3) Detailed information on the monitoring, functioning and control of the AI system, including the degree of accuracy expected overall.
(4) The appropriateness of performance indicators.
(5) Details of the risk management system required by Article 9.
(6) A description of relevant changes made to the AI system over its life cycle.
(7) A list of harmonised standards that apply in whole or in part or, in the absence of harmonised standards, a list of other relevant standards and technical specifications that apply.
(8) A copy of the EU Declaration of Conformity.
(9) Details of the system's post-market monitoring plan and the methodology used to assess the performance of the AI system for post-market monitoring purposes.
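A hedged, machine-readable skeleton of the Annex IV checklist above is sketched below; some organisations may find such a structure useful as an internal compliance tracker. The keys paraphrase the list items and the placeholder values are purely illustrative, not a prescribed format.

# Placeholder skeleton of the Annex IV documentation items; None marks sections
# still to be completed. Keys are paraphrases, not official headings.
annex_iv_technical_documentation = {
    "general_description": {
        "intended_purpose_provider_and_version": None,
        "hardware_software_interaction": None,
        "forms_placed_on_market": None,   # e.g. software package, embedded hardware, download, API
        "intended_hardware_environment": None,
        "user_interface_and_instructions_for_use": None,
    },
    "development_process": {
        "development_steps_and_pretrained_components": None,
        "design_logic_and_key_choices": None,
        "architecture_and_computational_resources": None,
        "training_datasets_and_sources": None,
        "human_oversight_measures": None,
        "validation_and_testing_procedures": None,
        "cybersecurity_measures": None,
    },
    "monitoring_functioning_and_control": None,
    "appropriateness_of_performance_metrics": None,
    "risk_management_system": None,        # Article 9
    "lifecycle_changes": None,
    "harmonised_standards_applied": None,
    "eu_declaration_of_conformity_copy": None,
    "post_market_monitoring_plan": None,
}

incomplete = [k for k, v in annex_iv_technical_documentation.items() if v is None]
print(f"{len(incomplete)} top-level sections still to be completed")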
IX. Compliance requirements for operators of high-risk AI systems
In addition to the technical compliance requirements, further compliance requirements apply to every organisation in the supply chain of high-risk AI systems. Most of the obligations fall on the provider, with organisations downstream of the AI supply chain obliged to verify the compliance of organisations upstream of the chain.
1. Obligations of providers
The compliance obligations of providers of high-risk AI systems before placing them on the market or putting them into service are extensive. Articles 16 to 22 of the Act set out the following obligations:
(1) Ensure that the AI system meets technical compliance requirements.
(2) Affix the CE marking and indicate the provider's name, address and any registered trade mark (Article 16 of the Act).
(3) Establish a quality management system (Article 17 of the Act). This should include: a. strategies for completing conformity assessment and achieving regulatory compliance; b. processes for AI system design, quality control, post-market monitoring and records management; c. testing procedures for all phases of development of the high-risk AI system, the frequency of testing, and the circumstances in which serious incidents are reported; d. the technical specifications and harmonised standards used, or any other means by which the high-risk AI system complies with the AI Act; e. data management processes, including data labelling, mining and aggregation.
(4) Retention of specified documentation for ten years after the AI system has been placed on the market or put into service, including technical documentation, details of the quality management system, the EU Declaration of Conformity and any decisions or approvals of the Notified Body (Article 18 of the Act).
(5) Retention of the automatically generated logs. The logs must be retained for a minimum of six months (Article 19 of the Act).
(6) Completion of the conformity assessment procedure and drafting of the EU Declaration of Conformity (Article 16 of the Act).
(7) Register in the EU central database. If high-risk AI systems are used in national critical digital infrastructures, they should also be registered at the national level (Article 16 of the Act).
(8) For providers established outside the EU, appointment of an authorised representative established in the EU (Article 22 of the Act).
(9) Providers must also take immediate corrective action if their high-risk AI systems are no longer compliant with the AI Act, and must cooperate with the competent authorities as required (Articles 20 and 21 of the Act).
2. Obligations of the importer
Before placing a high-risk AI system on the market, the importer must verify that the provider has completed the relevant conformity assessment procedure for the high-risk AI system, drawn up the technical documentation, appointed an authorised representative, affixed the CE marking and provided the EU declaration of conformity. The importer must indicate its name, address and registered trade mark on the AI system or its packaging and cooperate with national authorities on request.
If the importer considers that the AI system is not compliant, it must not place it on the market until it has been brought into conformity. Importers also have a monitoring obligation and must report potential risks to the provider, the authorised representative and the market surveillance authorities, and must cooperate with the national competent authorities on request. Importers must retain a copy of the certificate issued by the notified body (where applicable), the instructions for use and the EU declaration of conformity for ten years after the high-risk system is placed on the market or put into service (Article 23 of the Act).
3. Obligations of the distributor
Before making a high-risk AI system available on the market, the distributor must verify that the CE marking has been affixed, that the system is accompanied by the EU declaration of conformity and instructions for use, and that both the provider and the importer have complied with their upstream compliance obligations.
Once the high-risk AI system has been placed on the market, distributors have a monitoring obligation. They must take corrective action by notifying the provider or importer of the AI system or withdrawing the AI system if they have reason to believe that the AI system is no longer compliant. Distributors must also notify the provider or importer of the AI system immediately if they believe that the use of the AI system poses a risk to the health, safety or fundamental rights of EU citizens and must co-operate with the national competent authorities (Article 24 of the Act).
4. Obligations of the deployer
(1) Deployers have a slightly different role in the AI supply chain as they will be using high-risk AI systems to perform activities within their organisations.
Deployers are required to put in place appropriate technical and organisational measures to ensure that they use high-risk AI systems in accordance with the instructions for use. They must assign human oversight to individuals with the appropriate skills, training and authority. Where input data is within their control, deployers must ensure that it is relevant and sufficiently representative of the intended purpose of the high-risk AI system. Deployers must monitor the performance of the high-risk AI system in accordance with its instructions for use and inform the provider and the relevant market surveillance authority where such use may present a risk. Deployers are also required to retain automatically generated logs for at least six months. Those deploying high-risk AI systems in the workplace must inform affected employees before putting the system into use. Public-sector deployers must only use high-risk AI systems that are registered in the EU database and must also register their own use of such systems (Article 26 of the Act).
(2) Bodies governed by public law, private operators providing public services, and deployers of certain high-risk AI systems, such as systems used to assess creditworthiness, establish credit scores, or assess and price risk in life or health insurance, must complete a Fundamental Rights Impact Assessment (FRIA) before first using a high-risk AI system. To reduce the compliance burden, the FRIA may draw on information already compiled for earlier impact assessments, such as a Data Protection Impact Assessment (DPIA) (Article 27 of the Act).
5. Transfer of AI system provider obligations
Organisations in the AI supply chain should exercise caution if they wish to make use of others' AI systems, as in certain circumstances provider obligations may be transferred to them. These circumstances include the use of AI systems that were not initially classified as high-risk, and the triggers under Article 25 include putting one's own trade mark on a high-risk AI system and/or making a substantial modification to it.
Under Article 25(1)(a) of the Act, the provider's obligations are transferred to a distributor, importer, deployer or other third party if it puts its name or trade mark on a high-risk AI system that is already on the market or in service. In addition, under Article 25(1)(b) and (c), where a substantial modification is made to a high-risk AI system already on the market or in service such that it remains high-risk, or where the intended purpose of an AI system, including a GPAI system, that is not classified as high-risk but is already on the market is modified such that it becomes a high-risk AI system, the provider's obligations are likewise transferred to the distributor, importer, deployer or other third party concerned.
Where any of the circumstances in Article 25(1) arises, the original provider of the high-risk AI system is no longer considered the provider of that particular AI system under the AI Act. However, the original provider must supply the technical documentation and provide assistance to the organisation to which the provider obligations are transferred. The European Commission will in future develop guidance on what constitutes a substantial modification.
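The transfer triggers in Article 25(1) described above can be summarised in the following sketch. The function and its parameter names are illustrative assumptions; whether a change amounts to a substantial modification is ultimately a legal judgement on which the Commission has said it will issue guidance.

def provider_obligations_transfer(puts_own_name_or_trademark_on_system: bool,
                                  substantially_modifies_high_risk_system: bool,
                                  changes_intended_purpose_making_it_high_risk: bool) -> bool:
    """Return True if a downstream operator would assume the provider's obligations."""
    return (puts_own_name_or_trademark_on_system              # Article 25(1)(a)
            or substantially_modifies_high_risk_system        # Article 25(1)(b)
            or changes_intended_purpose_making_it_high_risk)  # Article 25(1)(c)

# Example: a deployer rebranding an existing high-risk system under its own
# trade mark would take on the provider obligations for that system.
print(provider_obligations_transfer(True, False, False))  # True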
X. General Purpose AI Models (GPAI)
The AI Act contains special provisions governing GPAI models, including a requirement to publish a summary of training data. Given that GPAI models can be used in high-risk use cases, providers may be subject to both the high-risk and the GPAI model obligations, although many of those obligations cover similar areas. The AI Office will assist in drafting a code of practice to help organisations comply with these rules (Article 56 of the Act). The GPAI code of practice is currently under development, with a final version expected to be agreed by early May 2025.
1. Basic compliance obligations of the GPAI model
Articles 53 and 54 of the Act set out the obligations that apply to all GPAI model providers. Providers must:
(1) Create and maintain up-to-date technical documentation for the GPAI model.
(2) Create and maintain sufficient up-to-date information to enable downstream members of the GPAI model supply chain to comply with their own AI Act obligations, including how the GPAI model interacts with external hardware or software and any relevant software versions, as well as the technical details of the model's integration, instructions for use, schemas, data formats, and training data and its sources.
(3) Publish a summary of the training data and establish provisions for the protection of copyrighted works.
(4) Providers of GPAI models established in third countries must appoint an authorised representative.
2. GPAI model technical data requirements
(1) The general description of a GPAI model must include: a. the tasks it is intended to perform and the types of AI systems into which it can be integrated; b. acceptable use policies; c. the date and method of release; d. the architecture and number of parameters; e. the modality (text, images) and format of inputs and outputs; and f. licence details.
(2) A detailed description of the GPAI model specific to the model development process including, at a minimum: a. instructions and technical means for the use of the GPAI model for integration with other AI systems; b. design specifications and training methods; c. training data, sources, and methods for bias detection; d. computational resources (e.g., number of floating point operations); and e. known or estimated energy consumption.
(3) Additional information required for GPAI models that incorporate systemic risk includes: a. Evaluation strategies and criteria; b. Applicable adversarial testing measures; and c. Detailed description of the AI system architecture and processes.
3. GPAI models with systemic risk
Under Article 51 of the Act, a GPAI model is classified as a GPAI model with systemic risk if it has high-impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks, or if the European Commission decides, on its own initiative or following a qualified alert from the scientific panel, that it has capabilities or impact equivalent to that criterion, having regard to the criteria set out in Annex XIII. A GPAI model is presumed to have high-impact capabilities if the cumulative amount of computation used for its training exceeds 10^25 floating-point operations (FLOPs). Under Article 52 of the Act, providers of GPAI models with systemic risk must notify the European Commission within two weeks of reaching, or learning that they will reach, the systemic-risk threshold. If a provider whose GPAI model meets the criteria believes that the model nevertheless does not present systemic risk, it may present arguments to the Commission; if the Commission disagrees, the provider may request a reassessment no earlier than six months after the initial designation decision. GPAI models with systemic risk are published on a list regularly updated by the European Commission.
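To give a sense of scale for the 10^25 FLOP presumption, the sketch below estimates training compute using the common 6 × parameters × training tokens rule of thumb from the scaling-law literature. That estimation rule and the model sizes used are assumptions for illustration only; the Act itself only sets the cumulative-compute threshold.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    # Rough dense-transformer estimate: ~6 FLOPs per parameter per training token.
    return 6 * n_parameters * n_training_tokens

candidates = {
    "hypothetical 7B-parameter model, 2T tokens": estimated_training_flops(7e9, 2e12),
    "hypothetical 500B-parameter model, 10T tokens": estimated_training_flops(500e9, 10e12),
}

for name, flops in candidates.items():
    presumed = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> presumed high-impact capability: {presumed}")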
In addition to complying with the obligations imposed on providers of GPAI models, providers of GPAI models with systemic risk are subject to the following additional compliance obligations under Article 55 of the Act:
(1) use state-of-the-art tools for model assessment and testing to identify and mitigate systemic risk.
(2) Assess and mitigate possible systemic risks at the EU level.
(3) Document and report any serious incidents and possible corrective actions to the AI Office and national authorities in a timely manner.
(4) Ensure that GPAI models (including their physical infrastructure) are adequately protected by cybersecurity.