Today's medical AI projects often need to process large volumes of patient information, such as medical records, medical images, and genomic data, to train and validate algorithmic models. Given the high sensitivity of this data and the risks it carries, both the GDPR and China's Personal Information Protection Law (PIPL) advocate or require a privacy impact assessment before high-risk data processing begins: a Data Protection Impact Assessment (DPIA) under the GDPR, known in China as a Personal Information Protection Impact Assessment (PIPIA). In addition, institutions must establish a clear internal data protection responsibility system so that compliance forms a closed loop.
01 Scenarios Where DPIA/PIPIA Applies
Regarding the scenarios where a DPIA applies, Article 35 of the GDPR requires that a DPIA be carried out before processing that uses new technologies, involves large-scale processing of special categories of data, or is otherwise likely to result in a high risk to the rights and freedoms of individuals. For example, suppose a hospital plans to launch an AI-based imaging diagnosis platform that must collect CT images from hundreds of thousands of patients for algorithm training. This clearly constitutes large-scale processing of special categories of (health) data, so the hospital must complete a DPIA in advance and properly retain the documentation. If the hospital fails to fulfill its DPIA obligations, the supervisory authority may impose fines commensurate with the severity of the violation.
Similarly, Article 55 of China's PIPL requires a personal information protection impact assessment before high-risk processing activities such as handling sensitive personal information, using personal information in automated decision-making, and transferring personal information abroad, and Article 56 requires the assessment report and processing records to be retained for at least three years. Medical AI projects typically involve at least two of these high-risk elements, processing sensitive information and automated decision-making, so it is crucial to prepare a formal assessment report before the project goes live, as it will be essential in responding to regulatory audits or litigation.
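To make these triggers concrete, the following minimal Python sketch encodes them as a pre-launch checklist. It is illustrative only: the ProjectProfile fields, the impact_assessment_required helper, and the 10,000-subject threshold for “large scale” are assumptions made for demonstration, since neither the GDPR nor the PIPL fixes a numeric cutoff, and the final determination always rests with legal review.

```python
# Hypothetical triage helper: field names and the threshold are
# illustrative assumptions, not definitions from the GDPR or the PIPL.
from dataclasses import dataclass

@dataclass
class ProjectProfile:
    uses_new_technology: bool        # e.g., a novel AI imaging model
    processes_sensitive_data: bool   # health, biometric, or genomic data
    automated_decision_making: bool  # e.g., AI-driven diagnosis
    cross_border_transfer: bool      # personal data leaves the jurisdiction
    subject_count: int               # number of data subjects involved

def impact_assessment_required(p: ProjectProfile, large_scale: int = 10_000) -> bool:
    """Return True if any high-risk trigger suggests a DPIA/PIPIA is needed.

    The 10,000-subject cutoff is an assumed placeholder; neither law
    fixes a numeric threshold, so legal review makes the final call.
    """
    return (
        p.processes_sensitive_data
        or p.automated_decision_making
        or p.cross_border_transfer
        or (p.uses_new_technology and p.subject_count >= large_scale)
    )

# Example: the AI imaging platform described above
platform = ProjectProfile(
    uses_new_technology=True,
    processes_sensitive_data=True,
    automated_decision_making=True,
    cross_border_transfer=False,
    subject_count=300_000,
)
assert impact_assessment_required(platform)  # DPIA/PIPIA must precede launch
```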
02 Main Contents and Steps of DPIA/PIPIA
The main contents of a DPIA/PIPIA typically cover the following steps:
a. Describe the project and data flow: Explain the purpose and functions of the AI project, the types of medical data collected, the data sources, how the data is stored and shared, and the expected retention period.
b. Identify risks: List potential compliance and ethical risks relating to legal basis, data minimization, security measures, and user rights.
c. Assess impacts: Quantitatively or qualitatively evaluate the severity of the risks to personal privacy and information security, such as data breaches, algorithmic discrimination, and cross-border violations.
d. Develop mitigation measures: Propose concrete measures to reduce risk, such as pseudonymization, reducing data fields, adding human review, and improving notice and withdrawal mechanisms (a minimal pseudonymization sketch follows this list).
e. Draw conclusions and document them: Make an overall judgment on whether the residual risks are acceptable; if they cannot be reduced to a controllable level, recommend suspending the project or revising the plan.
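To illustrate the pseudonymization mentioned in step d, here is a minimal Python sketch that combines a keyed hash (HMAC-SHA256) with data-field reduction. The key handling, field names, and record layout are assumptions made for demonstration; in a real deployment the key would live in a managed secrets store and any re-identification table would sit under separate access control.

```python
# Minimal pseudonymization sketch using a keyed hash (HMAC-SHA256).
# Key handling and field names are illustrative assumptions; real
# systems would manage the key in an HSM or secrets service.
import hmac
import hashlib

PSEUDONYM_KEY = b"replace-with-securely-stored-key"  # assumption: key management out of scope

def pseudonymize_id(patient_id: str) -> str:
    """Derive a stable pseudonym; not reversible without the key."""
    return hmac.new(PSEUDONYM_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

def minimize_record(record: dict) -> dict:
    """Drop direct identifiers and keep only fields needed for training."""
    allowed = {"age", "sex", "ct_image_ref", "diagnosis_code"}  # assumed field names
    out = {k: v for k, v in record.items() if k in allowed}
    out["patient_pseudonym"] = pseudonymize_id(record["patient_id"])
    return out

raw = {"patient_id": "P-000123", "name": "Zhang San", "age": 54,
       "sex": "M", "ct_image_ref": "ct/000123.dcm", "diagnosis_code": "C34.1"}
print(minimize_record(raw))  # name and raw patient ID are removed
```

A keyed hash is used here instead of a plain hash because patient identifiers come from a small, guessable space; without the key, pseudonyms cannot be reversed by brute force.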
Through this process, AI development teams and compliance departments can gain a deep understanding of the potential impact of the project on individual rights and provide decision-makers with a comprehensive list of risks and countermeasures.
03 Key Points of Internal Governance: Organizational Structure and Division of Responsibilities
In terms of internal governance, organizational structure and division of responsibilities are key to ensuring compliance. The GDPR requires institutions whose core activities involve large-scale processing of sensitive data, such as hospitals, insurance companies, and research institutions, to appoint a data protection officer (DPO) who reports directly to the highest level of management. Article 52 of China's PIPL likewise requires personal information processors whose processing volume reaches the threshold set by the national cyberspace authority to designate a person in charge of personal information protection, who coordinates privacy compliance and responds to inquiries from the public and regulators. Healthcare institutions often establish an “Information Security and Compliance Committee,” led by hospital leadership and bringing together compliance and legal personnel, IT experts, and clinical department representatives, to form a cross-departmental collaboration mechanism.
Additionally, standardized processes are indispensable. Before an AI project is designed or introduced, the compliance department must determine whether it requires a DPIA/PIPIA and, if so, guide the project team in completing the assessment report. After the project goes live, it must be continuously monitored and regularly audited, with the assessment updated promptly as circumstances change. Hospitals should also establish internal management rules that clearly define how patient data is stored, how access is authorized, and how internal violations are handled.
Furthermore, employee training and evaluation are equally important. Technical staff and doctors should understand that principles such as “data minimization” and “protection of sensitive information” are not optional guidelines but binding legal obligations. Regular training and drills, such as data breach emergency response exercises, should be conducted according to job requirements to ensure that all staff are fully informed. Compliance performance can also be incorporated into departmental and individual performance evaluations to create an effective incentive mechanism.
04 Common Issues and Areas for Improvement
However, there are still some common issues and areas for improvement in the compliance practices of medical AI projects. For example:
a. Lack of documentation during AI development: Some companies conduct a formal DPIA only at project completion or launch, paying little attention to privacy risks during actual development. The recommended remedy is “Privacy by Design,” integrating compliance and security considerations from the earliest stages of model development.
b. Internal data misuse: Some employees or researchers store large amounts of patient information on personal hard drives or share it with unrelated third parties, and when a breach occurs it is difficult to trace responsibility to specific individuals. Strict access controls and audit-logging mechanisms must therefore be implemented (a minimal sketch follows this list).
c. Cross-departmental communication barriers: Clinical departments, IT teams, and legal and compliance functions lack day-to-day collaboration, reducing the DPIA to a formality. The solution is to establish a unified compliance command system and a mandatory-approval gate for sensitive business, ensuring that professional teams take part in decisions at critical stages.
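As a sketch of what the access controls and audit logging mentioned in item b can look like in code, the following Python example wraps a data-access function so that every read attempt, permitted or denied, is written to an audit log before authorization is enforced. The role names, log format, and the read_patient_record placeholder are illustrative assumptions, not a prescribed standard.

```python
# Illustrative access-control and audit-log sketch; role names and
# the log format are assumptions, not a prescribed standard.
import logging
from datetime import datetime, timezone
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

AUTHORIZED_ROLES = {"attending_physician", "compliance_auditor"}  # assumed roles

def audited_access(func):
    """Log every attempt to read patient data, whether allowed or denied."""
    @wraps(func)
    def wrapper(user: str, role: str, patient_id: str):
        allowed = role in AUTHORIZED_ROLES
        audit_log.info("%s | user=%s role=%s patient=%s allowed=%s",
                       datetime.now(timezone.utc).isoformat(), user, role,
                       patient_id, allowed)
        if not allowed:
            raise PermissionError(f"{user} ({role}) may not access {patient_id}")
        return func(user, role, patient_id)
    return wrapper

@audited_access
def read_patient_record(user: str, role: str, patient_id: str) -> dict:
    return {"patient_id": patient_id}  # placeholder for a real data fetch

read_patient_record("dr_li", "attending_physician", "P-000123")   # logged and allowed
# read_patient_record("intern_wang", "researcher", "P-000123")    # logged, then denied
```

Logging the attempt before the authorization check is the point of the design: even denied accesses leave a trace, which is what makes responsibility traceable after a breach.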
05 Outlook: A Future of AI in Healthcare Based on Compliance
As artificial intelligence penetrates ever deeper into core medical operations, from clinical diagnosis to personalized treatment, a rigorous and operational compliance mechanism is not only a regulatory requirement but also helps hospitals and technology companies build public trust and guard against future risks. A DPIA (or PIPIA) is not merely a report; it embodies respect for patient rights and data security, and it marks a further level of professional maturity in the integration of medicine and technology.
As the potential of medical AI continues to unfold, compliance and privacy protection will serve as the “safety barriers” of its sustainable growth. Only when technology and law evolve in step can AI healthcare truly earn societal recognition and realize the vision of “using data to safeguard health and algorithms to nurture life.”