In the contemporary wave of medical digitisation, the most central and high-risk asset is often health data, including medical histories, diagnostic and treatment records, genetic test results, surgical records, and health monitoring data. If such information is illegally accessed or misused, it may lead not only to economic fraud but also to harm to patients' dignity and even social discrimination. Legislation in most jurisdictions therefore affords health data the most stringent protection. Although the EU's GDPR and China's Personal Information Protection Law (PIPL) differ in conceptual design, their goal is essentially the same: to classify such data as a high-risk category subject to special rules.
01 Special Categories of Personal Data under the GDPR
Article 9 of the GDPR establishes a general prohibition on the processing of 'special categories of personal data', which include data concerning health. 'Data concerning health' is not limited to medical records collected by medical institutions; it extends to any information that reveals an individual's health status, such as physiological indicators monitored by wearable devices, genetic sequence data, and psychological counselling records. Once information is identified as a special category of personal data, the GDPR takes a 'prohibited in principle, permitted by exception' approach: processing is lawful only where one of the exceptions explicitly listed in the law applies.
One of the common exceptions is the 'explicit consent of the data subject', but in healthcare scenarios this is often not the only basis. For example, when providing medical services to patients, hospitals can rely on the exception in Article 9(2)(h) of the GDPR, i.e. processing necessary for the provision of health care or preventive medicine, without obtaining repeated written consent (although transparency obligations, such as providing a privacy notice, still apply). This reflects the legislator's respect for the realities of healthcare: continuity of care does not allow for frequent interruptions to complete formal authorisations. However, if a healthcare organisation plans to use patient information for scientific analysis or commercial marketing, it may need to obtain a separate authorisation from the data subject or rely on the scientific research exception under Article 9(2)(j) of the GDPR, accompanied by safeguards such as de-identification.
02 Sensitive Personal Information under the PIPL
In contrast, China's Personal Information Protection Law (PIPL) adopts the concept of 'sensitive personal information', which includes medical and health data, biometric data, genetic data, religious beliefs, and the personal information of minors under the age of fourteen, and imposes additional, stricter protection requirements. Under Article 28 of the law, sensitive personal information is information that, once leaked or unlawfully used, could easily lead to infringement of an individual's personal dignity or harm to personal or property safety, and it must therefore be handled with particular caution.
For healthcare data, the PIPL does not list specific exemptions such as 'necessity of medical services' or 'public health' in the way the GDPR does. In healthcare data processing, 'individual consent' is the most emphasised and most common basis of lawfulness, although Article 13 of the law also lists other circumstances in which consent is not required. In public emergencies (e.g. the reporting of a major infectious disease), the Article 13 exception for processing necessary to perform statutory duties or statutory obligations may apply, obviating the need for individual patient consent. Both laws thus leave a certain amount of room for public health and clinical needs, but the GDPR offers a wider variety of exceptions and more flexibility in their application, whereas the Chinese law centres the data-use process on 'obtaining authorisation'.
03 Compliance Points: Basis of Legality and Application of Exceptions
When a medical institution or healthcare platform begins to process users' medical records, diagnostic information, and physical examination reports at scale, it must first determine the lawful basis for processing:
a. If relying on informed patient consent, ensure that the consent is 'separate, express, and fully informed';
b. If it is based on special provisions such as medical purposes (Article 9(2)(h) of the GDPR) or statutory duties concerning public health (Article 13 of the PIPL), there must be a statutory basis or proof of the fulfilment of a legal obligation;
c. If the data is used for scientific research, care must be taken both to comply with legal authorisations and, to the extent practicable, to de-identify or anonymise the data, obtain ethical approval, and complete a data protection impact assessment (DPIA, under the GDPR) or a personal information protection impact assessment (PIPIA, under the PIPL) to mitigate privacy risks.
In compliance practice, the aspect companies most often overlook is the secondary use of healthcare data, such as transferring what appears to be routine consultation data to another department for commercial analysis or targeted advertising, or sharing user health information with insurance companies for risk control. Without a proper legal basis, these actions may constitute over-collection and misuse, violating the sensitive data protection provisions of the GDPR or the PIPL and exposing the organisation to significant penalties or litigation risk.
04 Implications for commercial platforms and AI companies
a. Do not assume that a basic service agreement signed with a hospital or a user covers authorisation for the secondary use of sensitive information;
b. Large-scale processing of patient data needs careful evaluation; attempts to collect comprehensive health records on the mere basis that 'the user clicked through the privacy policy' are likely to be found to constitute over-collection;
c. For automated decision-making (e.g. AI scoring, AI-assisted diagnosis), it is also important to provide explicit channels for human intervention, obtain user consent or rely on a regulatory exemption, and take steps to ensure that algorithms do not produce discriminatory bias.
In summary, both the GDPR and the PIPL take a highly cautious approach to health data: the former subjects special categories of personal data to a general prohibition with enumerated exceptions; the latter classifies it as sensitive personal information, requiring in principle individual consent or other lawful authorisation. Although the two models differ, both point to the same consensus: high-risk data demands a high standard of protection. Technology deployment in the era of medical AI must fully respect these legal red lines in order to move forward.
Special Announcement:
This article was originally written by attorneys at JAVY Law Firm and represents the views of the authors; it should not be regarded as a formal legal opinion or advice issued by JAVY Law Firm or its attorneys. If you reproduce or quote any content of this article, please indicate the source.
© Beijing JAVY Law Firm Beijing ICP Registration No. 18018264-1