
Artificial Intelligence, Medical Devices and GDPR in Healthcare: Everything You Need to Know About the Current Legal Framework


There is no doubt that healthcare is one of the areas in which the use of Artificial Intelligence (AI) systems is growing most rapidly. There is equally full awareness that this growth offers great potential, but also carries significant risks.

In this respect, the EU Commission’s recent proposal for a Regulation on AI (currently under discussion in the EU Council, and soon to be adopted at first reading by the EU Parliament) is undoubtedly a legislative framework that could increase trust and security in the use of this technology in patient care.

It is therefore important to outline the current regulatory framework into which the new AI Regulation will fit.

Index of the topics

  • Artificial Intelligence and medical software: the regulatory framework
  • When software is classified as a medical device
  • SAMD: almost always, the Declaration of Conformity is not enough
  • AI software in Healthcare: aspects to be improved
  • GDPR and AI systems
  • Principle of accuracy of data
  • Principle of fairness
  • Principle of transparency
  • Privacy by design and by default
  • Comparison between risks and benefits of data processing
  • Ethics and AI

CURRENT REGULATORY FRAMEWORK FOR SOFTWARE IN HEALTHCARE

It is important to note that the current regulatory framework on medical software is already quite complete, if correctly applied. This body of law can therefore provide a sufficient level of security and reliability to both users and producers of AI software.

On this matter, the three main topics to address are:

  • Security and reliability of the software;
  • Correct data management;
  • Ethicality of software functionality.

These issues are already regulated in the following EU Regulations, Guidelines, and Documents:

  • MDR – EU Reg. 2017/745 on medical devices (2017);
  • GDPR – EU Reg. 2016/679 on data protection (2016);
  • “Ethics Guidelines for Trustworthy Artificial Intelligence” (2019) of the “High-Level Expert Group on AI”;
  • WHO Document “Ethics and governance of artificial intelligence for health” (28 June 2021).

This is, in essence, the initial regulatory framework into which the forthcoming AI Regulation will be integrated.

Let us now examine the most relevant aspects of these rules in relation to the use of AI in healthcare.

WHEN SOFTWARE IS CLASSIFIED AS A MEDICAL DEVICE

The recent MDR (EU Reg. 2017/745 on medical devices) introduces new rules on medical devices that strongly impact the area of medical software.

Firstly, the MDR defines in detail when software falls within the definition of a medical device (Software As a Medical Device – SAMD), clarifying that all software with a diagnostic or therapeutic purpose, along with software that supports healthcare professionals in making therapeutic decisions or helps in delivering healthcare services, must be considered a medical device (Article 2 and Annex VIII, Rule 11).

Secondly, the Medical Device Coordination Group has provided a specific guideline on the new classification of software (the “MDCG 2019-11 Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745”).

SAMD: ALMOST ALWAYS, THE DECLARATION OF CONFORMITY IS NOT ENOUGH

Under the previous Directive, 93/42/EEC, software mostly fell within Risk Class I.

This meant that, to place such software on the market, the manufacturer’s Declaration of Conformity, i.e. a self-declaration, was sufficient.

Today, the new classification rules introduced by the MDR place almost all SAMD within Classes IIa, IIb and III.

Consequently, the CE marking process, mandatory for commercial distribution, must pass through the approval of a Notified Body (NB), which verifies the software’s compliance with the General Safety and Performance Requirements (GSPR) listed in Annex I of the MDR.

AI SOFTWARE IN HEALTHCARE: ASPECTS TO BE IMPROVED

Due to the expansion of NB activity and the significant growth of AI software in healthcare, on 6 October 2021 the European Association Medical Devices – Notified Bodies issued a Position Paper offering suggestions to the legislator of the new AI Regulation on improving specific aspects currently under discussion.

In particular:

  • Proper coordination between the requirements of the MDR and those of the AI Regulation;
  • The need to lay down harmonised standards and common specifications to allow Notified Bodies to provide a uniform and transparent conformity assessment process;
  • The issuance of industry guidelines for implementing the AI Regulation in coordination with the existing regulatory framework of the New Legislative Framework;
  • In relation to Article 10 of the AI Proposal (“Data and data governance”), reconsideration of the requirement that data be “complete, accurate and sufficiently justified”, given that Real World Data (which are essential for AI) are in most cases inaccurate and not free of errors.

Finally, the MDR provides, in general terms, that compliance with the GSPR of Annex I must be demonstrated by a Clinical Evaluation (Article 61 et seq. MDR) showing the safety of the medical device and its clinical benefit.

With specific regard to the clinical evaluation of software, the MDCG has provided the following two guidelines:

  • MDCG 2019-16 Guidance on Cybersecurity for medical devices;
  • MDCG 2020-1 Guidance on Clinical Evaluation (MDR)/Performance Evaluation (IVDR) of Medical Device Software.

GDPR AND AI SYSTEMS

On the one hand, the application of the GDPR (EU Data Protection Regulation 2016/679) to AI systems presents some critical aspects (see the recent EDPB-EDPS Joint Opinion 5/2021 on the proposal for a Regulation on artificial intelligence).

On the other hand, the general framework of the GDPR already provides a range of possible answers.

The basis of the whole GDPR is Article 5 on the principles of data processing.

This article states that data must be processed fairly and in a transparent manner in relation to the data subject (Article 5(1)(a)) and that data must be accurate and kept up to date (Article 5(1)(d)).

In the field of AI, the principles of fairness, transparency, and accuracy of data must be read in close connection. Let us see how best to interpret each of them:

PRINCIPLE OF ACCURACY OF DATA

In general terms, this principle requires that data processing be accurate and that, where data are inaccurate, every reasonable step be taken to rectify or erase them.

In the field of AI, the accuracy of each individual datum must be understood as both an initial requirement and a final target, making accuracy the essential point of the entire processing of the data.

The “intelligence” of a machine depends, in fact, on the information that is given to it and that it processes.

Therefore, if the machine is given incorrect or inaccurate data, it will produce incorrect results.

This happens both where the system merely returns data through an input-output process, and where the system is able to learn and evolve based on the data it receives and processes.
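The "garbage in, garbage out" point above can be illustrated with a minimal sketch (not taken from the article): a toy nearest-neighbour "diagnosis" model whose output flips as soon as one training label is recorded inaccurately. The patient records and labels are entirely hypothetical.

```python
# Illustrative sketch: a toy 1-nearest-neighbour classifier showing how a
# single inaccurate training label propagates directly into the output.

def nearest_neighbour_predict(training_data, features):
    """Return the label of the training record closest to `features`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda record: distance(record[0], features))
    return closest[1]

# Hypothetical records: (heart_rate, temperature) -> label.
accurate_data = [((70, 36.8), "healthy"), ((120, 39.5), "ill")]
# Same patients, but one label was recorded incorrectly.
inaccurate_data = [((70, 36.8), "ill"), ((120, 39.5), "ill")]

patient = (72, 36.9)  # clearly close to the healthy profile
print(nearest_neighbour_predict(accurate_data, patient))    # "healthy"
print(nearest_neighbour_predict(inaccurate_data, patient))  # "ill"
```

The same failure mode, amplified across thousands of records, is what makes the accuracy principle so central to AI systems.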

PRINCIPLE OF FAIRNESS

This principle concerns the processing itself, which in AI systems must also take ethical aspects into account.

The ICO (Information Commissioner’s Office), in its Guidance on AI and data protection (in the section “How do the principles of lawfulness, fairness, and transparency apply to AI?”), rightly states:

… if you use an AI system to infer data about people, in order for this processing to be fair, you need to ensure that:

  • the system is sufficiently statistically accurate and avoids discrimination; and
  • you consider the impact of individuals’ reasonable expectations

The references to the avoidance of discrimination and to consideration of the data subject’s “reasonable expectations” seem to encompass many of the aspects relating to the ethics of AI software.

It follows that full compliance with the principle of fairness under Article 5 GDPR entails compliance with ethical principles in the processing.

PRINCIPLE OF TRANSPARENCY

The whole data process must also be transparent, meaning that the data subject should be able to understand how their data and information are processed.

Concerning Artificial Intelligence, Article 13(2)(f) establishes the obligation for the data controller to provide information about “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”.

The provision poses application difficulties; however, it is clear that the GDPR has established very clear transparency rules for systems as complex as AI, considering that only what is transparent can be assessed and, therefore, earn the trust and confidence of the individual concerned and of the community.

PRIVACY BY DESIGN AND BY DEFAULT

The entire process of designing and using an AI system must be carried out in light of a further essential rule of the GDPR: Article 25, which lays down the principles of privacy by design and privacy by default.

This means that already in the design phase, long before data processing starts, it is necessary to think through and define the purposes and objectives of the processing.

These purposes and objectives must be established by identifying the data needed to achieve them, determining which data are indispensable, anticipating how the system will operate, and defining the rules of conduct for the humans operating the system.

It follows that privacy by design and by default anticipates and accompanies compliance with the principles of accuracy, fairness, and transparency.

COMPARISON BETWEEN RISKS AND BENEFITS OF DATA TREATMENT

Article 35 GDPR also requires a Data Protection Impact Assessment (DPIA), i.e. an analysis of the risks arising from the data processing and an assessment of the proportionality between those risks and the benefits of the processing.

Among the risk measurement criteria, account must be taken of the data protection parameters (confidentiality, integrity and availability) and of the possible impact on the other fundamental rights of the individual, as protected by the Constitution and by European and international sources (see, on this point, the Guidelines on Data Protection Impact Assessment (DPIA)).

In the case of an AI system, an example of such a fundamental right is the principle of non-discrimination, which must be evaluated during the testing and programming phases of the algorithm.

In these phases it is necessary to verify that no discriminatory values are embedded in the code itself.

It is also necessary to indicate any corrective measures to be taken should such a risk arise during use of the AI system (for further details see “The Data Protection Impact Assessment as a Tool to Enforce Non-Discriminatory AI”).
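As a purely illustrative sketch of the kind of check that could feed into such a risk analysis (it is not part of any official DPIA methodology), one common technique is a disparate impact ratio: comparing the rate of positive outcomes an AI system produces for different groups. The group data and the 0.8 "four-fifths" threshold below are assumptions, not requirements drawn from the GDPR.

```python
# Illustrative sketch of a disparate impact check between two patient groups.
# Groups, outcomes, and the 0.8 threshold are hypothetical examples.

def selection_rate(outcomes):
    """Fraction of positive outcomes (1 = e.g. 'treatment approved')."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a_outcomes, group_b_outcomes):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a = selection_rate(group_a_outcomes)
    rate_b = selection_rate(group_b_outcomes)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model outputs for two groups of patients.
group_a = [1, 1, 1, 0, 1]  # 80% positive
group_b = [1, 0, 0, 0, 1]  # 40% positive

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:  # assumed threshold for flagging a risk
    print("Potential discrimination risk: corrective measures needed")
```

A result well below parity would be exactly the kind of finding a DPIA should record, together with the corrective measures to be taken.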

ETHICS AND AI

All of the above relates to the relationship between ethics and AI: this will be, as is well known, the focal point of all future EU regulatory activity on AI.

The development of AI systems will, therefore, have to respect a number of specific ethical principles such as non-discrimination and non-distortion principles.

To this end, the Data Protection Impact Assessment (DPIA) may already constitute a useful instrument for measuring the ethicality of software according to the current indications of the “High-Level Expert Group on Artificial Intelligence” (AI HLEG), which, among other things, has provided a self-assessment tool on the trustworthiness of AI.

This tool is a checklist designed to provide practical help to all those involved in developing AI systems who operate on the basis of the principles contained in the above-mentioned “Ethics Guidelines for Trustworthy Artificial Intelligence”.
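To give a sense of how such a checklist works in practice, here is a minimal sketch of how a trustworthiness self-assessment might be represented and scored programmatically. The questions are paraphrased illustrations of the Guidelines’ themes, not the official wording of the AI HLEG tool.

```python
# Minimal sketch of a self-assessment checklist. The topics and questions
# are paraphrased examples, not the official AI HLEG checklist text.

CHECKLIST = {
    "human agency and oversight": "Is there a human in the loop for critical decisions?",
    "technical robustness": "Has the system been tested against inaccurate input data?",
    "privacy and data governance": "Was a DPIA carried out before processing began?",
    "transparency": "Can the logic of automated decisions be explained to data subjects?",
    "non-discrimination": "Has the output been checked for disparate impact across groups?",
}

def assess(answers):
    """Return the fraction of requirements answered 'yes' and the open items."""
    open_items = [topic for topic in CHECKLIST if not answers.get(topic, False)]
    score = (len(CHECKLIST) - len(open_items)) / len(CHECKLIST)
    return score, open_items

# Example run: three of five requirements satisfied so far.
score, open_items = assess({
    "human agency and oversight": True,
    "privacy and data governance": True,
    "transparency": True,
})
print(f"{score:.0%} satisfied; open items: {open_items}")
```

The open items then point directly to the areas where the developer must document further measures before the system can be considered trustworthy.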
