How healthcare companies can comply with the EU AI Act and avoid heavy penalties

AI is increasingly being utilised in the healthcare sector, offering numerous benefits and applications. It plays a significant role in improving medical diagnosis and treatment planning, streamlining administrative tasks, and accelerating medical research and drug discovery by analysing vast datasets. AI is also being used to predict and prevent health issues.

Regulation (EU) 2024/1689 (the “EU AI Act” or “AI Act”) came into force on 1 August 2024 and is the first comprehensive AI regulation adopted by a major regulator anywhere.

The timeline for its implementation and the obligation to comply with its requirements must be a key concern for any company utilising AI. Healthcare companies must pay particularly close attention to the AI Act, both because of the sensitive nature of their business and because many activities in the sector are classified as “high-risk”.

PENALTIES FOR NON-COMPLIANCE WITH THE AI ACT

Non-compliance with the AI Act can lead to significant financial penalties.

For prohibited AI practices, fines may reach up to €35 million or 7% of a company's worldwide annual turnover, whichever is higher. Providers of general-purpose AI models face fines of up to €15 million or 3% of turnover, and most other violations of the Act fall under the same cap. Additionally, supplying incorrect or misleading information to authorities can result in fines of up to €7.5 million or 1% of turnover. These penalties underscore the importance of adhering to the Act's requirements to avoid severe financial impacts.
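To illustrate the “whichever is higher” mechanics of these tiers, here is a minimal sketch in Python. The tier figures are the ones cited above; the names and structure are purely illustrative, and the special rules for SMEs (for whom the lower of the two amounts applies) are deliberately omitted.

```python
# Illustrative sketch of the "whichever is higher" penalty caps described above.
# Tier names and code structure are our own; figures are as cited in the text.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # up to €35M or 7% of turnover
    "other_violations": (15_000_000, 0.03),        # up to €15M or 3% of turnover
    "misleading_information": (7_500_000, 0.01),   # up to €7.5M or 1% of turnover
}

def maximum_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Return the theoretical maximum fine for a tier: the higher of the
    fixed cap and the turnover-based percentage (SME rules not modelled)."""
    fixed_cap, percentage = PENALTY_TIERS[tier]
    return max(fixed_cap, percentage * worldwide_annual_turnover)

# Example: with €600M worldwide turnover, a prohibited practice exposes the
# company to up to €42M (7% of €600M), since that exceeds the €35M fixed cap.
print(f"€{maximum_fine('prohibited_practices', 600_000_000):,.0f}")  # €42,000,000
```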

HIGH-RISK AI SYSTEMS IN HEALTHCARE

Healthcare-related AI systems can be classified as high-risk under two main circumstances:

  1. The first concerns medical devices, specifically AI systems intended to be used as safety components of products, or which are themselves products, covered by EU harmonisation legislation such as the Medical Devices Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR) and requiring third-party conformity assessment. This includes, for example, AI-assisted medical image diagnosis systems and all medical AI products classified as Class IIa or higher under the MDR, which in 2023 accounted for 44% of approved medical devices on the market according to the European Database on Medical Devices (EUDAMED).

  2. The second relates to specific use cases, namely AI systems listed in Annex III of the AI Act that pose a significant risk to health, safety, or fundamental rights. Examples include systems for evaluating and prioritising emergency calls, emergency patient triage systems, and systems for determining eligibility for health services.

To be deemed high-risk under Annex III, a system must pose a significant risk of harm. This is determined by evaluating, for example, the severity, intensity, and duration of the potential harm, the probability of the harm occurring, and the impact on individuals or groups.

Many AI systems in healthcare, such as diagnostic tools and therapeutic devices, are classified as high-risk. This classification requires compliance with rigorous standards for data governance, transparency, and risk management. Additionally, providers of high-risk AI systems in healthcare must conduct conformity assessments, register their systems in an EU database, and implement post-market surveillance measures. The AI Act interacts with the MDR and the IVDR, requiring healthcare companies to navigate a complex regulatory landscape to ensure compliance.

The AI Act is not solely about restrictions. Notably, it provides some exceptions for medical uses; for instance, certain otherwise prohibited practices, such as emotion recognition, are permitted where used for medical or safety reasons. Exemptions also exist for lawful medical practices that are non-manipulative, such as psychological treatment and rehabilitation, provided they comply with medical standards and obtain the necessary consent. To balance innovation with safety, the Act also provides for regulatory sandboxes, controlled testing environments in which companies, and start-ups in particular, can develop and test AI models before public release.

AI ACT IMPLEMENTATION TIMELINE

The entry into force of the AI Act in August 2024 marked the beginning of a transition period for companies to align their practices with the new regulations.

As with the GDPR, many companies are late to the game and have not realised how soon critical changes to their systems and practices may need to be implemented. Those running so-called “high-risk” systems in particular need to urgently identify any issues and make the necessary changes to achieve compliance. From 2 February 2025, AI systems deemed to pose an unacceptable risk will be prohibited. These include systems that manipulate human behaviour to circumvent users' free will or exploit the vulnerabilities of specific groups.

Key dates include:

  • 2 August 2025: Obligations for general-purpose AI (GPAI) models take effect, including requirements to ensure transparency and accountability in how these models are developed and used.
  • 2 August 2026: The Act becomes fully applicable to most AI systems. Companies must then comply with all requirements relating to data quality, documentation, and transparency.
  • 2 August 2027: The extended transition period ends for high-risk AI systems embedded in products covered by EU harmonisation legislation, such as medical devices under the MDR and IVDR.

RECOMMENDED ACTIONS FOR HEALTHCARE COMPANIES

While the AI Act places a compliance burden on companies utilising AI across all sectors, some of the challenges for healthcare companies are unique due to the nature of the business and existing regulatory requirements. The following points outline high-level recommendations for actions to be undertaken as soon as possible, if not done already.

1. Conduct risk assessments

Evaluate AI systems to determine their risk classification and implement the necessary compliance measures. This requires (see the sketch after this list):

  • A thorough inventory and assessment of all AI systems used within your organisation, including identification of AI systems across clinical, administrative, and research departments;
  • Documentation of each system's function, users, and origin, and classification of the systems according to the AI Act's risk categories;
  • Specific assessment and classification of high-risk AI systems, including a Fundamental Rights Impact Assessment (FRIA) where required;
  • Assessment of Quality Management System (QMS) requirements and Data Protection Impact Assessments (DPIAs).
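One practical way to run this inventory is a structured register with one record per AI system. The sketch below is a hypothetical illustration of such a record; the field names and the simplified risk categories are our own, not an official schema from the AI Act.

```python
# Minimal sketch of an AI system register entry for inventory and
# risk classification. Fields and categories are hypothetical simplifications.
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "high-risk (Annex I product or Annex III use case)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    department: str                 # clinical, administrative, or research
    function: str                   # what the system does
    origin: str                     # in-house or vendor (and which vendor)
    users: list[str]
    risk_category: RiskCategory
    assessments: list[str] = field(default_factory=list)  # e.g. FRIA, DPIA

# Hypothetical example entry: an emergency triage assistant, high-risk
# because emergency patient triage is an Annex III use case.
triage_tool = AISystemRecord(
    name="ED triage assistant",
    department="clinical",
    function="prioritises emergency patients",
    origin="Vendor X (hypothetical)",
    users=["emergency department staff"],
    risk_category=RiskCategory.HIGH,
    assessments=["FRIA", "DPIA"],
)
```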

2. Enhance data governance

Align data management practices with both GDPR and AI-specific requirements to protect patient data. This requires (see the sketch after this list):

  • Robust data quality and management processes, such as implementing rigorous data quality controls to ensure high-quality, unbiased datasets for AI system training and operation;
  • Data governance frameworks that cover the entire lifecycle of AI systems;
  • A risk-based approach, calling for the development of AI governance and compliance strategies, as well as a QMS specifically for AI development and use.
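As a concrete illustration of the first bullet, the following sketch runs two basic checks over a training dataset: completeness, and the representation of groups in a protected attribute. Both checks and their thresholds are hypothetical examples of what rigorous data quality controls might look like in practice; the AI Act does not prescribe these numbers.

```python
# Hypothetical example of basic dataset quality checks before AI training.
# The checks and thresholds are illustrative, not prescribed by the AI Act.
import pandas as pd

def run_quality_checks(df: pd.DataFrame, protected_column: str) -> list[str]:
    findings = []
    # Completeness: flag columns with excessive missing values.
    missing_share = df.isna().mean()
    for column, share in missing_share[missing_share > 0.05].items():
        findings.append(f"'{column}': {share:.0%} missing (threshold 5%)")
    # Representation: flag under-represented groups in a protected attribute,
    # a rough first proxy for potential bias in the training data.
    group_share = df[protected_column].value_counts(normalize=True)
    for group, share in group_share[group_share < 0.10].items():
        findings.append(f"group '{group}' is only {share:.0%} of the dataset")
    return findings
```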

3. Develop compliance frameworks

Integrate AI Act obligations into existing compliance frameworks to meet both sector-specific and AI-specific regulations. This requires:

  • Reviewing and determining which systems affect EU patients or process EU health data, as the Act can apply even if the company is not based in the EU;
  • Aligning compliance efforts with existing Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) requirements;
  • Ensuring that AI systems comply with GDPR requirements, especially when processing health-related data.

4. Engage in continuous monitoring

Establish systems for incident reporting and post-market surveillance to ensure the ongoing compliance and safety of AI systems. This includes (see the sketch after this list):

  • Implementing procedures to collect, document, and analyse relevant data on the performance of high-risk AI systems throughout their lifetime;
  • Developing mechanisms to detect and report malfunctions, unexpected results, or any changes in performance that may affect safety or compliance with the Act;
  • Integrating AI-specific quality control processes into existing QMS frameworks;
  • Conducting continuous risk assessment and mitigation.
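To make the surveillance idea more tangible, here is a minimal sketch of a performance log with a simple drift alert. The metric, window, and threshold are hypothetical; a real post-market surveillance plan must follow the performance claims in the system's technical documentation and the procedures in the QMS.

```python
# Minimal sketch of post-market performance monitoring with a drift alert.
# Metric, window, and tolerance are hypothetical illustrations.
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class PerformanceSample:
    day: date
    accuracy: float  # or whichever metric the documentation commits to

def check_for_drift(samples: list[PerformanceSample],
                    baseline: float, tolerance: float = 0.05) -> str | None:
    """Flag a potential reportable change when recent performance drops
    more than `tolerance` below the documented baseline."""
    recent = mean(s.accuracy for s in samples[-30:])  # last 30 samples
    if recent < baseline - tolerance:
        return (f"ALERT: recent mean accuracy {recent:.3f} is more than "
                f"{tolerance:.0%} below the documented baseline {baseline:.3f}; "
                "assess whether this constitutes a reportable incident.")
    return None
```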

5. Review contracts

As part of the activities described above, healthcare companies will inevitably need to conduct thorough contract reviews to ensure compliance with the AI Act.

A wide range of agreements may need to be revised and adjusted. For example, in supplier and vendor agreements, the responsibilities for AI Act compliance, including data governance, transparency, and human oversight requirements, should be defined. Suppliers should also be requested to provide necessary documentation and information about AI systems to enable healthcare providers to meet their transparency and oversight obligations. Risk assessment, mitigation, and incident reporting should also be covered, particularly for high-risk systems.

In licence agreements, collaboration agreements, or other agreements dealing with intellectual property, the ownership of AI algorithms, models, and associated data needs to be clarified. Any licensing agreements for AI technologies should be reviewed to ensure that they allow for compliance activities such as audits and the fulfilment of transparency requirements.

Data processing agreements must be examined carefully due to the AI Act's stringent requirements on data governance and its interaction with GDPR. The agreements should cover the specific data requirements for AI training, validation, and testing. Provisions should be included that align with both GDPR and AI Act requirements for handling sensitive health data.

Other specific terms that may require attention are, for example:

  • Indemnification terms: Ensure appropriate indemnification for potential regulatory fines or legal actions resulting from AI Act violations;
  • Transparency disclosures: Ensure agreements include necessary disclosures about AI system use and capabilities;
  • Maintenance and updates: Outline responsibilities for ongoing system maintenance, updates, and compliance with evolving AI Act requirements.

By proactively addressing the areas described above, including a structured review of contracts, healthcare companies can better position themselves to meet the compliance requirements of the EU AI Act while managing their relationships with AI providers, partners, and users to maintain innovation in medical technologies.

If you would like to discuss the impact of and compliance with the AI Act in more detail, please contact us today.