The EU AI Act and SaMD Development
Understanding the EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for regulating artificial intelligence. It was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024. Its defining feature is a risk-based classification of AI systems into four tiers with progressively stringent regulatory requirements: unacceptable risk, high risk, limited risk, and minimal risk.
Phased Implementation Timeline and Applicable Requirements
Following its entry into force, the AI Act's requirements apply gradually according to a phased timeline. The primary milestones are as follows: prohibitions on AI practices posing unacceptable risk took effect on February 2, 2025. Obligations for providers of general-purpose AI (GPAI) models apply from August 2, 2025. The core obligations for high-risk AI systems apply from August 2, 2026. For AI systems embedded in products already regulated under EU product safety legislation, such as devices subject to the Medical Device Regulation (MDR), the compliance deadline is extended to August 2, 2027.
High-risk AI systems are subject to comprehensive requirements, including a risk management system, high-quality training, validation, and testing data, detailed technical documentation, and effective human oversight. Particularly noteworthy are the technical documentation specified in Annex IV of the AI Act and the requirements of Articles 9-15, which cover traceability through automatic logging, transparency of the AI system, and effective human oversight. Enforcement is stringent: violations of the prohibitions on certain AI practices carry penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher, while non-compliance with the requirements for high-risk AI systems can incur fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher.
Territorial Scope and Extraterritorial Application
The AI Act applies beyond EU borders, creating global regulatory implications. AI "providers" (entities that develop AI systems and place them on the EU market) and "deployers" (organizations using AI systems within the EU) must comply with the Act regardless of where they are established, and providers and deployers located outside the EU are also covered where the output of their AI system is used within the EU.
Positioning of SaMD Under the AI Act
AI-based Software as a Medical Device (SaMD) is in most cases classified as a high-risk AI system under the AI Act. Under Article 6(1) and Annex I, an AI system is high-risk when it is intended to be used as a safety component of a product covered by the Medical Device Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR), or is itself such a product, and that product must undergo third-party conformity assessment under the applicable EU harmonization legislation. Because MDR classification rules place most SaMD in Class IIa or higher, requiring notified body involvement, this classification captures the large majority of AI-based SaMD and imposes stringent regulatory requirements on its developers and providers.
Key requirements for SaMD include ensuring AI system explainability, implementing continuous performance monitoring, and establishing bias detection and mitigation mechanisms. In medical decision support systems, particular emphasis is placed on the transparency and explainability of the AI’s decision-making processes. The technical documentation requirements under AI Act Annex IV are substantially more comprehensive than those typically required for FDA 510(k) or De Novo submissions. They must include detailed descriptions of: AI system design specifications, system architecture, critical design choices and their rationale, data requirements, training methodologies, computational resources utilized in development and validation, validation and testing procedures, and performance metrics.
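To make these documentation items concrete, here is a minimal sketch, assuming Python-based internal tooling, of how the Annex IV elements listed above could be tracked as a structured manifest pointing at the controlled documents that hold the full descriptions. The class name, field names, document identifiers, and metric values are illustrative assumptions, not an official template.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AnnexIVManifest:
    """Illustrative internal checklist mirroring the Annex IV items listed above.

    Each field holds a reference to the controlled document containing the
    full description; names and IDs below are hypothetical.
    """
    intended_purpose: str
    design_specifications: str        # design specification document
    system_architecture: str          # architecture description
    design_choices_rationale: str     # key design decisions and their justification
    data_requirements: str            # data management plan and dataset datasheets
    training_methodology: str         # training procedure, hyperparameters, versions
    computational_resources: str      # compute used for development and validation
    validation_and_testing: str       # verification and validation protocols/reports
    performance_metrics: dict = field(default_factory=dict)

manifest = AnnexIVManifest(
    intended_purpose="DOC-001 Intended purpose and indications",
    design_specifications="DOC-014 Design specification",
    system_architecture="DOC-015 Architecture description",
    design_choices_rationale="DOC-016 Design decision log",
    data_requirements="DOC-021 Data management plan",
    training_methodology="DOC-022 Model training report",
    computational_resources="DOC-023 Compute and environment record",
    validation_and_testing="DOC-030 V&V protocol and report",
    performance_metrics={"sensitivity": 0.94, "specificity": 0.91},  # placeholder values
)

print(json.dumps(asdict(manifest), indent=2))  # export for review or audit packs
```

Keeping such a manifest alongside the MDR technical file can make it easier to assemble the single set of technical documentation discussed in the next section.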
Managing Dual Compliance: AI Act and MDR
Regulatory Coherence and Integration Challenges
SaMD development companies must now navigate two parallel regulatory frameworks: the AI Act and the Medical Device Regulation. The MDR has traditionally required: collection of clinical evaluation data, establishment of quality management systems (QMS), and implementation of post-market surveillance (PMS) activities. The AI Act introduces additional AI-specific requirements: ensuring algorithm transparency, conducting continuous performance evaluation, and maintaining appropriate human oversight throughout the product lifecycle.
Successfully managing this dual regulatory burden requires an integrated quality management framework that builds on existing MDR compliance infrastructure while incorporating AI-specific requirements. The AI Act facilitates this integration through Article 11(2), which allows a single set of technical documentation: companies may extend their existing MDR technical documentation with the Annex IV information needed to demonstrate compliance with both frameworks.
However, industry has raised concerns about the coherence of the two regimes. Because both the MDR and the AI Act take risk-based approaches, their requirements overlap in areas such as risk management, data governance, and conformity assessment. The European Commission published guidance on the interplay between the AI Act and the MDR/IVDR in June 2025 to clarify how the two frameworks are to be implemented together. The Act also provides some relief for small and medium-sized enterprises (SMEs), which may use simplified technical documentation, reducing regulatory burden while maintaining safety and effectiveness standards.
Practical Implementation Strategy
Organizational Structure and Governance
Establishing appropriate organizational infrastructure is essential for effective compliance. This includes: establishing an AI Ethics Committee to provide oversight of AI-related decisions and practices; appointing an AI Risk Management Officer with clear responsibility and authority for AI compliance activities; and implementing company-wide education and training programs to build organizational capability. These structural measures create a foundation for systematic and comprehensive response to AI Act requirements.
Technical Implementation and System Design
At the technical level, organizations should consider adopting Explainable AI (XAI) technologies that enable clear articulation of AI decision-making rationales to both healthcare professionals and patients. Beyond optimizing predictive accuracy, the technical architecture must support transparent explanation of AI outputs. Strengthening data quality management requires establishing robust controls over the sourcing, accuracy, and completeness of training, validation, and testing datasets. Implementation of bias detection and mitigation systems is critical, ensuring that biases associated with protected characteristics (such as gender, race, or age) are systematically identified and mitigated. The automatic logging system required by AI Act Article 12 must capture complete and accurate records of AI system usage, supporting traceability and post-market surveillance.
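As one way to picture the record-keeping obligation, the following is a minimal sketch, assuming a Python inference service: each call is wrapped so that a timestamped usage record with a model version and an input hash is emitted to the audit log. The record fields, the predict_fn interface, and the version identifier are assumptions, not a prescribed schema.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Audit logger for AI system usage records (sketch only).
logger = logging.getLogger("samd.audit")
logging.basicConfig(level=logging.INFO)

MODEL_VERSION = "model-1.4.2"  # hypothetical version identifier

def logged_predict(predict_fn, features: dict, case_id: str) -> dict:
    """Run inference and emit a usage record supporting traceability."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,                      # pseudonymous reference, not patient data
        "model_version": MODEL_VERSION,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
    }
    output = predict_fn(features)
    record["output"] = output
    logger.info(json.dumps(record))              # ship to append-only storage in practice
    return output

# Example call with a stand-in model
result = logged_predict(lambda f: {"risk_score": 0.82}, {"age": 57, "lab_x": 1.3}, "case-0001")
```

In practice such records would be written to append-only, access-controlled storage and retained for the period defined in the post-market surveillance plan.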
Quality Assurance and Validation Strategy
Quality assurance for AI-driven medical devices requires a paradigm shift from traditional medical device development approaches. Whereas conventional medical device validation typically occurs at a defined point in the development lifecycle, AI systems, particularly those with adaptive or learning capabilities, may continue to evolve and improve after market entry. Organizations must therefore establish processes for continuous performance monitoring and periodic re-evaluation to ensure ongoing system reliability. Particular attention must be paid to the quality management system (QMS) requirements of AI Act Article 17. Organizations must extend their existing MDR-compliant QMS to incorporate AI-specific elements, including data quality management, model performance monitoring, and AI-specific risk assessment and mitigation.
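One possible shape for such continuous monitoring is sketched below, assuming confirmed outcomes arrive over time and a single sensitivity threshold triggers escalation; the window size, threshold, and escalation hook are placeholders to be set by the organization's own QMS procedures.

```python
from collections import deque

# Illustrative rolling post-market performance check (sketch only).
WINDOW = 200            # number of most recent confirmed cases to evaluate
MIN_SENSITIVITY = 0.90  # placeholder acceptance threshold

recent = deque(maxlen=WINDOW)  # (prediction, ground_truth) pairs as cases are confirmed

def record_outcome(prediction: int, ground_truth: int) -> None:
    """Store a confirmed case and re-check performance once the window is full."""
    recent.append((prediction, ground_truth))
    if len(recent) == WINDOW:
        positives = [(p, y) for p, y in recent if y == 1]
        if positives:
            sensitivity = sum(p == 1 for p, _ in positives) / len(positives)
            if sensitivity < MIN_SENSITIVITY:
                trigger_reevaluation(sensitivity)

def trigger_reevaluation(observed: float) -> None:
    """Escalation hook: in practice this would open a CAPA / notify the PMS team."""
    print(f"Sensitivity {observed:.2f} below threshold; escalate per QMS procedure.")
```

A real deployment would also track metrics across relevant patient subgroups and feed alerts into the CAPA and post-market surveillance processes already required under the MDR.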
Impact of the AI Act on SaMD Development
Effects on Development Processes and Timelines
Implementation of the AI Act will fundamentally transform SaMD development. Development processes must now incorporate risk assessment from the concept phase, continuous monitoring mechanisms throughout the product lifecycle, and comprehensive documentation of design decisions and technical justifications. These requirements are likely to extend development timelines and increase costs. Organizations will need to move beyond traditional waterfall development models toward lifecycles in which ongoing post-market improvement and surveillance are built in from the start.
Implications for Competitive Advantage
While regulatory compliance imposes tangible burdens, well-implemented risk management and quality assurance improve product reliability, safety, and effectiveness, ultimately strengthening competitive positioning. Organizations that prepare proactively for AI Act compliance may move through regulatory approval more efficiently than competitors and build stronger relationships with healthcare professionals and patients through a demonstrated commitment to transparency and safety.
Support for Small and Medium-Sized Enterprises
The AI Act acknowledges the disproportionate compliance burden on SMEs by permitting simplified technical documentation, with the European Commission to provide guidance "targeted at the needs of small and micro enterprises." Detailed guidance on these simplified requirements is still forthcoming.
Global Regulatory Implications
The AI Act will likely influence regulatory approaches internationally. The U.S. Food and Drug Administration released draft guidance on AI-enabled medical device lifecycle management in January 2025, reflecting growing convergence around AI governance principles. Early preparation for AI Act compliance positions organizations advantageously for navigating international regulatory harmonization and achieving global market access.
While robust regulatory frameworks are indispensable for safe and effective medical AI, the AI Act's requirements, demanding as they are, also represent an opportunity to improve the quality and trustworthiness of medical AI systems and to support more effective healthcare delivery. SaMD developers should treat this regulatory evolution as a catalyst for strategic innovation and deliberate compliance planning.
Medical AI Development and Liability Frameworks
The Revised Product Liability Directive and AI Liability Directive
Beyond the AI Act, the legal liability framework also shapes medical AI development. The Revised Product Liability Directive (EU) 2024/2853 entered into force on December 8, 2024, and Member States must transpose it into national law by December 9, 2026. The directive substantially revises EU product liability law by explicitly extending the definition of "product" to software, firmware, computer programs, applications, and AI systems. Manufacturers and suppliers of AI-based medical devices may therefore face strict liability (liability without proof of fault) for defects that cause harm, transforming the liability landscape for digital health innovations.
The proposed AI Liability Directive, under consideration since September 2022, was withdrawn by the European Commission on February 11, 2025, owing to a lack of consensus on its final provisions; the Commission has reserved the right to propose an alternative approach to AI-related liability. The withdrawn proposal had aimed to make it easier for victims to obtain compensation by establishing rebuttable presumptions of a causal link and by empowering national courts to order disclosure of evidence about high-risk AI systems suspected of causing damage. Its withdrawal leaves uncertainty about the specific liability framework that will apply to AI-related damage.
Integrated Compliance Framework
SaMD developers must prepare for compliance with the Revised Product Liability Directive. Particularly important are: preparing for strict liability exposure for software defects; understanding that failure to provide necessary cybersecurity protection or software updates may itself render a product defective; and recognizing that demonstrated compliance with the AI Act's high-risk requirements may help mitigate liability exposure by evidencing adherence to recognized safety standards.
Conclusion
The regulatory landscape governing medical AI development is complex, yet appropriate implementation of the required frameworks strengthens product safety, effectiveness, and trustworthiness. SaMD development organizations that strategically address the integrated requirements of the AI Act, MDR/IVDR, and Revised Product Liability Directive position themselves for sustainable business development and sustained competitive advantage in global markets. By incorporating AI ethics principles and regulatory compliance requirements from early development stages, organizations create the strongest foundation for achieving regulatory approval, market success, and the confidence of healthcare professionals and patients who depend upon these life-critical technologies.