
Artificial Intelligence for Products

EU Artificial Intelligence Act (EU) 2024/1689 (AI Act)

The AI Act is a groundbreaking regulatory framework aimed at addressing the complexities and risks associated with artificial intelligence (AI) technologies. As the first comprehensive legislation of its kind globally, it categorizes AI applications based on their risk level and establishes clear obligations for providers, deployers and other operators within the European Union.

Legislative Progress

The Act was officially published in the Official Journal of the EU in July 2024 and entered into force on 1 August 2024. It establishes several key timelines affecting products incorporating AI and machine-learning-based algorithms that are already placed, or are to be placed, on the EU market or put into service there. After the initial entry into force, the following key timelines apply and need to be carefully monitored by all stakeholders, from deployers of AI systems to the designated Notified Bodies, in order to ensure compliance of the products.

Objectives and Structure of EU AI Act

The AI Act aims to ensure that AI systems are developed and used in a manner that respects fundamental rights and safety while fostering innovation. It categorizes AI applications by risk level and establishes clear obligations for providers and deployers of high-risk AI systems, including requirements for safety assessments and compliance with existing laws protecting fundamental rights throughout the lifecycle of these systems.

High-Risk AI Systems

This new regulation aims to ensure the safety and reliability of AI technology across industries, to increase transparency and to lay down a general framework for the future of trustworthy AI systems placed on the EU market. It introduces a risk-based approach, sets out the respective obligations imposed on operators, and defines the potential penalties for failure to meet the requirements.

The Act defines an ‘AI system’ as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The definition is based on the one provided by the OECD. It does not limit the scope to particular technologies or applications of AI; rather, it aims to provide comprehensive coverage, anticipating that algorithms and technologies yet to be invented will emerge in the future.
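
To make the breadth of this definition concrete, the minimal sketch below is purely illustrative: the model, its parameters and the shutdown scenario are hypothetical and not drawn from the regulation. It shows how even a very simple machine-based component that infers an output from its input can fall within the definition.

  # Hypothetical illustration of the AI Act's 'AI system' definition:
  # a machine-based system that infers, from the input it receives,
  # how to generate outputs (here: a prediction) for an explicit objective.

  from dataclasses import dataclass

  @dataclass
  class ThermalCutoffModel:
      """Toy model: predicts whether a device should shut down, based on a sensor input."""
      weight: float = 0.8   # learned parameter (hypothetical)
      bias: float = -40.0   # learned parameter (hypothetical)

      def predict(self, temperature_c: float) -> bool:
          # Inference step: map the input (sensor reading) to an output (decision)
          score = self.weight * temperature_c + self.bias
          return score > 0.0  # True -> recommend shutdown (influences the physical environment)

  if __name__ == "__main__":
      model = ThermalCutoffModel()
      for reading in (35.0, 55.0):
          print(reading, "->", "shutdown" if model.predict(reading) else "keep running")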

High-Risk AI Systems (Art. 6): High-risk AI systems, such as those used in critical infrastructure, education, employment and law enforcement, as well as a large set of AI-enabled medical devices, must meet stringent requirements for risk management, data governance, transparency, and human oversight. Under the regulation, regardless of whether the AI is a component of a product or a product in its own right, the product becomes a high-risk AI system once the conditions set out in Art. 6 of the AI Act are met.

What does it mean for Product Manufacturers?

Manufacturers who create products which include AI components are responsible for ensuring that these products comply with relevant regulations when they are marketed in the EU. If a manufacturer integrates an AI system into a product that already mandates a conformity assessment, they are classified as a "provider" under the AI Act when the product is made available in the EU under their name or trademark.

Obligations Under the EU AI Act

  1. Compliance with Regulations: Deployers of any AI system are responsible for ensuring compliance with the requirements set out in the AI Act, particularly if the system is classified as high-risk. This includes conducting risk assessments and ensuring transparency about how the AI operates.
  2. Documentation and Labeling: They are required to maintain proper documentation (technical file) demonstrating compliance with the AI Act and to label their products appropriately, including providing contact information for accountability.
  3. Liability: Under the revised EU Product Liability Directive, product manufacturers can be held liable for damages caused by defective products, including those with integrated AI systems. This means they are the first point of redress for consumers who experience harm due to product failures.

Conformity Assessment

Products that meet the definition of a high-risk AI system fall into two major categories:

  1. Products which are already regulated under other sectoral legislation
  2. Products which are not yet covered by other sector-specific legislation

The conformity assessment procedures laid down in the sectoral legislation applicable to products listed in Annex I of the AI Act must be followed. Where gaps exist with respect to the AI Act, manufacturers (for example, of medical devices) are expected to address them adequately, and this will be assessed by the designated Notified Bodies (AI Act Art. 43(3)).

For example, medical devices covered by the existing EU MDR/IVDR prominently face the need to comply with the AI Act: the conformity assessment procedure follows the sectoral processes, while ensuring that AI Act-specific considerations are also sufficiently included and covered where gaps are present.

Products which are not regulated under EU-wide sector-specific legislation are also subject to the AI Act. These include AI systems used in certain biometric identification applications, critical infrastructure (e.g. the supply of water, gas, heating or electricity), education and vocational training, employment, law enforcement, etc., as listed in Annex III.

Implications for Businesses

Compliance with the AI Act is crucial for manufacturers wishing to sell their products in the EU market. Non-compliance can lead to significant penalties. (AI Act Art. 99)

Breach under the AI Act and the corresponding potential fine:

  • Breach of the prohibition on unacceptable-risk AI systems: up to the higher of EUR 35 million or 7% of total annual worldwide turnover
  • Non-compliance with any other requirement under the AI Act: up to the higher of EUR 15 million or 3% of total annual worldwide turnover
  • Supplying incorrect, incomplete or misleading information to notified bodies and national authorities: up to the higher of EUR 7.5 million or 1% of total annual worldwide turnover
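
In each case the applicable ceiling is whichever amount is higher: the fixed sum or the percentage of total annual worldwide turnover. The short sketch below illustrates the arithmetic only; the turnover figure is hypothetical.

  # Illustrative arithmetic for the AI Act penalty ceilings (Art. 99):
  # the maximum fine is the HIGHER of a fixed amount and a share of
  # total annual worldwide turnover. The turnover value is hypothetical.

  def max_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
      """Return the applicable ceiling: the higher of the fixed amount or pct * turnover."""
      return max(fixed_eur, pct * turnover_eur)

  if __name__ == "__main__":
      turnover = 600_000_000  # hypothetical annual worldwide turnover (EUR)
      print("Prohibited-practice breach ceiling:", max_fine(turnover, 35_000_000, 0.07))  # 42,000,000
      print("Other AI Act breach ceiling:", max_fine(turnover, 15_000_000, 0.03))         # 18,000,000
      print("Misleading information ceiling:", max_fine(turnover, 7_500_000, 0.01))       # 7,500,000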

Impact on Innovation

The obligations imposed by the AI Act may affect how manufacturers approach product development, necessitating a balance between innovation and regulatory compliance. Early adaptation during the design and development of AI Systems is crucial to ensure on time market access.

Impacts on AI-enabled Medical Devices

Medical device manufacturers exporting AI-enabled devices to the EU must comply with the European AI Act to maintain market access and avoid penalties. The AI Act potentially applies to Software as a Medical Device (SaMD) and to any AI-integrated or AI-enabled medical device classified as high-risk. Similar to the EU MDR/IVDR, the AI Act mandates conformity assessments, detailed technical documentation, and vigilance systems on top of existing EU MDR/IVDR requirements, while allowing the use of existing procedures and frameworks. Regular audits by Notified Bodies ensure that AI-specific aspects and regulatory requirements are met, following a structure aligned with the New Legislative Framework.

Impacts on Consumer Electronics and Smart Household Appliances

Manufacturers of consumer electronics and smart household appliances will need to comply with specific requirements based on the risk level of the AI system used (e.g., high-risk AI systems). Compliance includes conforming to transparency, documentation, and risk management requirements. Authorities will have the power to oversee compliance and enforce the regulations, which means regular market surveillance and potential penalties for non-compliance. Companies might need to invest in continuous monitoring and reporting systems to stay compliant. While ensuring compliance with the AI Act might increase development costs and timelines, it also promotes product innovation. Compliance could further enhance consumer trust, as consumers know the products are built with safety, transparency, and ethical considerations in mind. This could potentially boost the marketability and consumer acceptance of AI-powered consumer electronics and smart household appliances.

How TÜV Rheinland Supports Businesses with the EU AI Act

As the EU Artificial Intelligence Act moves closer to full implementation, businesses across various sectors are preparing to navigate the new regulatory landscape. TÜV Rheinland is at the forefront of supporting companies in addressing the challenges and opportunities presented by the AI Act.


1. Conformity Assessment

  • CE Marking and Third-party Notified Body Assessment: Conduct pre-market conformity assessments for AI systems, particularly those classified as high-risk. This involves evaluating whether the AI system meets the safety and regulatory requirements outlined in the EU AI Act before it is brought to market. Where multiple pieces of legislation apply (e.g. Medical Devices, Machinery, RED), combined assessment options ensure a seamless process and reduce the burden on manufacturers.
  • Technical Documentation Review: Comprehensive review of technical documentation required for compliance, ensuring that all necessary information is accurately presented as part of the conformity assessment.

2. AI-System Testing

  • Tailored & Comprehensive Solutions: Leverage customized and bundled testing services, including the CB Scheme, from trusted third-party inspection laboratories to streamline global certification and compliance processes for AI systems across multiple markets.
  • Performance & Reliability Assurance: Validate AI system performance, accuracy, efficiency, and scalability through customized stress tests and functional assessments; a minimal sketch of such a check follows this list. Testing in real-world conditions will be available provided that all the conditions in Article 57 or 60 are fulfilled.
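
As a minimal sketch of what such a functional assessment can look like in practice, a simple accuracy-and-latency acceptance check might be structured as follows. The thresholds, data and model below are hypothetical and do not represent an actual TÜV Rheinland test protocol.

  # Minimal, hypothetical sketch of a functional performance check for an AI component:
  # measure accuracy on a held-out validation set and per-sample latency,
  # then compare both against acceptance thresholds. Thresholds are illustrative only.

  import time

  def evaluate(predict, samples, expected, min_accuracy=0.95, max_latency_s=0.05):
      """Return (passed, accuracy, worst_latency) for a simple acceptance check."""
      correct = 0
      worst_latency = 0.0
      for x, y in zip(samples, expected):
          start = time.perf_counter()
          prediction = predict(x)
          worst_latency = max(worst_latency, time.perf_counter() - start)
          correct += int(prediction == y)
      accuracy = correct / len(samples)
      passed = accuracy >= min_accuracy and worst_latency <= max_latency_s
      return passed, accuracy, worst_latency

  if __name__ == "__main__":
      # Toy "model" and data, standing in for the system under test.
      model = lambda x: x > 0.5
      data = [0.1, 0.7, 0.9, 0.3]
      labels = [False, True, True, False]
      print(evaluate(model, data, labels))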

3. Cybersecurity and Data Privacy

The AI Act emphasizes the importance of data privacy and security. TÜV Rheinland offers specialized End-to-End Security services to protect your AI from potential threats with advanced vulnerability assessments, penetration testing, and continuous monitoring for secure and reliable operations.

  • Penetration Testing: An authorized, simulated cyberattack on a computer system, performed to evaluate the security of the system. Testers dig deeper to identify the root cause of the vulnerability that allows access to secure systems or stored sensitive data.
  • Vulnerability Scanning: Laboratories can perform rigorous testing, including red-teaming exercises, to identify vulnerabilities in AI systems. This helps in assessing risks to health, safety, and fundamental rights, which is a critical requirement under the EU AI Act. A simplified sketch of one such probing idea follows this list.
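
As a deliberately simplified, hypothetical illustration of one red-teaming idea (probing a model with slightly perturbed inputs and flagging unstable behaviour; not an actual laboratory procedure), consider the following sketch.

  # Hypothetical sketch of one red-teaming idea: probe a model with small input
  # perturbations and flag cases where the output flips, indicating potential
  # robustness weaknesses. The model, inputs, and tolerance are illustrative only.

  def probe_robustness(predict, inputs, epsilon=0.05):
      """Return inputs whose prediction changes under a small perturbation."""
      unstable = []
      for x in inputs:
          baseline = predict(x)
          if predict(x + epsilon) != baseline or predict(x - epsilon) != baseline:
              unstable.append(x)
      return unstable

  if __name__ == "__main__":
      toy_model = lambda x: x > 0.5          # stand-in for the system under test
      findings = probe_robustness(toy_model, [0.1, 0.48, 0.52, 0.9])
      print("Potentially unstable inputs:", findings)  # expect values near the 0.5 boundary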

4. Training and Education

  • Workshops and Training Programs: training sessions for developers and deployers on compliance requirements, best practices in AI safety, and ethical considerations in AI development.
  • Awareness Campaigns: Help to raise awareness about the implications of the EU AI Act among stakeholders, ensuring that all parties understand their responsibilities under the new regulations.

5. Additional Services by TÜV Rheinland Group

In addition to comprehensive AI system conformity assessment, as a trusted independent partner our teams can offer a range of management system services to further strengthen your organization's compliance and operational resilience. These services ensure that your AI systems and processes adhere to international standards, providing you with a robust framework for both quality and information security management. They include, among many others:

  • ISO/IEC 42001 AI Management: Implement and maintain a top-tier AI Management System (AIMS) tailored to AI and innovative technologies, ensuring consistent product quality and regulatory compliance.
  • ISO 27001 Information Security Management: Protect sensitive data and mitigate cybersecurity risks with comprehensive Information Security Management System (ISMS) services that address cloud security and data privacy regulations.

TÜV Rheinland is committed to fostering innovation while ensuring the development of safe and trustworthy AI. As the EU AI Act enters into force, businesses can rely on TÜV Rheinland's expertise and experience to navigate the regulatory changes, mitigate risks, and unlock the full potential of AI technologies while maintaining compliance with the new regulations.

