
The AI Act entered into force!




As of 1 August 2024, the Artificial Intelligence Act (“AI Act”) is in force, with a transitional period of two years. The AI Act aims to protect the fundamental rights of individuals and of society against high-risk AI systems, whilst ensuring innovation throughout the EU. One of its stated considerations is that the Regulation should help the EU establish itself as a world leader in AI.

 

The essentials of the AI Act

The AI Act applies a risk-based approach: four risk categories correspond with four levels of regulation. AI systems posing an unacceptable risk are prohibited, and systems posing limited or no risk are only lightly regulated; it is the high-risk AI systems that are thoroughly regulated under the AI Act. High-risk AI systems are those the legislator deems potentially harmful and therefore in need of regulation.


We have explored the AI Act through the lens of the Life Sciences industry to understand which sectors will be affected by the Regulation. For the Life Sciences industry, this mostly means that AI systems used in medical devices or in vitro diagnostics will be considered high-risk AI systems. These systems must undergo a conformity assessment to ensure that they comply with the AI Act. Whether this should be an internal conformity assessment or a third-party conformity assessment depends on the availability of harmonized standards. The legislator appears to recognize the burden that falls upon manufacturers and has sought to ease it: it is now possible to integrate the conformity assessment into already existing procedures, for example the conformity assessment for medical devices.


High-risk AI systems are required to comply with provisions on risk management, data (governance), technical documentation (incl. post-market monitoring), record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Existing legislation on personal data and privacy, notably the General Data Protection Regulation (“GDPR”), continues to apply in this context. When an AI system passes the conformity assessment, it will obtain the EU declaration of conformity. In addition, the existing rules on CE-marking remain in place; where an AI system is entirely digital, a digital CE-marking suffices. After CE-marking but before placing the AI system on the market, the provider must register the AI system in the newly established EU database.

If an AI system passes the conformity assessment, it will be certified for the period indicated on the certificate. The maximum validity for AI systems already covered by other Union harmonization legislation is five years; for other high-risk AI systems it is four years. Both may be extended after re-assessment.

Infringement of the AI Act may lead to fines running into the millions of euros, or a percentage of total annual worldwide turnover. Non-compliance with the prohibited practices may result in fines of up to 35 million euros or 7% of the total annual worldwide turnover of the preceding financial year, whichever is higher.


In addition to the authority granted by this Regulation to conformity assessment bodies, the market surveillance authorities of each Member State may start a national procedure for dealing with AI systems presenting a risk. An AI system presenting a risk shall be understood as a ‘product presenting a risk’.[1] If the respective authority has sufficient grounds, it will evaluate whether the system is in conformity with the Regulation.

Already existing is the AI Office, the centre for AI expertise in the EU. New under the AI Act is the establishment of the European Artificial Intelligence Board (“the Board”). The Board shall advise and assist the Commission in order to facilitate consistent and effective application of the AI Act. Alongside the Board, an advisory forum and a scientific panel of independent experts will be set up to provide expert opinions and advice on pressing matters.

Lastly, the Commission will work in coordination with the Board and standardization organizations to develop harmonized standards, which are expected to be published within the first year of implementation. These standards will provide a detailed explanation of all the requirements and obligations for high-risk AI systems.


A gradual introduction


The AI Act provides for a gradual application of requirements and obligations.

·       2 February 2025: companies are expected to build AI literacy, meaning that personnel working with AI should have an understanding of artificial intelligence. From this moment, AI systems posing an unacceptable risk are also prohibited.

·       2 August 2025: marks the date by which companies should have assessed whether their AI systems fall under the Regulation. If so, the companies have to notify the competent authorities and apply for (or conduct) a conformity assessment.

·       2 August 2027: the requirements and obligations regarding high-risk AI systems take effect.


Lastly: are you unsure whether your product or system must comply with the Regulation, or do you have any other questions about the AI Act?

Feel free to contact us!


We are happy to help you with respect to this exciting topic.


[1] Article 3(19) Regulation (EU) 2019/1020
