Digital Health and AI
Outlook on the proposed Artificial Intelligence Act on medical devices in the EU
The implementation of artificial intelligence (AI) is widely seen as having
great potential to advance health care. However, the ability of an AI system
to further develop, optimise and adapt itself is one of the biggest challenges
for smart medical devices under regulatory and liability law, because such
"learning" is currently not provided for in medical device legislation.
Self-learning AI software in Europe requires a CE marking under the European
Union Medical Device Regulation (EU MDR), as does any medical device. Here,
however, two worlds collide: according to Annex I, section 17.1 EU MDR,
software must be designed to ensure "repeatability, reliability and
performance in line with their intended use". For "locked" algorithms, which
provide the same result each time the same input is applied, this is not a
problem. Continuously learning and adaptive algorithms, however, especially
software based on a "black box" model, are by definition not supposed to
deliver repeatability.
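To make the contrast concrete, the following minimal Python sketch (our
illustration; the weights, update rule and data are invented for
demonstration and are not drawn from the EU MDR or the AIA) shows why a
locked model satisfies repeatability while an online-learning model does not:

```python
import numpy as np

# "Locked" algorithm: fixed weights, so the same input always yields the
# same output -- the repeatability that Annex I, section 17.1 EU MDR expects.
locked_weights = np.array([0.8, -0.3])

def locked_predict(x):
    return float(locked_weights @ x)

# Continuously learning algorithm: weights are updated with every new case,
# so identical inputs may be scored differently over time.
adaptive_weights = locked_weights.copy()

def adaptive_predict_and_learn(x, label, lr=0.1):
    score = float(adaptive_weights @ x)
    adaptive_weights[:] -= lr * (score - label) * x  # online gradient step
    return score

x = np.array([1.0, 2.0])
print(locked_predict(x), locked_predict(x))          # 0.2 0.2 -- repeatable
print(adaptive_predict_and_learn(x, label=1.0),
      adaptive_predict_and_learn(x, label=1.0))      # 0.2, then 0.6 -- drifts
```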
Unlike in the U.S., where the FDA takes a sector-specific approach to AI,
the European Commission plans to create a general regulatory framework for
AI across all sectors and has published a proposal for an Artificial
Intelligence Act (AIA). Under the proposal, AI applications in medical
devices within the meaning of the EU MDR, or those which constitute a
medical device themselves, would be considered "high-risk AI systems" and
require a conformity assessment by a "notified body". Conformity assessments
for AI medical devices would continue to be conducted in accordance with the
EU MDR, while certain additional provisions of the AIA would also apply.
Whether and when a self-learning AI medical device would need to undergo a
new conformity assessment would depend on whether a given change to the
algorithm and its performance was pre-determined by the provider and assessed
at the time of the initial conformity assessment. Accordingly, if a change
was "pre-determined", it would not constitute a substantial modification and
would not require a new conformity assessment. However, the question of what
"pre-determined" means, and how specifically such changes need to be described
in the technical documentation, remains open and will still need to be
clarified.
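As a purely hypothetical illustration of how a pre-determined change envelope
might be operationalised (the metrics, thresholds and field names below are
our assumptions, not requirements of the AIA proposal or the EU MDR), a
provider could pre-specify performance bounds in the technical documentation
and check each retrained model against them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeEnvelope:
    """Bounds the provider pre-specified in the technical documentation.
    All fields are illustrative assumptions, not regulatory terms."""
    min_sensitivity: float
    min_specificity: float
    allowed_inputs: frozenset  # input features the model may rely on

def is_predetermined(envelope, sensitivity, specificity, inputs):
    """True if a retrained model stays within the pre-assessed envelope,
    i.e. would arguably not be a substantial modification."""
    return (sensitivity >= envelope.min_sensitivity
            and specificity >= envelope.min_specificity
            and set(inputs) <= envelope.allowed_inputs)

envelope = ChangeEnvelope(0.92, 0.90, frozenset({"age", "blood_pressure"}))

# Within the envelope: could be treated as a pre-determined change.
print(is_predetermined(envelope, 0.94, 0.91, {"age"}))            # True

# New input feature: outside the envelope, so on this reading a new
# conformity assessment would be needed.
print(is_predetermined(envelope, 0.94, 0.91, {"age", "genome"}))  # False
```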
In addition to opportunities and innovations, the use of AI also creates the
risk of damage caused by machine intelligence. The principle that liability
follows from responsibility reaches its limits with artificial intelligence.
Manufacturers must therefore address these issues as early as the device
development phase and regulate as many potential risks as possible by
contract.
The regulatory landscape is still evolving, but regulators will eventually
have to develop certification processes that can handle adaptive AI systems
and approve not only an initial AI model but also a process for adjusting
that model over time.
Arne Thiermann
Partner, Hamburg

Nicole Böck
Counsel, Munich