Inside AI Policy


European analyst identifies gaps in AI Act for addressing risks in the health sector

By Mariam Baksh  / December 22, 2023

The European Union’s broad approach to regulating artificial intelligence is likely to leave patients in the dark while allowing many applications to avoid the regulation’s most significant proposal, according to researchers at Health Action International.

“The AI Act is a horizontal legislation which is supposed to regulate AI used across very different fields, therefore, provisions are not always addressing the needs or concerns of the healthcare context,” HAI Research Officer Janneke Van Oirschot told Inside AI Policy. “When talking about those risks related to the AI Act, we see that major gaps remain.”

The Netherlands-based non-profit, which conducts policy analysis to expand access to essential medicines, issued a detailed report in February 2022 on the health-sector implications of initial drafts of the EU legislation. But while the final text is still awaited, with adoption now expected in April, the HAI researchers fear the regulation will fall short by failing to facilitate sector-specific standards.

Janneke Van Oirschot, Research Officer, Health Action International

“As the next step in the AI Act, standards will be developed, however, at the moment the plan is to develop again horizontal standards, instead of sector specific ones,” Van Oirschot said. “We think it’s tremendously important to have those sector-specific standards for healthcare.”

The AI Act will also apply to entities outside the EU to the extent their technology is available on the European market, and the HAI researchers’ observations are instructive as U.S. policymakers weigh relying on individual federal agencies to address AI harms against establishing a central authority.

According to an extensive Q&A released by the European Commission on the coming regulations, “AI systems being safety components of products covered by sectorial Union legislation will always be deemed high-risk when subject to third-party conformity assessment under that sectorial legislation.”

The conformity assessments are at the heart of the EU legislation as drafted. This is where reviews for factors such as data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness will occur. But they will only be required for AI applications deemed “high-risk.”

A “high-risk” categorization would also trigger the requirement to implement corresponding risk management systems, but as the Q&A notes, systems “that perform narrow procedural tasks, improve the result of previous human activities, do not influence human decisions or do purely preparatory tasks are not considered high-risk.”

That could allow applications commonly used in clinical care to fall through the cracks.

“A lot of the AI that we see now being used in this context is not classified as medical device and therefore won’t be regulated as high-risk, therefore, there are no regulatory requirements and no conformity assessments for these systems,” Van Oirschot said. “You could think of AI systems for surveillance of behaviour or lifestyle monitoring, and all kind of smart technology such as incontinence material and smart beds, which collect extremely sensitive data, and whose malfunctioning can harm patients. We think these systems should not remain unregulated.”

Even when an AI application does fall into the high-risk category, Van Oirschot said the legislation as drafted fails to account for the people who are ultimately subjected to it.

“The AI Act talks about the ‘provider’ of AI and the ‘user’ of AI. The provider, in some instances, has transparency obligations towards the user. However, in the medical context, it often happens that the user of AI is a healthcare provider who uses it as assistive tool in their decision making,” Van Oirschot said. “The patient is not directly interacting with the AI, but is the one who faces the consequences of the decision, therefore, it’s unclear what this means for transparency obligations towards the patient.”