
Software industry sees possible agreement on high-risk AI uses in European Union talks

By Rick Weber / October 2, 2023

The software industry expects European Union officials to try to reach an agreement this week on the definition of high-risk artificial intelligence uses as part of “trilogue” talks on a final draft of the EU’s landmark AI Act, which is expected to be finalized by the end of this year.

“There is definitely a number of very, very important issues being discussed” this week by officials of the EU Commission, Council and Parliament as they work toward a final version of the AI Act approved by the Parliament in June, said BSA|The Software Alliance policy director Matteo Quattrocchi, speaking from Brussels in a Zoom call with reporters today.

“One of them is the definition of high risk,” Quattrocchi noted, adding that “risk-based and high-risk are very important aspects of the structure of the legislation.”

“So today, member states and the Parliament are going to try and see if they can finalize a shared version of what actually constitutes a risk both as a concept and partially in a list of AI uses that could be qualified as high risk under the act,” he said.

The BSA virtual event came as EU officials meet Oct. 2-3 to negotiate a final draft of the AI Act for eventual approval by member states. Final adoption of the act is expected in March, with enforcement of its rules expected 18 months after that.

The act would establish a tiered approach to regulating AI, requiring all uses identified as high risk to be registered and assessed before going to market. Initial categories of high-risk AI uses include biometric identification, management and operation of critical infrastructure, education and vocational training, and law enforcement.

The EU talks come after BSA and other tech industry groups released a Sept. 29 “joint statement” urging negotiators to refocus the proposed legislation on addressing high-risk AI uses.

“Europe has a unique opportunity to define sensible rules for the development, deployment, and use of AI for the next decade, and to set a strong example for the world on how to best regulate AI,” the industry statement says.

“As you can imagine, there are some issues that remain contentious and one of them, chiefly, is the use of artificial intelligence in the sphere of law enforcement, especially the use of artificial intelligence for biometric identification in public spaces,” Quattrocchi told reporters.

“The European Parliament has suggested almost completely banning this technology, while the member states are suggesting limits to the use of this technology, but still allowing law enforcement agencies to use it,” Quattrocchi noted.

“So, this is going to be probably the most complicated negotiation phase for the local legislators in the EU before they can finalize the AI Act,” he said. “This part has not started yet, we expect them to work on this in November.”

Another contentious issue for EU negotiators will be the regulation of the foundation models underlying chatbots and large language models, which have surged in use and popularity since the European Parliament began work on the AI Act.

“Another very important aspect is foundation models or general purpose AI,” Quattrocchi said.

“As you have seen with the rise and very important success of generative AI, and more broadly of foundation models, the European Parliament and the member states also want to figure out what they want to do in the AI Act about these more novel technologies,” he said.

“So, we will probably see negotiation on this begin in October,” Quattrocchi added.

Pressure from the tech industry for EU negotiators to refocus their efforts on high-risk AI uses comes amid concerns among AI developers and users that more recent versions of the legislation have strayed from this risk-focused approach.

“The departure from the AI Act’s original risk-based and technology-neutral approach risks inhibiting the development and use of AI in Europe. Moreover, the multiplication of overlapping rules in the AI Act, as well as the broad extension of the list of high-risk use cases and the list of prohibited AI systems, would create unnecessary red tape and legal uncertainty,” the industry coalition said in its statement released last week.

The 15 groups signing on to the statement include BSA, the Information Technology Industry Council, the American Chamber of Commerce to the European Union, the European Tech Alliance, the internet-focused group DOT Europe, the Computer and Communications Industry Association, and several tech associations based in individual European countries.