Martin Ragan, co-founder of Slovak startup Cequence.io, enters a prompt into a simple chatbot interface. In less than 10 seconds, the underlying artificial intelligence, or AI, sifts through thousands of contracts with tens of thousands of pages and spits out a detailed answer.
"The bullet points refer to a part of the contract you can navigate right away to and check the answer," Ragan told DW. "Risk mitigation is a huge part of this: 'Would I pay a penalty if I did this with supplier x?' Now, you can make an educated business decision."
Ragan, who says the chatbot is 80-90% accurate at implementation and improves as it learns from more customer data, presented his AI contract management software at the Basque Open Industry exhibition in Bilbao last month.
Mixed reviews for AI Act
1,000 kilometers (620 miles) to the northeast, in Brussels, the negotiations about the EU's AI Act are in the last phase of the legislative process.
Approved by the European Parliament in June, the bloc's key piece of legislation to regulate artificial intelligence has the overarching goal of putting "guardrails" around the development and use of the technology.
Ragan thinks the AI Act will have both positive and negative consequences.
"The opinions we are hearing are mixed. We work with highly sensitive data; if the regulations give more trust to our clients — trust meaning the data is safe, and the AI won't just take it and put it somewhere where we don't want to have it — it's gonna have a positive impact for us as well."
It appears unlikely that EU member states and lawmakers will reach a deal by their December 6 deadline. In mid-November, negotiations on the draft law abruptly stopped over so-called foundation models — large deep learning neural networks that power chatbots like Cequence.io's.
Then, Germany, France and Italy, the EU's three largest countries, spoke out against the tiered approach initially envisaged for foundation models. They warned that slapping tough restrictions on these powerful new models would harm the EU's own champions, like German OpenAI competitor Aleph Alpha.
Instead, they suggested self-regulating foundation models through company pledges and codes of conduct.
Unresolved legal questions for SMEs
Networks like Digital SME, however, have warned that this proposed self-regulation would shift the burden of compliance onto the companies further down the supply chain that develop and use AI, especially small and medium-sized enterprises (SMEs).
The resulting compliance costs could overburden SMEs, result in legal uncertainty and stifle the adoption of AI, says a statement released by the European Digital SME Alliance, which claims to represent more than 45,000 enterprises in Europe.
Around 99% of all businesses in the European Union are SMEs, employing around 100 million people and accounting for more than half of Europe's GDP.
Margarete Rudzki, who works at the German Confederation of Crafts and Small Businesses (ZDH), believes addressing the issue of liability is crucial for increasing both trust and adoption of AI.
"There's great potential for AI among our businesses, for instance when it comes to predictive maintenance. But it's paramount to determine who in the supply chain is actually liable for damage done, for example by high-risk AI products or systems," Rudzki told DW last month at the European Commission's flagship conference on SMEs in Bilbao.
"Just imagine an AI-powered garage door — the algorithm malfunctions, and it hurts the neighbor's child. Who would be liable? We need to resolve all these thorny legal questions," she added.
According to EU statistics agency Eurostat, only 7% of European SMEs used AI in 2021 — far from the Commission's 2030 target of 75%.
To regulate, or not to regulate
In the debate over a tiered approach versus self-regulation for foundation models, Cequence.io co-founder Martin Ragan sides with Germany, France and Italy in their call for less regulation.
He thinks the reporting duties and other transparency obligations the AI Act might entail could put EU companies at a competitive disadvantage.
"My main concern is how competition in the EU could be slowed down here compared to the US," Ragan, who also worked in Silicon Valley for a year, told DW.
"Until you get a review or approval for a new model from EU regulators, someone in the US can come up with the same idea or copy it, get it into the market much faster and blow you out of the water. This means you could lose your entire business."
Different rules for different risk levels
The AI Act proposal differentiates between four risk levels: from minimal or no risk, through limited risk (which covers generative AI tools like ChatGPT or Google's Bard), to what lawmakers consider high-risk applications, such as border control management.
Finally, technologies like facial recognition tools or social scoring pose an "unacceptable risk" and are to be banned.
One of the aims of the Act is to protect democratic processes like elections from AI-generated deepfakes and other sources of disinformation. This is especially topical in light of next year's EU-wide elections.
Yet if policymakers can't smooth out the sticking points and agree on a finalized set of principles and rules on December 6, the AI Act could be delayed until after the 2024 election. For Europe's SMEs, a postponement would mean that their legal uncertainty around artificial intelligence continues.
Edited by: Kristie Pladson