
Will the EU’s new AI act enable or stifle innovation?

The EU’s new AI Act is the first of its kind in the world. The act has both supporters and critics: one side praises its attention to preserving human freedoms, while the other accuses it of stifling innovation.

On December 9th, 2023, the EU took the first steps towards enacting laws that govern the use of Artificial Intelligence (AI) within the bloc with a new AI Act. The proposed laws are expected to serve as a model for other nations as they scramble to regulate the development of AI and other advanced technologies.

“The EU becomes the very first continent to set clear rules for the use of AI. The AI Act is much more than a rule book, it is a launchpad for EU startups and researchers to lead the global AI race,” says European Commissioner Thierry Breton.

Classifying AI by risk potential

The EU’s AI Act puts the bloc several steps ahead of other global powers like the US, which is still formulating its policies. A cornerstone of the act is its four-tier classification system for AI systems.

Under the classification, Minimal Risk covers systems such as spam filters. High Risk covers recruitment systems and systems used in democratic processes and law enforcement. Unacceptable Risk covers systems that infringe on human rights, and Specific Transparency Risk governs situations where people may not know they are dealing with AI-powered systems, such as chatbots and avatars. Experts say this risk-based model offers clear guidance for dealing with a fast-evolving technology like AI.

“A risk-based model is something that we use in many other domains as well, especially in general security. It gives a clear path to follow for stakeholders, including legislative bodies about the types of rules and standards that will be needed,” explains Jouni Laiho, Director, Corporate Security, at Wärtsilä. “The only thing I would have added is to have a risk-opportunity model, that not only talks about the risks but also the benefits, making it easier for stakeholders to make a more informed choice.”

Experts are applauding the EU for clearly prohibiting uses of AI that infringe on personal freedoms and human rights. Among the safeguards being considered, the act bans Unacceptable Risk AI, such as biometric systems that categorise people by race, political or religious beliefs, or sexual orientation. It also bans untargeted scraping of facial images from the internet, social scoring based on social behaviour or personal characteristics, and the use of AI to manipulate human behaviour or exploit vulnerabilities such as age, social or economic situation, or disability.

“The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities,” says Dragos Tudorache, Member of the European Parliament (MEP).


More AI, more transparency

AI classified as high-risk, posing a threat to health, safety, fundamental rights, the environment, democracy, or the rule of law, must meet clear obligations, including a mandatory fundamental rights impact assessment. European citizens will also have the right to launch complaints about AI systems and receive explanations for decisions that impact their rights.

What’s more, general-purpose AI (GPAI) systems and the GPAI models they are based on will have to follow transparency requirements, including complying with EU copyright law, labelling deepfakes and AI-generated content, and notifying users when they are interacting with AI-generated chatbots and avatars. The most powerful GPAI models, those classified as posing systemic risk, will additionally have to conduct model evaluations, assess and mitigate systemic risks, ensure cybersecurity, and report on their energy efficiency.

A new European AI Office will be set up to enforce the AI Act, while penalties ranging from 1.5% to 7% of a company’s global sales turnover are being considered for rule-breaking.

“The world’s first horizontal legislation on artificial intelligence will keep the European promise – ensuring that rights and freedoms are at the centre of the development of this ground-breaking technology,” says Brando Benifei, MEP. “Correct implementation will be key – the Parliament will continue to keep a close eye, to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models”.


Innovation enabler or innovation stifler?

While the AI Act has its supporters, it has also drawn its share of criticism, with many worried that it will stifle AI innovation in the EU. Critics say that the bureaucracy surrounding the act’s implementation will see AI companies spending time and resources on compliance rather than on innovating new AI solutions. They worry that the AI Act will deter investment in the European AI ecosystem and lead to a brain drain, with top talent heading to the US and China.

Meanwhile, experts are also calling on authorities to be pragmatic about AI development and deployment in sectors where AI does not have a direct impact on end-users and consumers. They worry that the act’s focus on general-purpose AI could end up complicating the use of broader AI types in industries such as shipping, heavy engineering, and energy.

“There have been many uses of AI even before the likes of ChatGPT caught the world by surprise. We have AI and Machine Learning being used to optimise voyages, to estimate how much heavy fuel to use vs battery, for different business process automation. These are AI applications derived from the physical environment, not human behaviour,” explains Tal Katzav, General Manager, Machine Learning & Advanced Analytics, at Wärtsilä.

“We may use the data to get in touch with customers, offering them replacement parts using AI-powered predictive maintenance. How will the new AI Act treat these applications? I urge legislators to consider these aspects as well to determine which type of AI falls into what type of category and ensure that the fine print does not inadvertently hamper innovation,” he adds.

What next for the AI Act?

While the AI Act represents the first real attempt to regulate this fast-evolving technology, experts are calling on legislators to ensure the act stands out both for its protection of human freedoms and for its enablement of innovation. For that, more collaboration and dialogue with stakeholders, and more tailor-made provisions, are the only way forward.

“I think we should not use one single recipe to control or enable the vast field of AI. You need to take a closer, sub-domain-specific approach. You need to look at the various technologies existing underneath the AI umbrella today, and then examine the risks and implications for each technology and create legislation accordingly. This is my advice to the legislators. Trying to govern all AI with one simple or complex rule does not make sense,” says Laiho.

“When I read the act, I sense that it has been designed considering tech giants and primarily high-risk AI systems. Yes, I agree that these should be more heavily regulated. But I also think that legislators should actively consult with other types of stakeholders including industries, NGOs, and research institutes,” says Katzav. “We need to put forth clear technical criteria for the different classifications, as well as practical evaluation processes for the different AI systems, and all this needs to be done promptly so that innovation is not hindered.”

The proposed act will have to run the gauntlet of formal adoption by both the European Parliament and the Council of the EU before it becomes law. Will the EU’s AI Act be an effective role model for the world? Only time will tell.

Written by
Nikhil Sivadas
Senior Editor at Spoon Agency