What business leaders should know about EU AI Act compliance

AI compliance expert Arnoud Engelfriet shares key takeaways from his book 'AI and Algorithms,' describing the EU AI Act's effects on innovation, risk management and ethical AI.

The landmark EU AI Act comprises a complex array of sections, definitions, guidelines and rubrics, making it challenging to navigate. But understanding the AI Act is essential for organizations looking to innovate with AI while complying with both legal and ethical standards.

Arnoud Engelfriet is the chief knowledge officer at ICTRecht, an Amsterdam-based legal services firm specializing in IT, privacy, security, and algorithms and data law. In his role running the ICTRecht Academy, Engelfriet is responsible for disseminating and deepening knowledge related to AI legislation.

In his book AI and Algorithms: Mastering Legal and Ethical Compliance, published by Technics Publications, Engelfriet explores AI legislation -- the AI Act included -- as part of the larger conversation around ethical AI development, management and use.

The introduction of new AI guidelines often raises concerns: Will legislation stifle creativity? Do teams have the necessary skills to ensure compliance? To answer these questions, organizations must understand current and upcoming legislation so that they can build and deploy more trustworthy AI systems.

Compliance and innovation

As of August 2024, the much-anticipated AI Act is in force. With tiered implementation dates ranging from six months to three years after that date, organizations still have time to understand what exactly compliance under the act entails.

A common concern among businesses is that the legislation might stifle creativity, especially given the fast pace of AI development.

"Compliance and innovation have always been somewhat at odds," Engelfriet said.

However, he noted that the AI Act's tiered approach and flexibility leave room for markets to tailor compliance requirements in some cases. "We don't see the AI Act as something that's going to abort or cancel all kinds of AI innovations," he said.

For example, the act's guidelines for regulatory sandboxes provide a space for organizations to build and test new AI systems safely, away from the market and end users. The key requirement is that the technology being tested is not yet in production.

"It's going to be slower than before, but at the same time, it's going to be a little safer for your customers, for the environment," he said.

Ensuring trustworthy AI

The AI Act, like many AI guidelines designed for consumer safety, aims to make AI more trustworthy. But what does "trustworthy AI" really mean?

The term gained prominence in 2019, when the European Commission's High-Level Expert Group on AI published its Ethics Guidelines for Trustworthy AI. Although the exact definition remains somewhat ambiguous, the act outlines three main characteristics of trustworthy AI, Engelfriet said: It must be legal, technically robust and ethical.

However, Engelfriet emphasized that trust is ultimately placed in the humans behind the AI system, not the technology itself. "You cannot really trust a machine," he said. "You can only trust the designers and the operators."

The AI Act addresses the legal aspect by consolidating laws and guidelines in one place. It accounts for technical robustness -- an AI system's ability to operate reliably within its intended use case -- by requiring transparency about what the system is designed to do, such as making automated decisions or functioning as a chatbot, and by requiring that it perform consistently from a technical standpoint.

Ethics, the final aspect of trustworthy AI, has gained increasing attention since the rise of generative AI in late 2022. One 2023 study analyzed more than 200 AI ethics guidelines, highlighting the field's fragmented approach. Ethics guidelines aim to curb the many risks associated with AI, from data protection -- often linked with GDPR compliance -- to bias prevention and safety concerns. Ethical compliance means ensuring that AI systems do not perpetuate bias or cause physical harm, Engelfriet said.

The Assessment List for Trustworthy AI, also developed by the expert group, provides a practical framework for ethical AI guidelines. Although the framework is generic enough to apply across industries, Engelfriet cautioned that it will likely need to be adapted to specific organizational needs.

The AI compliance officer

With multiple iterations of legislation, complicated regulatory requirements and a vast amount of information to consider, it's easy for compliance teams to feel overwhelmed by AI initiatives. To meet the growing need for multifaceted compliance, AI compliance officers can help organizations build AI systems or integrate AI into their workflows, Engelfriet said.

"We see a lot of people struggling ... people working with earlier drafts of the act, for example," Engelfriet said. Companies might also struggle with the fine print or have difficulty deciphering where their organization and its AI use fit into the AI Act's tiered rubric.

To this end, ICTRecht created a course to complement AI and Algorithms, designed to teach employees how to integrate AI compliance into their business. The course is accessible to anyone; no knowledge of AI compliance, the AI Act or AI in general is required. Engelfriet sees people with a variety of job roles in the classroom, with many course participants coming from data, privacy and risk functions.

The common thread? "They want to expand into AI compliance -- which is a good thing, because ... AI compliance is more than data protection," Engelfriet said.

Overall, the AI Act sets the tone for future AI regulations, Engelfriet said. As seen with GDPR, early legislation is often the most influential, so businesses would do well to approach the EU AI Act proactively and comprehensively.

Click here for an excerpt from Chapter 2 of AI and Algorithms that discusses the main parts of the EU AI Act, including important definitions, risk tiers and related legislation.

Olivia Wisbey is associate site editor for TechTarget Enterprise AI. She graduated with Bachelor of Arts degrees in English literature and political science from Colgate University, where she served as a peer writing consultant at the university's Writing and Speaking Center.
