Complying with the EU AI Act while accelerating innovation: an interview with AI translator Elena Nuñez Castellar
Can you provide a brief overview of the EU AI Act? What is its purpose?
Before I can answer this question, I need to take a step back and delve into the origins of the EU AI Act. Before there was any regulatory framework, discussions focused on the potential of AI on the one hand and the threats it poses on the other. In 2018, the European Commission established the Artificial Intelligence High-Level Expert Group (AI HLEG), which published the influential document ‘A definition of AI: main capabilities and disciplines’. In it, the group proposed a definition to guide the development of AI ethics guidelines and policy recommendations. In 2019, the expert group then presented its Ethics Guidelines for Trustworthy AI.
These Ethics Guidelines outline key requirements that AI systems must meet to be considered trustworthy. However, as guidelines, they carry no legal consequences for non-compliance; they function more like a code of conduct. So, in 2019, the European Commission took a step further toward introducing legislation that would apply to any organization or company putting AI tools into service within the European Union, backed by fines for non-compliance.
Then, in 2021, to ensure the safe, responsible, and reliable development and deployment of AI, the European Commission proposed the first EU-wide regulatory framework for AI: the AI Act. This framework is currently in the final phase of the European legislative procedure, and its enactment is expected in early 2024. Of course, a transition period will be provided to give organizations sufficient time to comply.
What impact will the AI Act have on organizations?
Any organization that offers AI-related products or services in the EU market, regardless of its headquarters’ location, must ensure compliance with the EU AI Act. The fines for non-compliance are substantial, surpassing those stipulated by the GDPR (General Data Protection Regulation). It’s important to note that the EU AI Act imposes obligations not only on providers of AI systems, but also on users and third parties, such as distributors.
The AI Act also classifies AI systems according to their level of risk, with different rules and regulations applying depending on the risk level. Furthermore, certain practices are expressly prohibited when an AI system poses an unacceptable risk to the safety or fundamental rights of individuals: for example, manipulating human behavior, exploiting vulnerable individuals, or using social scoring for decisions such as employment.
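To make that tiered approach concrete, here is a minimal sketch (not part of the interview, and not legal advice) of how an organization might tag the systems in its portfolio by risk tier. The four tier names reflect how the Act's risk levels are commonly summarized; the example systems and their mapping are purely illustrative assumptions, since real classification requires legal analysis of each system's intended purpose.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers commonly used to summarise the AI Act's approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # strict obligations: conformity assessment, logging, oversight
    LIMITED = "limited"            # transparency obligations, e.g. disclosing a chatbot
    MINIMAL = "minimal"            # no additional obligations beyond existing law


# Illustrative mapping only -- a hypothetical portfolio, not a legal assessment.
EXAMPLE_PORTFOLIO = {
    "behavioural-manipulation tool": RiskTier.UNACCEPTABLE,
    "CV-screening system": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def prohibited_systems(portfolio: dict[str, RiskTier]) -> list[str]:
    """Return the systems in a portfolio that fall in the prohibited tier."""
    return [name for name, tier in portfolio.items() if tier is RiskTier.UNACCEPTABLE]


if __name__ == "__main__":
    print(prohibited_systems(EXAMPLE_PORTFOLIO))  # ['behavioural-manipulation tool']
```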
Why is an Ethics Board crucial for organizations working with AI?
Often, an organization’s management primarily focuses on the benefits of AI for business capabilities. However, it’s essential to recognize that AI also brings risks. An Ethics Board plays a vital role in reflecting on these risks, identifying the responsibilities the organization has, and addressing other crucial points. The requirements for trustworthy AI outlined by the Artificial Intelligence High-Level Expert Group serve as an important guideline in these discussions. For example, regarding transparency, it’s important that people are aware when they’re interacting with a chatbot. Additionally, the Ethics Board must ensure that AI systems don’t compromise on diversity, non-discrimination, and fairness: due to insufficient training data, for instance, AI models can amplify human biases and pose a risk for minorities.
Who should be involved in an Ethics Board?
How does an Ethics Board influence decision-making in an organization?
Are there tools available to assist organizations in evaluating their AI systems?
Yes, there are several tools available. One particularly noteworthy tool is capAI, which was developed by researchers at the University of Oxford to provide an independent, comparable, quantifiable, and accountable assessment of AI systems that conforms with the proposed EU AI Act regulation. Many other tools, created solely by ethicists, offer rigid advice without considering the complexity of the issues involved. By contrast, capAI distinguishes itself by having an interdisciplinary team behind it. As a result, it maintains a balanced approach that takes ethical, technical, and business aspects into account. It’s a pragmatic tool developed in collaboration with business schools, and it’s explained in plain language for people with varying levels of technical expertise.
When designing an AI system, it’s also essential to raise questions at each stage of its life cycle. There are five key stages in an AI tool’s life cycle: design, development, evaluation, operation, and retirement. Throughout each of these stages, capAI ensures adherence to the principles set forth in the AI Act, incorporating input from all stakeholders, including the business owner, the technical lead, and end users.
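As an illustration of that per-stage discipline, here is a small sketch of a life-cycle checklist that tracks stakeholder sign-off for each of the five stages. It is not capAI itself; the stakeholder roles and the sign-off logic are assumptions made for the example.

```python
from dataclasses import dataclass, field

# The five life-cycle stages mentioned in the interview.
STAGES = ["design", "development", "evaluation", "operation", "retirement"]

# Hypothetical stakeholder roles; capAI's actual questionnaire differs.
STAKEHOLDERS = ["business owner", "technical lead", "end users"]


@dataclass
class StageReview:
    stage: str
    # Stakeholders who have signed off on this stage's checks.
    signed_off: set[str] = field(default_factory=set)

    def is_complete(self) -> bool:
        return set(STAKEHOLDERS) <= self.signed_off


def open_reviews(reviews: list[StageReview]) -> list[str]:
    """List the stages that still need stakeholder sign-off."""
    return [r.stage for r in reviews if not r.is_complete()]


if __name__ == "__main__":
    reviews = [StageReview(stage) for stage in STAGES]
    reviews[0].signed_off = {"business owner", "technical lead", "end users"}
    print(open_reviews(reviews))  # every stage except 'design'
```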
How can UMANIQ assist companies with AI systems, and what sets them apart?
UMANIQ offers an EU AI Act (AIA) readiness track, which guides your organization toward compliance for your AI systems. We analyze the associated risks and determine your obligations based on the AI systems in your portfolio. We also provide a final report with recommendations, covering technical considerations such as which logs you need to store, because organizations remain responsible for monitoring their AI tools once they are deployed.
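As a hedged illustration of what such record-keeping could look like in practice, the sketch below appends one JSON record per prediction using Python's standard logging module. The field names (timestamp, model version, request ID, inputs, output) are assumptions chosen for the example, not the Act's actual record-keeping requirements.

```python
import json
import logging
from datetime import datetime, timezone

# Plain stdlib logging to a JSON-lines file, so each prediction stays traceable.
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_audit_log.jsonl"))


def log_prediction(model_version: str, request_id: str, inputs: dict, output: str) -> None:
    """Append one record per prediction so it can be reviewed after deployment."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "request_id": request_id,
        "inputs": inputs,
        "output": output,
    }
    logger.info(json.dumps(record))


# Example call with made-up values:
log_prediction("credit-scorer-1.4.2", "req-001", {"income": 42000}, "approved")
```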
Our approach stands out because it is pragmatic and holistic. We firmly believe in an interdisciplinary approach: we combine our technical expertise in data and AI with a comprehensive understanding of the EU AI Act and substantial experience in innovation management. The latter is particularly important in this domain. Many people erroneously assume that the EU AI Act will slow down innovation in AI, but this misconception arises from viewing the Act purely from a legal perspective, as a set of restrictions. At Sparkle, we strive to develop AI solutions that align with the innovation cycle, from initial experimentation to production and deployment. As AI translators, we build bridges between the various stakeholders, translating business needs into practical AI solutions.

Curious about the impact of the AI Act on your business?
