The rapid adoption of AI-driven solutions has led to increased regulatory scrutiny, especially in the European Union. European regulators have intensified oversight to ensure that AI systems are used responsibly in high-risk domains such as banking and other industries that handle sensitive data.
The EU AI Act introduces obligations that affect how organizations design, deploy, and manage AI-powered agents. The European Commission played a central role in drafting and legislating the Act at the EU level, establishing a harmonized regulatory framework for AI across member states.
Using AI tools and platforms such as those provided by Microsoft, AWS, and Google now requires a more rigorous assessment to ensure that your organization is mitigating legal risk and complying with responsible AI regulations.
Some see the EU AI Act as a barrier to AI innovation in Europe. But the goal of the Act is to strike a balance between protecting consumers and supporting innovation. The EU is actively working to make sure the rules are flexible enough to encourage progress, while still being firm enough to keep people safe.
This article covers the key provisions of the EU AI Act and outlines practical steps and considerations for achieving compliance when deploying AI systems.
The EU Artificial Intelligence Act (EU AI Act) sets the world’s first comprehensive regulatory framework for artificial intelligence. Its goal is simple in principle, but complex in practice: to ensure that AI systems deployed in the EU market are safe, transparent, fair, and accountable, especially when used in areas that impact the fundamental rights of natural persons, their wellbeing, or access to essential services. The Act classifies AI systems into different risk categories: unacceptable, high, limited, and minimal risk, each with specific regulatory requirements.
The EU AI Act draws a clear line when it comes to unacceptable risks in artificial intelligence. Certain AI practices are strictly prohibited because they pose significant threats to safety, health, or fundamental rights.
These prohibited AI practices include the use of subliminal techniques or manipulative methods that can distort a person’s behavior or decision-making, especially when such practices exploit vulnerabilities due to age, disability, or socio-economic status.
Another major area of concern is the use of AI systems for social scoring, where individuals are evaluated or ranked based on personal characteristics or behavior in ways that can lead to discrimination or unfair treatment.
The Act also bans the deployment of real-time remote biometric identification systems in publicly accessible spaces, except under very limited circumstances such as specific law enforcement purposes and with strict oversight.
To ensure compliance, organizations must implement a robust quality management system that includes regular risk assessments, audits, and clear documentation of their AI systems and processes.
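To make this more tangible, the sketch below shows one way to model a register entry for an AI system inside a quality management process. It is a minimal, hypothetical Python data model; the field names, risk labels, and review cadence are assumptions for illustration, not a schema prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskCategory(Enum):
    # Risk tiers defined by the EU AI Act
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """Hypothetical register entry for an AI system in a quality management system."""
    name: str
    intended_purpose: str
    risk_category: RiskCategory
    owner: str
    last_risk_assessment: date
    next_audit_due: date
    documentation_links: list[str] = field(default_factory=list)

# Illustrative entry for an internal credit-scoring model
register = [
    AISystemRecord(
        name="credit-risk-scorer",
        intended_purpose="Assess creditworthiness of loan applicants",
        risk_category=RiskCategory.HIGH,
        owner="risk-analytics-team",
        last_risk_assessment=date(2025, 1, 15),
        next_audit_due=date(2025, 7, 15),
        documentation_links=["https://example.internal/docs/credit-risk-scorer"],
    )
]
```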
Building trust in artificial intelligence starts with transparency and accountability - core principles of the EU AI Act.
Organizations must provide clear, accessible information about their AI systems, including details on the data used for training, the algorithms powering the system, and any potential risks or biases that may arise.
This transparency extends to explaining how decisions are made and ensuring that users understand the capabilities and limitations of the AI system.
Accountability is equally important.
The Act requires that AI systems be designed with human oversight in mind, allowing for intervention or correction when necessary. Regular testing, validation, and monitoring are essential to ensure that AI systems function as intended and do not cause unintended harm.
Companies must also have procedures in place to report and address serious incidents, such as errors, biases, or system failures, and must respect individuals’ rights to opt out of automated decision-making processes.
To demonstrate compliance, organizations should implement strong data quality management, ensure model interpretability and explainability, and maintain thorough technical documentation.
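As a minimal sketch of what human oversight and auditability can look like in code, the example below routes low-confidence outputs to a human reviewer and logs every decision. The threshold, field names, and function are illustrative assumptions rather than requirements taken from the Act.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_audit")

# Assumed confidence threshold below which a human reviewer must confirm the outcome
HUMAN_REVIEW_THRESHOLD = 0.80

def decide_with_oversight(case_id: str, model_score: float, model_version: str) -> dict:
    """Escalate low-confidence model outputs to a human and log every decision for audit."""
    needs_review = model_score < HUMAN_REVIEW_THRESHOLD
    decision = {
        "case_id": case_id,
        "model_version": model_version,
        "score": model_score,
        "outcome": "pending_human_review" if needs_review else "auto_processed",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    logger.info(json.dumps(decision))  # auditable record of the automated decision
    return decision

# Usage: a borderline score is routed to a human rather than decided automatically
decide_with_oversight("CASE-2041", model_score=0.72, model_version="credit-model-v3")
```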
Even in the United States, where AI regulation is more fragmented, companies are moving cautiously. LLM-driven mistakes have already prompted regulatory warnings (e.g., from the FTC), lawsuits, and reputational damage.
As a result, even in less regulated markets, organizations are adopting the same layered architecture patterns, balancing generative AI innovation with defensible governance frameworks.
At its core, the EU AI Act is a comprehensive AI law that classifies AI systems based on the level of risk they pose to individuals and society. This risk-based approach determines what obligations apply and to whom. Certain applications are considered high risk under the Act, requiring strict compliance, while many AI systems are classified as minimal risk and are subject to fewer obligations, primarily related to transparency and compliance with existing laws.
The EU AI Act distinguishes between specific-purpose AI systems, designed for a single, well-defined task (like detecting credit card fraud or scoring resumes), and general-purpose AI systems, which can be applied across many domains and tasks. General-purpose AI models are capable of performing a wide range of distinct tasks and may exhibit significant generality, which is a key factor in their regulatory classification under the Act.
The latter includes large language models (LLMs), such as GPT or Gemini, which can generate text, summarize documents, translate languages, and more. The Act introduces additional obligations for these general-purpose systems, especially when they are used in high-risk contexts.
When general-purpose AI systems like LLMs are used in high-risk applications, they are subject to the same full compliance obligations as any other high-risk AI system under the EU AI Act.
Depending on the risk category, different rules and transparency obligations apply. Transparency requirements are especially strict for high-risk and general-purpose AI systems, including clear disclosure and labeling of AI-generated content to ensure user awareness and regulatory compliance.
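For example, one simple way to surface that disclosure is to label every piece of AI-generated content before it reaches the user. The helper below is a hedged sketch with invented names; the exact disclosure wording and delivery mechanism should follow your own product and legal guidance.

```python
def label_ai_generated(text: str, model_name: str) -> dict:
    """Wrap AI-generated content with an explicit disclosure for the end user."""
    return {
        "content": text,
        "ai_generated": True,
        "disclosure": f"This content was generated by an AI system ({model_name}).",
    }

# Usage: wrap the raw model output before rendering it in a UI or API response
response = label_ai_generated("Summary of your application: ...", model_name="gpt-4o")
```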
Understanding your role in the AI supply chain is essential to determine how compliance applies.
Whether you’re building internal copilots, customer-facing assistants, or multi-agent orchestration systems, AI agents are squarely in the scope of the EU AI Act. The classification of an AI system depends not on the technology stack itself, but on:
In short, it’s not the fact that you’re using an LLM or a Copilot that defines your obligations; it’s what the agent is doing, and how you’ve designed its guardrails.
Whether you're developing AI systems on Microsoft Azure, AWS, Google Cloud, or another platform, the EU AI Act introduces key requirements that apply regardless of the vendor.
The EU AI Act represents a milestone toward real accountability in how AI is used. AI systems are consistently getting more autonomous, and more deeply embedded into everyday workflows.
Naturally, regulators are stepping in with clear expectations: you need to be able to explain how your AI works, show that it behaves fairly, and step in when (ideally before) something goes off track. The Act also aims to protect democratic processes from undue influence by AI systems.
It is important to note that AI systems developed for the sole purpose of scientific research are generally exempt from the Act’s requirements.
The EU AI Act does not ban the use of large language models (LLMs), general-purpose AI models, or advanced AI systems in high-risk domains. It allows a broad spectrum of technologies, including:
A central tool for compliance is the Code of Practice, which helps organizations meet the key provisions of the Act.
What the EU AI Act makes clear is this: the more complex, autonomous, or opaque your system is, the greater the governance burden.
Market surveillance authorities and other national competent authorities are responsible for enforcing compliance with the EU AI Act.
In industries such as finance, employment, education, healthcare, and medical devices, where decisions can directly impact people’s fundamental rights or wellbeing, the reality is:
Classic ML models are often used in production to drive core decisions because:
LLMs and generative models are typically reserved for:
AI agents, orchestrated via platforms like Azure AI Foundry or similar, are used to:
In a loan approval workflow, an AI agent might retrieve financial data from internal banking systems, call an LLM to summarize an applicant’s income statement, trigger an ML model to assess credit risk, and then route the case to a human underwriter if the score falls within a grey zone. Each step, from data retrieval to the final decision, is logged, monitored, and governed.
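A simplified sketch of that workflow is shown below. All helper functions, data values, and the grey-zone thresholds are hypothetical placeholders for real integrations with banking systems, an LLM, and a validated credit model.

```python
# A minimal sketch of the loan approval workflow described above.
# Every helper and threshold here is an illustrative stand-in, not a real integration.

GREY_ZONE = (0.40, 0.60)  # assumed score band that requires a human underwriter

def fetch_financial_data(applicant_id: str) -> dict:
    # Placeholder: would call internal banking systems
    return {"applicant_id": applicant_id, "income": 52000, "liabilities": 8000}

def summarize_income_statement(financials: dict) -> str:
    # Placeholder: would call an LLM to summarize the applicant's documents
    return f"Annual income {financials['income']}, liabilities {financials['liabilities']}."

def score_credit_risk(financials: dict) -> float:
    # Placeholder: would invoke a validated, monitored ML credit model
    return round(financials["liabilities"] / max(financials["income"], 1), 2)

def process_loan_application(applicant_id: str) -> dict:
    financials = fetch_financial_data(applicant_id)
    summary = summarize_income_statement(financials)
    risk_score = score_credit_risk(financials)

    if GREY_ZONE[0] <= risk_score <= GREY_ZONE[1]:
        outcome = "routed_to_human_underwriter"  # grey zone: a human makes the call
    elif risk_score < GREY_ZONE[0]:
        outcome = "approved"
    else:
        outcome = "declined"

    # Every step, from data retrieval to the final decision, should be logged and monitored
    return {"applicant_id": applicant_id, "score": risk_score, "summary": summary, "outcome": outcome}

print(process_loan_application("A-1024"))
```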
To ensure compliance in high-risk sectors such as medical devices, organizations must undergo conformity assessment with a notified body designated in their member state.
Deploying LLMs without supporting systems can pose some challenges:
That’s why, in practice, no responsible organization deploys an LLM as the sole engine for decision-making in regulated use cases.
To address these risks while still benefiting from AI, whether the systems involved are high-risk or fall into another category, most enterprises adopt hybrid architectures that combine:
Composite systems built with such hybrid architectures can help mitigate the systemic risks associated with high-impact AI models, as they allow for better monitoring, evaluation, and control of broad, potentially harmful effects.
Such an architecture can cover both the technical demands of real-world use cases and the regulatory requirements of the EU AI Act. The Act also provides support measures for start-ups to help them meet those requirements.
A compliant, real-world implementation might look like this:
This setup lets each system do what it does best (LLMs for language processing, ML for risk scoring, and AI agents for orchestration) while meeting audit, transparency, and governance requirements.
A common misconception is that small failures in AI systems, like a hallucinated explanation or a misinterpreted document, don’t carry much risk. However, even minor issues can lead to:
Responsible companies should treat these incidents as compliance risks that require technical and operational mitigation.
A sandbox is a safe, isolated testing environment, often powered by synthetic data, that allows teams to build and test minimal-risk, limited-risk, and high-risk AI systems without touching production systems or exposing sensitive information.
This is especially important under the EU AI Act, where even a proof of concept involving real data could trigger regulatory obligations.
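As an illustration, a sandbox run can rely entirely on synthetic transaction data so that no real customer information is ever exposed. The generator below is a small sketch using only Python's standard library; the field names and value ranges are invented for the example.

```python
import random
import uuid
from datetime import datetime, timedelta, timezone

def generate_synthetic_transactions(n: int, seed: int = 42) -> list[dict]:
    """Create fake transactions for sandbox testing; no real customer data is involved."""
    rng = random.Random(seed)
    start = datetime(2025, 1, 1, tzinfo=timezone.utc)
    return [
        {
            "transaction_id": str(uuid.UUID(int=rng.getrandbits(128))),
            "account_id": f"ACC-{rng.randint(10000, 99999)}",
            "amount": round(rng.uniform(5.0, 5000.0), 2),
            "currency": "EUR",
            "timestamp": (start + timedelta(minutes=rng.randint(0, 60 * 24 * 90))).isoformat(),
        }
        for _ in range(n)
    ]

# Feed synthetic data into the sandboxed AI agent instead of production records
sample = generate_synthetic_transactions(100)
```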
What Makes a Sandbox Valuable:
Enterprise Benefits:
A good example is ANZ’s BlueSpace sandbox, developed in collaboration with Virtusa using our parent company’s Open Innovation Platform (OIP). It enables banks and partners to co-create, test AI agents, and evaluate models with synthetic transaction data, without the delays or risks of full integration, and while avoiding prohibited AI practices.
When to Use It:
Sandboxing gives tech teams a safe space to test and explore AI ideas without getting tangled in compliance risk. It’s a practical way to validate what works, fix what doesn’t, and build with confidence before rolling anything out in production.
The EU AI Act is changing how organizations build and use AI, especially when it’s used in areas that affect people’s lives, like finance, HR, or healthcare. It’s not just about what your AI can do, but how it’s built, how decisions are made, and how risks are managed.
If you’re using tools like LLMs, ML models, or AI agents, it’s important to understand how the rules apply and to design your systems with those requirements in mind from the start.
If you need support building AI solutions that meet these requirements, reach out. Our team of AI experts can guide you through the entire process including how to ensure EU AI Act compliance for your AI.