Making AI Work Under the EU AI Act: Practical Steps and Proven Patterns

Maciej Gos, Chief Architect & Team Leader
Ștefan Spiridon, Content Marketing Specialist

The rapid adoption of AI-driven solutions has led to increased regulatory scrutiny, especially in the European Union. Regulators have intensified oversight to ensure that AI systems are used responsibly in high-risk domains such as banking and other industries that handle highly sensitive data.

The EU AI Act introduces obligations that affect how organizations design, deploy, and manage AI-powered agents. The European Commission played a central role in drafting the Act, which establishes a harmonized regulatory framework for AI across member states.

Using AI tools and platforms like those provided by Microsoft, AWS, and Google now requires a more rigorous assessment to ensure that your organization mitigates legal risk and complies with safe and responsible AI regulations.

Some see the EU AI Act as a barrier to AI innovation in Europe. But the goal of the Act is to strike a balance between protecting consumers and supporting innovation. The EU is actively working to make sure the rules are flexible enough to encourage progress, while still being firm enough to keep people safe.

This article covers the key provisions of the EU AI Act and outlines practical steps and considerations to help you understand what compliance requires when deploying AI systems.

Understanding the EU AI Act and Its Impact on AI Agents

The EU Artificial Intelligence Act (EU AI Act) sets out the world’s first comprehensive regulatory framework for artificial intelligence. Its goal is simple in principle but complex in practice: to ensure that AI systems deployed in the EU market are safe, transparent, fair, and accountable, especially when used in areas that impact the fundamental rights of natural persons, their wellbeing, or access to essential services. The Act classifies AI systems into four risk categories: unacceptable, high, limited, and minimal risk, each with specific regulatory requirements.

Prohibited AI Practices: What You Absolutely Cannot Do

The EU AI Act draws a clear line when it comes to unacceptable risks in artificial intelligence. Certain AI practices are strictly prohibited because they pose significant threats to safety, health, or fundamental rights.

These prohibited AI practices include the use of subliminal techniques or manipulative methods that can distort a person’s behavior or decision-making, especially when such practices exploit vulnerabilities due to age, disability, or socio-economic status.

Another major area of concern is the use of AI systems for social scoring, where individuals are evaluated or ranked based on personal characteristics or behavior in ways that can lead to discrimination or unfair treatment.

The Act also bans the deployment of real-time remote biometric identification systems in publicly accessible spaces, except under very limited circumstances such as specific law enforcement purposes and with strict oversight.

To ensure compliance, organizations must implement a robust quality management system that includes regular risk assessments, audits, and clear documentation of their AI systems and processes.

Transparency and Accountability: Building Trust into Your AI Projects

Building trust in artificial intelligence starts with transparency and accountability - core principles of the EU AI Act.

Organizations must provide clear, accessible information about their AI systems, including details on the data used for training, the algorithms powering the system, and any potential risks or biases that may arise.

This transparency extends to explaining how decisions are made and ensuring that users understand the capabilities and limitations of the AI system.

Accountability is equally important.

The Act requires that AI systems be designed with human oversight in mind, allowing for intervention or correction when necessary. Regular testing, validation, and monitoring are essential to ensure that AI systems function as intended and do not cause unintended harm.

Companies must also have procedures in place to report and address serious incidents, such as errors, biases, or system failures, and must respect individuals’ rights to opt out of automated decision-making processes.

To demonstrate compliance, organizations should implement strong data quality management, ensure model interpretability and explainability, and maintain thorough technical documentation.
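
To make these requirements more concrete, here is a minimal sketch of how a team might document feature-level interpretability for a simple, auditable model. The dataset, feature names, and model choice below are illustrative assumptions, not a prescribed method.

```python
# Illustrative only: a simple, auditable model whose behavior can be summarized
# with feature-level importances and recorded in the technical documentation.
# The data, feature names, and model choice are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Permutation importance gives a model-agnostic view of which inputs drive decisions;
# the output can be logged alongside model version and training data lineage.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```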

Lessons from the U.S.: Less Regulation, Same Caution

Even in the United States, where AI regulation is more fragmented, companies are moving cautiously. LLM-driven mistakes have already prompted regulatory warnings (e.g., from the FTC), lawsuits, and reputational damage.

As a result, even in less regulated markets, organizations are adopting the same layered architecture patterns, balancing generative AI innovation with defensible governance frameworks.

A Risk-Based Approach

At its core, the EU AI Act is a comprehensive AI law that classifies AI systems based on the level of risk they pose to individuals and society. This risk-based approach determines what obligations apply and to whom. Certain applications are considered high risk under the Act, requiring strict compliance, while many AI systems are classified as minimal risk and are subject to fewer obligations, primarily related to transparency and compliance with existing laws.

Breakdown of the examples and requirements for each type of risk as described in the EU AI Act

The EU AI Act distinguishes between specific-purpose AI systems, designed for a single, well-defined task (like detecting credit card fraud or scoring resumes), and general-purpose AI systems, which can be applied across many domains and tasks. General-purpose AI models are capable of performing a wide range of distinct tasks and may exhibit significant generality, which is a key factor in their regulatory classification under the Act.

The latter includes large language models (LLMs), such as GPT or Gemini, which can generate text, summarize documents, translate languages, and more. The Act introduces additional obligations for these general-purpose systems, especially when they are used in high-risk contexts.

When general-purpose AI systems like LLMs are used in high-risk applications, they are subject to the same full compliance obligations as any other high-risk AI system under the EU AI Act.

Depending on the risk category, different rules and transparency obligations apply. Transparency requirements are especially strict for high-risk and general-purpose AI systems, including clear disclosure and labeling of AI-generated content to ensure user awareness and regulatory compliance.

Understanding your role in the AI supply chain is essential to determine how compliance applies.

What This Means for AI Systems

Whether you’re building internal copilots, customer-facing assistants, or multi-agent orchestration systems, AI agents are squarely in the scope of the EU AI Act. The classification of an AI system depends not on the technology stack itself, but on:

  • Purpose of the agent: Tasks that affect individuals' rights or access to services, like HR automation, are typically high-risk. In contrast, support functions such as generating marketing content are considered low-risk.
  • Type of data processed: Systems handling sensitive personal data (e.g., health or biometric info) are higher risk than those using only public or non-sensitive data.
  • Level of autonomy and impact: AI systems that make decisions independently are subject to stricter rules than those that merely provide suggestions or assist humans.
  • Safeguards in place: Agents with built-in oversight mechanisms—like human review, audit trails, and fallback options—are treated as lower risk compared to those without such controls.

In short, it’s not the fact that you’re using an LLM or a Copilot that defines your obligations; it’s what the agent is doing, and how you’ve designed its guardrails.
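
As a rough illustration of how these factors could feed an internal triage step, here is a hypothetical sketch. The categories and rules are simplified assumptions for a first-pass checklist, not the Act’s legal classification logic.

```python
# Hypothetical internal triage helper: maps the factors above to a provisional
# risk tier for review by legal and compliance teams. Categories and rules are
# illustrative only and do not reproduce the Act's classification criteria.
from dataclasses import dataclass

@dataclass
class AgentProfile:
    affects_rights_or_essential_services: bool  # e.g. HR screening, loan decisions
    processes_sensitive_data: bool              # e.g. health or biometric data
    decides_autonomously: bool                  # acts without a human in the loop
    has_oversight_controls: bool                # human review, audit trail, fallback

def provisional_risk_tier(agent: AgentProfile) -> str:
    if agent.affects_rights_or_essential_services or agent.processes_sensitive_data:
        # Autonomy without safeguards pushes the assessment up, never down.
        if agent.decides_autonomously and not agent.has_oversight_controls:
            return "high (review urgently)"
        return "high"
    if agent.decides_autonomously:
        return "limited"
    return "minimal"

# Example: a marketing-copy assistant with a human approving every output.
print(provisional_risk_tier(AgentProfile(False, False, False, True)))  # -> "minimal"
```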

Implications for Cloud Ecosystem Users

Whether you're developing AI systems on Microsoft Azure, AWS, Google Cloud, or another platform, the EU AI Act introduces key requirements that apply regardless of the vendor.

  1. Architecture must align with risk level - Low-risk agents (e.g., summarizing a document or routing a ticket) can be lightweight. High-risk AI systems (e.g., scoring a loan or screening CVs) must include explainability, audit logs, human override, and bias mitigation.
  2. Governance must be continuous - Compliance isn’t static. If an AI system takes on more autonomy or shifts into a decision-making role, it may cross into a new risk category. This requires regular re-assessment and updates to your governance stack, as sketched below.
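
A hedged sketch of what a recurring re-assessment hook might look like in practice; the field names and the 90-day cadence are assumptions, not requirements from the Act.

```python
# Illustrative re-assessment trigger: flags an agent for review when its provisional
# risk tier has drifted from the one recorded at the last review, or when the review
# is simply overdue. Cadence and field names are assumptions.
from datetime import datetime, timedelta, timezone

REVIEW_INTERVAL = timedelta(days=90)

def needs_reassessment(recorded_tier: str, current_tier: str,
                       last_reviewed: datetime) -> bool:
    tier_changed = recorded_tier != current_tier
    review_overdue = datetime.now(timezone.utc) - last_reviewed > REVIEW_INTERVAL
    return tier_changed or review_overdue

# Example: an assistant that started as "limited" but now makes decisions on its own.
print(needs_reassessment("limited", "high",
                         datetime(2025, 1, 10, tzinfo=timezone.utc)))  # -> True
```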

Why It Matters

The EU AI Act represents a milestone toward real accountability in how AI is used. AI systems are consistently getting more autonomous, and more deeply embedded into everyday workflows.

Naturally, regulators are stepping in with clear expectations: you need to be able to explain how your AI works, show that it behaves fairly, and step in when (ideally before) something goes off track. The Act also aims to protect democratic processes from undue influence by AI systems.

It is important to note that AI systems developed for the sole purpose of scientific research are generally exempt from the Act’s requirements.

What AI Compliance Looks Like Under the EU AI Act

The EU AI Act does not ban the use of large language models (LLMs), general purpose AI models, or advanced AI systems in high-risk domains. It allows a broad spectrum of technologies, including:

  • Traditional machine learning models (e.g., logistic regression, decision trees, gradient boosting) that are deterministic, auditable, and easier to explain.
  • General-purpose AI systems, such as transformer-based general purpose AI models like OpenAI’s GPT models, Google’s Gemini series, and others, as long as they are governed under strict compliance controls.
  • Full-fledged AI multi-agent systems, which you can build using orchestration platforms like Azure AI Foundry, and which integrate data sources, apply policy logic, and manage end-to-end workflows.

A central tool for compliance is the Code of Practice, which helps organizations meet the key provisions of the Act.

What the EU AI Act establishes is this: the more complex, autonomous, or opaque your system is, the greater the governance burden.

How does it all work in practice for providers of high-risk AI systems? Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Market surveillance authorities and national competent authorities are responsible for enforcing compliance with the EU AI Act in each member state.

Real-World Expectations in High-Risk Use Cases

In industries such as finance, employment, education, healthcare, and medical devices, where decisions can directly impact people’s fundamental rights or wellbeing, the reality is:

Classic ML models are often used in production to drive core decisions because:

  • Their behavior is explainable (“this CV scored 78% based on these 5 weighted features”)
  • Bias and fairness frameworks are well established
  • They avoid hallucinations and produce consistent outputs

LLMs and generative models are typically reserved for:

  • Document summarization
  • Natural language interaction
  • Drafting explanations or generating insights

AI agents, orchestrated via platforms like Azure AI Foundry or similar, are used to:

  • Coordinate multiple systems and services (e.g. pulling customer data from a CRM, retrieving financial records from internal databases)
  • Apply business logic, rules, and human review checkpoints
  • Log decisions and manage auditability end-to-end

In a loan approval workflow, an AI agent might retrieve financial data from internal banking systems, call an LLM to summarize an applicant’s income statement, trigger an ML model to assess credit risk, and then route the case to a human underwriter if the score falls within a grey zone. Each step, from data retrieval to the final decision, is logged, monitored, and governed.
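
A simplified sketch of that workflow is shown below. Every function is a hypothetical stand-in: in a real deployment they would call internal banking systems, an LLM endpoint, a trained credit model, and a case-management tool respectively.

```python
# Simplified, illustrative sketch of the loan workflow described above.
# All helper functions are hypothetical stand-ins for real integrations.
import json
import logging

logging.basicConfig(level=logging.INFO)
GREY_ZONE = (0.45, 0.65)  # illustrative score band that always goes to a human

def fetch_financials(applicant_id):      # stand-in for internal banking systems
    return {"income": 54_000, "debt": 12_000, "late_payments": 1}

def summarize_statement(financials):     # stand-in for an LLM summarization call
    return f"Income {financials['income']}, debt {financials['debt']}."

def score_creditworthiness(financials):  # stand-in for a deterministic ML model
    return max(0.0, min(1.0, 0.8 - 0.3 * financials["late_payments"] / 5))

def audit_log(event, applicant_id, payload):
    # Each step is recorded so the decision can be reconstructed later.
    logging.info(json.dumps({"event": event, "applicant": applicant_id, "payload": payload}))

def process_application(applicant_id):
    financials = fetch_financials(applicant_id)
    summary = summarize_statement(financials)
    score = score_creditworthiness(financials)
    audit_log("risk_scored", applicant_id, score)

    if GREY_ZONE[0] <= score <= GREY_ZONE[1]:
        decision = "routed_to_underwriter"   # human review checkpoint
    else:
        decision = "approved" if score > GREY_ZONE[1] else "declined"
    audit_log("decision_recorded", applicant_id, {"decision": decision, "summary": summary})
    return decision

print(process_application("APP-001"))
```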

To ensure compliance in high-risk sectors such as medical devices, organizations must work with the notified bodies designated by the notifying authority in their member state.

Why LLM-Only Pipelines Raise Red Flags

Deploying LLMs without supporting systems poses several challenges:

  • Lack of explainability – LLMs cannot consistently justify outputs in a way regulators accept.
  • Hallucination risk – Even grounded prompts may return inaccurate or misleading information.
  • Traceability challenges – Prompt-driven logic is not as easily documented or reproducible as ML scoring models.
  • Legal exposure – If a user challenges an AI-driven decision (e.g., loan rejection), it's extremely difficult to reconstruct the “why” using only an LLM.

That’s why, in practice, no responsible organization deploys an LLM as the sole engine for decision-making in regulated use cases.

Composite AI Systems Are the Real-World Standard

To address these risks while still benefiting from AI, whether their systems are high-risk or fall into another category, most enterprises adopt hybrid architectures that combine:

  • LLMs for unstructured tasks (e.g., text extraction, user interaction, rationale generation)
  • ML models for decision scoring (e.g., credit risk models, CV matching)
  • AI agents to orchestrate workflows, integrate compliance checks, manage escalations, and ensure end-to-end observability

Composite systems built with such hybrid architectures can help mitigate the systemic risks associated with high-impact AI models, as they allow for better monitoring, evaluation, and control of broad, potentially harmful effects.

This is one example of an architecture that covers both the technical demands of real-world use cases and the regulatory requirements of the EU AI Act. The Act also provides support for start-ups to help them meet these requirements.

Example: Modular AI System for Loan Eligibility

A compliant, real-world implementation might look like this:

  • Step 1: A general purpose AI model like GPT extracts structured fields from PDF income statements and transaction logs.
  • Step 2: A trained ML model evaluates creditworthiness based on applicant history and financial metrics.
  • Step 3: An orchestrated AI system (built in Azure AI Foundry) applies internal policies, triggers fraud checks, and decides whether to route for human review.
  • Step 4: GPT generates a plain-language explanation for the applicant or underwriter.
  • Step 5: A loan officer reviews and finalizes the outcome, with all actions logged for compliance.

This setup allows each system to do what it does best: LLMs for language processing, ML for risk scoring, and AI agents for orchestration, while meeting audit, transparency, and governance requirements.
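
One guardrail worth highlighting for Step 1: the LLM’s extraction output can be validated against a fixed schema before it ever reaches the credit model, so malformed or hallucinated fields fail loudly instead of silently influencing a decision. The sketch below is illustrative; the field names are assumptions.

```python
# Illustrative validation gate between the LLM extraction step and the scoring model.
# Field names are hypothetical.
import json

REQUIRED_FIELDS = {"monthly_income": float, "monthly_debt": float, "employer": str}

def parse_extraction(llm_output: str) -> dict:
    data = json.loads(llm_output)  # raises if the model returned non-JSON text
    validated = {}
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing required field: {field}")
        try:
            validated[field] = expected_type(data[field])  # coerce to the expected type
        except (TypeError, ValueError):
            raise ValueError(f"field {field} has unexpected value: {data[field]!r}")
    return validated

# A well-formed extraction passes; anything else is rejected before scoring.
print(parse_extraction('{"monthly_income": 4500, "monthly_debt": 900, "employer": "Acme"}'))
```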

The Real Risk of “Minor” AI Mistakes

A common misconception is that small failures in AI systems, like a hallucinated explanation or a misinterpreted document, don’t carry much risk. However, even minor issues can lead to:

  • Regulatory investigations and audits
  • Legal complaints or discrimination claims
  • Internal rollout freezes or rework cycles
  • Reputational damage and loss of user trust

Responsible companies should treat such incidents as compliance risks that require technical and operational mitigation.

Why Sandboxing Is So Important for AI Innovation Under the EU AI Act

A sandbox is a safe, isolated testing environment, often powered by synthetic data, that allows teams to build and test minimal-risk, limited-risk, and high-risk AI systems without touching production systems or exposing sensitive information.

This is especially important under the EU AI Act, where even a proof of concept involving real data could trigger regulatory obligations.

What Makes a Sandbox Valuable:

  • Synthetic data that mimics real-world structures without privacy risk - this helps you avoid an unacceptable-risk scenario (prohibited AI practices); see the sketch after this list
  • Isolation from production environments to avoid unintended system impacts
  • API simulation and integrations for real-world testing scenarios
  • Audit-friendly experimentation, where assumptions and models can be logged, challenged, and iterated before going live
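
As a small illustration of the synthetic data point above, the sketch below generates transaction records that mimic the shape of production data without containing any real customer information. The field names and distributions are assumptions.

```python
# Minimal sketch of sandbox-style synthetic data: records shaped like production
# transactions, with no real customer information. Fields and distributions are illustrative.
import random
import uuid
from datetime import datetime, timedelta

random.seed(7)
MERCHANT_CATEGORIES = ["groceries", "utilities", "travel", "electronics", "dining"]

def synthetic_transactions(n: int) -> list[dict]:
    now = datetime(2025, 1, 1)
    return [
        {
            "transaction_id": str(uuid.uuid4()),
            "account_id": f"ACC-{random.randint(10_000, 99_999)}",    # synthetic, not a real account
            "amount_eur": round(random.lognormvariate(3.0, 1.0), 2),  # skewed like real spend
            "category": random.choice(MERCHANT_CATEGORIES),
            "timestamp": (now - timedelta(minutes=random.randint(0, 60 * 24 * 30))).isoformat(),
        }
        for _ in range(n)
    ]

# Agents and models under test consume this data exactly as they would a production feed.
print(synthetic_transactions(2))
```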

Enterprise Benefits:

  • Significantly lower compliance risk for early-stage projects
  • Faster experimentation, especially when building with Microsoft tools like Copilot Studio or Azure AI Foundry
  • Smarter co-development with vendors and internal teams
  • Stronger due diligence ahead of real-world deployment
  • Enables medium-sized enterprises to test and validate AI systems in a controlled environment, helping them compete in the EU artificial intelligence market

A good example is ANZ’s BlueSpace sandbox, developed in collaboration with Virtusa using our parent company’s Open Innovation Platform (OIP). It enables banks and partners to co-create, test AI agents, and evaluate models with synthetic transaction data, without the delays or risks of full integration, and while avoiding prohibited AI practices.

When to Use It:

  • When building a PoC for a high-risk use case or a high-risk AI system (e.g., loan decisioning, HR automation)
  • When testing new LLM behavior or prompt strategies
  • When integrating with third-party APIs or data layers
  • When launching early-stage AI pilots in regulated sectors

Sandboxing gives tech teams a safe space to test and explore Artificial Intelligence ideas without getting tangled in compliance risk. It’s a practical way to validate what works, fix what doesn’t, and build with confidence before rolling anything out in production.

Conclusion

The EU AI Act is changing how organizations build and use AI, especially when it’s used in areas that affect people’s lives, like finance, HR, or healthcare. It’s not just about what your AI can do, but how it’s built, how decisions are made, and how risks are managed.

If you’re using tools like LLMs, ML models, or AI agents, it’s important to understand how the rules apply and to design your systems with those requirements in mind from the start.

If you need support building AI solutions that meet these requirements, reach out. Our team of AI experts can guide you through the entire process, including how to ensure EU AI Act compliance for your AI systems.

