
AI Ethics & Compliance Lead

Salary:
  • 29,450 – 32,550 Net B2B + VAT / Month (6,700 – 7,400 EUR B2B Contract / Month)
  • 190 – 210 Net B2B + VAT / Hour (43 – 48 EUR B2B Contract / Hour)
  • 29,450 – 32,550 Brutto UoP / Month

Location: European Union

Apply Now!

Job Description

Virtusa is seeking a specialized AI Ethics & Compliance Lead (T3) to join our Poland delivery center. In this role, you will be the primary authority responsible for ensuring that our AI/ML solutions are developed and deployed responsibly, ethically, and in full alignment with global regulatory standards. With 6–8 years of experience in risk management, compliance, or governance, you will bridge the gap between high-level ethical principles and technical execution. You will work directly with engineering squads to bake "Compliance by Design" into our AI lifecycles, ensuring transparency, fairness, and auditability for our global enterprise clients.

Key Responsibilities:

  • AI Governance & Framework Implementation: Define and execute an AI governance strategy aligned with OECD AI Principles, the NIST AI Risk Management Framework (RMF), and ISO/IEC 23894. Establish clear policies for every stage of the AI lifecycle: from design and development to deployment and continuous monitoring.
  • Regulatory Compliance & Data Protection: Serve as the subject matter expert for GDPR, HIPAA, and SOC 2 within the context of AI. Conduct comprehensive Data Protection Impact Assessments (DPIA) and AI-specific risk assessments to ensure privacy and security.
  • AI Risk Management: Develop and maintain a living AI Risk Register. Identify and mitigate risks related to algorithmic bias, performance degradation (drift), and explainability gaps. Define risk scoring models that guide go/no-go decisions for model deployment.
  • Responsible AI Practices: Establish and lead an AI Ethics Review Board. Set the standards for Explainable AI (XAI), non-discrimination, and human-in-the-loop oversight to ensure accountability across all automated decisions.
  • Audit, Controls & Assurance: Design a robust AI controls framework. Support both internal and external audits (including SOC 2 and regulatory inquiries), ensuring full traceability of AI decisions and thorough documentation of model training and data lineage.
  • Monitoring & Continuous Compliance: Establish KPIs and Key Risk Indicators (KRIs) for AI governance. Implement continuous monitoring systems for bias detection and compliance adherence throughout the model production life.
  • Cross-functional Collaboration: Act as the strategic liaison between Data Science/Engineering, Legal, and Business stakeholders, translating complex regulatory requirements into actionable technical controls.
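To make the risk-management responsibility concrete, here is a minimal sketch of the kind of risk-scoring logic a living AI Risk Register might feed into. The `Risk` class, the likelihood × impact scheme, the threshold of 15, and the example entries are all illustrative assumptions, not Virtusa's actual framework:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a living AI Risk Register (illustrative schema)."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, an assumed convention
        return self.likelihood * self.impact

def deployment_decision(register: list[Risk], threshold: int = 15) -> str:
    """Block deployment when any open risk meets or exceeds the agreed threshold."""
    worst = max(register, key=lambda r: r.score)
    if worst.score >= threshold:
        return f"no-go: '{worst.name}' scores {worst.score}"
    return "go"

register = [
    Risk("algorithmic bias in loan scoring", likelihood=3, impact=5),
    Risk("model drift on new demographics", likelihood=2, impact=4),
]
print(deployment_decision(register))  # worst score is 3 x 5 = 15, which triggers no-go
```

In practice a real framework would track mitigations, owners, and review dates per entry; the point here is only that a scoring model turns register entries into an auditable go/no-go gate.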

Requirements

  • 6–8 years of professional experience in Governance, Risk, and Compliance (GRC), with a significant focus on AI/ML or highly regulated digital transformation projects.
  • Deep, hands-on expertise in implementing NIST AI RMF, OECD AI Principles, and ISO/IEC 23894. You should be able to demonstrate how you have applied these to real-world AI projects.
  • Comprehensive knowledge of GDPR (specifically regarding automated decision-making and profiling), HIPAA (for PHI protection), and SOC 2 trust service criteria. (Familiarity with the EU AI Act is highly preferred for the Poland center).
  • Strong conceptual understanding of Large Language Models (LLMs), NLP, and classical ML algorithms. You must understand how models are trained and deployed to identify points of ethical risk effectively.
  • Practical knowledge of XAI techniques and tools to ensure that AI-driven outcomes can be audited and understood by non-technical stakeholders.
  • Proven ability to use bias-detection tools and frameworks to evaluate model fairness and implement remediation strategies.
  • Experience developing risk registers, defining control frameworks, and leading impact assessments (DPIA/AIA).
  • Background in supporting or leading technical audits for enterprise-grade software or AI systems.
  • Exceptional communication skills with the ability to influence C-suite executives, legal counsel, and technical engineering leads.
  • Ability to translate legal and ethical "prose" into technical "requirements" for developers and data scientists.
  • Master’s degree in Law, Computer Science, Philosophy (focused on Tech Ethics), or a related field.
  • High level of integrity, analytical rigor, and the ability to navigate ambiguous regulatory landscapes.
  • Native or C1-level English is mandatory for global client interaction and policy documentation.
  • Experience working within Agile/Scrum environments using Jira for tracking compliance tasks.
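As an illustration of the bias-detection skill set listed above, the sketch below computes a demographic parity difference, one common group-fairness metric, from scratch. The data, the two-group setup, and the 0.1 alert threshold mentioned in the comment are assumptions for the example, not a prescribed KRI:

```python
def demographic_parity_difference(preds: list[int], groups: list[str]) -> float:
    """Absolute gap in positive-outcome rates between two groups.

    preds: parallel list of 0/1 model decisions; groups: group label per decision.
    A typical (assumed) monitoring rule flags the model when the gap exceeds 0.1.
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Toy decisions: group A is selected 75% of the time, group B only 25%
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.50, well above a 0.1 alert line
```

Libraries such as Fairlearn or AIF360 provide this and many other fairness metrics out of the box; the hand-rolled version simply shows what the number measures.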

Benefits

  • Fully remote work model
  • Professional training programs – including Udemy and other development plans
  • Work with a team that’s recognized for its excellence. We’ve been featured in the Deloitte Technology Fast 50 & FT 1000 rankings. We’ve also received the Great Place To Work® certification for five years in a row

Ready to apply?
Check out our recruitment process*

* Please Note: different job opportunities may have a slightly different version of this process.