We are looking for experienced team players to fill the position of Platform Engineer and participate in our up-and-coming project for our client in the pharmaceutical industry.
Responsibilities:
Contribute to the deployment of state-of-the-art machine learning algorithms applied to cancer research.
Support the development of our internal ML tooling for large image analysis in the medical domain: develop novel tools and improve existing ones.
Work towards productizing prototype workflows into compliant ML products at scale.
Develop, deploy, and maintain complex data-streaming pipelines.
Evaluate, qualify and integrate cloud-based computing platforms in close collaboration with IT and development teams.
Requirements:
At least 5 years of experience as a DevOps/Platform Engineer.
Practical, production experience operating Kubeflow Pipelines for reproducible ML workflows at scale.
Proven experience deploying and operating workloads on Kubernetes (EKS/GKE/AKS), including upgrades, autoscaling, RBAC, networking, and reliability; strong Unix/Linux fundamentals.
Hands-on experience with AWS services (EKS, EC2, S3, IAM, CloudWatch; RDS a plus) and the ability to design secure, cost-aware architectures.
Strong Terraform skills and Git-based workflows for repeatable infrastructure provisioning and configuration management.
Practical experience with CI/CD platforms (GitHub Actions/Jenkins/GitLab CI), including artifact management, environment promotion, and progressive delivery.
Solid Python and/or shell scripting for platform automation and toil reduction.
Experience implementing logging, metrics, and tracing with SLOs, alerts, and runbooks (e.g., Prometheus, Grafana, CloudWatch, Splunk/New Relic) and a security-first mindset.
Ability to lead technical initiatives, communicate trade-offs clearly, and collaborate effectively with engineering and science teams.
Nice to have:
Experience with MLflow, Feast, Argo, Airflow, Ray, and model versioning/monitoring.
Familiarity with S3/object storage, artifact registries, and handling large image datasets; basic SQL/NoSQL exposure.
Experience with digital pathology or large-scale image processing (e.g., whole-slide images) and tools like OpenSlide, scikit-image, or OpenCV.
Experience tuning high-throughput pipelines, concurrency, memory usage, and integrating GPUs/accelerators.
Experience with VPC design, ingress/egress, service meshes, secrets management, IAM, and policy as code.
Experience in regulated environments (e.g., GxP), including data governance, privacy, and building software under regulated processes.
Experience with Jira/Zendesk and with JavaScript/TypeScript for internal tools or dashboards.
Benefits:
Hybrid work mode (1-2 days per week from the customer's office in Warsaw)
Professional training programs, including Udemy and other development plans
Work with a team that’s recognized for its excellence. We’ve been featured in the Deloitte Technology Fast 50 & FT 1000 rankings. We’ve also received the Great Place To Work® certification for five years in a row
Questions? Get in touch with the recruitment person hiring for this position!
Ready to apply? Check out our recruitment process*
* Please note: different job opportunities may have a slightly different version of this process.
Follow us and keep up with our latest opportunities!