DevOps SRE (m/f/d)

About

Qevlar AI is revolutionising the SOC with autonomous investigation and, in doing so, solving three key problems for analysts: the cybersecurity talent shortage, alert fatigue, and increasingly complex and sophisticated threats.

Founded in 2023, we’ve already made waves in cybersecurity, AI, and start-up communities. A few highlights:

  • Raised $14M in funding led by EQT Ventures and Forgepoint Capital

  • Accepted into Station F’s flagship AI program (Meta, Hugging Face, Scaleway)

  • Named by Sifted (Financial Times) as one of Europe’s cybersecurity startups to watch

  • Joined Platform 58, La Banque Postale’s premier startup incubator

  • Ranked among the top 10 most innovative startups in France by EU-Startups

  • Secured early partnerships with major players across EMEA, NAMER, and MENA

  • Signed some impressive early customers we’re really excited about

We’re at a pivotal stage — building fast, growing smart, and bringing on sharp minds to shape the future of autonomous cyber defense.

Job Description

As our DevOps SRE, you’ll help us build and operate the infrastructure needed to scale autonomous cybersecurity investigations powered by LLMs.

  • Deploy and Optimize Our LLM Infrastructure
    Design and run scalable inference environments, optimize performance (latency, throughput, cost), and set up monitoring and logging across the stack (see the sketch after this list).

  • Enable Multi-Tenant Client Deployments
    Architect and implement infrastructure that can support secure, isolated deployments across on-prem, hybrid, and cloud setups.

  • Automate CI/CD and Infrastructure as Code (IaC)
    Build bulletproof pipelines, reduce manual work, and implement best-in-class IaC using Terraform or similar tools.
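
For illustration only, here’s a rough Python sketch of the kind of service-level instrumentation the first responsibility covers: wrapping a placeholder LLM inference call with prometheus_client so latency and token throughput show up as scrapeable metrics. The metric names, the run_inference stub, and the port are hypothetical examples, not a description of our actual setup.

    # Illustrative only: instrumenting a hypothetical LLM inference call with
    # Prometheus metrics so latency and token throughput can be scraped,
    # graphed, and alerted on.
    import time
    from prometheus_client import Counter, Histogram, start_http_server

    # Hypothetical metric names; real ones would follow the team's conventions.
    INFERENCE_LATENCY = Histogram(
        "llm_inference_latency_seconds",
        "Wall-clock latency of a single LLM inference call",
        buckets=(0.5, 1, 2, 5, 10, 30, 60),
    )
    TOKENS = Counter(
        "llm_tokens_total",
        "Tokens processed, split by prompt vs. completion",
        ["direction"],
    )

    def run_inference(prompt: str) -> str:
        """Placeholder for the real model call (vLLM, Vertex AI, etc.)."""
        time.sleep(0.1)  # simulate work
        return "investigation verdict"

    def instrumented_inference(prompt: str) -> str:
        with INFERENCE_LATENCY.time():  # records call duration into the histogram
            completion = run_inference(prompt)
        # Whitespace token counts stand in for real tokenizer counts.
        TOKENS.labels(direction="prompt").inc(len(prompt.split()))
        TOKENS.labels(direction="completion").inc(len(completion.split()))
        return completion

    if __name__ == "__main__":
        start_http_server(9100)  # exposes /metrics for Prometheus to scrape
        while True:
            instrumented_inference("triage: suspicious login from a new location")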

Preferred Experience

You’re probably a great fit if you have:

  • Solid experience with cloud infrastructure (ideally GCP)

  • Experience with on-premise, private cloud, and multi-tenant cloud deployments

  • Proficiency in Infrastructure as Code (e.g., Terraform)

  • Understanding of networking, IAM, and security best practices

  • Experience building robust observability stacks (monitoring, alerting, logging)

  • Familiarity with multi-tenant security and network isolation

  • Strong communication skills and a problem-solving mindset

  • Bonus: experience deploying and optimizing LLMs in production

What success looks like

  • LLMs deployed at scale with optimal throughput, latency, and cost-efficiency

  • On-prem & multi-tenant deployments running smoothly and securely

  • Automated infrastructure that's stable, fast to iterate on, and easy to reproduce

  • A better developer experience and stronger CI/CD reliability across the team

Our stack

  • CI/CD: GitHub

  • Infrastructure: Kubernetes on Google Cloud Platform (GCP)

  • Database: PostgreSQL

  • Services: 5 services (in Python / TypeScript), deployed via Helm charts (see the sketch below)
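
As a rough illustration of how a service plugs into this stack, below is a minimal Python sketch of a /healthz readiness endpoint that reports healthy only when PostgreSQL is reachable, which a Helm-templated Kubernetes readiness probe could target. The DATABASE_URL variable, port 8080, and the use of psycopg2 are assumptions for the example, not a description of our actual services.

    # Illustrative only: a /healthz readiness endpoint that reports healthy
    # only when PostgreSQL is reachable, using the standard library plus
    # psycopg2 (an assumption for this sketch).
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import psycopg2

    # Hypothetical env var and default; a real service would get this from its chart values.
    DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost:5432/app")

    def postgres_is_reachable() -> bool:
        """Open a short-lived connection and run a trivial query."""
        conn = None
        try:
            conn = psycopg2.connect(DATABASE_URL, connect_timeout=2)
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
            return True
        except psycopg2.Error:
            return False
        finally:
            if conn is not None:
                conn.close()

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/healthz" and postgres_is_reachable():
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"ok")
            else:
                self.send_response(503)
                self.end_headers()
                self.wfile.write(b"unavailable")

    if __name__ == "__main__":
        # A readinessProbe (httpGet on /healthz, port 8080) in the Helm chart
        # would gate traffic on this check.
        HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()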

Why join Qevlar AI?

  • Work on cutting-edge AI infrastructure challenges with a real-world impact

  • Contribute to building a secure, AI-native platform changing how cybersecurity is done

  • Be part of a tight-knit, high-caliber team that values autonomy and speed

  • Opportunity for equity and early-stage ownership

  • Remote-friendly, flexible work culture

Recruitment Process

  1. Introduction Call

  2. Technical Interview with our Head of Engineering: a deep dive into cloud architecture, IaC, and infrastructure troubleshooting.

  3. Case Study / Infrastructure Design

  4. Meet the Team

This is a high-impact role for someone who wants to architect modern AI infra — and be a key pillar in how we scale. Ready to dive in?

Additional Information

  • Contract Type: Full-Time
  • Location: Paris
  • Education Level: Master's Degree
  • Experience: > 7 years
  • Partial remote work possible