Nebius AI

Senior ML Engineer, Token Factory

Nebius · Amsterdam, NH, NL

Actively hiring · Posted 22 days ago

Role overview

Why work at Nebius

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role

Token Factory is part of Nebius Cloud, one of the world's largest GPU clouds, running tens of thousands of GPUs. We are building a high-performance inference and fine-tuning platform designed to push foundation models to their hardware limits. Our mission is to maximize throughput, minimize latency, and optimize cost-per-token at that scale.

Some of the directions we are currently working on, which you could be part of:

  • Inference Optimization: Identifying LLM inference bottlenecks to drive production speedups, and squeezing maximum performance out of a wide range of LLM architectures at scale (e.g., GPT-OSS, Kimi K2.5, DeepSeek V3.1/V3.2, GLM-5).
  • Inference Engine Support: Implementing novel speculative decoding architectures, optimizing components of various LLM designs (dense/MoE, autoregressive/parallel), and contributing to open-source inference engines.
  • Low-Precision Training & Inference: Designing and productionizing low-precision (FP8, NVFP4/MXFP4) training and inference pipelines with measurable gains in throughput and cost-efficiency.
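To make the low-precision direction above concrete, here is a toy NumPy sketch of per-tensor FP8 (E4M3) quantization. The scaling scheme, constants, and function names are illustrative assumptions for this posting, not a description of the team's production pipelines, which store values in true 8-bit formats and use hardware FP8 kernels.

```python
import numpy as np

# Illustrative only: emulates the FP8 E4M3 value grid in float32 to show
# the quantize/dequantize round trip behind low-precision inference.
FP8_E4M3_MAX = 448.0  # largest finite value representable in E4M3


def quantize_fp8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Scale a tensor into the FP8 E4M3 dynamic range and round.

    Returns the (still float32) quantized values plus the scale needed
    to dequantize. A real pipeline would pack these into 8-bit storage.
    """
    scale = float(np.max(np.abs(x))) / FP8_E4M3_MAX
    q = np.clip(x / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    # E4M3 has 3 mantissa bits: round to 8 representable steps per binade.
    exp = np.floor(np.log2(np.abs(q) + 1e-30))
    step = 2.0 ** (exp - 3)
    q = np.round(q / step) * step
    return q, scale


def dequantize_fp8(q: np.ndarray, scale: float) -> np.ndarray:
    return q * scale


x = np.random.randn(1024).astype(np.float32)
q, s = quantize_fp8(x)
x_hat = dequantize_fp8(q, s)
# With 3 mantissa bits, per-element relative error is bounded by ~2^-4.
rel_err = np.abs(x - x_hat).max() / np.abs(x).max()
```

The trade-off sketched here is the core of the work: fewer mantissa bits halve memory traffic and unlock faster tensor-core paths, at the cost of a bounded rounding error that has to be kept from degrading model quality.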

What we're looking for

  • Experience working with open-source inference engines (vLLM, SGLang, TensorRT-LLM), including upstream contributions
  • Experience with kernel languages or DSLs such as Triton, CuTe, CUTLASS, or CUDA
  • A track record of building and shipping products (not necessarily ML-related) in a dynamic, startup-like environment
  • Strong engineering skills, including experience developing large distributed systems or high-load web services
  • Open-source projects that showcase your engineering prowess
  • An excellent command of English, along with strong writing and communication skills

What we offer

  • Competitive salary and a comprehensive benefits package
  • Opportunities for professional growth within Nebius
  • Flexible working arrangements
  • A dynamic and collaborative work environment that values initiative and innovation

We’re growing and expanding our products every day. If you’re up for the challenge and are as excited about AI and ML as we are, join us!

Tags & focus areas

AI · Machine Learning