Inference
AI

Applied Machine Learning Engineer

Inference · San Francisco, CA, US · $220k - $320k

Actively hiring · Posted about 1 month ago

Role overview

You will be responsible for building and improving the core ML systems that power our custom model training platform, while also applying these systems directly for customers. Your role sits at the intersection of applied research and production engineering. You'll lead projects from data intake to trained model, building the infrastructure and tooling along the way.

Your north star is model quality at scale, measured by how well our custom models match frontier performance, how efficiently we can train and serve them, and how smoothly we can deliver results to our customers. You'll own the full training lifecycle: processing data, creating dashboards for visibility, training models using our frameworks, running evaluations, and shipping results. This role reports directly to the founding team. You'll have the autonomy, a large compute budget and GPU reservation, and the technical support to push the boundaries of what's possible in custom model training.

What you'll work on

  • Lead projects from data intake through the full training pipeline, including processing, cleaning, and preparing datasets for model training
  • Build and maintain data processing pipelines for aggregating, transforming, and validating training data
  • Create dashboards and visualization tools to display training metrics, data quality, and model performance
  • Train models using our internal frameworks and iterate based on evaluation results
  • Develop robust benchmarks and evaluation frameworks that ensure custom models match or exceed frontier performance
  • Build systems to automate portions of the training workflow, reducing manual intervention and improving consistency
  • Take research features and ship them into production settings
  • Apply the latest techniques in SFT, RL, and model optimization to improve training quality and efficiency
  • Collaborate with infrastructure engineers to scale training across our GPU fleet
  • Deeply understand customer use cases to inform training strategies and surface edge cases

What we're looking for

  • 2+ years of experience training AI models using PyTorch
  • Hands-on experience with post-training LLMs using SFT or RL
  • Strong understanding of transformer architectures and how they're trained
  • Experience with LLM-specific training frameworks (e.g., Hugging Face Transformers, DeepSpeed, Axolotl, or similar)
  • Experience training on NVIDIA GPUs
  • Strong data processing skills, with comfort building ETL pipelines and working with large datasets
  • Track record of creating benchmarks and evaluations
  • Ability to take research techniques and apply them to production systems

Tags & focus areas

Used for matching and alerts on DevFound
Full-time · AI · Machine Learning · Generative AI