Role overview
This is a high-tech project at the intersection of GPU-accelerated ML inference and real-time graphics. The goal is to build fast, reliable, cost-efficient inference pipelines for visual computing and to integrate ML components into real-time rendering and engine workflows. A key focus is model distillation and compression to reduce latency, memory footprint, and infrastructure cost, with production deployment following standard engineering practices.
What you'll work on
Build and optimize GPU inference pipelines for low latency and high throughput
Implement model distillation / compression to make models faster and cheaper
Profile and tune performance across CPU and GPU (latency, throughput, memory)
Integrate ML into real-time graphics workflows / pipelines (engine-side integration when needed)
Maintain production readiness: reproducible builds, basic CI/CD, containerized deployment
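As an illustration of the distillation work listed above, here is a minimal knowledge-distillation loss in PyTorch. This is a sketch, not the project's actual method; the temperature and blend weight are illustrative defaults, and the random logits stand in for real teacher/student outputs.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Classic knowledge-distillation loss: a blend of soft-target KL
    divergence (teacher -> student) and hard-label cross-entropy.
    T (temperature) and alpha (blend weight) are illustrative defaults."""
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is comparable across T
    # Hard targets: standard cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# Tiny smoke test with random logits standing in for real model outputs.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
loss = kd_loss(s, t, y)
```

In practice the student is trained by backpropagating this loss while the teacher runs in eval mode with gradients disabled.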
What we're looking for
Senior-level ML/inference engineer (3-5+ years of experience), able to work independently
Strong deep learning + CNN background, practical experience shipping models to inference
Distillation / compression experience (any solid knowledge distillation (KD) or compression practice)
Strong Python + PyTorch OR equivalent (enough to implement training/inference and debug)
Strong GPU inference/performance mindset: CUDA fundamentals, profiling, optimization approach
Solid software engineering skills (clean code, testing basics, Git, collaboration)
Hands-on with TensorRT / ONNX Runtime / Triton (any of them)
Quantization / mixed precision / operator fusion experience (any subset)
Experience integrating ML into graphics or engines (Unreal/Unity, rendering pipeline basics)
Kubernetes/Docker in production, observability/telemetry practices
Familiarity with image quality metrics (SSIM/PSNR/LPIPS)
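The quantization bullet above can be sketched as minimal symmetric int8 post-training quantization of a weight tensor. This is a NumPy illustration of the arithmetic only; production pipelines would use framework tooling (e.g. TensorRT or torch.ao.quantization) with calibration and per-channel scales.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values and scale."""
    return q.astype(np.float32) * scale

# Round-trip a random weight matrix and measure worst-case error.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.abs(w - w_hat).max()  # bounded by scale / 2 (rounding error)
```

The same scale factor is what lets int8 matrix multiplies stand in for float ones at inference time, trading a bounded rounding error for memory and bandwidth savings.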
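For the image-quality metrics listed above, PSNR can be computed directly in NumPy; the SSIM shown here is a deliberately simplified single-window variant for illustration (real SSIM averages over local windows, as in scikit-image's implementation).

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, test, max_val=1.0):
    """SSIM over the whole image as one window (illustrative simplification;
    standard SSIM is averaged over sliding local windows)."""
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Compare a random image against a lightly noised copy.
img = np.random.rand(32, 32)
noisy = np.clip(img + 0.01 * np.random.randn(32, 32), 0.0, 1.0)
```

Metrics like these are what make distillation and quantization trade-offs measurable: a compressed model's output can be scored against the full-precision reference rather than judged by eye.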