Role overview
We are seeking an experienced Senior ML Inference Engineer to join our team, focusing on optimizing and deploying our production virtual staining models at scale. The ideal candidate will have deep expertise in ML inference optimization, GPU programming, and building production-grade inference systems. You will work on critical challenges such as reducing inference latency for whole slide imaging (WSI) from tens of minutes to under 2 minutes, deploying models on edge devices with NVIDIA hardware, and ensuring our inference infrastructure meets FDA and SOC2 compliance requirements. This role offers the opportunity to work at the intersection of cutting-edge AI and life-saving healthcare technology, making a tangible impact on patient outcomes.
Location: Remote US
Company: Pictor Labs
Employment Type: Full-time
What you'll work on
- Design, develop, and optimize production ML inference systems for virtual staining models (Deepstain, Restain, ClearStain) serving clinical and pharmaceutical customers
- Architect and implement high-performance inference pipelines that process gigapixel pathology images in under 2 minutes
- Work with ML Research and Engineering teams to optimize model architectures and deployment strategies for both cloud-based APIs and edge devices (NVIDIA DGX Spark, Grace Blackwell superchips)
- Evaluate, implement, and maintain state-of-the-art inference frameworks (TensorRT, Triton Inference Server, ONNX Runtime) to maximize GPU utilization and throughput
- Profile and optimize deep neural networks on NVIDIA GPUs using tools such as NVIDIA Nsight, PyTorch Profiler, and custom instrumentation
- Design and implement efficient model serving architectures that support both synchronous REST APIs and asynchronous batch processing workflows
- Collaborate with Platform and Edge Device teams to containerize inference systems (Docker, Kubernetes) for deployment across cloud and on-premise environments
- Partner with cloud providers (AWS, GCP, Azure) to optimize hosted inference solutions and leverage latest hardware accelerators
- Ensure inference systems meet regulatory requirements (FDA 510(k), SOC2) with comprehensive monitoring, logging, and audit capabilities
- Prototype and productionize new inference optimization techniques, including quantization, pruning, distillation, and dynamic batching strategies (see the batching sketch after this list)
- Build robust telemetry and monitoring systems to track model performance, latency, throughput, and resource utilization in production
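To give a flavor of the serving work above, here is a minimal sketch of dynamic micro-batching in plain Python with asyncio: requests arriving within a short window are grouped into one batch so the GPU runs fewer, larger forward passes. The names (MicroBatcher, fake_model) and the batching parameters are illustrative only, not part of our production stack; in practice this role would more likely lean on Triton's built-in dynamic batcher or an equivalent.

```python
import asyncio
from typing import Callable, List, Sequence, Tuple

class MicroBatcher:
    """Groups requests arriving within a short window into one model call."""

    def __init__(self, model_fn: Callable[[Sequence[float]], List[float]],
                 max_batch: int = 8, max_wait_ms: float = 5.0) -> None:
        self._model_fn = model_fn
        self._max_batch = max_batch
        self._max_wait = max_wait_ms / 1000.0
        self._queue: "asyncio.Queue[Tuple[float, asyncio.Future]]" = asyncio.Queue()

    async def infer(self, x: float) -> float:
        fut = asyncio.get_running_loop().create_future()
        await self._queue.put((x, fut))
        return await fut

    async def run(self) -> None:
        while True:
            x, fut = await self._queue.get()
            inputs, futures = [x], [fut]
            deadline = asyncio.get_running_loop().time() + self._max_wait
            # Keep collecting requests until the batch fills or the window closes.
            while len(inputs) < self._max_batch:
                timeout = deadline - asyncio.get_running_loop().time()
                if timeout <= 0:
                    break
                try:
                    x, fut = await asyncio.wait_for(self._queue.get(), timeout)
                except asyncio.TimeoutError:
                    break
                inputs.append(x)
                futures.append(fut)
            # One batched forward pass, then fan results back out to callers.
            for f, y in zip(futures, self._model_fn(inputs)):
                f.set_result(y)

def fake_model(batch: Sequence[float]) -> List[float]:
    # Stand-in for a real forward pass (e.g. a TensorRT engine or Triton call).
    return [v * 2.0 for v in batch]

async def main() -> None:
    batcher = MicroBatcher(fake_model)
    worker = asyncio.create_task(batcher.run())
    results = await asyncio.gather(*(batcher.infer(float(i)) for i in range(20)))
    print(results)
    worker.cancel()

if __name__ == "__main__":
    asyncio.run(main())
```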
Qualifications
Required:
- 7+ years of experience building and optimizing production ML inference systems at scale
- Expert-level proficiency in Python and experience writing high-performance inference services
- 5+ years of hands-on experience with PyTorch and at least one production inference framework (TensorRT, Triton Inference Server, ONNX Runtime, or TorchServe)
- Deep understanding of computer vision model architectures, particularly generative models (GANs, diffusion models) and vision transformers
- Extensive experience profiling and optimizing deep neural networks on NVIDIA GPUs, including memory optimization, kernel fusion, and mixed-precision inference (see the profiling sketch after this list)
- Strong background in image processing pipelines and libraries (OpenCV, Pillow, scikit-image) for handling large-scale medical imaging data
- Proven experience deploying ML systems on Kubernetes and major cloud providers (AWS, GCP, Azure)
- Experience with Docker containerization and orchestration for ML workloads
- Strong software engineering practices including version control (Git), CI/CD, unit testing, and production debugging
- Excellent communication, collaboration, and technical documentation skills
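As a rough illustration of the profiling and mixed-precision work listed above, here is a minimal sketch assuming PyTorch is installed: FP16 autocast inference under torch.inference_mode(), wrapped in torch.profiler to surface the dominant kernels. The toy model and tensor shapes are placeholders, not our production models.

```python
import contextlib

import torch
from torch.profiler import ProfilerActivity, profile

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-in for a virtual staining network; layers and shapes are placeholders.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
).to(device).eval()
x = torch.randn(4, 3, 512, 512, device=device)

activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

# FP16 autocast only makes sense on the GPU; fall back to a no-op on CPU.
amp = (torch.autocast("cuda", dtype=torch.float16)
       if device == "cuda" else contextlib.nullcontext())

with torch.inference_mode(), profile(activities=activities) as prof, amp:
    for _ in range(10):  # a few iterations so per-kernel averages are meaningful
        _ = model(x)

# Sorted by self CPU time for portability; on GPU, "self_cuda_time_total" is more telling.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
```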
What we're looking for
- Experience with medical imaging, digital pathology, or whole slide imaging (WSI) processing
- Knowledge of edge device deployment and embedded systems for AI inference
- Experience with MLOps tools (MLflow, Kubeflow, Apache Airflow) and model versioning
- Understanding of FDA regulatory requirements for AI/ML in medical devices
- Background in distributed inference systems and model parallelism techniques
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack); see the metrics sketch below
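For the monitoring side, a minimal sketch assuming the prometheus_client library: an inference latency histogram and request counter exposed on a /metrics endpoint that Prometheus can scrape and Grafana can chart. The metric names, port, and simulated workload are invented for illustration.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

INFER_LATENCY = Histogram(
    "inference_latency_seconds",
    "End-to-end model inference latency",
    buckets=(0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0),
)
INFER_REQUESTS = Counter("inference_requests_total", "Total inference requests")

def run_inference() -> None:
    INFER_REQUESTS.inc()
    with INFER_LATENCY.time():                 # records wall-clock duration into the histogram
        time.sleep(random.uniform(0.05, 0.3))  # stand-in for a real forward pass

if __name__ == "__main__":
    start_http_server(9100)                    # serves metrics at http://localhost:9100/metrics
    while True:
        run_inference()
```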