Role overview
Bright.AI is a high-growth Physical AI company transforming how infrastructure businesses interact with the physical world through intelligent automation. Our AI platform processes visual, spatial, and temporal data from billions of real‑world events—captured across edge devices, mobile sensors, and cloud infrastructure—to enable intelligent decision‑making at scale.
We are now hiring a Senior MLOps Engineer to lead the build-out of our cloud-native ML developer platform and production pipelines. The role is pivotal: you will build an integrated ML/AI development platform on AWS, with programmatic data analysis and algorithm development capabilities, so teams can move from notebook to secure, reliable, and cost-efficient production services quickly.
You’ll work at the intersection of ML engineering, cloud infrastructure, and developer experience, designing scalable data/model workflows, CI/CD for ML, observability, and governance that turn ideas into durable, monitored ML services.
What you'll work on
- Design, build, and operate our ML/AI development platform on AWS, including Amazon SageMaker AI (Studio/Notebooks, Training/Processing/Batch Transform, Real-Time & Async Inference, Pipelines, Feature Store) and supporting services; a pipeline sketch follows this list.
- Establish golden‑path project templates, base Docker images, and internal Python libraries to standardize experiments, data processing, training, and deployment workflows.
- Implement Infrastructure‑as‑Code (e.g., Terraform) and workflow orchestration (Step Functions, Airflow); optionally support EKS for training/inference.
- Build automated data pipelines with S3, Glue, EMR/Spark (PySpark), and Athena/Redshift; add data quality checks (Great Expectations/Deequ; see the data-quality sketch below) and lineage.
- Stand up experiment tracking and a model registry (SageMaker Experiments & Model Registry or MLflow); enforce versioning for data, code, and models.
- Implement CI/CD for ML (CodeBuild/CodePipeline or GitHub Actions): unit/integration tests, data contracts, model tests (example below), canary/shadow deployments, and safe rollback.
- Ship real-time endpoints (SageMaker endpoints/FastAPI on Lambda/ECS/EKS) and batch jobs; set SLOs and autoscaling policies, and optimize for cost and performance (endpoint sketch below).
- Build monitoring & observability for production models and services: drift, performance, and bias with SageMaker Model Monitor; service telemetry with CloudWatch/Prometheus/Grafana (drift-check sketch below).
- Enforce security & governance: least-privilege IAM, VPC isolation/PrivateLink, encryption, and secrets management.
- Partner with backend engineers to productionize notebooks and prototypes.
- Help integrate GenAI/Bedrock services where appropriate; support RAG pipelines with vector stores (OpenSearch) and evaluation harnesses.
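To make this concrete, here is a minimal sketch of the kind of SageMaker Pipeline you would own, written with the SageMaker Python SDK. The script name, training image, instance types, and S3 paths are illustrative placeholders, not our actual stack.

```python
# Minimal SageMaker Pipeline: a processing step feeding a training step.
# All names, paths, and instance types below are illustrative placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution context

processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)
step_process = ProcessingStep(
    name="Preprocess",
    processor=processor,
    code="preprocess.py",  # placeholder script
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
)

estimator = Estimator(
    image_uri="<training-image-uri>",  # placeholder training container
    role=role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    output_path="s3://<bucket>/models/",  # placeholder bucket
)
step_train = TrainingStep(
    name="Train",
    estimator=estimator,
    inputs={
        "train": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
                "train"
            ].S3Output.S3Uri
        )
    },
)

pipeline = Pipeline(
    name="demo-pipeline", steps=[step_process, step_train], sagemaker_session=session
)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
```

A typical pattern is to call `pipeline.upsert` from CI so pipeline definitions stay versioned alongside the code that produces them.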
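For the data-quality item, here is a sketch of the kind of expectations we gate pipelines on, using the classic pandas-backed Great Expectations API (1.x releases restructure these entrypoints); the column names and value sets are hypothetical.

```python
# Illustrative data-quality gate using the classic (pre-1.0) Great Expectations
# pandas API; newer releases use a different entrypoint. Column names are
# hypothetical examples, not our real schema.
import great_expectations as ge
import pandas as pd

df = ge.from_pandas(pd.read_parquet("events.parquet"))  # placeholder input

checks = [
    df.expect_column_values_to_not_be_null("event_id"),
    df.expect_column_values_to_be_between("confidence", min_value=0.0, max_value=1.0),
    df.expect_column_values_to_be_in_set("source", ["edge", "mobile", "cloud"]),
]

# Fail the pipeline step (and block downstream training) if any check fails.
if not all(c.success for c in checks):
    raise ValueError("Data contract violated; see expectation results above.")
```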
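On the CI/CD side, "model tests" means gating promotion on model behavior, not just code. Below is a sketch of one such test; the helper imports and the threshold are hypothetical stand-ins for project-specific code.

```python
# CI "model test": block promotion unless the candidate model clears a quality
# bar on a frozen holdout set. The helpers and threshold are hypothetical.
from sklearn.metrics import f1_score

from myproject.registry import load_candidate_model  # hypothetical helper
from myproject.data import load_holdout              # hypothetical helper

F1_FLOOR = 0.85  # illustrative promotion threshold


def test_candidate_meets_quality_bar():
    model = load_candidate_model()  # e.g., the latest "pending" registry version
    X, y = load_holdout()           # frozen, versioned evaluation split
    assert f1_score(y, model.predict(X)) >= F1_FLOOR, "below promotion threshold"
```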
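For the serving path, a minimal sketch of a FastAPI real-time endpoint of the kind deployed on Lambda/ECS/EKS; the stub model stands in for a real artifact pulled from the registry.

```python
# Minimal real-time inference service with FastAPI. _StubModel is a placeholder
# for a real model (e.g., joblib.load on an artifact fetched from the registry).
from fastapi import FastAPI
from pydantic import BaseModel


class _StubModel:
    def predict(self, rows):
        return [sum(r) for r in rows]  # placeholder for real inference


class PredictRequest(BaseModel):
    features: list[float]


app = FastAPI()
model = _StubModel()  # load once at startup, never per request


@app.get("/healthz")
def healthz() -> dict:
    return {"status": "ok"}  # liveness probe target for SLO monitoring


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}
```

Run it locally with `uvicorn main:app` (file name assumed); the same container can deploy to ECS/EKS, or behind Lambda with an adapter such as Mangum.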
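On monitoring, SageMaker Model Monitor automates drift detection against captured endpoint traffic; the hand-rolled check below only illustrates the underlying idea with a two-sample Kolmogorov-Smirnov test on one feature. The data and alert threshold are synthetic.

```python
# Illustrative feature-drift check: compare live traffic to the training
# baseline with a two-sample KS test. Data and threshold are synthetic.
import numpy as np
from scipy.stats import ks_2samp

P_VALUE_FLOOR = 0.01  # illustrative alert threshold

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
live = rng.normal(loc=0.3, scale=1.0, size=5_000)      # shifted serving traffic

stat, p_value = ks_2samp(baseline, live)
if p_value < P_VALUE_FLOOR:
    print(f"drift detected: KS={stat:.3f}, p={p_value:.2e}")  # alerting hook
```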
What we're looking for
- Distributed training at scale (SageMaker Training, PyTorch DDP, Hugging Face on SageMaker); a minimal DDP sketch follows this list.
- Data engineering at scale (e.g., Spark/EMR, Glue, Redshift).
- Observability stacks (e.g., Grafana), performance tuning, and capacity planning for ML services.
- LLMOps/RAG (Bedrock, vector databases, evals), a plus rather than a requirement.
- Prior startup experience building ML platforms and products from the ground up.
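To anchor the distributed-training item, here is a toy PyTorch DDP loop as launched with torchrun; SageMaker Training can set up the same kind of process group through its distributed-training options. The model and data are placeholders.

```python
# Toy DDP loop; launch with: torchrun --nproc_per_node=2 ddp_demo.py
# Model and data are placeholders; gradients are all-reduced across ranks.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU clusters
    model = DDP(torch.nn.Linear(10, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for step in range(10):
        x, y = torch.randn(32, 10), torch.randn(32, 1)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()  # DDP synchronizes gradients here
        optimizer.step()
        if dist.get_rank() == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```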