Role overview
The AI/ML Engineer will design and deploy high-impact machine learning systems, working end-to-end from data collection through production-grade solutions. This high-visibility role spans building AI models, engineering data and ML pipelines, and implementing agentic AI (autonomous, self-directed AI agents) to deliver customer-focused solutions.
What you'll work on
Build, deploy, and optimize robust AI and ML models using industry-standard tools, frameworks, and cloud services (Azure ML, AWS SageMaker, GCP Vertex AI, or on-premises as needed).
Build efficient pipelines for data collection, cleaning, and transformation across structured and unstructured sources.
Perform feature engineering using tools like Pandas, PySpark, or Databricks to ensure high-quality inputs for ML models.
Engineer, productionize, and monitor agentic AI workflows, leveraging frameworks such as LangChain, LlamaIndex, OpenAI Agents, Semantic Kernel, or comparable orchestration libraries.
Develop and integrate APIs or software services for AI solution delivery using languages like Python, Java, or C#.
Support end-to-end data and model pipeline design, including large language models (LLMs), prompt engineering, and retrieval-augmented generation (RAG) architectures.
Implement best practices in MLOps, CI/CD, containerization (Docker, Kubernetes), and automated testing.
Collaborate with data scientists, engineers, and customers to translate conceptual models and research into scalable, real-world systems.
Stay abreast of research, experiment with new agentic and generative AI technologies, and advocate for innovative approaches aligned to business needs.
Provide mentorship and participate in design/code reviews.
What we're looking for
Bachelor’s or Master’s in Computer Science, Engineering, or a related field.
3-5 years’ experience building and deploying ML systems in cloud or on-premises environments.
Proficiency in Python (preferred) and familiarity with Java, C++, or C#.
Hands-on with at least one ML framework (TensorFlow, PyTorch, Keras).
Experience with leading agentic AI tooling (LangChain, LlamaIndex, Semantic Kernel, OpenAI tools, AutoGen, CrewAI, etc.).
Practical knowledge of integrating LLMs (OpenAI, Azure OpenAI Service, Hugging Face, etc.) into workflows.
Familiarity with cloud AI/ML platforms (Azure ML, AWS SageMaker, GCP Vertex AI) or strong on-premises deployment knowledge.
Containerization skills (Docker, Kubernetes) and source control proficiency (Git).
Understanding of software engineering fundamentals and experience with automated ML pipelines (MLOps).
Strong written and verbal communication, a knack for innovation, and a customer-first mindset.