Role overview
We are looking to hire passionate Senior AI Engineers to help turn data into intelligent, production-ready solutions. You will work across the full AI stack: traditional machine-learning models, large language models (LLMs), computer-vision pipelines, and analytics and forecasting workflows. If you enjoy exploring data, building state-of-the-art models, and shipping reliable AI services, we would love to meet you.
What you'll work on
- Model Development – Design, train, fine-tune, and evaluate models spanning classical ML, deep learning (CNNs, transformers), and generative AI (LLMs, diffusion models).
- Data Exploration & Analytics – Conduct exploratory data analysis, statistical testing, and time-series forecasting to inform features, prompts, and business KPIs.
- End-to-End Pipelines – Build reproducible workflows for data ingestion, feature engineering / prompt stores, training, CI/CD, and automated monitoring.
- LLM & Agentic AI Engineering – Craft prompts, retrieval-augmented generation (RAG) pipelines, and autonomous/assistive agents; fine-tune LLMs on domain-specific datasets to boost accuracy and align outputs with product requirements.
- AI Automation & Integration – Expose AI components as micro-services and event-driven workflows; integrate with orchestration tools (Airflow, Prefect) and business APIs to automate decision pipelines.
- Continuous Learning – Track advances in LLMs, vision, and analytics; share insights and best practices with the wider engineering team.
- Mentorship & Leadership – Mentor junior engineers and help shape technical direction and engineering best practices.
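
To give a feel for the RAG work mentioned above, here is a minimal, illustrative sketch in Python. It is not our production pipeline: the embedding and generation steps are toy stand-ins (a real pipeline would call an embedding model and an LLM), and all names and data are hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) loop: retrieve the most
# relevant documents for a query, then pass them as context to a generator.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (hosted or self-hosted model)."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

corpus = [
    "Our refund policy allows returns within 30 days.",
    "Shipping is free for orders over 50 euros.",
    "Support is available Monday through Friday.",
]
question = "When can a customer return an order?"
context = "\n".join(retrieve(question, corpus))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```

In the real role, the retrieval step would typically hit a vector database and the generation step a fine-tuned LLM, but the control flow is the same.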
What we're looking for
- Knowledge of C++ or C# for performance-critical modules.
- Experience deploying models via Docker, Kubernetes, or cloud AI services.
- Exposure to vector databases and RAG workflows.
- Proficiency with BI / dashboard tools (Power BI, Tableau, Streamlit) or time-series frameworks (Prophet, statsmodels).
- Familiarity with MLOps / LLMOps tooling (DVC, MLflow Tracking, Weights & Biases, BentoML).
- Experience with image-processing techniques (e.g., OpenCV, image segmentation, feature extraction).
- Experience with Spark (PySpark) and distributed data processing, including platforms such as Databricks, AWS EMR, or GCP Dataproc (see the sketch after this list).
- Strong SQL skills and experience working with large-scale datasets, including partitioning and performance tuning.
- Familiarity with modern data lake architectures and scalable data storage concepts.
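
As a taste of the Spark and large-scale SQL work above, here is a short, illustrative PySpark sketch of date-partitioned writes, a common performance-tuning pattern that lets downstream queries prune partitions. Paths, column names, and the date value are all hypothetical.

```python
# Write a large event table partitioned by date so that date-filtered
# queries read only the matching directories (partition pruning).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioning-sketch").getOrCreate()

# Hypothetical source path and schema (an 'event_ts' timestamp column).
events = spark.read.parquet("s3://bucket/raw/events/")

(events
    .withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")        # co-locate each day's rows
    .write
    .partitionBy("event_date")        # one directory per day on disk
    .mode("overwrite")
    .parquet("s3://bucket/curated/events/"))

# A date filter now scans only the matching partition directories.
daily = (spark.read.parquet("s3://bucket/curated/events/")
              .where(F.col("event_date") == "2024-01-15"))
```

The same pruning idea carries over to SQL engines on data lakes: partition (and cluster) tables on the columns your heaviest filters use.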