**Role Overview**
We’re supporting a major global financial technology organization that’s making significant investments in AI innovation. They’re scaling their engineering teams across North America to drive development of next-generation Generative AI solutions. Multiple openings are available for engineers at varying levels — from early-career developers to senior leads and architects — across areas like AI platform engineering, chatbot development, and data engineering for AI-driven systems.
**Why This Role**
This is a chance to be part of a global enterprise that’s putting real resources behind AI strategy — building tools, platforms, and models that impact client experiences and internal productivity at scale. You’ll join a high-performing engineering group that’s delivering enterprise-grade AI capabilities across multiple business lines.
**What You’ll Do**
* Build and enhance production-grade AI and LLM-based systems for enterprise applications.
* Contribute to model fine-tuning, prompt optimization, and training workflows.
* Develop APIs, microservices, and SDKs for internal and client-facing AI products (an illustrative sketch follows this list).
* Collaborate with engineering and data teams to operationalize AI solutions and support MLOps/LLMOps processes.
* Partner cross-functionally to design and deliver reliable, scalable AI integrations.
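For flavor only, and not part of the formal role description: the "Develop APIs, microservices, and SDKs" responsibility above often amounts to wrapping an LLM call behind a small web service. The sketch below uses FastAPI and the OpenAI Python SDK purely as illustrative assumptions; this posting does not specify which frameworks, model providers, or endpoint designs the team actually uses.

```python
# Illustrative sketch only. FastAPI, the OpenAI SDK, the /chat route, and the
# model name are assumptions for demonstration, not details from this posting.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI(title="Example internal chat service")
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class ChatRequest(BaseModel):
    # Single-turn prompt from an internal tool or a client-facing product.
    prompt: str

class ChatResponse(BaseModel):
    reply: str

@app.post("/chat", response_model=ChatResponse)
def chat(req: ChatRequest) -> ChatResponse:
    # Forward the prompt to a hosted LLM and return the first completion.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": req.prompt}],
    )
    return ChatResponse(reply=completion.choices[0].message.content or "")
```

In practice the role layers production concerns on top of a skeleton like this: authentication, observability, prompt management, and the MLOps/LLMOps processes mentioned above.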
**What You Bring**
* Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
* 4+ years of hands-on Python development experience.
* Strong understanding of Generative AI, LLMs, and related model architectures.
* Experience working with NLP, model training, and fine-tuning workflows.
* Solid grasp of Linux environments and modern DevOps practices.
**Nice to Have (Highlight These on Your Resume)**
* Hands-on experience with frameworks like Flask, Django, or FastAPI.
* Familiarity with Python libraries such as NumPy, pandas, scikit-learn, Matplotlib, or OpenCV.
* Experience deploying AI solutions using cloud services like Azure OpenAI, AWS Bedrock, AWS SageMaker, or Google Vertex AI.
* Background in AI/ML lifecycle management — MLflow, Databricks, or Dataiku.
* Understanding of MLOps or LLMOps principles.
* Exposure to TensorFlow or PyTorch.
* Experience integrating AI models into enterprise or regulated environments.
* Familiarity with containerized cloud environments (Docker, Kubernetes).
* Version control experience with GitHub or Bitbucket.
* Bonus: experience working with conversational AI platforms (e.g., Copilot Studio, Kore.ai, Amelia).
* Experience collaborating with software development teams to embed AI into core applications.
**What’s In It for You**
* Join an organization that’s putting real investment behind AI and automation initiatives.
* Work on cutting-edge technology in a large-scale, data-rich environment.
* Collaborate with top-tier engineers and data scientists driving AI innovation in financial technology.
* Opportunities for career growth across multiple teams and projects.