Role overview
From its academic roots, Aurora Energy Research has grown into a thriving, rapidly expanding company, currently serving over 950 of the world’s most influential energy sector participants, including utilities, investors, and governments.
While we constantly strive to reach new markets and diversify our product portfolio, we are already active across the globe in Asia-Pacific, Latin America, Europe, South Africa, and North America, working with leading organisations to provide comprehensive market intelligence, bespoke analytics and advisory services, and cutting-edge software.
We are a diverse team of experts with vast energy, financial, and consulting backgrounds, covering power, hydrogen, carbon, and fossil commodities. With this, we provide data-driven intelligence to fuel strategic decisions in the global energy transformation.
What you'll work on
- Design and develop AI solutions, taking end-to-end ownership of AI features
- Build and maintain RAG pipelines and agentic orchestration, developing agent-based systems for complex multi-step tasks. Establish robust evaluation methods to measure the quality of our solutions and their component parts
- Identify and evaluate new data sources, and design robust, scalable data pipelines
- Deploy and scale AI solutions on AWS cloud infrastructure
- Work closely with the broader software engineering team and stakeholders (internal and external) to develop highly effective solutions
- Mentor and coach junior and mid-level engineers, fostering their growth
- Monitor the performance and accuracy of deployed AI solutions and iterate to improve them
- Champion a culture of innovation, continuous learning, and operational excellence
What we're looking for
Required attributes:
- Extensive experience in AI/ML development: 5+ years building complex software solutions with a focus on machine learning and AI
- Strong expertise in Python programming. Experience with ML frameworks such as scikit-learn, PyTorch, or TensorFlow is expected. Familiarity with Node/TypeScript is a plus
- Hands-on experience with modern AI/LLM tooling. We are looking for comfort with frameworks and libraries like LangGraph for building LLM applications
- Proven experience deploying and operating AI solutions on cloud platforms (AWS preferred). You have used cloud services like AWS Lambda (or EC2/ECS) to host models or run AI workloads, and are familiar with data storage options (S3, databases). Experience with CI/CD pipelines for rapid deployment, containerisation (Docker), and automating infrastructure (Terraform/CloudFormation or similar) is required to manage our AI services lifecycle
- Exceptional analytical and problem-solving skills. You can break down ambiguous problems (like improving an AI model’s relevance or figuring out why a pipeline is slow) and iterate to develop effective solutions
- Demonstrated ability to design and interpret complex quantitative analyses, using prototypes to translate insights into actionable strategies for business and product teams. Experience mentoring junior engineers or data scientists (providing guidance, code reviews, and fostering best practices) is required, as this role will help shape the growth of our AI team
- Excellent communication and collaboration abilities. You can effectively communicate complex AI concepts to different audiences, whether it’s explaining model results and limitations to product stakeholders or discussing technical details with fellow engineers