Role overview
- Work on a real enterprise AI solution with immediate business impact
- Build production GenAI systems — not prototypes
- Modern AI stack (LangGraph, RAG, vector search)
- High ownership and strong technical autonomy
- Opportunity to grow into deeper AI or architecture responsibilities over time
What you'll work on
- LLM Integration & Development:
Implement LLM-based chatbot functionality using frameworks such as LangChain and LangGraph.
- RAG Implementation:
Build and maintain document ingestion pipelines, embeddings, chunking logic, and vector search mechanisms.
- Backend Development (Python-first):
Develop backend services using Python (e.g., FastAPI), exposing APIs for chatbot interaction and system integration.
- Agent Workflows:
Implement structured LLM flows including prompt orchestration, tool calling, conversation memory, and response validation.
- Vector Database Integration:
Work with vector databases and hybrid search systems to enable efficient semantic retrieval.
- Enterprise Data Integration:
Connect to internal systems (APIs, document repositories, databases) and handle structured/unstructured data.
- Quality & Evaluation:
Improve prompt quality, reduce hallucinations, and implement evaluation mechanisms to ensure consistent responses.
- Containerized Deployment:
Package services using Docker and support CI/CD pipelines for deployment.
- Collaboration:
Work closely with product, infrastructure, and other developers to iteratively improve the system.
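To make the RAG responsibilities above concrete, here is a minimal sketch of the ingestion-and-retrieval loop: naive chunking, a toy bag-of-words "embedding", and cosine-similarity search over an in-memory index. Everything here is illustrative — a production pipeline would use a real embedding model and a vector database, and the sample documents and chunk size are made up.

```python
# Toy RAG retrieval sketch: chunk -> embed -> cosine-similarity search.
# Stand-in for an embedding model + vector database; names are illustrative.
import math
from collections import Counter

def chunk(text, size=60):
    """Split a document into fixed-size character chunks (naive chunking)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Toy 'embedding': a word-count vector instead of a learned model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(v * b[t] for t, v in a.items() if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(docs):
    """Ingestion: chunk every document and store (chunk, vector) pairs."""
    return [(c, embed(c)) for d in docs for c in chunk(d)]

def retrieve(index, query, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

index = build_index([
    "The chatbot answers questions about vacation policy.",
    "Expense reports are filed through the finance portal.",
])
print(retrieve(index, "vacation policy", k=1))
```

In a real system each of these stages is swapped for the corresponding production component (semantic chunking, an embedding model, a vector or hybrid-search store), but the data flow stays the same.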
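The agent-workflow bullet (prompt orchestration, tool calling, conversation memory, response validation) can likewise be sketched in a few lines. The `fake_model` function and the `lookup_order` tool are hypothetical stand-ins for a real LLM call and real enterprise tools; the point is the orchestration shape, not the specifics.

```python
# Minimal agent step: the "model" proposes a structured tool call, the
# orchestrator validates it against a registry, executes it, and appends
# the exchange to conversation memory. All names are illustrative.
TOOLS = {
    "lookup_order": lambda order_id: f"Order {order_id}: shipped",
}

def fake_model(messages):
    """Stand-in for an LLM call returning a structured tool request."""
    return {"tool": "lookup_order", "args": {"order_id": "A-17"}}

def run_turn(messages):
    request = fake_model(messages)
    tool = TOOLS.get(request["tool"])
    if tool is None:
        # Response validation: never execute a tool the model invented.
        raise ValueError(f"unknown tool: {request['tool']}")
    result = tool(**request["args"])
    # Conversation memory: record both the tool request and its result.
    messages.append({"role": "assistant", "tool_call": request})
    messages.append({"role": "tool", "content": result})
    return result

memory = [{"role": "user", "content": "Where is order A-17?"}]
print(run_turn(memory))
```

Frameworks such as LangGraph formalize this loop as a graph of nodes with shared state, but the validate-execute-record cycle above is the core of it.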
What we're looking for
- Experience with Go
- Experience with:
  - LlamaIndex
  - Haystack
  - HuggingFace Transformers
  - LangSmith (tracing & evaluation)
- Basic knowledge of Kubernetes
- Experience with authentication systems (OAuth2/OIDC)
- Experience building chat interfaces or internal tools