Role overview
At JFrog, we're reinventing DevOps to help the world's greatest companies innovate – and we want you along for the ride. This is a special place with a unique combination of brilliance, spirit, and just all-around great people. Here, if you're willing to do more, your career can take off. Thousands of customers, including the majority of the Fortune 100, trust JFrog to manage, accelerate, and secure their software delivery from code to production – a concept we call "liquid software." Wouldn't it be amazing if you could join us on our journey?
We are seeking an experienced, hands-on Senior AI Engineer to join the Generative AI Applications Platform group at JFrog and lead the architecture and backend implementation of AI/LLM solutions – from agent graphs and tooling to RAG, streaming, and production deployment.
- Backend LLM & agent architecture – 5+ years in production ML/AI and backend systems; recent hands-on experience with backend LLM systems, including agent workflows (e.g., LangGraph or similar), LangChain tooling and chains, state management, and streaming (e.g., SSE). You think in terms of nodes, state schemas, routing, and human-in-the-loop.
- Technical stack – Proficient in Python; comfortable with LangGraph, LangChain, FastAPI, PostgreSQL, and optionally Azure AI Search or similar. Experience with LLM providers (OpenAI/Azure, Google Vertex AI, etc.) and RAG (retrievers, chunking, reranking) is expected.
- Generative AI in production – Proven track record building production GenAI applications, including multi-step agents, RAG, tool-augmented LLMs, and ideally human-in-the-loop or review flows. You care about observability, validation, and safe rollout.
- Education & collaboration – Bachelor's degree or higher in Computer Science or a related field; strong communication and collaboration skills.
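For candidates unfamiliar with the node/state/routing vocabulary used above, here is a minimal, framework-free Python sketch of the pattern. All names (`AgentState`, `plan`, `respond`, `route`) are hypothetical illustrations, not JFrog's code or LangGraph's API; a real system would use something like LangGraph's graph abstractions instead of a hand-rolled loop.

```python
# Hypothetical sketch of an agent graph: a shared state schema,
# node functions that transform the state, and a router that picks
# the next node. Illustrative only; not a production design.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    # The "state schema": every node reads and writes this object.
    question: str
    steps: list = field(default_factory=list)
    answer: str = ""

def plan(state: AgentState) -> AgentState:
    # A planning node records an intermediate step in the shared state.
    state.steps.append(f"plan for: {state.question}")
    return state

def respond(state: AgentState) -> AgentState:
    # A terminal node produces the final answer.
    state.answer = f"answered: {state.question}"
    return state

def route(state: AgentState) -> str:
    # Conditional routing: move to the respond node once a plan exists.
    return "respond" if state.steps else "plan"

NODES = {"plan": plan, "respond": respond}

def run(state: AgentState) -> AgentState:
    # Drive the graph until a node produces an answer.
    while not state.answer:
        state = NODES[route(state)](state)
    return state
```

A human-in-the-loop flow would add a node that pauses the loop and waits for reviewer input before routing onward.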