**Role overview**
Our mission is to enable hardware deployment at the speed of software development. We are working towards automatic code transpilation and optimization for any hardware application.
**In this role, you will collaborate with a small team of talented researchers on ambitious, greenfield projects in generative AI and reinforcement learning.**
**Core responsibilities:**
* Design, execute, and analyze experiments with a high degree of independence
* Contribute to core models and frameworks
* Create high-quality datasets (both in-the-wild and synthetic)
* Perform literature reviews and implement new techniques from papers
* Contribute to publications and present at conferences and workshops
**Research Areas of Interest:**
An incomplete list of current and near-term research directions:
* Contrastive representation learning
* Steerability and guided decoding
* Tractable probability models
* Code-specific architectures
* LLM fine-tuning, post-training, and RLHF
**Requirements**
* Ph.D. in Computer Science or a closely related field
* Prior LLM research experience
* Comfortable programming in Python and familiar with frameworks such as PyTorch and HuggingFace
**Preferred Qualifications:**
* Publications at peer-reviewed conferences such as NeurIPS, ICLR, and ICML
* Experience with large-scale LLM training, particularly in a distributed computing environment
**Benefits**
* Competitive salary
* Health care plan (medical, dental, and vision)
* Retirement plan (401(k), IRA) with employer matching
* Unlimited PTO
* Flexible hybrid work arrangement
* Relocation assistance for qualifying employees