FocusKPI Inc.

Software Engineer - Machine Learning

FocusKPI Inc. · Mountain View, CA, US · $197k - $228k

Actively hiring · Posted about 2 hours ago

FocusKPI is seeking a Software Engineer - Machine Learning to join one of our clients, a high-tech SaaS company.

We are looking for an experienced Machine Learning Engineer to lead the development of prompt-injection and prompt-safety models that protect the client's downstream agentic AI systems across phones, the cloud, and XR/AR. You will design, train, and deploy classifier and guardrail models (both cloud-based and hybrid on-device) that screen agent inputs and outputs for injection attacks, unsafe content, and policy violations. A core part of the role is post-training these models with RLHF, DPO, and related optimization techniques to push detection accuracy above, and false-positive rates below, what off-the-shelf solutions provide.

Work Location: Mountain View, CA (onsite, 5 days/week)

Duration: 12-month contract with potential to extend, depending on performance and budget

Pay Range: $95 - $110/hr

No C2C resumes are considered

Position Responsibilities:

  • Design and train prompt-injection detection models and prompt-safety classifiers that operate on both inputs to and outputs from the client's agentic AI systems.
  • Build hybrid deployment pipelines that split safety inference between on-device (phone, XR/AR) and cloud, optimizing for latency, privacy, and detection coverage.
  • Apply post-training techniques (e.g., RLHF, reward modeling, policy optimization) to improve guardrail model performance, calibration, and robustness against adaptive adversaries.
  • Curate and generate adversarial training data: direct and indirect prompt injections, jailbreaks, tool-use exploits, and unsafe-output cases drawn from red-teaming and production signals.
  • Build evaluation harnesses that measure attack success rate, false-positive rate, latency, and on-device footprint across model iterations and threat categories.
  • Partner with agent, device, and platform teams to integrate safety models into mobile-use agents, XR/AR assistants, and cloud agentic workflows, and to close the loop from production incidents back into training data.
  • Work cross-functionally with security researchers, modeling teams, and product engineers; document methods and, where appropriate, contribute to patents and publications.

Qualifications:

  • M.S. or Ph.D. in Computer Science, Machine Learning, Electrical Engineering, or a related field; or B.S. with equivalent industry experience.
  • 3+ years of industry experience in ML engineering or applied AI research, with demonstrated ownership of production ML systems.
  • 2+ years of industry experience in software engineering.
  • Strong proficiency in Python and PyTorch (or JAX/TensorFlow), with solid software engineering fundamentals (version control, testing, and reproducible experimentation).
  • Hands-on experience post-training LLMs with RLHF, DPO, RLAIF, or reward modeling, including reward design, preference data curation, and training stability.
  • Hands-on experience training and deploying classifier or guardrail models for safety, content moderation, abuse detection, or adversarial robustness.
  • Familiarity with prompt injection, jailbreak, and agentic AI threat models, and with distributed training frameworks (DeepSpeed, FSDP, Accelerate).

Preferred Qualifications:

  • Experience building safety or moderation systems for agentic AI: tool-use guardrails, indirect prompt injection defenses, or output filtering for autonomous agents.
  • Experience with red-teaming, adversarial data generation, or automated attack pipelines (e.g., GCG, PAIR, generator–critic frameworks).
  • Experience with on-device or edge ML deployment (ExecuTorch, Core ML, TFLite, MLC-LLM, vendor NPU toolchains) and model compression (quantization, distillation, pruning) for safety models.
  • Experience with telemetry, logging, or user-facing data systems on mobile, XR/AR, or consumer platforms, including privacy-preserving handling of user data (e.g., anonymization, on-device processing, federated approaches).
  • Publications at top-tier ML/NLP/security venues (NeurIPS, ICML, ICLR, ACL, EMNLP, USENIX Security, IEEE S&P), patents, or open-source contributions in the safety, alignment, or AI security space.

Thank you!

FocusKPI Hiring Team

Founded in 2010, FocusKPI, Inc. (FocusKPI) is a data science and technology firm specializing in predictive analytics practices and methodologies. FocusKPI is a US company headquartered in Silicon Valley, California, with an East Coast office in Boston, Massachusetts.

NOTICE: Please be aware of fraudulent emails regarding job postings, job offers, and fake checks. FocusKPI's recruiting team will only reach out from the @focuskpi.com email domain. If you have received fraudulent emails now or in the past, please report them to https://reportfraud.ftc.gov/ .

The domain @focuskpijobs.com is fraudulent and not related to FocusKPI. Please do not reply to or communicate with anyone using @focuskpijobs.com.

