Role overview
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design, and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic about taking on new problems across the full stack as we continue to push technology forward.
The DeepMind Abuse Detection team, a key part of core user protection, works closely with DeepMind to pioneer account-centric detection technologies for the generative AI safety platform. As DeepMind rapidly expands its generative AI offerings, our team was formed to address the critical need for dedicated defenses against product misuse, shifting from reactive measures to proactive, actor-focused prevention. Our mission is to keep the experience friction-free for benign users, provide guidance to those who are confused, and take proportionate action against malicious actors.
The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.
What you'll work on
- Design, build, and maintain components of our account-centric abuse detection systems on the generative AI safety platform.
- Develop and implement classification models and rules to identify policy-violating and abusive behavior in near real time, leveraging account-level signals.
- Collaborate with other engineers to integrate detection systems with enforcement mechanisms like strikes and limited disables.
- Analyze data to understand abuse patterns and improve the effectiveness of our detection methods.
- Write and test production-quality code, and participate in code reviews.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
What we're looking for
- Master's degree or PhD in Computer Science or a related technical field.
- 2 years of experience with data structures and algorithms.
- Experience with generative AI techniques (e.g., LLMs, multi-modal models, large vision models) or generative AI-related concepts (e.g., language modeling, computer vision).