Role overview
GeForce NOW (GFN) delivers high-performance gaming to millions of users, regardless of their hardware. Within the Voice of the Customer (VOC) team, we bridge the gap between petabytes of telemetry and the human experience. We don't just report what is happening; we build the systems that explain why it's happening. Our work directly influences cloud streaming performance, security, and user retention by translating massive, unstructured datasets into precise engineering actions.
We are looking for a proven practitioner who understands the "how" as much as the "why." As a Senior Data Scientist for VOC, you will own the end-to-end lifecycle of data products that protect the GFN user experience. This isn't a "reports and slides" role: you will architect systems that detect anomalies in real time and correlate qualitative feedback with quantitative system performance.
What you'll work on
- Engineering Scalable Logic: Design and deploy production-ready algorithms that automate root-cause analysis for global streaming issues.
- Unstructured Data Synthesis: Build pipelines to process and analyze forum discussions and direct feedback, using semantic chunking and vector databases to link sentiment to technical telemetry.
- System Ownership: Navigate the full GFN software stack to identify data inconsistencies and refine models that predict churn and capacity needs.
- Pipeline Architecture: Build automated ETL workflows that transform raw logs into actionable signals for our engineering and business leadership.
- Strategic Collaboration: Work directly with product and infrastructure teams to turn statistical patterns into prioritized product roadmaps.
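To make the "Unstructured Data Synthesis" work concrete: a minimal sketch of chunking free-text feedback and matching each chunk against telemetry-derived issue signatures. The bag-of-words `embed` and the `signatures` dictionary are stand-ins invented for illustration; a production pipeline would use a sentence-embedding model and a vector database instead.

```python
from collections import Counter
import math

def chunk(text, max_sentences=2):
    # Naive semantic-chunking stand-in: group consecutive sentences.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [". ".join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]

def embed(text):
    # Toy bag-of-words vector; a real pipeline would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical issue signatures derived from telemetry clusters.
signatures = {
    "stream_stutter": "stuttering dropped frames during streaming",
    "login_failure": "login failed authentication error",
}

feedback = "The stream keeps stuttering. Frames are dropped every few minutes."
for piece in chunk(feedback):
    vec = embed(piece)
    best = max(signatures, key=lambda k: cosine(vec, embed(signatures[k])))
    print(piece, "->", best)
```

The same retrieval pattern scales up by swapping the toy embedding for a learned one and the linear scan over `signatures` for an approximate nearest-neighbor index.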
What we're looking for
- Foundational Depth: B.S., M.S., or Ph.D. in Computer Science, Statistics, or Mathematics (or equivalent experience), with mastery of probability and statistical modeling and 8+ years of relevant experience.
- Core Execution: Expert-level Python. You write code that is modular, testable, and built for production.
- Data at Scale: Extensive experience with Spark, SQL, and Databricks. You should be comfortable wrangling massive datasets where efficiency is a requirement, not an afterthought.
- NLP & Text Mastery: Practical experience with text processing, including vector databases and unstructured text data.
- Statistical Toolkit: Proficiency in both supervised and unsupervised learning, with a specific focus on time-series analysis and anomaly detection.
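As a flavor of the time-series anomaly-detection work mentioned above, here is a minimal rolling z-score sketch on a synthetic latency trace. The function name, window, and threshold are illustrative choices, not a prescribed method; production detectors would be more robust (e.g., seasonal baselines, median-based statistics).

```python
import statistics

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose z-score vs. the trailing window exceeds the threshold."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist)
        if sd > 0 and abs(series[i] - mu) / sd > threshold:
            anomalies.append(i)
    return anomalies

# Synthetic streaming-latency trace (ms) with one injected spike.
latency = [30.0 + (i % 3) for i in range(40)]
latency[25] = 120.0
print(rolling_zscore_anomalies(latency))  # flags index 25
```

A trailing (rather than centered) window keeps the detector causal, so the same logic can run online against live telemetry.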