Ripple

Senior Data Engineer

Ripple · San Francisco, California, United States · $91k - $96k

Actively hiring · Posted almost 4 years ago

Role overview

Ripple is looking for a Senior Data Engineer to help build scalable data infrastructure. You will be part of the effort to democratize data for Ripplers by developing self-service, curated data models through our Financial Data Hub. Data is core to everything Ripple does today, and you will lead and mentor the data engineers and Data Ops pods at the center of it. Our data science research direction, machine learning strategy, and product analytics will rely on the infrastructure this team builds.

What you'll work on

  • Own the delivery, quality, and reliability of our Financial Data Hub 
  • Provide technical leadership for strategy, architecture, design, development, testing, production launch, and ongoing support of data applications
  • Build and implement operational tooling and best practices, such as monitoring and alerting, for the Financial Data Warehouse; support production issues, profile performance, and drive root cause analysis
  • Collaborate with business teams by providing technical input to data governance policies, standards, and processes, covering data classification, data ownership, access, and the security (privacy and protection) of sensitive data
  • Evaluate and recommend third-party technologies and help make build-vs-buy decisions
  • Design, develop, and maintain application development standards that meet or exceed industry norms, and institute an effective systems development lifecycle methodology
  • Build processes/tools to optimize warehouse cost and performance
  • Ensure a compliant and secure data environment
  • Present our products and features to internal stakeholders
  • Mentor team members on administration activities and best practices

What we're looking for

  • 5+ years of experience with data platforms, preferably data warehousing
  • 4+ years of experience with distributed data stores like BigQuery, Snowflake, Presto, Netezza, or similar
  • 3+ years of experience with cloud technologies like AWS, GCP, or Azure
  • Deep knowledge of data warehouse architecture and integration
  • Experience with data management in SQL and/or NoSQL databases (PostgreSQL, Oracle, Teradata, Hadoop)
  • Familiarity with cloud-based architectures, development, test, and deployment automation
  • Strong understanding of SQL, data structures, data integrity, and schema design best practices
  • Experience in driving initiatives as a technical lead and subject matter expert in one or more areas
  • Deep experience with distributed systems, distributed data stores, data pipelines, and other tools in cloud services environments (e.g., AWS, GCP)
  • Prior experience with data processing technologies (e.g., Spark, Hadoop) and workflow and transformation tools (e.g., Airflow, dbt)
  • Experience with schema design and dimensional data modeling
  • A strong advocate of data governance and data quality
  • Experience with distributed event streaming platforms like Kafka, Flink, etc.
  • Experience with database internals, database language theory, database design, SQL, and database programming
  • You have experience building internal infrastructure that is shared across teams
  • Solid leadership and communication skills
