Staff Data Engineer

Gemini · New York, New York; San Francisco, California · $168k - $240k

Actively hiring · Posted about 1 month ago

Role overview

About the Company

Gemini is a global crypto and Web3 platform founded by Cameron and Tyler Winklevoss in 2014, offering a wide range of simple, reliable, and secure crypto products and services to individuals and institutions in over 70 countries. Our mission is to unlock the next era of financial, creative, and personal freedom by providing trusted access to the decentralized future. We envision a world where crypto reshapes the global financial system, internet, and money to create greater choice, independence, and opportunity for all — bridging traditional finance with the emerging cryptoeconomy in a way that is more open, fair, and secure. As a publicly traded company, Gemini is poised to accelerate this vision with greater scale, reach, and impact.

The Department: Data

What you'll work on

  • Lead the architecture, design, and implementation of data infrastructure and pipelines, spanning both batch and real-time / streaming workloads
  • Build and maintain scalable, efficient, and reliable ETL/ELT pipelines using languages and frameworks such as Python, SQL, Spark, Flink, Beam, or equivalents
  • Work on real-time or near-real-time data solutions (e.g. CDC, streaming, micro-batch) for use cases that require timely data
  • Partner with data scientists, ML engineers, analysts, and product teams to understand data requirements, define SLAs, and deliver coherent data products that others can self-serve
  • Establish data quality, validation, observability, and monitoring frameworks (data auditing, alerting, anomaly detection, data lineage)
  • Investigate and resolve complex production issues: root cause analysis, performance bottlenecks, data integrity, fault tolerance
  • Mentor and guide junior and mid-level data engineers: lead code reviews and design reviews, and evangelize best practices
  • Stay up to date on new tools, technologies, and patterns in the data and cloud space, bringing proposals and proof-of-concepts when appropriate
  • Document data flows, data dictionaries, architecture patterns, and operational runbooks

What we're looking for

  • 8+ years of experience in data engineering (or similar) roles
  • Strong experience in ETL/ELT pipeline design, implementation, and optimization
  • Deep expertise in Python and SQL, writing production-quality, maintainable, testable code
  • Experience with large-scale data warehouses (e.g. Databricks, BigQuery, Snowflake)
  • Solid grounding in software engineering fundamentals, data structures, and systems thinking
  • Hands-on experience in data modeling (dimensional modeling, normalization, schema design)
  • Experience building systems with real-time or streaming data (e.g. Kafka, Kinesis, Flink, Spark Streaming), and familiarity with CDC frameworks
  • Experience with orchestration / workflow frameworks (e.g. Airflow)
  • Familiarity with data governance, lineage, metadata, cataloging, and data quality practices
  • Strong cross-functional communication skills; ability to translate between technical and non-technical stakeholders
  • Proven experience in mentoring, leading design discussions, and influencing data-engineering best practices across teams

Tags & focus areas

Engineer · AWS · Blockchain · Crypto · Python · Spark · Airflow · Full-time