Senior Data Engineer

Hermeneutic Investments · Taipei or Remote · $126k - $131k

Actively hiring · Posted 10 months ago

Role overview

We are looking for a Senior Data Engineer to help us architect, implement, and operate the complete data infrastructure pipeline for our Research and Trading operations. This role will be crucial in building a scalable, reliable, and cost-efficient system for handling vast amounts of market trading data, real-time news feeds, and a variety of internal and external data sources. The ideal candidate is a hands-on professional who understands the entire data lifecycle and can drive innovation while collaborating with research and engineering teams to meet their needs.

We are a rapidly growing hedge fund: two years old, managing 9-figure AUM, and generating 200%+ annualized returns with a Sharpe ratio of 4.

Throughout the hiring process, you'll also be assessed for cultural fit against our company values.

What you'll work on

  • Design, build, and optimize scalable pipelines for ingesting, transforming, and integrating large-volume datasets (market data, news feeds and various unstructured data sources).
  • Ensure data quality, consistency, and real-time monitoring using tools like dbt and third-party data-validation libraries (see the example check after this list).
  • Develop processes to normalize and organize our data warehouse for use across different departments.
  • Apply advanced data management practices to ensure the scalability, availability, and efficiency of data storage.
  • Ensure the infrastructure supports trading and research needs while maintaining data integrity, security, and performance at scale.
  • Collaborate with research and analytics teams to understand their data needs and build frameworks that empower data exploration, analysis, and model development. Create tools for overlaying data from multiple sources.
  • Ensure that data storage, processing, and management are done in a cost-effective manner, optimizing both hardware and software resources. Implement solutions that balance high performance with cost control.
  • Stay ahead of the curve by continuously evaluating and adopting the most suitable technologies for the organization’s data engineering needs. Ensure that the company’s systems align with the latest best practices in data management.
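
To give a concrete flavor of the data-quality work above, the following is a minimal, illustrative Python sketch of the kind of batch validation we mean; the column names (symbol, ts, price) and the specific checks are hypothetical examples, not a description of our production pipeline.

    # Illustrative only: basic sanity checks on an ingested batch of market
    # trades. Column names and thresholds are hypothetical examples.
    import pandas as pd

    def validate_trades(batch: pd.DataFrame) -> list[str]:
        """Return a list of human-readable issues found in the batch."""
        issues = []
        if batch["price"].isna().any():
            issues.append("null prices present")
        if (batch["price"] <= 0).any():
            issues.append("non-positive prices present")
        if batch.duplicated(subset=["symbol", "ts"]).any():
            issues.append("duplicate (symbol, ts) rows present")
        if not batch["ts"].is_monotonic_increasing:
            issues.append("timestamps are not sorted")
        return issues

    if __name__ == "__main__":
        sample = pd.DataFrame({
            "symbol": ["BTC-USD", "BTC-USD", "ETH-USD"],
            "ts": pd.to_datetime(["2024-01-01 00:00:00",
                                  "2024-01-01 00:00:01",
                                  "2024-01-01 00:00:02"]),
            "price": [42000.0, 42001.5, 2300.0],
        })
        print(validate_trades(sample) or "batch looks clean")

In practice, checks like these would run inside the ingestion pipeline (for example as dbt tests or a dedicated validation task), with failures routed to real-time monitoring rather than printed.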

What we're looking for

  • Strong problem-solving and analytical thinking
  • Clear communication skills for cross-functional collaboration
  • Proficiency in building robust data quality checks for ingested data
  • Experience identifying anomalies in ingested data
  • Strong proficiency in writing complex SQL (and similar) queries and optimizing their performance
  • Proficiency in Python or Java/Scala
  • Experience building and maintaining complex ETL pipelines with tools like Apache Airflow, dbt, or custom scripts (see the orchestration sketch after this list)
  • Strong understanding of dimensional modeling, star/snowflake schemas, normalization/denormalization principles
  • Proven experience with platforms like Snowflake, Redshift, BigQuery, Synapse
  • Expert knowledge of Apache Spark, Kafka, Flink, or similar
  • Strong understanding of data security and privacy standards
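
To illustrate the orchestration experience listed above, here is a minimal, hypothetical Airflow 2.x DAG sketching an ingest -> validate -> load flow; the DAG name, schedule, and task bodies are placeholders rather than a description of our stack.

    # Illustrative only: a skeletal daily pipeline with stubbed task bodies.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():
        # Pull the latest raw market-data files from a vendor (stubbed).
        print("ingesting raw market data")

    def validate():
        # Run schema / row-count / freshness checks on the new partition (stubbed).
        print("validating ingested partition")

    def load():
        # Publish the validated partition to the warehouse (stubbed).
        print("loading into the warehouse")

    with DAG(
        dag_id="market_data_daily",        # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                 # Airflow <2.4 uses schedule_interval
        catchup=False,
    ) as dag:
        t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
        t_validate = PythonOperator(task_id="validate", python_callable=validate)
        t_load = PythonOperator(task_id="load", python_callable=load)

        t_ingest >> t_validate >> t_load

Equivalent pipelines could just as well be expressed in dbt or custom scripts; what matters is comfort with dependency ordering, scheduling, and retries rather than any one tool.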

Tags & focus areas

Used for matching and alerts on DevFound
Data Science, Engineer, Senior, AWS, Docker, Java, Kubernetes, Scala