
Senior Data Engineer, Data Lakehouse Infrastructure

TRM Labs
Full Time · Senior
United States · $190k – $220k · Posted February 22, 2026



Job Description

Build a Safer World. 

TRM Labs provides blockchain analytics and AI solutions to help law enforcement and national security agencies, financial institutions, and cryptocurrency businesses detect, investigate, and disrupt crypto-related fraud and financial crime. TRM’s blockchain intelligence and AI platforms include solutions to trace the source and destination of funds, identify illicit activity, build cases, and construct an operating picture of threats. TRM is trusted by leading agencies and businesses worldwide who rely on TRM to enable a safer, more secure world for all.

We’re building the foundational data infrastructure powering next-generation analytics at scale. As part of our mission, we’re architecting a modern data lakehouse to support complex workloads, real-time data pipelines, and secure data governance—at petabyte scale.

We are looking for a Senior Data Engineer to help us design, implement, and scale core components of our lakehouse architecture. You will have ownership over data modeling, ingestion, query performance optimization, and metadata management using cutting-edge tools and frameworks like Apache Spark, Trino, Hudi, Iceberg, and Snowflake. We’re looking for engineers with deep expertise in at least one area and a solid understanding of the trade-offs among different technologies.
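To make the table-format ownership concrete, here is a minimal, hedged sketch of the kind of Iceberg table definition this role would own. The catalog, table, and column names are purely illustrative, and executing the DDL would require a live Spark or Trino session with an Iceberg catalog configured, so the sketch only builds the statement:

```python
# Illustrative sketch: building Iceberg DDL in Spark SQL syntax.
# All names (lakehouse, transfers, column names) are hypothetical.

def iceberg_ddl(catalog: str, table: str, columns: dict[str, str], partition_by: str) -> str:
    """Build a CREATE TABLE statement targeting an Iceberg catalog."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return (
        f"CREATE TABLE IF NOT EXISTS {catalog}.{table} (\n  {cols}\n) "
        f"USING iceberg\nPARTITIONED BY ({partition_by})"
    )

ddl = iceberg_ddl(
    catalog="lakehouse",
    table="transfers",
    columns={"tx_hash": "STRING", "amount": "DECIMAL(38, 18)", "block_ts": "TIMESTAMP"},
    partition_by="days(block_ts)",  # Iceberg hidden partitioning by day
)
print(ddl)
```

Partitioning by a transform like `days(block_ts)` rather than a raw column is one of the trade-offs among table formats the role weighs: Iceberg's hidden partitioning lets query engines prune files without readers knowing the partition layout.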

The impact you’ll have here:

  • Architect and scale a high-performance data lakehouse on GCP, leveraging technologies like StarRocks, Apache Iceberg, GCS, BigQuery, Dataproc, and Kafka.
  • Design, build, and optimize distributed query engines such as Trino, Spark, or Snowflake to support complex analytical workloads.
  • Implement metadata management in open table formats like Iceberg, along with data discovery frameworks for governance and observability using Iceberg-compatible catalogs.
  • Develop and orchestrate robust ETL/ELT pipelines using Apache Airflow, Spark, and GCP-native tools (e.g., Dataflow, Composer).
  • Collaborate across departments, partnering with data scientists, backend engineers, and product managers on design and implementation.
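The ETL/ELT bullets above hinge on one property a production Airflow-orchestrated pipeline must have: reruns and retries must not duplicate data. A stdlib-only sketch of that partition-overwrite idempotency (in production this would be an Airflow task writing to Iceberg via Spark; the in-memory "lake" and all names here are illustrative):

```python
# Stdlib-only sketch of an idempotent, partitioned batch load.
# The in-memory "lake" stands in for object storage; names are hypothetical.
from collections import defaultdict

lake: dict[str, list[dict]] = defaultdict(list)

def load_partition(partition_date: str, rows: list[dict]) -> None:
    """Overwrite one date partition wholesale, so reruns never duplicate data."""
    lake[partition_date] = list(rows)  # replace the partition, don't append

# First run of the 2026-02-22 task...
load_partition("2026-02-22", [{"tx": "a"}, {"tx": "b"}])
# ...and a scheduler retry of the same task: output is unchanged, not doubled.
load_partition("2026-02-22", [{"tx": "a"}, {"tx": "b"}])
assert len(lake["2026-02-22"]) == 2
```

Overwrite-by-partition is the simplest way to make a daily batch task safe to retry; append-style loads need dedup keys or merge logic to achieve the same guarantee.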

What we’re looking for:

  • 5+ years of experience in data or software engineering, with a focus on distributed data systems and cloud-native architectures.
  • Proven experience building and scaling data platforms on GCP, including storage, compute, orchestration, and monitoring.
  • Strong command of one or more query engines such as Trino, Presto, Spark, or Snowflake.
  • Experience with modern table formats like Apache Hudi, Iceberg, or Delta Lake.
  • Exceptional programming skills in Python, along with strong fluency in SQL or Spark SQL.
  • Hands-on experience orchestrating workflows with Airflow and building streaming/batch pipelines using GCP-native services.

About the Team:

  • The Data Platform team is the funnel between all of TRM's data world and product world. We care about all layers of the stack, including petabytes of data stores, pipelines, and processes.
  • We have quite a big scope as a team, with new and exciting projects every quarter. As a result, we collaborate across the board with most teams at TRM.
  • We believe in async communication and are also not afraid to jump on a quick huddle if that helps move things faster. We are scrappy when the situation demands it and process-oriented when we need to achieve our OKRs.
  • We are always looking for people who can elevate the quality of our tech and our execution. If you enjoy a remote-first, async-friendly environment focused on efficacy and efficiency at petabyte scale, our team could be a great pick for you!
  • Team members are based in the US across almost all time zones! On-call shifts tend to be EST or PST, whichever suits you best.
  • We do try to reserve some overlap in the day for meetings. Our north star: no IC spends more than 3–4 hours/week in meetings.

Learn about TRM Speed in this position:

  • Build scalable engines to optimize routine scaling and maintenance tasks, such as self-serve automation for creating new pgbouncer instances, scaling disks, and scaling/updating clusters.
  • Make tasks faster the next time around and reduce dependency on any single person.
  • Identify ways to compress timelines using the 80/20 principle. For instance, what does it take to be operational in a new environment? Identify the must-haves and nice-to-haves needed to deploy our stack to full operation. Focus on the must-haves first to get us operational, then use future milestones to harden for customer readiness. We think in terms of weeks, not months.
  • Identify the first version, a.k.a. the "skateboard", for each project. For instance, build an observability dashboard within a week, then gather feedback from stakeholders to identify further needs or bells and whistles to add to the dashboard.

About TRM's Engineering Levels:

Engineer: Responsible for helping to define project milestones and executing small decisions independently with the appropriate tradeoffs between simplicity, readability, and performance. Provides mentorship to junior engineers, and enhances operational excellence through tech debt reduction and knowledge sharing.

Senior Engineer: Successfully designs and documents system improvements and features for an OKR/project from the ground up. Consistently delivers efficient and reusable systems, optimizes team throughput with appropriate tradeoffs, mentors team members, and enhances cross-team collaboration through documentation and knowledge sharing.

Staff Engineer: Drives scoping and execution of one or more OKRs/projects that impact multiple teams. Partners with stakeholders to set the team vision and technical roadmaps for one or more products. Is a role model and mentor to the entire engineering organization. Ensures system health and quality with operational reviews, testing strategies, and monitoring rigor.

The following represents the expected range of compensation for this role:

  • Individual pay is determined by skills, qualifications, experience, and location. The compensation details listed in this posting reflect the US base salary only.
  • The estimated base salary range for this role is $190,000 - $220,000.
  • Additionally, this role may be eligible to participate in TRM’s equity plan.
  • Please note – we factor in the different costs for geographies outside the United States.

Life at TRM

We are building a safer world. That promise shows up in how we work every day.

TRM runs fast. Really fast. We’re a high‑velocity, high‑ownership team that expects clarity, follow‑through, and impact. People who thrive here are energized by hard problems, experimentation, and direct feedback. If something takes months elsewhere, it often ships here in days. 

That pace isn’t for everyone. If you are optimizing primarily for consistent work-life balance, use the interview process to pressure-test fit. We want teammates who thrive here, not just survive here.

AI Fluency at TRM

AI fluency is a baseline expectation at TRM.

We believe AI meaningfully changes how top performers operate. We expect every team member to use AI to accelerate and reimagine their craft, not just automate surface tasks.

At TRM, AI fluency means you are among the top 10 percent of operators in your function in how you apply AI to:

  • Accelerate repeatable workflows
  • Structure and solve problems
  • Improve output quality
