
Senior Big Data Engineer (AWS & Databricks)

Cube Hub Inc.
Urbandale, Iowa, US · Posted February 18, 2026


Job Description

Onsite

12-month contract; may extend

A Glider test will be used for any candidates selected to interview.

Candidates should have very strong communication skills and be able to clearly articulate their experience.

General Description:

We are looking for a highly technical engineer or scientist to create features and support the development of automation and autonomy products for complex off-road vehicles and related control systems using a cloud-based solutions stack. We are open to early or advanced career candidates with strong examples of novel contributions and highly independent work in a fast-paced software delivery environment.

The following are essential attributes/experience:

  • Excellent coding skills that include production software deployment experience
  • Big data experience (terabyte or petabyte level data sources)
  • Core understanding of cloud computing (e.g. AWS services like IAM, Lambda, S3, RDS)

Example Responsibilities (including but not limited to):

  • Architect and propose new AWS/Databricks solutions & updates to existing backend systems that process terabyte and petabyte level data.
  • Work closely with the product management team and end users to understand customer experience and system requirements, build backlog, and prioritize work.
  • Build infrastructure as code (e.g. Terraform)
  • Improve system scalability and performance; optimize workflows to reduce cloud costs
  • Create and update APIs (REST) and backend processes running on AWS Lambda
  • Build/support solutions involving containerization (e.g. Docker) and databases (e.g. PostgreSQL/PostGIS)
  • MLOps (e.g. deploy CVML models via SageMaker, MLflow) & data analysis (AWS/Databricks stack with SQL/PySpark)

  • Migrate CI/CD pipelines to GitHub Actions
  • Enhance monitoring and alerting for multiple systems (e.g. Datadog)
  • Enable field testing and customer support operations by debugging and fixing data issues
  • Work with data scientists to scalably fetch and manipulate large data sets to build models and run analyses

Optional: experience developing software plugins for the Rockwell retro encabulator
