Data Engineer I (Base Camp)
Quorum Business Solutions, Inc.
Job Description
QRR-4412 Data Engineer I (Base Camp)
Dallas, TX
Data Engineer I
Location: Houston, TX or Dallas, TX
Model of Work: Hybrid in Texas (On-Site/In-Office a minimum of 2 days/week)
We are currently recruiting a class of new graduates to start with Quorum on July 13, 2026.
Are you excited by challenges? Do you enjoy working in a fast-paced, international, and dynamic environment? Then now is the time to join Quorum Software, a rapidly growing company and industry leader in oil & gas transformation.
Quorum Software is the world's largest provider of digital technology focused solely on business workflows that empower the next evolution of energy. From emerging companies to supermajors, throughout every region of the globe, customers rely on Quorum's proven innovation and unmatched global expertise to streamline business operations and make data-driven decisions that optimize profitability and growth. Our industry-leading solutions are transforming energy companies across the entire value chain, helping visionary leaders evolve their organizations into modern energy companies.
Overview
We are looking for a Data Engineer I to join our Data Platform team and help build the foundational data infrastructure that powers analytics, reporting, and AI/ML capabilities across Quorum's product portfolio. You will work alongside experienced data engineers and architects to design data models, build and maintain data pipelines, and ensure data quality across our platform.
This is an entry-level role ideal for recent graduates or early-career professionals who are passionate about data engineering and eager to grow. You'll gain hands-on experience with modern cloud data technologies while contributing to a strategic platform that serves 1,800+ energy companies worldwide. You will report to the Data Platform team manager and collaborate closely with product engineering teams, data architects, and data scientists.
Responsibilities
- Build, test, and maintain ETL/ELT data pipelines that ingest, transform, and deliver data from multiple source systems into our centralized data platform
- Develop and maintain dimensional data models (fact and dimension tables) following established patterns and standards set by the data architecture team
- Write and optimize SQL queries and transformations for data processing workloads
- Build and maintain a medallion architecture (bronze, silver, and gold layers) within a data lake
- Implement and monitor data quality checks, validation rules, and alerting to ensure data accuracy and reliability
- Work within our cloud data platform (Databricks, Azure Data Services, or similar) to build scalable, production-grade data solutions
- Collaborate with product engineering teams to understand source system schemas, data flows, and business context across Quorum's Upstream, Measurement, and Midstream product lines
- Support the development and maintenance of data catalogs, documentation, and metadata to promote data discoverability and governance
- Participate in code reviews, pair programming, and team retrospectives to continuously improve engineering practices
- Troubleshoot data pipeline failures, investigate data anomalies, and implement fixes in a timely manner
- Contribute to the team's agile development processes including sprint planning, estimation, and daily standups
- Other duties as assigned
Requirements
- Bachelor's degree in Computer Science, Data Science, Information Systems, Statistics, Industrial Engineering, or a related technical field
- Strong proficiency in SQL, including the ability to write complex queries, joins, aggregations, and window functions
- Programming experience in Python, with exposure to data processing libraries (e.g., Pandas, PySpark) preferred
- Foundational understanding of data modeling concepts (relational, dimensional, star schema)
- Familiarity with cloud platforms, preferably Microsoft Azure (Azure Data Factory, Azure SQL, Azure Data Lake Storage, or similar services)
- Exposure to or coursework in ETL/ELT pipeline design and data integration concepts
- Basic understanding of version control systems (Git) and collaborative development workflows
- Strong analytical and problem-solving skills with attention to detail
- Excellent communication skills (written and verbal) and ability to work effectively in a team environment
- Eagerness to learn, take feedback, and grow in a fast-paced engineering organization
Preferred Skills
- Experience with Databricks, Apache Spark, or Delta Lake (including coursework, internships, or personal projects)
- Familiarity with data orchestration tools such as Apache Airflow, Azure Data Factory, or dbt
- Exposure to big data concepts and distributed computing frameworks
- Understanding of data governance principles, data lineage, and metadata management
- Familiarity with AI/ML concepts and how data engineering supports machine learning workflows (e.g., feature engineering, training dataset preparation)
- Internship experience in data engineering or a related field