Job Description
At Dow, we believe in putting people first and we’re passionate about delivering integrity, respect and safety to our customers, our employees and the planet.
Our people are at the heart of our solutions. They reflect the communities we live in and the world where we do business. Their diversity is our strength. We’re a community of relentless problem solvers that offers the daily opportunity to contribute with your perspective, transform industries and shape the future. Our purpose is simple - to deliver a sustainable future for the world through science and collaboration. If you’re looking for a challenge and meaningful role, you’re in the right place.
About this role
Dow has an exciting opportunity for a Data Engineer located in Midland, MI, or Houston, TX. In this role, you will make significant technical contributions to critical data initiatives within our team at Dow, driving the technical implementation and contributing to the design of scalable, Gold-layer data products on the Azure Databricks Lakehouse Platform.
This role focuses on solving complex technical challenges across optimization, architecture contribution, and reliability, ensuring our datasets are performant and ready to power advanced use cases, including:
- Machine Learning (ML) Pipelines
- Real-Time Data Consumption
- Generative and Agentic AI Systems
- Core Enterprise Reporting and BI
- Data-driven Applications
Responsibilities
- Technical Design Contribution: Collaborate with senior data engineers to translate complex business requirements and ambiguous problem statements into clear, robust, and scalable technical designs and data models (e.g., dimensional modeling, star schemas), and independently drive the implementation of these designs.
- Performance Optimization: Design, build, and deploy high-volume data transformation logic using highly optimized PySpark. You will apply advanced techniques to tune Spark jobs and diagnose performance bottlenecks to ensure maximum efficiency and minimal cloud compute cost.
- Architecture & Deployment: Contribute significantly to the design and improvement of CI/CD pipelines in Azure DevOps/Git, ensuring reliable, automated, and secure deployment of data solutions across environments.
- Diverse Data Integration: Deeply understand and connect to various source systems, demonstrating proficiency in managing data persistence and query performance across diverse technologies like SQL Server, Neo4j, and CosmosDB.
- Quality & Governance: Proactively implement and maintain advanced data quality frameworks (e.g., Delta Live Tables, Great Expectations) and monitoring solutions to ensure data reliability for mission-critical applications.
- Collaboration & Mentorship: Serve as a go-to technical resource for peers, conducting technical code reviews and informally mentoring Associate Data Engineers on PySpark and Databricks best practices.
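The quality-and-governance responsibility above can be illustrated with a minimal, framework-agnostic expectation check in plain Python. The table, column names, and thresholds below are hypothetical; in practice this logic would typically be expressed as Delta Live Tables expectations or a Great Expectations suite rather than hand-rolled:

```python
# Minimal sketch of a data-quality gate, loosely modeled on the
# "expectation" style used by tools like Delta Live Tables and
# Great Expectations. All record and column names are hypothetical.

def expect_not_null(column):
    """Expectation: the row has a non-null value in `column`."""
    return lambda row: row.get(column) is not None

def expect_in_range(column, low, high):
    """Expectation: `column` is non-null and falls within [low, high]."""
    return lambda row: row.get(column) is not None and low <= row[column] <= high

def validate_batch(rows, expectations):
    """Split a batch of records into (passed_rows, failed_rows)."""
    passed, failed = [], []
    for row in rows:
        if all(check(row) for check in expectations.values()):
            passed.append(row)
        else:
            failed.append(row)
    return passed, failed

# Hypothetical Gold-layer batch for an orders fact table.
batch = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": None},   # fails the not-null check
    {"order_id": 3, "amount": -5.0},   # fails the range check
]
checks = {
    "amount_present": expect_not_null("amount"),
    "amount_non_negative": expect_in_range("amount", 0.0, 1e9),
}
good, bad = validate_batch(batch, checks)
print(len(good), len(bad))  # → 1 2
```

Quarantining failed rows rather than dropping them silently is what makes such a gate usable for the mission-critical monitoring the role describes.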
A successful candidate will possess the experience and technical depth required to independently implement and optimize complex data solutions:
- Core Technical Expertise (2-5 Years Demonstrated Experience)
- PySpark and Distributed Processing: Proven ability to write highly optimized, production-grade PySpark/Spark code. Experience identifying and resolving performance bottlenecks in a distributed computing environment.
- Advanced Data Modeling: Practical experience designing and implementing analytical data models (e.g., dimensional modeling, star/snowflake schemas) and handling Slowly Changing Dimensions (SCDs).
- Cloud Orchestration: Expertise in using Azure Data Factory (ADF), Databricks Workflows, or equivalent tools (e.g., Airflow) for complex dependency management, error handling, and end-to-end pipeline orchestration.
- Database Versatility: Demonstrated experience with advanced SQL and hands-on experience querying and integrating data from at least one non-relational or Graph database (e.g., CosmosDB, Neo4j).
- Engineering Mindset and Professional Growth
- Technical Design Contribution: Ability to rapidly synthesize information and contribute clear, well-documented technical specifications and architectural diagrams to the design process.
- Feature Ownership: Demonstrated history of taking ownership of complex features and modules within larger projects, driving them to completion, and managing technical dependencies autonomously.
- Pragmatism and Initiative: A strong bias for action, coupled with a pragmatic approach to delivering stable, maintainable, and cost-effective solutions.
- Communication & Influence: Excellent verbal and written communication skills, with the ability to articulate technical designs to both engineering peers and senior stakeholders, effectively influencing technical decisions.
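The Slowly Changing Dimension experience called out under Advanced Data Modeling can be sketched as a Type 2 update in pure Python. The dimension fields (customer_id, segment) are hypothetical examples; on the Databricks Lakehouse this logic would normally be implemented as a Delta Lake MERGE rather than in-memory loops:

```python
from datetime import date

# Sketch of a Type 2 Slowly Changing Dimension update: when a tracked
# attribute changes, the current row is expired and a new current row
# is inserted, preserving full history. Field names are hypothetical.

def apply_scd2(dimension, incoming, today):
    """dimension: rows with valid_from/valid_to/is_current flags.
    incoming: {business_key: new_segment_value} from the source system."""
    current = {r["customer_id"]: r for r in dimension if r["is_current"]}
    for key, new_segment in incoming.items():
        row = current.get(key)
        if row is None:
            # Brand-new business key: insert as the current version.
            dimension.append({"customer_id": key, "segment": new_segment,
                              "valid_from": today, "valid_to": None,
                              "is_current": True})
        elif row["segment"] != new_segment:
            # Tracked attribute changed: expire the old version,
            # then append a fresh current version.
            row["valid_to"] = today
            row["is_current"] = False
            dimension.append({"customer_id": key, "segment": new_segment,
                              "valid_from": today, "valid_to": None,
                              "is_current": True})
    return dimension

dim = [{"customer_id": "C1", "segment": "retail",
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
dim = apply_scd2(dim, {"C1": "enterprise", "C2": "retail"}, date(2024, 6, 1))
print(len(dim), sum(r["is_current"] for r in dim))  # → 3 2
```

The same expire-and-insert pattern maps directly onto a Delta Lake MERGE with WHEN MATCHED and WHEN NOT MATCHED clauses.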
Required Qualifications
- A minimum of a bachelor’s degree, or relevant military experience at or above a U.S. E-5 ranking or Canadian Petty Officer 2nd Class or Sergeant, OR 5 years of relevant experience in lieu of a bachelor’s degree