
Principal Data Engineer and Architect

BHP Career Portal
Full Time, Principal
CA. Posted February 11, 2026


Job Description

About BHP

At BHP we support our people to grow, learn, develop their skills and reach their potential. With a global portfolio of operations, we offer a diverse and inclusive environment with extraordinary career opportunities. Our strategy is to focus on creating a safe work environment where our employees feel strongly connected to our values and objectives, and where the capability of our people is key to our success.

Come and be a part of this success!

About Potash

The Jansen project in Saskatchewan, Canada is set to become one of the largest potash mines in the world and is located approximately 140 kilometers east of Saskatoon. As the largest investment in Saskatchewan’s history, Jansen is expected to generate approximately 5,500 workforce opportunities during construction and 900 long-term roles.

Once fully ramped up, Jansen is expected to have an initial production capacity of approximately 8.5 million tonnes per annum (Mtpa), with the potential to produce 16 to 17 Mtpa in future stages.

BHP has recently been named one of Canada's top 100 employers again for 2026. This formal recognition is a testament to our dedication to fostering a positive, diverse and rewarding work environment.

Purpose

The Principal Data Engineer and Architect establishes the data engineering capabilities for Potash, ensuring scalable design patterns, consistent practices, and high‑quality data solutions that support reliable, enterprise-wide decision‑making.

About the Role

As a Principal Data Engineer and Architect, you will play a pivotal role in shaping the future of data engineering for the Potash asset. This role blends deep technical capability with strategic architectural leadership.

You will partner closely with regional Data Utility teams around the world to remove technical blockers, uplift engineering practices, and drive consistency in design patterns, frameworks, and reusable components. You will help build and foster a data community, promoting knowledge sharing, collaboration, and alignment across teams.

In this role, you will:

  • Work closely with internal customers to understand their data requirements, model data structures, and design and implement scalable ingestion pipelines from operational and enterprise systems.
  • Lead the design and development of integration solutions and ETL pipelines, ensuring high‑quality documentation and approval of engineering patterns.
  • Collaborate with on‑prem and cloud platform teams to identify capability gaps and evaluate emerging tools and technologies.
  • Work with the Enterprise & Global (E&G) Data Utility team to enhance and evolve the E&G data platform to meet customer needs.
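The ingestion and ETL responsibilities above can be sketched in miniature. The following is an illustrative example only, not BHP's actual stack: the record shapes, the `transform` validation rule, and the SQLite sink are all assumptions chosen so the sketch stays self-contained.

```python
import sqlite3

# Illustrative raw records, as they might arrive from an operational system.
RAW_RECORDS = [
    {"sensor_id": "S-01", "reading": "42.5", "unit": "tph"},
    {"sensor_id": "S-02", "reading": "n/a", "unit": "tph"},  # malformed reading
    {"sensor_id": "S-01", "reading": "43.1", "unit": "tph"},
]

def transform(record):
    """Validate and type-cast one raw record; return None for rows that fail."""
    try:
        return (record["sensor_id"], float(record["reading"]), record["unit"])
    except (KeyError, ValueError):
        return None

def run_pipeline(conn, raw_records):
    """Extract, transform, and load clean rows; return the count loaded."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (sensor_id TEXT, reading REAL, unit TEXT)"
    )
    clean = [row for row in (transform(r) for r in raw_records) if row is not None]
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", clean)
    conn.commit()
    return len(clean)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    print(run_pipeline(conn, RAW_RECORDS))  # 2 (the malformed row is rejected)
```

The separation of a pure `transform` step from the load step is what makes pipelines like this testable and reusable across sources, which is the kind of pattern consistency the role is asked to drive.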

This role offers the opportunity to design and shape the data platform and data ecosystem for Potash, influence global engineering standards, support high‑impact data initiatives, and play a critical part in maturing BHP’s data engineering landscape.

Work Location

Downtown Saskatoon office, 40 hours per week. Hybrid working (a flexible combination of office and work from home, with shared workstations) is standard for BHP. This position requires regular travel to the Jansen mine site.

The roster schedule may be adjusted based on business requirements or as operations progress.

About You

Essential

You will have:

  • Experience working across distributed processing, traditional RDBMS, MPP and NoSQL database technologies.
  • Strong background with ETL and data warehousing tools such as Informatica, Talend, Pentaho or DataStage.
  • Hands‑on experience with Hadoop, Spark, Storm, Impala and related platforms.
  • Strong understanding of RDBMS concepts, ETL principles and end‑to‑end data pipeline development.
  • Solid knowledge of data modelling techniques (ERDs, star schema, snowflake schema).
  • Experience with AWS services including S3, EC2, EMR, RDS, Redshift and Kinesis.
  • Exposure to distributed processing (Spark, Hadoop, EMR), RDBMS (SQL Server, Oracle, MySQL, PostgreSQL), MPP (Redshift, Teradata) and NoSQL technologies (MongoDB, DynamoDB, Cassandra, Neo4J, Titan).
  • Experience designing and building streaming pipelines using tools such as Kafka, Kafka Streams or Spark Streaming.
  • Strong proficiency in Python and in at least two of Scala, SQL, and Java.
  • Experience deploying production applications, including testing, packaging, monitoring and release management.
  • Proficiency with Git‑based source control and CI/CD pipelines, ideally GitLab.
  • Strong engineering discipline including code reviews, testing frameworks and maintainable coding practices.
  • Master’s degree in Computer Science, MIS, Engineering or a related field.
  • At least 10 years’ experience in Data Engineering or Architecture.
  • Experience working within DevOps, Agile, Scrum or Continuous Delivery environments.
  • Ability to mentor team members and support capability development across teams.
  • Strong communication, listening and influencing skills.
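Among the modelling techniques listed above, the star schema is the one most directly tied to warehouse design: a central fact table of measurements keyed to descriptive dimension tables. A minimal, hypothetical sketch follows; all table and column names (`fact_production`, `dim_site`, and so on) are assumptions for illustration, not any real BHP schema.

```python
import sqlite3

# A minimal star schema: one fact table referencing two dimension tables.
SCHEMA = """
CREATE TABLE dim_site (site_id INTEGER PRIMARY KEY, site_name TEXT);
CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_production (
    site_id INTEGER REFERENCES dim_site(site_id),
    date_id INTEGER REFERENCES dim_date(date_id),
    tonnes  REAL
);
"""

def build_star_schema(conn):
    """Create the schema and load a few illustrative rows."""
    conn.executescript(SCHEMA)
    conn.execute("INSERT INTO dim_site VALUES (1, 'Jansen')")
    conn.execute("INSERT INTO dim_date VALUES (1, '2026-02-11')")
    conn.execute("INSERT INTO fact_production VALUES (1, 1, 12.5)")
    conn.execute("INSERT INTO fact_production VALUES (1, 1, 7.5)")
    conn.commit()

def total_by_site(conn):
    """Typical star-schema query: aggregate facts, label them via a dimension join."""
    return conn.execute(
        """SELECT s.site_name, SUM(f.tonnes)
           FROM fact_production f JOIN dim_site s USING (site_id)
           GROUP BY s.site_name"""
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    build_star_schema(conn)
    print(total_by_site(conn))  # [('Jansen', 20.0)]
```

A snowflake schema differs only in that the dimensions themselves are further normalised into sub-dimension tables; the fact table and the join-then-aggregate query pattern stay the same.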
