
Senior Software QA Engineer (Data Team)

DocuPet
Full Time · Senior
CA · Posted 4 days ago


Job Description

About Us

As the official pet registration provider for more than 250 jurisdictions, DocuPet is the largest and fastest-growing pet registration platform in North America.

Our proprietary platform consolidates all pet information into a single place and provides the services for pet owners, community members and animal shelters to ensure pets can be reunited quickly if they become lost.

Beyond our platform, DocuPet offers specialized pet tags, an AI-powered pet tracker, lost pet alert system, and will soon be launching a first-of-its-kind pet parenting mobile app - all aimed to ensure every pet in North America is registered and that each has a safe and happy home.

Our work is very important. More than 6 million pets enter animal shelters every year. Just 10% of those animals are returned to their owners. Effective registration, pet identification, reunification tools, and animal shelter resources, all provided by DocuPet, are the solution that will measurably reduce shelter intakes while providing significant new funding for animal welfare organizations.

A new National Pet Record Search Tool, available for free to all shelters joining our National Animal Shelter Network, will launch in Q2 of 2025. DocuPet has the support of the animal welfare industry and, with astute strategic leadership, will become the de facto National Pet Registry program, serving tens of millions of pet owners by 2027.

About the Role

As a Senior QA Engineer on the Data team at DocuPet, you will play a critical role in ensuring the accuracy, reliability, and quality of our data and platform infrastructure. The Data team is responsible for building and maintaining robust data pipelines, ETL processes, and analytics and reporting systems that inform business decisions across the entire organization.

In this role, you will design, develop, and execute thorough testing strategies for data workflows, validating pipelines, transformations, APIs, and reporting outputs. You will collaborate closely with Data Engineers, Analysts, and Product Managers, embedded in the work rather than arriving at the end of it. You will help shape the future of our quality processes by building automated testing solutions across multiple layers of our technology stack, and by designing the infrastructure that lets AI generate, run, and learn from tests at scale. As a senior team member, you will mentor others on QA best practices, testing strategies, and automation approaches.

AI generates the bulk of our functional and regression tests from Engineering and Product specs. Your job is to make those tests trustworthy, fill the gaps AI can't, and own the quality signal end to end. Strong QA fundamentals are the foundation; AI fluency is how you scale them. Your work will directly impact the integrity of DocuPet's data and support both operational excellence and business decision-making.

What you’ll do

  • Data and Pipeline Quality: Design AI-assisted test strategies for data pipelines, ETL processes, and schema evolution, using AI tooling to generate broad coverage while applying deep SQL knowledge to validate the transformations and edge cases that automated generation alone won’t catch.
  • Test Automation & Quality Ownership: Architect the spec-to-suite pipeline: feed Engineering and Product specs into AI tooling, evaluate the output against your own quality bar (assertion depth, flake risk, PII safety, schema coverage), and sign off. AI generates; you are accountable for what ships.
  • Test Infrastructure: Build and maintain everything the AI-assisted test system depends on: fixtures, seed data, API contract harnesses, page objects. Weak infrastructure produces weak AI output; this work sets the ceiling for everything built on top of it.
  • Exploratory & Risk-Based Testing: Define where AI-generated coverage is insufficient and lead targeted exploratory sessions in those gaps: complex data states, cross-pipeline dependencies, and failure scenarios that require human reasoning to anticipate and validate.
  • Reporting & Analytics: Own end-to-end validation of BI dashboards and reporting outputs, combining AI-assisted consistency checks with deep analytical scrutiny to catch discrepancies between source data and what the business is actually making decisions on.
  • Quality and Continuous Improvement: Treat quality metrics as a feedback system; monitor flake rate, escape defects, and AI output tweak rate, and use those signals to continuously improve prompts, specs, and infrastructure. The system should get measurably better each cycle.
  • Defect Management: Own the full defect lifecycle from AI-accelerated triage and clustering through to root cause diagnosis, impact assessment, and resolution in partnership with Data Engineering. Speed up the process; never delegate the judgment.
  • Upstream Influence: Operate at the requirements stage, not just the test stage — challenge ambiguous specs, shape acceptance criteria, and make testability a design consideration. The quality of AI-generated tests is only as good as the specs they come from.
  • Mentorship & Standards: Set the standard for AI-first QA practice on the team. Mentor engineers on prompt craft, context engineering, and rigorous evaluation of AI output, and build the documentation and shared libraries that make good practice repeatable.
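To make the pipeline-validation responsibility concrete, here is a minimal sketch of the kind of reconciliation check it describes: comparing a source table against its transformed rollup. All table and column names are hypothetical illustrations, not DocuPet's actual schema; SQLite stands in for a real warehouse.

```python
import sqlite3

# Hypothetical mini-pipeline: raw registrations rolled up per jurisdiction.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_registrations (pet_id INTEGER, jurisdiction TEXT, fee REAL);
    INSERT INTO raw_registrations VALUES
        (1, 'Syracuse', 15.0),
        (2, 'Syracuse', 15.0),
        (3, 'Toronto',  20.0);

    -- The "transformation" under test: a per-jurisdiction summary.
    CREATE TABLE jurisdiction_summary AS
        SELECT jurisdiction,
               COUNT(*) AS pet_count,
               SUM(fee) AS total_fees
        FROM raw_registrations
        GROUP BY jurisdiction;
""")

def check_rollup(conn):
    """Return a list of reconciliation failures (empty = rollup is consistent)."""
    failures = []
    # 1. Row-count reconciliation: summed pet_count must equal source row count.
    (src_rows,) = conn.execute("SELECT COUNT(*) FROM raw_registrations").fetchone()
    (agg_rows,) = conn.execute("SELECT SUM(pet_count) FROM jurisdiction_summary").fetchone()
    if src_rows != agg_rows:
        failures.append(f"row count mismatch: {src_rows} source vs {agg_rows} aggregated")
    # 2. Aggregate reconciliation: total fees must match to the cent.
    (src_fees,) = conn.execute("SELECT SUM(fee) FROM raw_registrations").fetchone()
    (agg_fees,) = conn.execute("SELECT SUM(total_fees) FROM jurisdiction_summary").fetchone()
    if round(src_fees, 2) != round(agg_fees, 2):
        failures.append(f"fee mismatch: {src_fees} source vs {agg_fees} aggregated")
    return failures

print(check_rollup(conn))  # an empty list means the transformation reconciles
```

Checks like these are exactly what AI-generated suites tend to miss: they require knowing which invariants (row conservation, aggregate totals) the transformation must preserve.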

What we’re looking for

QA & Data Skills (Technical Foundation)

  • 5+ years in software QA with significant focus on data systems, pipelines, or analytics platforms.
  • Deep SQL expertise - you trace transformations, validate aggregates, and find anomalies others miss.
  • Hands-on experience testing data-intensive web applications and backend systems end-to-end.
  • Strong test automation experience with Playwright, Cypress, or similar; you own frameworks, not just write tests in them.
  • Solid API testing fundamentals: REST endpoints, payloads, schema, error handling, and performance.
  • Experience with CI/CD pipelines and Git as daily tools.
  • Bachelor's in Computer Science, Software Engineering, or equivalent experience.
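As a sketch of the API-testing fundamentals listed above, the snippet below checks a JSON payload against an expected contract: required fields, field types, and no unexpected keys. The endpoint shape and field names are invented for illustration; a canned string stands in for a live HTTP response.

```python
import json

# Expected contract for a hypothetical GET /pets/{id} response.
EXPECTED_SCHEMA = {
    "pet_id": int,
    "name": str,
    "jurisdiction": str,
    "registered": bool,
}

def validate_payload(raw, schema):
    """Return a list of contract violations (empty = payload conforms)."""
    try:
        body = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    errors = []
    for field, expected_type in schema.items():
        if field not in body:
            errors.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(body[field]).__name__}")
    for field in body:
        if field not in schema:
            errors.append(f"unexpected field: {field}")
    return errors

# Canned responses standing in for real API calls.
good = '{"pet_id": 42, "name": "Rex", "jurisdiction": "Syracuse", "registered": true}'
bad = '{"pet_id": "42", "name": "Rex", "jurisdiction": "Syracuse"}'

print(validate_payload(good, EXPECTED_SCHEMA))  # []
print(validate_payload(bad, EXPECTED_SCHEMA))   # type mismatch + missing field
```

In practice this logic would live in a contract-test harness (pytest plus a schema library such as jsonschema); the point is the discipline of asserting on structure and types, not just status codes.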

AI-First Mindset

We treat AI as a force multiplier, not a shortcut. It accelerates how we generate, validate, and improve test coverage, but the judgment, standards, and ownership behind it are human. AI fluency is the baseline for everyone on this team; as a senior, you are expected to raise that bar.

  • Proficient in AI coding assistants (Claude Code, Copilot, or similar) as a primary working tool, across test generation, debugging, edge case discovery, and root-cause analysis.
  • Exercises sound judgment on AI output, knowing where it can be trusted, where it needs scrutiny, and building infrastructure that reflects that understanding.
  • Diagnoses weak AI output at its source, whether that is the prompt, the context, or the underlying infrastructure, and improves it rather than working around it.
  • Applies the same rigour to AI-generated tests as to any production code, covering assertions, edge cases, flake risk, and security considerations.
  • Maintains awareness of the AI tooling landscape and applies that knowledge to continuously improve how the team works.
  • Serves as the quality reference point for AI usage on the team, guiding others with well-reasoned, experience-backed judgment.

Senior Qualities

  • Sound judgment under ambiguity: you decide what to test deeply, what to skip, and you make the tradeoff explicit.
  • Systems thinking: you see how a flaky test or an unclear spec ripples across CI trust, release confidence, and team velocity.
  • Ownership without authority: you drive quality outcomes across teams through influence and craft, not title.
  • Clear and direct: you give actionable feedback and raise concerns early, not after the fact.
  • Team player and unblocker: you ship the shared library, the harness, and the SOP: the work that makes everyone else faster, not just you.

Nice-to-Have Skills:

  • Data observability tooling (Great Expectations, Monte Carlo, dbt tests)
  • Cloud data platforms (Snowflake, BigQuery, Databricks)
  • AI/ML pipeline testing or model output validation
  • AI-native testing platforms (Mabl, Testim, Applitools) for context on the landscape, not as a substitute for fundamentals
  • Performance testing for data APIs at volume

Job Type: Full-time

Pay: $100,000.00-$115,000.00 per year

Benefits

  • Dental care
  • Life insurance
  • Paid time off
  • Vision care

Experience

  • SQL: 3 years (required)
  • data-intensive web applications: 2 years (preferred)
  • maintaining test automation frameworks: 3 years (required)
  • AI coding assistants (Claude Code, Copilot, or similar): 1 year (required)
  • quality assurance: 5 years (required)
  • creating use cases and unit tests: 5 years (required)
  • testing RESTful APIs: 3 years (required)

Language:

  • English (required)

Work Location: Remote

About DocuPet


DocuPet

docupet.com
