
AI & Agentic Platform QA Engineer – Enterprise | Manual & Automated Testing

FreelanceJobs
Full Time · Mid-level
CA · Posted March 6, 2026


Job Description

We're looking for an experienced QA Engineer to own quality assurance across an AI-powered, agentic enterprise platform in the defense technology and regulated software space.

This isn't traditional software testing — you'll be validating systems where AI agents make decisions, chain tasks, and interact with real infrastructure.

You'll be our last line of defense before anything ships, so you'll need to think like both a tester and a systems engineer.

This is a hands-on individual contributor role. You own the testing function end-to-end.

What You'll Be Working On

∙ Designing and executing test plans for AI agent workflows, LLM-integrated features, and multi-step agentic pipelines

∙ Testing deterministic and non-deterministic system behavior — knowing the difference and building strategies for both

∙ Performing functional, integration, regression, and adversarial testing across web and desktop platform surfaces

∙ Validating AI outputs for accuracy, consistency, hallucination risk, and alignment with expected behavior

∙ Building and maintaining automated test scripts for repeatable coverage across platform releases

∙ Identifying, documenting, and tracking defects with clear reproduction steps, severity classifications, and follow-through

∙ Supporting API testing, tool-call validation, agent-to-agent interaction testing, and role-based access control coverage

∙ Contributing to QA process standards, release checklists, and AI-specific quality frameworks
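One bullet above contrasts deterministic and non-deterministic system behavior. As a hedged illustration of what a strategy for each might look like (the function names and sample data below are invented for this sketch, not taken from the platform), a common approach is to keep exact-match assertions for deterministic paths while asserting on invariant properties of sampled outputs for non-deterministic ones:

```python
# Illustrative sketch only: exact-match checks for deterministic code
# paths, property-based checks for variable (e.g. LLM-generated) output.

def check_deterministic(transform, value, expected):
    """Exact-match assertion for a deterministic code path."""
    return transform(value) == expected

def check_nondeterministic(outputs, required_keywords, max_len=500):
    """Property checks for variable output: every sampled output must
    contain the required facts and stay within a length budget, but
    need not match any exact string."""
    return all(
        len(o) <= max_len
        and all(k.lower() in o.lower() for k in required_keywords)
        for o in outputs
    )

# Deterministic path: one input, one correct answer.
assert check_deterministic(str.upper, "ship", "SHIP")

# Non-deterministic path: several sampled outputs, all must satisfy
# the same invariants even though the wording differs.
samples = [
    "Order 42 has shipped and will arrive Tuesday.",
    "Your order (#42) shipped; expect delivery Tuesday.",
]
assert check_nondeterministic(samples, ["42", "shipped", "Tuesday"])
```

The same idea scales up to semantic-similarity or rubric-based scoring for richer LLM output, but the principle is unchanged: pin down what must always hold, not what the exact bytes are.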

You're a Great Fit If You:

∙ Have 3+ years of QA experience, with at least some direct exposure to AI, ML, or LLM-integrated systems

∙ Understand agentic system architecture — tool use, orchestration layers, memory, context windows — well enough to test them intelligently

∙ Are strong in both manual testing discipline and at least one automation framework (Playwright, Cypress, Selenium, Postman, or similar)

∙ Can write clear, reproducible bug reports that engineers and AI/ML teams actually appreciate

∙ Have experience testing APIs (REST/GraphQL), agent tool calls, and complex multi-user or multi-agent workflows

∙ Are comfortable operating in ambiguity — AI systems don't always fail the same way twice

∙ Have experience in regulated, government, or compliance-driven software environments — a strong plus

∙ Are familiar with CMMC, FedRAMP, or NIST-aligned systems — a plus, not required
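For candidates gauging the API-testing bar mentioned above, here is a minimal, self-contained sketch of a REST contract check in Python. The endpoint, response fields, and stub server are hypothetical stand-ins, not this platform's actual API; a throwaway stdlib HTTP server lets the check run anywhere:

```python
# Hedged sketch: a REST contract test against a hypothetical endpoint.
# The stub server below stands in for the real API under test.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/agents/42":  # hypothetical endpoint
            body = json.dumps({"id": 42, "status": "idle"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence request logging in tests
        pass

server = HTTPServer(("127.0.0.1", 0), StubAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Contract checks: status code, content type, and required fields.
resp = urllib.request.urlopen(f"{base}/v1/agents/42")
assert resp.status == 200
assert resp.headers["Content-Type"] == "application/json"
data = json.loads(resp.read())
assert {"id", "status"} <= data.keys()

server.shutdown()
```

The same shape of test ports directly to Postman collections or Playwright's API request fixtures; what matters is asserting the contract (status, schema, required fields) rather than incidental response details.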

What We're NOT Looking For

∙ Testers who only validate static, deterministic software and have never engaged with AI system behavior

∙ Developers moonlighting as QA — we want someone who owns the QA function

∙ Agencies or teams — this is an individual contributor role

Engagement Details

∙ Type: Contract / Freelance, ongoing potential

∙ Hours: Part-time to start (~15–20 hrs/week), scalable to full-time

∙ Location: Remote

∙ Communication: Async-friendly with structured sprint touchpoints

To Apply, Please Include:

1. Overview of your QA experience and any direct work with AI or agentic systems

2. Your preferred automation tools and testing stack

3. One example of a complex or non-obvious bug you caught — and how you found it

4. Your hourly rate

AI systems demand a different kind of rigor.

If you've thought seriously about how to test things that don't always behave the same way twice, we want to hear from you.

Contract duration of 1 to 3 months, with 40 hours per week.

Mandatory skills:

Web Testing, Software QA, Manual Testing, Software Testing, Functional Testing
