
Lead, Responsible AI, Security, and Model Risk (Remote)

CareFirst BlueCross BlueShield
Full Time | Staff | Remote
$13k – $23k | Posted April 16, 2026

Job Description

Responsibilities & Qualifications

PURPOSE:

Drives enterprise trust and risk management for AI by owning the organization's end-to-end AI risk posture, inclusive of Responsible AI, model risk management, AI security, privacy, and compliance. Establishes and operationalizes governance frameworks, guardrails, and approval pathways that enable rapid delivery of AI solutions while meeting regulatory, security, and audit expectations. Ensures AI systems are safe, compliant, auditable, and secure across the full model lifecycle, from ideation through production monitoring.

This role owns the risk decisions and guardrails that determine what AI systems may operate in production and under what conditions.

ESSENTIAL FUNCTIONS:

Enterprise AI Risk & Governance Ownership

  • Own the enterprise AI risk management framework covering model risk, Responsible AI, AI security, privacy, and compliance.
  • Define, implement, and enforce AI policies, standards, operating procedures, and control requirements across the AI lifecycle.
  • Establish clear decision rights and approval paths that support rapid delivery while maintaining strong controls.
  • Act as the final risk authority for AI systems entering or operating in production, including documentation and sign-off expectations.

Model Risk Management & Lifecycle Oversight

  • Lead model risk assessments, validation, approvals, and documentation for AI/ML/GenAI systems.
  • Define standards for transparency, explainability, performance monitoring, evaluation, and retraining triggers.
  • Partner with audit and enterprise risk teams to support internal/external reviews and ensure continuous audit readiness.
  • Oversee AI incident management from triage through root-cause analysis, remediation, and reporting, including control failures and model performance issues.

AI Security & Threat Posture

  • Own AI security risk posture, including risks unique to LLMs and generative AI (e.g., prompt injection, data leakage, misuse).
  • Partner with cybersecurity teams to implement AI-specific threat modeling, security requirements, monitoring, and detection controls.
  • Define requirements for model access controls, data usage, logging, and release governance aligned to enterprise security policy.
  • Ensure AI systems meet enterprise security expectations without duplicating platform/infrastructure ownership.

Regulatory, Privacy & Compliance Partnership

  • Serve as the primary AI liaison to Legal, Privacy, Compliance, ERM, and Security teams.
  • Interpret and operationalize regulatory requirements (e.g., HIPAA, state/federal AI guidance, emerging AI regulations).
  • Ensure AI systems adhere to data protection, consent, and responsible data usage requirements.
  • Translate regulatory expectations into practical guardrails that delivery and engineering teams can implement.

Responsible AI & Ethical Deployment

  • Lead responsible AI practices related to fairness, bias detection/mitigation, explainability, and human-in-the-loop controls.
  • Define when/how human oversight is required in AI-supported decisions, including escalation and override requirements.
  • Embed Responsible AI requirements into design and release processes, rather than post-hoc reviews.

Cross-Functional Enablement (Risk, Not Training)

  • Educate delivery and engineering teams on approval criteria, risk expectations, control requirements, and compliance needs.
  • Provide clear, actionable guidance so teams understand how to get to yes quickly and safely.
  • Build dashboards and executive reporting that provide visibility into risk exposure, approvals status, incidents, and control health.

Planning, Roadmap Input, and Continuous Improvement

  • Provide risk input to enterprise AI roadmaps and investment planning.
  • Identify governance automation opportunities (templates, tooling, standard evidence packages) to reduce friction.
  • Continuously improve policies, standards, and controls based on incidents, audits, and evolving threat/regulatory landscapes.

QUALIFICATIONS

Education Level: Bachelor's Degree in Computer Science, Information Technology, or a related field. In lieu of a Bachelor's degree, an additional 4 years of relevant work experience is required on top of the required work experience.

Licenses/Certifications Upon Hire Preferred:

  • IAPP - AIGP (Artificial Intelligence Governance Professional).

Experience: 10 years of experience in the architecture domain.

Preferred Qualifications

  • Advanced de
