Senior Product Security Engineer

HCLSoftware
Full Time · Senior
Visakhapatnam, Andhra Pradesh, IN · Posted April 25, 2026


Job Description

Position Summary

HCL Software develops, markets, sells, and supports enterprise software solutions across multiple pillars including Customer Experience, Digital Solutions, Secure DevOps, and Security & Automation. As our products increasingly incorporate Artificial Intelligence and Machine Learning capabilities, ensuring the security of AI-driven systems has become a critical priority.

The Senior Product Security Engineer will be responsible for identifying and exploiting security weaknesses in AI/ML systems, applications, and models embedded within HCL Software products. This role focuses on evaluating the resilience of AI systems against emerging threats such as prompt injection, data poisoning, model manipulation, adversarial attacks, and LLM abuse scenarios.

The individual will work closely with AI/ML engineering teams, DevOps, and Product Development groups to design and execute advanced security testing strategies for AI-enabled applications. The role will also support the secure adoption of different types of AI technologies across the organization while ensuring adherence to secure development practices.

This position requires strong expertise in application security and AI system behavior, along with the ability to simulate real-world attacks against AI-driven platforms.

What You Will Be Doing

  • Perform penetration testing of AI-powered applications and systems, including LLM-based applications, AI APIs, and ML pipelines
  • Identify vulnerabilities such as prompt injection, data leakage, insecure model outputs, model extraction, and adversarial inputs
  • Conduct red-teaming exercises for generative AI systems to simulate abuse scenarios
  • Evaluate AI systems for model integrity, training data risks, and inference security weaknesses
  • Collaborate with AI/ML engineering teams to ensure security best practices are embedded in model development and deployment
  • Develop attack methodologies and frameworks for AI security testing
  • Assess security risks associated with AI model hosting platforms, APIs, and inference services
  • Provide detailed vulnerability reports and remediation guidance to engineering teams
  • Integrate AI security testing into the Secure SDLC process
  • Research emerging threats in AI/ML security and adversarial machine learning
  • Work with internal and external teams to enhance AI security posture across HCL Software products

Required Qualifications / Experience

Skills

  • Security requirements analysis
  • Threat modeling
  • SCA, SAST, DAST, VAPT, and exploitation
  • Market-leading tools: AppScan, Black Duck, Fortify, etc.
  • Implementation and use of AI in VAPT workflows

Must-Have Technical Skills

  • 5–8+ years of experience in Application Security, Penetration Testing, or Offensive Security
  • Strong knowledge of web application and API security testing
  • Experience with security testing tools such as Burp Suite, OWASP ZAP, Metasploit, and Nmap
  • Understanding of AI/ML architectures including LLMs, ML pipelines, and model deployment environments

Must-Have Functional Skills

  • Ability to simulate attacks against AI models and AI-driven applications
  • Strong knowledge of OWASP Top 10 and AI-specific security risks
  • Experience working with development teams to remediate security vulnerabilities
  • Strong analytical and problem-solving skills

Nice-to-Have Skills

  • Knowledge of LLM security risks (prompt injection, jailbreak attacks, hallucination exploitation)
  • Familiarity with AI security frameworks such as OWASP Top 10 for LLM Applications
  • Experience with Python-based AI/ML environments (TensorFlow, PyTorch, or similar frameworks)
  • Understanding of Adversarial Machine Learning concepts
  • Experience testing AI APIs or AI-powered SaaS platforms
  • Knowledge of cloud platforms such as AWS, Azure, or GCP

Certifications (Preferred)

  • OSCP / OSWE / CEH
  • AI/ML security-related certifications (if available)

What We Offer

  • Remote-friendly work environment
  • Competitive salary and performance incentives
