- Learning Path Libraries: This path is available only in the libraries listed. To access it, purchase a license for the corresponding library.
- Security
AI Red Team Tools
AI systems are now core to many products and services, and defenders need realistic, repeatable red-team skills to uncover weaknesses such as data poisoning, prompt injection, model inversion, and supply-chain compromise. This path trains practitioners to think like attackers against ML/AI stacks so they can design better mitigations and evidence-based defenses.
Content in this path
AI Red Team Tools
A practical, tool-focused learning path that teaches security professionals how to discover, test, and exploit weaknesses in AI systems using modern red-teaming frameworks, attack toolchains, and adversarial techniques — with hands-on labs that mirror real-world model-and-pipeline environments.
What You'll Learn
- Identify AI attack surfaces: APIs, prompts, data, and CI/CD.
- Perform prompt injection and other LLM-specific attacks.
- Create adversarial examples that cause evasion or misclassification (see the sketch below).
- Execute data poisoning and supply-chain attacks.
- Test for model extraction, inversion, and membership inference.
- Prerequisites: comfort with Python scripting and the Linux command line, basic ML concepts (training vs. inference), familiarity with penetration-testing tools and techniques, and experience with HTTP/REST APIs.
- Pen Testing
- AI/ML
- Offensive Security
- Red Team Tools
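
The adversarial-examples objective above lends itself to a small worked illustration. The sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression model in plain NumPy; the weights, input, label, and epsilon are made-up values for illustration and are not taken from the path's labs.

```python
import numpy as np

# Toy logistic-regression "victim" model. The weights, input, and epsilon
# below are made-up values for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=8)        # model weight vector
b = 0.1                       # model bias
x = rng.normal(size=8)        # a clean input sample
y = 1.0                       # its true label (1 = positive class)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_input(x, y, w, b):
    """Gradient of binary cross-entropy loss with respect to the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

# Fast Gradient Sign Method: step the input in the direction of the sign of
# the loss gradient, bounded by epsilon, to push the model toward a
# misclassification of this sample.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_loss_wrt_input(x, y, w, b))

print("clean prediction:      ", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```

Real engagements use attack libraries against actual model APIs; this sketch only shows the core gradient-sign step behind evasion attacks.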
