- Course
Adversarial Robustness Toolbox (ART)
This course will teach you how to use the Adversarial Robustness Toolbox (ART) to evaluate and improve the security of AI models against adversarial attacks.
This course is included in the libraries shown below:
- Security
What you'll learn
Adversarial machine learning is one of the most critical and overlooked challenges in AI security. In this course, Adversarial Robustness Toolbox (ART), you’ll gain the ability to evaluate, attack, and defend AI models using open-source tools. First, you’ll explore the fundamentals of adversarial robustness and why traditional ML models are vulnerable. Next, you’ll learn how to perform adversarial attacks such as evasion and poisoning using ART’s attack modules. Finally, you’ll apply defensive countermeasures including adversarial training and preprocessing defenses to harden your models. When you’re finished, you’ll have the skills and knowledge to leverage ART for testing, improving, and demonstrating the robustness of machine learning systems in real-world security environments.