- Course
- Security
LLM Prompt Injection: Attacks and Defenses
Integrating LLMs into an application can enhance productivity, but without security considerations, there are risks. This course teaches key practices for implementing LLMs securely and demonstrates how to test those implementations for weaknesses.
What you'll learn
LLMs need to be implemented securely—you can’t rely on the LLM itself for protection. So how do you achieve that, and what should you watch out for? In this course, LLM Prompt Injection: Attacks and Defenses, you’ll learn to use LLMs securely within your applications. First, you’ll explore the risks LLMs present, including when to trust them and when not to. Next, you’ll discover some of the specific attacks your LLM enabled applications will encounter, understanding how they work and why you need defenses. Finally, you’ll learn how to protect yourself, including actionable insights and approaches. When you’re finished with this course, you’ll have the skills and knowledge of LLM prompt injection needed to protect your application from unwanted, and potentially malicious, behavior.
Table of contents
About the author
Gavin is passionate about security and has an extensive background in software development in regulated environments. He currently works in a Red Team at a FTSE 100 company.