Ethics and Regulations in Explainable AI
Ethical AI isn't just good practice; it's increasingly required by regulation. This course will teach you how to build explainable AI systems that prevent bias, satisfy regulators, help debug models, and provide clear explanations to users.
What you'll learn
AI systems are increasingly making decisions that affect people's lives, but most developers struggle to explain how those decisions are made, which creates legal, ethical, and technical risks. In this course, Ethics and Regulations in Explainable AI, you'll learn to build transparent AI systems that meet regulatory requirements while strengthening your technical skills. First, you'll explore how to identify and prevent ethical risks through real-world case studies of AI failures. Next, you'll discover how regulations like the GDPR and the EU AI Act require different types of AI explanations. Finally, you'll learn how to build technical explanations tailored to different stakeholders. When you're finished with this course, you'll have the skills and knowledge of ethical AI development and regulatory compliance needed to build trustworthy AI systems that satisfy legal requirements and earn user trust.
About the author
As a software engineer and lifelong learner, Dan wrote a PhD thesis and many highly cited publications on decision making and knowledge acquisition in software architecture. Dan used Microsoft technologies for many years, but gradually moved to Python, Linux, and AWS to gain different perspectives on the computing world.