Master hands-on implementation of XAI techniques in Python, from LIME and SHAP to neural network visualization.
This course cannot be purchased separately. To access the complete learning experience, graded assignments, and certificates, you'll need to enroll in the full Explainable AI (XAI) Specialization program. You can audit this course for free to explore the content, which includes access to course materials and lectures, so you can learn at your own pace without any financial commitment.
Instructors:
English
What you'll learn
Implement local explainability techniques like LIME and SHAP in Python
Create global explanations using PDP and ALE plots
Develop example-based explanations for machine learning models
Visualize and explain neural networks using state-of-the-art techniques
Apply explainability methods to LLMs and generative models
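To give a flavor of the local-explainability material, here is a minimal sketch of the core idea behind LIME: sample perturbations around the instance being explained, weight them by proximity, and fit a weighted linear surrogate whose coefficients act as local feature importances. The `black_box` function is a hypothetical stand-in; the course's labs apply this to real trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "black box" model, nonlinear in two features
# (an illustrative stand-in for a trained classifier/regressor).
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

# Instance to explain.
x0 = np.array([0.5, 1.0])

# 1. Sample perturbations around x0.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))

# 2. Weight samples by proximity to x0 (RBF kernel).
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)

# 3. Fit a weighted linear surrogate to the black-box outputs;
#    its slopes approximate the model's local behavior at x0.
W = np.sqrt(weights)[:, None]
A = np.hstack([Z, np.ones((len(Z), 1))])  # features + intercept
coef, *_ = np.linalg.lstsq(W * A, W[:, 0] * black_box(Z), rcond=None)

print("local feature importances:", coef[:2])
```

The recovered slopes approximate the local gradients of the black box at `x0` (roughly `cos(0.5)` for the first feature and `2.0` for the second), which is exactly the kind of local linear explanation LIME produces.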
Skills you'll gain
This course includes:
2.43 hours of pre-recorded video
4 assignments, 7 labs
Access on mobile, tablet, and desktop
Full-time access
Shareable certificate
Get a Completion Certificate
Share your certificate with prospective employers and your professional network on LinkedIn.
Created by
Provided by

Top companies offer this course to their employees
Top companies provide this course to enhance their employees' skills, ensuring they excel in handling complex projects and drive organizational success.
There are 3 modules in this course
This comprehensive programming course focuses on implementing Explainable AI (XAI) techniques in Python. Students learn to code and apply both local and global explainability methods, including LIME, SHAP, and PDP plots. The curriculum covers practical implementation of neural network visualization, attention mechanisms, and emerging approaches for explaining generative AI models. Through hands-on programming labs and real-world examples, participants gain expertise in making complex AI models interpretable and transparent.
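As one example of the global-explainability methods covered, a partial dependence plot (PDP) shows how a model's average prediction changes as one feature is varied while the rest of the dataset is held fixed. The sketch below uses a hypothetical `predict` function and synthetic data purely for illustration; the course implements this against real fitted estimators.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained model (stand-in for any fitted estimator's predict).
def predict(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1]

X = rng.normal(size=(200, 2))  # background dataset

# Partial dependence of feature 0: for each grid value v, set
# feature 0 to v across the whole dataset and average the predictions.
grid = np.linspace(-2, 2, 9)
pdp = []
for v in grid:
    Xv = X.copy()
    Xv[:, 0] = v
    pdp.append(predict(Xv).mean())
pdp = np.array(pdp)

for v, p in zip(grid, pdp):
    print(f"feature0 = {v:+.1f} -> avg prediction {p:.2f}")
```

Plotting `pdp` against `grid` would reveal the model's quadratic dependence on the first feature, which is the kind of global behavior PDP and ALE plots are designed to surface.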
Model-Agnostic Explainability
Module 1 · 6 Hours to complete
Explainable Deep Learning
Module 2 · 4 Hours to complete
Explainable Generative AI
Module 3 · 4 Hours to complete
Fee Structure
Instructor
Pioneering Responsible AI and Machine Learning Innovator at Duke University
Dr. Brinnae Bent is an esteemed faculty member in Artificial Intelligence at Duke University, where she serves as the Executive in Residence for the Master of Engineering in Artificial Intelligence program. With a robust background that bridges research and industry, Dr. Bent has led significant projects and developed impactful algorithms for major global companies, focusing on applications that enhance human health and well-being, such as noninvasive glucose monitoring and assistive technologies for mobility. She is a prolific researcher with over 30 publications, recognized for her groundbreaking work on digital biomarkers and her commitment to advancing Responsible AI practices. As an educator, Dr. Bent teaches core courses in the AI program and introduces innovative electives, including a forthcoming course on Explainable AI. Outside of her academic pursuits, she curates a weekly tech newsletter called “Spill the GPTea” and balances her professional life with personal interests as a mother, ultramarathoner, and artist. Dr. Bent's contributions to AI education and research position her as a leading voice in the field, dedicated to solving real-world challenges through technology.
Testimonials
Testimonials and success stories attest to the quality of this program and its impact on learners' careers. Be the first to help others make an informed decision by sharing your review of the course.
Frequently asked questions
Below are some of the most commonly asked questions about this course. We aim to provide clear and concise answers to help you better understand the course content, structure, and any other relevant information. If you have any additional questions or if your question is not listed here, please don't hesitate to reach out to our support team for further assistance.