Master interpretable ML techniques in Python, from regression and decision trees to neural networks and mechanistic interpretability.
This course cannot be purchased separately. To access the complete learning experience, graded assignments, and a shareable certificate, you'll need to enroll in the full Explainable AI (XAI) Specialization. You can audit this course for free to explore the content, which includes access to the course materials and lectures, letting you learn at your own pace without any financial commitment.
Instructor:
English
What you'll learn
Describe and implement regression and generalized models
Explain and implement decision trees and rules
Demonstrate knowledge of interpretable neural networks
Understand mechanistic interpretability concepts
Analyze representation learning in LLMs (see the sketch after this list)
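The last two outcomes center on inspecting what a network computes internally. As a flavor of that kind of analysis (an illustrative sketch, not actual course material), the snippet below uses a PyTorch forward hook to capture a hidden layer's activations from a toy model; the model, layer choice, and data are all assumptions for illustration.

```python
# Illustrative sketch only (not from the course): capture a hidden
# layer's activations with a PyTorch forward hook -- a basic primitive
# for inspecting learned representations in mechanistic interpretability.
import torch
import torch.nn as nn

# A toy stand-in for a real network (assumption for illustration).
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

activations = {}

def save_activation(name):
    # PyTorch hooks receive (module, inputs, output).
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach the hook to the hidden ReLU layer.
model[1].register_forward_hook(save_activation("hidden"))

x = torch.randn(3, 4)   # a toy batch of 3 examples
_ = model(x)            # the forward pass fills `activations`
print(activations["hidden"].shape)  # torch.Size([3, 8])
```

The same pattern scales up: in practice you would hook the attention or MLP layers of a transformer and analyze the captured activations rather than just printing their shape.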
Skills you'll gain
This course includes:
1.78 hours of pre-recorded video
3 assignments, 7 labs
Access on Mobile, Tablet, Desktop
Full lifetime access
Shareable certificate
Get a Completion Certificate
Share your certificate with prospective employers and your professional network on LinkedIn.
Created by
Provided by
Top companies offer this course to their employees
Top companies provide this course to strengthen their employees' skills, helping them excel at complex projects and drive organizational success.
There are 3 modules in this course
This comprehensive programming course focuses on implementing interpretable machine learning techniques. The curriculum covers regression models, decision trees, neural networks, and mechanistic interpretability concepts. Through hands-on Python programming labs, students learn to create transparent, ethical AI systems for critical domains (a minimal flavor of such a lab is sketched after the module list below).
Regression and Generalized Models
Module 1 · 5 Hours to complete
Rules, Trees, and Neural Networks
Module 2 · 4 Hours to complete
Introduction to Mechanistic Interpretability
Module 3 · 3 Hours to complete
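As a rough flavor of the hands-on labs described above (an assumed sketch using scikit-learn, not actual course material), the snippet below fits the two classic interpretable models from the first two modules and reads their learned structure directly: per-feature coefficients from a linear model, and if/else rules from a shallow decision tree.

```python
# Assumed illustrative sketch (not course material): two directly
# interpretable models fit on the classic iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Linear model: each coefficient is a per-feature, per-class effect size.
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, coefs in zip(feature_names, linear.coef_.T):
    print(f"{name}: {coefs.round(2)}")

# Shallow tree: the entire model prints as human-readable if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```

Keeping the tree shallow (max_depth=3) is what keeps it interpretable: the printed rule set stays short enough to audit by hand.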
Fee Structure
Instructor
Pioneering Responsible AI and Machine Learning Innovator at Duke University
Dr. Brinnae Bent is an esteemed faculty member in Artificial Intelligence at Duke University, where she serves as the Executive in Residence for the Master of Engineering in Artificial Intelligence program. With a robust background that bridges research and industry, Dr. Bent has led significant projects and developed impactful algorithms for major global companies, focusing on applications that enhance human health and well-being, such as noninvasive glucose monitoring and assistive technologies for mobility. She is a prolific researcher with over 30 publications, recognized for her groundbreaking work on digital biomarkers and her commitment to advancing Responsible AI practices. As an educator, Dr. Bent teaches core courses in the AI program and introduces innovative electives, including a forthcoming course on Explainable AI. Outside of her academic pursuits, she curates a weekly tech newsletter called “Spill the GPTea” and balances her professional life with personal interests as a mother, ultramarathoner, and artist. Dr. Bent's contributions to AI education and research position her as a leading voice in the field, dedicated to solving real-world challenges through technology.
Testimonials
Testimonials and success stories speak to the quality of this program and its impact on learners' careers. Be the first to help others make an informed decision by sharing your review of the course.
Frequently asked questions
Below are some of the most commonly asked questions about this course. We aim to provide clear and concise answers to help you better understand the course content, structure, and any other relevant information. If you have any additional questions or if your question is not listed here, please don't hesitate to reach out to our support team for further assistance.