Generative AI Engineering and Fine-Tuning Transformers
Master transformer-based LLM fine-tuning with PEFT, LoRA, and QLoRA techniques.
This course cannot be purchased separately; to access the complete learning experience, graded assignments, and certificates, you need to enroll in the full Generative AI Engineering with LLMs Specialization. You can audit this course for free, which gives you access to the course materials and lectures and lets you learn at your own pace without any financial commitment.
Instructors:
English
Not specified
What you'll learn
Master parameter-efficient fine-tuning techniques
Implement LoRA and QLoRA adaptations
Optimize transformer models for specific tasks
Use Hugging Face and PyTorch frameworks effectively
Apply model quantization strategies
Develop practical fine-tuning solutions
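Of the objectives above, LoRA is the most concrete to illustrate: instead of updating a full weight matrix W during fine-tuning, it trains a low-rank pair of matrices A and B whose product forms the update, cutting the number of trainable parameters dramatically. A minimal NumPy sketch of the idea (the dimensions, rank, and alpha below are illustrative choices, not values from the course):

```python
import numpy as np

# Illustrative layer size and LoRA hyperparameters (assumptions, not course values).
d_in, d_out = 1024, 1024
r = 8        # LoRA rank
alpha = 16   # LoRA scaling factor

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight

# LoRA trains only the low-rank factors A and B; B starts at zero so the
# adapter is a no-op before any training steps.
A = rng.standard_normal((r, d_in)) * 0.01  # trainable
B = np.zeros((d_out, r))                   # trainable
scale = alpha / r

x = rng.standard_normal(d_in)
y = W @ x + scale * (B @ (A @ x))  # adapted forward pass

full_params = W.size
lora_params = A.size + B.size
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params:           {lora_params:,}")
print(f"reduction:             {full_params / lora_params:.0f}x")
```

In practice the course uses the Hugging Face PEFT library, which wraps this pattern around existing PyTorch modules rather than requiring it by hand.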
Skills you'll gain
This course includes:
0.93 hours of pre-recorded video
4 assignments
Access on Mobile, Tablet, Desktop
Full-time access
Shareable certificate
Top companies offer this course to their employees
Leading organizations provide this course to strengthen their employees' skills, ensuring they can handle complex projects and drive organizational success.
There are 2 modules in this course
This comprehensive course focuses on advanced techniques for fine-tuning transformer-based language models. Students learn parameter-efficient fine-tuning (PEFT) methods, including LoRA and QLoRA, and gain hands-on experience with both PyTorch and Hugging Face frameworks. The curriculum covers model quantization, pre-training transformers, and practical implementation of various fine-tuning techniques through interactive labs and real-world applications.
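The quantization topic mentioned above underpins QLoRA, which stores the frozen base model's weights in 4-bit form and attaches LoRA adapters on top. A simplified sketch of block-wise absmax quantization to signed 4-bit values (the real QLoRA scheme uses the NF4 data type and bitsandbytes, so this is an illustration of the principle, not the actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)  # one block of weights

# Absmax quantization: map the block into the signed 4-bit range [-7, 7].
scale = np.abs(w).max() / 7.0
q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)  # packed as nibbles in practice

# Dequantize on the fly during the forward pass.
w_hat = q.astype(np.float32) * scale

max_err = np.abs(w - w_hat).max()
print(f"max reconstruction error: {max_err:.4f}")
```

Because the scale is chosen so every value fits in range, the rounding error per weight is bounded by half a quantization step, which is why 4-bit storage of the frozen weights remains workable when the trainable LoRA adapters stay in higher precision.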
Transformers and Fine-Tuning
Module 1 · 4 Hours to complete
Parameter Efficient Fine-Tuning (PEFT)
Module 2 · 3 Hours to complete
Fee Structure
Instructors
Ashutosh Sagar: Expert in Generative AI and Fine-Tuning at IBM
Ashutosh Sagar is an instructor at IBM, offering courses in English focused on advanced topics in generative AI. His courses include "Generative AI Advanced Fine-Tuning for LLMs" and "Generative AI Engineering and Fine-Tuning Transformers." These courses delve into the techniques of fine-tuning large language models (LLMs) and transformers, equipping learners with the skills necessary to enhance model performance for specific applications.
Fateme: Ph.D. Candidate in Health Informatics and Data Scientist at IBM
Fateme is a fourth-year Ph.D. candidate in Health Informatics at McMaster University, where she specializes in applying machine learning to detect behavioral abnormalities in sensor data streams. In addition to her academic work, she is a data scientist at IBM. Fateme has published research in journals such as ACM Transactions on Computing for Healthcare and has presented her work at leading institutions, including the Mayo Clinic and the Duke Center for Health Informatics. Her contributions are advancing the field of data-driven healthcare solutions.
Testimonials
Testimonials and success stories are a testament to the quality of this program and its impact on your career and learning journey. Be the first to help others make an informed decision by sharing your review of the course.
Frequently asked questions
Below are some of the most commonly asked questions about this course. We aim to provide clear and concise answers to help you better understand the course content, structure, and any other relevant information. If you have any additional questions or if your question is not listed here, please don't hesitate to reach out to our support team for further assistance.