This LLM Fine-Tuning course equips you with the skills to optimize and deploy domain-specific large language models for advanced Generative AI applications. Begin with foundational concepts—learn supervised fine-tuning, parameter-efficient fine-tuning methods (PEFT), and reinforcement learning from human feedback (RLHF). Master data preparation, hyperparameter tuning, and key evaluation strategies. Progress to implementation using LLM frameworks and libraries, and apply best practices for model selection, bias monitoring, and overfitting control. Conclude with hands-on demos—fine-tune Falcon-7B and build an image generation app using LangChain and OpenAI DALL·E.


What you'll learn
- Fine-tune LLMs using supervised learning, PEFT, and RLHF techniques
- Prepare and structure datasets for efficient model training
- Optimize model accuracy with hyperparameter tuning and bias checks
- Build real-world GenAI apps with fine-tuned models like Falcon-7B and DALL·E
Skills you'll gain
Details to know

Add to your LinkedIn profile
July 2025
7 assignments
See how employees at top companies are mastering in-demand skills

Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 2 modules in this course
Explore the foundations of LLM fine-tuning in this comprehensive module. Learn core principles of large language model (LLM) fine-tuning, from supervised and parameter-efficient methods (PEFT) to reinforcement learning with human feedback (RLHF). Gain hands-on experience in data preparation and hyperparameter tuning through real-world demos to optimize GenAI performance.
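To make the data-preparation step above concrete, here is a minimal sketch of how instruction-response pairs are commonly rendered into single training texts before tokenization. The prompt template and function names are illustrative assumptions, not the course's exact format:

```python
# Minimal supervised fine-tuning data-preparation sketch (illustrative
# template, not the course's exact format): each instruction/response
# pair is flattened into one prompt string that a trainer can tokenize.

def format_example(instruction: str, response: str) -> str:
    """Render one instruction-tuning pair with a simple prompt template."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

def build_dataset(pairs):
    """Turn raw (instruction, response) tuples into training texts."""
    return [format_example(i, r) for i, r in pairs]

if __name__ == "__main__":
    raw = [("Summarize: LLMs are large neural networks.",
            "LLMs are big neural nets.")]
    for text in build_dataset(raw):
        print(text)
```

In practice the resulting strings would be tokenized and fed to a trainer; the key idea is that the template stays consistent across training and inference.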
What's included
13 videos, 1 reading, 3 assignments
Master LLM fine-tuning evaluation and deployment in this hands-on module. Learn to optimize and assess fine-tuned models, explore key libraries and frameworks, and implement best practices for data preparation, model selection, and bias monitoring. Apply concepts in real-time through demos including tuning Falcon-7B and building an AI image generation app with LangChain and DALL·E.
What's included
10 videos, 4 assignments
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Instructor

Offered by




Frequently asked questions
Start by understanding the basics of large language models and their architecture. Then explore fine-tuning techniques like supervised learning, PEFT, and RLHF using tools such as Hugging Face, LangChain, and frameworks like PyTorch.
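To make the PEFT idea mentioned above concrete, here is a toy, framework-free sketch of the LoRA update that parameter-efficient methods build on (dimensions and helper names are my own for illustration): rather than training the full weight matrix W, two small low-rank matrices A and B are trained, and their product is added to the frozen W.

```python
# Toy LoRA (Low-Rank Adaptation) sketch: the frozen weight matrix W is
# augmented by a trainable low-rank product B @ A, so only r*(d_in + d_out)
# parameters need updating instead of d_in * d_out. Pure Python, no frameworks.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(W, A, B, x, alpha=1.0):
    """Compute (W + alpha * B @ A) @ x for a single input vector x."""
    delta = matmul(B, A)                      # low-rank update, d_out x d_in
    W_eff = [[w + alpha * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
    col = [[v] for v in x]                    # treat x as a column vector
    return [row[0] for row in matmul(W_eff, col)]

if __name__ == "__main__":
    W = [[1.0, 0.0], [0.0, 1.0]]              # frozen 2x2 weight (identity)
    A = [[1.0, 1.0]]                          # rank-1 factors: A is 1x2
    B = [[0.5], [0.5]]                        # B is 2x1
    print(lora_forward(W, A, B, [1.0, 2.0]))  # base output plus rank-1 shift
```

In real fine-tuning runs, libraries such as Hugging Face PEFT implement this per-layer and at scale; the sketch only shows why training A and B is far cheaper than training W.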
The time required depends on model size, dataset, and infrastructure. Fine-tuning smaller models can take a few hours, while larger models like Falcon-7B may require several days on high-performance GPUs.
A hands-on course that covers LLM architecture, fine-tuning methods, and real-world deployment, with practical demos using tools like Hugging Face and LangChain, is ideal for mastering LLMs.
More questions
Financial aid available