This course provides a comprehensive, hands-on journey into model adaptation, fine-tuning, and context engineering for large language models (LLMs). It focuses on how pretrained models can be efficiently customized, optimized, and deployed to solve real-world NLP problems across diverse domains.

What you'll learn
Apply transfer learning and parameter-efficient fine-tuning techniques (LoRA, adapters) to adapt pretrained LLMs for domain-specific tasks
Build end-to-end fine-tuning pipelines using Hugging Face Trainer APIs, including data preparation, hyperparameter tuning, and evaluation
Design and optimize LLM context using relevance selection, compression techniques, and scalable context engineering patterns
Optimize, deploy, monitor, and maintain fine-tuned LLMs using model compression, cloud inference, and continuous evaluation workflows
Skills you'll gain
Details to know

Add to your LinkedIn profile
January 2026
17 assignments
See how employees at top companies are mastering in-demand skills

Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 5 modules in this course
Explore how pretrained language models are adapted for new tasks using transfer learning techniques. Learn how parameter-efficient methods such as LoRA and adapters enable lightweight fine-tuning, and how domain-specific data improves model performance. By the end, you’ll understand how to customize large models efficiently while minimizing training cost and complexity.
What's included
13 videos, 5 readings, 4 assignments, 1 discussion prompt
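To make the parameter-efficient fine-tuning idea concrete, here is a minimal sketch using the Hugging Face peft library. The base checkpoint (facebook/opt-125m) and the LoRA hyperparameters are illustrative assumptions, not values prescribed by the course.

```python
# Minimal LoRA sketch, assuming the `transformers` and `peft` packages are installed.
# Model name and hyperparameters below are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # small model for illustration

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)    # wraps the model; only the LoRA weights train
model.print_trainable_parameters()        # typically well under 1% of total parameters
```

Because only the injected low-rank matrices receive gradients, the adapted model can be trained and stored at a fraction of the cost of full fine-tuning.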
Dive into the end-to-end workflows required to fine-tune language models effectively. Learn how to prepare and tokenize datasets, configure training pipelines using the Hugging Face Trainer API, and optimize hyperparameters for better results. By the end, you’ll be able to train, evaluate, and publish fine-tuned models with confidence.
What's included
10 videos, 4 readings, 4 assignments
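The sketch below illustrates the kind of Trainer workflow this module covers: load and tokenize a dataset, configure training arguments, and run training and evaluation. The checkpoint (distilbert-base-uncased), the imdb dataset, and the hyperparameters are stand-ins chosen only for illustration.

```python
# Minimal end-to-end Trainer sketch, assuming `transformers` and `datasets` are installed.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

checkpoint = "distilbert-base-uncased"   # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

raw = load_dataset("imdb")               # example dataset, not course-specific

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetune-demo",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small slice for a quick run
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
print(trainer.evaluate())
```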
Explore how context influences LLM behavior and performance. Learn the fundamentals of context engineering, manage token limits, apply context compression techniques, and design scalable context patterns. By the end, you’ll understand how to structure and optimize context for reliable and production-ready LLM applications.
What's included
15 videos, 4 readings, 4 assignments
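As an illustration of relevance selection under a token budget, the sketch below ranks candidate passages by a deliberately simple term-overlap score and packs the highest-ranked ones into a fixed budget. The select_context helper is hypothetical, and a real system would substitute a proper retrieval or embedding-based ranker; the tokenizer is used only to count tokens.

```python
# Minimal sketch of relevance-based context selection under a token budget.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # illustrative tokenizer for token counts

def select_context(query: str, passages: list[str], max_tokens: int = 512) -> str:
    """Rank passages by naive term overlap with the query, then pack the
    highest-ranked ones into the prompt until the token budget is exhausted."""
    q_terms = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q_terms & set(p.lower().split())),
                    reverse=True)
    selected, used = [], 0
    for passage in ranked:
        n = len(tokenizer.encode(passage))
        if used + n > max_tokens:
            continue                                 # skip passages that would overflow the budget
        selected.append(passage)
        used += n
    return "\n\n".join(selected)

context = select_context(
    "How does LoRA reduce the number of trainable parameters?",
    ["LoRA injects low-rank matrices into attention layers...",
     "The Trainer API handles the optimization loop...",
     "Quantization lowers the precision of model weights..."],
    max_tokens=256,
)
```

The same pattern extends to compression: once the budget is exceeded, lower-ranked passages can be summarized rather than dropped.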
Learn how to optimize fine-tuned models for efficient inference and real-world deployment. Explore model compression techniques such as quantization and knowledge distillation, scaling strategies in cloud environments, and continuous monitoring practices. By the end, you’ll know how to deploy, scale, and maintain LLMs while controlling cost and performance.
What's included
13 videos, 4 readings, 4 assignments
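One concrete example of model compression is post-training dynamic quantization, sketched below with PyTorch. The checkpoint is an illustrative public model, and production deployments may prefer other schemes (such as 4-bit loading or distillation), but the trade-off of precision for memory and latency is the same.

```python
# Minimal sketch of post-training dynamic quantization, assuming `torch` and
# `transformers` are installed; Linear layers are converted to int8 for CPU inference.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"   # illustrative fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model,
    {torch.nn.Linear},          # quantize only the Linear layers
    dtype=torch.qint8,          # store their weights as 8-bit integers
)

# Quick sanity check that the compressed model still produces sensible predictions.
inputs = tokenizer("The fine-tuned model still works after compression.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.softmax(dim=-1))
```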
Apply everything you’ve learned through a hands-on practice project focused on fine-tuning and adapting an LLM end to end. Reflect on key concepts, complete the final graded assessment, and identify next steps for advancing your skills. By the end, you’ll be prepared to apply model adaptation techniques in real-world AI systems.
What's included
1 video, 1 reading, 1 assignment, 1 discussion prompt
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Frequently asked questions
This course teaches how to fine-tune, adapt, optimize, and deploy large language models for real-world applications.
It helps you move beyond basic prompting and gain hands-on expertise in production-grade LLM adaptation.
It is designed for ML engineers, AI practitioners, NLP developers, and data scientists.
More questions
Financial aid available





