This course covers the development of natural language processing (NLP), starting with basic concepts and moving to modern transformer architectures. You will learn about attention mechanisms and their impact on language modeling, as well as the details of transformer models, including scaled dot product attention and multi-headed attention. The course includes practical exercises in transfer learning using pre-trained models such as BERT and GPT, with instruction on fine-tuning these models for specific NLP tasks in PyTorch. By the end, you will understand the theory behind current NLP models and gain practical experience in applying them to real-world problems.
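For a taste of the material, scaled dot product attention weights each value by the softmax-normalized similarity of its key to the query: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a minimal PyTorch sketch of that formula; the tensor shapes and names are illustrative assumptions, not course code.

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k) -- illustrative shapes, not course code
    d_k = q.size(-1)
    # Similarity scores, scaled by sqrt(d_k) to keep softmax gradients stable
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    weights = F.softmax(scores, dim=-1)   # attention distribution over keys
    return torch.matmul(weights, v)       # weighted sum of values

# Example with random queries/keys/values
q = k = v = torch.randn(2, 5, 64)
out = scaled_dot_product_attention(q, k, v)  # shape: (2, 5, 64)
```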

Introduction to Transformer Models for NLP: Unit 1


Instructor: Pearson
Access provided by New York State Department of Labor
What you'll learn
Understand the evolution of NLP architectures and the transformative impact of attention mechanisms.
Analyze the structure and mathematical foundations of transformer models, including scaled dot product and multi-headed attention (see the multi-head sketch after this list).
Apply transfer learning techniques using pre-trained language models such as BERT and GPT.
Gain practical experience with PyTorch to fine-tune NLP models for custom tasks.
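For the multi-headed attention objective above, PyTorch ships a ready-made layer. A brief usage sketch follows; the embedding size and head count are arbitrary choices for illustration, not values taken from the course.

```python
import torch
import torch.nn as nn

# 8 attention heads over a 512-dimensional embedding (illustrative sizes)
mha = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)

x = torch.randn(2, 10, 512)           # (batch, seq_len, embed_dim)
out, attn_weights = mha(x, x, x)      # self-attention: query = key = value = x
print(out.shape, attn_weights.shape)  # (2, 10, 512) and (2, 10, 10)
```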
Skills you'll gain
Details to know

Add to your LinkedIn profile
3 assignments
August 2025

Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There is 1 module in this course
This module explores the evolution of natural language processing (NLP) through the development and application of attention mechanisms and transformer architectures. Beginning with the history and foundational concepts of attention in language models, it delves into the transformative impact of transformers and their unique attention mechanisms. The module concludes with practical instruction on transfer learning, demonstrating how to fine-tune state-of-the-art pre-trained models like BERT and GPT using PyTorch to achieve advanced NLP results.
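As an illustration of that fine-tuning workflow, here is a minimal sketch using the Hugging Face transformers library with PyTorch; the model checkpoint, task, and hyperparameters are our own assumptions, not necessarily the course's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed setup: binary sentiment classification on top of pre-trained BERT
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy batch; a real task would iterate a DataLoader over a labeled dataset
texts = ["a great movie", "a boring movie"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # loss is computed internally
outputs.loss.backward()                  # one gradient step of fine-tuning
optimizer.step()
```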
What's included
14 videos, 3 assignments
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.