Deep learning has revolutionized the field of natural language processing and led to many state-of-the-art results. This course introduces students to neural network models and training algorithms frequently used in natural language processing. At the end of this course, learners will be able to explain and implement feedforward networks, recurrent neural networks, and transformers. They will also have an understanding of transfer learning and the inner workings of large language models.

Deep Learning for Natural Language Processing
Access provided by the New York State Department of Labor
What you'll learn
Define feedforward networks, recurrent neural networks, attention, and transformers.
Implement and train feedforward networks, recurrent neural networks, attention, and transformers.
Describe the idea behind transfer learning and frequently used transfer learning algorithms.
Design and implement your own neural network architectures for natural language processing tasks.
Skills you'll gain
Details to know

Add to your LinkedIn profile
17 assignments

There are 4 modules in this course
This first week introduces the fundamental concepts of feedforward and recurrent neural networks (RNNs), focusing on their architectures, mathematical foundations, and applications in natural language processing (NLP). We'll begin with an exploration of feedforward networks and their role in sentence embeddings and sentiment analysis. We then progress to RNNs, covering sequence modeling techniques such as LSTMs, GRUs, and bidirectional RNNs, along with their implementation in Python. Finally, you will examine training techniques, gaining hands-on experience in optimizing neural language models.
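The recurrent update at the heart of the sequence models covered this week can be sketched in a few lines of NumPy. This is an illustrative toy, not course material: the function name, weight shapes, and random inputs are all made up for the example, and a real LSTM or GRU adds gating on top of this basic step.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # One step of a vanilla RNN: combine the current input with the
    # previous hidden state, then squash through tanh.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
d_in, d_h = 4, 3                      # toy input and hidden sizes
W_xh = 0.1 * rng.normal(size=(d_in, d_h))
W_hh = 0.1 * rng.normal(size=(d_h, d_h))
b_h = np.zeros(d_h)

h = np.zeros(d_h)                     # initial hidden state
for x_t in rng.normal(size=(5, d_in)):  # a toy sequence of 5 token vectors
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)  # (3,)
```

The final `h` summarizes the whole sequence; LSTMs and GRUs replace `rnn_step` with gated variants that make long-range information easier to retain.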
What's included
15 videos, 8 readings, 5 assignments, 1 programming assignment, 1 ungraded lab
This week we'll explore sequence-to-sequence models in natural language processing (NLP), beginning with recurrent neural network (RNN)-based architectures and the introduction of attention mechanisms for improved alignment in tasks like machine translation. The module also covers best practices for training neural networks, including regularization, optimization strategies, and efficient model training. By the end of the week, you will have gained practical experience in implementing and training sequence-to-sequence models.
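The attention idea this module introduces can be sketched as scaled dot-product attention in NumPy. All shapes and inputs here are invented for illustration; in a translation model the queries would come from the decoder and the keys and values from the encoder.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query attends to all keys,
    # and the output is a weighted average of the values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 8))   # 2 decoder-side queries
K = rng.normal(size=(5, 8))   # 5 encoder-side keys
V = rng.normal(size=(5, 8))   # 5 encoder-side values
out, w = attention(Q, K, V)
print(out.shape)              # (2, 8); each row of w sums to 1
```

Each row of `w` is an alignment distribution over source positions, which is what lets the decoder focus on different parts of the input at each step.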
What's included
10 videos, 1 reading, 4 assignments, 1 programming assignment
This week explores transfer learning techniques in NLP, focusing on pretraining, finetuning, and multilingual models. You will first examine the role of pretrained language models like GPT, GPT-2, and BERT, and their challenges. We then explore multitask training and data augmentation, highlighting strategies like parameter sharing and loss weighting to improve model generalization across tasks. Finally, you will dive into crosslingual transfer learning, exploring methods like translate-train vs. translate-test, as well as zero-shot, one-shot, and few-shot learning for multilingual NLP.
What's included
17 videos, 4 assignments, 1 programming assignment
This final week introduces large language models (LLMs) and how they can be effectively used through techniques like prompt engineering, in-context learning, and parameter-efficient finetuning. You will explore language-and-vision models, understanding how multimodal architectures extend beyond text to integrate visual and other data modalities. We will also examine non-functional properties of LLMs, including challenges such as hallucinations, fairness, resource efficiency, privacy, and interpretability.
What's included
12 videos, 4 assignments, 1 programming assignment
Earn a degree
This course is part of degree programs offered by the University of Colorado Boulder. If you are admitted and enroll, your completed coursework may count toward your degree and your progress can transfer with you.
Instructors