This course covers advanced deep learning topics, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and modern language models. You will learn techniques for image classification, time series prediction, and natural language processing. The course includes building and optimizing CNNs for image recognition, using architectures such as AlexNet, VGGNet, GoogLeNet, and ResNet, and working with pre-trained models. You will also work with RNNs and LSTMs for tasks like forecasting and text autocompletion. The curriculum covers neural language models, word embeddings (such as Word2vec and wordpieces), encoder-decoder architectures, attention mechanisms, and Transformers for machine translation. Hands-on projects using TensorFlow and PyTorch will help you develop practical skills for solving real-world problems in computer vision and language processing.
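The description mentions word embeddings such as Word2vec, which represent words as dense vectors whose geometry encodes similarity. A minimal sketch of comparing embeddings with cosine similarity; the vocabulary and vector values below are invented for illustration, not taken from the course:

```python
import math

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional "embeddings" (hypothetical values for illustration only).
embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.2],
    "queen": [0.7, 0.7, 0.2, 0.2],
    "apple": [0.1, 0.0, 0.9, 0.8],
}

# Related words should score near 1.0; unrelated words much lower.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

In a trained Word2vec model the vectors would come from the learned embedding matrix rather than being hand-written, but the similarity computation is the same.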

Learning Deep Learning: Unit 2
This course is part of the Learning Deep Learning Specialization


Instructor: Pearson
Access provided by the New York State Department of Labor
What you'll learn
Build and optimize convolutional neural networks for advanced image classification tasks using TensorFlow and PyTorch.
Apply recurrent neural networks and LSTMs to sequential data problems, including time series forecasting and text autocompletion.
Develop neural language models and implement word embeddings for robust natural language processing.
Design and implement encoder-decoder architectures and Transformer models for machine translation and sequence-to-sequence tasks.
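The first outcome above centers on convolutional layers, which slide a small kernel over the input and compute dot products with each patch. A pure-Python sketch of a single-channel 2D convolution (valid padding, stride 1); the image and kernel values are toy examples, and real coursework would use TensorFlow or PyTorch layers instead:

```python
def conv2d(image, kernel):
    """Single-channel 2D convolution with 'valid' padding and stride 1."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Dot product of the kernel with the image patch at (i, j).
            s = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(s)
        out.append(row)
    return out

# A 4x4 toy image with a vertical edge, and a 2x2 edge-detecting kernel.
image = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]
kernel = [
    [1, -1],
    [1, -1],
]
print(conv2d(image, kernel))  # responds strongly where the edge is
```

The output is largest in the column where the image transitions from 1 to 0, which is exactly the feature this kernel detects; stacking many such learned kernels is what a CNN layer does.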
Skills you'll gain
Details to know

Add to your LinkedIn profile
4 assignments
August 2025

Build expertise in a specific domain
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills through hands-on projects
- Earn a shareable career certificate

There is 1 module in this course
This module provides a comprehensive introduction to advanced deep learning techniques for processing images and natural language. It covers convolutional neural networks for image classification, including architectures like AlexNet, VGGNet, GoogLeNet, and ResNet. The module then explores recurrent neural networks and LSTMs for time series and sequential data, followed by neural language models and word embeddings. Finally, it introduces encoder-decoder architectures, attention mechanisms, and Transformer models for neural machine translation, with practical implementations in TensorFlow and PyTorch throughout.
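The attention mechanisms mentioned above are typically the scaled dot-product form used in Transformers: softmax(QK^T / sqrt(d_k)) V. A pure-Python sketch with toy 2-token matrices (the values are invented for illustration; the course's implementations would use TensorFlow or PyTorch tensors):

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating, for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [
            sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
            for k in K
        ]
        weights = softmax(scores)
        # Each output row is a weighted average of the value vectors.
        out.append([
            sum(w * v[j] for w, v in zip(weights, V))
            for j in range(len(V[0]))
        ])
    return out

# Toy 2-token, 2-dimensional example (hypothetical values).
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Because the softmax weights sum to 1, each output row is a convex combination of the value rows: the first query attends mostly to the first value, the second query mostly to the second.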
What's included
44 videos, 4 assignments
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.