Northeastern University
Applied Natural Language Processing in Engineering Part 2

Instructor: Ramin Mohammadi

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
3 weeks to complete
At 10 hours a week
Flexible schedule
Learn at your own pace

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

October 2025

Assessments

21 assignments

Taught in English


There are 7 modules in this course

This module delves into the critical preprocessing step of tokenization in NLP, where text is segmented into smaller units called tokens. You will explore various tokenization techniques, including character-based, word-level, Byte Pair Encoding (BPE), WordPiece, and Unigram tokenization. Then you’ll examine the importance of normalization and pre-tokenization processes to ensure text uniformity and improve tokenization accuracy. Through practical examples and hands-on exercises, you will learn to handle out-of-vocabulary (OOV) issues, manage large vocabularies efficiently, and understand the computational complexities involved. By the end of the module, you will be equipped with the knowledge to implement and optimize tokenization methods for diverse NLP applications.

What's included

1 video • 13 readings • 2 assignments • 1 app item
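
To make the BPE idea concrete, here is a minimal, self-contained Python sketch of how merge rules are learned from a toy corpus; the corpus, the number of merges, and the end-of-word marker are illustrative assumptions, and production tokenizers add byte-level handling, special tokens, and far more efficient counting.

    # Minimal sketch of Byte Pair Encoding (BPE) merge learning on a toy corpus.
    from collections import Counter

    def get_pair_counts(vocab):
        """Count adjacent symbol pairs across all words (vocab maps symbol tuples to frequency)."""
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        return pairs

    def merge_pair(pair, vocab):
        """Replace every occurrence of `pair` with its concatenation."""
        merged = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        return merged

    corpus = "low lower lowest new newer".split()     # illustrative toy corpus
    # Start from character-level symbols, with an end-of-word marker.
    vocab = Counter(tuple(word) + ("</w>",) for word in corpus)

    for step in range(5):                             # learn 5 merges
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)              # most frequent adjacent pair
        vocab = merge_pair(best, vocab)
        print(f"merge {step + 1}: {best}")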

In this module, we will explore foundational models in natural language processing (NLP), focusing on language models, feedforward neural networks (FFNNs), and Hidden Markov Models (HMMs). Language models are crucial in predicting and generating sequences of text by assigning probabilities to words or phrases within a sentence, allowing for applications such as autocomplete and text generation. FFNNs, though limited to fixed-size contexts, are foundational neural architectures used in language modeling, learning complex word relationships through non-linear transformations. In contrast, HMMs model sequences based on hidden states, which influence observable outcomes. They are particularly useful in tasks like part-of-speech tagging and speech recognition. As the module progresses, we will also examine modern advancements like neural transition-based parsing and the evolution of language models into sophisticated architectures such as transformers and large-scale pre-trained models like BERT and GPT. This module provides a comprehensive view of how language modeling has developed from statistical methods to cutting-edge neural architectures.

What's included

2 videos • 19 readings • 4 assignments
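
As a concrete illustration of the statistical end of this spectrum, the following minimal Python sketch builds a count-based bigram language model with add-alpha smoothing on a toy corpus; the sentences and smoothing constant are illustrative assumptions, and the neural and HMM-based models in the module replace these raw counts with learned parameters.

    # Minimal count-based bigram language model with add-alpha smoothing.
    from collections import Counter
    import math

    sentences = [["<s>", "the", "cat", "sat", "</s>"],
                 ["<s>", "the", "dog", "sat", "</s>"],
                 ["<s>", "the", "cat", "ran", "</s>"]]

    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        unigrams.update(sent)                       # count single tokens
        bigrams.update(zip(sent, sent[1:]))         # count adjacent token pairs

    def bigram_prob(w_prev, w, alpha=1.0):
        """P(w | w_prev) with add-alpha smoothing to avoid zero probabilities."""
        V = len(unigrams)
        return (bigrams[(w_prev, w)] + alpha) / (unigrams[w_prev] + alpha * V)

    def sentence_log_prob(sent):
        return sum(math.log(bigram_prob(a, b)) for a, b in zip(sent, sent[1:]))

    print(sentence_log_prob(["<s>", "the", "cat", "sat", "</s>"]))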

In this module, we will explore Recurrent Neural Networks (RNNs), a fundamental architecture in deep learning designed for sequential data. RNNs are particularly well-suited for tasks where the order of inputs matters, such as time series prediction, language modeling, and speech recognition. Unlike traditional neural networks, RNNs have connections that allow them to “remember” information from previous steps by sharing parameters across time steps. This ability enables them to capture temporal dependencies in data, making them powerful for sequence-based tasks. However, RNNs come with challenges like vanishing and exploding gradients, which affect their ability to learn long-term dependencies. Throughout the module, you will explore different RNN variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs), which address these challenges. You will also delve into advanced training techniques and applications of RNNs in real-world NLP and time series problems.

What's included

2 videos • 22 readings • 2 assignments • 1 app item
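
For a concrete starting point, here is a minimal PyTorch sketch of an LSTM that maps a batch of integer-encoded token sequences to per-token scores; the vocabulary size, dimensions, and class count are illustrative assumptions rather than settings used in the course.

    # Minimal PyTorch LSTM over integer-encoded token sequences.
    import torch
    import torch.nn as nn

    class LSTMTagger(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_classes=10):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, num_classes)

        def forward(self, token_ids):
            x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
            h, _ = self.lstm(x)              # (batch, seq_len, hidden_dim)
            return self.out(h)               # per-token class scores

    model = LSTMTagger()
    dummy = torch.randint(0, 1000, (2, 7))   # batch of 2 sequences, length 7
    print(model(dummy).shape)                # torch.Size([2, 7, 10])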

This module introduces students to advanced Natural Language Processing (NLP) techniques, focusing on foundational tasks such as Part-of-Speech (PoS) tagging, sentiment analysis, and sequence modeling with recurrent neural networks (RNNs). Students will examine how PoS tagging helps in understanding grammatical structures, enabling applications such as machine translation and named entity recognition (NER). The module delves into sentiment analysis, highlighting various approaches from traditional machine learning models (e.g., Naive Bayes) to advanced deep learning techniques (e.g., bidirectional RNNs and transformers). Students will learn to implement both forward and backward contextual understanding using bidirectional RNNs, which improves accuracy in tasks where sequence order impacts meaning. By the end of this module, students will gain hands-on experience building NLP models for real-world applications, equipping them to handle sequential data and capture complex dependencies in text analysis.

What's included

1 video • 15 readings • 4 assignments
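
As a small illustration of the traditional machine-learning baseline mentioned above, the following scikit-learn sketch trains a bag-of-words Naive Bayes sentiment classifier on a tiny made-up dataset; the texts and labels are illustrative assumptions, and the module's bidirectional RNN and transformer approaches replace this pipeline with learned sequence models.

    # Minimal bag-of-words Naive Bayes sentiment classifier.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["great movie", "loved the plot", "terrible acting", "boring and slow"]
    labels = [1, 1, 0, 0]                 # 1 = positive, 0 = negative

    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(texts, labels)
    print(clf.predict(["great plot", "slow and terrible"]))   # e.g. [1 0]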

This module introduces you to core tasks and advanced techniques in Natural Language Processing (NLP), with a focus on structured prediction, machine translation, and sequence labeling. You will explore foundational topics such as Named Entity Recognition (NER), Part-of-Speech (PoS) tagging, and sentiment analysis, and apply neural network architectures such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Conditional Random Fields (CRFs). The module will cover key concepts in sequence modeling, such as bidirectional and multi-layer RNNs, which capture both past and future context to enhance the accuracy of tasks like NER and PoS tagging. Additionally, you will delve into Neural Machine Translation (NMT), examining encoder-decoder models with attention mechanisms to address challenges in translating long sequences. Practical implementations will involve integrating these models into real-world applications, focusing on handling complex language structures, rare words, and sequential dependencies. By the end of this module, you will be proficient in building and optimizing deep learning models for a variety of NLP tasks.

What's included

3 videos • 18 readings • 4 assignments
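
To illustrate the bidirectional, multi-layer sequence labeling described above, here is a minimal PyTorch sketch of a BiLSTM tagger that produces per-token tag scores suitable for NER- or PoS-style tasks; the dimensions and tag count are illustrative assumptions, and a CRF layer or attention-based decoder, as covered in the module, could sit on top of the same representations.

    # Minimal bidirectional, multi-layer LSTM for sequence labeling.
    import torch
    import torch.nn as nn

    vocab_size, embed_dim, hidden_dim, num_tags = 1000, 64, 128, 9   # illustrative sizes

    embed = nn.Embedding(vocab_size, embed_dim)
    bilstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                     bidirectional=True, batch_first=True)
    classifier = nn.Linear(2 * hidden_dim, num_tags)   # forward + backward states

    tokens = torch.randint(0, vocab_size, (4, 12))     # batch of 4 sentences, length 12
    states, _ = bilstm(embed(tokens))                  # (4, 12, 2 * hidden_dim)
    tag_scores = classifier(states)                    # per-token tag scores
    print(tag_scores.shape)                            # torch.Size([4, 12, 9])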

In this module we’ll focus on attention mechanisms and explore the evolution and significance of attention in neural networks, starting with its introduction in neural machine translation. We’ll cover the challenges of traditional sequence-to-sequence models and how attention mechanisms, particularly in Transformer architectures, address issues like long-range dependencies and parallelization, enhancing the model's ability to focus dynamically on relevant parts of the input sequence. Then we’ll turn to Transformers and delve into the revolutionary architecture introduced by Vaswani et al. in 2017, which has significantly advanced natural language processing. We’ll cover the core components of Transformers, including self-attention, multi-head attention, and positional encoding, and explain how these innovations address the limitations of traditional sequence models and enable efficient parallel processing and handling of long-range dependencies in text.

What's included

2 videos • 25 readings • 3 assignments • 2 app items
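
The core operation behind both attention in NMT and the Transformer is scaled dot-product attention; the following minimal PyTorch sketch shows it for already-projected query, key, and value tensors. The tensor shapes are illustrative assumptions, and full Transformer layers add multi-head projections, masking, and positional encodings.

    # Minimal scaled dot-product attention (Vaswani et al., 2017).
    import math
    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(q, k, v):
        # q, k, v: (batch, seq_len, d_k)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # query-key similarities
        weights = F.softmax(scores, dim=-1)                       # attention distribution over positions
        return weights @ v, weights                               # weighted sum of values

    q = k = v = torch.randn(2, 5, 16)      # self-attention: q, k, v come from the same sequence
    out, attn = scaled_dot_product_attention(q, k, v)
    print(out.shape, attn.shape)           # torch.Size([2, 5, 16]) torch.Size([2, 5, 5])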

In this module, we’ll home in on pre-training and explore its foundational role in modern NLP models, highlighting how models are initially trained on large, general datasets to learn language structures and semantics. This pre-training phase, often involving tasks like masked language modeling, equips models with broad linguistic knowledge; the pre-trained model can then be fine-tuned on specific tasks, improving performance and reducing the need for extensive task-specific data.

What's included

1 video • 19 readings • 2 assignments
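
As a small illustration of what a pre-trained masked language model has learned, the following sketch queries a fill-mask pipeline from the Hugging Face transformers library; the bert-base-uncased checkpoint and the example sentence are assumptions for illustration, not materials from the course.

    # Query a pre-trained masked language model via the Hugging Face fill-mask pipeline.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")   # illustrative checkpoint
    for pred in fill_mask("Pre-training teaches the model general [MASK] structure."):
        print(pred["token_str"], round(pred["score"], 3))          # top predictions and their scores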

Instructor

Ramin Mohammadi
Northeastern University
4 Courses • 635 learners

Offered by

Northeastern University

