Northeastern University
NLP in Engineering: Concepts & Real-World Applications

Instructor: Ramin Mohammadi

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Complete in 1 week
At 10 hours a week
Flexible schedule
Learn at your own pace

Details to know

Shareable certificate

Add to your LinkedIn profile

Assessments

9 assignments

Taught in English


There are 4 modules in this course

This module provides an in-depth exploration of Natural Language Processing (NLP), a crucial area of artificial intelligence that enables computers to understand, interpret, and generate human language. By combining computational linguistics with machine learning, NLP powers a wide range of technologies, from chatbots and sentiment analysis to machine translation and speech recognition. The module introduces fundamental NLP tasks such as text classification, Named Entity Recognition (NER), and neural machine translation, showcasing how these applications shape real-world interactions with AI. Additionally, it highlights the complexities of teaching language to machines, including handling ambiguity, grammar, and cultural nuances. Throughout the module, you will gain hands-on experience with key techniques like word representation and distributional semantics, preparing you to solve language-related challenges in modern AI systems.
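To make one of these tasks concrete, here is a minimal sketch of text classification, scored with a bag-of-words Naive Bayes-style model. The tiny training set and its labels are invented for illustration; it is not course material.

```python
import math
from collections import Counter

# Invented toy training data: (document, label) pairs.
train_docs = [
    ("great movie loved it", "pos"),
    ("wonderful acting great plot", "pos"),
    ("terrible boring movie", "neg"),
    ("awful plot hated it", "neg"),
]

# Count word frequencies per class (bag-of-words representation).
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train_docs:
    counts[label].update(text.split())

vocab_size = len({w for ctr in counts.values() for w in ctr})

def classify(text):
    """Pick the class whose word distribution best fits the document,
    with add-one smoothing so unseen words don't zero out a class."""
    scores = {}
    for label, ctr in counts.items():
        total = sum(ctr.values())
        scores[label] = sum(
            math.log((ctr[w] + 1) / (total + vocab_size))
            for w in text.split()
        )
    return max(scores, key=scores.get)

print(classify("loved the acting"))  # → pos
```

A real classifier would be trained on thousands of documents and use richer features, but the core idea — represent text as word counts, score each class, pick the best — is the same.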

What's included

2 videos, 17 readings, 2 assignments, 1 app item, 2 discussion prompts

This module focuses on optimization techniques critical for machine learning, particularly in natural language processing (NLP) tasks. It introduces Gradient Descent (GD), a fundamental algorithm that minimizes cost functions by iteratively adjusting model parameters. You'll explore variants like Stochastic Gradient Descent (SGD) and Mini-Batch Gradient Descent and learn how they handle large datasets efficiently. Advanced methods such as Momentum and Adam are covered to show how they speed up convergence by smoothing updates and adapting learning rates. The module also covers second-order techniques like Newton's Method and Quasi-Newton methods (e.g., BFGS), which leverage curvature information for more direct optimization, though at higher computational cost. Overall, this module emphasizes balancing efficiency, accuracy, and computational feasibility when optimizing machine learning models.
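The core update rule the module builds on can be sketched in a few lines: w ← w − η∇f(w). The cost function f(w) = (w − 3)² and the hyperparameters below are illustrative choices, not taken from the course.

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Vanilla gradient descent: repeatedly step against the gradient."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# f(w) = (w - 3)^2 has gradient f'(w) = 2(w - 3) and its minimum at w = 3.
w_opt = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_opt, 6))  # → 3.0
```

SGD and Mini-Batch GD use the same update but estimate the gradient from one example (or a small batch) at a time, trading a noisier step for far cheaper iterations on large datasets.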

What's included

2 videos, 15 readings, 2 assignments, 1 discussion prompt

This module explores Named Entity Recognition (NER), a core task in Natural Language Processing (NLP) that identifies and classifies entities like people, locations, and organizations in text. We’ll begin by examining how logistic regression can be used to model NER as a binary classification problem, though this approach faces limitations with complexity and context capture. We’ll then transition to more advanced techniques, such as neural networks, which excel at handling the complex patterns and large-scale data that traditional models struggle with. Neural networks' ability to learn hierarchical features makes them ideal for NER tasks, as they can capture contextual information more effectively than simpler models. Throughout the module, we compare these methods and highlight how deep learning approaches such as Recurrent Neural Networks (RNNs) and transformers like BERT improve NER accuracy and scalability.
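The first approach described above — NER framed as binary classification ("is this token an entity?") with logistic regression — can be sketched as follows. The features and the four training examples are invented for illustration; a real tagger would use far richer context features.

```python
import math

def features(token, prev_token):
    """A tiny hand-crafted feature vector for one token."""
    return [
        1.0,                                           # bias term
        1.0 if token[0].isupper() else 0.0,            # capitalized?
        1.0 if prev_token in {"Mr.", "Dr."} else 0.0,  # honorific before?
    ]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, token, prev_token):
    x = features(token, prev_token)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def train(data, lr=0.5, epochs=200):
    """Stochastic gradient descent on the logistic log-loss."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (token, prev), y in data:
            x = features(token, prev)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            # Gradient of the log-loss for one example is (p - y) * x.
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    return w

data = [
    (("Smith", "Mr."), 1), (("Paris", "in"), 1),  # entity tokens
    (("runs", "he"), 0), (("the", "of"), 0),      # non-entity tokens
]
w = train(data)
print(predict(w, "Johnson", "Dr.") > 0.5)  # → True
```

The limitation the module points out is visible here: every signal must be hand-engineered as a feature, which is exactly what neural approaches like RNNs and BERT learn automatically from context.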

What's included

1 video, 12 readings, 2 assignments, 1 app item, 1 discussion prompt

The Word2Vec and GloVe models are popular word embedding techniques in Natural Language Processing (NLP), each offering unique advantages. Word2Vec, developed at Google, operates via two key models: Continuous Bag of Words (CBOW) and Skip-gram, which predict a word from its context or vice versa. GloVe, created at Stanford, combines count-based and predictive approaches by leveraging word co-occurrence matrices to learn word vectors. Both models represent words in a high-dimensional vector space and capture semantic relationships. Word2Vec focuses on local contexts, learning efficiently from large datasets, while GloVe emphasizes global co-occurrence patterns across the entire corpus, revealing deeper word associations. These embeddings enable tasks like analogy solving, semantic similarity, and other linguistic computations, making them central to modern NLP applications.
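The analogy arithmetic these embeddings support (king − man + woman ≈ queen) can be illustrated with toy vectors. The 3-dimensional vectors below are hand-crafted for this sketch; real Word2Vec or GloVe vectors are learned and typically have hundreds of dimensions.

```python
import math

# Hand-crafted toy embeddings (not learned vectors).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.0, 0.5, 0.0],  # distractor word
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Vector arithmetic: king - man + woman.
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# Nearest neighbor by cosine similarity, excluding the query words.
best = max((word for word in vectors if word not in {"king", "man", "woman"}),
           key=lambda word: cosine(vectors[word], target))
print(best)  # → queen
```

Excluding the query words from the nearest-neighbor search is standard practice, since the result vector usually stays closest to the words it was built from.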

What's included

3 videos, 26 readings, 3 assignments, 1 app item, 1 discussion prompt

Instructor

Ramin Mohammadi
Northeastern University
2 courses, 474 learners

Offered by

Explore more from Machine Learning
