This course provides an overview of key Natural Language Processing (NLP) techniques, their underlying principles, and their applications in engineering. The focus is on the practical implementation of NLP methods such as word embeddings, neural networks, attention mechanisms, and advanced deep learning models to solve real-world engineering problems.

NLP in Engineering: Concepts & Real-World Applications

Instructor: Ramin Mohammadi
Access provided by the New York State Department of Labor
Skills you'll gain
- Artificial Intelligence
- Recurrent Neural Networks (RNNs)
- Embeddings
- Machine Learning Algorithms
- Deep Learning
- Machine Learning
- Artificial Intelligence and Machine Learning (AI/ML)
- Responsible AI
- Supervised Learning
- Data Preprocessing
- Artificial Neural Networks
- Model Evaluation
- Natural Language Processing
- Data Ethics
Tools you'll learn
Details to know

Add to your LinkedIn profile
9 assignments

There are 4 modules in this course
This module provides an in-depth exploration of Natural Language Processing (NLP), a crucial area of artificial intelligence that enables computers to understand, interpret, and generate human language. By combining computational linguistics with machine learning, NLP is applied in various technologies, from chatbots and sentiment analysis to machine translation and speech recognition. The module introduces fundamental NLP tasks such as text classification, Named Entity Recognition (NER), and neural machine translation, showcasing how these applications shape real-world interactions with AI. Additionally, it highlights the complexities of teaching language to machines, including handling ambiguity, grammar, and cultural nuances. Through the course, you will gain hands-on experience and knowledge of key techniques like word representation and distributional semantics, preparing you to solve language-related challenges in modern AI systems.
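The contrast between discrete word symbols and distributional word vectors described above can be sketched in a few lines. The three-word vocabulary and the dense vectors below are toy values chosen for illustration, not trained embeddings.

```python
# Sketch: discrete symbols vs. dense word vectors (toy values, not trained).
import numpy as np

vocab = ["motel", "hotel", "cat"]

# One-hot ("discrete symbol") encoding: every pair of vectors is orthogonal,
# so the similarity between related words like "motel" and "hotel" is invisible.
one_hot = np.eye(len(vocab))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(one_hot[0], one_hot[1]))  # 0.0 -- "motel" vs "hotel" look unrelated

# Dense (distributional) vectors place words with similar contexts close together.
dense = {
    "motel": np.array([0.9, 0.1]),
    "hotel": np.array([0.85, 0.15]),
    "cat":   np.array([0.1, 0.9]),
}
print(round(cosine(dense["motel"], dense["hotel"]), 3))  # close to 1: similar
print(round(cosine(dense["motel"], dense["cat"]), 3))    # much smaller: dissimilar
```

With one-hot vectors, every similarity is exactly zero; dense vectors let cosine similarity recover the semantic neighborhood of a word, which is the motivation for the embedding methods covered later in the course.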
What's included
2 videos, 17 readings, 2 assignments, 1 app item, 2 discussion prompts
2 videos • Total 2 minutes
- Natural Language Processing (NLP) • 1 minute
- Representing the Meaning of a Word • 1 minute
17 readings • Total 51 minutes
- Course Overview • 1 minute
- Syllabus - NLP in Engineering: Concepts & Real-World Applications • 10 minutes
- Academic Integrity • 1 minute
- Introduction to NLP • 5 minutes
- Example: Chatbots • 2 minutes
- Example: Email Filtering • 2 minutes
- Example: Sentiment Analysis • 3 minutes
- Example: GPT-3 • 3 minutes
- Example: ChatGPT Capabilities • 5 minutes
- Natural Language Processing • 1 minute
- Funny Takes on Language Evolution • 2 minutes
- How Do We Represent the Meaning of a Word? • 2 minutes
- How Do We Have Usable Meaning in a Computer? • 4 minutes
- Words as Discrete Symbols • 5 minutes
- Representing Words by Their Context • 2 minutes
- Word Vectors • 2 minutes
- Final Thoughts on NLP • 1 minute
2 assignments • Total 36 minutes
- Check Your Knowledge: What is NLP? • 18 minutes
- Check Your Knowledge: Motivation • 18 minutes
1 app item • Total 15 minutes
- Challenges of Teaching Language to AI • 15 minutes
2 discussion prompts • Total 70 minutes
- Meet Your Fellow Learners • 10 minutes
- Challenges and Limitations of NLP • 60 minutes
This module focuses on optimization techniques critical for machine learning, particularly in natural language processing (NLP) tasks. It introduces Gradient Descent (GD), a fundamental algorithm that minimizes cost functions by iteratively adjusting model parameters. You'll explore variants like Stochastic Gradient Descent (SGD) and Mini-Batch Gradient Descent and learn how they handle large datasets efficiently. Advanced methods such as Momentum and Adam are covered to show how they enhance convergence speed by smoothing updates and adapting learning rates. The module also covers second-order techniques like Newton's Method and Quasi-Newton methods (e.g., BFGS), which leverage curvature information for more direct optimization, although they come with higher computational costs. Overall, this module emphasizes balancing efficiency, accuracy, and computational feasibility in optimizing machine learning models.
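The update rules described above can be sketched on a one-dimensional quadratic cost. The cost function f(w) = (w - 3)^2, learning rate, and momentum coefficient below are illustrative choices, not the module's exact settings.

```python
# Sketch: plain gradient descent vs. momentum on f(w) = (w - 3)^2,
# whose gradient is 2 * (w - 3). The minimum is at w = 3.

def grad(w):
    return 2.0 * (w - 3.0)

lr = 0.1  # illustrative learning rate

# Plain gradient descent: w <- w - lr * grad(w)
w_gd = 0.0
for _ in range(100):
    w_gd -= lr * grad(w_gd)

# Momentum: keep a running velocity that smooths successive updates:
#   v <- beta * v - lr * grad(w);  w <- w + v
w_mom, v = 0.0, 0.0
beta = 0.9  # illustrative momentum coefficient
for _ in range(100):
    v = beta * v - lr * grad(w_mom)
    w_mom += v

print(round(w_gd, 3), round(w_mom, 3))  # both approach the minimum at w = 3
```

On this smooth convex cost both variants converge; momentum's advantage shows up on ill-conditioned or noisy objectives, where the velocity term damps oscillations, which is the behavior the module's readings on Momentum, RMSProp, and Adam build on.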
What's included
2 videos, 15 readings, 2 assignments, 1 discussion prompt
2 videos • Total 8 minutes
- Machine Learning and NLP • 4 minutes
- Optimization Techniques • 4 minutes
15 readings • Total 80 minutes
- Machine Learning • 2 minutes
- Variations of Gradient Descent • 2 minutes
- Types of ML in NLP • 6 minutes
- What is a Model in NLP and How Does it Learn? • 6 minutes
- Understanding Cost Functions • 2 minutes
- Minimizing the Cost Function in NLP • 10 minutes
- Why Optimization Techniques Matter • 1 minute
- Why SGD Works • 10 minutes
- Jacobian Matrix & Hessian Matrix • 5 minutes
- Momentum • 10 minutes
- Newton's Methods • 5 minutes
- Quasi-Newton Methods • 5 minutes
- Root Mean Square Propagation (RMSProp) • 5 minutes
- Adaptive Moment Estimation (Adam) • 10 minutes
- Overall Challenges of Second-Order Optimization Techniques • 1 minute
2 assignments • Total 36 minutes
- Check Your Knowledge: ML in NLP • 18 minutes
- Check Your Knowledge: Optimization Techniques • 18 minutes
1 discussion prompt • Total 60 minutes
- First- vs. Second-Order Optimization • 60 minutes
This module explores Named Entity Recognition (NER), a core task in Natural Language Processing (NLP) that identifies and classifies entities like people, locations, and organizations in text. We’ll begin by examining how logistic regression can be used to model NER as a binary classification problem, though this approach faces limitations with complexity and context capture. We’ll then transition to more advanced techniques, such as neural networks, which excel at handling the complex patterns and large-scale data that traditional models struggle with. Neural networks' ability to learn hierarchical features makes them ideal for NER tasks, as they can capture contextual information more effectively than simpler models. Throughout the module, we compare these methods and highlight how deep learning approaches such as Recurrent Neural Networks (RNNs) and transformers like BERT improve NER accuracy and scalability.
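Framing NER as a binary classification problem with logistic regression, as described above, can be sketched in a few lines. The two features (a capitalization flag and a preceding-title flag) and the fixed weights are hypothetical stand-ins for what a trained model would learn from annotated data.

```python
# Sketch: NER as binary classification with logistic regression.
# Features and weights are hypothetical toy values, not a trained model.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# In practice, w and b are learned by minimizing cross-entropy loss with
# (stochastic) gradient descent; here they are fixed for illustration.
w = [2.0, 1.5]   # weights for [is_capitalized, prev_word_is_title]
b = -1.0

def predict_is_entity(features):
    # Linear score, then squash to a probability with the sigmoid.
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return sigmoid(z)  # probability the token is part of a named entity

print(round(predict_is_entity([1.0, 1.0]), 3))  # capitalized after "Mr."-style title: high
print(round(predict_is_entity([0.0, 0.0]), 3))  # lowercase, no title before: low
```

The limitation the module points out is visible here: the model sees only a fixed, hand-crafted feature window, which is exactly what neural approaches such as RNNs and BERT replace with learned, contextual representations.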
What's included
1 video, 12 readings, 2 assignments, 1 app item, 1 discussion prompt
1 video • Total 4 minutes
- Neural Networks Definitions • 4 minutes
12 readings • Total 85 minutes
- Named Entity Recognition (NER) • 5 minutes
- NER as a Binary Regression Problem • 5 minutes
- Neural Network • 5 minutes
- Neural Network Structure • 5 minutes
- How Does a Neural Network Learn? • 10 minutes
- Mathematical Representation • 20 minutes
- Steps in Back Propagation Algorithm • 5 minutes
- Stochastic Gradient • 5 minutes
- Classification Tasks • 5 minutes
- Sequence-to-Sequence Tasks • 5 minutes
- Sequence Labeling Tasks • 5 minutes
- Regression Tasks & Divergence Measures • 10 minutes
2 assignments • Total 36 minutes
- Check Your Knowledge: NER & Neural Networks • 18 minutes
- Check Your Knowledge: Cost Functions • 18 minutes
1 app item • Total 10 minutes
- Some Common Activation Functions • 10 minutes
1 discussion prompt • Total 10 minutes
- Exploring the Evolution of Named Entity Recognition (NER) and the Role of Neural Networks • 10 minutes
The Word2Vec and GloVe models are popular word-embedding techniques in Natural Language Processing (NLP), each offering unique advantages. Word2Vec, developed by Google, operates via two key models: Continuous Bag of Words (CBOW) and Skip-gram, which predict a word from its context or the context from a word, respectively. The GloVe model, created at Stanford, combines count-based and predictive approaches by leveraging word co-occurrence matrices to learn word vectors. Both models represent words in a high-dimensional vector space and capture semantic relationships. Word2Vec focuses on local contexts, learning efficiently from large datasets, while GloVe emphasizes global word co-occurrence patterns across the entire corpus, revealing deeper word associations. These embeddings enable tasks like analogy solving, semantic similarity, and other linguistic computations, making them central to modern NLP applications.
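One update of the skip-gram model with negative sampling, a technique this module derives in detail, can be sketched as follows. The vocabulary size, embedding dimension, word indices, and sampled negatives below are toy choices for illustration.

```python
# Sketch: one skip-gram-with-negative-sampling SGD update (toy setup).
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 5, 4
W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # center-word vectors v
W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # context-word vectors u

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgns_step(center, context, negatives, lr=0.1):
    """One SGD step on the negative-sampling objective:
    maximize log sigma(u_o . v_c) + sum_k log sigma(-u_k . v_c)."""
    v = W_in[center]
    # Positive pair: push the score u_o . v_c up (gradient of -log sigma).
    g = sigmoid(W_out[context] @ v) - 1.0
    grad_v = g * W_out[context]
    W_out[context] -= lr * g * v
    # Sampled negatives: push each score u_k . v_c down.
    for k in negatives:
        gk = sigmoid(W_out[k] @ v)
        grad_v += gk * W_out[k]
        W_out[k] -= lr * gk * v
    W_in[center] -= lr * grad_v

before = sigmoid(W_out[1] @ W_in[0])
for _ in range(50):
    sgns_step(center=0, context=1, negatives=[3, 4])
after = sigmoid(W_out[1] @ W_in[0])
print(before < after)  # the observed pair's predicted probability rises
```

The design point this illustrates is why negative sampling matters: instead of a softmax over the whole vocabulary, each update touches only the positive context word and a handful of sampled negatives, which is what makes Skip-gram training tractable on large corpora.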
What's included
3 videos, 26 readings, 3 assignments, 1 app item, 1 discussion prompt
3 videos • Total 11 minutes
- GloVe Training Process • 5 minutes
- Word2Vec • 4 minutes
- Skip-Gram • 2 minutes
26 readings • Total 176 minutes
- Introduction to GloVe • 5 minutes
- Co-occurrence Matrix • 5 minutes
- Objective: Ratio of Co-occurrences • 5 minutes
- Calculating Probability Ratios • 5 minutes
- Symmetry and Linearity in GloVe • 5 minutes
- Minimizing the Cost Function and Optimizing Word Vectors • 5 minutes
- Optimization Process • 10 minutes
- Final Word Vectors • 2 minutes
- Implicit Properties in GloVe • 5 minutes
- GloVe Introduction • 2 minutes
- What is Language Modeling? • 5 minutes
- Co-occurrence Matrix • 5 minutes
- Vector Representations for Words • 3 minutes
- Continuous Bag of Words (CBOW) • 5 minutes
- Mathematical Objectives • 10 minutes
- Mathematical Objectives 2 • 15 minutes
- Limitations of CBOW • 1 minute
- Skip-Gram • 15 minutes
- Gradient Derivation • 15 minutes
- The Challenge of Training Skip-Gram • 10 minutes
- Binary Classification Perspective • 10 minutes
- Gradient of Negative Sampling Objective • 10 minutes
- Connecting Between Skip-Gram, Negative Sampling, and One Sampling • 2 minutes
- Skip-Gram with Negative Sampling Across All Words • 10 minutes
- Negative Sampling in Skip-Gram Model • 10 minutes
- Congratulations • 1 minute
3 assignments • Total 54 minutes
- Check Your Knowledge: GloVe • 18 minutes
- Check Your Knowledge: Word2Vec & CBOW • 18 minutes
- Check Your Knowledge: Skip-Gram & Negative Sampling • 18 minutes
1 app item • Total 3 minutes
- GloVe Training Process • 3 minutes
1 discussion prompt • Total 60 minutes
- Word2Vec & GloVe • 60 minutes
Instructor

Offered by

Founded in 1898, Northeastern is a global research university with a distinctive, experience-driven approach to education and discovery. The university is a leader in experiential learning, powered by the world’s most far-reaching cooperative education program. The spirit of collaboration guides a use-inspired research enterprise focused on solving global challenges in health, security, and sustainability.

