Introduction to Machine Learning: Supervised Learning offers a clear, practical introduction to how machines learn from labeled data to make predictions and decisions. You’ll build a strong foundation in regression and classification, starting with linear and logistic regression and progressing to resampling, regularization, and tree-based ensemble methods. Along the way, you’ll learn how to evaluate models, manage bias–variance trade-offs, and balance interpretability with predictive power, all while working hands-on in Python. By the end of the course, you’ll have the skills and intuition needed to confidently apply supervised learning techniques to real-world problems.

Introduction to Machine Learning: Supervised Learning

Instructor: Daniel E. Acuna
Access provided by the Coursera Learning Team
2,052 already enrolled
What you'll learn
Explain and apply the core concepts of supervised learning.
Build, interpret, and evaluate predictive models for regression and classification.
Assess model reliability and improve generalization using validation and regularization techniques.
Apply tree-based and ensemble methods to capture complex relationships in data.
Details to know

Add to your LinkedIn profile
6 assignments
January 2026
Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 5 modules in this course
Welcome to Introduction to Machine Learning: Supervised Learning. In this first module, you will begin your journey into supervised learning by exploring how machines learn from labeled data to make predictions. You will learn to distinguish between supervised and unsupervised learning, and understand the key differences between regression and classification tasks. You will also gain insight into the broader machine learning workflow, including the roles of predictors, response variables, and the importance of training versus testing data. By the end of this module, you will have a solid foundation in the goals and mechanics of supervised learning.
What's included
12 videos, 7 readings, 2 assignments, 1 programming assignment, 1 discussion prompt
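The split between training and testing data described above can be sketched in a few lines of NumPy. This is an illustrative example, not course material: the simulated dataset and the 80/20 split ratio are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: 100 observations, 2 predictors, one continuous response.
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Shuffle the row indices, then hold out 20% as a test set the model never sees
# during training; performance on this held-out set estimates generalization.
idx = rng.permutation(len(X))
test_size = len(X) // 5
test_idx, train_idx = idx[:test_size], idx[test_size:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

The key point is that the test rows are disjoint from the training rows, so test error is an honest estimate of performance on new data.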
In this module, you will expand your understanding of linear models by incorporating multiple predictors, including categorical variables and interaction terms. You will learn how to interpret partial regression coefficients and assess the fit of your models using metrics like R² and RMSE. As you build more complex models, you will also explore the risks of overfitting and the importance of model validation. By the end of this module, you will be equipped to build and evaluate multiple linear regression models with confidence.
What's included
7 videos, 1 reading, 1 assignment, 1 programming assignment
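The workflow this module describes, fitting a linear model with multiple predictors (including a categorical one) and scoring it with R² and RMSE, can be sketched with plain NumPy least squares. The simulated data and coefficient values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: two numeric predictors plus a 0/1 categorical indicator.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
group = rng.integers(0, 2, size=n)          # categorical predictor (dummy-coded)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * group + rng.normal(scale=0.5, size=n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), x1, x2, group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each entry of beta is a partial regression coefficient: the expected change
# in y per unit change in that predictor, holding the others fixed.
y_hat = X @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

With this much data and little noise, the fitted coefficients land close to the true values, R² is high, and RMSE is close to the noise standard deviation.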
In this module, you will transition from predicting continuous outcomes to modeling categorical ones. You will learn how logistic regression models binary outcomes, like whether a customer will default on a loan, using probabilities and odds, and how to interpret the results. You will also explore k-Nearest Neighbors, a flexible, non-parametric method that classifies observations based on their proximity to others in the dataset. To evaluate your models, you will use tools like confusion matrices, accuracy, and precision/recall, gaining insight into how well your classifiers perform. This module lays the groundwork for tackling real-world classification problems with confidence and clarity.
What's included
13 videos, 1 reading, 1 assignment, 1 programming assignment
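The k-Nearest Neighbors classifier and the confusion-matrix metrics mentioned above can be implemented from scratch in a short sketch. The two-Gaussian dataset and the choice k = 5 are assumptions made here for illustration; the course's own datasets will differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two Gaussian classes in 2-D: class 0 centred at (-1,-1), class 1 at (+1,+1).
n = 100
X0 = rng.normal(loc=-1.0, size=(n, 2))
X1 = rng.normal(loc=+1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def knn_predict(X_train, y_train, X_new, k=5):
    """Classify each row of X_new by majority vote among its k nearest neighbours."""
    preds = []
    for x in X_new:
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

y_hat = knn_predict(X, y, X, k=5)

# Confusion matrix: rows = true class, columns = predicted class.
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y, y_hat):
    cm[t, p] += 1
accuracy = np.trace(cm) / cm.sum()
precision = cm[1, 1] / cm[:, 1].sum()   # of predicted positives, how many were right
recall = cm[1, 1] / cm[1, :].sum()      # of true positives, how many were found
```

Note that this evaluates on the training points themselves, which flatters kNN; in practice you would score on a held-out set, as covered in the next module.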
In this module, you will learn how to evaluate your models more reliably and improve their generalization to new data. You will explore resampling methods like k-fold cross-validation and the bootstrap, which help estimate test performance without needing a separate test set. You will also be introduced to the regularization techniques Ridge and Lasso that prevent overfitting by constraining model complexity. Using cross-validation, you will learn how to select the optimal regularization strength, balancing predictive accuracy with model simplicity. These tools are essential for building models that perform well not just in theory, but in practice.
What's included
10 videos, 2 readings, 1 assignment, 1 programming assignment
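Selecting a regularization strength by cross-validation, as this module describes, can be sketched end to end with closed-form ridge regression. The simulated data, the fold count, and the candidate λ grid below are illustrative assumptions, not values from the course.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy linear data with 10 predictors, only 3 of which carry signal --
# a setting where shrinking coefficients toward zero helps.
n, p = 120, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + rng.normal(scale=1.0, size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=5):
    """Average validation MSE of ridge over k folds."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(np.arange(len(y)), fold)
        b = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[fold] - X[fold] @ b) ** 2))
    return np.mean(errs)

# Score each candidate penalty and keep the one with lowest cross-validated MSE.
lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
scores = {lam: cv_mse(X, y, lam) for lam in lambdas}
best_lam = min(scores, key=scores.get)
```

Lasso follows the same selection recipe but has no closed-form solution, so it is fit iteratively (e.g. by coordinate descent) instead.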
This module introduces you to one of the most intuitive and interpretable machine learning models: decision trees. You will explore how trees split the feature space into regions, how to read their structure, and why they are prone to overfitting if left unchecked. Trees are just the beginning; this module also introduces ensemble techniques that elevate predictive accuracy by combining many models. You will get a first look at methods like bagging, random forests, and boosting, and see how they compare to the models you have already studied. By the end, you will understand when and why tree-based models can outperform simpler approaches, especially in capturing complex, non-linear relationships.
What's included
8 videos, 1 reading, 1 assignment, 1 programming assignment
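The core ideas above, a tree splitting the feature space into regions and bagging averaging many trees fit on bootstrap samples, can be sketched with the simplest possible tree, a depth-1 "stump". The sine-curve dataset and the stump-only trees are simplifying assumptions for illustration; real bagging and random forests grow much deeper trees.

```python
import numpy as np

rng = np.random.default_rng(4)

# Non-linear 1-D regression problem: y = sin(x) + noise.
n = 200
x = rng.uniform(-3, 3, size=n)
y = np.sin(x) + rng.normal(scale=0.2, size=n)

def fit_stump(x, y):
    """Depth-1 regression tree: the single split threshold minimising SSE.

    Returns (threshold, left-region mean, right-region mean)."""
    best = (None, None, None, np.inf)
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[3]:
            best = (t, left.mean(), right.mean(), sse)
    return best[:3]

def bagged_predict(x_train, y_train, x_new, n_trees=100):
    """Bagging: average the predictions of stumps fit on bootstrap resamples."""
    preds = np.zeros((n_trees, len(x_new)))
    for b in range(n_trees):
        idx = rng.integers(0, len(x_train), size=len(x_train))  # bootstrap sample
        t, lo, hi = fit_stump(x_train[idx], y_train[idx])
        preds[b] = np.where(x_new <= t, lo, hi)
    return preds.mean(axis=0)

y_hat = bagged_predict(x, y, x)
mse = np.mean((y - y_hat) ** 2)
```

Each individual stump is a crude two-level step function, but averaging a hundred of them, each fit on a different resample, smooths the prediction and reduces variance; random forests add random feature subsetting on top of this idea.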
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.