Learn to Choose the Right ML Model is an intermediate course for data scientists, ML engineers, and analytics-minded developers who want to make model choices they can defend, not just experiment and hope for the best. As machine learning powers more business-critical systems, success depends on moving beyond intuition to robust, fair, metrics-driven selection and deployment. In this course, you'll practice structured problem typing, compare major algorithm families, and apply real-world metrics to pick and monitor models that work in the wild. You'll learn through case studies (Zillow, Apple Card, and Google Flu Trends), hands-on labs with Python and scikit-learn, and scenario-driven coaching. By the end, you'll be able to frame ML problems, select and justify models, automate fairness and drift checks, and deploy pipelines you can trust, so your solutions succeed not just on paper but in production.

Skills you'll gain
- Applied Machine Learning
- Scikit Learn (Machine Learning Library)
- Case Studies
- MLOps (Machine Learning Operations)
- Machine Learning
- Continuous Monitoring
- Classification And Regression Tree (CART)
- Regression Analysis
- Performance Metric
- Machine Learning Algorithms
- Scenario Testing
- Model Evaluation
- Predictive Modeling
- Responsible AI
Details to know

There are 3 modules in this course
In this opening lesson, learners see how correctly typing a machine-learning problem and inspecting data traits set the stage for every modeling decision. Guided by the Zillow Offers collapse (problem: mis-priced homes from data drift; why it matters: a $420M loss), you'll practice spotting regression vs. classification tasks, gauging feature quality, and flagging distribution shifts before they derail a project. Videos, a data-profiling lab, and a peer discussion build the analytical eye needed to choose the right model family with confidence.
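The drift-flagging idea above can be sketched with a two-sample statistical test: compare the feature distribution the model was trained on against what it sees in production. A minimal illustration, using simulated home prices (the data, thresholds, and variable names here are assumptions for demonstration, not the course's actual lab):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Simulated "training-era" vs "serving-era" home prices (hypothetical data)
train_prices = rng.normal(300_000, 50_000, 1_000)
live_prices = rng.normal(340_000, 60_000, 1_000)  # market has shifted upward

# Two-sample Kolmogorov-Smirnov test: a low p-value flags a distribution shift
stat, p_value = ks_2samp(train_prices, live_prices)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}) - revisit the model")
```

In practice the alert threshold and the test itself (KS, PSI, etc.) are design choices; the point is that the check runs before mis-priced predictions reach users.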
What's included
3 videos, 3 readings, 1 assignment
In this lesson, learners will analyze the strengths and limitations of the most widely used machine learning model families—linear models, tree-based ensembles, clustering, and deep learning—to understand when and why each is best applied. The lesson focuses on why simply “trying every algorithm” leads to wasted effort, and how matching problem type and data structure to the right family enables smarter, faster, and more defensible results. Real-world failures, such as the Amazon recruiting engine bias, illustrate the pitfalls of poorly chosen models. Through scenario-based videos, guided readings, peer discussions, and hands-on labs, learners will practice comparing algorithms for fairness, performance, and interpretability—shifting from a toolbox mindset to strategic model selection.
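Comparing model families on equal footing usually means cross-validating each candidate on the same data and metric. A small scikit-learn sketch, using a synthetic dataset as a stand-in (the candidate list and settings are illustrative assumptions, not the course's lab code):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a real classification dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# One candidate per family: a linear model and a tree-based ensemble
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Swapping `scoring` for a fairness or calibration metric turns the same loop into the multi-criteria comparison the lesson describes.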
What's included
2 videos, 2 readings, 1 assignment
In this lesson, learners discover how wiring continuous evaluation into every training and deployment step transforms model delivery from a sprint of experiments into a reliable, data-driven decision engine. A midnight release scenario—where an unmonitored metric drifted and customer limits halved unexpectedly—shows why automated checks must begin with the very first cross-validation split and extend into live A/B tests.

Learners investigate practical tooling—MLflow for experiment tracking, Optuna for automated hyper-parameter tuning, Evidently for production drift alerts, and GitHub Actions workflows for reproducible evaluation—to ensure issues surface before a model reaches end users. Case studies of metric blindness and data drift (e.g., Apple Card's gender-bias probe and Google Flu Trends' over-forecasting) demonstrate how small oversights in monitoring or retraining cadence can spiral into reputational or financial damage, reinforcing the need for continuous oversight.

Hands-on demonstrations guide participants through:
- setting quantitative success criteria that mix accuracy, fairness, and cost
- configuring gates that fail a training run when key metrics regress
- running a live A/B test and interpreting uplift with statistical rigor

all without slowing delivery velocity. By the end of the lesson, learners will know both how to embed metric-driven workflows into real pipelines and why treating evaluation as an afterthought is no longer acceptable: validation must be continuous, integrated, and owned by every stakeholder in the ML lifecycle.
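A quality gate that fails a run on metric regression can be as simple as comparing the candidate's metrics against a stored baseline before promotion. A minimal sketch (baseline values, metric names, and tolerances are hypothetical, not the course's configuration):

```python
# Hypothetical production baseline: accuracy to beat, fairness gap not to exceed
BASELINE = {"accuracy": 0.90, "fairness_gap": 0.05}

def evaluate_gate(metrics, baseline, tolerance=0.01):
    """Return (passed, failures). Accuracy may not drop more than `tolerance`
    below baseline, and the fairness gap may not grow past the baseline."""
    failures = []
    if metrics["accuracy"] < baseline["accuracy"] - tolerance:
        failures.append("accuracy regressed")
    if metrics["fairness_gap"] > baseline["fairness_gap"]:
        failures.append("fairness gap widened")
    return (not failures, failures)

ok, why = evaluate_gate({"accuracy": 0.87, "fairness_gap": 0.08}, BASELINE)
print("PASS" if ok else f"FAIL: {', '.join(why)}")
# → FAIL: accuracy regressed, fairness gap widened
```

Run inside a CI workflow (e.g., a GitHub Actions job), a non-zero exit on failure blocks the deployment, which is the "gate" pattern the lesson demonstrates with dedicated tooling.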
What's included
4 videos, 1 reading, 3 assignments
Frequently asked questions
To access the course materials and graded assignments, and to earn a Certificate, you will need to purchase the Certificate experience when you enroll. You can try a Free Trial instead, or apply for Financial Aid. The course may also offer a 'Full Course, No Certificate' option, which lets you see all course materials, submit required assessments, and get a final grade, but means you will not be able to purchase a Certificate experience.
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can't afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you'll find a link to apply on the description page.
More questions
Financial aid available
¹ Some assignments in this course are AI-graded. For these assignments, your data will be used in accordance with the Coursera Privacy Notice.







