Imagine deploying a powerful machine learning model that performs flawlessly—until a single unpatched container, a poisoned dependency, or a misconfigured cloud service brings it crashing down. In today’s AI-driven world, securing ML systems is no longer optional; it’s essential to maintaining trust, compliance, and resilience.

What you'll learn
Apply infrastructure hardening in ML environments using secure setup, IAM controls, patching, and container scans to protect data.
Secure ML CI/CD workflows through automated dependency scanning, build validation, and code signing to prevent supply chain risks.
Design resilient ML pipelines by integrating rollback, drift monitoring, and adaptive recovery to maintain reliability and system trust.
Skills you'll gain
- Threat Modeling
- CI/CD
- DevSecOps
- Engineering
- Resilience
- MLOps (Machine Learning Operations)
- Vulnerability Scanning
- Security Controls
- Continuous Monitoring
- Model Deployment
- Vulnerability Assessments
- AI Security
- Infrastructure Security
- Identity and Access Management
- Hardening
- Compliance Management
- Containerization
- Model Evaluation
- Responsible AI
- AI Personalization
Details to know

Add to your LinkedIn profile
1 assignment
December 2025

Build subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 3 modules in this course
This module lays the foundation for securing machine learning systems by focusing on the underlying infrastructure that supports them. Learners will explore why strong security controls at the operating system, cloud, and container levels are essential for protecting sensitive ML workloads. Real-world breaches often start with overlooked vulnerabilities in servers, misconfigured storage buckets, or unsecured APIs, and this module provides the knowledge to prevent such entry points. Through theory, demonstration, and an interactive scenario, learners will gain the skills to harden ML environments, apply IAM best practices, and perform vulnerability scans that reveal weaknesses before attackers exploit them. By the end of this module, learners will understand how infrastructure hygiene directly impacts the integrity of ML models and data.
What's included
5 videos, 2 readings, 1 peer review
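The vulnerability-scan gating idea from this module can be sketched in a few lines of Python: parse a scan report and block deployment if any finding meets a blocking severity. The report shape, the sample CVE entries, and the `gate_on_scan` helper are illustrative assumptions for this sketch, not the output format of any specific scanner (tools such as Trivy or Grype emit richer JSON).

```python
import json

# Hypothetical scan report, loosely shaped like common scanner output (assumption).
SAMPLE_REPORT = json.dumps({
    "image": "ml-serving:latest",
    "findings": [
        {"id": "CVE-2024-0001", "severity": "CRITICAL", "package": "openssl"},
        {"id": "CVE-2024-0002", "severity": "MEDIUM", "package": "zlib"},
    ],
})

# Severities that should fail the deployment gate.
BLOCKING = {"HIGH", "CRITICAL"}

def gate_on_scan(report_json: str) -> list[str]:
    """Return the IDs of findings severe enough to block deployment."""
    report = json.loads(report_json)
    return [f["id"] for f in report["findings"] if f["severity"] in BLOCKING]

blocked = gate_on_scan(SAMPLE_REPORT)
if blocked:
    print(f"Deployment blocked by: {blocked}")  # → Deployment blocked by: ['CVE-2024-0001']
```

In a real pipeline this check would run in CI against the scanner's actual JSON output, with a non-zero exit code failing the build.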
This module builds on the infrastructure layer by addressing the unique risks found in machine learning build and deployment workflows. Continuous integration and continuous deployment (CI/CD) pipelines accelerate innovation, but they also introduce opportunities for adversaries to slip in malicious dependencies, poisoned data, or corrupted artifacts. Learners will study the anatomy of ML supply chain attacks and discover practical strategies to counter them, such as dependency scanning, code signing, and reproducible builds. The combination of theory, real-world case studies, and a hands-on demo will help learners see how insecure workflows can compromise entire AI systems. By the end of this module, participants will be able to design and implement CI/CD pipelines that embed security into every stage of model development and deployment.
What's included
3 videos, 1 reading, 1 peer review
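As one illustration of the artifact-integrity idea in this module, here is a minimal Python sketch of hash pinning: record a digest of the model artifact at build time and verify it before deployment. The names (`artifact_digest`, `verify_artifact`) and the in-memory "artifact" are assumptions for the example; production pipelines typically rely on dedicated signing tooling (e.g. Sigstore) rather than bare digests.

```python
import hashlib
import hmac

def artifact_digest(data: bytes) -> str:
    """SHA-256 digest, recorded at build time (e.g. in a build manifest)."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(artifact_digest(data), pinned_digest)

model_bytes = b"fake model weights"    # stand-in for a serialized model file
pinned = artifact_digest(model_bytes)  # stored alongside the build record

print(verify_artifact(model_bytes, pinned))          # True
print(verify_artifact(b"tampered weights", pinned))  # False
```

The same pattern extends to dependencies: pinning package hashes in a lockfile is what makes a build reproducible and tampering detectable.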
This module brings together infrastructure and workflow security into a forward-looking focus on resilience. No pipeline is immune to compromise or error, but resilient pipelines are designed to detect issues quickly, recover gracefully, and maintain trustworthiness under stress. Learners will study common compromise vectors in ML systems, from adversarial inputs to model drift, and then explore resilience strategies like rollback, redundancy, and drift monitoring. The demo illustrates how even a simple rollback can protect business continuity when a model misbehaves in production. The scenario-based dialogue challenges learners to think critically about balancing speed, reliability, and safety in real-world ML operations. By the end of this module, learners will understand how to engineer resilience into ML pipelines so that failures and attacks become manageable events rather than catastrophic disruptions.
What's included
4 videos, 1 reading, 1 assignment, 2 peer reviews
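The rollback-on-drift pattern from this module can be sketched as: flag drift when live model scores shift far from a baseline, then roll the registry back one version. The `ModelRegistry` class, the z-score drift test, and the sample score data are all simplified assumptions for illustration; real systems use richer drift statistics (e.g. PSI or KS tests) and a proper model registry.

```python
from statistics import mean, stdev

def drifted(baseline: list[float], live: list[float], z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean departs from the baseline by > z_threshold sigmas."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(live) - mu) > z_threshold * sigma

class ModelRegistry:
    """Toy registry: keeping prior versions makes rollback a one-step operation."""
    def __init__(self):
        self.versions: list[str] = []
    def deploy(self, name: str) -> None:
        self.versions.append(name)
    @property
    def active(self) -> str:
        return self.versions[-1]
    def rollback(self) -> str:
        if len(self.versions) > 1:
            self.versions.pop()
        return self.active

registry = ModelRegistry()
registry.deploy("model-v1")
registry.deploy("model-v2")

baseline = [0.50, 0.52, 0.49, 0.51, 0.50]  # scores under model-v1 (assumed data)
live = [0.90, 0.92, 0.88, 0.91, 0.93]      # scores after deploying v2 (assumed data)

if drifted(baseline, live):
    registry.rollback()
print(registry.active)  # → model-v1
```

Because prior versions are retained, a misbehaving model becomes a manageable event: detection triggers a pointer move, not an outage.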
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Offered by