LearnQuest

Foundations of AI Governance and Responsible Development


Instructor: LearnQuest Network

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Beginner level

Recommended experience

4 hours to complete
Flexible schedule
Learn at your own pace

What you'll learn

  • Design AI lifecycle governance with checkpoints, roles, and audit-ready workflows.

  • Apply explainability methods (SHAP, LIME) to ensure transparent, compliant AI decisions.

  • Build traceable documentation, versioning systems, and audit-ready AI reports.

Details to know

Shareable certificate

Add to your LinkedIn profile

Taught in English


There are 3 modules in this course

AI systems move through distinct stages—data acquisition, model training, evaluation, and deployment—but without governance embedded at each stage, critical decisions go undocumented and accountability gaps emerge under regulatory scrutiny. In this module, you examine how to structure AI development as a traceable, governance-integrated pipeline. You map lifecycle stages to governance checkpoints aligned with frameworks like the NIST AI Risk Management Framework and the EU AI Act, and you design responsibility matrices that assign clear ownership for model decisions across technical, risk, and compliance roles. By the end of this module, you will be able to define governance checkpoints for each lifecycle stage and build accountability structures that connect developer work to audit and explainability requirements.
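The checkpoint-and-ownership structure described above can be sketched in code. The stage names, checkpoint labels, and role titles below are hypothetical illustrations, not text drawn from the NIST AI RMF or the EU AI Act; the point is that a responsibility matrix can be made machine-checkable so accountability gaps surface automatically.

```python
# Illustrative sketch: mapping lifecycle stages to governance checkpoints
# and owners. All stage, checkpoint, and role names are made-up examples.

LIFECYCLE_CHECKPOINTS = {
    "data_acquisition": {
        "checkpoint": "data provenance and consent review",
        "owners": {"technical": "data engineer",
                   "risk": "privacy officer",
                   "compliance": "legal counsel"},
    },
    "model_training": {
        "checkpoint": "bias and performance sign-off",
        "owners": {"technical": "ml engineer",
                   "risk": "model risk analyst",
                   "compliance": "model validator"},
    },
    "evaluation": {
        "checkpoint": "pre-deployment risk assessment",
        "owners": {"technical": "qa lead",
                   "risk": "risk committee",
                   "compliance": "audit liaison"},
    },
    "deployment": {
        "checkpoint": "monitoring and rollback plan approval",
        "owners": {"technical": "mlops engineer",
                   "risk": "incident manager",
                   "compliance": "regulatory reporter"},
    },
}

REQUIRED_ROLES = {"technical", "risk", "compliance"}

def accountability_gaps(checkpoints: dict) -> list[str]:
    """Return the stages where any required role lacks an assigned owner."""
    gaps = []
    for stage, spec in checkpoints.items():
        missing = REQUIRED_ROLES - set(spec["owners"])
        if missing:
            gaps.append(f"{stage}: missing {sorted(missing)}")
    return gaps

print(accountability_gaps(LIFECYCLE_CHECKPOINTS))  # [] when fully assigned
```

Encoding the matrix as data rather than prose means a CI job or governance review can flag unowned checkpoints before a model advances to the next lifecycle stage.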

What's included

11 videos, 2 readings, 1 assignment

In this module, you will explore the methods and governance practices that make machine learning models explainable and transparent to the people who oversee, audit, and are affected by them. You will examine how post-hoc techniques such as SHAP and LIME assign attribution to individual predictions, and why the distinction between global and local explanations matters for regulated decision-making. You will also examine how raw technical outputs from these methods must be translated into artifacts that satisfy compliance requirements and communicate meaningfully to risk committees, regulators, and business leaders. By the end of this module, you will be able to implement and validate an explainability pipeline, interpret its outputs for diverse audiences, and integrate those outputs into governance and compliance workflows.
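The idea of local attribution that SHAP and LIME implement can be shown with a minimal sketch. For a linear model, exact Shapley values reduce to the closed form w_i · (x_i − E[x_i]); a real pipeline would use a library such as shap or lime on a trained model, and the weights and data below are invented for illustration only.

```python
# Minimal sketch of local feature attribution for a linear model.
# For linear models, Shapley values have the closed form
# w_i * (x_i - E[x_i]); real pipelines use libraries such as shap
# or lime. All numbers below are hypothetical.

def linear_shap(weights, x, baseline):
    """Attribute each feature's contribution to the prediction's
    deviation from the baseline (average) prediction."""
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

def predict(weights, x, bias=0.0):
    return bias + sum(w * xi for w, xi in zip(weights, x))

weights = [2.0, -1.0, 0.5]   # hypothetical model coefficients
baseline = [1.0, 1.0, 1.0]   # E[x] over a reference dataset
x = [3.0, 1.0, 5.0]          # the individual instance being explained

phi = linear_shap(weights, x, baseline)

# Local accuracy: attributions sum exactly to f(x) - f(baseline),
# the property auditors rely on when validating an explanation.
assert abs(sum(phi) - (predict(weights, x) - predict(weights, baseline))) < 1e-9
print(phi)  # [4.0, -0.0, 2.0]
```

The local/global distinction the module draws maps directly onto this sketch: `phi` explains one prediction, while averaging |phi| over many instances yields the global importance ranking a risk committee would review.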

What's included

9 videos, 1 reading, 1 assignment

In this module, you focus on the documentation practices that make AI systems auditable in real-world corporate environments. You examine how to establish traceability across models, data, and configurations so that any decision can be reconstructed with confidence. You also learn how to structure audit-ready reports that translate technical evidence into governance artifacts aligned with regulatory expectations. These practices are critical when systems are reviewed by internal audit, regulators, or risk committees. By the end of this module, you will be able to design traceable AI documentation systems and produce structured audit reports that support compliance, accountability, and operational decision-making.
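Traceability across models, data, and configurations is often implemented by fingerprinting each artifact so a decision can be reconstructed later. The sketch below shows one way to do this with content hashes; the field names and the `decision_record` structure are hypothetical, not a formal standard.

```python
# Illustrative sketch: an audit-ready decision record that ties a
# prediction to hashes of the model version, configuration, and input,
# so the decision can be reconstructed and verified later.
# Field names are hypothetical examples, not a formal standard.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(obj) -> str:
    """Stable SHA-256 over a JSON-serializable artifact (key order
    is normalized so logically equal configs hash identically)."""
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def decision_record(model_version, config, input_row, prediction):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "config_hash": fingerprint(config),
        "input_hash": fingerprint(input_row),
        "prediction": prediction,
    }

config = {"threshold": 0.7, "features": ["age", "income"]}
record = decision_record("credit-model-1.4.2", config,
                         {"age": 41, "income": 52000}, "approve")

# An auditor can re-hash the archived config to confirm it is the
# exact configuration that produced this decision.
assert record["config_hash"] == fingerprint(config)
```

In practice these records would be appended to an immutable store and joined with model-registry and data-versioning metadata, but the core traceability property is already visible here: any tampering with the stored config or input changes its hash and breaks the audit check.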

What's included

10 videos, 1 reading, 1 assignment

Instructor

LearnQuest Network
LearnQuest
204 courses, 987,197 learners

Offered by

LearnQuest


Frequently asked questions

¹ Some assignments in this course are graded with AI. For these assignments, your data will be used in accordance with the Coursera Privacy Notice.