Board Infinity

AI Risk and Compliance: Audit and Governance Foundations

Board Infinity

AI Risk and Compliance: Audit and Governance Foundations

Board Infinity

位教师:Board Infinity

访问权限由 Coursera Learning Team 提供

深入了解一个主题并学习基础知识。
中级 等级

推荐体验

2 周 完成
在 10 小时 一周
灵活的计划
自行安排学习进度
深入了解一个主题并学习基础知识。
中级 等级

推荐体验

2 周 完成
在 10 小时 一周
灵活的计划
自行安排学习进度

您将学到什么

  • Execute adversarial red teaming scans using Giskard to identify and prioritize AI vulnerabilities

  • Classify AI systems under the EU AI Act and apply NIST AI RMF across the AI lifecycle

  • Generate SHAP/LIME explanations and create audit-ready transparency documentation

  • Implement guardrails, PII scrubbing with Presidio, and governance controls to mitigate Shadow AI

要了解的详细信息

可分享的证书

添加到您的领英档案

作业

16 项作业

授课语言:英语(English)
最近已更新!

April 2026

了解顶级公司的员工如何掌握热门技能

Petrobras, TATA, Danone, Capgemini, P&G 和 L'Oreal 的徽标

积累特定领域的专业知识

本课程是 Managing AI Systems: Development, Deployment, and Governance 专项课程 专项课程的一部分
在注册此课程时,您还会同时注册此专项课程。
  • 向行业专家学习新概念
  • 获得对主题或工具的基础理解
  • 通过实践项目培养工作相关技能
  • 获得可共享的职业证书

该课程共有4个模块

In this module, learners dive into the adversarial threat landscape for modern AI systems and practice structured red teaming workflows. You will explore real-world AI threat models, including jailbreaks, prompt injection, leakage, and manipulation attacks, and distinguish benign failures from genuinely adversarial behavior. Through videos, readings, AI dialogues, and a hands-on lab using Giskard, you will learn how to execute automated red teaming, interpret vulnerability reports, and prioritize remediation actions. By the end of the module, you will be prepared to evaluate system readiness under adversarial conditions and document findings in an audit- and security-friendly format.

涵盖的内容

9个视频3篇阅读材料4个作业

This module focuses on the regulatory and risk-management frameworks that govern enterprise AI systems, with emphasis on the EU AI Act, the NIST AI Risk Management Framework (RMF), and key copyright and data usage issues. Learners will analyze EU AI Act risk tiers, high-risk obligations, conformity assessments, and post-market monitoring requirements. You will then map AI lifecycle activities to the NIST AI RMF functions and apply NIST-aligned risk assessment techniques. The module also examines training-data licensing, ownership of LLM outputs, enterprise liability, and unauthorized training risks. Through a lab and applied exercises, you will classify AI systems under the EU AI Act, map risks to NIST functions, and produce concise compliance documentation.

涵盖的内容

9个视频3篇阅读材料4个作业

In this module, learners explore explainable AI (XAI) techniques and transparency practices for large language models and other complex systems. You will investigate why explainability is challenging for LLMs and compare leading XAI methods such as SHAP, LIME, and attention maps, including guidance on when to use each. The module then turns to stakeholder-facing communication, showing how to generate human-readable explanations and present them effectively to executives and regulators while maintaining faithfulness and reliability. Finally, you will design transparency workflows that satisfy governance and compliance requirements, including documentation of system and decision flows. A hands-on lab guides you through applying SHAP or LIME to a text classifier and drafting a transparency report suitable for audits.

涵盖的内容

9个视频3篇阅读材料4个作业

This capstone module addresses practical governance controls for safe AI usage, focusing on guardrails frameworks, PII protection, and Shadow AI mitigation. Learners begin by implementing guardrails for safety and policy enforcement using Guardrails AI and NVIDIA NeMo, including rule-based and semantic guardrails and testing them against attacks. The module then introduces Microsoft Presidio for PII detection and anonymization, demonstrating how to detect, mask, and scrub sensitive data and integrate Presidio into RAG pipelines. Finally, you will examine Shadow AI risks in enterprises, monitoring and enforcement techniques, and organization-wide governance controls. A major lab ties these elements together by red teaming a chatbot with Giskard, implementing Guardrails and Presidio, and producing comprehensive evidence and documentation that serve as the practical course capstone.

涵盖的内容

9个视频3篇阅读材料4个作业

获得职业证书

将此证书添加到您的 LinkedIn 个人资料、简历或履历中。在社交媒体和绩效考核中分享。

位教师

Board Infinity
Board Infinity
241 门课程401,829 名学生

提供方

Board Infinity

人们为什么选择 Coursera 来帮助自己实现职业发展

Felipe M.

自 2018开始学习的学生
''能够按照自己的速度和节奏学习课程是一次很棒的经历。只要符合自己的时间表和心情,我就可以学习。'

Jennifer J.

自 2020开始学习的学生
''我直接将从课程中学到的概念和技能应用到一个令人兴奋的新工作项目中。'

Larry W.

自 2021开始学习的学生
''如果我的大学不提供我需要的主题课程,Coursera 便是最好的去处之一。'

Chaitanya A.

''学习不仅仅是在工作中做的更好:它远不止于此。Coursera 让我无限制地学习。'

从 Computer Science 浏览更多内容