In this course, we enter the space of Prompt Injection Attacks, a critical concern for businesses utilizing Large Language Model systems in their AI applications. By exploring practical examples and real-world implications, such as potential data breaches, system malfunctions, and compromised user interactions, you will grasp the mechanics of these attacks and their potential impact on AI systems.

Introduction to Prompt Injection Vulnerabilities


位教师:Kevin Cardwell
访问权限由 New York State Department of Labor 提供
1,558 人已注册
您将学到什么
Analyze and discuss various attack methods targeting Large Language Model (LLM) applications.
Demonstrate the ability to identify and comprehend the primary attack method, Prompt Injection, used against LLMs.
Evaluate the risks associated with Prompt Injection attacks and gain an understanding of the different attack scenarios involving LLMs.
Formulate strategies for mitigating Prompt Injection attacks, enhancing their knowledge of security measures against such threats.
您将获得的技能
要了解的详细信息
了解顶级公司的员工如何掌握热门技能

积累特定领域的专业知识
- 向行业专家学习新概念
- 获得对主题或工具的基础理解
- 通过实践项目培养工作相关技能
- 获得可共享的职业证书

该课程共有1个模块
In this course, we enter the space of Prompt Injection Attacks, a critical concern for businesses utilizing Large Language Model systems in their AI applications. By exploring practical examples and real-world implications, such as potential data breaches, system malfunctions, and compromised user interactions, you will grasp the mechanics of these attacks and their potential impact on AI systems.
涵盖的内容
17个视频6篇阅读材料4个作业
获得职业证书
将此证书添加到您的 LinkedIn 个人资料、简历或履历中。在社交媒体和绩效考核中分享。
提供方
人们为什么选择 Coursera 来帮助自己实现职业发展

Felipe M.

Jennifer J.

Larry W.

Chaitanya A.
学生评论
- 5 stars
55%
- 4 stars
5%
- 3 stars
5%
- 2 stars
25%
- 1 star
10%
显示 3/20 个
已于 Mar 15, 2025审阅
kindly provide certificate for free there is a reason we went for the free course. Kindly.
从 Computer Science 浏览更多内容

Johns Hopkins University

University of California, Davis
¹ 本课程的部分作业采用 AI 评分。对于这些作业,将根据 Coursera 隐私声明使用您的数据。





