This course offers a comprehensive exploration of the security measures necessary for developing and deploying AI implementations, including large language models (LLMs) and Retrieval-Augmented Generation (RAG). It addresses critical considerations and mitigations that reduce overall risk in organizational AI system development. Experienced author and trainer Omar Santos emphasizes "secure by design" principles, focusing on security outcomes, radical transparency, and building organizational structures that prioritize security. You will be introduced to AI threats, LLM security, prompt injection, insecure output handling, and red-teaming AI models. The course concludes by teaching you how to protect RAG implementations: you will learn about orchestration libraries such as LangChain and LlamaIndex, as well as securing vector databases, selecting embedding models, and more.
What you'll learn
Explore security for deploying and developing AI applications, RAG, agents, and other AI implementations
Gain hands-on, practical skills from real-life AI and machine learning cases
Incorporate security at every stage of AI development, deployment, and operation
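As a small taste of the prompt-injection topic listed above, here is an illustrative sketch (not course material) of a naive heuristic screen that flags suspicious user input before it reaches an LLM. The phrase list and function name are assumptions for the example; real defenses layer many controls beyond simple pattern matching.

```python
# Toy prompt-injection heuristic: flag inputs containing common
# instruction-override phrases. Illustrative only; a real system would
# combine this with output handling, privilege separation, and monitoring.
INJECTION_PATTERNS = [
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard the system prompt",
    "you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(pattern in lowered for pattern in INJECTION_PATTERNS)

print(looks_like_injection("What is RAG?"))                          # False
print(looks_like_injection("Ignore previous instructions and ..."))  # True
```

A filter like this is easy to bypass (paraphrases, encodings, other languages), which is exactly why courses on LLM security pair input screening with insecure-output handling and red-team testing.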
Skills you'll gain
Details to know

Shareable certificate
Add to your LinkedIn profile
Recently updated!
September 2025
Assignments
7 assignments
Taught in English