As large language models revolutionize business operations, sophisticated attackers exploit AI systems through prompt injection, jailbreaking, and content manipulation—vulnerabilities that traditional security tools cannot detect. This intensive course empowers AI developers, cybersecurity professionals, and IT managers to systematically identify and mitigate LLM-specific threats before deployment. Master red-teaming methodologies using industry-standard tools like PyRIT, NVIDIA Garak, and Promptfoo to uncover hidden vulnerabilities through adversarial testing. Learn to design and implement multi-layered content-safety filters that block sophisticated bypass attempts while maintaining system functionality. Through hands-on labs, you'll establish resilience baselines, implement continuous monitoring systems, and create adaptive defenses that strengthen over time.

Genießen Sie unbegrenztes Wachstum mit einem Jahr Coursera Plus für 199 $ (regulär 399 $). Jetzt sparen.

Empfohlene Erfahrung
Was Sie lernen werden
Design red-teaming scenarios to identify vulnerabilities and attack vectors in large language models using structured adversarial testing.
Implement content-safety filters to detect and mitigate harmful outputs while maintaining model performance and user experience.
Evaluate and enhance LLM resilience by analyzing adversarial inputs and developing defense strategies to strengthen overall AI system security.
Kompetenzen, die Sie erwerben
- Kategorie: Penetration Testing
- Kategorie: Prompt Engineering
- Kategorie: Security Controls
- Kategorie: Threat Modeling
- Kategorie: Large Language Modeling
- Kategorie: Vulnerability Scanning
- Kategorie: AI Personalization
- Kategorie: Cyber Security Assessment
- Kategorie: Responsible AI
- Kategorie: Security Strategy
- Kategorie: System Implementation
- Kategorie: LLM Application
- Kategorie: Continuous Monitoring
- Kategorie: Scenario Testing
- Kategorie: Vulnerability Assessments
- Kategorie: Security Testing
- Kategorie: AI Security
Wichtige Details

Zu Ihrem LinkedIn-Profil hinzufügen
Dezember 2025
Erfahren Sie, wie Mitarbeiter führender Unternehmen gefragte Kompetenzen erwerben.

In diesem Kurs gibt es 3 Module
This module introduces participants to the systematic creation and execution of red-teaming scenarios targeting large language models. Students learn to identify common vulnerability categories including prompt injection, jailbreaking, and data extraction attacks. The module demonstrates how to design realistic adversarial scenarios that mirror real-world attack patterns, using structured methodologies to probe LLM weaknesses. Hands-on demonstrations show how red-teamers simulate malicious user behavior to uncover security gaps before deployment.
Das ist alles enthalten
4 Videos2 Lektüren1 peer review
This module covers the design, implementation, and evaluation of content-safety filters for LLM applications. Participants explore multi-layered defense strategies including input sanitization, output filtering, and behavioral monitoring systems. The module demonstrates how to configure safety mechanisms that balance security with functionality, and shows practical testing methods to validate filter effectiveness against sophisticated bypass attempts. Real-world examples illustrate the challenges of maintaining robust content filtering while preserving user experience.
Das ist alles enthalten
3 Videos1 Lektüre1 peer review
This module focuses on comprehensive resilience testing and systematic improvement of AI system robustness. Students learn to conduct thorough security assessments that measure LLM resistance to adversarial inputs, evaluate defense mechanism effectiveness, and identify areas for improvement. The module demonstrates how to establish baseline security metrics, implement iterative hardening processes, and validate improvements through continuous testing. Participants gain skills in developing robust AI systems that maintain integrity under real-world adversarial conditions.
Das ist alles enthalten
4 Videos1 Lektüre1 Aufgabe2 peer reviews
von
Mehr von Computer Security and Networks entdecken
Status: Kostenloser Testzeitraum
Pearson
Warum entscheiden sich Menschen für Coursera für ihre Karriere?




Häufig gestellte Fragen
To access the course materials, assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in a course. You can try a Free Trial instead, or apply for Financial Aid. The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade. This also means that you will not be able to purchase a Certificate experience.
When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can’t afford the enrollment fee. If fin aid or scholarship is available for your learning program selection, you’ll find a link to apply on the description page.
Weitere Fragen
Finanzielle Unterstützung verfügbar,
¹ Einige Aufgaben in diesem Kurs werden mit AI bewertet. Für diese Aufgaben werden Ihre Daten in Übereinstimmung mit Datenschutzhinweis von Courseraverwendet.







