Building Reliable LLM Systems is a comprehensive course for AI practitioners looking to move beyond basic models and create production-grade applications. While getting an LLM to generate text is easy, ensuring consistently accurate, relevant, and trustworthy output is a significant engineering challenge. This course provides a systematic framework for tackling the entire lifecycle of LLM reliability.

Building Reliable LLM Systems
What you'll learn
- Build scripts with lexical/semantic metrics to evaluate LLMs, diagnose hallucinations, and balance vector-search recall against latency.
- Apply hypothesis testing, confidence intervals, and significance metrics to evaluate model accuracy and validate results from A/B experiments.
- Utilize parameterized SQL and data manipulation to segment user logs, calculate retention, and securely retrieve large-scale datasets.
- Analyze LLM performance gaps to prioritize technical fixes and implement remediation measures for production-level reliability.
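The parameterized-SQL skill above can be sketched in a few lines. This is a minimal illustration using Python's standard `sqlite3` module and a made-up `events` table, not the course's own exercises; the `?` placeholder keeps user-supplied values out of the SQL string, which is what makes the retrieval "secure".

```python
# Minimal sketch: parameterized SQL to segment user logs.
# The `events` table and its data are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, day INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", 1), ("a", 2), ("b", 1)],
)

# The `?` placeholder passes `min_day` as a bound parameter rather than
# interpolating it into the query string, preventing SQL injection.
min_day = 2
rows = conn.execute(
    "SELECT DISTINCT user_id FROM events WHERE day >= ?", (min_day,)
).fetchall()
print(rows)  # → [('a',)] — users active on or after day 2
```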
Skills you'll gain
Details to know

Build subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 5 modules in this course
This module lays the groundwork for quantitative Large Language Model (LLM) evaluation. Learners will discover why relying on intuition to judge model performance is unsustainable and explore the foundational metrics used to create automated, objective evaluation systems. We will cover both lexical similarity metrics (like BLEU and ROUGE-L) that assess text structure and semantic metrics (like cosine similarity) that capture meaning. By the end of this module, learners will have the conceptual understanding and practical code to build their first automated evaluation script.
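The contrast between lexical and semantic scoring can be previewed with a minimal sketch (not the course's own code): a cosine similarity over bag-of-words vectors, standing in for the embedding-based version. A real pipeline would use BLEU/ROUGE implementations and an embedding model instead.

```python
# Minimal sketch: cosine similarity over bag-of-words vectors.
# Real semantic metrics would use dense embeddings, not word counts.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

reference = "the cat sat on the mat"
candidate = "the cat is on the mat"
print(round(cosine_similarity(reference, candidate), 3))  # → 0.875
```

Even with plain word counts, the score rewards shared vocabulary rather than exact n-gram order, which is the intuition behind moving from lexical metrics toward semantic ones.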
What's included
8 videos, 3 readings, 3 assignments, 3 ungraded labs
When a production chatbot starts giving incorrect answers, how do you find the problem and fix it? This module equips AI practitioners, ML engineers, and data analysts with the essential skills for debugging production LLMs. Go beyond theory and learn the systematic, data-driven workflow that professionals use to solve the critical problem of AI hallucinations. You will be equipped to transition from merely observing AI failures to expertly diagnosing and resolving them.
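One data-driven triage step for hallucinations can be sketched as follows. This is a hypothetical illustration, not the module's actual workflow: it flags answers whose token overlap with the retrieved context falls below a threshold, so a reviewer can prioritize them. The log records, field names, and the 0.6 threshold are all made-up assumptions.

```python
# Hypothetical sketch: flag answers poorly supported by retrieved context.
def support_ratio(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

# Illustrative log entries (not real production data).
logs = [
    {"id": 1, "answer": "refunds take 5 days",
     "context": "refunds take 5 days to process"},
    {"id": 2, "answer": "we ship to mars",
     "context": "we ship within the us and canada"},
]

# Entries below the threshold are candidates for hallucination review.
flagged = [e["id"] for e in logs if support_ratio(e["answer"], e["context"]) < 0.6]
print(flagged)  # → [2]
```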
What's included
5 videos, 3 readings, 3 assignments, 2 ungraded labs
When making high-stakes deployment decisions, a simple accuracy score is not enough. This module equips you with the statistical methods to rigorously validate LLM performance improvements. By the end of this module, you will be able to move beyond subjective "it seems better" evaluations to confidently state, "we can prove it's better," ensuring every deployment decision is backed by sound statistical evidence.
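The "we can prove it's better" claim typically rests on a test like the following minimal sketch: a two-proportion z-test comparing the accuracy of two model variants, using only the normal approximation and made-up counts (780/1000 vs 820/1000). This is an illustration of the general technique, not the course's specific procedure.

```python
# Minimal sketch: two-proportion z-test for model accuracy comparison.
# Counts are illustrative, not real experiment data.
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Model A: 780/1000 correct; Model B: 820/1000 correct.
z = two_proportion_z(780, 1000, 820, 1000)
# |z| > 1.96 indicates significance at the two-sided 5% level.
print(round(z, 2))  # → 2.24
```

Here the improvement clears the 1.96 cutoff, so under these (invented) numbers the difference would be statistically significant rather than noise.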
What's included
5 videos, 2 readings, 3 assignments, 3 ungraded labs
In the world of large-scale AI, slow queries and inefficient search can bring a system to its knees. This module provides the critical skills to prevent that, focusing on practical database and vector search optimization techniques. By the end of this module, you will be equipped to systematically analyze and optimize production retrieval systems, ensuring your AI applications are not only powerful but also fast and reliable.
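A core measurement behind retrieval optimization is recall@k of an approximate search against exact brute force. The sketch below is a toy stand-in: the "approximate" index just searches a random half of the database, mimicking the recall-for-speed trade-off that real indexes like HNSW make via tunable parameters. All data here is randomly generated.

```python
# Toy sketch: measure recall@k of a lossy candidate search vs brute force.
# The random-subset "index" stands in for an ANN structure like HNSW.
import random

random.seed(0)
dim, n, k = 8, 500, 10
db = [[random.random() for _ in range(dim)] for _ in range(n)]
query = [random.random() for _ in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Ground truth: exact top-k by brute force over all n vectors.
exact = sorted(range(n), key=lambda i: sq_dist(db[i], query))[:k]

# "Approximate" search: only half the vectors are candidates (cheaper, lossy).
subset = random.sample(range(n), n // 2)
approx = sorted(subset, key=lambda i: sq_dist(db[i], query))[:k]

recall = len(set(exact) & set(approx)) / k
print(f"recall@{k} = {recall:.1f}")
```

Growing the candidate set raises recall toward 1.0 at the cost of latency, which is the same dial that parameters like HNSW's query-time `ef` control in production systems.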
What's included
4 videos, 3 readings, 4 assignments, 3 ungraded labs
In this module, you will conduct an end-to-end performance audit comparing two LLM variants using an A/B test dataset. You will implement a pipeline to calculate key performance metrics, including lexical and semantic similarity, and use statistical A/B testing to validate model improvements. The project culminates in a comprehensive report where you will correlate hallucination rates with retrieval logs and synthesize your findings into data-driven recommendations for stakeholders, guiding the decision for a production-level rollout in a customer support application.
What's included
2 readings, 1 assignment
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Instructors

Offered by
Frequently asked questions
The course assumes basic familiarity with statistics. It includes practical, applied lessons on confidence intervals and hypothesis testing, and offers step-by-step examples so that practitioners with modest statistical knowledge can follow along. Consider a short statistics refresher if you are new to hypothesis testing.
You will write evaluation scripts in Python, analyze logs and segmented datasets, run A/B test analyses, use SQL for data retrieval, and evaluate vector-search parameters (e.g., HNSW) commonly used with vector databases. No proprietary tools are required.
The course focuses on measurable, repeatable engineering practices: automated evaluation pipelines, statistical experiment design, log-driven debugging, and data-layer tuning. These skills help you prioritize fixes and validate improvements in real production settings.
More questions
Financial aid available.
¹ Some assignments in this course are AI-graded. For these assignments, your data will be used in accordance with the Coursera Privacy Notice.