This course is designed for intermediate-level software developers, cloud engineers, and system architects responsible for building and scaling LLM applications. As AI systems become more complex, a resilient and scalable architecture is no longer a luxury but a necessity. This course provides a focused, practical guide to designing robust, cloud-native microservices that can withstand failure and scale on demand.

Architect Resilient LLM Microservices for Scale

Instructor: LearningMate
Access provided by New York State Department of Labor
What you'll learn
Design and implement scalable, resilient microservice architectures for LLM apps using the 12-factor app methodology for fault tolerance in the cloud
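One 12-factor principle central to fault tolerance is storing configuration in the environment rather than in code, so the same build can run unchanged across regions and deployment stages. A minimal sketch in Python, assuming hypothetical variable names (`MODEL_ENDPOINT`, `REQUEST_TIMEOUT_S`, `MAX_RETRIES`) rather than any specific platform's settings:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceConfig:
    """Service settings read from the environment (12-factor: config)."""
    model_endpoint: str
    request_timeout_s: float
    max_retries: int

def load_config(env=os.environ) -> InferenceConfig:
    # Defaults let the service start locally; production overrides via env vars.
    return InferenceConfig(
        model_endpoint=env.get("MODEL_ENDPOINT", "http://localhost:8000/v1"),
        request_timeout_s=float(env.get("REQUEST_TIMEOUT_S", "30")),
        max_retries=int(env.get("MAX_RETRIES", "3")),
    )

# Example: an orchestrator injects the production endpoint at deploy time.
cfg = load_config({"MODEL_ENDPOINT": "https://inference.example.com/v1"})
print(cfg.model_endpoint)
```

Because configuration is injected rather than baked in, promoting the service from staging to a new region is a deployment change, not a code change.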
Skills you'll gain
- Data Storage Technologies
- Cloud-Native Computing
- Cloud Deployment
- Cloud Computing Architecture
- Systems Architecture
- Application Deployment
- Maintainability
- Service Recovery
- Software Architecture
- Failure Analysis
- LLM Application
- Site Reliability Engineering
- Software Development
- Dependency Analysis
- Solution Architecture
- Configuration Management
- Microservices
- Service Management
- Reliability
- Scalability
Details to know
Learn how employees at top companies are mastering in-demand skills

Build subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There is 1 module in this course
This module provides a comprehensive guide to designing, evaluating, and documenting scalable and fault-tolerant microservices for LLM applications. You will be immediately immersed in a design review to understand the importance of resilience. You will then learn the core principles of the 12-Factor App methodology and multi-region deployment strategies, understand their application in practice, and use that knowledge to begin documenting a new inference service and assessing architectural risks.
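A core pattern behind the multi-region strategies mentioned above is retry-with-backoff combined with regional failover: retry transient failures in the current region, then fail over to the next region before surfacing an error. A minimal sketch, assuming hypothetical region endpoints and a simulated transport in place of real HTTP calls:

```python
import time

# Hypothetical region endpoints; a real service would load these from config.
REGIONS = ["https://us-east.example.com", "https://eu-west.example.com"]

def call_with_failover(request_fn, regions, retries_per_region=2, base_delay_s=0.0):
    """Try each region in order, retrying with exponential backoff before failing over."""
    last_error = None
    for region in regions:
        for attempt in range(retries_per_region):
            try:
                return request_fn(region)
            except ConnectionError as exc:
                last_error = exc
                # Exponential backoff between retries within a region.
                time.sleep(base_delay_s * (2 ** attempt))
    raise RuntimeError("all regions failed") from last_error

# Simulated transport: the primary region is down, the secondary succeeds.
def fake_request(region):
    if "us-east" in region:
        raise ConnectionError("primary region unavailable")
    return f"200 OK from {region}"

print(call_with_failover(fake_request, REGIONS))
# → 200 OK from https://eu-west.example.com
```

In production this logic usually lives in a service mesh or client library with jittered backoff and a circuit breaker, but the control flow is the same: bounded retries per region, then failover.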
What's included
1 video, 1 reading, 3 assignments
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
¹ Some assignments in this course are AI-graded. For these assignments, your data will be used in accordance with the Coursera Privacy Notice.