Building a data pipeline is easy. Building one that automatically recovers from failures, maintains data integrity during outages, and runs reliably in production—that's what separates junior engineers from platform architects.

Orchestrate & Recover Real-Time Data Pipelines


Instructor: Starweaver
Access provided by the New York State Department of Labor
What you'll learn
Build and schedule streaming and batch-adjacent workflows using a modern orchestrator, such as Airflow or Prefect.
Implement reliability patterns such as idempotence, checkpointing, dead-letter queues (DLQs), and backfills for fault-tolerant, exactly-once-ish processing.
Design multi-region recovery strategies (mirroring/replication) and run playbooks to restore pipelines after partial or regional failures.
Skills you'll gain
Details to know

Add to your LinkedIn profile
1 assignment
January 2026
See how employees at top companies are mastering in-demand skills

Build subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 3 modules in this course
Learners set up a modern orchestrator and build a first DAG/flow that runs reliably. We cover scheduling, retries, task dependencies, and lightweight observability. By the end, learners will ship a minimal but production-aware pipeline.
What's included
4 videos, 2 readings, 1 peer review
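To preview the two ideas an orchestrator gives you, here is a minimal plain-Python sketch of per-task retries and dependency-ordered execution. This is an illustrative toy, not course material or real Airflow/Prefect code; the names `with_retries` and `topo_order` are invented for the example, and a real orchestrator adds persistence, scheduling, and observability on top.

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    """Run fn, retrying on failure -- the same policy an orchestrator
    applies per task via its retry settings."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise          # out of attempts: surface the failure
            time.sleep(delay)  # back off before the next attempt

# A "DAG" expressed as tasks mapped to their upstream dependencies.
dag = {
    "extract":   [],
    "transform": ["extract"],
    "load":      ["transform"],
}

def topo_order(dag):
    """Resolve a valid run order: a task runs only after all its deps."""
    done, order = set(), []
    while len(done) < len(dag):
        for task, deps in dag.items():
            if task not in done and all(d in done for d in deps):
                done.add(task)
                order.append(task)
    return order

print(topo_order(dag))  # ['extract', 'transform', 'load']
```

The point of the sketch: retries handle transient failures inside a task, while the dependency graph guarantees nothing downstream runs on missing upstream output.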
We move from “works on my machine” to “recovers on its own.” Learners add exactly-once-ish processing, checkpointing, schema controls, and dead-letter queues. The module emphasizes designing for replay and safe backfills.
What's included
3 videos, 1 reading, 1 peer review
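The patterns this module names can be sketched in a few lines. The sketch below is a toy, assuming an in-memory idempotence ledger, checkpoint, and DLQ; in production these would be backed by durable storage (a database, object store, or Kafka topic), and `handle` stands in for whatever transform the pipeline applies.

```python
processed_ids = set()        # idempotence ledger: IDs we have already handled
checkpoint = {"offset": 0}   # durable progress marker for restart/replay
dead_letters = []            # DLQ: poison messages parked for later inspection

def handle(event):
    """Stand-in transform; negative values simulate poison messages."""
    if event["value"] < 0:
        raise ValueError("bad record")
    return event["value"] * 2

def consume(events, sink):
    for offset, event in enumerate(events, start=checkpoint["offset"]):
        if event["id"] in processed_ids:   # replayed after restart: skip, don't double-write
            continue
        try:
            sink.append(handle(event))
        except Exception:
            dead_letters.append(event)     # park the poison message, keep the stream moving
        processed_ids.add(event["id"])
        checkpoint["offset"] = offset + 1  # commit progress only after the write

events = [
    {"id": "a", "value": 1},
    {"id": "b", "value": -1},  # poison message -> DLQ
    {"id": "a", "value": 1},   # duplicate delivery -> skipped by the ledger
    {"id": "c", "value": 3},
]
sink = []
consume(events, sink)
print(sink)  # [2, 6] -- the duplicate was deduplicated, the bad record parked
```

This is what "exactly-once-ish" means in practice: deliveries may repeat, but the ID ledger plus checkpoint make the *effects* appear once, and the DLQ keeps one bad record from stalling everything behind it.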
Learners design for failure domains—task, job, cluster, and region. We cover backfills vs. reprocessing, Delta time travel for safe fixes, and Kafka replication patterns (MirrorMaker 2, uReplicator) for DR.
What's included
4 videos, 2 readings, 1 assignment, 2 peer reviews
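One core idea behind safe backfills can be shown in a tiny sketch: recompute each partition wholesale (overwrite, never append), so reruns are idempotent. This is an illustrative toy with an in-memory `store`; the same overwrite-per-partition discipline is what makes Delta-style fixes and replay-based recovery safe at scale.

```python
def backfill(store, transform, days):
    """Recompute derived partitions for a date range.
    Overwriting each partition wholesale keeps reruns idempotent:
    running the same backfill twice produces identical output."""
    for day in days:
        raw = store["raw"].get(day, [])
        store["derived"][day] = [transform(r) for r in raw]  # overwrite, never append

store = {
    "raw": {"2026-01-01": [1, 2], "2026-01-02": [3]},
    "derived": {},
}
backfill(store, lambda x: x * 10, ["2026-01-01", "2026-01-02"])
backfill(store, lambda x: x * 10, ["2026-01-01"])  # rerun a day: same result, no duplicates
print(store["derived"])  # {'2026-01-01': [10, 20], '2026-01-02': [30]}
```

Contrast this with reprocessing into an append-only sink, where every rerun would duplicate rows; designing failure-domain recovery starts with making reruns harmless.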
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.