Real-time data is everywhere — from fraud detection in financial transactions to personalized recommendations in e-commerce and anomaly detection in IoT devices. Traditional batch processing is too slow for these use cases, and businesses need insights the moment data is generated. This course teaches you how to design, build, and operate reliable streaming pipelines using Apache Spark Structured Streaming and Kafka.

Process Real-Time Data with Spark Streams


Instructor: Caio Avelino
Access provided by New York State Department of Labor
What you'll learn
Explain the execution model of Spark Structured Streaming and build a simple pipeline from a file source to a console sink.
Develop streaming pipelines that integrate with Kafka, apply event-time processing with watermarks, and write reliable outputs to Delta Lake.
Build an end-to-end Spark streaming pipeline that can be deployed in production environments.
Skills you'll gain
Details to know

Build subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 3 modules in this course
Learners are introduced to the Spark Structured Streaming model and its core concepts, including micro-batch execution, triggers, checkpoints, output modes, and data transformations.
What's included
4 videos, 3 readings
This module focuses on integrating Spark with real-world streaming systems. Learners will consume data from Kafka, transform and parse messages, and write results to sinks such as Delta Lake, ensuring reliability with checkpointing and triggers.
What's included
3 videos, 2 readings, 1 peer review
Learners design an end-to-end streaming pipeline that combines ingestion, transformation, enrichment with static datasets, and reliable output.
What's included
4 videos, 3 readings, 1 assignment, 1 peer review
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Offered by
¹ Some assignments in this course are AI-graded. For those assignments, your data will be used in accordance with the Coursera Privacy Notice.







