This course is designed for scientists, engineers, students, and professionals looking to develop efficient solutions for high-performance and distributed computing systems. It focuses on parallel programming using the Message Passing Interface (MPI), a standard for scalable communication across multiple processors. Learners should have basic programming experience in C or C++ and familiarity with Linux. No prior knowledge of MPI is required.

What you'll learn
Design and implement parallel algorithms using MPI
Apply key communication patterns, including point-to-point, collective, and nonblocking communication
Improve performance through load balancing and overlapping communication with computation
Work with custom communicators and derived data types
Details to know

Add to your LinkedIn profile
5 assignments

This course has 5 modules
This module focuses on the key concepts and techniques for transforming serial algorithms into parallel solutions using the Message Passing Interface (MPI). You will explore the principles of message passing, synchronization, and parallel thinking, equipping you with the skills to use parallel computing efficiently in your projects.
What's included
5 videos, 3 readings, 1 assignment, 1 programming assignment
This module delves into advanced communication techniques in MPI, focusing on transforming serial algorithms into parallel implementations. You will learn about nonblocking communication, point-to-point communication, and the intricacies of blocking sends and receives, along with strategies to avoid deadlock in your parallel applications.
What's included
5 videos, 1 assignment, 1 programming assignment
This module focuses on enhancing the performance of parallel applications using nonblocking communication and effective load-balancing strategies. You will learn how to implement nonblocking communication, overlap communication with computation, and achieve optimal load distribution to maximize speedup in your MPI programs.
What's included
4 videos, 1 assignment, 1 programming assignment
This module explores advanced parallel computing concepts using MPI, focusing on communicator creation, domain decomposition, and derived datatypes. You will learn to create custom communicators for process coordination and effectively divide computational domains. The module covers MPI's derived datatypes, including contiguous, vector, indexed, and struct types, enabling efficient communication for both regular and irregular data patterns in high-performance applications.
What's included
7 videos, 1 assignment, 1 programming assignment
This module focuses on parallel I/O in MPI, emphasizing efficient data management in high-performance computing. You will learn the principles of MPI I/O and explore practical examples of concurrent data operations. The module also introduces HDF5, a widely used data model and file format in scientific computing, highlighting its features for managing large datasets. By the end, you will be equipped to implement effective parallel I/O strategies using MPI and HDF5 in your applications.
What's included
5 videos, 1 assignment
Earn a career certificate
Add this certificate to your LinkedIn profile, resume, or CV. Share it on social media and in performance reviews.
Learner reviews
- 5 stars: 53.84%
- 4 stars: 46.15%
- 3 stars: 0%
- 2 stars: 0%
- 1 star: 0%
Reviewed on Feb 24, 2026
Short and straight forward to the key concept of parallel computing.