A Markov chain can be used to model the evolution of a sequence of random events where the probabilities for each event depend solely on the previous one. Once a state in the sequence is observed, earlier values are no longer relevant for predicting future values. Markov chains have many applications for modeling real-world phenomena across a myriad of disciplines, including physics, biology, chemistry, queueing theory, and information theory. More recently, they are being recognized as important tools in the world of artificial intelligence (AI), where algorithms are designed to make intelligent decisions based on context and without human input. Markov chains can be particularly useful for natural language processing and generative AI algorithms, where the respective goals are to make predictions and to create new data in the form of, for example, new text or images. In this course, we will explore examples of both. While generative AI models are generally far more complex than Markov chains, the study of the latter provides an important foundation for the former. Additionally, Markov chains provide the basis for a powerful class of so-called Markov chain Monte Carlo (MCMC) algorithms that can be used to sample values from complex probability distributions used in AI and beyond.
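The Markov property described above can be illustrated with a minimal sketch: a hypothetical two-state "weather" chain (the states and probabilities here are invented for illustration), where the next state is sampled using only the current one.

```python
import random

# Hypothetical two-state weather chain: the next state depends
# only on the current state (the Markov property).
P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng=random):
    """Sample the next state given only the current one."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

random.seed(0)
path = ["sunny"]
for _ in range(10):
    path.append(step(path[-1]))
print(path)
```

Note that `step` never looks at the history `path`, only at the most recent state; that is exactly the "memoryless" behavior the paragraph describes.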


What you'll learn
Analyze long-term behavior of Markov processes for the purposes of both prediction and understanding equilibrium in dynamic stochastic systems
Apply Markov decision processes to solve problems involving uncertainty and sequential decision-making
Simulate data from complex probability distributions using Markov chain Monte Carlo algorithms
Skills you'll gain
Details to know

Shareable on LinkedIn
August 2025
15 assignments

Build subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 6 modules in this course
Welcome to the course! This module contains logistical information to get you started!
What's included
7 readings, 4 ungraded labs
In this module we will review definitions and basic computations of conditional probabilities. We will then define a Markov chain and its associated transition probability matrix and learn how to do many basic calculations. We will then tackle more advanced calculations involving absorbing states and techniques for putting a longer history into a Markov framework!
What's included
12 videos, 5 assignments, 2 programming assignments
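The basic calculations this module describes can be sketched as follows, using a made-up 3-state transition matrix: by the Chapman-Kolmogorov equations, the n-step transition probabilities are the entries of the n-th matrix power of the one-step transition matrix P.

```python
import numpy as np

# Hypothetical 3-state chain; each row of P sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# The n-step transition matrix is the n-th power of P:
# P3[i, j] = Pr(X_3 = j | X_0 = i).
P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 2])  # probability of going from state 0 to state 2 in 3 steps
```

Each row of `P3` is still a probability distribution, which is a quick sanity check on any such computation.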
What happens if you run a Markov chain out for a "very long time"? In many cases, it turns out that the chain will settle into a sort of "equilibrium" or "limiting distribution" where you will find it in various states with various fixed probabilities. In this Module, we will define communication classes, recurrence, and periodicity properties for Markov chains with the ultimate goal of being able to answer existence and uniqueness questions about limiting distributions!
What's included
9 videos, 3 assignments, 2 programming assignments
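The "very long time" behavior the module describes can be previewed numerically: for an irreducible, aperiodic finite chain (the matrix below is invented for illustration), the powers of P converge to a matrix whose identical rows give the limiting distribution.

```python
import numpy as np

# Hypothetical irreducible, aperiodic 3-state chain.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Run the chain "out for a very long time": the rows of P**n
# become identical, and each row is the limiting distribution.
Pn = np.linalg.matrix_power(P, 100)
print(Pn)  # all three rows should be (numerically) the same
```

The fact that every row agrees means the chain forgets its starting state, which is exactly the equilibrium behavior discussed above.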
In this Module, we will define what is meant by a "stationary" distribution for a Markov chain. You will learn how it relates to the limiting distribution discussed in the previous Module. You will also spend time learning about the very powerful "first-step analysis" technique for solving many, otherwise intractable, problems of interest surrounding Markov chains. We will discuss rates of convergence for a Markov chain to settle into its "stationary mode", and just maybe we'll give a monkey a keyboard and hope for the best!
What's included
11 videos, 3 assignments, 2 programming assignments
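A stationary distribution, as defined in this module, is a row vector π satisfying πP = π with entries summing to 1. One standard way to compute it (sketched here with an illustrative matrix) is to find the left eigenvector of P for eigenvalue 1:

```python
import numpy as np

# Hypothetical 3-state transition matrix.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Solve pi @ P = pi with sum(pi) = 1: the left eigenvectors of P
# are the (right) eigenvectors of P.T; pick the one for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.isclose(vals, 1.0)][:, 0])
pi = pi / pi.sum()  # normalize so the entries sum to 1
print(pi)
```

Checking `pi @ P` against `pi` confirms stationarity: if the chain starts distributed according to π, it stays distributed according to π at every step.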
In this Module we explore several options for simulating values from discrete and continuous distributions. Several of the algorithms we consider will involve creating a Markov chain with a stationary or limiting distribution that is equivalent to the "target" distribution of interest. This Module includes the inverse cdf method, the accept-reject algorithm, the Metropolis-Hastings algorithm, the Gibbs sampler, and a brief introduction to "perfect sampling".
What's included
13 videos, 2 assignments, 2 programming assignments, 4 ungraded labs
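Of the algorithms this module lists, the Metropolis-Hastings algorithm illustrates the core MCMC idea well: build a Markov chain whose stationary distribution matches a target density known only up to a constant. A minimal sketch, with a standard normal target and a symmetric random-walk proposal (step size chosen arbitrarily here):

```python
import math
import random

def target(x):
    # Unnormalized standard normal density; the normalizing
    # constant cancels in the acceptance ratio.
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)  # symmetric proposal
        # Accept with probability min(1, target(proposal) / target(x));
        # the min is implicit since rng.random() < ratio suffices.
        if rng.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis_hastings(20000)
mean = sum(samples) / len(samples)
print(mean)  # should be near 0 for the standard normal target
```

Because only the ratio of target densities appears, the method works even when the normalizing constant is intractable, which is precisely why MCMC matters for complex distributions.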
In reinforcement learning, an "agent" learns to make decisions in an environment through receiving rewards or punishments for taking various actions. A Markov decision process (MDP) is reinforcement learning where, given the current state of the environment and the agent's current action, past states and actions used to get the agent to that point are irrelevant. In this Module, we learn about the famous "Bellman equation", which is used to recursively assign rewards to various states and how to use it in order to find an optimal strategy for the agent!
What's included
5 videos, 2 assignments, 2 programming assignments, 4 ungraded labs
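The recursive reward assignment the Bellman equation performs can be sketched with value iteration on a tiny hypothetical MDP (states, actions, and rewards below are invented for illustration): repeatedly apply V(s) = max_a Σ_s' P(s'|s,a)[R + γV(s')] until the values converge, then read off the greedy policy.

```python
# Tiny hypothetical MDP: a robot with "low" and "high" energy states.
gamma = 0.9  # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "low":  {"wait":   [(1.0, "low", 0.0)],
             "search": [(0.5, "low", 1.0), (0.5, "high", 1.0)]},
    "high": {"wait":   [(1.0, "high", 0.5)],
             "search": [(1.0, "high", 2.0)]},
}

# Value iteration: repeatedly apply the Bellman optimality update.
V = {s: 0.0 for s in transitions}
for _ in range(200):
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

# The optimal strategy is greedy with respect to the converged values.
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                      for p, s2, r in actions[a]))
    for s, actions in transitions.items()
}
print(V, policy)
```

Because the Bellman update is a contraction for γ < 1, the loop converges to the optimal values regardless of the initial guess.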
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Build toward a degree
This course is part of the following degree program(s) offered by University of Colorado Boulder. If you are admitted and enroll, your completed coursework may count toward your degree learning, and your progress can transfer with you.
Instructors

Explore more from Probability and Statistics
- Status: Free Trial
University of California, Santa Cruz
- Status: Free Trial
Illinois Tech
- Status: Free Trial
University of California, Santa Cruz
EIT Digital
Why people choose Coursera for their career
Frequently asked questions
To access the course materials and assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in a course. You can try a Free Trial instead, or apply for Financial Aid. The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade, but it also means that you will not be able to purchase a Certificate experience.
When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can't afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you'll find a link to apply on the description page.
More questions
Financial aid available.