Ready to level up your GenAI skills? Step into the exciting world of multimodal AI, where language, images, and speech come together to build smarter, more interactive applications.


Build Multimodal Generative AI Applications
This course is part of the IBM RAG and Agentic AI Professional Certificate


Instructor: Hailey Quach
3,627 already enrolled
What you'll learn
Gain the job-ready skills you need to build multimodal generative AI applications in just 3 weeks
Understand the fundamental concepts and challenges in multimodal AI, including the integration of text, speech, images, and video
Build multimodal AI applications using state-of-the-art models and frameworks such as IBM’s Granite, Meta’s Llama, OpenAI’s Whisper, DALL·E and Sora
Develop multimodal AI solutions, including chatbots and image/video generation models, using IBM watsonx.ai, Hugging Face, Flask and Gradio
Skills you'll gain
Details to know

Add to your LinkedIn profile
May 2025
6 assignments
See how employees at top companies are mastering in-demand skills

Build your expertise in Software Development
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate from IBM

There are 3 modules in this course
This module provides an in-depth introduction to multimodal AI, focusing on how AI systems process and integrate multiple data types, including text, speech, and images. You will explore core concepts and some of the challenges you will face in multimodal AI, gaining foundational skills with text and speech processing techniques. Through hands-on labs, you will apply AI-powered storytelling, speech-to-text transcription, and text-to-speech synthesis to real-world applications, such as AI-generated audiobooks and automated meeting assistants.
What's included
5 videos · 2 readings · 2 assignments · 2 app items · 6 plugins
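To give a taste of the speech-processing labs, here is a minimal sketch of speech-to-text transcription with OpenAI's Whisper via the Hugging Face transformers pipeline; the checkpoint name and audio file are illustrative placeholders, not the course's exact lab code.

```python
# Minimal speech-to-text sketch using OpenAI's Whisper through Hugging Face transformers.
# The model checkpoint and audio file name are illustrative placeholders.
from transformers import pipeline

# Load a pretrained Whisper checkpoint for automatic speech recognition
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Transcribe a local recording (e.g., a meeting) to text
result = asr("meeting.wav")
print(result["text"])
```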
This module explores how AI processes and generates visual data by integrating images and videos with text. You will examine text-to-image/image-to-text and text-to-video/video-to-text models, image captioning, and the fusion techniques necessary for effective multimodal AI systems. Through hands-on labs, you will apply state-of-the-art models like DALL·E and Sora to generate images and videos from text prompts. Additionally, you will implement an image captioning system using Meta’s Llama 4, gaining practical experience in combining vision and language models for real-world applications.
What's included
2 videos · 1 reading · 2 assignments · 2 app items · 3 plugins
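For a flavor of the image-generation labs, below is a minimal sketch of a text-to-image call to DALL·E through the OpenAI Python SDK; the prompt, image size, and the assumption that an API key is set in the environment are illustrative, and the course itself may route such calls through watsonx.ai or other frameworks.

```python
# Minimal text-to-image sketch using DALL·E via the OpenAI Python SDK.
# Assumes the OPENAI_API_KEY environment variable is set; prompt and size are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a robot narrating an audiobook",
    size="1024x1024",
    n=1,
)

# The response contains a URL (or base64 data) for the generated image
print(response.data[0].url)
```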
The final module explores advanced multimodal AI applications, integrating image, text, and retrieval-based systems to build innovative solutions. You will dive into multimodal retrieval and search, multimodal Question Answering (QA), and chatbots, learning how cross-modal retrieval techniques enhance search engines and recommendation systems. Additionally, you will learn how integrating visual and textual data improves chatbot interactions. Through hands-on labs, you will build fully functional web applications with multimodal capabilities using Flask, applying state-of-the-art models and frameworks.
What's included
3 videos · 3 readings · 2 assignments · 2 app items · 1 plugin
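As a rough idea of what a Flask-based multimodal web app looks like, here is a minimal sketch of an endpoint that accepts an image and a text question; answer_about_image() is a hypothetical placeholder for whichever vision-language model the real application wires in.

```python
# Minimal Flask sketch of a multimodal QA endpoint: accepts an uploaded image plus a
# text question and delegates to a model call. answer_about_image() is a hypothetical
# placeholder for the vision-language model the real app would integrate.
from flask import Flask, request, jsonify

app = Flask(__name__)

def answer_about_image(image_bytes: bytes, question: str) -> str:
    # Placeholder: a real application would call a multimodal model here
    return f"(model answer to: {question})"

@app.route("/ask", methods=["POST"])
def ask():
    image = request.files["image"].read()        # uploaded image file
    question = request.form.get("question", "")  # accompanying text question
    return jsonify({"answer": answer_about_image(image, question)})

if __name__ == "__main__":
    app.run(debug=True)
```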
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Offered by IBM
Explore more from Software Development
Why people choose Coursera for their career




Frequently asked questions
Skills in multimodal generative AI, where systems integrate text, speech, images, and video, are in high demand for roles such as AI developer, machine learning engineer, multimodal AI researcher, and full-stack developer specializing in AI-powered user experiences.
Not necessarily. If you’re a Python developer, you can start building with generative AI using tools like IBM watsonx.ai, Flask, and Gradio—no advanced ML background required.
Multimodal AI apps go beyond typical app development by incorporating multimodal large language models (MLLMs) and media-based inputs like speech, images, and video. You’ll still use familiar tools like Python, Flask and Gradio, but you’ll also learn to integrate and orchestrate models for tasks like transcription, image generation, and AI-powered storytelling.
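For a concrete sense of how little code that orchestration can take, here is a minimal Gradio sketch that wraps an off-the-shelf image-captioning model in a web UI; the BLIP checkpoint is an illustrative choice, not necessarily one used in the course.

```python
# Minimal Gradio sketch: a web UI around an off-the-shelf image-captioning model.
# The BLIP checkpoint is an illustrative choice.
import gradio as gr
from transformers import pipeline

# Load a pretrained image-to-text (captioning) model
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def caption(image_path):
    # Gradio passes the uploaded image as a file path
    return captioner(image_path)[0]["generated_text"]

demo = gr.Interface(
    fn=caption,
    inputs=gr.Image(type="filepath"),
    outputs="text",
    title="Image captioning demo",
)

if __name__ == "__main__":
    demo.launch()
```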
More questions
Financial aid available