By completing this course, you'll learn to build powerful machine learning systems that excel with limited data. You'll gain expertise in multi-task learning, meta-learning, and advanced data augmentation, from physics-based simulations to generative approaches, enabling models to adapt quickly and perform well beyond what their dataset size would suggest.


Skills you'll gain
Details to know

Add to your LinkedIn profile
September 2025
7 assignments
See how employees at top companies are mastering in-demand skills

There are 7 modules in this course
In this module, we introduce the fundamentals of Multi-Task Learning (MTL), a paradigm in which multiple related tasks are learned simultaneously through shared representations. This approach leverages the commonalities among tasks to improve generalization, reduce overfitting, and achieve better performance with fewer training examples. We explore how MTL is applied across domains such as natural language processing, computer vision, and speech recognition, and examine practical examples, including using MTL to enhance image classification and object detection in autonomous systems. Students will gain insight into both the benefits and challenges of MTL, including task imbalance, negative transfer, and scalability. Additionally, we delve into meta-learning techniques, such as Conditional Neural Adaptive Processes (CNAPs), that extend MTL by enabling models to adapt quickly to new tasks with minimal data.
What's included
1 video, 15 readings, 1 assignment
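The hard parameter sharing at the core of MTL can be sketched in a few lines of NumPy. This is an illustrative toy, not course code: the layer sizes, the tasks (one regression head, one classification head), and the loss weight `alpha` are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples, 4 features; task A is regression, task B is binary classification.
X = rng.normal(size=(8, 4))
y_reg = rng.normal(size=(8, 1))
y_cls = rng.integers(0, 2, size=(8, 1)).astype(float)

# Shared trunk (hard parameter sharing): one hidden layer used by both tasks.
W_shared = rng.normal(scale=0.1, size=(4, 16))
h = np.tanh(X @ W_shared)

# Task-specific heads on top of the shared representation.
W_reg = rng.normal(scale=0.1, size=(16, 1))
W_cls = rng.normal(scale=0.1, size=(16, 1))

pred_reg = h @ W_reg
p_cls = 1.0 / (1.0 + np.exp(-(h @ W_cls)))  # sigmoid over the classification logit

# Per-task losses combined with a tunable weight -- task imbalance is managed here.
loss_reg = np.mean((pred_reg - y_reg) ** 2)
loss_cls = -np.mean(y_cls * np.log(p_cls) + (1 - y_cls) * np.log(1 - p_cls))
alpha = 0.5  # assumed task weight; in practice tuned or learned
total_loss = alpha * loss_reg + (1 - alpha) * loss_cls
print(total_loss)
```

Training would backpropagate `total_loss` through both heads and the shared trunk, which is exactly where negative transfer can arise if the tasks pull the shared weights in conflicting directions.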
This module explores the concept of meta-learning, or "learning to learn," which enables models to generalize across various tasks by leveraging knowledge from similar tasks. We will delve into key meta-learning algorithms such as Model-Agnostic Meta-Learning (MAML) and Prototypical Networks and examine their applications in computer vision using datasets such as ImageNet, Omniglot, CUB-200-2011, and FGVC-Aircraft. The module also covers the Meta-Dataset framework, which provides a diverse range of tasks for training robust and adaptable meta-learning models.
What's included
1 video, 7 readings, 1 assignment
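As a rough illustration of Prototypical Networks, the sketch below classifies a query point by distance to class prototypes in a toy episode. Random embeddings stand in for a learned feature extractor, and the episode shape (3-way, 5-shot, 8-dimensional) is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-way, 5-shot episode: embeddings stand in for a CNN feature extractor.
n_way, k_shot, dim = 3, 5, 8
# Give each class a distinct mean so the episode is separable.
class_means = rng.normal(scale=3.0, size=(n_way, dim))
support = class_means[:, None, :] + rng.normal(scale=0.5, size=(n_way, k_shot, dim))

# Prototype = mean of each class's support embeddings.
prototypes = support.mean(axis=1)                     # shape (n_way, dim)

# A query drawn near class 2's mean should land closest to prototype 2.
query = class_means[2] + rng.normal(scale=0.5, size=dim)
dists = np.linalg.norm(prototypes - query, axis=1)    # Euclidean distances
probs = np.exp(-dists) / np.exp(-dists).sum()         # softmax over negative distances
pred = int(np.argmin(dists))
print(pred, probs)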
This module focuses on generative models for data augmentation, covering key generative AI techniques that enhance machine learning applications by generating synthetic but realistic data. We begin by introducing Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), normalizing flows, diffusion models, and motion graphs, highlighting their mathematical foundations, training mechanisms, and real-world applications. Additionally, we discuss the limitations of each model and the computational challenges they present. The module provides insights into how generative models contribute to modern AI systems, including image synthesis, domain adaptation, super-resolution, motion synthesis, and data augmentation in small-data learning scenarios.
What's included
1 video, 28 readings, 1 assignment
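One ingredient of VAEs mentioned above, the reparameterization trick together with the closed-form KL regularizer, fits in a short sketch. This is illustrative only; the batch and latent sizes are made up, and a real VAE would produce `mu` and `log_var` from an encoder network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy encoder outputs for a batch of 4 inputs with a 2-D latent space.
mu = rng.normal(size=(4, 2))
log_var = rng.normal(scale=0.1, size=(4, 2))

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence between N(mu, sigma^2) and the N(0, I) prior,
# averaged over the batch -- the regularizer in the VAE objective.
kl = -0.5 * np.mean(np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))
print(z.shape, kl)
```

The full objective would add a reconstruction loss from a decoder applied to `z`; the KL term alone is what pulls the latent distribution toward the standard normal prior.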
This module focuses on physics-based simulation for data augmentation, exploring how physics-driven techniques generate realistic synthetic data to enhance machine learning models. We will discuss key advantages of physics-based simulations, such as scalability, cost-effectiveness, and their ability to model rare events. The module also covers notable approaches, including GeoNet (CVPR 2018) for depth and motion estimation, ScanAva (ECCVW 2018) for semi-supervised learning with 3D avatars, and SMPL (ACM Transactions on Graphics, 2015) for human body modeling. Additionally, we introduce equation-based simulation techniques such as the Finite Element Method (FEM) and the Navier-Stokes equations for modeling fluid dynamics. The module highlights challenges in bridging the simulation-to-reality gap and optimizing computational costs while ensuring high-fidelity synthetic data generation.
What's included
1 video, 10 readings, 1 assignment
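Equation-based simulation can be as simple as integrating equations of motion to mass-produce labeled training samples. The sketch below Euler-integrates projectile motion to build a tiny synthetic dataset; the parameter ranges and the dataset fields are arbitrary assumptions for illustration.

```python
import numpy as np

def simulate_projectile(v0, angle_deg, dt=0.01, g=9.81):
    """Euler-integrate 2-D projectile motion until the object returns to y = 0."""
    angle = np.radians(angle_deg)
    vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
    x = y = 0.0
    xs, ys = [x], [y]
    while True:
        x += vx * dt
        vy -= g * dt
        y += vy * dt
        if y < 0:          # stop once the projectile hits the ground
            break
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# Sweep launch parameters to produce labeled synthetic trajectories.
rng = np.random.default_rng(3)
dataset = []
for _ in range(5):
    v0 = rng.uniform(5, 20)        # assumed launch-speed range (m/s)
    angle = rng.uniform(20, 70)    # assumed launch-angle range (degrees)
    xs, ys = simulate_projectile(v0, angle)
    dataset.append({"v0": v0, "angle": angle, "range": xs[-1]})
print(len(dataset))
```

Because the generating equations are known, every sample comes with exact labels for free, which is the core appeal of physics-based augmentation; the sim-to-real gap appears when these idealized dynamics (here, no drag) diverge from real measurements.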
This module introduces Neural Radiance Fields (NeRF), a deep learning-based approach for synthesizing novel views of complex 3D scenes. Unlike traditional 3D reconstruction techniques such as Structure-from-Motion (SfM) and Multi-View Stereo (MVS), which rely on explicit point cloud representations, NeRF learns a continuous volumetric representation of a scene using a fully connected neural network. By taking a set of 2D images captured from different viewpoints, NeRF estimates the density and color of light rays at each spatial location, enabling high-quality, photorealistic novel view synthesis. The lecture also explores how NeRF improves upon prior methods, such as depth estimation, photogrammetry, and classic geometric techniques. Understanding NeRF provides valuable insights into data-efficient 3D scene representation—a critical area for applications in computer vision, robotics, virtual reality (VR), and augmented reality (AR).
What's included
1 video, 6 readings, 1 assignment
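The volume rendering step at the heart of NeRF can be sketched directly: given densities and colors sampled along a ray, accumulate color weighted by per-sample opacity and the transmittance remaining in front of each sample. The sample values below are hand-picked for illustration (empty space followed by a dense red surface), not outputs of a trained network.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering along one ray.

    alpha_i = 1 - exp(-sigma_i * delta_i) is the opacity of sample i,
    T_i = prod_{j<i} (1 - alpha_j) is the transmittance reaching it.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0), weights

# 4 samples along the ray: mostly empty space, then a dense red surface.
sigmas = np.array([0.0, 0.0, 50.0, 50.0])              # volume densities
colors = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # RGB at each sample
deltas = np.full(4, 0.25)                              # spacing between samples
rgb, weights = render_ray(sigmas, colors, deltas)
print(rgb)
```

In NeRF proper, `sigmas` and `colors` come from an MLP queried at each 3-D sample position (and viewing direction), and this rendering is differentiable, which is what allows training from 2-D images alone.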
This module explores diffusion models, a class of generative models that incrementally add noise to data and then learn to reverse the process to reconstruct high-quality samples. Diffusion models have gained prominence due to their state-of-the-art performance in image, video, and text generation, surpassing GANs in terms of sample quality and diversity. The module covers the foundational principles of Denoising Diffusion Probabilistic Models (DDPMs) and their training objectives, as well as advancements such as Score-Based Generative Models, Latent Diffusion Models (LDMs), and Classifier-Free Guidance techniques. We also examine their real-world applications in text-to-image generation (Stable Diffusion, DALL·E), video synthesis (Sora, Veo 2), and high-resolution image synthesis. Finally, the module provides insights into the mathematical framework, the optimization strategies, and the growing role of diffusion models in AI-driven content creation.
What's included
1 video, 11 readings, 1 assignment
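The DDPM forward (noising) process has a closed form, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, which a short sketch makes concrete. The linear beta schedule below follows the standard DDPM setup, but the toy "image" is just random numbers, and no denoising network is trained here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear beta schedule as in DDPM; alpha_bar_t = prod_{s<=t} (1 - beta_s).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Forward diffusion: sample x_t directly from x_0 in closed form."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.normal(size=(2, 2))   # a tiny stand-in "image"
eps = rng.normal(size=x0.shape)

x_early = q_sample(x0, 10, eps)     # barely noised: still close to x0
x_late = q_sample(x0, T - 1, eps)   # almost pure noise by the last step

# The denoiser is trained to predict eps from (x_t, t); these scales show why
# the signal vanishes as t -> T:
print(np.sqrt(alpha_bar[10]), np.sqrt(alpha_bar[T - 1]))
```

Training would minimize the mean squared error between the network's prediction and `eps`; sampling runs the learned reverse process from pure noise back toward the data distribution.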
This module explores 3D Gaussian Splatting (3DGS), a novel approach in computer vision for high-fidelity, real-time 3D scene rendering. Unlike methods such as Neural Radiance Fields (NeRF), which rely on continuous neural fields, 3DGS represents scenes using a collection of discrete anisotropic Gaussian functions. These Gaussians efficiently approximate scene geometry, radiance, and depth, enabling real-time rendering with minimal computational overhead. We discuss the theoretical foundations, mathematical formulations, and rendering techniques that make 3D Gaussian Splatting a game-changer in virtual reality (VR), augmented reality (AR), and interactive media. Additionally, we highlight key differences between isotropic and anisotropic Gaussian splats, their impact on rendering quality, and how optimization techniques refine their accuracy. Finally, we compare 3DGS to NeRF, analyzing their trade-offs in rendering speed, computational efficiency, and application suitability.
What's included
1 video, 6 readings, 1 assignment
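The core rendering idea, evaluating anisotropic Gaussians at a pixel and alpha-compositing them front to back, can be sketched at a single pixel. The splat parameters below are invented for illustration; a real 3DGS renderer projects 3-D Gaussians to screen space, sorts them by depth, and rasterizes millions of them in parallel on the GPU.

```python
import numpy as np

def gaussian_2d(px, mean, cov):
    """Density of an anisotropic 2-D Gaussian splat at pixel px (before opacity)."""
    d = px - mean
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

# Two splats already projected to screen space: an elongated (anisotropic)
# red one and a round blue one. Assumed to be sorted front to back.
splats = [
    {"mean": np.array([0.0, 0.0]), "cov": np.array([[4.0, 0.0], [0.0, 0.25]]),
     "opacity": 0.8, "color": np.array([1.0, 0.0, 0.0])},
    {"mean": np.array([0.5, 0.0]), "cov": np.eye(2) * 0.5,
     "opacity": 0.6, "color": np.array([0.0, 0.0, 1.0])},
]

# Front-to-back alpha compositing at one pixel, as in the 3DGS rasterizer.
px = np.array([0.2, 0.1])
color, trans = np.zeros(3), 1.0
for s in splats:
    alpha = s["opacity"] * gaussian_2d(px, s["mean"], s["cov"])
    color += trans * alpha * s["color"]   # add this splat's contribution
    trans *= 1.0 - alpha                  # remaining transmittance behind it
print(color, trans)
```

The anisotropic covariance is what lets a single splat stretch along a surface; during optimization, 3DGS adjusts each Gaussian's mean, covariance, opacity, and color to match the training images.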
Instructor

Browse more from Machine Learning
- Status: Free Trial
Johns Hopkins University
- Status: Free Trial
DeepLearning.AI
- Status: Free
Amazon Web Services
- Status: Free Trial
Illinois Tech
Why people choose Coursera for their career




Frequently asked questions
To access the course materials and assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in the course. You can try a Free Trial instead, or apply for Financial Aid. The course may also offer a 'Full Course, No Certificate' option, which lets you see all course materials, submit required assessments, and get a final grade, but does not allow you to purchase a Certificate experience afterward.
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can't afford the enrollment fee. If financial aid or a scholarship is available for your selected learning program, you'll find a link to apply on the description page.
More questions
Financial aid available