By completing this course, you'll master building powerful machine learning systems that excel with limited data. You'll gain expertise in multi-task learning, meta-learning, and advanced data augmentation—from physics-based simulations to generative approaches—enabling models to adapt quickly and perform well beyond what their limited training data would ordinarily allow.

Skills you'll gain
- Machine Learning Algorithms
- Data Synthesis
- Computer Graphics
- Image Analysis
- Transfer Learning
- Deep Learning
- Machine Learning
- Simulations
- Computer Vision
- Machine Learning Methods
- Artificial Intelligence and Machine Learning (AI/ML)
- Simulation and Simulation Software
- Applied Machine Learning
- Generative Model Architectures
- Artificial Neural Networks
- Small Data
- 3D Modeling
Details to know

Add to your LinkedIn profile
7 assignments

This course has 7 modules
In this module, we will introduce the fundamentals of Multi-Task Learning (MTL), a paradigm where multiple related tasks are learned simultaneously by sharing representations. This approach leverages the commonalities among tasks to improve generalization, reduce overfitting, and achieve better performance with fewer training examples. We will explore how MTL is applied across various domains, such as natural language processing, computer vision, and speech recognition, and examine practical examples such as using MTL to enhance image classification and object detection in autonomous systems. Students will gain insights into both the benefits and challenges of MTL, including issues such as task imbalance, negative transfer, and scalability. Additionally, we will delve into meta-learning techniques, such as Conditional Neural Adaptive Processes (CNAPs), that extend MTL by enabling models to quickly adapt to new tasks with minimal data.
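The hard-parameter-sharing idea described above — one shared representation feeding several task heads, trained on a weighted sum of per-task losses — can be sketched in a few lines. This is a minimal illustrative toy, not course code; the layer sizes and the 0.7/0.3 task weights are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk: one linear layer maps inputs to a common representation.
W_shared = rng.normal(size=(8, 4))
# Task-specific heads read the same shared representation.
W_task_a = rng.normal(size=(4, 3))   # e.g. 3-class classification logits
W_task_b = rng.normal(size=(4, 1))   # e.g. a scalar regression output

def forward(x):
    h = np.maximum(x @ W_shared, 0.0)        # shared ReLU features
    return h @ W_task_a, h @ W_task_b

x = rng.normal(size=(5, 8))                  # batch of 5 examples
logits_a, pred_b = forward(x)

# Combined MTL objective: weighted sum of per-task losses.
y_a = rng.integers(0, 3, size=5)
y_b = rng.normal(size=(5, 1))
probs = np.exp(logits_a) / np.exp(logits_a).sum(axis=1, keepdims=True)
loss_a = -np.mean(np.log(probs[np.arange(5), y_a]))   # cross-entropy
loss_b = np.mean((pred_b - y_b) ** 2)                 # mean squared error
total_loss = 0.7 * loss_a + 0.3 * loss_b   # task weights are a design choice
```

Gradients of `total_loss` with respect to `W_shared` mix signals from both tasks, which is exactly where the benefits (regularization) and the challenges (task imbalance, negative transfer) discussed in this module come from.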
What's included
1 video, 15 readings, 1 assignment
1 video • Total 6 minutes
- Multi-Task Learning • 6 minutes
15 readings • Total 68 minutes
- Course Introduction • 2 minutes
- Syllabus - Machine Learning for Small Data Part 2 • 10 minutes
- Academic Integrity • 1 minute
- Introduction to Multi-Task Learning • 2 minutes
- Examples of Multi-Task Learning • 5 minutes
- Why Multi-Task Learning • 5 minutes
- Key Challenges in MTL • 2 minutes
- Meta-Learning and Few-Shot Learning for Multi-Task Learning • 5 minutes
- An Overview of Conditional Neural Processes (CNPs) • 10 minutes
- Conditional Neural Adaptive Processes (CNAPs) • 2 minutes
- Adaptation Mechanisms of CNAPs • 10 minutes
- CNAPs Balances Adaptation • 2 minutes
- Key Extension in CNAPs • 5 minutes
- CNAPs in Practice • 2 minutes
- Adaptation Network for CNAPs • 5 minutes
1 assignment • Total 20 minutes
- Module 8 Quiz • 20 minutes
This module explores the concept of meta-learning, or "learning to learn," which enables models to generalize across various tasks by leveraging knowledge from similar tasks. We will delve into key meta-learning algorithms such as Model-Agnostic Meta-Learning (MAML) and Prototypical Networks and examine their applications in computer vision using datasets such as ImageNet, Omniglot, CUB-200-2011, and FGVC-Aircraft. The module also covers the Meta-Dataset framework, which provides a diverse range of tasks for training robust and adaptable meta-learning models.
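To make the Prototypical Networks idea concrete: each class is represented by the mean of its support embeddings, and a query is assigned to the nearest prototype. The sketch below is a toy illustration with synthetic embeddings standing in for a trained encoder's output; the episode shape (3-way, 5-shot) and the class offsets are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 3-way, 5-shot episode; these arrays stand in for encoder embeddings.
n_way, k_shot, dim = 3, 5, 16
support = rng.normal(size=(n_way, k_shot, dim))
support[1] += 5.0          # shift classes apart so the demo is unambiguous
support[2] -= 5.0

# Prototype = mean embedding of each class's support set.
prototypes = support.mean(axis=1)            # shape (n_way, dim)

# Classify a query by the nearest prototype (squared Euclidean distance).
query = support[1].mean(axis=0) + 0.1 * rng.normal(size=dim)
dists = ((prototypes - query) ** 2).sum(axis=1)
pred = int(np.argmin(dists))                 # class 1 by construction
```

In the actual method, the negative distances are treated as logits and the encoder is trained end-to-end across many sampled episodes; MAML differs in that it instead meta-learns an initialization that adapts via a few gradient steps.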
What's included
1 video, 7 readings, 1 assignment
1 video • Total 4 minutes
- Meta Learning • 4 minutes
7 readings • Total 36 minutes
- What is Meta-Learning? • 3 minutes
- Model-Agnostic Meta-Learning (MAML) • 5 minutes
- Prototypical Networks • 5 minutes
- Beyond Simple Meta-Learning • 3 minutes
- Mathematical Formulation of Meta-Learning • 5 minutes
- Mathematical Formulation of Transductive Learning • 5 minutes
- An Overview of Some Vision Meta-Datasets • 10 minutes
1 assignment • Total 15 minutes
- Module 9 Quiz • 15 minutes
This module focuses on generative models for data augmentation, covering key generative AI techniques that enhance machine learning applications by generating synthetic but realistic data. We begin by introducing Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Normalizing Flows, Diffusion Models, and Motion Graphs, highlighting their mathematical foundations, training mechanisms, and real-world applications. Additionally, we discuss the limitations of each model and the computational challenges they present. The module provides insights into how generative models contribute to modern AI systems, including image synthesis, domain adaptation, super-resolution, motion synthesis, and data augmentation in small-data learning scenarios.
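Two of the VAE building blocks covered here — the reparameterization trick and the closed-form KL term against a standard normal prior — fit in a short sketch. This is an illustrative fragment under stated assumptions: the encoder/decoder outputs are random placeholders, and the reconstruction term is a stand-in mean squared error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder encoder output for a batch: mean and log-variance of q(z|x).
mu = rng.normal(size=(4, 2))
log_var = rng.normal(size=(4, 2))

# Reparameterization trick: z = mu + sigma * eps keeps sampling
# differentiable with respect to the encoder parameters.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence between q(z|x) = N(mu, sigma^2 I) and the
# prior N(0, I), summed over latent dims, averaged over the batch.
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1).mean()

# VAE loss = reconstruction term + KL term. The reconstruction here is a
# placeholder MSE; x_hat would normally be decoder(z).
x = rng.normal(size=(4, 8))
x_hat = rng.normal(size=(4, 8))
recon = np.mean((x - x_hat) ** 2)
elbo_loss = recon + kl
```

The KL term is always non-negative, which is why it acts as a regularizer pulling the approximate posterior toward the prior; the Beta-VAE reading covered later simply rescales this term.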
What's included
1 video, 28 readings, 1 assignment
1 video • Total 5 minutes
- Learning with Data Augmentation: Data-Driven Simulation • 5 minutes
28 readings • Total 152 minutes
- Introduction to Generative Models • 10 minutes
- Limitations of Generative Models for Data Augmentation • 5 minutes
- Generative Adversarial Networks (GANs) • 10 minutes
- Applications of Generative Models • 2 minutes
- Vanilla GAN • 2 minutes
- Conditional GAN (cGAN) • 5 minutes
- Deep Convolutional GAN (DCGAN) • 5 minutes
- Wasserstein GAN (WGAN) • 4 minutes
- CycleGAN • 5 minutes
- Progressive Growing of GANs (PGGAN) • 5 minutes
- InfoGAN • 5 minutes
- BigGAN • 5 minutes
- Super-Resolution GAN (SRGAN) • 5 minutes
- Text-to-Image GAN • 5 minutes
- Autoencoder Basics • 5 minutes
- Variational Autoencoders • 5 minutes
- Probabilistic Encoder, Reparameterization Trick • 10 minutes
- VAE Loss Function • 5 minutes
- Vanilla VAE • 2 minutes
- Beta-VAE • 5 minutes
- Conditional VAE • 5 minutes
- VQ-VAE • 5 minutes
- Flow-Based Models • 10 minutes
- Advancements in Flow-Based Generative Models Part 1 • 5 minutes
- Advancements in Flow-Based Generative Models Part 2 • 5 minutes
- Advancements in Flow-Based Generative Models Part 3 • 10 minutes
- Diffusion Models • 5 minutes
- Comparative Summary of Generative Models • 2 minutes
1 assignment • Total 20 minutes
- Module 10 Quiz • 20 minutes
This module focuses on physics-based simulation for data augmentation, exploring how physics-driven techniques generate realistic synthetic data to enhance machine learning models. We will discuss key advantages of physics-based simulations, such as scalability, cost-effectiveness, and their ability to model rare events. The module also covers notable approaches, including GeoNet (CVPR 2018) for depth and motion estimation, ScanAva (ECCVW 2018) for semi-supervised learning with 3D avatars, and SMPL (ACM Transactions on Graphics, 2015) for human body modeling. Additionally, we introduce equation-based simulation techniques such as the Finite Element Method (FEM) and the Navier-Stokes equations for modeling fluid dynamics. The module highlights challenges in bridging the simulation-to-reality gap and optimizing computational costs while ensuring high-fidelity synthetic data generation.
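The simplest instance of the equation-based simulation techniques named above is solving a PDE on a grid. As an illustrative sketch (not course material), here is an explicit finite-difference solution of the 1D heat equation u_t = α u_xx with Dirichlet boundaries, checked against its analytic solution; the grid sizes and α are arbitrary choices that satisfy the stability condition.

```python
import numpy as np

# Explicit finite differences for u_t = alpha * u_xx on [0, 1].
nx, nt = 51, 200
dx, dt, alpha = 1.0 / (nx - 1), 1e-4, 0.5
r = alpha * dt / dx**2          # must be <= 0.5 for this scheme to be stable
assert r <= 0.5

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)           # initial temperature profile
for _ in range(nt):
    # Interior update: u_i += r * (u_{i+1} - 2 u_i + u_{i-1})
    u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0          # Dirichlet boundary conditions

# Analytic solution for this initial condition: sin(pi x) exp(-alpha pi^2 t)
t_final = nt * dt
exact = np.sin(np.pi * x) * np.exp(-alpha * np.pi**2 * t_final)
err = np.max(np.abs(u - exact))
```

FEM and Navier-Stokes solvers are far more involved, but the same pattern — discretize the governing equation, step it forward, validate against known solutions — underlies the simulation pipelines used to generate synthetic training data.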
What's included
1 video, 10 readings, 1 assignment
1 video • Total 5 minutes
- Introduction to Physics-Based Simulation • 5 minutes
10 readings • Total 64 minutes
- Physics-Based Simulation • 3 minutes
- GeoNet: Using Physical Relationship in Image Formation • 10 minutes
- Avatar-Based Simulation • 3 minutes
- ScanAva • 10 minutes
- Skinned Multi-Person Linear Model (SMPL) • 10 minutes
- Skinned Multi-Person Linear Model (SMPL) Part 2 • 10 minutes
- Governing Equations in Physics-Based Simulation • 3 minutes
- Partial Differential Equations (PDEs) • 3 minutes
- Numerical Methods for Solving PDEs • 10 minutes
- Comparison of Methods • 2 minutes
1 assignment • Total 30 minutes
- Module 11 Quiz • 30 minutes
This module introduces Neural Radiance Fields (NeRF), a deep learning-based approach for synthesizing novel views of complex 3D scenes. Unlike traditional 3D reconstruction techniques such as Structure-from-Motion (SfM) and Multi-View Stereo (MVS), which rely on explicit point cloud representations, NeRF learns a continuous volumetric representation of a scene using a fully connected neural network. By taking a set of 2D images captured from different viewpoints, NeRF estimates the density and color of light rays at each spatial location, enabling high-quality, photorealistic novel view synthesis. The module also explores how NeRF improves upon prior methods, such as depth estimation, photogrammetry, and classic geometric techniques. Understanding NeRF provides valuable insights into data-efficient 3D scene representation—a critical area for applications in computer vision, robotics, virtual reality (VR), and augmented reality (AR).
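The "Discrete Approximation in Volume Rendering" reading in this module centers on one formula: a ray's color is the alpha-composited sum of per-sample colors weighted by accumulated transmittance. As an illustrative sketch, the fragment below renders one ray through a hand-made density slab (the depths, densities, and color are invented stand-ins for a trained network's outputs).

```python
import numpy as np

# Discrete volume rendering along one ray:
#   C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
#   T_i = prod_{j<i} exp(-sigma_j * delta_j)
t = np.linspace(2.0, 6.0, 64)                 # sample depths along the ray
delta = np.diff(t, append=t[-1] + (t[1] - t[0]))   # inter-sample spacing
sigma = np.where((t > 3.5) & (t < 4.5), 8.0, 0.0)  # a dense slab in mid-ray
color = np.tile([1.0, 0.2, 0.2], (64, 1))          # reddish emitted color

alpha = 1.0 - np.exp(-sigma * delta)          # per-sample opacity
trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # T_i
weights = trans * alpha
rendered = (weights[:, None] * color).sum(axis=0)  # final RGB for this ray
opacity = weights.sum()        # near 1 for a ray hitting a dense slab
```

In NeRF proper, `sigma` and `color` come from querying the MLP at each sample point, and the same weights also yield an expected depth, which is how a single trained field supports both view synthesis and geometry estimates.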
What's included
1 video, 6 readings, 1 assignment
1 video • Total 3 minutes
- NeRF • 3 minutes
6 readings • Total 35 minutes
- Introducing Neural Radiance Fields (NeRF) • 10 minutes
- Volume Rendering • 10 minutes
- Discrete Approximation in Volume Rendering • 3 minutes
- NeRF Network Structure • 5 minutes
- NeRF Extension: NeRV • 5 minutes
- NeRF vs. NeRV • 2 minutes
1 assignment • Total 25 minutes
- Module 12 Quiz • 25 minutes
This module explores diffusion models, a class of generative models that incrementally add noise to data and then learn to reverse the process to reconstruct high-quality samples. Diffusion models have gained prominence due to their state-of-the-art performance in image, video, and text generation, surpassing GANs in terms of sample quality and diversity. The module covers the foundational principles of Denoising Diffusion Probabilistic Models (DDPMs) and their training objectives, advancements such as Score-Based Generative Models, Latent Diffusion Models (LDMs), and Classifier-Free Guidance techniques. We also examine their real-world applications in text-to-image generation (Stable Diffusion, DALL·E), video synthesis (Sora, Veo 2), and high-resolution image synthesis. Finally, the module provides insights into the mathematical framework, the optimization strategies, and the growing role of diffusion models in AI-driven content creation.
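The forward process described above has a convenient closed form: with a variance schedule β_t, q(x_t | x_0) = N(√ᾱ_t x_0, (1 − ᾱ_t) I), so any noise level can be sampled in one step. The sketch below illustrates this with the linear schedule from the original DDPM paper; the data vector is a random placeholder.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear variance schedule over T steps (the DDPM paper's default range).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)               # abar_t = prod_{s<=t} (1 - beta_s)

x0 = rng.normal(size=(16,))                  # placeholder for a data sample

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form, without iterating."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x_small_t = q_sample(x0, 10)      # still close to the data
x_large_t = q_sample(x0, 999)     # nearly pure Gaussian noise
```

The signal fraction √ᾱ_t decays from nearly 1 to nearly 0 across the schedule; training a DDPM amounts to drawing random t, forming x_t this way, and regressing a network onto the noise ε that was added — the reverse process then denoises step by step.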
What's included
1 video, 11 readings, 1 assignment
1 video • Total 5 minutes
- Introduction to Diffusion Models • 5 minutes
11 readings • Total 98 minutes
- Forward and Reverse Diffusion in Denoising • 5 minutes
- Components of Denoising Diffusion Models • 10 minutes
- Loss Decomposition and Noise Levels • 10 minutes
- Variance Schedule and Training Steps • 5 minutes
- The Rapidly Evolving Field of DDM • 5 minutes
- Foundational Understanding of Diffusion Models • 10 minutes
- Key Model Variants and Improvements • 10 minutes
- Guided and Conditional Generation • 10 minutes
- Video Diffusion Models I • 15 minutes
- Video Diffusion Models II • 10 minutes
- Video Diffusion Models III • 8 minutes
1 assignment • Total 20 minutes
- Module 13 Quiz • 20 minutes
This module explores 3D Gaussian Splatting (3DGS), a novel approach in computer vision for high-fidelity, real-time 3D scene rendering. Unlike traditional methods such as Neural Radiance Fields (NeRF), which rely on continuous neural fields, 3DGS represents scenes using a collection of discrete anisotropic Gaussian functions. These Gaussians efficiently approximate scene geometry, radiance, and depth, enabling real-time rendering with minimal computational overhead. We discuss the theoretical foundations, mathematical formulations, and rendering techniques that make 3D Gaussian Splatting a game-changer in virtual reality (VR), augmented reality (AR), and interactive media. Additionally, we highlight key differences between isotropic and anisotropic Gaussian splats, their impact on rendering quality, and how optimization techniques refine their accuracy. Finally, we compare 3DGS to NeRF, analyzing their trade-offs in rendering speed, computational efficiency, and application suitability.
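The isotropic-versus-anisotropic distinction above comes down to the splat's covariance matrix: 3DGS factors it as Σ = R S Sᵀ Rᵀ, so unequal axis scales in S give an elongated, rotated footprint. The toy sketch below evaluates one such 2D splat's opacity footprint on an image grid; the rotation angle, scales, and base opacity are illustrative values, not from the course.

```python
import numpy as np

# Covariance of one anisotropic 2D Gaussian splat: Sigma = R S S^T R^T.
# Equal scales in S would make the splat isotropic (a circular footprint).
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([3.0, 1.0])                      # unequal per-axis scales
cov = R @ S @ S.T @ R.T
cov_inv = np.linalg.inv(cov)

# Evaluate the splat's opacity on a 32x32 pixel grid.
mean = np.array([16.0, 16.0])
ys, xs = np.mgrid[0:32, 0:32]
d = np.stack([xs - mean[0], ys - mean[1]], axis=-1)    # pixel offsets
maha = np.einsum('...i,ij,...j->...', d, cov_inv, d)   # Mahalanobis distance
alpha = 0.8 * np.exp(-0.5 * maha)            # opacity footprint of the splat
```

A full renderer projects each 3D Gaussian's covariance into screen space, sorts splats by depth, and alpha-blends these footprints front to back — a rasterization-style pipeline, which is where 3DGS gets its speed advantage over NeRF's per-ray MLP queries.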
What's included
1 video, 6 readings, 1 assignment
1 video • Total 6 minutes
- 3D Gaussian Splatting • 6 minutes
6 readings • Total 52 minutes
- Introducing 3D Gaussian Splatting • 5 minutes
- Isotropic & Anisotropic in 3DGS • 15 minutes
- Key Concepts & Methodology in 3DGS • 10 minutes
- Optimization & Training in 3DGS • 5 minutes
- NeRF versus 3DGS • 15 minutes
- Congratulations! • 2 minutes
1 assignment • Total 25 minutes
- Module 14 Quiz • 25 minutes
Instructor

Offered by
Founded in 1898, Northeastern is a global research university with a distinctive, experience-driven approach to education and discovery. The university is a leader in experiential learning, powered by the world’s most far-reaching cooperative education program. The spirit of collaboration guides a use-inspired research enterprise focused on solving global challenges in health, security, and sustainability.
Frequently asked questions
To access the course materials, assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in a course. You can try a Free Trial instead, or apply for Financial Aid. The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade. This also means that you will not be able to purchase a Certificate experience.
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can't afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you'll find a link to apply on the description page.