Production ML models failing your latency targets? Learn how to make them run 3-5x faster without losing accuracy. This course helps ML engineers and data scientists optimize neural network inference for real-world deployment across mobile, edge, and cloud environments.

If you face slow model inference, high infrastructure costs, or deployment constraints, this course provides practical solutions. You'll master profiling techniques to identify performance bottlenecks, apply quantization to cut precision requirements, and make smart trade-offs between speed, accuracy, and resource constraints.

You'll learn to benchmark optimization techniques and select the right approach for each deployment scenario. You'll explore inference profiling and metrics, pruning strategies, and quantization methods, and practice with real-world cases, from streaming platforms to autonomous vehicles, using industry-standard tools like PyTorch Profiler, TensorRT, and pruning utilities.

What you'll learn
Analyze inference bottlenecks to identify optimization opportunities in production ML systems.
Implement model pruning techniques to reduce computational complexity while maintaining acceptable accuracy.
Apply quantization methods and benchmark trade-offs for secure and efficient model deployment.
Skills you'll gain
Details to know

Add to your LinkedIn profile
1 assignment
December 2025

Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 3 modules in this course
In this module, learners will master profiling techniques to identify bottlenecks and understand the fundamental trade-offs in model inference optimization. You'll use industry-standard tools like PyTorch Profiler to diagnose where models waste time—whether in computation, memory bandwidth, or data transfer. By the end, you'll confidently analyze profiling data, prioritize optimization efforts, and establish performance baselines for production ML systems.
What's included
4 videos, 2 readings, 1 peer review
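To give a flavor of the profiling workflow this module teaches, here is a minimal sketch of diagnosing where a model spends its time with PyTorch Profiler. The toy model and input shapes are illustrative assumptions, not course materials; the sketch assumes PyTorch 1.8+ (which ships `torch.profiler`).

```python
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

# Illustrative toy model; in practice you would profile your production model.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
x = torch.randn(32, 512)

# Record CPU activity (add ProfilerActivity.CUDA when profiling on GPU).
with torch.no_grad():
    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        model(x)

# The ops that dominate total CPU time are your candidate bottlenecks.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

The sorted table is the starting point for the baseline-first workflow described above: measure, identify the dominant ops, then decide whether the bottleneck is compute, memory bandwidth, or data transfer before optimizing anything.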
In this module, learners will master pruning techniques to reduce neural network complexity without sacrificing accuracy. You'll explore both structured and unstructured pruning approaches, implement them using PyTorch pruning utilities, and discover how to recover accuracy through fine-tuning and knowledge distillation. By the end, you'll confidently apply pruning to optimize models for resource-constrained environments like mobile devices and edge hardware.
What's included
3 videos, 1 reading, 1 peer review
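As a small taste of the pruning utilities this module covers, here is a sketch of unstructured L1 magnitude pruning with `torch.nn.utils.prune`. The layer size and 50% sparsity target are arbitrary assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 256)

# Unstructured pruning: zero out the 50% of weights with smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.5)

sparsity = float((layer.weight == 0).sum()) / layer.weight.numel()
print(f"weight sparsity: {sparsity:.2f}")

# Make pruning permanent: fold the binary mask into the weight tensor
# and drop the reparameterization.
prune.remove(layer, "weight")
```

Note that unstructured sparsity like this mainly shrinks the model and enables sparse kernels; structured pruning (removing whole channels or heads) is what typically yields speedups on dense hardware, which is why the module contrasts both, along with fine-tuning to recover lost accuracy.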
In this module, learners will master quantization techniques to reduce numerical precision while maintaining model accuracy. You'll implement both post-training quantization and quantization-aware training using PyTorch, then compare quantization against pruning across speed, accuracy, and security dimensions. By the end, you'll understand how optimization choices affect adversarial robustness and confidently select the right technique for secure, high-performance deployments in mission-critical applications.
What's included
4 videos, 1 reading, 1 assignment, 2 peer reviews
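For a concrete preview of post-training quantization, here is a sketch using PyTorch's dynamic quantization API, which converts `Linear` weights to int8 with no calibration data. The toy model is an illustrative assumption; the sketch assumes a recent PyTorch with `torch.ao.quantization`.

```python
import torch
import torch.nn as nn

# Illustrative Linear-heavy model, the case where dynamic quantization helps most.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()

# Post-training dynamic quantization: weights stored as int8,
# activations quantized on the fly at inference time.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(8, 512)
with torch.no_grad():
    fp32_out = model(x)
    int8_out = qmodel(x)

# The gap between fp32 and int8 outputs is the accuracy cost you benchmark.
max_err = (fp32_out - int8_out).abs().max().item()
print(f"max abs error vs fp32: {max_err:.4f}")
```

Quantization-aware training, also covered in this module, simulates this precision loss during training so the model learns to compensate, usually closing most of the accuracy gap that post-training quantization leaves.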
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Offered by








