Whizlabs

Prompt Engineering Generative AI & LLM Models Fundamentals


Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Intermediate level

Recommended experience

6 hours to complete
Flexible schedule
Learn at your own pace

What you'll learn

  • Understand the fundamentals of Large Language Models (LLMs) and Generative AI.

  • Apply prompt engineering techniques to guide LLM outputs and improve response quality.

  • Explore LLM optimization and advanced techniques such as fine-tuning, evaluation metrics, and Retrieval-Augmented Generation (RAG).
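The prompt engineering objective above can be illustrated with a small sketch contrasting zero-shot and few-shot prompts. The prompt wording here is invented for illustration, not taken from the course:

```python
# Zero-shot: the task is described with no examples.
zero_shot = "Classify the sentiment of this review: 'The lecture was engaging.'"

# Few-shot: the same task, preceded by in-context examples that demonstrate
# the desired input/output format before the real input.
few_shot = (
    "Classify the sentiment of each review.\n"
    "Review: 'Terrible pacing.' -> negative\n"
    "Review: 'Loved the demos.' -> positive\n"
    "Review: 'The lecture was engaging.' ->"
)

# The few-shot prompt nudges the model toward a consistent one-word answer;
# the zero-shot prompt leaves the output format entirely up to the model.
print(few_shot.count("Review:"))  # → 3
```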

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

March 2026

Assessments

6 assignments

Taught in English

See how employees at top companies are mastering in-demand skills

Logos of Petrobras, TATA, Danone, Capgemini, P&G, and L'Oreal

There are 3 modules in this course

Welcome to the module Foundations of Large Language Models and Generative AI. In this module, you will explore the core concepts behind Large Language Models (LLMs) and understand how Generative AI systems are designed and applied. We begin by introducing LLMs and their role within artificial intelligence and machine learning. You will learn what defines a Generative AI model and examine the key components that power these systems. Through a hands-on demo using HuggingFace, you will see how LLMs are applied to common NLP tasks such as text generation and classification. The module also highlights the importance of training data, including how LLMs are trained on large datasets and why data cleaning is critical for improving model performance and reliability. By the end of this module, you will have a clear understanding of how LLMs and Generative AI systems work, how they are trained, and the role of high-quality data in building effective AI solutions.

What's included

6 videos, 2 readings, 2 assignments
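The core idea of this module, that a generative model learns from training data and then produces new text, can be sketched with a toy bigram model. This is purely conceptual: real LLMs are neural networks trained on vastly larger, carefully cleaned corpora, and the corpus below is invented:

```python
# Toy illustration of generative language modeling: a bigram model "trained"
# on a tiny corpus predicts the next token from observed counts.
from collections import defaultdict, Counter

corpus = "the model learns from data . the model generates text .".split()

# "Training": count which token follows which
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(token):
    """Greedy next-token prediction: return the most frequent follower."""
    return bigrams[token].most_common(1)[0][0]

print(most_likely_next("the"))  # → model
```

This also hints at why data cleaning matters: noisy or duplicated text in the corpus would directly skew the counts the model predicts from.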

Welcome to the module LLM Training, Optimization, and Evaluation. In this module, you will dive deeper into how Large Language Models are trained, optimized, and assessed for performance and reliability. You will begin by understanding the fundamentals of LLM training and optimization, including how massive datasets and computational resources are used to build high-performing models. The module explores different learning techniques such as zero-shot, few-shot, instruction tuning, and Reinforcement Learning from Human Feedback (RLHF), helping you understand how models adapt to tasks with minimal examples. You will also learn about loss functions and how they guide model learning during training. The concept of LLM alignment is introduced to explain how models are tuned to produce safe, accurate, and human-aligned responses. On the evaluation side, you will examine key evaluation metrics, including perplexity, and understand how model quality is measured. The module highlights the critical role humans play in evaluating outputs and refining models, as well as the importance of GPUs in enabling large-scale model training. By the end of this module, you will have a strong understanding of how LLMs are trained, optimized, aligned, and evaluated in real-world AI systems.

What's included

8 videos, 1 reading, 2 assignments
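The link between the loss functions and the perplexity metric mentioned in this module can be shown in a few lines: perplexity is the exponential of the average per-token cross-entropy (negative log-likelihood). The token probabilities below are invented for illustration:

```python
import math

# Probability the model assigned to each correct next token in a sequence
token_probs = [0.5, 0.25, 0.125, 0.5]

# Cross-entropy loss: mean negative log-probability of the true tokens
cross_entropy = sum(-math.log(p) for p in token_probs) / len(token_probs)

# Perplexity = exp(cross-entropy); lower is better, and a perplexity of k
# means the model is, on average, as uncertain as a uniform k-way choice.
perplexity = math.exp(cross_entropy)
print(round(perplexity, 3))  # → 3.364
```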

Welcome to the module Prompt Engineering, Fine-Tuning, and Advanced LLM Architectures. In this module, you will focus on practical techniques for controlling, adapting, and enhancing Large Language Models to meet real-world requirements. You will start with Prompt Engineering, learning the fundamentals of prompt design and how prompt structure directly impacts model output. The module covers proven techniques for crafting effective prompts that improve accuracy, reasoning quality, and response consistency. A hands-on demo will help you see how small prompt changes can significantly influence LLM behavior. Next, you will explore LLM fine-tuning approaches, including prompt tuning and Parameter-Efficient Fine-Tuning (PEFT). You will understand how prompt-efficient methods such as P-Tuning adapt large models with minimal computational cost. The introduction to NVIDIA NeMo provides insight into frameworks used for customizing and optimizing enterprise-scale language models. Finally, you will examine Retrieval-Augmented Generation (RAG) architecture and learn how combining LLMs with external knowledge sources improves factual grounding and domain-specific performance. By the end of this module, you will understand how to design high-quality prompts, apply efficient fine-tuning techniques, and leverage advanced LLM architectures for scalable generative AI solutions.

What's included

9 videos, 1 reading, 2 assignments
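The RAG architecture covered in this module follows a retrieve-then-augment pattern. A minimal conceptual sketch is below; the word-overlap scoring stands in for the embedding similarity a real system would use, and the documents and prompt template are invented:

```python
# Minimal sketch of Retrieval-Augmented Generation: retrieve the most
# relevant document for a query, then prepend it to the prompt so the LLM
# answers from grounded context rather than parametric memory alone.
knowledge_base = [
    "NeMo is an NVIDIA framework for building generative AI models.",
    "Perplexity measures how well a language model predicts text.",
    "PEFT adapts large models by training only a small subset of parameters.",
]

def retrieve(query: str) -> str:
    """Score documents by word overlap with the query (a stand-in for
    embedding similarity) and return the best match."""
    q = set(query.lower().split())
    return max(knowledge_base, key=lambda doc: len(q & set(doc.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("What does PEFT do?").splitlines()[0])
```

The augmented prompt is what improves factual grounding: the model is instructed to answer from retrieved domain text instead of relying only on what it memorized during training.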

Instructor

Whizlabs Instructor
Whizlabs
145 Courses • 111,569 learners

Offered by

Whizlabs


Frequently asked questions