The Prompt Engineering, Generative AI & LLM Models Fundamentals course is designed for learners who want to build a strong foundation in Large Language Models (LLMs), Generative AI concepts, and prompt engineering techniques. The course helps technical professionals and AI enthusiasts understand how modern generative AI systems work and how to interact with and optimize these models effectively for real-world applications.

What you'll learn
Understand the fundamentals of Large Language Models (LLMs) and Generative AI.
Apply prompt engineering techniques to guide LLM outputs and improve response quality.
Explore LLM optimization and advanced techniques such as fine-tuning, evaluation metrics, and Retrieval-Augmented Generation (RAG).
Details to know

Add to your LinkedIn profile
March 2026
6 assignments

There are 3 modules in this course
Welcome to the module Foundations of Large Language Models and Generative AI. In this module, you will explore the core concepts behind Large Language Models (LLMs) and understand how Generative AI systems are designed and applied. We begin by introducing LLMs and their role within artificial intelligence and machine learning. You will learn what defines a Generative AI model and examine the key components that power these systems. Through a hands-on demo using HuggingFace, you will see how LLMs are applied to common NLP tasks such as text generation and classification. The module also highlights the importance of training data, including how LLMs are trained on large datasets and why data cleaning is critical for improving model performance and reliability. By the end of this module, you will have a clear understanding of how LLMs and Generative AI systems work, how they are trained, and the role of high-quality data in building effective AI solutions.
What's included
6 videos, 2 readings, 2 assignments
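The module above stresses that data cleaning is critical before LLM training. As a rough illustration (a minimal standard-library sketch, not drawn from the course materials), a cleaning pass typically decodes HTML, strips markup, normalizes whitespace, and removes exact duplicates:

```python
import html
import re

def clean_document(text: str) -> str:
    """Apply basic cleaning steps typical of LLM training pipelines."""
    text = html.unescape(text)            # decode entities like &amp;
    text = re.sub(r"<[^>]+>", " ", text)  # strip leftover HTML tags
    text = re.sub(r"\s+", " ", text)      # collapse runs of whitespace
    return text.strip()

def deduplicate(docs: list[str]) -> list[str]:
    """Drop exact duplicates while preserving the original order."""
    seen, unique = set(), []
    for doc in docs:
        cleaned = clean_document(doc)
        if cleaned and cleaned not in seen:
            seen.add(cleaned)
            unique.append(cleaned)
    return unique

corpus = [
    "<p>LLMs learn from  text.</p>",
    "LLMs learn from text.",  # duplicate once markup is removed
    "Data quality drives model quality.",
]
print(deduplicate(corpus))  # → ['LLMs learn from text.', 'Data quality drives model quality.']
```

Real pipelines go much further (near-duplicate detection, language filtering, quality scoring), but the principle is the same: noisy or repeated data degrades model performance and reliability.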
Welcome to the module LLM Training, Optimization, and Evaluation. In this module, you will dive deeper into how Large Language Models are trained, optimized, and assessed for performance and reliability. You will begin by understanding the fundamentals of LLM training and optimization, including how massive datasets and computational resources are used to build high-performing models. The module explores different learning techniques such as zero-shot, few-shot, instruction tuning, and Reinforcement Learning from Human Feedback (RLHF), helping you understand how models adapt to tasks with minimal examples. You will also learn about loss functions and how they guide model learning during training. The concept of LLM alignment is introduced to explain how models are tuned to produce safe, accurate, and human-aligned responses. On the evaluation side, you will examine key evaluation metrics, including perplexity, and understand how model quality is measured. The module highlights the critical role humans play in evaluating outputs and refining models, as well as the importance of GPUs in enabling large-scale model training. By the end of this module, you will have a strong understanding of how LLMs are trained, optimized, aligned, and evaluated in real-world AI systems.
What's included
8 videos, 1 reading, 2 assignments
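Perplexity, the evaluation metric highlighted in this module, is the exponential of the average negative log-likelihood the model assigns to each token. A minimal sketch (illustrative only, not from the course materials):

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """Perplexity = exp of the average negative log-likelihood per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model assigning probability 0.25 to every token is as "confused" as a
# uniform choice among 4 options, so its perplexity is 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # → 4.0 (up to float rounding)

# More confident predictions push perplexity down toward the minimum of 1.
print(perplexity([0.9, 0.8, 0.95]))
```

Lower perplexity means the model finds the evaluation text less surprising; it is a useful training signal but, as the module notes, human evaluation remains essential for judging safety and alignment.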
Welcome to the module Prompt Engineering, Fine-Tuning, and Advanced LLM Architectures. In this module, you will focus on practical techniques for controlling, adapting, and enhancing Large Language Models to meet real-world requirements. You will start with Prompt Engineering, learning the fundamentals of prompt design and how prompt structure directly impacts model output. The module covers proven techniques for crafting effective prompts that improve accuracy, reasoning quality, and response consistency. A hands-on demo will help you see how small prompt changes can significantly influence LLM behavior. Next, you will explore LLM fine-tuning approaches, including prompt tuning and Parameter-Efficient Fine-Tuning (PEFT). You will understand how prompt-efficient methods such as P-Tuning adapt large models with minimal computational cost. The introduction to NVIDIA NeMo provides insight into frameworks used for customizing and optimizing enterprise-scale language models. Finally, you will examine Retrieval-Augmented Generation (RAG) architecture and learn how combining LLMs with external knowledge sources improves factual grounding and domain-specific performance. By the end of this module, you will understand how to design high-quality prompts, apply efficient fine-tuning techniques, and leverage advanced LLM architectures for scalable generative AI solutions.
What's included
9 videos, 1 reading, 2 assignments
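The RAG architecture covered in this module pairs retrieval with prompt construction: fetch the most relevant documents, then instruct the model to answer from that context. The sketch below (an illustrative toy, not course material) substitutes simple word overlap for the embedding similarity search a production RAG system would use:

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for
    the vector similarity search a real RAG system would use)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(retrieve(query, documents, k=2))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

knowledge_base = [
    "NeMo is NVIDIA's framework for building generative AI models.",
    "PEFT adapts large models by training a small subset of parameters.",
    "RAG combines retrieval with generation for factual grounding.",
]
print(build_prompt("What does PEFT train?", knowledge_base))
```

Because the answer is grounded in retrieved text rather than the model's parameters alone, RAG improves factual accuracy and lets the same LLM serve domain-specific knowledge without retraining.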
Instructor

Offered by
Frequently asked questions
To access the course materials and assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in the course. You can try a Free Trial instead, or apply for Financial Aid. The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade, but you will not be able to purchase a Certificate experience.
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can't afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you'll find a link to apply on the description page.
More questions
Financial aid available



