This course provides a comprehensive overview of data storage and management approaches for big data. Learners will explore structured, semi-structured, and unstructured data formats, compare SQL and NoSQL database technologies, and implement data lakes and data warehouses. The course also covers working with various file formats and the differences between batch and real-time processing approaches.


Data Storage and Management for Big Data
This course is part of the Microsoft Big Data Management and Analytics Professional Certificate

Instructor: Microsoft
Recommended experience
What you'll learn
- Manage big data storage and pipelines with Azure services.
- Process and analyze large datasets using Apache Spark and Databricks.
Skills you'll gain
- NoSQL
- Data Transformation
- Extract, Transform, Load
- Data Architecture
- Databases
- Data Management
- Azure Synapse Analytics
- Real Time Data
- Data Processing
- Data Warehousing
- Data Governance
- Data Pipelines
- Microsoft Azure
- Scalability
- Data Storage
- SQL Server Integration Services (SSIS)
- Data Integration
- Data Lakes
Key details

Add to your LinkedIn profile
January 2026
Learn how employees at leading companies are mastering in-demand skills.

Expand your expertise in Data Analysis
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills through hands-on projects
- Earn a shareable Professional Certificate from Microsoft

There are 5 modules in this course
Data Storage Technologies (SQL vs NoSQL) guides learners through the core principles of modern data storage and the trade-offs that shape today’s big data systems. The module examines how relational databases manage structured data, where they encounter limitations at scale, and how techniques such as partitioning, indexing, and lakehouse architectures mitigate performance gaps. Learners compare major NoSQL categories—including document, key-value, and column-family databases—to understand how flexible schemas and distributed designs support high-volume, high-velocity workloads. Through hands-on activities with SQL Server, Azure Synapse, and Azure Cosmos DB, learners practice essential operations, evaluate storage technologies based on workload requirements, and build the skills needed to select and implement effective database solutions for big data environments.
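The trade-off the module describes, fixed relational schemas versus the flexible documents of a NoSQL store, can be sketched in a few lines. This is an illustrative sketch only: it uses Python's built-in sqlite3 as a lightweight stand-in for SQL Server, and plain JSON dictionaries as a stand-in for documents in a store such as Azure Cosmos DB; the table, field names, and sample customers are hypothetical.

```python
import json
import sqlite3

# Relational side: the schema is fixed up front, and every row must fit it
# (sqlite3 is used here as a lightweight stand-in for SQL Server).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 'Contoso', 99.50)")
row = conn.execute("SELECT customer, total FROM orders WHERE id = 1").fetchone()

# Document side: schema-on-read. Each document may carry different fields,
# the way a document database allows without a schema migration.
orders = [
    {"id": 1, "customer": "Contoso", "total": 99.50},
    {"id": 2, "customer": "Fabrikam", "total": 12.00, "coupon": "SPRING10"},  # extra field
]
doc = json.loads(json.dumps(orders[1]))  # round-trip one order as a JSON document
```

Adding the `coupon` field to the relational table would require an `ALTER TABLE`; the document store simply accepts the richer document, which is the flexibility the module weighs against relational guarantees.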
What's included
6 videos, 3 readings, 8 assignments
Working with Data Formats (Structured, Semi-structured, Unstructured) helps learners build a clear understanding of how different data formats function within big data systems and why format selection matters for performance, storage, and analytical success. The module introduces structured formats, such as CSV and TSV, and explores flexible semi-structured formats, including JSON and XML. It also examines optimized file types, including Parquet, Avro, and ORC, that support large-scale analytics. Learners practice transforming data between formats using Azure Data Factory, working with nested structures, applying schema inference, and evaluating performance trade-offs across file types. Through demonstrations, code exercises, and hands-on labs, this module equips learners to select, convert, and manage data formats effectively for diverse big data scenarios.
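The conversion and schema-inference ideas in this module can be illustrated with a minimal sketch: turning a structured CSV into semi-structured JSON while inferring column types. In the course this is done at scale with Azure Data Factory; the stdlib version below, with its made-up sample data and naive `infer` helper, only shows the shape of the problem.

```python
import csv
import io
import json

# A tiny structured input (CSV), standing in for a file landed in storage.
raw = "id,name,score\n1,Ada,91\n2,Lin,87\n"

def infer(value):
    """Naive schema inference: try int, then float, else keep the string."""
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

# Read the CSV and apply the inferred types, producing semi-structured records.
rows = [
    {key: infer(val) for key, val in record.items()}
    for record in csv.DictReader(io.StringIO(raw))
]
doc = json.dumps(rows)  # serialize as JSON for a downstream consumer
```

Columnar formats such as Parquet or ORC go a step further by storing the inferred schema alongside the data, which is why they dominate large-scale analytics.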
What's included
6 videos, 3 readings, 8 assignments
Data Lakes and Data Warehouses Implementation guides learners through the architectural foundations and hands-on skills needed to build modern analytical environments. The module explores the purpose and structure of data lakes, highlighting the zones of raw, cleaned, enriched, and curated data, and demonstrates how thoughtful design supports flexibility, governance, and large-scale analytics. Learners also study core data warehouse concepts, including dimensional modeling, star schemas, and data marts, to understand how structured storage enables high-performance querying. Through practical work with Azure Data Lake Storage Gen2 and Azure Synapse Analytics, learners design zone architectures, implement dimensional models, configure SQL pools, and apply best practices for partitioning, distribution, and optimization. By the end, they gain the ability to organize, govern, and integrate data across both lake and warehouse environments, supporting scalable, enterprise-ready analytics.
What's included
6 videos, 3 readings, 7 assignments
Building Data Pipelines (ETL/ELT with Azure Data Factory) equips learners with the skills to design, implement, and manage scalable data integration workflows using modern, cloud-native approaches. The module examines the differences between ETL and ELT, helping learners understand when each methodology delivers the best performance, flexibility, and cost efficiency. Learners gain hands-on experience with Azure Data Factory, configuring linked services, datasets, activities, and core orchestration components, and practice building both simple and advanced pipelines. The module also introduces transformation logic, control flow patterns, parameterization, and error handling strategies that support production-ready data engineering solutions. Through walkthroughs, labs, code exercises, and scenario-based decisions, learners practice monitoring pipelines, troubleshooting failures, and designing reliable data workflows that support enterprise-scale analytics.
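The ETL-versus-ELT distinction is purely about where the transformation happens relative to the load. A minimal sketch, with hypothetical record shapes and a trivial cleaning rule, makes the ordering explicit; in Azure Data Factory the same choice appears as transforming in a data flow before the sink versus landing raw data and transforming in the target store.

```python
def transform(rows):
    """A trivial cleaning step: drop negative amounts, title-case names."""
    return [
        {"name": r["name"].title(), "amount": r["amount"]}
        for r in rows
        if r["amount"] > 0
    ]

def etl(source_rows):
    # ETL: transform in flight, then load only the shaped result.
    shaped = transform(source_rows)
    warehouse = list(shaped)          # "load" into the target
    return warehouse

def elt(source_rows):
    # ELT: load the raw rows first, then transform inside the target,
    # keeping the raw copy available for reprocessing.
    lake = list(source_rows)          # "load" raw
    return transform(lake)

rows = [{"name": "ada", "amount": 10}, {"name": "lin", "amount": -5}]
```

Both paths yield the same cleaned output here; the practical difference is that ELT retains the raw data in the target and pushes transformation cost onto the (usually scalable) destination engine.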
What's included
6 videos, 3 readings, 9 assignments
Batch and Real-Time Processing Fundamentals introduces learners to the core processing models that power modern big data systems, helping them understand when each approach delivers the most value. The module explores batch architectures, scheduling methods, and optimization strategies for large-scale historical processing, while also examining real-time stream processing concepts, including event handling, latency trade-offs, and throughput requirements. Learners gain hands-on experience implementing both models, building batch workflows with Azure Data Factory and configuring streaming pipelines using Event Hubs and Stream Analytics. Through architectural analysis, code exercises, and practical labs, learners evaluate business needs, select the right processing approach, and design hybrid systems that combine batch and streaming for comprehensive analytics.
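The batch-versus-streaming contrast can be shown on a handful of timestamped events: batch aggregates the complete historical set in one pass, while streaming aggregates incrementally over time windows. The sketch below uses made-up events and a 30-second tumbling window, the windowing pattern that streaming engines such as Azure Stream Analytics expose; it is an illustration of the concept, not course code.

```python
from collections import defaultdict

# Hypothetical events: (timestamp_in_seconds, value)
events = [(0, 3), (12, 5), (25, 2), (31, 7), (55, 1)]

# Batch: process the whole historical dataset in a single pass.
batch_total = sum(value for _, value in events)

# Streaming: aggregate per 30-second tumbling window as events arrive,
# so results are available before the dataset is "complete".
windows = defaultdict(int)
for ts, value in events:
    windows[ts // 30] += value   # window 0 covers [0, 30), window 1 covers [30, 60), ...
```

A hybrid design, as the module suggests, might serve the per-window totals for low-latency dashboards while a nightly batch job recomputes `batch_total`-style aggregates over the full history.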
What's included
6 videos, 3 readings, 9 assignments
Earn a career certificate.
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Explore more from Data Analysis
Why do people choose Coursera for their career?




Frequently asked questions
To access the course materials, assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in a course. You can try a Free Trial instead, or apply for Financial Aid. The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade. This also means that you will not be able to purchase a Certificate experience.
When you enroll in the course, you get access to all of the courses in the Certificate, and you earn a certificate when you complete the work. Your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
More questions
Financial aid available
¹ Some assignments in this course are graded with AI. For these assignments, your data is used in accordance with Coursera's Privacy Notice.




