Create and Monitor Data Pipelines for a Batch Processing Solution
Data analytics at the serving layer comes easily when the underlying data process is well designed and implemented. This course will teach you key considerations and design principles for creating and monitoring data pipelines for a batch processing solution.
What you'll learn
As a data specialist, you may be required to design and implement an end-to-end data pipeline for a batch processing solution. In this course, Create and Monitor Data Pipelines for a Batch Processing Solution, you’ll learn to do exactly that. First, you’ll explore the data stores available on Azure. Next, you’ll develop batch processing solutions using Azure Data Factory together with those data stores. Finally, you’ll learn how to automate data processing and how to monitor it for optimization and efficiency. When you’re finished with this course, you’ll have the skills and knowledge a data professional needs to build and monitor end-to-end data pipelines.
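To give a feel for what you'll be building, here is a minimal sketch of an Azure Data Factory pipeline definition that copies delimited source files into a data lake as Parquet — the kind of copy step a batch processing pipeline typically starts with. The pipeline and dataset names (`CopyBatchPipeline`, `SourceBlobDataset`, `DataLakeDataset`) are illustrative placeholders, not names used in the course.

```json
{
  "name": "CopyBatchPipeline",
  "properties": {
    "activities": [
      {
        "name": "CopyRawToDataLake",
        "type": "Copy",
        "inputs": [
          { "referenceName": "SourceBlobDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "DataLakeDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "ParquetSink" }
        }
      }
    ]
  }
}
```

In practice, the datasets referenced here would point at linked services for the source storage account and the destination Data Lake Storage account — topics covered in the linked services and datasets module below.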
Table of contents
- Overview 1m
- Reviewing the Globomantics Scenario 3m
- Preparing Data for Upload 3m
- Configuring the Data Source 6m
- Configuring the Data Destination 6m
- Accessing Data Lake Storage - Configuring Key Vault 7m
- Accessing Data Lake Storage - Creating the Dimension Tables 9m
- Orchestrating Data Processing with Synapse Pipelines - Linked Services and Datasets 10m
- Orchestrating Data Processing with Synapse Pipelines 11m
- Summary 1m