In this course, you will learn the foundational knowledge needed to apply CI/CD methodologies to your Azure Data Factory pipeline creation process and deploy robust, well-tested data pipelines to production.
Course Overview

Hi everyone. My name is Marcelo Pastorino, and welcome to my course, Deploying Data Pipelines in Microsoft Azure. I am a software developer and solutions architect with 20 years of commercial experience designing and developing software, services, and applications that run in the cloud, on the web, and on mobile devices.

Continuous integration, delivery, and deployment are a set of practices that allow software developers to continuously deliver value, and they are also a great way to accelerate the feedback loop with customers. Did you know that data engineers working with Azure Data Factory can take advantage of the same methodologies to deliver robust, well-tested data pipelines to production? In this course, you are going to learn about these practices and how to incorporate them into your Azure Data Factory pipeline creation process.

Some of the major topics that we will cover include creating the infrastructure needed to support multiple deployment environments, integrating Azure Data Factory with a source control system, deploying data pipelines using Azure Data Factory visual tools and ARM templates, and deploying data pipelines using a fully automated release pipeline in Azure DevOps. By the end of this course, you will have the skills and knowledge to apply CI and CD practices to your Azure Data Factory pipeline creation process.

Before beginning the course, you should be familiar with Azure Data Factory and other Azure services, such as Azure Storage and Azure Key Vault, and have some familiarity with the Git version control system. I hope you'll join me on this journey to learn with the Deploying Data Pipelines in Microsoft Azure course, right here at Pluralsight.