
Getting Started with Apache Spark on Databricks

This course will introduce you to analytical queries and big data processing using Apache Spark on Azure Databricks. You will learn how to work with Spark transformations, actions, visualizations, and functions using the Databricks Runtime.
Course info
Level
Beginner
Updated
Oct 25, 2021
Duration
1h 52m
Description

Azure Databricks allows you to work with big data processing and queries using the Apache Spark unified analytics engine. With Azure Databricks you can set up your Apache Spark environment in minutes, autoscale your processing, and collaborate and share projects in an interactive workspace.

In this course, Getting Started with Apache Spark on Databricks, you will learn the components of the Apache Spark analytics engine, which allows you to process batch as well as streaming data using a unified API. First, you will learn how the Spark architecture is configured for big data processing. You will then see how the Databricks Runtime on Azure makes it easy to work with Apache Spark on the Azure cloud platform, and you will explore the basic concepts and terminology for the technologies used in Azure Databricks.

Next, you will learn the workings and nuances of Resilient Distributed Datasets, also known as RDDs, which are the core data structure used for big data processing in Apache Spark. You will see that RDDs are the data structures on top of which Spark DataFrames are built. You will study the two types of operations that can be performed on DataFrames, namely transformations and actions, and understand the difference between them. You'll also learn how Databricks allows you to explore and visualize your data using the display() function, which leverages native Python libraries for visualizations.

Finally, you will get hands-on experience with big data processing operations such as projection, filtering, and aggregation operations. Along the way, you will learn how you can read data from an external source such as Azure Cloud Storage and how you can use built-in functions in Apache Spark to transform your data.

When you are finished with this course you will have the skills and ability to work with basic transformations, visualizations, and aggregations using Apache Spark on Azure Databricks.

About the author

A problem solver at heart, Janani has a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.

Section Introduction Transcripts

Course Overview
Hi. My name is Janani Ravi, and welcome to this course on Getting Started with Apache Spark on Databricks. A little about myself, I have a master's degree in electrical engineering from Stanford and have worked at companies such as Microsoft, Google, and Flipkart. I currently work on my own startup, Loonycorn, a studio for high‑quality video content. In this course, you will learn the components of the Apache Spark analytics engine which allows you to process batch, as well as streaming data using a unified API. First, you will learn how the Spark architecture is configured for big data processing. You will then learn how the Databricks runtime on Azure makes it very easy to work with Apache Spark on the Azure Cloud platform, and you will explore the basic concepts and terminology for the technologies used in Azure Databricks. Next, you will learn the workings and nuances of resilient distributed datasets, also known as RDDs. This is the core data structure used for big data processing in Apache Spark. You will see that RDDs are the data structures on top of which Spark data frames are built. You will study the two types of operations that can be performed on data frames, namely transformations and actions and understand the difference between them. You'll also learn how Databricks allows you to explore and visualize your data using the display function that leverages native Python libraries for visualizations. Finally, you will get hands‑on experience with big data processing operations such as projection, filtering, and aggregation operations. Along the way, you'll learn how you can read data from an external source, such as Azure Cloud storage and how you can use built‑in functions in Apache Spark to transform your data. When you're finished with this course, you will have the skills and ability to work with basic transformations, visualizations, and aggregations using Apache Spark on Azure Databricks.