Structured Streaming in Apache Spark 2

Many sources of data in the real world are available in the form of streams, from self-driving car sensors to weather monitors. Apache Spark 2 is a powerful, distributed analytics engine that offers great support for streaming applications.
Course info
Level: Beginner
Updated: Jun 22, 2018
Duration: 2h 11m
Table of contents
Understanding the High Level Streaming API in Spark 2.x
Building Advanced Streaming Pipelines Using Structured Streaming
Integrating Apache Kafka with Structured Streaming
Description

Stream processing applications work with continuously updated data and react to changes in real time. Data frames in Spark 2.x support infinite data, effectively unifying batch and streaming applications. In this course, Structured Streaming in Apache Spark 2, you'll focus on using the tabular data frame API to work with streaming, unbounded datasets using the same APIs that work with bounded batch data. First, you'll learn how structured streaming works and what makes it different from, and more powerful than, traditional streaming applications, covering the basic streaming architecture and the improvements in structured streaming that allow it to react to data in real time. Then, you'll create triggers to control when streaming results are evaluated, and use output modes to write results out to file or to screen. Next, you'll discover how to build streaming pipelines using Spark by studying event-time aggregations, grouping and windowing functions, and join operations between batch and streaming data. You'll even work with real Twitter streams and perform analysis on trending hashtags. Finally, you'll see how Spark stream processing integrates with the Kafka distributed publisher-subscriber system by ingesting Twitter data from a Kafka producer and processing it using Spark Streaming. By the end of this course, you'll be comfortable performing analysis of streaming data using Spark's distributed analytics engine and its high-level structured streaming API.
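To make the workflow described above concrete, here is a minimal sketch (not taken from the course) of a structured streaming job in PySpark. It uses Spark's built-in rate source so it is fully self-contained; the 30-second window, 1-minute watermark, and 10-second trigger interval are arbitrary values chosen for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import window

spark = SparkSession.builder.appName("StructuredStreamingSketch").getOrCreate()

# An unbounded DataFrame: the same tabular API as batch, backed by a stream.
# The built-in "rate" source emits (timestamp, value) rows for testing.
stream_df = (spark.readStream
             .format("rate")
             .option("rowsPerSecond", "5")
             .load())

# Event-time aggregation: count rows in 30-second windows keyed on the
# event timestamp, with a watermark to bound how late data may arrive.
counts = (stream_df
          .withWatermark("timestamp", "1 minute")
          .groupBy(window("timestamp", "30 seconds"))
          .count())

# The trigger controls how often results are evaluated; the output mode
# controls what gets written ("update" emits only rows that changed).
query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .trigger(processingTime="10 seconds")
         .start())

query.awaitTermination()
```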

About the author

A problem solver at heart, Janani has a master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.

More from the author
Building Features from Image Data (Advanced, 2h 10m, Aug 13, 2019)
Designing a Machine Learning Model (Intermediate, 3h 25m, Aug 13, 2019)
More courses by Janani Ravi
Section Introduction Transcripts

Course Overview
Hi, my name is Janani Ravi, and welcome to this course on Structured Streaming in Apache Spark 2. A little about myself: I have a master's degree in electrical engineering from Stanford, and have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. In this course, we focus on using the tabular data frame API to work with streaming, unbounded datasets using the same APIs that work with bounded batch data. We start off by understanding how structured streaming works and what makes it different and more powerful than traditional streaming applications. We'll understand the basic streaming architecture and the improvements included in structured streaming, allowing it to react to data in real time. We'll create triggers to evaluate streaming results, and output modes to write results out to file or to screen. We'll then see how we can build streaming pipelines using Spark. We'll study event-time aggregations, grouping and windowing functions, and how we perform join operations between batch and streaming data. We'll work with real Twitter streams, and perform analysis on trending hashtags on Twitter. We'll then see how Spark stream processing integrates with the Kafka distributed publisher-subscriber system. We'll ingest Twitter data from a Kafka producer and process it using Spark Streaming. At the end of this course, you should be comfortable performing analysis of streaming data using Spark's distributed analytics engine and its high-level structured streaming API.
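As a hedged illustration of the Kafka integration the overview mentions (this is not the course's own code), the sketch below reads a stream from a Kafka topic and counts hashtags. The broker address localhost:9092 and the topic name tweets are hypothetical placeholders, and the job must be launched with the spark-sql-kafka connector on the classpath.

```python
# Assumes launch with the Kafka connector for Spark 2.x, for example:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 ...
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode, split

spark = SparkSession.builder.appName("KafkaTweetsSketch").getOrCreate()

# Read the Kafka topic as an unbounded DataFrame. Kafka delivers raw bytes,
# so cast the value column to a string before processing.
tweets = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
          .option("subscribe", "tweets")                        # hypothetical topic
          .load()
          .select(col("value").cast("string").alias("text")))

# Split each message into words, keep the hashtags, and count them --
# a simple stand-in for the course's trending-hashtags analysis.
hashtags = (tweets
            .select(explode(split("text", r"\s+")).alias("word"))
            .where(col("word").startswith("#"))
            .groupBy("word")
            .count())

# "complete" mode writes the full updated counts table on every trigger.
(hashtags.writeStream
 .outputMode("complete")
 .format("console")
 .start()
 .awaitTermination())
```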