This course introduces how to build robust, scalable, real-time big data systems using Apache Spark's Streaming, DataFrame, SQL, and Data Sources APIs, integrated with Apache Kafka, HDFS, and Apache Cassandra.
This course aims to get beyond the hype in the big data world and focus on what really works for building robust, highly scalable batch and real-time systems. In this course, Applying the Lambda Architecture with Spark, Kafka, and Cassandra, you'll string together technologies that fit well together, designed by companies with some of the most demanding data requirements (such as Facebook, Twitter, and LinkedIn) as well as by teams leading the way in the design of data processing frameworks, like Apache Spark, which plays an integral role throughout this course. You'll look at each individual component and examine the architectural details that make it a good fit for a system based on the Lambda Architecture. You'll then build out a full application from scratch, starting with a small application that simulates the production of data in a stream, all the way to addressing global state, non-associative calculations, application upgrades and restarts, and finally presenting real-time and batch views in Cassandra. When you're finished with this course, you'll be ready to hit the ground running with these technologies to build better data systems than ever.
Course Overview Hi! My name is Ahmad Alkilani, and welcome to my course, Applying the Lambda Architecture with Spark, Kafka, and Cassandra. We see big data discussed every day, whether you're in the field actively working on big data projects, hearing about the scale of problems companies like LinkedIn, Facebook, and Twitter have to deal with on a daily basis, or simply listening to the radio about some initiative where big data enabled the analysis and discovery of new insights into the data we have. In this course, our focus will be on building real-time systems that can handle real-time data at scale, with robustness and fault tolerance as first-class citizens, using tools like Apache Spark, Kafka, Cassandra, and Hadoop. We'll look at how thoughtful design of your big data applications allows you to combine low-latency streaming data with batch workloads. We'll design and build an application from scratch using Apache Spark, Spark DataFrames, and Spark SQL, in addition to Spark's Data Sources API to load, store, and manipulate data. We'll also look at Spark Streaming and Spark-Kafka integration techniques for reliability and speed, and we'll write a Kafka data producer to simulate the real-time data stream fed into our streaming application. As we dive deeper into the course, we'll look at how you can preserve global state and use memory efficiently with approximate algorithms as we build a stateful Spark Streaming application. And because a production application isn't complete without the ability to handle errors and code updates, we'll cover application upgrades and restarts as well. We'll also learn how to use a scalable NoSQL database, persisting your data to Cassandra and HDFS. By the end of this course, you'll feel comfortable building your own fault-tolerant, scalable, real-time big data systems and acting on streaming and batch data with Spark, Kafka, Cassandra, and HDFS as the backbone of the lambda architecture.
Before you begin this course, you should be familiar with a programming language, preferably Java, Scala, or C#. But you certainly don't have to be a master of any of these, as we'll walk you through a gentle introduction to get you going. I look forward to you joining me on this journey to learn about lambda architectures with the Applying the Lambda Architecture with Spark, Kafka, and Cassandra course at Pluralsight.