Architecting Big Data Solutions Using Google Bigtable

Google Bigtable is a sophisticated NoSQL offering on the Google Cloud Platform with extremely low latencies. By the end of this course, you'll understand why Bigtable is a much more powerful offering than HBase, and how it scales linearly as your data grows.
Course info
Level
Beginner
Updated
Dec 4, 2018
Duration
2h 2m
Description

Bigtable is Google’s proprietary storage service that offers extremely fast read and write speeds. It uses a sophisticated internal architecture that learns access patterns and moves your data around to mitigate the issue of hot-spotting. In this course, Architecting Big Data Solutions Using Google Bigtable, you’ll learn both the conceptual and practical aspects of working with Bigtable. You’ll learn how best to design your schema to enable fast reads and writes, and discover how data in Bigtable can be accessed using the command line as well as client libraries. First, you’ll study the internal architecture of Bigtable and how data is stored within it using the 4-dimensional data model. You’ll also discover how Bigtable clusters, nodes, and instances work, and how Bigtable works with Colossus, Google’s proprietary storage system, behind the scenes. Next, you’ll access Bigtable using both the HBase shell as well as cbt, Google’s command-line utility. Later, you'll create and manage tables and practice exporting and importing data using sequence files. Finally, you’ll study how manual fail-overs can be handled when single-cluster routing is enabled. At the end of this course, you’ll be comfortable working with Bigtable using both the command line as well as client libraries.
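As a taste of the hands-on material, here is a minimal sketch of the 4-dimensional data model (row key, column family, column qualifier, timestamp) using the google-cloud-bigtable Python client. The project, instance, table, and column names are placeholders for illustration, not values from the course.

```python
from google.cloud import bigtable
from google.cloud.bigtable import column_family

# Connect to an existing Bigtable instance (IDs below are hypothetical).
client = bigtable.Client(project="my-gcp-project", admin=True)
instance = client.instance("my-bigtable-instance")

# Create a table with one column family; keep only the latest cell version.
table = instance.table("user_events")
table.create(column_families={"metrics": column_family.MaxVersionsGCRule(1)})

# Write a cell: row key + column family + column qualifier + timestamp
# are the four dimensions of the Bigtable data model.
row = table.direct_row(b"user#42#20181204")
row.set_cell("metrics", b"page_views", b"17")
row.commit()

# Read the row back and print the most recent cell in metrics:page_views.
row_data = table.read_row(b"user#42#20181204")
cell = row_data.cells["metrics"][b"page_views"][0]
print(cell.value, cell.timestamp)
```

Note how the row key encodes the entity and a date; designing keys this way is what keeps related reads fast and avoids hot-spotting a single node.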

About the author

A problem solver at heart, Janani has a Master's degree from Stanford and worked for 7+ years at Google. She was one of the original engineers on Google Docs and holds 4 patents for its real-time collaborative editing framework.

Section Introduction Transcripts

Course Overview
Hi. My name is Janani Ravi, and welcome to this course on Architecting Big Data Solutions Using Google Bigtable. A little about myself: I have a Master's degree in electrical engineering from Stanford and have worked at companies such as Microsoft, Google, and Flipkart. At Google, I was one of the first engineers working on real-time collaborative editing in Google Docs, and I hold four patents for its underlying technologies. I currently work on my own startup, Loonycorn, a studio for high-quality video content. In this course, we'll focus on both the conceptual and practical aspects of working with Bigtable. We'll see how best to design our schema to enable fast reads and writes, and we'll study how data in Bigtable can be accessed using the command line, as well as client libraries. We start off by studying the internal architecture of Bigtable and how data is stored within it using the four-dimensional data model. We'll understand how Bigtable clusters, nodes, and instances work, and how Bigtable works with Colossus, Google's proprietary storage system behind the scenes. We'll then access Bigtable using both the HBase shell, as well as cbt, Google's command-line utility specially built to work with Bigtable. We'll create and manage tables and see how we can export and import data using sequence files. We'll also see how we can use client libraries in Python, running on Cloud Datalab, to work with our data. We'll study how manual failovers can be handled when we have single-cluster routing enabled. We'll then move on to how application profiles can be used to enable multi-cluster routing on Bigtable. We'll monitor our instance using Stackdriver and see how we can programmatically scale our Bigtable cluster. At the end of this course, you'll be comfortable working with Bigtable using both the command line, as well as client libraries, and you'll have a good understanding of how you can best design your schema to make the most of Bigtable's powerful functionality.
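To give a flavor of the programmatic scaling mentioned at the end of the overview, here is a hedged sketch that resizes a cluster's node count with the google-cloud-bigtable Python client. The project, instance, and cluster IDs are hypothetical, and this is one possible approach rather than the exact method used in the course.

```python
from google.cloud import bigtable

# Admin client; project, instance, and cluster IDs below are placeholders.
client = bigtable.Client(project="my-gcp-project", admin=True)
instance = client.instance("my-bigtable-instance")
cluster = instance.cluster("my-bigtable-cluster")

# Fetch the cluster's current state, then request one more node.
cluster.reload()
print("current nodes:", cluster.serve_nodes)
cluster.serve_nodes = cluster.serve_nodes + 1

# update() returns a long-running operation; wait for the resize to finish.
operation = cluster.update()
operation.result(timeout=300)
print("resized to:", cluster.serve_nodes)
```

In practice, a script like this would be driven by metrics (for example, CPU utilization reported in Stackdriver) rather than resized by a fixed amount.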