Google: Professional Cloud Architect


Authors: Howard Dierking, Janani Ravi, Vitthal Srinivasan

This skill path covers all the objectives needed to be a Cloud Architect on Google Cloud. You will learn in depth how to use the products and services as well as how to complete...

What you will learn

  • Compute
  • Storage
  • Google App Engine
  • Google Kubernetes Engine
  • Google Cloud Functions
  • Networking and VPCs
  • Load Balancing
  • Logging, Monitoring, and Debugging
  • IAM
  • Managed Instance Groups
  • Deployment
  • Billing
  • Security
  • Architectural Patterns

Prerequisites

Learners should be familiar with the fundamentals of cloud computing and the Google Cloud Platform. It is also assumed that learners are already proficient to the level of the GCP Cloud Engineer exam.

Beginner

In this section you will cover the basic components of Google Cloud and then start looking at each in more depth. You’ll see how to architect with compute and storage products. After this section you will be ready to learn in more depth about the other GCP products and how to use them.

Google Cloud Platform Fundamentals

by Howard Dierking

Dec 18, 2018 / 2h 7m

Description

Over the last few years, the cloud has proven itself a successful enabler for organizations of all sizes to improve agility, scale, reliability, and spend management. It has also made available capabilities that have greatly accelerated the adoption of a wide variety of new types of applications – from big data to the Internet of Things. This rapidly expanding set of capabilities can be difficult for anyone to keep up with, and that challenge is further compounded when attempting to map capabilities across different public cloud providers. This course, Google Cloud Platform Fundamentals, provides you with an extensive overview of Google Cloud Platform. While the Google Cloud Platform is a more recent offering than some of its competitors, it draws on years of experience running Google's massive internal infrastructure and exposes a streamlined set of solution-focused capabilities to help you build great systems. First, you will explore the core building blocks of the platform. Next, you'll explore the characteristics that differentiate Google's offering from other cloud platforms. Finally, you'll learn the common application architectural patterns. By the end of this course, you will understand how these areas fit together and have starting points for deeper exploration.

Table of contents
  1. Course Overview (1m)
  2. Understanding Google Cloud Platform (19m)
  3. Core Building Blocks (54m)
  4. Security and Tools (21m)
  5. Building for Cloud 3.0 (30m)

Choosing and Implementing Google Cloud Compute Engine Solutions

by Janani Ravi

Sep 11, 2018 / 1h 58m

Description

Provisioning and managing Google Cloud Compute Engine instances, i.e. VMs, is simple and straightforward. In this course, Choosing and Implementing Google Cloud Compute Engine Solutions, you will learn how to create, run, and manage virtual machines on the Google Cloud Platform (GCP). You will start off by understanding the breadth of offerings from the Google Cloud Platform, ranging from pure IaaS offerings such as Google Compute Engine to pure PaaS offerings like Google App Engine. Next, you'll see how you can create and work with these VM offerings on the cloud. You'll create and connect to Linux as well as Windows machines, reserve static IP addresses, attach local SSDs to VMs, communicate between VMs on a network, and connect to Cloud Storage buckets. You'll then move on to administering these instances on the cloud. You'll see how availability policies can be configured to handle VM migrations, how disk images and snapshots can be created, and how you can instantiate VMs using these images and snapshots. Finally, you'll see how startup and shutdown scripts can be run to customize VMs. At the end of this course, you will be comfortable creating, connecting to, and working with virtual machine instances on the Google Cloud Platform.
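To make the VM-creation workflow concrete, here is a minimal sketch of the request body that creating a Compute Engine instance boils down to at the REST API level. The zone, names, and startup script below are illustrative placeholders, not part of the course:

```python
# Sketch of the JSON body for a Compute Engine "instances.insert" REST call.
# Zone, VM name, and the startup script are illustrative placeholders.
zone = "us-central1-a"

instance_body = {
    "name": "demo-vm",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-1",
    "disks": [{
        "boot": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11"
        },
    }],
    "networkInterfaces": [{
        "network": "global/networks/default",
        # An accessConfigs entry of this type requests an ephemeral external IP.
        "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
    }],
    "metadata": {
        # A startup script runs on boot to customize the VM.
        "items": [{"key": "startup-script",
                   "value": "#!/bin/bash\napt-get update && apt-get install -y nginx"}]
    },
}
```

The gcloud CLI and the Cloud Console are front-ends that ultimately produce a resource description of roughly this shape.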

Table of contents
  1. Course Overview (1m)
  2. Understanding GCP Compute Options (44m)
  3. Working with GCE VM Instances (37m)
  4. Managing GCE VM Instances (34m)

Architecting Google Cloud Storage Configurations

by Janani Ravi

Sep 24, 2018 / 1h 50m

Description

Cloud Storage is a powerful storage solution, and is often the entry point for enterprises that want to move their storage and compute from on-premises data centers to the cloud. In this course, Architecting Google Cloud Storage Configurations, you will learn how to create and harness elastic storage functionality on the cloud and understand how you can migrate your on-premises data to the GCP. First, you will understand where exactly Cloud Storage fits in the range of storage services offered by the GCP. Then, you will see the features and pricing of the different kinds of Cloud Storage buckets and how to make the right choice for your use case. Next, you will explore in a hands-on manner how to create and use Cloud Storage buckets, seeing how data can be moved in and out of buckets, how object metadata can be created and updated, and how the lifecycle of objects can be managed. After that, you will administer and regulate access to buckets and objects within buckets. Finally, you will learn how data in buckets can be encrypted using customer-supplied encryption keys, and how objects can be made publicly accessible either permanently or for a limited time period using signed URLs. At the end of this course, you will be comfortable creating, configuring, and regulating access to Cloud Storage buckets on the Google Cloud Platform.
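As a concrete illustration of object lifecycle management, here is a sketch of the lifecycle configuration JSON that Cloud Storage accepts (for example via `gsutil lifecycle set`). The ages and storage classes chosen are illustrative:

```python
# Sketch of a Cloud Storage lifecycle configuration (the JSON shape used by
# "gsutil lifecycle set"). Ages and storage classes are illustrative.
lifecycle_config = {
    "rule": [
        {   # Move objects to colder storage after 30 days.
            "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
            "condition": {"age": 30},
        },
        {   # Delete objects outright after a year.
            "action": {"type": "Delete"},
            "condition": {"age": 365},
        },
    ]
}
```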

Table of contents
  1. Course Overview (1m)
  2. Understanding Cloud Storage in the GCP Service Taxonomy (34m)
  3. Creating and Using Cloud Storage Buckets (44m)
  4. Regulating Access and Using from Other GCP Services (29m)

Intermediate

In this section you will learn about PaaS and IaaS solutions. You’ll also learn about serverless, containers, and networking. You’ll finish with some logging and monitoring, and after this section you’ll be ready for security topics and managed instance groups.

Architecting Scalable Web Applications Using Google App Engine

by Janani Ravi

Jan 11, 2019 / 1h 53m

Description

App Engine is the platform-as-a-service (PaaS) compute offering on the Google Cloud Platform and is one of the oldest offerings on the platform. Initially conceived as a way for cloud users to quickly deploy web applications, it now also has ways to run containers and use flexible runtimes. In this course, Architecting Scalable Web Applications Using Google App Engine, you will learn about the powerful features of App Engine, its two environments, as well as its integrations with other GCP services. First, you will discover how to identify situations where App Engine is the most suitable compute option and learn about its fundamental building blocks. Next, you will explore the Standard App Engine environment. Finally, you will understand the App Engine Flexible environment and build and deploy an application to this environment. When you are finished with this course, you will be very comfortable choosing App Engine for your use case and will have the skills and knowledge to build and deploy apps on different types of App Engine environments.
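To ground the discussion of the standard environment, here is a minimal sketch of an `app.yaml` deployment descriptor, expressed as its Python-dict equivalent. The runtime and scaling values are illustrative assumptions:

```python
# Minimal sketch of an App Engine standard-environment app.yaml, expressed as
# the equivalent Python dict. Runtime and scaling values are illustrative.
app_yaml = {
    "runtime": "python39",          # a standard-environment runtime
    "handlers": [
        {"url": "/static", "static_dir": "static"},   # serve static files directly
        {"url": "/.*", "script": "auto"},             # route everything else to the app
    ],
    "automatic_scaling": {
        "max_instances": 5,         # cap cost by limiting instance count
    },
}
```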

Table of contents
  1. Course Overview (2m)
  2. Introducing Google App Engine (40m)
  3. Deploying Applications on the App Engine Standard Environment (44m)
  4. Deploying Applications on the App Engine Flexible Environment (25m)

Leveraging Advanced Features of Google App Engine

by Janani Ravi

Jan 16, 2019 / 1h 58m

Description

In addition to just hosting web applications, App Engine offers some pretty interesting features such as programmatic deployment and asynchronous task processing. In this course, Leveraging Advanced Features of Google App Engine, you will explore and implement some of the advanced and interesting integrations available with App Engine, which go beyond the plain-vanilla use case of web application hosting. First, you will learn how you can programmatically create and deploy App Engine applications using the App Engine Admin API, which allows release engineers to script the deployment process entirely. In addition, you will explore how App Engine applications can use the built-in mail service to send emails and also integrate with a third-party email service such as SendGrid. Next, you will explore asynchronous processing with App Engine applications, first using cron jobs for scheduling periodic jobs, and then using pull and push queues, which execute tasks asynchronously on worker services. Finally, you will build a complete end-to-end application with the Python Flask web framework, using advanced features such as blueprints and application factories. This app will integrate with a number of GCP services such as Cloud Storage and Cloud Datastore, and will use the OAuth2 flow to allow users to log in using their Google credentials. You will then round the demo off by hosting your application on a custom domain. After finishing this course, you will be very comfortable using advanced features of App Engine based on your use case and gain the experience of building a full-featured web application in Python running on App Engine.
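As a small illustration of the scheduling and queueing features described above, here is a sketch of a `cron.yaml` entry and a push-queue task payload; the URLs and schedule are hypothetical:

```python
# Sketch of an App Engine cron.yaml (as a dict) plus the payload you might
# enqueue on a push queue. URLs and the schedule are illustrative.
cron_yaml = {
    "cron": [
        {"description": "nightly cleanup",
         "url": "/tasks/cleanup",
         "schedule": "every day 03:00"},
    ]
}

# A push-queue task: App Engine POSTs the payload to the handler URL,
# retrying until the handler returns a success status.
task = {"url": "/tasks/resize-image", "payload": {"object": "uploads/cat.png"}}
```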

Table of contents
  1. Course Overview (2m)
  2. Working with App Engine APIs (34m)
  3. Working with Task Queues and Cron Jobs (40m)
  4. Deploying an End-to-end Application to a Custom Domain (40m)

Streamlining API Management Using Google Apigee

by Janani Ravi

Jan 10, 2019 / 1h 52m

Description

Monolithic architectures are falling out of vogue these days and are increasingly being replaced by more modular service-oriented architectures with specific APIs for different purposes. Consequently, APIs are becoming valuable resources, and regulating access to APIs and monetizing their usage is becoming very important. In this course, Streamlining API Management Using Google Apigee, you will gain the ability to build, deploy, and fine-tune API proxies to enforce policies and regulate access to your APIs on the Google Cloud Platform. First, you will learn the powerful features and often underestimated advantages of using a fully-fledged API management platform like Apigee, where you can create policies to administer quotas, authorize users, charge for the usage of your APIs, enforce limits on usage, and protect against security threats. Next, you will discover how to create, deploy, and undeploy API proxies using Apigee Edge. Then, you will take advantage of PreFlows, PostFlows, and conditional flows, which are ways to specify logic that the Apigee Edge proxy will enforce. Finally, you will explore how to integrate Apigee with Google App Engine. When you’re finished with this course, you will have the skills and knowledge of Apigee Edge needed to protect, monetize, and fine-tune your APIs.
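To illustrate what an Apigee policy looks like, here is a sketch of a Quota policy of the kind attached to a proxy flow; the policy name and limit values are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# Sketch of an Apigee Edge Quota policy, which caps how many requests an API
# consumer may make per interval. Name and limit values are illustrative.
quota_policy = """
<Quota name="Quota-100-per-hour">
    <Allow count="100"/>
    <Interval>1</Interval>
    <TimeUnit>hour</TimeUnit>
</Quota>
"""

# Parse the policy back to confirm its shape.
root = ET.fromstring(quota_policy)
allowed = int(root.find("Allow").get("count"))
```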

Table of contents
  1. Course Overview (2m)
  2. Getting Started with Apigee (23m)
  3. Deploying Proxies with Apigee Edge (36m)
  4. Building Proxies with Node.js on Apigee (17m)
  5. Using Apigee with Google App Engine (31m)

Deploying Containerized Workloads Using Google Cloud Kubernetes Engine

by Janani Ravi

Jan 11, 2019 / 2h 51m

Description

Running Kubernetes clusters on the cloud involves working with a variety of technologies, including Docker, Kubernetes, and Compute Engine (GCE) virtual machine instances. This can sometimes get quite involved. In this course, Deploying Containerized Workloads Using Google Cloud Kubernetes Engine, you will learn how to deploy and configure clusters of VM instances running your Docker containers on the Google Cloud Platform using Google Kubernetes Engine. First, you will learn where GKE fits relative to other GCP compute options such as GCE VMs, App Engine, and Cloud Functions. You will understand fundamental building blocks in Kubernetes, such as pods, nodes, and node pools, and how these relate to the fundamental building blocks of Docker, namely containers. Pods, ReplicaSets, and Deployments are core Kubernetes concepts, and you will understand each of these in detail. Next, you will discover how to create, manage, and scale clusters using the Horizontal Pod Autoscaler (HPA). You will also learn about StatefulSets and DaemonSets on GKE. Finally, you will explore how to share state using volume abstractions, and field user requests using service and ingress objects. You will see how custom Docker images are built and placed in the Google Container Registry, and learn a new and advanced feature, binary authorization. When you’re finished with this course, you will have the skills and knowledge of Google Kubernetes Engine needed to construct scalable clusters running Docker containers on the GCP.
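To make the pod/ReplicaSet/Deployment concepts concrete, here is a sketch of the kind of Kubernetes Deployment manifest you would apply to a GKE cluster; the image path and labels are illustrative:

```python
# Sketch of a Kubernetes Deployment manifest (as a dict) of the kind you would
# apply to a GKE cluster with kubectl. Image and label names are illustrative.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,   # the underlying ReplicaSet keeps three pods running
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{
                "name": "web",
                # Image pulled from the project's Container Registry.
                "image": "gcr.io/my-project/web:v1",
                "ports": [{"containerPort": 8080}],
            }]},
        },
    },
}
```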

Table of contents
  1. Course Overview (2m)
  2. Introducing Google Kubernetes Engine (GKE) (54m)
  3. Creating and Administering GKE Clusters (47m)
  4. Deploying Containerized Workloads to GKE Clusters (49m)
  5. Monitoring GKE Clusters Using Stackdriver (17m)

Leveraging Advanced Features on the Google Cloud Kubernetes Engine

by Janani Ravi

Jan 15, 2019 / 2h 31m

Description

Kubernetes is a container orchestration technology that is fast emerging as the most popular computing option for hybrid and multi-cloud architectures. A key attraction of Kubernetes is its suitability for use cases involving Continuous Integration and Continuous Delivery (CI/CD); however, building such pipelines can get quite complicated. In this course, Leveraging Advanced Features on the Google Cloud Kubernetes Engine, you will gain the ability to fine-tune the networking and security aspects of your GKE clusters, as well as to orchestrate complex CI/CD pipelines on the Google Cloud Platform. First, you will learn how to deploy stateful and stateless applications, jobs, and cron jobs. Next, you will discover the uses of network policies, private clusters, and pod security policies. Finally, you will explore how to pull together Jenkins, Cloud Source Repositories, and the Google Container Registry to orchestrate a CI/CD pipeline. When you’re finished with this course, you will have the skills and knowledge of the Google Kubernetes Engine needed to fine-tune your clusters and construct CI/CD pipelines with minimal effort.
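As an example of the network policies mentioned above, here is a sketch of a Kubernetes NetworkPolicy that restricts traffic between pods; all label and port values are illustrative:

```python
# Sketch of a Kubernetes NetworkPolicy restricting which pods may reach the
# "db" pods -- the kind of rule usable on GKE clusters with network policy
# enforcement enabled. Labels and the port are illustrative.
network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-web-to-db"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "db"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            # Only pods labelled app=web may connect, and only on port 5432.
            "from": [{"podSelector": {"matchLabels": {"app": "web"}}}],
            "ports": [{"protocol": "TCP", "port": 5432}],
        }],
    },
}
```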

Table of contents
  1. Course Overview (2m)
  2. Creating and Managing Deployments on GKE Clusters (1h 4m)
  3. Working with Networking and Security on GKE Clusters (40m)
  4. Leveraging Continuous Integration and Continuous Delivery (CI/CD) Using GKE (43m)

Architecting Event-driven Serverless Solutions Using Google Cloud Functions

by Janani Ravi

Dec 4, 2018 / 2h 5m

Description

Cloud Functions are lightweight serverless units of compute that run in response to external events. In this course, Architecting Event-driven Serverless Solutions Using Google Cloud Functions, you will learn how you can create and configure Google Cloud Functions with a number of different types of triggers: HTTP, Cloud Storage, Pub/Sub, and many others. You’ll begin by discovering the various compute services that the GCP offers as well as where Cloud Functions fit in that ecosystem. Then, you’ll study the Python and Node.js runtimes that Cloud Functions currently supports and understand how events and triggers work. Next, you’ll work with HTTP Cloud Functions, which can be used to implement webhooks. Following that, you’ll configure background functions to be retried in case of errors or failures. Lastly, you’ll learn how to use Stackdriver for monitoring and error reporting from within Cloud Functions. By the end of this course, you’ll be able to easily implement Google Cloud Functions as a part of your microservices architecture.

Table of contents
  1. Course Overview (2m)
  2. Getting Started with GCP Cloud Functions (19m)
  3. Implementing and Invoking HTTP Cloud Functions (38m)
  4. Implementing and Invoking Background Cloud Functions (32m)
  5. Leveraging Stackdriver for Monitoring Cloud Functions (32m)

Leveraging Network Interconnection Options on the GCP

by Janani Ravi

Jan 14, 2019 / 1h 36m

Description

The GCP provides a variety of networking services to interconnect different VPC networks on the GCP, as well as to connect on-premises installations with those on the cloud. Some of these involve the use of external IP addresses, while others support internal, or RFC 1918, IP addresses. Knowing the right interconnection technology for your use case can get confusing. In this course, Leveraging Network Interconnection Options on the GCP, you will gain the ability to precisely understand the differing semantics of these interconnection options and configure the right one for your use case. First, you will learn the full range of available options, including both Peering and Interconnect options, as well as Shared VPCs, VPN, and VPC Peering. Next, you will discover how VPCs can be peered across projects or even organizations using VPC Network Peering. Finally, you will explore how to implement both static and dynamic VPN gateways. When you are finished with this course, you will have the skills and knowledge of the network interconnection options on the GCP needed to build efficient, fine-tuned connections between networks, whether those networks are on the GCP or on-premises.

Table of contents
  1. Course Overview (2m)
  2. Understanding GCP Interconnection Options for Enterprise Connectivity (22m)
  3. Designing and Implementing VPC Network Peering (34m)
  4. Implementing Dynamic VPN Gateways Using Cloud Router (37m)

Architecting Global Private Clouds with VPC Networks

by Janani Ravi

Jan 15, 2019 / 1h 49m

Description

Understanding the exact semantics and features of network services on public cloud platforms can get complicated. In this course, Architecting Global Private Clouds with VPC Networks, you will gain the ability to create and correctly configure both auto and custom-mode Virtual Private Cloud (VPC) networks, understand the semantics of subnets, and work with routes as well as firewall rules. First, you will learn the fundamental concepts of networking on the Google Cloud Platform (GCP), and how GCP networking differs from that on other public cloud platforms. Next, you will discover how the default VPC and other auto-mode and custom-mode VPCs work. Finally, you will explore how to use Shared VPCs to adapt your network architecture to use cases such as multi-tier apps and hybrid scenarios. When you’re finished with this course, you will have the skills and knowledge of Google VPC Networks needed to design and organize your cloud resources for both ease-of-use and isolation.
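To make firewall rules concrete, here is a sketch of the JSON resource a VPC firewall rule corresponds to in the Compute API; the rule shown (allow HTTP/HTTPS to tagged instances) and its names are illustrative:

```python
# Sketch of a GCP VPC firewall rule resource (the JSON shape used by the
# Compute API). Network name, ranges, and tags are illustrative.
firewall_rule = {
    "name": "allow-http",
    "network": "global/networks/default",
    "direction": "INGRESS",
    "sourceRanges": ["0.0.0.0/0"],          # any source address
    "allowed": [{"IPProtocol": "tcp", "ports": ["80", "443"]}],
    # Only instances carrying this tag are affected by the rule.
    "targetTags": ["web-server"],
}
```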

Table of contents
  1. Course Overview (2m)
  2. Understanding VPC Networks on the GCP (54m)
  3. Working with Firewalls and VPCs (33m)
  4. Leveraging Shared VPCs (20m)

Leveraging Load Balancing Options on the GCP

by Vitthal Srinivasan

Jan 17, 2019 / 2h 23m

Description

Load balancers used to be somewhat arcane tools that only some network planners or architects really had to worry about during edge-case planning; now they are absolutely mainstream. This is because, in the on-cloud world, backend compute instances and IP addresses both change frequently and unpredictably. Load balancers provide the stable front-end that end users of an application can reliably connect to, and have their requests routed to the appropriate backend instance. In this course, Leveraging Load Balancing Options on the GCP, you'll explore and work with the different kinds of load balancing options available on the Google Cloud and learn the right one to pick for your specific use case. First, you’ll start off by understanding the different load balancing options on the GCP, the OSI layer at which they operate, and the differences between global and regional load balancing and external and internal load balancing. Next, you’ll be introduced to the various components that make up the global HTTP load balancer, such as backend services, forwarding rules, and URL maps. Then, you'll get hands-on and create and configure two HTTP load balancers to demonstrate the use of both unmanaged and managed instance groups on the backend. Finally, you'll explore all of the other global as well as regional load balancers on the GCP, such as TCP proxy and SSL proxy load balancing, and network load balancing. When you’re done with this course, you'll possess a comprehensive conceptual and hands-on understanding of the various load balancing options on the GCP and you'll be able to pick the right one for your use case.
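To illustrate how an HTTP load balancer routes requests, here is a sketch of a simplified URL map together with a toy matcher; the host, paths, and backend names are hypothetical, and real URL-map matching has more features than this prefix check:

```python
# Sketch of an HTTP load balancer's URL map: host rules pick a path matcher,
# and path rules pick a backend service. Names and paths are illustrative;
# the routing function below is a simplified prefix match.
url_map = {
    "hostRules": [{"hosts": ["www.example.com"], "pathMatcher": "web"}],
    "pathMatchers": [{
        "name": "web",
        "defaultService": "backend-web",
        "pathRules": [{"paths": ["/video/*"], "service": "backend-video"}],
    }],
}

def route(host, path):
    """Return the backend service a request would be sent to, or None."""
    for rule in url_map["hostRules"]:
        if host in rule["hosts"]:
            matcher = next(m for m in url_map["pathMatchers"]
                           if m["name"] == rule["pathMatcher"])
            for pr in matcher["pathRules"]:
                if any(path.startswith(p.rstrip("*")) for p in pr["paths"]):
                    return pr["service"]
            return matcher["defaultService"]
    return None
```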

Table of contents
  1. Course Overview (2m)
  2. Understanding Load Balancing Options on the GCP (21m)
  3. Implementing Load Balancing with Instance Groups (51m)
  4. Configuring Load Balancers in the Google Cloud Platform (1h 7m)

Leveraging Advanced Networking and Load Balancing Services on the GCP

by Janani Ravi

Jan 18, 2019 / 1h 46m

Description

In this course, Leveraging Advanced Networking and Load Balancing Services on the GCP, you will gain the ability to significantly reduce content-serving times using Cloud CDN, leverage Cloud DNS for authoritative name-serving, and gain all of the benefits of HTTPS load balancing for Kubernetes clusters using container-native load balancing. First, you will learn how Cloud CDN can be used to serve content to users from optimized web caches maintained by Google at its points of presence throughout the world. These access points are at the edge of the Google network, and cache content based on specific keys. This is the same highly optimized technology that makes YouTube content so fast to load. You will implement Cloud CDN with an HTTP load balancer that has a backend Cloud Storage bucket and use that to cache images. Any cacheable response from the HTTP backend can be cached by Cloud CDN. Next, you will discover how to configure your domain with Cloud DNS. This is an authoritative DNS nameserver service which supports both public and private DNS zones. Your DNS records will reside in a highly available and scalable DNS serving network. Finally, you will explore how to combine two of the hottest services on the GCP - namely HTTP(S) load balancers and Kubernetes clusters. This is done using container-native load balancing, which configures an HTTP(S) load balancer to work with a specific type of backend known as a network endpoint group. Network endpoint groups (NEGs) are zonal resources that represent collections of IP address and port combinations for GCP resources within a single subnet. Each IP address and port combination is called a network endpoint. Network endpoint groups can be used as backends in backend services for HTTP(S), TCP proxy, and SSL proxy load balancers. Because NEG backends allow you to specify IP addresses and ports, you can distribute traffic in a granular fashion among applications or containers running within VM instances. That is exactly what container-native load balancing does - it uses NEGs to distribute traffic across pods. When you’re finished with this course, you will have the skills and knowledge of powerful advanced features related to networking on the GCP, such as combining load balancers with backend buckets, backend instance groups, and network endpoint groups to implement optimized serving of static content as well as container-native load balancing.
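The NEG description above can be sketched as a resource holding a list of IP:port endpoints; the zone and addresses below are illustrative:

```python
# Sketch of a network endpoint group (NEG): a zonal collection of IP:port
# endpoints that an HTTP(S) load balancer can target directly -- the mechanism
# behind container-native load balancing. Zone and addresses are illustrative.
neg = {
    "name": "web-neg",
    "zone": "us-central1-a",
    "networkEndpointType": "GCP_VM_IP_PORT",
    "endpoints": [
        # Each endpoint is one pod's IP and port inside a VM's subnet range,
        # so traffic is distributed to pods rather than to whole VMs.
        {"ipAddress": "10.0.0.11", "port": 8080},
        {"ipAddress": "10.0.0.12", "port": 8080},
    ],
}
```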

Table of contents
  1. Course Overview (2m)
  2. Caching HTTP(S) Load Balanced Content Using Cloud CDN (35m)
  3. Using Cloud DNS for Low-latency Serving (49m)
  4. Using Container-native Load Balancing on Kubernetes (19m)

Managing Cloud Resources Using Google Stackdriver

by Janani Ravi

Jan 16, 2019 / 1h 35m

Description

Stackdriver Monitoring is a powerful and versatile cloud monitoring tool that is tightly integrated with virtually every service on the Google Cloud Platform. You can significantly improve the performance and design of your architecture and simplify troubleshooting if you master the nuances of Stackdriver Monitoring. In this course, Managing Cloud Resources Using Google Stackdriver, you will gain the ability to monitor your cloud resources, track both system and user-defined metrics, and respond to alerts using Stackdriver Monitoring. First, you will learn Stackdriver concepts such as metrics, monitored resources, workspaces, and alerting policies. In the process, you will learn how to install the Stackdriver monitoring agent, and also when that agent is and is not required. Next, you will discover how to monitor third-party applications and work with custom metrics. You will create resources to monitor as well as metrics associated with those resources, then use the Metrics Explorer to create dashboards to keep track of those metrics. You will also configure uptime checks and alerts to notify you when resource health is not satisfactory. Stackdriver supports uptime checks over HTTP, HTTPS, and TCP. The probes sent by these checks are governed by VPC firewall rules, so those must be set up correctly as well. Finally, you will explore how to create checks for the absence of metrics, set variables in alerts, explore incidents and events, and integrate with third-party tools. Specifically, you will integrate Stackdriver Monitoring with OpsGenie, which is an alerting and incident management platform. You will round out the course by programmatically working with the Stackdriver Monitoring API from within Datalab Python notebooks. When you’re finished with this course, you will have the skills and knowledge of Stackdriver Monitoring needed to monitor, troubleshoot, and analyze the usage of your cloud resources.
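As a concrete example of an alerting policy, here is a sketch of the kind of policy body the monitoring API accepts: alert when CPU utilization stays above a threshold for five minutes. The display names, filter, and threshold values are illustrative:

```python
# Sketch of a Stackdriver Monitoring alerting policy: fire when CPU
# utilization stays above 80% for five minutes. Values are illustrative.
alert_policy = {
    "displayName": "High CPU",
    "conditions": [{
        "displayName": "CPU above 80%",
        "conditionThreshold": {
            "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
            "comparison": "COMPARISON_GT",
            "thresholdValue": 0.8,
            "duration": "300s",    # the condition must hold for 5 minutes
        },
    }],
    "combiner": "OR",              # any one condition firing triggers the alert
}
```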

Table of contents
  1. Course Overview (2m)
  2. Introducing Stackdriver Monitoring (42m)
  3. Working with Advanced Monitoring Features (39m)
  4. Monitoring Resources Using Cloud Datalab (11m)

Managing Logs, Errors and Application Performance Using Google Stackdriver

by Vitthal Srinivasan

Jan 18, 2019 / 2h 10m

Description

The Stackdriver suite of services offers the functionality to debug applications with no downtime, conduct sophisticated searches and analysis of logs, measure latencies and uptime with great granularity, and integrate with other widely used software suites. In this course, Managing Logs, Errors and Application Performance Using Google Stackdriver, you will explore each of the components of the Stackdriver suite with the exception of the monitoring service, which merits an entire separate course of its own. You will learn about the logging, error reporting, debugging, trace, and profiler services within Stackdriver. First, you will study how you can work with log data using Stackdriver Logging. Stackdriver uses fluentd agents to configure and collect log metrics, and you will install logging agents on Compute Engine VM instances and explore the metrics that can be monitored. You will work with counters as well as distribution metrics and learn how to export logging data and manage exclusions. Next, you will study how Stackdriver Error Reporting can be used with applications running on different GCP compute options, including Compute Engine VMs and Cloud Functions. You will see how you can view and manage errors in the Error Reporting UI and also work with the issue tracker to monitor issues and notifications to receive updates on a channel of your choice. Finally, you will study the three services that comprise the Stackdriver APM suite for application performance management. This includes Stackdriver Debugger, which can be used to debug applications running on App Engine and Compute Engine VMs, and Stackdriver Trace and Profiler, which help you observe request latencies and code performance. When you’re done with this course, you will be well-versed in the different specialized services within the Stackdriver suite that can help you track, debug, and profile your applications running on the GCP.
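To illustrate the counter metrics mentioned above, here is a sketch of a log-based counter metric defined by a logging filter; the metric name and filter are illustrative:

```python
# Sketch of a log-based counter metric: Stackdriver Logging counts every log
# entry matching the filter. Name and filter are illustrative.
log_metric = {
    "name": "gce-error-count",
    "description": "Error-level entries from Compute Engine instances",
    "filter": 'resource.type="gce_instance" AND severity>=ERROR',
    # A counter metric counts matching entries; a distribution metric would
    # additionally extract a numeric value from each matching entry.
}
```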

Table of contents
  1. Course Overview (2m)
  2. Working with Log Data Using Stackdriver Logging (58m)
  3. Error Reporting with Stackdriver (22m)
  4. Managing Application Performance with Stackdriver (47m)

Advanced

In this section you’ll dive into identity and access management, security, and billing. You’ll finish with an understanding of large scale architectural patterns and best practices.

Regulating Resource Usage Using Google Cloud IAM

by Vitthal Srinivasan

Jan 17, 2019 / 1h 55m

Description

Intelligent, clearly thought-through role-based access control (RBAC) is essential in any enterprise-scale cloud installation. The GCP offers several sophisticated security-related products to help thwart security threats, but none of these will be effective in the absence of well-designed access control. In this context, Cloud IAM is the service that governs both identities and access management. In this course, Regulating Resource Usage Using Google Cloud IAM, you will gain the ability to configure role-based access control to bind member identities and service accounts to permissions, and to monitor and control resource usage on the GCP with precision and granularity. First, you will learn how identities on the GCP can be member identities or service accounts. Next, you will discover how role-based access control on the GCP is implemented using the Identity and Access Management (IAM) service. Finally, you will explore how to use a specific feature on the GCP, the Identity-Aware Proxy, to implement role-based access to web applications running on App Engine, Compute Engine, or Kubernetes. When you’re finished with this course, you will have the skills and knowledge of roles, identities, and service accounts to implement an intelligently designed strategy for resource regulation on the GCP.
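To make role bindings concrete, here is a sketch of an IAM policy of the shape returned by getIamPolicy, with a small helper that adds a member to a role; the members and roles shown are illustrative:

```python
# Sketch of an IAM policy as a list of role bindings (the shape used by
# getIamPolicy / setIamPolicy). Members and roles are illustrative.
policy = {
    "bindings": [
        {"role": "roles/storage.objectViewer",
         "members": ["user:alice@example.com"]},
    ]
}

def add_member(policy, role, member):
    """Bind member to role, creating the binding if it does not exist yet."""
    for b in policy["bindings"]:
        if b["role"] == role:
            if member not in b["members"]:
                b["members"].append(member)
            return
    policy["bindings"].append({"role": role, "members": [member]})

# Grant a (hypothetical) service account read access to objects.
add_member(policy, "roles/storage.objectViewer",
           "serviceAccount:app@my-project.iam.gserviceaccount.com")
```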

Table of contents
  1. Course Overview (2m)
  2. Understanding Identities and Access Management on the GCP (28m)
  3. Working with Roles and Permissions in Cloud IAM (33m)
  4. Working with Service Accounts in Cloud IAM (32m)
  5. Simplifying Resource Access Using the Identity-Aware Proxy (17m)

Building Scalable Compute Solutions with Managed Instance Groups

by Vitthal Srinivasan

Jan 16, 2019 / 2h 6m

Description

Two primary attractions of cloud computing are autohealing and autoscaling. Individual cloud VM instances do not come equipped with either of these features, however; for that you need to master a higher-level abstraction, the Managed Instance Group. In this course, Building Scalable Compute Solutions with Managed Instance Groups, you will gain the ability to instantiate, scale, and actually use Managed Instance Groups on the Google Cloud Platform. First, you will learn what an instance template is, how it is created, and how it can be used to instantiate either individual instances or an instance group. Instance templates are the basic building blocks of infrastructure automation on the GCP, and can be thought of as blueprints from which a VM instance can be created. You can use an instance template along with a health check and an autoscaling policy to create a Managed Instance Group. In this way, the GCP ensures the uniformity of all instances in the MIG. This allows the service to implement perfect horizontal scaling, in which generic instances enter and leave the group over time. Next, you will discover how updates and rollbacks are performed, and how individual instances can be debugged in a Managed Instance Group. Finally, you will explore how to configure a Managed Instance Group as the scalable backend for a Load Balancer. The GCP has several load balancing options at different levels of the OSI network stack, and in this course we focus on wiring up an HTTP load balancer to the backend instance group. Load balancers have a lot of moving parts, so this configuration is fairly involved. When you’re finished with this course, you will have the skills and knowledge of Managed Instance Groups needed to build scalable compute backends that provide both autohealing and autoscaling on the GCP.
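As an illustration of the autoscaling described above, here is a sketch of the autoscaler policy attached to a managed instance group; the target name, bounds, and CPU target are illustrative:

```python
# Sketch of a managed instance group autoscaler policy: scale on average CPU
# utilization between fixed bounds. Names and values are illustrative.
autoscaler = {
    "target": "instanceGroupManagers/web-mig",
    "autoscalingPolicy": {
        "minNumReplicas": 2,
        "maxNumReplicas": 10,
        "cpuUtilization": {"utilizationTarget": 0.6},   # aim for 60% average CPU
        "coolDownPeriodSec": 90,   # wait for new instances to warm up
    },
}
```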

Table of contents
  1. Course Overview2m
  2. Instantiating GCE VMs from Instance Templates52m
  3. Configuring and Using Managed Instance Groups43m
  4. Load Balancing and Autoscaling with Managed Instance Groups27m

Automating Infrastructure Deployment Using Google Cloud Deployment Manager

by Vitthal Srinivasan

Jan 17, 2019 / 1h 37m

1h 37m

Start Course
Description

Cloud Deployment Manager is Google’s service for infrastructure automation, an approach often referred to as Infrastructure-as-Code (IaC). Infrastructure-as-Code services allow us to programmatically provision resources using templates, commands, and constructs such as loops and conditionals. In this course, Automating Infrastructure Deployment Using Google Cloud Deployment Manager, you will learn the conceptual and practical aspects of working with Cloud Deployment Manager to configure complex GCP architectures in a repeatable and verifiable manner. First, you will study the basic concepts and terms used in the Deployment Manager. You’ll understand what configurations, resources, schemas, templates, manifests, and deployments are and how they fit together to allow you to programmatically create and manage your deployments. You’ll bring all these components together to provision a deployment for a Compute Engine virtual machine instance. Next, you will learn how you can use templates to parameterize your infrastructure deployments. Deployments can be thought of as directed acyclic graphs, where the graph models dependencies between resources. You will learn how to configure these dependencies using references in your deployment specifications. You will also make your template reuse more robust by specifying schemas, which are rules that govern template usage. Finally, you will study how you can use templates from the Deployment Manager Marketplace and also configure templates to define the architecture for a load-balanced application. This will involve the use of composite types registered with the Type Registry and specifying containers to run on your provisioned resources. At the end of this course, you will have the knowledge and confidence to use Google’s Deployment Manager to programmatically create and provision resources to run your applications on the GCP.
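A Deployment Manager configuration of the kind the course builds can be sketched as a single YAML file. This is an illustrative fragment, not course material: the resource name, zone, and machine type are placeholders, while the `type` and property paths follow the standard `compute.v1.instance` schema.

```yaml
# config.yaml -- a minimal deployment describing one Compute Engine VM.
resources:
- name: demo-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-small
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-12
    networkInterfaces:
    - network: global/networks/default
```

Such a file is deployed with `gcloud deployment-manager deployments create demo --config config.yaml`; Deployment Manager records the resulting manifest so the deployment can later be previewed, updated, or deleted as a unit.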

Table of contents
  1. Course Overview2m
  2. Introducing Google Cloud Deployment Manager28m
  3. Automating Infrastructure Provisioning Using Templates31m
  4. Provisioning Complex Architectures with Deployment Manager36m

Analyzing and Visualizing Resource Usage Using the Google Cloud Billing APIs

by Vitthal Srinivasan

Jan 9, 2019 / 1h 24m

1h 24m

Start Course
Description

The costs associated with working on the cloud are very different from those associated with traditional on-premises installations. Cloud-based installations have important attractions from a financial perspective - most importantly the pay-as-you-go billing model and the promise of increased capacity exactly when it is needed. However, it would be an expensive mistake to believe that cloud installations are always or inherently cheaper than on-premises installations. In this course, Analyzing and Visualizing Resource Usage Using the Google Cloud Billing APIs, you will see how you can use Google’s dashboards, analytics tools, and billing APIs to measure, analyze, and control your cloud bills. First, you will be introduced to the various terms and concepts involved in billing, such as organizations, projects, and resources. You will learn about the two kinds of billing accounts, self-serve and invoiced, and understand the various billing roles in the Google IAM service that can be used to grant or limit a user’s access to billing information. You’ll also explore the billing console on the GCP using both the web interface and the gcloud command line utility. Next, you will move on to resources that you can use on the GCP to measure, analyze, and visualize billing data. You will learn how to track your costs and payment history, transactions, and cost trends on the billing console. You’ll also see how you can export your billing details to Cloud Storage buckets or to BigQuery, the GCP’s analytical warehouse, which can support your advanced billing-related queries. Once your billing information is in BigQuery, you will see how you can use Data Studio to analyze spend and resource usage. Finally, you will learn how you can set up budget alerts, receive notifications, and manage billing details programmatically using the billing APIs. You will also learn how to configure access control to the billing APIs and set up programmatic budget alerts using Cloud Functions and Pub/Sub. When you are done with this course, you will have hands-on experience working with billing on the GCP and know how to measure and analyze your cloud bills.
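Once billing export to BigQuery is enabled, spend can be analyzed with standard SQL. A hedged sketch: the project and dataset below (`my-project.billing.gcp_billing_export`) are placeholders, since the real export table name is generated from your billing account ID, but the `service.description`, `cost`, and `usage_start_time` columns are part of the standard export schema.

```sql
-- Total cost per GCP service for the current month.
SELECT
  service.description AS service,
  ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export`
WHERE usage_start_time >= TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), MONTH)
GROUP BY service
ORDER BY total_cost DESC;
```

A query like this can also back a Data Studio chart, or be run on a schedule to feed the alerting workflows described above.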

Table of contents
  1. Course Overview2m
  2. Getting Started with Cloud Billing21m
  3. Managing, Analyzing, and Visualizing Billing Data31m
  4. Working with the Cloud Billing API28m

Leveraging Google Cloud Armor, Security Scanner and the Data Loss Prevention API

by Janani Ravi

Jan 18, 2019 / 2h 12m

2h 12m

Start Course
Description

Recent years have witnessed a steady increase in the number of reported instances of data being compromised, stolen, and even held for ransom. In this course, Leveraging Google Cloud Armor, Security Scanner and the Data Loss Prevention API, you will gain the ability to mitigate threats of DDoS attacks using Cloud Armor, scan your App Engine and Compute Engine web apps using Security Scanner, enforce audit rules using Forseti, and use the Data Loss Prevention API to control access to sensitive data. First, you will learn how to use Cloud Armor to mitigate the threat of DDoS attacks directed at your HTTP(S) load balanced applications. Cloud Armor enforces these rules at the edge of the Google network and prevents unwanted requests from permeating into the interior of your VPC network. Next, you will discover how to use the Security Scanner to identify potential vulnerabilities in your App Engine and Compute Engine web apps. These currently include checks for cross-site scripting, flash injection, mixed content, clear-text passwords, invalid headers, and the use of outdated libraries. This list of vulnerabilities is constantly being added to, which means that your Security Scanner reports will get richer and better over time. You will also use Forseti, a third-party tool for conducting security audits of IAM policies and comparing the actual and desired state of system resources. Finally, you will explore how to use the Data Loss Prevention API to control access to sensitive data. The DLP API recognizes a long list of country-specific sensitive data types, such as US Social Security Numbers and the tax identifiers of several countries. The API has built-in detectors that return the probability that a given data item matches a certain type of sensitive data. It is also possible to add custom detectors, and to use powerful techniques for redaction and de-identification of such data. When you’re finished with this course, you will have the skills and knowledge of various security auditing and protection services to protect against DDoS attacks, as well as identify vulnerabilities in your apps and project settings to help identify and protect sensitive data.
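The edge-enforcement workflow described here comes down to attaching rules to a Cloud Armor security policy and binding that policy to a load-balanced backend service. A hedged gcloud sketch; the policy name, backend service name, and CIDR range are placeholders:

```sh
# Create a security policy and add a rule that blocks a source IP range.
gcloud compute security-policies create edge-policy \
    --description="Block known-bad ranges at the edge"

gcloud compute security-policies rules create 1000 \
    --security-policy=edge-policy \
    --src-ip-ranges=203.0.113.0/24 --action=deny-403

# Attach the policy to the backend service behind the HTTP(S) load balancer.
gcloud compute backend-services update web-backend \
    --security-policy=edge-policy --global
```

Because the policy is evaluated at Google's edge, matching requests are rejected before they ever reach the VPC network or the backend instances.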

Table of contents
  1. Course Overview2m
  2. Using Cloud Armor to Protect Against DDoS Attacks35m
  3. Using Cloud Security Scanner to Identify App Vulnerabilities39m
  4. Using the Cloud Data Loss Prevention (DLP) API for Data Protection54m

Implementing Customer Managed Encryption Keys (CMEK) with Google Key Management Service

by Vitthal Srinivasan

Jan 9, 2019 / 1h 39m

1h 39m

Start Course
Description

At the core of cloud data encryption is a thorough knowledge of Customer-Managed Encryption Keys (CMEK). In this course, Implementing Customer Managed Encryption Keys (CMEK) with Google Key Management Service, you’ll see how to implement and manage encryption keys on the Google Cloud Platform. First, you’ll learn what symmetric and asymmetric keys are and how to create and rotate them. Next, you’ll explore how to protect secrets using symmetric keys and how to validate data using digital signatures. Finally, you’ll discover how to use advanced features to further secure your data and resources on the cloud. When you’re finished with this course, you’ll have a foundational knowledge of the Google Key Management Service that will help you as you move forward to create and rotate cloud-hosted keys and manage secrets on the GCP.
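The create-rotate-protect workflow can be sketched with the gcloud KMS commands. This is an illustrative sketch, not course material: the key ring and key names are placeholders, and the rotation timestamp is a dummy value you would replace with a real future date.

```sh
# Create a key ring and a symmetric key with 90-day automatic rotation.
gcloud kms keyrings create demo-ring --location=global

gcloud kms keys create demo-key \
    --location=global --keyring=demo-ring \
    --purpose=encryption \
    --rotation-period=90d \
    --next-rotation-time=2030-01-01T00:00:00Z   # placeholder date

# Protect a small secret with that key, then recover it.
gcloud kms encrypt --location=global --keyring=demo-ring --key=demo-key \
    --plaintext-file=secret.txt --ciphertext-file=secret.enc
gcloud kms decrypt --location=global --keyring=demo-ring --key=demo-key \
    --ciphertext-file=secret.enc --plaintext-file=secret.dec
```

After rotation, new encryptions use the latest key version while older ciphertext remains decryptable, because each ciphertext records the key version that produced it.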

Table of contents
  1. Course Overview2m
  2. Introducing the Google Key Management Service31m
  3. Working with Cryptographic Keys42m
  4. Leveraging Other GCP Services with KMS23m

Leveraging Architectural Design Patterns on the Google Cloud

by Janani Ravi

Jan 10, 2019 / 2h 30m

2h 30m

Start Course
Description

The Google Cloud Platform offers a very large number of services covering every important aspect of public cloud computing. In this course, Leveraging Architectural Design Patterns on the Google Cloud, you will learn how the different core design choices in storage, compute, and networking can be made to assemble complex architectures for specific use cases. First, you will learn specific types of reusable design patterns built using GCP components. These include the use of managed instance groups for infrastructure, cloud functions for event-driven compute, lambda and kappa architectures for big data processing, and BigQuery ML and Cloud ML Engine for machine learning applications. Next, you will explore how to pull together Jenkins, Cloud Source Repositories, and the Google Container Registry to orchestrate a CI/CD pipeline. This involves first creating a cluster and installing Helm (the Kubernetes package manager), then deploying your app via a canary release, committing the code into Cloud Source Repositories, and finally using Jenkins (an automated build server) to push the master branch into production. Finally, you will understand and construct various networking patterns on the GCP. These include the use of a bastion host, or jump host, to restrict the external touch-points within a VPC network. By the end of this course, you will be comfortable identifying the important decisions a Cloud Architect must make, and will have the skills and knowledge to use complex architectural design patterns that have been proven in use by others.
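The bastion-host pattern mentioned above reduces to a pair of firewall rules: external SSH is allowed only to the bastion, and internal instances accept SSH only from it. A hedged sketch; the network name, tags, and zone are placeholders, and in practice the `0.0.0.0/0` source range would be narrowed to your admin IPs.

```sh
# Allow SSH to the bastion from outside (tag-based targeting).
gcloud compute firewall-rules create allow-ssh-bastion \
    --network=my-vpc --direction=INGRESS --allow=tcp:22 \
    --source-ranges=0.0.0.0/0 --target-tags=bastion

# Allow SSH to internal instances only from instances tagged as bastion.
gcloud compute firewall-rules create allow-ssh-internal \
    --network=my-vpc --direction=INGRESS --allow=tcp:22 \
    --source-tags=bastion --target-tags=internal

# Internal VMs carry no external IP; administrators hop through the bastion.
gcloud compute ssh bastion-vm --zone=us-central1-a
```

Because the internal instances have no external IP addresses, the bastion becomes the single audited touch-point into the VPC network.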

Table of contents
  1. Course Overview2m
  2. Understanding Classic Architectural Patterns on the GCP43m
  3. Leveraging Container-based Pipelines on the GCP1h 3m
  4. Designing Network Architectures on the GCP40m