Podcasts

082 - Major releases and updates for AWS, Azure and GCP with David Tucker

June 01, 2021

Join cloud strategist and Pluralsight author David Tucker for the latest news, updates, and resources you need to stay up to speed in the fast-moving world of AWS, Azure, and Google Cloud. For all the resources mentioned in this episode, please check out the video descriptions in the links below.


If you enjoy this episode, please consider leaving a review on Apple Podcasts or wherever you listen.

Please send any questions or comments to podcast@pluralsight.com.

Transcript

Seth Merrill:

Hello, and welcome to All Hands On Tech, conversations with top voices in software development, machine learning, cloud security, and leadership. I'm Seth Merrill. Today, we are presenting the audio from a recently launched video series from cloud strategist and Pluralsight author, David Tucker, to help get you up to speed on the latest news, resources, and launches in the fast-moving world of cloud. Links to all the resources David mentions in this episode of All Hands On Tech are available in the show notes, where you can also find the full Cloud Tracker video series.

David Tucker:

AWS Step Functions is a powerful serverless orchestration service, but up until now it has been painful to understand exactly what data is going into and out of each task in your workflow. AWS has addressed this by releasing the Step Functions data flow simulator in the AWS console, which takes the guesswork out of configuring your input path, output path, and result path.

This feature includes a real-time view into how your JSONPath queries will change the data for a single task. In addition, this tool may give you insight into ways to customize the data within your tasks that you weren't familiar with. The tool initially launched in four different regions, but AWS has announced the intent to launch it in all commercial regions in the near future. Check out your preferred region, or just jump over to us-east-1, to give it a try.
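
To make those three paths concrete, here is a minimal sketch of a Task state in Amazon States Language, written as a Python dict. The state, Lambda ARN, and field names are hypothetical; this is exactly the kind of configuration the simulator lets you experiment with.

```python
import json

# A hypothetical Task state (it would sit under the "States" key of a state
# machine definition) illustrating the three paths the simulator helps you tune.
task_state = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
    "InputPath": "$.order",              # slice of the state input the task receives
    "ResultPath": "$.order.validation",  # where the task's result is merged into the input
    "OutputPath": "$.order",             # what gets passed on to the next state
    "End": True,
}

# Given input {"order": {"id": 42}, "metadata": {...}}, the task sees only
# {"id": 42}, its result lands at order.validation, and the next state
# receives just the "order" object.
print(json.dumps(task_state, indent=2))
```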

Next, we have two big announcements for all of you that integrate Amazon Athena into your data workflow. First, Amazon Athena ML is now generally available. This feature enables you to call a SageMaker endpoint directly within your Athena queries. When working with SageMaker, you define what data will be passed between Athena and SageMaker, as well as the corresponding data types.

Once you have this in place and execute the query, you'll get back the results of your inference call from SageMaker. I've included a link to an article from Amazon that shows how this was integrated for anomaly detection within an existing data set. The next Athena announcement is the ability to use Lambda for user-defined functions.

If you want to execute custom code against your data within the query process, this feature enables you to do just that. If you're using Java, you can leverage the open-source Amazon Athena Query Federation SDK to create your user-defined functions.
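
As a rough sketch of what both announcements look like in practice, here are two queries using Athena's USING EXTERNAL FUNCTION syntax, one backed by a SageMaker endpoint and one backed by a Lambda UDF, submitted with boto3. The endpoint, function, table, and bucket names are made up for illustration, and this assumes a workgroup running Athena engine version 2.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Invoke a SageMaker endpoint from SQL (Athena ML). Endpoint and table names are hypothetical.
ml_query = """
USING EXTERNAL FUNCTION detect_anomaly(value DOUBLE)
    RETURNS DOUBLE
    SAGEMAKER 'my-anomaly-endpoint'
SELECT ts, value, detect_anomaly(value) AS anomaly_score
FROM metrics.cpu_usage
"""

# Invoke a Lambda-backed user-defined function built with the Athena Query Federation SDK.
udf_query = """
USING EXTERNAL FUNCTION redact(col VARCHAR)
    RETURNS VARCHAR
    LAMBDA 'my-udf-function'
SELECT redact(email) FROM users.signups
"""

for query in (ml_query, udf_query):
    athena.start_query_execution(
        QueryString=query,
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
```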

The launch announcement, which will be in the episode notes, includes multiple resources to help you get started with this feature.

Sometimes I feature an announcement because it is so big it can fundamentally change the way you use the platform. And sometimes I feature a service simply because there are so many smaller announcements that it makes sense to catch everyone up on what's going on. That's what I have today with the Amazon Elasticsearch Service.

First, Amazon announced in April that they were creating a new community-driven, open-source fork of Elasticsearch and Kibana. This fork was derived from version 7.10.2, and the new project will be named OpenSearch. This also means that in the near future, the Amazon Elasticsearch Service will be renamed the Amazon OpenSearch Service. In addition to this announcement, AWS announced that the Amazon Elasticsearch Service now supports version 7.10.

This release brings some indexing performance improvements as well as composable index templates. With this new release, AWS has also added support for asynchronous queries, which can be critical for massive data sets across large clusters. This allows you to submit a query, monitor its progress, and retrieve the results at a future time. Finally, AWS also announced that you can now effectively integrate Power BI with the Elasticsearch Service using the Elasticsearch SQL engine. Links to all of these announcements can be found in the episode notes.

Do you have virtual machines that take a long time to initialize? If you do, chances are that you have run into problems scaling your workload, especially if you have bursts of traffic. In some cases, you simply can't initialize new instances fast enough to respond to scaling needs. If this is you, AWS has a solution, and it's called Amazon EC2 Auto Scaling Warm Pools.

This feature enables you to have instances that are ready to pull into your auto scaling group at a moment's notice. As a note, this feature could increase your cost, since pre-initialization means you're going to be running more instances than you were previously. However, if you've run up against these challenges, chances are you'd gladly trade a little bit of money to solve this problem. There is a good deal of configurability with this feature, so check out the documentation to see how you can integrate it into your auto scaling groups.
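
To give a sense of that configurability, here is a minimal sketch of attaching a warm pool to an existing auto scaling group with boto3; the group name and sizing values are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep a pool of pre-initialized instances that the group can pull from during scale-out.
autoscaling.put_warm_pool(
    AutoScalingGroupName="my-slow-booting-asg",  # hypothetical group name
    PoolState="Stopped",                         # keep warm instances stopped; "Running" scales out even faster
    MinSize=2,                                   # always keep at least two warm instances on hand
    MaxGroupPreparedCapacity=10,                 # cap on running plus warm instances
)
```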

Next, we'll be diving into our list of platform updates and content that you should be familiar with. While these might not be as big as our featured announcements, these updates could impact the work you're doing on the platform. First up, we have EventBridge, which was updated this month to support one of its biggest outstanding needs: cross-Region events. Previously, organizations had to handle events in the region they were dispatched in, but now you can centralize your handling of events with this new feature. It works by enabling you to use an event bus in another region as a target. At the time this feature launched, your destination event bus needed to be in us-east-1, us-west-2, or eu-west-1. For more information and a walkthrough of this new feature, check out the announcement blog post in the episode notes.
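
Here is a minimal sketch of what that looks like with boto3, forwarding matching events from a source region to a central bus in us-east-1. The bus names, account ID, event source, and IAM role are hypothetical.

```python
import boto3

# Run against the source region; matching events are routed to a bus in us-east-1.
events = boto3.client("events", region_name="eu-central-1")

events.put_rule(
    Name="forward-order-events",
    EventBusName="default",
    EventPattern='{"source": ["com.example.orders"]}',  # hypothetical event source
)

events.put_targets(
    Rule="forward-order-events",
    EventBusName="default",
    Targets=[{
        "Id": "central-bus",
        # The target is simply the ARN of the event bus in the destination region.
        "Arn": "arn:aws:events:us-east-1:123456789012:event-bus/central-events",
        # Role that allows events:PutEvents on the destination bus.
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-cross-region",
    }],
)
```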

Now, if you're building iOS apps that integrate with AWS Amplify, you can integrate with the platform more easily because the Amplify iOS SDK is now available via the Swift Package Manager. This means you no longer need to leverage CocoaPods to get this library added to your project. There is one note with this, though: the AWS Predictions plugin is not yet supported by the version in the Swift Package Manager. So if you need that functionality, dust off CocoaPods and plug it back in. Visit the link in the episode notes to get your iOS app up and running with AWS Amplify and the Swift Package Manager.

AWS released some big updates to API Gateway that enable you to do a lot more with your custom domain names. You can now route different path segments under a custom domain to different APIs, and this works for both HTTP and REST-based APIs. Don't get me started on how confusing the naming of those two API types is; that would take a whole other video to cover. But this opens up possibilities that just weren't available before: you can now implement path-based API versioning, and you can configure a different API type per path, which is huge for organizations that want to leverage some features of a REST-based API in some places and HTTP API capabilities in other places. This feature is now available in all regions where API Gateway is available.
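
As a sketch of what path-based routing can look like, here is how you might map two different APIs under one custom domain with boto3's API Gateway V2 client, assuming the custom domain is already configured; the domain, API IDs, stages, and paths are hypothetical.

```python
import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

# Route https://api.example.com/orders/v1 to one API...
apigw.create_api_mapping(
    DomainName="api.example.com",
    ApiId="a1b2c3d4e5",      # hypothetical API ID, e.g. an HTTP API
    Stage="$default",
    ApiMappingKey="orders/v1",
)

# ...and https://api.example.com/orders/v2 to a completely different API.
apigw.create_api_mapping(
    DomainName="api.example.com",
    ApiId="f6g7h8i9j0",      # hypothetical API ID for another API under the same domain
    Stage="prod",
    ApiMappingKey="orders/v2",
)
```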

Now, managing user accounts in AWS SSO is a huge improvement for multi-account configurations. However, if you've adopted it, you've learned that it isn't fully supported in all AWS tools. And yes, I'm looking at you, AWS CDK developers. This month, AWS announced support for both AWS SSO and assume-role with multi-factor authentication in the AWS Toolkit for Visual Studio. As a note, this is for Visual Studio, not Visual Studio Code, as VS Code already had this capability. In the blog post I've linked to, you can see what you need to do to take advantage of this feature in Visual Studio.

The Advanced Query Accelerator, or AQUA, is now generally available for Amazon Redshift. If you are new to AQUA, it is a high-speed cache for Redshift that, according to AWS, delivers up to 10 times faster query performance than other enterprise cloud data warehouses. To leverage this feature, you need to be using either ra3.16xlarge or ra3.4xlarge nodes, and with these nodes you can leverage AQUA at no additional cost. At launch, this feature is available in five regions, and additional regions are planned for the near future.
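
If your cluster is already on ra3 nodes, enabling AQUA is a small configuration change. Here is a rough sketch with boto3, assuming a hypothetical cluster identifier; check the current Redshift API documentation for the exact options available to you.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Turn AQUA on for an existing ra3 cluster; "auto" lets Redshift decide when to apply it.
redshift.modify_aqua_configuration(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster name
    AquaConfigurationStatus="enabled",      # one of "enabled", "disabled", "auto"
)
```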

And finally, if you're using SageMaker, you need to click the link in the episode notes. AWS has announced some price cuts for SageMaker, as well as the ability to leverage Savings Plans for up to 64% savings on your machine learning workloads. So go check it out today if you're using SageMaker.

Today I'm also going to share several resources that can help you level up your skills on AWS. First, if you're interested in learning how to master VPCs on AWS, you can check out the video course by Ben Piper, AWS Networking Deep Dive: Virtual Private Cloud. This course covers everything from creating a VPC, setting up a connection between VPCs with peering, and leveraging network address translation, to using a transit VPC and even working with IPv6.

It is important to remember that this information is a key part of multiple AWS certifications, and you're likely to see it on the SysOps Administrator Associate, DevOps Engineer Professional, and Advanced Networking Specialty exams. Next, you can improve your security skills with an updated course, Monitoring AWS Cloud Security. This course covers how you can leverage AWS services to monitor specific metrics and alert on actions against specific AWS resources. Finally, Pluralsight has over 40, yes, 40, new labs for AWS from the last month. If you haven't tried out labs yet, they are a way to grow your cloud skills without even having to set up your own AWS account. You get to learn within a real AWS environment by performing a set of guided tasks.

Adding voice, video chat, and SMS to your Azure-based applications is now even easier, as Azure Communication Services is now generally available. The same platform that powers Microsoft Teams is now available for you to leverage. If you're new to the service, you can check out the links in the episode notes to get access to multiple samples that can guide you on your integration, whether you're building for the web, iOS, or Android. In addition, you can check out Azure Communication Services on GitHub to get links to the SDKs for each of the different features of this service. Depending on the capabilities you're leveraging, you can find SDKs for JavaScript, .NET, Python, Java, iOS, and Android. From this link, you can also grab links to Microsoft Q&A as well as a tag on Stack Overflow, to connect with other developers that are building on the platform.
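
As a taste of the Python SDK, here is a minimal sketch that sends an SMS with the azure-communication-sms package. The connection string and phone numbers are placeholders, and it assumes you already have an SMS-enabled number provisioned in your Communication Services resource.

```python
from azure.communication.sms import SmsClient

# The connection string comes from your Communication Services resource in the Azure portal.
sms_client = SmsClient.from_connection_string("<your-connection-string>")

results = sms_client.send(
    from_="+18005550100",            # a number acquired through the service (placeholder)
    to=["+18005550199"],             # one or more recipients (placeholder)
    message="Hello from Azure Communication Services!",
    enable_delivery_report=True,     # optional delivery reporting
)

for result in results:
    print(result.message_id, result.successful)
```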

Now, Microsoft has continued their investment in Java with the preview release of the Microsoft Build of OpenJDK. While not generally available yet, this release is intended to be an LTS, or Long Term Support, version that will be supported through 2024. This version includes binaries for Java 11, and it can run across desktop and server with support for Windows, macOS, and Linux. In addition, Microsoft also released an early access binary of Java 16 for Windows on ARM. Now, why is this announcement something you should be paying attention to? Well, Microsoft states that later this year, the Microsoft Build of OpenJDK will become the default distribution for Java 11 across Azure managed services. Microsoft also announced their intention to release OpenJDK binaries for Java 17 by the end of the year. You can find links to the announcements, as well as how to get the Microsoft Build of OpenJDK, in the episode notes.

Microsoft is now making it easier for developers and data scientists working in VS Code to leverage Azure ML. With the preview release of the Azure Machine Learning extension, you can seamlessly connect to your Azure ML compute instances from within the IDE. This feature utilizes the VS Code remote server to create a real-time connection between your machine and the cloud-based compute instance. You can even configure your Azure ML compute instance to be a remote notebook server when working in Jupyter. This makes it even easier to leverage the computing power of the cloud when you're analyzing data, training models, or optimizing the models that you create. You can review the Microsoft documentation on how to get this up and running in the episode notes.

I don't generally include service updates in the featured announcements, but Cognitive Services has multiple updates that provide some new functionality you can leverage.

First, the Computer Vision API version 3.2 is now generally available, and with this release there is now additional support for OCR text extraction across 73 different languages. They have even provided a way for this functionality to run on-premises with a container that you can deploy. With this capability, you can now also do more extraction from forms using Form Recognizer across all 73 of those languages. I've included a link to a video on MSDN that will walk you through the new capabilities in this feature. In addition, if you're leveraging Cognitive Services for anomaly detection, you can now leverage the service for multivariate anomaly detection, which is now in preview. I've provided links to all the updates for Cognitive Services in the episode notes.
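
Here is a rough sketch of calling the Read OCR API from Python with the azure-cognitiveservices-vision-computervision package. The endpoint, key, and image URL are placeholders, and the poll-for-results flow shown is the general pattern rather than any specific sample from the episode.

```python
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Kick off an asynchronous Read (OCR) operation against an image URL.
response = client.read("https://example.com/scanned-receipt.png", raw=True)
operation_id = response.headers["Operation-Location"].split("/")[-1]

# Poll until the operation finishes, then print the extracted text line by line.
while True:
    result = client.get_read_result(operation_id)
    if result.status not in ("notStarted", "running"):
        break
    time.sleep(1)

if result.status == "succeeded":
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)
```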

Next, we'll be diving into our list of platform updates and content that you should be familiar with. While these might not be as big as our featured announcements, these updates could impact the work that you're doing on the platform. To start things off, we're going to be talking about a new region that Microsoft will be bringing to northern China. According to Microsoft, this expansion is expected to effectively double the capacity of Microsoft's intelligent cloud portfolio in China. This region is expected to be online in 2022. Now, if you are leveraging Application Gateway, you can now leverage URL rewriting, as this feature of the service is now generally available. This enables you to rewrite the query string, the path, and even the host name of requests. Check out the episode notes for information on how to leverage this today. Next, if you're looking for an efficient and elegant way to publish your API documentation, then look no further than Microsoft's open-source API portal tool, which is now generally available. This feature enables you to create your documentation site through GitHub.

All you need is your OpenAPI file, and you can follow the instructions in the tutorial to get started. Grab the link to the tutorial in the episode notes. Now, if you've been looking to leverage system-assigned managed identities with Azure Automation, you are in luck. This feature is now in preview for both cloud and hybrid jobs. You can check out the documentation to review the prerequisites required to make this work for your Automation runbooks. Next, if you've been leveraging Static Web Apps but you want to support deployment from a source control repository other than GitHub, you've been out of luck. However, you can now leverage Azure DevOps and any source control repository that it supports. This feature is available today in preview, and you can check out a tutorial in the episode notes. And for our final platform update, Azure Blob Storage now supports objects up to 200 terabytes in size.

So if you want to upload a file that contains more data than all of the published books in human history, you can go right ahead. This is possible with Azure Data Lake Storage Gen2 as well. The feature is now generally available, and you can leverage it today.

Today I'm also going to share several resources that can help you level up your skills on Azure. All of today's resources can be found in the Pluralsight content library. First, if you're looking to improve your skills with Azure Functions, I have both a video course and a lab for you. The video course Implement Azure Functions by Mark Heath is part of the Microsoft Azure Developer certification path, and it provides information on the fundamentals of functions: triggers as well as input and output bindings. After the video course, if you want to test out your skills within a real environment, you can leverage the lab on creating and configuring Azure Functions.

This lab will implement many of the concepts that you've covered within the video course. Now, if you haven't tried out labs yet, they are a great way to grow your cloud skills without even having to set up a subscription with Azure. There are also two additional labs that are new this month: Manage Storage Accounts on Azure and Manage APIs in Microsoft Azure with API Management. If these are areas you're interested in, be sure to jump in and check them out.

Google's functions-as-a-service solution, Cloud Functions, now supports PHP. This release provides a PHP 7.4 environment to developers and enables them to respond to HTTP events as well as integrate with platform services like Pub/Sub, Cloud Storage, and Firestore. The platform even supports customization of the PHP environment with a php.ini file in your deployment package. As a part of this, Google has also released the Functions Framework for PHP.

This is an open-source framework that Google has released on GitHub that enables you to write portable PHP functions you can run locally, on Knative environments, or on Cloud Run. The framework can be installed using Composer, and if you do use Composer, Cloud Functions will install all of your Composer dependencies and register the autoloader. So with this release, PHP developers can now leverage serverless compute capabilities on GCP with a full development workflow from your local machine all the way to your production environment.

If you've worked with a high-demand application in the cloud, chances are at some point you have dealt with the challenge of rapidly changing load. If you suddenly have an influx of users, auto scaling often simply can't respond fast enough, especially if your infrastructure has a complex initialization process.

Google is working to solve this problem with a new release: predictive autoscaling, part of Active Assist, for managed instance groups. With this new feature, the platform can analyze trends in your group over time and provide a continuously recalculated number of instances for your group. In this way, you can respond before you even know you need to scale. This initial release of the feature isn't for every situation, though; it is focused on workloads that follow somewhat regular patterns and workloads with extended initialization times. For now, the service only works after it has been able to analyze your auto scaling history for three days, and it only works if CPU utilization is your scaling metric. Google states that predictive autoscaling itself is free of charge; however, if you enable it to optimize for availability, you pay for the Compute Engine resources that your managed instance group uses. I'll include a link to this feature and a resource from Google that will help you determine if this is a fit for your workload.
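
To give a sense of what turning this on involves, here is a minimal sketch of the autoscaler resource body you might send to the Compute Engine API. The field names reflect my reading of the autoscalers documentation, and the project, group, and target values are hypothetical, so verify against the current API reference before using it.

```python
import json

# Sketch of an autoscaler body enabling predictive autoscaling on a CPU-based policy.
autoscaler_body = {
    "name": "web-tier-autoscaler",  # hypothetical autoscaler name
    "target": "projects/my-project/regions/us-central1/instanceGroupManagers/web-tier",
    "autoscalingPolicy": {
        "minNumReplicas": 3,
        "maxNumReplicas": 50,
        "cpuUtilization": {
            "utilizationTarget": 0.6,
            # This is the switch that enables predictive scaling for availability.
            "predictiveMethod": "OPTIMIZE_AVAILABILITY",
        },
    },
}

print(json.dumps(autoscaler_body, indent=2))
```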

Next, Google is providing a new way for organizations to choose which regions they leverage. Some companies want to choose the region that provides the lowest cost for their workload. Some want to select the region that will be the fastest for those using the applications and resources they deploy. And some want to choose a region that fits the organization's goals for sustainability. Irrespective of which goal your organization has, you can use GCP's new cloud region picker to select the region that makes sense for you. This web-based tool enables you to select the relative importance of latency, cost, and carbon footprint when it comes to selecting your GCP region. The tool is available now, and links to both the announcement and the tool itself are available in the episode notes.

Next, we'll be diving into our list of platform updates and content that you should be familiar with. While these might not be as big as our featured announcements, these updates could impact the work that you're doing on the platform.

And first up today, we have Apigee X with Cloud CDN. Back in February, Google Cloud launched Apigee X, the next generation of their API management platform. This solution brought a much deeper level of integration with the platform as a whole, including with Cloud Armor and Cloud Identity and Access Management. In April, Google released an article and some resources that can help you deploy global APIs using Apigee X with Cloud CDN. The article includes a walkthrough of these two services working together to create a performant global API tier. Now, we saw earlier that Google is providing new ways to help you select which platform region you should leverage. As of April, you have a new choice in this space, and it is located in Warsaw. This is now the 25th region available on the platform. Google states that this region opens with three availability zones to protect against service disruption and offers a portfolio of key products including Compute Engine, App Engine, GKE, Cloud Bigtable, Cloud Spanner, and BigQuery.

If you are looking to expand your capabilities for reaching Central and Eastern Europe, you should definitely consider leveraging this new region. Next, AppSheet Automation is now generally available. For organizations looking to enable no-code business processes, this means you can transition from simply experimenting with it to plugging it into mission-critical workflows. You can read the announcement located in the episode notes to see how companies like Globe Telecom are using this within their organization. Next, for those of you leveraging or evaluating Anthos, the platform has been upgraded to version 1.7. This version of Google's portable Kubernetes platform includes several new features designed to help organizations that are embracing a multi-cloud strategy. These include the ability to use Cloud Logging and Monitoring from AWS, a preview of Google's managed control plane for Anthos Service Mesh, Windows container support for vSphere environments, and new IDE extensions for building your Anthos config files.

These are only a few of the many announcements associated with this release, so be sure to check out the release post in the episode notes.

Every person working in the cloud has to deal with the challenge of determining how to continually improve their skillset. Today, I'm going to share three resources that can help you with your security skills on GCP, and all of the resources I'm sharing today are included in the Pluralsight library. First, we have Managing Security in Google Cloud. This course, created by Google, gives you the foundational concepts you need for working within the platform. This includes information around identity and access management and working within VPCs. The course touches on everything from IAM to VPC firewalls to Active Directory integration. If you need to up your security game for GCP, this should be your first stop. Next, we extend on this security concept with Security Best Practices in Google Cloud.

This course covers four key and essential domains: securing compute, securing your data in the cloud, securing your applications, and securing Kubernetes. Each of these roughly half-hour domains is packed with content to help you grow your skills. And finally, we have Mitigating Security Vulnerabilities in Google Cloud. This course addresses common vulnerabilities that you will encounter, including distributed denial-of-service attacks and content-related vulnerabilities such as ransomware. It also explores how you can use monitoring, logging, scanning, and auditing to help you before, during, and after a security-related event. All three of these courses are available today in the Pluralsight content library.

Thank you for joining me for this first episode of Cloud Tracker. Be sure to let us know what you want to hear more of on this series moving forward. Also, remember the links to everything I've discussed are available in the episode notes.