Get started with storage on Amazon Web Services using Amazon Simple Storage Service (S3) and Amazon Glacier. You'll learn how to take full advantage of AWS's storage services, and their incredible scale, to take your applications to the next level.
Finding the best way to store, manage, and distribute data has always been challenging. From application data and databases to images, logs, backups, and user-generated content, we as developers always find ourselves in need of more storage. In this course, File Storage Using AWS S3 and Glacier: Developer Deep Dive, you'll learn how to take advantage of Amazon Web Services Simple Storage Service and Glacier to solve most of those challenges. First, you'll learn how to provision a bucket, upload files to S3, and deliver and retrieve those objects. Next, you'll learn about automating maintenance with lifecycle policies and how to archive and restore objects from cold storage with Glacier. Finally, you'll learn about versioning objects and about cross-region replication. By the end of this course, you'll have a better understanding of the different classes of storage available to you.
Fabien has been in the web development industry for over eight years, all of them spent working with Microsoft technologies. More recently, he has been busy building microservices in .NET and is passionate about continuous delivery and automation.
Course Overview Hi everyone. My name is Fabien Ruffin, and welcome to my course File Storage Using AWS S3 and Glacier: Developer Deep Dive. Finding the best way to store, manage, and distribute data has always been challenging. From application data and databases to images, logs, backups, and user-generated content, we as developers always find ourselves in need of more storage. In this course, I will show you how to take advantage of Amazon Web Services Simple Storage Service and Glacier to solve most of those challenges. We will learn about the different classes of storage available to us and the scenarios they are best suited for by building a very simple image API. The major topics we will cover are provisioning a bucket and uploading files to S3, retrieving those objects and delivering content, automating maintenance with lifecycle policies, archiving and restoring objects from cold storage with Glacier, versioning objects, and cross-region replication. By the end of this course, we'll have learned all those concepts and more. Before beginning this course, you should be familiar with Amazon Web Services, but you don't need to be an expert. Please join me on this journey to learn the ins and outs of Amazon's storage offerings with the File Storage Using AWS S3 and Glacier: Developer Deep Dive course on Pluralsight. I hope you enjoy it.
Overview If you have to build and maintain a state-of-the-art storage solution, it will quickly cost you a lot of time and money. First, you will have to buy racks and racks of hardware. You will also need to hire staff to get it all up and running, and spend a lot of time making sure you always get the best performance out of it, that it is reliable, and that you have a good recovery plan in case of a disaster. Assuming you get through all that, you will also need to guess how much storage your applications will need in the future, which is always difficult. You will invariably get into situations where you either don't have enough storage for your needs or spend far more money than necessary on extra capacity you don't use. There has to be a better way. I'm Fabien Ruffin, and I'm excited to welcome you to Pluralsight's File Storage Using AWS S3 and Glacier: Developer Deep Dive course. As a deep dive, this course is an ideal place to learn all there is to know about Amazon Web Services' managed storage solutions and how to build apps using them. Before beginning this course, you should be comfortable with the basics of building web applications. Familiarity with Amazon Web Services will be helpful, but you are in no way required to be an expert. If you want to learn more about the basics of AWS before getting started, check out the AWS Developer - The Big Picture course by Ryan Lewis here on Pluralsight.
Uploading Files to S3 Now that we understand a bit more about what Amazon S3 is all about, we'll jump in and start uploading files to it. Let's go ahead and take a look at an overview of what we are going to cover throughout this module. We will start by exploring the AWS console. This is where we will create our first S3 bucket and upload our first objects. I will also show you how to accomplish the same thing from the command line with PowerShell. At that point, we should be comfortable enough with Amazon S3 to start using it in our example image service. I'll take this opportunity to briefly discuss the different mechanisms you can use to secure access to your objects. Finally, we'll take a look at best practices for storing your objects and getting the best performance out of the service.
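The key-naming performance practice mentioned above can be sketched in a few lines. Historically, S3 delivered its best request rates when keys were spread across many distinct prefixes, so a common trick was to prepend a short hash to each key. A minimal sketch follows; the helper name, the `images` prefix, and the 4-character hash length are illustrative choices of mine, not code from the course:

```python
import hashlib

def make_object_key(filename, prefix="images"):
    """Build an S3 object key with a short hash prefix.

    Spreading keys across many prefixes was the classic recommendation
    for sustaining high request rates against a single bucket.
    """
    # First 4 hex characters of an MD5 digest act as a pseudo-random prefix.
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}/{digest}/{filename}"

# Produces a key shaped like images/<hash>/sunset.jpg
key = make_object_key("sunset.jpg")
```

Note that AWS has since improved S3 so that it scales per prefix automatically, so check current AWS guidance before relying on this trick.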
Delivering Content Our simple API is now able to save objects to S3 along with some metadata. So it is now time to get these objects back and serve them to the public. In this module, we will start by retrieving one of the images we uploaded earlier via the command line with PowerShell. We will then go on to implement the GET endpoint of our API to deliver our images to the world. The problem with our current setup is that our images can be quite large, which is not suitable for all scenarios. So I will show you how to implement dynamic image resizing and cache the resized images in another S3 bucket using the Reduced Redundancy Storage class. Finally, I'll show you a neat little trick to handle errors and missing images gracefully by enabling web hosting on our S3 buckets. So let's go ahead and jump to the command line.
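One way to implement the resized-image cache described above is to derive a deterministic key for each requested size, so repeated requests for the same dimensions hit the cache bucket instead of resizing again. This is a sketch under a naming scheme of my own; the `resized/` layout is an assumption, not the course's code:

```python
def resized_cache_key(original_key, width, height):
    """Derive the key under which a resized copy of an image is cached.

    Encoding the dimensions in the key lets every requested size be
    cached independently in the resized-images bucket (which can use
    the cheaper Reduced Redundancy Storage class).
    """
    # photos/cat.jpg at 200x100 becomes resized/200x100/photos/cat.jpg
    return f"resized/{width}x{height}/{original_key}"
```

On a GET, the API would first look up this key in the cache bucket, and only resize the original (and write the result back) on a cache miss.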
Restoring Objects from Glacier Even though Amazon S3 is designed with durability in mind and stores multiple copies of your objects across multiple facilities, sometimes you may want to have an extra copy of your data somewhere else to protect yourself against accidental deletion. In this module, I will show you how you can use Amazon Glacier to store an archived copy of the original images uploaded via our image API. Glacier is ideal for this kind of scenario, as it is a storage solution designed especially for what we can call cold data, meaning data that will be accessed extremely infrequently. I'll start with an introduction to Glacier, covering its feature set, the scenarios it is good for, and the ones where it is not so great. Then we'll move back to the AWS web console to create our first Glacier vault before modifying our API to store an archive of the original images in Glacier, in addition to storing them in S3. Finally, I'll show you how to restore archives from Glacier back to S3 in case we lose an object or accidentally delete it.
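Because native Glacier retrievals are asynchronous, a restore starts by initiating an archive-retrieval job and later downloading the job's output once it completes, historically a matter of hours. The helper below only builds the parameters dictionary such a job takes; the function name is my own, and the archive ID would come from when the archive was originally stored:

```python
def archive_retrieval_job(archive_id, description=""):
    """Build the parameters for a Glacier archive-retrieval job.

    Restores from Glacier are not immediate: you initiate a job,
    poll (or wait for a notification) until it completes, then
    fetch the job output and write it back to S3.
    """
    params = {"Type": "archive-retrieval", "ArchiveId": archive_id}
    if description:
        params["Description"] = description
    return params
```

A companion "inventory-retrieval" job type exists for listing a vault's contents, which is useful when you no longer know the archive ID you need.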
Increasing Object Durability and Audit Trail In this last module, we will make sure that nothing can ever go wrong with the data we store in S3. We already have a pretty good setup, considering that S3 already replicates our images across multiple facilities and that we are archiving our original images to Glacier. But all of this is still within one region, and there is currently nothing to protect our data in case of a complete region failure or a manual mistake. Luckily, S3 has a few advanced features to help with that. First, I will show you how to use object versioning to make sure our images are never accidentally lost. We will then look at the cross-region replication feature and replicate our images to an S3 bucket in a different AWS region. And to finish this course, I will teach you how to gather audit logs of the bucket's activity, and we will set up event notifications so that you can be alerted when certain types of events occur.
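Cross-region replication is configured as a rules document attached to the source bucket: it names an IAM role that S3 can assume plus a destination bucket, and it only takes effect once versioning is enabled on both buckets. The sketch below builds such a configuration with placeholder role and bucket names; the helper itself is mine, not from the course:

```python
def replication_config(role_arn, dest_bucket):
    """Sketch of an S3 cross-region replication configuration.

    Versioning must already be enabled on both the source and the
    destination bucket; role_arn and dest_bucket are placeholders.
    """
    return {
        "Role": role_arn,  # IAM role S3 assumes to copy objects
        "Rules": [
            {
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = replicate every object
                "Destination": {"Bucket": f"arn:aws:s3:::{dest_bucket}"},
            }
        ],
    }
```

This dictionary mirrors the shape of the configuration S3's replication API expects, so it can be handed to whichever SDK call or console screen you use to enable replication.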