In this tutorial, you'll learn to build a simple web application using the following AWS services:

- DynamoDB to store the data
- Lambda to implement the API functions
- API Gateway to expose the functions as a REST API
- S3 to host the client app
- CloudFront to distribute the app worldwide
Here's the architecture diagram:
For the lambda functions, we will use a Node.js runtime environment with the JavaScript SDK.
Everything will be done from the AWS Management Console, without external frameworks, SDKs, or command-line interfaces (CLIs).
The sample web app allows you to create/edit/delete courses. This is what it looks like:
It's a single-page application made with React and Redux. The credit for this application goes to Aries McRae. For this guide, I just modified his React CRUD boilerplate.
You can clone the app from this GitHub repository and test it locally with the included Express server.
This guide won't explain how the app is made. So don't worry if you are unfamiliar with React and Redux, as this tutorial will focus on building the API and setting up all the AWS services.
The API works with two data structures, course and author:

Course:
{
  id: "web-components-shadow-dom",
  title: "Web Component Fundamentals",
  watchHref: "http://www.pluralsight.com/courses/web-components-shadow-dom",
  authorId: "cory-house",
  length: "5:10",
  category: "HTML5"
}

Author:
{
  id: "cory-house",
  firstName: "Cory",
  lastName: "House"
}
Of course, you'll need an AWS account. If you don't have one, sign up here. Registration is free, and all the services used in this tutorial are within the AWS Free Tier, but you'll need to enter a credit card number in case you exceed that free tier.
The most difficult step is verification, in which an automated system will call you, and you have to enter a PIN code. It doesn't always work and you may have to ask the support team to call you to verify your account.
Once you have created your account, you automatically sign up for all services in AWS. However, it's a good practice to create an admin user instead of using your root account for day-to-day operations. Follow the steps in this guide to do this.
Now let's start by creating the tables to store the app information in DynamoDB.
DynamoDB is a fully managed NoSQL database that stores data in key-value pairs, like a JSON object:
{
  "ID": 1,
  "Title": "Introduction to Angular 5",
  "Category": "web-dev"
}
There are no schemas, so every record can have a different structure. The only restriction is that the field(s) defined as the partition key must be present in all records.
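For example, these two hypothetical items could live in the same table, sharing only the ID partition key:

{
  "ID": 1,
  "Title": "Introduction to Angular 5",
  "Category": "web-dev"
},
{
  "ID": 2,
  "Author": "cory-house",
  "Rating": 5
}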
Based on this partition key, DynamoDB stores the data across different internal partitions (physical drives). An efficient distribution makes accessing the data as fast as possible, so it's important to choose a good partition key.
The partition key alone can serve as the primary key, but you can also use a combination of a partition key and a sort key as the primary key. For example, if you have multiple records with the same course ID (the partition key), you can add a timestamp as a sort key to form a unique combination. In addition, you can also define secondary indexes on any other field (or combination of fields) to make queries more efficient.
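To illustrate (a hypothetical item, not part of this tutorial's tables), a table of course views could use courseId as the partition key and timestamp as the sort key, so each item is uniquely identified by the combination of the two:

{
  "courseId": "web-component-fundamentals",
  "timestamp": "2018-03-01T10:15:00Z",
  "userId": "cory-house"
}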
DynamoDB gives you a lot of options. You can learn more about it in the developer guide, but for now, let's dive into creating a database for our API.
Open the Services menu and choose DynamoDB:
Make sure you're in the correct AWS region (there's a DynamoDB database per region) and click on Create table:
Enter the following information, leave the default settings checked, and click on Create:

Table name: courses
Primary key: id
If you see the following message:
Your AWS account does not currently have the DynamoDBAutoscaleRole. That's not a problem if you're not going to use the auto-scaling service, or if you're not sure what it is. If you do want to use it, follow the documentation link to add the AmazonDynamoDBFullAccess policy and a custom inline policy that creates the autoscale role automatically the first time you create a table with auto scaling; it's all in the documentation.
It might take a few seconds, but you should see a confirmation page like the following:
Take note of the table's Amazon Resource Name (ARN); you'll need it later.
In the Items tab you can query your table or add items:
I'm going to leave it empty for now, but I'll add items to this table when the application is finished. Meanwhile, I'll go ahead and create a new table for the authors with the following information:

Table name: authors
Primary key: id
After the table is created, take note of its ARN.
Now, in the Items tab, create some authors, for example:
{
  "id": "cory-house",
  "firstName": "Cory",
  "lastName": "House"
},
{
  "id": "samer-buma",
  "firstName": "Samer",
  "lastName": "Buma"
},
{
  "id": "deborah-kurata",
  "firstName": "Deborah",
  "lastName": "Kurata"
}
Now let's create the lambda functions that will use these tables.
AWS Lambda is a service that allows you to run functions upon certain events, for example, when data is inserted in a DynamoDB table or when a file is uploaded to S3.
In this case, a lambda function will be run whenever a request hits one of the API endpoints you'll set up in the next section.
At the time of this writing, a lambda function can be written in Node.js, Python, Java, or C#. This tutorial will use Node.js.
You can learn more about AWS Lambda in the developer documentation.
We'll create six lambda functions:

- get-all-authors to return all the authors in the database
- get-all-courses to return all the courses in the database
- get-course to return only one course
- save-course to create a new course
- update-course to update a course
- delete-course to delete a course

Let's start by creating the get-all-authors function. Open the Services menu and choose Lambda:
Click on Create a function:
And then, with the Author from scratch option selected, enter the following:

Name: get-all-authors
Runtime: Node.js 6.10 (or a later version)
Role: Create a custom role

When you select the Create a custom role option, a new window will open to allow you to create a new role.

Alternatively, you could choose the option Create a new role from template(s) and pick the policy template Simple Microservice permissions. This template will give you permissions to read, create, update, and delete items from any table:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Scan",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:region:accountId:table/*"
    }
  ]
}
However, this is not good practice; following the principle of least privilege, each function should require as few permissions as possible to maximize security.
So in the window to create a new role, enter get-all-authors-lambda-role as the role name:
At this point, the role will only give you permissions to write to the CloudWatch logs, so click on the Edit link and modify the policy document so it looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:Scan",
      "Resource": "<YOUR_ARN_FOR_THE_AUTHORS_TABLE>"
    }
  ]
}
Just replace the Resource field of the dynamodb:Scan statement with the ARN of your authors table. This will give you permission to perform a scan operation to read all the items of the authors table only.
Finally, click on the Allow button.
Back in the window to create the lambda function, choose the role get-all-authors-lambda-role in the Existing role option and click on Create function:
After the lambda function is created, scroll down to the Function code section:
Enter the following code:
const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({
  region: "<YOUR_AWS_REGION_CODE>",
  apiVersion: "2012-08-10"
});

exports.handler = (event, context, callback) => {
  const params = {
    TableName: "authors"
  };
  dynamodb.scan(params, (err, data) => {
    if (err) {
      console.log(err);
      callback(err);
    } else {
      callback(null, data);
    }
  });
};
First, you require the AWS SDK and create a DynamoDB instance, passing the code of the region where the database is located, for example us-east-2 for Ohio (find out your region code here), and the API version (2012-08-10 is the latest at the time of writing this tutorial).
The exported function receives three parameters:

- event — an object used to pass data to the handler.
- context — an object that provides runtime information about the Lambda function being executed.
- callback — a function to return information to the caller (if it isn't called, the return value is null). The callback takes two parameters: an error and a result, which must be JSON.stringify-compatible. If an error is provided, the result parameter is ignored.

Inside the handler function, you execute a scan operation to get all the items of the authors table.
Any error that occurs is passed to the callback function. Otherwise, the callback is executed passing the data returned.
Keep in mind that the scan operation will return up to 1 MB of data. If more data is available, a LastEvaluatedKey value is included in the results so you can continue the scan in a subsequent operation.
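If you ever expect more than 1 MB of results, here's a sketch (my own addition, not required for this small table) of how you could follow LastEvaluatedKey to read the whole table:

const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({
  region: "<YOUR_AWS_REGION_CODE>",
  apiVersion: "2012-08-10"
});

// Scan the table page by page, resuming from LastEvaluatedKey until no pages remain.
const scanAll = (params, items, done) => {
  dynamodb.scan(params, (err, data) => {
    if (err) return done(err);
    const all = items.concat(data.Items);
    if (data.LastEvaluatedKey) {
      // More data available: continue the scan where the last page ended.
      const next = Object.assign({}, params, { ExclusiveStartKey: data.LastEvaluatedKey });
      scanAll(next, all, done);
    } else {
      done(null, all);
    }
  });
};

// Usage inside a handler:
// scanAll({ TableName: "authors" }, [], (err, items) => callback(err, items));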
You can learn more about the available methods of the AWS JavaScript SDK for DynamoDB in the API documentation.
Now click Save. Then, if you click on the Test button, a window to configure some input properties will open:
This function does not take any parameters, so give it a name and just click on Create, and then on Test one more time.
You can check the execution results at the top of the page:
Or in the Execution Results tab below the code editor:
This is a sample output:
{
  "Items": [
    {
      "id": {
        "S": "cory-house"
      },
      "firstName": {
        "S": "Cory"
      },
      "lastName": {
        "S": "House"
      }
    },
    {
      "id": {
        "S": "samer-buma"
      },
      "firstName": {
        "S": "Samer"
      },
      "lastName": {
        "S": "Buma"
      }
    },
    {
      "id": {
        "S": "deborah-kurata"
      },
      "firstName": {
        "S": "Deborah"
      },
      "lastName": {
        "S": "Kurata"
      }
    }
  ],
  "Count": 3,
  "ScannedCount": 3
}
As you can see, every key/value pair inside Items also contains the type (S for String), so let's modify the format with a map function to return only the key and the value for each item.

Replace the code of the else block inside the callback function of scan with the following:
// ...

exports.handler = (event, context, callback) => {
  // ...
  dynamodb.scan(params, (err, data) => {
    if (err) {
      // ...
    } else {
      const authors = data.Items.map(item => {
        return { id: item.id.S, firstName: item.firstName.S, lastName: item.lastName.S };
      });
      callback(null, authors);
    }
  });
};
If you save and then test the function once again, you'll get a much cleaner object as the result:
[
  {
    id: "cory-house",
    firstName: "Cory",
    lastName: "House"
  },
  {
    id: "samer-buma",
    firstName: "Samer",
    lastName: "Buma"
  },
  {
    id: "deborah-kurata",
    firstName: "Deborah",
    lastName: "Kurata"
  }
]
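As an aside, the AWS SDK also offers a higher-level client, AWS.DynamoDB.DocumentClient, which converts DynamoDB's typed attributes to and from plain JavaScript values, so the map step wouldn't be needed at all. Here's a minimal sketch of the same function using it (an alternative, not what this tutorial uses):

const AWS = require("aws-sdk");
// The DocumentClient marshalls typed attributes ({ "S": "Cory" }) to plain values ("Cory").
const docClient = new AWS.DynamoDB.DocumentClient({ region: "<YOUR_AWS_REGION_CODE>" });

exports.handler = (event, context, callback) => {
  docClient.scan({ TableName: "authors" }, (err, data) => {
    if (err) {
      console.log(err);
      callback(err);
    } else {
      // data.Items is already an array of plain objects: { id, firstName, lastName }.
      callback(null, data.Items);
    }
  });
};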
Now follow the Functions link at the top of the page. Your function should appear on the dashboard:
Next, we need to create the rest of the functions with their corresponding roles and policies. Of course, you can create one role with a full access policy and use it for all your functions, but remember, that is not the recommended approach. Custom roles that minimize the number of permissions are better in the long run.
You can also create the policies and roles from the Identity and Access Management (IAM) console with the visual editor, first creating a policy with the required permissions and then creating the role that will contain that policy.
But it's probably faster to enter the policy manually and have AWS create both the policy and the role for you. So here are the policies and the code for each function.
The policy to create a new course:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "<YOUR_ARN_FOR_THE_COURSES_TABLE>"
    }
  ]
}
And the code for the corresponding lambda function (save-course):
const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({
  region: "<YOUR_AWS_REGION_CODE>",
  apiVersion: "2012-08-10"
});

const replaceAll = (str, find, replace) => {
  return str.replace(new RegExp(find, "g"), replace);
};

exports.handler = (event, context, callback) => {
  const id = replaceAll(event.title, " ", "-").toLowerCase();
  const params = {
    Item: {
      id: {
        S: id
      },
      title: {
        S: event.title
      },
      watchHref: {
        S: `http://www.pluralsight.com/courses/${id}`
      },
      authorId: {
        S: event.authorId
      },
      length: {
        S: event.length
      },
      category: {
        S: event.category
      }
    },
    TableName: "courses"
  };
  dynamodb.putItem(params, (err, data) => {
    if (err) {
      console.log(err);
      callback(err);
    } else {
      callback(null, {
        id: params.Item.id.S,
        title: params.Item.title.S,
        watchHref: params.Item.watchHref.S,
        authorId: params.Item.authorId.S,
        length: params.Item.length.S,
        category: params.Item.category.S
      });
    }
  });
};
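Note how the id is derived from the title: spaces are replaced with hyphens and the result is lowercased. For example:

replaceAll("Web Component Fundamentals", " ", "-").toLowerCase(); // "web-component-fundamentals"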
To test this function, you can configure a test event with the following data:
{
  "title": "Web Component Fundamentals",
  "authorId": "cory-house",
  "length": "5:10",
  "category": "HTML5"
}
To update a course, we're also going to use the putItem operation, so the policy is the same and you can reuse the role of the previous function.
The code for the corresponding lambda function (update-course) is also similar, with the only difference being that the id and watchHref fields are not generated:
const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({
  region: "<YOUR_AWS_REGION_CODE>",
  apiVersion: "2012-08-10"
});

exports.handler = (event, context, callback) => {
  const params = {
    Item: {
      id: {
        S: event.id
      },
      title: {
        S: event.title
      },
      watchHref: {
        S: event.watchHref
      },
      authorId: {
        S: event.authorId
      },
      length: {
        S: event.length
      },
      category: {
        S: event.category
      }
    },
    TableName: "courses"
  };
  dynamodb.putItem(params, (err, data) => {
    if (err) {
      console.log(err);
      callback(err);
    } else {
      callback(null, {
        id: params.Item.id.S,
        title: params.Item.title.S,
        watchHref: params.Item.watchHref.S,
        authorId: params.Item.authorId.S,
        length: params.Item.length.S,
        category: params.Item.category.S
      });
    }
  });
};
To test this function, you can configure a test event with the following data:
{
  "id": "web-component-fundamentals",
  "title": "Web Component Fundamentals",
  "authorId": "cory-house",
  "length": "5:03",
  "category": "HTML5",
  "watchHref": "http://www.pluralsight.com/courses/web-components-shadow-dom"
}
The policy to get all the courses:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:Scan",
      "Resource": "<YOUR_ARN_FOR_THE_COURSES_TABLE>"
    }
  ]
}
And the code for the corresponding lambda function (get-all-courses):
const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({
  region: "<YOUR_AWS_REGION_CODE>",
  apiVersion: "2012-08-10"
});

exports.handler = (event, context, callback) => {
  const params = {
    TableName: "courses"
  };
  dynamodb.scan(params, (err, data) => {
    if (err) {
      console.log(err);
      callback(err);
    } else {
      const courses = data.Items.map(item => {
        return {
          id: item.id.S,
          title: item.title.S,
          watchHref: item.watchHref.S,
          authorId: item.authorId.S,
          length: item.length.S,
          category: item.category.S
        };
      });
      callback(null, courses);
    }
  });
};
The policy to get one course:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:GetItem",
      "Resource": "<YOUR_ARN_FOR_THE_COURSES_TABLE>"
    }
  ]
}
And the code for the corresponding lambda function (get-course):
const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({
  region: "<YOUR_AWS_REGION_CODE>",
  apiVersion: "2012-08-10"
});

exports.handler = (event, context, callback) => {
  const params = {
    Key: {
      id: {
        S: event.id
      }
    },
    TableName: "courses"
  };
  dynamodb.getItem(params, (err, data) => {
    if (err) {
      console.log(err);
      callback(err);
    } else {
      callback(null, {
        id: data.Item.id.S,
        title: data.Item.title.S,
        watchHref: data.Item.watchHref.S,
        authorId: data.Item.authorId.S,
        length: data.Item.length.S,
        category: data.Item.category.S
      });
    }
  });
};
To test this function, you can configure a test event with the following data:
{
  "id": "web-component-fundamentals"
}
The policy to delete a course:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:DeleteItem",
      "Resource": "<YOUR_ARN_FOR_THE_COURSES_TABLE>"
    }
  ]
}
And the code for the corresponding lambda function (delete-course):
const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({
  region: "<YOUR_AWS_REGION_CODE>",
  apiVersion: "2012-08-10"
});

exports.handler = (event, context, callback) => {
  const params = {
    Key: {
      id: {
        S: event.id
      }
    },
    TableName: "courses"
  };
  dynamodb.deleteItem(params, (err, data) => {
    if (err) {
      console.log(err);
      callback(err);
    } else {
      callback(null, data);
    }
  });
};
To test this function, you can configure a test event with the following data:
{
  "id": "web-component-fundamentals"
}
And these are all the functions you'll need. Now let's expose them to the world via a REST API.
API Gateway is a service that allows you to create a REST API, fully managed by AWS, that acts as the front end for other services.
Open the Services menu and choose API Gateway:
Press Get started:
You can create an API from a Swagger file, but here we're going to create the API manually, so choose to create a New API and give it a name:
This will create a new API and you'll be ready to start adding resources (URL paths) and methods (GET, POST, etc.):
Now open the Actions menu and choose Create Resource. Enter a resource name and check the option Enable API Gateway CORS:
By default, the URL path will be created from the resource name. Cross-origin resource sharing (CORS) is a mechanism that allows a web page to request resources hosted on another domain (which is going to be our case here). With this mechanism, the server sends some headers to tell the application that it's OK to access the resources on that different server.
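For reference, the preflight response that API Gateway sends back will include headers along these lines (the values shown here are illustrative; the exact list depends on your configuration):

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET,POST,OPTIONS
Access-Control-Allow-Headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key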
Next, click on Create Resource:
An OPTIONS method is also created, because the client sends a preflight request to check whether the resource is available and whether it returns the CORS headers:
Now, to start adding methods to the /courses endpoint, select it (don't select the root / endpoint), open the Actions menu and click on the Create Method option:
We're going to start with the POST method, so select this option and click on the check icon next to the menu to set up the method:
As the Integration type, choose Lambda Function, then choose the region where you created your lambda functions, and enter the name of the function that will be associated with this method (save-course):
Confirm the permission to invoke the lambda function:
By default, API Gateway will pass the request body to the lambda function and return to the client the object returned by the lambda function. Since the client is going to send exactly what the lambda function expects and the lambda function is going to do the same, there's no need to configure any request or response body mapping templates.
However, it's probably a good idea to use a model to validate the request.
So go to the Models section and click on Create:
Give it a name, enter application/json as the Content type, and as the model schema, enter the following:
{
  "$schema": "http://json-schema.org/schema#",
  "title": "CourseInputModel",
  "type": "object",
  "properties": {
    "title": {"type": "string"},
    "authorId": {"type": "string"},
    "length": {"type": "string"},
    "category": {"type": "string"}
  },
  "required": ["title", "authorId", "length", "category"]
}
Models use the JSON Schema specification. You can learn more about it here.
Click on Create model, then go back to the POST method and click on Method Request:
In Request Validator, choose the option Validate body; in the Request Body section, add the model for the application/json content type, and confirm all these choices:
Now if you click on TEST:
You can test the POST endpoint by entering a request body like:
{
  "title": "Web Component Fundamentals",
  "authorId": "cory-house",
  "length": "5:03",
  "category": "HTML5"
}
Everything should work correctly:
If you remove one or more attributes of the request, an error should be returned:
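With the default gateway responses (assuming you haven't customized them), a failed validation comes back as a 400 status code with a body like:

{
  "message": "Invalid request body"
}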
Moving on, we're also going to need a path like courses/web-component-fundamentals, where web-component-fundamentals corresponds to the ID of the course we want to get, update, or delete.
Select the courses resource and, from the Actions menu, create another resource, this time just adding a path variable using brackets (and enabling CORS):
Click on Create resource to create it:
Next, create a PUT method and link it to the update-course lambda function:
Now the thing with this operation is that the ID of the course is in the URL, and the rest of the attributes are in the request body.
In this case, we'll have to use a body mapping template to send all the information as the lambda function requires it.
So select the PUT method and click on Integration Request:
Then, in the Body Mapping Templates section, choose the option When there are no templates defined, add a mapping template for the content type application/json, click on the check icon next to it, and enter the following template:
{
  "id": "$input.params('id')",
  "title": $input.json('$.title'),
  "authorId": $input.json('$.authorId'),
  "length": $input.json('$.length'),
  "category": $input.json('$.category'),
  "watchHref": $input.json('$.watchHref')
}
The mapping template uses the Velocity Template Language (VTL). Here's the link to the relevant section in the AWS documentation.
The $input variable gives you access to all the request data; $input.params() returns the request parameters from the path, query string, or headers (in that order), and $input.json() returns a JSON string from the request body. You can learn more about the $input variable and its functions here.
Also, notice that the ID has to be "converted" to a string by enclosing it in quotation marks, while the rest of the data, since it's already sent as strings, doesn't have to be converted.
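To make this concrete, a PUT to /courses/web-component-fundamentals with the request body from the next step would be transformed by the template into a payload like this for the lambda function:

{
  "id": "web-component-fundamentals",
  "title": "Web Component Fundamentals",
  "authorId": "cory-house",
  "length": "5:03",
  "category": "HTML5",
  "watchHref": "http://www.pluralsight.com/courses/web-components-shadow-dom"
}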
Click Save and test it with the path variable web-component-fundamentals (assuming the record exists in your database) and the following request body:
{
  "title": "Web Component Fundamentals",
  "authorId": "cory-house",
  "length": "5:03",
  "category": "HTML5",
  "watchHref": "http://www.pluralsight.com/courses/web-components-shadow-dom"
}
Everything should work correctly:
If you like, add a model to validate the request, just like in the case of the POST method.
Now, applying what you have learned, create a DELETE and a GET method on the /courses/{id} resource, linking them to the delete-course and get-course functions and passing only the ID with the following template:
{
  "id": "$input.params('id')"
}
Don't forget to add a GET method on the /courses resource for the get-all-courses function, and another resource, /authors, with a GET method for the get-all-authors function.
When you're done, you should have something like this:
Now, we need to add the CORS header to all the methods (the CORS option you chose when creating a resource only adds the CORS headers for the preflight request).
Select the /authors endpoint, click on Actions and choose Enable CORS:
Deselect the OPTIONS method, click on Enable CORS and replace existing CORS headers:
And confirm to add the CORS headers to the GET method:
Do the same for all the methods of the /courses and /courses/{id} resources (except for the OPTIONS methods).
Finally, to deploy the API, go to the Actions menu and choose Deploy API:
Then, create a new stage, give it a name and optionally a description, and click on Deploy:
After the API is deployed, you'll get the URL to invoke it:
For example, in my case, the root URL of my API is https://hcqxdqvii3.execute-api.us-east-2.amazonaws.com/v1, and to get the course with ID web-component-fundamentals, I'll have to use the URL https://hcqxdqvii3.execute-api.us-east-2.amazonaws.com/v1/courses/web-component-fundamentals.
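Just to illustrate (this is my URL; yours will be different), the client can now call the API like any other REST endpoint:

// Hypothetical snippet: replace the URL with your own invoke URL.
const API_URL = "https://hcqxdqvii3.execute-api.us-east-2.amazonaws.com/v1";

fetch(`${API_URL}/courses/web-component-fundamentals`)
  .then(response => response.json())
  .then(course => console.log(course.title)); // "Web Component Fundamentals"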
Now, the only step missing is uploading a client app for our API to S3.
Clone the app from this GitHub repository.
In a terminal window, go to the app directory and execute npm install to install the dependencies.
Next, go to src/api/serverUrl.js and replace the exported value with the URL of your API.
If you want to test the app locally, execute npm start in the terminal.
Then, execute npm run build to build a version of the app ready for production. You can find the output files in the build folder.
Now, let's use Amazon S3 to host the app and go completely serverless.
Amazon S3 is a storage service that can also function as a web server.
In S3, you put your files in buckets. You can learn more about S3 in this developer guide; for now, let's create a bucket.
Open the Services menu and choose S3:
Click on the Create bucket button and enter a name and the region where your bucket will be created:
Make sure you don't put any spaces in the name. Also, the name has to be unique across all existing bucket names in S3, so use your domain name or something unique.
Next, click on the Create button; we'll keep the default values in the properties and permissions tabs.
Once the bucket is created, click on it, and go to the Properties tab:
Click on the Static website hosting option and select Use this bucket to host a website:
Since this is a single-page application, enter index.html as both the Index and Error documents (this way, client-side routes that S3 can't resolve will still load the app). Copy the URL that S3 gives you and click Save.
Next, in the Permissions tab, select Bucket Policy:
Enter the following policy to grant read-only permissions to your bucket, just changing examplebucket to your bucket name (in my case, net.eherrera.serverless-demo):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
Click on Save to apply the policy:
Now go to the Overview tab, click on the Upload button, drag in the content of the build directory, and click on Upload:
Once the upload has finished, go to your app using the URL you copied (or go back to the Static website hosting option in the Properties tab to get it):
Optionally, you can create another bucket to store the logs of the web server, you just have to enable the Server access logging option in the Properties tab.
However, I would highly recommend using CloudFront.
CloudFront is a Content Delivery Network (CDN) service that will copy your app to edge locations around the world to improve the speed in which your app is served to your users. You can learn more about it in this developer guide.
Open the Services menu and choose CloudFront:
Click on Create Distribution and under the Web section click on Get Started:
In Origin Domain Name, select the bucket that contains your app; Origin ID will be populated automatically:
Scroll down and, under Distribution Settings, set the Default Root Object value to index.html and click on Create Distribution at the bottom of the page.
Wait a few minutes until the deployment has finished, and then click on the ID of your distribution:
In the General tab, you'll find the new URL of your app in the Domain Name row (you can use HTTPS for this URL if you prefer):
And that's it!
The idea of a serverless application is compelling. In this tutorial, you have learned how to use AWS API Gateway and Lambda functions to build a REST API that performs CRUD operations on a DynamoDB database. You have also learned how to host the client single-page application on S3 and distribute it worldwide using CloudFront.
This was just a cursory look at each of these technologies; there is a lot more to learn about each of them, and about other AWS components as well!
If AWS excites you, some interesting next steps would include authenticating users with Amazon Cognito, or using a framework like Serverless or a service like AWS CloudFormation to simplify managing and deploying serverless applications.
Thanks for reading. I hope you found this guide informative and engaging.