
Serving dynamic website content with serverless architecture

Jun 08, 2023 • 8 Minute Read


Webpage content is generally static or dynamic. Static content is the same content served to every visitor, whereas dynamic content changes. Some of the world’s busiest websites are based on dynamic content, such as news sites with lists of headlines, or ecommerce sites with the latest deals.

To serve dynamic content with a traditional server-based web app, typically a server script or application fetches the results from a database and renders the page. Scaling this design to serve large numbers of visitors can be challenging. Each page view requires the web server to query a database and render the results back to the reader. With heavy traffic, this adds load to both the server and database. 

For sites using LAMP stack architectures, such as WordPress, or other server-based web applications, you can take a hybrid serverless approach to provide this functionality. This can help accelerate development, make it easier to scale, and promote a microservice-based model.

A serverless architecture can serve thousands of visitors per second much more easily by using services designed for scaling. A hybrid approach also leaves your existing server code and database schema unchanged. In this blog post, I explain how you can build a highly scalable solution to serve dynamic content using minimal custom code.


While you can use serverless technology to build entire web applications, you can also use microservice-based backends to bring dynamic functionality to existing websites. 

The Serverless Land website uses this approach to generate dynamic results for visitors. On the site, the What’s New page aggregates feature release information from over a dozen different AWS services. A backend service collects this information from public RSS feeds, then the frontend fetches the data from a public endpoint. When visitors navigate to the site, they see a page that retrieves the aggregated results.

The serverless architecture

The solution uses two separate microservices. One fetches the data on a schedule from external data sources and stores the result in a database. The other serves the frontend web application, fetching the results from the database. The architecture looks like this:  

  1. An Amazon EventBridge scheduled rule runs every hour to invoke a Lambda function.
  2. The AWS Lambda function fetches new data from external data source(s) and filters for records to save.
  3. The function writes the items to an Amazon DynamoDB table.
  4. The website frontend calls an Amazon CloudFront distribution’s URL. The cache is set to 5 minutes.
  5. Whenever the cache is empty, or it expires, CloudFront fetches the latest content from an API Gateway endpoint.
  6. Amazon API Gateway invokes a second Lambda function.
  7. The Lambda function loads the data from the DynamoDB table.

Deploying the example

This example uses the AWS Serverless Application Model (AWS SAM) to deploy the solution to your AWS account. To get started, you need the AWS CLI and AWS SAM installed. You also need an AWS account, but everything shown here can be deployed and tested within the AWS Free Tier.

To deploy this example:

  1. Clone the code from the samples repo:
    git clone
  2. Change directory:
    cd ./serverless-content-service-from-rss
  3. Run AWS SAM build to download and install the dependencies used:
    sam build
  4. To deploy the application to your AWS account, run AWS SAM guided deploy:
    sam deploy --guided
  5. The deployment wizard prompts you with a series of questions. Enter your preferred stack name, AWS Region, and allow AWS SAM to create the API without authorization defined.

With the application deployed, the EventBridge scheduled rule invokes the Import Lambda function every hour. This fetches data from a public RSS feed and stores the items in the application’s DynamoDB table.
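The stored items must match the table’s key schema: a string partition key (PK) and a numeric sort key (SK). A minimal sketch of that mapping is below; the attribute values and extra field names are assumptions for illustration and may differ from the repo’s actual code:

```javascript
// Hypothetical mapping from a parsed RSS entry to a DynamoDB item.
// PK is a string and SK a number, matching the table's key schema;
// the 'rss' partition value and extra fields are assumptions.
function toTableItem(entry) {
  return {
    PK: 'rss',                               // single partition for all feed items
    SK: new Date(entry.pubDate).getTime(),   // numeric timestamp as the sort key
    title: entry.title,
    link: entry.link
  }
}
```

Using a timestamp as the numeric sort key is what later lets the read path return the newest items first.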

Understanding the AWS SAM template

The template in the repo defines all the resources used by this application. First, it declares a DynamoDB table with a partition key and sort key, using on-demand capacity mode:

    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: PK
          AttributeType: S
        - AttributeName: SK
          AttributeType: N
      KeySchema:
        - AttributeName: PK
          KeyType: HASH
        - AttributeName: SK
          KeyType: RANGE
      BillingMode: PAY_PER_REQUEST

Next, the template defines the Import Lambda function. It sets the memory size at 128 MB and the runtime to Node.js 14. The RSS feed URL is provided as an environment variable to the function, so you can update it here for your own needs without needing to change the code in the Lambda function:

    Type: AWS::Serverless::Function
    Properties:
      Timeout: 10
      CodeUri: parser/
      Handler: app.handler
      Runtime: nodejs14.x
      MemorySize: 128
      Environment:
        Variables:
          Feed: ''
          TableName: !Ref DDBtable
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref DDBtable
      Events:
        ImportEvent:
          Type: Schedule
          Properties:
            Schedule: 'rate(1 hour)'
            Name: ImportSchedule
            Enabled: True
This uses an AWS SAM policy template to grant permissions to the template’s DynamoDB table. AWS SAM automatically provides minimal permissions to invoke the function and save logs to Amazon CloudWatch. In Events, the template defines an hourly invocation using an EventBridge scheduled rule. These two sections define everything needed by the first microservice to fetch the data on a schedule and store the results in DynamoDB.

The second microservice creates an HTTP endpoint for a client application to fetch the items. The template defines an API Gateway endpoint and a Lambda function:

    Type: AWS::Serverless::HttpApi
    Properties:
      CorsConfiguration:
        AllowMethods:
          - GET
          - POST
          - DELETE
          - OPTIONS
        AllowHeaders:
          - "*"
        AllowOrigins:
          - "*"

    Type: AWS::Serverless::Function
    Properties:
      Timeout: 3
      CodeUri: getItems/
      Handler: app.handler
      Runtime: nodejs14.x
      MemorySize: 128
      Environment:
        Variables:
          TableName: !Ref DDBtable
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref DDBtable
      Events:
        GetItems:
          Type: HttpApi
          Properties:
            Path: /items
            Method: get
            ApiId: !Ref MyApi

In the HTTP API’s configuration, this defines an open CORS configuration for testing and development purposes. In a production application, you should scope this access accordingly. The Lambda function’s event definition configures the API path and method.

Finally, the template defines the CloudFront distribution. This uses the API Gateway endpoint as an origin and sets a time to live (TTL) value of 300 seconds:

    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        IPV6Enabled: true
        HttpVersion: http2
        Comment: !Ref 'AWS::StackName'
        Origins:
          - Id: APIGOrigin
            DomainName: !Sub '${MyApi}.execute-api.${AWS::Region}.amazonaws.com'
            CustomOriginConfig:
              HTTPSPort: 443
              OriginProtocolPolicy: https-only
        DefaultCacheBehavior:
          AllowedMethods: ["GET", "HEAD"]
          CachedMethods: ["GET", "HEAD"]
          ForwardedValues:
            Headers:
              - Access-Control-Request-Headers
              - Access-Control-Request-Method
              - Origin
            QueryString: true
          TargetOriginId: APIGOrigin
          ViewerProtocolPolicy: https-only
          Compress: true
          DefaultTTL: 300

Testing the application

With the backend deployed, the Import function fetches new content every hour. You can also manually run the function in the Lambda console to fetch content immediately. 

After running the sam deploy command, the AWS CloudFormation outputs from the stack show the base URL for your distribution.

The complete endpoint is the base URL with the ‘/items’ path appended. You can use tools like curl or Postman to query this URL and see the list of items returned:

Integrating with a web application

For the web application to use this backend, it must make an API call to the previous endpoint. This process uses JavaScript to call the endpoint directly from the browser frontend. In single-page application frameworks like React, Vue.js, or Angular, you can set up the page component to call the endpoint when the page first loads.

The Serverless Land website is written in Vue.js. The following code snippet shows how the component instance on the page makes the API call in the mounted lifecycle hook. The ITEMS_URL variable contains the URL for fetching the list of items from the CloudFront distribution:

const axios = require('axios')
// ITEMS_URL is the CloudFront distribution's /items endpoint (URL omitted here)
const ITEMS_URL = ''

export default {
  data () {
    return {
      items: []
    }
  },
  mounted: async function() {
    // Load dynamic data from backend service
    const result = await axios.get(ITEMS_URL)
    this.items = result.data
  }
}

Currently, this returns the most recent 100 results. In an advanced implementation, the API could also support searching and paging via query parameters. You can use the sort key in the underlying DynamoDB table to sort the results by timestamp.
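For example, the read-path function could build its DynamoDB query so that items come back in descending sort-key order. The following is a sketch, not the repo’s actual query; the partition key value and helper name are assumptions:

```javascript
// Builds hypothetical parameters for a DynamoDB DocumentClient query:
// descending by the numeric SK (timestamp), capped at the newest 100 items.
// The 'rss' partition key value is an assumption for illustration.
function buildQueryParams(tableName, limit = 100) {
  return {
    TableName: tableName,
    KeyConditionExpression: 'PK = :pk',
    ExpressionAttributeValues: { ':pk': 'rss' },
    ScanIndexForward: false,  // newest (highest SK) first
    Limit: limit
  }
}
```

Paging could then be layered on by passing the query’s LastEvaluatedKey back as ExclusiveStartKey on the next request.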

Configuring CloudFront caching

In this solution, the frontend web application could directly call the API Gateway endpoint and receive the same response. However, in busy web applications, this results in a large number of calls to API Gateway, which in turn invokes Lambda and DynamoDB. Since the list of dynamic items may only change infrequently, you can use CloudFront to cache the result, improve latency, and reduce the overall cost.

In this example, the Import function runs hourly. You could potentially schedule the polling process to run at a shorter interval, such as every minute. You can customize the CloudFront TTL values for a period as short as one second, depending upon the needs of your workload.

The live site uses a five-minute cache. If a large number of site visitors arrive, most requests are handled by the CloudFront distribution. The API Gateway endpoint is called by CloudFront only when the cache expires or the existing cache is empty.
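The effect of the TTL can be modeled in a few lines: however many requests arrive, only the first one after expiry reaches the origin. This is a minimal model of TTL caching, not how CloudFront is implemented internally:

```javascript
// Minimal TTL-cache model: the wrapped origin fetcher runs at most once
// per TTL window, no matter how many requests arrive in between.
function makeTtlCache(ttlSeconds, fetchOrigin) {
  let value
  let fetchedAt = -Infinity
  return function get(nowSeconds) {
    if (nowSeconds - fetchedAt >= ttlSeconds) { // cache empty or expired
      value = fetchOrigin()
      fetchedAt = nowSeconds
    }
    return value
  }
}
```

With a 300-second TTL, origin load stays roughly constant per cache key regardless of visitor traffic.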


Conclusion

Dynamic webpage content can make your site more useful to visitors. It’s commonly used to provide functionality like the latest ecommerce deals, the most recent customer comments, or the most recent feature releases for your product.

Using a traditional server-based approach, this can add more load to your webserver and underlying database. It can also be challenging to scale out or handle spiky workloads. 

This blog post shows a serverless approach that supports high throughput. This uses a microservice that fetches data and stores the results in a DynamoDB table. The client application calls a CloudFront distribution endpoint that uses another microservice to fetch items from the table.

For more serverless learning resources, visit Serverless Land.