Serverless showdown: AWS Lambda vs Azure Functions vs Google Cloud Functions
What are the differences between AWS Lambda, Azure Functions, and Google Cloud Functions? Compare the FaaS services of AWS, Azure, and Google Cloud.
Jun 08, 2023 • 13 Minute Read
When it comes to trendy buzzwords, “serverless” might be the most popular.
At the heart of the serverless paradigm is the Function as a Service (FaaS) model, a category of services that make it ridiculously easy to run code in the cloud without provisioning any compute infrastructure.
FaaS has truly been a game-changer: it considerably accelerated the deployment of complex backend services and democratized application development. Gone are the days when companies needed to invest significant capital and resources to take their ideas to market. Now, all you need is an idea, good code, and an account with a major cloud provider. Within minutes, your application can be running on managed infrastructure, serving millions of requests, and scaling (virtually) infinitely.
This article will compare the FaaS services of Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) and offer some insight into one of the most transformative technologies of the modern cloud age!
Table of contents
- The TL;DR
- Feature Comparisons
- Scalability, Concurrency, and Cold Starts
- Configuration and Performance
- HTTP Integration
- Hosting Plans
- The Bottom Line
- Related Resources
The TL;DR

At the core of any application is the code that makes up its logic and functionality. Whether it is the latest mobile game or your typical boring enterprise finance software, there are lines of code (sometimes thousands of them) that need to run somewhere. This "somewhere" is typically a server, or a group of servers, where CPU cycles execute the logic that powers these applications.
But servers, even virtual cloud servers, are expensive and can be a pain to maintain, often requiring highly trained and experienced administrators to secure and manage them. Additionally, when no users are playing that game or using the finance software, these expensive servers will sit idly, virtually twiddling their thumbs, waiting for new “work” to come in.
Traditional compute infrastructure can be very inefficient, and that is exactly what makes FaaS so appealing.
Instead of standing up complex infrastructure (servers, load balancers, etc.), FaaS lets you run your code on a managed pool of compute resources while paying only for the duration of execution.
FaaS functions are event-driven, meaning they run in response to certain events. While not always the case, those functions are often short-lived, ephemeral, stateless, and single-purpose.
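As a minimal sketch of this model, here is a Lambda-style Python handler (the handler name and event fields are illustrative; Azure Functions and Cloud Functions use different signatures but the same event-in, result-out shape):

```python
import json

def handler(event, context):
    """A short-lived, stateless, single-purpose function.

    `event` carries the triggering payload (an HTTP request, a queue
    message, a storage notification, ...); `context` carries runtime
    metadata. Nothing persists between invocations.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform invokes the handler once per event; locally you can exercise it with `handler({"name": "FaaS"}, None)`.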
The table below is a brief summary of the FaaS services offered by AWS, Azure, and GCP:
| Service | Launched | Regional Availability |
|---|---|---|
| AWS Lambda | November 2014 | Global |
| Azure Functions | March 2016 | Global |
| Google Cloud Functions | February 2016 | Global |
Feature Comparisons

One of the main reasons for the popularity of FaaS is cost: it is dirt-cheap and, in many cases, practically free. That said, like everything else in the cloud, that price tag can change dramatically as the scale gets larger.
The two main contributors to FaaS cost are:
- Number of requests: typically billed for every million requests per month
- Compute time: a measure of how long the function runs combined with the amount of memory provisioned, billed in GB-seconds (a higher-memory execution costs more than a lower-memory execution of the same duration)
Additionally, other charges like data transfer or storage costs might apply, but those depend on the specific use case.
All three major providers offer a monthly free tier, and costs are incurred only after that quota is exceeded. If you are experimenting or building a proof of concept, the free tier will most likely cover you.
The table below compares what each provider offers in terms of free tier quota, and how much additional usage costs.
| Provider | Free Monthly Duration (GB-seconds) | Free Monthly Requests | Cost of Each Additional 1 Million Requests | Cost of Each Additional 1 GB-second | Duration Rounded to the Nearest |
|---|---|---|---|---|---|
| AWS Lambda | 400,000 | 1 million | $0.20 | $0.0000166667 | 1 ms |
| Azure Functions | 400,000 | 1 million | $0.20 | $0.000016 | 1 ms |
| GCP Cloud Functions | 400,000 | 2 million | $0.40 | $0.0000025 | 100 ms |
Takeaway: The pricing structure is almost identical across all three providers.
AWS and Azure have identical pricing and free monthly quotas, while GCP offers an extra 1 million free requests per month, and has comparable pricing for additional requests. Both AWS and Azure round their duration to the nearest 1ms, while GCP rounds to the nearest 100ms increment, which could add to the overall cost at scale.
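To make the billing model concrete, the sketch below estimates a monthly bill from requests and GB-seconds. The default rates and free-tier figures are illustrative AWS-style list prices, not authoritative; always check the providers' current pricing pages.

```python
def monthly_faas_cost(requests, avg_duration_s, memory_gb,
                      price_per_million_requests=0.20,   # illustrative rate (USD)
                      price_per_gb_second=0.0000166667,  # illustrative rate (USD)
                      free_requests=1_000_000,
                      free_gb_seconds=400_000):
    """Estimate a monthly FaaS bill: request charges plus compute
    charges (GB-seconds), with the free tier deducted first."""
    billable_requests = max(0, requests - free_requests)
    gb_seconds = requests * avg_duration_s * memory_gb
    billable_gb_seconds = max(0, gb_seconds - free_gb_seconds)
    request_cost = billable_requests / 1_000_000 * price_per_million_requests
    compute_cost = billable_gb_seconds * price_per_gb_second
    return round(request_cost + compute_cost, 2)

# 10M requests/month at 200 ms average on 512 MB of memory:
# 1,000,000 GB-s total, 600,000 billable after the free tier.
estimate = monthly_faas_cost(10_000_000, avg_duration_s=0.2, memory_gb=0.5)
```

Even at ten million requests a month, the estimate lands around twelve dollars, which illustrates why small and mid-sized workloads are often effectively free.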
There are more programming languages out there than there are Star Wars sequels, prequels, and spin-offs. Every language has its strengths and weaknesses, so one needs to pick the right tool for the job. For the most part, cloud providers support the popular languages in their respective FaaS offerings.
The table below shows the currently supported FaaS runtimes for AWS, Azure, and GCP:
| Service | Supported Languages |
|---|---|
| AWS Lambda | C#, Go, Java, Node.js, PowerShell, Python, Ruby |
| Azure Functions | C#, F#, Java, Node.js, PowerShell, Python, TypeScript |
| GCP Cloud Functions | C#, F#, Go, Java, Node.js, Python, Ruby, Visual Basic |
Takeaway: Language support is very comparable across the three major providers. The only notable exceptions are GCP lacking support for PowerShell and Azure lacking support for Go (a rather interesting observation, considering that Go was developed by Google and PowerShell by Microsoft!)
If your language of choice is not listed above, it is still possible to "bring your own runtime," which lets you run functions written in virtually any programming language. See the table below for current support for custom runtimes.
| Provider | Support for Custom Runtimes |
|---|---|
| AWS Lambda | Yes, using custom deployment packages or AWS Lambda Layers |
| Azure Functions | Yes, using Azure Functions custom handlers |
| GCP Cloud Functions | Yes, using custom Docker images |
Scalability, Concurrency, and Cold Starts
In the early days of FaaS, the most common use case was to "kick the tires" or "mock something up" before moving to a more mature solution. Those days are gone; today, it is not uncommon to find global-scale production apps running entirely on a FaaS backend.
To achieve this, it is important to understand how cloud providers scale FaaS workloads in response to increased demand, and how they handle concurrent requests.
As mentioned earlier, FaaS functions are event-driven, so when an event is received, an instance of the function is spun up and the request is processed. The instance is kept alive to process subsequent events, and if none are received within a certain time frame, it is recycled.
All three providers advertise virtually unlimited and automatic scaling, although there are other factors that might be at play here, which are discussed below.
When a FaaS instance is busy processing a request and another request arrives, a second instance is spun up to handle it (instances process only one request at a time). Concurrency is the number of function instances that can run simultaneously at any given moment.
| Service | Concurrency Limit |
|---|---|
| AWS Lambda | Standard: 1,000 per region (soft limit) |
| Azure Functions | No advertised concurrency limit |
| GCP Cloud Functions | No advertised concurrency limit |
AWS is the only provider to offer highly customized concurrency management options, while Azure and GCP are a little vague on how concurrent executions are handled.
Given the relatively young age of FaaS as a technology, it has many detractors and naysayers. Their favorite critique, by far, is the notion of “cold starts”.
So what is a cold start, anyway?
Imagine this familiar action movie scene: our hero is racing down the highway at 120 mph, probably on his way to save some lives, when he zooms past an unsuspecting highway trooper, on his break, reading a newspaper. By the time the trooper fumbles to start his engine and gets up to highway speed to give chase, our hero is miles away already, and the trooper will need some time to catch up.
Similarly, a FaaS instance in an inactive state will require some additional time to respond to a request. This initial delay encountered is known as a cold start.
Contrast that with a FaaS instance that is already active when a request arrives: no initial delay is experienced, and the instance can start processing almost instantly. Using our example, this would be analogous to a trooper already cruising down the highway at normal speed when a speeder blows past; in that case, the trooper can accelerate and catch up in mere seconds.
While cloud providers do not publish their cold start statistics, the following table shows estimated averages as observed by industry analysts:
| Service | Average Cold Start (in seconds) |
|---|---|
| AWS Lambda | 0.1 - 1 |
| Azure Functions | 1 - 5 |
| GCP Cloud Functions | 0.5 - 2 |
Cold starts can hurt FaaS workloads that are very sensitive to delay, though there are ways to mitigate them, and they appear to affect the Azure platform more than its two competitors. Additionally, AWS now offers "provisioned concurrency" as an approach to eliminate cold starts with Lambda. We discuss this feature in more detail later in the article.
Configuration and Performance
Not all functions are created equal, so different workloads might require different settings to optimize performance.
Depending on how resource-hungry the code is, memory will have to be adjusted accordingly. If the memory allocated is too low, a function will take longer to execute and could potentially time out, but if the memory is set too high, you might end up over-paying for unused resources.
Cloud providers offer different maximum memory configurations, while CPU power is allocated automatically in proportion to the amount of memory chosen.
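Because CPU scales with memory, a CPU-bound function often runs roughly twice as fast with twice the memory, which can leave the per-invocation cost nearly unchanged while latency improves. A rough sketch (the rate is illustrative, not a quoted price):

```python
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative rate (USD)

def invocation_cost(memory_mb, duration_s):
    """Cost of a single invocation, billed as GB-seconds."""
    return memory_mb / 1024 * duration_s * PRICE_PER_GB_SECOND

# Doubling memory for a CPU-bound function roughly halves its
# duration, so both configurations consume about 1.0 GB-second:
slow = invocation_cost(memory_mb=512, duration_s=2.0)
fast = invocation_cost(memory_mb=1024, duration_s=1.0)
```

This is why memory tuning is usually an experiment: measure duration at a few memory levels and pick the cheapest configuration that meets your latency target.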
| Service | Memory Range |
|---|---|
| AWS Lambda | 128 MB - 10,240 MB |
| Azure Functions | 128 MB - 1,500 MB (Consumption Plan), 128 MB - 14,000 MB (Premium and Dedicated Plans) |
| GCP Cloud Functions | 128 MB - 4,096 MB (in multiples of 128 MB) |
Of note is the maximum memory that can be configured on GCP Cloud Functions: at 4096 MB, that limit is considerably lower than what AWS and Azure offer.
The other configurable aspect of FaaS is the maximum execution time. While most functions in the wild take seconds (or less) to execute, some intensive workloads can potentially take much longer, on the order of minutes, or even hours (for example, intensive machine learning or data analysis workloads).
The table below shows the maximum timeouts that each cloud provider offers:
| Service | Maximum Timeout |
|---|---|
| AWS Lambda | 15 minutes |
| Azure Functions | 10 minutes (Consumption Plan; 5 minutes by default), 30 minutes (Premium and Dedicated Plans) |
| GCP Cloud Functions | 9 minutes |
It is important to note that increasing the timeout is not always the solution, and it should be considered in conjunction with adjusting the memory.
As discussed earlier, functions deployed on a FaaS are, by nature, stateless. In other words, functions are not aware of other functions or of the execution results of other functions.
Even invocations of the same function are completely independent of each other. This stateless paradigm is what makes FaaS so scalable and easy to provision.
While the stateless approach is excellent for executing a large number of short-lived, single-purpose functions (for example, handling a contact form on a website), it makes it difficult to build more complex applications, which often require some sort of state management. Realizing this, cloud providers built orchestration services that integrate functions as "steps" in a workflow, where the output of one step can be passed as input to the next. This enables building fairly complex workflows in a completely serverless fashion!
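The pattern is easy to picture locally. The sketch below is not the Step Functions or Durable Functions API; it is a plain-Python illustration (with made-up step names) of an orchestrator threading one step's output into the next step's input:

```python
def validate_order(state):
    # Step 1: reject invalid input early.
    if state["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    return {**state, "valid": True}

def charge_payment(state):
    # Step 2: compute the charge from the validated order.
    return {**state, "charged": state["quantity"] * state["unit_price"]}

def send_receipt(state):
    # Step 3: record that a receipt went out.
    return {**state, "receipt_sent": True}

def run_workflow(state, steps):
    """What an orchestrator does at its core: each stateless step
    receives the previous step's output as its input."""
    for step in steps:
        state = step(state)
    return state

result = run_workflow({"quantity": 2, "unit_price": 5.0},
                      [validate_order, charge_payment, send_receipt])
```

The real services add durability, retries, branching, and error handling on top of this basic chaining idea.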
The following table lists what each provider offers for orchestration services. Those services are often very scalable as well and have many features that are beyond the scope of this article:
| Provider | Orchestration Service |
|---|---|
| AWS | AWS Step Functions |
| Azure | Durable Azure Functions |
| GCP | Google Cloud Workflows |
HTTP Integration

The power of FaaS services lies in the fact that they are event-driven, meaning certain "interesting events" can trigger an execution of the function.
“What are interesting events?” you might ask.
Anything from a simple cron schedule (example: run this function every day at midnight) to other services within the cloud provider’s ecosystem (for example, run this function when a file is uploaded to cloud storage). But one of the most popular scenarios is integrating FaaS with an HTTP endpoint.
While all three providers support HTTP integration, AWS has historically required provisioning and configuring a separate resource, API Gateway, which is billed separately as well (though Lambda now also offers lightweight, built-in function URLs). Azure and GCP have a much more streamlined HTTP integration.
| Service | HTTP Integration Support |
|---|---|
| AWS Lambda | Yes, via API Gateway (billed separately) or Lambda function URLs |
| Azure Functions | Yes, out of the box |
| GCP Cloud Functions | Yes, out of the box |
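As an illustration of the HTTP case, here is a Python handler shaped for an API Gateway-style proxy event (the fields shown are a subset of the real proxy format; the greeting logic is made up):

```python
import json

def http_handler(event, context):
    """Turn an HTTP-style event into an HTTP-style response."""
    if event.get("httpMethod") != "GET":
        return {"statusCode": 405,
                "body": json.dumps({"error": "method not allowed"})}
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {"statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"greeting": f"Hello, {name}!"})}
```

On Azure and GCP the HTTP trigger is built in, so the function receives a request object directly instead of a proxy event, but the request-in, response-out shape is the same.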
Hosting Plans

Given the varying availability and latency requirements discussed earlier, cloud providers have responded by baking different tiers of availability into their FaaS offerings.
AWS Lambda historically came as one basic hosting plan, but more recently AWS started offering Provisioned Concurrency, which ensures that functions are initialized and ready to respond to events, cutting down the dreaded cold-start time to mere milliseconds. Azure offers a more complex variety of hosting options, while GCP just offers a one-size-fits-all plan.
| Service | Hosting Plans |
|---|---|
| AWS Lambda | On-demand, Provisioned Concurrency |
| Azure Functions | Consumption, Premium, Dedicated (App Service) |
| GCP Cloud Functions | General |
The Bottom Line
When it comes to comparing the FaaS offerings of the three major cloud providers, one thing is very obvious: they are extremely similar and comparable, both in terms of features and cost.
While AWS Lambda is the most mature and most popular of the three, Azure Functions offers very similar features and, in some ways, more options to accommodate edge cases. GCP Cloud Functions has fewer bells and whistles but is still fairly comparable to the other two.
The devil, as they say, is in the details. Most likely, there are other factors to consider when comparing these three FaaS services. But whatever the case is, the key takeaway here is that FaaS is here to stay, and it is a truly transformative technology that will only continue to gain adoption, so if you have not already, make sure that you are leveraging it!
Related Resources

- Applying Infrastructure as Code and Serverless Technologies to AWS Deployments
- Automating AWS with Lambda, Python, and Boto3
- Building a Full-Stack Serverless Application on AWS
- Building more cost-effective Lambda functions with 1 ms billing
- AWS Lambda is winning, but first it had to die
- Serverless Computing with Azure Functions
- How to build a serverless app using Go and Azure Functions
- How to create CRUD applications with Azure Functions and MongoDB
- Google Cloud Functions deep dive
- Introduction to Serverless on Google Cloud