Anyone who has worked with ASP.NET and then compared it to, well, almost anything else will tell you that ASP.NET has historically not been a fast platform. One of the "fairest" measures of web platform speed is TechEmpower's Web Framework Benchmarks. Chances are you haven't heard of many of the platforms listed there, and most aren't practical for typical projects, so it's probably best to focus on the ones you know.
So, just how slow was ASP.NET 4.6? To answer that, we need to go back to Round 9 in 2014, the last time ASP.NET was even listed. The fastest framework was cpoll_cppsp, which handled 6,738,911 requests per second.

The first framework you might recognize is Go, which managed 490,728 requests per second, only about 7.3% of cpoll_cppsp's throughput.

That's all well and good, but how about ASP.NET? Near the bottom of the list, you'll find it coming in at 71,589 requests per second, barely 1% of what cpoll_cppsp managed.

A Change in Priorities
In 2014, the .NET team decided it was time for a change. They had two primary reasons for this massive undertaking. First, getting started on an ASP.NET app was a multi-hour endeavor: roughly four hours could pass between deciding you wanted to try ASP.NET and writing your first line of code. (We covered how much they've improved that experience in a previous blog post, Writing .NET Applications With Visual Studio Code.) Second, they wanted to improve ASP.NET's performance as much as possible.
Getting Up to Speed
The first thing ASP.NET Core did to improve performance was abandon the "everything in one" strategy, breaking the various parts of .NET into separate packages. You now take on the overhead only for what you actually use, instead of having it all bundled into one massive framework.
When .NET moved to GitHub and truly embraced open source, the team opened the floodgates so everyone could make improvements to the framework, explicitly challenging the community to do what it could to help performance. Developers like Ben Adams took up this torch and helped optimize things to absurd levels.
Of course, every little improvement in performance required digging deeper for more ways to improve, and this is where the insanity begins. One annoying overhead for any web server is parsing requests. Almost all of the data in a request is strings. The URL? That's a string. Headers? Also strings. When it comes to performance, strings are one of the worst data types to deal with in almost any language; in .NET they're immutable, and every conversion from raw bytes to a string means a fresh allocation.
Luckily, most of these strings are predictable. Request methods are GET, POST, PUT, DELETE, and so on. URLs start with http://, https://, ftp://, ftps://, etc. So .NET actually handles most of these values as raw bytes rather than taking the time to turn them into strings, and comparing bytes is far cheaper than allocating and comparing strings.
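The idea can be sketched in a few lines of C. This is a hypothetical illustration, not the actual Kestrel source (and Kestrel is C#, not C): the point is that a known token like the request method can be recognized by comparing raw bytes against the handful of expected values, with no string ever being created.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: identify an HTTP method by comparing raw bytes
 * against the small set of known methods, never allocating a string.
 * The trailing space in each literal doubles as the token delimiter. */
typedef enum {
    METHOD_GET, METHOD_POST, METHOD_PUT, METHOD_DELETE, METHOD_UNKNOWN
} http_method;

static http_method parse_method(const uint8_t *buf, size_t len)
{
    if (len >= 4 && memcmp(buf, "GET ", 4) == 0)    return METHOD_GET;
    if (len >= 5 && memcmp(buf, "POST ", 5) == 0)   return METHOD_POST;
    if (len >= 4 && memcmp(buf, "PUT ", 4) == 0)    return METHOD_PUT;
    if (len >= 7 && memcmp(buf, "DELETE ", 7) == 0) return METHOD_DELETE;
    return METHOD_UNKNOWN;
}
```

Because the set of methods is tiny and fixed, the comparisons short-circuit after a byte or two on a mismatch; a string-based version would pay for an allocation and a copy before the first comparison even started.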
Getting Faster
We've been going down the rabbit hole, but let's go even farther. When you develop day to day, you rarely need to think about which instructions the processor will use to accomplish a task. But when you're fighting for every nano- and microsecond... well, here we are.
So three operations come into play: Peek, Take, and Seek. Most of what we're doing here deals with Seek. Normally we'd take the collection of bytes that makes up our URL, convert it to a string, and then process that value, but the conversion costs precious nanoseconds. Instead, we keep the bytes as they are and use Seek to grab parts of the URL and match them against known values like http://. This would normally take one to four Seeks, depending on what we're checking.
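To make the Peek/Take/Seek idea concrete, here is a minimal byte-reader sketch in C. The names mirror the operations described above, but the struct and functions are hypothetical illustrations, not the actual Kestrel API: peek looks at the current byte, take consumes it, seek repositions, and a prefix match against a known value like "http://" works directly on the bytes.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical byte reader; names mirror the Peek/Take/Seek idea. */
typedef struct {
    const uint8_t *buf;
    size_t pos, len;
} reader;

static int rd_peek(reader *r)            /* look at current byte */
{
    return r->pos < r->len ? r->buf[r->pos] : -1;
}

static int rd_take(reader *r)            /* consume current byte */
{
    return r->pos < r->len ? r->buf[r->pos++] : -1;
}

static void rd_seek(reader *r, size_t pos)  /* jump to a position */
{
    r->pos = pos < r->len ? pos : r->len;
}

/* Match a known prefix such as "http://" without building a string;
 * on success the matched bytes are consumed. */
static int rd_match(reader *r, const char *lit)
{
    size_t n = strlen(lit);
    if (r->len - r->pos < n || memcmp(r->buf + r->pos, lit, n) != 0)
        return 0;
    r->pos += n;
    return 1;
}
```

A single rd_match over a known prefix is the "one Seek" case; checking several candidate schemes in turn is where the "one to four" range comes from.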
We take this even further, though: using processor vectorization, a single operation can handle up to 128 bits of data, even if that data spans multiple values. That lets us pack up to four Seeks' worth of work into a single instruction. This is only possible because we're reinterpreting those collections of bytes as longs, which makes things faster still.
A Way to Make Things Even Faster
Now we're digging into the dark corners of performance work. Writing and copying memory is slow; that costs precious microseconds. Pointing at existing memory is faster! So rather than copying each header's value into memory, let's build one giant constant byte array containing all the known headers. Then, instead of copying a header, we can simply point to the part of that array matching the header's bytes.
I mean, "0x312E312F50545448" is just as understandable as "HTTP/1.1", right? Why would we ever want to turn it into an expensive string that plainly reads "HTTP/1.1" and memcpy it, costing ourselves precious cycles?
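That hex literal isn't arbitrary: it is exactly the eight ASCII bytes of "HTTP/1.1" read as a little-endian 64-bit integer. A small C sketch (hypothetical function name; assumes a little-endian host such as x86/x64) shows how the whole version check collapses into a single integer comparison:

```c
#include <stdint.h>
#include <string.h>

/* "HTTP/1.1" is the bytes 48 54 54 50 2F 31 2E 31; read as a
 * little-endian 64-bit integer, that is 0x312E312F50545448. */
#define HTTP_1_1 0x312E312F50545448ULL

/* One 8-byte load and one integer compare; no string, no allocation.
 * Assumes a little-endian host. */
static int is_http11(const uint8_t *buf, size_t len)
{
    uint64_t word;
    if (len < 8)
        return 0;
    memcpy(&word, buf, 8);      /* alignment-safe load */
    return word == HTTP_1_1;
}
```

Compare that to building a managed string from the bytes and calling an equality method on it: the integer version is branch, load, compare, done.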
…But You Shouldn’t Do This in Your Applications
One thing I want to make very clear: this level of optimization only makes sense for code executed hundreds of thousands of times per second or more, where you can't simply throw more hardware at the problem. It does improve performance, at the expense of being borderline impossible to read and maintain. For a framework or programming language, that trade-off has real justification; for a typical production application, spinning up another server makes more sense than burning hundreds of working hours fighting for nanoseconds. If you've maintained code before, just imagine maintaining 10,000 lines written in this style (and that's just the file responsible for the headers).

So, What’s the Performance Like Now?
Based on the current benchmarks, ASP.NET is expected to crack the top 20 in Round 13 of TechEmpower's benchmarks, coming in at around 1.2 million requests per second (roughly 20% of the fastest framework). That would put ASP.NET just ahead of Go, which placed 25th in Round 12 this year and is the first framework on the list that most people will recognize.
Now's the time to learn ASP.NET Core, and what better way to get started than with our Try ASP.NET Core and Forging Ahead with ASP.NET Core courses? Since there's no such thing as an ASP.NET Core veteran yet, you have a chance to compete in a market that is both fresh and new, and stable and established. And maybe you can even try your hand at making it faster yourself.