
Magnus Rahl
Jan 20, 2012

Async Pages part 1: How async pages may save your (server’s) life

My account is fixed and I’m back in style with EPiServer World CRP Oracle Status! I’m celebrating by publishing a three-part series about async pages that I haven’t been able to publish while my account was ill.

Intro

If you only do EPiServer projects, and all of them are vanilla sites without integrations with or calls to any external systems, you can probably stop reading now. EPiServer only calls its own database, and after the site is warmed up and cached such database calls are fairly uncommon and don’t impact performance much.

If you do, on the other hand, use any kind of custom database, call web services, make HTTP requests or do any other kind of processing that isn’t “CPU-crunching-only” (perhaps even then), you might want to look at asynchronous page processing. If you haven’t already.

If you want the background and the whys, read the following section. If you just want the goodies, skip to the Solution section. (And then probably go back to understand why those are actually goodies.)

Theory and empirical evidence (AKA problems)

I have seen many examples of this kind of asynchronous processing, but the pattern always looked ugly and cumbersome to code. And my sites worked just fine anyway. Until now. Or did they?

Sudden awful performance

The case at hand: suddenly users experience spikes in load times, sometimes up to 60 seconds. CPU load on the web servers and database servers at the same time is not very high at all. So what’s going on?

Since we have access to nice tracing tools logging activities in the production environment, we could soon find some requests running for very long times. They were doing web requests which eventually timed out. As you may know, the default timeout of an HttpWebRequest in .NET is 100 seconds. Luckily we were using a 3-second timeout, but that was not enough.

The IIS Pipeline

To help understand why, here’s a recap of how ASP.NET and IIS work when delivering pages.

  • IIS starts processing the request on a thread.
  • IIS hands over the request to ASP.NET which hands it over to the ThreadPool.
  • The IIS thread is freed and can handle another request, for example for a static file which it handles itself without the help of ASP.NET.
  • A worker thread becomes free and picks up the request from the ThreadPool.
  • The worker thread checks to see how many requests are currently processing. If the number is higher than a certain threshold it puts the request on hold. Otherwise it starts processing it.
  • Once a request starts processing it runs, in the case of a Page, through the page life cycle to the PreRender stage. If the page is synchronous (the default) it simply continues all the way through on the same thread, and the response is delivered back to the client.

The above holds for IIS7, things are slightly different in IIS6.

ASP.NET can handle a very large number of requests this way on modern hardware, provided each request doesn’t take too long to complete. But synchronous requests will of course block their thread even when they are not doing any actual processing, such as when waiting for an external resource like a web request. That’s what happened in our case.
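The effect is easy to reproduce outside ASP.NET. Here is a minimal Python sketch where a small thread pool stands in for ASP.NET’s worker threads and `time.sleep` stands in for a blocking external call: the threads sit idle-but-occupied, so throughput is capped by the pool size, not by CPU.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    # Simulated synchronous call to an external service: the worker
    # thread does no CPU work, but it is blocked for the full wait.
    time.sleep(0.2)
    return "done"

# A small worker pool, standing in for ASP.NET's request threads.
pool = ThreadPoolExecutor(max_workers=4)

start = time.perf_counter()
results = list(pool.map(handle_request, range(8)))
elapsed = time.perf_counter() - start

# 8 requests on 4 threads, each blocking 0.2 s: two "waves",
# so roughly 0.4 s total even though the CPU was idle throughout.
print(f"{elapsed:.1f}s for {len(results)} requests")
```

Doubling the request count doubles the elapsed time, while CPU usage stays near zero; exactly the symptom described above.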

Thread limit in ASP.NET

As you may know, the ThreadPool can basically create any number of threads, and will create new threads when there is queued work and the existing threads are busy, within certain limits (because each thread uses memory, and each switch between threads, AKA context switching, costs CPU).

But ASP.NET enforces its own threshold, as mentioned above. This threshold is set not as a number of threads but as a number of concurrent requests, which is of course equivalent as long as the requests are synchronous.

In .NET 3.5 this threshold is only 12 requests per CPU. So if your synchronous requests take one second each to complete, ASP.NET can only handle 12 requests per second per CPU, which isn’t a lot. And one second can be a very real delay if you consume external services, especially if those services are down and your processing has to wait for the timeout before completing. Any other incoming requests are queued.
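The arithmetic behind that claim is Little’s law: concurrency = throughput × latency, so the throughput ceiling is the concurrency limit divided by the average request time. A trivial sketch:

```python
def max_throughput(concurrent_limit, avg_request_seconds):
    # Little's law: concurrency = throughput * latency, so the ceiling
    # on requests/second is the concurrency limit divided by latency.
    return concurrent_limit / avg_request_seconds

# .NET 3.5 default: 12 concurrent requests per CPU.
print(max_throughput(12, 1.0))  # 12.0 req/s per CPU at 1 s per request
print(max_throughput(12, 3.0))  # 4.0 req/s per CPU if every call waits out a 3 s timeout
```

Note how a dead external service that forces every request to wait out its timeout drags the whole site down to a few requests per second per CPU.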

In our case matters were even worse. The web requests fetched RSS feeds, and it turned out many of those feeds had been set by editors to URLs on the same site. See the problem there? Yup, that’s right: requests can basically block themselves in a kind of deadlock if the queue fills up. One request sits in the pipeline waiting for the requests that sit behind it in the queue. This is what made response times go tectonic.
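The self-inflicted deadlock can also be sketched with a thread pool. Assume a pool so saturated that it behaves like a single worker: the “page” submits a request back to the same pool and then blocks waiting for it, but that inner request can never run because the only worker is the one waiting. (The `timeout=1` is just there to keep the demo from hanging forever.)

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def serve_page(pool):
    # The "page" fetches an RSS feed from the same site; serving that
    # inner request needs a worker thread from the same pool.
    inner = pool.submit(lambda: "<rss/>")
    # Every worker is already busy, so the inner request queues behind
    # the very request that is waiting for it.
    return inner.result(timeout=1)

pool = ThreadPoolExecutor(max_workers=1)  # a fully saturated pool
outcome = "completed"
try:
    pool.submit(serve_page, pool).result()
except TimeoutError:
    outcome = "deadlocked"
print(outcome)  # prints "deadlocked"
```

With `max_workers=2` the same code completes instantly, which is exactly why the problem only surfaces under load, when the queue fills up.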

Solution

Configure ASP.NET concurrency

So how do you solve this? One way is to set maxConcurrentRequestsPerCPU in aspnet.config to a higher value, effectively allowing more requests to be processed concurrently. Or move to .NET 4, which raises this default to 5000.
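For reference, this setting lives in the machine-wide aspnet.config file (in the .NET Framework directory), not in your site’s web.config, and applies to IIS 7 integrated mode. A sketch of the relevant fragment (the requestQueueLimit value here is just an illustrative choice):

```xml
<!-- aspnet.config in %windir%\Microsoft.NET\Framework\v2.0.50727 (or v4.x) -->
<configuration>
  <system.web>
    <applicationPool
        maxConcurrentRequestsPerCPU="5000"
        requestQueueLimit="5000" />
  </system.web>
</configuration>
```
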

Make your Pages asynchronous

But increasing the number of concurrent synchronous requests increases overhead: the only way to raise throughput that way is to add threads, and as we know, thread creation and switching aren’t free performance-wise, in terms of either CPU or memory. So the original problem is still basically there.
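What asynchronous processing buys you is that the waiting happens without holding a thread at all. As a rough analogue of an async page (sketched with Python’s asyncio rather than ASP.NET, since the principle is the same): fifty simulated external calls of 0.2 s each complete in roughly 0.2 s total on a single thread, because the waits overlap instead of each one occupying a worker.

```python
import asyncio
import time

async def handle_request(_):
    # Simulated external call: while we await, no thread is blocked,
    # so one event-loop thread can serve many waiting requests.
    await asyncio.sleep(0.2)
    return "done"

async def serve_all(n):
    # Start all n requests and let their waits overlap.
    return await asyncio.gather(*(handle_request(i) for i in range(n)))

start = time.perf_counter()
results = asyncio.run(serve_all(50))
elapsed = time.perf_counter() - start

# 50 requests, each "waiting" 0.2 s, finish in roughly 0.2 s total.
print(f"{elapsed:.1f}s for {len(results)} requests")
```

Compare that with the synchronous sketch earlier: same amount of waiting, but the concurrency limit no longer caps throughput.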

To learn how to write asynchronous Pages, move on to the next part: How to use asynchrony in your Pages
