It was all going so smoothly. Jason Whittington, Mark Smith and I were teaching the big DevelopMentor event here in Los Angeles (Guerrilla.NET) when my presentation on the ThreadPool took a nose dive. It started with a great joke involving Wilson (the volleyball from Cast Away).
Wilson and I built an application to compute a multiplication table where each computation was (artificially) slow. To speed it up we threw it at the thread pool using delegate.BeginInvoke. We figured that the ThreadPool would allocate 25 or so threads and the table would display quickly. Here’s the expected output – pretty much the same thing we’ve seen since about .NET 1.0:
Each color represents the thread that did that computation.
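The pattern we used boils down to this sketch (the delegate and method names here are mine, not the original demo's; the Sleep stands in for each artificially slow cell computation):

```csharp
using System;
using System.Threading;

class Program
{
    // Delegate matching the signature of one table-cell computation.
    public delegate int ComputeDelegate(int row, int col);

    // Artificially slow multiply, standing in for each cell of the table.
    public static int SlowMultiply(int row, int col)
    {
        Thread.Sleep(100); // simulate expensive work
        return row * col;
    }

    static void Main()
    {
        ComputeDelegate compute = new ComputeDelegate(SlowMultiply);

        // BeginInvoke queues the call onto a ThreadPool thread and
        // returns immediately with an IAsyncResult handle.
        IAsyncResult ar = compute.BeginInvoke(3, 4, null, null);

        // EndInvoke blocks until the pool thread finishes, then
        // returns the computation's result.
        int result = compute.EndInvoke(ar);
        Console.WriteLine(result); // 12
    }
}
```

The real demo fires one BeginInvoke per cell, so dozens of work items pile up in the pool at once, which is what forces the pool to start injecting extra threads.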
For the last seven years, the behavior has been that when the ThreadPool is overloaded, it steadily starts new threads at a rate of one every 500 milliseconds until it (typically) hits its upper limit. Using Performance Monitor (perfmon) we can watch the thread pool adding threads. It usually looks something like this:
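If you'd rather watch from inside the process than in perfmon, the ThreadPool class can report the same numbers. A rough sketch (the sampling loop and interval are mine):

```csharp
using System;
using System.Threading;

class PoolWatcher
{
    static void Main()
    {
        int maxWorker, maxIo;
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        Console.WriteLine("max worker threads: " + maxWorker);

        for (int i = 0; i < 10; i++)
        {
            int availWorker, availIo;
            ThreadPool.GetAvailableThreads(out availWorker, out availIo);

            // Busy worker threads = max - available; this mirrors
            // what the perfmon thread counter shows climbing.
            Console.WriteLine("busy worker threads: " + (maxWorker - availWorker));
            Thread.Sleep(500);
        }
    }
}
```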
Much to our surprise, we saw completely different behavior. The thread pool added the first 15 or so threads quickly (as expected) but then stalled. New threads were not created every 500ms; instead, they were added at increasingly long intervals. My demo took almost twice as long to run as it had the last time I gave it, a few months ago.
Jason, Mark, and I took this code, ported it back to .NET 1.1 and ran it side-by-side with .NET 3.5 and here’s what we saw (blue = 3.5, red = 1.1):
As of .NET 3.5, the upper limit of the ThreadPool was increased to 250 worker threads per CPU. Knew that.
But it appears that v3.5 of the CLR also changes the policy for adding threads to the thread pool. Rather than adding threads at a fixed rate when under load, the thread pool backs off: the interval between new threads grows roughly exponentially, so the thread count climbs only logarithmically over time. My colleague Jason Whittington remarked that this looked similar to the behavior of the thread pool in "Rotor" (the shared-source version of the CLR). We speculated that this backoff makes sense given the new 250-thread-per-CPU maximum: the higher limit makes it less likely that your application will deadlock the thread pool, and the backoff keeps the pool from creating too many threads too quickly on the way there.
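We don't know the CLR's actual heuristic, but a purely speculative model of what we measured — the injection delay growing instead of staying fixed at 500ms — might look like this (all numbers are illustrative, not the CLR's):

```csharp
using System;

class BackoffModel
{
    static void Main()
    {
        // Hypothetical model: once the first ~15 threads are up, the
        // delay before injecting each additional thread doubles,
        // rather than holding at a flat 500ms.
        double totalMs = 0;
        double delayMs = 500;
        for (int thread = 16; thread <= 25; thread++)
        {
            totalMs += delayMs;
            Console.WriteLine("thread " + thread + " injected at ~" + totalMs + "ms");
            delayMs *= 2;
        }
    }
}
```

Under a model like this, reaching thread 25 takes minutes instead of seconds, which matches the stall we saw in the demo; under the old flat-rate policy the same ten threads would have arrived in five seconds.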
Why should you care? Usually you won't, but it can have a dramatic impact if you count on the old behavior. In our contrived case, .NET 1.1 ran about twice as fast as .NET 3.5 (!):
Here’s the program and source code (trimmed down to run in both .NET 1.1 and 3.5).
Math.zip (6.38 KB)
Needless to say, Wilson and I felt kinda stood up.