Understanding and Avoiding Race Conditions in Multithreaded C# Applications

Nate Cook
  • Oct 29, 2018
  • 7 Min read

C#
Applications

When to Worry About Race Conditions

In modern applications, it is common to have more than one sequence of instructions executing at any given moment. These sequences of instructions are known as threads. All but the simplest of applications have multiple threads, so it's important to understand what can happen in a multithreaded application at run-time.

In some cases, a developer may only need to worry about a single thread, even though the application is multithreaded. For example, in .NET, garbage collection happens on a separate thread, but the developer rarely needs to give that fact much consideration. It is quite common, however, for developers to initiate their own threads to perform some work "in the background", as it were. It is in these cases that race conditions most often appear.

As you might have guessed, a race condition is not something a developer codes or explicitly permits. Rather it is something that can happen in a multithreaded application that does not have proper safeguards. Most commonly, preventing race conditions requires synchronizing access to data that occurs from multiple threads.

The Case for Synchronizing Access to Data

To understand the need for data synchronization, let's look at an example: Say you are writing a web crawler console application that downloads the HTML for a particular URL and writes the links (e.g. <a href="/path/to...) that it finds to a file (e.g. links.txt). In true web crawler style, the application then downloads the HTML for each of those links, and continues recursively until some limit is reached, or until the HTML for all links has been retrieved and processed.

To do so synchronously would be quite slow because the application would have to wait for the HTML of one link to finish downloading before it even starts the request for the next one. So to speed things up, you decide to do it asynchronously by utilizing a separate thread for each link request. A simple implementation of such a web crawler might look like the following:

const int MaxLinks = 8000;
const int MaxThreadCount = 10;
string[] links;
int iteration = 0;

// Start with a single URL (a Wikipedia page, in this example).
AddLinksForUrl("https://en.wikipedia.org/wiki/Web_crawler");

while ((links = File.ReadAllLines("links.txt")).Length < MaxLinks)
{
  int offset = (iteration * MaxThreadCount);

  var tasks = new List<Task>();
  for (int i = 0; i < MaxThreadCount && (offset + i) < links.Length; i++)
  {
    // Copy the index into a local so each task captures its own value;
    // otherwise every lambda would share (and race on) the same loop variable.
    int index = offset + i;
    tasks.Add(Task.Run(() => AddLinksForUrl(links[index])));
  }
  Task.WaitAll(tasks.ToArray());

  iteration++;
}

Where AddLinksForUrl looks something like:

static void AddLinksForUrl(string url)
{
  string html = /* retrieve the html for said url */ ;
  List<string> links = /* extract the links from the html */ ;

  using (var fileStream = new FileStream("links.txt",
         FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None))
  {
    List<string> existingLinks = /* read the file contents */ ;
    foreach (var link in links.Except(existingLinks))
    {
      fileStream.Write(/* the link URL, as bytes, plus a new line */);
    }
  }
}

The key point to note in the main algorithm is that each call to Task.Run dispatches work to a separate thread (drawn from the .NET thread pool). Since we defined a MaxThreadCount of ten, up to ten threads run concurrently, and Task.WaitAll waits until the work on all of those threads has completed. After that, a new batch of tasks is started in the next iteration of the while loop.

Fully implemented, this web crawler may actually work fine. But if you run it enough times, you'll eventually get an IOException. Why is that?

System.IO.IOException: The process cannot access the file '/path/to/links.txt' because it is being used by another process.

Notice in AddLinksForUrl that we use FileShare.None to obtain exclusive access to links.txt. And rightly so, since multiple processes writing to the same file simultaneously can cause problems, including data corruption. Depending on when the web servers respond and how long each download takes, from time to time our web crawler will have more than one thread attempting to open links.txt at exactly the same time. We therefore need to synchronize access to the links.txt file so that it never occurs from more than one thread simultaneously. Such synchronization is needed for any data shared between threads.

A Naive Approach to Data Access Synchronization

Consider for a moment the most straightforward attempt at synchronizing access to shared data—a boolean flag. We could simply set a flag to true when we open the file, set it to false when we're done, and check the flag before we attempt to open the file. That ought to do the trick, right?

static bool fileIsInUse;

static void AddLinksForUrl(string url)
{
  ...

  while (fileIsInUse)
  {
    System.Threading.Thread.Sleep(50);
  }

  try
  {
    fileIsInUse = true;

    using (FileStream fileStream = new FileStream("links.txt"...))
    {
      ...
    }
  }
  finally
  {
    fileIsInUse = false;
  }
}

Actually yes, that approach may synchronize access to the links file to a certain extent. But run it enough times and eventually you will get another IOException. Essentially, the same problem still exists, but why?

Remember that we have multiple threads executing the code in AddLinksForUrl simultaneously. The mistake we are making with the naive approach is that we are not guaranteeing that only a single thread sets the fileIsInUse flag to true at a time. So, in the moment that fileIsInUse is set to false in the finally block, multiple threads may be waiting in the while loop above. If more than one thread breaks out of the while loop at the same (or almost the same) time while fileIsInUse is false, they will all enter the try block, and they will all think they have exclusive access to the file. In that situation, the IOException will occur. Such an anomaly is an example of a race condition.

Race conditions can be especially insidious because the compiler translates a single C# statement into multiple machine-level instructions. That means that what appear to be back-to-back lines of code in C# may actually be separated by quite a few instructions in the corresponding machine code, and the operating system may switch between threads at any point within that sequence. The actual order of execution across threads at run-time may not match what we intended if we do not guard critical sections of our code. In short, when the order matters, we can't leave it to chance. And for shared data, any time a thread needs exclusive access, we must guarantee that exclusive access.
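To make this non-atomicity concrete, consider a small stand-alone sketch (not part of the crawler; the names here are illustrative). The single statement count++ compiles to a read, an increment, and a write, so concurrent increments can be lost, while Interlocked.Increment performs the same update as one atomic operation:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CounterDemo
{
    static int unsafeCount = 0;
    static int safeCount = 0;

    static void Main()
    {
        var tasks = new Task[4];
        for (int t = 0; t < tasks.Length; t++)
        {
            tasks[t] = Task.Run(() =>
            {
                for (int i = 0; i < 100_000; i++)
                {
                    unsafeCount++;                        // read, add, write: three steps that can interleave
                    Interlocked.Increment(ref safeCount); // a single atomic operation
                }
            });
        }
        Task.WaitAll(tasks);

        Console.WriteLine($"unsafe: {unsafeCount}, safe: {safeCount}");
    }
}
```

safeCount always ends at 400,000, while unsafeCount frequently comes up short, because two threads occasionally read the same value before either writes back its increment.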

The last thing we learn from the failure of the naive approach is that "shared data" in the context of multiple threads does not only refer to files. No, in fact it refers to anything shared across threads, which includes variables—be they value types such as the boolean in the example above, or reference types.

The Correct Way to Synchronize Access

Now that we know we need to guarantee synchronous access to shared data in multithreaded applications (in order to, among other things, avoid race conditions), how do we actually accomplish that? Well, C# and the .NET Framework provide a number of ways to do that, but one of the easiest and most common ways is to use the lock statement.
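For instance, AddLinksForUrl could be adapted to use lock as follows. This is a sketch using the same placeholder comments as the earlier listing; fileLock is a name introduced here for illustration:

```csharp
static readonly object fileLock = new object();

static void AddLinksForUrl(string url)
{
  string html = /* retrieve the html for said url */ ;
  List<string> links = /* extract the links from the html */ ;

  // Only one thread at a time can hold fileLock; every other thread
  // blocks here until the current holder exits the lock block.
  lock (fileLock)
  {
    using (var fileStream = new FileStream("links.txt",
           FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None))
    {
      /* read the existing links and append the new ones, as before */
    }
  }
}
```

Note that the lock target is a private, readonly object created for this purpose; locking on this or on a publicly accessible object is discouraged, because unrelated code could lock on the same instance and cause deadlocks.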
