I can remember writing code in C on PCs years ago, and when I got an out of memory error I just blindly accepted that it meant there was literally no more memory for me to use. I realize now that this wasn’t really the case. It actually meant “hey, I don’t have that much memory left that’s all contiguous.” Looking back, allocating and deallocating memory had made the memory in the computer look like swiss cheese – I was using some memory locations and not others. At some point, when I asked for a piece of memory that was larger than any of the holes I had, the whole thing came crashing down.
Today, in an era of virtual memory operating systems where quite literally our hard disk appears to be memory, it would seem that this wouldn’t be a problem. Sure, we would waste some amount of memory with gaps where we’ve deallocated objects, but surely if we’ve got a hard disk to fill we won’t run out of memory – right? Well, yes, but we run into another problem. Whether you have 512 MB of memory or 8 GB, when you’re running a 32 bit operating system no one process is going to get more than 2GB of address space. (I’m purposefully ignoring PAE/AWE since it’s a whole different ball game that doesn’t apply to .NET developers.) How does that work? Well, the addressing of memory is still 32 bits, and 32 bits gets you access to 4GB of addresses. The operating system lops off the top 2GB for its own use (basically, if the topmost bit of the 32 bit address is a one, the address belongs to the system), so any given process has access to 2GB of its own memory. (There’s a switch to change where this split occurs so that the process gets 3GB and the operating system gets 1GB, but it’s not supported in every situation.) So even if we have several TB of storage, any one 32 bit process can only see 2GB of address space.
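A quick sketch of that arithmetic (the class and method names here are mine, just for illustration): 32 bits of address gets you 4GB, the topmost bit decides who owns the address, and half of the space is left for the process.

```csharp
using System;

public class AddressSpaceDemo
{
    // In the classic 32-bit split, an address with the topmost bit set
    // belongs to the system; everything below 0x80000000 is user space.
    public static bool IsKernelAddress(uint address) => (address & 0x80000000u) != 0;

    static void Main()
    {
        ulong totalAddressSpace = 1UL << 32;     // 4GB of addresses
        ulong userSpace = totalAddressSpace / 2; // 2GB left for the process

        Console.WriteLine($"Total addresses: {totalAddressSpace / (1024 * 1024)} MB");
        Console.WriteLine($"User space:      {userSpace / (1024 * 1024)} MB");
        Console.WriteLine(IsKernelAddress(0x7FFFFFFFu)); // last user-mode address
        Console.WriteLine(IsKernelAddress(0x80000000u)); // first system address
    }
}
```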
How does this all impact .NET getting an OutOfMemoryException? Well, the first problem that I used to have with C programs .NET tries to help solve. The garbage collector (GC), in addition to getting rid of objects that are no longer referenced, will pull together objects that are in memory but spaced out. The easiest way for me to think of this is any presentation that you’ve been in. When the room fills up it fills up relatively randomly, but there are generally spaces between people. (Memory actually fills up mostly sequentially but the spaces remain.) When the room starts to get full, the presenter will eventually ask everyone to squeeze together to eliminate all of the spaces. This is what the garbage collector is doing; it’s squeezing the objects together. (For those of you wondering how, double dereferencing of pointers is the key here.)
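You can roughly watch the squeeze happen from managed code. This is a minimal sketch, not a faithful model of the collector – it allocates a batch of arrays, drops the references, and forces a full collection; the exact numbers will vary from run to run.

```csharp
using System;

class GcSqueezeDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(forceFullCollection: true);

        // Fill the "room": a batch of short-lived byte arrays.
        var garbage = new byte[1000][];
        for (int i = 0; i < garbage.Length; i++)
            garbage[i] = new byte[10_000];

        long during = GC.GetTotalMemory(false);
        garbage = null; // drop the only references

        // Ask everyone to squeeze together: collect and compact.
        long after = GC.GetTotalMemory(forceFullCollection: true);

        Console.WriteLine($"before={before}, during={during}, after={after}");
        // 'after' typically falls back close to 'before' once the
        // unreferenced arrays are swept and the heap is compacted.
    }
}
```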
However, just like in a presentation there are occasionally stubborn people who refuse to move. In the garbage collector’s case it’s that there are a set of objects that are pinned in memory – they can’t be moved. Why would that be? Well, generally it’s because something outside of .NET is holding a reference to that location. It knows that some memory buffer, resource handle, or something exists in that location. Since the garbage collector can’t tell something outside of .NET that the object has moved, the object stays in its spot. The net effect is that you get a little bit of space around those objects which remains unusable.
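You can pin an object deliberately from managed code with GCHandle – here’s a minimal sketch (the buffer is just an example). While the pinned handle is alive, the GC won’t relocate the array, which is exactly what makes these objects the “stubborn people” in the room.

```csharp
using System;
using System.Runtime.InteropServices;

class PinningDemo
{
    static void Main()
    {
        byte[] buffer = new byte[4096];

        // Pin the buffer so native code could safely hold its address.
        // Until the handle is freed, the GC cannot move this object,
        // leaving a hole around it when its neighbors get compacted.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            Console.WriteLine($"Pinned at 0x{address.ToInt64():X}");
        }
        finally
        {
            handle.Free(); // unpin as soon as possible
        }
    }
}
```

Keeping the pinned window short (as the try/finally does here) limits how much fragmentation a pin can cause.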
If .NET’s GC is cleaning up memory for us, shouldn’t we always stay underneath the 2GB memory limit of a 32 bit process? Well, maybe yes and maybe no. There are lots of other issues with the garbage collector and how well (or poorly) it works that I won’t go into. For our discussion we need to know that the garbage collector is getting rid of objects and pushing objects together. We also need to know that this isn’t an instantaneous process. For objects without a finalizer (also known as a destructor), as soon as there are no more references to the object it can be discarded by the garbage collector. However, if an object has a finalizer, the finalizer has to be run first – and there’s only a single thread per process that runs finalization. So even if an object is ready to be finalized, and thus ready to give up its memory, it may not get its turn quickly enough, as Tess Ferrandez explains. The problem is that until the objects are finalized they can’t be removed, and if they can’t be removed memory usage will keep creeping up. By the way, implementing the IDisposable interface and calling GC.SuppressFinalize() in the Dispose() method can remove the need for the finalizer to run on your objects (even if one is defined) and therefore allow the GC to free your object from memory sooner. (Unfortunately, I’ve been in situations where even this wasn’t enough because the GC itself wasn’t running fast enough – but that was in .NET 1.1 – the algorithms are much better now.)
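Here’s a minimal sketch of that dispose pattern (NativeResource and its members are hypothetical names, and IsDisposed is only there for illustration). Calling Dispose() does the cleanup immediately and, via GC.SuppressFinalize(), tells the GC the finalizer no longer needs to run, so the object doesn’t have to wait in line for the finalizer thread.

```csharp
using System;

class NativeResource : IDisposable
{
    bool disposed;

    public bool IsDisposed => disposed; // exposed just so we can observe the state

    public void Dispose()
    {
        Dispose(disposing: true);
        // We already cleaned up, so skip the finalizer queue entirely.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources here
        disposed = true;
    }

    // Safety net only: runs on the single finalizer thread
    // if the caller forgot to call Dispose().
    ~NativeResource() => Dispose(disposing: false);
}
```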
It’s also important to know that the .NET object heaps aren’t the only thing consuming memory in the process. There are still a ton of COM objects that get used by .NET for the actual work. These COM objects allocate their own memory on their own heaps, outside of .NET’s visibility and control, and they don’t always take up memory that you’re going to see in task manager. Task manager shows, by default, the private working set – that is, the bytes that the process is actively working on (please excuse my gross oversimplification of this). The real number to watch is virtual bytes – which you have to get from the Process object in performance monitor. This counter tells you how much memory has been allocated from the system by the process – in other words, how many addresses have been used up by the process. When this number reaches 2GB and there’s another request for memory in .NET that can’t fit into an existing allocation from the operating system – you get an OutOfMemoryException in .NET. (If you want to know much more about memory allocation in Windows I recommend the book Microsoft Windows Internals. I’ve never had a memory question it couldn’t answer.)
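You don’t even need to open performance monitor to see the difference between these numbers – System.Diagnostics exposes them for your own process. A small sketch:

```csharp
using System;
using System.Diagnostics;

class MemoryCountersDemo
{
    static void Main()
    {
        using (Process me = Process.GetCurrentProcess())
        {
            // Roughly what task manager shows by default.
            Console.WriteLine($"Working set:   {me.WorkingSet64 / (1024 * 1024)} MB");
            Console.WriteLine($"Private bytes: {me.PrivateMemorySize64 / (1024 * 1024)} MB");

            // The number that actually hits the 2GB wall: every
            // address the process has reserved or committed.
            Console.WriteLine($"Virtual bytes: {me.VirtualMemorySize64 / (1024 * 1024)} MB");
        }
    }
}
```

On a 32 bit process, it’s the last number approaching 2GB that predicts the OutOfMemoryException, even when the first two still look comfortable.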
As a sidebar, the working set for W3WP (the IIS worker process) tends to get to between 800 and 900MB before it throws an out of memory exception. I’ve seen processes (with very good memory allocation) drive W3WP to over 1.3GB before it finally gave up and threw an out of memory exception. Based on what you’ve seen above, what does this mean? It means that the working set for the IIS worker process represents roughly half of the virtual bytes that are allocated.
You may have 8GB of physical RAM available and 1TB of disk space. You may actually have tons of space in memory that’s become fragmented because of references to COM objects and other non-.NET code whose memory allocations can’t be moved. However, if you reach 2GB of allocations (without the /3GB switch to give the process 3GB of address space) and ask for one more thing – it’s game over.
So what do you do if you have this situation happen to you? There are a few things I’d recommend:
- If you’re using an object that implements IDisposable and you’re not calling the .Dispose() method – do it.
- If you’re using a disposable object and not disposing of it via a using () { } block or a try { } finally { } block – do it. If an exception is thrown inside a method that doesn’t use one of these two techniques, the .Dispose() method won’t get called; with either of them, .Dispose() will still get called even when an exception occurs.
- If you open any kind of resource, make sure you close it. Whether it’s a TCP/IP port, a SQL connection, or anything else, the underlying handles live outside the GC’s control and rely on a finalizer to get cleaned up … try to take care of that yourself.
- Get a dump of your process and look at what objects are in memory. You can do this without a ton of knowledge about debuggers. (Production Debugging for .NET Framework Applications will get you at least this far.)
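To make the first two bullets concrete, here’s a sketch of both techniques with a StreamWriter (the file names are arbitrary). The using block compiles down to the same try/finally shown below it – either way, Dispose() runs even if an exception is thrown mid-block.

```csharp
using System;
using System.IO;

class DisposeCorrectlyDemo
{
    static void Main()
    {
        // Preferred: 'using' guarantees Dispose() at the closing brace,
        // exception or not.
        using (var writer = new StreamWriter("log.txt"))
        {
            writer.WriteLine("hello");
        } // writer.Dispose() called here

        // Equivalent by hand: the finally block guarantees the cleanup.
        StreamWriter writer2 = null;
        try
        {
            writer2 = new StreamWriter("log2.txt");
            writer2.WriteLine("hello again");
        }
        finally
        {
            if (writer2 != null)
                writer2.Dispose();
        }
    }
}
```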