Perfmon & .NET
In this post we will talk about the Performance Monitor (a.k.a. PerfMon) that comes with all recent Windows versions (including Windows 2000, Windows XP and Windows 2003).
You can find PerfMon under Start Menu -> Control Panel -> Administrative Tools -> Performance.
Although it is not a debugger, some of the issues we have previously discussed, such as memory issues, a blocked finalizer thread and deadlocks, have many manifestations that can be easily monitored and found using PerfMon.
In fact, PerfMon can provide a first means of identifying such problems, as well as a “second opinion” that can endorse a certain diagnosis.
I will go through some of the most interesting Performance Objects and list the most interesting counters that belong to each. Afterwards, for each common problem, I will show the counters that can indicate it.
The Counters
.NET CLR Exceptions:
# of Exceptions Thrown / sec - Counts the number of managed exceptions and unmanaged exceptions that were translated into managed exceptions, such as a null pointer reference that translates into a System.NullReferenceException.
This counter is good for identifying performance problems caused by too many exceptions being thrown during the course of the application. This usually indicates a more serious implementation problem, such as using exceptions as a means of handling normal program flow.
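As a simple illustration (a hypothetical sketch, not code from any specific application), a parsing routine that swallows exceptions on every bad input will drive this counter up, while the TryParse variant signals failure without throwing at all:

    using System;

    class ParsingSamples
    {
        // Anti-pattern: using exceptions to handle normal program flow.
        // Every piece of bad input raises # of Exceptions Thrown / sec.
        static int ParseOrZero(string input)
        {
            try
            {
                return int.Parse(input);
            }
            catch (FormatException)
            {
                return 0;
            }
        }

        // Better: TryParse reports failure through its return value.
        static int ParseOrZeroFast(string input)
        {
            int value;
            return int.TryParse(input, out value) ? value : 0;
        }
    }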
.NET CLR Interop:
# of CCWs - The number of COM Callable Wrappers (CCWs) currently alive. A CCW is a proxy that wraps a .NET object referenced from unmanaged COM code.
If you see memory growing (or not getting freed when it should have) and this counter increasing, it might suggest that unmanaged code is holding on to some of your managed objects, and that this is why memory is not being freed or keeps increasing.
.NET CLR LocksAndThreads:
Contention Rate / sec - The rate at which threads in the runtime unsuccessfully attempt to acquire a managed lock.
Managed locks are acquired using either the “lock” statement in C# or System.Threading.Monitor.Enter.
If this number keeps increasing it means we have a bottleneck in our code: an area that is synchronized, so only one thread at a time can enter it, but that is being “hammered” by multiple threads all trying to get into that piece of code.
We need to find that piece of code and see how we can avoid this situation in order to resolve the bottleneck.
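As a small illustration (hypothetical code, not taken from any real application), a single coarse lock around a shared counter that many threads hammer will show up in Contention Rate / sec; for something this simple, an interlocked operation avoids the managed lock entirely:

    using System.Threading;

    class Counters
    {
        private static readonly object _sync = new object();
        private static int _hits;

        // Many threads calling this will contend on _sync,
        // which shows up in Contention Rate / sec.
        public static void RecordHitLocked()
        {
            lock (_sync)
            {
                _hits++;
            }
        }

        // For a plain counter, Interlocked.Increment does the same job
        // without taking a managed lock at all.
        public static void RecordHitInterlocked()
        {
            Interlocked.Increment(ref _hits);
        }
    }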
Current Queue Length - This counter displays the total number of threads currently waiting to acquire a managed lock in the application. It only shows the last observed value.
This counter is similar to Contention Rate / sec, but it shows the number of threads waiting to acquire a managed lock at a given point in time, not just those that failed to acquire one. It will also outline possible bottlenecks in code that is accessed by multiple threads many times.
If the Current Queue Length value is almost equal to Threads Count and % Processor Time stays at a fixed value, it might also indicate that we have a CPU spin issue.
If % Processor Time is 0 (or almost 0) and the application is not responding, it means that we have a deadlock.
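The classic way to end up in that state is two threads taking two locks in opposite order. A minimal, hypothetical sketch (the names Worker1/Worker2 are made up for the example):

    using System.Threading;

    class DeadlockDemo
    {
        private static readonly object _lockA = new object();
        private static readonly object _lockB = new object();

        // Thread 1 runs this...
        static void Worker1()
        {
            lock (_lockA)
            {
                Thread.Sleep(100);   // widen the race window for the demo
                lock (_lockB) { }    // waits forever if Worker2 already holds _lockB
            }
        }

        // ...while thread 2 runs this. Each thread holds one lock and waits
        // for the other: % Processor Time drops to ~0 and Current Queue
        // Length stays stuck at the same value.
        static void Worker2()
        {
            lock (_lockB)
            {
                Thread.Sleep(100);
                lock (_lockA) { }
            }
        }
    }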
.NET CLR Memory:
# Bytes in all heaps - This counter is the sum of four other counters: Gen 0 Heap Size, Gen 1 Heap Size, Gen 2 Heap Size and Large Object Heap Size. It displays the current memory allocated in bytes on the GC (managed) heaps.
If this counter keeps on rising it indicates that we have a managed memory leak: some managed objects are always being referenced and are never collected.
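One very common way to keep objects reachable forever (a hypothetical example, the class names are made up) is subscribing to a static or otherwise long-lived event and never unsubscribing; the publisher keeps referencing every subscriber, so # Bytes in all Heaps keeps climbing:

    using System;

    static class AppEvents
    {
        // A static event lives for the whole process lifetime.
        public static event EventHandler SomethingHappened;

        public static void Raise()
        {
            if (SomethingHappened != null)
                SomethingHappened(null, EventArgs.Empty);
        }
    }

    class View
    {
        private readonly byte[] _buffer = new byte[100000];

        public View()
        {
            // The static event now holds a reference to this View (and its
            // buffer). Unless we unsubscribe, the GC can never collect it -
            // a classic managed "leak".
            AppEvents.SomethingHappened += OnSomethingHappened;
        }

        private void OnSomethingHappened(object sender, EventArgs e) { }
    }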
# GC Handles - Displays the current number of GC handles in use. GC handles are handles to resources external to the CLR and the managed environment. They occupy small amounts of memory on the GC heap, but potentially expensive unmanaged resources hide behind them.
If this counter keeps growing, together with Private Bytes (we will talk about this counter later on), it means that we are probably referencing unmanaged resources and not releasing them, causing a memory leak.
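A typical way this happens (hypothetical sketch) is allocating a GCHandle to root an object for unmanaged code and never freeing it:

    using System.Runtime.InteropServices;

    class HandleLeak
    {
        // Allocating a GCHandle roots the target until Free() is called.
        // Forgetting the Free keeps # GC Handles climbing and keeps the
        // target (and everything it references) alive.
        public static GCHandle RootIt(object target)
        {
            GCHandle handle = GCHandle.Alloc(target);
            // ... pass GCHandle.ToIntPtr(handle) to unmanaged code ...
            return handle;   // the caller must eventually call handle.Free()
        }
    }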
# Induced GC - Indicates the number of times GC.Collect was explicitly called.
Calling GC.Collect is not a good practice in production code. It is usually useful for finding memory leaks while debugging or developing (good material for another method of finding memory leaks, which we will talk about in a later post), but you should NEVER call it in production code; let the GC tune itself.
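For completeness, this is roughly what such a debugging-only snippet looks like (a hypothetical test-harness helper, never to be shipped): forcing a full collection before taking a measurement or a dump ensures that whatever is left on the heap is genuinely still referenced.

    using System;

    class LeakHuntingHelpers
    {
        static void CollectAndReport()
        {
            // Debug/diagnostics only - do not call GC.Collect in production code.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();   // collect anything the finalizers just released
            Console.WriteLine("Heap after full GC: {0} bytes", GC.GetTotalMemory(false));
        }
    }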
# of Pinned Objects - This counter shows the number of pinned objects encountered since the last GC. A pinned object is an object that the GC cannot move in memory.
Pinned objects are usually objects that were passed as pointers to unmanaged code and are pinned so that the Garbage Collector will not move them in memory while it compacts the heap, otherwise it will cause unexpected behavior in the unmanaged code and might even lead to memory corruption.
Generally, this number shouldn’t be high unless you call into unmanaged code a lot. If it is increasing, it might suggest that we are pinning objects when passing them to unmanaged code and never releasing them, or that we explicitly pinned an object and forgot to unpin it. If this counter is increasing and the Virtual Bytes counter (we will talk about this counter later on) is also increasing, it means that we are pinning objects too much and the GC cannot effectively compact the heap, forcing it to reserve additional virtual memory so the GC heap can grow and accommodate the requested allocations.
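Pinning usually happens when a managed buffer is handed to unmanaged code. A hypothetical sketch of both the explicit form and the scoped C# “fixed” form (the second method requires compiling with /unsafe):

    using System;
    using System.Runtime.InteropServices;

    class PinningDemo
    {
        public static void Explicit(byte[] buffer)
        {
            // Explicitly pinned: the GC cannot move buffer until Free().
            GCHandle pin = GCHandle.Alloc(buffer, GCHandleType.Pinned);
            try
            {
                IntPtr address = pin.AddrOfPinnedObject();
                // ... hand 'address' to unmanaged code ...
            }
            finally
            {
                pin.Free();   // forgetting this keeps the object pinned forever
            }
        }

        public static unsafe void Scoped(byte[] buffer)
        {
            // 'fixed' pins only for the duration of the block.
            fixed (byte* p = buffer)
            {
                // ... use p ...
            }
        }
    }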
# of Sink Blocks in use - This counter displays the current number of sync blocks in use. Sync blocks are per-object data structures allocated for storing synchronization information. Sync blocks hold weak references to managed objects and need to be scanned by the Garbage Collector. Sync blocks are not limited to storing synchronization information and can also store COM interop metadata.
This counter was designed to indicate performance problems that occur due to excessive usage of synchronization primitives. If this counter keeps increasing, we should probably take a look at all the places where we use synchronization objects and see if they are truly needed. Combined with the Current Queue Length and Contention Rate / sec counters, it can also show that we have synchronization bottlenecks in our application that should be addressed to improve performance.
# Total committed Bytes - This counter shows the total amount of virtual memory (in bytes) currently committed by the Garbage Collector.
This counter actually shows us the total amount of virtual memory that is being used at a given point in time by the GC heap. If # Total reserved Bytes is significantly larger than this counter, it means the GC keeps growing segments and reserving more memory.
This indicates one of two things:
1. The GC is having problems compacting the heap due to a large number of small pinned objects, or a small number of pinned objects that take up a lot of space (usually large arrays that are being passed to unmanaged code).
2. We are leaking in the managed sense of the word, meaning we have a lot of objects that were supposed to die but something is still holding a reference to them.
# Total reserved Bytes - This counter shows the total amount of virtual memory (in bytes) currently reserved (not committed) by the Garbage Collector.
In addition to what I’ve mentioned above under # Total committed Bytes, this counter, when it keeps increasing over a long period of time, might also suggest fragmentation of the virtual address space. This situation is usually common in a mixed application that has a lot of managed and unmanaged code tangled together, and it will effectively limit your application’s total lifetime before it commits application suicide.
NOTE: Virtual address space fragmentation may also occur naturally (in unmanaged code or in a mixed managed/unmanaged application) due to the nature of your application’s memory allocation profile, so not every increase in reserved bytes indicates this problem.
Finalization Survivors - This counter displays the number of managed objects that survive a collection because they are waiting to be finalized. This counter updates at the end of the GC and displays the number of survivors at that specific GC.
This counter will indicate if we have too many objects that need finalization. Having too many finalizable objects is usually not a good idea, since they require at least 2 GCs before they are truly collected. It also means that the finalizer thread (which I talked about in the last post) has a lot of work to do, and if we are not careful in the implementation of the finalizable objects, we might end up with a blocked finalizer thread issue.
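The usual way to keep this counter (and the finalizer thread) quiet is the Dispose pattern: when the caller disposes the object deterministically, we suppress the finalizer so the object can die in a single GC. A minimal, hypothetical sketch:

    using System;

    class NativeResourceHolder : IDisposable
    {
        private IntPtr _handle;   // some unmanaged resource
        private bool _disposed;

        public void Dispose()
        {
            if (!_disposed)
            {
                ReleaseHandle();
                _disposed = true;
                // The object no longer needs finalization, so it will not
                // show up in Finalization Survivors.
                GC.SuppressFinalize(this);
            }
        }

        ~NativeResourceHolder()
        {
            // Safety net only - runs on the finalizer thread.
            // Keep this short and never block here.
            ReleaseHandle();
        }

        private void ReleaseHandle()
        {
            // ... release _handle via the appropriate unmanaged call ...
            _handle = IntPtr.Zero;
        }
    }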
Gen 2 Heap Size - Indicates the size in bytes of generation 2 objects.
If this number keeps on growing it indicates that we have too many objects that manage to survive and reach generation 2. Since Gen 2 collections are not as common as Gen 0 and Gen 1 it means this memory will stick around for a while, burdening the application’s memory.
Large Object Heap - All objects greater than roughly 85,000 bytes are allocated on the large object heap, mainly for performance reasons. The major difference between the Large Object Heap and the other heaps (Gen 0, Gen 1 and Gen 2) is that it is never compacted, and it is only collected during Gen 2 collections.
If this counter keeps on growing it indicates that we are allocating too many large objects. Doing that may lead to memory fragmentation, because these objects are never compacted and only get collected in Gen 2 collections (so they burden the GC heap). It might also indicate that someone is still referencing these objects and they are not being collected at all.
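For example (hypothetical code), repeatedly allocating fresh buffers just above the threshold goes straight to the large object heap on every iteration, while reusing a single buffer keeps this counter flat:

    class LohPressure
    {
        // 100,000 bytes is above the ~85,000 byte threshold, so each of
        // these lands on the large object heap and is only reclaimed by
        // a Gen 2 collection.
        public static void Wasteful()
        {
            for (int i = 0; i < 1000; i++)
            {
                byte[] buffer = new byte[100000];
                Process(buffer);
            }
        }

        // Reusing one buffer keeps the Large Object Heap counter flat.
        private static readonly byte[] _shared = new byte[100000];

        public static void Reuses()
        {
            for (int i = 0; i < 1000; i++)
            {
                Process(_shared);
            }
        }

        private static void Process(byte[] buffer) { /* ... */ }
    }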
Process:
Virtual Bytes - Indicates the current size (in bytes) of the allocated (reserved and committed) virtual memory.
Private Bytes - Indicates the current size (in bytes) of the allocated (committed) virtual memory. This memory cannot be shared with other processes.
Threads Count - Shows the number of threads currently active in the current process.
Processor:
% Processor Time - The percentage of elapsed time that the processor spends executing non-idle threads.
Counters Indicating Common Problems
Below is a list of common problems and the counters that might indicate them.
Remember that these are only indicators, and in some cases when they seem to indicate a problem it might just be the application’s normal behavior.
Memory Leak Indicators:
# Bytes in all Heaps increasing
Gen 2 Heap Size increasing
# GC Handles increasing
# of Pinned Objects increasing
# Total committed Bytes increasing
# Total reserved Bytes increasing
Large Object Heap increasing
Virtual Address Space Fragmentation Indicators:
# Total reserved Bytes significantly larger than # Total committed Bytes
# of Pinned Objects increasing
# GC Handles increasing
# Bytes in all Heaps always increasing
CPU Spin Indicators:
Current Queue Length is very close to Threads Count and stays that way for a long time.
% Processor Time is continuously at a fixed level for a long period of time (for as long as Current Queue Length stays at the same value).
Managed Deadlock Indicators:
Current Queue Length is very close to Threads Count and stays that way for a long time.
% Processor Time is 0 (or close to 0) (for as long as Current Queue Length stays at the same value) and the application has stopped responding.
Blocked Finalizer Thread Indicators:
# Bytes in all Heaps increasing
Private Bytes increasing
Virtual Bytes increasing
As I’ve mentioned above, these are all indicators and might not actually tell you whether a certain problem is occurring. To actually prove the problem there is a need for debugging (either with a full-blown development environment, live debugging with WinDbg, or a post mortem debug using a memory dump).
To conclude, PerfMon is a great tool that comes built into Windows to better monitor our applications for common problems. Combining PerfMon with an additional technique, such as taking subsequent memory dumps using adplus.vbs at fixed intervals, can give a better indication and usually points you to the cause of the problem in no time.
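The same counters don’t have to be watched only through the PerfMon UI; they can also be sampled from code via System.Diagnostics.PerformanceCounter, for instance in a small monitoring harness. In this hypothetical sketch the instance name "MyApp" is just a placeholder for the process name of the application you are monitoring:

    using System;
    using System.Diagnostics;

    class Sampler
    {
        static void Main()
        {
            PerformanceCounter heapBytes = new PerformanceCounter(
                ".NET CLR Memory", "# Bytes in all Heaps", "MyApp");
            PerformanceCounter contention = new PerformanceCounter(
                ".NET CLR LocksAndThreads", "Contention Rate / sec", "MyApp");

            while (true)
            {
                // Sample the counters every few seconds.
                Console.WriteLine("Heap: {0:N0} bytes, Contention: {1:N2}/sec",
                    heapBytes.NextValue(), contention.NextValue());
                System.Threading.Thread.Sleep(5000);
            }
        }
    }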
PerfMon - Don’t leave home without it (well, you can’t, because it’s always there).