
Profiling overhead

The nature of overhead

When profiling a .NET application, the profiler agent collects data about the running program. This involves calling operating system methods, invoking .NET runtime APIs, and instrumenting bytecode (inserting special support code into function bodies). Naturally, data collection consumes resources (memory and CPU) and adds overhead, leading to slower application performance.

The extent of the slowdown depends on the profiling mode: the more data collected and the costlier the data acquisition, the greater the slowdown. For instance, CPU tracing requires measuring the time at the entry and exit of each method to calculate the time spent inside the method and count how many times the method was called. Despite efforts to optimize the profiler agent, overhead is inevitable.

To manage this, YourKit .NET Profiler is designed to give developers complete control over the amount of collected data, allowing them to adjust the profiler's overhead. By disabling the collection of data you do not need, you can reduce the overhead practically to zero, keeping application performance close to normal while still getting useful profiling insight.
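
For example, if you know in advance that a run needs neither CPU tracing, nor allocation recording, nor exception or probe data, the corresponding collection can be switched off entirely with the startup options described in the sections below. A minimal sketch of such a combination, using only option names that appear on this page (the comma-separated list format is an assumption, and how the options are actually passed to the profiler agent depends on how you start profiling and is not shown here):

    disable_tracing,disable_alloc,exceptions=disable,probe_disable=*

With these options the profiler agent remains loaded, but the most expensive kinds of data collection are never performed.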

Key strategies for reducing profiler overhead

  1. On-demand profiling: Enable profiling only when needed, rather than running it continuously throughout the application's lifecycle.
  2. Adjust data granularity: Choose a lower level of detail for data collection.
  3. Limit data retention: Configure the profiler to discard older or less relevant data, keeping memory usage in check.

CPU profiling

  1. Turn CPU profiling off if you are not analyzing CPU usage.
  2. Use sampling instead of tracing if method invocation counts are not required.
  3. If method invocation counts are not needed, use the disable_tracing option to completely disable bytecode instrumentation for tracing, as sketched below.
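
As a sketch, assuming the same comma-separated startup option format as in the example above, a run that will only ever use CPU sampling could include:

    disable_tracing

Since the supporting bytecode is then never inserted, tracing will not be available for the rest of that session, so use this option only when you are sure sampling is sufficient.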

Thread profiling

  1. Turn thread profiling off if you are not analyzing thread interactions and do not need to estimate CPU usage over time intervals.
  2. Collect only thread states instead of states and stacks if the CPU estimation feature is not required.
  3. Avoid specifying a long telemetry_limit; in most cases, the default 1-hour limit suffices.

Memory profiling

  1. Turn off object allocation profiling if you are not analyzing the creation of temporary objects.
  2. Opt for the Count allocated objects mode if you do not require exact stack traces of where the objects were created, but only need the number of created instances.
  3. If object allocation profiling is not needed, use the disable_alloc option to completely disable bytecode instrumentation for allocation profiling, as sketched below.
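
Similarly, a sketch of a startup option that removes allocation instrumentation altogether (comma-separated format assumed, as above):

    disable_alloc

As with disable_tracing, the supporting bytecode is never inserted, so object allocation recording will not be available for that session.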

Exception profiling

  1. Turn off exception profiling if you are not analyzing thrown exceptions.
  2. Use the option exceptions=disable to completely avoid the overhead of exception profiling.
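
A sketch of the corresponding startup option (format assumed as above):

    exceptions=disable

This removes the exception profiling overhead for the whole session.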

Telemetry

  1. Do not collect telemetry if it is not needed.
  2. Avoid setting a short telemetry period with the telemetry_period option; the default 1-second period suffices in most cases.
  3. Avoid specifying a long telemetry_limit; in most cases, the default 1-hour limit suffices.
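
If you do need to change these settings, a sketch of how the two options from this list might look together (comma-separated format assumed; <period> and <limit> are placeholders, since the exact value syntax is not described on this page):

    telemetry_period=<period>,telemetry_limit=<limit>

Keep in mind that a shorter period and a longer limit both increase overhead and memory use, so the defaults (1 second and 1 hour) are usually the safer choice.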

Probes

  1. Turn off or disable the probes whose data you do not need.
  2. Avoid using large values for probe_table_length_limit.
  3. Use the option probe_disable=* to disable all probes and completely avoid the overhead of probe collection.
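
For example, a sketch of a startup option that avoids probe overhead entirely (comma-separated format assumed, as above):

    probe_disable=*

If only some probe data is unnecessary, disable those probes selectively instead, as noted in the first item of this list.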
