
Profiled application crashes when capturing memory snapshots


The cause and possible solutions depend on the version of the JVM you use to profile your Java applications.

Profiling with Java 5 or newer:

When a memory snapshot is being captured, objects in the heap are temporarily "tagged" with 8-byte numbers. JVMTI, the profiling API of Java 5 and newer, provides all heap data in terms of these tags. The tag-to-object mapping is stored in JVM-internal data structures. When the JVM fails to allocate memory for these structures, it terminates with an error.
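
The numbers add up quickly on a large heap. As a rough, back-of-the-envelope illustration (the object count below is an assumption, and real JVMs need additional bookkeeping on top of the raw tags):

```java
// Illustrative lower-bound estimate of the extra native memory JVMTI
// tagging needs at snapshot time: at least 8 bytes of tag per live
// object, plus JVM-internal bookkeeping for the tag-to-object mapping
// (implementation dependent, not counted here).
public class TagOverheadEstimate {
    static long tagBytes(long liveObjects) {
        return liveObjects * 8L; // 8-byte tag per object, lower bound
    }

    public static void main(String[] args) {
        long liveObjects = 50_000_000L; // assumed number of live objects
        long bytes = tagBytes(liveObjects);
        // 50 million objects need at least ~381 MiB just for the tags
        System.out.println(bytes / (1024 * 1024) + " MiB minimum");
    }
}
```

On a 32-bit JVM, a few hundred megabytes of extra native allocations can easily exhaust what is left of the process address space beside the Java heap.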

A profiler agent has no reliable way to predict this memory shortage, and thus cannot prevent the JVM crash.

This problem mostly affects 32-bit JVMs, which have limited virtual address space (2-4 GB, depending on the OS and virtual memory settings).

Suggested solutions:

  • Use 64-bit JVM if possible.
  • Decrease the heap size (-Xmx) when profiling.
  • Click "Force garbage collection" several times before capturing a memory snapshot.
  • Use the JVM built-in heap dumper. The dumper produces files in HPROF format, fully supported by YourKit Java Profiler. The dumper is a part of the JVM and requires almost no additional memory to make the dump, because it accesses the heap data structures directly, unlike profiler agents, which use higher-level APIs to access the Java heap. The built-in dumper doesn't do any tagging.
  • Ensure you use the latest version of the profiler.
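
For example, on JDKs that ship the standard jmap tool (Java 5 and newer), the built-in dumper can be invoked on a running process, or armed at startup; the PID, file names, and MyApp class below are placeholders:

```shell
# Dump the heap of a running JVM (12345 is a placeholder PID) in HPROF
# format; the resulting file can be opened in YourKit Java Profiler.
jmap -dump:format=b,file=heap.hprof 12345

# Alternatively, let the JVM write a dump automatically the moment it
# runs out of memory.
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp MyApp
```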

Profiling with Java 1.3/1.4:

When capturing a memory snapshot, a 1.3/1.4 JVM can crash with an error like this:

Exception java.lang.OutOfMemoryError:
requested 291955644 bytes for unsigned char in
Out of swap space?

The problem is that Java versions earlier than 5 were, by design, not capable of capturing memory snapshots of big heaps. The actual limit may vary depending on the kind of objects in the heap, but in our experience it is usually around 1 GB.

Technically, the out of memory error happens when the JVM internally allocates a contiguous block of memory and fills it with heap data to pass to the profiler. The memory needed for this temporary structure is comparable to the size of the heap itself.
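
The arithmetic makes the limit concrete. A sketch with assumed numbers (heap size and address space are illustrative, not measured): the heap plus a temporary copy of comparable size can exceed the roughly 2 GB of user address space a 32-bit process typically gets:

```java
// Back-of-the-envelope check (all numbers assumed): a 1.3/1.4 JVM
// copying heap data into one contiguous buffer roughly doubles the
// memory needed at snapshot time.
public class SnapshotBudget {
    static long neededBytes(long heapBytes) {
        return heapBytes * 2; // heap + contiguous copy of comparable size
    }

    public static void main(String[] args) {
        long heap = 1_200L * 1024 * 1024;          // 1.2 GB heap (assumed)
        long addressSpace = 2_048L * 1024 * 1024;  // ~2 GB 32-bit user space
        // Snapshot needs ~2.4 GB, which doesn't fit: the JVM crashes
        System.out.println(SnapshotBudget.neededBytes(heap) > addressSpace);
    }
}
```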

This problem cannot be solved in 1.3/1.4 JVMs.

Possible workarounds:

  • Use the newest Java version if possible. Even if the old Java version is a customer requirement and you cannot easily upgrade, we recommend considering profiling in development with a newer JVM. In many cases, performance problems are not JVM-specific.
  • Decrease the heap size (-Xmx) when profiling. In most cases, dumping works fine with heaps smaller than 1 GB.
  • Click "Force garbage collection" several times before capturing a memory snapshot.
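
A typical launch with a reduced heap might look like the following; the 800m value, the MyApp class, and the exact agent option are assumptions to adapt to your installation (old 1.3/1.4 JVMs load profiler agents via the -Xrun interface rather than -agentpath):

```shell
# Start the application with a maximum heap safely below the ~1 GB
# practical snapshot limit; "yjpagent" is the usual YourKit agent
# library name (an assumption here - check your profiler installation).
java -Xmx800m -Xrunyjpagent MyApp
```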