Memory leak only occurs when recording allocations?

Questions about YourKit Java Profiler
swruch
Posts: 3
Joined: Mon Oct 31, 2005 8:02 pm

Memory leak only occurs when recording allocations?

Post by swruch »

Sorry for the long-winded post. I wanted to provide as much information
up front as possible...

I'm on YK 5.0.3, Windows XP, Sun JDK 1.5.0_05. I normally launch YK
from IntelliJ 5.0.1.

I've only been using YK for a year or so and I'm not a heavy user.
Hopefully this is either a dumb user error or a common coding problem
and it will sound familiar to someone...

I used YK to debug a memory leak (of sorts - the problem was in
java.io.ObjectInputStream's reference mechanism, which is not strictly
a memory leak, but using this class to deserialize large amounts of
data results in large amounts of retained references - by design).

Anyway, I coded around the behavior by breaking the stream up into
smaller chunks and closing and reopening it so that references to
deserialized objects can be GC'ed as I go along. My new implementation
runs fine and memory usage is as expected. I can even restrict the heap
size to 8m (-Xmx8m) and a test program reads through 140 serialized
object files of 256 KB each with no out-of-memory errors.
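
Roughly, the reworked reading loop looks like this (a simplified sketch
with placeholder names, not my actual code - the real reader is my own
SequencingDirectoryInputStream class):

Code: Select all

import java.io.*;

public class ChunkedReader {
    // Reopening the ObjectInputStream for each chunk drops its internal
    // handle table, so objects deserialized from the previous chunk
    // become eligible for GC.
    public static void readAll(File[] chunks)
            throws IOException, ClassNotFoundException {
        for (File chunk : chunks) {
            ObjectInputStream in = new ObjectInputStream(
                    new BufferedInputStream(new FileInputStream(chunk)));
            try {
                while (true) {
                    Object obj;
                    try {
                        // Each file is written as a self-contained stream,
                        // so EOF marks the end of the chunk.
                        obj = in.readObject();
                    } catch (EOFException eof) {
                        break;
                    }
                    process(obj); // inspect and discard
                }
            } finally {
                in.close(); // releases the reference table for this chunk
            }
        }
    }

    private static void process(Object obj) {
        // application-specific inspection
    }
}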

However, when I run the application with YK with object allocation
recording turned on and no CPU profiling, it runs out of memory even
with the default 64m heap size.

I cranked the heap up to 128m to see if I could get it to finish, and
it did. I captured a snapshot on exit, and the size of the retained
objects was less than a megabyte (750k), made up mostly of Strings
and Class objects tied to java.io.* classes and buffers allocated in
Sun's native code. Nothing that was retained could be tied to my own
application classes.

I'm puzzled, obviously - and concerned that I still have a memory
leak somewhere. I'm not inclined to blame YK, but I'm wondering how
I can run a 36m data set through a JVM with a max heap of 8m, yet the
same data set requires somewhere between 64m and 128m to be processed
with the YK agent enabled and recording memory allocations.

Can anyone shed some light on this? Thanks!
Anton Katilin
Posts: 6172
Joined: Wed Aug 11, 2004 8:37 am

Post by Anton Katilin »

Hi,

In what form do you receive the OutOfMemory error?

E.g. if it is printed to the output console, could you please provide the exact text?

Best regards,
Anton
swruch
Posts: 3
Joined: Mon Oct 31, 2005 8:02 pm

Post by swruch »

Unfortunately, there's not much output at the console, other than
a clear indication that the application ran out of heap:

Code: Select all

DEBUG 2005.11.01 09:38:58:226 [SequencingDirectoryInputStream] Advancing to next stream.
DEBUG 2005.11.01 09:38:58:226 [SequencingDirectoryInputStream] Opening next file: recordedData-92.dat
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:585)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:86)
Caused by: java.lang.OutOfMemoryError: Java heap space
Exception in thread "main" 
Process finished with exit code -1
The two DEBUG traces are from my application code. This particular
run was with the default heap size of 64m.
Anton Katilin
Posts: 6172
Joined: Wed Aug 11, 2004 8:37 am

Post by Anton Katilin »

Possibly the OutOfMemory error happens because of the following:

I suspect that a very large number of temporary objects are being created.

When the profiler records allocations, the profiled application runs slower. Temporary objects should be collected by the garbage collector, but before an object is actually deleted, the JVM places it in the finalizer queue, which is processed by a special internal JVM thread. Under the heavy load caused by allocation recording, this thread may get less CPU time; it then processes fewer objects from the queue, the queue grows larger and larger, and eventually this causes the out of memory error.

Objects in that queue are reported by the profiler as collected, because normally they will be deleted after some time. Thus these objects, which should die soon, are not included in the counts of live objects, and in particular not in the overall retained size.
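
For illustration only (a generic sketch of the mechanism, not specific
to your application): an object whose class overrides finalize() cannot
be reclaimed until the JVM's internal "Finalizer" thread has processed
it, so if that thread falls behind, unreachable objects accumulate on
the heap.

Code: Select all

public class FinalizerBacklog {
    static class Temp {
        private final byte[] payload = new byte[1024];

        protected void finalize() {
            // Even an empty finalizer forces the object through the
            // finalization queue before its memory can be reclaimed.
        }
    }

    public static void main(String[] args) {
        while (true) {
            // Each instance becomes unreachable immediately, but must
            // wait for the Finalizer thread. If that thread is starved
            // of CPU, the pending queue grows and the heap fills with
            // already-unreachable objects.
            new Temp();
        }
    }
}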

If this is the case, then the source of the massive temporary object allocation should be identified and, if possible, avoided.

To do this, please profile the application with -Xmx set large enough to finish with no OutOfMemory error, capture a snapshot, and then analyse the Collected objects section to find the methods where most of the garbage is produced.
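
For example, the VM parameters of the run configuration might include
something like this for the profiled run (the value is arbitrary; pick
one large enough for your data set):

Code: Select all

-Xmx256m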
swruch
Posts: 3
Joined: Mon Oct 31, 2005 8:02 pm

Post by swruch »

Thanks for the explanation - that is exactly the case. The program in
question analyzes a (potentially) large volume of serialized data.
Individual objects are discarded after they are deserialized and
inspected, but, as you suggest, the number of objects eligible for
collection over the lifetime of the program is always fairly large.

Thanks again.