When I was in SF we worked on a pretty resource-intensive app that easily goes beyond the Java heap limit on Android if one counts native resources against the Java heap. We received quite a few OutOfMemoryErrors, which would have made sense had we used Android’s Bitmap class, which stores pixel data on the native heap but counts it against the Java heap. We did not use Bitmap, though, so it was puzzling to us how our app could throw OOMs. On the Java side we barely scratched the 10MB barrier.
To communicate with native code (non-Java, e.g. C/C++) one has two options: either transfer data from the Java heap to the native heap, or allocate space on the native heap in the first place, bypassing the Java heap completely. One mechanism for the latter is a direct buffer, a Java object that wraps the address and size of a native heap memory location. The VM usually doesn’t count the native heap memory of a direct buffer against its Java heap. It turns out Android’s Dalvik VM is special in that regard compared to, say, HotSpot.
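A minimal example of the second option, using nothing but standard NIO (this is plain Java SE API, nothing Android-specific):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Allocate 1MB on the native heap. Only the small wrapper object
        // lives on the Java heap; the pixel/vertex payload does not
        // (at least on most VMs, which is the point of this article).
        ByteBuffer buffer = ByteBuffer.allocateDirect(1024 * 1024);
        System.out.println(buffer.isDirect());   // true
        System.out.println(buffer.capacity());   // 1048576
    }
}
```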
The first, generally known, situation where the Dalvik VM counts native heap memory against the Java heap memory is the use of the Bitmap class. A Bitmap is a thin wrapper around a native heap memory area that stores pixel data. To enforce application memory limits, that native heap memory of the Bitmap is counted against the Java heap, which is limited to 16–25MB on Android. Hard to cope with for Android developers at times (see Stack Overflow), but understandable from the perspective that a mobile device has limited resources, even more so if you have true multitasking.
For direct buffers I assumed the situation to be different. Direct buffers provide a way to allocate memory outside the bounds of the Java heap and are not counted against the Java heap on most VMs. Let’s see what happens on Dalvik. Observe the following code:
This method is called when one allocates a new direct ByteBuffer through ByteBuffer.allocateDirect(int numBytes), which returns a ByteBuffer that points at a native memory area of numBytes bytes. Following through we get to
This is where it gets interesting. The first statement in that constructor allocates the native memory through a call to PlatformAddressFactory.alloc(). The second statement tells the address wrapper object (SafeAddress) that this memory area should be automatically deallocated when the buffer instance (and thus the address wrapper instance) is garbage collected. Let’s see what the PlatformAddressFactory.alloc() method does.
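The whole chain can be sketched as a self-contained simulation. The class and method names below mirror the Harmony/Dalvik ones mentioned above, but the bodies are simplified stand-ins for illustration, not the real sources (in particular, the fake address counter stands in for the actual native malloc):

```java
public class AllocationChainDemo {
    // Stand-in for Harmony's memory spy; the real one registers a
    // PhantomReference per PlatformAddress (more on that below).
    static class MemorySpy {
        int tracked;
        void alloc(long address, int size) { tracked++; }
    }
    static final MemorySpy memorySpy = new MemorySpy();

    static class PlatformAddress {
        final long osaddr;      // address of the native heap block
        boolean autoFree;
        PlatformAddress(long osaddr) { this.osaddr = osaddr; }
        // Mark the native block to be freed when this wrapper is GCed.
        void autoFree() { autoFree = true; }
    }

    static class PlatformAddressFactory {
        static long nextAddress = 0x1000; // fake: stands in for OSMemory.malloc()
        static PlatformAddress alloc(int size) {
            long address = nextAddress;
            nextAddress += size;
            PlatformAddress pa = new PlatformAddress(address);
            memorySpy.alloc(address, size);  // inform the spy of the new block
            return pa;
        }
    }

    // Mirrors the two statements of the direct buffer constructor:
    // allocate native memory, then request automatic deallocation on GC.
    static PlatformAddress newDirectBuffer(int capacity) {
        PlatformAddress address = PlatformAddressFactory.alloc(capacity);
        address.autoFree();
        return address;
    }

    public static void main(String[] args) {
        PlatformAddress a = newDirectBuffer(4096);
        System.out.println(memorySpy.tracked); // 1
        System.out.println(a.autoFree);        // true
    }
}
```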
It calls into OSMemory.malloc(), which invokes a C++ method that actually allocates the native heap memory. Also observe line 140, where the memorySpy is informed of the newly allocated memory area. We’ll get to that later, as it has some more implications.
What happens in OSMemory.malloc()? This:
It actually tracks the native memory allocated and counts it against the Java heap:
jboolean allowed = env->CallBooleanMethod(gIDCache.runtimeInstance, gIDCache.method_trackExternalAllocation, static_cast<jlong>(size));
This will return false if the requested memory size would exceed the Java heap. In line 64 we see that the first 8 bytes store the size of the buffer so memory can be subtracted from the used Java heap size again if the buffer is deallocated (when it’s GCed), see
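That header trick can be sketched in plain Java (the real bookkeeping happens in C++; `alloc`, `free` and `trackedBytes` are hypothetical names, and the counter stands in for Dalvik’s external allocation counter):

```java
import java.nio.ByteBuffer;

public class SizeHeaderDemo {
    // Stands in for Dalvik's count of native memory charged to the Java heap.
    static long trackedBytes = 0;

    // Allocate size + 8 bytes; the first 8 bytes remember the requested size.
    static ByteBuffer alloc(int size) {
        ByteBuffer buf = ByteBuffer.allocateDirect(size + 8);
        buf.putLong(0, size);     // header: how much was charged
        trackedBytes += size;
        return buf;
    }

    // On deallocation, read the header back to know how much to subtract.
    static void free(ByteBuffer buf) {
        trackedBytes -= buf.getLong(0);
    }

    public static void main(String[] args) {
        ByteBuffer b = alloc(1024);
        System.out.println(trackedBytes); // 1024
        free(b);
        System.out.println(trackedBytes); // 0
    }
}
```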
The second issue is the memory spy thing I mentioned above. What does it do? Bucket of salt: this is my understanding of it, which might be wrong.
Every time we allocate a new direct buffer (and thus a PlatformAddress) the memory spy is informed of that event. It keeps so-called PhantomReferences to all of these PlatformAddresses. Such references let us find out whether a Java object has been garbage collected. The problem is that you have to poll that information, as you can’t register a callback that gets invoked when the GC reclaims a Java object. So the designers of that code had to come up with a mechanism to do that polling frequently.
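The polling pattern itself is plain java.lang.ref machinery and can be demonstrated standalone (the loop around System.gc() is only there to coax the collector along; in the real spy the poll happens on every new allocation instead):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

public class PhantomPollDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object payload = new Object();           // stands in for a PlatformAddress
        PhantomReference<Object> ref = new PhantomReference<>(payload, queue);

        payload = null;                          // drop the last strong reference
        Reference<?> collected = null;
        // No GC callback exists, so we must poll the queue ourselves.
        for (int i = 0; i < 50 && collected == null; i++) {
            System.gc();
            collected = queue.remove(100);       // wait up to 100ms per attempt
        }
        // Once the payload is collected, its reference shows up in the queue,
        // and this is the point where the spy could free the native memory.
        System.out.println(collected == ref);
    }
}
```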
This method is called whenever we allocate new native heap memory. The first couple of lines poll the reference queue; if a PlatformAddress was GCed, the spy frees the associated native heap memory (orphanedMemory(ref)). Due to this design you always leak one direct buffer, as you have to allocate a new buffer for the polling mechanism to kick in.
Games written in C/C++ do not suffer from these problems. They can allocate native memory all day long without Android ever getting angry about it. Games like Shadowgun wouldn’t be possible without Dalvik turning a blind eye to those allocations. If you write a pure Java game for Android, you’ll suffer from this problem.
For libgdx games you don’t have to worry, as we added a very nasty 3-line hack that ensures the memory allocated for a ByteBuffer is not counted against the Java heap. While I completely understand that there have to be memory limits, the situation is different for games. Adding this little “hack” at least allows us to cheat just as the native folks do (see BufferUtils#newUnsafeByteBuffer()).
I’d like to introduce a new change to libgdx where we do not load any resources from disk anymore; instead we cache everything (pixel and vertex data) in memory. This would mean changing the semantics of a few things for the better: any unmanaged textures, e.g. textures constructed from Pixmaps, would now be managed.