A lot of people seem to have a problem understanding why there are managed and unmanaged textures. This post is actually a reply to a comment on the last post. After writing it as a comment I figured it would make a nice blog post, so here we are.
I plan on adding a TextureAtlas class along with a TextureRegion class and a simple Sprite class in the next release. The TextureAtlas and the TextureRegions must be created offline via an external tool.
A user suggested modeling the texture atlas functionality after pyglet, a game programming framework for Python. Its texture atlas implementation lets you modify the atlas dynamically at runtime, drawing new images to it as you see fit. Pyglet is nice and dandy, but it is intended for desktop gaming. OpenGL on the desktop does not suffer from context loss (as opposed to Direct3D). That’s why pyglet can alter the TextureAtlas at run-time. Let me explain the context loss problem.
On Android your OpenGL context is lost each time the Activity goes to the background, e.g. on an incoming call, when the home button is pressed, when a new activity is started on top of the current one, when the screen goes to sleep, and so on and so forth. When the OpenGL activity is resumed, the context is recreated. So where’s the problem?
An OpenGL context is responsible for managing resources such as textures, meshes, shaders or frame buffer objects that reside in video memory (well, the OpenGL spec never actually says they must reside in video memory, but almost all implementations of OpenGL work like that). The OpenGL driver allocates memory for each resource in video memory and keeps track of those allocations in the OpenGL context the application has acquired. If the context is destroyed, the driver deallocates the video memory for those resources. So each time our Activity is paused, all our textures, meshes, shaders and frame buffer objects are lost.
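To make the bookkeeping concrete, here is a minimal sketch of an application-side tracker that re-creates its resources when a fresh context appears. The names (Reloadable, ContextTracker) are hypothetical and not libgdx API; this is just the shape of the idea.

```java
import java.util.ArrayList;
import java.util.List;

// A resource that knows how to re-create its GL-side object
// from its source data after the context was lost.
interface Reloadable {
    void reload();
}

// Keeps a list of everything that must be rebuilt on context recreation.
class ContextTracker {
    private final List<Reloadable> resources = new ArrayList<>();
    int reloadCount = 0; // how many reloads were triggered, for illustration

    void register(Reloadable r) {
        resources.add(r);
    }

    // Called when the Activity resumes and a fresh GL context exists.
    void onContextRecreated() {
        for (Reloadable r : resources) {
            r.reload();
            reloadCount++;
        }
    }
}
```

This is essentially what a managed-resource system has to do behind the scenes: remember enough about every resource to rebuild it from scratch.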
Now imagine we create a dynamic texture atlas at run-time and insert new images with Texture.draw(Pixmap). Everything is looking good, the world is a happy place. Then the evil user decides to briefly check his home screen or gets a call. Our game gets paused, and the OpenGL context along with our shiny texture atlas is lost as the phone app or the home screen comes to the foreground. The user goes back to our game and wants to pick up where he left off. Everything is white now, as our texture atlas has been dead for quite some time. Next thing you know you check the comments of your app in the market and find something along the lines of “Gay as AIDS!” (an actual comment I got for Newton). How could we solve that?
In the case of managed textures you might have noticed that to construct such a texture you always specify a FileHandle (Graphics.newTexture(FileHandle file, …)). That’s the secret to managed textures: they recognize when the OpenGL context was lost and automatically reload the texture from the file formerly specified via the FileHandle. If I allowed you to draw to such a texture via Texture.draw(Pixmap), I’d have a bit of a problem: I’d have to keep track of your changes to the texture. I could not just reload the original file, as all subsequent changes made via your draw() calls would be lost, since I can’t write the changes back to the original file.
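The managed-texture idea boils down to this: the texture remembers where it came from, so it can restore itself. A minimal sketch, using a plain String path as a stand-in for the real FileHandle class:

```java
// Hypothetical sketch, not the real libgdx Texture class. The key point:
// the source file is kept around so the texture can rebuild itself.
class ManagedTexture {
    private final String file; // source to reload from after context loss
    boolean loaded;

    ManagedTexture(String file) {
        this.file = file;
        load();
    }

    private void load() {
        // real code would decode the image file and upload it
        // via glTexImage2D here
        loaded = true;
    }

    void onContextLost() {
        loaded = false; // the GL-side object is gone
    }

    void onContextRecreated() {
        load(); // reload from the original file, fully automatic
    }

    String sourceFile() {
        return file;
    }
}
```

Note that this only works because the file is the single source of truth; any draw() call that mutated the texture after loading would be silently lost on reload, which is exactly the problem described above.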
I have two options to work around that. The first solution goes like this: I’d need to keep an in-memory copy of the original bitmap I used for the texture, apply all your draw calls to that bitmap, and upload it to the texture in video RAM each time.
The second solution would be to store a copy on the SD card, read it in each time you call Texture.draw(), draw the Pixmap to it, upload it to the texture, and save the modified image back to the SD card. Neither solution is a real option. The first one doubles the memory usage (an in-memory copy of the texture’s bitmap). The second will only work if an SD card is present (what if it isn’t?), we’d need to come up with a naming scheme for our cached bitmaps on the SD card, and we’d have to make sure they get deleted when the program exits, as we’d otherwise fill up the SD card fast (and we’d also have to do that if the app crashes hard; it gets even worse if we crash in native code). Apart from that, constantly loading and saving from and to the SD card on each Texture.draw() call is slow. Really slow. Like, your grandma slow.
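The first option, the in-memory shadow copy, can be sketched like this. All names are hypothetical and the GL upload is a stub; the point is that every pixel lives twice, once in VRAM and once in a client-side array:

```java
// Sketch of workaround #1: mirror every draw() in a heap-side pixel
// buffer so the texture can be re-uploaded after context loss.
class ShadowedTexture {
    final int width, height;
    final int[] shadowPixels; // RGBA copy in client memory (the x2 cost)
    int uploads = 0;          // counts (re)uploads, stands in for GL calls

    ShadowedTexture(int width, int height) {
        this.width = width;
        this.height = height;
        this.shadowPixels = new int[width * height];
        upload();
    }

    void draw(int x, int y, int rgba) {
        shadowPixels[y * width + x] = rgba; // mutate the shadow copy
        upload();                           // and push it to the GPU
    }

    void upload() {
        uploads++; // real code would call glTexSubImage2D here
    }

    void onContextRecreated() {
        upload(); // the shadow copy survived in RAM, just re-upload it
    }

    long shadowBytes() {
        return (long) shadowPixels.length * 4; // heap cost of the copy
    }
}
```

For a 1024×1024 RGBA texture the shadow copy alone is 4 MB of heap, on top of the 4 MB already sitting in video memory, which is why pre-0.7 libgdx doing this for every texture was painful.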
So you see that having the best of both worlds, managed textures and runtime-modifiable textures, just won’t work in an acceptable way. Before libgdx 0.7 I used the first option and every texture was managed, even the dynamic ones. That was stupid, as it used up twice the memory. Hence the new solution.
The same commenter asked for a way to turn an unmanaged texture into a managed texture. This doesn’t work either. Now you might say: “But Mario! All you need to do is draw to the actual texture in video memory, and when I say ‘gogo gadgeto convert’ you just grab the texture from video memory, save it to the SD card and reload it as a managed Texture. Surely you can do that internally so that I don’t have to code it myself?” The short answer: no.
The long answer: OpenGL ES does not support glGetTexImage(), which is necessary to get the pixels of a texture from video memory into client memory. glReadPixels() lets me read from the frame buffer, so I could draw the texture to the framebuffer and grab the contents from there. This has two problems. One, the framebuffer is most likely smaller than the texture (e.g. 480×320 vs. 512×512), but I could work around that. Two, the framebuffer must be a 32-bit ARGB framebuffer if we want to keep our fancy alpha channel. While most Android devices support such an EGL surface, the performance is shitty; we usually use 24-bit or even 16-bit frame buffers, to which the hardware says “mhhh, I like”. Could we change the framebuffer color depth just for the purpose of fetching a texture? No, because we’d need to destroy our fast 24-bit frame buffer and create a 32-bit one, and we’d lose the texture before we could grab it (surface destruction == context loss == texture loss). Could we spawn a new EGL surface? On some devices yes, on most current devices no, and I don’t want to open that can of worms.
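Even the size mismatch alone makes the glReadPixels() route ugly: a texture larger than the framebuffer would have to be drawn and read back in several tiles. A tiny sketch of that arithmetic (no GL involved, just counting passes):

```java
// How many draw-then-readback passes would a framebuffer-based texture
// grab need? Each pass can cover at most one framebuffer-sized tile.
class ReadbackPlanner {
    static int tilesNeeded(int texW, int texH, int fbW, int fbH) {
        int cols = (texW + fbW - 1) / fbW; // ceiling division
        int rows = (texH + fbH - 1) / fbH;
        return cols * rows;
    }
}
```

For the 512×512 texture on a 480×320 framebuffer from the example above, that is 2×2 = 4 passes, each of which trashes the framebuffer contents and stalls the pipeline, and that is before the color-depth problem even enters the picture.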
So the only option to convert an unmanaged texture to a managed texture is keeping a copy of the texture’s bitmap either in RAM or on the SD card. And we already talked about what that means (memory consumption ×2, or dog-slow reads/writes plus file naming issues and disk fill-up problems).
To summarize: we can’t have both dynamic and managed textures. Period. I’m open to suggestions though if someone can figure this out.