Orthogonal Tilemaps and Blending with Libgdx

I had a bit of time today to test out (orthogonal) tilemap rendering on Android with libgdx. Usually a tilemap is composed of one or more layers; each layer is encoded as a 2D array where each element indexes a tile in a tile texture, also known as a tile set. Here's a tile set example:

Each tile in the texture has an index. To encode a layer of a tilemap we could use a 2D byte array (unless we have more than 256 tiles). Here’s how we’d encode a 5×5 tilemap using the indices we defined (virtually) in the tileset:
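
The original listing isn't reproduced in this copy, but as a rough sketch such a layer could simply be a hard-coded Java array (the concrete indices below are made up for illustration):

    // 5x5 tile layer; each byte indexes a tile in the tileset
    byte[][] tileLayer = {
        { 3, 3, 3, 3, 3 },
        { 0, 1, 1, 1, 0 },
        { 0, 2, 2, 2, 0 },
        { 0, 1, 1, 1, 0 },
        { 3, 3, 3, 3, 3 }
    };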

Our tilemap layer has a size of 5×5 tiles in this example. When we want to get the index of a tile at position (x,y) we'd do it via tileLayer[y][x]. Yes, that's a little silly, but by using the first index as the y-coordinate we can store our tile layer in code just as we want it to appear on screen.

Libgdx's SpriteCache and SpriteBatch usually operate in a coordinate system with the y-axis pointing upwards. Here's a screenshot of how we'd like the above tile layer plus the tileset texture to be rendered in a little test app:

Now let us implement a very simple ApplicationListener that can render the above tilemap layer with the given tileset. Let’s define a couple of things first:

  • Our tilemap only has a single layer as defined above
  • The tileset we use is the same as above. We can load it as a texture. We have to make the image dimensions powers of two of course
  • Each tile in the tileset has a size of 32×32 pixels with a 1 pixel wide border on its right and bottom side. Additionally there's a border on the left and upper edge of the tileset texture. We need this border so that we don't get nasty artifacts when the texture is stretched and filtered
  • We assume that the screen we render to is 480×320 pixels in size. On devices with a high screen resolution we simply stretch the image a little, hence the border pixels in the tileset. That stretching is hardly noticeable so we live with it. We’ll use an OrthographicCamera to achieve this.

Let’s start by defining our ApplicationListener implementation called SimpleTilemapTest:
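
The original listing isn't included in this copy; a minimal sketch of the class with the members discussed below might look like this (package paths and constants are taken from a reasonably current libgdx and may differ for older versions):

    import com.badlogic.gdx.ApplicationListener;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.graphics.OrthographicCamera;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.Texture.TextureFilter;
    import com.badlogic.gdx.graphics.g2d.SpriteCache;

    public class SimpleTilemapTest implements ApplicationListener {
        Texture tiles;             // the tileset texture
        SpriteCache cache;         // holds the static tilemap geometry
        int cacheId;               // handle to our entry in the SpriteCache
        OrthographicCamera camera; // gives us the 480x320 viewport

        // create() and render() are sketched further below
        @Override public void resize (int width, int height) { }
        @Override public void pause () { }
        @Override public void resume () { }
        @Override public void dispose () { }
    }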

Ok, we have a Texture for the tileset texture, a SpriteCache that will store the geometry of our tilemap, an int that stores the handle to the entry in the SpriteCache and an OrthographicCamera that will help us set up our viewport. Let's set up those things in the create() method of our SimpleTilemapTest:
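
Again, the original listing is missing from this copy; a minimal sketch of the first half of create(), assuming the current OrthographicCamera/Texture API and a made-up tileset path, might look like this:

    @Override
    public void create () {
        // the 5x5 layer shown earlier (indices are made up)
        byte[][] tileLayer = {
            { 3, 3, 3, 3, 3 },
            { 0, 1, 1, 1, 0 },
            { 0, 2, 2, 2, 0 },
            { 0, 1, 1, 1, 0 },
            { 3, 3, 3, 3, 3 }
        };
        int WIDTH = tileLayer[0].length; // tiles on the x-axis
        int HEIGHT = tileLayer.length;   // tiles on the y-axis
        int TILE_SIZE = 32;              // pixels per (square) tile side

        // 480x320 viewport, camera looking at the center of the tilemap
        camera = new OrthographicCamera(480, 320);
        camera.position.set(WIDTH * TILE_SIZE / 2, HEIGHT * TILE_SIZE / 2, 0);

        // load the tileset ("data/tileset.png" is a placeholder path) and
        // use Nearest filtering for the pixelated look
        tiles = new Texture(Gdx.files.internal("data/tileset.png"));
        tiles.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);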

We start off by defining our tilemap layer as a byte[][] array. Next we define some constants that tell us how many tiles there are on the x-axis and y-axis and how big one side of a tile is in pixels (the tiles are square). We instantiate an OrthographicCamera, set its viewport to be 480×320 pixels (which magically works on higher resolution devices as well, with some stretching) and finally set its position so that we look directly at the center of the tilemap. The last piece of code simply loads the tileset texture. Note that we set the TextureFilter to Nearest. That will give us a nice pixelated look on higher resolution devices.

Time to define our geometry via a SpriteCache (we are still in the create() method):
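
The corresponding listing is also missing from this copy; a minimal sketch, continuing inside create(), could look like the following. TILES_PER_ROW is a made-up constant (the actual tileset image isn't shown here), and I use HEIGHT - 1 - y so the bottom-left tile sits exactly at the origin; the text below describes the same flip.

        // still in create(): build the static geometry for the layer
        int TILES_PER_ROW = 5; // assumption: tiles per row in the tileset image
        cache = new SpriteCache();
        cache.beginCache();
        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++) {
                int tile = tileLayer[y][x];
                // (x,y) offset of the tile inside the tileset, in tile units
                int textureX = tile % TILES_PER_ROW;
                int textureY = tile / TILES_PER_ROW;
                // world position; flip y so row 0 of the array ends up at the top
                float posX = x * TILE_SIZE;
                float posY = (HEIGHT - 1 - y) * TILE_SIZE;
                // +1 skips the left/top border of the tileset texture,
                // *(TILE_SIZE + 1) = *33 skips each tile's right/bottom border
                cache.add(tiles, posX, posY,
                          1 + textureX * (TILE_SIZE + 1),
                          1 + textureY * (TILE_SIZE + 1),
                          TILE_SIZE, TILE_SIZE);
            }
        }
        cacheId = cache.endCache();
    }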

First we instantiate a new SpriteCache instance that will hold the geometry of our tilemap layer. By calling cache.beginCache() we signal the SpriteCache that we are going to define a new batch of geometry. The two nested loops iterate over our tile layer array and generate a quad for each tile in the layer. To define the quad we need to know two things: the position of the quad in world space, where one unit equals one pixel of our viewport, and the texture region of the tile itself.

For the texture region of the current tile we use the linear index of the tile and calculate an (x,y) offset into the texture. That’s what the modulo and division operators are for.

For the position of the quad in world space we simply take the x and y tilemap coordinates and multiply them by the size of a tile in pixels (32 in this case). For the y coordinate we do a little bit more: we subtract the current y tilemap coordinate from the HEIGHT of the tilemap. This effectively flips the tiles on the y-axis so we get the same image as above (remember, y is up!).

The final call to cache.add() then creates a new quad in the cache, using the world position we just calculated and the texture region. The latter looks a little strange. But think about it: we have to adjust for the pixel border around each tile in the texture. We add 1 in each direction to compensate for the left and upper pixel border of the tileset texture and multiply textureX and textureY by 33 (=TILE_SIZE+1) to compensate for the right and bottom pixel border of each tile.

The last thing we do is get a handle to the geometry we just defined by calling cache.endCache(). With that we can now render the tilemap. Let's have a look at the code for the SimpleTilemapTest.render() method, shall we?
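
The render() listing is likewise missing from this copy; a minimal sketch that matches the description below (again assuming a reasonably current libgdx API) could be:

    @Override
    public void render () {
        // clear the screen
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        // update the camera matrices; only strictly needed every frame
        // once the camera starts moving (panning etc.)
        camera.update();
        cache.setProjectionMatrix(camera.combined);

        // draw the cached tilemap geometry
        cache.begin();
        cache.draw(cacheId);
        cache.end();
    }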

First we clear the screen. Next up we tell the OrthographicCamera to update its matrices. We could actually do this only once in create(), as camera parameters like viewport or position don't change. But if you want to add panning, for example, you'll need to call this after you change the camera's position, so we just leave it in there. Next we tell the SpriteCache to use the camera's combined matrix for rendering our cached tilemap geometry. And that's what we do in the rest of the method: tell the SpriteCache that we are going to draw, command it to draw the geometry of our tilemap we created in the create() method and finally tell it to finish all rendering. That's it. The rest of the class is just the other ApplicationListener methods, which are empty.

That wasn't too hard, was it? The total source code is 90 lines, and that includes the license header. Strip all that away and you end up with a little over 45 lines of code for a tilemap renderer that is already pretty generic. Here are a few things you might want to add:

  • The way we defined the tilemap layer sucks. We could store that information in a file for bigger profit. What format that file has doesn't really matter; we just need to be able to load it into an array as above and to reference the appropriate texture regions from the tileset texture
  • While SpriteCache is a pretty high performance class there's of course an upper limit to how much it can take. When you think of a platformer like Super Mario Bros. you'll find that a single level spans multiple screens. Our camera viewport only ever shows us a fraction of the complete level, so why render all tiles? What we'd want to do is called culling. This basically means figuring out which tiles are currently visible in the viewport and only rendering those. That's actually pretty simple, however it's also easy to get wrong. And by wrong I mean: reconstructing the geometry every frame, only adding those tiles that are currently visible (other frameworks do this with great "success"…). There's a better approach which I might talk about in a future blog post. Alternatively you could check out the forum thread here.
  • Our current renderer only allows a single layer. Many platform games have multiple layers though, like Replica Island, which has a background layer and a foreground layer (the objects are rendered separately). Extending the above class to support multiple layers is straightforward, but it leads us to another problem…
  • When we have multiple layers, stacked above each other, chances are high that the foreground layers don't have tiles in all places and that some of the tiles might have transparent areas. For this we need to enable blending and avoid constructing geometry for tiles that aren't there (see the sketch right after this list). Easy, right? Yes, it is. But the result might be a tad surprising
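
To make those last two points a bit more concrete, here is a rough sketch that reuses the names from the earlier sketches (tiles, cache, WIDTH, HEIGHT, TILE_SIZE, TILES_PER_ROW). It assumes a byte[layerCount][HEIGHT][WIDTH] array called layers in which -1 marks an empty cell, and it toggles blending with raw GL calls, since SpriteCache leaves blend state to the caller:

    // build one cache entry per layer; -1 marks "no tile here"
    int[] layerCacheIds = new int[layers.length];
    for (int l = 0; l < layers.length; l++) {
        cache.beginCache();
        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++) {
                int tile = layers[l][y][x];
                if (tile == -1) continue; // no geometry for empty cells
                int textureX = tile % TILES_PER_ROW;
                int textureY = tile / TILES_PER_ROW;
                cache.add(tiles, x * TILE_SIZE, (HEIGHT - 1 - y) * TILE_SIZE,
                          1 + textureX * (TILE_SIZE + 1),
                          1 + textureY * (TILE_SIZE + 1),
                          TILE_SIZE, TILE_SIZE);
            }
        }
        layerCacheIds[l] = cache.endCache();
    }

    // in render(): enable blending so transparent pixels of upper layers
    // let the layers below shine through, then draw back to front
    Gdx.gl.glEnable(GL20.GL_BLEND);
    Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
    cache.begin();
    for (int l = 0; l < layerCacheIds.length; l++) cache.draw(layerCacheIds[l]);
    cache.end();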

To test the impact of a multi-layer setup I wrote a little benchmark that is a lot like what we have above. It operates in a 480×320 screen space and renders a tilemap with 5 layers, composed of 15×10 tiles each, with a tile size of 32×32 pixels. The tileset I used is identical to the one above, so there are actually no transparent tiles. I also define all tiles for each layer. We can still measure the effect of blending with such a setup, as the GPU doesn't care whether the pixels in the texture actually have an alpha value < 1.0 or not. Blending is blending for the GPU, and oh boy does it hate blending. Here's the output of the benchmark:

For each layer I just randomly generate tiles, nothing too fancy. Of course none of the background layers shines through the foreground layers, but blending is still working as it should. So here are some benchmark results:

  • Hero, Android 1.5, MSM720xa chip: 43fps with blending, 44fps without blending
  • Milestone, Android 2.1.1, PowerVR SGX 350 chip: 22fps with blending, 46fps without blending
  • Nexus One, Android 2.2.1, Adreno 200 chip: 22fps with blending, 22fps without blending

So the Hero kills both the Droid and the Nexus One with a 5 layer tilemap. The interesting thing is that the Adreno doesn't care whether blending is enabled or not. It performs like shit in all cases. I have to investigate this a little further but my previous experience showed similar problems: fill the screen more than twice and you are screwed on Adreno. I assume it has early z-out so there's not a lot of overdraw actually. It seems to be more suited for 3D graphics. The PowerVR performs reasonably, more or less. Keep in mind though that the Hero has a lot fewer pixels to fill, so the comparison is not entirely fair.

How do we solve this issue? Don't use blending if possible. Merge tiles that have transparency into a separate geometry batch; that should bring up performance considerably. Another approach would be to construct tightly fitting geometries for tiles that have transparent pixels. You could then completely disable blending. It's a little bit more involved and might actually decrease performance due to the higher computational cost of transforming more vertices. I need to test that at some point.
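
To make the first suggestion a little more concrete, here's a very rough sketch. It reuses the names from the sketches above plus two hypothetical helpers: tileHasTransparency(tile), which would look up whether a tile contains any pixels with alpha < 1, and addTile(x, y, tile), which would wrap the cache.add() call shown earlier.

    // split the layer into an opaque and a translucent batch
    cache.beginCache();
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            if (!tileHasTransparency(tileLayer[y][x])) addTile(x, y, tileLayer[y][x]);
    int opaqueId = cache.endCache();

    cache.beginCache();
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            if (tileHasTransparency(tileLayer[y][x])) addTile(x, y, tileLayer[y][x]);
    int blendedId = cache.endCache();

    // in render(): draw the opaque batch without blending and only pay
    // the blending cost for the (hopefully few) translucent tiles
    cache.begin();
    Gdx.gl.glDisable(GL20.GL_BLEND);
    cache.draw(opaqueId);
    Gdx.gl.glEnable(GL20.GL_BLEND);
    Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
    cache.draw(blendedId);
    cache.end();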

Well, that's it for today's installment of "Who can create the biggest wall of text?". You can find the two sample projects for the simple tilemap renderer above here. The source for the benchmark can be found here.

9 thoughts on "Orthogonal Tilemaps and Blending with Libgdx"

  1. Did you try using alpha testing instead of blending?

    glAlphaFunc(GL_GREATER, 0.5);
    glEnable(GL_ALPHA_TEST);

    For a game with a retro pixelated look, it might be a better fit, and it allows you to use RGB5551, with a 1-bit alpha channel, saving some texture space too.

    What I’m wondering about, of course, are the benchmarks 🙂

  2. Yeah, I tried alpha testing as well. Same results. The chip manufacturers also explicitly state that one should not use alpha testing, as it messes with the tiled deferred rendering even more.

    Using a 16-bit texture format (argb4444, argb1555) doesn't help either, I'm afraid. The link to the benchmark is at the bottom of the blog post.

  3. Cool, just what I was looking for =) Now I have to wait and test at work until I get my own device, because on the emulator it runs at 3 FPS =/

    great job!

  4. oh, another thing

    I didn't get why you said having a border is a good thing. My tileset doesn't have any borders, so I'll have to adjust my code to compensate for this. Or would you advise changing the tileset to a "border version"?

    thanks

  5. ok, one last question 😛

    You mention culling, but I have no idea how to do that in OpenGL. For example, in Allegro you could use double buffering and then just blit the "viewport" to the final buffer, but I guess the concept in OpenGL is quite different.

    I followed the link you sent, but TMX looks like overkill for what I'm looking for: just one layer, a pretty small tileset, nothing too fancy =)

  6. Hi, great tutorial!
    Works perfectly fine on the Android emulator, but when I want to start the Java desktop project I get the following error:

    Exception in thread "main" com.badlogic.gdx.utils.GdxRuntimeException: Creating window failed
    at com.badlogic.gdx.backends.jogl.JoglApplication.<init>(Unknown Source)
    at com.badlogic.gdx.tilemap.SimpleTilemapTestDeskop.main(SimpleTilemapTestDeskop.java:19)
    Caused by: java.lang.reflect.InvocationTargetException
    at java.awt.EventQueue.invokeAndWait(Unknown Source)
    at javax.swing.SwingUtilities.invokeAndWait(Unknown Source)
    … 2 more
    Caused by: java.lang.UnsatisfiedLinkError: C:\Users\ChrizZz\AppData\Local\Temp\690104323656719gluegen-rt-win64.dll: Can’t find dependent libraries
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(Unknown Source)
    at java.lang.ClassLoader.loadLibrary(Unknown Source)
    at java.lang.Runtime.load0(Unknown Source)
    at java.lang.System.load(Unknown Source)
    at com.badlogic.gdx.backends.jogl.JoglNativesLoader.loadLibrary(Unknown Source)
    at com.badlogic.gdx.backends.jogl.JoglNativesLoader.loadLibraries(Unknown Source)
    at com.badlogic.gdx.backends.jogl.JoglApplication.initialize(Unknown Source)
    at com.badlogic.gdx.backends.jogl.JoglApplication$1.run(Unknown Source)
    at java.awt.event.InvocationEvent.dispatch(Unknown Source)
    at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
    at java.awt.EventQueue.access$000(Unknown Source)
    at java.awt.EventQueue$1.run(Unknown Source)
    at java.awt.EventQueue$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.security.AccessControlContext$1.doIntersectionPrivilege(Unknown Source)
    at java.awt.EventQueue.dispatchEvent(Unknown Source)
    at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
    at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.run(Unknown Source)

    Note: I even added the gdx-backend-jogl-natives (as recommended in the "MyFirstTriangle" tutorial) but it still won't work :/

    (running under win7 64 bit)

  7. update:
    I made some changes in the libs... now I am using the gdx.jar from this actual project (Orthogonal Tilemaps and Blending with Libgdx) and the gdx-backend-jogl.jar and the gdx-natives.jar from the latest libgdx nightly rar.

    I'm only getting this error message now:

    Exception in thread “main” java.lang.NoClassDefFoundError: com/badlogic/gdx/Preferences
    at com.badlogic.gdx.tilemap.SimpleTilemapTestDeskop.main(SimpleTilemapTestDeskop.java:19)
    Caused by: java.lang.ClassNotFoundException: com.badlogic.gdx.Preferences
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    … 1 more
