Isometric Tilemap Rendering with libgdx

Mcortez asked for some advice on iso tile rendering on IRC tonight. I sent him to the example in the SVN repo, but figured it's a bit lacking :). So here's how you can easily draw iso tiles with Sprites, a SpriteBatch and a Camera.

The basic idea goes like this:

  • We want to work in full 3D! Our tiles should be arranged in the x/z plane
  • We want the standard 45° isometric look.
  • We want to use our SpriteBatch and Sprite classes to render the tiles. Mesh is confusing, right?
  • We want to have a camera that is panable by dragging it around.
  • We want to check whether a touch hit a tile in the map

Let’s begin with the 3D stuff. Here’s an image that should give you an impression of how things will look, logically and geometrically.

For simplicity's sake we will have 10×10 tiles extending along the positive x-axis and the positive z-axis. The tile at (0,0) is at the origin, the tile at (1,0) is to the right of that one, the tile at (0, 9) is the one in the front left, and so on. Note that the second coordinate is a Z-COORDINATE, not a y-coordinate as in the usual SpriteBatch rendering business.

The ugly sphere/cone hybrid thing that screams “kill me” is actually our camera. The way the cone is oriented should indicate where the camera is looking towards. It looks in the direction (-1, -1, -1) which means it looks to the left (first -1), downwards (second -1) and from the front to the back (third -1). This will give us our 45° look.

SpriteBatch works in the x/y plane, and our Sprite’s positions are given in x/y as well. How can we make those two things work in the x/z plane instead? Easy! We create a rotation matrix that will rotate the plane we work in by 90° around the x-axis!

Let's put this into code. I wrote a new test example (actually I rewrote the old one). We'll go through it at a slow pace.

Don't get confused by the GdxTest thing, this just means that our class implements the ApplicationListener interface. The GdxTest class has stub implementations of the ApplicationListener methods to reduce the size of the test code.
You also see that it implements the InputProcessor interface; we'll need that later on to make the camera draggable.

The class has a few members:

  • A Texture that stores the image of the tile we render. You can of course use TextureRegions or whatever else can be rendered with a SpriteBatch; I use a plain Texture here for simplicity.
  • An OrthographicCamera. Surprise! It's actually a full-featured 3D camera with an orthographic projection. Look at this blog post if you want to know more about ortho and perspective cameras. The gist: it has a position in 3D space as well as a direction.
  • A SpriteBatch, for obvious reasons.
  • A two dimensional array of Sprites which represent our tiles. Indexing works like this: sprites[x][z], where x is the x-coordinate and z is the z-coordinate. Just like in the first image above.
  • A Matrix4 we'll use to make the SpriteBatch draw in the x/z plane, by rotating everything from the x/y plane to the x/z plane.
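The original listing isn't reproduced here, but from that description the skeleton of the class could look something like this (the class name and imports are my reconstruction of the test code):

```java
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.Matrix4;

public class IsoCamTest extends GdxTest implements InputProcessor {
    Texture texture;                               // the tile image
    OrthographicCamera cam;                        // full 3D camera, ortho projection
    SpriteBatch batch;
    final Sprite[][] sprites = new Sprite[10][10]; // indexed as sprites[x][z]
    final Matrix4 matrix = new Matrix4();          // rotates x/y into x/z

    // create(), render() and the input methods are discussed below
}
```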

In the create() method I first load the texture, business as usual. Then I create the OrthographicCamera. That looks a bit like black magic. What does the 10 stand for, and why do I multiply the second 10 by the display's height divided by its width?

The answer is a bit complex. If you read the article I linked to in the list above, you should know that an OrthographicCamera has a so-called viewport. That's the area the camera's image gets projected to. The multiplication by height / width is there so that the aspect ratio of the viewport matches that of the actual display.

The two 10s define the width and height of the viewport. I chose 10×10 units. This means that if our camera looked straight down on the x/z plane (direction=(0, -1, 0) instead of (-1, -1, -1)) it would show us an area of 10×10 units of the world. It would actually show a little less along the display's y-axis since we multiply the viewport height by the display's aspect ratio. Make sense? Hopefully!

Next I set the camera's position to (5, 5, 10) so that it is at the middle of the front edge of the tile map, hovering 5 units above that edge. Look at the first image above and try to make sense of that! I also set the camera's direction to (-1, -1, -1) so it looks to the left, downwards and along the negative z-axis.

The cam.near and cam.far fields define how close and how far away an object can be from the camera and still be visible in its view (frustum). These values are always positive, as they are distances, and independent of the camera's direction. An object behind the camera will of course not be visible, and neither will an object further away than the far value.

The next line might look like magic but it's really simple! I just set the matrix to a rotation of 90° counter-clockwise around the x-axis. As mentioned earlier, we'll use that matrix to tell the SpriteBatch to draw everything in x/z instead of x/y, just as shown in the second image above.

Finally I create 10×10 sprites with a size of 1×1 units, located at the coordinates (0,0) through (9,9), as in the first image above. Note that we can use any units we want here! We are working in 3D. I tend to use "easy" units, so 1 makes sense as it could represent 1 meter. The SpriteBatch must of course also be created.

The last statement sets our IsoCamTest instance to be the InputProcessor. We need to do this for our dragging code which will be located in the touchDragged() method.
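Putting those steps together, a sketch of what create() might look like. The asset path and the near/far values are assumptions (the text doesn't give them); everything else follows the description above:

```java
@Override
public void create () {
    // load the tile image; the path is a placeholder
    texture = new Texture(Gdx.files.internal("data/badlogic.jpg"));

    // 10x10 unit viewport, height corrected by the display's aspect ratio
    cam = new OrthographicCamera(10,
        10 * (Gdx.graphics.getHeight() / (float)Gdx.graphics.getWidth()));
    cam.position.set(5, 5, 10);    // middle of the map's front edge, 5 units up
    cam.direction.set(-1, -1, -1); // the 45 degree iso look
    cam.near = 1;                  // assumed values, not given in the text
    cam.far = 100;

    // rotate the x/y plane 90 degrees counter-clockwise around the x-axis
    matrix.setToRotation(new Vector3(1, 0, 0), 90);

    // one 1x1 unit sprite per tile at (x, z), as in the first image
    for (int z = 0; z < 10; z++) {
        for (int x = 0; x < 10; x++) {
            sprites[x][z] = new Sprite(texture);
            sprites[x][z].setPosition(x, z);
            sprites[x][z].setSize(1, 1);
        }
    }

    batch = new SpriteBatch();
    Gdx.input.setInputProcessor(this); // for the dragging code later on
}
```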

On to the draw method!
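The listing itself isn't reproduced here; based on the walkthrough that follows, it would look roughly like this (checkTileTouched() is the intersection-testing method discussed further down):

```java
@Override
public void render () {
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // clear the screen
    cam.update();                             // recompute the camera matrices

    batch.setProjectionMatrix(cam.combined);  // view everything from the camera
    batch.setTransformMatrix(matrix);         // draw in x/z instead of x/y
    batch.begin();
    for (int z = 0; z < 10; z++) {
        for (int x = 0; x < 10; x++) {
            sprites[x][z].draw(batch);
        }
    }
    batch.end();

    checkTileTouched();
}
```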

Wow, that was surprisingly short. I start off by clearing the screen. Since we work in 3D now we could also clear the so-called z-buffer, but we'll leave that for now. Next I update the camera to make sure all its matrices are up to date according to its position, direction and other parameters.

Then comes the magic part. First I set the camera's combined matrix (look into the other blog post I linked above!) as the projection matrix of the SpriteBatch. Next I set the transform matrix of the SpriteBatch to the rotation matrix we defined in the create() method. These two things have the effect that everything will be drawn from the camera's point of view AND that the SpriteBatch will render all Sprites/TextureRegions/whathaveyou in the x/z plane instead of the x/y plane.

From there on it's a simple matter of iterating over our tiles and rendering them via the SpriteBatch. Easy! The last method checks whether a finger went down on the screen and tries to find out whether a tile was hit. But let's have a look at how things look so far!

Cool, exactly what we wanted! As you can see, the badlogic icons are upside down. That's because the y-axis is now aligned with the z-axis, which points somewhat out of the screen. You can easily fix that if necessary by flipping the texture coordinates of your sprites/texture regions/whathaveyou vertically.

On to the intersection testing method called checkTileTouched(). Our goal is to highlight the last touched tile with the color red. So we first have to figure out whether a tile was hit, then set its color to red. If another tile was hit previously we just reset that one's color to white again. Code:
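A sketch of the method and the extra members it needs, reconstructed from the description below (the exact field names are my assumptions):

```java
final Plane xzPlane = new Plane(new Vector3(0, 1, 0), 0); // normal (0,1,0), distance 0
final Vector3 intersection = new Vector3();
Sprite lastSelectedTile = null;

private void checkTileTouched () {
    if (Gdx.input.justTouched()) {
        Ray pickRay = cam.getPickRay(Gdx.input.getX(), Gdx.input.getY());
        // the pick ray always hits the x/z plane, so we ignore the return value
        Intersector.intersectRayPlane(pickRay, xzPlane, intersection);
        int x = (int)intersection.x;
        int z = (int)intersection.z;
        if (x >= 0 && x < 10 && z >= 0 && z < 10) {
            // reset the previously selected tile, then color the new one red
            if (lastSelectedTile != null) lastSelectedTile.setColor(1, 1, 1, 1);
            lastSelectedTile = sprites[x][z];
            lastSelectedTile.setColor(1, 0, 0, 1);
        }
    }
}
```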

Scary, eh? Nah, it's really simple. What we are going to do is create a ray via the camera, based on the display coordinates where the touch event happened. That ray originates at our camera's position and goes into the direction the camera looks in the world. We just need to intersect that ray with something. In this case we want to intersect it with the x/z plane, because that is where our tiles are located! Once we have that intersection point we can check whether its coordinates are within a tile.

We need a few additional members, namely the xzPlane, a vector that stores the intersection point and a member that stores the last touched tile, or null if no tile was touched. The plane is defined via its normal and its distance to the origin, just as you learned in school (hopefully).

In the method we start by checking whether a touch down event happened. If that is the case we get a picking Ray from our camera by feeding the touch coordinates to the camera's getPickRay() method. That ray is a 3D ray with an origin (roughly the camera's location) and a direction, both given in our 3D world coordinate system. We intersect that ray with our xzPlane via the Intersector class' intersectRayPlane() method. The result is stored in the intersection member. The method returns a boolean indicating whether the two geometric objects intersect. In our case the ray will ALWAYS intersect the x/z plane, so we don't have to check the return value.

To get the indices into our tile array we simply cast the intersection point's x and z coordinates to int. We can do that since our tiles have a size of 1×1 units! Finally we check whether the hit tile is in range (between 0 and 9) and if that's the case we set its color and store it as the last touched tile. A previously touched tile will have its color reset to white again.
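The math behind all this boils down to plain arithmetic; here's a libgdx-free sketch of the ray/plane intersection and the tile-index cast, using the camera values from this tutorial:

```java
// A ray origin + t * direction hits the x/z plane (y = 0) at
// t = -origin.y / direction.y; casting the hit point's x/z to int
// then yields the tile indices.
public class TilePickMath {
    /** Returns {x, y, z} where the ray (ox,oy,oz) + t*(dx,dy,dz) hits y = 0. */
    public static float[] intersectXZPlane(float ox, float oy, float oz,
                                           float dx, float dy, float dz) {
        float t = -oy / dy; // dy must be non-zero, i.e. ray not parallel to the plane
        return new float[] { ox + t * dx, oy + t * dy, oz + t * dz };
    }

    public static void main(String[] args) {
        // a ray from the camera position (5, 5, 10) along (-1, -1, -1)
        float[] hit = intersectXZPlane(5, 5, 10, -1, -1, -1);
        int tileX = (int)hit[0];
        int tileZ = (int)hit[2];
        System.out.println("tile: " + tileX + ", " + tileZ); // prints "tile: 0, 5"
    }
}
```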

And that is all it takes to do basic intersection testing with an iso tilemap!

The final piece of the puzzle is dragging the camera around via touch or mouse dragging. That’s a little more involved. Let me post the code first:
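A reconstruction of what that code plausibly looks like (the touchDragged()/touchUp() signatures follow the InputProcessor interface; the field names are my assumptions):

```java
final Vector3 curr = new Vector3();          // current touch point on the x/z plane
final Vector3 last = new Vector3(-1, -1, -1); // last touch position in screen coords
final Vector3 delta = new Vector3();          // helper for the difference vector

@Override
public boolean touchDragged (int x, int y, int pointer) {
    // current touch position, projected onto the x/z plane
    Intersector.intersectRayPlane(cam.getPickRay(x, y), xzPlane, curr);

    if (!(last.x == -1 && last.y == -1 && last.z == -1)) {
        // last touch position on the x/z plane
        Intersector.intersectRayPlane(cam.getPickRay(last.x, last.y), xzPlane, delta);
        // move the camera by the difference so it sticks to the finger
        delta.sub(curr);
        cam.position.add(delta.x, delta.y, delta.z);
    }
    last.set(x, y, 0); // remember the 2D screen coordinates for the next event
    return false;
}

@Override
public boolean touchUp (int x, int y, int pointer, int button) {
    last.set(-1, -1, -1); // finger lifted, no previous touch position anymore
    return false;
}
```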

We need three members: curr stores the currently touched point on the x/z plane. last stores the last mouse/touch coordinates. I store it in a Vector3 because I'm a bit silly; it's really just the 2D screen coordinates of the last touch event. Finally I have a helper Vector3 called delta that we'll use to calculate the distance between the current touch point and the last touch point.

The method we implement the touch dragging in is the touchDragged() method. Really. What we want to do goes like this: take the difference between the current touch position in the 3D world and the last touch position in the 3D world and add that 3D vector to our camera's position. This will make it look like the camera is really attached to our finger.

To achieve this we use our old friend the pick ray, plane/ray intersections and some basic vector math. The first thing we do is set the current touch position to the intersection between the pick ray, derived from the current touch coordinates, and the xzPlane we defined earlier. Next we check whether there was a previous touch drag event; we need at least one old mouse position to actually do the dragging. I do this by checking whether all the coordinates of the last vector are -1. A little hackish, but we'll survive.

If we have a last touch event I also calculate its position in the 3D world, as usual with a pick ray, the plane and the Intersector class. Now we have the positions of the current and the last touch event in the 3D world. We just take the difference vector and apply it to our camera's position so the camera moves! Simple, eh? Finally we remember the current touch position in 2D screen coordinates for the next drag event.

The last bit of code will just make sure we reset the last touch position to (-1,-1,-1) in case the finger is lifted (or the mouse button is released).

And that is all. What we have now is:

  • A full 3D iso tile renderer.
  • Intersection testing with the tilemap and mouse/touch events.
  • A draggable camera.

Here's the überbonus: we are working in 3D, so you could even display 3D objects on top of your tilemap (enable z-testing for that)! Furthermore, you can disable blending while rendering the tiles if they don't have any transparent pixels. That can give you a HUGE performance increase.
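If your tiles are fully opaque, the blending tip is as simple as wrapping the tile-drawing loop from render() like this:

```java
batch.disableBlending(); // opaque tiles don't need alpha blending
batch.begin();
for (int z = 0; z < 10; z++) {
    for (int x = 0; x < 10; x++) {
        sprites[x][z].draw(batch);
    }
}
batch.end();
batch.enableBlending();  // restore the default for anything drawn afterwards
```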

Now, take that code and make the best of it. You can optimize the tile rendering via a SpriteCache since your tiles are likely to not change at all 🙂 You can find the full working code at

box2d velocity threshold in libgdx

Ever had stuff sticking to walls? That was due to the b2_velocityThreshold being set to 1.0f by default. In the original box2d sources that one was a #define, so it could not be changed. I fixed that in our version of box2d. You can now use the following static methods of the World class to manipulate that threshold globally.
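The calls presumably look something like this (the method names are my reading of "static methods of the World class"; check the javadocs of your nightly):

```java
// query and change the global velocity threshold; 1.0f is the default
float threshold = World.getVelocityThreshold();
World.setVelocityThreshold(0.0f); // e.g. effectively disable the threshold
```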

Nightlies are building and should be ready about 6 minutes from now. Use at your own risk.

Application.exit() (I’m on a spree…)

I just implemented another often requested feature: Application.exit(). It will close your app. Surprise. Here’s a simple app from our tests:
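The test app itself isn't reproduced here; a minimal sketch of what it might look like (the five-second timeout is an arbitrary example of mine):

```java
public class ExitTest extends GdxTest {
    long startTime;

    @Override
    public void create () {
        startTime = System.currentTimeMillis();
    }

    @Override
    public void render () {
        // schedule termination of the app after five seconds
        if (System.currentTimeMillis() - startTime > 5000) Gdx.app.exit();
    }

    @Override
    public void pause () {
        Gdx.app.log("ExitTest", "paused"); // still fired after exit() returns
    }

    @Override
    public void dispose () {
        Gdx.app.log("ExitTest", "disposed");
    }
}
```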

Note that the Application.exit() method will schedule the termination of your app in the future! This means that the exit() method will return, and the pause() and dispose() events will be fired afterwards. It's the same as Activity.finish() on Android; on the desktop I post a Runnable to the rendering thread.