Isometric Tilemap Rendering with libgdx

Mcortez asked for some advice on iso tile rendering on IRC tonight. I sent him to the example in the SVN repo, but figured it was a bit lacking :). So here’s how you can easily draw iso tiles with Sprites, a SpriteBatch and a Camera.

The basic idea goes like this:

  • We want to work in full 3D! Our tiles should be arranged in the x/z plane
  • We want the standard 45° isometric look.
  • We want to use our SpriteBatch and Sprite classes to render the tiles. Mesh is confusing, right?
  • We want to have a camera that can be panned by dragging it around.
  • We want to check whether a touch hit a tile in the map

Let’s begin with the 3D stuff. Here’s an image that should give you an impression of how things will look, logically and geometrically.

For simplicity’s sake we will have 10×10 tiles extending along the positive x-axis and the positive z-axis. The tile at (0,0) is at the origin, the tile at (1,0) is to the right of that one, the tile at (0,9) is the one in the front left, and so on. Note that the second coordinate is a Z-COORDINATE, not a y-coordinate as in the usual SpriteBatch rendering business.

The ugly sphere/cone hybrid thing that screams “kill me” is actually our camera. The way the cone is oriented should indicate where the camera is looking towards. It looks in the direction (-1, -1, -1) which means it looks to the left (first -1), downwards (second -1) and from the front to the back (third -1). This will give us our 45° look.

SpriteBatch works in the x/y plane, and our Sprite’s positions are given in x/y as well. How can we make those two things work in the x/z plane instead? Easy! We create a rotation matrix that will rotate the plane we work in by 90° around the x-axis!

Let’s put this into code. I wrote a new test example (actually I rewrote the old one). We’ll go through it at a slow pace.

Don’t get confused by the GdxTest thing; it just means that our class implements the ApplicationListener interface. The GdxTest class has stub implementations of the ApplicationListener methods to reduce the size of test code.
You’ll also see that it implements the InputProcessor interface; we’ll need that later on to make the camera draggable.

The class has a few members:

  • A Texture that stores the image of the tile we render. You can of course use TextureRegions or whatever can be rendered with a SpriteBatch. I use a simple Texture here for simplicity.
  • An OrthographicCamera. Surprise! It’s actually a full-featured 3D camera with an orthographic projection. Look at this blog post if you want to know more about ortho and perspective cameras. The gist: it has a position in 3D space as well as a direction.
  • A SpriteBatch, for obvious reasons.
  • A two dimensional array of Sprites which represent our tiles. Indexing works like this: sprites[x][z], where x is the x-coordinate and z is the z-coordinate. Just like in the first image above.
  • A Matrix4 we’ll use to make the SpriteBatch draw in the x/z plane by rotating everything from the x/y plane to the x/z plane.
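Assuming member names along the lines described (the actual test code may name them differently), the fields could look like this fragment:

```java
Texture texture;
OrthographicCamera cam;
SpriteBatch batch;
final Sprite[][] sprites = new Sprite[10][10]; // indexed as sprites[x][z]
final Matrix4 matrix = new Matrix4();          // x/y -> x/z rotation
```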

In the create() method I first load the texture, business as usual. Then I create the OrthographicCamera. That looks a bit like black magic. What does the 10 stand for, and why do I multiply the second 10 by the display’s height divided by its width?

The answer is a bit complex. If you read the article I linked to in the list above, you should know that an OrthographicCamera has a so-called viewport. That’s the area that the image the camera captures gets projected to. The multiplication by height / width is there so that the aspect ratio of the viewport matches the one of the actual display.

The two 10s define the width and height of the viewport. I chose 10×10 units. This means that if our camera looked straight down on the x/z plane (direction (0, -1, 0) instead of (-1, -1, -1)) it would show us an area of 10×10 units of the world. It would actually show a little less on the display’s y-axis since we multiply the viewport height by the display’s aspect ratio. Make sense? Hopefully!

Next I set the camera’s position to (5, 5, 10) so that it is at the middle of the front edge of the tile map, hovering 5 units above that edge. Look at the first image above and try to make sense of that! I also set the camera’s direction to (-1, -1, -1) so it looks to the left, downwards and along the negative z-axis.

The cam.near and cam.far fields define how close and how far away an object can be from the camera and still be visible in its view (frustum). These values are always positive, as they are distances independent of the camera’s direction. An object behind the camera would of course not be visible. An object farther away than the far value will not be visible either.

The next line might look like magic but it’s really simple! I just set the matrix to a rotation of 90° counter-clockwise around the x-axis. As mentioned earlier we’ll use that matrix to tell the SpriteBatch to draw everything in x/z instead of x/y, just as shown in the second image above.
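The line being described is presumably a single call along the lines of `matrix.setToRotation(new Vector3(1, 0, 0), 90);`. To see why this moves sprites from the x/y plane into the x/z plane, here is the underlying rotation math in plain Java (no libgdx; the class and method names are mine, for illustration only):

```java
public class RotateX90 {
    // Rotate (x, y, z) by 'degrees' around the x-axis, counter-clockwise
    // when looking from +x towards the origin.
    static double[] rotateX(double x, double y, double z, double degrees) {
        double r = Math.toRadians(degrees);
        double c = Math.cos(r), s = Math.sin(r);
        // standard rotation around x: y' = y*cos - z*sin, z' = y*sin + z*cos
        return new double[] { x, y * c - z * s, y * s + z * c };
    }

    public static void main(String[] args) {
        // a sprite corner at (3, 2) in the x/y plane...
        double[] p = rotateX(3, 2, 0, 90);
        // ...ends up at (3, 0, 2): the y-coordinate has become the z-coordinate
        System.out.printf("(%.1f, %.1f, %.1f)%n", p[0], p[1], p[2]); // prints (3.0, 0.0, 2.0)
    }
}
```

So a sprite positioned at (x, y) by the SpriteBatch lands at (x, 0, y) in the world, which is exactly the x/z arrangement from the first image.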

Finally I create 10×10 sprites with a size of 1×1 units, located at the coordinates (0,0) through (9,9), as in the first image above. Note that we can use any units we want here! We are working in 3D. I tend to use “easy” units, so 1 makes sense as it could represent 1 meter. The SpriteBatch must also be created, of course.

The last statement sets our IsoCamTest instance to be the InputProcessor. We need to do this for our dragging code which will be located in the touchDragged() method.
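The create() listing itself didn’t survive on this page, so here is a sketch of what it plausibly looks like, pieced together from the description above (the asset path and the near/far values are my assumptions):

```java
public void create() {
    // business as usual: load the tile image (path is an assumption)
    texture = new Texture(Gdx.files.internal("data/badlogic.jpg"));

    // 10x10 unit viewport, height corrected by the display's aspect ratio
    float aspect = Gdx.graphics.getHeight() / (float)Gdx.graphics.getWidth();
    cam = new OrthographicCamera(10, 10 * aspect);
    cam.position.set(5, 5, 10);    // middle of the map's front edge, 5 units up
    cam.direction.set(-1, -1, -1); // left, down, and along the negative z-axis
    cam.near = 1;
    cam.far = 100;

    // rotate the SpriteBatch's x/y plane into the x/z plane
    matrix.setToRotation(new Vector3(1, 0, 0), 90);

    batch = new SpriteBatch();
    for (int z = 0; z < 10; z++) {
        for (int x = 0; x < 10; x++) {
            sprites[x][z] = new Sprite(texture);
            sprites[x][z].setPosition(x, z); // the second coordinate ends up on the z-axis
            sprites[x][z].setSize(1, 1);
        }
    }

    // receive touch events in touchDragged() & co.
    Gdx.input.setInputProcessor(this);
}
```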

On to the draw method!

Wow, that was surprisingly short. I start off by clearing the screen. Since we work in 3D now we could also clear the so-called z-buffer, but we’ll leave that for now. Next I update the camera to make sure all its matrices are up to date according to its position, direction and other parameters.

Then comes the magic part. First I set the camera’s combined matrix (look into the other blog post I linked above!) as the projection matrix of the SpriteBatch. Next I set the transform matrix of the SpriteBatch to the rotation matrix we defined in the create() method. These two things have the effect that everything will be drawn from the camera’s point of view AND that the SpriteBatch will render all Sprites/TextureRegions/what have you in the x/z plane instead of the x/y plane.

From there on it’s a simple matter of iterating over our tiles and rendering them via the SpriteBatch. Easy! The last method checks whether a finger went down on the screen and will try to find out whether a tile was hit. But let’s have a look at how things look so far!
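For reference, the render method described above boils down to something like this (a sketch against the libgdx API, not the verbatim test code):

```java
public void render() {
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // we could also clear the z-buffer here
    cam.update(); // recompute the camera's matrices from position, direction etc.

    batch.setProjectionMatrix(cam.combined); // draw from the camera's point of view...
    batch.setTransformMatrix(matrix);        // ...and in the x/z plane
    batch.begin();
    for (int z = 0; z < 10; z++)
        for (int x = 0; x < 10; x++)
            sprites[x][z].draw(batch);
    batch.end();

    checkTileTouched();
}
```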

Cool, exactly what we wanted! As you can see, the badlogic icons are upside down. That’s because the y-axis is now aligned with the z-axis, which points somewhat out of the screen. You can easily fix that if necessary by flipping the texture coordinates of your sprites/texture regions vertically.

On to the intersection testing method called checkTileTouched(). Our goal is to highlight the last touched tile with the color red. So we first have to figure out if a tile was hit, then set its color to red. If another tile was hit previously we just reset that one’s color to white again. Code:

Scary, eh? Nah, it’s really simple. What we are going to do is create a ray via the camera, based on the display coordinates at which the touch event happened. That ray originates from our camera’s position and goes into the direction the camera looks in the world. We just need to intersect that ray with something. In this case we want to intersect it with the x/z plane, because that is where our tiles are located! Once we have that intersection point we can check whether its coordinates are within a tile.
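The ray/plane intersection itself is just a little vector math. Here is a standalone sketch in plain Java (no libgdx; names are mine) using our setup: a ray from the camera position (5, 5, 10) along the view direction (-1, -1, -1), intersected with the x/z plane y = 0:

```java
public class RayPlaneDemo {
    // Intersect the ray origin + t*dir with the x/z plane (y = 0).
    // Returns the hit point, or null if the ray is parallel to or points away from the plane.
    static double[] intersectXZ(double[] origin, double[] dir) {
        if (dir[1] == 0) return null;   // parallel: no single hit point
        double t = -origin[1] / dir[1]; // solve origin.y + t*dir.y = 0
        if (t < 0) return null;         // plane is behind the ray
        return new double[] { origin[0] + t * dir[0], 0, origin[2] + t * dir[2] };
    }

    public static void main(String[] args) {
        double[] hit = intersectXZ(new double[]{5, 5, 10}, new double[]{-1, -1, -1});
        System.out.printf("(%.1f, %.1f, %.1f)%n", hit[0], hit[1], hit[2]); // prints (0.0, 0.0, 5.0)
        // casting x and z to int gives tile (0, 5)
    }
}
```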

We need a few additional members: the xzPlane, a vector that stores the intersection point, and a member that stores the last touched tile, or null if no tile was touched. The plane is defined via its normal and its distance to the origin, just as you learned in school (hopefully).

In the method we start by checking whether a touch down event happened. If that is the case we get a picking Ray from our camera by feeding the touch coordinates to the camera’s getPickRay() method. That ray is a 3D ray with an origin (roughly the camera’s location) and a direction, both given in our 3D world coordinate system. We intersect that ray with our xzPlane via the Intersector class’ intersectRayPlane() method. The result is stored in the intersection member. The method returns a boolean indicating whether the two geometrical objects intersect. In our case the ray will ALWAYS intersect the x/z plane, so we don’t have to check the return value.

To get the indices into our tile array we simply cast the intersection point’s x and z coordinates to int. We can do that since our tiles have a size of 1×1 units! Finally we check whether the hit tile is in range (between 0 and 9), and if that’s the case we set its color and store it as the last touched tile. A previously touched tile will have its color reset to white again.
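Putting that together, checkTileTouched() plausibly reads like this (a sketch; the member names are mine):

```java
final Plane xzPlane = new Plane(new Vector3(0, 1, 0), 0); // normal (0,1,0), distance 0
final Vector3 intersection = new Vector3();
Sprite lastSelectedTile = null;

private void checkTileTouched() {
    if (Gdx.input.justTouched()) {
        Ray pickRay = cam.getPickRay(Gdx.input.getX(), Gdx.input.getY());
        // always hits the x/z plane in our setup, no need to check the boolean
        Intersector.intersectRayPlane(pickRay, xzPlane, intersection);
        int x = (int)intersection.x; // works because tiles are 1x1 units
        int z = (int)intersection.z;

        if (lastSelectedTile != null) lastSelectedTile.setColor(1, 1, 1, 1);
        lastSelectedTile = null;
        if (x >= 0 && x < 10 && z >= 0 && z < 10) {
            lastSelectedTile = sprites[x][z];
            lastSelectedTile.setColor(1, 0, 0, 1);
        }
    }
}
```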

And that is all it takes to do basic intersection testing with an iso tilemap!

The final piece of the puzzle is dragging the camera around via touch or mouse dragging. That’s a little more involved. Let me post the code first:

We need three members: curr stores the currently touched point on the x/z plane. last stores the last mouse/touch coordinates. I store it in a Vector3 because I’m a bit silly; it’s really just the 2D screen coordinates of the last touch event. Finally I have a helper Vector3 called delta that we’ll use to calculate the distance between the current touch point and the last touch point.

The method we implement the touch dragging in is the touchDragged() method. Really. What we want to do goes like this: take the difference between the current touch position in the 3D world and the last touch position in the 3D world, and add that 3D vector to our camera’s position. This will make it look like the camera is really attached to our finger.

To achieve this we use our old friend the pick ray, plane/ray intersections and some basic vector math. The first thing we do is set the current touch position to the intersection between the pick ray, derived from the current touch coordinates, and the xzPlane we defined earlier. Next we check if there was a previous touch drag event; we need at least one old mouse position to actually do the dragging. I do this by checking whether all the coordinates of the last vector are -1. A little hackish, but we’ll survive.

If we have a last touch event I also calculate its position in the 3D world, as usual with a pick ray, the plane and the Intersector class. Now we have the positions of the current and last touch events in the 3D world. We just take the difference vector and apply it to our camera position so the camera moves! Simple, eh? Finally we remember the current touch position in 2D screen coordinates for the next drag event.

The last bit of code will just make sure we reset the last touch position to (-1,-1,-1) in case the finger is lifted (or the mouse button is released).
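The dragging code, sketched from the description above (using the InputProcessor signatures of the libgdx versions of that era; member names are mine):

```java
final Vector3 curr = new Vector3();           // current touch point on the x/z plane
final Vector3 last = new Vector3(-1, -1, -1); // last 2D screen coords, -1 = none yet
final Vector3 delta = new Vector3();

public boolean touchDragged(int x, int y, int pointer) {
    // current touch position in the 3D world
    Intersector.intersectRayPlane(cam.getPickRay(x, y), xzPlane, curr);

    if (!(last.x == -1 && last.y == -1 && last.z == -1)) {
        // last touch position in the 3D world
        Intersector.intersectRayPlane(cam.getPickRay(last.x, last.y), xzPlane, delta);
        // difference between last and current point, applied to the camera
        delta.sub(curr);
        cam.position.add(delta);
    }
    last.set(x, y, 0); // remember the 2D screen coords for the next drag event
    return false;
}

public boolean touchUp(int x, int y, int pointer, int button) {
    last.set(-1, -1, -1); // finger lifted: forget the last position
    return false;
}
```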

And that is all. What we have now is:

  • A full 3D iso tile renderer.
  • Intersection testing with the tilemap and mouse/touch events.
  • A dragable camera.

Here’s the überbonus: we are working in 3D, so you could even display 3D objects on top of your tilemap (enable z-testing for that)! Furthermore, you can disable blending while rendering the tiles if they don’t have any transparent pixels. That can give you a HUGE performance increase.
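If your tile texture has no transparent pixels, the blending toggle is just a pair of SpriteBatch calls around the tile pass, for example:

```java
batch.begin();
batch.disableBlending(); // opaque tiles: skip per-pixel blending, much faster fill
// ... draw the tiles ...
batch.enableBlending();  // re-enable for anything translucent drawn afterwards
batch.end();
```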

Now, take that code and make the best of it. You can optimize the tile rendering via a SpriteCache since your tiles are likely not to change at all 🙂 You can find the full working code at

17 thoughts on “Isometric Tilemap Rendering with libgdx”

  1. Mario, I really wonder if your day also has only 24h?? 😀 If you found a way to slow down time in order to increase the amount of work done per day PLEASE LET ME KNOW 😀

  2. Wow, awesome post. It was actually me who originally brought up isometric views using 3D rendering (using the immediate renderer for simple logic debugging) and mcortez was a huge help (as was everyone else, which I appreciate, though mcortez was most enthusiastic).

    You’ve just cured the headache I had all morning.

  3. excellent! love these kind of examples!

    (and about the timemachine dertom mentioned. If it’s java based, carefull with the garbage collector, I happen to like this current dimension 🙂 )

  4. Great job Mario!

    Maybe this justifies a new package? please?? 🙂

    I however am a bit worried that touch-event handling in Android (which already sucks big time) now becomes even slower due to all these matrix operations in touchDragged()?

  5. i’m afraid there won’t be a 2point5d package. Don’t worry about the matrix operations in the touch event handling code. There’s no way around it, and they are executed on the main loop/rendering thread not the ui thread.

  6. I have a simpler solution to the much larger issue you allude to above.

    When you mention that the sprites are upside down and need to be flipped, that isn’t the whole story. The truth is, everything is flipped, including tile coordinates, so your bottom left tile (0, 0) is actually now your right most tile in the isometric view.

    The way to avoid all these issues is to stick with keeping the tiles extending into +x and -z so that your base vectors aren’t changing (the above approach flips the sign of z).

    Luckily it’s the easiest change you could possibly make. You just need to add a single character to change the direction of rotation of the plane the sprites get rendered in.

    matrix.setToRotation(new Vector3(1, 0, 0), -90);

    Now just keep in mind that moving into your screen now involves moving into -z so you may have to update your camera position.

  7. Hey Mario, great work ! Keep the good things coming !
    Got one question though: what if I needed a stage of actors inside the screen to be rendered the same iso way? The cam works just fine but the batch rotation seems to be overridden deep inside the stage.

  8. Hi Mario,

    Could you also show us how to put a 2D sprite (the unit) on top of the tile? The 2D sprite would be projected differently from the tile itself.


  9. mario with libgdx could u just instantly try ur game examples in the book with no framework? not that im lazy, ive already done the framework

  10. Thanks it really helped me ! Awesome support ^^

    Here’s my implementation with a hexagonal map. I’m sure my method to find the closest center can be optimized (e.g. with a quadtree):

    The creation :
    for (int z = 0; z < 10; z++) {
        for (int x = 0; x < 10; x++) {
            float tmpX = x * .75f;
            float tmpY = z;
            if (x % 2 == 0) tmpY -= .5f;
            mesCases[x][z] = new Case(tmpX, tmpY);
        }
    }

    Input :

    Ray pickRay = cam.getPickRay(Gdx.input.getX(), Gdx.input.getY());
    Vector2 clique = trouverPlusProche(intersection.x, intersection.z);

    Closest :

    static float distX;
    static float distZ;
    static float dist;
    static float distPrecedente;
    static int plusProcheX = 0;
    static int plusProcheZ = 0;

    private static Vector2 trouverPlusProche(float pointX, float pointZ) {
        dist = 999999999999999f;
        distPrecedente = 9999999999999999999999f;
        for (int z = 0; z < 10; z++) {
            for (int x = 0; x < 10; x++) {
                distX = pointX - mesCases[x][z].centre.x;
                if (distX < 0.43301270189221932338186158537647f) {
                    distZ = pointZ - mesCases[x][z].centre.y;
                    if (distX > 0 && distZ > 0) dist = distX + distZ;
                    if (distX > 0 && distZ < 0) dist = distX + (-distZ);
                    if (distX < 0 && distZ > 0) dist = (-distX) + distZ;
                    if (distX < 0 && distZ < 0) dist = (-distX) + (-distZ);
                    if (dist < distPrecedente) {
                        plusProcheX = x;
                        plusProcheZ = z;
                        distPrecedente = dist;
                    }
                }
            }
        }
        return new Vector2(plusProcheX, plusProcheZ);
    }

  11. Even if this one is a little bit older, I hope to find some help here.
    The code works fine for me. I adjusted it a little bit so I can’t scroll out of the displayed map. It seems a bit nasty to me, but the code works. I calculate the point vectors of the four corners and then detect whether the center of the screen is out of my displayed area. If so, I calculate the nearest point on the line between the two nearest corners and let the camera flip to that point. That works fine, and the calculation of the points is no problem, IF the y-coordinate of the camera is exactly half the number of tiles. Then the (0 | 0) coordinate of my displayed icons equals (tiles/2 | tiles/2) for the camera. That’s very handy, because now 1 step of the camera is one step of my displayed icons.
    BUT, the problem that comes up is: if I want to display bigger areas, with maybe 120 x 120 tiles, the camera just cuts some part of the view out. The y-coordinate is now 60. If I make the y-coordinate smaller, let’s say 30, everything is displayed fine again. BUT: now the calculation does not work properly, because the (0 | 0) coordinate of my displayed icons does NOT equal (tiles/2 | tiles/2) any more.
    Why does that happen? I don’t really get what the y-coordinate stands for, and what it does after all.

  12. Hi,
    Will this setup work for drawing a cube or a grid of cubes, drawn using GL_LINES, so that they’re 3D but use an orthographic projection?

    I want to be able to draw something Rubik’s-cube-like, in 3D, but don’t want it to have perspective, i.e. the cubes towards the back of the world space should appear to be the same size as those closest to the camera (i.e. extended lines should never meet, as you might expect from an orthographic projection).

    The cubes would be drawn using Vertices and Indices (i.e. Mesh class), and would probably not include textures or pre-built models. Is the above code relevant?



  13. sorry, better without the link. Some changes have to be made to the link I posted also. Thanks anyway for the great tutorial.
