Taking inspiration from the Demo scene

I've never bothered to get into the demo scene but was always impressed by the visual and audio perfection many productions arrive at. Especially small prods in, say, 64k do a lot procedurally, which is also what i'm going to do for the upcoming music game i'm working on. I know quite a lot about standard graphics programming, from writing my own software renderer to other topics like implementing various partitioning schemes. But i never bothered much with graphical effects, and that's really a problem. Over the years i have seen incredible prods that made me wonder how to achieve this level of beauty. Well, after learning all that's needed for the audio side of things, i will try to forget everything i have learned about graphics programming over the last decade and dive into the art of demo programming. Maybe this will give my upcoming games the graphical edge they need to stand out.

I found this series of articles which i haven't read yet, but the introduction seemed to be pretty good. We'll see how all that works out over the next couple of days/weeks when i try to put together some demo effects myself. If i find the time i'll write about that in more detail here. To get yourself started i suggest looking at pouet.net and scene.org. There's a couple of other interesting blogs talking about the topic, e.g. this one.

Edit: well, i skimmed over the articles i linked to above and they seem to be a bit too old school. But they reminded me of my old mode 13h DOS days. Oh how i loved my old 486…

Edit 2: so i did a shitload of research already and found a few things i'd like to share with people who want to start demo coding. First off, there are the videos of the seminars held at Assembly. You can find the 2009 ones at http://media.assembly.org/vod/2009/Seminars/. I only watched the "Demo programming for beginners" video so far and it was ok. Not a lot of surprises and some things i wouldn't do the way they were presented, but the links at the end are nice. Here they are:

http://frontend.outracks.net/ A portable game/demo engine written in C++. Comes with example demos.
http://elsewhere.stc.cx/demoprogramming/ The site of the video's presenter; features engine and demo code.
http://pouet.net/sourceprod.php Sources to many demos on pouet.net. Awesome!

Now that should get me started.

Ilomilo is sweet

While surfing Android Guys today i found a game called Ilomilo that is seemingly being ported to the next Android generation. It uses OpenGL ES 2.0 for all its rendering and looks incredible. They seem to use everything that can be done with OpenGL ES 2.0, from simple light maps and bump maps to dynamic shadow maps, ambient occlusion, rim lighting and post-process effects like depth of field and vignettes. Their development blog is pretty awesome, so go check it out.

Yes, that's really being ported to Android, and from the video over at Android Guys it looks like they pulled off the same quality on Android too (the game is demonstrated around minute 3:45).

I'm confident i could pull off the same effects to some extent once OpenGL ES 2.0 is finally officially available. The lack of a proper artist is prohibitive though. If any of you readers out there think you've got what it takes to model and texture your way through a complete game, get in touch with me, maybe we can get something done (yes, high hopes…).

Anyways, i'm looking forward to this game, it looks awesome!

Libmad on Android with the NDK

So i was porting all the decoders i had built for the onset detection tutorial to C++, using libmad as the MP3 decoder of choice. After getting that to work properly on the desktop i had to make it work on Android too. Now, there's no build of libmad in the NDK for obvious reasons, so i had to build it myself. As the autotools configure script of libmad is not usable with the NDK toolchain, i used the config.h file from http://gitorious.org/rowboat/external-libmad/blobs/raw/master/android/config.h, which has all the settings needed for building libmad on Android. Compiling libmad is then a simple matter of creating a proper Android.mk and Application.mk file. The Android.mk file looks like this:
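(This is a sketch rather than the exact file; the source file list matches libmad 0.15.1b and the module/wrapper names are just examples, adjust them to your setup.)

    LOCAL_PATH := $(call my-dir)

    include $(CLEAR_VARS)

    # libmad sources plus the JNI wrapper, all sitting in the jni/ folder;
    # HAVE_CONFIG_H makes the libmad sources pick up the config.h linked above,
    # FPM_DEFAULT selects the portable fixed point math (more on that below)
    LOCAL_MODULE    := mp3decoder
    LOCAL_CFLAGS    := -DHAVE_CONFIG_H -DFPM_DEFAULT
    LOCAL_SRC_FILES := bit.c decoder.c fixed.c frame.c huffman.c \
                       layer12.c layer3.c stream.c synth.c timer.c \
                       version.c NativeMP3Decoder.c

    include $(BUILD_SHARED_LIBRARY)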

Now there's a couple of things that initially bogged down the performance of this. I tested it with the song "Schism" by Tool, which is a 6:47 minute long song encoded at 192kbps. The file weighs in at 9.31MB, pretty big for an MP3 imo. NativeMP3Decoder is just a libmad based implementation of the MP3Decoder from the onset detection tutorial framework. It has a simple NativeMP3Decoder.readSamples() method which fills a float array with as many samples as there are elements in the array. If the input file is in stereo, the channels get mixed down to mono by averaging. The NativeMP3Decoder.readSamples() method internally calls a native method with a similar signature. Instead of a float array i pass in a direct ByteBuffer that has enough storage to hold all the samples requested. The native wrapper writes the samples to this direct buffer, which in turn gets copied to the float array passed into NativeMP3Decoder.readSamples(). It looks something like this:
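(Again just a sketch, the class layout and names are illustrative rather than the exact code from the framework.)

    package com.example;

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    public class NativeMP3Decoder {
        static {
            System.loadLibrary( "mp3decoder" ); // the module name from the Android.mk above
        }

        // direct buffer the native side writes the decoded samples into (4 bytes per float)
        private final ByteBuffer nativeBuffer;
        // float view of the same memory, used for the copy into the caller's array
        private final FloatBuffer buffer;
        // opaque handle identifying the decoder instance on the native side
        private final int handle;

        public NativeMP3Decoder( String file, int maxSamples ) {
            nativeBuffer = ByteBuffer.allocateDirect( maxSamples * 4 ).order( ByteOrder.nativeOrder() );
            buffer = nativeBuffer.asFloatBuffer();
            handle = openFile( file );
        }

        /** Fills the array with mono samples, returns how many samples were actually decoded. */
        public int readSamples( float[] samples ) {
            int read = readSamplesNative( handle, nativeBuffer, samples.length );
            buffer.position( 0 );
            buffer.get( samples ); // this copy is the part that turned out to be expensive
            return read;
        }

        private native int openFile( String file );
        private native int readSamplesNative( int handle, ByteBuffer buffer, int numSamples );
    }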

The call to buffer.get( samples ) kills it all. Without any optimizations (Thumb code, -O0, -DFPM_DEFAULT == standard fixed point math in libmad, no ARM assembler optimized fixed point math) decoding the complete file takes 184 seconds on my Milestone. Holy shit, Batman! If i eliminate the buffer.get( samples ) call, that goes down to 44 seconds! Incredible. Now, i still thought that was way too slow, so i started adding optimizations. The first thing i did was compile to straight ARM instead of Thumb code. You can tell the NDK toolchain to do so by placing this in the Android.mk file:
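    # compile this module as full ARM code instead of Thumb
    LOCAL_ARM_MODE := arm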

With this enabled decoding takes 36 seconds. The next thing i did was aggressive optimization via -O3 as a CFLAG. That shaved off only 2 more seconds, so nothing to write home about. The last optimization is libmad specific. The config.h file i linked to above does not define the fixed point math mode libmad should use. Now, when you have a look at libmad's fixed.h you can see quite a few options for fixed point math there. There's also a dedicated option for ARM processors that uses some nice little ARM assembler code to do the heavy lifting. You can enable this by passing -DFPM_ARM as a CFLAG. Now that did wonders! I'm now down to 20 seconds for decoding 407 seconds of MP3 encoded audio. That's roughly 20x real-time, which is totally ok with me. The song i chose is at the extreme end of the song length spectrum i will have to handle in my next audio game project. A song a user picks will be processed once, and waiting those 20 seconds is ok in my book.
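With everything applied, the relevant lines in the Android.mk end up looking roughly like this (the module and source file list stay as sketched above):

    # -O3 for aggressive optimization, FPM_ARM for the ARM assembler fixed point math
    LOCAL_ARM_MODE := arm
    LOCAL_CFLAGS   := -DHAVE_CONFIG_H -DFPM_ARM -O3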

I'm afraid i won't release the source of the ported audio framework as it's a bit of a mess and would need some work to clean up. What i can give you is the plain source for the native side of the NativeMP3Decoder class, if you promise not to laugh. My C days are long over so there's probably a shitload of don'ts in there. The "handle" system is also kind of creative but good enough for my needs. I learned how to use the low-level libmad API by looking at the code here. I actually like doing it this way more than with the shitty callback based high-level API. Your mileage may vary. So here it is, be afraid:
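(What follows is a condensed sketch of the idea rather than the file verbatim: the com.example package, the handle bookkeeping and the buffer sizes are illustrative, there's no cleanup function, the MAD_BUFFER_GUARD padding at end of file is skipped and error handling is minimal.)

    #include <jni.h>
    #include <stdio.h>
    #include <string.h>
    #include "mad.h" /* the combined libmad header, copy or generate it into jni/ */

    #define INPUT_BUFFER_SIZE (8 * 1024)
    #define MAX_HANDLES 4

    typedef struct {
        FILE*             file;
        struct mad_stream stream;
        struct mad_frame  frame;
        struct mad_synth  synth;
        unsigned char     inputBuffer[INPUT_BUFFER_SIZE + MAD_BUFFER_GUARD];
        int               leftSamples; /* samples left in the current synth frame */
        int               offset;      /* read position within the synth frame    */
    } MP3Decoder;

    static MP3Decoder handles[MAX_HANDLES];
    static int        used[MAX_HANDLES];

    /* refill libmad's input buffer from the file and decode the next frame */
    static int readNextFrame( MP3Decoder* dec ) {
        for( ;; ) {
            if( dec->stream.buffer == NULL || dec->stream.error == MAD_ERROR_BUFLEN ) {
                size_t remaining = 0, readBytes;
                unsigned char* readStart = dec->inputBuffer;

                /* keep the undecoded rest of the previous chunk */
                if( dec->stream.next_frame != NULL ) {
                    remaining = dec->stream.bufend - dec->stream.next_frame;
                    memmove( dec->inputBuffer, dec->stream.next_frame, remaining );
                    readStart += remaining;
                }
                readBytes = fread( readStart, 1, INPUT_BUFFER_SIZE - remaining, dec->file );
                if( readBytes == 0 ) return 0; /* end of file */
                mad_stream_buffer( &dec->stream, dec->inputBuffer, readBytes + remaining );
                dec->stream.error = MAD_ERROR_NONE;
            }

            if( mad_frame_decode( &dec->frame, &dec->stream ) == 0 ) {
                mad_synth_frame( &dec->synth, &dec->frame );
                dec->leftSamples = dec->synth.pcm.length;
                dec->offset = 0;
                return 1;
            }
            /* recoverable errors and a drained buffer just mean "try again" */
            if( dec->stream.error != MAD_ERROR_BUFLEN && !MAD_RECOVERABLE( dec->stream.error ) )
                return 0;
        }
    }

    JNIEXPORT jint JNICALL Java_com_example_NativeMP3Decoder_openFile
      ( JNIEnv* env, jobject obj, jstring file ) {
        const char* fileName;
        MP3Decoder* dec;
        int i;

        for( i = 0; i < MAX_HANDLES && used[i]; i++ );
        if( i == MAX_HANDLES ) return -1;

        dec = &handles[i];
        memset( dec, 0, sizeof(MP3Decoder) );
        fileName = (*env)->GetStringUTFChars( env, file, NULL );
        dec->file = fopen( fileName, "rb" );
        (*env)->ReleaseStringUTFChars( env, file, fileName );
        if( dec->file == NULL ) return -1;

        mad_stream_init( &dec->stream );
        mad_frame_init( &dec->frame );
        mad_synth_init( &dec->synth );
        used[i] = 1;
        return i;
    }

    JNIEXPORT jint JNICALL Java_com_example_NativeMP3Decoder_readSamplesNative
      ( JNIEnv* env, jobject obj, jint handle, jobject buffer, jint numSamples ) {
        MP3Decoder* dec = &handles[handle];
        float* target = (float*)(*env)->GetDirectBufferAddress( env, buffer );
        int written = 0;

        while( written < numSamples ) {
            if( dec->leftSamples == 0 && !readNextFrame( dec ) ) break;

            /* convert fixed point to float, mix stereo down to mono by averaging */
            while( dec->leftSamples > 0 && written < numSamples ) {
                float value = (float)mad_f_todouble( dec->synth.pcm.samples[0][dec->offset] );
                if( dec->synth.pcm.channels == 2 ) {
                    value += (float)mad_f_todouble( dec->synth.pcm.samples[1][dec->offset] );
                    value *= 0.5f;
                }
                target[written++] = value;
                dec->offset++;
                dec->leftSamples--;
            }
        }
        return written;
    }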

To compile this for Android all you have to do is download libmad and put its source files into your Android project's jni folder along with the code above. Then use the Android.mk from above and voilà, you've got yourself a native MP3 decoder for Android. You can use it in combination with the AndroidAudioDevice class from the last post, as sketched below. If you feel adventurous you could even extend it to return stereo data.
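Playback could then look roughly like this, assuming the AndroidAudioDevice from the last post exposes a writeSamples( float[] ) method; file path and buffer size here are illustrative, and you'd run this from a background thread rather than the UI thread:

    // decode chunk by chunk and hand the samples to the audio device
    NativeMP3Decoder decoder = new NativeMP3Decoder( "/sdcard/schism.mp3", 1024 );
    AndroidAudioDevice device = new AndroidAudioDevice();
    float[] samples = new float[1024];

    while( decoder.readSamples( samples ) > 0 ) {
        // note: the last chunk may contain fewer new samples than the array holds
        device.writeSamples( samples );
    }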