Preferences in libgdx

Yeah, you can use that now. Nightlies are building. To fetch a Preferences instance do this:
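A minimal sketch of the call (assuming the `Gdx.app` entry point from the nightlies):

```java
// fetch (and lazily create) a Preferences instance by name
Preferences prefs = Gdx.app.getPreferences("my-preferences");
```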

Note that the name must be usable as a file name. On Android we wrap SharedPreferences. On the desktop we write a Properties XML file to user/preferences-name (“my-preferences” above). That’s not perfect, and it’s why we didn’t want to implement this in the first place (Java Preferences are not a solution either…).
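For illustration, the desktop storage boils down to plain `java.util.Properties` XML; here is a self-contained sketch of that round-trip (class and key names are made up, and the real backend writes to a file rather than a byte array):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Properties;

public class PrefsXmlSketch {
    public static void main(String[] args) throws Exception {
        // write preferences the way the desktop backend does: as a Properties XML file
        Properties prefs = new Properties();
        prefs.setProperty("sound", "true");
        prefs.setProperty("highscore", "1234");

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        prefs.storeToXML(out, "my-preferences"); // UTF-8 by default

        // read them back, as would happen on the next app start
        Properties loaded = new Properties();
        loaded.loadFromXML(new ByteArrayInputStream(out.toByteArray()));
        if (!"1234".equals(loaded.getProperty("highscore"))) throw new AssertionError();
        System.out.println("round-trip ok: " + loaded.getProperty("highscore"));
    }
}
```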

Once you have that you can put and get booleans, ints, longs, floats and strings (strings are stored as UTF-8).
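In code that looks roughly like this (the key names are made up for illustration):

```java
// write some values...
prefs.putBoolean("sound", true);
prefs.putInteger("highscore", 1234);
prefs.putFloat("volume", 0.8f);
prefs.putString("player", "grüzi"); // stored as UTF-8

// ...and read them back, with a default as fallback
boolean sound = prefs.getBoolean("sound", true);
int highscore = prefs.getInteger("highscore", 0);
```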

To make sure your changes are actually persisted you can use
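That call is simply:

```java
prefs.flush(); // forces the current preferences state to be persisted
```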

That’s really only relevant on the desktop. If your app terminates due to a severe error, the mechanism persisting the preferences might not get invoked (no, it’s not a finalizer, but something similar). Flush will immediately persist the current preferences state, so it’s good practice to call it after you change your preferences.

That’s it. Enjoy.

Browsing the Renderscript Sources

edit: according to Jason a lot has changed in Renderscript from Gingerbread to Honeycomb (LLVM backend for example). Let’s wait for the Honeycomb sources. The links below reference the Gingerbread sources.

I was really interested in how RenderScript looks under the hood after playing around with it a little today (yay, 3.0 SDK; the emulator makes me a little sad though :(). The core of RenderScript is implemented in pure C++, as expected, with dependencies on libacc for compilation, Skia for Bitmap-related things as well as OpenGL ES. I was surprised to find that OpenGL ES 1.x is used in some places as well.

If you want to get your hands a little dirty with the source as well, here are a few interesting links to the repo:

  • framework/base/libs/rs/: that’s the Renderscript implementation, minus libacc (which I have to track down, as it is the thing I’m most interested in).
  • rsScriptC.cpp: here the libacc magic happens. As pointed out in the docs, the result of the compilation is cached for future invocations. This class is also responsible for actually invoking the script. Looks pretty straightforward!
  • rsVertexArray.cpp: just for funzors. Looks a lot like what we have in libgdx: vertex attributes etc. There’s a code path for both GLES 1.x and GLES 2.0. It also looks like only single texturing is supported via this class at this point. I assume it’s only used for the immediate mode drawing API available in your graphics Renderscript.
  • rsScriptC_Lib.cpp: that seems to be the runtime for the CPU backend, implementing the APIs available to you in your Renderscript. This includes the immediate mode drawing functions and math intrinsics. The implementation seems to be hooked up to the JIT-compiled Renderscript via the lookup table at the end of the file (also see rsScriptC). Porting this to the GPU will not work for all methods of course (you can’t issue drawing calls from GPU code), so I assume the GPU backend will “only” work for compute scripts. I wonder if those scripts get compiled to binary GLSL or something GPU-specific. GLSL might not be expressive enough. We’ll see; Romain and Jason said they have a well formulated plan for that. Looking forward to seeing it in action.
  • The rest of the directory contains various helper classes. rsFileA3D.cpp is interesting to some extent, as it is the parser for the new A3D format natively supported by Renderscript.
  • framework/base/graphics/jni/android_renderscript_RenderScript.cpp: that’s the C JNI wrapper for all the C++ classes in the native Renderscript implementation. Pretty straightforward.
  • platform/frameworks/base/graphics/java/android/renderscript/: and here we have the Java API to Renderscript. All those classes actually delegate the work to the RenderScript class, which in turn calls the JNI methods defined in android_renderscript_RenderScript.cpp. There’s a mirror Java class for nearly every C++ class. I’d suggest going with the C++ source directly to understand what’s going on under the hood. All allocation bindings, function invocations and so on happen via name lookups and setter/invoke methods of the RenderScript class anyway, which themselves delegate to the C++ classes.
  • RenderScript.java: basically just the JNI interface plus an interesting threaded mechanism. The docs say Renderscript is a master/slave implementation; that’s done via a message thread here. There seems to be a second thread on the native side in rsThreadIO.cpp which fires messages into a FIFO; those are then received by the Java thread and executed there. I’m no good with threads so I’ll leave it at that 🙂
  • Reflection of user-defined structs and so on is done by generating stub Java classes that themselves invoke the RenderScript JNI wrapper, which then does its magic on the native side. I haven’t looked too hard into this yet as I want to understand the general mechanisms first. It seems to be pretty straightforward though. Due to the way stuff is reflected I don’t think you will achieve super high performance. But usually I’d assume you’ll just fill up your data arrays, pass them to your script once and let the script itself do the heavy lifting.

From what I saw so far there are only a few tie-ins with the actual Android system. I haven’t looked into libacc yet, but if it is based on LLVM, a port to the desktop might even be possible to some extent. I could imagine this to be a fun side project 🙂

It seems that, due to the preparations for having scripts run on the GPU, the mechanism of passing data to and from the script is a little strange. You create Allocations which are stored in global vars in your renderscript (at least that’s how it is done in the samples). Once your root renderscript is invoked you take the pointers to those allocations and pass them to your actual functions as arguments (again, just judging from the samples, which probably illustrate best practices). That will make it a lot easier to get this going on the GPU, although I still have my reservations with regard to what is actually used as GPU code (bitcode to GLSL? GPU-specific machine code? Who knows). There also seem to be a couple of intrinsics like rsForEach:
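The kernel in question looks roughly like this (reconstructed from the SDK’s HelloCompute sample as I remember it, so details may differ from the actual excerpt):

```c
#pragma version(1)
#pragma rs java_package_name(com.android.example.hellocompute)

// Allocations and the script itself, bound as global vars from the Java side
rs_allocation gIn;
rs_allocation gOut;
rs_script gScript;

const static float3 gMonoMult = {0.299f, 0.587f, 0.114f};

// called once per element; converts one RGBA pixel to monochrome
void root(const uchar4 *v_in, uchar4 *v_out, const void *usrData, uint32_t x, uint32_t y) {
    float4 f4 = rsUnpackColor8888(*v_in);
    float3 mono = dot(f4.rgb, gMonoMult);
    *v_out = rsPackColorTo8888(mono);
}

// entry point invoked from Java; rsForEach distributes root() over the allocation
void filter() {
    rsForEach(gScript, gIn, gOut, 0);
}
```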

(taken from the HelloCompute example, a basic image filter that transforms an RGB888 image to a monochrome image). I assume those intrinsics will be where the vectorization/distribution comes in. That looks a lot easier than what you have to do in CUDA (basically using thread IDs to partition your data within the kernels themselves, which is a huge pain in the ass…). It looks a lot more like OpenMP with its pragmas to me.

So, after a closer look, Renderscript looks pretty interesting after all. Using it in conjunction with the new 3.0 SDK is a breeze (once you understand it). However, I still think it is lacking in one crucial aspect: documentation.

Dear Google, please write a proper developer guide AND reference for Renderscript. The current introduction to Renderscript is utterly confusing while the actual implementation is pretty straightforward to understand. Not everyone has the time to plow through the sources to figure stuff out. Also, a complete reference of the intrinsics and API would be a huge plus.

And finally, here’s an excerpt from the CPU runtime lib (at least that’s what I call it now…):

OH GOD! ARE YOU SERIAL! 😀

In all seriousness though: I can see how that is the easy way out so you don’t impose any explicit batching on your API clients. I know the intention is to provide a simple way to slap an image on the screen. But people will assume it can do a lot more. Maybe adding an optional explicit batching mechanism would be a good idea. I know there’s no dynamic allocation during runtime, so a batching mechanism that lets you “draw” rects/point sprites/points/lines/whatever primitives to a preallocated Mesh would be a nice alternative imo (e.g. something like ImmediateModeRenderer in our shitty lib, with packed vertex colors…).
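The kind of batching I mean, as a rough self-contained sketch (names and vertex layout are made up; in the real thing flush() would hand the array to a preallocated Mesh and issue a single draw call):

```java
// minimal sketch of an explicit quad batcher: callers "draw" rects into a
// preallocated vertex array; flush() would submit everything in one draw call
public class QuadBatcher {
    private final float[] vertices; // x, y, packedColor per vertex
    private int idx = 0;

    public QuadBatcher(int maxQuads) {
        vertices = new float[maxQuads * 4 * 3]; // 4 vertices, 3 floats each
    }

    /** Appends the four corner vertices of a rect to the batch. */
    public void rect(float x, float y, float w, float h, float packedColor) {
        vertex(x, y, packedColor);
        vertex(x + w, y, packedColor);
        vertex(x + w, y + h, packedColor);
        vertex(x, y + h, packedColor);
    }

    private void vertex(float x, float y, float color) {
        vertices[idx++] = x;
        vertices[idx++] = y;
        vertices[idx++] = color;
    }

    /** Would submit all batched quads in one draw call; here it just
        returns the number of batched vertices and resets the batch. */
    public int flush() {
        int count = idx / 3;
        idx = 0;
        return count;
    }
}
```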

In any case, after that little journey I can see the usefulness of Renderscript. That was not apparent to me from reading the official docs though. I’d probably still use the straight NDK for anything more complex than, say, a basic image filter or pitch detection, especially due to the available debugging facilities. But for things like vertex skinning, Renderscript could be a nice alternative to quickly churn out some native code. Plus, the JIT is indeed a real time saver. I’m not so sure about the parallelism feature though, which seems to be explicit (albeit elegantly so). Image manipulation and skinning lend themselves well to parallelization. I also have to look into how vectorization is achieved (NEON etc.). The NDK GCC can supposedly do the same as the JIT compiler (at least I assume it can), or you can use the NEON intrinsics (which is of course a lot more cumbersome than having the JIT/AOT compiler do that for you). In any case, it’s pretty nifty.

Final thought: Renderscript is only available on Android 3.0. From what I understand it is not targeted at phones but tablets instead, which is perfectly fine. Can we expect a port of Renderscript in its 3.0 form to Android 2.4? I sure hope so.

edit: just found out what libacc is/does/where it comes from. Obfuscated Tiny C Compiler! Awesome sauce! Gonna give that a try. It would be awesome for a simple scripting solution. That’s an interesting choice for a “JIT” compiler, I have to say. I don’t see the tie-in with LLVM anywhere. I assume that’s not done in rs_ScriptC after all. Hrm… puzzling.