edit2: See more insightful comments by Jason Sams in the comments
edit: See insightful comments by Romain Guy
Today more details about Renderscript emerged, along with the 3.0 SDK. Naturally I was all giggly inside to get my hands on something official. I have no idea what to make of it. Note that I've only read through the (sparse and incomplete) docs so far and looked over the samples provided in the new SDK. This is by no means a well-grounded, scientific analysis of Renderscript. Just some random thoughts, observations and first impressions.
- Renderscript is a way to write native code that is run on either the CPU or the GPU. It is only available on Android 3.0 and thus far clearly targeted at tablets only. edit: Romain Guy pointed out that this is in fact wrong: it's been available since Eclair to some extent.
- It can be used for either compute or graphical stuff. Why it is called Renderscript in that case is a little beyond me, but I guess it has historical reasons.
- The language used is C99 with a couple of extensions, mainly stuff similar to what you can find in GLSL: matrices, vectors etc.
- The Renderscripts are compiled to an intermediate format (probably LLVM IR? Couldn't check yet) which is then JIT compiled on the device the first time the Renderscript is invoked. If it is indeed Dalvik bytecode and the Dalvik JIT is used, I wonder what benefit this approach has. If it's not Dalvik bytecode, I wonder why there are now two JIT compilers. Given that Renderscript should also be able to target GPUs at some point (I doubt that's the case yet, the docs are unspecific about this…) I assume it's LLVM, which has a GLSL backend. edit: It's indeed LLVM based. Sweet. In any case it's pretty neat technically.
- Structs and functions you define in your Renderscript are reflected to Java classes and methods. It’s a little involved for obvious reasons. Reminds me a lot of JNA. The Java bindings allow you to interact with the native Renderscript, passing data to it and issuing function invocations.
- If the Renderscript is a graphical one, you additionally get a special SurfaceView (RSSurfaceView) to which a specific method (root()?) of your Renderscript renders. There's a native API available to Renderscripts that is used for rendering. Additionally you can set the vertex and fragment shaders along with some state for blending and similar things. It's utterly confusing as it is a mixture of fixed-function and shader-based graphics programming.
- The SDK 10 page says Renderscript has its own shading language, whereas the docs say GLSL is used in custom vertex/fragment programs. I guess whoever wrote the SDK 10 page considers Renderscript itself a shading language? In the end it just exposes a specialized graphics API on top of GLES 2.0 plus some intrinsic types for linear algebra.
- There doesn't seem to be any way to debug Renderscripts (and that includes the CPU backend as well; for the GPU it of course makes sense…).
Seriously, I don't know what to make of it, and I don't just mean that I don't fully grasp it yet. The samples in the SDK are either trivial and not really poster children for when you should use Renderscript, or they are beating you over the head with the heaviest stick they could find. The notion of being able to just drop an .rs/.rsh file into my Java package structure to get native methods is great, as opposed to writing JNI bridges. How they want to pull off a GPU backend is a little beyond me, but I give them the benefit of the doubt. In its current state the C99 + extensions doesn't seem to translate well to the GPU (see OpenCL for a standardized solution). I don't mind though, running computationally heavy code on the CPU would make me a happy coder monkey just as well. So, here's a list of things that I think will make Renderscript only useful for a select few. Again, take this with a grain of salt, I probably don't know what I'm talking about:
- The documentation is severely lacking at this point. From experience I know that documenting something you've worked on for months (years?) is damn hard, especially with an uninitiated target audience in mind. In its current state no normal application developer can hope to understand what's going on or how to assess whether to use Renderscript or not. Compute Renderscripts are probably a little easier to pick up. Graphical Renderscripts on the other hand demand such intimate knowledge of GLES 2.0 (at least that's my impression, seeing how you have to take care of vertex attributes, uniforms, constants, samplers etc.) that most app developers will probably never get to use that new shiny feature of Honeycomb.
- The fixed/shader-based hybrid is really really really strange to someone who's used to writing straight GLES 1.x or 2.0. It's hard to connect the dots and relate the Renderscript graphics API to what will happen in the shader. While there are a shitton of helper classes and builders that should probably ease the pain, they make matters even worse. Documentation is pretty sparse in this case again. I don't consider myself to be an expert, but I think I know my way around GLES well enough to assess this issue.
- There doesn't seem to be a way to debug things, even when run on the CPU. That will make it even harder for newcomers to get into Renderscript. The general trial-and-error, debug-the-hell-out-of-shit approach just won't work. It gets even worse when writing graphical Renderscripts, I assume. The classic "wtf, why is my screen black" expression will be found on a lot of faces, I guess. Not that this is any better when using straight GLES of course. But the layer of obfuscation will make it hard to look up solutions to common problems.
- I hope there will be a community around Renderscript. Again, I'm far from being an expert, but to build that community you either have to have a very accessible API/framework/programming model or top-notch documentation. At this point I don't see either of those two things. A community could somewhat lower the impact of the documentation issue. But it's really kind of a chicken-and-egg problem. Without good docs not a lot of people will become knowledgeable experts, and without such experts there can't be a community. Again, take this with a grain of salt. Reading through the docs and examples for a few hours certainly doesn't allow me to make an ultimate statement about this. It's just an impression I get from reading all this.
- People will take the easiest path to achieve the result they want. With graphical Renderscripts that's probably using the fixed-function shaders provided as defaults and only invoking the "simple" drawing functions of the native graphics API. Seeing things like this:
float startX = 0, startY = 0;
float width = 256, height = 256;
rsgDrawQuadTexCoords(startX, startY, 0, 0, 0,
                     startX, startY + height, 0, 0, 1,
                     startX + width, startY + height, 0, 1, 1,
                     startX + width, startY, 0, 1, 0);
startX = 200; startY = 0;
width = 128; height = 128;
rsgDrawQuadTexCoords(startX, startY, 0, 0, 0,
                     startX, startY + height, 0, 0, 1,
                     startX + width, startY + height, 0, 1, 1,
                     startX + width, startY, 0, 1, 0);
- Hardware fragmentation with regards to GPUs. If the current situation is any indication then people will cry tears of anger after finding out that all the time taken to learn Renderscript did not pay off because some manufacturer couldn't get its act together and adhere to the GLES 2.0 specs. Since Renderscript is primarily targeted at tablets and most of those seem to sport a Tegra by Nvidia, I can imagine that it might be less of a problem. As soon as Renderscript enters the handset space it will be a different story. Any phone advertised as supporting GLES 2.0 is a target then.
So, I wonder what's the benefit of using Renderscript over custom native code written via the NDK in conjunction with GLES 2.0 (also exposed in the NDK). Ignoring the possibility for Renderscript to run on the GPU, I don't see any reason why you couldn't just create a standard Android app with a native layer doing all the heavy lifting. Benefits: debugging support, 3rd party libraries, greater control of what's going on under the hood, less obfuscation. edit: derp, looks like Renderscript will automatically distribute computations over multiple cores. That's of course a plus! Of course, the graphics API and linear algebra intrinsics of Renderscript are very nice and make life easier (to some extent). But why not provide those in the NDK? Also, if you want easier access to native functions, why not make that work with the NDK in some way? Include the NDK toolchain with ADT. Get rid of JNI and brew your own thing. Make it easy for us to write general native code (including debugging).
In terms of compute operations I have one word: OpenCL. It's a standard. It's being adopted by ATI, Nvidia and even Apple (not sure about iOS, but I think I read about that somewhere). It has a few limitations of course, but overall it would be a valid alternative to compute Renderscripts.
All in all I think Renderscript is an impressive technical feat. But while the demos based on it are really nice (the YouTube app etc.) I just don't see regular app developers getting a hold of it. Grain of salt, yadda yadda. Go check out the examples. And hope that your emulator can render them :p
This makes me wonder: does the emulator now support GLES 2.0?