edit2: See more insightful comments by Jason Sams in the comments
edit: See insightful comments by Romain Guy

Today more details about Renderscript emerged, along with the 3.0 SDK. Naturally I was all giggly inside to get my hands on something official. I have no idea what to make of it. Note that I have only read through the (sparse and incomplete) docs so far and looked over the samples provided in the new SDK. This is by no means a well-grounded, scientific analysis of Renderscript, just some random thoughts, observations and first impressions.

  • Renderscript is a way to write native code that is either run on the CPU or GPU.
  • It is only available on Android 3.0 and thus far clearly targeted at tablets. edit: Romain Guy pointed out that this is in fact wrong: it’s been available since Eclair to some extent.
  • It can be used for either compute or graphical tasks. Why it is called Renderscript in that case is a little beyond me, but I guess there are historical reasons.
  • The language used is C99 with a couple of extensions, mainly things similar to what you can find in GLSL: matrices, vectors and so on.
  • Renderscripts are compiled to an intermediate format (probably LLVM IR? I couldn’t check yet) which is then JIT compiled on the device the first time the Renderscript is invoked. If it were Dalvik bytecode and the Dalvik JIT were used, I’d wonder what benefit this approach has; if it’s not Dalvik bytecode, I’d wonder why there are now two JIT compilers. Given that Renderscript should also be able to target GPUs at some point (I doubt that’s the case yet, the docs are unspecific about this…), I assume it’s LLVM, which has a GLSL backend. edit: It’s indeed LLVM based. Sweet. In any case it’s pretty neat technically.
  • Structs and functions you define in your Renderscript are reflected to Java classes and methods. It’s a little involved, for obvious reasons, and reminds me a lot of JNA. The Java bindings allow you to interact with the native Renderscript, passing data to it and issuing function invocations.
  • If the Renderscript is a graphical one you additionally get a special SurfaceView (RSSurfaceView) to which a specific method (root()?) of your Renderscript renders. There’s a native API available to Renderscripts that is used for rendering. Additionally you can set the vertex and fragment shaders along with some state for blending and similar things. It’s utterly confusing, as it is a mixture of fixed-function and shader-based graphics programming.
  • The SDK 10 page says Renderscript has its own shading language, whereas the docs say GLSL is used in custom vertex/fragment programs. I guess whoever wrote the SDK 10 page considers Renderscript itself a shading language? In the end it just exposes a specialized graphics API on top of GLES 2.0 plus some intrinsic types for linear algebra.
  • There doesn’t seem to be any way to debug Renderscripts (and that includes the CPU backend as well; for the GPU it of course makes sense…).

Seriously, I don’t know what to make of it, and I don’t just mean that I don’t fully grasp it yet. The samples in the SDK are either trivial and not really poster children for when you should use Renderscript, or they beat you over the head with the heaviest stick they could find. The notion of being able to just drop .rs/.rsh files into my Java package structure to get native methods, as opposed to writing JNI bridges, is great. How they want to pull off a GPU backend is a little beyond me, but I give them the benefit of the doubt. In its current state the C99-plus-extensions approach doesn’t seem to translate well to the GPU (see OpenCL for a standardized solution). I don’t mind though; running computationally heavy code on the CPU would make me a happy coder monkey just as well. So, here’s a list of things that I think will make Renderscript useful only for a select few. Again, take this with a grain of salt, I probably don’t know what I’m talking about:

  • The documentation is severely lacking at this point. From experience I know that documenting something you’ve worked on for months (years?) is damn hard, especially with an uninitiated target audience in mind. In its current state no normal application developer can hope to understand what’s going on or assess whether to use Renderscript or not. Compute Renderscripts are probably a little easier to pick up. Graphical Renderscripts, on the other hand, demand such intimate knowledge of GLES 2.0 (at least that’s my impression, seeing how you have to take care of vertex attributes, uniforms, constants, samplers etc.) that most app developers will probably never get to use that shiny new feature of Honeycomb.
  • The fixed-function/shader hybrid is really, really strange to someone who is used to writing straight GLES 1.x or 2.0. It’s hard to connect the dots and relate the Renderscript graphics API to what will happen in the shader. While there is a shitton of helper classes and builders that should probably ease the pain, they make matters even worse. Documentation is pretty sparse in this case, again. I don’t consider myself an expert, but I think I know my way around GLES well enough to assess this issue.
  • There doesn’t seem to be a way to debug things, even when running on the CPU. That will make it even harder for newcomers to get into Renderscript. The usual trial-and-error, debug-the-hell-out-of-it approach just won’t work. It gets even worse when writing graphical Renderscripts, I assume. The classic “wtf, why is my screen black” expression will be found on a lot of faces, I guess. Not that this is any better when using straight GLES, of course. But the extra layer of obfuscation will make it hard to look up solutions to common problems.
  • I hope there will be a community around Renderscript. Again, I’m far from being an expert, but to build that community you either need a very accessible API/framework/programming model or top-notch documentation. At this point I don’t see either of those two things. A community could somewhat lower the impact of the documentation issue, but it’s really a chicken-and-egg problem: without good docs not a lot of people will become knowledgeable experts, and without such experts there can’t be a community. Again, take this with a grain of salt. Reading through the docs and examples for a few hours certainly doesn’t allow me to make an ultimate statement about this; it’s just the impression I get.
  • People will take the easiest path to achieve the result they want. With graphical Renderscripts that’s probably using the fixed-function shaders provided as defaults and only invoking the “simple” drawing functions of the native graphics API (rsgDrawQuad and friends). Seeing things like that makes me really, really nervous. Unless there’s heavy batching performed under the hood, it will be hard for these methods to deliver the performance people expect from a specialized DSL like Renderscript, which is advertised as solving performance problems with regard to rendering.
  • Hardware fragmentation with regard to GPUs. If the current situation is any indication, people will cry tears of anger after finding out that all the time taken to learn Renderscript did not pay off because some manufacturer couldn’t get its act together and adhere to the GLES 2.0 spec. Since Renderscript is primarily targeted at tablets, and most of those seem to sport an Nvidia Tegra, I can imagine it might be less of a problem. As soon as Renderscript enters the handset space it will be a different story: any phone advertised as supporting GLES 2.0 is a target then.
  • So, I wonder what’s the benefit of using Renderscript over custom native code written via the NDK in conjunction with GLES 2.0 (also exposed in the NDK). Ignoring the possibility for Renderscript to run on the GPU, I don’t see any reason why you couldn’t just create a standard Android app with a native layer doing all the heavy lifting. Benefits: debugging support, third-party libraries, greater control over what’s going on under the hood, less obfuscation. edit: derp, looks like Renderscript will automatically distribute computations over multiple cores. That’s of course a plus! Of course, the graphics API and linear algebra intrinsics of Renderscript are very nice and make life easier (to some extent). But why not provide those in the NDK? Also, if you want easier access to native functions, why not make that work with the NDK in some way? Include the NDK toolchain with ADT. Get rid of JNI and brew your own thing. Make it easy for us to write general native code (including debugging).

In terms of compute operations I have one word: OpenCL. It’s a standard. It’s being adopted by ATI, Nvidia and even Apple (not sure about iOS, but I think I read about that somewhere). It has a few limitations of course, but overall it would be a valid alternative to compute Renderscripts.

All in all I think Renderscript is an impressive technical feat. But while the demos based on it are really nice (the YouTube app etc.), I just don’t see regular app developers getting hold of it. Grain of salt, yadda yadda. Go check out the examples. And hope that your emulator can render them :p

This makes me wonder: does the emulator now support GLES 2.0?

14 thoughts on “Renderscript”

  1. A few notes:
    – RenderScript is not targeted at tablets; we’ve been using RenderScript since Android 2.0 for live wallpapers, the 3D All Apps screen in Home, etc. It has been working on several devices/hardware platforms for a while now
    – RenderScript does more than OpenCL and should be compared to CUDA instead (you can do pointers for instance); and CUDA works fine on GPUs 🙂
    – RenderScript uses LLVM indeed
    – Shaders are currently written in GLSL

  2. Thanks for the clarifications.

    Having used CUDA myself for a while, I’d say Renderscript is even more powerful. However, I remember the constant battle with shared memory in CUDA, along with other parameters (kernel launch configuration etc.), and I’d be interested to know how you guys managed to pull all that off automatically in Renderscript, on mobile GPUs even. There doesn’t seem to be any GPU-specific configuration for compute scripts. With CUDA that stuff was crucial to get any performance out of the GPU at all (for non-trivial compute tasks…).

  3. Thank you for the quick summary Mario. I’m a newcomer to Android looking to make a game soon. A lot of your questions left me wondering whether some of RS would be officially merged with NDK. Are we going to see RS game demos over NDK? RS at GDC? etc. I’m not quite sure but at least I have libgdx!

  4. Romain Guy asked me to answer a few questions.

    The JIT is LLVM. C is a proven language for both graphics and compute. Dalvik is not really designed for execution on a GPU.

    Fixed function emulation. The objective is to make app development easier. No reason an app developer should have to write a shader when all they want to do is draw a bitmap. The fixed function code generates a shader for the developer but provides a simple interface for doing so.

    Convenience functions such as rsgDrawQuad: yes, these can be abused. However, if all you want to do is draw a background image, there is no reason developers should have to create a mesh just to do it.

    OpenCL. Renderscript started as a graphics API. Romain realized it could easily serve as a compute API also. Rather than cut features (pointers, transparent graphics interop, etc.) to make it more OpenCL-like and add yet another API, we simply kept it as is. C is a well-proven approach to compute, much more so than even OpenCL.

    NDK code is not portable. Yes, you can simply drop to the NDK for performance-sensitive code, but developers must be prepared for CPUs with and without VFP, with and without NEON, with various register set sizes, x86 vs. ARM, … So to “just drop down to the NDK” you potentially need several versions of the same code. This plan is also not future-proof: if a new device appears with a new CPU design, existing NDK apps may not work. Renderscript is intended to solve this.

  5. @Nate, the NDK has its own issues. Writing JNI is annoying and you would need to compile the binary for several architectures (Google TV runs on x86 for instance.) If you need to write a bit of code to make something go faster (image processing for instance), using RenderScript will be portable, easy to write and you’ll get the same performance benefits as with the NDK.

    @Mario, RenderScript doesn’t run compute on the GPU as of today. But it’s been part of the design since day 1 and we have a very good idea on how to achieve it.

  6. Well, only a strong and reliable automated distribution of the computations over the different cores of the architecture would make it worth learning, compared to what is currently available and working.

  7. Mario thanks for the review and it’s nice to see input from the devs working on Renderscript too.

    As has been pointed out by Nate and others, Renderscript is kind of useless for me and libgdx in many respects if it can’t work on the desktop or be ported, etc.

    As Paul & Obli pointed out: can’t we just have OpenCL? My understanding is that various mobile GPUs will be supporting OpenCL by the end of ’11 (I’ve read Mali will and guess Tegra 3 might).

    Of course Renderscript is mostly working now, but I just don’t see the need for a custom DSL / Android specific APIs for low level functionality if it means delayed support for cross-platform standards. Also the mixed fixed API with helper functions and various stuff just muddies things up (to be fair I haven’t dug in deep yet).

    To that end though to the Android team congrats on making something usable now, but don’t lose sight of cross-platform standards. There is only so much benefit to the Android team building custom solutions especially non-documented systems of this nature.

    I just can’t see any benefit to Renderscript as a dev providing middleware for cross-platform use and will likely be skipping this beast.

  8. I would not say that anyone has to take into account what’s good for libgdx :p

    But I too like to have me some open standards. Renderscript is a lot more powerful than OpenCL in terms of expressiveness, though. In terms of parallelization I’m not sure yet. It’s not documented, and I can only assume that it is done with intrinsics like rsForEach, which can give the backend only a hint. No hardware-dependent manual fine-tuning is possible, as is often needed in, say, CUDA.

    I also agree that cross-platform solutions should be preferred. However, that does not seem to be in the interest of the OHA, and rightly so from a business perspective, I guess.

  9. @Mario:

    I agree the OHA (erm Google.. ;P) can choose to play nice or not, but I’ve yet to see an official stance on not pursuing open standards, but RS could be a banner for such if not followed up with implementations of standards when they are available in hardware (soonish / end of ’11). It will be a short term victory if such a path is chosen.

    In brief I for instance in the mid/long term am seriously taking a look at OpenCL for audio computation tasks and Android as the OS for audio hardware and it’d be great for an open standard to be supported in that regard.

    As far as Renderscript providing additional features and expressiveness, I have no doubt cleverness is involved, but I’m concerned Google can’t anticipate all uses. Those involved with Renderscript, for instance, are likely not considering audio computation. I’m sure there are neat possibilities for RS though.

    Thus in the mid to long term it’s best if open standards are supported, so general low level expressiveness can be had by all. So an eyebrow remains raised on my side and will lower when OpenCL support is delivered. 😉

    Always glad to see things moving forward. It’ll be interesting to see how things roll together with ICS next.
