New 3d API

Discussions for core devs and contributors. Read-only for normal forum members to keep the noise low.

Re: New 3d API

Postby mzechner » Sun Mar 24, 2013 5:33 pm

I finished up the first iteration of the new Model class, and also added a new ModelBatch class. It's essentially a RenderBatch class that works with the new Model class. You can see it in action here: https://github.com/libgdx/libgdx/blob/n ... lTest.java

I also refactored ExclusiveTextures. We now have a TextureBinder interface with begin/end/bind methods. A TextureBinder is responsible for allocating a texture unit on bind and for binding the texture. In begin/end it has a chance to set up/tear down units. It's mandatory to clean up the texture unit state in TextureBinder#end, otherwise we can't render other things, and we'd be fucked in case of context loss. ExclusiveTextures is now called DefaultTextureBinder; I also cleaned it up a little and fixed a minor bug (GL_MAX_TEXTURE_UNITS is a constant, not the actual value :)).
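
To make the contract concrete, here's a minimal sketch of what the interface could look like; the method set comes from the description above, everything else (parameter types, return value) is an assumption:

Code:
import com.badlogic.gdx.graphics.Texture;

/** Sketch only, not the final API: allocates texture units and binds textures. */
public interface TextureBinder {
    /** Called before rendering; may set up texture unit state. */
    void begin();

    /** Must restore the texture unit state, see the note on context loss above. */
    void end();

    /** Binds the texture to a free unit and returns that unit's index. */
    int bind(Texture texture);
}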

I cleaned up other classes as well and removed the OldBatchXXX classes in the g3d.test package.

I put a lot of FIXMEs all around the g3d package. We'll need to tackle those eventually. Things we should work on next:

- proper local/world transform calculations for hierarchical models.
- think about how to manage shaders. who creates/disposes them?
- think about RenderBatchListener; currently it's a bit complicated to know who's doing what (sorting of render instances, selection of shader, etc.). There's some cross-over with the Shader class
- start modelling the animation class
- add bone matrix information to Renderable (basically an array of Matrix4 instances to be used by a compatible Shader; see the sketch below). Think about how to communicate to the shader that a render instance is skinned (fake attribute? would be more unified; having the shader check if the Mesh has bone indices/weights is kinda meh :/).
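
A hedged sketch of where that bone matrix array could live; the field names here are assumptions, not the final API:

Code:
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.math.Matrix4;

/** Sketch only: the per-instance data a Shader gets to render. */
public class Renderable {
    public Mesh mesh;                              // shared geometry
    public Matrix4 worldTransform = new Matrix4(); // instance placement
    public Matrix4[] bones;                        // null for static meshes,
                                                   // one matrix per bone otherwise
}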
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm

Re: New 3d API

Postby mzechner » Sun Mar 24, 2013 6:49 pm

OK, we resolved a few of these issues already. I removed the old RenderBatch stuff in favor of ModelBatch. The RenderBatchListener interface was replaced with two interfaces, ShaderProvider and RenderableSorter. Easier to maintain, and more flexible. The ShaderProvider returns a Shader for a Renderable (or uses the one set in the Renderable). It's also responsible for managing resources and has a dispose method. The RenderableSorter simply sorts renderables by whatever criteria it wants. Both are used by ModelBatch to do what it needs to do.
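
Roughly, the two interfaces could look like this (a sketch based on the description above; the exact signatures may differ):

Code:
import com.badlogic.gdx.utils.Array;
import com.badlogic.gdx.utils.Disposable;

/** Sketch: hands out shaders for renderables and owns their resources. */
public interface ShaderProvider extends Disposable {
    Shader getShader(Renderable renderable);
}

/** Sketch: orders renderables before ModelBatch draws them. */
public interface RenderableSorter {
    void sort(Array<Renderable> renderables);
}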

I made everything properly disposable, so that should be fine. Apart from one thing: Materials. They may have textures, loaded when a ModelData is converted to a Model. Proposed solution:

MaterialAttribute implements Disposable. When a Model is disposed, it iterates through all MaterialAttributes and disposes them. This is problematic if we share Materials and MaterialAttributes across multiple models. If used via AssetManager, any referenced textures and other resources wouldn't be managed by the attribute but by the AssetManager, so there's no problem there. The minor complication is that an attribute referencing a texture needs to know whether it owns it (and can dispose it) or not (in which case the AssetManager is in charge of disposing).

Not sure yet how to deal with this, but I think it's the last stumbling block before we can jump into the really interesting stuff: node animation and skeletal animation :D
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm

Re: New 3d API

Postby mzechner » Sun Mar 24, 2013 10:09 pm

Xoppa and I just agreed to add a ModelBuilder that can spit out common shapes like cubes/spheres/cylinders/capsules.
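
Usage could end up looking something like this; method names and parameters are guesses at this point, not a settled API (imports omitted since the packages are still in flux):

Code:
ModelBuilder builder = new ModelBuilder();
// A 1x1x1 box with positions and normals; all parameter choices assumed.
Model cube = builder.createBox(1f, 1f, 1f,
        new Material(),                    // material for the whole shape
        Usage.Position | Usage.Normal);    // vertex attributes to generate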
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm

Re: New 3d API

Postby mzechner » Sun Mar 24, 2013 11:42 pm

And we fixed most of the issues by introducing a ModelInstance class. A Model stores the original data plus meshes and textures; it owns the latter two. A ModelInstance is derived from a Model and has an additional transform for positioning it in the world. The ModelInstance makes a copy of the materials and the node hierarchy of the Model. It does not copy meshes/textures, but references them instead. All ModelInstances of the same Model share meshes/textures. Because materials and nodes are copied, you can freely modify those in a ModelInstance, which solves all the issues we talked about before.
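
In code, the ownership split would look roughly like this (a sketch; the asset path is made up):

Code:
// One Model, many cheap instances sharing its meshes/textures.
Model model = assets.get("ship.g3dj", Model.class); // assets: an AssetManager

ModelInstance a = new ModelInstance(model);
ModelInstance b = new ModelInstance(model);
a.transform.setToTranslation(-2f, 0f, 0f); // per-instance placement
b.transform.setToTranslation( 2f, 0f, 0f);

// Only the Model must be disposed; instances don't own any GL resources.
model.dispose();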

Xoppa also fixed the bullet tests accordingly, wohoo :D

Next steps:
- Add a simple GLES 1.x Shader
- Fix up Gdx Invaders to use the new ModelBatch and ModelInstances, with GLES 1.x and GLES 2.x renderers
- Remove the old model stuff entirely (graphics.g3d.old)
- Go through the FIXME's, and fix anything crucial
- Merge with master
- Write more shaders
- Tackle animation, adapt shaders to allow skinning :)
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm

Re: New 3d API

Postby mzechner » Sun Mar 24, 2013 11:56 pm

Final thought of the day: modeller plugins. We could simply wrap fbx-conv in plugins for various modellers. The plugin would first write a temporary FBX file and invoke fbx-conv, all from within the modelling app. No need for command-line stuff. Who wants to give that a try for Blender? :D
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm

Re: New 3d API

Postby xoppa » Mon Mar 25, 2013 6:32 pm

I started to create a basic ModelBuilder class, which is used by the bullet tests now. All bullet tests (except SoftMesh, which is commented out) are now working with the new Model class, including lighting. Since the bullet tests changed a lot, I'll recheck them when the new 3D API is final and might rewrite parts of them for readability etc.
xoppa
 
Posts: 676
Joined: Thu Aug 23, 2012 11:27 pm

Re: New 3d API

Postby mzechner » Tue Mar 26, 2013 9:57 pm

Last four problems to solve: CPU-side skinning (GPU-side is "easy" with the current setup), the maximum number of bone matrices, keyframed animation à la MD2, and telling a shader whether a renderable is static, keyframed or skinned. Kalle gave a lot of input on that on IRC, thanks.

CPU-side skinning
I'm in favor of not doing this. Problem: Model stores all meshes, and ModelInstances reference those meshes for rendering. If we do CPU-side skinning for multiple model instances that share the same mesh, subsequent skinning overrides the results of previous skinning, as they'd all write to the same mesh. So we'd have to keep a mesh copy for each instance. That complicates ownership, i.e. what do I need to dispose? At the moment, only the Model needs to be disposed, model instances don't. With an additional mesh per CPU-skinned model instance, the model instances would need to be disposed as well. That's bad.

Maximum #bone matrices (uniform limits)
GPUs have a hardware-specific limit on the number of uniforms. On PowerVR it's somewhere around 128 vec4s. That's 32 4x4 matrices, which would mean we can support 32 bones per skinned mesh. Problem is, we also have other uniforms (MVP matrix, normal matrix, lights, material etc.). Realistically, we can probably only count on 20 bones per mesh across all GLES 2.0 compatible mobile GPUs. We can somewhat work around this by splitting up meshes that use more bones than that limit. The vertex data can still be in the same VBO (so the same Mesh), but we need to add additional mesh parts. That would be a preprocessing step, either at load time or when converting the FBX to our format. I think for the first iteration we should document that a maximum of 20 bones is supported (probably 4 bone weights per vertex max, or 2, or make it configurable). If it becomes an issue, we can look into writing the splitting logic.
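
To make the budget arithmetic explicit (the 48 reserved vec4s below are a guess that lands on the ~20-bone figure):

Code:
// Each 4x4 bone matrix costs 4 vec4 uniforms.
static int maxBones(int totalVec4Uniforms, int reservedVec4Uniforms) {
    return (totalVec4Uniforms - reservedVec4Uniforms) / 4;
}

// maxBones(128, 0)  == 32 -> theoretical PowerVR-class limit
// maxBones(128, 48) == 20 -> with MVP/normal matrices, lights and material reserved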

Keyframed Animation (or Morphs)
These are used for facial animation, or if skeletal animation is too costly (or you don't have enough bones...). The trade-off is space, as you essentially store the mesh once per keyframe, in a different pose. MD2 does that. Currently we don't support morphs in the JSON format, and we don't read morphs from FBX. The latter isn't an issue; we can always sample a skeletal animation and create morphs for each keyframe. The bigger issue is where to store those morphs, and how to render the interpolation between two morphs.

Storing could work like this (thanks Kalle): say one keyframe is stored as a single mesh part (so, a subset of a mesh). That mesh part would actually be composed of two position/normal/UV sets, one for the current keyframe and one for the next. So each vertex in a mesh part would look like this:

Code:
attribute vec3 a_position;    // position, keyframe N
attribute vec3 a_position2;   // position, keyframe N+1
attribute vec3 a_normal;      // normal, keyframe N
attribute vec3 a_normal2;     // normal, keyframe N+1
attribute vec2 a_texCoord0;   // UVs, keyframe N
attribute vec2 a_texCoord0_2; // UVs, keyframe N+1
...


Say we have 128 animation frames. Then we'd not have 128 meshes, but a single one, with mesh parts each holding 2 keyframes as above. This essentially doubles the amount of data we store (mesh part 1 = {keyframe 1, keyframe 2}, mesh part 2 = {keyframe 2, keyframe 3}), but that may be fine.

So how do we render this? First we need to define how we store the information. In the JSON format we need to add a new section, maybe called "morphs". That will store animations as meshes and mesh parts. TBD, need an example; can probably derive one from one of the MD2 models we have.

Nodes own mesh parts and materials for each mesh part. When a morph animation is applied to a model instance, we simply point those node mesh parts to the mesh parts that store the two keyframes we need to interpolate.

Telling the shader whether a renderable/mesh part is static, keyframed or skinned
Of course, the shader rendering those mesh parts then needs to know how to interpret the vertex attributes and interpolate them. We have the same problem in the case of GPU-skinned models. We need to tell the shader (the thing that actually renders mesh parts + materials in the form of a Renderable) whether a mesh part is static, keyframed or skinned. I think the cleanest way is to specify that via the material of a mesh part. Let's create artificial material attributes, e.g. KeyframedAttribute and SkinnedAttribute. If a shader encounters those in the material of a renderable, it knows which rendering path to take. If neither is present, we assume static rendering. We could also coerce that into a single attribute, e.g. AnimationAttribute or some such.
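
Inside a Shader, the path selection could then be as simple as this; the attribute names come from the paragraph above, the lookup API is assumed:

Code:
// Sketch: pick the rendering path from marker attributes in the material.
if (renderable.material.has(SkinnedAttribute.class)) {
    // upload the bone matrices as uniforms, use the skinning vertex shader
} else if (renderable.material.has(KeyframedAttribute.class)) {
    // set the blend weight, interpolate a_position/a_position2 etc.
} else {
    // static path: plain transform by the MVP matrix
}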

I know this is a lot to swallow, would appreciate any input.
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm

Re: New 3d API

Postby mzechner » Thu Mar 28, 2013 11:01 pm

Resolved a few more issues today.

Problem: Model is supposed to manage all OpenGL resources (meshes, textures). Managing mostly means being responsible for disposing them. That way model instances don't have to be disposed; you just carelessly instantiate as many as you want, modify the copies of materials and nodes in them, and when you are done, you only have to dispose the Model to get rid of everything. Disposing the meshes is easy, the Model already stores a list of those. Textures are a different beast. They are stored in material attributes, so the Model would have to walk through all its materials and their attributes and would have to know which attributes hold a texture. That implies the Model has to know all attribute classes, a no-go.

First idea: make materials/attributes disposable. Makes no sense for attributes like ColorAttribute or FloatAttribute. It also has the bad side effect that material copies on instances could be disposed accidentally, killing the Model and all other instances.

Second idea and final solution: Model has a list of Disposables. While creating the textures when loading the mesh materials from model data, the Model puts those into the disposables array. In case the AssetManager is used for loading, only meshes are put into that list; textures are managed by the AssetManager.
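
A minimal sketch of that ownership scheme; everything beyond the list-of-Disposables idea is assumed:

Code:
import com.badlogic.gdx.files.FileHandle;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.utils.Array;
import com.badlogic.gdx.utils.Disposable;

public class Model implements Disposable {
    private final Array<Disposable> disposables = new Array<Disposable>();

    // Called while converting ModelData to a Model.
    private Texture loadTexture(FileHandle file) {
        Texture texture = new Texture(file);
        disposables.add(texture); // skipped when the AssetManager owns the texture
        return texture;
    }

    @Override
    public void dispose() {
        for (Disposable disposable : disposables)
            disposable.dispose();
    }
}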

In other news: we'll merge with master ASAP, to relieve some of Xoppa's pain. The old model-loaders stuff will be gone, as will the old 3D API.
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm

Re: New 3d API

Postby mzechner » Sat Apr 06, 2013 10:24 pm

OK, we are almost ready to merge with master. I want to get gdx-invaders fully working first, that is, it should render exactly like the old version. I fixed a few minor things tonight and will probably fix some more tomorrow. Not sure I want to keep the current architecture, which mixes model and view (Simulation & Renderer). What we need:

- blending seems to be broken with the GLES20 renderer
- need to add different light types (point + directional at least) and update the shaders
- need to figure out why the explosion in gdx-invaders is bonkers
- benchmark and optimize; I have a feeling we'll be quite a bit slower than the old version on Android

Once that's done we can merge.

The logic of Material#add is a bit weird (it broke the transparency of blocks in gdx-invaders): if an attribute of the same type is already in the material, the new attribute isn't set.

We should also consider adding statistics wrappers for GL10/GL11/GL20 that count how often each method was invoked. That way we can track down per-frame state change counts more easily.
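
Such a wrapper could be as simple as delegate-and-count; a sketch (the class name is made up, and only two methods are shown):

Code:
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.utils.ObjectIntMap;

// Abstract so only two methods need to be spelled out here; a real wrapper
// would override every GL20 method the same way.
public abstract class CountingGL20 implements GL20 {
    protected final GL20 gl;
    public final ObjectIntMap<String> counts = new ObjectIntMap<String>();

    protected CountingGL20(GL20 gl) { this.gl = gl; }

    private void count(String method) {
        counts.put(method, counts.get(method, 0) + 1);
    }

    @Override
    public void glBindTexture(int target, int texture) {
        count("glBindTexture");
        gl.glBindTexture(target, texture);
    }

    @Override
    public void glUseProgram(int program) {
        count("glUseProgram");
        gl.glUseProgram(program);
    }
    // ...and so on for the remaining GL20 methods
}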

In other news, I put fbx-conv on the server, see http://libgdx.badlogicgames.com/downloads/fbx-conv. Currently Windows 32-bit, Linux 64-bit and Mac OS X 64-bit only. Can't easily set up a Jenkins build of that thing but will eventually look into it.
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm

Re: New 3d API

Postby mzechner » Mon Apr 08, 2013 8:07 pm

Welp, we may have to rethink how to do lighting. The current GLES 2.0 shader is rather slow on mobile, so we may have to specialize things. Kalle_h suggested a fixed lighting model, e.g. ambient + 2 point lights + 1 spotlight + an ambient cube per object. That would make writing the shaders easier and can mostly be done in the vertex shader -> good for mobile.

The downside is the somewhat "arbitrary" limitation, but I think I could live with that. Some links:

Valve shading in HL2, explains ambient cube
http://www.valvesoftware.com/publicatio ... Engine.pdf

God of War 3 shading, interesting technique
http://advances.realtimerendering.com/s ... g%20Course).pptx

Reference shader for keyframed anim on GPU!
http://http.developer.nvidia.com/CgTuto ... ter06.html

I still have to figure out animation to some degree, mostly how to deal with morphs. I *think* it would make sense to introduce an attribute for skinned animation (containing the bone matrices) and one for morph animations (containing the keyframe meshes shared across model instances, plus info on which two keyframes to blend with what weight).

sneaky edit: Kalle_h also proposed using vectors/quaternions to encode translation/rotation/scale in a model (instance) instead of a matrix. I tend to agree with this. Thoughts?
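
What that could look like, with the matrix only baked on demand (a sketch, all names made up):

Code:
import com.badlogic.gdx.math.Matrix4;
import com.badlogic.gdx.math.Quaternion;
import com.badlogic.gdx.math.Vector3;

/** Sketch: store translation/rotation/scale separately, bake a matrix for rendering. */
public class NodeTransform {
    public final Vector3 translation = new Vector3();
    public final Quaternion rotation = new Quaternion(); // identity by default
    public final Vector3 scale = new Vector3(1f, 1f, 1f);

    /** Composes translation * rotation * scale into out. */
    public Matrix4 toMatrix(Matrix4 out) {
        return out.set(translation, rotation, scale);
    }
}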

So much to do, so little time :/
mzechner
Site Admin
 
Posts: 4875
Joined: Sat Jul 10, 2010 3:50 pm
