VRscosity Dev Blog #5: Fail faster

In design of all forms, there is the beautiful mantra of “fail faster.” The idea is that you will never get an idea right on the first try; boiling something down to a high-level concept, trying it, and seeing what works and what doesn’t is the best way to learn.

And you should do this. Quickly.

Extra Credits (you really should follow these guys if you’re interested in game development) covers this very well, better than I can by far.

Tweaks and Balances

I’ve not written a dev blog in a long time simply because there hasn’t been much to comment on. After my last post about how I’m using raycasts to cull the number of chunks being rendered, my progress has been little more than tweaking. Specifically:

  • Reworking the mesh generation so that each chunk handles its own mesh, which will make updating individual chunks from the simulation much easier (see the sketch after this list).
  • Adjusting material use so that a texture-atlas-based system keeps the number of draw calls down.
  • Building a really annoying set of lookup tables to ensure the simulation works correctly without too much mathematics involved.
  • General refactoring and tidy up, including arranging the code into a decent set of namespaces.
  • Adding saving and loading functionality, ready for when I eventually need it.
  • And of course, further testing in the Vive to ensure everything looks how it should.
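
To make the first point on that list concrete, here’s a minimal sketch of what a chunk owning its own mesh can look like in Unity. The names here (VoxelChunk, Rebuild) are mine for illustration, not the project’s actual classes:

```csharp
using UnityEngine;

// A minimal sketch of a chunk that owns its own mesh. VoxelChunk and
// Rebuild are illustrative names, not the project's actual code.
[RequireComponent(typeof(MeshFilter))]
public class VoxelChunk : MonoBehaviour
{
    Mesh mesh;

    void Awake()
    {
        mesh = new Mesh();
        GetComponent<MeshFilter>().sharedMesh = mesh;
    }

    // Called only for the chunks the simulation has actually modified.
    public void Rebuild(Vector3[] vertices, int[] triangles, Vector2[] uvs)
    {
        mesh.Clear();
        mesh.vertices = vertices;
        mesh.triangles = triangles;
        mesh.uv = uvs;              // UVs index into the shared texture atlas
        mesh.RecalculateNormals();
    }
}
```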

But most importantly, I’ve been failing.

 

Slow Loads

Here is an example for you.

So the largest issue with my current culling system is that it relies on the automatic Mesh Collider generation Unity offers. For situations where a model must have mesh-perfect collisions, it is a fast and effective way to generate a collider for exactly that purpose.

But fast is relative. Compared to manually calculating, modelling or generating a collision surface, it is very fast. In absolute terms, though? Generating a Mesh Collider is slow.

So very slow.
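
The expensive part is the bake Unity performs the moment a mesh is assigned to the collider. A rough way to see that cost per chunk (a sketch of my own, using only standard Unity calls):

```csharp
using UnityEngine;

// Rough illustration of where the time goes: assigning a mesh to a
// MeshCollider triggers Unity's internal collision-mesh bake, and for
// hundreds of chunk meshes those bakes dominate the load time.
public static class ColliderBakeTimer
{
    public static float TimeBake(MeshCollider collider, Mesh mesh)
    {
        float start = Time.realtimeSinceStartup;

        collider.sharedMesh = null; // clear any previously baked data
        collider.sharedMesh = mesh; // this assignment performs the expensive bake

        return Time.realtimeSinceStartup - start;
    }
}
```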

[Image: Generating colliders for 16³ chunks. This is miserable. Very miserable.]

The result of this is that during stress tests, generating a 128³ Voxel grid in this way takes roughly 40 seconds. Generating a 256³ Voxel grid takes 400 seconds. This is, genuinely, the biggest limiter to the fidelity of the simulation – the load time of the scene.

The worst part of it is that these operations are not thread safe (none of Unity’s main engine features are), so the mesh for each of the 512 chunks in the 128³ grid has to be calculated one by one. All I can do to mitigate this is reduce the chunk size from 16³ to 8³, which gains nothing in load time but does stop the scene from dropping to 15fps (each generated mesh is smaller, so it takes less time within a frame), at the cost of four times as many batches drawn.
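
To spell out why smaller chunks help the frame rate but not the load time, here is my own illustration of the idea (not the project’s actual loader): spread the bakes out so no single frame carries a huge one.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// My own illustration, not the project's loader: bake one chunk collider per
// frame so no single frame is long enough to drop to 15fps, even though the
// total load time is unchanged.
public class ChunkColliderLoader : MonoBehaviour
{
    public IEnumerator BakeCollidersOverFrames(List<MeshCollider> colliders,
                                               List<Mesh> meshes)
    {
        for (int i = 0; i < colliders.Count; i++)
        {
            colliders[i].sharedMesh = meshes[i]; // the expensive bake, one per frame
            yield return null;                   // let this frame finish before the next bake
        }
    }
}
```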

So of course I am looking for a solution, preferably one that doesn’t require physics. Unfortunately, due to how Unity’s ray casting works, no physics means no raycasts… which is a problem.
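
For context, the raycast-based culling from the last post boils down to something like the sketch below. The sampling pattern and the names are my own assumptions; the point is the dependency, since every one of those Physics.Raycast calls needs a collider to hit.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of raycast-based culling (sampling pattern and names are assumptions):
// fire rays through the camera's viewport and treat whichever chunk colliders
// they hit as visible. No colliders, no hits, no culling.
public class RaycastCuller : MonoBehaviour
{
    public Camera viewCamera;
    public float maxDistance = 100f;
    public int samplesPerAxis = 32;

    readonly HashSet<GameObject> visibleChunks = new HashSet<GameObject>();

    void Update()
    {
        visibleChunks.Clear();

        for (int x = 0; x < samplesPerAxis; x++)
        {
            for (int y = 0; y < samplesPerAxis; y++)
            {
                Vector3 viewportPoint = new Vector3((x + 0.5f) / samplesPerAxis,
                                                    (y + 0.5f) / samplesPerAxis,
                                                    0f);

                Ray ray = viewCamera.ViewportPointToRay(viewportPoint);
                if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
                {
                    // Whatever chunk owns this collider is visible this frame.
                    visibleChunks.Add(hit.collider.gameObject);
                }
            }
        }
    }
}
```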

 

The Fast Fail, The Pride Cube

Ultimately what we’re looking for in the culling algorithm is what the camera sees and nothing else. If you boil this down further, you’re talking about exactly what is drawn on screen when you look in a given direction. So, essentially, pixels and their colours.

Enter what I have dubbed The Pride Cube.

[Image: 3D representation of 16-bit RGB space.]

The idea is simple: assign each chunk its own RGB value as an ID in a sub-shader, create a secondary render target at a lower resolution, and output the result you see above to that render target. Iterate through its pixels and, whenever a chunk’s colour shows up, mark that chunk as visible to the main render target (a rough sketch of the readback half follows below). Good idea in theory…

…but in practice, not so simple. Multiple Render Targets have been supported in Unity for a while, but the supplied documentation is obtuse and borderline unusable. On top of that, even once you get the system working, it relies on what feels like black magic to behave properly, which is not something I want to depend on in a VR application. Yet, at least (Unity does support up to eight render targets on hardware that allows it).

And once you get past that problem, which is certainly doable, you have the issue of meshes. You can hide a mesh from one camera’s view, but it will still be stored in the buffer for the secondary render target, nullifying the point of culling in the first place. It’s a tricky one for dynamic meshes.
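
Still, the readback half of the idea is simple enough to sketch. This is a deliberately simplified version using a dedicated ID camera and a CPU readback rather than MRT, and it assumes the ID shader and the colour-to-chunk mapping exist elsewhere:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified sketch of the readback half of the Pride Cube idea, using a
// dedicated ID camera and a CPU readback rather than MRT. The ID shader and
// the mapping from colour back to a chunk are assumed to exist elsewhere.
public class ColourIdCuller : MonoBehaviour
{
    public Camera idCamera;        // renders every chunk in its flat ID colour
    public RenderTexture idTarget; // deliberately small, e.g. 64x64

    Texture2D readback;

    void Start()
    {
        idCamera.targetTexture = idTarget;
        readback = new Texture2D(idTarget.width, idTarget.height,
                                 TextureFormat.RGB24, false);
    }

    public HashSet<Color32> GetVisibleChunkIds()
    {
        idCamera.Render();

        // ReadPixels copies from whichever RenderTexture is currently active.
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = idTarget;
        readback.ReadPixels(new Rect(0, 0, idTarget.width, idTarget.height), 0, 0);
        RenderTexture.active = previous;

        // Every colour that survived into the small render maps back to a
        // chunk ID; everything else can be hidden from the main camera.
        var visible = new HashSet<Color32>();
        foreach (Color32 pixel in readback.GetPixels32())
            visible.Add(pixel);
        return visible;
    }
}
```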

This failure took me roughly 5 hours to prototype, test and scrap. It sounds like a lot of time – more than half a 9 hour work day – but compared to the overall time scale of the project this was a fast – and needed – fail. Now I know this system won’t work and I must consider other options.

But first I will be optimising the mesh generation to lower the number of polygons the physics collider has to handle wherever possible, and making sure the simulation actually works.
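
The obvious place to start on the polygon count is a general voxel-meshing trick rather than anything specific to this project: never emit the faces that sit between two solid voxels.

```csharp
// A general voxel-meshing technique, not necessarily what the project will
// end up using: only emit a face where a solid voxel borders an empty one,
// so geometry buried inside the volume never reaches the render mesh or the
// physics collider.
public static class VoxelMeshing
{
    public static bool ShouldEmitFace(bool[,,] solid, int x, int y, int z,
                                      int dx, int dy, int dz)
    {
        int nx = x + dx, ny = y + dy, nz = z + dz;

        // Faces on the chunk boundary are always emitted in this simple version.
        if (nx < 0 || ny < 0 || nz < 0 ||
            nx >= solid.GetLength(0) ||
            ny >= solid.GetLength(1) ||
            nz >= solid.GetLength(2))
            return true;

        // Emit the face only if the neighbouring voxel is empty.
        return !solid[nx, ny, nz];
    }
}
```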
