Author: Tetrino

ᴛᴇᴛʀɪɴᴏ moving forward

Due to various work and study commitments I’ve been a bit lax about writing content for this here site. For that I am to be shamed. Regardless, now that university is officially over and I’ve settled into working life, I can finally kick things into gear.


ᴛᴇᴛʀɪɴᴏ: The brand and a brief word on accessibility

Frankly, I’m terrible at branding, but to start with it’s time to solidify the small thing I’m building up. With this in mind, TETRINO as a brand (stylised ᴛᴇᴛʀɪɴᴏ) is being collected into an actual entity on social media (and other outlets), and all further releases made as personal work will be under this brand.

Ideally, any contract work would also be under this brand, so if everything goes to plan you’ll see ᴛᴇᴛʀɪɴᴏ in many other places soon enough.

Of course there are accessibility issues to be raised with this style of lettering, especially as text-to-speech software does not enjoy smallcaps (often just assuming the text is blank). Due to this, despite my love for the look, the social media presence is remaining regular text.

The goal is to cause accessibility trouble for as few people as possible. All future uploaded images will also have accessibility text added through Twitter Image Descriptions and the Facebook equivalent. This reasoning is also why the website branding is remaining “TETRINO” in all-caps – text-to-speech can handle that fine.

I won’t be removing or adjusting the previous smallcaps-only tweets, as they contain little in the way of important information.



I’m working on a couple of primary projects right now that I’m excited to write about, and will have more public information soon.

Project 1: VRAccess

With time on my side I’m finally able to continue work on my VR accessibility plugin VRAccess (working title, may be permanent), focusing on implementing an easy-to-use subtitling toolkit for use in VR applications. Further information on this will be released shortly; however, my main workload currently is building a proof of concept for distribution amongst deaf and hearing-impaired VR community members for feedback.

The toolkit is being built for use in Unity 5.x and (hopefully) above and relies on the VRTK. The intent is to port it to other engines when possible, but I find it’s easier to focus on one engine at a time.

When the first phase of VRAccess is complete there should be a usable subtitling system that not only allows for dynamic subtitling of objects, but also allows for indicating where those objects are in 3D space.

Project 2: Project Carter

I have gushed about this project on Twitter before, but to simplify: Project Carter is a free-release game, designed to be a hybrid of FTL, Factorio and Rimworld, wherein you, the player, must guide a sleeper ship to a new home for humanity.

You have to juggle failing systems, crew relationships and the limited resources you have at your disposal to ensure that humankind reaches a new destination. Losing a crew member may not be the end of your run, and sometimes such sacrifices must be made to save the thousands kept on ice, waiting for a new home.

As I spoke of in the above thread, I grew up watching all of these deep science fiction universes that filled their worlds with great – if fake – science. Things felt real; you understood why everything worked. Technobabble was still technobollocks, but it was the dog’s technobollocks.

Yet we never really had a game that properly explores this concept. We’ve had some that grazed the top of it, and plenty that simulate group-bridge play (Artemis Bridge Simulator being the most well known), but nothing really lets you dig deep into maintaining these systems long term. Though, Hellion sure looks promising.

Project Carter is me taking the core concept behind FTL: Faster Than Light (simple management of crew and systems) and building the antithesis. The ship has resource flow and power management to consider, but that’s not the half of it. Too much power could blow a fuse – but which one out of the hundreds? Has the fuel intake valve for the lower port RCS thruster locked open again? Did a bolt on the hull get sheared off during that collision with a meteorite? Do we have to route coolant through a corridor or else risk losing our frozen payload?

I’m not expecting Project Carter to be a popular game, but I don’t need it to be. I’d be building it for myself, on my own time. But, if people want it, it’ll be freely available. Further information on this will be coming once I’ve finished wrapping up VRAccess.

Other things…

I’ve a few smaller projects on the go that will be announced in due time. Some of them visual, some musical, some for existing games I’ve nothing to do with (being a fan of machinima and having made some in the past). However, to answer some questions I received privately:

VRscosity, which was my dissertation project, will be released for free to everyone who wants it within the next couple of days.

How Small We Are is a slow-burner project I work on to relax from these other things and work, so it’ll come but I cannot set a firm date.


Social media

The brand Twitter account is to be the main access point on social media, however a Facebook and YouTube presence exists for cross-posting and (hopeful but impossible) avoidance of freebooting (specifically the second definition on Wiktionary).

There is also the Discord channel for those that wish to join and have a quiet natter.

There are also a locked-off Bandcamp and a Patreon page for when the time is right to unlock them. To be honest, I’m primarily sitting on them to prevent malicious URL grabbing, so don’t expect the Patreon to be public for a while. Bandcamp may become public once one of the few projects I’m focusing on comes to fruition.


To close…

The next year or so is going to be an exciting time for me, as I hope it is for all of you. I will finally push myself to where I’d like to be, and I’d like all of you to join me on this adventure!



Avoiding Incarna: How Elite: Dangerous could be in trouble

It is no small secret amongst friends that I am a huge fan of science fiction. Some of the first books I read were from Asimov and Clarke, I grew up watching the various Star Trek series in the home, and I am currently doing a personal re-cut of Silent Running to include the 65daysofstatic re-score simply because I can (let’s be fair, I don’t know anyone who’s a fan of the original soundtrack).

Naturally, this lends itself to games. Homeworld is one of my top titles of all time, I played Eve Online for more than a decade, and I own almost every X release.

No surprise then that I’m a massive fan of Elite: Dangerous. Essentially the “Elite 4” known for years before a rather successful Kickstarter gave it both the name we know and the funds to prove its market, Dangerous offers a multiplayer sandbox literally the size of our galaxy (thanks to the wonderful world of procgen), filled with interesting sights to see, sites to visit, and slights to perform.

Yet if you listen to the vocal community, the game may be in trouble.

It could be argued that there are many faults with Elite: Dangerous’ game design. Entire systems are built around the concept of “grind as depth” and this has resulted in a lot of criticism from the community. However, said faults are not the focus of this post.

I must also point out that what follows is in no way an attack on, or commentary about, individual employees of Frontier (the developer); it is more a general view of how some things seem to be done from an outsider’s perspective. When talking about dev time, remember that I am referring to the time given to specific tasks as dictated by a schedule that is often, if not always, outside the control of the developers themselves in most commercial software houses.

The Incarna Problem

In the modern world of large-scale online games, there have emerged multiple instances of what I have coined The Incarna Problem. The concept is that, over time, the producers and managers responsible for the maintenance of a long-term online game choose to focus on expansion rather than refinement, to the ultimate detriment of the game itself.

To explain this in detail involves speaking a little about Eve Online and its troubled development history.

After many years of following bi-annual “expansion release” schedules, lots of issues with Eve‘s core gameplay were simply left to flounder, and the vast majority of the new gameplay additions contained within these expansions were left to drown.

For example, Empyrean Age was released mid-2008 with their first-ever CG-enhanced trailer, several in-game events and even their first tie-in novel. But the gameplay (which was unfinished and in many ways functionally broken) was essentially ignored even in the late-08 expansion and left to flounder right up until late 2011.

By the end of 2010, “unfinished additions” had become an accepted, if annoying, fact of Eve, and the player base as a whole had finally had enough. The vast majority of “top level” players simply accepted that each expansion was used to sell the game to more people, rather than to encourage the enjoyment of those who were already giving money.

Incarna, an expansion that was to introduce out-of-ship interaction, was touted as a massive new frontier for Eve – and marketed as such since the original preview videos in 2008. However, come release in 2011, it was barely in an alpha state: broken, with terrible performance, a single room with barely any interactivity, and it wasn’t even optional, meaning some people couldn’t actually log in due to the performance and crash bugs. Top this off with a brand new cash shop infamously featuring a $70 vanity item, and the straw finally broke the camel’s back.

This monocle, worth $70, became a symbol of the greed of the developer.

The result? CCP lost a lot of money. Like all MMOs launched in the early 00s, Eve relied on monthly subscriptions, and the player protests that followed sent these off a cliff. The sudden reduction in income meant they had to reduce their staff by a few hundred, stop working on their World of Darkness MMO (they essentially purchased White Wolf in 2006 explicitly to work on it), and actually start listening to their existing player base (and paying attention to the devs in the mines).

Eve itself, however, benefited. The late-2011 “expansion” Crucible was barely more than a giant fix for hundreds of bugs and various other missing components, many of which had been outstanding since 2007 or earlier. These days CCP is in reasonably good standing, and while the trope of “Eve is shit, but the community is great!” still runs, the game would simply not exist in its current state if the management hadn’t pulled their heads out of their arses and listened.


Repeating history

Bring it back to Elite: Dangerous, a game approaching two and a half years of lifetime, and a lot of the above rings surprisingly true.

Since the release of the game, there have been several feature updates, many of which have been sold through trailers and related press releases to the wide world as massive new additions. Like Eve, a player is required to download these updates (in a stripped-down form if they do not own the Horizons season pass) if they wish to continue to play the game. And like Eve before the Incarna incident, each addition is unfortunately left in an unfinished state.

In software development you have the concept of a “Minimum Viable Product,” something true for pretty much all manufacturing and media industries as well. Essentially: what is the minimum you can get away with while still providing something at least some customers will be happy with? The original intent of the MVP in software is to build upon it through user feedback, yet it is often used as a get-out clause for releasing and then dropping content that is beginning to exceed its allocated development budget.

Dangerous is filled with these. Since release, there have been multiple major additions to the game, and each one betrays the pattern seen in Eve‘s own release backstory. Each of the major patches was pushed hard in the media and generated plenty of sales, but left those who were already playing with a sour taste in their mouths.

Powerplay, for example, was sold as a massive game changer, allowing players to affect vast empire politics. The result was more of a numbers game many easily ignore. Wings allowed players to play together, but was seemingly released in spite of the game’s own networking code, and its buggy social frameworks were not even remotely finished.

That remained true until the most recent patch, The Commanders, leaving a gap of more than two years in which this feature was rather incomplete.

CQC was supposed to be the developer’s entrance into esports, but even their own official tournament was cancelled as they didn’t have the development time allowance to make it what it needed to be. Even Horizons, the game’s fabled “planetary landings” patch, suffered.

Engineers, Guardians and The Commanders were all sold to the greater public as massive improvements to the game, designed to bring in as many new players as possible. But for those already playing, each update added further content that was expected to see no real improvement or changes for many years to come. Even if you ignore the design decisions (each update’s core design hook so far has been based on the easy get-out clause of RNG-based time sinks), the update cycle of unfinished (or unrefined) content stacking on unfinished content is leaving current players of Dangerous feeling much the same way as players of Eve did in the years before Incarna.

Elite: Dangerous is very much suffering from its own version of The Incarna Problem. It has even been put forward that the next major release, 2.4, may well be the direct analogue.


The issues of money

There is a lot that could be said about the content that has been added. One of the key criticisms made by the vocal parts of the community is that the content that is mostly bug-free on release is the content designed to make the developer, Frontier, the most money.

A prime example of this is The Commanders, which introduced plenty of micro-transaction items to buy for your in-game pilot avatar, but featured some game-breaking bugs that were known before release. They were not fixed, as development time was commanded to be spent on perfecting this aspect of the game instead (not forgetting the almost bug-free improved external camera, so that players could show the world just how pretty the game is).

Elite Monocle Parody

The crux of the issue is that this content is stacking and is often unfinished (Elite: Dangerous today is very similar to Eve in late 2010, in that there are loads of potentially great systems being held back by the lack of time to really flesh them out). Yet – and this is important – nothing is going to change. This could mean Elite is in trouble.

Because, as mentioned before, Eve is a subscription-based game. When players decided they’d had enough of the developer’s collective shits, they voted with their wallets, and for a subscription-based game (as well as a company that has built its entire financial model on one) this is a very dangerous thing to have happen. Eve losing a thousand accounts results in CCP Games losing $15,000 a month in revenue. Incarna resulted in ten times that.

Elite: Dangerous, however, is not, and I would argue that its running costs are nowhere near as high.

The lack of subscription fee for Elite: Dangerous means that ultimately, if half the player base quit overnight, Frontier may note it down in their journal, but they’re unlikely to be too worried about it. The game is funded by aesthetic micro-transactions and press bumps from patch releases, and as long as they both still bear fruit there is little need to worry about the majority of player contribution, or even much about what they are saying.

On top of this, using P2P connectivity means that while players get a subpar experience in some multiplayer situations (someone with low bandwidth is going to cause hell for everyone else), the costs of running the servers for the game are low compared to hosted connectivity, simply requiring players to be linked together and the most basic of checks to be carried out. Frontier likely do not need a large number of players to keep the servers running at all, as long as those who are playing are still buying skins for their ships and suits for their avatars.


Avoiding Incarna

If the developers want to avoid a catastrophic loss of their core players, they’ll have to do something about the mounting pile of unfinished features, known bugs and gameplay annoyances sooner rather than later, especially after The Commanders created so much negative community opinion.

However, it’s possible that Elite: Dangerous is already on its way to avoiding a final straw like the one Eve saw with Incarna. Or that they simply don’t care if it happens.

Recently Zac Antonaci put out a statement about what is coming to the game within the next year or so, with mention of updates designed around cleaning up the core gameplay loops once the 2.4 update cycle is complete. This could mean one of two things, I feel:

Either they are finally in a state where it makes sense to do this before moving forward or else be faced with an insurmountable task;

Or the developer, having reached the end of term with the product, is aiming to wrap up the game’s main development come the end of 2018 (now that they have other guaranteed incomes in the form of Planet Coaster and an unannounced movie tie-in) and would rather do so with a product they don’t have to sink much more development time into.

After all, if you clear the lawn before ignoring it for a while, you’ll have an easier time clearing it again when your neighbours complain about the leaves.


Le Frimeur Part 1: Personal projects

2017 marks roughly ten years of me finally starting to get to grips with who and what I want to be in life. The fact it took ten years to solidify this into my current path says… well, it’s an indication of many things.

But ten years of faffing about has led to a lot of connections, as well as things I have tried, failed at, and sometimes even succeeded in doing.

Unfortunately, it’s also been ten years of fighting depression and self-worth issues. I am finally sorting myself out, and my current situation is thus: my degree is ending soon with an almost guaranteed upper second, I have guaranteed graduate employment waiting for me, and an EngD application sent off. So, not in a bad place!

However, with some nagging doubts in my mind still, I figure it’s about time I posted some things I’ve done over the years.

Le Frimeur – “The Show-off” – is a short series of posts I’m going to publish talking about my own work and projects, as well as other things I was directly or indirectly involved in and/or responsible for.

Some were good, some were bad, and most were failures, but here they are!

Note: Many of these projects are under the name “A Rock Called Steve” which was, for the longest time, the studio title for online releases.


Major Personal Projects

Simply stuff I’ve done without being on someone else’s contract. Nothing particularly finished really, either. But at least some attempts.

Project 1: The Unfinished Machinima – Ad Astra

From 2009 to 2015, I worked on and off on a series of pre-rendered animated shorts set within the Eve Online universe dubbed Ad Astra. Designed as a multimedia project including written, audio and visual works, the only public releases were a series of test videos and a (now removed) unfinished proof-of-concept short.

Ultimately the project failed because its scope was far too large for the time I had available. Regardless, I sank hundreds of hours into it and learned a lot about animation and rendering techniques – not to mention about the Eve Online engine itself, and how it handled its asset pipeline.

I don’t consider this work for nothing, despite the lack of output. At the time, I ended up discussing at length – and solving various engine problems with – the man behind Caldari Prime Pony Club‘s Jeremy viewer (and soon-to-be real-time, WebGL machinima tool) and the man behind the (seemingly abandoned) Eve Outtakes, to the benefit of all.

The last bit of work I put into the concept was in late 2014, after spending a year out of the game itself, and was a test to see if I could drop the photorealistic look I had been going for in favour of a cel-shaded affair. It worked out okay, but I doubt I’ll return to Ad Astra any time soon.


Project 2: The Failed IndieGoGo – Spectrum

Spectrum was an idea I had while floundering for work around 2012. For some reason I got it into my head that I could, with my knowledge at the time, develop a side-scrolling shoot-em-up (shmup) with a unique style based on the work of Piet Mondrian.

I also decided, after the success of a lot of larger Kickstarter and IndieGoGo projects in 2011, that crowd-funding it would be a great idea.

Wallpaper Released for Spectrum

Of course I was very, very wrong. The graphical style was fine – after some friendly and helpful advice from Cat Musgrove I refined the last couple of major faults and managed to get a suitable demo of the style ready. Unfortunately, I was naive enough to rush everything, and support quickly died.

If you want to see the demo in action, I have a video here.

These days the only evidence I have that the project existed at all is a brief mention in a long-dead Kickstarter Katchup periodical on Rock Paper Shotgun (ctrl+f “Spectrum”), as IndieGoGo removes failed projects from their archives after a few years. In this case? Probably for the best.


Project 3: Various Attempts at YouTube

Over the years, I also attempted a few times to get into the YouTube community and potentially make a bit of coin that way. Most of my content was gaming related and very low-brow – most of it is now either private or outright removed from my channel.

Amongst these were yalP s’teL, which was 19-20 year old me trying to be experimental and release backwards Let’s Play videos for the fun of it, and Once More Unto the Breach, which was the more traditional take on the idea. At one point I was in talks with the wonderful Amy about doing videos with her – including testing Torchlight 2 at one point – but time on both our sides fell apart (and she ended up working somewhere awesome).

The last attempt, in 2013, was more of a hobby thing than anything else, as my application as a mature student to do a BSc had been confirmed. It was also around this time that Google saw fit to lock off the income of a large portion of the gaming YouTube community thanks to a rights disagreement, which basically drove me off entirely for a while.

The series, called Ways to Play, was heavily inspired by Loading Ready Run‘s X ways to Y skits (you’ll have to search the videos for “Ways to”) and was a bit of prodding at various, less regular ways to do things.

Or rather, that was the theory. In practice the videos (mostly) fell short and took far too much time to make for the result. You can watch the playlist here, though the only ones I’m really happy with are Kerbal Space Program (though with an intro that is far too long) and Dishonoured.

I still like this little bit from the latter, though.

That’s it for major projects over the past few years – or at least, all I want to write about them. Part 2 and Part 3 will be on their way when I have time.


VRscosity Dev Blog #5: Fail faster

In design of all forms, there is the beautiful mantra of “fail faster.” The idea is essentially that you will never get an idea right on the first try – ultimately, boiling something down to a high-level idea, trying it, and seeing what works and what doesn’t is the best way to learn.

And you should do this. Quickly.

Extra Credits (you really should follow these guys if you’re interested in game development) covers this very well, better than I can by far.

Tweaks and Balances

I’ve not written a dev blog in a long time simply because there hasn’t been much to comment on. After my last post about how I’m using raycasts to cull the number of chunks being rendered, my progress has been little more than tweaking. Specifically:

  • Moving the mesh generation about such that each chunk handles its own mesh, so updating individual chunks with the simulation will be easier.
  • Adjusting material use such that I can use texture atlas based systems to avoid too many draw calls.
  • Building a really annoying set of lookup tables to ensure the simulation works correctly without too much mathematics involved.
  • General refactoring and tidy up, including arranging the code into a decent set of namespaces.
  • Adding saving and loading functionality for when I eventually get to this point.
  • And of course, further testing in the Vive to ensure everything looks how it should.

But most importantly, I’ve been failing.


Slow Loads

Here is an example for you.

So the largest issue with my current culling system is that it relies on the automated Mesh Collider generation Unity offers. For situations wherein a model must have mesh-perfect collisions, it is a fast and effective way to generate a collider for that very purpose.

But fast is relative. Compared to having to manually calculate, model or generate a collision surface, it is very fast. But logically? Generating a Mesh Collider is slow.

So very slow.

Generating colliders for 16³ chunks. This is miserable. Very miserable.

The result of this is that during stress tests, generating a 128³ Voxel grid in this way takes roughly 40 seconds. Generating a 256³ Voxel grid takes 400 seconds. This is, genuinely, the biggest limiter to the fidelity of the simulation – the load time of the scene.

The worst part of it is that these operations are not thread-safe – none of Unity’s main engine features are – so the mesh for each of the 512 chunks in the 128³ grid has to be calculated one by one. All I can do to mitigate this is reduce the size of the chunks from 16³ to 8³, which achieves no gain in load time, but does prevent the scene from dropping to 15fps (each generated mesh is smaller, so it takes less time in a frame) in return for four times as many batches drawn.
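For a sense of scale, the chunk counts behind those 40- and 400-second figures are easy to work out (a quick sketch in Python; the timings themselves are measurements from the stress tests, not calculations):

```python
# Number of cubic chunks needed to cover a cubic voxel grid,
# assuming the grid divides evenly into chunks.
def chunk_count(grid_size, chunk_size=16):
    per_axis = grid_size // chunk_size
    return per_axis ** 3

print(chunk_count(128))  # 512 chunks in the 128^3 grid
print(chunk_count(256))  # 4096 chunks in the 256^3 grid
```

Eight times the chunks for roughly ten times the load time – the per-chunk collider generation dominates everything else in the scene load.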

So of course I am looking for a solution, preferably one that doesn’t require physics. Unfortunately, due to how Unity’s ray casting works, no physics means no raycasts… which is a problem.


The Fast Fail, The Pride Cube

Ultimately what we’re looking for in the culling algorithm is what the camera sees and nothing else. If you boil this down further, you’re talking about exactly what is drawn on screen when you look in a given direction. So, essentially, pixels and their colours.

Enter what I have dubbed The Pride Cube.

3D Representation of 16-bit RGB space.
3D Representation of 16-bit RGB space.

The idea is simple: Assign each chunk its own RGB value as an ID in a sub-shader, create a secondary render target at a lower resolution and output the result you see above to that render target. Iterate through the pixels and, if a colour is seen, set the corresponding chunk as visible to the main render target. Good idea in theory…
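As a toy model of that pixel pass, here is how the ID packing and the buffer scan could look outside Unity (all names are mine, not the actual shader code; the real version reads back the low-resolution render target):

```python
# Pack a chunk index into 24 bits of colour, 8 bits per channel.
# IDs start at 1 in this sketch so that black stays reserved for "no chunk".
def chunk_id_to_rgb(chunk_id):
    return ((chunk_id >> 16) & 0xFF, (chunk_id >> 8) & 0xFF, chunk_id & 0xFF)

def rgb_to_chunk_id(rgb):
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

def visible_chunks(pixel_buffer):
    # pixel_buffer: iterable of (r, g, b) tuples from the low-res target.
    # Any non-black pixel marks its chunk as visible.
    return {rgb_to_chunk_id(p) for p in pixel_buffer if p != (0, 0, 0)}

# A fake 2x2 buffer in which chunks 5 and 70000 are on screen:
buffer = [(0, 0, 0), chunk_id_to_rgb(5), chunk_id_to_rgb(70000), chunk_id_to_rgb(5)]
print(sorted(visible_chunks(buffer)))  # [5, 70000]
```

Note that 24 bits is far more ID space than any sane chunk count needs, which is what makes the single low-resolution pass attractive in the first place.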

…but in practice, not so simple. Multiple Render Targets have been supported in Unity for a while, but the supplied documentation is obtuse and borderline unusable. On top of this, even if you get the system working, you have to rely on black magic for it to work properly – not reliable for VR applications, yet at least (though Unity does support up to 8 render targets on hardware that supports them).

And once you get past that problem, which is certainly doable, you have the issue of meshes. You can hide a mesh from view of one camera source, but it will still be stored in the buffer for the secondary render target, nullifying the point of culling in the first place. It’s a tricky one for dynamic meshes.

This failure took me roughly 5 hours to prototype, test and scrap. It sounds like a lot of time – more than half a 9-hour work day – but compared to the overall time scale of the project this was a fast – and needed – fail. Now I know this system won’t work, and I must consider other options.

But first I will be optimising the mesh generation to lower the number of polygons the physics collider has to handle where possible, and making sure the simulation actually works.

VRscosity Dev Blog #4: Lasers of culling

After a data structure is decided upon for a Voxel system, you run into the problems of rendering. As mentioned in a previous blog post exploring a screw-up of mine, the crux of my solution is this:

“In brief, the final render system works similar to how other such systems do. A chunk is generated by building meshes by vertices and then once the desired size has been reached, a new chunk is started, repeat. Any Voxels that don’t touch an empty space aren’t rendered, essentially culling a lot of the work that the extra meshes would cause the graphics card. Chunking in itself also lowers the amount of draw calls made, resulting in (generally) better performance all around.”
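The “Voxels that don’t touch an empty space aren’t rendered” rule from that quote boils down to a neighbour check, which can be sketched independently of Unity (a toy model; the coordinates and helper names are invented for illustration):

```python
# A face of a voxel is worth rendering only when the neighbouring cell
# on that side is empty (or out of bounds).
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def exposed_faces(solid):
    # solid: set of (x, y, z) coordinates of filled voxels.
    count = 0
    for (x, y, z) in solid:
        for (dx, dy, dz) in NEIGHBOURS:
            if (x + dx, y + dy, z + dz) not in solid:
                count += 1
    return count

print(exposed_faces({(0, 0, 0)}))             # 6: a lone cube shows every face
print(exposed_faces({(0, 0, 0), (1, 0, 0)}))  # 10: the shared face pair is culled
```

Only the exposed faces get vertices in the chunk mesh, which is where the bulk of the per-chunk savings come from.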

Generating the mesh is only one portion of the overall challenge. The second is effectively optimising what is displayed so that the system runs as fast as possible – why bother displaying chunks that can’t be seen by the user? The answer is simple: culling.


Frustum Culling

The fewer meshes the graphics card has to draw, the fewer overall draw calls made, the fewer vertices displayed and the faster everything runs. Unity’s rendering engine has the wonderful feature of camera frustum culling – if it’s not within the camera’s field of view, it’s unloaded from the graphics pipeline. In most situations, you can combine this culling with Unity’s own baked occlusion mapping to save the draw calls made creating geometry the player cannot see from their position (usually because said geometry is behind other geometry). Great! But…


Unity is Not Enough!

The issue is that we’re dealing with procedural geometry that is generated upon the loading of the program. Unity’s occlusion system is solid, but as hinted before, is baked: It relies on the meshes being there already so that an occlusion map can be built and stored in the editor. When things are dynamically generated, this is impossible with the provided tools. So a workaround must be constructed.


Chunks to the Rescue

By rendering in chunks, you not only manage to avoid generating too large a mesh, but you also allow for culling of entire areas of the world that wouldn’t be visible to the camera. The issue is how to cull effectively and with minimal overhead.



The easiest solution by far is to draw lines between the camera and the chunks, removing any chunks that cannot see the camera, or vice versa. After various attempts, the latter became the smarter option – hence Camera.ViewportPointToRay(), a wonderful function that allows you to draw a virtual line out from the camera from a given point in the viewport (best described as the camera’s field of view).

The concept is that any chunk a camera ray hits will be rendered; all others will be hidden. Rays won’t travel through chunks due to the reliance on physics colliders, and chunk mesh shape is unimportant, assuming the automatically generated collision mesh doesn’t fill any wanted holes. In a perfect world, the entire viewport would fire out rays every frame and cull anything invisible.
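Stripped of Unity’s physics, the visibility test amounts to walking a ray through the chunk grid until it crosses an occupied chunk. A crude sampling sketch of that idea (the real prototype leans on Camera.ViewportPointToRay() and collider raycasts; the helper name and step size here are my own):

```python
def first_chunk_hit(origin, direction, occupied, chunk_size=16,
                    max_dist=1000.0, step=0.5):
    # March along the ray in small steps and return the first occupied
    # chunk coordinate crossed, or None. Crude sampling rather than a
    # proper grid traversal, but enough to show the visibility test.
    x, y, z = origin
    dx, dy, dz = direction
    length = (dx * dx + dy * dy + dz * dz) ** 0.5
    dx, dy, dz = dx / length, dy / length, dz / length
    t = 0.0
    while t < max_dist:
        chunk = (int(x // chunk_size), int(y // chunk_size), int(z // chunk_size))
        if chunk in occupied:
            return chunk
        x, y, z = x + dx * step, y + dy * step, z + dz * step
        t += step
    return None

# A chunk at grid coordinate (2, 0, 0) occludes the one at (5, 0, 0):
occupied = {(2, 0, 0), (5, 0, 0)}
print(first_chunk_hit((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), occupied))  # (2, 0, 0)
```

Stopping at the first hit is exactly why rays “won’t travel through chunks” – anything behind the hit chunk never gets marked visible.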

This is not a perfect world. The overhead of tracing rays (many thousands per frame) and handling collisions in such a scenario is immense. It’s untenable in any situation, let alone a real-time application aiming for 90 frames a second!

Performance drops every 0.2s with that method, down to 15fps!


In order to avoid this overhead, I took the approach of adding a “gaze” timer to each chunk and scanning across the viewport in vertical scanlines. Whenever a chunk is hit by a raycast, it is set as visible and starts counting down. If the chunk is not hit by another raycast within the time limit (currently 5 seconds in the prototype), it disappears; if it is, the timer resets. Scan from both sides in both directions and you end up with a laser butterfly of cheap culling:

By handling only a couple of hundred points rather than several thousand, the overhead is vastly reduced. Even with two cameras, as is standard for VR, we’re talking huge performance gains:

Scanline approach is much faster – peaks of 4ms (250fps!)
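Stripped of the Unity specifics, the gaze-timer bookkeeping is simple enough to sketch. The following Python is illustrative only – the class and names are mine, not the project’s code – though the 5-second timeout matches the prototype:

```python
GAZE_TIMEOUT = 5.0  # seconds a chunk stays visible after its last ray hit


class ChunkGaze:
    """Per-chunk 'gaze' timers: a ray hit marks a chunk visible, and the
    chunk hides again once no ray has touched it for GAZE_TIMEOUT."""

    def __init__(self):
        self.last_hit = {}  # chunk id -> time of the most recent ray hit

    def ray_hit(self, chunk_id, now):
        # Called for whichever chunk the current scanline ray strikes.
        self.last_hit[chunk_id] = now

    def visible(self, chunk_id, now):
        # Visible only if some ray touched the chunk within the timeout.
        hit = self.last_hit.get(chunk_id)
        return hit is not None and (now - hit) < GAZE_TIMEOUT


gaze = ChunkGaze()
gaze.ray_hit("chunk_3_1_4", now=10.0)
print(gaze.visible("chunk_3_1_4", now=14.0))  # True - hit 4s ago
print(gaze.visible("chunk_3_1_4", now=15.5))  # False - timer expired
```

Each scanline pass only has to call `ray_hit` for the chunks it strikes; everything else fades out on its own as the timers lapse.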

Obviously the method will require further tweaking – cameras hooked to a VR headset aren’t exactly a stable scenario versus a still camera being dragged around. But it’s a start, and one that can be built upon.

Next step: Nailing down the simulation!


VRscosity Dev Blog #3: Mind your loops

In an upcoming DevBlog I’m going to be talking at length about the steps taken to generate the graphical side of VRscosity. It’s relatively boilerplate stuff – voxel engines have effectively been solved for a long time, so it’s simply a case of implementing something I’ve done a couple of times before.

…Which is why a particular prototyping bug I had drove me up the wall.


The Symptoms

In brief, the final render system works much like other such systems. A chunk is generated by building meshes vertex by vertex; once the desired size has been reached, a new chunk is started, and so on. Any voxels that don’t touch an empty space aren’t rendered, culling a lot of the work the extra meshes would cause the graphics card. Chunking itself also lowers the number of draw calls made, resulting in (generally) better performance all around.

The issue I was having was one of bloat. As each chunk was being generated, the number of vertices required was exploding. By the time the final chunks were being drawn, they were trying to add more vertices to the mesh than Unity’s hard 16-bit limit of 65,535 allows, and failing. Some chunks were reporting vertex collections close to 100,000 strong, which shouldn’t be anywhere near the case in a system where each side is no more than 1024 voxels!

Holes in the mesh? This isn’t right.

The Source

I threw myself at this problem for more hours than I’d honestly like to admit, trying to work out a solution. Ultimately, after relying on console printouts to show me anything at all, I realised it was my own damn stupidity.

One of the issues with building 3D voxel environments is that, somewhere, you essentially have to rely on six nested loops: three for the chunk coordinates (X, Y, Z) and three for the voxel coordinates within each chunk. The mechanism is basically this:

//Setup chunkX/Y/Z here, and the expected chunkCount, then:
while (chunkY < chunkCount) {
    while (chunkX < chunkCount) {
        while (chunkZ < chunkCount) {
            //Setup Voxel vertex factory, then do this:

            int y = Math.Max(chunkY * chunkEdgeSize, 0);
            int x = Math.Max(chunkX * chunkEdgeSize, 0);
            int z = Math.Max(chunkZ * chunkEdgeSize, 0);

            while (y < (chunkY * chunkEdgeSize) + chunkEdgeSize) {
                while (x < (chunkX * chunkEdgeSize) + chunkEdgeSize) {
                    while (z < (chunkZ * chunkEdgeSize) + chunkEdgeSize) {
                        //Generate vertex here
                        z++;
                    }
                    z = 0; //Oops! - Read below
                    x++;
                }
                x = 0; //Oops! - Read below
                y++;
            }
            y = 0; //Oops! - Read below

            //Update chunk mesh and apply material
            chunkZ++;
        }
        chunkZ = 0;
        chunkX++;
    }
    chunkX = 0;
    chunkY++;
}
This code is a mess, but it works and as a prototype that was the goal. It’s simple enough, but for some reason was causing overdraw and I had no idea why – especially when the internal mechanism for drawing the vertices was working perfectly.

It may be lack of sleep, but eventually it clicked, and I’m a moron.


The Cure

int y = Math.Max(chunkY * chunkEdgeSize, 0);
int x = Math.Max(chunkX * chunkEdgeSize, 0);
int z = Math.Max(chunkZ * chunkEdgeSize, 0);

So this section is designed to ensure that each chunk only starts drawing vertices at that chunk’s own location. It runs at the start of each new chunk so that, assuming chunks are 8 voxels across, the first vertex considered for the second chunk would belong to the ninth voxel.

But in a moment of stupidity that took me far too long to notice, I was zeroing the variables at the end of every loop instead of re-running the initialisation above! The earlier chunks would “work” fine, but every pass would draw voxels from the very first one rather than from the chunk’s own location. This caused both overdraw (vertices placed in the same location multiple times) and, at larger world sizes, massive vertex counts.
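The blow-up is easy to reproduce outside Unity. Here’s a Python sketch – my own reconstruction with made-up sizes, not the project code – that walks the same six nested loops twice, once restarting each axis at its chunk’s first voxel and once zeroing it as the buggy version did, and counts the voxels visited:

```python
EDGE = 8     # voxels per chunk side (illustrative)
CHUNKS = 4   # chunks per world side (illustrative)


def voxels_correct():
    # Correct: restart each axis at the current chunk's first voxel.
    total = 0
    for cy in range(CHUNKS):
        for cx in range(CHUNKS):
            for cz in range(CHUNKS):
                for y in range(cy * EDGE, (cy + 1) * EDGE):
                    for x in range(cx * EDGE, (cx + 1) * EDGE):
                        for z in range(cz * EDGE, (cz + 1) * EDGE):
                            total += 1
    return total


def voxels_buggy():
    # Buggy: zero each axis after its loop, so every later pass
    # re-walks the world from voxel 0 instead of the chunk start.
    total = 0
    for cy in range(CHUNKS):
        for cx in range(CHUNKS):
            for cz in range(CHUNKS):
                y, x, z = cy * EDGE, cx * EDGE, cz * EDGE
                while y < (cy + 1) * EDGE:
                    while x < (cx + 1) * EDGE:
                        while z < (cz + 1) * EDGE:
                            total += 1
                            z += 1
                        z = 0  # Oops!
                        x += 1
                    x = 0  # Oops!
                    y += 1
                y = 0  # Oops!
    return total


print(voxels_correct())  # 32768 = (4 * 8)**3, one visit per voxel
print(voxels_buggy())    # far larger - duplicate vertices everywhere
```

The correct version touches each voxel exactly once; the buggy one revisits earlier voxels on every pass, which is exactly the overdraw and vertex bloat described above.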

Basically the moral of the story is twofold: first, if you have a problem, come back the next day and work on something else in the meantime. And secondly, check your damn loop conditions.


VRscosity Dev Blog #2: 500 million bees

There is an old saying that a single bee sting hurts, but a thousand kills. If you ignore the reality of this idea (there are plenty of other factors that affect your reaction to bee stings) the premise is that enough of any small thing can eventually overwhelm.

In software, the developer is constantly wrestling with various factors; in game development, chief among them is performance. Memory and CPU usage have to be kept as low as possible for any given function, and writing cheap, efficient code is paramount.

However, some things are simply unavoidable.


The Anatomy of a Voxel

There are many ways to build a voxel system. Minecraft and many of its ilk rely on chunk-based systems and trees to build their dynamic worlds: as the player loads in or expands the world, new areas are generated in blocks by an algorithm. I mentioned before how VRscosity is not Minecraft, but the latter is a well-optimised example of a voxel system in action.

In Minecraft’s specific case, each chunk is 16×16×256 blocks – 65,536 of them, or exactly 16 bits’ worth of block addresses. Each chunk is itself separated into 16 render zones and updated based on this separation. Hard numbers for how much memory a single block in Minecraft requires are hard to come by, but it’s somewhere between 1.5 and 3 bytes per block pre-compression.

So Minecraft’s maximum default loading configuration (which holds 25×25 chunks in memory, storing the rest on disk) comes to roughly 120MB of RAM at 3 bytes per block. In practice it’s more, thanks to overhead from other concerns such as object referencing (8 bytes per reference on 64-bit systems) and entity loading, but as a pure system it’s reasonably compact.


The Issues of Exponential Growth

Unfortunately, any system that deals with squared numbers won’t stay small for long, and any addition to the world size dramatically increases the amount of memory used.

As with the given example above, just dealing with block usage of 3 Bytes each (so no referencing overhead or compression), loading a 33×33 section of chunks in Minecraft takes 204MB of RAM, but loading a 65×65 section of chunks (the maximum Minecraft allows) takes 792MB!
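These figures can be sanity-checked with a few lines of arithmetic – 3 bytes per block, 16×16×256 blocks per chunk, and no compression or reference overhead:

```python
CHUNK_BLOCKS = 16 * 16 * 256  # 65,536 blocks per Minecraft chunk
BYTES_PER_BLOCK = 3           # the high-end pre-compression estimate


def loaded_mib(side_chunks):
    """Raw block storage for a side x side grid of loaded chunks, in MiB."""
    return side_chunks ** 2 * CHUNK_BLOCKS * BYTES_PER_BLOCK / 2 ** 20


print(round(loaded_mib(25)))  # ~117 MiB - the "roughly 120MB" default
print(round(loaded_mib(33)))  # ~204 MiB
print(round(loaded_mib(65)))  # ~792 MiB at the maximum
```

Each step up in view distance squares the chunk count, which is why the jump from 33×33 to 65×65 nearly quadruples the memory.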

Suddenly a single bee (block) becomes a deadly swarm of memory usage.


VRscosity Voxels

Taking the above problem to heart: the fact that I’m planning the maximum sandbox size in VRscosity to be 1024²×512 (1024×1024×512) shows the problems involved in handling so many objects. Like Minecraft’s top estimate, each voxel in VRscosity currently takes 3 bytes. On paper, a 512³ box of voxels should take around 384MB of RAM, whereas a 1024²×512 box takes 1536MB!


Memory Issues

It is a large jump, but not actually the biggest problem to consider. Without any measures to mitigate loading, there is the issue that a full box of voxels is a collection of just shy of 537 million individual objects. Storing and accessing these objects is a complicated dance to learn!

The first large problem is referencing. In the worst case – a 64-bit system – any given memory reference requires 8 bytes of memory to hold. If I were to hold a reference to each Voxel instance, I’d be adding more than double each Voxel’s size to the memory cost calculation. That jumps the RAM usage for the given maximum size from 1536MB straight to 5632MB – a full 4GB used purely for references! Obviously, this isn’t tenable.
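The same back-of-the-envelope arithmetic produces the VRscosity numbers above:

```python
VOXEL_BYTES = 3  # current size of a VRscosity voxel
REF_BYTES = 8    # one 64-bit reference per voxel, worst case


def box_mib(x, y, z, per_voxel):
    """Memory for an x * y * z box of voxels, in MiB."""
    return x * y * z * per_voxel / 2 ** 20


print(box_mib(512, 512, 512, VOXEL_BYTES))                # 384.0 MiB
print(box_mib(1024, 1024, 512, VOXEL_BYTES))              # 1536.0 MiB raw
print(box_mib(1024, 1024, 512, VOXEL_BYTES + REF_BYTES))  # 5632.0 MiB referenced
```

At this scale, every extra byte per voxel costs half a gigabyte, so the per-voxel reference is by far the biggest single line item.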

To really show the kind of scale I’m dealing with here, moving the Voxel from 4 bytes to 3 saved over half a gig of memory on its own…

One bee is fine. Millions are not!


Early Bird Solution

The solution I came up with was to scrap any pretence of building a Voxel as a class. While there are advantages to holding a reference pointer to a class – if only for mutability if nothing else – when making so many repeated class references the penalty is too high.

Thankfully, C# supports Structs.

Structs, for those who are unaware, are essentially plain data structures – collections of variables with, ideally, no functionality beyond access. They carry no per-instance overhead, so their footprint is only as large as their contents (a 3-byte Voxel is a 3-byte struct, but an 11-byte referenced class), and creation (as well as access) is very fast.

The disadvantage is that structs are value types, which makes them awkward to mutate. The property that gives them their speed (being stored inline rather than as references on the heap) means the struct you pull out of a collection is a copy, and modifying it in place requires tricks that throw away that advantage in the first place. Any change I wish to make means creating a new Voxel and overwriting the old one. Thankfully, this in itself is incredibly quick.
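The trade-off carries over to other languages, too. As a rough analogy (illustrative only – C# structs behave differently from anything in Python, but the principle of packing plain data instead of holding one heap object per voxel is the same), compare an object per voxel against a flat byte buffer:

```python
import sys


class Voxel:
    """One heap object per voxel - the approach being scrapped."""
    __slots__ = ("material", "state", "density")

    def __init__(self):
        self.material = 0
        self.state = 0
        self.density = 0


N = 100_000  # a tiny world by VRscosity standards

as_objects = [Voxel() for _ in range(N)]  # list of references to heap objects
as_flat = bytearray(N * 3)                # 3 bytes per voxel, no objects at all

object_cost = sys.getsizeof(as_objects) + sum(sys.getsizeof(v) for v in as_objects)
flat_cost = sys.getsizeof(as_flat)

print(object_cost // flat_cost)  # the object model costs many times more
```

Even with `__slots__` trimming the objects down, the per-voxel headers and references dwarf the 3 bytes of actual data – the same effect the struct array avoids in C#.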

So for now, the model is literally a 3D Array of Structs. It’s not pretty, and can be further refined (there is no need to have an “air” object really, as functionally air is empty space). It is, however, the best interim solution while I hammer out other problems. With compression, it may not even be that bad at the end of the day.

That is a future blog though!


VRscosity Dev Blog #1: Not Minecraft

Many years ago, when the internet was only just maturing into what we know of today, I would spend many hours fluttering between various free games as they came up. Be it Runescape, some cheap thing from Kongregate or Albino Black Sheep, as was typical of youth I found ways to waste my time for as little money as possible.

One of the constant little toys that I’d continuously return to was that of the Sandbox variety. Long before the age of Minecraft, there were plenty of flash and java games available that on a quick glance were little more than embedded versions of MS Paint. However, unlike said drawing program, these toys contained various different materials that could be placed, which would interact with each other.

More advanced versions included acidity, electricity, temperature changes and air pressure, most notably Dan-Ball’s Powder Game, which long set the standard for these virtual 2D sandboxes. It was eventually surpassed by The Powder Toy, which as a desktop client allowed simulation deep enough to set up microcomputers and sensor networks.

So I was wondering: what if I took these principles and applied them to voxels?


Introducing: VRscosity!

A voxel based sandbox featuring fluid dynamics.

For the HTC Vive.

Because I don’t hate myself enough already.

VRscosity is to be exactly as tagged. Taking inspiration from the aforementioned sandbox games, I am building a voxel-based toy that lets the player pile, shape and sculpt various materials as they see fit, including liquids, solids and gases.

The idea is that there will be some rudimentary fluid dynamics simulation within the sandbox. The goal of this is not to be realistic, but rather give some fundamental considerations to deal with – specifically, pressure, weight, mass and viscosity. A wall built with sand won’t hold back as much water as a wall built with stone, for example, before the pressure pushes it over.

Why am I building this for the Vive (and potentially the Rift)? Because I feel this kind of toy works best with as direct a set of controls as possible. As evidenced by Google Tilt Brush and Quill, tracked 3D controllers allow much finer control in 3D space than you would usually achieve with a mouse and keyboard interface. For the experience to be as fluid as possible, it makes sense to lean on the control and interactivity that VR systems provide.


Not Minecraft. Honest.

The first thing that anyone these days leaps to when someone suggests they are making anything involving Voxels is “yet another Minecraft ripoff.”

It’s understandable, really. After the runaway success of the game – a success that eventually netted its creator a cool $2.5 billion – everyone and their dog jumped on the bandwagon. The success of the original DayZ mod for ARMA 2 equally pushed the survival aspect into the spotlight. Thus, Steam is awash with creative survival titles hoping to hit the delicious big bucks that Minecraft pulled in (spoiler: they won’t).

I’m not going this route.


What now?

There will be plenty of development blogs on this project as it moves forward, with the intent of releasing it on Steam for a cheap price (thinking $5 or less) when viable.