While rendering itself is largely a solved problem, finding out what is potentially visible remains difficult. There is a long history of hackery in this area: BSP trees, PVS, portals, etc. (The acronyms make it sound simpler than it is.) These approaches perform well in some cases only to fail miserably in others. What works indoors breaks in large open spaces. To make it worse, these visibility structures take a long time to build. For an application where the content constantly changes, they are a very poor choice or not practical at all.
Occlusion testing, on the other hand, is a dynamic approach to visibility. The idea is simple: using a simplified model of the scene, we can predict when some portions of the scene become eclipsed by other parts of it.
The challenge is how to do it very quickly. If the test is not fast enough, it could still be faster to render everything than to test and then render only the visible parts. It is necessary to find simplified models of the scene geometry. Naturally, these simple, approximated models must cover as much of the original content as possible.
Voxels and clipmap scenes make it very easy to perform occlusion tests. I wrote about this before: Covering the Sun with a finger.
We just finished a new improved version of this system, and we were ecstatic to see how good the occluder coverage turned out to be. In this post I will show how it can be done.
Before anything else, here is a video of the new occluder system in action:
A Voxel Farm scene is broken down into chunks. For each chunk, the system computes several quads (four-vertex polygons) that are fully inscribed in the solid section of the chunk and are as large as possible. A very simple example is shown here, where a horizontal platform produces a series of horizontal quads:
Here is how it works:
Each chunk is generated or loaded as a voxel buffer. You can imagine this as a 3D matrix, where each element is a voxel.
The voxel buffer is scanned along each main axis. The following images depict the process of scanning along one direction. Below is a representation of the 3D buffer as a slice. If this were a top-down view, you could imagine it as a vertical wall going at an angle:
For each direction, two 2D voxel buffers are computed. One stores where the ray enters the solid and the second where the ray exits the solid.
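To make the scanning step concrete, here is a minimal sketch in Python/NumPy. This is not the engine's actual code (which is C++); it assumes the chunk is a boolean 3D array where `True` marks a solid voxel, and the function and variable names are illustrative:

```python
import numpy as np

def entry_exit_buffers(voxels, axis=2):
    """For each ray cast along `axis` through a boolean voxel grid,
    record the depth where the ray first enters solid material and
    the depth where it last exits it. Rays that hit nothing get -1."""
    # Move the scan axis to the front so each ray is a column v[:, i, j].
    v = np.moveaxis(voxels, axis, 0)
    depth = v.shape[0]
    solid_any = v.any(axis=0)
    # First solid voxel along the ray (the entry face).
    entry = np.where(solid_any, v.argmax(axis=0), -1)
    # Last solid voxel along the ray (the exit face).
    exit_ = np.where(solid_any, depth - 1 - v[::-1].argmax(axis=0), -1)
    return entry, exit_
```

Running this once per axis yields the two 2D depth buffers described above for each of the three scan directions.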
For each 2D buffer, the maximum solid rectangle is computed. A candidate rectangle can grow if the neighboring point in the buffer is also solid and its depth value does not differ by more than a given threshold.
Each buffer can produce one quad, shown in blue and green in the following image:
Here is another example where a jump in depth (5 to 9) makes the green occluder much smaller:
In fact, if we run the function that finds the maximum rectangle on the second 2D buffer again, it will return another quad, this time covering the missing piece:
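The rectangle extraction can be sketched as follows. Note this is an illustrative brute-force version rather than the incremental growth used in the engine: it simply enumerates rectangles and keeps the largest one that is fully solid and whose depth spread stays within the threshold. On the small per-chunk buffers involved, that is enough to show the idea:

```python
import numpy as np

def largest_coherent_rect(depth, solid, threshold=1):
    """Return (row, col, height, width) of the largest rectangle of
    solid cells whose depth spread (max - min) is <= threshold, or
    None if the buffer has no solid cells. Brute force for clarity."""
    rows, cols = depth.shape
    best, best_area = None, 0
    for r in range(rows):
        for c in range(cols):
            for h in range(1, rows - r + 1):
                for w in range(1, cols - c + 1):
                    if not solid[r:r+h, c:c+w].all():
                        continue  # rectangle touches empty cells
                    d = depth[r:r+h, c:c+w]
                    if d.max() - d.min() > threshold:
                        continue  # depth jump too large for one quad
                    if h * w > best_area:
                        best_area, best = h * w, (r, c, h, w)
    return best
```

As described above, after a rectangle is extracted its cells can be cleared from the solid mask and the search run again, producing additional quads that cover the remaining pieces.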
Once we have the occluders for all chunks in a scene, we can test very quickly whether a given chunk in the scene is hidden behind other chunks. Our engine does this using a software rasterizer, which renders the occluder quads to a depth buffer. This buffer can be used to test all chunks in the scene. If a chunk's area on screen is covered in the depth buffer by a closer depth, it means the chunk is not visible.
This depth buffer can be very low resolution. We currently use a 64x64 buffer to make sure the software rasterization is fast. Here you can see what the buffer looks like:
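A heavily simplified sketch of the test, again in Python/NumPy for illustration: it assumes the occluder quads have already been projected and clipped into screen-space rectangles (the real rasterizer handles arbitrary projected quads), and it tests a chunk conservatively using its screen bounding rectangle and its nearest depth:

```python
import numpy as np

RES = 64  # low-resolution depth buffer, as in the post

def rasterize_occluders(rects):
    """Build a RES x RES depth buffer from occluder rectangles.
    Each rect is (x0, y0, x1, y1, depth) in pixel coordinates,
    with smaller depth meaning closer to the camera."""
    buf = np.full((RES, RES), np.inf)
    for x0, y0, x1, y1, d in rects:
        region = buf[y0:y1, x0:x1]
        np.minimum(region, d, out=region)  # keep the closest occluder
    return buf

def chunk_hidden(buf, rect):
    """A chunk is hidden when every pixel of its screen rectangle is
    covered by an occluder strictly closer than the chunk's nearest
    depth. Using the bounding rect and nearest depth keeps the test
    conservative: we may render a hidden chunk, never cull a visible one."""
    x0, y0, x1, y1, near_d = rect
    return bool((buf[y0:y1, x0:x1] < near_d).all())
```

A chunk that fails the test is simply rendered; only chunks fully behind the rasterized occluders are culled.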
While this can still be improved, I'm very happy with this system. It is probably the best optimization we have ever done.
This is really slick, I always wondered how this would/could/was done for voxel worlds.
Here is the funny part. For non-voxel worlds (like the ones in Witcher 3, Destiny, etc.), they run a voxelization stage in order to produce occluders. A state-of-the-art polygon-based occlusion system like Umbra will go from polys to voxels. This is a case where having your information stored as voxels makes a big difference. Occlusion culling is a holy grail for computer graphics, so this one is a big win for voxels as a storage and processing unit.
I've been wanting to hear more about how Voxel Farm deals with occlusion, so this is great! But something's been confusing me for a while: the final mesh will often partially intersect the space of a given voxel, so how does the occlusion system handle that? Do you just treat partially-full voxels as empty?
As always, great work! I didn't know how much fun rendering was until I started reading a year ago!
Very good question. In order to save time, we do not look at the voxel surface profile. The whole approach is conservative: the air voxels that neighbor a voxel with a surface profile take over.
This is a very fun aspect of the tech. It is probably the only chance you will ever get to write your own software triangle rasterizer.
Excellent stuff as always. Have there been any considerations for material properties and visibility?
Is there an occluder? OK, is it transparent? etc.
Yes, it is very simple. A material can be opaque to the occlusion test or not. It is a binary thing at the moment.
Do you only process internal voxels, or move the quad vertices to a vertex on the voxel?
For software rasterization, I'm sure you're aware of Intel's demo of this: https://software.intel.com/en-us/blogs/2013/09/06/software-occlusion-culling-update-2 and Fabian Giesen's optimization write-up: https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index/ but I thought I'd post it here for those interested.
The quad vertices go into the center of internal voxels. Thanks for posting the links, I had seen this a while ago, but completely forgot about it. I'll take a fresh look.
Very interesting article!
I stumbled upon this article while searching for a solution for approximating the largest quads along each axis of a box of discrete points. This is basically what you're doing, but I'm having trouble understanding how you extract the quads from the 2D buffers.
Maybe I'm missing something obvious here, but I'd really appreciate it if you could explain how you get the quads from the generated 2D buffers!
Thanks!