In the past, every Voxel Farm scene could be described by this diagram:
This is a 2D representation of how the entire scene is segmented into multiple chunks. Each chunk may cover a different area (or volume in 3D), but every chunk contains roughly the same amount of information, regardless of its size.
Thanks to this trick, we can use bigger chunks to cover more distant sections of the scene. Since a distant chunk appears smaller on screen, we can get away with a much lower information density for it.
Until recently, the only criterion we used to decide chunk sizes was how distant the chunk was from the viewer, which is the red dot in the image. For some voxel content types, like terrain, we could afford to increase the chunk size quickly with distance from the viewer. The resulting lower-density terrain would still look alright.
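A minimal sketch of this distance-based rule (the function names and the base chunk size are illustrative assumptions, not the engine's actual API). Each LOD level doubles the chunk size, so information density halves per axis as the viewer distance grows:

```python
import math

def lod_for_distance(distance, base_chunk_size=32.0):
    """Pick an LOD level from viewer distance alone: level 0 near the
    viewer, one level up each time the distance doubles past the base
    chunk size. (Illustrative sketch, not the engine's real policy.)"""
    return max(0, int(math.log2(max(distance, base_chunk_size) / base_chunk_size)))

def chunk_size_for_distance(distance, base_chunk_size=32.0):
    """Chunk edge length doubles with each LOD level."""
    return base_chunk_size * (2 ** lod_for_distance(distance, base_chunk_size))
```

With this rule a chunk 500 units away would be at level 3, covering eight times the extent of a near chunk while holding the same voxel count.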
Some other types of content, however, required more detail. Imagine there is a tower a few kilometers away from the viewer:
The resolution assigned to the chunk containing this tower is simply too low to capture the detail we want for the tower. We could increase the resolution of all chunks equally, and this would bring the tower into greater detail, but it would be very expensive: many chunks containing just terrain would have their density bumped up as well.
The solution is simple. Imagine that while we build this world, we can compute an error metric for each larger chunk based on the eight (or four in 2D) children chunks it contains. For terrain-only chunks this error would always be zero. For the chunks containing the artist-built tower, this error could be just a counter of how many voxels in the larger chunk failed to capture the detail in the voxels from the smaller children chunks.
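The error metric described above can be sketched in 2D as follows (a pure-Python illustration under assumed names; the engine's actual implementation is not shown in the post). We downsample the high-resolution child data to the parent's resolution, then count the child voxels the parent fails to reproduce:

```python
def downsample2x(grid):
    """2x downsample of a 2D solid/empty grid by majority vote per 2x2 block."""
    h, w = len(grid), len(grid[0])
    return [[1 if (grid[2*i][2*j] + grid[2*i][2*j+1] +
                   grid[2*i+1][2*j] + grid[2*i+1][2*j+1]) >= 2 else 0
             for j in range(w // 2)] for i in range(h // 2)]

def chunk_error(children, downsample=downsample2x):
    """Error metric for a parent chunk: the number of child-resolution
    voxels that disagree with the parent's lower-resolution version."""
    parent = downsample(children)
    mismatches = 0
    for i, row in enumerate(children):
        for j, voxel in enumerate(row):
            if parent[i // 2][j // 2] != voxel:
                mismatches += 1
    return mismatches
```

A flat terrain slab survives downsampling exactly, so its error is zero, while a one-voxel-thin tower wall smears into neighboring cells and produces a positive error.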
Starting from the distance-based configuration, we can do another round of chunk refinement. Each chunk whose error we consider too high is subdivided. We keep doing this until all errors are below the allowed threshold and the overall scene complexity remains within bounds.
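The refinement pass could look something like this (a hedged sketch; the chunk representation, the `error_of` and `subdivide` callbacks, and the budget policy are all hypothetical stand-ins for whatever the engine actually uses):

```python
def refine(chunks, error_of, subdivide, max_error, max_chunks):
    """Second refinement pass over a distance-based chunk layout:
    keep subdividing any chunk whose error exceeds the threshold,
    as long as the total chunk count stays within the scene budget."""
    done, queue = [], list(chunks)
    while queue:
        chunk = queue.pop()
        if error_of(chunk) > max_error and len(done) + len(queue) + 4 <= max_chunks:
            queue.extend(subdivide(chunk))  # 4 children in 2D, 8 in 3D
        else:
            done.append(chunk)
    return done
```

Terrain-only chunks report zero error and are never split, so all the extra density goes exactly where the artist-built content needs it.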
This gives us a new scene setup:
As a result of the additional subdivision, we now use higher-density chunks to represent the tower. Since we know how distant these are, we could even pick a chunk size that shows no degradation at all, as all errors become sub-pixel on screen.
The following video shows more about this technique:
In the last part of the video, you can see how the terrain LOD changes while the tower remains crisp all the time. If you would also like to minimize terrain LOD changes, this technique can give you that as well:
Just like we can focus the scene more on fine architectural details, we can "unfocus" sections we know will be fine with lower information density.
There is still one issue this technique does not address. When we bump up the level of detail of a distant castle, this may also bring in a lot of information we do not necessarily want, like the walls and spaces inside the castle.
We found a very elegant way to deal with this. This is what enabled the very complex Landmark builds from the earlier post to display in high detail and still run at interactive rates. How we did it will be the topic of the final post on the LOD series.
This might be irrelevant to this post, but I want your (valuable) opinion on this: http://www.gamasutra.com/blogs/LeonardRitter/20150423/241777/Towards_Realtime_Deformable_Worlds_Why_Tetrahedra_Rule_Voxels_Drool.php
I agree with his findings. It is pretty much what we see with our tech, which is a modified dual contouring over a regular-grid/octree hybrid.
It is not clear how tetrahedra improve over voxels; this is not something the article touches.
Well, the last bullet-point list in the article is all about the benefits of tetrahedra over voxels, saying things like 'Raycasting is a simple neighborhood walk algorithm.' But I just wanted your opinion, because in that article it all sounds too good on paper for something mostly unproven; its actual effectiveness is up in the air until he or someone else makes an actual implementation.
The only thing close to Voxel Farm I have seen an actual implementation of is Media Molecule's Dreams, and I'd say it mostly has to do with their crazy renderer.
www.mediamolecule.com/blog/article/siggraph_2015
https://www.youtube.com/watch?v=u9KNtnCZDMI
Their VR editing tools videos also look interesting.
We are taking a serious look at tetrahedra at the moment. It seems there are many unanswered questions; it takes me back to a time when we were missing similar answers about voxels. It is not clear if this will be able to perform as well as voxels while merging multiple layers of content in real time, how you would do progressive LOD, etc. It is really hard to say without actual measurements.
The thing I found the most surprising about that article is that it is the same principle behind the FEM (Finite Element Meshes) used in the Star Wars: The Force Unleashed games, which to this day still feature some of the most impressive deformation and destruction physics in games. It is by the makers of the famous DMM plugin: http://www.pixelux.com/
I followed them a lot back in the day, but lately they seem focused on non-realtime content like Hollywood VFX and the baking of complex animation sequences for games, like the lovely ones featured in Quantum Break.
FEM seems to be highly performant for destruction, but I don't know how it would fare for entire deformable worlds.