In traditional polygon-based worlds, a lot of the detail you see has been captured in textures. The same textures are reused many times over the same scene. This brings a lot of additional detail at little memory cost. Each texture needs to be stored only once.
Let's say you have a column that appears many times over in a scene. You would create a triangle mesh for the column with enough triangles to capture just the silhouette and then use textures to capture everything else. A process called UV-mapping links each triangle in the mesh to a section of the texture.
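As a rough illustration of what that data looks like (the structs below are generic, not from any specific engine), each vertex carries a 2D texture coordinate and each triangle simply indexes three vertices; the rasterizer interpolates the coordinates across the triangle and samples the texture:

```cpp
#include <cstdint>
#include <vector>

// Generic UV-mapped mesh layout, for illustration only.
struct Vertex
{
    float px, py, pz; // position in object space
    float u, v;       // texture coordinate: where this vertex lands on the texture
};

struct Triangle
{
    uint32_t i0, i1, i2; // indices into the vertex array
};

struct Mesh
{
    std::vector<Vertex>   vertices;  // stored once per vertex
    std::vector<Triangle> triangles; // each triangle covers a region of the texture
};
```

Since the texture is only referenced, not copied, every instance of the column in the scene can reuse it at no extra memory cost.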
This is how virtual worlds and games get to look their best. If we had to represent everything as geometry it would be too much information even for top-of-the-line systems. If given the choice, nobody would use UV-mapping, but there is really no choice.
With voxels you have the same problem. If you want to capture every detail using voxels, you would need too many of them. This may become possible in the near future, maybe a few hardware generations and architectures from now, but I would not bet on voxels becoming smaller than pixels anytime soon.
The beauty of voxels is they can encode anything you want. We saw it would be possible to keep voxel resolutions low and still bring a lot of detail into the scene if we encoded UV mapping into voxels, just like vertices do in traditional polygon systems.
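One way to picture the idea (a hypothetical layout, not necessarily how the engine stores it) is that a voxel touching the surface gains the same kind of texture coordinate a mesh vertex would carry:

```cpp
#include <cstdint>

// Hypothetical attributes for a voxel that touches the surface.
// The u, v pair plays the same role UV coordinates play on a mesh vertex:
// it tells the mesher where this part of the surface lands on the texture.
struct SurfaceVoxel
{
    uint16_t material; // which texture/material to sample
    uint16_t u, v;     // quantized texture coordinate (the new part)
};
```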
You can see some very early results in this video:
Luckily we need to store UV only for the voxels on the surface, so the data is manageable. Procedural objects could use UV as well. The rocks we instance over terrain could use detailed textures instead of triplanar mapping. Same for trees and even man-made elements like buildings and ruins. For procedural voxels the UV adds little overhead, since these voxels are generated on the fly and nothing has to be stored anyway.
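For contrast, triplanar mapping is the UV-free fallback: the texture is projected along the three axes and blended by the surface normal, so nothing needs to be stored per voxel, but the texture cannot be lined up with a specific feature of the model. A sketch of the blend weights (plain C++, shader details left out):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Triplanar blend weights from a unit surface normal: the more the surface
// faces an axis, the more that axis' planar projection contributes.
// No stored UVs are needed, which suits procedural voxels, at the cost of
// losing control over where the texture lands on the model.
Vec3 TriplanarWeights(const Vec3& n, float sharpness = 4.0f)
{
    float wx = std::pow(std::fabs(n.x), sharpness);
    float wy = std::pow(std::fabs(n.y), sharpness);
    float wz = std::pow(std::fabs(n.z), sharpness);
    float sum = wx + wy + wz;
    return Vec3{ wx / sum, wy / sum, wz / sum };
}
```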
Use of UV is optional. The engine is able to merge UV-mapped voxels with triplanar-mapped voxels on the fly. You can carve pieces out of these models, or merge them with procedural voxels and still have one single watertight mesh:
As you can see the leopard's legs do not go underground in the rendered mesh. Everything connects properly.
This is an earlier video that I think was never linked before:
Why go through all this trouble? UV-mapping took polygonal models to a whole new level of visual quality and performance. We are going through the same phase now.
The kind of encoding we have done for UV also opens the door to interesting new applications, like animation. If you think about it, animation is not that different from UV-mapping. Instead of mapping vertices to a texture we map vertices to bones, but it is pretty much the same. So, yes, that zebra could move one day.
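To make the analogy concrete (illustrative layouts only): where UV mapping attaches a texture coordinate to each vertex, skinning attaches bone indices and weights to each vertex, so the same per-voxel encoding trick could in principle carry either kind of attribute:

```cpp
#include <cstdint>

// UV mapping: each vertex points into a texture.
struct UVAttribute
{
    float u, v;
};

// Skinning: each vertex points into a skeleton instead,
// with up to four bone influences and their blend weights.
struct SkinAttribute
{
    uint8_t boneIndex[4];
    float   boneWeight[4]; // weights should sum to 1
};
```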
Question about the half-buried leopard: if you were to dig around it, would you find its legs again? Fully textured?
Yes, if you dig carefully enough to remove dirt voxels only, you would find the textured legs. Also if you changed the terrain generation rules and made the terrain a few meters lower. The legs are there; they are just not picked up by the surface creation because they do not contribute to the visible surface.
Delete"Yes, if you dig carefully enough to remove dirt voxels only, you would find the textured legs."
That's too cool!!
This software is going places =D.
Also: potential for archaeology sims!
Anyway, the animation stuff also seems really cool, is that what you guys will be pursuing next?
Archeology sims... fascinating. It would probably be enough to give these voxels different "hardness" and then a tool that removes only softer materials.
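A minimal sketch of that idea (hypothetical types and names, just to show the principle): the excavation brush compares each voxel's hardness against the tool's strength and clears only the softer material, leaving the buried find intact:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical voxel record with a per-material hardness value.
struct Voxel
{
    uint16_t material; // 0 = empty
    uint8_t  hardness; // dirt is soft, bone and stone are hard
};

// Archaeology brush: removes only voxels softer than the tool.
// 'brushVolume' holds the voxels covered by the brush.
void ExcavateSoft(std::vector<Voxel>& brushVolume, uint8_t toolStrength)
{
    for (Voxel& v : brushVolume)
    {
        if (v.material != 0 && v.hardness < toolStrength)
        {
            v.material = 0; // clear the dirt, keep the harder find
        }
    }
}
```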
As an archaeologist... well, it would certainly be more authentic than Lara Croft. :D
Miguel, assuming these voxel creatures (zebras etc.) became animated, and assuming we were attacking 100s of them in-game, wouldn't it become too demanding on servers to simulate the zebras falling apart (assuming there's physics involved and it isn't just visuals)?
Because wouldn't the server need to simulate the physics and then send it back to each individual client in the nearby area of the event?
Destroying animated things does not change much. If you had 100 static zebras and started blowing them to pieces it would be the same load for physics.
Been following this project for maybe 2 years now (or more, lost count). You never fail to surprise me :)
How do those models in particular handle LOD? From what I've seen, your terrain stays quite consistent even at the lowest level of detail.
However, I'm not so sure about the zebra, since its size is close to the minimum?
I've also been speculating a bit about whether tessellation could somehow be used as a way to procedurally generate surfaces for terrain:
A little bit like this, but more on the larger planes:
https://www.youtube.com/watch?v=bkKtY2G3FbU
If the tessellation relied on procedurally generated data (unique data for each plane/triangle) rather than man-made maps/textures, could it perhaps generate a higher level of detail?
My first thought would be Bézier triangles for the shape, then maybe adding some randomness from the terrain data.
LOD for UV-mapped voxels works the same. If there is not enough voxel resolution there will be aliasing issues. The UV mapping holds, as we are already able to remap from one mesh density to another.
GPU tessellation is tempting, but there are some pitfalls. Anything that changes your geometry on the GPU may not be available CPU-side. You will have to either read it back to host memory or re-compute it on the CPU side.
GPU or not, the principle of using displacements to add procedural detail is something we do all the time. The interesting bit moving forward is to provide a truly volumetric surface, instead of just a displacement value, which is kind of a hack.
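In its simplest form the displacement idea looks like this (a sketch with a stand-in noise function, not the engine's code): each surface point is pushed along its normal by a procedural amount. The limitation hinted at above is that this is a single scalar per point, so it can never produce overhangs or truly volumetric features:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Stand-in for a real noise function (Perlin, simplex, etc.).
float Noise3D(const Vec3& p)
{
    return std::sin(p.x * 12.9898f + p.y * 78.233f + p.z * 37.719f);
}

// Push a surface point along its unit normal by a noise-driven amount.
// Cheap detail, but the surface can only move in or out; it cannot grow
// caves or overhangs the way a genuine volumetric edit can.
Vec3 Displace(const Vec3& p, const Vec3& n, float amplitude)
{
    float d = Noise3D(p) * amplitude;
    return Vec3{ p.x + n.x * d, p.y + n.y * d, p.z + n.z * d };
}
```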
Have you seen this type of volumetric model?
Volumetric Modeling with Diffusion Surfaces
https://www.youtube.com/watch?v=gFQKMCF2jqs
http://igl.ethz.ch/projects/diffusion-surfaces/
"Diffusion Surfaces are, conceptually, an extension of diffusion curves to 3D volumes."
Diffusion Curves
https://www.youtube.com/watch?v=lEVe7vU5WiU
Yes, I saw it a while ago. The proposed method in particular is too dependent on symmetry and not very artist-friendly, but it would make for a nice way to interpolate the interior of objects given a few content "slices".
Miguel, this is exciting. You're taking voxels in a great direction.
Thanks... I looked at your artwork, it is very good. Congrats!