This is how most 3D objects have been created since the beginning of time. It is a very powerful way to capture rich surfaces in models. It is very efficient, it aligns well with the hardware, and it allows for incredible detail and even animation.
Voxels can also have UVs. This allows you to capture more detail at a much lower voxel resolution.
The zebra at the right had an interesting life. It went from the artist-made polygon mesh into a full voxel representation. Then it went back to triangles just before rendering. UV coordinates were preserved along this trip, but there is a lot of trickery involved. These are different meshes.
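The post does not describe the actual pipeline, so here is a minimal sketch of one way to carry UVs into a voxelization: scatter sample points over each triangle, interpolate the UV at each sample with barycentric coordinates, and store the result in the voxel cell the sample lands in. The function name and the "last writer wins" policy are my own simplifications, not the real implementation.

```python
import numpy as np

def voxelize_with_uv(tris, uvs, res, samples=32):
    """Hypothetical sketch: scatter surface samples of a UV-mapped
    triangle mesh into a voxel grid, storing one UV per surface voxel.

    tris: (T, 3, 3) triangle vertex positions in [0, 1)^3
    uvs:  (T, 3, 2) per-corner UV coordinates
    res:  grid resolution (res x res x res)
    """
    grid = {}  # (i, j, k) -> (u, v)
    rng = np.random.default_rng(0)
    for tri, uv in zip(np.asarray(tris, float), np.asarray(uvs, float)):
        # Random barycentric samples over the triangle's surface.
        r = rng.random((samples, 2))
        mask = r.sum(axis=1) > 1
        r[mask] = 1 - r[mask]  # fold samples back into the triangle
        bary = np.column_stack([1 - r.sum(axis=1), r[:, 0], r[:, 1]])
        pts = bary @ tri      # (samples, 3) surface positions
        samp_uv = bary @ uv   # (samples, 2) interpolated UVs
        for p, t in zip(pts, samp_uv):
            cell = tuple(np.clip((p * res).astype(int), 0, res - 1))
            grid[cell] = tuple(t)  # last writer wins; real code would blend
    return grid
```

A real pipeline would also have to resolve the reverse trip (voxels back to triangles) without tearing the UV chart apart, which is where most of the trickery the post mentions would live.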
Both models use exactly the same texture the artist made. This is the important part. You could draw both in the same draw call.
The voxel version has fewer triangles. This is a 100x100x100 voxelization. To give you an idea of how small that is, here is the equivalent of that in 2D:
If you approached the zebra and looked at its head, at the left is how big these voxels would be:
At the right you see our results. The same amount of voxels can provide a lot more detail if UV coordinates are used.
I am happy with the results. To me this is as important as solving the physics problem. This will take the look of voxel scenes to a whole new level, while allowing you to harvest and destroy these carefully designed things.
This is still experimental and there are tricky issues ahead, like handling topology changes (holes closing) and dealing with aliasing. For now, I got to make a post with images of only zebras in it.
Accidental stereoscopy!?
Is it possible to animate voxels? And if so, will it still be as resource-efficient?
Yes, animation is not that different from the UV problem. It is resource-efficient enough to be possible, but you would not go into voxel animation to save resources. A skinned mesh is the most efficient form of animation I know of. You would do voxel animation if you want to destroy or cut the animated model in real time.
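For readers unfamiliar with why skinned meshes are so cheap, the standard technique is linear blend skinning: each vertex is transformed by a small weighted set of bone matrices. A minimal sketch (function and variable names are illustrative, not from any engine):

```python
import numpy as np

def skin_vertices(rest_pos, bone_mats, weights):
    """Linear blend skinning sketch: each posed vertex is the weighted
    sum of its rest position transformed by each bone's matrix.

    rest_pos:  (V, 3) rest-pose vertex positions
    bone_mats: (B, 4, 4) homogeneous bone transforms (rest -> posed)
    weights:   (V, B) per-vertex bone weights, each row summing to 1
    """
    V = rest_pos.shape[0]
    homo = np.hstack([rest_pos, np.ones((V, 1))])         # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_mats, homo)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)  # (V, 4)
    return blended[:, :3]
```

The whole animation step is a handful of matrix multiplies per vertex, which is why cutting or destroying the model, rather than efficiency, is the reason to reach for voxel animation instead.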
Would one actually do this in practice in this specific scenario? That is to say, why would you bother to convert an NPC object to voxels? I assume one such case might be automatic LOD, but I assume at some point in all this dynamic vertex generation the animation and UV maps will fall apart?
For NPCs, this: http://youtu.be/dhRUe-gz690?t=2m50s
Automatic LOD would fall apart the same as a mesh-only approach. These voxels are encoding a mesh, after all. Like with meshes, you can automate sometimes; other times it is better to create some LODs by hand. In general you would address LODs the same way; encoding into voxels does not change much.
Hah, well played. I'm curious how well rapid destruction of NPCs would work in production. Have you had a chance to load-test this solution yet?
The latency between damaging an active NPC locally and the server returning the resultant mesh bears consideration. I'm making the assumption that updating voxelized complex NPCs will generally be more costly than an arbitrary block of terrain. I suppose you'd try to hide that by approximating the result locally, then updating with the server result? (I don't remember if you do that already.)
I also wonder what animation artifacts one might see in the process of updating a mesh mid-animation.
Voxel resolution for an NPC or avatar will likely be higher than the world's voxel resolution. But the changes we make in the world are quite large, proportionally larger than the changes you would make to a creature. I would say this is no different than building.
I would like this too for Godzilla-sized monsters, or ones 10 times bigger. Instead of digging the ground, you are cutting the monster's flesh. Engine-wise it is no different than carving on an island.
Latency is not a problem. It is OK to make changes locally and not wait for the server's ack before giving visual feedback. The server performs the operation anyway and broadcasts it. Any deviation will then be corrected in the client, but these are the exception.
We have not tested this yet, especially not the giant monsters.
Dibs testing your giant monster cutting simulation.
What are your views on using a voxel-based system for cloth simulation, which is one of the most difficult things to do via traditional polygons? Unity's cloth system frankly doesn't work. I'm assuming that a voxel-based method might work better and faster for collision detection (the main difficulty with cloth simulation), because you've got a uniform grid to work with rather than polys that can be any size and shape.
I would say a mesh better approximates the cloth itself. Most of what is going on is 2D, except for the collisions. I would use an evenly tessellated mesh surface for the sim.
For collisions the voxels could help, but then you would need extra sorcery for testing against animated objects. If they are skinned voxels it could be just a regular voxel test, but you'd need to solve the voxel skinning problem first.
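The "regular voxel test" mentioned above is the appeal of the uniform grid: checking whether a cloth particle is inside a solid cell is a constant-time hash lookup. A sketch under that assumption (a real solver would also push the particle back to the nearest surface):

```python
def build_solid_set(voxels):
    """Hash set of occupied (i, j, k) cells for O(1) collision lookups."""
    return set(map(tuple, voxels))

def collide_particle(pos, solid, cell_size=1.0):
    """Return True if a cloth particle at `pos` sits inside a solid voxel.
    Floor-divide each coordinate to find the containing grid cell."""
    cell = tuple(int(c // cell_size) for c in pos)
    return cell in solid
```

Compare this with a triangle soup, where the same query needs a BVH or spatial hash built over polygons of arbitrary size, which is the commenter's point about cloth collision.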
But to simulate clothing, it isn't just a flat 2D mesh, and if the clothing has to drape over a human body then you've got a lot of irregular polygons to check for collisions. To make a clothed statue (non-animated) would be difficult with polygons, but I would think it would be easier with voxels, wouldn't it?
What is the song called that you used in the YouTube video?
I don't understand your explanation of how having UVs helps provide better results. Do you use UV values to somehow generate a better-fitting mesh?
Paul: he means that he's now texturing things at a resolution well below the size of individual voxels, using the UV coordinates much as you would with traditional texturing.
Correct. Ideally we would like each voxel to have its own unique color. But we know this is too much information. This is what RAGE (the game) did. While uniquely textured worlds are beautiful, they take too much memory. If you are creating some sort of large world, forget it. It is just not possible. So there is no other choice, voxel or polygon does not really matter, than to reuse textures. So different points in the world map to the same texture pixel. With voxels we could do it using triplanar mapping, which is OK for terrains, but not so much for architectural elements. This new technique allows us to reuse the same texture many times over the world, but artists get control over how it is applied. Once the results start coming out we will see what the big deal about this is, but it is a big deal. Big!
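For contrast with the UV approach, here is what the triplanar mapping mentioned above boils down to: project the texture along each of the three axes and blend the three samples by weights derived from the surface normal. A sketch with illustrative names (the `sharpness` exponent is a common tuning knob, not a value from this engine):

```python
import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    """Blend weights for the X, Y and Z projections of triplanar mapping.
    A sharper exponent tightens the transition between projections."""
    n = np.abs(np.asarray(normal, float)) ** sharpness
    return n / n.sum()

def triplanar_sample(texture, pos, normal):
    """Sample a 2D `texture(u, v)` function three times, once per
    axis-aligned projection, and blend by the normal-derived weights."""
    x, y, z = pos
    wx, wy, wz = triplanar_weights(normal)
    return (wx * texture(y, z)    # projection along X
            + wy * texture(x, z)  # projection along Y
            + wz * texture(x, y)) # projection along Z
```

The artist has no say in how the three projections land on the surface, which is exactly why it works for terrain but fights you on architectural elements, and why per-voxel UVs are the big deal here.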
Ah, I see now. I was confused by the zebra head picture having such a jagged outline, which made me think UVs were somehow affecting geometry.
The voxel zebra has a hole in its left back leg.
Can we get some wires? You say the voxel version has fewer polys; can we get a screenshot of the two side by side?
Yes, I will be posting about this soon. I will include some wires.
That's an amazing increase in resolution!