Thursday, December 11, 2014

How the voxel zebra got its stripes

Here is the story behind these two zebras:



The zebra at the left was handcrafted by an artist. It is a traditional polygon mesh where each triangle has UV coordinates. These coordinates are used to wrap a handpainted 2D texture over the triangle mesh.

This is how most 3D objects have been created since the beginning of time. It is a very powerful way to capture rich surfaces in models. It is very efficient, aligns well with the hardware, allows incredible detail, and even supports animation.

Voxels can also have UVs. This allows you to capture more detail at a much lower voxel resolution.

The zebra at the right had an interesting life. It went from the artist-made polygon mesh into a full voxel representation, then back to triangles just before rendering. UV coordinates were preserved along this trip, but there is a lot of trickery involved. These are different meshes.
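
The UV-preservation step can be pictured as a closest-point transfer: each vertex of the voxel-derived mesh finds the nearest triangle of the original mesh and inherits its interpolated UVs. The sketch below is only my illustration of that general idea; the function names are made up and the post does not reveal the engine's actual method.

```python
# Hypothetical sketch: a vertex produced by voxelization inherits UVs
# from the nearest source triangle via barycentric interpolation.
# This is NOT the engine's actual algorithm, just the general idea.

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p relative to triangle (a, b, c)."""
    v0 = [b[i] - a[i] for i in range(3)]
    v1 = [c[i] - a[i] for i in range(3)]
    v2 = [p[i] - a[i] for i in range(3)]
    d00 = sum(x * x for x in v0)
    d01 = sum(x * y for x, y in zip(v0, v1))
    d11 = sum(y * y for y in v1)
    d20 = sum(x * y for x, y in zip(v2, v0))
    d21 = sum(x * y for x, y in zip(v2, v1))
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def transfer_uv(p, tri_pos, tri_uv):
    """Interpolate the source triangle's UVs at point p."""
    u, v, w = barycentric(p, *tri_pos)
    return (u * tri_uv[0][0] + v * tri_uv[1][0] + w * tri_uv[2][0],
            u * tri_uv[0][1] + v * tri_uv[1][1] + w * tri_uv[2][1])

# A re-meshed vertex lands near this source triangle and inherits
# interpolated UVs, so the original hand-painted texture still fits:
tri_pos = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tri_uv = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(transfer_uv((0.5, 0.5, 0.0), tri_pos, tri_uv))  # (0.5, 0.5)
```

A real implementation would also need a spatial index to find the nearest triangle quickly, and special handling where the closest-point choice is ambiguous (thin features, seams).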

Both models use exactly the same texture the artist made. This is the important part. You could draw both in the same draw call.

The voxel version has fewer triangles. This is a 100x100x100 voxelization. To give you an idea of how small that is, here is the equivalent of that in 2D:
If you approached the zebra and looked at its head, the image at the left shows how big these voxels would be:


At the right you see our results. The same number of voxels can provide a lot more detail if UV coordinates are used.
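
As a back-of-envelope check on what a 100x100x100 grid means (the zebra's physical size and the texture resolution below are my own assumptions, not figures from the post):

```python
# Rough numbers for a 100x100x100 voxelization. The zebra's length and
# the texture size are assumptions for illustration only.
res = 100                      # voxels per axis
zebra_length_m = 2.5           # assumed bounding-box length
voxel_cm = zebra_length_m * 100 / res
print(f"voxel size: ~{voxel_cm:.1f} cm")       # 2.5 cm cubes

# One color per voxel caps surface detail at `res` samples per side.
# With UVs, the same grid indexes into a high-resolution texture:
texture_px = 2048              # assumed texture width
print(f"detail gain along one axis: ~{texture_px // res}x")
```

This is why the UV-mapped version on the right looks far sharper than its voxel count suggests: the texture, not the grid, carries the surface detail.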

I am happy with the results. To me this is as important as solving the physics problem. This will take the look of voxel scenes to a whole new level, while allowing you to harvest and destroy these carefully designed things.

This is still experimental and there are tricky issues ahead, like handling topology changes (holes closing) and dealing with aliasing. For now, I got to make a post with images of only zebras in it.



20 comments:

  1. Is it possible to animate voxels? And if so, will it still be so resource-efficient?

    Replies
    1. Yes, animation is not that different from the UV problem. It is resource-efficient enough to be possible, but you would not go into voxel animation to save resources. A skinned mesh is the most efficient form of animation I know of. You would do voxel animation if you want to destroy or cut the animated model in real time.

  2. Would one actually do this in practice in this specific scenario? That is to say, why would you bother to convert an NPC object to voxels? I assume one such case might be automatic LoD, but I'd expect that at some point in all this dynamic vert generation, animation and UV maps will fall apart?

    Replies
    1. For NPCs, this: http://youtu.be/dhRUe-gz690?t=2m50s

      Automatic LOD would fall apart the same as with a mesh-only approach. These voxels are encoding a mesh, after all. Like with meshes, you can automate sometimes; other times it is better to create some LODs by hand. In general you would address LODs the same way; encoding into voxels does not change much.

    2. hah, well played. I'm curious how well rapid destruction of NPCs would work in production. Have you had a chance to load test this solution yet?

      The latency between damaging an active NPC locally and the server returning the resultant mesh bears consideration. I'm assuming that updating voxelized complex NPCs will generally be more costly than an arbitrary block of terrain. I suppose you'd try to hide that by approximating the result locally, then updating with the server result? (I don't remember if you do that already.)

      I also wonder what animation artifacts one might see in the process of updating a mesh mid-animation.

    3. Voxel resolution for an NPC or avatar will likely be higher than the world's voxel resolution. But the changes we make in the world are quite large, proportionally larger than the changes you would make to a creature. I would say this is no different from building.

      I would like this too for Godzilla-sized monsters, or ones 10 times bigger. Instead of digging the ground, you are cutting the monster's flesh. Engine-wise it is no different from carving on an island.

      Latency is not a problem. It is OK to make changes locally and not wait for the server's ack before giving visual feedback. The server performs the operation anyway and broadcasts it. Any deviation will be corrected then in the client, but those cases are the exception.

      We have not tested this yet, especially not the giant monsters.

    4. Dibs testing your giant monster cutting simulation.

  3. What are your views on using a voxel-based system for cloth simulation, which is one of the most difficult things to do via traditional polygons? Unity's cloth system frankly doesn't work. I'm assuming that a voxel-based method might work better and faster for collision detection (the main difficulty with cloth simulation), because you've got a uniform grid to work with rather than polys that can be any size and shape.

  4. I would say a mesh approximates the cloth itself better. Most of what is going on is 2D, except for the collisions. I would use an evenly tessellated mesh surface for the sim.

    For collisions the voxels could help, but then you would need extra sorcery for testing against animated objects. If they are skinned voxels, it could be just a regular voxel test, but you'd need to solve the voxel skinning problem first.
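
The uniform-grid advantage the commenter is pointing at can be sketched quickly: a particle's position maps straight to a cell index, so a solidity test is an O(1) lookup instead of a search through arbitrary triangles. This is only an illustrative sketch with made-up names, not anything from the engine.

```python
# Minimal sketch of a voxel-grid collision query for a cloth particle.
# Layout and names are my own assumptions, for illustration only.

class VoxelGrid:
    def __init__(self, origin, voxel_size, res):
        self.origin = origin
        self.voxel_size = voxel_size
        self.res = res
        self.solid = set()           # indices of occupied voxels

    def index_of(self, p):
        """Map a world-space point to its integer cell index."""
        return tuple(int((p[i] - self.origin[i]) / self.voxel_size)
                     for i in range(3))

    def is_solid(self, p):
        """O(1) test: is the cell containing p occupied?"""
        i = self.index_of(p)
        return all(0 <= c < self.res for c in i) and i in self.solid

grid = VoxelGrid(origin=(0.0, 0.0, 0.0), voxel_size=0.1, res=100)
grid.solid.add((5, 0, 5))            # a bit of "body" under the cloth

# A falling cloth particle checks the cell it is about to enter:
print(grid.is_solid((0.55, 0.05, 0.55)))  # True - collides
print(grid.is_solid((0.55, 0.95, 0.55)))  # False - free space
```

The catch the reply raises is real: once the collider animates, the grid either has to be re-voxelized every frame or the voxels themselves have to be skinned.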

  5. But to simulate clothing, it isn't just a flat 2D mesh, and if the clothing has to drape over a human body, then you've got a lot of irregular polygons to check for collisions. Making a clothed statue (non-animated) would be difficult with polygons, but I would think it would be easier with voxels, wouldn't it?

  6. What is the song called that you used in the video on Youtube?

  7. I don't understand your explanation about how having UVs helps provide better results. Do you use UV values to somehow generate a better-fitting mesh?

  8. Paul: he means that he's now texturing things at a resolution well below the size of individual voxels, using the UV coordinates much as you would with traditional texturing.

    Replies
    1. Correct. Ideally we would like each voxel to have its own unique color, but we know this is too much information. This is what RAGE (the game) did. While uniquely textured worlds are beautiful, they take too much memory. If you are creating some sort of large world, then forget it. It is just not possible. So there is no other choice, voxel or polygon, it does not really matter, than to reuse textures. Different points in the world map to the same texture pixels. With voxels we could do it using triplanar mapping, which is OK for terrain but not so much for architectural elements. This new technique allows the same texture to be reused many times over the world, but artists get control over how it is applied. Once the results start coming out we will see what the big deal is about this, but it is a big deal. Big!
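
For readers unfamiliar with the triplanar fallback mentioned in that reply: it samples the texture along each world axis and blends the three samples by the surface normal, so it needs no UVs at all, which is exactly why it gives artists no control over placement. A toy sketch (the sampler and names here are my own illustration):

```python
# Sketch of triplanar mapping: project the texture along X, Y and Z and
# blend by the absolute surface normal. No UVs required.
# Illustrative only; a real shader would also smooth the blend weights.

def triplanar_weights(normal):
    """Blend weights from the absolute normal, normalized to sum to 1."""
    ax, ay, az = (abs(c) for c in normal)
    total = ax + ay + az
    return ax / total, ay / total, az / total

def sample_triplanar(texture, p, normal):
    """texture(u, v) sampled on the YZ, XZ and XY planes, then blended."""
    wx, wy, wz = triplanar_weights(normal)
    return (wx * texture(p[1], p[2]) +
            wy * texture(p[0], p[2]) +
            wz * texture(p[0], p[1]))

checker = lambda u, v: (int(u * 8) + int(v * 8)) % 2   # stand-in texture

# An upward-facing surface point takes its sample purely from the XZ plane:
print(sample_triplanar(checker, (0.3, 0.6, 0.1), (0.0, 1.0, 0.0)))
```

The UV approach from the post replaces this automatic projection with artist-authored coordinates, which is where the control comes back.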

  9. Ah, I see now. I was confused by the zebra head picture having such a jagged outline, which made me think UVs were somehow affecting geometry.

  10. The voxel zebra has a hole in its left back leg

  11. Can we get some wires? You say the voxel version has fewer polys; can we get a screenshot of the two wireframes side by side?

    Replies
    1. Yes, I will be posting about this soon. I will include some wires.

  12. That's an amazing increase in resolution!
