Saturday, May 14, 2016

Turtle Mountain

If you have ten minutes or so to spare, I encourage you to check out this video. The rest of this post is about how it was done:


The Shyamalanian twist here is that the guy lives on the back of a giant turtle. (Maybe not so much of a twist, since the video title and thumbnail pretty much give it away.)

What you are seeing here is a new Voxel Farm system in action. It takes a very low-resolution mesh as a base and enhances it by adding procedural detail.

I think this is an essential tool for world builders. Very often procedural generation deprives the creator of control over the large-scale features of the terrain. Or, when control is allowed, it comes in the form of 2D maps like heightmaps and masks. There is no way to drive the procedural generation into complicated shapes and topologies like intricate caves, floating islands, wide waterfalls, etc.

We chose a massive turtle mountain to drive home the point that anything you can imagine can be turned into a detailed terrain. This is how it works:

The first thing you need to do is create a low-resolution mesh for the base of the terrain feature. This project used three of these meshes: one for the turtle's body and shell, another for the terrain protuberance on top of the shell, and one last mesh for a series of caves. Here you can see them:


On their own they were rather simple to produce. The turtle is a stock model from a third-party site. The mountain was done by displacing a mesh using a heightmap that had a fluvial erosion filter applied to it. The cave system started as a simple mesh that was subdivided and then displaced with 3D noise.
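
If you are curious about that last step, here is a rough sketch of the idea in C++. This is not our production code, and noise3 below is a toy stand-in for whatever real 3D noise you prefer (Perlin, simplex, etc.):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Vertex { Vec3 position; Vec3 normal; };

    // Toy stand-in for a real 3D noise function (Perlin, simplex...).
    float noise3(const Vec3& p)
    {
        return std::sin(p.x * 12.9898f + p.y * 78.233f + p.z * 37.719f);
    }

    // Displace each vertex of the subdivided cave mesh along its normal
    // by a signed noise value, turning a smooth shape into a craggy cave.
    void displaceWithNoise(std::vector<Vertex>& mesh,
                           float frequency, float amplitude)
    {
        for (Vertex& v : mesh)
        {
            Vec3 p = { v.position.x * frequency,
                       v.position.y * frequency,
                       v.position.z * frequency };
            float d = noise3(p) * amplitude;
            v.position.x += v.normal.x * d;
            v.position.y += v.normal.y * d;
            v.position.z += v.normal.z * d;
        }
    }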

These meshes were imported into Voxel Studio (our creative world building tool) and properly positioned relative to each other.

In addition to the geometry, the meshes were textured using traditional means. Here you can see the texture that was applied to the turtle body:


Here is how the textured top mountain looks:


Note how the texture uses flat, solid colors. Each pixel in the texture represents a terrain type, not an actual color. You can think of these as instructions to be passed down to the procedural generators when the time comes to add detail.
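
If that sounds abstract, it is essentially a lookup from pixel color to an ID. A toy version might look like this; the colors and material names below are invented for illustration and are not our actual format:

    #include <cstdint>

    // Invented IDs for illustration; each maps to a set of generation rules.
    enum class MetaMaterial : std::uint16_t { Grass = 1, RockCliff = 2, Dirt = 3 };

    std::uint32_t packRgb(std::uint8_t r, std::uint8_t g, std::uint8_t b)
    {
        return (std::uint32_t(r) << 16) | (std::uint32_t(g) << 8) | b;
    }

    // Flat texture colors act as instructions, not as colors to display.
    MetaMaterial metaMaterialFromPixel(std::uint8_t r, std::uint8_t g,
                                       std::uint8_t b)
    {
        switch (packRgb(r, g, b))
        {
            case 0x00FF00: return MetaMaterial::Grass;     // flat green
            case 0x808080: return MetaMaterial::RockCliff; // flat gray
            default:       return MetaMaterial::Dirt;      // fallback
        }
    }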

The meshes may appear detailed at this distance, but if you stretched them to cover four kilometers (which is the size of the turtle base in the world), you would see a single triangle span a dozen meters or more. A single texture pixel would cover several meters. This would make for a very boring and flat environment. Here is where the procedural aspect kicks in.
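
To put a number on it: even a generous 1024x1024 texture stretched over those four kilometers would give you one pixel roughly every four meters (4000 m / 1024 ≈ 3.9 m).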

Each color in a mesh texture represents what we call a "Meta-Material". I have posted about them before: here and here. In general, a meta-material is a set of rules that define how a coarse section of space can be refined. In this particular implementation for our engine, this is achieved by supplying two different pieces of information:
  1. A displacement map
  2. A sub-material map 
This is a very simple and effective way to refine space. The displacement map is used to change the geometry and add volumetric detail to an otherwise flat surface. The sub-material map registers closely with the displacement map, so the artist can make sure materials appear at the right points in the displaced geometry. Once again, the sub-material map does not contain final colors. Each pixel in this map represents the final voxel material that will be applied there.
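
To make the flow concrete, here is a minimal sketch of how the two maps could be consumed when a coarse surface point is refined. The types and the nearest-neighbor sampling are simplifications, not our engine's actual API:

    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };

    // The two pieces of information a meta-material supplies.
    struct MetaMaterialMaps
    {
        int width = 0, height = 0;
        std::vector<float> displacement;        // signed offset, in meters
        std::vector<std::uint16_t> subMaterial; // final voxel material IDs

        int index(const Vec2& uv) const // nearest-neighbor, for brevity
        {
            int x = int(uv.u * (width - 1));
            int y = int(uv.v * (height - 1));
            return y * width + x;
        }
    };

    struct RefinedPoint { Vec3 position; std::uint16_t material; };

    // Refine one coarse surface point: push it along the interpolated base
    // normal by the sampled displacement, and tag it with the sub-material
    // found at the same UV, so detail and materials stay registered.
    RefinedPoint refine(const MetaMaterialMaps& maps, const Vec3& basePos,
                        const Vec3& baseNormal, const Vec2& uv)
    {
        int i = maps.index(uv);
        float d = maps.displacement[i];
        return { { basePos.x + baseNormal.x * d,
                   basePos.y + baseNormal.y * d,
                   basePos.z + baseNormal.z * d },
                 maps.subMaterial[i] };
    }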

Here you can see the displacement and sub-material maps used for one of the meta-materials in the scene:


One particularly nice aspect of the system is that displacement properly follows the base mesh surface. It is possible to have nice-looking cliffs and even apply displacement to bottom-facing surfaces like the ceiling of a cave. For mesh-only displacement this is not usually difficult, but doing it in voxel space (so you can dig and destroy) can be quite complex. I'm happy to see we can have voxel cliffs that look right:


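If you are wondering how displacement can work on voxels at all, here is one way to picture it. This is a simplification, not the actual implementation: treat the voxel data as a signed distance field and let the displacement move the surface crossing along the direction to the base mesh, which is why it behaves the same on cave ceilings as on floors:

    #include <cstddef>
    #include <vector>

    // Each voxel sample stores the signed distance to the base mesh plus
    // the UV of the closest point on that mesh.
    struct VoxelSample
    {
        float distanceToBase; // signed: negative inside, positive outside
        float u, v;           // UV of the closest point on the base mesh
    };

    // Placeholder sampler for the meta-material's displacement map.
    float displacementAt(float u, float v)
    {
        (void)u; (void)v;
        return 0.0f; // a real version reads the displacement texture
    }

    // Positive displacement pushes the zero crossing of the field outward
    // along the base surface direction. The result is ordinary voxel data,
    // so digging and destruction keep working.
    void applyDisplacement(const std::vector<VoxelSample>& grid,
                           std::vector<float>& fieldOut)
    {
        fieldOut.resize(grid.size());
        for (std::size_t i = 0; i < grid.size(); ++i)
        {
            float d = displacementAt(grid[i].u, grid[i].v);
            fieldOut[i] = grid[i].distanceToBase - d;
        }
    }
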
Besides displacement and sub-material maps, meta-materials can be provided with "planting rules". This allows bringing in additional procedural detail in the form of larger instanced content. These can be voxel instances, like the large rocks and boulders seen in the video, or they can be passed as instances to the rendering side so a mesh is displayed in that position. The trees in the video are an example of the latter.


The previous image shows a mesh instance (a tree) on the left and a voxel instance (a boulder) on the right. Plants, grass, and small rocks are also instanced, but they are planted on top of materials, not meta-materials. One thing I did not mention before is that this demo uses Unreal Engine 4. That is another key piece of tech that is coming along very nicely.
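
To give a flavor of what a planting rule boils down to, here is a toy sketch. The hash and the parameters are invented for illustration; the key property is that placement is a pure function of position and seed, so the same trees and boulders reappear every time an area is generated:

    #include <cstdint>

    struct Vec3 { float x, y, z; };
    struct Instance { Vec3 position; std::uint32_t seed; };

    // Cheap integer hash; the constants are arbitrary. What matters is that
    // the result depends only on the cell coordinates and the world seed.
    std::uint32_t hashCell(int x, int y, int z, std::uint32_t seed)
    {
        std::uint32_t h = seed;
        h ^= std::uint32_t(x) * 0x9E3779B1u;
        h ^= std::uint32_t(y) * 0x85EBCA77u;
        h ^= std::uint32_t(z) * 0xC2B2AE3Du;
        h ^= h >> 16;
        return h;
    }

    // Decide whether a surface cell receives an instance (tree, boulder...).
    // density is in [0, 1]: the fraction of eligible cells that get one.
    bool plant(int x, int y, int z, std::uint32_t worldSeed, float density,
               const Vec3& surfacePoint, Instance& out)
    {
        std::uint32_t h = hashCell(x, y, z, worldSeed);
        if ((h & 0xFFFF) / 65535.0f >= density)
            return false;
        out.position = surfacePoint;
        out.seed = h; // reused to pick rotation, scale, model variant
        return true;
    }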

Already confused by these many levels of indirection? It is alright: once you start working with these features, they begin to make perfect sense. More than that, it becomes apparent this is the only way to get from a very coarse world definition to something as detailed as what you see in the video.

I hope you enjoyed this and that it gets your imagination started.

18 comments:

  1. Looking forward to this new tech, very nice work that is for sure. It was too bad to hear about EQ Next, but I am sure that when the time comes this is going to be used and made into a seriously wicked open world RPG.

    Replies
    1. Yes, do not feel bad about it. We started thinking about systems like this because of EQN. Like you said, I also think it is a matter of time. I cannot be public about this, but there are very interesting projects in the works.

    2. Both Landmark and H1Z1 could do with these sorts of systems. Hope we get to explore a world of this tech soon(tm).

  2. Now make the turtle move! XD. I'm kidding of course (though that would be really cool)

    This indeed seems like a very, very useful feature, since it allows more control to the creators without necessarily taking more time. And heightmaps aren't really all that intuitive to most people, while with 3D models it's pretty obvious how they're going to turn out =P.

    Replies
    1. You may be kidding, but making the turtle move keeps me up at night. I'd like to go there as soon as possible. It is not trivial but it can be done.

      A big question is what happens to the stuff you build where the skin creases. It could mean massive computations for the physics layer. This is somewhat related to natural disaster simulation (e.g. earthquakes), which also keeps me up at night. I'd like to kill those two birds with the same stone.

    2. I do feel like that is one of those scenarios that would never be a real-world practicality. Like, if I were making a game that used this framework, I'd have the turtle base as a typical static 3D model so it can be skinned and rigged, with the shell itself being the voxel terrain.

      Maybe a nice tech demo though.

    3. You may be thinking in narrow turtle terms. Most animated characters do not have a large rigid section like a turtle's shell. It could be a 10 km rhino charging a 15 km elephant, where the elephant civilization does everything possible to destroy the rhino civ, and vice versa. You can have time scale proportional to size, so when you are civilization-sized, the animals are very slow.

      The real question is whether you want to modify the deformable parts of the model. I think the answer is yes; it does not matter if it is for a game or a tech demo. We are still in 2016, so it is a bit early to predict what games will be out there.

    4. I strongly support development of massive mobile things! And to start off, perhaps you could just have overlapping voxel spaces cancel each other out? So if a part of a building enters another, the two parts cancel out, and when they get pulled out of each other again, there's nothing there.
      Not pretty, but it'd work for a temporary thing, I bet =P. And as far as I understand, it shouldn't be too hard with voxels, should it? Or am I misunderstanding how these systems work?

  3. Great stuff as usual, Miguel!

    Though I must be honest - I continue to be disappointed by the artifacts present in the UVs and textures. It really does not look good in this day and age. I realize some of what we see is down to you simply not putting artists on the case. However, one still sees a lot of texture seams throughout the above video.

    Is this something you intend to continue to improve over time?

    Replies
    1. Thanks for pointing it out. These artifacts come from the triplanar mapping not being complete in the UE4 version of the shaders. It lacks blending and filtering of the contributions from different planes, which produces the hard seams. We are still working on it.
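
      To illustrate what is missing, this is roughly the math involved (a plain C++ sketch, not our actual shader source): instead of picking a single projection plane per pixel, you blend the three planar projections with weights derived from the surface normal:

          #include <cmath>

          struct Vec3 { float x, y, z; };

          // Blend weights for the three planar projections. A higher
          // sharpness makes the transition regions narrower.
          Vec3 triplanarWeights(const Vec3& n, float sharpness)
          {
              Vec3 w = { std::pow(std::fabs(n.x), sharpness),
                         std::pow(std::fabs(n.y), sharpness),
                         std::pow(std::fabs(n.z), sharpness) };
              float sum = w.x + w.y + w.z;
              return { w.x / sum, w.y / sum, w.z / sum };
          }

          // color = w.x * sampleYZ(p) + w.y * sampleXZ(p) + w.z * sampleXY(p)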

      I'm not sure what your frame of reference is for how procedural, harvestable/destructible terrain should look in this day and age. Can you provide a concrete example?

    2. I tried to find something specific to show you, and in the process I think I figured out the key issue: I don't think it's good enough to simply generate sparse geometry that looks like rock and then paint it with tileable rock textures for the coarse detail. Not anymore, anyway.

      If you look at some modern games that have large open environments (Battlefront, Battlefield 4) you'll notice that a lot of the environment uses diffuse textures that have been specifically painted along the contours of the geometry.

    3. That makes more sense.

      I agree that triplanar mapping on top of random geometry does not look OK for all surfaces. This is why we have added a UV channel to voxel data. It is not used in this demo, but we are moving there for terrain as well. You can check my previous post for a video of UV-mapped voxels and how they can be deformed while retaining as much of the mapping as possible.

      For a changing environment, you still need something like triplanar. In the modern games you speak of, you cannot make a hole in the rock or cut a stone pillar in two. One of the main reasons is that the game does not know how to texture the new content, so they create this invisible-wall mechanic. You could be carrying the most powerful weapon, but it won't make a dent in the world. Also, the game spaces in these games are much more limited and fixed. So it is not fair to compare them to what you just saw here; these are different classes of systems.

      Some other modern games, with less realistic styles, can afford to exploit triplanar mapping for natural elements. You can save a lot in man-hours, texture space, and download size.

      I'd say both techniques are needed for next-gen tech.

  4. True voxel raytracing is more interesting, but of course nobody is working on that.

    Replies
    1. Would that include Atomontage and Euclideon? Both were under active development last time I checked. They are not exactly raytracing, but they are closer to "true" voxel rendering. At Voxel Farm we are not exactly into rendering; we are more about the content creation side of things. We do not have a pony in this race, we just use what works best.

      For games, "true" voxel rendering is not very appealing today. The rendering part alone is not on par with what you get from GPU-assisted rendering. I have not seen satisfactory physically based rendering, or dynamic lighting with shadowing from multiple sources, working directly on voxels.

      Even if you had the rendering completely figured out, rendering is but one of the subsystems you need for a game. There are also physics, AI, pathfinding, etc., which are fairly complex and have very mature implementations on top of polygonal data. So you have this 18-wheeler where each wheel needs re-inventing. It will take time, if it happens at all.

  5. Great work. Keep going!

  6. Just like Kamica above, I immediately thought about movement. More specifically, about a bacterium walking on a gigantic clockwork. It wouldn't be that difficult to simulate, because all the parts move in a repetitive fashion, so all the contact surfaces can just be assigned a worn-down version of their meta-material, without physically simulating collisions and deformation.

    Is simulating physics at various levels of detail feasible? Let's say we have an asteroid impact, and instead of running a huge simulation we just simulate two balls colliding and set everything more detailed to a molten-rock meta-material. Is this a thing? Will it ever be a thing?

    Replies
    1. Yes, physics at multiple LODs would be the way to go. The challenge is applying the changes to higher-resolution LODs as an observer approaches. I really hope we get to work on that.

  7. Hello, I am a really big fan of your work and I am thinking about making a game in the future combining Unreal Engine and Voxel Farm.
    I have seen your video about air buoyancy and destruction, so I know that voxels have mass. I was wondering how hard and taxing it would be to make falling voxels react to the point of impact, and to have the material shatter if it can't handle the weight of the falling body.
    I imagine that calculating this for each voxel would be really hard, but maybe it would be possible to generate a map of the falling object that would be responsible for calculating whether each part can handle the impact or not. If it can't, then shatter those voxels, reduce the main body's speed by some amount, and then evaluate the next part of the generated map. That map could be different for each material, like wood or rock.
    At least I think the ability to shatter is one of the biggest advantages your engine could have.
