Wednesday, May 18, 2016

Terrain Synthesis

This is just a teaser. We are still working on this, but we got some results that are already good enough to show. It is not about where terrain types appear (that was covered here and here), but about how a particular terrain type is generated.

We want to make procedural generation as accessible as possible. Just like a movie director who hands the CGI team a portfolio of photos and concept art and simply says "make it look like this", we want the creator to be able to remain entirely clueless about how everything works underneath.

This is how it feels to create a new terrain type. You provide a few pictures of it and we take it from there:


This system builds a probabilistic model based on the samples you provide. That is enough to get an idea of the base elevation. On top of that, several natural filters are applied. It turns out we do know a bit more about this landscape: we know how dry it is and what the average temperature is, among other things. The only fact we are missing, and have to ask about, is how old you think the terrain is. The time scales range from hundreds of millions of years to billions of years. (If you believe your terrain is 6,000 years old, we cannot accommodate you at the moment.)
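To make the idea more concrete, here is a minimal sketch of what such a pipeline could look like. Everything in it is hypothetical, the names and formulas are mine, not the actual Voxel Farm implementation: sample photos feed a simple statistical model of the base elevation, and the climate and age answers drive the strength of the natural filters.

```cpp
// Hypothetical sketch only, not the real Voxel Farm pipeline.
#include <vector>

struct ClimateParams {
    float dryness;      // 0 = wet, 1 = arid
    float temperature;  // average, degrees Celsius
    float ageYears;     // hundreds of millions to billions of years
};

// Fit a crude elevation statistic from the sample pixels (placeholder math).
struct ElevationModel {
    float mean = 0.0f;
    void fit(const std::vector<float>& sampleElevations) {
        for (float e : sampleElevations) mean += e;
        if (!sampleElevations.empty()) mean /= sampleElevations.size();
    }
};

// A stand-in "natural filter": older, wetter terrain ends up more eroded.
float applyErosionFilter(float baseElevation, const ClimateParams& climate) {
    float erosion = (1.0f - climate.dryness) * (climate.ageYears / 4.0e9f);
    return baseElevation * (1.0f - 0.5f * erosion);
}
```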

You can provide one or more sample pictures. The more pictures you provide, the better, but just one picture is often enough. Ready to see some results? The following terrains were each synthesized from a single photo (do not mind the faux coloring; it is only there to identify the different terrain layers for now):




Providing multiple samples creates a mix of sorts, similar to how you find both the mother's and the father's features in their kids:


This works with any kind of image. It could be some fancy concept art as seen below:


The natural filters in this case added some realism to the concept, and eroded some of the original hill shape. This could be avoided if you are after a more stylized look. But if you are short on time, and want to prototype different realistic terrains, the ability to quickly sketch something and feed it to the generator is a big help.

Of course you can still look under the hood and tinker with generation frequencies, filter parameters, etc. You can still have terrain models imported from Digital Elevation Models, or from third party software like World Machine. The key here is you do not have to anymore.

I'd be glad to go into the details of how this works if you guys are interested. Just let me know. I still owe you Part 2 of the continent generation series. That should come shortly.

Saturday, May 14, 2016

Turtle Mountain

If you have ten minutes or so to spare I encourage you to check out this video. The rest of this post will be about how it was done:


The Shyamalanian twist here is that the guy lives on the back of a giant turtle. (Maybe not so much of a twist, since the video title and thumbnail pretty much give it away.)

What you are seeing here is a new Voxel Farm system in action. It gets a very low-resolution mesh as a base and enhances it by adding procedural detail.

I think this is an essential tool for world builders. Very often procedural generation deprives the creator of control over the large-scale features of the terrain. Or, when control is allowed, it comes in the form of 2D maps like heightmaps and masks. There is no way to drive the procedural generation into complicated shapes and topologies like intricate caves, floating islands, wide waterfalls, etc.

We chose a massive turtle mountain to drive home the point that anything you can imagine can be turned into a detailed terrain. This is how it works:

The first thing you need to do is create a low-resolution mesh for the base of the terrain feature. This project used three of these meshes: one for the turtle's body and shell, another for the terrain protuberance on top of the shell, and a last one for a series of caves. Here you can see them:


On their own they were rather simple to produce. The tortoise is a stock model from a third-party site. The mountain was done by displacing a mesh using a heightmap that had a fluvial erosion filter applied to it. The cave system is a simple mesh with additional subdivisions and 3D noise applied to it.

These meshes were imported into Voxel Studio (our creative world building tool) and properly positioned relative to each other.

In addition to their triangles, the meshes were textured using traditional means. Here you can see the texture that was applied to the turtle body:


Here is how the textured top mountain looks:


Note how the texture uses single flat colors. Each pixel in the texture represents a terrain type, not an actual color. You can think of these as instructions to be passed down to the procedural generators when the time comes to add detail.
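As a rough illustration of that idea (the color table and IDs below are made up, not Voxel Farm's actual format), interpreting the base texture amounts to a lookup from flat color to terrain type:

```cpp
// Hedged sketch: flat texture colors read as terrain-type instructions,
// not final colors. Colors and IDs are invented for illustration.
#include <cstdint>
#include <map>

using MetaMaterialId = uint16_t;

// Each flat color in the base mesh texture names a terrain type.
const std::map<uint32_t, MetaMaterialId> kColorToMetaMaterial = {
    { 0xFF00FF00u, /* grassy slopes */ 1 },
    { 0xFF808080u, /* rocky cliffs  */ 2 },
    { 0xFF0000FFu, /* cave walls    */ 3 },
};

MetaMaterialId lookupMetaMaterial(uint32_t texelColor) {
    auto it = kColorToMetaMaterial.find(texelColor);
    return it != kColorToMetaMaterial.end() ? it->second : 0;  // 0 = default terrain
}
```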

The meshes may appear detailed at this distance, but if you stretched them to cover four kilometers (which is the size of the turtle base in the world), you would see a single triangle span a dozen meters or more. A single texture pixel would cover several meters. This would make for a very boring and flat environment. Here is where the procedural aspect kicks in.

Each color in a mesh texture represents what we call a "Meta-Material". I have posted about them before: here and here. In general a metamaterial is a set of rules that define how a coarse section of space can be refined. In this particular implementation for our engine, this is achieved by supplying two different pieces of information:
  1. A displacement map
  2. A sub-material map 
This is a very simple and effective way to refine space. The displacement map is used to change the geometry and add volumetric detail to an otherwise flat surface. The submaterial map registers closely with the displacement map, so the artist can make sure materials appear at the right points in the displaced geometry. Once again, the submaterial map does not contain final colors. Each pixel in this map represents a final voxel material that will be applied there.
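Here is a minimal sketch of that two-map idea, using hypothetical types rather than the engine's real API: refining a coarse surface point means reading a displacement height and a final voxel material at the same UV location.

```cpp
// Illustrative types only; not the actual Voxel Farm data structures.
#include <cstdint>
#include <vector>

struct Map2D {
    int width = 0, height = 0;
    std::vector<float> texels;  // displacement heights or material ids
    // Nearest-neighbor sample; assumes a non-empty map and u, v in [0, 1].
    float sample(float u, float v) const {
        int x = static_cast<int>(u * (width - 1));
        int y = static_cast<int>(v * (height - 1));
        return texels[y * width + x];
    }
};

struct MetaMaterial {
    Map2D displacement;  // how far to push the surface out (e.g. in meters)
    Map2D subMaterial;   // which final voxel material to apply
};

struct RefinedPoint { float offset; uint8_t material; };

// Refine one coarse surface point: both maps are read in the same UV space.
RefinedPoint refine(const MetaMaterial& mm, float u, float v) {
    RefinedPoint r;
    r.offset   = mm.displacement.sample(u, v);
    r.material = static_cast<uint8_t>(mm.subMaterial.sample(u, v));
    return r;
}
```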

Here you can see the displacement and submaterial map used for one of the metamaterials in the scene:


One particularly nice aspect of the system is that displacement properly follows the base mesh surface. It is possible to have nice looking cliffs and even apply displacement to bottom facing surfaces like the ceiling of a cave. For mesh-only displacement this is not usually difficult, but doing so in voxel space (so you can dig and destroy) can be quite complex. I'm happy to see we can have voxel cliffs that look right:
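In case it helps, this is the general principle in a few lines of illustrative code (not the engine's implementation): the sampled height is applied along the local surface normal instead of a fixed world axis, which is what makes cliffs and cave ceilings work.

```cpp
// Sketch only: displacement that follows the base surface.
struct Vec3 { float x, y, z; };

Vec3 displaceAlongNormal(const Vec3& surfacePoint, const Vec3& unitNormal, float height) {
    // Unlike a heightmap (which always pushes along +Y), this pushes along
    // the interpolated normal, even if it points sideways or straight down.
    return { surfacePoint.x + unitNormal.x * height,
             surfacePoint.y + unitNormal.y * height,
             surfacePoint.z + unitNormal.z * height };
}
```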


Metamaterials, besides displacement and submaterial maps, can be provided with "planting rules". This allows bringing in additional procedural detail in the form of larger instanced content. These can be voxel instances, like the large rocks and boulders seen in the video, or they can be passed as instances to the rendering side so a mesh is displayed in that position. The trees in the video are an example of the latter.
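A planting rule could be imagined as something like the following sketch; the structure and fields are hypothetical, but they capture the idea of scattering either voxel instances or rendered mesh instances per metamaterial:

```cpp
// Hypothetical structure, not the real engine format.
#include <cstdint>
#include <string>
#include <vector>

enum class InstanceKind { VoxelInstance, MeshInstance };

struct PlantingRule {
    InstanceKind kind;           // boulders as voxels, trees as rendered meshes
    std::string  assetName;      // which instance to plant
    float        densityPerKm2;  // how many instances per square kilometer
    float        minScale, maxScale;
};

struct MetaMaterialPlanting {
    uint16_t metaMaterialId;
    std::vector<PlantingRule> rules;
};
```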


The previous image shows a mesh instance (a tree) on the left and a voxel instance (a boulder) on the right. Plants, grass, and small rocks are also instanced, but they are planted on top of materials, not meta-materials. One thing I did not mention before is that this demo uses Unreal Engine 4. That is another key piece of tech that is coming along very nicely.

Already confused by these many levels of indirection? It is alright; once you start working with these features they begin to make perfect sense. More than that, it becomes apparent this is the only way you can get from a very coarse world definition to something as detailed as what you see in the video.

I hope you enjoyed this and that it gets your imagination started.

Monday, May 9, 2016

Applying textures to voxels

When I look back at the evolution of polygon-based content, I see three distinct ages. There was a time where we could only draw lines or basic colored triangles:


One or two decades later, when memory allowed it, we managed to add detail by applying 2D images along triangle surfaces:


This was much better, but still quite deficient. What is typical of this brief age is that textures were not closely fitted to meshes. This was a complex problem. Textures are 2D objects, while meshes live in 3D. Somehow the 3D space of the mesh had to be mapped into the 2D space of the texture. There was no simple, single analytical solution to this problem, so mapping had to be approximated by a preset number of cases: planar, cylindrical, spherical, etc.

With enough time, memory constraints relaxed again. This allowed us to write the 3D-to-2D mapping as a set of additional coordinates for the mesh. This brought us into the last age: UV-mapped meshes. It is called UV mapping because the texture coordinates get their own names: just like we use XYZ for the 3D coordinates in space, we use UV for coordinates in texture space. This is how Lara Croft got her face.
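For readers who have not dealt with UV mapping, a minimal sketch (not tied to any specific engine) of the difference between the two ages could look like this:

```cpp
// Illustrative layout only.
struct UV { float u, v; };

struct Vertex {
    float x, y, z;  // XYZ: position in 3D space
    float u, v;     // UV: matching position in texture space, authored per vertex
};

// The older "preset case" approach, for contrast: a planar projection derives
// the texture coordinate from the position and simply ignores one axis.
UV planarMap(const Vertex& p) { return { p.x, p.y }; }
```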


We currently live in this age of polygon graphics. Enhancements like normal maps, or other maps used for physically based rendering, are extensions of this base principle. Even advanced techniques like virtual texturing or Megatextures still rely on this.

You may be wondering why this is relevant to voxel content. I believe voxel content is no different from polygon content when it comes to memory restrictions, hence it should go through similar stages as restrictions relax.

The first question is whether it is necessary to texture voxels at all. Without texturing, each voxel needs to store color and other surface properties individually. Is this feasible?

We can look again to the polygon world for an answer. The equivalent question for polygon content would be: can we get all the detail we need from geometry alone, can we go Reyes-style and rely on microgeometry? For some highly stylized games maybe, but if you want richer, realistic environments this is out of the question. In the polygon realm this also touches on the use of unique texturing and megatextures, as in idTech5 and the game Rage. This is a more efficient approach to having a unique color per scene element, but it still was not efficient enough to compete with traditional texturing. The main reason is that storing unique colors for entire scenes was simply too much. It led to huge game sizes while the perceived resolution remained low. Traditional texturing, on the other hand, allows the same texture pixel to be reused many times over the scene. This redundancy decreases the required information by an order of magnitude, often at no perceivable cost.
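To make the redundancy argument concrete, here is a rough back-of-envelope with made-up but plausible numbers; only the order of magnitude matters:

```cpp
// Illustration only: unique color storage vs. reused tiling textures.
#include <cstdio>

int main() {
    const double texelsPerSide = 1000.0 / 0.1;                        // 1 km at 10 cm per texel
    const double uniqueBytes   = texelsPerSide * texelsPerSide * 3;   // one RGB texel per spot, uncompressed
    const double reusedBytes   = 8.0 * 1024 * 1024 * 3;               // eight tiling 1024x1024 RGB textures
    std::printf("unique texturing: %.0f MB\n", uniqueBytes / (1024 * 1024)); // ~286 MB per square km
    std::printf("reused textures : %.0f MB\n", reusedBytes / (1024 * 1024)); // ~24 MB for the whole world
    return 0;
}
```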

Unique geometry and surface properties per voxel are no different from megatextures. They are slightly worse, as the geometry is also unique, and polygons are able to compress surfaces much more efficiently than voxels. With that in mind, I think memory and size constraints are still too high for untextured voxels to be competitive. So there you have the first voxel content age, where you still see large primitives and flat colors, and size constraints won't allow them to become subpixel:

(Image donated by Doug Binks @dougbinks from his voxel engine)

The second age is basic texturing. Here we enhance the surface detail by applying one or more textures. The mapping approach of choice is tri-planar mapping. This is how Voxel Farm has worked until now. This is sufficient for natural environments, but still not there for architectural builds. You can get fairly good looking results, but it requires attention to detail and often additional geometry:
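For reference, tri-planar mapping is a standard technique; a plain C++ sketch of it (real implementations live in shaders) looks like the following: the texture is projected along the three world axes and the three samples are blended by how much the surface normal faces each axis, so no UVs are required.

```cpp
// Standard tri-planar blend, written as a self-contained sketch.
#include <cmath>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Stand-in for a texture fetch: a procedural checker keeps the sketch self-contained.
Color sampleTexture(float u, float v) {
    bool light = (static_cast<int>(std::floor(u)) + static_cast<int>(std::floor(v))) % 2 == 0;
    return light ? Color{0.8f, 0.8f, 0.8f} : Color{0.3f, 0.3f, 0.3f};
}

// Assumes a unit (non-zero) surface normal.
Color triplanar(const Vec3& worldPos, const Vec3& normal, float tileScale) {
    // Blend weights from the absolute normal, normalized to sum to one.
    float wx = std::fabs(normal.x), wy = std::fabs(normal.y), wz = std::fabs(normal.z);
    float sum = wx + wy + wz;
    wx /= sum; wy /= sum; wz /= sum;

    Color cx = sampleTexture(worldPos.y * tileScale, worldPos.z * tileScale); // projection along X
    Color cy = sampleTexture(worldPos.x * tileScale, worldPos.z * tileScale); // projection along Y
    Color cz = sampleTexture(worldPos.x * tileScale, worldPos.y * tileScale); // projection along Z

    return { cx.r * wx + cy.r * wy + cz.r * wz,
             cx.g * wx + cy.g * wy + cz.g * wz,
             cx.b * wx + cy.b * wy + cz.b * wz };
}
```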


In this scene (from Landmark, using Voxel Farm) the pattern in the floor tiles is made out of voxels. The same applies to the table surfaces. These are quite intricate and require significant data overhead compared to a texture you could just fit to each table top, as you would do for a normal game asset.

We saw it was time for voxels to enter the third age. We wanted voxel content that benefited from carefully created and applied textures, but also from the typical advantages you get from voxels: five-year-olds can edit them and they allow realistic realtime destruction.

The thing about voxels is, they are just a description of a volume of space. We tend to think about them as a place to store a color, but this is a narrow conception. We saw that it was possible to encode UV coordinates in voxels as well.
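The post does not describe the actual storage format, but the idea can be sketched as a surface voxel that carries a material id and texture coordinates instead of a color (hypothetical layout):

```cpp
// Hypothetical layout, for illustration only.
#include <cstdint>

struct TexturedVoxel {
    uint16_t material;  // which authored texture/material this voxel belongs to
    uint16_t u, v;      // quantized texture coordinates, e.g. 0..65535 across the texture
};
```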

What came next is not for the faint of heart. The levels of trickery and hackery required to get this working in a production-ready pipeline were serious. We had to write voxelization routines that captured the UV data with no ambiguities. We had to make sure our dual contouring methods could output the UV data back into triangle form. The realtime compression now had to be aware of the UV space, and remain fast enough for realtime use. And last but not least, we knew voxel content would be edited and modified in all sorts of cruel ways. We had to understand how the UV data would survive (or not) all these transformations.

After more than a year working on this, we are pleased to announce this feature will make it into Voxel Farm's next major release. Depending on the questions I get here, I may go into more detail about how all this works. Meanwhile, enjoy a first dev video of the feature in action: