Monday, March 21, 2016

Displacement and Beyond

In a previous post I introduced a couple of new systems we are working on. We call them Meta Meshes and Meta Materials. Here is another quick bit of info on how they work. This time we will look at the very basics.

We will start with a very low resolution sphere mesh like this one:
This new system allows me to think of this not just as a mesh, but as a "meta mesh". This is not just a sphere, but the basis of something else that kind of looks like a sphere: a planet, a pebble or maybe your second-grade teacher.

To a meta-object like this one we can apply a meta-material. For the sake of simplicity, in this post I will cover only two aspects of meta-materials. The first is that they can use a displacement map to add detail to the surface.

Once you tie a meta-material to a meta-object, you get something tangible, something you can generate and render. Here is how the meta-sphere looks when the displacement equals zero:
This is a reproduction of the original mesh. It still looks very rough and faceted, making you wonder why you would need to go meta in the first place. But here is what you get when you increase the displacement amplitude:

While the shape follows the supplied concept, the detail is procedural. The seed information is the very low resolution sphere and the displacement map, which is applied to the surface using something similar to triplanar mapping. Unlike shader-only techniques, this is real geometry. It will be seen by the collision system, the AI pathfinding and other systems you may lay on top. It is made of voxels, so this is detail you can carve and destroy.
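
To make the process a bit more concrete, here is a rough sketch of how a displacement map could be applied along the surface with triplanar-style blending. This is not actual engine code; the map, the blend weights and the names are just placeholders for the idea:

```cpp
// Rough sketch only: sample a tiling displacement map with triplanar-style
// blending and push a point of the coarse meta mesh out along its normal.
// The map, weights and names are placeholders, not the engine's actual code.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in for a real displacement texture: returns a height in [0, 1].
float sampleHeight(float u, float v)
{
    return 0.5f + 0.25f * std::sin(u * 12.0f) * std::cos(v * 12.0f)
                + 0.25f * std::sin(u * 37.0f + v * 53.0f);
}

// Triplanar-style sampling: project the position onto the three axis planes
// and blend the samples by the absolute components of the normal.
float triplanarHeight(const Vec3& p, const Vec3& n)
{
    float wx = std::fabs(n.x), wy = std::fabs(n.y), wz = std::fabs(n.z);
    float sum = wx + wy + wz + 1e-6f;
    float hx = sampleHeight(p.y, p.z); // projection along X
    float hy = sampleHeight(p.x, p.z); // projection along Y
    float hz = sampleHeight(p.x, p.y); // projection along Z
    return (wx * hx + wy * hy + wz * hz) / sum;
}

// Displace a surface point along its normal. With amplitude = 0 you simply
// get the original coarse mesh back.
Vec3 displace(const Vec3& p, const Vec3& n, float amplitude)
{
    float h = triplanarHeight(p, n);
    return { p.x + n.x * h * amplitude,
             p.y + n.y * h * amplitude,
             p.z + n.z * h * amplitude };
}

int main()
{
    Vec3 p = { 0.0f, 1.0f, 0.0f }; // a point on the coarse sphere
    Vec3 n = { 0.0f, 1.0f, 0.0f }; // its surface normal
    Vec3 d = displace(p, n, 0.3f);
    std::printf("displaced point: %f %f %f\n", d.x, d.y, d.z);
    return 0;
}
```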

This displacement is applied along the normal of the original meta-object. This would produce beautiful features in vertical cliffs, overhangs and cave ceilings. But often displacement alone is not enough. You may want more complex volumetric protuberances. Luckily meta-materials can also be extended with voxel instances:


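Here is a similarly rough sketch of the instance side of the idea: candidate points on the surface are kept or skipped based on a repeatable hash, and each kept point becomes a position, scale and rotation for a volumetric instance. The hash, density value and structures below are made up for illustration; they are not the actual meta-material format:

```cpp
// Rough sketch only: decide where to stamp volumetric instances on the
// displaced surface using a repeatable hash as the random source.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct Instance
{
    Vec3 position;  // where the voxel model gets stamped
    float scale;    // per-instance size variation
    float rotation; // rotation around the surface normal, in radians
};

// Small integer hash used as a repeatable random source.
std::uint32_t hash3(int x, int y, int z)
{
    std::uint32_t h = static_cast<std::uint32_t>(x) * 73856093u
                    ^ static_cast<std::uint32_t>(y) * 19349663u
                    ^ static_cast<std::uint32_t>(z) * 83492791u;
    h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
    return h;
}

float hashToFloat(std::uint32_t h) { return (h & 0xFFFFFFu) / 16777215.0f; }

// density in [0, 1]: how likely an instance is at a given spot; this is the
// kind of value a distribution map in the meta-material could provide.
void scatterInstances(const std::vector<Vec3>& surfacePoints, float density,
                      std::vector<Instance>& out)
{
    for (const Vec3& p : surfacePoints)
    {
        int ix = static_cast<int>(p.x * 10.0f);
        int iy = static_cast<int>(p.y * 10.0f);
        int iz = static_cast<int>(p.z * 10.0f);
        if (hashToFloat(hash3(ix, iy, iz)) < density)
        {
            Instance inst;
            inst.position = p;
            inst.scale = 0.5f + hashToFloat(hash3(iy, iz, ix));
            inst.rotation = 6.2831853f * hashToFloat(hash3(iz, ix, iy));
            out.push_back(inst);
        }
    }
}

int main()
{
    std::vector<Vec3> points = { { 0.0f, 1.0f, 0.0f },
                                 { 1.0f, 0.0f, 0.0f },
                                 { 0.0f, 0.0f, 1.0f } };
    std::vector<Instance> instances;
    scatterInstances(points, 0.5f, instances);
    std::printf("placed %zu instances\n", instances.size());
    return 0;
}
```
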
This is a real-time generation system, so there are practical limits on what can be done. I would like to go full fractal on this, with an infinitude of nested meta materials, but the current iteration of the system does not go that far. Even at this stage, however, it can be used to produce very impressive content, in particular landscapes. I will be covering that in an upcoming post.

14 comments:

  1. I get where you are going with this. It is nested materials all the way down. A planet meta material has just enough information to generate a unique set of biome meta materials, and each biome will have just enough information to generate its topological/feature data all the way down to individual object instances, which will be textured with more meta materials that define things like 3D/2D bark/stone/leaf/etc. textures.

    That said, I do have one question. Do meta materials have individual minimum distance triggers?

    Replies
    1. Yes, you clearly got it. At the moment the system has only two levels. This is mainly because I believe generation rules may be different at different scales. For instance, you may place biomes using a different logic than the one used to place the spots on a leopard. For now the system only uses distribution maps to break down the next level of materials.

      There are no distance triggers; it is more a matter of scale. Depending on the size you assign to a meta material, the meta-material-to-material conversion will happen at different levels.
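
      To make that breakdown more concrete, here is a rough sketch of the idea: a meta material carries a distribution map that says which child material covers each region, and the conversion happens once the generation scale is fine enough. The names and numbers below are made up for illustration, not the actual engine API.

      ```cpp
      // Illustrative sketch only (made-up names, not the engine API): a meta
      // material breaks down into its next level through a distribution map
      // once the target generation scale is finer than the material's own size.
      #include <cstdio>
      #include <vector>

      struct MetaMaterial
      {
          const char* name;
          float size;                                // world size it covers
          std::vector<const MetaMaterial*> children; // next level, e.g. biomes
      };

      // Placeholder distribution map: which child covers coordinates (u, v).
      int sampleDistribution(const MetaMaterial& m, float u, float v)
      {
          int cell = (int)(u * 4.0f) + 4 * (int)(v * 4.0f);
          return cell % (int)m.children.size();
      }

      const MetaMaterial* resolve(const MetaMaterial& m, float u, float v,
                                  float targetCellSize)
      {
          // Still too coarse, or nothing below this level: stay meta.
          if (targetCellSize >= m.size || m.children.empty())
              return &m;
          // Fine enough: the distribution map picks the next-level material.
          return resolve(*m.children[sampleDistribution(m, u, v)], u, v,
                         targetCellSize);
      }

      int main()
      {
          MetaMaterial desert = { "desert", 1000.0f, {} };
          MetaMaterial forest = { "forest", 1000.0f, {} };
          MetaMaterial planet = { "planet", 100000.0f, { &desert, &forest } };

          // Coarse pass: the planet is still a single meta material.
          std::printf("%s\n", resolve(planet, 0.3f, 0.7f, 200000.0f)->name);
          // Fine pass: the same spot resolves to a biome.
          std::printf("%s\n", resolve(planet, 0.3f, 0.7f, 10.0f)->name);
          return 0;
      }
      ```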

  2. So if you are far away from a planet, will it still trigger the biomes to be generated? Or is it more of a ratio between size and distance?

    Replies
    1. In this implementation distance does not matter at all, so yes, biomes would be generated. Generation happens at the target resolution. If the planet is too far away, you may get only a very coarse depiction of the biomes. Distance becomes a factor when you look at the specifics of rendering. At some distances, where the geometry may not be important, you could feed the meta data to the shaders and bump up the perceived amount of detail while skipping the entire voxel part of the pipeline.
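
      As a rough sketch of that trade-off (the pinhole camera model and the thresholds below are made up, not how the engine actually decides):

      ```cpp
      // Illustrative only: decide whether displaced detail is worth voxelizing
      // or should be faked by the shader, based on projected size on screen.
      #include <cstdio>

      enum class DetailPath { Voxels, ShaderOnly };

      // Projected size in pixels of a feature of worldSize meters at a
      // distance, for a simple pinhole camera with focal length in pixels.
      float projectedPixels(float worldSize, float distance, float focalPixels)
      {
          return worldSize * focalPixels / distance;
      }

      DetailPath chooseDetailPath(float featureSize, float distance,
                                  float focalPixels, float minPixelsForVoxels)
      {
          // If the displaced features would cover only a few pixels, skip the
          // voxel stage and feed the meta data straight to the shader.
          float pixels = projectedPixels(featureSize, distance, focalPixels);
          return pixels >= minPixelsForVoxels ? DetailPath::Voxels
                                              : DetailPath::ShaderOnly;
      }

      int main()
      {
          const float distances[] = { 10.0f, 500.0f, 50000.0f };
          for (float dist : distances)
          {
              DetailPath path = chooseDetailPath(2.0f, dist, 1000.0f, 4.0f);
              std::printf("distance %.0f m -> %s\n", dist,
                          path == DetailPath::Voxels ? "voxels" : "shader only");
          }
          return 0;
      }
      ```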

  3. I was thinking, there is another technology I know of that is structured very similarly to this: AI. Some of the most powerful AI applications developed to date use layers to help interpret pictures/sounds/videos. I find the similarities between what you are doing and how AI operates astounding. You are essentially working AI backwards to create worlds/things/etc.

    Perhaps, and I am just thinking out loud here, it would be possible to incorporate AI into procedural generation. That way, instead of writing code to generate models/terrain/etc., you could have the AI generate code based on examples of code you feed it and examples of real pictures or hand-crafted models to give the AI goals.

    Replies
    1. I believe in the really long term, procedural generation is not a thing. There is only AI. This is our mantra: automate the artist, not the subject. Like you say, a human painter will recall how erosion looks instead of running erosion filters in his/her mind.

      Today both AI and ProcGen are a collection of very domain specific tools. There is little common ground among AI techniques. I'd say there is less common ground between AI and ProcGen.

    2. I see AI and procgen both as branching/expanding subjects, although AI is growing at a much faster rate. Eventually, when the connection is made between the two and AI gets the ability to use procgen as an output, we will see a huge boost to procgen.

    3. OTOH, using a deep learning network you can feed in erosion pictures and get out erosion effects (or even, for instance, extremely complex things like animated weather or sun activity) that would potentially be non-computable in any sane amount of time. Even though it is not physically accurate (the algorithm is just mimicking pictures), the end result can potentially be more accurate than you could get writing a physically accurate algorithm by hand.

    4. I've been watching how Space Engine generates planetary features on a grand scale. It's more natural looking at the planetary scale than the local scale, but I want to think it might be one way to approach a part of the design problem here, driving features top-down by planet type/conditions. Granted, I don't have first hand experience with the problem you often mention of "total automation having a high garbage:realistic ratio."

    5. My actual point is that you cannot have interesting planets generated in realtime, at spaceship approach speeds, on consumer-grade hardware. It does not really matter if you do it top-down, although like you say, top-down is likely to produce better results.

      It is really about informational entropy. It does not matter where it comes from: it could be man-made, automatically generated or even sampled from the real world. It takes *work* to produce entropy; there is no escape. Realtime generation aims to skip this work by using low-entropy methods like local mathematical functions. Crappy is in the eye of the beholder; I will just say that line of research is not very interesting to me.

      With enough time and energy, it is possible to automate the creation of very rich environments. This is where I want to spend my time.

  4. So the obvious question: why not move to displacement voxels instead of displacement maps? Tie this in with detail levels and whammo. Displacement voxels could probably make for interesting procedural meta materials, so you could have volumetric bodies and such. Pretty amazing developments though, very inspiring!

    Replies
    1. The key reason why not is performance. Each metamaterial needs to be transformed on the fly, in realtime. It has to be applied along the surface direction, which involves rotation. Also it has to be stretched and scaled in order to provide more diversity. These transforms are expensive to compute with voxels at the moment.

      But yes, volumetric content for the metamaterials is eventually the right answer.
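
      To illustrate the cost, here is a toy example (not engine code): stamping a rotated and scaled voxel brush means inverse-transforming and resampling every target voxel, which is O(n^3) work per stamp, versus a couple of 2D map lookups for a displacement map.

      ```cpp
      // Illustrative only: applying a voxel brush with rotation (around Y) and
      // uniform scale requires resampling the brush at every target voxel.
      #include <cmath>
      #include <cstdio>
      #include <vector>

      const int N = 32; // brush and target resolution per axis

      // Stand-in voxel brush: a solid sphere.
      float brushSample(int x, int y, int z)
      {
          float dx = x - N / 2.0f, dy = y - N / 2.0f, dz = z - N / 2.0f;
          float r = N / 3.0f;
          return (dx * dx + dy * dy + dz * dz < r * r) ? 1.0f : 0.0f;
      }

      long stampRotatedScaled(std::vector<float>& target, float angle, float scale)
      {
          float c = std::cos(-angle), s = std::sin(-angle);
          long samples = 0;
          for (int z = 0; z < N; ++z)
              for (int y = 0; y < N; ++y)
                  for (int x = 0; x < N; ++x)
                  {
                      // Inverse transform the target voxel into brush space.
                      float px = (x - N / 2.0f) / scale;
                      float py = (y - N / 2.0f) / scale;
                      float pz = (z - N / 2.0f) / scale;
                      float bx = c * px - s * pz + N / 2.0f;
                      float by = py + N / 2.0f;
                      float bz = s * px + c * pz + N / 2.0f;
                      ++samples;
                      if (bx < 0 || by < 0 || bz < 0 || bx >= N || by >= N || bz >= N)
                          continue;
                      target[(z * N + y) * N + x] =
                          brushSample((int)bx, (int)by, (int)bz);
                  }
          return samples; // N*N*N resampling operations for a single stamp
      }

      int main()
      {
          std::vector<float> grid(N * N * N, 0.0f);
          long samples = stampRotatedScaled(grid, 0.5f, 1.3f);
          std::printf("one rotated + scaled stamp = %ld voxel samples\n", samples);
          return 0;
      }
      ```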
