Wednesday, November 26, 2014

The Missing Dimension

I believe that when you combine voxels with procedural generation you get something that goes well beyond the sum of the two parts. You can be very successful at either one in isolation, but it is when you mix them that a whole new set of possibilities opens up. I came to this realization only recently.

I was watching a TV series the other night. Actors were filmed against a green screen and the whole fantasy environment was computer generated. I noticed something about the ruins in this place. The damage was clearly done by an artist's hand. Look at the red arrows:


The way the bricks are broken (left arrow) reminds me more of careful chisel work than of anything else. The rubble (right arrow) is carefully arranged and placed around the floor. We should also be seeing smaller fragments of rock, and dust.

While the artists were clearly talented, it seems they did not have the budget to create physically plausible damage by hand. The problem with the series' environment was not that it was computer generated. It wasn't computer generated enough.

Consider physically-based rendering. It is used everywhere now, but there was a time when artists had to solve the illumination problem by hand. Computing photons is no different from computing rolling stones. You may call it procedural generation when it is about stones, and rendering when it is about photons, but these are the same thing.

As we move forward, I see physically based generation becoming a thing. But there is a problem. Until now we have been too focused on rendering. Most virtual worlds (like game scenes) are described only as a surface. You cannot perform physically based generation in a world that is only a surface. We are missing the inner dimension.

Our world is 4D. This is not your usual "time is the fourth dimension" pickup line. The fourth dimension is the what, like when you ask what's inside a box. Rendering was focused mostly on where the what turns from air into solid, which is a 3D surface. While 3D is good enough for physically based rendering, we need 4D for a physically plausible world.

Is it bad that we are not 4D? In games this translates into static worlds, or scripted destruction at best. You may be holding the most powerful weapon in the universe, but it won't make a dent in the floor. It shows everywhere: as poor art, as implausible placement of rocks, snow, debris and damage, and as lack of detail in much larger features like cities, castles and landscapes.

If you want worlds that can be changed by their inhabitants, or if you want to generate content by simulation, you need to know your world as a volumetric entity. Voxels are a very simple way to achieve this.
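As a minimal sketch of what that means (the names and the flat-array layout here are only illustrative): instead of keeping just a triangle mesh of the surface, the world keeps a material identifier for every cell of a 3D grid, so the question "what is at this point?" has an answer everywhere, not only at the boundary.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative material identifiers; any asset-specific palette would do.
    enum class Material : uint8_t { Air = 0, Soil, Rock, Wood };

    // A dense voxel grid: the "fourth dimension" is simply the material
    // stored at every (x, y, z) cell, not just at the surface.
    class VoxelGrid {
    public:
        VoxelGrid(int sx, int sy, int sz)
            : sx_(sx), sy_(sy), sz_(sz),
              cells_(static_cast<size_t>(sx) * sy * sz, Material::Air) {}

        Material at(int x, int y, int z) const { return cells_[index(x, y, z)]; }
        void set(int x, int y, int z, Material m) { cells_[index(x, y, z)] = m; }

        // Because the volume is known, edits like a weapon blast are just
        // writes into the grid; the surface can be re-extracted afterwards.
        void carveSphere(float cx, float cy, float cz, float radius) {
            for (int z = 0; z < sz_; ++z)
                for (int y = 0; y < sy_; ++y)
                    for (int x = 0; x < sx_; ++x) {
                        float dx = x - cx, dy = y - cy, dz = z - cz;
                        if (dx * dx + dy * dy + dz * dz <= radius * radius)
                            set(x, y, z, Material::Air);
                    }
        }

    private:
        size_t index(int x, int y, int z) const {
            return (static_cast<size_t>(z) * sy_ + y) * sx_ + x;
        }
        int sx_, sy_, sz_;
        std::vector<Material> cells_;
    };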

Going 4D with your content is a bit of a problem. Many of the assets you already have may not work. Not every mesh defines a volume. Often meshes have holes in them that do not show because they are hidden by other parts of the object. These are not holes like the center of a doughnut; they are cuts in the mesh that leave it as just a surface in 3D space, not a closed volume.
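There is a quick way to detect this kind of open mesh. In a closed, manifold triangle mesh every edge is shared by exactly two triangles, so counting edge uses flags the cuts. A rough sketch, with placeholder types standing in for whatever your asset pipeline uses:

    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    struct Triangle { uint32_t a, b, c; }; // indices into a vertex array

    // Returns true if every edge is used by exactly two triangles, a
    // necessary condition for the mesh to enclose a volume. Meshes with
    // hidden cuts, like the mushroom stem below, fail this test.
    bool isWatertight(const std::vector<Triangle>& tris) {
        std::map<std::pair<uint32_t, uint32_t>, int> edgeUse;
        auto addEdge = [&](uint32_t i, uint32_t j) {
            if (i > j) std::swap(i, j);        // undirected edge
            ++edgeUse[{i, j}];
        };
        for (const Triangle& t : tris) {
            addEdge(t.a, t.b);
            addEdge(t.b, t.c);
            addEdge(t.c, t.a);
        }
        for (const auto& e : edgeUse)
            if (e.second != 2) return false;   // boundary or non-manifold edge
        return true;
    }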

Take a look at the following asset:

The stem of this mushroom is not volumetric: it is missing its cap. This does not show because the top of the mushroom is sunk into the stem, so the hole is completely hidden from sight. If you tried to voxelize the stem on its own, the results would be unpredictable. The hole is a singularity to the voxelization and may produce all sorts of artifacts.

We have voxelization that can deal with this. If you voxelize the top and the bottom of the mushroom together, the algorithm is robust enough to realize the hole is capped by the other piece. But we just got lucky in this case; the same does not apply to every open mesh.
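To see why open meshes are trouble, consider a naive parity-based voxelizer (this is only a sketch to illustrate the problem, not the algorithm we use): each voxel center casts a ray and counts how many triangles it crosses, and an odd count means the point is inside. A missing cap flips the parity for every cell behind it, which is exactly where the artifacts come from.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                                a.z * b.x - a.x * b.z,
                                                a.x * b.y - a.y * b.x}; }
    static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Tri { Vec3 v0, v1, v2; };

    // Moller-Trumbore: does a ray from 'orig' along +X hit the triangle?
    static bool hitsAlongX(Vec3 orig, const Tri& t) {
        const Vec3 dir = {1.0f, 0.0f, 0.0f};
        Vec3 e1 = sub(t.v1, t.v0), e2 = sub(t.v2, t.v0);
        Vec3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < 1e-8f) return false;
        float inv = 1.0f / det;
        Vec3 s = sub(orig, t.v0);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;
        return dot(e2, q) * inv > 0.0f;        // hit in front of the origin
    }

    // Parity test: an odd number of crossings means the point is inside.
    // With an open mesh the count behind the hole is off by one, so whole
    // runs of voxels get classified as solid (or empty) by accident.
    bool insideByParity(Vec3 point, const std::vector<Tri>& mesh) {
        int crossings = 0;
        for (const Tri& t : mesh)
            if (hitsAlongX(point, t)) ++crossings;
        return (crossings % 2) == 1;
    }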

Even if you get meshes that are closed and topologically correct, you are only describing a surface. What happens when you scratch that surface? If I cut the mushroom with a knife, it should reveal some sort of mushy, moist material. Where does this information come from? Whoever creates the asset has to put it there. The same applies to the bricks, rocks, plants, even the living beings of your virtual world.
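One way the author can "put it there", sketched with made-up names: tag the asset with a simple layered material profile, and have the voxelization pick a material from the distance to the surface, so a shallow cut exposes skin and a deeper one exposes flesh.

    #include <cstdint>
    #include <vector>

    enum class Material : uint8_t { Skin, Flesh, Core };

    // A hypothetical authoring-side description: "the first 2 cm are skin,
    // the next 10 cm are moist flesh, everything deeper is the core".
    struct MaterialLayer { float maxDepth; Material material; };

    Material materialAtDepth(float depthFromSurface,
                             const std::vector<MaterialLayer>& profile) {
        for (const MaterialLayer& layer : profile)
            if (depthFromSurface <= layer.maxDepth)
                return layer.material;
        return profile.back().material;   // the deepest layer fills the rest
    }

During voxelization each interior cell would query its distance to the surface and store materialAtDepth(distance, profile) instead of a single surface material.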

I think we have reached a turning point. Virtual worlds will remain static and very expensive to build unless we can make physically correct decisions about the objects in them. Whether to destroy them or to enhance them, we need to know what they are made of, what is inside.

5 comments:

  1. I wonder how long it will take to build up a public library of 'Physically Based Objects' for procedural generation. Take your engine, for example: it seems it will only hit maximum velocity when a developer can go in and say "I want a forest here with these 80 flora genealogies, this regional temperature, and these 10 geological properties."

    The same goes for all the many buildings you've spent the past few years occasionally blogging about - No single individual developer could possibly afford to build up such a dense library of data and information.

    I do believe you are correct that '4D' objects are the future. But they do still require that additional dimension of information and effort that should surely only need to be done once, much like how PBR works. Have you thought about how your company might help solve this problem?

    Replies
    1. I do not think much is needed. Libraries are not essential for mesh based production today. Most AAA games avoid reusing assets for aesthetic reasons, so players feel the content is new, the style is current, etc. I think there is a huge churn rate for assets. When producing new ones to be used for destruction and other forms of simulation and synthesis, we would make sure they are volumetric. Libraries would be a by-product, just like mesh libraries.

      We are solving this problem already at Voxel Farm. I see these not as problems, but as opportunities. For content created originally in voxels the data is there. We have also invested a lot into voxelization. Meshes will remain a very efficient, resolution-independent way to store content, even if it is volumetric. You could define a very complex volumetric object, like a shark, by using a set of nested meshes.
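      As a rough sketch of the nested-mesh idea (the names are invented for the example, and simple analytic volumes stand in for real point-in-mesh containment tests): the material at any point is decided by the innermost volume that contains it.

      #include <memory>
      #include <string>
      #include <vector>

      struct Point { float x, y, z; };

      // Stand-in for a closed mesh; a real implementation would answer
      // containment with a point-in-mesh test against the actual triangles.
      struct Volume {
          std::string material;
          virtual bool contains(const Point& p) const = 0;
          virtual ~Volume() = default;
      };

      struct Sphere : Volume {
          Point center; float radius;
          bool contains(const Point& p) const override {
              float dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
              return dx * dx + dy * dy + dz * dz <= radius * radius;
          }
      };

      // Volumes are ordered from outermost to innermost; the last one that
      // contains the point wins, e.g. skin -> muscle -> bone for the shark.
      std::string materialAt(const Point& p,
                             const std::vector<std::unique_ptr<Volume>>& nested) {
          std::string result = "air";
          for (const auto& v : nested)
              if (v->contains(p)) result = v->material;
          return result;
      }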

    2. I would have to disagree for the most part. There are certain parts of a voxelised scene that will need to be unique - Textures being a primary example. But we're talking about properties of assets that are not variable - Hills are hills. Tree species are tree species. There are constants born from physics and geology that are true and perfect for all but the rarest games.

      This is exactly what is occurring in the world of PBR shading right now: it's simple physics, and the properties created to make an object look like plastic won't change game to game. The texture and physical mesh of the object will, but not the PBR shader.

      The same applies to voxels and your engine to some extent: The grass and hills and trees and all the things that simulate realistic objects will not change game to game. The textures and materials applied to said objects may be unique. But not the properties.

      Any game that is based in medieval times and wishes to create a castle will use the same language to define that voxel castle. The exact layout and materials applied will be unique. But the L-System definition of it will not change (at least not to begin with).
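      For instance, a toy L-System sketch (the symbols are made up): the grammar that says a castle is walls around a keep, and a wall is segments and towers, could be shared across games; only the pass that turns the symbols into that game's meshes and materials would change.

      #include <iostream>
      #include <map>
      #include <string>

      // Toy L-System: rewrite every symbol by its rule, a few times over.
      std::string rewrite(std::string axiom,
                          const std::map<char, std::string>& rules,
                          int iterations) {
          for (int i = 0; i < iterations; ++i) {
              std::string next;
              for (char c : axiom) {
                  auto it = rules.find(c);
                  next += (it != rules.end()) ? it->second : std::string(1, c);
              }
              axiom = next;
          }
          return axiom;
      }

      int main() {
          // C = castle, W = wall, K = keep, T = tower, S = wall segment.
          std::map<char, std::string> rules = {
              {'C', "W K W"}, {'W', "S T S"}, {'S', "ss"}
          };
          std::cout << rewrite("C", rules, 3) << "\n";
          // The expansion is the shared "language"; a game-specific pass
          // would turn T, K and s into that game's towers, keep and masonry.
      }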

  2. There was a voxel modeler that appeared last year called Volumerics VOTA that does exactly what is being talked about here, but looking for it around the internet I can't find anything. It is as if they just disappeared this year while trying their hardest to erase any trace.
    Basically, it allowed you to model the interior of your object. They had a tagline along the lines of "Now you are modeling with volume" while bragging that, unlike in other 3D modelers, voxels are always tightly closed. Maybe someone saw the potential of their technology for 3D printing and purchased them.

    Either way, maybe a mesh chopping system akin to that of Metal Gear Rising can help solve this problem? You can cut metal into arbitrarily many parts and it generates a nice "metallic" texture pattern on the fly that makes the pieces look as if they were solid inside (a rough sketch of that idea is at the end of this comment).
    This image exactly: http://platinumgames.files.wordpress.com/2013/02/e69c88e58589e696ade99da2.jpg?w=500&h=251
    Entire post:
    http://platinumgames.com/2013/02/13/a-cut-of-the-characters/

    Or you could just become the innovator here and create some kind of "voxel interior mapping" technique that defines how the interior of a voxel object is described. (As we all know you are already doing, haha)
    For some reason I can't stop thinking of a digital tree while writing this.
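    Roughly, something this small is already enough to fake a solid interior on a freshly cut face: hash the cut plane's coordinates into a repeatable brushed-metal value, with no stored texture at all (the constants here are arbitrary).

    #include <cmath>

    // Cheap hash-based value noise over the cut plane's (u, v) coordinates.
    static float hash2(float u, float v) {
        float s = std::sin(u * 127.1f + v * 311.7f) * 43758.5453f;
        return s - std::floor(s);            // pseudo-random in [0, 1)
    }

    // Shade a point on a cut cross-section: fine horizontal streaks plus a
    // little grain reads as brushed metal, generated the moment the cut
    // happens instead of being authored per asset.
    float crossSectionShade(float u, float v) {
        float streaks = 0.5f + 0.5f * std::sin(v * 180.0f);   // thin bands
        float grain   = hash2(std::floor(u * 64.0f), std::floor(v * 64.0f));
        return 0.7f * streaks + 0.3f * grain;                 // 0 = dark, 1 = bright
    }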

  3. This is true for all things and all voxels. Consider a scene with a barn, with two floors, a window, and a skylight, sitting on dirt, with the wind blowing. The player gets in a bulldozer and wants to knock over the building. You want it to all be dynamic and procedural. For this to happen, the following must be in place:

    The mass, structural integrity, resonance frequency, shear rate, density, and appearance of the volumetric materials that comprise wood, glass, steel, and earth must all be known. Modules for the physics engine, the lighting engine, and the audio engine must all work together in real time to update each other. The physics engine calculates, using some parameters, where the barn breaks and how many pieces are made; in turn the sound engine creates noises based on the breakage and impact. Then the lighting engine updates the lightmaps of the material objects. Finally, the frame is rendered and sent with the sound to the player. (A rough sketch of this loop is at the end of this comment.)

    The same could be true of two swords striking against each other, a spaceship landing in a field, a tsunami, a tornado, and much more. Snow in particular is a temporary layer that is often badly done.

    I can see a future where there is a database of object templates created from voxels of specific materials, combined with a database of material types and properties. I can also see a future where massively parallel engines running on GPGPU arrays like Project Larrabee or other manycore server systems handle each object on its own core, processing its own physics and position and updating a global mesh and texture memory paging system in real time, which is then accessed by users to generate frames. The users would initiate changes through prediction, where their system decides which cores on the other end to talk to, then models the interaction with them and sends them the updates to calculate and push to global rendering.
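    To make the first part concrete, here is a rough sketch of the kind of data and frame ordering this implies (every name is made up): each material carries its bulk properties, and the physics, audio and lighting steps consume the same break events in a fixed order each frame.

    #include <cstdio>
    #include <vector>

    // Bulk properties a destruction pipeline needs to know per material.
    struct MaterialProperties {
        const char* name;
        float density;           // kg/m^3
        float breakEnergy;       // J needed to fracture a unit voxel
        float resonantFrequency; // Hz, drives the impact sound
    };

    struct Impact {
        int materialId;
        float energy;            // J delivered by the bulldozer, sword, etc.
    };

    // One frame of the loop described above: physics decides what breaks,
    // audio reacts to the same events, then lighting and rendering follow.
    void simulateFrame(const std::vector<MaterialProperties>& mats,
                       const std::vector<Impact>& impacts) {
        for (const Impact& hit : impacts) {
            const MaterialProperties& m = mats[hit.materialId];
            if (hit.energy >= m.breakEnergy) {
                int pieces = 1 + static_cast<int>(hit.energy / m.breakEnergy);
                std::printf("physics: %s fractures into %d pieces\n", m.name, pieces);
                std::printf("audio:   impact tone near %.0f Hz\n", m.resonantFrequency);
                // lighting: re-light the cells the new pieces expose (omitted)
            }
        }
        // render: draw the updated scene and mix the audio (omitted)
    }

    int main() {
        std::vector<MaterialProperties> mats = {
            {"wood",  600.0f,  150.0f, 420.0f},
            {"glass", 2500.0f,  40.0f, 900.0f},
        };
        simulateFrame(mats, {{1, 95.0f}, {0, 30.0f}});
    }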
