
Monday, May 9, 2016

Applying textures to voxels

When I look back at the evolution of polygon-based content, I see three distinct ages. There was a time when we could only draw lines or basic colored triangles:


One or two decades later, when memory allowed it, we managed to add detail by applying 2D images onto triangle surfaces:


This was much better, but still quite deficient. What is typical of this brief age is that textures were not closely fitted to meshes. This was a complex problem: textures are 2D objects, while meshes live in 3D. Somehow the 3D space of the mesh had to be mapped into the 2D space of the texture. There was no simple, single analytical solution to this problem, so the mapping had to be approximated by a handful of preset cases: planar, cylindrical, spherical, etc.

With enough time, memory constraints relaxed again. This allowed us to store the 3D-to-2D mapping as a set of additional coordinates in the mesh itself. This brought us into the last age: UV-mapped meshes. It is called UV because the texture axes are named U and V, just like we use XYZ for the 3D coordinates in space. This is how Lara Croft got her face.
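To make the idea concrete, here is a tiny sketch (the names are illustrative, not from any particular engine) of what a UV-mapped vertex carries: a position in 3D space plus a position in the 2D texture.

```cpp
// Minimal sketch of a UV-mapped vertex (illustrative names only).
struct Vertex {
    float x, y, z; // position in 3D space
    float u, v;    // position in the 2D texture, usually in [0, 1]
};

// A triangle references three such vertices; the rasterizer interpolates
// u and v across the triangle and uses them to fetch texels.
struct Triangle {
    Vertex a, b, c;
};
```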


We currently live in this age of polygon graphics. Enhancements like normal maps, or other maps used for physically based rendering, are extensions of this base principle. Even advanced techniques like virtual texturing or Megatextures still rely on this.

You may be wondering why this is relevant to voxel content. I believe voxel content is no different from polygon content when it comes to memory restrictions, hence it should go through similar stages as those restrictions relax.

The first question is whether it is necessary to texture voxels at all. Without texturing, each voxel needs to store color and other surface properties individually. Is this feasible?

We can look again to the polygon world for an answer. The equivalent question for polygon content would be: can we get all the detail we need from geometry alone? Can we go Reyes-style and rely on micro-geometry? For some highly stylized games maybe, but if you want richer, realistic environments this is out of the question. In the polygon realm this also touches on unique texturing and megatextures, as in idTech 5 and the game Rage. Megatexturing is a more efficient way of giving every scene element a unique color, but it still was not efficient enough to compete with traditional texturing. The main reason is that storing unique colors for entire scenes was simply too much. It led to huge game sizes while the perceived resolution remained low. Traditional texturing, on the other hand, allows the same texture pixel to be reused many times across the scene. This redundancy cuts the required information by an order of magnitude, often at no perceivable cost.

Unique geometry and surface properties per voxel are no different from megatextures. They are slightly worse, since the geometry is also unique and polygons are able to compress surfaces much more efficiently than voxels. With that in mind, I think memory and size constraints are still too high for untextured voxels to be competitive. So there you have the first voxel content age, where you still see large primitives and flat colors, and size constraints won't allow them to become subpixel:

(Image donated by Doug Binks @dougbinks from his voxel engine)

The second age is basic texturing. Here we enhance the surface detail by applying one or more textures. The mapping approach of choice is tri-planar mapping. This is how Voxel Farm has worked until now. This is sufficient for natural environments, but still not quite there for architectural builds. You can get fairly good looking results, but it requires attention to detail and often additional geometry:


In this scene (from Landmark, using Voxel Farm) the pattern in the floor tiles is made out of voxels. The same applies to the table surfaces. These are quite intricate and carry significant data overhead compared to a texture you could simply fit to each table top, as you would do for a normal game asset.
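For readers unfamiliar with tri-planar mapping, here is a rough CPU-side sketch of the idea (a simplified illustration, not the actual Voxel Farm shader): the texture is sampled by projecting the surface point onto the three coordinate planes, and the three samples are blended with weights derived from the surface normal.

```cpp
#include <cmath>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Stand-in for a texture fetch: a procedural checker pattern keeps the sketch
// self-contained. A real engine would sample an actual 2D texture here.
Color sample2D(float u, float v)
{
    int check = (static_cast<int>(std::floor(u)) + static_cast<int>(std::floor(v))) & 1;
    float g = check ? 1.0f : 0.2f;
    return { g, g, g };
}

// Tri-planar sampling sketch: project the world position onto the three
// coordinate planes and blend the samples by the absolute normal components.
Color triplanarSample(Vec3 p, Vec3 n)
{
    float wx = std::fabs(n.x), wy = std::fabs(n.y), wz = std::fabs(n.z);
    float sum = wx + wy + wz;
    wx /= sum; wy /= sum; wz /= sum;

    Color cx = sample2D(p.y, p.z); // projection along X
    Color cy = sample2D(p.x, p.z); // projection along Y
    Color cz = sample2D(p.x, p.y); // projection along Z

    return { cx.r * wx + cy.r * wy + cz.r * wz,
             cx.g * wx + cy.g * wy + cz.g * wz,
             cx.b * wx + cy.b * wy + cz.b * wz };
}
```

Note that no UV coordinates appear anywhere in that sketch, which is exactly why tri-planar mapping is so convenient for voxels, and also why it struggles with content that was authored around a specific texture layout.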

We saw it was time for voxels to enter the third age. We wanted voxel content that benefited from carefully created and applied textures, but also from the typical advantages you get from voxels: five-year-olds can edit them, and they allow realistic realtime destruction.

The thing about voxels is, they are just a description of a volume of space. We tend to think of them as a place to store a color, but this is a narrow conception. We saw that it was possible to encode UV coordinates in voxels as well.
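I won't reproduce our actual storage format here, but as a rough illustration (the layout below is hypothetical), a voxel can carry more than occupancy and color; it can also record which material covers the surface crossing its cell and where in that material's texture the surface lands:

```cpp
#include <cstdint>

// Hypothetical layout, for illustration only: a voxel that stores the
// attributes of the surface crossing its cell, not just a color.
struct SurfaceVoxel {
    uint8_t  occupancy;  // inside/outside flag or a density value
    uint16_t materialId; // which texture/material applies here
    uint16_t u, v;       // quantized UV coordinates into that material's texture
};
```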

What came next is not for the faint of heart. The level of trickery and hackery required to get this working in a production-ready pipeline was serious. We had to write voxelization routines that captured the UV data with no ambiguities. We had to make sure our dual contouring methods could output the UV data back into triangle form. The realtime compression now had to be aware of the UV space, and remain fast enough for realtime use. And last but not least, we knew voxel content would be edited and modified in all sorts of cruel ways. We had to understand how the UV data would survive (or not) all these transformations.

After more than a year working on this, we are pleased to announce this feature will make it into Voxel Farm's next major release. Depending on the questions I get here, I may go into more detail about how all this works. Meanwhile, enjoy a first dev video of how the feature works:


Thursday, December 11, 2014

How the voxel zebra got its stripes

Here is the story behind these two zebras:



The zebra at the left was handcrafted by an artist. It is a traditional polygon mesh where each triangle has UV coordinates. These coordinates are used to wrap a handpainted 2D texture over the triangle mesh.

This is how most 3D objects have been created since the beginning of time. It is a very powerful way to capture rich surfaces in models. It is very efficient, aligns well with the hardware, allows you to have incredible detail, and even supports animation.

Voxels can also have UV. This allows you to capture more detail at much lower voxel resolution.

The zebra at the right had an interesting life. It went from the artist-made polygon mesh into a full voxel representation. Then it went back to triangles just before rendering. UV coordinates were preserved along this trip, but there is a lot of trickery involved. These are different meshes.

Both models use exactly the same texture the artist made. This is the important part. You could draw both in the same draw call.

The voxel version has fewer triangles. This is a 100x100x100 voxelization. To give you an idea of how small that is, here is the equivalent of that in 2D:

If you approached the zebra and looked at its head, the image at the left shows how big these voxels would be:


At the right you see our results. The same amount of voxels can provide a lot more detail if UV coordinates are used.

I am happy with the results. To me this is as important as solving the physics problem. This will take the look of voxel scenes to a whole new level, while allowing you to harvest and destroy these carefully designed things.

This is still experimental and there are tricky issues ahead, like handling topology changes (holes closing) and dealing with aliasing. For now, I got to make a post with images of only zebras in it.



Monday, December 1, 2014

Looking inside voxel assets

You do not have to be a cat or a Floyd fan to enjoy a laser show.

Here is a laser-like tool that allows you to explore the inside of voxelized assets. The challenge was to show the interior features of a model while keeping the context clear in the viewer's mind. The following video shows it in action:


I really like this new toy. I have already wasted many hours playing with it, checking whether any of the assets we have so far have defects inside and getting a better understanding of how these models are built.

It also allows us to place pivot points inside our instances:



This is how we came up with it. We could not see anything inside!

Wednesday, November 26, 2014

The Missing Dimension

I believe when you combine voxels with procedural generation you get something that goes well beyond the sum of these two parts. You can be very successful at either of the two in isolation, but it is when you mix them that you open up a whole set of possibilities. I came to this realization only recently.

I was watching a TV series the other night. Actors were filmed against a green screen and the whole fantasy environment was computer generated. I noticed something about the ruins in this place. The damage was clearly done by an artist's hand. Look at the red arrows:


The way the bricks are broken (left arrow) reminds me more of careful chisel work than anything else. The rubble (right arrow) is carefully arranged and placed around the floor. Also, we should be seeing smaller fragments of rock and dust.

While the artists were clearly talented, it seems they did not have the budget to create physically plausible damage by hand. The problem with the series environment was not that it was computer generated. It wasn't computer generated enough.

Consider physically based rendering. It is used everywhere now, but there was a time when artists had to solve the illumination problem by hand. Computing photons is no different from computing rolling stones. You may call it procedural generation when it is about stones, and rendering when it is about photons, but these are the same thing.

As we move forward, I see physically based generation becoming a thing. But there is a problem. Until now we have been too focused on rendering. Most virtual worlds (like game scenes) are described only as a surface. You cannot perform physically based generation in a world that is only a surface. We are missing the inner dimension.

Our world is 4D. This is not your usual "time is the fourth dimension" pickup line. The fourth dimension is the what, like when you ask what is inside a box. Rendering has focused mostly on where the what turns from air into solid, which is a 3D surface. While 3D is good enough for physically based rendering, we need 4D for a physically plausible world.

Is it bad that we are not 4D? In games this translates into static worlds, or scripted destruction at best. You may be holding the most powerful weapon in the universe, but it won't make a dent in the floor. It shows everywhere as poor art, implausible placement of rocks, snow, debris and damage, and also as a lack of detail in much larger features like cities, castles and the landscape.

If you want worlds that can be changed by their inhabitants, or if you want to generate content by simulation, you need to know your world as a volumetric entity. Voxels are a very simple way to achieve this.
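As a toy illustration of what that means (a minimal sketch, not Voxel Farm's actual data model), a volumetric world can answer a material query at any point in space, not just on the visible surface:

```cpp
#include <cstdint>
#include <vector>

// Toy volumetric world: a dense grid of material IDs (0 = air).
// Unlike a surface mesh, it can answer "what is at this point?" anywhere.
struct VolumeGrid {
    int sx, sy, sz;
    std::vector<uint8_t> material;

    VolumeGrid(int x, int y, int z)
        : sx(x), sy(y), sz(z), material(static_cast<size_t>(x) * y * z, 0) {}

    uint8_t& at(int x, int y, int z)
    {
        return material[(static_cast<size_t>(z) * sy + y) * sx + x];
    }
};
```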

Going 4D with your content is a bit of a problem. Many of the assets you may already have will not work. Not every mesh defines a volume. Often, meshes have holes in them. They do not show because they are hidden by other parts of the object. These are not holes like the center of a doughnut; they are cuts in the mesh that make it just a surface in 3D space, not a closed volume.
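A simple way to detect this problem (a sketch, assuming an indexed triangle list) is to check that every edge is shared by exactly two triangles; any edge used only once lies on an open boundary, so the mesh does not enclose a volume:

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Sketch: a closed (watertight) mesh has every edge shared by exactly two
// triangles. An edge used a different number of times marks an open boundary
// or a non-manifold spot.
bool isClosedMesh(const std::vector<uint32_t>& indices) // 3 indices per triangle
{
    std::map<std::pair<uint32_t, uint32_t>, int> edgeCount;
    for (size_t t = 0; t + 2 < indices.size(); t += 3) {
        for (int e = 0; e < 3; ++e) {
            uint32_t a = indices[t + e];
            uint32_t b = indices[t + (e + 1) % 3];
            if (a > b) std::swap(a, b); // ignore edge direction
            ++edgeCount[{a, b}];
        }
    }
    for (const auto& kv : edgeCount)
        if (kv.second != 2) return false;
    return true;
}
```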

Take a look at the following asset:

The stem of this mushroom is not volumetric: it is missing its cap. This does not show because the top of the mushroom is sunk into the stem and the hole is completely hidden from sight. If you tried to voxelize this stem it would give unpredictable results. The hole is a singularity for the voxelization; it may produce all sorts of artifacts.

We have voxelization that can deal with this. If you voxelize the top and bottom together, the algorithm is robust enough to realize the hole is capped by other pieces. But we just got lucky in this case; the same does not apply to every open mesh.

Even if you get meshes that are closed and topologically correct, you are only describing a surface. What happens when you scratch that surface? If I cut the mushroom with a knife, it should reveal some sort of mushy, moist material. Where is this information coming from? Whoever creates this asset has to put it there. The same applies to the bricks, rocks, plants, even the living beings of your virtual world.

I think we have reached a turning point. Virtual worlds will remain static and very expensive to build unless we can make physically correct decisions about the objects in them. Whether to destroy them or to enhance them, we need to know what they are made of, what is inside.

Tuesday, November 25, 2014

Instance Voxelization

We are finishing the voxelization features in Voxel Studio. Here is how it looks:


At only 40x80x40 voxels it is a good reproduction of the Buddha. You can still see the smile and toes.

This computes 12 levels of detail, so when this object is distant we can resort to a much smaller representation. If you know what texture mipmaps are, you will see this is a very similar concept.
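As an analogy-only sketch (not the actual Voxel Studio code), each level of detail can be produced from the previous one much like a mipmap: every 2x2x2 block of voxels collapses into a single voxel of the next level.

```cpp
#include <cstdint>
#include <vector>

// Mipmap-style LOD sketch: halve the resolution by collapsing every 2x2x2
// block into one voxel (here: solid if any of its eight children is solid).
std::vector<uint8_t> downsample(const std::vector<uint8_t>& src, int n) // n = side length
{
    int h = n / 2;
    std::vector<uint8_t> dst(static_cast<size_t>(h) * h * h, 0);
    for (int z = 0; z < h; ++z)
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < h; ++x) {
                uint8_t v = 0;
                for (int dz = 0; dz < 2; ++dz)
                    for (int dy = 0; dy < 2; ++dy)
                        for (int dx = 0; dx < 2; ++dx)
                            v |= src[(static_cast<size_t>(2 * z + dz) * n + (2 * y + dy)) * n + (2 * x + dx)];
                dst[(static_cast<size_t>(z) * h + y) * h + x] = v;
            }
    return dst;
}
```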

The LOD slider produces a very cool effect when you move it quickly: you can watch the model progress from high to low resolution.

And here is the Dragon at 80x80x50 voxels:


Friday, April 18, 2014

Video Update for April 2014

Wondering what happened in the last few months? Here is an update:


There are several things we did that are not covered in this update. You will notice a river in the background, but there is no mention of water.


It is not that we are hydrophobic or that we want to tease you about this feature; we just want to spend more time improving the rendering.

I also go on in this update about how clean and sharp our new tools are. There is indeed a big difference in the new toolset, but there are still serious issues with aliasing when you bring in detail beyond what the voxels can encode. For instance, the line tool can now do much better lines, but we still cannot do a one-voxel-thick line at an arbitrary angle. This is because fixing the aliasing in such a line would require sub-voxel resolution. So it is OK to expect cleaner lines, but they can still break due to aliasing.

Wednesday, December 4, 2013

More statues

Here are some more screenshots showing statues. I did not create this cat statue; it was a model I found free for download on the web. In each case the model was voxelized and "pasted" into the scene.




Bringing in components others have made beats sculpting them yourself. The question is how simple we can make this process for the average player.


Tuesday, May 28, 2013

Video Update for May 2013

This update sums up a few nice additions done over the last month: the ability to use meshes as brushes for creation and how you can go off-grid.


Monday, April 11, 2011

Just happy to be outside

Another screenshot of a familiar model coming out of the voxelization mill:


Spring is here and the Buddha is just happy to be out. As the ancient people said, if you can see the Buddha smile, voxelization is working fine.

Voxelization Stats

Just to complement the earlier post, here are some facts about this voxelization method.

Here you can see the Stanford dragon after voxelization in a 128x128x128 grid:


The original model has 260,000 triangles. The resulting mesh resolution at 128x128x128 is much lower, so a lot of detail is necessarily lost. Still, the voxelization preserves the key features of the model.

This runs in 14 milliseconds on an ATI 4770.

Saturday, April 9, 2011

OpenCL Voxelization

If you are going down the voxel way, odds are at some point you will need to include polygonal assets in your workflow.

For me it happened when I integrated the output of the architecture L-System into the voxel world. Architecture is produced as a collection of polygonal volumes, so I needed to translate those volumes into voxels.

As usual, it had to be fast. It also needed to be accurate: sharp features in the original polygon mesh should appear in the voxelized version.

There are several approaches to voxelization. One is to use the GPU to render slices of the polygonal object and then construct the voxel information out of the slices. Another is to compute a 3D distance field, that is, the distance from each point in space to the closest polygon surface; in GPU Gems 3 there is a nice implementation of this method. And then there is the classical method, which is to shoot rays at the polygonal solid and construct the voxels from the intersections.

I chose to implement a flavor of the classical method. The clean intersection points it produced would help later when reconstructing the surface. I also suspected the ray-tracing nature of this approach would translate well into OpenCL or CUDA.

In particular, my method is inspired by this one (PDF link). The article is very detailed, but if you want to skip reading it, here is the basic idea:

It uses a regular grid where each element in the grid is a voxel. One of the coordinate planes is used to build a Quad-tree. All the triangles in the polygonal mesh are projected onto this plane and inserted into the Quad-tree. I stole an image from the article above that illustrates this phase:


In this image all the triangles are indexed in a Quad-tree aligned to the XZ plane (assuming Y goes up).

The next step is to shoot rays perpendicular to the Quad-tree plane. Each ray is intersected with all the triangles found in its cell of the Quad-tree. This is actually why a Quad-tree is used: it allows testing only the triangles that could potentially intersect the ray.

Each ray may intersect several triangles along its path. For each voxel visited by the ray, the algorithm must find a way to tell whether it is inside or outside the solid. This is actually simple. If we assume the polygonal volume is closed, then it is possible to count how many intersections we passed before getting to the current voxel. If the number is odd, the voxel is inside the volume; if it is even, it is outside.

This other image, also from the same article, shows this principle:



And that's it. Once you have sent enough rays to cover the grid, the voxelized solid is there.
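To make the parity rule concrete, here is a rough sketch of the fill along a single ray (illustrative code only, assuming the intersection distances for that ray have already been sorted):

```cpp
#include <vector>

// Parity fill along one ray. 'sortedHits' holds the distances (in voxel
// units) where the ray crossed a triangle, in ascending order. A voxel is
// solid when an odd number of intersections precede its center.
void fillRayByParity(const std::vector<float>& sortedHits,
                     std::vector<bool>& voxelsAlongRay)
{
    size_t hit = 0;
    bool inside = false;
    for (size_t v = 0; v < voxelsAlongRay.size(); ++v) {
        // Consume every intersection that lies before the center of this voxel.
        while (hit < sortedHits.size() && sortedHits[hit] < v + 0.5f) {
            inside = !inside; // odd count -> inside, even count -> outside
            ++hit;
        }
        voxelsAlongRay[v] = inside;
    }
}
```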

This method, however, could not produce the results I needed. It only tells you if a voxel is solid or not, and I needed a 3D density field. I also wanted it to preserve the sharp corners of the original mesh, which is something this method doesn't consider. And last but not least, this algorithm is not very fast: ray-casting on the CPU quickly becomes expensive as the grid resolution goes up.

This is what I did:

First, instead of having just one Quad-tree, I built three of them, one for each coordinate plane. I realized I had to cast rays along the three main axes instead of just one, as the original method does.

Why? The original method only cares about voxel occupancy. In my case I needed to record the normals at the intersection points. If rays are shot only in the Y direction, any polygons perpendicular to the XZ plane would never be intersected and their normals would be unknown. And it is not only that: for a sharp edge to be reconstructed, you need at least two intersections inside a voxel. To reconstruct a vertex, you need three rays.

The following screenshot shows how sharp features are reconstructed. This is a cube that is not aligned with any of the coordinate axes, yet its sharp features are preserved.



The Quad-trees are built on the CPU as a pre-processing phase. This takes less than 1% of the entire cost of the voxelization, so it is something I won't be moving to the GPU anytime soon.

Next, the OpenCL voxelization kernel runs. Each instance of the kernel processes a single ray. Before invoking the kernel I make sure all the rays are packaged in sequence, regardless of their direction.

The ray has access to the corresponding Quad-tree, so it can iterate through the list of triangles in its cell. For each triangle it tests for an intersection; if there is one, it writes the intersection coordinates to an array.

Then comes a twist. The voxelization method I described before relies on counting how many intersections were found up to the current voxel. For this to work, the list of intersections needs to be sorted. Sorting the intersections on the CPU is no problem, but inside the OpenCL kernel it could get messy very quickly. How do you get around this?

I realized that it was still possible to determine whether a voxel was inside or outside without sorting the intersections. The trick is to "flip" the occupancy state of all voxels preceding the current intersection point. A voxel that should be solid ends up flipped an odd number of times, so it remains set at the end; a voxel that should be empty is flipped an even number of times and finishes empty.
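Here is a rough sketch of that trick (my reading of the idea, not the actual OpenCL kernel). Note that the intersections can be processed in any order, which is what makes it friendly to a GPU kernel:

```cpp
#include <vector>

// Order-independent fill along one ray: each intersection flips every voxel
// that precedes it. A voxel lying between the entry and exit crossings of a
// closed solid is flipped an odd number of times and stays set; every other
// voxel is flipped an even number of times and ends up empty.
void fillRayByFlipping(const std::vector<float>& unsortedHits,
                       std::vector<bool>& voxelsAlongRay)
{
    for (float hit : unsortedHits) {
        for (size_t v = 0; v < voxelsAlongRay.size(); ++v) {
            if (v + 0.5f < hit)                      // voxel center precedes the hit
                voxelsAlongRay[v] = !voxelsAlongRay[v];
            else
                break;                               // later voxels are past the hit
        }
    }
}
```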

You can see this in the following series of images. The intersection points are marked in red, and as you can see they arrive in any order:





I'm satisfied with the speed of this method, but there are still some optimizations I could do. If some restrictions were imposed on the polygonal solids, it would be possible to insert the triangles into the Quad-tree already sorted. This means the OpenCL rasterization would need only one pass per ray; right now each intersection requires its own pass.

But I'm still on the lookout for a faster voxelization method. If you know of something better, please let me know.

Saturday, February 5, 2011

The Missionaries Arrived

I have done my first render of some architecture. Here you can see the results:


It is some kind of Romanesque church. It is all made of voxels, so it is pretty much the same as the terrain and the trees I have shown before.

This church is entirely procedural. It was created by an L-System based on a series of grammar rules. The rules can be evaluated in many different ways, producing churches with many different layouts and sizes. Many of the rules can be used for other things than churches. For instance, the same towers could appear in a castle as well.

Artistic input is still required, but at a very generic level. The artist creates the basic building blocks like doors, windows and ornaments. Then they are procedurally recombined by the grammar. This way a few assets can spawn a large number of very different looking buildings.

The base assets are in polygonal form, which makes them easier for the artist to create. For this reason the architecture L-System outputs polygonal meshes. The meshes are then voxelized and blended with the rest of the terrain so they benefit from all the advantages of the voxel engine.

I still have a very long way to go regarding architecture. First I need to improve the grammars so I can also represent interior spaces. The interior of this church is not very good, for instance; it has practically nothing inside. I also need to port the voxelization to OpenCL; I'm still running a CPU-bound prototype.

And one building is just the beginning. I want a complex network of interconnected cities. These doodles illustrate what I'm going for:




At this point I feel like the man who was carrying a brick so other people could imagine how his house was. But hopefully you will get the idea.

I will post soon about the L-System and architecture grammars. It was very interesting for me and it was one of the tasks in this project I enjoyed the most.