I knew I had to pick a voxel resolution that was small enough to capture the detail I wanted, but still coarse enough that the data volumes remained manageable. I chose one tenth of a meter, which is close to four inches. That was good enough for rocks, boulders, even architectural elements like stairs and columns.
It was actually possible to have details smaller than the voxel resolution, like a tree branch with a diameter of one inch. The catch was that the next non-empty element, the next branch for instance, would necessarily be separated by twice the voxel resolution. So I could have two one-inch-diameter sticks, but the closest they could ever be was eight inches. You can look into the Sampling Theorem if you want to understand where those limits come from.
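To spell out the arithmetic: the sampling theorem says a grid with a given sample spacing can only represent features that are at least two samples apart. With a spacing of 0.1 meters, the minimum separation between distinct features is 2 x 0.1 m = 0.2 m, which is just under eight inches.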
That resolution was good enough for the basic geometry of the world. Smaller details would come from traditional texturing: normal and parallax mapping, and even instancing for tree leaves and grass.
But voxels are boxy by nature. A naive rendering of my voxels would result in conspicuous four-inch cubes. Somehow the boxes needed to become smooth and shape the world in a realistic way. The boxes, as seen below, needed to turn into the sphere:
This is a very old problem. Medical scans produce 3D data which is very similar to voxels. There are several algorithms that take this 3D data and render it as a series of triangles. The general problem is referred to as isosurface extraction, which means obtaining a surface out of a three-dimensional field.
The best known of these algorithms is Marching Cubes. Similar algorithms include Marching Tetrahedra and Dual Contouring. The idea behind them is simple, and there are many implementations available. They look at the corners of each voxel and, depending on which corners are solid, work out where the resulting surface crosses the voxel. The surface fragment is output and the algorithm proceeds to the next voxel.
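To make the corner-classification idea concrete, here is a minimal Python sketch of Marching Squares, the 2D analogue of Marching Cubes. The case table is just my own illustration of the technique, not code from any particular implementation:

```python
# Minimal Marching Squares sketch: classify the four corners of each
# cell as solid or empty, build a 4-bit case index, and emit line
# segments crossing the cell at edge midpoints.

# Edge midpoints of a unit cell: bottom, right, top, left.
EDGES = {'b': (0.5, 0.0), 'r': (1.0, 0.5), 't': (0.5, 1.0), 'l': (0.0, 0.5)}

# Which edges the surface crosses for each of the 16 corner cases.
# Corner bits: 1=bottom-left, 2=bottom-right, 4=top-right, 8=top-left.
CASES = {
    0: [], 1: [('l', 'b')], 2: [('b', 'r')], 3: [('l', 'r')],
    4: [('r', 't')], 5: [('l', 't'), ('b', 'r')], 6: [('b', 't')],
    7: [('l', 't')], 8: [('t', 'l')], 9: [('t', 'b')],
    10: [('t', 'r'), ('l', 'b')], 11: [('t', 'r')],
    12: [('r', 'l')], 13: [('r', 'b')], 14: [('b', 'l')], 15: [],
}

def march_squares(grid):
    """grid[y][x] is 1 for solid, 0 for empty. Yields line segments."""
    for y in range(len(grid) - 1):
        for x in range(len(grid[0]) - 1):
            case = (grid[y][x] | grid[y][x + 1] << 1 |
                    grid[y + 1][x + 1] << 2 | grid[y + 1][x] << 3)
            for e0, e1 in CASES[case]:
                (x0, y0), (x1, y1) = EDGES[e0], EDGES[e1]
                yield ((x + x0, y + y0), (x + x1, y + y1))
```

Marching Cubes works the same way, except each cell has eight corners, the lookup table has 256 entries, and the output is triangles instead of line segments.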
My problem with these algorithms was that in order to guess the right position of the surface, you not only needed to know whether a voxel was solid or not, you also needed to know its "density". The virtual world was not a clean cut between black and white anymore; it was all shades of gray. In that sense, at a given point you might have 30% air and 70% gravel.
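This is what the density buys you: with a scalar value at each corner, the crossing point along a voxel edge can be placed by linear interpolation instead of snapping to the edge midpoint. A quick sketch, where the 0.5 iso-level is an arbitrary choice for illustration:

```python
def edge_crossing(p0, p1, d0, d1, iso=0.5):
    """Position where the surface crosses the edge p0-p1, given
    densities d0, d1 at the endpoints (0.0 = air, 1.0 = solid).
    Assumes the edge actually straddles the iso-level."""
    t = (iso - d0) / (d1 - d0)  # fraction of the way from p0 to p1
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Example: densities 0.2 and 0.9 put the crossing about 43% of the
# way along the edge, instead of exactly halfway.
point = edge_crossing((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), 0.2, 0.9)
```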
Dealing with densities is not trivial. It is like comparing a raster image with a vector image. With a raster image, like a BMP, everything is simple. Each pixel has one distinct color. If you want to modify the image, you just change the color value of a pixel and that's it. With a vector image, you never have individual points, just a series of mathematical functions that evaluate to shapes. If you want to modify the image, you either alter the parameters of the existing shapes or add new ones.
If I wanted to use any of those algorithms, I would have to represent everything in my virtual world as a mathematical function. Or at least, I would need to provide a diffuse boundary to all the voxel construction tools I might devise. I didn't like that. I hoped for an approach in which it was really simple to modify the voxels.
So I found a simple way to come up with the surfaces. I realized that if each voxel corner was moved to the average position of its neighboring corners, the voxel grid would soon converge into a smooth approximation of the surface. It took only seven iterations to produce the sphere in the earlier image.
This method was even able to produce sharp boundaries. The contribution of neighboring voxels to the average position depended on material type. As the grid relaxed further on each iteration, tension lines would naturally appear at the boundaries between different materials.
The following image shows the results of this method:
The material boundaries appear as red voxels in the cross-section on the left. As you can see, they translate into sharp features once the surface is generated.
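Here is a rough Python sketch of how such a relaxation could look. The material-weighting function is my guess at the spirit of the method, not the engine's actual code:

```python
import numpy as np

def relax(points, neighbors, materials, weight, iterations=7):
    """Repeatedly move each vertex toward the weighted average of its
    neighbors. points: (N, 3) array of voxel-corner positions near the
    surface; neighbors[i]: indices of corners adjacent to corner i;
    materials[i]: material id at corner i; weight(a, b): how strongly
    material b pulls on material a. With weight = lambda a, b: 1.0 this
    is plain Laplacian smoothing; down-weighting cross-material pulls
    leaves tension lines at the boundaries between materials."""
    points = np.asarray(points, dtype=float).copy()
    for _ in range(iterations):
        smoothed = points.copy()
        for i, adj in enumerate(neighbors):
            w = np.array([weight(materials[i], materials[j]) for j in adj])
            if w.sum() > 0:
                smoothed[i] = (w[:, None] * points[adj]).sum(axis=0) / w.sum()
        points = smoothed
    return points
```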
Since it was so easy to work with the voxels, I soon found myself adding noise and then applying convolution filters to them. It was a very simple way to get interesting results. I even implemented an L-system for some complex volumes, like the shell below:
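To show how cheap those manipulations are on a plain occupancy grid, here is a small sketch; the flip probability and the box filter are arbitrary stand-ins for whatever noise and convolution kernels one might actually use:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roughen(solid, flip_chance=0.05, blur_size=3, seed=0):
    """Randomly flip a small fraction of voxels in a boolean occupancy
    grid, then blur and re-threshold to pull the result back to a clean
    solid/empty field. Both steps are trivial on a plain 3D array."""
    rng = np.random.default_rng(seed)
    noisy = np.where(rng.random(solid.shape) < flip_chance, ~solid, solid)
    return uniform_filter(noisy.astype(float), size=blur_size) > 0.5
```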
But it did not take long before I discovered the relaxation approach would not take me as far as I wanted to go. I started turning triangle-based geometry into voxels, and the results were never good. I was creating models in traditional 3D software like 3ds Max and then importing them as voxels. I was getting a lot of noise I could not deal with. It seemed that black and white would not cut it, that I really needed to consider voxel density.
I dumped all the relaxation code and started looking at the voxels from a fresh perspective. I was back to square one.
This is amazing stuff. Do you have a reference for the relaxation technique you're using?
I developed this method on my own, then I found something very similar here: http://www.merl.com/papers/docs/TR99-24.pdf
But note that all my recent screenshots do not use this method. I moved to an adaptive dual contouring method.
I see you used some sort of mesh minimization technique to reduce the polygon count. Could you explain the algorithm for that or give a link to some document which describes it?
Thanks,
- Kyle
http://procworld.blogspot.ca/2011/10/playing-dice-with-mesh-simplification.html