Very often people ask me what a voxel is. I struggle to explain this in simple terms, even to savvy professionals from other fields of IT. On most occasions, I just say a voxel is like a pixel, but in 3D, and move on to refresh my drink or hide in a lavatory. I can't shake the feeling that I have avoided the question.
To help understand why voxels matter today, we need a different analogy. If I had enough time, I would say voxels are like triangles.
A triangle defines a closed 2D space. Imagine we want to do something to this closed space, for instance, paint it red. We could do this by drawing one long line and making the right turns until we have our triangle:
This is how most triangle rasterization worked in the early days. Even after many clever optimizations, it remained awfully slow. It was an inherently serial solution. The value we paint for one point depends on computations we made for earlier points. This would never scale up to hundreds of millions of triangles per second, even with the transistor densities we have today.
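To make the serial nature concrete, here is a rough Python sketch of that early style of rasterization. This is my own illustration (an even-odd scanline fill; the names are made up), not any particular renderer's code. Notice how painting a pixel depends on the edge crossings collected earlier in the same pass:

```python
def rasterize_serial(tri, width, height):
    """Even-odd scanline fill. Each row walks its edge crossings in order,
    so whether a pixel gets painted depends on work done for earlier pixels."""
    grid = [[0] * width for _ in range(height)]
    edges = [(tri[i], tri[(i + 1) % 3]) for i in range(3)]
    for y in range(height):
        yc = y + 0.5                      # sample at the pixel center
        xs = []                           # where the edges cross this scanline
        for (x0, y0), (x1, y1) in edges:
            if (y0 <= yc) != (y1 <= yc):  # the edge spans this row
                xs.append(x0 + (yc - y0) / (y1 - y0) * (x1 - x0))
        xs.sort()
        for i in range(0, len(xs) - 1, 2):
            for x in range(round(xs[i]), round(xs[i + 1])):
                if 0 <= x < width:
                    grid[y][x] = 1        # "paint it red"
    return grid

red = rasterize_serial(((5.0, 5.0), (60.0, 20.0), (20.0, 55.0)), 64, 64)
```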
GPUs changed that. They render triangles fast because the problem is solved in parallel. Remember how a triangle is a closed 2D space? That means there is an "inside" versus an "outside", and the GPU can tell them apart with a simple test. If a point is inside, it gets painted red; it does not matter whether any previous point was inside. Since there are no dependencies between points, the GPU is free to look at many points at the same time.
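Here is a minimal Python sketch of that test (my own illustration of the standard edge-function approach, not any particular GPU's code). Each point is checked entirely on its own, with no state carried over from its neighbors:

```python
def edge(a, b, p):
    # signed area of triangle (a, b, p): the sign says which side of a->b we are on
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside(tri, p):
    a, b, c = tri
    s = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    # inside means "on the same side of all three edges", for either winding
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)

# No dependencies between points: every pixel is an independent test,
# so thousands of GPU threads can each take one and run at the same time.
tri = ((5, 5), (60, 20), (20, 55))
red = [(x, y) for y in range(64) for x in range(64) if inside(tri, (x + 0.5, y + 0.5))]
```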
This amazing property of triangles, where they can tell inside from outside without any additional context, enabled the GPU age.
Just like a triangle defines a closed 2D space, a voxel defines a closed 3D space. And just like a triangle, a voxel can have any properties you want: a color, a material, or even a surface parametrization. Voxels can use UV maps and textures in the same way triangles do. In the next image, you can see a voxel rock that looks indistinguishable from your typical low-poly textured mesh:
We tend to think of voxels as cubes, and most of the time this is correct. A voxel cube is equivalent to a surface quad. Just as a quad can be split into two triangles, a voxel cube can be split into five tetrahedral voxels.
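If you are curious how the five-way split works, here is a small Python sketch (my own construction; the vertex numbering is an arbitrary choice) that lists the five tetrahedra and checks that their volumes add up to the whole cube:

```python
# Unit cube corners: vertex i sits at (i & 1, (i >> 1) & 1, (i >> 2) & 1).
CUBE = [(i & 1, i >> 1 & 1, i >> 2 & 1) for i in range(8)]

# Four corner tetrahedra plus one central tetrahedron (indices into CUBE).
TETS = [(0, 1, 2, 4), (3, 1, 2, 7), (5, 1, 4, 7), (6, 2, 4, 7), (1, 2, 4, 7)]

def tet_volume(a, b, c, d):
    # |det[b-a, c-a, d-a]| / 6
    u, v, w = (tuple(q[i] - a[i] for i in range(3)) for q in (b, c, d))
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6

total = sum(tet_volume(*(CUBE[i] for i in tet)) for tet in TETS)
assert abs(total - 1.0) < 1e-9   # the five pieces tile the cube exactly
```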
And just like triangles did for 2D problems, voxels enable massively parallel processing for problems in 3D. I think this is a big deal.
But what are these problems that you need to solve in 3D?
Rendering is not one of them, contrary to what intuition may tell you. Rendering is about projecting the data into 2D so humans can understand it. It will always be solved more efficiently using 2D elements like triangles and surface processors like GPUs. While "seeing" is very important to humans, it does not really mean anything to a computer. Computers have no problem working in higher dimensions.
Pretty much everything else is a problem in 3D. Here is a basic one: imagine you needed to compute the volume of a highly irregular 3D object the size of a small town. If you are using voxel data, you can have hundreds of nodes in a network each compute the volume of a small section of the object and then add the results to get the final volume. You would get the answer in a fraction of the time. This is only possible because voxels, like triangles did for GPUs, allow you to answer the inside/outside question locally. That's the voxel Eureka moment.
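To show what I mean, here is a toy Python sketch (my own, with a sphere standing in for the irregular object and a process pool standing in for the network of nodes). Each worker only needs its own slab of space and the local inside/outside test; the partial volumes are simply added at the end:

```python
from math import pi
from multiprocessing import Pool

R = 50  # sphere radius, in voxels

def inside(x, y, z):
    # the local inside/outside test: no context beyond the point itself
    return x * x + y * y + z * z <= R * R

def slab_volume(z0):
    # count the voxels of one 10-voxel-thick slab; any node could do this alone
    count = 0
    for z in range(z0, z0 + 10):
        for y in range(-R, R):
            for x in range(-R, R):
                if inside(x + 0.5, y + 0.5, z + 0.5):
                    count += 1
    return count

if __name__ == "__main__":
    slabs = range(-R, R, 10)            # one job per slab of space
    with Pool() as pool:                # stand-in for hundreds of network nodes
        total = sum(pool.map(slab_volume, slabs))
    print(total, "voxels, vs the exact", 4 / 3 * pi * R ** 3)
```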
This enables many Holy Grail solutions which, for brevity, I won't enumerate, but which I will be happy to discuss if you drop me a comment below.
Today, most of the entertainment and geospatial industries still use serial, single-core approaches to solve their 3D content problems.
As the data grows, and as more parties are required to produce and consume it, the shift to parallel computing will necessarily happen. And we can be certain voxels will be at the heart of this next age, just like triangles were at the center of the GPU revolution.