Traditional painters were taught to just "paint the light", centuries before 3D graphics were a thing. They understood how light bounced off surfaces, picking up color on its way. They would even account for how the light changed as it traveled through the air.
Going into realtime 3D graphics, we had to forget all of this. We could not just draw the light, since it was computationally too expensive. We had to concentrate on rendering subjects made of surfaces and hack the illumination any way we could. Or we could bake the lighting, which looks pretty but leaves us with a static environment. A house entirely closed should be dark; opening a single window could make quite a difference, and what happens if you make a huge hole in the roof?
For sandbox games this is a problem. The game maker cannot know how deep someone will dig or if they will build a bonfire somewhere inside a building.
There are some good solutions out there for realtime global illumination, but I kept looking for something simpler that would still do the trick. In this post I will describe a method that I consider good enough. I am not sure if this has been done before; please leave me a link if you see that is the case.
This method was somewhat of an accident. While working on occlusion, I saw that determining what is visible from any point of view is a problem very similar to finding out how light moves. I will try to explain it using the analogy of a circuit.
Imagine there is an invisible circuit that connects every point in space to its neighboring points. For each point we also need to know a few physical properties, like how transparent it is and how it changes the light's direction and color.
Why use something like that? In our case it was something we were getting almost for free from the voxel data. We saw we could not use every voxel, since that resulted in very large circuits, but the good news was that we could simplify the circuit pretty much the same way you collapse nodes in an octree. In fact, the circuit is just a dual structure superimposed on the octree.
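To make the idea concrete, here is a rough sketch of what the circuit could look like as a data structure. I am inventing all the names and fields here for illustration; the actual implementation may well differ:

```cpp
#include <cstddef>
#include <vector>

// A tiny vector type with just the operations the sketch needs.
struct Vec3 {
    float x = 0, y = 0, z = 0;
};
inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
inline Vec3 operator*(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }

// One node per cell of the simplified octree.
struct LightNode {
    Vec3  position;      // center of the octree cell this node stands for
    float transparency;  // 0 = fully solid, 1 = pure air
    Vec3  albedo;        // color picked up by light bouncing off this cell
    Vec3  light;         // RGB energy currently stored at this node
};

// One link per pair of neighboring cells; the links are the dual structure
// superimposed on the octree.
struct LightLink {
    int  from, to;       // indices into LightCircuit::nodes
    Vec3 direction;      // unit vector from one cell center to its neighbor
};

struct LightCircuit {
    std::vector<LightNode> nodes;
    std::vector<LightLink> links;
};
```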
Consider the following scene:
The grey areas represent solid matter, white is air, and the black lines are an octree (a quadtree in this 2D illustration) that covers the scene at adaptive resolution.
The light circuit for this scene would be something like:
Red arrows are connections between points along which light can travel freely.
Once you have this, you can feed light into any set of points and run the node-to-node light transfer simulation. Each link conducts light based on its own direction and the light's direction. Each link also has the potential to change the light's properties: it could make the light bounce, change its color, or absorb it completely.
It turns out that this converges after only a few iterations. Since the octree has to be updated only when the scene changes, you can run the simulation many times over the same octree, for instance when the sun moves or a dragon breathes fire.
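As a minimal sketch, building on the structures above, one simulation pass could look like the code below. The transfer rule is heavily simplified; a real one would also weigh each link's direction against the light's direction, which I leave out here:

```cpp
// One Jacobi-style pass: every link gathers light from its source node.
// A transparent destination lets the light through; a solid one sends it
// back tinted by its albedo.
void propagate(LightCircuit& circuit) {
    std::vector<Vec3> gathered(circuit.nodes.size());
    for (const LightLink& link : circuit.links) {
        const Vec3& carried = circuit.nodes[link.from].light;
        const LightNode& dst = circuit.nodes[link.to];
        // Portion that continues through the destination cell.
        gathered[link.to] = gathered[link.to] + carried * dst.transparency;
        // Portion bounced back off a solid cell, picking up its color.
        gathered[link.from] = gathered[link.from]
                            + (carried * (1.0f - dst.transparency)) * dst.albedo;
    }
    for (std::size_t i = 0; i < circuit.nodes.size(); ++i)
        circuit.nodes[i].light = gathered[i];
}

// A handful of passes is enough for the solution to settle.
void simulate(LightCircuit& circuit, int iterations = 8) {
    for (int i = 0; i < iterations; ++i)
        propagate(circuit);
}
```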
To add sunlight we can seed the top nodes like this:
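In code, still using the hypothetical names from the sketches above, seeding could be as simple as writing the sun's energy into the nodes along the open top of the scene:

```cpp
// Hypothetical sunlight seeding. topNodes would come from walking the
// octree and collecting the cells with nothing above them.
void seedSunlight(LightCircuit& circuit,
                  const std::vector<int>& topNodes,
                  const Vec3& sunColor) {
    for (int i : topNodes)
        circuit.nodes[i].light = sunColor;
}
```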
Here is how that looks after the simulation runs. This is a scene of a gorge in some sort of canyon; the sunlight has only a narrow entrance:
The light nodes are rendered as two planes showing the light color and intensity.
Here are other examples of feeding just sunlight to a complex scene. Yellow shows the energy picked up from the sunlight.
Taking light bounces into account is then easy. Unlike the sunlight, the bounced light is not seeded from outside; it is produced by the simulation itself.
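In the propagation sketch above, a bounce is just the reflected term on a link that runs into a solid cell, so setting up bounced light amounts to giving the solid cells an albedo. A hypothetical fragment:

```cpp
// Give every solid cell a bounce color; the propagation pass does the rest.
for (LightNode& node : circuit.nodes)
    if (node.transparency < 1.0f)             // anything that is not pure air
        node.albedo = Vec3{1.0f, 0.0f, 0.0f}; // e.g. bounce pure red
```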
In the following image you can see the results of multiple light bounces. We made the sunlight pure yellow and made all the surfaces bounce pure red:
This is still a work in progress, but I like the fact that it takes a fraction of a second to compute a full light solution, regardless of how complex the scene is. Soon we will be testing this in a forest setting. I miss the green light coming through the canopies from those early radiosity days.