The oldest optimization in real-time graphics is to avoid rendering what you don't see. When you explain this to people who are not in the field, they usually shrug and say something along the lines of "Duh, Sherlock".
It is easier said than done. Well, actually a big part of it is quite easy. The first trick you see in all graphics books is to render only what's inside the field of view. While the scene entirely surrounds the camera, the camera only captures a narrower slice of it. Anything outside this slice, which is usually 90 degrees along the horizontal, does not need to be rendered. For a mostly horizontal scene, only 90 degrees out of 360 need to be rendered. This simple optimization cuts scene complexity down to a quarter. Another way to put it is, now you can have four times more detail without a performance drop. This technique is called Frustum Culling.
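The horizontal version of this test boils down to one angle comparison. Here is a minimal sketch, assuming a top-down 2D view with a normalized camera direction; the function name and signature are my own, not from any engine:

```python
import math

def in_field_of_view(cam_pos, cam_forward, point, fov_degrees=90.0):
    """Return True if 'point' falls inside the camera's horizontal FOV.

    cam_forward is assumed to be a normalized 2D direction; the scene is
    treated as mostly horizontal, as in the text.
    """
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return True  # the camera's own position is trivially visible
    # Cosine of the angle between the forward vector and the point direction
    cos_angle = (dx * cam_forward[0] + dy * cam_forward[1]) / dist
    half_fov = math.radians(fov_degrees) / 2.0
    return cos_angle >= math.cos(half_fov)
```

With the camera at the origin looking down the positive X axis and a 90 degree FOV, a point straight ahead passes the test while a point straight to the side or behind fails it.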
Frustum Culling is a no-brainer for small objects that are scattered around the scene. As scene complexity rises you must batch as many objects together as possible. The need for aggressive batching has apparently relaxed a bit recently, but there is no question that batching is still necessary. This works against Frustum Culling. What if an entire batch is only partially in the field of view? You would still need to render all of it. So, the more you batch, the more you can lose from the frustum culling optimization... unless your batches are somehow compatible with the scene slices you need to render. More on that later.
Even if you were able to perfectly cull all the information outside the field of view, there are usually a lot of polygons rendered in a scene that never make it to the screen as pixels. This is because they end up hidden behind a closer polygon.
Imagine a huge mountain with a valley behind it. If the mountain was not there you would see the valley. With the mountain in front of you, all the effort spent rendering the valley goes to waste. If we could somehow detect that the valley can be skipped, we would save a lot of rendering. We could have a much nicer mountain.
This technique is called Occlusion Culling. It is in principle a difficult problem, as the final rendering is the ultimate test of what is really visible and what is not. Obviously some sort of approximation or model has to be used. A simpler model of the scene makes it possible to estimate which portions of the final rendering will end up hidden, so it is safe to skip them.
And then again, even if you had the occlusion problem perfectly solved, you would still have the issue with batching. It is not that different from frustum culling. Maybe only a small sliver of a large batch is visible; still, the entire batch would have to be rendered... unless your batches are somehow compatible with the scene volumes being occluded.
I wondered whether there might be a single approach that would help with all these issues at once. Yes, some sort of silver bullet. I set out to look for one, and did find something. Well, maybe it is not a silver bullet, but it is quite shiny.
It is geometry clipmaps. I have covered them many times in the past. The idea is somewhat simple: if your world can be represented as an octree, you can compute any scene from this world as a series of concentric square rings. Each ring is made of cubic cells. The size of these cells grows exponentially as the rings get farther from the viewer.
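A 2D sketch of this layout, under my own assumptions (cell size doubling per ring level, rings enumerated by Chebyshev distance from the viewer's cell; none of these names come from an actual clipmap implementation):

```python
def clipmap_cell_size(level, base_size=1.0):
    """Cell size doubles with each ring level away from the viewer."""
    return base_size * (2 ** level)

def ring_cells(viewer, level, ring_radius=2, base_size=1.0):
    """Enumerate the cells of one square ring centered on the viewer.

    Returns (x, y, size) tuples for the cells whose Chebyshev distance
    from the viewer's cell equals ring_radius at this level.
    """
    size = clipmap_cell_size(level, base_size)
    cx = int(viewer[0] // size)
    cy = int(viewer[1] // size)
    cells = []
    for i in range(-ring_radius, ring_radius + 1):
        for j in range(-ring_radius, ring_radius + 1):
            if max(abs(i), abs(j)) == ring_radius:  # only the ring's border
                cells.append(((cx + i) * size, (cy + j) * size, size))
    return cells
```

For a ring radius of 2 each level contributes the 16 border cells of a 5x5 block, and the cell size at level 3 is eight times the base size.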
The image above shows a projection of a clipmap in 2D.
You can see right away how this helps with batching and frustum culling. Each cell is an individual batch, which can contain a few thousand polygons. It is quite simple to determine whether a cell is inside the field of view. Also, cells can be culled quite efficiently once they leave the field of view, as their size is constrained by their very definition.
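Since cells are axis-aligned, the per-cell test can be made conservative by checking all corners against each frustum edge. A 2D sketch under my own assumptions (a view wedge no wider than 180 degrees, square cells; a real implementation would test 3D cube corners against frustum planes):

```python
import math

def cell_is_culled(cam_pos, cam_forward, cell_min, size, fov_degrees=90.0):
    """Conservative 2D frustum test for an axis-aligned square cell.

    Returns True only when the cell is provably outside the view wedge:
    all four corners fall on the outer side of the same frustum edge.
    """
    half = math.radians(fov_degrees) / 2.0
    fx, fy = cam_forward
    corners = [(cell_min[0] + dx * size - cam_pos[0],
                cell_min[1] + dy * size - cam_pos[1])
               for dx in (0, 1) for dy in (0, 1)]
    for s in (half, -half):
        # Frustum edge direction: forward rotated by +/- half the FOV
        ex = fx * math.cos(s) - fy * math.sin(s)
        ey = fx * math.sin(s) + fy * math.cos(s)
        # Inward-pointing normal of this edge line
        if s > 0:
            nx, ny = ey, -ex
        else:
            nx, ny = -ey, ex
        if all(cx * nx + cy * ny < 0 for cx, cy in corners):
            return True  # every corner is outside this edge: cull the cell
    return False
```

A cell behind the camera or far off to the side is culled; a cell straight ahead survives. Because the test only ever rejects cells that are entirely behind one edge, it can never discard a partially visible batch.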
The clipmap turned out to be very friendly for occlusion testing as well. Imagine you could identify some cells as occluders along one specific direction of the clipmap. It then becomes fairly simple to test whether more distant cells are occluded or not.
The following image shows how this principle works:
Here four cells have been identified as occluders. They show as vertical red lines. Thanks to them, we can safely assume all the cells painted in dark red can be discarded. These batches are never sent to the graphics card.
In my case I am performing these tests using software rasterization. It is very fast because the actual cell geometry is not rendered, only cell-aligned planes. So far a 64x64 depth buffer provides sufficient resolution.
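The mechanics of such a test can be sketched as follows. This is not the actual implementation described above, just a minimal stand-in: it assumes cells have already been projected to screen-space pixel rectangles, and a rectangle at constant depth stands in for a rasterized cell-aligned plane.

```python
RES = 64  # the text mentions a 64x64 depth buffer is enough

def make_depth_buffer():
    """Fresh buffer where every pixel is infinitely far away."""
    return [[float('inf')] * RES for _ in range(RES)]

def rasterize_occluder(buf, rect, depth):
    """Write an occluder's depth into the buffer. 'rect' is (x0, y0, x1, y1)
    in pixel coordinates; 'depth' is the occluder plane's distance."""
    x0, y0, x1, y1 = rect
    for y in range(max(0, y0), min(RES, y1)):
        for x in range(max(0, x0), min(RES, x1)):
            if depth < buf[y][x]:
                buf[y][x] = depth

def is_occluded(buf, rect, depth):
    """Conservative test: the cell is occluded only if every pixel it
    covers already holds a strictly closer depth."""
    x0, y0, x1, y1 = rect
    for y in range(max(0, y0), min(RES, y1)):
        for x in range(max(0, x0), min(RES, x1)):
            if buf[y][x] >= depth:
                return False  # some pixel is not behind a closer occluder
    return True
```

A distant cell whose screen rectangle sits entirely inside a closer occluder's rectangle is reported occluded; a cell that peeks out past the occluder, or sits in front of it, is not.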