While we are on the subject of big data, I wanted to show how much detail these algorithms can generate. It should give you an idea of the amount of information that needs to be delivered to clients, one way or another.
Before anything else, a disclaimer: the following images (and video) are taken from a mesh preview explorer program I have built. The quality of the texturing is very poor; it is textured in real time using shaders that take a lot of shortcuts, especially in the noise functions I use. The lighting is also flat: no shadows, no radiosity. But it is good enough to illustrate my point about these datasets.
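To give a sense of the kind of shortcut I mean, here is a minimal sketch of the usual cheap approach: value noise built from an integer hash, summed into a few fractal octaves. This is a generic, hypothetical illustration in Python, not the shader code the preview program actually uses.

```python
import math

def hash01(ix, iy):
    # Cheap integer hash mapped to [0, 1); a stand-in for a proper
    # noise lattice. The constants are arbitrary mixing primes.
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h & 0xFFFF) / 0xFFFF

def smoothstep(t):
    # Fade curve so the interpolation has zero slope at the lattice points.
    return t * t * (3.0 - 2.0 * t)

def value_noise(x, y):
    # Bilinear interpolation of hashed lattice values with a smooth fade.
    ix, iy = math.floor(x), math.floor(y)
    u, v = smoothstep(x - ix), smoothstep(y - iy)
    a = hash01(ix, iy)
    b = hash01(ix + 1, iy)
    c = hash01(ix, iy + 1)
    d = hash01(ix + 1, iy + 1)
    top = a + (b - a) * u
    bot = c + (d - c) * u
    return top + (bot - top) * v

def fbm(x, y, octaves=4):
    # Fractal sum: each octave doubles frequency and halves amplitude,
    # then the total is normalized back to [0, 1].
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        norm += amp
        amp *= 0.5
        freq *= 2.0
    return total / norm
```

A real terrain pipeline would use gradient noise with analytic derivatives, but even this crude version shows why it is fast: each sample is a handful of hashes and lerps.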
Take a look at the following screenshot:
Nothing fancy here, just some rock outcrops. The next two screenshots show how many polygons are generated for this scene:
This is how meshes come out of the dual-contouring stage. If you look carefully you will see some mesh optimization: the dual contouring is adaptive, meaning it can produce larger polygons where there is little detail. The problem is, with this type of procedural function there is always detail. That is actually good; you want to keep all the little cracks and waves in the surface. They add character to the models.
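The adaptive part boils down to a flatness test: a block of cells can be merged into one larger polygon only if the surface inside it is nearly planar. Here is a hedged sketch of one such criterion, comparing the angular spread of the sampled surface normals against a tolerance. The metric and the threshold are my own illustration, not the exact test the engine uses.

```python
import math

def normal_spread(normals):
    # Maximum angular deviation (radians) of a set of unit normals
    # from their normalized mean direction.
    mx = sum(n[0] for n in normals)
    my = sum(n[1] for n in normals)
    mz = sum(n[2] for n in normals)
    length = math.sqrt(mx * mx + my * my + mz * mz)
    if length == 0.0:
        return math.pi  # normals cancel out: maximally curved
    mean = (mx / length, my / length, mz / length)
    worst = 1.0
    for n in normals:
        dot = n[0] * mean[0] + n[1] * mean[1] + n[2] * mean[2]
        worst = min(worst, max(-1.0, min(1.0, dot)))
    return math.acos(worst)

def can_merge(normals, tolerance=0.05):
    # Merge the cells into a larger polygon only if the surface is
    # flat to within `tolerance` radians (hypothetical threshold).
    return normal_spread(normals) <= tolerance
```

On noisy procedural rock the spread almost never drops below the tolerance, which is exactly why the meshes stay dense: the detail really is there.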
Here is another example of the same:
These scenes are about 60 x 60 x 60 meters. Now extrapolate this amount of detail to a 12 km by 12 km island. Are you sweating yet?
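The extrapolation is easy to make concrete with back-of-the-envelope arithmetic. The per-cell triangle count below is a made-up placeholder (the article does not give the exact figure), and I only tile the surface in 2D, ignoring vertical stacking of cells, so treat the result as an order-of-magnitude sketch.

```python
ISLAND_SIDE_M = 12_000    # 12 km island side
CELL_SIDE_M = 60          # the 60 x 60 x 60 m preview region
TRIS_PER_CELL = 500_000   # hypothetical count for a dense outcrop scene

cells_per_side = ISLAND_SIDE_M // CELL_SIDE_M   # 200 cells per side
surface_cells = cells_per_side ** 2             # 40,000 cells over the island
total_tris = surface_cells * TRIS_PER_CELL      # tens of billions of triangles

print(cells_per_side, surface_cells, total_tris)
```

Even if the real per-cell count were ten times smaller, the total would still be far too large to ship to clients as raw geometry, which is the whole point.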
I leave you with a screen capture of the mesh preview program flying over a small portion of this island. It is the same 60 m x 60 m x 60 m view moving around; detail outside this view is clipped. The meshes loading here are a bit more optimized, but they still retain a lot of detail. I apologize for any motion sickness this may cause; I did not spend any time smoothing the camera movement over the terrain.