At this point I was considering different methods to stream this huge world to users. I was looking for an approach that offered strong compression, but could also be selective about what information to send. There is no point in sending very compressed information if it is not needed in the first place.
Polygons, actually triangles, are a very efficient way to compress 3D models. You can think of it as a form of lossy compression: a very irregular surface can be approximated by a series of triangles, and the more triangles you use, the closer the resulting surface gets to the original.
The beauty of this is that you can do it progressively: you start with a coarse approximation and then, if needed, you can increase the number of triangles until the approximation is satisfactory.
Progressive meshes are quite old. The classic approach is to simplify a mesh and record all the simplification operations. If you reverse this process, that is, start from the simplified mesh and play back the recording, you are effectively adding detail back.
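A minimal sketch of that record/playback idea (the types and the stack representation are illustrative, not taken from any specific implementation):

```cpp
// Hypothetical sketch: simplification records each edge collapse on a
// stack; playback pops the stack, undoing collapses as vertex splits.
#include <stack>

struct EdgeCollapse {
    int keptVertex;     // the vertex that survives the collapse
    int removedVertex;  // the vertex merged away (restored on playback)
    // ... positions and the affected triangles would be stored here too ...
};

struct ProgressiveMesh {
    std::stack<EdgeCollapse> recording; // filled while simplifying

    void collapse(const EdgeCollapse& op) {
        // ... merge removedVertex into keptVertex in the mesh ...
        recording.push(op);
    }

    // Playing the recording backwards adds detail: each pop is a vertex
    // split that restores the removed vertex and its triangles.
    void refine() {
        if (recording.empty()) return;
        EdgeCollapse op = recording.top();
        recording.pop();
        // ... split keptVertex, reinserting removedVertex ...
    }
};
```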
This principle has been extended so playback only happens for the polygons that are visible, making it view-dependent. Detail outside the view is not refined. This property is very well suited to streaming from a server: you can be very selective about what details are sent to clients. For instance, if the user cannot see the back of a mountain or a house, there is no need to stream any detail there.
Now, if you use a simplification method that is able to change mesh topology, playback can increase the genus of your mesh. For instance, a house in the distance would not have a hole in the chimney, but if you get close enough, this hole would appear. This is a game changer.
An octree-based mesh simplification method can do this for you:
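One classic flavor of this is vertex clustering over octree cells. A minimal sketch follows; all names are illustrative, and using the cell's first vertex as its representative is my own simplification:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <map>
#include <set>
#include <tuple>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

struct Mesh {
    std::vector<Vec3> verts;
    std::vector<std::array<int, 3>> tris;
};

using Cell = std::tuple<int, int, int>; // integer cell coordinates

static Cell cellOf(const Vec3& v, float size) {
    return { int(std::floor(v.x / size)),
             int(std::floor(v.y / size)),
             int(std::floor(v.z / size)) };
}

// One simplification level: collapse every vertex to a single
// representative per octree cell. Triangles whose corners merge vanish,
// which is how the topology itself simplifies: holes close, parts fuse.
Mesh simplify(const Mesh& in, float cellSize) {
    Mesh out;
    std::map<Cell, int> rep;              // cell -> output vertex index
    std::vector<int> remap(in.verts.size());
    for (std::size_t i = 0; i < in.verts.size(); ++i) {
        Cell c = cellOf(in.verts[i], cellSize);
        auto it = rep.find(c);
        if (it == rep.end()) {
            it = rep.emplace(c, int(out.verts.size())).first;
            out.verts.push_back(in.verts[i]); // first vertex stands in for the cell
        }
        remap[i] = it->second;
    }
    std::set<std::array<int, 3>> seen;    // drop triangles that became duplicates
    for (const auto& t : in.tris) {
        std::array<int, 3> m = { remap[t[0]], remap[t[1]], remap[t[2]] };
        if (m[0] != m[1] && m[1] != m[2] && m[0] != m[2] && seen.insert(m).second)
            out.tris.push_back(m);
    }
    return out;
}
```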
This method works very well over a network. TCP/IP is a streaming protocol: not only does it make sure every bit of information arrives, it also guarantees everything arrives in the same order it was sent. This is actually overkill. Traditional progressive meshes required playback to occur in the exact same order as simplification, so you needed something like TCP/IP for them. With an octree-based progressive mesh this is not necessary all the time. For instance, cells at the same level can be refined in any order. In this case it makes sense to use UDP, since in many cases it won't matter if packets arrive out of order.
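As a sketch of how order-tolerant refinement could work over UDP (the packet layout and the 3-bits-per-level cell addressing are assumptions of mine):

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct RefinePacket {
    uint64_t cell;   // octree cell address, 3 bits per level (assumed layout)
    uint32_t level;  // depth of the cell
    std::vector<uint8_t> payload; // the vertex splits for this cell
};

struct StreamClient {
    std::unordered_map<uint64_t, bool> refined; // cells already applied
    std::vector<RefinePacket> pending;          // arrived before their parent

    // Cells at the same level are independent, so packets may arrive in
    // any order; only the parent/child dependency must be respected.
    void onPacket(const RefinePacket& p) {
        if (p.level == 0 || refined[p.cell >> 3]) // parent is one level up
            apply(p);
        else
            pending.push_back(p); // hold until the parent arrives
    }

    void apply(const RefinePacket& p) {
        refined[p.cell] = true;
        // ... expand p.payload into new vertices and triangles here ...
        for (std::size_t i = 0; i < pending.size(); ) { // retry waiting children
            if (pending[i].level > 0 && (pending[i].cell >> 3) == p.cell) {
                RefinePacket q = pending[i];
                pending.erase(pending.begin() + i);
                apply(q);
                i = 0; // the vector changed; rescan from the start
            } else {
                ++i;
            }
        }
    }
};
```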
A big problem with traditional polygonal approaches is texturing. Each vertex requires a UV parametrization that links the 3D model to the 2D texture space. This is a problem because a progressive mesh evolves in ways that may break its texture mapping. Often the solution was to create charts shared by both the model and the 2D space, and then constrain simplification to stay within the charts. This was too limiting, and it did not cope well with massive topology changes.
What if I dumped 2D parametrization altogether? I could use the actual vertices to store all the information I needed: material, illumination, etc. The mesh itself acted as a compression structure. With enough vertices, the results could be as detailed as conventional 2D mapping for materials and lightmaps. This is where the concept of a material kicks in. A material is more than a texture; it can include displacement maps for tessellation, Wang tiles of very detailed geometry for instancing, instructions for procedural generators, etc.
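For illustration, the vertex and material records might look something like this (field names and sizes are hypothetical):

```cpp
#include <cstdint>

struct Vec3 { float x, y, z; };

// Per-vertex record: the mesh itself stores what a texture and a
// lightmap would normally store, so no UVs are needed.
struct SurfaceVertex {
    Vec3     position;
    Vec3     normal;
    uint16_t material;  // index into a material table, not a UV coordinate
    uint8_t  light[3];  // baked illumination, RGB
};

// A material is more than a texture; these fields are placeholders for
// the kinds of content mentioned above.
struct Material {
    uint32_t displacementMap;   // drives tessellation up close
    uint32_t detailGeometry;    // Wang-tiled geometry for instancing
    uint32_t proceduralRecipe;  // instructions for a generator
};
```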
I did some quick tests and saw this method was within reach of the current hardware generation. Still, I did not like a couple of things about it. First, it required rendering a lot of polygons. Newer systems could take it, but older ones did not do so well. Second, I would need to run streaming logic on the server side, making sure only those packets needed by the client were sent. I don't have the budget for running UDP-streaming servers right now.
So I did not choose this direction, but I think it has a lot of potential and I may come back to it later. As I see it, this method is an alternative to sparse voxel octrees. Voxels make the most sense when polygons are too small. They no longer require a 2D parametrization of your world, and they deal with topology simplification in a beautiful way. Progressive octree meshes do pretty much the same, as long as you don't allow your polygons to become very small on screen. The advantage is that you can still use all the polygon hardware.
How do you prevent your polygons from getting very small?
@owen: It all starts from fairly large polygons, then octree cells are refined down. If an octree cell is visible and still large enough, it can be broken down.
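A sketch of that visibility-and-size test (the pixel threshold and names are made up):

```cpp
#include <cmath>

struct Camera {
    float fovY;         // vertical field of view, radians
    int   screenHeight; // pixels
};

// Approximate size of a cell on screen: world extent over distance,
// scaled by the screen resolution over the field of view.
float projectedPixels(float cellSize, float distance, const Camera& cam) {
    return (cellSize / distance) *
           (cam.screenHeight / (2.0f * std::tan(cam.fovY * 0.5f)));
}

bool shouldRefine(float cellSize, float distance, const Camera& cam, bool visible) {
    const float kMinPixels = 32.0f; // made-up threshold: stop refining
                                    // before triangles get tiny on screen
    return visible && projectedPixels(cellSize, distance, cam) > kMinPixels;
}
```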
There is also the problem of 'popping' when the mesh suddenly changes to a higher-resolution version. Take your chimney for example: there is a distance at which the chimney-without-a-hole changes into the chimney-with-a-hole. Of course, you can do the mesh change at a distance where the user wouldn't notice because the change is too small, but this would be very wasteful I think.
@Pelsmaeker: Well, there is always popping, but it can be done smoothly. This is actually one thing progressive meshes do well. When new detail is introduced, it does not come in at its new position and configuration, but still collapsed so it matches the previous level of detail. Then it can be smoothly interpolated into its new location.
The simplest example is a vertex split that creates two new vertices and a new edge. At the lower resolution there is only one vertex. When the vertex split is introduced, the new vertex initially sits at the position of the vertex being split. Then it is progressively moved into its final position, so the new edge gradually appears.
Since this happens while elements are relatively small on screen (like those far away), it helps reduce the popping.
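A sketch of that interpolation, often called geomorphing (the names and the per-frame update scheme are mine):

```cpp
struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

struct MorphingVertex {
    Vec3  from;     // position of the vertex being split, at split time
    Vec3  to;       // final refined position
    float progress; // 0 = just introduced, 1 = settled
};

// Called per frame; the new edge grows gradually instead of popping in.
Vec3 morphedPosition(MorphingVertex& m, float dt, float morphSeconds) {
    m.progress += dt / morphSeconds;
    if (m.progress > 1.0f) m.progress = 1.0f;
    return lerp(m.from, m.to, m.progress);
}
```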
How about using peer-to-peer networking to get the mesh? You could build on BitTorrent: instead of requesting chunks by their hash, clients would request octree cells by their unique address, store them in their local cache, and seed them to other clients.
It would also not be that hard to find neighboring players that way and establish direct connections with them to transfer chunk updates…
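One way to form that unique address is a locational code, one child index per octree level (a sketch; the 3-bit packing and the leading marker bit are assumptions):

```cpp
#include <cstdint>
#include <vector>

// Unique cell address: one child index (0-7) appended per octree level,
// with a leading 1 bit so cells at different depths never collide.
uint64_t cellAddress(const std::vector<uint8_t>& pathFromRoot) {
    uint64_t addr = 1;
    for (uint8_t child : pathFromRoot)
        addr = (addr << 3) | (child & 7);
    return addr; // usable as a cache and request key, much like a chunk hash
}
```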
@pascal: It is a very intriguing idea. I'm not sure about the latency. It may take too long to discover who has the next chunk you need and move it.
P2P mesh distribution sounds like a solid idea. You'd get double karma from staying online too: by systematically getting meshes out from your last position, you'd preemptively prevent delay the next time you run the program, AND you'd increase your p2p karma that gives you priority over players that haven't seeded much mesh (many meshes?). Pillars of the community in an almost literal sense.
Request: could you make a video where the camera does not move, but the mesh increases in detail, and then each new vertex is moved into the proper position gradually? I've been trying to explain this to some friends and they can't picture it, since most games stop with the jump in detail.
@tentus: P2P is not fully proven for streaming. Streaming brings new latency and congestion issues compared to the old torrent model used for file sharing. This is a research field of its own and it is quite hot at the moment. In 2011 the BitTorrent creator showed a limited demo of P2P streaming. He says he had to overhaul the entire BitTorrent protocol to make it possible. Very promising, but also hard.
About the video request: I'm sorry, I did not go that far. I stopped going in that direction when I realized my low-end target hardware would not cope with the polygon counts.
Rather than true P2P, you can at least offload some of the bandwidth use to other clients without 'too' much trouble, at the cost of double the normal latency (latency problems will be nothing compared to running out of bandwidth).
Something like the following (see the sketch after the list)...
1) Client requests a data chunk from server
2) If server hasn't cached that chunk, and neither has any client...
2a) Generate chunk
2b) Send chunk to client
3) else if we have bandwidth to spare
3a) Send chunk to client
4) else find a client who has the chunk...
4a) Send them the data about who to send it to
4b) The client then transfers the data for you
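A sketch of that decision on the server side; the helper functions are assumed hooks standing in for the real machinery, not any actual API:

```cpp
#include <cstdint>
#include <optional>

struct Chunk { /* mesh data */ };
using ChunkId  = uint64_t;
using ClientId = uint32_t;

struct Server {
    // Assumed hooks: cache lookup, peer registry, bandwidth check, etc.
    std::optional<Chunk>    cached(ChunkId id);
    std::optional<ClientId> peerWith(ChunkId id); // who else holds it?
    bool  bandwidthToSpare();
    Chunk generate(ChunkId id);
    void  send(ClientId to, const Chunk& c);
    void  askPeerToRelay(ClientId peer, ChunkId id, ClientId to);

    void onRequest(ClientId client, ChunkId id) {
        auto hit = cached(id);
        if (!hit && !peerWith(id))
            send(client, generate(id));        // 2) nobody has it: make it
        else if (hit && bandwidthToSpare())
            send(client, *hit);                // 3) cheap to serve directly
        else if (auto peer = peerWith(id))
            askPeerToRelay(*peer, id, client); // 4) a peer relays it for us
        else
            send(client, generate(id));        // fallback: serve it ourselves
    }
};
```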
True P2P may be wonderful one day, but since you have a server to coordinate, you can have all the advantages of a client-server model while borrowing some scalability from P2P.
The cost comes in keeping track of what data everyone has: fiddly, but not impossible.
If some clients can generate their own data then it wouldn't take much to inform the server about it and build it into the system.
@John: Note that I will generate all info beforehand; my generation is slow compared to viewer movement. Also, clients won't do any generation, in part for the same reason (being too slow), and also because they cannot be trusted. Even considering that pre-generation helps with delivery times, anything involving P2P may still be too laggy. What you consider to be the worst-case scenario may happen too often.
You need to track who has each chunk. Unlike P2P file sharing, no client has the entire dataset, which means chunks are deleted continuously to make room for new ones. In fact, once a client's cache is full, chunks are deleted at the same rate new ones are received, which is quite fast. Your machine may think another client has a chunk when it is really gone.
There are ways around that, like having clients commit to chunks for a window of time, or having some redundancy, basically asking more than one peer for the same chunk.
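A sketch of a tracker entry along those lines (the types and the time handling are illustrative):

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

using ChunkId  = uint64_t;
using ClientId = uint32_t;

// Tracker with a commitment window: peers promise to keep a chunk for
// `window` seconds, and the server only redirects inside that window.
struct Tracker {
    std::unordered_map<ChunkId,
        std::vector<std::pair<ClientId, double>>> holders; // peer, expiry time

    void commit(ChunkId id, ClientId peer, double now, double window) {
        holders[id].push_back({ peer, now + window });
    }

    // Return up to `redundancy` peers still inside their window; asking
    // more than one covers a cache entry that silently disappeared.
    std::vector<ClientId> peersFor(ChunkId id, double now, int redundancy) {
        std::vector<ClientId> out;
        for (const auto& [peer, expires] : holders[id])
            if (expires > now && int(out.size()) < redundancy)
                out.push_back(peer);
        return out;
    }
};
```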
Making sure the streaming is fluid will take some work, especially on the tracker side. I don't want to spend too much time developing a tracker, or to host some sort of special server.
Sometime between starting and ending that reply I managed to forget you pre-generate your data, whoops.
I guess, since you're writing about things you have already done, you probably have a cunning plan lined up for your next post that doesn't need something like P2P to work.
So, maybe not relevant, but... if a client notifies the server which chunk it is going to delete in the same packet that requests a new chunk, then the server should have a good enough idea of what exists on each client.
Then you'd have to limit the number of chunks a client can stream at a time and keep at least that much memory free, in case a P2P transfer happens just when the client wants to free the chunk.
I realise I've posted a few comments now and forgot to state the obvious... this blog has been a great read and the worlds you produce are very nice; it certainly puts the usual heightmap generators to shame.