Friday, October 7, 2011

Popping Detail

I have added two new videos where you can see the popping of new detail.

It is very noticeable when the camera is close to a boundary and the new mesh comes right in front of it.

In these videos the clipmap loader is always trying to catch up with the moving view. In most cases it drags behind, as it really needs to load and process nearly 50 Megs worth of new meshes for each new scene. These meshes won't be used by the clients. Clients will receive very simplified meshes with all the detail projected. They will load a lot faster, but some of them will have to come from the network, so it is essentially the same problem.

Adding some degree of prediction should help. The prediction logic would load the clipmap scene for where the camera will be at least a second ahead.
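
Something along these lines should do, at least as a starting point. This is only a sketch of the idea; Camera, ClipmapLoader and requestScene are made-up stand-ins, not the engine's actual classes:

    #include <cstdio>

    struct Vec3 { float x, y, z; };
    struct Camera { Vec3 position; Vec3 velocity; };

    // Stand-in for the streaming system. requestScene() would enqueue the
    // clipmap cells covering the scene centered at 'center'.
    struct ClipmapLoader {
        void requestScene(const Vec3& center) {
            std::printf("prefetch scene at (%.1f, %.1f, %.1f)\n",
                        center.x, center.y, center.z);
        }
    };

    // Linear extrapolation of the camera position. A real version would
    // smooth the velocity so jittery input does not trigger spurious loads.
    Vec3 predict(const Camera& cam, float lookAheadSeconds) {
        return Vec3{ cam.position.x + cam.velocity.x * lookAheadSeconds,
                     cam.position.y + cam.velocity.y * lookAheadSeconds,
                     cam.position.z + cam.velocity.z * lookAheadSeconds };
    }

    void prefetch(ClipmapLoader& loader, const Camera& cam) {
        loader.requestScene(predict(cam, 1.0f)); // where we will be in ~1s
    }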

These videos are from some kind of weird terrain I was testing. I still need to go back and improve the terrain definition and materials. Right now it is a bit crazy.

If you look carefully you will notice both videos end in the same spot of terrain, but they start in very different places. Enjoy!



21 comments:

  1. Very cool!

    Will you be able to reduce the amount of floating bits of land?

    Also, how's the city and house building going? :)

    ReplyDelete
  2. Very cool looking stuff. This is the pinnacle of proc gen as far as I'm concerned. One question I had: you are generating all of this nice geometry procedurally, but why then are you storing it and streaming it to the client? To me that kind of seems like losing one of the main benefits of procedural generation (i.e. data compression). Is this because the computation involved in generating the terrain removes the ability to generate in real time?

    ReplyDelete
  3. @Anonymous: I will be able to remove most of the floaters. I'm already working on that.

    I still need to spend more time on the city module. There are many things I want to do, I have only scratched the surface.

    ReplyDelete
  4. @Lachlan: Yes, it takes too long to generate all this. It cannot be done at runtime by average client hardware. At least today.

    A few years from now, maybe... but then server-side generation would allow for even more detail. It will probably never catch up. Also, dumber and slower clients are emerging: mobiles and tablets. You will want to run there too, so server-side generation may have an edge for some time.

    ReplyDelete
  5. This is how I handled the same problem:

    http://www.sea-of-memes.com/LetsCode28/LetsCode28.html

    I have nothing to teach you, but this was my approach, FWIW.

    How much data per second are you expecting the client to display?

    ReplyDelete
  6. @Unknown (Michael): Thanks for the video, I had seen it before on your site, which has taught me a few things already. :)

    I think I can average 10K per cell. A scene is around 500 cells (covering a 4km square). With an empty cache, it would be 5 Megs to get the viewer started. It would require some buffering for sure. Then, as you move, it is around 5 new cells every second, so the rate would be 50K per second. But these are estimates at this point.
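
    Restating those estimates as plain numbers (again, just estimates at this point):

        // Estimated streaming cost, using the figures above.
        constexpr int kBytesPerCell   = 10 * 1024; // ~10K per simplified cell
        constexpr int kCellsPerScene  = 500;       // one scene covers a 4km square
        constexpr int kNewCellsPerSec = 5;         // while the viewer moves

        constexpr int kColdStartBytes = kCellsPerScene * kBytesPerCell;  // ~5 Megs
        constexpr int kBytesPerSecond = kNewCellsPerSec * kBytesPerCell; // ~50K/s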

    One thing I did not mention is that higher octree levels have a higher probability of appearing in any scene. They also take approximately one quarter of the data size of the level before. This means at some point in the octree hierarchy it starts to make sense to pack the higher levels along with the client installation. It would be a couple of Gigs, but it could make a big difference in the transfer rate. Only higher-definition cells would be requested when the viewer gets close enough.
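
    To put rough numbers on it (the size of the finest pre-packed level here is an assumption, chosen just to show how the series adds up):

        #include <cstdio>

        int main() {
            // Each coarser octree level is ~1/4 the data size of the one
            // below it, so pre-packing everything above a cutoff costs at
            // most ~4/3 of the cutoff level itself.
            double levelBytes = 1.5e9; // assumed size of finest pre-packed level
            double total = 0.0;
            for (int level = 0; level < 10; ++level) {
                total += levelBytes;
                levelBytes /= 4.0;
            }
            std::printf("pre-packed total: %.2f GB\n", total / 1e9); // ~2 GB
            return 0;
        }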

    ReplyDelete
  7. Good point about preloading the highest levels (lowest res) of the landscape. It hadn't occurred to me to do that.

    ReplyDelete
  8. I am in absolute awe of what you've created here! A truly incredible feat. The area in the first video reminds me of a level from Halo 1 haha.

    I was just wondering if there is a way to get a sense of scale? For example, how big would a human being be in this landscape? It is quite tricky to judge from the video.

    Can't wait to see more! :)

    ReplyDelete
  9. @Ali: Thanks. In these two videos the camera hovers from 2 to 3 meters over the ground, but in some points it drops a lot lower.

    ReplyDelete
  10. How are you handling the change in detail?
    I had some success with rendering 2 adjacent LODs (by which I mean, say, level 1 crude + level 2 less crude) in the same space and using a depth-based alpha fade. As I'm sure you know, this is much better for terrain that is heightmap based, as the LOD change will really only occur on the vertical axis. With full 3D voxelised stuff you can get it happening in all directions, which makes it much more obvious (a floating island, for instance, will contract/expand on all sides, making it much more noticeable).
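
    The fade factor itself is just something like this (a sketch, names illustrative):

        // Depth-based alpha fade between two co-located LODs: both are drawn,
        // and the fine one fades out as the camera gets further away.
        float fineLodAlpha(float viewDepth, float fadeStart, float fadeEnd) {
            if (viewDepth <= fadeStart) return 1.0f; // fine LOD fully opaque
            if (viewDepth >= fadeEnd)   return 0.0f; // only the crude LOD remains
            return 1.0f - (viewDepth - fadeStart) / (fadeEnd - fadeStart);
        }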

    The C4 engine does some clever skirt-like things to seam up adjacent LODs, but I imagine you wouldn't be able to generate enough skirts on the fly.

    In my stuff I have the chunks at view range being generated in threads that then call back when they are ready to be drawn. But I also have the cull range (when stuff leaves view) set much further away. This works as a cheap sort of caching, since if the player turns back around the chunks are still hanging around :) I did try saving chunks to disk and reloading them, but it ended up better to just regenerate them. And the data size was insane too :)

    As usual your work looks amazing, I love the scale of your ambition. I don't care if it's futureware, it's a future I want to see!

    ReplyDelete
  11. @nullpointer: To hide the seams, each cell is 5% larger than it should be, so cells actually extend into each other. Seams are pretty well concealed when the polygon count is high, as it is in these two videos. If you look carefully you cannot see seams anywhere (either that or I need glasses). There are a few missing triangles that show as holes in the mesh, but this is a bug in my mesh simplification. They are not seams.
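
    The overlap itself is simple, roughly an inflation of each cell's bounds before meshing (a sketch; the helper is made up):

        // Inflate a cell's bounding box by ~5% so neighboring meshes overlap
        // and the seam between them stays hidden.
        struct Aabb { float min[3]; float max[3]; };

        Aabb inflate(const Aabb& box, float fraction /* e.g. 0.05f */) {
            Aabb out = box;
            for (int axis = 0; axis < 3; ++axis) {
                float pad = (box.max[axis] - box.min[axis]) * fraction * 0.5f;
                out.min[axis] -= pad;
                out.max[axis] += pad;
            }
            return out;
        }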

    When polygon density is low, seams may appear. This is because the simplification stage makes different decisions for each cell. I tried skirts for a while, and they do work. I can generate them precisely from the voxel data so they go in the right direction and, most importantly, have the right size. But this approach would require me to generate higher LODs using voxel data as well. Right now I'm using meshes, which is a lot faster.

    I think I will stick with the overlapping meshes for now, and increase polygon density at the cell frontiers. Having skirts and seams adds polygons anyway, so no biggie. Also, it is good that mesh resolution goes up a bit near transitions. This way cells can still be simplified in parallel and they won't be too disjoint at the boundaries.

    I still need to get there. My next target is to convert all the meshes you see in these videos to very low polygon resolution and see how it holds.

    ReplyDelete
  12. Heyyy, very nice work!!!
    Could you give us the world generator? Or maybe create a little RPG based on a world like this to show it off? I am very excited about trying your program :D

    ReplyDelete
  13. @DrDiablo: I will eventually put something out there, but I'm not ready yet. In a year's time, hopefully.

    ReplyDelete
  14. Would it be possible to blend in the new data? Maybe by using some kind of alpha blending (like @nullpointer suggested above), or perhaps by morphing the lower-density mesh into the higher-density one. This could either be done over a given time frame after the higher-density cell is loaded, so that you would have a one-second morph from the old mesh to the new, or it could morph based on the player's rate of movement towards the detailed cell.
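
    For the morph option, I imagine something like storing, for each fine vertex, where it sat on the coarse mesh and interpolating between the two (only a sketch):

        // Morph each vertex from where it sat on the coarse mesh to its true
        // position on the fine mesh, over the fade window.
        struct MorphVertex { float coarse[3]; float fine[3]; };

        void morphPositions(const MorphVertex* verts, int count,
                            float t, // 0..1 over e.g. one second
                            float* outPositions) {
            for (int i = 0; i < count; ++i)
                for (int k = 0; k < 3; ++k)
                    outPositions[i * 3 + k] =
                        verts[i].coarse[k] * (1.0f - t) + verts[i].fine[k] * t;
        }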

    ReplyDelete
  15. @WeeTee: Yes, I'm considering blending at this point. But first I want to test and see how bad the popping is when I switch to low-resolution meshes. A new post about this is coming soon.

    ReplyDelete
  16. Is some kind of demo released?

    ReplyDelete
  17. @Anonymous: No demo for now, maybe in one year.

    ReplyDelete
  18. One big component of the pop is the way the texture changes with the geometry. The changes in colour especially are very obvious. Is there a way to get the texture of the more complex geometry onto the less complex geometry?

    I'm not talking normal mapping or anything, just a mapping that transfers the same black lines etc from one mesh onto the other.

    ReplyDelete
  19. @Josh: Yes, in theory this should be reduced when I switch to lower density meshes with details projected as normal maps.

    But then, I don't have much resolution available for these maps once I start sending cells over the network. I anticipate there will be a similar pop when a higher resolution map kicks in.

    ReplyDelete
  20. Hah, fair point. There's always some data cost! It would be nice if you were able to find a really cute normal map interpolation system. Is that likely to be easier than geometric interpolation?

    ReplyDelete
  21. @Josh: I will try having two scenes in memory at any given time, the current and the next one. Then I will do a smooth alpha blending on screen. We'll see if this helps.
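
    Roughly like this (a sketch only; Scene and drawScene are stand-ins for the real renderer):

        struct Scene { /* loaded cells for one clipmap snapshot */ };

        // Stand-in: draw a whole scene at the given opacity.
        void drawScene(const Scene&, float /*alpha*/) { /* renderer goes here */ }

        // Cross-fade from the current scene to the next over 'fadeDuration'.
        void drawCrossfade(const Scene& current, const Scene& next,
                           float secondsSinceSwap, float fadeDuration) {
            float t = secondsSinceSwap / fadeDuration;
            if (t > 1.0f) t = 1.0f;
            drawScene(current, 1.0f - t); // old detail fades out
            drawScene(next, t);           // new detail fades in
        }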

    ReplyDelete