Sunday, May 26, 2013

Euclideon Geoverse 2013

Euclideon is back. Here is their latest video:


It seems they put their tech to good use. Their island demo was very nice, and now this is one of the best point cloud visualizations I have seen so far. Is it groundbreaking or revolutionary? No. There is plenty of this going around. But this is good stuff.

This announcement may be very disappointing to some people who may have taken their past claims literally. Two years ago they said they had made computer game graphics 100,000 times better. They would show you screenshots from Crysis and explain how bad all this was compared to their new thing. Panties were thrown all over the internet.

The reality is that today this still ranks below the latest iterations of CryEngine, Unreal Engine and id Tech 5. It even ranks below previous versions of those engines. Actually, I do not think you can make a game with this at all. If you want the texture density you see on screen in most games, you would need to store far more information than what they have in the demos shown here. It is a perfect match for the state of laser scanning, where you do not have much detail to begin with. They are not showing grains of dirt anymore, that is for sure.

The real improvement in the last two years? The hype levels are a lot lower.

36 comments:

  1. Well, it is completely different technology but it reminds me of Google Earth. It is amazing what GEarth has been able to do in terms of modeling terrain, with their latest iteration using imagery from multiple angles to reconstruct the 3D contours of not only buildings but trees as well. The detail level of Google Earth is increasing at an amazing rate, and it leads me to think that maybe these alternate methods of rendering are not the way of the future, and that graphics will instead move toward the increasingly clever use of polygons.

    1. Polygons are great at compression. It is a very effective way to represent any surface. They do bring some awkwardness to the entire process, both to creation of content and rendering.

      Polygons are like vector art. Vector art compresses a lot and you can stretch it, rotate it and see no degradation. But it is a pain in the ass to create it and work with it.

      Voxels and point data are more like pixel art. You get a lot of detail, but as soon as you want to scale it or rotate it you better have even more data available to avoid the aliasing.

      I think hardware will advance faster than human skills. Easier creation or capture methods (like voxels and point data) will prevail once the ability to store and move huge amounts of data is no longer a problem.

      I think we went through a similar phase with 2D graphics. In the beginning developers would favor vector data. Bitmaps were too costly to keep in memory. Now we use bitmaps even for stuff you could define very easily as vector data, for instance a circle.

      We still have not outgrown the same data and bandwidth challenges in 3D. Vector art still has an advantage. There is also the issue of physics and applying deformations to bodies. Vector is a lot simpler for this.

      So yes, voxels are the future, they may always be the future. I hope not, because that would mean we got stuck with crappy hardware.

    2. Sorry, I cannot see the video, but in one video they explained that they can render terabytes of point cloud data with performance that outmatches any other system. They said it is just like streaming point cloud data.

      Other existing systems use different approaches for speed (like showing fewer points while moving and gradually adding them back when paused). Their system shows everything you need to see without any performance issues.

    3. The future will bring direct implementations of the human eye (e.g. Brigade, http://igad.nhtv.nl/~bikker/), however we are still stuck in a computational and I/O-related bottleneck, using the GPU as a crutch to minimally push the envelope.

      Rasterized graphics engines will, like the VCR, slowly die and become obsolete.

      Voxel Farm is very impressive, however, and although rendering techniques may change, efficient and scalable volume data will likely always be octree-based.

    4. I love you guys.
      It's so good to see such a bubbling hard-core discussion; it makes me regret not knowing more about all this.

      Please don't mind me, carry on...

  2. I believe their technology is easily capable of exceeding Crysis-level texture density, but their frame rate simply isn't high enough to support shadow map lighting.

    My work on Sparse Voxel Octrees has led me to believe that they can actually be compressed much better than GPU-based textures. Color data can be stored in a dictionary-coded tree (paper: http://research.microsoft.com/en-us/um/people/hoppe/proj/ratrees/ ) with much greater compression than current GPU texture formats. Though not directly translatable to SVOs, the paper's results are promising - as low as 1 bit per color value. My last attempt at an SVO renderer stored voxels at an average of 2.5 bits per voxel excluding color data. That's the uncompressed data that is used directly by the renderer.
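
    To make the storage picture concrete, here is a minimal sketch (my own illustration, not the paper's scheme or my renderer's actual format) of the kind of compact SVO node that gets you down to a few bits per voxel: an 8-bit child mask plus a single index, with the existing children packed contiguously in a pool.

        #include <cstdint>

        struct SvoNode {
            uint8_t  childMask;   // bit i set => octant i has a child
            uint32_t firstChild;  // pool index of this node's first child
        };

        // Descend one level: children are packed, so the pool index of
        // child `octant` is firstChild plus the number of set mask bits
        // below it.
        inline uint32_t childIndex(const SvoNode& n, int octant) {
            uint8_t below = n.childMask & ((1u << octant) - 1);
            return n.firstChild + __builtin_popcount(below);
        }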

    In addition, the CPU typically has access to much more RAM, and it is easier to build a MegaTexture-style streaming architecture purely on the CPU. So yeah, CPU-based voxel rendering can easily support texture densities much higher than the latest games. The low detail you're seeing is caused by Euclideon's source data, not a limitation of the engine.

    On to performance: From memory, in one of their 2011 videos, they claimed to be getting 15-30 FPS at 1024x768, and they were showing a statically lit scene. Modern games render the scene once per shadow casting light source per frame, so to render a scene with 4 lights, they'd be looking at 3-6 FPS. This is crap.

    I have my own hypotheses about why Euclideon is getting 15 FPS while Atomontage is getting 2000 FPS. I think Atomontage is using an imposter-style system to cache rendered chunks on the GPU and reuse them between frames when possible. With such a system, frame rate will always be ridiculously high, but when the CPU is under strain, quality will suffer as the imposters get updated less and less frequently. Unfortunately I won't be able to confirm this until Atomontage release a video that includes animated geometry or a moving light source.
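
    If that hypothesis is right, the core loop would look something like the following (purely speculative, with made-up names like renderChunkToTexture; Atomontage has published no code, and the helper implementations are omitted):

        #include <cmath>

        struct ViewState { float yaw, pitch; };
        struct Texture   { int id; };
        struct Chunk     { int id; };

        Texture renderChunkToTexture(const Chunk&, const ViewState&); // slow CPU path
        void    drawBillboard(const Texture&, const ViewState&);      // cheap GPU quad

        struct Imposter { Texture image{}; ViewState captured{}; bool valid = false; };

        void drawChunk(const Chunk& c, Imposter& imp, const ViewState& view,
                       bool cpuBudgetLeft) {
            float drift = std::fabs(view.yaw - imp.captured.yaw)
                        + std::fabs(view.pitch - imp.captured.pitch);
            // Re-render only when the cached view has drifted too far and
            // the CPU has budget; otherwise reuse the stale imposter
            // (quality degrades, frame rate does not).
            if (!imp.valid || (drift > 0.05f && cpuBudgetLeft)) {
                imp.image    = renderChunkToTexture(c, view);
                imp.captured = view;
                imp.valid    = true;
            }
            drawBillboard(imp.image, view);
        }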

    1. Once you get into unique texturing (like MegaTextures) it makes sense to compare voxel compression to GPU texture compression. In reality polygon-based content allows you to reuse the same texture many times over. This is a form of compression too.

      The amount of data you would need to cover grains of dirt and massive mountains in the distance at the same time is huge. For an open world like Skyrim's I bet you would need hundreds of terabytes.

      Also I am not sure if their streaming system can take it once you cover these extremes. After so many years they have not shown anything like this. They had that pebble in the island demo, which was using instancing; now they show mountains at low resolution. So there is a chance that even if they had the dataset, it would not work.

      Yes, I would like to see the CPU usage at least. I would guess they are running this on multiple cores. If not, they could do a light pass per additional core.

    2. Oh wait... I got it. I forgot it is a search engine like Google. You get shadows by adding "shadows pls" to your query :)

    3. Couldn't you proc generate trivial things like grains of dirt? Then wipe the data when you are done using it? This could also apply to other textures, like bark, which would simply be regenerated based on a base polygonal mesh.
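
      For instance (a toy sketch, not any engine's actual generator), a deterministic hash of the cell position means the same grain comes back every time you look, so it never has to be stored:

          #include <cstdint>

          // Same (x, y, z) always yields the same value, so fine detail
          // can be regenerated on demand and wiped afterwards.
          uint32_t cellHash(int32_t x, int32_t y, int32_t z) {
              uint32_t h = (uint32_t)x * 374761393u
                         + (uint32_t)y * 668265263u
                         + (uint32_t)z * 974634551u;
              h = (h ^ (h >> 13)) * 1274126177u;
              return h ^ (h >> 16);
          }

          bool dirtGrainAt(int32_t x, int32_t y, int32_t z) {
              return (cellHash(x, y, z) & 0xFF) < 32;  // ~12% of cells hold a grain
          }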

    4. Yes you could. It would be a different engine then.

    5. Maybe technologies like Crossbar's RRAM and new upcoming 6-60TB hard drives will help. Or keeping a lot of voxel data in the cloud. I hope that engines will be mixed together. I don't see a single voxel engine handling everything any time soon.

  3. I think they found their market, which is not video games, and that market is a much better fit for their claims. It's a very impressive GIS product.

    However, for video games you have a lot of other problems like integrating all this data with a physics engine or making AI navigate through it for example.

    Good luck to them.

  4. Shabtronic/Shabby/ZJ/Zardoz Jones has pretty much an equivalent method.
    The author of importance here is D. J. Meagher: it's a variant of his early '80s algorithms. UD is an object-order front-to-back octree traversal with a non-visible-subtree rejection technique that is not Meagher's (perhaps some combination of R. Yagel's object-order raycasting and S. Teller's frustum casting).
    Image-order approaches, e.g. raycasting, are hopeless except on a parallel computer. NVIDIA wants you proceeding that way: do not.
    UD is a greedy algorithm and therefore inherently non-parallel: there's a reason why UD is still a CPU thing (the GPU is used only for secondary things, like outputting the bitmap or the skybox in the island demo).
    That almost nobody has publicly reverse-engineered UD says a lot about today's lack of inventiveness.
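
    For the curious, the front-to-back part is the easy bit: the octant visit order falls straight out of the view direction. A small sketch of the classic trick (my illustration, not UD's code; the subtree rejection is where the secret sauce would live):

        // Octant bit 0 = +x half, bit 1 = +y half, bit 2 = +z half.
        // The nearest octant lies on the eye's side of each splitting
        // plane; XOR-ing it with 0..7 enumerates all eight in a valid
        // front-to-back order (no octant is visited before one that
        // could occlude it).
        void frontToBackOrder(float dx, float dy, float dz, int order[8]) {
            int nearest = (dx < 0 ? 1 : 0) | (dy < 0 ? 2 : 0) | (dz < 0 ? 4 : 0);
            for (int i = 0; i < 8; ++i)
                order[i] = nearest ^ i;
        }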

    1. The question is why you would want to reverse-engineer it. It does not show any exceptional properties. There is enough open research and tech showing how to do this.

      I am not in the games rendering industry, but I think the people who work in this field could see what this was about from the get-go. Back then it was one of two things: (A) something like that would only be applicable to games many years from now, or (B) it would do great today for rendering geo-spatial capture data. Not games.

      So if you are doing rendering for games there are better things to do. It does not show a lack of inventiveness; it is just common sense.

      Euclideon was smart to tackle the problems they can really solve. This took them to an entirely different industry.

      Until recently it just did not matter how many FPS you would get in your data viewer. The reason is your software would have very few users, only the people doing planning, engineers, etc. This was not mainstream; there was no money in making these viewers fast.

      With augmented reality this is becoming a hot field. They will be feeling the heat soon. In this field it is all about the data. The viewer is a small part of the technology. I am sure the new map-makers (Google, Apple, MS) are going seriously into this. But this is just happening; we do not know what these big guys are cooking.

    2. DICE uses UD, Crytek too.

    3. You mean Crytek and DICE use Euclideon's Unlimited Detail tech? I missed that bit of news. What is your source?

    4. Crytek:
      "You're have a problem ...CryTek working on new engine with UNLIMITED DETAIL. My friend working on Crytek in Kyev, Ukraine. They are know how in does work. Sorry my bad language." (http://www.youtube.com/all_comments?v=hxtuZE5pOGA)

      DICE:
      "At this current moment, we have agreed with Dice to let them use our Engine for their next game as long as this engine gets ready, so it would help everyone and I mean everyone in this world if Dice uses our engine, Just think of it. Battlefield 4 in Unlimited Detail?" (https://docs.google.com/file/d/0B0Tw1fnDScRscWxQZGtLTGl6b0U/edit?usp=sharing)

    5. Neither Crytek nor DICE use or are pursuing UD. No one in the games industry is, for many of the common-sense reasons including those stated by Miguel above...

    6. Thanks for the links.

      A guy on YouTube says he knows a guy who works for Crytek who says they know how UD works. Yes, I believe that they know how it works. It does not mean Crytek is dumping their polygon-based tech and going with UD for this generation and the next.

      The DICE reference comes from the Euclideon guys, and it is a bit dated. We now know for sure that Battlefield 4 and the new version of the Frostbite engine are polygon-based. So something happened there.

      Even if this is real it would be for future versions of these engines. This could easily be generations away. Xbox One and PS4 seem very invested in the GPU, for instance. The CPUs are 8 cores but low frequency. UD is not happening on that platform for sure. You can have better looking games by using polygons.

      Everybody is planning to use something like this for games in the future. Carmack said he was looking at unique geometry and texturing using voxels even before that.

      What your links tell me is DICE and Crytek looked at the UD tech and they found you cannot really do games with it today.

      Another way to look at this:

      If someone had found a way to make games 100,000 times better two years ago, we would have a tsunami of these games already. It would be the biggest thing since DOOM. That did not happen.

      If we keep waiting, of course that will happen. Also robot prostitution and flying cars.

  5. Is it possible to stream per-pixel rendering of traditional textured polygonal models? I would imagine that doing so would allow for insanely high mesh densities and the elimination of varying mesh/texture levels of detail.

    1. I'm sure the problem comes from sorting through the mesh data sets at each fragment. Casting against an octree is much faster than casting against a bunch of triangles.

  6. This reminds me of the Atomontage Engine.

    http://www.youtube.com/watch?v=VYfBrNOi9VM

    In my opinion this looks just as great as the Euclideon stuff. Plus he has more "interactivity", like on-the-fly modification of the terrain.

    And he is doing this more or less alone, while Euclideon is a government-funded company.

    1. Agreed, same or higher quality.

    2. AM has lower object-space resolution and range.
      UD determines the actually visible set ("AVS") on the fly: think of projecting the viewport onto the object instead of the usual opposite, i.e., the object and the viewport are swapped.
      UD's renderer is pure in the sense that it needs hardly anything beyond finite combinatorics (no mathematical analysis). For example, neither floats nor * or / occur.
      AM's author should not show his work without also showing his algorithms or code: a worthy algorithm is better than smoke and mirrors (the SGI pipeline is ugly and doesn't scale without the help of special, costly, polluting and, as proved by UD, superfluous hardware).
      The same holds for UD's author.

    3. Yes you are right. It is hard to compare things we do not know.

      I find your idea of software purity interesting. To me floats are morally equivalent to ints or fixed point. Also, all math is the same to me: whatever gets the results at the least cost is good.

      Also UD is not powered by superfluous hardware. It runs on CPUs, which are fatter designs than GPUs and are really at the end of their life on what they can do per core.

      Anyway I would agree with you except for one little detail: Both AM and UD need to show they can replace the tech used for AAA games today. To be considered an alternative they at least must do the job.

  7. Miguel, what is the resolution of your engine? UD claims, at least for the island demo, a resolution of 64 pts per cubic millimeter. Surely VoxelFarm is way less?

    If you model a cubic mile at that level of detail you get orders of magnitude over petabytes of required data just assuming 1 byte per point. Even with compression. And if you drop it to a square mile with a reasonable height you're still talking huge, huge amounts of required data to model the environment at 64 pts per cubic mm.
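
    A quick sanity check, assuming 1 byte per point and 1 mile ≈ $1.609 \times 10^{6}$ mm:

        $$64\,\tfrac{\text{pts}}{\text{mm}^3} \times (1.609 \times 10^{6}\,\text{mm})^3 \approx 2.7 \times 10^{20}\ \text{bytes} \approx 270{,}000\ \text{PB}$$

    That is about five orders of magnitude over a petabyte.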

    And that was always my understanding of the limitations of the technology that they were presenting. Unless they severely lower the "resolution" of the environment or use really small "canned" environments, there's just no practical way to store all the data.

    And speaking of what I'd like to see from their engine: I wanna see realtime deformation. Does their compressed database of voxel data allow real-time changes and storage? I wonder. That may be another tradeoff they're not mentioning. Sure they can do blindingly fast lookups on point data, but whoops... you want to alter the point data? Sorry!

    Anyway, interesting link. Wondered what became of these guys. Thanks!

    And hurry up and give us water dangit! :P

    1. I use a different type of voxel, so comparisons don't mean much. In my case it is geared towards generation, not storing or rendering. I offload this to other layers. In that sense you could use UD or AM voxels to render for instance.

      The voxels I use are not square, they live in some sort of warped space. You can think of them as boxes where you can move the eight corners freely, resulting in some sort of distorted box. This means you could have a solid shaft that is only a hair thick, but at the same time you could not have two of these too close together.
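
      Roughly, you can picture a cell like this (a simplified illustration, not the engine's actual data layout):

          struct Vec3 { float x, y, z; };

          // A regular grid cell whose eight corners each carry a free
          // offset. A surface can pinch one cell down to a hair-thin
          // shaft, but corners are shared with neighbors, so nearby
          // detail is traded away.
          struct WarpedVoxel {
              Vec3 cornerOffset[8];  // displacement of each corner from its grid node
              bool solid;
          };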

      The engine allows for different voxel dimensions, however in the demo program I use to capture most of the screenshots and videos shown here, the smallest visible detail can measure 1 millimeter. But allocating detail to one voxel takes away detail from neighboring voxels, so two 1 millimeter shafts cannot be closer than 0.6 meters in this demo.

      What drives these sizes is the realtime generation. If you look at UD or AM, they require hours of preprocessing. So the real bottleneck is how fast you can produce data out of thin air; voxel sizes are a result of that.

  8. I'm glad these guys switched to GIS. It's a very impressive and exciting technology in that field. I'd love to see Google's street view start using point cloud data using something like this. And they definitely nailed a pile of very corporate use-cases.

    1. It's easy to underestimate the huge storage that is needed and the time it takes to make it look good. They showcase 10+ GB point clouds as they are, but they still look very dotted and broken close up. They had a few smaller examples to show good close-up quality, but the bigger showcases are very short in range. They are masters of being vague, and people refer to claims that haven't been backed up. The question is who wants to get into a proprietary format controlled by one company with that kind of reputation.

  9. We only just got used to gigabytes, and we are told that new words, like petabyte, are on the way.

  10. Why do they make such outrageous claims? Going straight from the HDD, I doubt it.

    1. I get that one. I think they really use the HDD, with probably some caching in the host memory.

  11. They use mmap for their streaming. It allows files to be used as though they were in RAM; paging and synchronization are mostly handled by the OS. That part's simple.
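
    In the simplest form it looks like this (a minimal sketch of the mmap approach; the file layout and function name are made up, not Euclideon's actual code):

        #include <cstddef>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        // Map a dataset read-only; the OS pages nodes in from disk on
        // first touch and evicts them under memory pressure.
        const void* mapDataset(const char* path, size_t* lenOut) {
            int fd = open(path, O_RDONLY);
            if (fd < 0) return nullptr;
            struct stat st;
            if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
            void* base = mmap(nullptr, (size_t)st.st_size, PROT_READ,
                              MAP_PRIVATE, fd, 0);
            close(fd);  // the mapping keeps its own reference to the file
            if (base == MAP_FAILED) return nullptr;
            *lenOut = (size_t)st.st_size;
            return base;
        }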

    Saying this can't be used in games is incorrect. Point cloud rendering and rasterisation can be combined into a hybrid, so even if there are limitations on dynamic content, that doesn't really matter. You can have high-detail static geometry in point data and dynamic objects in triangles.

    Physics would be much harder to generate from source data; however, artists could still manually generate collision hulls to approximate the point data, no problem.

    I'm mostly interested in their claims that it's infinite. As far as I'm concerned, your lookup algorithm would have to be O(1) to make such a claim. And if they have indeed found an O(1) method for picking the point you need, it would truly be a feat of engineering genius.

    O(log n) could not be infinite, and log n is what traditional voxel octree lookups cost.
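
    For reference, that traditional lookup descends one level per step, so the cost is the tree depth, i.e. logarithmic in the resolution. A generic sketch (not UD's method):

        #include <cstdint>

        struct Node { uint8_t childMask; uint32_t firstChild; uint8_t color; };

        // Point query in a packed octree over the unit cube: one level per
        // iteration, so `depth` steps total -- O(log n), not O(1).
        uint8_t lookup(const Node* pool, uint32_t root,
                       float x, float y, float z, int depth) {
            uint32_t cur = root;
            float cx = 0.5f, cy = 0.5f, cz = 0.5f, half = 0.25f;
            for (int d = 0; d < depth; ++d) {
                int oct = (x > cx ? 1 : 0) | (y > cy ? 2 : 0) | (z > cz ? 4 : 0);
                if (!(pool[cur].childMask & (1 << oct))) break;  // empty space
                uint8_t below = pool[cur].childMask & ((1u << oct) - 1);
                cur = pool[cur].firstChild + __builtin_popcount(below);
                cx += (oct & 1) ? half : -half;
                cy += (oct & 2) ? half : -half;
                cz += (oct & 4) ? half : -half;
                half *= 0.5f;
            }
            return pool[cur].color;
        }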

    Bruce Dell has claimed in the past that they do not do raycasting, and has stopped calling his tech voxels (despite having called them that in the past).

    Time will tell. Once they are an established company perhaps they will release their solution so we can all see. Unless it's O(1) I won't be terribly impressed.

    1. I wonder why people want to think Euclideon found the impossible. Software doesn't change hardware restrictions and streaming bottlenecks. They sell "unlimited" (classic), release few updates, and we are left guessing from their exaggerated educational videos. The future of gaming is dynamic environments. Why waste gigabytes of point cloud data when it still looks like floating atoms close up?

    2. They do not retrieve a pixel's octree color (corresponding to an object-space square of edge size greater than or equal to about sqrt(3) times the edge size of the cube it is in) in constant time. But the complexity, in terms of the number of subtrees visited, grows as the surface of the "seen volume", a part of the view pyramid. And this surface grows as the surface of the viewport, because that seen volume is not a fractal. It is indeed likely that they simply map file windows to memory. For this they store their octree in depth-first order and, as the front-to-back traversal proceeds, load in the relevant nodes. This is also why a front-to-back, greedy, inherently non-parallel traversal is interesting. UD is the GPUs' killer. Avoid raycasting, which is hopeless without a GPU; go object-order.

  12. I think the author treats Geoverse a bit unfairly. It's certainly revolutionary within its geospatial niche. See the testimonials here: http://www.techdemonstrations.com/euclideons-unlimited-detail-works-sort/
