
Tuesday, August 2, 2011

Unlimited Detail

My baloney meter went off the chart last night while watching this:

It is actually a nice piece of software once you consider what it really is. My problem is with what they claim it to be.

If you look closely at the video, they only display a few blocks of data in different positions. They claim this is to bypass artwork, that it is just copy & paste. Well, it is not copy & paste. It is instancing, and it is what makes this demo possible on today's generation of hardware.

They can zoom in to a grain of dirt, but they do not tell you it is the same grain of dirt over and over. If they had to show truly unique "atoms", this demo would not be possible today. The challenge is data management. They chose the best compression possible, which is to have no information at all.
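To put some numbers on why instancing is the best compression possible, compare storing one detailed block plus a transform per copy against storing every copy as unique data. Every figure below is made up purely for illustration; Euclideon has published no such numbers:

```python
# Hypothetical sketch of instancing: one detailed "atom" block is stored
# once; each visible copy is only a transform, not a copy of the data.

BYTES_PER_ATOM = 16          # position + color + normal (illustrative)
ATOMS_PER_BLOCK = 5_000_000  # one very detailed dirt/rock block (assumed)
INSTANCES = 100_000          # copies placed around the island (assumed)
BYTES_PER_INSTANCE = 64      # a 4x4 float transform matrix

unique_storage = ATOMS_PER_BLOCK * BYTES_PER_ATOM            # stored once
instanced_total = unique_storage + INSTANCES * BYTES_PER_INSTANCE
copied_total = INSTANCES * ATOMS_PER_BLOCK * BYTES_PER_ATOM  # if truly unique

print(f"instanced: {instanced_total / 2**30:.2f} GiB")
print(f"unique:    {copied_total / 2**40:.2f} TiB")
```

Swap the instances for truly unique atoms and the same scene jumps from megabytes to terabytes, which is exactly the data management wall.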

Something similar is done in this video by the GigaVoxels guys:

In this case you see a fractal volume emerging from the repetition of the same blocks, but it is essentially the same thing the Unlimited Detail guys have done. Impressive, yes, but there are no real applications for it at this stage.

Here is another example from NVIDIA research:

Also, the classic Jon Olick SVO demo:

And then you have the very promising Atomontage Engine:

None of these people claim they have revolutionized graphics, or that they have made it 100,000 times better. Why? They know better. The problems left to tackle are still too big; we are still generations of hardware away.

You see, for many years we have known of this Prophecy. There is this One Engine that will come and replace polygons forever. And there is no question this engine will come, maybe in just a few years. Meanwhile, whoever claims to be this Messiah will get a lot of attention. As usual, the Messiah scenario has only three possible resolutions:

1. It is really the Messiah
2. It is crazy
3. It is just plain dishonest

I'm not sure about anything, but I have a very strong feeling about which one to pick.


  1. Amused to see C. S. Lewis referenced here!

  2. I agree it is nice at what it does, but they overhyped it so badly. I guess they did that to receive the grants:

    I'm glad that this kind of development can get some grants too. Though the money would probably have been better spent developing an engine like yours!

  3. It looks like they've taken the NVIDIA demo, added instancing code and run it on a very powerful machine. The demo even converts from polygons to voxels for you.

    Notice how all the instances are square too, presumably so they line up with the octree nicely.

    The thing is, they have probably done some interesting work in there; it's a shame it's masked in hyperbole.

    Still, I hadn't seen the Atomontage engine before, I'll have to look it up :)

  4. Your view seems to match up with that of Notch of Minecraft fame as well; he did a blog post on it too.


  5. Why pick just one? Crazy and dishonest are both valid choices.

  6. @Alluvian: Maybe, but then remember Catch-22

  7. The big problem I see with the UD engine is not especially that they're instancing. This is done with polygons too – it's tiling a texture again and again – and there are some good tricks to hide it, right?

    I think the problem is that they think it's enough _just_ to have voxels / sparse volumes. This seems like a big mistake. Polygons and texels are popular because the texels give you the detail, and the polygons give you control for animation and modelling. Carrying this over to voxels, you still need an underlying geometry (a hyper-polygon, if you like) that lets you distort the voxels.

    If they had that, you probably wouldn't notice the instancing because the ground would look convincingly smooth and continuous – you could join the volume patches up.

    They're going to need that for animation too – to be able to articulate a volumetric model.

    At the moment, I don't think they're even able to rotate / affine transform their models, let alone warp them to allow for deformation.
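For what it's worth, the affine part of this comment's point is just per-point arithmetic. A toy sketch (pure Python, names and data invented for illustration) of rotating and translating a point cloud:

```python
import math

# Minimal sketch: an affine transform (rotation about Z plus translation)
# applied to each point of a point cloud. Deformation would replace the
# single fixed matrix with a spatially varying one -- same per-point idea.

def rotate_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transform(points, m, t):
    out = []
    for x, y, z in points:
        out.append((
            m[0][0] * x + m[0][1] * y + m[0][2] * z + t[0],
            m[1][0] * x + m[1][1] * y + m[1][2] * z + t[1],
            m[2][0] * x + m[2][1] * y + m[2][2] * z + t[2],
        ))
    return out

cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
moved = transform(cloud, rotate_z(math.pi / 2), (10.0, 0.0, 0.0))
```

The math is trivial; the catch is that the spatial index used to search the points (an octree, say) is built in model space, so a renderer that bakes axis alignment into its traversal cannot simply transform the points without rebuilding or working around that structure.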

  8. I can agree that there is quite an amount of hyperbole to this, but consider the background:
    There is this one guy who worked sixteen years on an algorithm he claims outperforms any known algorithm (no other algorithm, not even SPVO, can do what he shows at the claimed framerate and hardware requirements). And he is no academic, so he does not use the kind of modesty and formality that is imperative in academic publications.
    If what he claims is true, then it is still an outstanding engineering effort, even if it is of limited use (he says he uses no division, no multiplication and no float math). Furthermore, he claims to be able to do dynamic scenes, dynamic lights etc., though that remains to be demonstrated. (It actually has been shown in some older videos.)
    I don't think it is justified to claim that this is a hoax, or that it is an existing voxel approach such as SPVO - there really is no way to tell atm. Also, I don't think it is "classic" voxels at all, but rather point clouds (as claimed), which are trivial to deform and animate - but that is speculation.
    We'll just have to wait and see to judge the practicality of all this, but I don't think the guy deserves all the shit he gets.

  9. I do not think it is a hoax or a scam. The fact that this was cooked inside a little bubble for so long, without contact with academia or the graphics industry, or even exposure to how science and technology are usually developed, may explain their odd behavior. This is very refreshing. We should care only about the facts.

    The facts are they have not shown anything that has not been done before. To anyone working in this field, the challenges are in areas Euclideon goes to great lengths to avoid even mentioning. It is not about animation and dynamic lighting; even as fully static, pre-lit world scenery their tech would be groundbreaking. It is about bandwidth and data management. Anyone in the industry could do what they have right now, but the bandwidth and memory issues are like a brick wall at this time.

    So we all know there are severe limitations to this technology. They know it too, but they don't even talk about it. That is not a lack of academic modesty. It is used-car salesmanship. You may be fine with it; some other people won't be.

  10. They claim they are running the demo on a single CPU core at 20FPS at 1024x768. No one has ever demonstrated this for SPVOs or any other algorithm, which usually use the GPU. So if what they claim is true, then it hasn't been done before.
    We simply cannot know what the limitations to this approach are because it has not been published and documented.
    Also, I do not see how bandwidth would be an issue, or what kind of data management problems you expect - if you could elaborate on that, please do.
    In my opinion, it is wisest to simply wait and see what comes of it, instead of calling it 'baloney' ahead of time. Also, at this point in time they are not selling anything, so I wouldn't hold it against them that they are not elaborating on what will work and what won't (they might not be able to tell yet themselves).

  11. 20fps in a software renderer is not that impressive. Like some pieces on the demoscene circuit, you can have an eye-popping demo that does things you thought were not possible. Demos manage it because they exploit a few tricks to the maximum while working around the limitations. But they are far from being a general-purpose engine. You could not do a game with them.

    Also, lack of GPU support is a bad sign. There is no guarantee that Euclideon's approach will translate into the GPU architecture. Chances are they need to start from scratch if they really want to exploit the GPU.

    As you say, we cannot know the limitations to Euclideon's approach because nothing has been published. But the same applies to the advantages they claim.

    One thing we know for sure: they intend this tech to run in the real world. The closest technology to what they claim to do is id's new Tech 5. Tech 5 allows unique texturing, which means each "atom" in the virtual world has its color stored independently. They use polygons to store the location of these atoms; if you think about it, a polygon is a very efficient way to store a set of points that lie in the same plane. To store the colors they use some top-notch compression they licensed from Microsoft.

    Even with all this compression and the old-fashioned polygonal look, it took them five years to overcome the bandwidth issues creeping in everywhere. There are bottlenecks reading from CD or Blu-ray, then the hard drive, then RAM, and eventually into GPU memory. They are just releasing their first game, Rage. Watch Carmack's keynote about Rage and you will hear it from him: the most difficult part of having so much detail is how you manage the data and move it around.

    So bandwidth is the killer. Anyone can have something nice as long as it all fits in the memory pockets you have. The way you solve it is by devising superior compression, and also by smart data management, like the layers of caches Tech 5 features.

    Euclideon claims they have beaten the look of polygons. What they don't claim is that they have beaten them as a compression method, or that they have devised an unheard-of data management system. Their demo, with its high levels of instancing, points to the real issue: they could not move all this data if it were really unique. They have not shown a solution to this issue.

    They say it is because they lack an artist. Well, it is very simple to obtain large sets of data. You can get high-resolution DEMs for free. If their tech is able to show individual atoms as they say, they could get a one-meter-resolution DEM of Australia and fly over it. Or get Mount Everest. No artist is required. It would take as little as that to flip the whole scientific community in their favor.

    What they have right now is not better than polygons. They are as far from it as anyone else. Saying they have revolutionized graphics when they haven't is baloney.
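A back-of-envelope calculation makes the bandwidth point concrete. All the numbers here are assumptions for illustration, not anything Euclideon has published:

```python
# Back-of-envelope: what truly unique "atoms" would cost to store.
# Take just the visible surface of a 1 km x 1 km island, at an assumed
# density of 4 atoms per square millimeter, 8 bytes per atom after
# aggressive compression (both figures invented for illustration).

AREA_MM2 = 1_000_000 * 1_000_000   # 1 km^2 expressed in mm^2
ATOMS_PER_MM2 = 4                   # surface density (assumed)
BYTES_PER_ATOM = 8                  # compressed position + color (assumed)

total_bytes = AREA_MM2 * ATOMS_PER_MM2 * BYTES_PER_ATOM
print(f"{total_bytes / 2**40:.1f} TiB for one static, unique surface")
```

That is roughly 29 TiB for a single square kilometer at that density - which is why instancing, or very serious compression and streaming, is not optional.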

  12. If you watch the video interview, you will see him explain that they have not revolutionized data compaction or anything.
    I stand by my claim that no one has so far shown a voxel or point cloud renderer this fast, not even in the demoscene; if you want to refute that by providing some references, please do.
    Even if their algorithm doesn't fit well on the GPU, it could still be integrated if it outputs normal and depth buffers that can be used by pixel shaders.
    You say that the fact they are instancing is not a problem, yet now you hold it against them that they cannot display insanely huge (or unlimited) datasets - which they never claimed they can. (Also, I think their tree model, for example, shows fairly impressive detail.)
    "Their demo with high levels of instancing points to the real issue, they could not move all this data if it was really unique. They have not shown to solve this issue."
    They have not claimed to have solved this; neither have polygon or voxel approaches, and it isn't necessary to solve it in the first place for most applications.

  13. I failed to explain myself, you got some of my points backwards.

    Yes, they say they did not make any progress on compression and data management. This is exactly my point: that is absolutely needed for their tech to be useful. These are the real issues the industry needs to tackle if you want to use this kind of tech in a game. Rendering point clouds or voxels of a few very detailed models that fit in your memory is not a big deal. Many people can do it; they just don't see the point, or they don't make a fuss about it.

    There are many efficient voxel renderers that use the GPU. I cannot provide any links to one that uses the CPU only, maybe because that is not very interesting. Eventually you want to run on the GPU, so everyone starts there, and this is where the results are. And they are quite fast.

    I also meant to say their reliance on instancing IS the problem. Instancing is a cheap form of compression, which is OK if you are honest about it, but it is not really unique information and it really limits what you can achieve in a game. You could not have something like Crysis powered by the type of instancing they have shown.

    If they want this to be used in a game, they need a way to display a large dataset. The fact remains they have not shown this on any demo so far. They have dodged this question even when asked directly.

    But going back to the Messiah scenario I described in the original post: you get to have Faith. If you want to believe and choose to believe, you have my respect, and we should not be going over details here.

  14. Actually, it is a big deal. The reason you cannot show SPVOs running on the CPU at interactive rates is that it doesn't exist. SPVOs aren't really that efficient; you need a powerful GPU to run them at interactive rates. You don't run on the GPU because it's interesting, but because it makes things fast.
    If we could raytrace all our static geometry and have a huge amount of instances and detail and still have room to the top for our dynamic geometry, don't you think we would do it? It's just that we can't. Raytracing isn't efficient enough on current hardware.
    I don't think it is the Messiah, and I don't know how practical it will be, but I damn well know that if what they say is true, then what they have demonstrated is an amazing result that simply hasn't been achieved before - and I do have my fair share of experience in CG.

  15. You want to do rendering on the graphics card because it is what makes sense. If your CPU is busy with rendering, where are you going to run the game logic, like AI - on the GPU? It is a cul-de-sac, unless you have a CPU agenda like Intel.

    Check out this article that was suggested to me by someone who is convinced about the power of CPUs: http://www.cs.utah.edu/~knolla/ovrc.pdf

    The results are true. For "search" algorithms like ray tracing and volume rendering, today's CPUs beat GPUs. This particular case does not use SPVO, but who says that is the only way to index voxel and point data?

    I grant you Euclideon's technology is very nice, even if it cannot be used to make a game and does not do what they claim it does. I just wish they were more serious about it.

  16. Arguably, today's multi-core CPUs are under-utilized by games, and in many cases the GPU remains the bottleneck (and if the CPU is the bottleneck, it is often because rendering is single-threaded, something that is only just now changing).
    They stated that they definitely want to go to the GPU, but they aren't there just yet, and I don't know how efficiently the algorithm can run on modern GPU architecture.
    Also, while many forms of raytracing are not as efficient on GPU architecture as on a CPU, a powerful GPU will still outperform a CPU, often by magnitudes - this is exemplified by SPVO. It is rather the memory architecture that causes headaches here.
    I picked SPVOs as example (of course it is not the only way) because they are one of the most efficient and promising ways to raytrace the kind of datasets that could be useful for static game environments.
    How useful the UD algorithm will turn out to be remains to be seen; I merely disagree with the notion that this has all been done before. Also, I think many of the claims Euclideon is being attacked for haven't ever really been made. It is the observer who takes the bold talk and turns it into outrageous claims in his perception. I do not know the name for this psychological phenomenon, but I keep observing it, especially in politics ;)

  17. The point remains there are many GPU voxel renderers faster than what Euclideon can do. If they are breaking any records, it is in the kiddie league. If I'm making a game, I want the best framerate I can get; I don't care how nice they may be for a CPU-only solution.

    This psychological phenomenon you mention probably works both ways. When someone makes outrageous claims the observer may tone them down to something agreeable.

    There is a reason why they got so much attention; it is not like we collectively misfired. If their message were only about a fast CPU voxel renderer that relies on a lot of instancing to fill up space and cannot be used in games, they would remain largely unknown.

  18. Kiddie league, eh. The point was that if I can use one CPU core for the static geometry, I have my whole GPU free to do whatever else I need. Those "many" renderers might achieve more FPS - but they also use up all of the GPU.
    I don't think it is reasonable to assume that this cannot be used one way or another in games; it's just another approach to solving a particular problem.
    But you're right, they would not be as known today had their initial demo video not provided enough flamebait to start a major internet controversy...

  19. Yes, I also thought about a hybrid system like the one you described while I was putting down their CPU efforts :)

    That would work if they can get 30 fps or more out of a single core. At this point we do not know whether their heavy instancing also plays a role in their FPS. There is a chance whatever tricks they use are coupled to the instancing. The FPS may drop once they start using really unique data, assuming they can move it fast enough.

    Anyway, this is such a waste of time. It feels like psychoanalyzing a dead guy. It could be so easy for them and for all of us: showing the same island's worth of space, but with no instancing, is all they need to do to be taken seriously.

    I could generate a few terabytes of unique terrain, architecture and vegetation for them. I would deliver it in any "atom" format they like. For only 20K USD, because I want to help. You cannot get a deal like that anywhere... Euclideon, are you reading this?

  20. I already take them seriously. The Objects they have shown are already sufficiently detailed to make it worthwhile for a couple of use cases, even if they are instanced all over the place.
    It looks good.
    I don't see the need for gigabytes of unique data to make this interesting, and they have never claimed that they're able to handle huge datasets.

  21. "The Objects they have shown are already sufficiently detailed to make it worthwhile for a couple of use cases, even if they are instanced all over the place. I don't see the need for gigabytes of unique data to make this interesting"

    Could you point to a couple of existing games that would be possible with this kind of technology?

  22. That is kind of a trick question. Of course, relying SOLELY on the features that were demonstrated will not be enough for any game; it would have to be combined with other things, likely polygon-based.
    If they manage to output a normal and depth buffer, this could be used to replace most static geometry in current games (as could SPVO).
    Apart from that, geometry that is statically unidirectionally lit could be useful for tiles in RTS or RPG games.

  23. No, I mean even if they used polygons for characters and dynamic objects.

    What they have cannot replace static geometry in current games. To have terrain, or sections of a city, or an island, they would need to handle massive datasets like Rage does. The only way to avoid this is to use on-the-fly procedural generation to supply the detail. As Carmack said, this is just crappy compression and it will show.

    For tiles in RTS it may work, but this is a type of game where you don't need that much detail.

  24. Alright, let's leave RAGE's approach to the side, because no other game does it that way.
    The large terrain in games isn't a particularly large dataset. It is either a plane displaced by a heightmap and tessellated, or a bunch of large, cleverly reduced polygons. In either case it is textured by several blended layers of repeating textures, with maybe some decals on top. But to break it up, it is littered with many, many repeating elements like rocks, trees and foliage. Most other elements of a game level are made up from modular, repeating elements, simply because that saves artist hours.
    This is true even for RAGE, but in order to break up some of the repetition they bring in an army of "stampers" (as Carmack calls them) to add some unique texturing to each element. However, texel density in RAGE isn't particularly high (though the dataset is massive), and it makes me ask whether there can't be another way to bring in variation in texturing. (After all, the stampers use the same textures over and over again, too.)
    Procedurals might be one way to approach this, and they could be done in the pixel shader too, so it is not limited to any particular kind of surface.
    Also, Carmack never said "This is just crappy compression and it will show".
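The heightmap approach this comment describes is simple enough to sketch. This is a generic illustration with a tiny made-up heightmap, not any particular engine's code:

```python
# Sketch of heightmap terrain: a flat grid of vertices, displaced
# vertically by sampling a heightmap. A real heightmap is a texture;
# this 2x2 list of heights is invented for illustration.

heightmap = [
    [0.0, 1.0],
    [2.0, 3.0],
]

def terrain_vertices(heightmap, cell_size=1.0):
    """Return (x, height, z) vertices for a grid displaced by the map."""
    verts = []
    for j, row in enumerate(heightmap):
        for i, h in enumerate(row):
            verts.append((i * cell_size, h, j * cell_size))
    return verts

verts = terrain_vertices(heightmap, cell_size=10.0)
```

The storage cost is one height per grid point, which is why a huge terrain "isn't a particularly large dataset": all the horizontal structure is implicit in the grid.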

  25. As you say, most elements in modern games are made up from modular components. That lamp, trash can or chair you find many times over in a level is just an instance of a stock item. But this instancing occurs at design time; it is done by an artist. The polygons that make up these objects are actually cloned, with different positions, orientations and properties like texture coordinates, baked light, etc. The same applies to other pieces of the level that originally were stock items, like windows, doors and columns, but are now part of the watertight mesh that makes up the level.

    In games that feature interesting terrain, the terrain is no different from any other part of the level. Even some vegetation elements like tree trunks are modeled with independent polygons, no matter if they came out of SpeedTree.

    So right now game levels feature unique geometry for most of what you see on screen. This has been possible for a while now, and makes for interesting levels.

    If you wanted to replace this unique geometry with voxels, point clouds, atoms - whatever you may call them - you would end up with large datasets. Polygons were just a way to compress them. Since you are not using polygons anymore, you will need to find another way.

    Why should we keep talking about Rage? They have made texturing unique. It is an evolutionary step towards what voxels will do for us.

    When we say voxels are the future, it is because they will allow us to keep very detailed unique geometry and color in the same structure.

    Any approach that makes us lose either unique texturing or unique geometry is a step back.

  26. While you can bring in some variation through modified UV maps or vertex color blending, I would say this is still the exception, not the rule, for everything but larger structures. (And this isn't ruled out with UD.)
    Levels usually are to a large degree made up from identical, repeating elements; some use lightmaps and some don't. If you really want to consider this "unique geometry" you easily can, because the geometry will likely be "baked" and batched in an automated process, simply because instancing isn't efficient in rasterization and every draw call is expensive. But for the level designer it certainly isn't unique: he moves around instances, building blocks he gets from the modelers.
    As far as RAGE goes, it does not really have unique texturing either. All RAGE does is bake the not-so-significant entropy of all the artists' stamping strokes into one large file so that it can be rendered fast. And yet, despite the massive megatexture size, without repetition or detail maps the texel density of RAGE levels is fairly low up close.
    Calling this kind of geometry or texturing unique is about as valid as calling Euclideon's tech "unlimited" ;)
    We will, for the time being, certainly rely on repetition for practical reasons; we just need to avoid or break up patterns.

  27. Unique in this case does not mean it appears unique to the human eye. Unique means it is stored separately. In Rage, every texel is stored; it does not rely on instancing, tiling, etc. It does not matter how the texels came to be. They could all be white; still, each one of them is stored and mapped individually to geometry.

    The same applies to unique geometry in levels. Forget about Rage now. If you take a Quake 3 level, most of the geometry is unique.

    If the Euclideon guys wanted to render a Quake 3 level, they would need to deal with this large-scale, mostly low-frequency uniqueness that polygons are able to compress so well. On top of that they would need high-frequency elements, which in Quake 3 are not unique because they use traditional textures.

    Maybe they have a solution for that which does not require baking everything into a huge dataset. But they have not shown anything like that, and I guess that is not the way they want to go. Creating this is even more challenging than solving the bandwidth and data management issues of a unique atom solution.

    The fact remains what they have now cannot do a Quake 3 level.

  28. The reason the geometry is unique and not modular in a Quake 3 level is that back in those days the levels were created by carving them with CSG and applying the textures directly, a workflow which has long since been abandoned.
    That you bring up Quake 3 of all games, as if not being able to reproduce that particular kind of geometry (not visuals - those could most likely be reproduced) made the tech seem prehistoric, is really begging the question.
    Indeed the big flat polygon with UV parametrization has a storage advantage here, but this isn't the case we need to worry about today - and if you need them, by all means use them. One does not exclude the other.

  29. My point is that all games of today (and yesterday) have a lot of unique geometry covering large spaces. Euclideon cannot do that with their tech today.

    You say Rage is too new and Quake 3 is too old. Take a look at the CryEngine SDK sample:


    Most of what you see here is unique geometry.

    Also, the trend is to have as much unique geometry as possible, since it helps designers too. Look at what Lionhead is doing with their Mega Meshes.


    If Euclideon wants to make something useful out of this demo, they need to go into unique geometry too.

    Regarding a hybrid system where polygons are used for larger structures but "atoms" are used for instanced detail like grains of dirt or elephant statues: I don't think anyone wants to go that way. I would rather keep using polygons all over and wait until voxels really have their day.

  30. I didn't say these games were too old or too new, but they are bad examples, because both do things differently from what most modern games do.
    I'm glad you picked out the CryEngine example because it is a good example to prove my point, and you can check it out yourself by downloading the SDK:
    Most of the geometry you see is NOT unique. The rocks, the foliage and the trees are all instances of a handful of meshes, without varying textures. You will see dozens of identical trees right next to each other. They are not baked into one big mesh - for a good reason: they all have LOD levels, so that that large an amount of meshes is even displayable. There are a few assets that are used only once in the scene, such as the lighthouse or some of the houses, but they account for a small part of the polycount. The terrain is generated on the fly from textures.
    You keep shifting the point of the debate. I said it could be useful for parts of current games; now you say it has to do unique geometry to be useful, something that no current game does.
    You say you don't think anyone wants to go a hybrid way, but don't give a reason. I can tell you that I would want to do it that way, so that's someone.

  31. Yes, you are right. Rocks and trees in that sample are not unique. Still, there are very large unique elements there: the houses, the ruins, the lighthouse, the dock...

    I agree with you that UD could be used for parts of a game scene. We also agree this will only be for whatever is not unique geometry. Where we don't agree is on the amount of unique geometry that is required or present in games.

    Another point of debate was the usefulness of a voxel-polygon hybrid for the static world scenery. I don't have a real argument against it. I just don't find it worth the effort when we could be having unique geometry like Mega Meshes.

    So it is all about unique geometry. You say it does not matter, I say it does. We probably won't get past that point.

  32. Keep in mind those houses or the dock could be made out of individual (but instanced) bricks, planks, tiles and shingles if it wasn't for the strict polycount limit. Even if they are separate assets (and in that sense unique geometry), the roofs on the houses are for the most part identical, and the planks are the same identical elements over and over.
    Repetitive modular elements are not just a technical consideration, artists cannot produce that many unique elements. There are techniques to bring in variation but they are not necessarily ruled out with UD.
    It is also interesting that in the Mega Meshes demo they used (instanced, repetitive, non-unique) billboards for foliage - one might think that if they can render 100 billion polygons, they would have some left over for individual leaves. One answer might be that small or subpixel-size polygons cause heavy aliasing (something that is observable in your work too).
    I didn't say that unique geometry doesn't matter (or did I?), but I don't think that huge datasets are the right approach. It's always a choice of drawbacks: megatextures are easy on the artists, because all they need to do is stamp the variation in with simple tools (one does not even need a trained artist for that), so in the end you have semi-unique texturing, but your texel density isn't all that high even though you have to ship gigantic datasets. Bringing in variation through multiple UVs, texture blending and detail maps is quite a bit more involved for the artist, and most likely more computationally expensive, but the resulting assets also have a unique look to them and texel density is much higher.
    The pragmatic approach is to simply pick from the available tools what seems best for the job (and budget) at hand, and I for one would welcome UD in my toolkit. And yes, in making those choices I will always be someone who picks the modular approach over the Michelangelo-esque "my world is a huge painting" approach.

  33. Benhohn: That's the answer I was waiting for; your analysis is nearly brilliant. I was about to write something similar, but you have explained it better. Yes, there can be repetition plus procedural generation over a polygon that distorts the geometry, and elementary shapes and cosine approximations can be used and reused, along with noise "overtones" added to them (you know, like a sound wave made of increasingly smaller waves). These can all be combined to make very complex shapes with a minimal amount of memory and RAM usage. Obviously you can tell the code to let the RAM fill up, so to speak, to speed things up and spare processing power for AI and whatnot, especially for the most procedural generations.
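The "overtones" idea in this last comment is essentially fractal noise: sum a base function at doubling frequencies and halving amplitudes. A toy sketch with a cosine base (a real engine would use Perlin or simplex noise as the base function):

```python
import math

# Sketch of the "noise overtones" idea: sum several octaves of a cheap
# periodic function, each octave at double the frequency and half the
# amplitude of the previous one. The cosine base is for illustration only.

def octave_height(x, octaves=4):
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * math.cos(x * frequency)
        amplitude *= 0.5   # each finer octave contributes less...
        frequency *= 2.0   # ...at twice the detail
    return total

# Only the function is stored; the height at any point is computed on
# demand, so the memory cost of the "shape" is near zero.
h = octave_height(0.0)
```

This is the "minimal memory" property the comment points at: complexity comes from evaluation, not from stored data, at the cost of CPU/GPU time per sample.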

