I was hoping they would tone down the hype in the future. Instead they named this video "The World's Most Realistic Graphics". I wonder which world they are talking about. On planet Earth, in 2014, pretty much any AAA game looks better and runs faster than this.
I'm not sure why they would go and compare themselves to engines like UE4 when everybody knows UE4 will produce far better looking graphics. Same for CryENGINE and even Unity. It is not enough to say it looks better; it has to look better. Jedi tricks do not work over YouTube.
The "conspiracy of fools" aspect is also interesting. The true sign of genius is to see all fools conspiring against you. Somewhere in this video the narrator points that many experts around the web were very critical of the tech. These are the fools. That they got so many fools to speak against them must surely mean they are geniuses.
Well we know it does not work like that. Sometimes the fools just go against other fools. Being criticized does not make you right. Now in this video they claim they have proven all the fools wrong. That has yet to happen. The burden of proof is on them.
I had some secret hopes for this tech to be applied to games, but the tech gap is growing every year. Let's see what happens next year, when the comet approaches us again.
It doesn't seem like their "3d pixels" are significantly different from Ken Silverman's Voxlap, which could have done the same thing in 2003 given the same data set. And their forest scene at the end didn't look much different from the kind of scanned-in data and level-of-detail system that Google Earth uses for 3d buildings and terrain nowadays, just at a smaller scale. In both examples, the majority of the work has already been done, in the form of the actual lighting in the physical scene they scanned, or rendered from a 3d model if that was the case. So it seems like their tech isn't doing much more than efficiently accessing a large amount of existing data about a 3d scene, which isn't all that much. Would I be right in this observation?
It seems like the Euclideon comet gathers up the dust of existing technology every time it passes us, and comes back each time trailing all that old dust under the illusion of being something new.
On a final note, in their rendering demos, are they using polygons, voxels, or point clouds? Their algorithms for cleaning up laser scanner data may be good, but on the realtime side of things, I'm really not seeing anything that can't already be accomplished with polygons and parallax occlusion mapping. Am I missing something, or are they?
Yes, I would say it is mostly about how much data you can move/display. This is measurable. They could have a significant improvement over anything that came before, but it is hard to know from just videos. I do not know how it works; does anyone know?
I'm fairly sure they don't want to even hint at what their technology is. They'd rather go the "smoke and mirrors" way.
They filed a patent on the tech, found here: https://www.google.com/patents/WO2014043735A1 Basically, they traverse an octree front to back and attempt to project as many nodes as possible using orthogonal projection when the difference between ortho and perspective would be sub-pixel. The tech itself has pretty good performance and quality if it truly is still running on a single core.
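For anyone who finds the patent language hard to follow, here is a very rough sketch of how I read that idea; the structs, the sub-pixel threshold and the skipped front-to-back ordering are my own simplifications, not Euclideon's code:

```cpp
// Rough sketch (my assumptions, not Euclideon's code): traverse the octree
// and, once a node is small enough on screen that perspective and
// orthographic projection differ by less than a pixel, splat its whole
// subtree with cheap halved screen-space offsets instead of re-projecting
// every descendant.
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct Node {
    Vec3     centre;     // world-space centre of this cube
    float    half;       // half edge length
    uint32_t colour;     // colour pre-averaged over the subtree
    Node*    child[8];   // null where the octant is empty
};

struct Camera { Vec3 pos; float focal; int width, height; };

inline Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Perspective projection of a camera-space point (camera looks down +z).
inline void project(const Camera& cam, Vec3 p, float& sx, float& sy) {
    sx = cam.width  * 0.5f + cam.focal * p.x / p.z;
    sy = cam.height * 0.5f - cam.focal * p.y / p.z;
}

inline void splatPixel(std::vector<uint32_t>& fb, const Camera& cam,
                       float sx, float sy, uint32_t colour) {
    int x = (int)sx, y = (int)sy;
    if (x >= 0 && x < cam.width && y >= 0 && y < cam.height)
        fb[y * cam.width + x] = colour;   // no depth test in this toy version
}

// Orthographic regime: every child sits at a fixed screen offset from its
// parent, halved per level, so no more per-node divisions are needed.
void splatOrtho(const Node* n, std::vector<uint32_t>& fb, const Camera& cam,
                float sx, float sy, float dx, float dy) {
    if (!n) return;
    if (dx < 0.5f && dy < 0.5f) { splatPixel(fb, cam, sx, sy, n->colour); return; }
    bool leaf = true;
    for (int i = 0; i < 8; ++i) if (n->child[i]) { leaf = false; break; }
    if (leaf) { splatPixel(fb, cam, sx, sy, n->colour); return; }
    for (int i = 0; i < 8; ++i) {
        float ox = (i & 1) ? dx : -dx;    // the z bit (i & 4) is ignored:
        float oy = (i & 2) ? dy : -dy;    // depth no longer moves the pixel
        splatOrtho(n->child[i], fb, cam, sx + 0.5f * ox, sy + 0.5f * oy,
                   0.5f * dx, 0.5f * dy);
    }
}

void render(const Node* n, std::vector<uint32_t>& fb, const Camera& cam) {
    if (!n) return;
    Vec3 p = sub(n->centre, cam.pos);
    if (p.z <= 0.0f) return;              // behind the camera
    float sx, sy;
    project(cam, p, sx, sy);
    // half-size of the node in pixels; a rough proxy for the error made by
    // switching from perspective to orthographic projection below this node
    float step = cam.focal * n->half / p.z;
    if (step < 1.0f) { splatOrtho(n, fb, cam, sx, sy, step, step); return; }
    for (int i = 0; i < 8; ++i)           // proper front-to-back ordering by
        render(n->child[i], fb, cam);     // camera octant omitted here
}
```

The win, if this reading is right, is that the expensive per-node divide only happens near the top of the tree; the vast majority of nodes are splatted with additions alone.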
They had another claim where they said they don't use divisions or floats in their code to render (which turns out to be partially false given the patent). There is a blog where the owner actually made a sample that does perspective-correct projection without ever using divisions or floats. The source is available but the performance wasn't too good. You can find it here: http://bcmpinc.wordpress.com/
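To give a flavour of how a perspective projection can avoid division at all (this is only my own illustration of the principle, not the code from that blog): the question "does this point land left of screen column c?" needs only a multiply and a compare, and a renderer that recursively subdivides the screen only ever needs such comparisons.

```cpp
// The usual projection column = cx + f*x/z needs a divide per point.  The
// comparison "does the point fall left of column c?" does not: for z > 0,
// f*x/z < (c - cx)  <=>  f*x < (c - cx)*z, a pure integer multiply/compare.
// My illustration only, not bcmpinc's actual algorithm.
#include <cstdint>
#include <cstdio>

// true if the camera-space point (x, z), in integer/fixed-point units,
// projects to the left of screen column c (cx = screen centre, f = focal)
bool projectsLeftOf(int64_t x, int64_t z, int64_t c, int64_t cx, int64_t f) {
    // assumes z > 0 (point in front of the camera)
    return f * x < (c - cx) * z;
}

// binary search for the column a point lands in, using comparisons only
int columnOf(int64_t x, int64_t z, int64_t cx, int64_t f, int width) {
    int lo = 0, hi = width;               // candidate columns [lo, hi)
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        if (projectsLeftOf(x, z, mid, cx, f)) hi = mid; else lo = mid;
    }
    return lo;
}

int main() {
    // point at x = 300, z = 1000 (fixed-point world units), 640-wide screen
    std::printf("column = %d\n", columnOf(300, 1000, 320, 640, 640));
    // the divide-based answer would be 320 + 640*300/1000 = 512
}
```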
Thanks for the patent link. Will check it out.
I had a project for a while that used front-to-back octree traversal and used perspective projection on a macro scale and orthogonal projection on a micro scale (developed independently, but apparently quite similar to what Euclideon did). I got the runtime representation of voxels down to 4 bytes/voxel amortized, and had plans to take it down to 1.3 bytes amortized. It'd be pretty easy to add MegaTexture-style streaming data onto that, but I don't think it'd be possible to make a decent sized game in under 100GB without some procedural generation.
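For anyone wondering how a budget that small can work out, here is one possible packing; this is purely my own guess at a layout, not necessarily what was used above. Nodes keep a child bitmask, a 16-bit colour and one relative offset to a contiguous block of children, and the amortized per-voxel figure drops further when the bottom level is represented only as bits in its parent's mask.

```cpp
// A hypothetical 8-bytes-per-node layout (not the commenter's actual format).
// Children of a node are stored contiguously, so one index serves all eight.
#include <bit>       // std::popcount (C++20)
#include <cstdint>

struct PackedNode {                 // 8 bytes per *node*
    uint16_t colour;                // RGB565, pre-averaged over the subtree
    uint8_t  childMask;             // bit i set -> octant i has a child
    uint8_t  flags;                 // e.g. "children are bottom-level voxels"
    uint32_t firstChild;            // index of the first child in one big array
};
static_assert(sizeof(PackedNode) == 8, "packing assumption");

// The child in octant i lives at firstChild + (set mask bits below i).
inline uint32_t childIndex(const PackedNode& n, int octant) {
    uint32_t below = n.childMask & ((1u << octant) - 1u);
    return n.firstChild + std::popcount(below);
}
```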
Performance can get good enough for real-time rendering on the CPU without lighting, but once you start wanting to render the scene from multiple angles per frame (for shadow buffers, etc.) you need a dramatic increase in memory bandwidth.
Branislav Síleš (Atomontage.com) claims to get 1000-2000 FPS with a GPU/CPU hybrid approach. My belief (though I lack evidence to support it) is that he renders chunk-by-chunk on CPU and does caching and perspective correction on the GPU. The CPU updates chunks slowly, and the GPU warps chunks to adjust for small camera angle changes as needed, and keeps rendering asynchronously while the CPU supplies it with chunk updates whenever a chunk is becoming too warped. This makes actual performance comparison difficult, because a slow CPU wouldn't reduce frame rate, but increase the rate of visual artifacts.
Although I have strong opinions from a technical side, as a voxel enthusiast my loyalty lies with whoever brings voxel-based realistic rendering to mainstream gaming first. At the moment it's looking like Euclideon will take the lead, though I'm still really hoping that Ithica (made by Vercus Games, using the Atomontage engine) will actually produce some gameplay footage.
We're getting close to 2/3 completion (Oct 2016) of Ithica. It's around the size of Zelda TP. I'm working on storyboards/audio/cutscenes now. It's also thanks to Miguel, Gavan and others that we were able to get to this point. This post was made years ago, so I hope you get to see this. We're all on Twitter these days. -vercusgames
He clearly seems to believe his own hype though. At 0:58 when he said "like this from the real world", he really expected people not to see that it was a 3D rendering. I (and probably most people whose eyes are trained on 3D graphics) immediately noticed the hallmarks of a 3D scan though: the blobbiness, the static reflections, etc. Of course he never even mentions that they clearly can't do dynamic lighting.
His talk about 3D scanning being "full of holes" (mentioning "of course you could increase their resolution by putting 200 points between each point") is just completely idiotic. As if no one has heard of interpolation before. He's not only promising the unreal, he's also claiming credit for basic tech.
The 3D scans are definitely very impressive, though I suspect the credit for that should probably go more to their partners Zoller-Fröhlich than to Euclideon.
While they actually seem to be doing some good work, the video is just so full of bullshit that it just ruins their credibility.
Yes, I also do not get why the focus on games. The tech could suck at making games, but be excellent at many other things for even bigger markets.
Exactly.
Dynamic lighting would really be one of the biggest issues for games I think (and something he never even mentions in the video, probably hoping no-one will notice).
First you would have to somehow strip away all the natural light in the scene, probably using powerful lights when scanning, maybe multiple takes under different lighting, and then write some tricky algorithms to sort it all out, and you'd still never get it perfect.
And once you're done with that, it's time to add it all back in again! First you'll need to get data on the specularity of everything, which would require a whole other scanning technology, probably using some kind of moving scanner.
There have been decades of research done on polygon graphics; now you need to come up with completely new algorithms that can give similar results and performance, but for your voxels. Also, lighting requires normals, and computing them from a point cloud is going to be an imperfect process.
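(For the curious: the standard way to get normals out of a raw point cloud is a PCA plane fit over each point's local neighbourhood, roughly like the sketch below. This is my own brute-force version, and it is exactly the kind of estimate that stays imperfect on noisy scan data.)

```cpp
// Estimate the normal at pts[i] from its k nearest neighbours: build the
// 3x3 covariance of the neighbourhood and take the eigenvector of the
// smallest eigenvalue.  Brute-force neighbour search purely for clarity;
// real code uses a k-d tree and a proper eigen solver.
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct P3 { double x, y, z; };

P3 estimateNormal(const std::vector<P3>& pts, std::size_t i, std::size_t k = 16) {
    std::vector<std::pair<double, std::size_t>> d;   // (squared distance, index)
    for (std::size_t j = 0; j < pts.size(); ++j) {
        double dx = pts[j].x - pts[i].x, dy = pts[j].y - pts[i].y, dz = pts[j].z - pts[i].z;
        d.push_back({dx * dx + dy * dy + dz * dz, j});
    }
    std::sort(d.begin(), d.end());
    std::size_t m = std::min(k, d.size());

    // centroid and 3x3 covariance of the neighbourhood
    double cx = 0, cy = 0, cz = 0;
    for (std::size_t a = 0; a < m; ++a) {
        const P3& q = pts[d[a].second];
        cx += q.x; cy += q.y; cz += q.z;
    }
    cx /= m; cy /= m; cz /= m;
    double C[3][3] = {};
    for (std::size_t a = 0; a < m; ++a) {
        const P3& q = pts[d[a].second];
        double v[3] = {q.x - cx, q.y - cy, q.z - cz};
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c) C[r][c] += v[r] * v[c];
    }

    // Power-iterate M = trace(C)*I - C: since covariance eigenvalues are all
    // non-negative, its dominant eigenvector is C's smallest-eigenvalue
    // direction, i.e. the plane normal.
    double t = C[0][0] + C[1][1] + C[2][2];
    double M[3][3];
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c) M[r][c] = (r == c ? t : 0.0) - C[r][c];
    double n[3] = {1.0, 1.0, 1.0};
    for (int it = 0; it < 64; ++it) {
        double w[3] = {0, 0, 0};
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c) w[r] += M[r][c] * n[c];
        double len = std::sqrt(w[0] * w[0] + w[1] * w[1] + w[2] * w[2]);
        if (len < 1e-20) break;              // degenerate neighbourhood
        for (int r = 0; r < 3; ++r) n[r] = w[r] / len;
    }
    return {n[0], n[1], n[2]};  // sign is ambiguous; flip it toward the scanner
}
```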
Lighting is usually what takes up the majority of rendering time these days, so even if you manage to achieve the same performance as modern polygon engines, you'll still be running several times slower than your original natural light version.
Of course, if you just keep to virtual tours, you can forget about all that hassle, and use the perfect natural light that you get for free!
Yeah, I also thought: they have finally found their niche in this geospatial industry (which I think is a perfect match for them), so quit telling us how this is the new shit in games gfx!
He seems to underestimate all the tech in Unity, CE, UE & Co that actually makes up "the game", and all the effort that went into pipelines over dozens of years.
Ok.. I'll now get back to my puny polygons ;]
I have read a few papers on how material properties like normals, colour, specularity etc. could be obtained by scanning the same scene with different predefined illumination from a fixed camera location. The methods were able to extract normals and an approximated BRDF in real time, so something like that is at least possible.
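The core step such methods build on is classic photometric stereo. A minimal per-pixel sketch of just that step, under a Lambertian assumption with exactly three known lights (my own toy code; real methods use many more lights, robust fitting and a full BRDF model):

```cpp
// Per pixel: I_k = albedo * dot(L_k, n) for three known light directions L_k
// gives a 3x3 linear system whose solution yields both the normal n and the
// diffuse albedo.  Illustration only, not the papers' full pipeline.
#include <cmath>
#include <cstdio>

struct V3 { double x, y, z; };

double det3(V3 a, V3 b, V3 c) {
    return a.x * (b.y * c.z - b.z * c.y)
         - a.y * (b.x * c.z - b.z * c.x)
         + a.z * (b.x * c.y - b.y * c.x);
}

V3 replaceCol(V3 row, int col, double v) {
    if (col == 0) row.x = v; else if (col == 1) row.y = v; else row.z = v;
    return row;
}

// Returns false if the light directions are (nearly) coplanar.
bool normalAndAlbedo(const V3 L[3], const double I[3], V3& n, double& albedo) {
    double det = det3(L[0], L[1], L[2]);
    if (std::fabs(det) < 1e-12) return false;
    V3 g;   // g = albedo * n, solved with Cramer's rule
    g.x = det3(replaceCol(L[0], 0, I[0]), replaceCol(L[1], 0, I[1]),
               replaceCol(L[2], 0, I[2])) / det;
    g.y = det3(replaceCol(L[0], 1, I[0]), replaceCol(L[1], 1, I[1]),
               replaceCol(L[2], 1, I[2])) / det;
    g.z = det3(replaceCol(L[0], 2, I[0]), replaceCol(L[1], 2, I[1]),
               replaceCol(L[2], 2, I[2])) / det;
    albedo = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (albedo < 1e-12) return false;
    n = {g.x / albedo, g.y / albedo, g.z / albedo};
    return true;
}

int main() {
    // a surface with normal (0,0,1) and albedo 0.8, lit from three directions
    V3 L[3] = {{0, 0, 1}, {0.707, 0, 0.707}, {0, 0.707, 0.707}};
    double I[3] = {0.8, 0.8 * 0.707, 0.8 * 0.707};
    V3 n; double a;
    if (normalAndAlbedo(L, I, n, a))        // expect n ~ (0,0,1), albedo ~ 0.8
        std::printf("n = (%.2f, %.2f, %.2f), albedo = %.2f\n", n.x, n.y, n.z, a);
}
```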
I think the difficult part is how to illuminate big scenes with a predefined intensity from predefined angles. And the extracted BRDFs only work when using an unaltered scene, since the data for each point already contains the indirect lighting from the rest of the scanned scene.
So even having such data would not be useful for games, as the indirect lighting effects would have to be eliminated and reconstructed in real time.
I suspect that Euclideon will never achieve something comparable, as they seem to ignore that they are missing everything except the most trivial of those problems (and even though they seem to be good at that one, it is an old problem which already had working solutions ages ago).
There is a good reason why CG-companies make millions of dollars even when their technology requires seconds to minutes to render a single frame.
Do we know this church?
Also notice that the two details are different shapes. They may indeed be fake, but on the part of the church, not the 3d data. It is not uncommon for architectural details of that sort to be painted on.
Looks like it's St Martin's Catholic church in Wangen im Allgäu, Germany.
If you were to use bright red/green/blue lasers, you could acquire voxel colour data that is almost independent of local light sources. Then algorithms and multiple scanning locations should be able to provide reflectivity and translucency info. So full dynamic lighting should be possible.
But those lasers would have to be shot from multiple positions, as the point cloud has too low a resolution to extract the normal from it, and without a normal or enough sample points it is impossible to extract diffuse colour and specularity from the reflectivity value. And even then the normal would be needed for rendering. I think that even a very rough approximation of those values requires at least 4x4 different directions for the incoming light.
But your suggestion has one advantage. If a full image were taken for each sample point, it would be theoretically possible to extract exact values for indirect lighting. At least once one finds a way to process the vast amount of data.
You would still need to scan from multiple locations to make sense of the translucency/reflectivity data.
Those details are painted in the church, yeah. You can see some in this photo: http://upload.wikimedia.org/wikipedia/commons/b/be/St_Martin_Wangen_im_Allgaeu.JPG
As for the tech...well it would be nice to see something other than a static scene. But nope, just the same old stuff: nothing that hasn't been done with other tech.
Bruce tends to overhype his achievements, but your criticism of the technology is largely based on a fear that your own engine may soon be made obsolete.
No, it's based on dislike of his exaggeration and arrogant tone in his videos.
Other people who have released impressive voxel tech have been heralded with awe and esteem, but Bruce Dell is just so patronizing that we can't help but try to find reasons to disapprove of what he has to say.
I wish he'd get someone else to do the voiceovers. Maybe then we could take his tech seriously.
Wait there.
My work here is about systems that help you create more content and also bring lots of people to work together on that content. How we get this content on display is not relevant.
We actually suggest that people use professional-grade rendering (like UE4) to get our content on screen. If someone had a better method we would be suggesting that. If Euclideon's engine could really be used for games it would be a nice fit, as the content our engine produces is already voxel.
Actually Euclideon's take on the problems we do try to solve is that we should laser scan game worlds. (Go tell that to the art team building Coruscant for the next Star Wars game.)
What I find interesting here is the human psychology aspect of those who choose to believe the hype.
-@Anonymous
I don't know this Bruce, but in my opinion this is not objective enough for a tech video. The technology does look great, but in the end you can't imagine forcing the world into changing something; the whole point is to adapt. I mean, there are quite a lot of benefits to the system, like the higher detail, better lighting and all that stuff, but it still has a large number of drawbacks that people can't really live with.
I would find it much wiser to actually make a list of all the flaws, come up with solutions to those flaws, and show how you can solve them. Then you would get the attention of a lot of companies, which is the actual root of making the system come to life.
Also, about his engine becoming obsolete, I don't believe that would become a problem. Miguel made wise decisions by adapting to current technologies, and it would still last a lot of years because the world still needs a lot of time to change to a new system. *IF* the Euclideon engine or any other point/voxel standard were to gain popularity, then he could still adapt to the new system, because the problem his engine solves is entirely different from what Euclideon is trying to achieve. His engine is focused on content generation, which it later converts into triangle meshes. He still has the content generation solved from before, so what stops him from adapting to a new display method?
-@Miguel
I have no PhD in psychology for that matter, but it's pretty obvious that a lot of people who are around 20-30 now and lived through the 90s are noticing how the younger generation is forming much more uniform opinions, in most cases due to the media. I might be 20, yet I still notice myself getting dragged into that kind of thinking. Hype is always a thing that will exist and be a big part of the talk among the masses, but in the end, it's the companies which decide what should be used.
I've read his patent and it's very vague, but it also appears to be exactly like Donald Meagher's method from 1984!!
Which is an interesting read:
http://www.google.com/patents/EP0152741A2?cl=en
I would like to believe him about animation, but he said it was ready years ago, and we haven't got a single blade of grass moving yet. And from his patent, I'm not sure it's possible anyway. Maybe he's waiting for computers to get incredibly fast first!
No mirrors. This rendering technique will not work with mirrors, unless he resorts to cube-maps, which just won't do for 'next gen' rendering. :P
Lighting. Other than a very early demo, we haven't seen any lighting lately.
Apart from a massive increase in memory per point, the CPU just won't cope with the mathematics involved to compete with GPU techniques. Could he use standard deferred rendering on GPU? Maybe, but it also may alias like crazy because of competing points.
Let's see now, multiple dynamic light sources with shadows please! And then we can talk.
Well, looking at NVIDIA's push for voxel GI: since he uses an octree, he could potentially replicate the same technique, but instead of storing the lighting data at the leaves, he could store lighting data only for the nodes that he managed to render with his technique. Using something like that, he would only store lighting data for voxels in screen space, plus he doesn't have to convert polygons to voxels since that is already done for him. Then he could proceed with the standard voxel GI technique. There was an interesting paper which described a rendering and lighting method for software point clouds; you can find it here: https://www.graphics.rwth-aachen.de/media/papers/octree1.pdf It would calculate the lighting info for a sphere and then, since that covers all the possible normals, the voxels with a given normal would just take the lighting data on the sphere for that normal (if I recall correctly).
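If I understood that sphere trick correctly, it boils down to a lighting table indexed by quantised normal, something like this little sketch (my own names, a single directional light, and a lat-long grid standing in for whatever parameterisation the paper actually uses):

```cpp
// "Light the sphere once, then look lighting up by normal": precompute a
// diffuse term for every direction on a lat-long grid; per voxel, quantise
// its normal and read one table entry.  Illustration only.
#include <algorithm>
#include <cmath>
#include <vector>

struct Dir { float x, y, z; };            // assumed unit length

class SphereLightTable {
public:
    SphereLightTable(int res, Dir lightDir) : res_(res), table_(res * res) {
        for (int i = 0; i < res; ++i)     // latitude (polar angle from +y)
            for (int j = 0; j < res; ++j) {   // longitude
                float theta = kPi * (i + 0.5f) / res;
                float phi   = 2.0f * kPi * (j + 0.5f) / res;
                Dir n = {std::sin(theta) * std::cos(phi),
                         std::cos(theta),
                         std::sin(theta) * std::sin(phi)};
                float d = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
                table_[i * res + j] = d > 0.0f ? d : 0.0f;   // simple diffuse
            }
    }

    // Per-voxel cost: quantise the normal, read one table entry.
    float shade(Dir n) const {
        float theta = std::acos(std::fmax(-1.0f, std::fmin(1.0f, n.y)));
        float phi   = std::atan2(n.z, n.x);
        if (phi < 0.0f) phi += 2.0f * kPi;
        int i = std::min(res_ - 1, (int)(theta / kPi * res_));
        int j = std::min(res_ - 1, (int)(phi / (2.0f * kPi) * res_));
        return table_[i * res_ + j];
    }

private:
    static constexpr float kPi = 3.14159265f;
    int res_;
    std::vector<float> table_;
};
```

The obvious catch is that anything beyond that fixed lighting setup, or any shadowing, invalidates the table, which is presumably where the screen-space-only storage idea above would have to come in.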
For animation, we saw a bit of it in UD in this video a long time ago: https://www.youtube.com/watch?v=cF8A4bsfKH8 There was also some interesting research done into animating octree data: https://www.youtube.com/watch?v=TnUQeoAUEs8 The author has a thesis linked on the video; his method allows all kinds of animation including stretching/deforming, but I'm not sure about performance. He also uses the GPU. You can find him in the description and ask him on Twitter if you like.
I don't think the data is stored in a way for him to usefully use shaders. But I guess there's always OpenCL. Mr Dell has always stated he doesn't use the GPU.
He renders anti-aliased 'atoms' quite well in the demos, presumably using the hierarchy colour information. BUT that's all lost when you get down to lighting, as those atoms have to be lit before averaging the result.
I just can't see hierarchical rotation matrices moving thousands and thousands of pixels, sorry. Not even on a fast GPU using compute shaders can he move and bend the amount of atoms he needs to on a laptop. Imagine animating the trunk of a tree and all the branches and each leaf at the end, and then doing the same for a forest of trees?
I've seen some voxel animation demos and they are really simple; our friend Bruce here has stated before that his animation system will be amazing. And the years roll by... :D
I'm not completely sure how Bruce plans to tackle the issue of animation, but if you ask the guy I linked to, he has some private videos he can link you where he has hundreds or thousands of unique animations running in realtime on one screen. I think the idea is to only animate to the level of detail visible, similar to how Ryse, in its tech slides, reduces the bone count for the face as you get farther away. The guy I linked to specifically does this in shaders with decent success, so I would at least give his thesis a skim.
As for lighting, lighting data only needs to be calculated to the level of detail the screen is displaying. If you do have sub-atomic "atoms" stored for your models and your model is a kilometer away, you don't need to store the lighting for each point but only for the octree nodes visible that frame. Again, it's a proposition, but a lot of the current GI solutions do use voxels in some form to achieve their lighting, which could potentially fit very well with what UD has.
Could you elaborate on the need for so many rotation matrices? You would still only need 1 matrix for the model's orientation and then 1 for each bone in the model. Each bone would have an area or group of points which are assigned to it, and so on. When a bone moves, you only recalculate the nodes down to the level visible on screen. So if it's far away, it might be a simple task of reinserting one node into the octree, and the rest is recursive. It's probably not this simple to code, but I do think it is still feasible. When your tree is 100 meters away, 1080p can't display a good level of detail for each leaf anymore, so having to move the potentially thousands of points for each leaf would be a waste with little visual impact. Only moving the root node of the leaf, however, would produce similar results at that distance while saving on computation.
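In code, the "only animate down to what's visible" idea could look roughly like this. The per-node bone assignment, the matrix layout and the thresholds are my own assumptions, just to make the point concrete:

```cpp
// Walk the model's octree in rest pose; as soon as a node would be sub-pixel
// on screen, transform just that node's centre by its bone matrix and stop,
// instead of transforming every point underneath it.  Sketch only.
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

struct V3 { float x, y, z; };

struct AnimNode {
    V3        centre;      // rest-pose centre
    float     half;        // half edge length
    uint8_t   bone;        // bone this subtree is bound to (assumed rigid)
    uint32_t  colour;
    AnimNode* child[8];
};

struct Bone { float m[3][4]; };            // row-major 3x4 rigid transform

inline V3 apply(const Bone& b, V3 p) {
    return { b.m[0][0]*p.x + b.m[0][1]*p.y + b.m[0][2]*p.z + b.m[0][3],
             b.m[1][0]*p.x + b.m[1][1]*p.y + b.m[1][2]*p.z + b.m[1][3],
             b.m[2][0]*p.x + b.m[2][1]*p.y + b.m[2][2]*p.z + b.m[2][3] };
}

// "emit" stands in for whatever the renderer then does with a posed point.
void animateVisible(const AnimNode* n, const std::vector<Bone>& pose,
                    V3 camPos, float focal,
                    std::vector<std::pair<V3, uint32_t>>& emit) {
    if (!n) return;
    V3 p = apply(pose[n->bone], n->centre);            // posed centre
    float dist = std::sqrt((p.x - camPos.x) * (p.x - camPos.x) +
                           (p.y - camPos.y) * (p.y - camPos.y) +
                           (p.z - camPos.z) * (p.z - camPos.z));
    float pixels = focal * n->half / (dist > 1e-3f ? dist : 1e-3f);
    bool leaf = true;
    for (int i = 0; i < 8; ++i) if (n->child[i]) { leaf = false; break; }
    if (leaf || pixels < 1.0f) {
        // far away or a leaf: one matrix multiply covers the whole subtree
        emit.push_back({p, n->colour});
        return;
    }
    for (int i = 0; i < 8; ++i)
        animateVisible(n->child[i], pose, camPos, focal, emit);
}
```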
Their lack of animation demos does bother me, but I do think the tech has a lot of potential; it just hasn't had as much research put into it as polygons have. For example, voxelization was little discussed until polygon lighting solutions needed it to get better results. Now we have many different papers proposing new methods to achieve some kind of advantage over the others.
Considering they are trying to market dataset-capturing technology, the people behind the game Get Even already did this job better, judging purely from the visual results.
There are many traps to fall into when moving voxel data. For example, if a large object in the distance were rotated towards you, you'd need to make sure the correct number of points were rendered - but, as he's streaming in data depending on distance to the camera, it will have to stream in extra points for the object as it rotates towards the player. Which also yields an unknowable amount of data loaded at once.
It also means that animated objects would probably have to be in memory all the time.
My point about the lighting was about anti-aliasing, not pixel detail. He'll need to calculate all the neighbouring sub-pixels in real time, otherwise he'll start getting shimmering artifacts. Currently the merged atom colours are baked into the data structure (I think).
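To spell out what "baked into the data structure" means here as I understand it, and why lighting breaks it (a plain illustration, not Euclideon's code): each parent stores the average of its children's colours, so a coarse splat is already anti-aliased; but the average of lit children is not the same as lighting the pre-averaged colour once the child normals differ.

```cpp
struct Rgb { float r, g, b; };

// baked once, offline: parent colour = mean of the present children
Rgb bakeParent(const Rgb* child, const bool* present) {
    Rgb sum = {0, 0, 0};
    int n = 0;
    for (int i = 0; i < 8; ++i) {
        if (!present[i]) continue;
        sum.r += child[i].r; sum.g += child[i].g; sum.b += child[i].b; ++n;
    }
    if (n == 0) return sum;
    return {sum.r / n, sum.g / n, sum.b / n};
}

// a simple diffuse term; ndotl differs per child because the normals differ
Rgb lit(Rgb albedo, float ndotl) {
    float k = ndotl > 0 ? ndotl : 0;
    return {albedo.r * k, albedo.g * k, albedo.b * k};
}

// correct but per-frame work: average of the *lit* children
Rgb litParentCorrect(const Rgb* child, const bool* present, const float* ndotl) {
    Rgb lc[8]; bool p[8];
    for (int i = 0; i < 8; ++i) { p[i] = present[i]; lc[i] = lit(child[i], ndotl[i]); }
    return bakeParent(lc, p);
}

// cheap but wrong in general: light the baked average with one parent normal
Rgb litParentCheap(const Rgb* child, const bool* present, float parentNdotl) {
    return lit(bakeParent(child, present), parentNdotl);
}
```

So real-time lighting either re-averages lit children every frame or accepts the shimmering I mentioned.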
And what about joint blending of vertices, like the merging of matrices on arm and leg joints? The more you think about what needs to be done, it shows how we take for granted many of the established shader techniques used today, and how wonderfully parallel it all is.
I would personally use the standard polygon GPU rendering mixed with the Geoverse rendering the z-buffered backgrounds. That would work, probably.
Also, IMHO, Dell needs to keep up to date with the current state of the art. Even the new Unity 5 has physically-based shading, which includes reflections of colours onto other surfaces. How the heck is he going to compete with that?!
Well, UD's videos are certainly no less ridiculous than they were. I really don't understand who this particular video is even being made for, honestly. It seems like they're targeting it at the game industry, but anyone in the game industry can see all the obvious problems in their presentation - from old video game footage being passed off as cutting edge (presumably to make their sub-par stuff look better), to all the talk of lighting (when that's clearly a serious failing of their engine), to their incredibly hilarious suggestion that the solution to the art creation problem is to scan real-world locations (that doesn't even entirely work for those games that used only real-world locations and contain only real-world objects and people, if there even are any such games). That last one makes me really roll my eyes, because it's the sort of thing a person who knows nothing about game development would say - it's like they've not even spoken to any game developers about what the industry needs are. How are we supposed to take them seriously? Or are we supposed to take them seriously? Are their claims really not intended for anyone in the game industry, but instead just for potential backers, to convince them their software can do things that it can't and has a market that it doesn't?
Actually... your description of real world environments/etc made me REALLY want to see a car racing game with 3d scanned vehicles and a 3d scanned/coloured world.
There *are* actually racing games that laser-scanned (and/or used photogrammetry for) the cars and a good bit of their environments - it's pretty much the perfect use for the technology (as well as in some sports games), since they want to use real-world vehicles and racing tracks as much as possible. And the games I'm aware of do seem limited to racing tracks (Simraceway, Forza 5), presumably because it's a lot easier to scan an empty racetrack than get a clean scan of crowded urban streets. But there's still a huge amount of editing, clean-up and hand work that needs to be done, too, especially for placing things like plants, and a large number of artists used. They scanned tracks for the Xbox One version of Forza 5, and ironically, the huge amount of work involved apparently meant they had fewer tracks. The idea that UD presented that you can replace all your artists and the scanner magically sticks the real world in the game without any work is hilarious, even in the situations where you do have games that primarily revolve around real world locations, objects and people.
Oh, I understand all this; these racing games are now pushing upwards of a million polys per car. Then when it comes to the tracks themselves, yes they are accurate, and yes the relatively large size of the track geometry doesn't matter as much (compared to vehicular geometry). But the amount of detail you can capture in the track is limited by file size, processing time, manpower to generate the content, etc. That said, in the past decade scanning technology has improved orders of magnitude more than the visuals in these games. Find a lightweight way to represent this data and you can unlock a whole new playing field... instant custom content being streamed live, the ability to generate worlds around what is around you (Oculus Rift mounted scanners? Hell yes!)
I have to say, they will openly give a 90-day version of their product to developers who want to review it for possible future clientele. Maybe some of you should try that approach to figuring out whether it's a development arena you or your clients can utilize.