Comments on Procedural World: Your Euclideon Yearly Fix (Miguel Cepero)

Anonymous (2016-09-24):
We're getting close to 2/3 completion (Oct 2016) of Ithica. It's around the size of Zelda TP. I'm working on storyboards/audio/cut scenes now. It's also thanks to Miguel, Gavan and others that we were able to get to this point. This post was made years ago, so I hope you get to see this. We're all on Twitter these days. -vercusgames

Ajm (2014-09-29):
Oh, I understand all this: these racing games are now pushing upward of a million polys per car. As for the tracks themselves, yes they are accurate, and yes the relatively large size of the track geometry doesn't matter as much (compared to vehicle geometry). But the amount of detail you can capture in a track is limited by file size, processing time, the manpower to generate the content, and so on. That said, in the past decade scanning technology has improved by orders of magnitude more than the visuals in these games. Find a lightweight way to represent this data and you unlock a whole new playing field: instant custom content streamed live, the ability to generate worlds from what is around you (Oculus Rift-mounted scanners?
Hell yes!)

Juno1959 (2014-09-28):
I have to say, they will openly give a 90-day version of their product to developers who want to review it for possible future clientele. Maybe some of you should try that approach to figuring out whether it's a development arena you or your clients can make use of.

Anonymous (2014-09-26):
There *are* actually racing games that laser-scanned (and/or used photogrammetry for) the cars and a good bit of their environments - it's pretty much the perfect use for the technology (as well as in some sports games), since they want to use real-world vehicles and racing tracks as much as possible. The games I'm aware of do seem limited to racing tracks (Simraceway, Forza 5), presumably because it's a lot easier to scan an empty racetrack than to get a clean scan of crowded urban streets. But there's still a huge amount of editing, clean-up and hand work to be done too, especially for placing things like plants, and a large number of artists involved. They scanned tracks for the Xbox One version of Forza 5, and ironically the huge amount of work involved apparently meant they had fewer tracks.
The idea UD presented - that you can replace all your artists and the scanner magically puts the real world in the game without any work - is hilarious, even for games that primarily revolve around real-world locations, objects and people.

Ajm (2014-09-26):
Actually... your description of real-world environments made me REALLY want to see a car racing game with 3D-scanned vehicles and a 3D-scanned, coloured world.

Anonymous (2014-09-25):
Well, UD's videos are certainly no less ridiculous than they were. I really don't understand who this particular video is even being made for, honestly. It seems to be targeted at the game industry, but anyone in the game industry can see all the obvious problems in their presentation - from old video-game footage being passed off as cutting edge (presumably to make their sub-par material look better), to all the talk of lighting (when that's clearly a serious failing of their engine), to the incredibly hilarious suggestion that the solution to the art-creation problem is to scan real-world locations (that doesn't even entirely work for games that use only real-world locations and contain only real-world objects and people, if there even are any such games). That last one really makes me roll my eyes, because it's the sort of thing a person who knows nothing about game development would say - as if they've never spoken to any game developers about what the industry's needs are. How are we supposed to take them seriously? Or are we supposed to take them seriously at all?
Are their claims really not intended for anyone in the game industry, but instead just for potential backers, to convince them their software can do things it can't and has a market it doesn't?

Ajm (2014-09-23):
You would still need to scan from multiple locations to make sense of translucency/reflectivity data.

Mindrage (2014-09-23):
@Anonymous: I don't know this Bruce, but in my opinion the video is not objective enough for a tech video. The technology does look great, but in the end you can't expect to force the world into changing; the whole point is to adapt. There are quite a lot of benefits to the system, like the higher detail, better lighting and all that, but it still has a large number of drawbacks that people can't really live without addressing. I would find it much wiser to make a list of all the flaws, come up with solutions to those flaws, and show how they can be solved. Then you would attract a lot of companies, which is the actual route to making the system come to life.

Also, about his engine becoming obsolete: I don't believe that would be a problem. Miguel made wise decisions by adapting to current technologies, and his engine would still last a lot of years because the world takes a long time to change to a new system. *If* the Euclideon engine or any other point/voxel standard were to gain popularity, he could still adapt, because the problem his engine solves is entirely different from what Euclideon is trying to achieve. His engine is focused on content generation, which is later converted into triangle meshes. He already has content generation solved - what stops him from adapting to a new display method?

@Miguel: I have no PhD in psychology, but it's pretty obvious that a lot of people who are 20-30 now and lived through the 90s are noticing how the younger generation's opinions are converging, in most cases because of the media. I might be 20, yet I still notice myself getting dragged into that kind of thinking.
Hype will always exist and be a big part of the talk among the masses, but in the end it's the companies that decide what should be used.

DaveH (2014-09-23):
There are many traps to fall into when moving voxel data. For example, if a large object in the distance rotates towards you, you'll need to make sure the correct number of points is rendered - but since he's streaming in data depending on distance to the camera, extra points have to be streamed in for the object as it rotates towards the player. That also yields an unknowable amount of data loaded at once. It likewise means that animated objects would probably have to stay in memory all the time.

My point about the lighting was about anti-aliasing, not pixel detail. He'll need to compute all the neighbouring sub-pixels in real time, otherwise he'll start getting shimmering artifacts. Currently the merged atom colours are baked into the data structure (I think).

And what about joint blending of vertices, like the blending of matrices at arm and leg joints? The more you think about what needs to be done, the more it shows how much we take for granted the established shader techniques used today, and how wonderfully parallel it all is. I would personally use standard polygon GPU rendering mixed with Geoverse rendering the z-buffered backgrounds.
That would probably work.

Also, IMHO, Dell needs to keep up with the current state of the art: even the new Unity 5 has physically based shading, which includes reflections of colours onto other surfaces. How on earth is he going to compete with that?!

Krzysztof Kluczek (2014-09-23):
Considering they are trying to market dataset-capturing technology, the people behind the game Get Even have already done this job better, judging purely by visual results.

JoselB (2014-09-22):
But those lasers would have to be shot from multiple positions, since the point cloud has too low a resolution to extract the normal from it, and without the normal or enough sample points it is impossible to extract diffuse colour and specularity from the reflectivity value. And even then the normal would be needed for rendering. I think even a very rough approximation of those values requires at least 4x4 different directions of incoming light.

But your suggestion has one advantage: if a full image were taken for each sample point, it would in theory be possible to extract exact values for indirect lighting.
At least once someone finds a way to process the vast amount of data.

JoselB (2014-09-22):
I have read a few papers on how material properties like normal, colour and specularity can be obtained by scanning the same scene under different predefined illumination from a fixed camera location. The methods were able to extract the normal and an approximate BRDF in real time, so something like that is at least possible.

I think the difficult part is how to illuminate big scenes with predefined intensity from predefined angles. And the extracted BRDFs only work with an unaltered scene, since the data for each point already contains the indirect lighting from the rest of the scanned scene.

So even having such data would not be useful for games, as the indirect lighting effects would have to be eliminated and reconstructed in real time.

I suspect Euclideon will never achieve anything comparable, as they seem to ignore the fact that they are missing everything except the most trivial of those problems (they do seem good at that one, but it is an old problem that already had working solutions ages ago).

There is a good reason why CG companies make millions of dollars even when their technology takes seconds to minutes to render a single frame.

D.V.D (2014-09-22):
I'm not completely sure how Bruce plans to tackle the issue of animation, but if you ask the guy I linked to, he has some private videos he can share where hundreds or thousands of unique animations run in real time on one screen.
I think the idea is to animate only to the level of detail visible, similar to how Ryse, in its tech slides, reduces the bone count for the face as you get farther away. The guy I linked to does exactly this in shaders with decent success, so I would at least give his thesis a skim.

As for lighting, lighting data only needs to be calculated to the level of detail the screen is displaying. If you have sub-atomic "atoms" stored for your models and your model is a kilometre away, you don't need to store the lighting for each point, only for the octree nodes visible that frame. Again, it's a proposition, but a lot of the current GI solutions use voxels in some form to achieve their lighting, which could fit very well with what UD has.

Could you elaborate on the need for so many rotation matrices? You would still only need one matrix for the model's orientation and then one for each bone in the model. Each bone would have an area or group of points assigned to it, and so on. When a bone moves, you only recalculate the nodes to the level visible on screen. So if the model is far away, it might be a simple matter of reinserting one node into the octree, with the rest handled recursively. It's probably not this simple to code, but I do think it's feasible. When your tree is 100 metres away, 1080p can't display a good level of detail for each leaf anymore, so moving the potentially thousands of points per leaf would be a waste with little visual impact. Moving only the root node of the leaf, however, would produce similar results at that distance while saving on computation.

Their not showing the animation does bother me, but I do think the tech has a lot of potential - just not as much research has gone into it as into polygons. For example, voxelization was little discussed until polygon lighting solutions needed it to get better results.
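The "only update what the screen can resolve" argument above can be made concrete with a little projection arithmetic. This is a hypothetical sketch: the function names, the pinhole-camera model and the 60-degree FOV are my assumptions for illustration, not anything UD or the commenter published.

```python
import math

def projected_size_px(node_size_m, distance_m, fov_deg=60.0, screen_h_px=1080):
    """Approximate on-screen height in pixels of a node of edge length
    node_size_m seen at distance_m, for a pinhole camera with the given
    vertical field of view and screen height."""
    if distance_m <= 0:
        return float("inf")
    # Pixels per metre at this distance.
    px_per_m = screen_h_px / (2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0))
    return node_size_m * px_per_m

def lod_depth(root_size_m, distance_m, max_depth=16):
    """Deepest octree level worth animating/lighting: halve the node size
    per level until it projects to under one pixel."""
    depth, size = 0, root_size_m
    while depth < max_depth and projected_size_px(size, distance_m) > 1.0:
        depth += 1
        size /= 2.0
    return depth

# A 10 cm detail 100 m away projects to under one pixel at 1080p with a
# 60-degree FOV, so bones/points below that scale need no individual updates.
```

Under these assumptions, a 32 m tree 100 m away only needs its octree refined about nine levels deep before further subdivision becomes invisible, which is exactly the saving the comment describes.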
Now we have many different papers proposing new methods to achieve some kind of advantage over the others.

DaveH (2014-09-22):
I don't think the data is stored in a way that lets him usefully use shaders. But I guess there's always OpenCL. Mr Dell has always stated he doesn't use the GPU. He renders anti-aliased 'atoms' quite well in the demos, presumably using the hierarchy colour information. BUT all of that is lost when you get down to lighting, as those atoms have to be lit before averaging the result.

I just can't see hierarchical rotation matrices moving thousands and thousands of pixels, sorry. Not even on a fast GPU using compute shaders could he move and bend the number of atoms he needs on a laptop. Imagine animating the trunk of a tree, all the branches and every leaf at the end - and then doing the same for a forest of trees?

I've seen some voxel animation demos, and they are really simple. Our friend Bruce here has stated before that his animation system will be amazing. And the years roll by... :D

D.V.D (2014-09-22):
Well, looking at NVIDIA's push for voxel GI: since he uses an octree, he could potentially replicate the same technique, but instead of storing the lighting data at the leaves, he could store lighting data only for the nodes he managed to render with his technique. That way he would only store lighting data for voxels in screen space, and he doesn't have to convert polygons to voxels, since that is already done for him.
Then he could proceed with the standard voxel GI technique. There was an interesting paper describing a software rendering and lighting method for point clouds; you can find it here: https://www.graphics.rwth-aachen.de/media/papers/octree1.pdf It would calculate the lighting info for a sphere, and then, since that covers all possible normals, a voxel with a given normal would just take the lighting data on the sphere for that normal (if I recall correctly).

For animation, we saw it in UD in this video a long time ago: https://www.youtube.com/watch?v=cF8A4bsfKH8 There has also been some interesting research into animating octree data: https://www.youtube.com/watch?v=TnUQeoAUEs8 The author links his thesis on the video; his method allows all kinds of animation, including stretching/deforming, but I'm not sure about performance. He also uses the GPU. You can find him in the description and ask him on Twitter if you like.
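The sphere-lookup idea described above - precompute lighting for every possible normal direction once, then let each voxel fetch the value for its own normal - can be sketched roughly as follows. This is a hypothetical, Lambertian-only illustration; the names and the binning scheme are invented for the example and are not taken from the paper.

```python
import math

def make_sphere_lighting(light_dir, n_theta=16, n_phi=32):
    """Precompute Lambertian intensity for sampled normal directions on a
    unit sphere. Returns a dict mapping (theta_bin, phi_bin) to [0, 1]."""
    lx, ly, lz = light_dir
    norm = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / norm, ly / norm, lz / norm
    table = {}
    for i in range(n_theta):
        theta = math.pi * (i + 0.5) / n_theta          # bin-centre polar angle
        for j in range(n_phi):
            phi = 2.0 * math.pi * j / n_phi
            nx = math.sin(theta) * math.cos(phi)
            ny = math.sin(theta) * math.sin(phi)
            nz = math.cos(theta)
            table[(i, j)] = max(0.0, nx * lx + ny * ly + nz * lz)
    return table

def shade(normal, table, n_theta=16, n_phi=32):
    """Look up the precomputed intensity for a voxel's unit normal."""
    nx, ny, nz = normal
    theta = math.acos(max(-1.0, min(1.0, nz)))
    phi = math.atan2(ny, nx) % (2.0 * math.pi)
    i = min(n_theta - 1, int(theta / math.pi * n_theta))
    j = min(n_phi - 1, int(phi / (2.0 * math.pi) * n_phi))
    return table[(i, j)]
```

The appeal for a point-cloud renderer is that the per-frame lighting cost is one table build plus one cheap lookup per rendered voxel, regardless of scene size.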
Miguel Cepero (2014-09-22):
Wait there.

My work here is about systems that help you create more content and also bring lots of people together to work on that content. How we get this content on display is not relevant.

We actually suggest people use professional-grade rendering (like UE4) to get our content on screen. If someone had a better method, we would suggest that instead. If Euclideon's engine could really be used for games, it would be a nice fit, as the content our engine produces is already voxel.

Euclideon's actual take on the problems we try to solve is that we should laser-scan game worlds. (Go tell that to the art team building Coruscant for the next Star Wars game.)

What I find interesting here is the human-psychology aspect of those who choose to believe the hype.

ëRiC (2014-09-22):
Yeah, I also thought: they have finally found their niche in the geospatial industry (which I think is a perfect match for them) - so quit telling us this is the new hot thing in game graphics!

He seems to underestimate all the tech in Unity, CE, UE & co. that actually makes up "the game", and all the effort that has gone into pipelines over dozens of years.

OK...
I'll now get back to my puny polygons ;]

DaveH (2014-09-22):
I've read his patent and it's very vague, but it also appears to be exactly like Donald Meagher's method from 1984!! Which is an interesting read: http://www.google.com/patents/EP0152741A2?cl=en

I would like to believe him about animation, but he said it was ready years ago and we haven't got a single blade of grass moving yet. And from his patent, I'm not sure it's possible anyway. Maybe he's waiting for computers to get incredibly fast first!

No mirrors. This rendering technique will not work with mirrors unless he resorts to cube maps, which just won't do for 'next gen' rendering. :P

Lighting: other than a very early demo, we haven't seen any lighting lately. Apart from a massive increase in memory per point, the CPU just won't cope with the mathematics involved to compete with GPU techniques. Could he use standard deferred rendering on the GPU? Maybe, but it might also alias like crazy because of competing points.

Let's see multiple dynamic light sources with shadows, please! Then we can talk.

Lachlan (2014-09-22):
No, it's based on dislike of the exaggeration and arrogant tone in his videos.

Other people who have released impressive voxel tech have been met with awe and esteem, but Bruce Dell is just so patronizing that we can't help but look for reasons to disapprove of what he has to say.

I wish he'd get someone else to do the voiceovers.
Maybe then we could take his tech seriously.

Lachlan (2014-09-22):
I had a project for a while that used front-to-back octree traversal with perspective projection at the macro scale and orthogonal projection at the micro scale (developed independently, but apparently quite similar to what Euclideon did). I got the runtime representation down to 4 bytes/voxel amortized, and had plans to take it down to 1.3 bytes amortized. It would be pretty easy to add MegaTexture-style streaming data on top of that, but I don't think it would be possible to make a decent-sized game in under 100 GB without some procedural generation.

Performance can get good enough for real-time rendering on the CPU without lighting, but once you want to render the scene from multiple angles per frame (for shadow buffers, etc.) you need a dramatic increase in memory bandwidth.

Branislav Síleš (Atomontage.com) claims to get 1000-2000 FPS with a GPU/CPU hybrid approach. My belief (though I lack evidence to support it) is that he renders chunk by chunk on the CPU and does caching and perspective correction on the GPU. The CPU updates chunks slowly, and the GPU warps chunks to adjust for small camera-angle changes as needed, continuing to render asynchronously while the CPU supplies chunk updates whenever a chunk becomes too warped. This makes actual performance comparison difficult, because a slow CPU wouldn't reduce the frame rate but would increase the rate of visual artifacts.

Although I have strong opinions on the technical side, as a voxel enthusiast my loyalty lies with whoever brings voxel-based realistic rendering to mainstream gaming first.
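For a sense of where figures like "4 bytes/voxel amortized" come from, here is a back-of-the-envelope sketch. The 4-byte node layout (say, a 1-byte child mask plus a 3-byte colour) is an assumption for illustration, not Lachlan's actual format.

```python
def bytes_per_voxel(depth, node_bytes=4, branching=8):
    """Amortized storage per leaf voxel in a fully populated octree where
    every node (interior and leaf) costs node_bytes.
    Nodes at level d: branching**d; leaves: branching**depth."""
    total_nodes = sum(branching ** d for d in range(depth + 1))
    leaves = branching ** depth
    return node_bytes * total_nodes / leaves

# Interior nodes of a dense octree add only ~1/7 overhead, so a 4-byte
# node costs about 4.57 bytes per leaf voxel; real-world sparsity and
# colour compression are what push the amortized figure down to 4 or below.
```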
At the moment it looks like Euclideon will take the lead, though I'm still really hoping that Ithica (made by Vercus Games, using the Atomontage engine) will actually produce some gameplay footage.

Anonymous (2014-09-22):
Bruce tends to over-hype his achievements, but your criticism of the technology is largely based on a fear that your own engine may soon be made obsolete.

pdurdin (2014-09-22):
Those details are painted in the church, yeah. You can see some in this photo: http://upload.wikimedia.org/wikipedia/commons/b/be/St_Martin_Wangen_im_Allgaeu.JPG

As for the tech... well, it would be nice to see something other than a static scene. But nope, just the same old stuff: nothing that hasn't been done with other tech.

Miguel Cepero (2014-09-22):
Thanks for the patent link. Will check it out.

Ajm (2014-09-22):
If you were to use bright red/green/blue lasers, you could acquire voxel colour data that is almost independent of local light sources. Algorithms plus multiple scanning locations should then be able to provide reflectivity and translucency info.
So full dynamic lighting should be possible.

D.V.D (2014-09-21):
They released a patent on the tech, found here: https://www.google.com/patents/WO2014043735A1 Basically, they traverse an octree front to back and attempt to project as many nodes as possible using orthogonal projection, used whenever the difference between orthogonal and perspective projection would be sub-pixel. The tech itself has pretty good performance and quality if it truly is still running on a single core.

They made another claim that they don't use divisions or floats in their rendering code (which turns out to be partially false, given the patent). There is a blog whose owner actually made a sample that does perspective-correct projection without ever using divisions or floats. The source is available, but the performance wasn't very good. You can find it here: http://bcmpinc.wordpress.com/
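The orthogonal-projection trick described above hinges on a simple bound: replacing the per-point perspective divide with one shared projection is safe while the worst-case screen-space divergence stays under a pixel. The following is a hypothetical sketch of that kind of test; the exact criterion in the patent may differ.

```python
def ortho_error_px(half_size, depth, focal_px, max_lateral):
    """Worst-case screen-space error (in pixels) of projecting every point
    in a node with one orthographic projection fixed at the node's centre
    depth, instead of a per-point perspective divide.
    half_size: node half-extent (m); depth: distance to node centre (m);
    focal_px: focal length in pixels; max_lateral: largest |x| in the node (m)."""
    near = depth - half_size
    if near <= 0:
        return float("inf")
    # |f*x/z - f*x/depth| is maximised at the node's nearest face (z = near)
    # and widest lateral offset (|x| = max_lateral).
    return focal_px * max_lateral * half_size / (near * depth)

def can_use_ortho(half_size, depth, focal_px, max_lateral, tolerance_px=0.5):
    """Use one orthographic projection for the whole subtree while the
    perspective/ortho divergence stays sub-pixel."""
    return ortho_error_px(half_size, depth, focal_px, max_lateral) < tolerance_px

# A 0.5 m node 100 m away, 1 m off-axis, with a ~935 px focal length
# (1080p at a 60-degree vertical FOV): error ~ 935 * 1 * 0.5 / (99.5 * 100)
# ~ 0.047 px, so the whole subtree can skip the perspective divide.
```

Because the error bound shrinks with distance and node size, most of a distant scene passes the test, which is consistent with the patent's claim of projecting "as many nodes as possible" orthographically.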