I have been a member of all these camps at one moment or another, so really no judgment here. But what probably unites all coders is that if you write a program, you will likely have to debug it.
If you have not stepped through your code at least once, you probably do not know what it is doing. Yes, you can unit test all you want, but even then you should also step through your test code.
Karma dictates that if you put a new language out there, you must also give people a way to trace and debug it. That is what we just did with our L-System and grammar language:
It turned out to be quite surprising. We are very used to tracing imperative languages like C++ and Java, but tracing the execution of an L-System is unlike anything I had tried before.
I understand the power of L-Systems and context-sensitive grammars better now. This system feels like it has the ability to foresee, to plan features ahead. You can see it happening in the video: empty boxes often appear in many different spots, as if the program were testing different alternatives. That is in fact what is happening.
It looks amazing that end features like the tip of a tower may appear even before the base. In reality the program has already decided there will be a base, but its specific details are still a bit fuzzy, so they come up much later. Once they do appear, everything connects as it should and all the predictions line up properly.
None of that required planning from the programmer. In this case the code is not a series of instructions; it is more like a declaration of what the structure should be. The pieces fall into place, and the debugger just shows how.
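To give a rough idea of where this behavior comes from, here is a minimal, self-contained sketch of parallel rule expansion. This is not our actual grammar syntax and the symbols are made up, but it shows how a placeholder like the tip of a tower exists in the structure long before the rule that fills in its details gets to run:

```cpp
// Minimal sketch of declarative grammar expansion. Uppercase symbols are
// non-terminals ("empty boxes", placeholders); lowercase ones are terminals.
// Rules and symbol names here are hypothetical, not the engine's grammar.
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Symbols = std::vector<std::string>;

int main() {
    // Non-terminal -> replacement. The whole structure is declared at once.
    std::map<std::string, Symbols> rules = {
        {"TOWER", {"BASE", "SHAFT", "TIP"}},
        {"BASE",  {"foundation", "plinth"}},
        {"SHAFT", {"wall_ring", "SHAFT_REST"}},
        {"SHAFT_REST", {"wall_ring", "window_ring"}},
        {"TIP",   {"battlement", "spire"}},
    };

    Symbols state = {"TOWER"};
    for (int step = 0; step < 4; ++step) {
        // Print the current state: non-terminals are features whose
        // details have not been decided yet.
        std::cout << "step " << step << ":";
        for (const auto& s : state) std::cout << " " << s;
        std::cout << "\n";

        // Expand every symbol that still has a rule, in parallel.
        Symbols next;
        for (const auto& s : state) {
            auto it = rules.find(s);
            if (it == rules.end()) next.push_back(s);  // terminal, keep as-is
            else next.insert(next.end(), it->second.begin(), it->second.end());
        }
        state = std::move(next);
    }
    return 0;
}
```

Because every rule expands independently, the spire can become concrete while the shaft is still a placeholder, and yet everything still connects, since the placeholders were declared together from the start.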
It reminds me of another type of code at work: DNA. Now that would be one really cool debugger.
Very nice and inspirational!
Keep up the good work and the awesome writeups
So how much does coding a building compress it in file size? Let's just use one instance of a building, as it would be very easy to extrapolate and reuse the same code to do a plethora of buildings with different random/external variables.
Interesting question, but I do not have this info. A single run outputs a list of instance ids and their transformation matrices: you would have the "brick" mesh once and a large number of 4x4 matrices, one for each occurrence of a brick. The compiled code for the round tower is around 1K, so this is the "compressed" size of all these matrices.
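Just to make the comparison concrete, this is roughly the shape of that output. The type and field names below are only for illustration, and the brick count is made up; neither is the engine's actual format:

```cpp
// Hypothetical sketch of the per-building output: each occurrence of a brick
// is a mesh id plus a 4x4 transform; the brick meshes themselves are stored
// once and shared by every instance that references them.
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

using Matrix4 = std::array<float, 16>;   // 4x4 transformation matrix

struct Instance {
    std::uint32_t meshId;     // which "brick" mesh to place
    Matrix4       transform;  // where this occurrence goes
};

int main() {
    // Made-up instance count, only to show the size comparison: the raw
    // matrices for 10,000 bricks take hundreds of KB, while the compiled
    // grammar that generates them is around 1K.
    const std::size_t brickCount = 10000;
    std::vector<Instance> building(brickCount);
    std::printf("raw instance data: %zu bytes\n",
                building.size() * sizeof(Instance));
    return 0;
}
```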
So the initial run-through "unpacks" the tower, and the data is then stored in a format that is quicker to load?
Is it possible to export the round tower in a common file format for comparison?
Oh my. Awesome and scary at the same time. I wish I could write some architectural code for your engine someday. There are so many hidden wonders waiting to be built and explored in virtual worlds. Can't wait.
For someone not used to L-systems, I wonder if you could give a practical example of how and what you could debug with this tool. Also, do you think it could be possible to have more interactivity on the preview side, like having code highlighting when you click an element so that you know which lines led to the creation of the given element?
Yes, producing debug information is the first step. Many features can be built on top of that now, like the one you have suggested, and also breakpoints, which we do not have yet.
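To sketch what that debug information could look like (again, a hypothetical layout, not our actual format), each emitted element only needs to remember which rule and source line produced it:

```cpp
// Hypothetical sketch: every emitted element carries a tag pointing back to
// the grammar rule and source line that produced it. The names are only
// illustrative; they are not the engine's actual debug format.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct DebugTag {
    std::string   ruleName;    // grammar rule that emitted the element
    std::uint32_t sourceLine;  // line in the grammar source
    std::uint32_t step;        // expansion iteration that produced it
};

struct EmittedElement {
    std::uint32_t instanceId;  // links back to the placed geometry
    DebugTag      origin;      // enough to map a clicked element to code
};

int main() {
    // The preview tool could keep one of these tables per generated model.
    std::vector<EmittedElement> trace = {
        {17, {"tower_tip", 42, 3}},
        {18, {"tower_base", 12, 5}},
    };

    // Clicking an element gives its instance id; the tool then highlights
    // the rule and line stored in the matching record.
    const std::uint32_t clickedId = 18;
    auto it = std::find_if(trace.begin(), trace.end(),
        [&](const EmittedElement& e) { return e.instanceId == clickedId; });
    if (it != trace.end())
        std::printf("rule %s, line %u\n", it->origin.ruleName.c_str(),
                    (unsigned)it->origin.sourceLine);
    return 0;
}
```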
As for practical uses... time will tell :)
Watching the buildings build up like that is fascinating! When it comes time to design many L-systems to create specific-looking geometry, it's clear a tool like this will be invaluable. I think your samples themselves were great examples of this!
I especially loved watching the stairways build themselves step by step, and then later skin themselves over with walls and arrow slit windows! So cool. I'd love to see a similar process applied to tear it back down again... ruination style!
In terms of building procedural content, it is clear how this is superior (though requiring a different expertise) to hand-built assets. I'd be curious to hear how your team member with the architectural background finds coding L-Systems.
Also, watching step by step you can see how such iterative processes can be coded efficiently AND inefficiently... If I recall, in an earlier post you explained that the rendering of the world was reconstructed with each draw. Is this still the case? Obviously brute-force methods without efficient caching could depend heavily on the L-system's efficiency.
My primary hope is to see the L-system building blocks we're looking at here progress back up to the level of "city-state -> City -> Block -> individual Buildings" placement systems, with the realistic variations you laid the groundwork for previously!
Great work again! So exciting!
Why are the constructions not built from the ground up? E.g. base first, then pillars, then floors? Isn't that more 'procedural', because maybe run-time decisions can be taken with that approach?
Did you read the blog post?
It is described there ^^
You can define your building from the ground up if you like, but you are not forced to.
It is like when someone asks you to define/explain what a car is. You can start by mentioning the wheels, but a different approach would work too.
If you can decide where to start, couldn't you start building up on uneven ground?
Do you plan to feed landscape data into your L-system and use it as the base of your buildings?
It would be very nice to see buildings spread over hills or cliffs; you could even procedurally generate stilt houses above water, or cities which project beyond the coast in the form of harbors.
Dear Miguel
I really appreciate your work and the surrounding chronicles.
You have so many great concepts, and it's amazing that you capture the whole act of creation in the form of this blog. It took me a long time (and a lot of fun :) ) to read and understand your profound ideas and implementation entirely.
Enough bootlicking, I'll get to the point now:
It is about Level of Detail, your problems with it and your solution.
In your post "Doughnuts and Cofee Mugs" ( http://procworld.blogspot.ca/2010/11/doughnuts-and-cofee-mugs.html ) you wrote about LoD, and that (quoting Miguel:) 'windows in the houses appear as tiny dots' 'a couple of miles away'.
Your solution didn't really solve that problem, and the switch between different levels of detail is also plainly visible.
A few weeks ago I found an interesting video on YouTube ( http://www.youtube.com/watch?v=XkSS_veoSg0&list=UUhRWNDqFpsAE8txON6GbqUg ) showing a hybrid graphics engine with polygon rendering around the player and a sparse voxel octree for rendering objects and landscapes far away.
I saw this video and it made me think of your blog post "Doughnuts and Cofee Mugs" and your problem with rendering detail in the distance.
I would really like to know your opinion on this solution :) and I'm sure that this is a really good way of creating different Levels of Detail.
Just watch the video and tell me what you think, and whether it's possible for you and your team to implement such a hybrid system (I'm sure you can do it).
Sorry for going off-topic, but I didn't want to revive an old post.
Greetings, busmalis
I think we should avoid hybrids, at least for the current and next hardware generation in PCs/consoles. And maybe for a few decades in mobile and tablets.
LOD switches can be completely masked; we are just not working on that now. The same goes for capturing detail, like small windows in the distance. These are somewhat soft problems. For procedural content, the challenge is to produce information at the same pace as the camera moves. It is not about rendering, it is about content.
Hybrid rendering does not address this problem. The video you link, for instance, has very low information density, so it does not work as an example. Examples of higher information density are Atomontage and Euclideon's Unlimited Detail, but they have not shown real-time procedural generation of complex content. Also, their scenes are still too coarse at close range.
But the real issue with a hybrid system is that nobody will want to use it for games. First you would need to show you can hit the same level of quality as a professional-grade polygon renderer (UE4, Frostbite, etc.). I have seen some path tracers getting close to that, but there is still a long way to go.
And then rendering is just one of the things you need to worry about. There are already other very complex systems built on top of polygonal models like physics, pathfinding, etc. Are you going to do the work twice and feed polygons to these systems, or will you convince everyone to drop middleware like Havok in favor of your non-polygon solution?
So while we could do a hybrid renderer, I also think it is a very bad investment for the next few years.
I am really curious how you want to mask the LOD switches, which in my opinion detract from the overall picture. But I'm very glad that you will address the topic, and I'm looking forward to seeing the results.
Very neat to see the step-by-step animation!