John Carmack recently said procedural techniques are nothing more than a crappy form of compression. He said this while being asked about the Unlimited Detail tech. To be fair, it should be taken only in the context of that question and of runtime engines, which is a bit strange anyway, since Unlimited Detail is not really using anything procedural; it is mostly tiling instances of unique detail. Still, is he right? Is procedural generation just crappy compression?
Others have said that procedural generation in games inevitably leads to large, bland scenarios, or to maddeningly repetitive dungeons and corridors. As they put it, humans can only be engaged by content that was created by other humans.
I actually agree with them. And yes, you are still reading Procedural World.
Despite what we naysayers may believe, procedural techniques have already seen a lot of success. They are not really in your face, not a household name, but they have been instrumental in producing content that otherwise would simply not exist.
Take a look at this image:
Well thanks, it does look like the shiny things I'm trying to do, but this is Pandora from the Avatar movie. Now, do you think this was entirely handcrafted?
Pandora is mostly procedural. They have synthetic noises everywhere and L-systems for plants and ground cover; it is actually a textbook example of procedural techniques. A lot of people went to visit it, and it made heaps of money. If you define success in those terms, this baby takes the cake. And it is thanks to procedural generation. Without it Pandora would have looked like this:
So clearly proceduralism can help create appealing content. As Carmack says, it is some form of compression. What got compressed in this case was not information over the wire, or over the bus to the graphics card, it is the amount of work artists needed to do. Instead of worrying about every little fern, they could spend their effort somewhere else.
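To make the L-system part concrete, here is a minimal sketch of the idea, assuming nothing about the actual Avatar pipeline: a couple of rewrite rules turn a one-character axiom into a long branching description, and a random seed per plant is all an artist has to provide.

```python
# Minimal L-system sketch: one rewrite rule grows a short axiom into a
# long branching structure that a turtle renderer would then draw as a
# fern or a tree. The rule below is made up for illustration.
RULES = {"F": "F[+F]F[-F]F"}   # every segment sprouts two side branches

def expand(axiom, iterations):
    """Apply the rewrite rules to the axiom the given number of times."""
    result = axiom
    for _ in range(iterations):
        result = "".join(RULES.get(symbol, symbol) for symbol in result)
    return result

plant = expand("F", 3)
print(len(plant), "symbols:", plant[:60] + "...")
```

One tiny rule already encodes dozens of branches; that is the compression of artist effort in a nutshell.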
So what is wrong with this approach, then? Now imagine it was a game and you were marooned in this huge Pandora place. You would probably get bored after a while; besides staying alive there is not much to do. If there is one thing we humans can do, it is get bored anywhere, all the time.
We need this sort of battle-is-brewing, storm-is-coming thing lurking on the horizon. We need a sense of purpose and destiny, or a mystery in the absence of one. Is this something that only other humans can create for us?
At this point it becomes about Artificial Intelligence. Think of a variation of the Turing Test, where a human player would not be able to tell whether a game was designed by another human or by a computer. This is the real frontier for pure Procedural Content generation.
In the field of physics there are two theories that are very successful in explaining the two halves of the real world: Quantum Physics and General Relativity. They don't get along at all. There is a discontinuity where one theory ends and the other starts.
A similar scenario can be found in procedural generation. On one side you have the Roguelike and Dwarf Fortress type of games, which can generate compelling experiences but lack any visual appeal. These games use text or ASCII screens. It is like never jacking into the Matrix and just staring at the floating text instead. Can you see the woman in the red dress?
The moment you attempt to visualize the worlds they describe, the magic dies. You realize it was better when it was all ASCII, which is sad because only weirdos play ASCII games.
On the other side are rich visual worlds like Pandora. With the advent of computing as a commodity (the cloud), bigger and richer worlds will become available, even for devices with limited power like mobiles and tablets. But they lack the drama. Even a Pocahontas remix is beyond what they can do.
Triple-A studios have realized this. It is better to use procedural techniques for what they do best: terrain filler, trees, texturing, and to leave the creative work to the humans. They have no reason to invest in this type of AI; if it were up to them, this front would not move at all.
I do believe it is possible to marry these two opposed ends. Someone with a unique vision into both fields will be able to come up with a unified theory of procedural generation. And I think it will be the Unlimited Detail guys.
(EDIT: OK, OK, I'm kidding about Unlimited Detail. Their next bogus claim could be that they made procedural content generation 100,000 times better. It is surprising to me that they can still be taken seriously.)
Saturday, August 13, 2011
Ruins
The nicest part about creating buildings is to make them fall apart:
Some ruins near a cliff. I still need to do a lot of work on the ruin generator. So far there is no rubble, and there are a lot of floating chunks.
I will post more screenshots of ruins shortly.
Friday, August 12, 2011
Ribbed Vault
I had some time and added some columns and ribbed vaults:
The way it works is that actual architectural elements can be swapped at generation time. This means different types of vaulting, columns and ornaments can be used without having to change the grammar.
The grammars are modular. What you see here is the result of one particular module that is able to fill up any space with vaults and columns.
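As an illustration of the swapping, here is a toy sketch, not my actual grammar system: the rules only mention abstract symbols, and a separate binding table decides which concrete element each symbol becomes at generation time.

```python
# Hypothetical sketch: the grammar only talks about abstract symbols
# ("bay" -> "column" + "vault"); a binding table picks the concrete
# architectural element for each symbol when the building is generated.
GRAMMAR = {
    "space": ["bay"] * 4,            # fill the space with four bays
    "bay":   ["column", "vault"],    # each bay is a column plus a vault
}

BINDINGS = {
    "column": "corinthian_column.mesh",   # swap for "doric_column.mesh", etc.
    "vault":  "ribbed_vault.mesh",        # swap for "barrel_vault.mesh", etc.
}

def generate(symbol):
    """Expand a symbol until only terminals remain, then bind meshes."""
    if symbol in GRAMMAR:
        result = []
        for child in GRAMMAR[symbol]:
            result.extend(generate(child))
        return result
    return [BINDINGS[symbol]]

print(generate("space"))   # same grammar, different look per binding table
```

Changing the look of a building is then just a matter of handing the generator a different binding table.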
I still have some issues in the radiosity solution. There is some light at the top of one vault I cannot really explain.
The mood in this screenshot is set by post-processing tone mapping. This is not something I can do in realtime yet. I plan to cover it as soon as I start working on the client.
EDIT
A few minutes later I found the bug in the radiosity. Here is the same vault with improved lighting now:
That was quick.
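On the tone mapping mentioned above, here is a minimal sketch of one common global operator (Reinhard-style), assuming a linear HDR input; it is an illustration, not necessarily the exact curve behind these shots.

```python
# Minimal global tone mapping sketch (Reinhard-style), assuming the
# render is linear HDR stored as floats. Not necessarily the exact
# curve used for the screenshots above.
def tonemap(pixel, exposure=1.0):
    """Map an (r, g, b) HDR pixel into the displayable [0, 1) range."""
    return tuple(c * exposure / (1.0 + c * exposure) for c in pixel)

print(tonemap((0.2, 1.5, 8.0)))  # bright values get compressed, dark ones survive
```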
Tuesday, August 9, 2011
Staircraft
The staircase grammars are turning out nicely. Here is a screenshot:
This is taken from the low-resolution model. Please ignore the texturing for now, since it is the same brick texture everywhere. I have not yet bothered to apply any materials to the architecture. It is only about the shapes and how they connect to each other.
The columns in the image are part of the staircase, which is in the middle of a large building. I will show more of this later.
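To give an idea of why staircases are such a headache, here is a toy sketch of just the arithmetic a stair rule has to get right to connect two floors; the names and numbers are made up, and the real grammars have to do far more than this.

```python
import math

# Toy sketch of the bookkeeping a staircase rule has to get right:
# given the height between two floors and a comfortable riser height,
# find how many steps fit and where each tread sits. Values are made up.
def layout_stairs(floor_height, riser=0.18, tread=0.28):
    steps = math.ceil(floor_height / riser)
    actual_riser = floor_height / steps      # spread the rounding error evenly
    return [(i * tread, i * actual_riser) for i in range(1, steps + 1)]

for x, y in layout_stairs(3.2):
    print(f"tread at x={x:.2f} m, height y={y:.2f} m")
```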
Friday, August 5, 2011
Back Indoors
I got the city lots, so I'm back to working on the architecture. I still have a long way to go, especially now that I need to write building grammars that also produce believable interiors. Writing staircases can give you a headache like nothing else in this world.
Before going too far, I wanted to test how the buildings blended with the environment.
Some of the buildings I'm planning rest on top of large pillars. This is so their base is level with the terrain. The player will be able to get under those pillars. I wanted to see how they would feel. Here is a render:
Here is another scene I did to test how the outside world would be seen from the inside:
As cities start emerging, I hope to have a lot more of this soon.
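As a rough sketch of the pillar idea, and nothing more than that (the heightmap call here is hypothetical): each pillar only has to span from the level building base down to whatever the terrain elevation is at its footprint.

```python
# Hypothetical sketch of keeping a building base level over uneven
# terrain: sample the ground under each pillar and give the pillar
# whatever length is needed to reach the base elevation.
def pillar_lengths(base_elevation, pillar_positions, terrain_height):
    """terrain_height is any callable (x, z) -> ground elevation."""
    return {(x, z): base_elevation - terrain_height(x, z)
            for (x, z) in pillar_positions}

terrain = lambda x, z: 0.05 * x + 0.02 * z   # toy sloped terrain
print(pillar_lengths(10.0, [(0, 0), (8, 0), (0, 8), (8, 8)], terrain))
```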
Tuesday, August 2, 2011
Unlimited Detail
My baloney meter went off the chart last night while watching this:
It is actually a nice piece of software once you consider what it really is. My problem is what they claim it to be.
If you look closely at the video, they only have a few blocks of data that they display in different positions. They claim this is just to save on artwork, that it is merely copy & paste. Well, it is not copy & paste. It is instancing, and it is what makes this demo possible on today's generation of hardware.
They can zoom in to a grain of dirt, but they do not tell you it is the same grain of dirt over and over. If they had to show truly unique "atoms", this demo would not be possible today. The challenge is data management. They chose the best compression possible, which is to have no information at all.
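A back-of-the-envelope sketch shows why instancing is what makes the demo feasible: the scene stores one copy of each detailed block plus a list of cheap transforms, so memory grows with the number of placements rather than with the detail on screen. The numbers below are made up; only the orders of magnitude matter.

```python
# Back-of-the-envelope sketch of instancing: one detailed voxel block
# in memory, reused through a list of cheap transforms. Numbers are
# invented, only the orders of magnitude matter.
BLOCK_VOXELS = 512 ** 3            # one "grain of dirt" block at high detail
BYTES_PER_VOXEL = 4

block_cost = BLOCK_VOXELS * BYTES_PER_VOXEL      # stored once
instances = 100_000                              # placements in the scene
instance_cost = instances * 64                   # roughly a matrix per placement

unique_cost = instances * block_cost             # if every block were unique

print(f"instanced scene: {(block_cost + instance_cost) / 2**30:.2f} GiB")
print(f"fully unique scene: {unique_cost / 2**40:.0f} TiB")
```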
Something similar is done in this video by the Gigavoxel guys:
In this case you see a fractal volume coming out of the repetition of the same blocks, but it is essentially the same as what the Unlimited Detail guys have done. Impressive, yes, but there are no real applications for it at this stage.
Here is another example from NVIDIA research:
Also the classic Jon Olick's SVO demo:
And then you have the very promising Atomontage Engine:
None of these people claim they have revolutionized graphics, that they have made them 100,000 times better. Why? They know better. The problems to tackle are still too big; we are still generations of hardware away.
You see, for many years we have known of this Prophecy. There is this One Engine that will come and replace polygons forever. And there is no question this engine will come, maybe in just a few years. Meanwhile, whoever claims to be this Messiah will get a lot of attention. As usual, the Messiah scenario has only three possible explanations:
1. It is really the Messiah
2. It is crazy
3. It is just plain dishonest
I'm not sure about anything, but I have a very strong feeling about which one to pick.