A couple of months ago we released Voxel Farm and Voxel Studio. Many of you have already played with them, and we often get the same questions: why so much focus on artist input, and what happened to building entire worlds with the click of a button?
The short answer is that "classic" real-time procedural generation is bad and should be avoided, but if you stop reading here you will get the wrong idea, so I really encourage you to read through to my final point.
While you can customize our engine to produce any procedural object you can think of, our current out-of-the-box components favor artist-generated content. This is what our current workflow looks like:
It can be read like this: the world is the result of multiple user-supplied data layers that are shuffled together. Variety is introduced at the mixer level, so there is a significant procedural aspect to this, but at the same time the final output is limited to combinations of the samples provided by a human as input files.
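To make that concrete, here is a minimal sketch of what such a mixer could look like (hypothetical names and plain Python, not the actual Voxel Farm components): artist-supplied samples are combined per world cell, and a seeded choice provides the procedural variety.

```python
import random

def mix_cell(cell_x, cell_z, seed, terrain_layers, prop_layers):
    """Pick samples from the user-supplied layer sets for one world cell."""
    # Deterministic per-cell RNG via a simple spatial hash of the coordinates.
    cell_seed = (seed * 73856093) ^ (cell_x * 19349663) ^ (cell_z * 83492791)
    rng = random.Random(cell_seed)
    terrain = rng.choice(terrain_layers)                         # e.g. an artist-made heightfield tile
    props = rng.sample(prop_layers, k=min(2, len(prop_layers)))  # e.g. rock and tree sets
    return {"terrain": terrain, "props": props}

# The output space is only ever combinations of what the artist provided.
print(mix_cell(10, 42, seed=1337,
               terrain_layers=["rolling_hills", "mesa", "dunes"],
               prop_layers=["boulders", "pines", "ruins"]))
```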
This approach is fast enough to allow real-time generation, and at the same time it can produce results interesting and varied enough to keep humans engaged for a while. The output can be incredibly good, in fact as good as the talent of the human who created the input files. But here lies the problem too: you still need to provide reasonably good input to the system.
This is the first piece of bad news. Procedural generation is not a replacement for talent. If you lack artistic talent, procedural generation is not likely to help much. You can amplify an initial set into a much larger set, but you can't turn a bad set into a good one. A microphone won't make you a good singer.
The second batch of bad news is the one you should worry about: procedural generation has a cost. I have posted about this before. You cannot make something out of nothing. Entropy matters. It takes serious effort for a team of human creators to come up with interesting scenes. In the same way, you must pay a similar price (in energy or time) when you synthesize something from scratch. As a rule of thumb, the complexity of the system you can generate is proportional to the amount of energy you spend. If you think you can generate entire planets on the fly on a console or PC, you are in for some serious disappointment.
I will be very specific about this. Procedural content based on local mathematical functions like Perlin, Voronoi, etc., cannot hold our interest; it is guaranteed to produce boring content. Procedural content based on simulation and AI can rival nature and what humans create, but it is not fast enough for real-time generation.
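For reference, this is the kind of "local function" terrain being referred to, sketched here as a generic value-noise fractal (a textbook construction, not any particular engine's code): every height depends only on the coordinates and a seed, so no history or simulation ever enters the result.

```python
import math

def hash01(ix, iz, seed):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    h = (ix * 374761393 + iz * 668265263 + seed * 144269504) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def value_noise(x, z, seed):
    """Smoothly interpolated noise between the four surrounding lattice values."""
    ix, iz = math.floor(x), math.floor(z)
    fx, fz = x - ix, z - iz
    sx, sz = fx * fx * (3 - 2 * fx), fz * fz * (3 - 2 * fz)   # smoothstep weights
    top = hash01(ix, iz, seed) * (1 - sx) + hash01(ix + 1, iz, seed) * sx
    bot = hash01(ix, iz + 1, seed) * (1 - sx) + hash01(ix + 1, iz + 1, seed) * sx
    return top * (1 - sz) + bot * sz

def height(x, z, seed=7, octaves=5):
    """Fractal sum of value noise: fast and local, but ultimately repetitive."""
    total, amp, freq = 0.0, 1.0, 1.0 / 64
    for _ in range(octaves):
        total += amp * value_noise(x * freq, z * freq, seed)
        amp, freq = amp * 0.5, freq * 2
    return total
```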
Is there any point in pursuing real-time procedural generation? Absolutely, but you have to take it for what it is. As I see it, the only available path at this point is to have large sets of interesting content that can be used to produce even larger sets, but there are limits to how much you can grow the content before it becomes predictable.
For real-time generation, our goal will be to help people produce better base content sets. These could be produced on the fly, assuming the application allows some time and energy for it.
Here is where we are going next:
Hopefully that explains why we chose to start with a system that can closely obey the instructions of an artist. Our goal is to replace human work by automating the artist, not by automating the subject. We have not taken a detour; I believe this is the only way.
Hi Miguel! So far it sounded like you were done, but with this post it sounds like you are just now getting real and everything else was just a warmup.
Anyway, you are not alone in not believing in noise/fractals as "the solution". My own experiments with noise functions mostly produced boring results, which were far less interesting than your screenshots of late 2011 or the 22nd of May 2011, but maybe you just selected a few interesting views?
Anyway, this was the reason why I asked about geologic simulation the other day. I believe that is the way to go to produce a large-scale world, with other methods to fine-tune it and some "AI" that can be told something like "I want a castle on a cliff towering over a town". The AI would look for a good spot for that in the generated world, or modify the terrain generator parameters to create it.
Anyway, this cannot be done in real time and not for infinite terrain, but I don't think anyone really wants/needs infinite terrain, as you can see plenty of different landscapes if you hike a few hundred kilometers in the real world.
It is kind of funny, but this is the only approach I have ever used. I used Perlin and Worley only for small scale features like boulders at some point, but quickly discarded them.
About the real-time aspect, I believe it is more about camera speed and information density than size. Imagine a small finite planet and yourself traveling in a spaceship. The ship's speed approaching the planet will determine how complex the planet's surface can be.
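A back-of-the-envelope version of that point, with invented numbers just for illustration: the content you can synthesize on approach is bounded by the approach time multiplied by the generator's throughput, so a faster ship forces a simpler surface.

```python
# Illustrative numbers only: how much detail a generator can produce on approach.
approach_time_s = 30.0           # time from orbit to touchdown
gen_rate_voxels_s = 2_000_000    # voxels the generator can synthesize per second
detail_budget = approach_time_s * gen_rate_voxels_s
print(f"Detail budget on approach: {detail_budget:,.0f} voxels")  # ~60,000,000
```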
This is very much a fidelity thing. Low fidelity can get away with a huge amount of simplicity: the generation algorithm Spelunky uses is incredibly simple but produces good results.
But this is also a semantic issue. Spaces without meaning are boring. Interesting gameplay elements can exist in boring spaces. Layering systems (physics, fire propagation, choices with consequences) on top of procedurally generated spaces will make them interesting.
Agreed, the more abstract it is, the easier it becomes. Your mind fills in the gaps. Roguelikes are a good example of this.
So does this mean that you'll be working towards completely/mostly computer-generated worlds again? With cities and nations and stuff?
'Cause though the terrain stuff was really pretty and interesting, what really drew me to this blog and this project initially was the fact that it created really interesting stuff simulating human/sentient intervention, such as the cities and the buildings =).
Kamica, I guess he is referring to the artist-made content used to produce terrains.
Yes, I would like to have the best terrains possible before going back to cities, but who knows.
Delete"...before going back..." Good to know that it's still in your planned stuff, that's all I need to know =).
Miguel, what do you think about crowd computing? Like using players' PCs to process simulations in an asynchronous way.
I would love to develop something like that, but there are some difficult problems involved. Quality of service is one, as you cannot guarantee getting responses in predictable time. Security and trust are the other. You could go Byzantine on that, but it would require a larger user base.
In fact all problems are solvable if the user base is very large, but then how do you grow it? These very same problems will likely prevent you from growing. You could ease the growing pains by offsetting work to cloud servers at the beginning, but this sort of hybrid is even more complex and you would need to shell out some cash for servers until you hit that critical mass.
In general I like to think P2P computing is the very long-term future and centralized (cloud) computing is only a phase. Any network system in nature offsets power to the edges of the network (i.e. the brain), but with Moore's Law dying I do not know anymore. We may be locked in the cloud for good.
Moore's Law should revive with mainstream quantum computing.
Yes, Moore's Law will be alive and dead at the same time in superimposed quantum states. :)
Schroedinger's Moore =P.
Delete"Schroedinger's Moore" I laughed at loud
I imagine artist generation more as variables based on personality.
I sort of think of how you would stroke as an artist and apply that to the lines of terrain, foliage, even textures.
Some variables I would assume to be useful (see the sketch after this list):
Aggressive <-> Graceful: the variation in stroke rotations, and how sharp the edges are.
Sloppy <-> Detailed: basically strength in the number of strokes.
Subtle <-> Bold: the thickness/randomness of strokes.
You can make these strokes both in 3D and when drawing textures.
You can also apply something similar to the color variation of strokes.
Feels like this kind of artist generation could help human artists gain inspiration.
Also, you could apply multiple artist styles by trying a genetic approach with generations and so on.
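Reading Mindrage's idea as code, here is a hypothetical sketch of how those personality sliders could map to concrete stroke parameters (all names and ranges are invented for illustration, not part of any existing tool):

```python
from dataclasses import dataclass

@dataclass
class StrokeStyle:
    rotation_jitter: float   # random rotation applied per stroke, in radians
    edge_sharpness: float    # 0 = soft falloff, 1 = hard edge
    stroke_count: int        # number of strokes per feature
    stroke_width: float      # relative thickness of each stroke

def style_from_personality(aggressive, detailed, bold):
    """Map Aggressive/Graceful, Sloppy/Detailed, Subtle/Bold sliders in [0, 1] to stroke parameters."""
    return StrokeStyle(
        rotation_jitter=0.1 + 1.4 * aggressive,
        edge_sharpness=aggressive,
        stroke_count=int(10 + 190 * detailed),
        stroke_width=0.5 + 2.5 * bold,
    )

print(style_from_personality(aggressive=0.8, detailed=0.3, bold=0.6))
```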
It is interesting how you have described it. I see it more as cloning the method, tricks and general approach of an artist to producing a given subject. A classically trained artist begins by knowing the subject very well. If the subject is a mountain range, the artist probably studied how mountains are formed, how erosion works, etc. If the subject is a creature, the artist would apply knowledge of skeletons, muscle systems, etc. Then on top of that comes what in the artist's opinion makes the subject interesting and worthy of attention. Different people will choose different things, but there is always a method, even if subconscious. That is for me the path to artist automation.
https://www.youtube.com/watch?v=pgaEE27nsQw
Imagine drawing on a piece of paper with the different strokes, and the artist adapts to a certain level of acceptance to evolve further each generation. Some extremities can cause changes in biomes, and the artist will try to adapt to the new values.
Never mind the previous link, I linked the wrong video :(
The core concept I wanted to convey is doing a psychological analysis of an artist as they draw (you could watch YouTube videos with commentary; even livestreams are good).
The patterns I would look for are what they think doesn't really look right when drawing an image, like what they correct the most, or what areas they put the most detail in.
Generating textures/shapes as input files could, of course, be a recursive process until the AI feels it is satisfied with its files.
So the AI keeps adding details, altering details in certain areas, which describes their personality.
If you do this, you could possibly apply it to almost any kind of content.
From buildings, terrain, textures, and foliage/trees to creatures. It's as abstract as I can think of.
Oh my, reading your reply now, it feels like we're saying the same things :P. I guess you already thought it out in more detail. The only difference I see is that you focused on the definition of the artist and not their methods. I do have some methods in mind that might work, but I'm too bad with words to describe them correctly, it seems. Sorry if anything was confusing. I might post a visualization of how I thought about it.
Mindrage, I guess you are focusing too much on the way an artist renders their image. I imagine that Miguel will first need to create something, so he is focusing on the creative process, like understanding archetypes and concepts.
I understand that it is counterintuitive to an artist, because we first learn to paint what we see before creating from imagination, but a computer already knows how to paint what it imagines, so what we need now is something that feeds this imagination with
coherent information about our world, like simulations of geological formation and vegetation spreading.
Hm, what do you think about No Man's Sky? It seems like there is a lot of procedural generation involved.
Their creature and botany generation seems combinatorial over artist-created samples. I like what they have shown so far.
The terrains appear to be based on local functions. It is not interesting enough for me, but it might be good enough as a gameplay backdrop.
I cringe a little bit when I hear someone promising entire galaxies of content. While technically correct, the space your mind will explore is not the endless surfaces but the scope of the formulas, data seeds and algorithms, which will feel very limited.
If you have ever played with a kaleidoscope you get the same feeling. Yes, there are a gazillion possible configurations, but you get bored very quickly because your mind has grasped the potential of the device.
The game can still be quite good, I am commenting on one aspect of the generation.
I admire your efforts on optimal terrain generation, but I don't believe this effort pays off in the gameplay of a single game. So, when I have enough money to produce my game, I will be glad to acquire your technology and your support.
https://www.youtube.com/watch?v=xV4_ZJFJ2nA
https://www.youtube.com/watch?v=6NXIpfThxn4
And mainly: https://www.youtube.com/watch?v=xV4_ZJFJ2nA
^All those, can Voxel Farm still be used with a focus on them?
Yes, these are vanilla Voxel Farm.
Any plans for Maya integration in the future?
Not that I like Maya, but it basically bridges to every modern DCC tool out there today, while Max is being left behind in the compatibility department.
Plans, yes. As soon as we are done with 3ds Max. Maya is also a C++ API, so it should go smoothly. (Anything non-C++ should be banned like public smoking, looking at you Unity!)
Miguel, how about the large C++ elephant in the room... UE4. How is integration going with it? I took a couple of swipes at it and was even loading bundles (without materials). Sadly, my day job and life got a little too busy to continue.
We are not working on UE4 integration at the moment. UE4 users like yourself are savvy enough to get around it. Having a default integration would be very nice, and we would rather have you work on your game, so it is still at the top of our to-do list.
Miguel, what do you think about last year's SIGGRAPH paper "Topology-Varying 3D Shape Creation via Structural Blending" (http://www.youtube.com/watch?v=Xc4qf7v6a-w)? This method seems to produce much more variation than the usual part-combining. How effective will it be in expanding the "input file" sets for the mixer?
Very interesting. Could work, but there are a lot of misses too. It seems to have a mind of its own, sometimes even a sense of humor.
Isn't this what No Man's Sky is doing with the creatures and ships?
SIGGRAPH 2015 also has a couple of relevant presentations:
"Semantic Shape Editing Using Deformation Handles".
http://www.youtube.com/watch?v=ob1y8mJ6rfk
New objects are generated from a prototype set according to user-controlled semantic attributes.
"Elements of Style: Learning Perceptual Shape Style Similarity"
http://www.youtube.com/watch?v=PWqZwpHQtnE
The artistic style similarities between objects are recognized on the basis of a prototype set.
Personally, I'd be interested to see how the two algorithms interact, because, theoretically, the first could be taught to produce hybrids of the styles identified by the second.
I don't think procedural generation has to be real-time to be valuable.
I ran a successful Minecraft community for many years, and while Minecraft is perfectly capable of real-time procedural generation, we never used that possibility. To save server resources and reduce lag we would just pre-generate a world with boundaries and expand it a little bit every once in a while. One world for players to build in and another world that we regenerated from a new seed every 2 weeks so there would always be something new to explore.
Minecraft actually has very poor terrain, with very limited "new information".
What really made the game so popular was the possibility of grinding and making creations really big.
Therefore the really interesting information in the game was created by the player.
Real-time generation saves resources and adds gameplay value. If you create a fixed-size map, people (and especially one person) will likely just look at a small part of it, while being bothered by the edge of the world.
Sadly, Minecraft is not capable of providing real-time generation for servers in a reasonable way.
One other solution:
Basing your inputs on the content generated by online users. From there you can find multiple kinds:
- like Minecraft or Second Life, where people consciously create content.
- based on data mining of independent data, for example analyzing Facebook/Twitter content and generating procedural stuff from it.
I guess the second one would be terribly hard to produce, but it is a possibility.
I can't edit my previous comment so I'll add another one...
I hope procedural generation becomes modular, like components. If it's modular enough, people could share and combine procedural algorithms like bricks, and even have them on a kind of store. So your entropy can come from an accumulation of algorithms generated from an amount of work much larger than your current workforce.
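One possible reading of that "bricks" idea, sketched in Python (hypothetical functions, not an existing store or API): generator modules as plain functions over a heightmap that anyone could share and chain into new pipelines.

```python
def uplift(heights, amount=5.0):
    """Raise the whole heightmap by a constant amount."""
    return [[h + amount for h in row] for row in heights]

def terrace(heights, step=2.0):
    """Quantize heights into flat terraces."""
    return [[round(h / step) * step for h in row] for row in heights]

def compose(*modules):
    """Chain generator modules left to right into a single generator."""
    def pipeline(heights):
        for module in modules:
            heights = module(heights)
        return heights
    return pipeline

# Anyone's shared modules could be stacked into a new generator.
generator = compose(uplift, terrace)
print(generator([[0.0, 1.3], [2.6, 3.9]]))
```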
Oh it's all gone a bit Disney.
You know artists only want to make the perfect fairy land over and over again, don't you?
:)
Real worlds, including the Earth, have astonishingly ugly and boring locations in them.
The art in filmmaking is FINDING the locations to film at, not building them! That's what teams of flying helicopter scouts are paid for.
I just think you're being a little unfair towards procedural generation. Real planets are a mess and the beautiful parts are often locations where unusual aberrations come together, like massive waterfalls, ice caves, volcanoes etc.
Merging many different fractals at different weights with 20+ iterations each can surely make beautiful locations, especially when erosion is applied to simulate water over centuries. But they can also irritate traditional artists who always want control, to make that rock and tree look more like they were laid down by the fairies for a Christmas card scene. :D
I'm not saying there's anything wrong with that, of course, but artists can take a long time to do stuff, and it can get expensive, and anyway, I thought this was 'procworld?'
The thing is, he has now started a little company and actually has people to pay (I believe) from the earnings of this engine, so now it's important to make it useful and user-friendly, rather than just interesting.
And though it's true that procedural generators CAN produce interesting locations, they often don't do it at the density or quality that artists require for it to be useful for games. For now, I agree that it's best to do the main grunt of the work with procedural generation, and then have artists make the final decisions. This way you're not wasting an artist's (expensive) time making every hill look just right, while also not having a hard-to-control or even boring setting.
With time I reckon more will be possible with procedural generation, but currently it's a good thing to stick with man and machine working in harmony =P.
Oh, and would you play a game like, say, Skyrim, at actually realistic sizes, where you have to spend several hours to reach anything interactable? =P. (Skyrim would probably be about the size of... Germany? I dunno, hard to estimate; the Iliac Bay region was about the size of Great Britain.)
It's a compromise that happens.
I'm just saying it's a bit of a shame that everything has to be picture-postcard perfect, when in reality nature is so brutal on the surface of planets, with all that physics playing its role for millions and millions of years.
I thought this was the one project that wouldn't go in that typical direction, that's all.
Well, it's all industry-driven, so I guess there wasn't much choice.
All of planet Earth is interesting to me. The little rocks in my backyard do not bore me. Disney and Niagara are high points, but a small creek is beautiful too.
My point is you cannot do a small creek unless you simulate it or have someone draw it. Both approaches are too slow for real time, today.
"You know artists only just want to make the perfect fairy land over and over again don't you?"
Heeeey I represent that remark! :p
Seriously though, I take issue with the whole philosophy that procedural generation ought to be implemented from the ground up, with no outside input - primarily because I consider that to be a total fantasy (and you think WE'RE the ones obsessed with fairy worlds! Ha!). There's no such thing as "no creative input" - all you're REALLY removing is your own knowledge of what that creative input is, and any control over it. Doesn't sound very effective to me at all.
Anyway, the endgame he laid out sounds like it will blow ground-up procedural generation out of the water :)
Water erosion simulations work fairly well these days, but perhaps because they mainly run on heightmaps, not on voxels. They are at least interactive enough.
ReplyDelete"Interactive Terrain Modeling Using Hydraulic Erosion":-
https://youtu.be/_HUHPyJTAtM
SIGGRAPH 2013:
https://www.youtube.com/watch?v=JCsj0v-wmIM
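In the spirit of those links, here is a deliberately crude heightmap relaxation sketch (a thermal-style talus pass, not the hydraulic pipe model from the papers above): material moves downhill wherever the slope exceeds a repose threshold.

```python
def erode(heights, iterations=50, talus=0.5, rate=0.25):
    """Crude talus erosion: shift material from a cell to its lowest neighbor."""
    h = [row[:] for row in heights]
    rows, cols = len(h), len(h[0])
    for _ in range(iterations):
        for i in range(rows):
            for j in range(cols):
                # Find the lowest 4-neighbor of cell (i, j).
                best, bi, bj = h[i][j], i, j
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and h[ni][nj] < best:
                        best, bi, bj = h[ni][nj], ni, nj
                diff = h[i][j] - best
                if diff > talus:                # slope steeper than the repose threshold
                    moved = rate * (diff - talus)
                    h[i][j] -= moved
                    h[bi][bj] += moved
    return h
```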
Yes, it is a simulation; this is what I believe works. It is not interactive in the sense that you can fly into a planet and have all the terrain processed by centuries of it. So it is not applicable for real time.
Obviously it wouldn't simulate centuries of actual data, but perhaps a compromise can be made. It could be forced to complete quickly or be area-bound, and it would be acceptable to run it as a pass for the design stage.
I thought you stored all the geometry on servers anyway, so it wouldn't be created at player time?
I believe that was before it went into "Minecraft-clone (but way better than Minecraft)" mode =P.
It depends on the project. Some projects are OK with pre-generating content. In this case we can afford to run heavy procedural generation.
So with the release of the DeepDream neural network code, are you working on using these recursive neural nets to start on your goal of generative art with AI?
https://www.reddit.com/r/deepdream/comments/3c2s0v/newbie_guide_for_windows/
No, this is not something we would look at in the near term.