The demo is not available yet; we are still working on the game side of it in UE4, but the Citadel model is pretty much complete at this point. I would like to cover a couple of aspects of this experience that I find interesting.
A question I often get is why use voxels at all. I usually point at the obvious bits: if you want to do real-time constructive solid geometry (CSG), pretty much anything else is too slow. CSG is what allows you to create game mechanics like harvesting, tunneling, destruction and building new things. Also, if you are doing procedural generation of anything that goes beyond heightmaps, voxels make it much easier to express and realize your procedural objects into something you can render using traditional engines like UE and Unity.
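To make the CSG point concrete, here is a minimal sketch of the kind of edit voxels make cheap: carving a sphere (a dig or a blast) out of a density grid. The VoxelGrid type and the positive-means-solid convention are assumptions for illustration, not Voxel Farm's actual API.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical voxel container: density > 0 means solid, <= 0 means air.
struct VoxelGrid {
    int sizeX, sizeY, sizeZ;
    float voxelSize;                  // world units per voxel
    std::vector<float> density;

    float& at(int x, int y, int z) {
        return density[(z * sizeY + y) * sizeX + x];
    }

    // CSG difference: subtract a sphere centered at (cx, cy, cz) with radius r.
    void subtractSphere(float cx, float cy, float cz, float r) {
        for (int z = 0; z < sizeZ; ++z)
            for (int y = 0; y < sizeY; ++y)
                for (int x = 0; x < sizeX; ++x) {
                    float dx = x * voxelSize - cx;
                    float dy = y * voxelSize - cy;
                    float dz = z * voxelSize - cz;
                    // Distance to the sphere surface: negative inside the sphere.
                    float d = std::sqrt(dx * dx + dy * dy + dz * dz) - r;
                    // Cells inside the sphere become air; everything else keeps
                    // (at most) its previous density.
                    at(x, y, z) = std::min(at(x, y, z), d);
                }
    }
};
```

Harvesting, tunneling and destruction are all variations of this same loop; union and intersection just swap the min/max and the sign of the sphere distance.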
What I rarely say is that once you work with voxels, your mind changes. I let people figure this out by themselves; I do not want to be that weird guy saying you really need to try LSD. You change because you begin seeing your entire project as a single fabric of content. You feel more like you are working on a canvas. There is no difference between a tower roof and terrain you have terraformed. It is a really distinct feel, one that cannot be explained, only experienced.
If you have developed for UE4 or Unity before, think of how you would approach a project like this Citadel. While it is possible, you would be building out of a myriad of objects placed in your scene. You would have an object for the terrain, static meshes for the towers and walls; even the rocks making up your cliffs would be a bunch of instanced meshes clearly intersecting each other. Simply put, there is no canvas; instead, you have a collection of things.
If you want to have large organic shapes, like a massive spiral tower that slowly unravels over hundreds of meters, you would need to carefully plan how to deal with all this unique geometry. The image below shows an example of this from the Citadel:
It gets messy. This often leads to not having unique geometry at all, as it is too much trouble. That is unfortunate. Unique geometry can take your content to a whole new level. Once you have experienced it for a while, going back to the traditional instance-based approach is immersion-breaking, at least for me now.
When you build out of individual small pieces, even if they have LODs of their own, their agglomeration cannot be trivially condensed into single objects that will LOD efficiently. Serious consideration needs to go into which objects you use to build the world, how large they can be, how you can reuse them and how to create cheap variations of them. All this planning takes a lot of work and, above all, a great deal of experience.
This is why it takes a Triple-A team to produce complex scenes and rich open worlds. Even though there are plenty of very talented artists out there, the slew of tricks you need to apply remains a veiled, mysterious art. We should not need GDC talks. The current state of the industry is as if Microsoft Word limited the kind of novel you could write with it, and only those versed in Word's options and macros could create compelling fiction.
As I see it, it is really about the "fabric" that makes the virtual world. Once it becomes an organic canvas, you can automate tricks like LODs, culling and visibility sets in simple, robust ways. Let the computer do the hacks for you.
The other advantage of developing a virtual world as if it were a canvas is that your workflow becomes closer to what you experience working in Photoshop, versus the Maya/Blender experience. This is one of my favorite bits in the video above; it starts around the 2:54 mark. The artist first defines the basic volumes and then continues to refine them. I find this very intuitive and close to how people create in pixel-based systems like Photoshop.
Talking about artists, this Citadel project was possible thanks to Ben, who became part of the Voxel Farm team early this year. The amount of work he was able to put into this Citadel is incredible, as is the quality of his work. Ben caught everyone's attention as a player-builder in Landmark, under the Ginsan alias. Here is one voxel beauty he created back then:
Screenshot from Landmark (SOE/Daybreak)
A true Renaissance man, Ben also created the superb music for the video above. He often tweets about his progress on new Voxel Farm projects; if you are curious about what he is working on, make sure to follow him: https://twitter.com/adamiseve
Hi Miguel, I'm not a game developer, just a gamer. I've been following your progress for a long time now and this is the first time I've really understood the power of your engine. It looks like something even I could pick up and understand. The end result is stunning. What happens if someone were to completely mine out underneath the entire structure (so that it falls)? Does the whole citadel get instanced?
That would be awesome, but it won't happen. There is a maximum radius that will be scanned for detached fragments and the whole site is just too large to be scanned at interactive rates. In this demo it is set to about 30 meters around the area of impact.
That makes sense. Is that a limitation of voxels or of the game engine?
It is a limitation of computers in general. The scan actually happens in mesh space; we do not use voxels for that part. It is just that when things become too large, everything takes proportionally a larger amount of time.
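For readers wondering what such a scan could look like, here is a rough sketch of the idea on a plain voxel grid: flood-fill outward from the solid cells at the rim of the scan sphere (assumed still attached to the rest of the world) and report anything the fill cannot reach as a detached fragment. Voxel Farm does this in mesh space with its own data structures; the isSolid query and the grid walk below are stand-ins.

```cpp
#include <array>
#include <functional>
#include <queue>
#include <set>
#include <vector>

using Cell = std::array<int, 3>;   // voxel coordinates (x, y, z)

// Returns the solid cells within `radius` of the impact that are NOT connected
// to the rim of the scan sphere, i.e. the candidate detached fragment(s).
std::vector<Cell> findDetached(const std::function<bool(int, int, int)>& isSolid,
                               Cell impact, int radius) {
    auto dist2 = [&](const Cell& c) {
        int dx = c[0] - impact[0], dy = c[1] - impact[1], dz = c[2] - impact[2];
        return dx * dx + dy * dy + dz * dz;
    };
    std::set<Cell> reached;
    std::queue<Cell> frontier;

    // Seed: solid cells on the rim of the scan sphere are assumed to still be
    // attached to the rest of the world.
    for (int x = impact[0] - radius; x <= impact[0] + radius; ++x)
        for (int y = impact[1] - radius; y <= impact[1] + radius; ++y)
            for (int z = impact[2] - radius; z <= impact[2] + radius; ++z) {
                Cell c{x, y, z};
                int d2 = dist2(c);
                bool rim = d2 <= radius * radius && d2 >= (radius - 1) * (radius - 1);
                if (rim && isSolid(x, y, z) && reached.insert(c).second)
                    frontier.push(c);
            }

    // Flood fill through solid 6-neighbours, staying inside the scan sphere.
    const int N[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    while (!frontier.empty()) {
        Cell c = frontier.front(); frontier.pop();
        for (auto& n : N) {
            Cell m{c[0] + n[0], c[1] + n[1], c[2] + n[2]};
            if (dist2(m) > radius * radius || !isSolid(m[0], m[1], m[2])) continue;
            if (reached.insert(m).second) frontier.push(m);
        }
    }

    // Any solid cell inside the sphere the fill never touched is detached.
    std::vector<Cell> detached;
    for (int x = impact[0] - radius; x <= impact[0] + radius; ++x)
        for (int y = impact[1] - radius; y <= impact[1] + radius; ++y)
            for (int z = impact[2] - radius; z <= impact[2] + radius; ++z) {
                Cell c{x, y, z};
                if (dist2(c) <= radius * radius && isSolid(x, y, z) && !reached.count(c))
                    detached.push_back(c);
            }
    return detached;
}
```

The cost grows with the volume of the scan sphere, which is why the radius has to be capped for interactive rates.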
If I understand correctly, the scan happens after each impact. Would it be possible to store additional data about material strength and forces in the voxels, so it doesn't have to be calculated each time? Let's say we have a 100-meter-long bridge and its ends already carry the information that the weight on one side of the voxel is almost as much as the voxel can handle. Maybe a skeleton mesh of lower resolution (like bones in animation) could be added to store this data?
inb4 Bridge Builder: Masonry edition
Also, I understand that it might be difficult to crumble already detached stones, but at least adding some dust particles would make them look less like giant pieces of rubber :)
We did some R&D on this. The best approach we found was to thin the voxel models until a node-link skeleton could be computed. As a residue of the thinning stage, we would have the mass of each segment in the skeleton. Using this connectivity graph and the weights, we could compute the location of breaking points and cut the model there. At that point the fragment discovery logic kicks in, and it looks as if the structure had collapsed under its own weight.
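A minimal sketch of that graph step, assuming the skeleton has already been extracted and oriented as a tree away from its anchored (ground-connected) nodes. The single strength-per-link model is my own simplification, not how Voxel Farm weighs materials.

```cpp
#include <cstddef>
#include <vector>

struct SkeletonNode {
    float mass = 0.0f;                 // mass of the thinned segment (residue of thinning)
    bool anchored = false;             // true if this node reaches the ground
    std::vector<std::size_t> children; // links pointing away from the anchors
};

// Returns the indices of nodes whose link to their parent should break.
// Assumes nodes are stored in topological order (parents before children),
// so iterating backwards visits children before their parents.
std::vector<std::size_t> findBreakingPoints(const std::vector<SkeletonNode>& nodes,
                                            float strengthPerLink) {
    std::vector<float> supported(nodes.size(), 0.0f);

    // Accumulate the mass hanging off each node.
    for (std::size_t i = nodes.size(); i-- > 0;) {
        supported[i] += nodes[i].mass;
        for (std::size_t child : nodes[i].children)
            supported[i] += supported[child];
    }

    // Any non-anchored subtree heavier than its link can carry becomes a cut.
    std::vector<std::size_t> breaks;
    for (std::size_t i = 0; i < nodes.size(); ++i)
        for (std::size_t child : nodes[i].children)
            if (!nodes[child].anchored && supported[child] > strengthPerLink)
                breaks.push_back(child);
    return breaks;
}
```

Once the cuts are applied to the voxel model, the regular fragment discovery takes over, as described above.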
There is plenty to do in this field, which is very exciting for us: different material properties, different breakage patterns, fragments that crumble, fragments that break other fragments off the world (forming chain reactions). There is a lot of low-hanging fruit there; hopefully we will get to work on this soon.
Since you guys seem to be getting to the point where you're breaking new ground in computer programming, have you considered writing papers for publication?
One question: why a 30m radius? Shouldn't it be independent of scale? Imagine two objects of the same shape, but one 10 times bigger. If you break them apart, would the computation need a similar amount of data and time? The smaller one would be computed per voxel, while the larger one would run the same algorithm at a higher level of the voxel hierarchy. It could also have the benefit that a mountain would not stay connected to the rest of the world through a single voxel, because such a voxel would not be at the same scale as the mountain calculation.
In this demo, 30 meters is large enough to capture most fragments, but small enough to provide quick results.
This is scale-independent. We are detecting a fragment that was separated from the voxel world, which is always huge; it includes the whole citadel and the desert landscape. Since the potential size of a fragment could be thousands of kilometers, we need to pick a practical scope in which to look for fragments.
I mean scale-independent in a slightly different way. Imagine we have a 10m arch and we blow up both supports; it will fall, right? Now we have a 10km arch and we do exactly the same, and it will not fall because it does not fit the 30m detection radius. But topologically both are the same, just at different scales. AFAIK voxels are a hierarchical data structure, and we can easily reduce accuracy and precision, as you show in other videos. If we reduce the detail available to the algorithm, it could handle the 10km arch at the same speed as the 10m arch. It could miss some details because of the reduction, but typical cases could be handled.
You could split this algorithm into a couple of passes:
10m radius with 1-voxel accuracy
100m radius with 10-voxel accuracy
1,000m radius with 100-voxel accuracy
...
1,000,000m radius with 100,000-voxel accuracy
It is possible it would do the same amount of work as a
60m radius with 1-voxel accuracy.
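To make the proposed schedule concrete, here is a tiny sketch that just enumerates those passes. The numbers mirror the list above, and nothing here reflects how the engine actually works (see the reply below).

```cpp
#include <vector>

// One pass of the proposed coarse-to-fine fragment scan.
struct ScanPass {
    float radiusMeters;      // how far from the impact this pass looks
    float voxelSizeMeters;   // connectivity resolution used by this pass
};

// Builds the schedule from the list above: each pass is 10x wider but 10x
// coarser, so the work per pass stays roughly constant.
std::vector<ScanPass> buildScanSchedule(float maxRadiusMeters, float baseVoxelMeters) {
    std::vector<ScanPass> passes;
    float radius = 10.0f;                // 10m, 100m, 1,000m, ...
    float voxel = baseVoxelMeters;       // 1, 10, 100 voxels of accuracy, ...
    while (radius <= maxRadiusMeters) {
        passes.push_back({radius, voxel});
        radius *= 10.0f;
        voxel *= 10.0f;
    }
    return passes;
}
```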
I see what you mean. In practice, a 10m arch will have fewer points than a 10km arch. In theory you could use the same model, with the same number of vertices. Still, our engine will segment the 10km arch into multiple chunks, while the 10m one will fit into a single chunk. The chunking introduces additional geometry, so it is not really possible to have the same total number of vertices. There is also the fact that voxels are written at the highest-resolution LOD, so deleting the 10km arch from the voxel world will take a lot more voxel updates.
OK, I see. I assumed a more scale-uniform data structure than the one you have. Changing it now to fit my idea would take a lot of time and would probably sacrifice performance in other aspects.
By the way, does deleting a shape in voxels take O(n^2) or O(n^3) time, where n is the scale of the object? I expect n^2, because most of the work will be done on the border of the deleted shape. And in some corner cases, isn't it possible to delete something in O(1) time?
Deleting the shape is a voxelization operation, where you take the mesh that just became detached and write it back as air voxels.
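In grid terms, that operation looks something like the sketch below: visit the detached mesh's bounding box and clear every cell whose center falls inside the mesh. The insideMesh and setAir hooks are assumptions; a real implementation would voxelize more carefully and only re-mesh the touched chunks. It also shows why the cost tracks the fragment's volume rather than just its surface.

```cpp
#include <array>
#include <functional>

using Cell = std::array<int, 3>;

// Sketch of the "write it back as air" step for a detached fragment.
void voxelizeToAir(Cell boxMin, Cell boxMax,
                   const std::function<bool(int, int, int)>& insideMesh,
                   const std::function<void(int, int, int)>& setAir) {
    // Work is proportional to the volume of the fragment's bounding box, which
    // is why very large deletions cost a lot more voxel updates.
    for (int x = boxMin[0]; x <= boxMax[0]; ++x)
        for (int y = boxMin[1]; y <= boxMax[1]; ++y)
            for (int z = boxMin[2]; z <= boxMax[2]; ++z)
                if (insideMesh(x, y, z))
                    setAir(x, y, z);
}
```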
While I'm sure that a voxel-based workflow is different to a standard one, I do disagree with the assessment that blocking and refinement isn't a part of a polygon-modelling workflow.
All the modelling I've done has started from rough blocks which are then refined into more detailed shapes. You might be confusing the kit-bashing aspect of building larger scenes (which would probably be done in UE or Unity anyway) with the actual modelling workflow done in Blender or Maya.
I'd say that the advantage of a voxel-based workflow is that it extends the standard "creative process" (broad strokes followed by progressive refinement) to environment creation in a single tool. I don't have to be mindful of how my tower piece blends into the rest of the scene because I'm building it directly in the scene.
Yes, you are correct; I probably took it too far there.
Certainly you can start with base geometry in Maya/Blender and continue to refine it by splitting faces, doing boolean ops, etc. Still, that does not "feel" like you are working on a natural medium, like painting with oil, Photoshop, or even volumetric tools like Mudbox/ZBrush. But we should not talk about "feelings", since they are something we cannot quantify and measure.
My remarks were more in the context of large scenes, which are mandatory for even the most basic game levels. Like you say, this is why we need to resort to kit-bashing.
If you were building such a large scene in Maya, I think you would also need to rely on kit-bashing. You could not refine your base block-out down to a 100,000,000 polygon model... or, could you? It has been a while since I tried something large in Maya.
Well then, this is just really impressive... How long did it take to make the scene?
It was about four months of solid work for a single artist.
This took some programming too. We had to improve the tools, create a new scene management system to handle the massive amount of content, etc.
It took longer also because it was a learning experience for us. I guess if we did it again with the current state of the tools and engine, it would take less time.
I feel like even so, four months for a single artist for such a large, intricate piece of scenery is an impressive rate if I'm not mistaken.
This is starting to look like it could possibly become an industry standard tool =P. Are you guys working with many studios at the moment? Any projects you guys can talk about that use your tools and technology?
Have you seen what's been happening in neural-network picture processing lately? I bet that within the next few years you'll be able to teach a network what voxel shapes you want and have it generate an unlimited variety of voxel buildings in a requested style.
That would be so cool. I remember a paper where you would sketch a building by hand and the network would generate a detailed building that matched your sketch.
At the same time, I'm healthily skeptical about this. Deep neural networks are a specific class of algorithm; we are excited about the things they can do, but once the hype settles it will also be about what they cannot do.
All the "official" hype behind AI makes me naturally suspicious. Like with the VR fad, it is as if the industry needed these stories to keep some sort of pyramid scheme going. We need to keep clapping or the last fairy will die.
I'm not saying these are vaporware; on the contrary, these are massive technical developments, but it is clear these ideas are being marketed. You could say my neural network has been trained to raise a red flag every time I sense a marketing team :)
Yeah, we should keep a sense of cautious optimism, I reckon =). Though it may definitely be worth investigating whether machine learning could be used at least for parts of your tools. There are a lot of papers on machine learning; you'd just need to figure out which areas are the right ones to apply it to, how, and what the pros and cons are. That's why I reckon a pre-trained neural network could be useful as a tool, but possibly not yet as a core system =P. But I have no idea how long it takes to learn how to build and train one... And time is money, especially since you're a company now =P.
I'm sure you guys know what you're doing =P.
From what I've seen about TensorFlow, it's quite streamlined these days. You define a structure of the network, set up several parameters and throw data at it until it sticks or until you figure out why it doesn't stick.
It would be fascinating to see if it could manage translating a cube of voxels into a grammar. Generating grammars and translating them into shapes automatically is fairly straightforward and gives unlimited training data with a clear fitness function. The hard part would be generating some kind of a grammar as a neural network output.
I love the idea. I wonder if someone has used the large number of Minecraft builds out there for something like this.
Miguel, I really love Voxel Farm and what you guys are doing, and I am trying to build stuff using it. Still, I think what you describe here as an advantage has disadvantages too, or maybe not disadvantages, but it requires a different type of thinking when coding the game.
In the old model, I get a model for a prop from an artist, put it in the world, add a collision trigger around it, and code some logic for it. For small structures that is very helpful when coding, I mean for the small parts of the world which are part of the gameplay. Now, if that small prop is something on top of this tower, in theory being voxel-based means it is part of the world and I cannot easily code logic for it unless:
- in my procedural algorithms I put a trigger there and manage the voxels around it in code using it,
- I don't use voxels for those small gameplay things, like triggers and enemies in Landmark I guess, or
- I use very big voxels and consider each voxel a gameObject/entity/actor; I guess Minecraft kind of does this for redstone and similar stuff.
Could you clarify in a post how you think this stuff should be approached, or what you think is a good way to approach it for some mechanics? Say I have a farming game and want the user to dig, then put seeds, then water, and so on. With voxels it can become organic, but without limiting the user's modifications it would be impossible or very hard to code unless I scan the 3D voxel arrays many times and write the logic there (cellular automata included, for things like spreading fire and watering).
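For what it's worth, here is a minimal sketch of the cellular-automaton style of update mentioned above, run only over a chunk that was recently touched. The material codes, the two toy rules (fire spreads to adjacent grass, a watered seed sprouts) and the chunk size are all made up for illustration.

```cpp
#include <array>

enum class Mat : unsigned char { Air, Dirt, Grass, Seed, Water, Fire };

struct Chunk {
    static constexpr int N = 32;                   // cells per side
    std::array<Mat, N * N * N> cells{};

    Mat get(int x, int y, int z) const { return cells[(z * N + y) * N + x]; }
    void set(int x, int y, int z, Mat m) { cells[(z * N + y) * N + x] = m; }
};

// One simulation step over a single chunk: fire spreads to adjacent grass,
// and a seed next to water sprouts (here it simply turns into grass).
Chunk step(const Chunk& in) {
    Chunk out = in;
    const int D[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    for (int z = 0; z < Chunk::N; ++z)
        for (int y = 0; y < Chunk::N; ++y)
            for (int x = 0; x < Chunk::N; ++x) {
                Mat here = in.get(x, y, z);
                if (here != Mat::Grass && here != Mat::Seed) continue;
                bool nearFire = false, nearWater = false;
                for (auto& d : D) {
                    int nx = x + d[0], ny = y + d[1], nz = z + d[2];
                    if (nx < 0 || ny < 0 || nz < 0 ||
                        nx >= Chunk::N || ny >= Chunk::N || nz >= Chunk::N)
                        continue;                  // ignore chunk borders in this sketch
                    Mat n = in.get(nx, ny, nz);
                    nearFire |= (n == Mat::Fire);
                    nearWater |= (n == Mat::Water);
                }
                if (here == Mat::Grass && nearFire) out.set(x, y, z, Mat::Fire);
                if (here == Mat::Seed && nearWater) out.set(x, y, z, Mat::Grass);
            }
    return out;
}
```

The key point is that only chunks flagged by a recent modification need to be stepped, which keeps the cost bounded.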
I am talking about many things at once, maybe, but I hope you understand my concern; it is good food for thought.
All this said, I'm building a game with it and am amazed by it, but at the moment I'm focusing more on geometry than on big, beautiful worlds.
In principle, old tricks continue to apply the same. You can keep using mesh props like before. You get to decide what will be made of voxels (and part of the world) and what will be objects. In your example, you would have a voxel column and then a traditional prop on top of it.
It is when you add new mechanics, like destruction, harvesting, etc., that you face new problems. A game world that can be changed by players will potentially break the approaches that evolved for static worlds. Back to your example: the prop on top of the column needs to have physics simulated, so when the player releases the top of the column by cutting its base, the prop also falls.
While things like doors, triggers, etc. would not be part of the world, you would still need some additional ways to bind them to the voxel content. For instance, imagine we have a torch prop that is nailed to the top of a voxel column. When you cut the base of the column, the torch should remain attached to the falling column top.
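One possible shape for that binding, purely as a sketch: each prop remembers the voxel cell it is nailed to, and when the fragment discovery logic reports a detached piece, any prop anchored inside it is handed over to the new rigid body. PropHandle, FragmentBody and the attach callback are hypothetical, not Voxel Farm or UE4 types.

```cpp
#include <array>
#include <functional>
#include <set>
#include <vector>

using Cell = std::array<int, 3>;        // voxel coordinates

struct PropHandle { int id; };          // stand-in for an engine-side prop
struct FragmentBody { int id; };        // stand-in for the fragment's rigid body

struct PropAnchor {
    PropHandle prop;
    Cell cell;                          // the voxel the prop is "nailed" to
};

// Called when fragment discovery reports a detached piece of the world: any
// prop whose anchor cell belongs to the fragment now falls with it.
void reparentProps(const std::vector<PropAnchor>& anchors,
                   const std::vector<Cell>& fragmentCells,
                   FragmentBody body,
                   const std::function<void(PropHandle, FragmentBody)>& attachToBody) {
    std::set<Cell> fragment(fragmentCells.begin(), fragmentCells.end());
    for (const PropAnchor& a : anchors)
        if (fragment.count(a.cell))
            attachToBody(a.prop, body); // e.g. the torch rides the falling column top
}
```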
We are beginning to consider how to support these scenarios, and, as you have suggested, this would make a nice post.
Yup, correct. As I said, I guess the only approach tried until now is the Minecraft way of having one gameObject per voxel, which probably doesn't work unless you have giant voxels like they do. But once this starts to work, huge gameplay possibilities will open up, and I'm sure the only people in the game industry who can do it right at the moment are you guys.
What comes to my mind are voxel entities and scanners: after doing a modification (addition/deletion), you scan the surrounding cells to update the stats you need. This would be a per-cell operation, and no voxel modification could directly manipulate global/remote stuff, since that would make searches slow enough to be impossible to run on current machines.
So when you cut the column, first of all you calculate the size of the column and its bounds (we have that now when we turn it into a mesh), and you might also want to, for example, calculate fire spreading if the column hits the ground in a place where fire collides with grass. The hard question, other than how to do these things efficiently, is how to create object boundaries so that, for example, you can attach the fire-spreading logic to the top of the fire or a sphere around it. Doing it per voxel would mean O(N^3) operations for everything, which is impossible to run.
Maybe an algorithm that, after modifications, considers all neighboring voxels with the same material and, if the group is bigger than a certain size, attaches a shape and object logic based on their material. At modification time, if the group is sliced, it runs the calculation for the parts, and parts bigger than a certain size get the logic, but not the rest. In this way the stone column, no matter its shape and size, will have a single piece of logic attached. But then, to make it a separate object from the stuff below it, you have to put a different material there (it can be visually the same one) or define separator entities.
Yes, this particular line of development is very interesting to me as well. I wish we had more time to sink into this right now.
The topic in general is "virtual matter". I would like to simulate fire, moisture, mold, snow and dirt accumulation, among others. Once your algorithms see the world as tiny bits, there is so much you can accomplish.
Well, you are doing a great job, man! I hope the business goes well, some of our games become successful, and this journey of yours continues for a long time.
DeleteWonderful sharing, Thanks a lot for helping us. All your posts are amazing. keep sharing your words with us.
ReplyDeleteroof repairing
So today I have been wondering if the L-systems could be extended with another dimension to create a very in-depth prefab construction system for a survival game. Like, you could designate a wall and then have each and every brick added to the wall as you supply them.
Yes, that would work nicely. You would run the LSystem and collect the output instances without visualizing them right away. As the player supplies an equivalent for each instance, you could make the next instance of that type materialize.
When writing the prefab code, you would need to be careful to make sure instances appear in a plausible order, that is, brick walls are filled from the bottom up, etc.
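A small sketch of that flow, under the assumption that the LSystem run just hands back a flat list of instances: keep them in a pending list sorted into a plausible build order (lowest first) and materialize the next matching one when the player supplies a part. The Instance struct and the materialize callback are stand-ins, not the actual Voxel Farm API.

```cpp
#include <algorithm>
#include <deque>
#include <functional>
#include <string>
#include <vector>

struct Instance {
    std::string partType;    // e.g. "brick", "beam"
    float x, y, z;           // placement within the prefab
};

class PendingPrefab {
public:
    PendingPrefab(std::vector<Instance> instances,
                  std::function<void(const Instance&)> materialize)
        : pending_(instances.begin(), instances.end()),
          materialize_(std::move(materialize)) {
        // Plausible construction order: lowest pieces first, so walls fill
        // from the bottom up.
        std::sort(pending_.begin(), pending_.end(),
                  [](const Instance& a, const Instance& b) { return a.y < b.y; });
    }

    // The player hands in one part of the given type; place the next matching
    // instance, if any. Returns true if something was built.
    bool supply(const std::string& partType) {
        auto it = std::find_if(pending_.begin(), pending_.end(),
                               [&](const Instance& i) { return i.partType == partType; });
        if (it == pending_.end()) return false;
        materialize_(*it);
        pending_.erase(it);
        return true;
    }

private:
    std::deque<Instance> pending_;
    std::function<void(const Instance&)> materialize_;
};
```

A simple height sort is enough for walls; more elaborate prefabs could sort by structural dependency instead, so supports always appear before what rests on them.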