
Tuesday, May 2, 2017

Plugin Status

I'm happy to see that our team of excellent developers chez Voxel Farm has made rapid, substantial improvements to our plugins for Unreal Engine 4 and Unity 5.

The UE4 plugin is now a proper UE4 plugin, no longer just an integration example. This opened up a whole new set of possibilities. In a very short time, we were able to put together this video from different scenes and interaction modes within UE4:

The new Voxel Farm UI makes it quite simple to add Voxel Farm to any existing or new project. There is a button that does this for you; you only need to point it at the target project:

The plugin already offers blueprint access for typical tasks like block editing, voxelization and physics. The threading model is much better, resulting in a smoother experience.

There is a new demo for UE4, now including the plugin:

If you want to get a feel for how the plugin is used in UE4, these topics will help.

If you are thinking Unity gets no love, you would be wrong. Most of our recent efforts went into the Unity plugin. I will cover this in my next post.


  1. Wait, wait, wait...

    At 2:20 you destroy the floor, and the wireframe reveals that the explosions have a spherical topology, rather than being approximated to the low-resolution grid voxels are often assumed to use.

    Please explain what kind of sorcery this is!

    Also, I wanted to ask: with all this complexity of having to transform voxels to polygons at the end of the rendering pipeline, wouldn't it be possible to create a polygon engine that works like voxels? I mean, isn't the overhead of transforming voxels to meshes as expensive as making a polygon engine that behaves like voxels?

    1. Excellent questions.

      The voxels here capture a surface. The surface can have any shape, and a sphere is a particular case. These spheres are no different from the terrain beside them, which is also smooth. The surfaces could also be non-smooth, like you see in the blocky destruction demo.
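A toy sketch of this idea (not Voxel Farm's actual data layout; the `Grid` type and its members are made up for illustration): each voxel sample stores a signed distance to the surface, and the zero crossing between two samples pins the surface at sub-voxel precision, which is why a sphere comes out smooth instead of snapping to the grid.

```cpp
#include <array>
#include <cmath>

// Hypothetical sketch: each sample stores the signed distance to the nearest
// surface. A sphere is just one possible isosurface; the zero crossing
// between two samples locates the surface with sub-voxel precision.
struct Grid {
    static constexpr int N = 8;
    std::array<double, N> d{};           // one row of samples through the sphere

    // Sample a sphere of radius r centered at cx, along the x axis.
    void sampleSphere(double cx, double r) {
        for (int i = 0; i < N; ++i)
            d[i] = std::abs(i - cx) - r;  // signed distance along this row
    }

    // Locate the surface between samples i and i+1 by linear interpolation.
    // This is why the mesh is smooth: the crossing is not snapped to the grid.
    double crossing(int i) const {
        return i + d[i] / (d[i] - d[i + 1]);
    }
};
```

For a sphere centered at 3.5 with radius 2, the crossing between samples 1 and 2 lands exactly at 1.5, the true surface position, even though the grid only has integer resolution.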

      Voxel to polygon is much faster than polygon to voxel, so if you want to operate with voxels, you are better off starting with voxels. If you stay with polygons, you would be doing constructive solid geometry, which is quite tricky to do at interactive rates once the meshes get complex.

      And there is the other issue: voxels are by nature amenable to parallelism. This is because one voxel is not connected to its neighbors. Pretty much like pixels, they can be isolated into chunks that can be dispatched for parallel processing. With meshes it is trickier, because each element in a mesh is connected to other elements, so all elements are potentially relevant to the operation you want to perform.
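As a toy illustration of this chunking argument (generic C++, nothing Voxel Farm-specific; `fillDensity` is a made-up operation), a volume flattened into an array can be split into independent ranges, each processed by its own thread with no locking, because each voxel's result depends only on data inside its own chunk:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical sketch of why voxels parallelize well: the volume splits into
// independent chunks, each handled by its own thread with no synchronization,
// because no voxel needs data from outside its chunk for this operation.
void fillDensity(std::vector<uint8_t>& voxels, std::size_t chunkSize) {
    std::vector<std::thread> workers;
    for (std::size_t start = 0; start < voxels.size(); start += chunkSize) {
        std::size_t end = std::min(start + chunkSize, voxels.size());
        workers.emplace_back([&voxels, start, end] {
            for (std::size_t i = start; i < end; ++i)
                voxels[i] = static_cast<uint8_t>(i % 256); // purely local work
        });
    }
    for (auto& w : workers) w.join();
}
```

The ranges never overlap, so the threads need no locks; a mesh operation rarely decomposes this cleanly, because connectivity makes distant elements relevant.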

    2. That was pretty clear.

      Thanks for your time!

    3. Miguel, the voxel/polygon transformation question is one that had been lurking in my mind ever since your video where you carve shapes out of a pillar on a steep mountain and the physics just does the right thing. I confess I'm not stellar at math/geometry in general and visualization engines in particular, so my question may be redundant. If so, I apologize and thank you for reading it.

      My question: When a voxel-modeled portion of the world becomes disconnected from its origin and starts tumbling, does its coordinate system rotate with it, or is its shape transformed within the global coordinate system? If the coordinate system rotates with each disconnected model, what happens when models come in contact with each other? Are they then re-mapped to a shared coordinate system? Are polygons only generated for visualization or do they have other purposes?

      Thank you for all your educational posts and videos. You've been reminding me why I got into tech in the first place. :)

    4. Polygons are used for more than rendering. Physics, for instance, is simulated using polygons. The operation that discovers detached fragments also runs over polygons.

      The detached fragment is a mesh, and it has its own coordinate system. Right when it becomes detached, it matches the world's system. Then, as it moves and rotates, its system ends up having different values.

      The examples we have shown so far keep fragments as meshes forever. They do not come back into voxels.

      If a fragment is large enough, it could be voxelized back when it stops moving. At this point it would lose its coordinate system and the mesh would become part of a larger mesh.
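A minimal sketch of the coordinate-system life cycle described above (hypothetical types, 2D for brevity; this is not the engine's actual representation): the fragment's transform is the identity at the moment of detachment, so local and world coordinates coincide, and it diverges as the fragment moves and rotates:

```cpp
#include <cmath>

// Hypothetical sketch: a detached fragment keeps its own transform, which is
// the identity at the moment of detachment (so local == world), and then
// accumulates rotation and translation as the fragment tumbles.
struct Fragment {
    double angle = 0.0;            // rotation about Z, radians
    double tx = 0.0, ty = 0.0;     // translation

    // Map a point from fragment-local space into world space.
    void toWorld(double lx, double ly, double& wx, double& wy) const {
        wx = std::cos(angle) * lx - std::sin(angle) * ly + tx;
        wy = std::sin(angle) * lx + std::cos(angle) * ly + ty;
    }
};
```

Voxelizing the fragment back into the world, as described above, amounts to baking this transform into the samples and then discarding it.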

    5. That is awesome. I'm a fan of hybrid systems like that. It's like transitioning between Cartesian and polar coordinates when the performance and accuracy trade-offs make it worth it. Thanks for the clarification!

  2. This comment has been removed by the author.

  3. Voxel Farm is exciting. I've read a lot of your blog, as I said on Twitter :)

    I have a few requests:
    - You had a post a while back showing an app kind of like a coloring book, but with voxels. Can you go into detail on how you made that? Lots of meshes stamped with air and later filled in with different materials?

    - Can you please make the programmability of the systems better, so we have something like a GameObject concept for a group of voxels? For example, we could say each voxel instance is a GameObject/Actor, and then in game we could instantiate and destroy them, and detect them using raycasting. The only other way that comes to mind is coding the GameObject as a function using AddBlocks and our own code (creating the instance procedurally), then scanning the voxels and inferring from the material whether it is the instance we intended. That way I could code the behavior for when the player's axe hits the tree, or the fireplace, and so on.

    Also, can you describe how one could implement things like Minecraft's logic gates and electric circuits in an efficient way using Voxel Farm, since we have a lot more blocks, I guess?

    1. The coloring effect was the combination of two tricks. First, the target mesh is voxelized with air. This creates no visible content, but the voxel vectors, which define the surface, are set anyway. Then, using the paint tool, the materials are flipped to a non-air material. Since the paint tool only sets the voxel material, leaving the surface untouched, the resulting effect is as if you were coloring the model.
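A toy model of these two tricks (the `Voxel` struct and `paint` function here are made up for illustration, not the SDK's API): voxelizing with air records the surface vector but renders nothing; painting later flips only the material, revealing the pre-recorded surface:

```cpp
// Hypothetical sketch of the coloring trick: each voxel stores both a surface
// vector (where the surface cuts the cell) and a material id. Voxelizing with
// AIR writes the vectors but stays invisible; "painting" later flips only the
// material, so the pre-recorded surface appears, as if colored in.
constexpr int AIR = 0;

struct Voxel {
    float surface[3] = {0, 0, 0}; // surface vector, set at voxelization time
    int material = AIR;           // AIR means nothing is rendered
};

// Paint tool: change the material, leave the surface untouched.
void paint(Voxel& v, int material) {
    v.material = material;
}

bool isVisible(const Voxel& v) { return v.material != AIR; }
```

Because `paint` never touches `surface`, the shape revealed by painting is exactly the shape laid down by the invisible air voxelization.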

      The systems at your disposal are designed to overcome serious technical challenges, for instance how to have multi-resolution views of very large objects like terrains. If you want to use the same system for small objects, it is not a matter of extending the programmability of the system. Often this requires a different system, with its own new programmability rules. So far we have given you a system for elephants (let's say). If you want to make an eagle with this same system, it is not just about the interface. We are planning systems like this so you can have voxel creatures and vehicles, but they will be something different from the objects you currently see.

      For dynamic systems like Minecraft's redstone, the use of smaller voxels increases the challenge, but in general these are 1D problems, so it does not get so bad. Please take a look at this topic in our forums:
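To see why such circuits are essentially 1D, here is a toy propagation pass (Minecraft-style power decay, nothing Voxel Farm-specific): the cost grows with the number of cells in the wire, not with the full volume of the world around it:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch of wire-like logic as a 1D problem: a circuit is a
// chain of cells, and one sweep propagates power down the chain. Power decays
// by one per cell, as in Minecraft-style redstone.
std::vector<int> propagate(std::vector<int> wire) {
    for (std::size_t i = 1; i < wire.size(); ++i)
        if (wire[i - 1] > 0)
            wire[i] = std::max(wire[i], wire[i - 1] - 1);
    return wire;
}
```

A single pass over the chain is linear in wire length, which is why even small voxels keep this tractable: only the cells that form circuits ever need updating.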

    2. Thanks for the response, makes sense. I read the post and got what you mean.

      Are the painting tools available in the SDK (in general, all the voxel editing tools), or do we have to rebuild them in-game?

      Last question: if I want to have access to a group of voxels at runtime, do I have to create them with AddBlock/StampMesh, or is getting them from the generator fine too? What will happen if some voxels in the area have been edited by a user, while others are a result of procedural generation?

      I'm not sure if asking here is fine or I should ask in the forum? Let me know if I should keep my mouth shut here. Seeing Voxel Farm opened up a lot of possibilities in my mind. I'm making a server backend using (which I recommend taking a look at, because you might prefer it for your farm cloud :) ), also worked on networking middleware at and am now going indie. I want to build something with Voxel Farm soon and just need to know my limitations. A painting app for kids might be the first thing, especially if I can turn the end result of my stamped meshes and paints into a mesh and let the kid ride the car he made, or...

      Ah, and yes, I loved your discussion of distributed and concurrent software development. It is interesting to see one person being fluent in multiple areas.

    3. All basic tools in VoxelStudio map to calls into the clipmapview object. This covers paint, smooth, grow and "forget".

      By accessing voxels at runtime, do you mean writing voxels or just reading them?

      We have not tried Orleans, but it comes up frequently among those who want to create distributed Voxel Farm environments. Orleans' grains can take care of your user and other application state, but you still need a distributed repository for the data, which is something Orleans won't do for you. Also, keep in mind an Orleans grain executes serially to provide a lock-free programming model. Voxel Farm assumes you want to run tasks in parallel, because of the large scope of each task (for instance, producing a scene), so mapping Voxel Farm tasks to the grain model will require some clever new paradigms. If you have any ideas on this, I would love to discuss them with you.

      Asking questions here is perfectly fine, although you can expect a more detailed response if you post in the forums, and it will help others too.

    4. By accessing voxels I meant reading them. Your colleagues told me I should get the procedurally generated ones from the generators and user edits from GetVoxelData.

      About Orleans: nice, I didn't know it was getting that popular :) Can I ask what tech you use for the farm cloud? Maybe you could post about it. You mentioned DynamoDB and some reactive technology for the data flow of edits, IIRC, in an old post.

      On Orleans: for user modifications, one grain per user is fine for edits. For generation, one should either divide the task among multiple grains (and by multiple I mean many), or simply bypass the actor model and the Orleans scheduler and use the TPL itself. Using Task.Run will run your tasks on the .NET thread pool and task scheduler, so one can have one grain per chunk/cell, and these grains can then run their work in parallel as well. Newly in 1.5, they've added the ability to place grains on machines using a custom placement strategy, so you can code it in a way that distributes the grains that own cells evenly across all machines in the cluster. Does that seem good? Ah, and for the DB, something like Azure Cosmos DB or DynamoDB can be leveraged. I used Riak back then at MuchDifferent, which was modeled after DynamoDB's paper, and Couchbase as well. Both have pretty good performance. With Orleans keeping hot data in memory, however, the in-memory capabilities of Couchbase are not that important (if the software is designed well).

      Ah, and can I ask what you think of Orleans and similar technologies in general (actor frameworks, I mean)? They seem a pretty good tool for the tasks they are designed for. For heavily parallel code that needs to modify a huge amount of data, though, probably even Task.Parallel and friends will not be able to compete with raw C++, if you can scale it to multiple machines using ZMQ or something else.