If you are into computer graphics, massive online games, digital botany and architecture, voxels and parallel computing, you may want to stick around and check for updates.
This is a screen capture of the progress I have made so far:
It is still far from where I want to be, but I'm happy with the results seen here. You can find more videos on my YouTube Channel page: http://www.youtube.com/user/MCeperoG
A few years ago I started wondering how far you can go into creating a fully procedural world. This is a recurrent idea. It has been implemented with some success many times before. Still, I was not satisfied with what I could find. The procedural worlds out there seemed too repetitive, and their generation patterns soon became evident.
The real world itself is full of ever-repeating patterns, but there we accept them as natural. I had the feeling that the techniques were right and that the problem was just the degree to which they were applied. It seemed it would take many layers of complex patterns to make a single believable world, and the current implementations were simply not rich enough to trick the eye.
My goal was to create a rather large virtual environment, around 100 square kilometers, and allow people to connect and explore it over the Internet. Maybe play a game on it. It would cover not only different types of terrain, but also vegetation and architecture. And I wanted all that generated with a minimum of artistic input, but still be visually appealing.
So I started by devising a framework that could support the amount of detail I felt was necessary. Geometric detail could go as fine as 10 centimeters (~4 inches), and the texturing maybe a few texels per square centimeter. The texturing of the world would be unique, meaning that each texture pixel would be mapped exclusively to one place in the world's geometry.
Considering the world size, this would translate into a few Terabytes of geometry and textures. Soon it became evident that I would need to stream the world's representation to the viewers as they moved. The sheer size of the world data made it prohibitive to pack it as a one-time download.
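To give a sense of scale, here is a rough back-of-the-envelope estimate for the texture data alone, assuming a few texels per square centimeter and four bytes per texel; these figures are picked for illustration only:

# Rough storage estimate for unique texturing over 100 square kilometers.
# Texel density and bytes per texel are illustrative assumptions.
AREA_CM2 = 100 * 10**10          # 1 km^2 = 10^10 cm^2
TEXELS_PER_CM2 = 4               # assumed: about 2 texels per linear centimeter
BYTES_PER_TEXEL = 4              # assumed: uncompressed RGBA

texels = AREA_CM2 * TEXELS_PER_CM2
raw_bytes = texels * BYTES_PER_TEXEL
print("Raw texture data: %.1f TB" % (raw_bytes / 1e12))             # 16.0 TB
print("With ~10:1 compression: %.1f TB" % (raw_bytes / 10 / 1e12))  # 1.6 TB

Even with heavy compression the texture data alone stays in the terabyte range, and that is before counting geometry.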
All these requirements shaped the solution I chose. In my next post I will introduce the basic building block that made it all possible.
Your work is amazing :-) I have to ask, are you familiar with the game "Minecraft"? I wonder if what you are doing is similar in any way to how "notch" (developer of Minecraft) is creating his algorithmic worlds, described briefly here: http://news.ycombinator.com/item?id=1733157
ReplyDelete"I'm not sure how to explain it without getting technical.. The complicated high level technical version is: First I generate a linearly interpolated 3d perlin noise offset along the y axis. I fill that in so that everything except the top x blocks is stone, then I do a second pass to add features like grass, trees, gravel, sand, caves and flowers. The world is generated in chunks of 16x16x128 blocks, based of pseudorandom seeds that are a mix of the level base seed and the chunk location in the world. This ensures that you always get the same terrain in an area regardless of what direction you traveled there from."
Yes, it is similar to some extent. We both use Perlin noise to generate some features. I will cover my use of Perlin noise in a future post.
Still, the two approaches are different. Minecraft's voxels are huge; that is part of the charm of the game. Since the detail level is so low, Minecraft world alterations can be seen in real time. You work on the world's voxels right there.
My voxels are a tiny fraction of the size of a Minecraft voxel, so for the same volume I need to process a lot more information. My approach requires baking voxels into geometry on a server and then streaming the results to the client. This cannot be made real-time right now, unless I invest in better hardware, but eventually the client's processing power will be there.
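To illustrate what I mean by baking and streaming, here is a minimal sketch; the polygonization is faked and the data types are placeholders, not my actual pipeline:

import zlib

baked_cache = {}

def polygonize(cell_id):
    # Placeholder for real surface extraction (something in the
    # marching-cubes family); here the "mesh" is just a fake byte string.
    return ("mesh-for-%s" % (cell_id,)).encode() * 100

def bake_cell(cell_id):
    # Server side: bake a voxel cell into compressed mesh bytes, once.
    if cell_id not in baked_cache:
        baked_cache[cell_id] = zlib.compress(polygonize(cell_id))
    return baked_cache[cell_id]

def stream_to_viewer(visible_cells, send):
    # As the viewer moves, only the cells entering its range are sent.
    for cell_id in visible_cells:
        send(cell_id, bake_cell(cell_id))

stream_to_viewer([(0, 0, 0), (1, 0, 0)],
                 lambda cid, data: print(cid, len(data), "bytes"))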
Yeah, I can see from some of the more recent CUDA/OpenCL voxel demos (Gigavoxels, etc) that the level of detail (and/or scale) of what can be done in real-time isn't quite there yet. Your "baking" approach does sound interesting and the visual results are very impressive; I'm looking forward to your future blog posts :-) IMHO, this is the future of game development.
Excellent! Wonderful!!!
Hi, the part where you say:
ReplyDelete"The texturing of the world would be unique, meaning that each texture pixel would be mapped exclusively to one place in the world's geometry.
Considering the world size, this would translate into a few Terabytes of geometry and textures. Soon it became evident that I would need to stream the world's representation to the viewers as they moved. The sheer size of the world data made it prohibitive to pack it as a one-time download."
Isn't this the basic premise of megatexturing such as the feature in RAGE?
Yes, it is the same idea. Note that Rage's world is much more limited in size, and not every portion of the world is stored at the same detail, so there are a lot of invisible walls. All this was needed to limit the amount of data in the game.