Mesh simplification is as old as polygon meshes, and even so it remains a hot topic. There are plenty of methods out there; picking the right one is the tricky part. It is not only about the quality of the resulting mesh, but also about how fast you can do it.
I had decided not to use progressive meshes at all and chose clipmaps instead. That meant I had to deliver each cell of the clipmap in a single shot. My plan was to use a very simplified mesh and keep most of the detail in a series of maps: normal map, material map, etc. In this post I will describe the method I used for mesh simplification.
Here you can see some results:
The voxel contouring phase was churning out huge meshes. A single cubic cell of 6 meters per side would easily take a few hundred thousand polygons. I needed to fit that into a budget of one or two thousand polygons.
Hitting this budget was the first problem. Removing so many faces from the mesh could severely alter its appearance, so the simplification had to introduce as little error as possible.
There are two main classic approaches to mesh simplification: Vertex Clustering and Greedy Serial simplification. I had used some vertex clustering methods before, but disliked their boxy artifacts.
One of the best methods for minimizing error is a greedy serial approach described by Hoppe in 1993. It uses quadric error metrics (QEM) to measure the error each collapse would introduce. The algorithm can be described like this: out of all the edges in the mesh, find the one that, when collapsed, introduces the least error; collapse it, then repeat until the desired polygon count is reached.
This method produces beautiful results, but it is very slow. It sequentially visits each edge in the mesh to discover which one to collapse. If your mesh has to be compressed from 100,000 triangles to 1,000, that's a lot of iterations for every collapse. It can be optimized so the errors are pre-computed in one initial phase and kept in a sorted list, with the smallest-error edge on top. As long as the list is properly sorted, you can take the edge at the top and collapse it. Still, every time an edge collapses, several other edges need their collapse error recomputed and their place in the sorted list updated. This comes at some cost.
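For reference, the optimized serial loop usually looks something like this sketch. It is not my actual code: Mesh, Edge and the helper functions are hypothetical placeholders standing in for whatever mesh representation and QEM evaluation you already have.

```cpp
// Sketch of the optimized serial greedy loop: a min-heap of collapse
// candidates, with stale entries skipped after their neighborhood changes.
// Mesh and the helper functions below are hypothetical placeholders.
#include <cstddef>
#include <queue>
#include <vector>

struct Edge { int v0, v1; };

struct Mesh;                                       // placeholder mesh type
float collapseError(const Mesh&, const Edge&);     // QEM cost of a collapse
void  collapse(Mesh&, const Edge&);                // perform the collapse
int   edgeVersion(const Mesh&, const Edge&);       // bumped when an edge changes
std::vector<Edge> allEdges(const Mesh&);
std::vector<Edge> edgesAround(const Mesh&, const Edge&);
std::size_t triangleCount(const Mesh&);

struct Candidate {
    Edge  edge;
    float error;
    int   version;                                 // version at evaluation time
    bool operator<(const Candidate& o) const { return error > o.error; } // min-heap
};

void simplifyGreedy(Mesh& mesh, std::size_t targetTriangles) {
    std::priority_queue<Candidate> heap;
    for (const Edge& e : allEdges(mesh))
        heap.push({e, collapseError(mesh, e), edgeVersion(mesh, e)});

    while (triangleCount(mesh) > targetTriangles && !heap.empty()) {
        Candidate best = heap.top();
        heap.pop();
        if (best.version != edgeVersion(mesh, best.edge))
            continue;                              // stale entry, skip it
        collapse(mesh, best.edge);
        // The edges around the collapse changed: recompute and re-push them.
        for (const Edge& e : edgesAround(mesh, best.edge))
            heap.push({e, collapseError(mesh, e), edgeVersion(mesh, e)});
    }
}
```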
Then I found this paper which opened my eyes to a technique called Multiple Choice Algorithms. These guys apply it to mesh simplification, but it is a very general and powerful way to solve a big family of problems. Load balancing and network routing are some typical applications.
This is how the method is usually explained: imagine you have 100 large, odd-shaped containers and a tube that continuously spits out ping-pong balls. Your task is to fill the containers evenly with balls. The catch is that someone else will remove balls behind your back, so the only way to know how many balls are in any given container is to count them. You cannot just keep track of how many balls you have put in, since some may have been removed already.
The naive approach would be to count the balls in every container and then place the new ball in the one with the fewest. That would do the job, but it is an awful lot of work and it would take a lot of time.
Now, instead of that, imagine you just pick 5 random containers, count the balls in those, and place the new ball in whichever of the 5 has the fewest. Would it work in the long run? If it did, it would be great: you have reduced the number of containers to count from 100 to just 5.
Whether this really works depends on the total number of balls. For small numbers it won't do well, but once the balls start to pile up the randomness evens out, and so do the containers.
Applied to mesh simplification, the method means you pick only a few edges randomly, let's say 10, and you collapse the one introducing the smallest error.
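In code the inner step is tiny. Here is a minimal sketch, reusing the placeholder helpers from the previous snippet; the candidate count of 10 and the error threshold are just illustrative values.

```cpp
// One Multiple Choice step: look at a handful of random edges and collapse
// the best one. Errors are computed on the fly and thrown away; no heap needed.
// randomEdge() is a hypothetical helper returning a uniformly random edge.
#include <limits>
#include <random>

Edge randomEdge(const Mesh&, std::mt19937&);        // placeholder helper

bool collapseOneEdge(Mesh& mesh, std::mt19937& rng,
                     int candidates = 10, float maxError = 0.01f) {
    Edge  bestEdge{};
    float bestError = std::numeric_limits<float>::max();
    for (int i = 0; i < candidates; ++i) {
        Edge  e   = randomEdge(mesh, rng);
        float err = collapseError(mesh, e);
        if (err < bestError) { bestError = err; bestEdge = e; }
    }
    if (bestError > maxError)       // maximum error test, discussed further in
        return false;               // the comments below
    collapse(mesh, bestEdge);
    return true;
}
```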
Does this create noticeable differences compared to the sequential greedy method? Again, it is a matter of how many times you perform the operation. These are huge meshes and most edges in the original mesh will be gone; that is the equivalent of having a large number of balls.
You can actually estimate the chances of it making an error. A compression from 100,000 triangles to 1,000 means only 1% of the edges will remain. An error happens when the best edge collapse in the set of 10 random edges belongs to the 1% that should never collapse. That means the other nine candidates are also in the 1%, otherwise one of them would have been picked for the collapse. So the probability of picking a wrong edge is 0.01 to the tenth power: 0.00000000000000000001. This is what in engineering we like to call never.
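If you want to play with other budgets or candidate counts, the estimate is just the surviving fraction raised to the size of the candidate set:

```cpp
// Chance that a whole candidate set falls inside the fraction of edges that
// should survive, i.e. that the step collapses an edge it should have kept.
#include <cmath>
#include <cstdio>

int main() {
    const double survivingFraction = 0.01;  // 1,000 out of 100,000 triangles
    const int    candidates        = 10;    // edges per random set
    std::printf("P(bad collapse) = %g\n",
                std::pow(survivingFraction, candidates));  // prints 1e-20
    return 0;
}
```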
I implemented this method and the speedup was considerable. It also made the whole algorithm a lot simpler. In the serial greedy method you need to do a lot of housekeeping to avoid recomputing edge collapse errors, and when a collapse actually happens you need to recompute the error metric for the adjacent edges. In the multiple choice method it is OK to recompute the error metrics for the small set of edges every time; the optimization structures of the serial greedy approach are not needed anymore.
What is truly remarkable about the multiple choice optimization is that it lends itself very well to parallelization. You can have thousands of simultaneous threads, each one looking at a different bucket of random candidates. Each thread would output the result of the collapse operation for its bucket, and the resulting mesh would be the input for the next iteration. A single iteration could collapse thousands of edges in one shot.
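Here is a rough CPU-side sketch of one such iteration, again using the placeholder helpers from above. It is only meant to show how independent the buckets are; it leaves out the conflict check a real parallel pass needs, since two winning edges must not share vertices.

```cpp
// One parallel Multiple Choice iteration: each task scans its own bucket of
// random candidates and proposes the best collapse it found. Applying the
// proposals still requires filtering out conflicting winners, omitted here.
#include <future>
#include <limits>
#include <random>
#include <vector>

struct Proposal { Edge edge; float error; bool valid; };

std::vector<Proposal> proposeCollapses(const Mesh& mesh,
                                       int buckets, int candidatesPerBucket) {
    std::vector<std::future<Proposal>> tasks;
    for (int b = 0; b < buckets; ++b) {
        tasks.push_back(std::async(std::launch::async,
                                   [&mesh, b, candidatesPerBucket] {
            std::mt19937 rng(b);                   // independent stream per bucket
            Proposal best{Edge{}, std::numeric_limits<float>::max(), false};
            for (int i = 0; i < candidatesPerBucket; ++i) {
                Edge  e   = randomEdge(mesh, rng);
                float err = collapseError(mesh, e);
                if (err < best.error) best = {e, err, true};
            }
            return best;
        }));
    }
    std::vector<Proposal> proposals;
    for (auto& t : tasks) proposals.push_back(t.get());
    return proposals;  // caller drops conflicting proposals, then collapses the rest
}
```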
Remember, this can be used for many other things besides meshes. Like the scan algorithm, it is one tool you may want to keep in your set for the future. As for myself, I'll keep walking around with this hammer in my hand; I'll let you know if I find another nail.
Just a question, but have you tried some open source mesh simplification algorithms like the one in MeshLab?
I remember looking at a few different libraries. I am not very sure about MeshLab, but I think it uses a greedy serial QEM implementation. I looked at qslim, OpenMesh, MeshLib and several others. They were either too slow or bloated with feature creep. But the main issue is that even the good ones will impose a mesh structure on you.
The Multiple Choice simplification method is only a couple hundred lines of code, so I think it is better to write it yourself and get to know it well. It will probably take you one day of work and it will pay off later, since you will be able to control it and fine-tune it yourself.
Great article!
About the chances of errors:
In the last iteration there are 1001 edges remaining and the algorithm picks 10 out of those.
The chances of correctly picking the worst edge are 10 in 1001, or around 1%, which leaves around a 99% chance of introducing an error in that step of the simplification process.
What about switching from the multiple choice heuristic back to the original algorithm once the chances of making an error get too high? (Tens of thousands of edges remaining, perhaps?)
@Worthstream: I actually did this for a while: a first pass using MCA, then a switch to greedy once the number of remaining edges was small enough. It did not make a big difference error-wise, so I ended up removing the greedy phase and relaxing the final polygon budget a little bit.
What also happens is that very often the 1% of remaining edges would never collapse anyway because they fail a maximum error test. This maximum error is an input parameter to the method, meaning that you are not only interested in the triangle count; some minimum quality has to be preserved. In this case, as you approach the finish line you start getting sets of edges that produce no collapse at all, because they all fail the maximum error test. When this happens too often, the algorithm knows it is time to stop.
Anyway, in the MCA paper I linked they show error metrics comparing the two methods; it is not bad at all.
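For what it is worth, that stopping rule fits naturally in the driving loop of the sketch from the post; something along these lines, where the cutoff of 100 failed sets in a row is just an illustrative value:

```cpp
// Driving loop for the Multiple Choice sketch above: stop when the polygon
// budget is met, or when too many candidate sets in a row produce no collapse
// because they all fail the maximum error test.
#include <cstddef>
#include <random>

void simplifyMCA(Mesh& mesh, std::size_t targetTriangles) {
    std::mt19937 rng(12345);
    int failedInARow = 0;
    while (triangleCount(mesh) > targetTriangles && failedInARow < 100) {
        if (collapseOneEdge(mesh, rng))
            failedInARow = 0;
        else
            ++failedInARow;
    }
}
```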
If you collapse an edge you don't have to re-evaluate *all* the other edges in your mesh, because the only changed edges are the newly introduced ones. So why not just use a heap (ordered by error) and add the new edges into the heap (which should be O(log N))? Repeat, and you have a nice O(N log N) algorithm.
@Julien: You don't need to re-evaluate all of them, only the ones that were affected by the previous collapse. Then you would need to re-sort those, because their error metrics would have changed. This is what the optimized serial greedy method does. Still, the MCA approach is twice as fast.
Again: You don't need to sort all the edges. You can just use a heap data structure for that. A description of the complete algorithm (using a heap) can be found here: http://www.cs.mtu.edu/~shene/COURSES/cs3621/SLIDES/Simplification.pdf
@Julien: I mean you need to re-sort the edges that have changed. You can have a heap with the lowest-error edge on top, but as soon as you collapse this edge two vertices will change their positions. The remaining edges sharing these two vertices change too; it can easily be from 5 to 15 affected edges. That means the error metric that was initially computed for them is not valid anymore. As soon as you recompute their errors, you need to see where they go in the heap. After all, one of the changed edges may now become the best next choice and hence the top of the heap.
This step is actually in the slides you point to (slide 48):
"update the mesh and the keys"
Updating the keys means recomputing the errors for the edges that changed and seeing where they end up in the heap.
Really great article! Lots of the stuff on here is too high level for my skill set, but clever posts like this and your "Political Landscape" post I really like!
Looks like great work you are doing here.
This is something I want to learn myself. How are you creating it? (What programming language are you using, and are the procedural buildings' parts first made in a 3D program and then mixed together procedurally, etc.?)
@Sam: Thanks.
All the code is in C++ using OpenCL and OpenGL. I create the basic architecture blocks in 3ds max. It is all running on Windows.
If I understand correctly, you still initialize the cost for each vertex when you initialize this subroutine. I wonder if you couldn't guide the routine a bit at this step, by generating a smoothed surface representing the level of detail needed in an area (the bunny's nose, to use an example from the paper).
You could then use the curve to weight the probabilities as you randomly pick your points, favoring areas where mesh simplification is more likely to be beneficial instead of using a uniform random function. It wouldn't require any significant additional overhead (it's just an extra couple of operations every time you generate a random number, and the curve generation can be done while you are initializing the mesh), and could make it that much more efficient...
@Abnaxis: Well, with the MCA approach you don't need to initialize the costs for each edge anymore. Costs are computed every time for all the edges in the set and immediately discarded after one edge is selected for collapse.
It may seem like an overhead, but it is not. When costs are precomputed, you still need to update some of them when there is an edge collapse. A single collapse usually alters from 5 to 15 neighboring edges, depending on mesh connectivity. That means each collapse requires you to compute an average of 10 edge errors, which is roughly the same amount of work the MCA does for each set.
Your idea of guiding the random function is intriguing. However, it may come at some cost; the blind-luck approach could still be faster. There is no way to tell without measuring it.
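If anyone wants to try it, the uniform pick could be swapped for a weighted one along these lines; the per-edge weights are assumed to come from whatever detail or curvature measure you compute at mesh generation time:

```cpp
// Weighted candidate pick: edges in flat, low-detail regions get larger
// weights, so they are proposed for collapse more often. The weights are
// assumed to come from a detail/curvature estimate computed elsewhere.
#include <random>
#include <vector>

int pickWeightedEdgeIndex(const std::vector<float>& edgeWeights,
                          std::mt19937& rng) {
    std::discrete_distribution<int> dist(edgeWeights.begin(), edgeWeights.end());
    return dist(rng);  // index of the chosen edge, biased by weight
}
```

Building the distribution is linear in the number of edges, so you would build it once and patch it as edges disappear; that bookkeeping is exactly the cost that would need measuring.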
Derp. I should've known the thing wasn't initialized with costs. That's the point...
Nonetheless, the initial mesh has to be generated at some point, and since the probability curve I am envisioning is based on shape, not on the mesh (areas with sharp changes need more detail), and since it's going to take some integration over the surface, mesh generation seems like a good step to do it in.
I wish I had more time to spend looking at this...
Quick question about the algorithm: suppose you're simplifying a structure with ~100,000 edges to 1%, 1,000 edges. Wouldn't the rate of error go up as there are fewer edges remaining? It's a bit beyond my level, just interested in understanding how that works.
@Voodoo: The chance of error goes up dramatically, to nearly 100%. But at that point you are almost done with the simplification. Check an earlier comment about this issue.