For my next “white paper” post on the texture processing system I developed, I want to talk about the random texture decal propagation script that is an integral part of that system. On the most basic level, the script generates a random point in UV space and hit tests it to see if that point falls within a UV shell boundary of the currently selected object. If the point is within a shell boundary, a texture decal can be placed at that spot by moving the decal via its place2dTexture node. Now, simply repeat that process and connect all the decals with a layered texture node, and you have a texture stack of randomly placed decals. That stack can then be layered on top of the base textures if you want something like a bunch of stickers, or used to modulate other textures, like adding random scratches across a reflection/specularity map. The script is more complicated than this, but that should give you a general idea of what is going on before I talk about each part in more detail.
Generating a random point is easy; the hard part is doing the UV hit testing. First things first, what does UV hit testing entail? The idea is to use a method called polygonal ray cast testing, which lets you see if a point lies inside a polygon of three or more points. The method walks around the boundary of the polygon, casting a horizontal ray from the point being tested along that point’s y value. If the ray’s y value is above the highest endpoint of a line segment or below its lowest endpoint, that segment is ignored. If, however, the y value falls between the segment’s two endpoints, a second test is done to see if the point is to the left of where the ray crosses the line segment joining the two points. If it is, a counter is increased by one. After every line segment has been tested, the counter is checked: if it is even (including zero), the point is outside the polygon; if it is odd, the point is inside. The intuition is this: a ray cast from a point outside the polygon either crosses no segments, or enters and exits the polygon in pairs, crossing an even number of them. A ray cast from a point inside the polygon crosses an odd number of segments, since its first crossing exits the polygon without a matching entry.
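To make the counting concrete, here is a minimal standalone sketch of the even/odd ray cast test described above, written in plain Python rather than inside Maya. The function name and data layout are my own for illustration; `polygon` is an ordered list of (u, v) boundary points.

```python
def point_in_polygon(point, polygon):
    """Even/odd ray cast test: cast a horizontal ray from `point` and
    count how many boundary segments it crosses. An odd count means the
    point is inside the polygon."""
    u, v = point
    inside = False
    j = len(polygon) - 1  # previous vertex index; wraps the loop closed
    for i in range(len(polygon)):
        ui, vi = polygon[i]
        uj, vj = polygon[j]
        # Only segments whose endpoints straddle the ray's v value can be
        # crossed. The mixed strict/non-strict comparison also skips
        # perfectly horizontal segments, one of the special cases noted
        # later in the post.
        if (vi > v) != (vj > v):
            # u coordinate where the segment crosses the ray's height.
            u_cross = uj + (v - vj) * (ui - uj) / (vi - vj)
            if u < u_cross:  # crossing lies to the right -> point is left of it
                inside = not inside  # toggling parity == counting odd/even
        j = i
    return inside
```

For example, testing the center of a unit square returns inside, while a point past its right edge returns outside.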
Here are a few sites that go over the process with some pictures:
http://kazaev.com/2012/03/08/algorithms-c-point-in-polygon-and-ray-casting/
http://www.alecjacobson.com/weblog/?p=1260

-Note: There are other methods of doing polygonal hit testing, but this one is easy to conceptualize and implement, so it was the one I decided to use. Also, there are special cases for perfectly horizontal line segments that the script accounts for, but that I won’t go over here.
Now, what does all that have to do with UV hit testing? Well, if you take just the boundary edges of a UV shell, they form a closed-loop polygon that can be tested with the above method! The tricky part is isolating the boundary edges/UVs and organizing them in such a way that you walk them in order. I used a series of selections and selection conversions to isolate just the boundary edges. It is important to do this in series, or you will get junk edges that will mess up the testing process. You can view the process in the source code. Once you know what the boundary edges are, you can grab the boundary UVs, which are used for the ray cast testing just like the polygonal points above. The problem at this point is figuring out how to walk the UVs. If you do not walk them in the correct order, lines will be “drawn” across the UV shell, which will give you incorrect intercept counts (an extra line will cause points inside to count as outside and vice versa). The situation is similar to a connect-the-dots image with no numbers (UV ids are arbitrary and not usable for this process), but with a cheat sheet of the finished image next to it. It is easy for a person to walk, but much trickier for a computer to figure out.
In order to solve that problem, I developed an algorithm that does the best it can to figure out where to go next. The algorithm starts at a given UV that appears on the list of UVs making up a UV shell boundary. From there, the first step is to grab every UV adjacent to the current UV by converting it to its boundary edges and then back to UVs. Then the filtering begins. The first stage makes sure the new list of possible destination UVs are all in the current shell, then removes both the current UV and the last UV to be tested. The second stage is designed to remove UVs that appear on the wrong side of the shell when the shell comes from a closed mesh, since a closed mesh causes edges and vertices to wrap around to the other side of the shell (like a cylindrical projection). This stage converts the current and last UVs to their related vertices, then back into UVs; those UVs can then be removed, since the computer now knows they cannot be possible destinations. The third stage removes any points that have already been tested. Finally, the last stage guesses based on the closest point to the current UV. Even with that final guess, the combination of the initial edge filtering and the UV filter sequence has proved very successful in walking UV shells. Errors can still occur, but the better the UV layout, the better the results. Bad in, bad out, as they say.
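The walk itself can be sketched without Maya. In this simplified, Maya-free version (the names `walk_boundary`, `positions`, and `adjacency` are my own stand-ins for the script’s selections and conversions), the boundary is given as a dict mapping each UV id to its (u, v) position plus an adjacency map of ids joined by boundary edges. The filter stages collapse to: keep only boundary neighbours, drop the already-visited ones, and break any remaining tie by picking the closest candidate.

```python
import math

def walk_boundary(start, positions, adjacency):
    """Return boundary UV ids in walking order, starting at `start`.

    positions: {uv_id: (u, v)} for every boundary UV in the shell.
    adjacency: {uv_id: [neighbouring uv_ids]} from the boundary edges.
    """
    order = [start]
    visited = {start}
    current = start
    while True:
        # Filter stages (simplified): keep neighbours that are on the
        # boundary of this shell and have not been walked yet.
        candidates = [n for n in adjacency[current]
                      if n in positions and n not in visited]
        if not candidates:
            break  # the loop has closed (or a bad layout dead-ended)
        # Final stage: when several candidates survive, guess by picking
        # the one closest to the current UV.
        cu, cv = positions[current]
        current = min(candidates,
                      key=lambda n: math.hypot(positions[n][0] - cu,
                                               positions[n][1] - cv))
        order.append(current)
        visited.add(current)
    return order
```

On a well-laid-out shell each step has exactly one unvisited boundary neighbour, so the distance guess is only needed at seams and other ambiguous spots.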
Here are some images illustrating what happens when trying to walk the shell of a base Maya sphere, and what the filter stages do:
Initial UV selection via boundary edges (6 possible UVs to walk to)

First Stage Filtering (down to 4 UVs)

Second Stage Filtering (down to 2 UVs)

Fourth Stage Filtering (closest UV chosen as destination)

…