Hi there! Here’s a quick writeup of how we occlude wearables, mainly for coming back to when future me is puzzled over his past decisions.
No Truce’s fantastical realism does not lend itself to clearly delineated character classes whose members look so much alike you could get away with just a texture swap. What we have is more or less everyday clothes on more or less (mostly less) everyday people. That calls for underwear, shirts, pants, boots, coats, hats, gloves, etc. In other words, a truckload of assets to be combined in a truckload of ways. That, in turn, means a lot of mesh clipping if you’re not careful. The problem with careful is that it’s time-consuming and not fun at all.
The initial plan was chopping the character base mesh into pieces and hiding the pieces not visible under the current apparel. That would include chopping up shirts, which could be partly covered by a coat or a jacket, and trousers, which could be partly covered by various lengths of boots. Or the other way around. Referential joke: Hey, that’s even more chopping than Hugh Jackman.
This is apparently where most technical artists put their foot down and ask character artists to start standardizing their clothes. But I’m a people pleaser and wouldn’t dare tell kinnas how to art, so I prefer sorting things out before opting for the “technological limitations” excuse.
A less naive approach
(Did he just call the industry’s standard methods naive? Read on to find out.)
Since we’re in the privileged position of not pushing many polygons, we don’t really need the polygon reduction from the aforementioned method and can get away with simply making the underlying geometry invisible.
And once we’re just setting transparencies, we don’t even need to do it per polygon. A low-resolution map will suffice.
However, we still have a few problems to solve:
- each article of clothing does not know what it occludes or what occludes it.
- each article of clothing is an arbitrary soup of polygons that does not know or care where on the body it sits.
For the time being, the former will be handled by a simple script which places assets into an array (hat/coat/shoes/etc.) and has them occlude each other in a static order.
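A minimal sketch of that static order, with hypothetical slot names (the actual script and slot list are the project’s own):

```python
# Hypothetical slot names, outermost first; each slot occludes
# everything that comes after it in the list.
OCCLUSION_ORDER = ["hat", "coat", "jacket", "shirt", "boots", "trousers", "body"]

def occluders_of(slot):
    """Return the slots that may occlude the given slot, i.e. everything outside it."""
    return OCCLUSION_ORDER[:OCCLUSION_ORDER.index(slot)]

occluders_of("shirt")  # ["hat", "coat", "jacket"]
```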
The latter is a more interesting task, however. To avoid any time-consuming proximity baking, we will need to describe the body mesh and the wearable assets in a single topological space.
We will add an additional UV map to each asset to describe just that.
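To make the idea concrete, here’s a toy sketch (map and numbers invented): every vertex carries a second UV coordinate into the shared body space, so asking “is this spot covered?” becomes a plain lookup into a low-resolution map.

```python
def sample(occlusion_map, uv):
    """Nearest-neighbour lookup into a low-res map given a (u, v) in [0, 1]."""
    h, w = len(occlusion_map), len(occlusion_map[0])
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return occlusion_map[y][x]

# Toy 2x2 map: 1 where the garment covers that region of the shared space.
jacket_map = [[1, 1],
              [0, 0]]

sample(jacket_map, (0.25, 0.25))  # region covered by the jacket -> 1
sample(jacket_map, (0.25, 0.75))  # region left uncovered -> 0
```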
Enter the Vitruvian Map
…and making a little b/w map to describe where the object occludes (could be automated, I guess).
Jacket’s occlusion map to be applied to underlying layers of clothing.
In engine, we loop over the array of clothes from outer to inner layers, grabbing each garment’s vitruvian map as we go, multiplying it with the previous ones and applying the result to the layer beneath. So a shirt will receive the jacket’s vitruvian map, and the body will receive the shirt’s vitruvian map multiplied by the jacket’s, and will thus be occluded by both. Use the multiplied maps to dictate alpha cutoff and you’re done.
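The loop, sketched with toy four-texel “maps” standing in for the low-res textures (names and values invented; I’m also assuming the convention that a texel is 0.0 where the layer covers what’s beneath it and 1.0 where it doesn’t, so a straight multiply accumulates occlusion):

```python
def cascade(layers):
    """layers: (name, vitruvian_map) pairs ordered outermost-first.
    Returns the combined mask each layer multiplies into its alpha;
    the outermost layer gets None (nothing occludes it)."""
    masks, combined = {}, None
    for name, vmap in layers:
        masks[name] = combined
        combined = vmap if combined is None else [a * b for a, b in zip(combined, vmap)]
    return masks

# Toy maps: the jacket covers texels 0-1, the shirt covers texels 0-2.
jacket = [0.0, 0.0, 1.0, 1.0]
shirt  = [0.0, 0.0, 0.0, 1.0]
body   = [1.0, 1.0, 1.0, 1.0]  # the body occludes nothing beneath it

masks = cascade([("jacket", jacket), ("shirt", shirt), ("body", body)])
masks["shirt"]  # the jacket's map
masks["body"]   # the jacket's map * the shirt's map
```

Pixels whose accumulated mask falls below the alpha-cutoff threshold get discarded, which is the whole trick.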
The jacket’s vitruvian applied to body alpha. Cascading nature of this method not illustrated.
Sometimes we keep buggy code to use for a potential dream sequence.