

Deferred rendering creates a bunch of problems with transparency in exchange for cheap dynamic lighting, and it doesn't reduce the computational cost of shadowing. It's a half-solution with a bunch of side effects.
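For anyone wondering why transparency specifically breaks: a deferred G-buffer stores exactly one surface per pixel, so there's nowhere to keep the extra layers that alpha blending needs. Here's a minimal CPU-side sketch of that constraint; the struct layout and names are my own illustration, not any particular engine's:

```cpp
// Sketch only: why a G-buffer can't hold transparency.
// It keeps exactly one opaque surface per pixel, so a second,
// partially visible layer behind it simply has no place to live.
#include <cstddef>
#include <vector>

struct GBufferTexel {
    float albedo[3];  // surface colour
    float normal[3];  // world-space normal
    float depth;      // view-space depth; assume cleared to a huge value
    // Note what's missing: no alpha, no list of layers -- one hit only.
};

struct Fragment {
    GBufferTexel surface;
    std::size_t pixel;  // which texel this fragment lands on
    float alpha;        // < 1 means transparent
};

// Geometry pass: keep the nearest opaque fragment per pixel.
// Transparent fragments can't be represented here, so engines
// typically skip them and composite them later in a forward pass.
void geometryPass(std::vector<GBufferTexel>& gbuffer,
                  const std::vector<Fragment>& frags) {
    for (const Fragment& f : frags) {
        if (f.alpha < 1.0f) continue;  // the deferred path can't store it
        GBufferTexel& dst = gbuffer[f.pixel];
        if (f.surface.depth < dst.depth) dst = f.surface;  // depth test
    }
}
```

That separate forward pass for transparent geometry is exactly the kind of side effect I mean.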

Look at this example from Monsters University. The actual modelled asset is far lower poly than we'd see in a modern AAA game (look at that sphere, which is literally represented by a simple cube), but when subdivided a few times, the actual form is revealed.

Now imagine an adaptive system in a game, where the engine seamlessly transitions between the original model on the left and the subdivided one on the right depending on how many pixels each polygon takes up on screen, and the player could keep zooming in indefinitely without ever seeing a hard polygonal edge. It's like LOD, but generated in real time based on camera distance, with very little pop-in because there aren't a few distinct polycount levels to jump between.

And of course, if you can subdivide a model, then you can use a displacement map to displace its polygons as well. My profile picture is actually a model of a cartoon moon that I made in ZBrush, but if you look at the actual scene in Maya you'll just see a super low-poly sphere, because all the craters, indents and such are simply stored as a black-and-white displacement map. (Rough code sketches of both ideas follow at the end of this post.)

I'm sure one day (maybe in the relatively near future) games will be able to do the same thing with the same level of accuracy, and at that point the process of modelling assets for games and film becomes virtually identical.
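To make the adaptive idea concrete, here's a rough sketch of picking a per-edge subdivision level from how many pixels the edge covers on screen. This is illustrative math of my own, not Pixar's or any engine's actual code; `targetPixelsPerEdge` and `maxLevel` are hypothetical tuning knobs:

```cpp
// Sketch only: choosing a subdivision level from screen coverage.
// Real implementations live in a tessellation/hull shader, and
// targetPixelsPerEdge is a made-up tuning knob, but the math is the same.
#include <algorithm>
#include <cmath>

// Approximate on-screen length in pixels of an edge of the given
// world-space length at distance `dist` from the camera.
float projectedPixels(float edgeLength, float dist,
                      float fovY, float screenHeightPx) {
    // World-space size covered by one pixel at this depth.
    float worldPerPixel = 2.0f * dist * std::tan(fovY * 0.5f) / screenHeightPx;
    return edgeLength / worldPerPixel;
}

// Each subdivision roughly halves edge length, so the number of extra
// levels needed is log2(current pixels / target pixels per edge).
int subdivisionLevel(float edgeLength, float dist,
                     float fovY, float screenHeightPx,
                     float targetPixelsPerEdge = 8.0f, int maxLevel = 6) {
    float px = projectedPixels(edgeLength, dist, fovY, screenHeightPx);
    if (px <= targetPixelsPerEdge) return 0;  // already small enough on screen
    int level = (int)std::ceil(std::log2(px / targetPixelsPerEdge));
    return std::min(level, maxLevel);  // cap so polycount stays bounded
}
```

As the camera moves closer, `projectedPixels` grows and the level climbs one step at a time, which is where the "very little pop-in" comes from compared to a handful of hand-authored LODs.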
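The displacement step itself is simple once the mesh has been subdivided densely enough: sample the greyscale map at each vertex's UV and push the vertex along its normal. Again just a sketch under assumptions; `scale` and `bias` stand in for whatever range the exported map encodes (ZBrush maps usually treat mid-grey as zero displacement):

```cpp
// Sketch only: displacing subdivided vertices with a greyscale map.
// sampleHeight, scale and bias are illustrative stand-ins for the
// actual map format exported by the sculpting tool.
#include <algorithm>
#include <vector>

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

struct Vertex {
    Vec3 position;
    Vec3 normal;  // assumed unit length
    Vec2 uv;      // assumed in [0, 1]
};

// Bilinear sample of a single-channel height image with values in [0, 1].
float sampleHeight(const std::vector<float>& img, int w, int h, Vec2 uv) {
    float x = uv.u * (w - 1), y = uv.v * (h - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    float fx = x - x0, fy = y - y0;
    float top = img[y0 * w + x0] * (1 - fx) + img[y0 * w + x1] * fx;
    float bot = img[y1 * w + x0] * (1 - fx) + img[y1 * w + x1] * fx;
    return top * (1 - fy) + bot * fy;
}

// Push every vertex along its normal by the sampled height.
// With bias = -0.5f * scale, mid-grey means "no displacement".
void applyDisplacement(std::vector<Vertex>& mesh,
                       const std::vector<float>& heightMap,
                       int mapW, int mapH, float scale, float bias) {
    for (Vertex& v : mesh) {
        float offset = sampleHeight(heightMap, mapW, mapH, v.uv) * scale + bias;
        v.position.x += v.normal.x * offset;
        v.position.y += v.normal.y * offset;
        v.position.z += v.normal.z * offset;
    }
}
```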
