If there’s one aspect of computer imaging on which the credibility of an image most depends, it is how faithfully the behavior of light is depicted. To put it in terms used in my thesis, it is ultimately the simulation of light, or the lack thereof, that has historically made computer-generated imagery unconvincing to the human eye. There have, however, been enormous breakthroughs in this particular field during the past decade or so, which have led to near photo-realistic results.
To my mind, the most prominent advancement in realistic lighting has been the emergence of so-called global illumination (GI) techniques, which account not only for the light arriving directly from a light source but also for bounced light, i.e. indirect illumination. Consequently, every surface the light touches becomes a light source in itself, ad infinitum, which unsurprisingly makes the problem rather demanding, to say the least, in terms of CPU cycles.
Of course, there’s not enough CPU power in the world to calculate an exact global illumination solution, so the process, as in modeling at large, has to be approximated somehow, for example by limiting the number of bounces and the overall resolution of the solution. Today we do have highly optimized GI algorithms that produce somewhat credible results even in real time, but which are still, to my knowledge, absent outside the tech-demo context.
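The bounce-limiting idea can be sketched with a toy, deterministic model. This is purely illustrative; the scene representation and the `radiance` function are my own invention, not any renderer’s actual API. Each surface’s outgoing light is its own emission plus the attenuated light it receives from whatever it faces, and the recursion is simply cut off after a fixed number of bounces:

```python
# Toy sketch of bounce-limited global illumination (hypothetical scene model).
# Each surface's outgoing light = its own emission + albedo * light arriving
# from the one surface it "sees", recursed at most `max_bounces` times.

def radiance(scene, idx, max_bounces):
    """Light leaving surface `idx`, following at most `max_bounces` bounces."""
    if max_bounces < 0:
        return 0.0
    surf = scene[idx]
    bounced = radiance(scene, surf["sees"], max_bounces - 1)
    return surf["emission"] + surf["albedo"] * bounced

# Two-surface scene: a white wall (surface 0) facing a lamp (surface 1);
# the lamp in turn reflects a little of the wall's light back.
scene = [
    {"emission": 0.0, "albedo": 0.8, "sees": 1},  # wall
    {"emission": 1.0, "albedo": 0.1, "sees": 0},  # lamp
]

direct_only = radiance(scene, 0, 1)  # wall lit by the lamp alone: 0.8
with_gi = radiance(scene, 0, 4)      # extra bounces brighten it: 0.864
```

Because each extra bounce is attenuated by the product of the albedos, the contribution of deeper bounces shrinks geometrically, which is exactly why cutting the recursion off after a few bounces still looks plausible.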
Since real-time GI is essentially out of the question on today’s hardware and in today’s use cases, most games use some kind of static indirect illumination to bring that much-needed realism to the overall lighting. A game that notoriously ignored indirect illumination altogether was the extremely dark Doom 3 (2004), which was perhaps more of a proof of concept from John Carmack that a game could be built on a fully dynamic lighting system. As a result, Doom 3 looked exceedingly artificial and, as said, so dark that it was at times hard to make sense of what was going on. Carmack did backpedal with Rage (2011), whose lighting approach was practical and aesthetic rather than ideological, which in part made Rage one of the better-looking games of its genre.
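The static approach mentioned above, solving the expensive GI once offline and merely sampling the stored result at runtime, can be illustrated roughly like this. A minimal sketch with hypothetical names; real engines bake into lightmap textures per surface rather than Python lists:

```python
# Sketch of "baked" static indirect illumination (illustrative only).

def bake_lightmap(width, height, gi_solver):
    """Offline step: run the expensive GI solve once per lightmap texel."""
    return [[gi_solver(u / width, v / height) for u in range(width)]
            for v in range(height)]

def shade(albedo, lightmap, u, v):
    """Runtime step: a cheap lookup; no bounces are computed per frame."""
    h, w = len(lightmap), len(lightmap[0])
    texel = lightmap[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
    return albedo * texel

# Stand-in "GI solver": surfaces get brighter toward the top (v = 1).
lightmap = bake_lightmap(4, 4, lambda u, v: 0.2 + 0.8 * v)
```

The trade-off is the one discussed above: the lookup is practically free, but the stored solution is frozen, so moving lights or geometry invalidate it.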
A direct opposite to Doom 3, however, and a prime example of beautifully used static GI, is Mirror’s Edge (2008), whose aesthetics relied heavily on the effect. In fact, I would argue that its high-quality and quite realistic GI solution is what allowed Mirror’s Edge to employ otherwise abstract and stylized visuals, such as the completely white foliage and the extremely clean, sterile look overall. The realism of the visuals stemmed not from the geometry or the textures but from the indirect illumination alone, static though it was.
In light of all this, I consider, for example, hardware tessellation, which allows ultra-refined geometry, a redundant direction to pursue as long as these fundamental limitations in light simulation remain. It’s not about polygon count anymore and, in a sense, it never was.
It’s increasingly about the need for genuine, dynamic GI solutions, and I can’t wait to see what the next generation of hardware has up its sleeve in this regard. Hopefully something.