All in None

As much as we’d like to believe otherwise, we are not much, if at all, smarter than the people who roamed this sphere before us, generally speaking. The thing is, every technological achievement we now manage to pull off rests on previous discoveries that we take for granted and consider self-evident, even though they took exceptionally creative minds to come up with in the first place. Technology-wise, we’d be rather helpless without our vast cultural heritage and the knowledge that has accumulated over centuries, or even millennia; the old metaphor about “standing on the shoulders of giants” encapsulates the concept quite nicely.

That said, it’s difficult to imagine a more technologically involved medium than the real-time image. Granted, non-real-time computer graphics (CGI) takes an enormous amount of effort and expertise from multiple fields of knowledge, but I’d still assert that producing real-time imagery can be even more technologically demanding because of the additional dimension to account for: performance. People at Pixar can, in theory, tweak a single frame for weeks to make it look perfect without sacrificing anything other than production time, whereas in games there are only fractions of a second (usually 1/60) to generate one frame – although the two processes aren’t exactly comparable. In addition, in the world of real-time, there’s no luxury of setting up a so-called render farm of hundreds of nodes to distribute the rendering burden. We’ll see, though, whether cloud gaming changes that at some point in the future.
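To put that frame budget in concrete terms, here is a back-of-the-envelope sketch (in Python, with a hypothetical helper function of my own naming) of how many milliseconds a real-time renderer actually has per frame at common target frame rates:

```python
# Hypothetical illustration: the per-frame time budget a real-time renderer
# must fit ALL its work into (simulation, rendering, audio, input handling),
# in contrast to offline CGI, where a frame can take hours or days.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to produce one frame at the given frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 120):
    print(f"{fps} fps -> {frame_budget_ms(fps):.2f} ms per frame")
# 30 fps -> 33.33 ms per frame
# 60 fps -> 16.67 ms per frame
# 120 fps -> 8.33 ms per frame
```

At 60 fps, everything – physics, AI, rendering – has to complete in under 17 milliseconds, which is exactly why performance is the dimension that makes real-time imagery so demanding.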

So the art of the real-time image is first and foremost an art of understanding the hardware that is the very enabler of the image being produced and interacted with. Secondly, the art of the real-time image rests on the concept of simulation – or, more specifically, on the expertise to understand and produce algorithms that connect various simulations to the real world. This is where things get complicated.

It indeed takes an enormous amount of skill and expertise from a number of people to produce a modern AAA game, and increasingly so as games become more sophisticated. In retrospect, it has been fascinating to witness entirely new areas emerge in gaming that were previously absent from the real-time discourse, like the simulation of physics in the early 2000s. And each time the medium refines itself and introduces a new area of simulation, it gives birth to a whole new discipline to which a host of talented people will commit their professional and academic lives.

Consequently, a single development team – let alone one lone developer – can no longer excel and push boundaries in every area of the medium, but has to pick its battles instead, and this goes for the bigger studios too. In my mind, Doom (1993) was one of the last “perfect games” that recalibrated the expectations of what real-time imagery can do in almost every respect, and the phenomenal success and cultural impact of the original Doom derives essentially from that fact.

What this means in the grand scheme of things is that innovations and breakthroughs spread unevenly across games, so that some do one or two things medium-pushingly well while lagging behind in others. Consider, for instance, how embarrassingly bad the simulation of physics is in one of the biggest franchises ever, Call of Duty – which at the same time pushes the (ambivalent) art of scripted spectacle forward release after release. Or how Gran Turismo 5 (2010) excels in the field of simulated light like no other, yet handles vehicular damage just about as badly as it gets.

The very nature of technology dictates that once something enters the realm of possibility, it becomes a default soon after the novelty has worn off. In other words, we tend to take for granted something that didn’t exist a moment ago – which, of course, applies beyond gaming and has to do with the human condition in general.

In light of all this, one of the better things that has ever happened to the medium is the middleware industry, which commodifies innovation and liberates developers from reinventing the wheel over and over again. Still, it frustrates me to see one game doing some particular thing extraordinarily well – like, say, the brilliant birds in Half-Life 2 (2004) – and to realize that said technology may never end up in future titles. Ever.

Thus, the ultimate fantasy of a gamer is often to imagine an all-in-one title that would cherry-pick the most advanced features from across the medium and put them all together, but that is indeed a mere daydream. Unfortunately, the gaming industry just isn’t some communist utopia where everyone works to benefit the medium at large, even if that would sound great.

So, the issue is that yes, knowledge and innovation do accumulate and spread across games to a degree over time – but not fast and systematically enough for my liking.