Work, Play in Real-time

August 13, 2012

It’s fairly self-evident that life consists not only of the everyday, mundane tasks and goals that make our day-to-day existence possible, but of higher-order aspirations and ambitions as well. As cliché as it may sound, I believe it’s the latter kind of endeavor that makes us human, that allows our existence to rise above mere survival and procreation and connects our very being to something universal, beautiful, or otherwise transcendental.

Sure, activities like self-expression or learning new things can be seen as a kind of survival of the mind, although the only way we can die from the lack of them is from the inside. So as long as our basic needs are met, we tend to require more sophisticated goals to strive toward, ones that cater to the creative and intellectual sides of our being.

One of the central goals of my earlier creative life was to learn the art of so-called 3D imaging, which was gaining serious popularity in the early 90s. Seeing the music videos of that era that used 3D animation as a visual element, such as Swamp Thing by The Grid (cool-looking then, poorly produced by today’s standards), really pushed me to pursue 3D graphics and set pixel-based animation aside. I just had to find a way into that place where I could create computer-generated images like the ones that were so fascinating to watch on TV.

It was somewhere in the latter part of the 90s when I finally cracked the invisible wall between me and 3D imaging by getting hold of and learning 3D Studio 4 by Autodesk. 3D Studio 4 was a relatively user-friendly piece of 3D software for its time, but looking at it now, 15 years later, it’s striking how dull, limited, stiff and uninspiring the work environment that was supposed to feed the creative process really was. Everything was divided into separate modes and sub-programs that forced the user to constantly jump between them. Furthermore, the hardware 3D Studio 4 was running on was so sluggish that it struggled to even keep up with the wireframe rendering, making it sometimes quite frustrating to carry out even a slight adjustment to the camera or the geometry.

However, when 3D accelerator cards finally became everyday items, the whole 3D game, if one pardons the pun, changed in more ways than one. Using a 3D application like 3ds Max that took advantage of 3D acceleration was a completely different experience. The engineer-like work environment had turned into a sandbox that was a delight merely to play with, like spinning the camera around a cube in 3D space at a buttery smooth 60 frames per second just because one could, and because it looked so, so cool.

The creative process comes down to iterations, iterations and iterations. So when everything happens in real-time and at a high frame-rate, the speed at which new iterations can be made is limited only by the user. Every fraction of a second the user has to wait for the machine to comply with the input pulls him or her further away from the flow, which is why I generally prefer working in an environment like 3ds Max over After Effects whenever possible, even for the simplest animations.

To me, playing around in 3ds Max is in a way the purest real-time image experience. There’s no ludus controlling and limiting the play, only one’s imagination and creative skills. The act of 3D modeling, for instance, can be as immersive and captivating an experience as playing a high-end video game, and Minecraft (2011) proves, if anything, that creativity and playfulness can be fused together quite successfully.

One Man Band

July 14, 2012

In retrospect, it seems unbelievable that there was a time when one man, just one, could produce an AAA game which not only pushed the hardware to its limits, but delivered an uncompromising artistic vision as well. The prime example in my mind is Andrew Braybrook, who designed and produced some of the brightest Commodore 64 hits, now considered milestones in home computing, namely Paradroid (1985) and Uridium (1986).

Even though I love both Paradroid and Uridium, I have a special relationship with the latter; it still amazes me how well it took advantage of the C64’s hardware and even some of its disadvantages, like the horizontally doubled pixels. Uridium was indeed a looker in many regards, not least the silky smooth 50 Hz scrolling that put some of the arcade games of the time to shame. Also, the multi-phased ship explosion looked nothing like anything I had seen in games before. Uridium was a visually perfect C64 game, if there is such a thing as perfect.

Uridium is one of those rare, magical occurrences where the right person collided with the right technology at the right time. Braybrook knew the C64 inside out and had a vision, along with the skills and determination to carry that vision through, which resulted in a game that basically blew the competition on the platform out of the water, at least as far as pure visuals are concerned. Unfortunately, Braybrook’s success stayed on the C64 and didn’t translate to the more advanced systems that followed, like the Amiga 500, which is often the case in success stories built on right timing and profound knowledge of the right technology. Uridium 2, released in 1993 on the Amiga 500, was indeed just another shooter that barely left a mark on history.

Nevertheless, I can only imagine the power trip Braybrook must have been on while designing and coding the original Uridium, knowing that one man could make such a big contribution to the gaming community and to real-time imagery at large. That’s something, as said, that will most likely never happen again on any platform. Not even in the so-called indie scene that has found a new foothold on downloadable marketplaces such as the App Store, Xbox Live Arcade or Steam.

Indeed, small one- or two-person operations today simply can’t push the medium forward technologically on multiple fronts the way id Software, Epic or Crytek do. Instead, they can do it through a distinct visual style that makes it possible to produce, in a sense, an AAA game within that particular artistic framework. Consider some of the most celebrated indie games of late, such as Limbo (2010), Superbrothers: Sword & Sworcery EP (2011) or Fez (2012): what they all share is some novel, breakthrough visual paradigm that is easy on the hardware but at the same time pushes the medium artistically to its limits.

Small developers have to pick their battles if they plan to go up against the big boys; there’s no question about it. With Uridium, Andrew Braybrook didn’t have to. He was the big boy back then.

Simulated Ownership

July 1, 2012

I recently came across a scene that I barely knew existed: the replica car scene. Its goal is to replicate the appearance of an exotic car, such as a Ferrari or Lamborghini, as accurately as possible by modifying a regular, far cheaper base vehicle. Yes, it’s an old thing, and on some level I was aware of it, but, as said, it didn’t occur to me until recently how sophisticated today’s replicas could be. The level of detail simply blew my mind.

It’s indeed quite remarkable how far this industry, which is virtually car piracy, has come; some high-end replicas are even labeled as being of “showroom quality”, which casts a shadow of doubt over every exotic car I will confront in the future. And everything is, of course, realized at a fraction of the cost of the genuine article.

What makes such an endeavor feasible, then, is the strategy of replicating only the surface of the original and some of its functionality, which closely resembles the concept of simulation and its connection to the concept of the toy, both of which I have discussed at length here on the site and in my thesis.

In some sense, a replica car is indeed something of an ultimate toy since it simulates, like a toy, ownership of something unobtainable: in this case an expensive luxury car, or in the case of a child, any actual car. What we want but can’t have, we fantasize about having, and owning a replica Ferrari is just that. In the end, though, replica cars, or pirated luxury items in general, are obtained for the gazing eyes of the Other, not just for personal enjoyment. The thing is, it’s pretty difficult to enjoy luxury items when there’s no one watching.

However, my ultimate point here is that replica cars conceptually resemble not only toys but also, very much, their digital counterparts in the realm of real-time imagery. Indeed, a simulated car in a video game does exactly what a real-life replica does: it copies the mere surface of the original without the underlying structure, and reproduces the functionality of the original only to some extent.

One particularly fascinating and illuminating detail in some of the higher-end replicas is the way the supposed engine is realized when it’s visible through a glass hood. Since the actual engine powering a replica most likely doesn’t look anything like the original, a molded engine cover is placed on top of it to make the engine appear to be something from a genuine exotic car.

As already established, this kind of visual mimicry is intriguingly similar to the principles found in the digital realm, in that a) it makes sense to model only the parts visible to the casual observer, and b) appearance and function are two separate entities or layers. In a replica, the functionality is provided by the base car with its engine, chassis, hinges, transmission and so forth. In a polygon-based car, it’s the simulation algorithms, such as those handling the physics and car mechanics, that in the end make the vehicle tick. Polygons and textures are mere surface.
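
To make the analogy concrete, here is a minimal sketch of how that separation might look in code. Everything in it – the class names, the attributes, the toy physics – is a hypothetical illustration of the appearance/function split, not a description of any actual game’s implementation.

    # A hypothetical sketch of the appearance/function split in a game car.
    # The "shell" carries the polygons and textures; the "base car" carries
    # the simulation that actually makes the vehicle tick.

    class CarAppearance:
        """Mere surface: mesh and textures, nothing functional."""
        def __init__(self, mesh_file, texture_file):
            self.mesh_file = mesh_file        # visible geometry only
            self.texture_file = texture_file  # paint job, decals, fake engine cover

    class CarSimulation:
        """The functional layer: simplified physics standing in for the real thing."""
        def __init__(self, mass_kg, engine_power_w, drag_coefficient):
            self.mass_kg = mass_kg
            self.engine_power_w = engine_power_w
            self.drag_coefficient = drag_coefficient
            self.speed_ms = 0.0

        def step(self, throttle, dt):
            # Toy longitudinal physics: thrust from engine power minus air drag.
            thrust = throttle * self.engine_power_w / max(self.speed_ms, 1.0)
            drag = self.drag_coefficient * self.speed_ms ** 2
            self.speed_ms += (thrust - drag) / self.mass_kg * dt

    # The same simulation can wear any shell, just as any base car can wear
    # any fiberglass body.
    shell = CarAppearance("exotic_body.mesh", "exotic_red.png")
    base = CarSimulation(mass_kg=1400, engine_power_w=300000, drag_coefficient=0.4)
    base.step(throttle=1.0, dt=1.0 / 60.0)
    print(round(base.speed_ms, 3))

The point is simply that the polygons never touch the physics, and the physics never knows which body it happens to be wearing.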

As opposed to replicas, however, simulated cars in video games like Gran Turismo 5 are more of a private fantasy than a shared one, since it’s quite difficult to fool and impress others with a collection of polygons and textures, in contrast to plastic and fiberglass. Or at least not until 3D printing makes it possible to translate polygons into real-life items. Like exotic luxury cars.

It seems there’s an innate need for ownership in all of us, more in some than in others. In the case of financially unobtainable things, we tend to resort to all sorts of shortcuts like daydreaming or, in the most extreme cases, robbery and theft. In comparison, replicas and video games are a pretty harmless outlet for those kinds of aspirations.

The Power of Indirect Light

May 31, 2012

If there’s one singular aspect of computer imaging that the credibility of the image comes down to the most, it is how faithfully the behavior of light is depicted; there’s no question about it. To put it in terms used in my thesis, it’s indeed ultimately the simulation of light – or the lack thereof – that has historically made computer-generated imagery unconvincing to the human eye. However, there have been enormous breakthroughs in this particular field during the past decade or so, which have led to near photo-realistic representations.

In my mind, the most prominent advancement in terms of realistic lighting has been the emergence of so-called global illumination (GI) techniques, which take into account, in short, not only the light emitted directly from a light source, but also the bounced light, i.e. indirect illumination. Consequently, every surface the light touches becomes a light source in itself, ad infinitum, which unsurprisingly makes the situation rather demanding, to say the least, in terms of the CPU cycles needed.

Of course, there’s not enough CPU power in the world to calculate the exact result of genuine global illumination, so the process – as in modeling at large – has to be optimized somehow, for example by limiting the number of bounces and the overall resolution of the solution. So today we do have highly optimized GI algorithms out there that render in a reasonable time, even in real-time, and produce somewhat credible results, but which are still, to my knowledge, completely absent outside the tech-demo context.
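
To illustrate why capping the bounce count is such an effective optimization, here’s a deliberately tiny sketch, in Python, of bounce-limited indirect lighting between two diffuse patches. The patch values, reflectances and the single form factor are made-up numbers for the sake of the example, not any renderer’s actual data.

    # Two diffuse patches facing each other; only the "floor" is lit directly.
    # All numbers are invented for illustration.

    MAX_BOUNCES = 3  # the optimization: cut the "ad infinitum" recursion short

    patches = {
        "floor": {"emitted": 1.0, "reflectance": 0.7},  # hit directly by the lamp
        "wall":  {"emitted": 0.0, "reflectance": 0.5},  # lit only by bounced light
    }
    FORM_FACTOR = 0.2  # share of light leaving one patch that reaches the other

    def bounce(light_out, bounces_left):
        """Return the indirect light each patch receives from the other."""
        if bounces_left == 0:
            return {name: 0.0 for name in light_out}
        received = {
            "floor": light_out["wall"] * FORM_FACTOR,
            "wall":  light_out["floor"] * FORM_FACTOR,
        }
        # Each surface the light touches becomes a light source in turn...
        reflected = {name: received[name] * patches[name]["reflectance"]
                     for name in patches}
        later = bounce(reflected, bounces_left - 1)
        return {name: received[name] + later[name] for name in patches}

    direct = {name: p["emitted"] for name, p in patches.items()}
    indirect = bounce(direct, MAX_BOUNCES)
    total = {name: direct[name] + indirect[name] for name in patches}
    print(total)  # the wall ends up lit even though no light source touches it

Each additional bounce contributes less and less, which is exactly why cutting the recursion after a couple of bounces gives most of the visual payoff at a fraction of the cost.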

Since real-time GI is basically out of the question with today’s hardware and use cases, most games use some kind of static indirect illumination to provide that much-needed realism to the overall lighting scheme. A game that notoriously ignored indirect illumination altogether was the super dark Doom 3 (2004), which was perhaps more of a proof of concept from John Carmack that a game could be realized with a fully dynamic lighting system. As a result, Doom 3 looked exceedingly artificial and, as said, extremely dark, making it hard at times to make sense of what was going on. Carmack did backpedal with Rage (2011), whose lighting approach was more practical and aesthetic than ideological, which in part made Rage one of the better-looking games of its genre.

However, a direct opposite to Doom 3 and a prime example of beautifully used static GI is Mirror’s Edge (2008), whose aesthetics relied heavily on the effect. In fact, I would argue that the high-quality and quite realistic GI solution allowed Mirror’s Edge to employ otherwise more abstract and stylized visuals, such as the completely white foliage and the extremely clean, sterile look overall. The realness of the visuals stemmed neither from the geometry nor the textures, but solely from the indirect illumination, static as it was.

In light of all this, I consider, for example, hardware tessellation that allows ultra-refined geometry a completely redundant direction to take as long as these fundamental limitations in light simulation remain. It’s not about polycount anymore and, in a sense, it never was.

It’s increasingly about the need for genuine, dynamic GI solutions, and I can’t wait to see what the next generation has up its sleeve in this regard. Hopefully something.

Smoother, Sharper, Duller

May 20, 2012

I think it’s fair to say that Wolfenstein 3D, released almost exactly 20 years ago, was the de facto ground zero for the modern first-person shooter, even more so than Doom, which came 18 months later. Wolf 3D was a killer combination of remarkably fluid gameplay and groundbreaking visuals that pushed the envelope of what consumer hardware could do back then. I, for one, saw real-time texture mapping in action for the first time in Wolf 3D, and remember initially believing it was just another tile-based first-person game like Dungeon Master when my friend tried to describe it to me. When I finally had the chance to see Wolf 3D for myself, the whole first-person paradigm I had in my head up to that point changed in that very instant.

Those moments of realizing that there’s no going back are the ones that keep me following the medium, and in the case of Wolf 3D it was the texture mapping that did it for me. Suddenly, the line between polygon-based and bitmap-based graphics was blurred for good.

Of course, at that time there were drastic technical restrictions on the textures in terms of resolution and color palette. Indeed, as if their blocky look wasn’t enough, the lack of colors had to be compensated for by generating additional shades through dithering, which was and is a common practice whenever operating on a limited palette. Dithering is now obviously an obsolete technique, as the number of colors available on modern systems is virtually infinite.
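
For the record, the trick itself is almost trivial. Below is a minimal sketch of ordered dithering in Python, using one common normalization of the 2×2 Bayer matrix and a made-up palette of pure black and white; Wolf 3D’s actual textures were of course hand-drawn, not generated like this.

    # Ordered dithering: simulating intermediate shades with only two colors.

    BAYER_2X2 = [[0.125, 0.625],   # one common normalization of the 2x2 Bayer matrix
                 [0.875, 0.375]]

    def dither(gray_image):
        """gray_image: rows of brightness values in 0..1 -> rows of 0/1 pixels."""
        result = []
        for y, row in enumerate(gray_image):
            out_row = []
            for x, value in enumerate(row):
                threshold = BAYER_2X2[y % 2][x % 2]
                out_row.append(1 if value >= threshold else 0)
            result.append(out_row)
        return result

    # A flat 50% gray square turns into a checkerboard of black and white
    # pixels that reads as a middle shade from a distance.
    gray = [[0.5] * 8 for _ in range(8)]
    for row in dither(gray):
        print("".join("#" if pixel else "." for pixel in row))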

What happens, then, when visuals that use dithering, such as the textures in Wolf 3D, are brought into an environment free from the restrictions described above, one that on top of that features a drastically higher screen resolution and texture filtering, like the iPhone?

To me, it comes across as wrong and out of place, especially since it’s clear that the dithering isn’t the product of an artistic choice, but something that originally stemmed from technical limitations.

I’m a firm believer in the notion that an art piece should be experienced first and foremost in the exact condition in which it first left its creators’ hands, with all the flaws and deficiencies included. “Enhancing” an old piece, especially a historically remarkable one, with modern technology simply doesn’t add as much to it as it takes away, which is something George Lucas notoriously failed to grasp. The fact is, a considerable portion of an art piece consists of its historical and technological context, which is then eroded away by anachronistic technologies, such as the higher screen resolution and texture filtering in this case.

Seeing the dithering effect on a filtered texture through the iPhone’s high-resolution screen is a strange, Frankenstein-esque visual experience. To me, the game now appears merely as a cheap and rather uninteresting piece of real-time imagery, not as one that pushed the medium forward.

Lovely Noise

May 9, 2012

Post-processing can be, and often is, a pretty muddled place in the realm of creative imaging. Consider, for example, people who are new to Photoshop and how they tend to apply every effect and filter there is to an image only because they can. Later on, hopefully, it becomes clear that not every photo needs a massive amount of lens flares and other Photoshop trademarks to justify its existence. What’s worse, the extensive use of filters, especially the gimmicky ones, is oftentimes carried out to mask the deficiencies of the original imagery, which is, of course, misguided and abusive behavior towards any visual piece.

Not always, though. If there’s one post-processing effect I’m okay with for the job described above, it’s noise, or film grain, if you will. I find noise as a visual effect extremely fascinating and pleasing to the eye in the real-time context, as long as there’s at least some kind of rationale behind the effect and a certain subtlety to it, which obviously applies to post-processing in general.

One example is Mass Effect 2 (2010), which shows how simple noise can be an elegant and yet powerful post-processing effect. The noise disrupts quite nicely the otherwise clean and sterile surface we have come to expect from a modern synthetic image, and it’s in fact something of an antithesis to the digital medium, which is generally free of such phenomena, in contrast to film, for instance. And, as said, the subtle noise in ME 2 hides, or rather distracts from, the minor problems in the image, like those related to filtering, anti-aliasing and such. In addition, the noise makes the visuals in a way more lively and, to an extent, more coherent.
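
As a rough idea of how simple the effect can be at its core, here’s a minimal NumPy sketch of adding subtle per-frame grain to an image. The uniform noise model and the strength value are my own assumptions for illustration; I have no knowledge of BioWare’s actual implementation.

    import numpy as np

    def add_film_grain(frame, strength=0.04, rng=None):
        """Add subtle per-pixel noise to a frame (brightness values in 0..1).

        A fresh noise pattern every frame is what keeps the grain alive;
        the strength stays low so the noise distracts from artifacts
        rather than becoming one itself.
        """
        rng = rng or np.random.default_rng()
        noise = rng.uniform(-strength, strength, size=frame.shape)
        return np.clip(frame + noise, 0.0, 1.0)

    # A flat mid-gray 4x4 "frame"; in a game this would run once per rendered frame.
    frame = np.full((4, 4), 0.5)
    print(add_film_grain(frame))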

On the other hand, if more games used the effect in question, I probably wouldn’t care that much about it. I believe it’s indeed partly the curiosity value of the effect that fascinates me, especially since that type of pixel-sharp noise is virtually absent in modern digital imagery at large, specifically when it comes to video. This is due to the compression algorithms involved, such as MPEG, which often get rid of the subtle noise the original, uncompressed imagery may have had. Funnily enough, the high-resolution, 60 frames per second noise of ME 2 registers for that reason as something of a luxury to me, even though noise is generally perceived as an unwanted visual artifact.

The only thing that bothers me about the ME 2 noise is that BioWare didn’t have the balls to fully embrace the effect as a genuine artistic decision, ending up making it optional instead. Furthermore, in Mass Effect 3 (2012) the noise was just gone, so I guess in the end people didn’t like it that much.

Well, I did.

Sophie’s Choice

April 27, 2012

In his 2004 book The Paradox of Choice – Why More Is Less, American psychologist Barry Schwartz argued that, in general, the more choice we as consumers are given, the less happy we become. It turns out the psychological burden that comes with an array of choices actually outweighs the hypothetical gain of the optimal decision, which leads to anxiety and distress. Making choices makes us, by and large, miserable, it seems.

The things that bother us about choices are the trade-offs and, consequently, the feelings of loss that we have to deal with when evaluating the options at hand. Yes, for some reason we tend to think, as Schwartz writes, that we actually lose all the alternatives as a whole when we have to pick just one, which is obviously an erroneous line of thinking. Still, it hurts to make a decision whenever there’s plenty to choose from, and one only has to observe a kid at McDonald’s choosing his or her Happy Meal toy to see how painful it can sometimes be when there are mutually exclusive but equally enchanting options on the table.

The McDonald’s kid leads us conveniently to the agony most PC gamers face every time they launch a freshly installed game: the host of sliders and drop-down menus found in the graphics options. Of course, the graphics options are a non-issue for players who happen to possess a top-of-the-line PC on which everything can be maxed out, quite joyfully so, I’d assume. But for the rest of us, the graphics sliders can be a serious source of misery and distress.

The agonizing trade-off here is the classic performance vs. image-quality dichotomy, something that has defined the real-time image throughout its existence. On a given piece of hardware, we simply can’t have maximum performance and maximum image quality at the same time, both notions being, of course, theoretical ideals in and of themselves. From the moment a single pixel is drawn on the screen, we are trading performance for image quality.

The key question, then, arises: to what extent are we willing to sacrifice performance for image quality? The one thing I love about consoles is that, due to their fixed hardware specs, it’s the developers who solve the performance/image-quality equation for the player, which a) makes console games more auteur-driven, and b) means the console gaming experience is identical across the platform, so no one feels left out.

However, as said, the PC player with lower-end hardware isn’t that fortunate. Adjusting the graphics settings to “Low” or “Medium” isn’t the source of anxiety per se; rather, it’s the sheer awareness that there are also “High”, “Very High” and even “Ultra” settings available, yet unattainable performance-wise. It’s indeed amusing, and sad, how the high-end settings can make the current settings seem worse by their mere existence. What’s more, the “Low” and “High” labels bear no absolute value whatsoever. I believe people would’ve been much happier with Crysis (2007) if the “Medium” setting had simply been renamed “High”, just like at Starbucks “Small” is called “Tall”, I hear. It’s all relative.

In the end, the true agony of graphics options comes down to the heartbreaking optimization process in which the player struggles to find some magical combination among tens of sliders that hurts performance the least while still keeping the image quality acceptable. Personally, I’m all about performance, so anything below 60 frames per second is a compromise in my eyes, which keeps me jumping between the gameplay and the settings for quite some time.

For many playing on dated PC hardware, there are indeed usually tens of choices and trade-offs to be made before the game can really start. Don’t get me wrong, though: I’m all for options, tweaking, fine-tuning and all that. Yet the freedom of choice tends to come with a considerable psychological price that console gamers are generally spared.

A price that keeps us mulling over what we can’t have instead of enjoying what we do have.