Beauty in the Battlefield

October 28, 2013

The Battlefield series by DICE hadn’t really been on my radar until recently, when I finally familiarized myself with Battlefield 3 (2011). Unsurprisingly, on a decent PC, BF 3 is a thing of beauty most of the time, even though the heavy post-processing can be quite overblown and visually taxing for some people. Nevertheless, in terms of technology, art direction to an extent, and other systems like vehicles and destruction, BF 3 is so far ahead of the competition (Call of Duty) that it isn’t even funny, and the soon-to-be-released Battlefield 4 seems set to widen that gap even further.

BF 3 gets so many things right visually, especially in the department of what I dubbed Simulation of Style in my thesis, meaning the simulation of how the world appears through devices such as thermal vision. Granted, Call of Duty did it first, but BF 3 does it so much better with more dynamic rendering in place, like more nuanced noise and the cool over-exposure artifact that appears on screen when explosions are depicted. The effect is brutal and beautiful.
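
For the curious, here’s a minimal sketch in Python of how such a thermal-vision pass might be layered: animated per-frame noise on top of a heat buffer, with hot spots past a threshold blowing out toward pure white. Everything here (the function name, the thresholds, the NumPy stand-in for a GPU shader) is my own illustration, not DICE’s actual implementation.

```python
import numpy as np

def thermal_post_process(heat, frame_time, noise_amount=0.05, bloom_threshold=0.8):
    """Toy thermal-vision pass: animated noise plus an over-exposure blow-out.

    `heat` is a 2D float array in [0, 1] standing in for a rendered heat
    buffer; a real engine would do this per pixel in a fragment shader.
    """
    rng = np.random.default_rng(int(frame_time * 60))  # reseed each frame -> animated noise
    noisy = heat + rng.normal(0.0, noise_amount, heat.shape)

    # Hot spots (explosions) past the threshold get pushed hard toward
    # pure white, mimicking the over-exposure artifact described above.
    overexposed = np.where(noisy > bloom_threshold,
                           noisy + (noisy - bloom_threshold) * 4.0,
                           noisy)
    return np.clip(overexposed, 0.0, 1.0)

frame = thermal_post_process(np.random.rand(720, 1280), frame_time=1.0)
```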

Also, the various HUD (head-up display) elements are carefully designed to convey that gritty, functionalist feel with low-resolution graphics, low-frame-rate updating, and subtle flickering. I’m always impressed when I catch a subtle effect that reinforces the overall concept further. In addition, the HUDs genuinely appear to reside within the simulated world, clearly apart from the user-interface graphics, which isn’t always the case. In fact, I would argue that the HUD treatment BF 3 provides is the kind of detail that may go unnoticed, yet demonstrates the developer’s profound understanding of the real-time image and the concept of simulation at large. The devil and the understanding are in the details.
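
As an aside, the low-frame-rate updating is simple to emulate: sample the underlying value at a lower rate than the render loop and hold it in between, with the occasional dropped frame for flicker. A toy Python sketch follows; the rates, names, and flicker odds are purely illustrative, not anything pulled from the game.

```python
import math
import random

class DiegeticHud:
    """Toy HUD readout that updates at a lower rate than the render loop
    and occasionally flickers out, in the spirit of the effects above."""

    def __init__(self, update_hz=10.0, flicker_chance=0.02):
        self.update_interval = 1.0 / update_hz
        self.flicker_chance = flicker_chance
        self.last_update = float("-inf")
        self.displayed_value = 0.0

    def tick(self, now, true_value):
        # Hold the last sampled value between updates for a choppy,
        # device-like readout; return None on a flicker dropout.
        if now - self.last_update >= self.update_interval:
            self.displayed_value = true_value
            self.last_update = now
        if random.random() < self.flicker_chance:
            return None
        return self.displayed_value

hud = DiegeticHud()
for frame in range(120):                      # two seconds at 60 fps
    now = frame / 60.0
    reading = hud.tick(now, math.sin(now))    # smooth signal, choppy readout
```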

However, putting all the “superficial” visual excellence aside, I was surprised to find aesthetic pleasure in a place I never would have thought of: the online multiplayer. I’m sure beauty isn’t necessarily the first concept that comes to mind when bringing up a modern military multiplayer, but for me it was. The thing is, I had never been an online multiplayer gamer until recently, only a casual observer every now and then over the years. So my mental image of what an online multiplayer is and can be was based on visions of technical glitches and other such difficulties.

Soon I realized that my view was remarkably outdated.

I just couldn’t believe how flawlessly a modern online multiplayer can work at its best, and it struck me especially when watching smoothly gliding helicopters in the sky, knowing that real people were operating them. For some reason, it was indeed the motion of hovering helicopters that awed me the most, and as always, the fascination stems from the perceived framework of limitations through which the piece is experienced. Apparently, the smoothly fluctuating motion of a helicopter breaks, or at least pushes, the boundaries of what I thought possible within the online framework, which makes it so beautiful to look at. In the end, it basically comes down to the logic of magic: everything that seemingly goes beyond one’s understanding of reality is fascinating and remarkable.
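
For those curious about the mechanics, that smoothness typically comes from the client interpolating between buffered server snapshots rather than drawing raw network updates as they arrive. Here is a minimal Python sketch of the idea; the names, tick times, and fallback behavior are illustrative guesses on my part, not anything taken from Frostbite.

```python
def interpolate_position(snapshots, render_time):
    """Linearly interpolate between two buffered server snapshots.

    `snapshots` is a time-sorted list of (timestamp, (x, y, z)) tuples.
    Clients typically render slightly in the past, so a pair of
    snapshots bracketing `render_time` is usually available.
    """
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return tuple(a + (b - a) * alpha for a, b in zip(p0, p1))
    return snapshots[-1][1]  # no bracketing pair: hold the newest position

# Helicopter positions from three server ticks, 100 ms apart:
snapshots = [(0.0, (0.0, 50.0, 0.0)), (0.1, (1.5, 50.2, 0.0)), (0.2, (3.0, 50.5, 0.0))]
pos = interpolate_position(snapshots, render_time=0.15)  # halfway between ticks
```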

Oftentimes, to appreciate the beauty of a thing, one must understand its history or, in the case of the real-time image, the technological struggle behind it. Online multiplayer gaming has come a long way, and personally it was a fascinating experience to jump in at a point where things are finally starting to come together seamlessly and, yes, beautifully.

Next-gen titles such as The Division and Destiny are strong signals that the solitary single-player experience is dying, and if the tech is there, as it seems to be, I’m all for it.

Drawing the Line

January 8, 2013

I’ve always been a fan of aesthetics that rely heavily on thick outlines and strong contrasts, such as so-called street art and graffiti in particular. The latter especially is often based on the use of outlines of variable thickness to accomplish visuals that can catch the viewer’s eye from a distance and in an instant, which is indeed the whole point of the exercise.

The other medium known for outline-based visuals is, of course, comics and graphic novels. The reason for such a visual device, I presume, originally had to do with early printing technologies that lacked the fidelity to reproduce sophisticated shading; later on, the style simply remained as the visual language of the graphic novel. Or perhaps drawing with outlines is just dramatically faster and thus makes more sense economically (think of traditional animation). And it looks cool.

In any case, the whole concept of the outline is fascinating to me since it is, in a way, a pure abstraction of solid matter with no real-world counterpart, in contrast to other graphical phenomena like, say, the silhouette. Nevertheless, we decode such lines as solid objects with relative ease, and in some cases even more easily, which is why technical manuals are more often than not illustrated with line drawings instead of realistic renditions.

What’s even more interesting is the fact that small children tend to draw objects as outlines, not as the solid forms they appear to be in the real world. One would think that a child would lack the cognitive function that reduces the phenomenal world to mere lines, but it seems the reality is quite the opposite: we have to actively learn not to draw everything as outlines and meticulously train ourselves to depict the world through shading and texturing without the lines.

So, as said, outlines are indeed a result of human creativity and the ability to abstract, not something we encounter in the natural world, which is, in fact, the core problem when trying to simulate such imagery within the realm of computer graphics. The challenge is that mathematical algorithms can deal with natural phenomena, such as physics or light, rather straightforwardly, but not so with artistic sensibilities. Creativity seems to be solely a domain of the human mind, and we have yet to see an algorithm produce something even comparable to genuine imagination, even looking into the near future.

However, a rendering technique that traces the edges of 3D geometry, popularly known as cel-shading, is one endeavor in trying to mimic the human ingenuity behind comic-book and graffiti aesthetics, one of the most iconic use cases being Jet Set Radio (2000). The results still vary, and even the best cel-shading algorithms cannot produce completely error-free outlining, let alone artistically interesting line variations and nuances. We are getting closer and closer, though, with landmark titles like Street Fighter IV (2008) and MadWorld (2009), which combined beautiful cel-shaded outlining with highly stylized texturing, the latter especially.
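
To make the technique slightly more concrete: one common image-space approach marks an outline wherever the depth buffer changes abruptly between neighboring pixels (real engines also lean on normals, object IDs, or inverted-hull geometry). Below is a minimal Python sketch of that depth-discontinuity test; the threshold and names are my own illustrative choices, not any particular engine’s.

```python
import numpy as np

def outline_mask(depth, threshold=0.02):
    """Image-space outlines: flag pixels where depth jumps between
    neighbors, one simple member of the cel-shading family of edge
    techniques. `depth` stands in for a rendered depth buffer."""
    dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (dx + dy) > threshold  # True where an ink line would be drawn

# A toy depth buffer: a near, square object against a far background.
depth = np.ones((64, 64))
depth[16:48, 16:48] = 0.5
edges = outline_mask(depth)  # True along the square's silhouette
```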

Speaking of stylized texturing, what Telltale Games’ celebrated The Walking Dead (2012) did with its art direction must be one of the cleverest things in the company’s history. The thing is, Telltale’s games have generally been sub-par in terms of technology, especially when it comes to the mere visual surface. However, The Walking Dead concealed that deficiency by adhering to a highly stylized, “low-end” visual scheme (the comic-book look) that, paradoxically, elevated the visuals to a whole new level. The Walking Dead wasn’t so much a compromise anymore, like the earlier titles using the same tech, but a competent piece of real-time imagery within that particular visual, not technological, framework.

What saddens and frustrates me, though, is that Telltale didn’t see the visual scheme all the way through. The genius and the tragedy of The Walking Dead is that they only stylized the textures in order to create the appearance of a comic book, which, I’d assume, required few, if any, modifications to the graphics engine. Indeed, I would’ve loved to see some kind of cel-shading technology in place in addition to the stylized textures (see Borderlands, 2009), which would’ve made the graphics that much more authentic and visually complete.

Games like The Walking Dead are a testament to how thoroughly technological the real-time medium is by nature, in that it can take on the appearance of something completely novel and unprecedented, but also of something familiar and established. To me, it’s indeed the simulation of style that oftentimes makes the largest impact, not necessarily the striving for realism.

Burning Light

November 30, 2012

One of the key features of real-time imagery, one that defines it and sets it apart, is its thoroughly explorative nature. The core real-time experience indeed closely resembles the act of unboxing a new toy and figuring out what that toy can do. Consequently, one cannot overstate the importance of games of the transitional era such as Doom (1993), which made it genuinely possible to wonder “what’s behind that corner?” or “where do these stairs go?” within the real-time context. I guess one had to be there to grasp the full significance of this.

What made Doom such a profound evolutionary step was the fast and responsive agency in a 3D space, which made exploration almost frictionless. I personally came a little late to the party, which, fortunately, made the experience all the more mesmerizing since I was able to run Doom full-screen at a relatively high frame rate. The utter feeling of everything being right in front of one’s face was almost overwhelming and something I had never experienced until then – or since, for that matter.

It can be said that the first-person view popularized by Doom only really made sense with a somewhat sophisticated (fast, texture-mapped) 3D engine, so it was only natural that the early games of that graphics paradigm were, generally speaking, first-person ones. As texture-mapped 3D imagery became more mundane and stabilized into a paradigm thanks to consoles like the Sony PlayStation, we started to see other uses for polygons and textures than depicting the virtual from the first-person view.

Tomb Raider (1996) wasn’t perhaps the first game to utilize a third-person view in a 3D space, but it was the one that de facto created the basis for the modern third-person paradigm. The third-person view wasn’t so much about experiencing the world firsthand as about depicting the interaction between the 3D space and the controllable character in more detail. This made exploration of the world less frictionless, but it gave developers the opportunity to build a strong brand around a recognizable protagonist. Think of Lara Croft, Marcus Fenix, or, say…

Remedy Entertainment’s Alan Wake (2010) is indeed an epitomic example of a modern third-person game, one that did an excellent job of branding its main character through a multilevel narrative and the clever use of a real-life actor. However, the ultimate main character of Alan Wake was the outstanding simulation of light, which, for obvious reasons, had received extra care and attention from the developer. The volumetric lighting effects in particular were really something to marvel at, both as a technological achievement and in their artistic use.

So, all that being said, the one thing that boggles my mind in Alan Wake is the rendering of the flashlight beam, which in many cases overexposes the image in such a way that exploring the environment becomes quite frustrating. As I tried to establish above, real-time imagery is at its core very much about exploration and discovery, so it’s extremely distracting and, moreover, counterintuitive to have a blind spot right where one is supposed to focus one’s view, making the third-person view even more disconnecting for exploration than it already is.

Of course, it could be an error in the exposure engine, since the beam works more or less fine most of the time, but still, an error (or artistic decision) of that magnitude, one that defeats the whole purpose of having a flashlight, is something I can’t get my head around. Overexposure is indeed a classic strategy for aestheticizing light and illumination, and quite an effective one too, but the problem is that plain white conveys no information whatsoever.
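
The information loss is easy to demonstrate with a couple of lines of Python: once linear exposure clips a pixel to pure white, every surface above the clip point maps to the very same value. This toy hard-clips where a real engine would tone map, and it is in no way a reconstruction of Alan Wake’s actual pipeline.

```python
def expose(radiance, exposure):
    """Toy linear exposure with hard clipping: every input above
    1/exposure maps to the same 1.0, i.e. plain white conveys no
    information about what is actually being lit."""
    return min(radiance * exposure, 1.0)

# Two very different surfaces caught in an over-bright flashlight beam:
wall  = expose(2.0, exposure=1.5)   # -> 1.0
paper = expose(8.0, exposure=1.5)   # -> 1.0, indistinguishable from the wall
```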

All in all, ever since I first saw polygons being drawn on a screen, I’ve felt compelled to explore that non-existent space inside out, although admittedly less so in later years. Nevertheless, a carefully constructed 3D space goes to waste if one cannot explore it in a satisfactory way.

CGI is Dying and It’s OK

November 5, 2012

After recently watching The Expendables 2 (2012) and consequently reading a stomach-wrenching making-of piece about its VFX production (hat tip to @osulop), I think I’m finally starting to be done with CGI (i.e. non-real-time computer graphics) as the kind of ubiquitous, uninspired visual filler found in contemporary mainstream live-action cinema.

For me, it’s particularly sad since the most exhilarating moments I’ve had watching movies involve CGI in one way or another, Terminator 2 (1991) and Jurassic Park (1993) being probably the most profound examples. Jurassic Park especially altered the paradigm in my head of what cinema can do, and seeing a credible living, breathing dinosaur on the screen for the first time was something that will unlikely ever be topped. To me, the experience must have been in the same ballpark as that of those who saw moving images for the first time, or at least it felt like it.

Both Terminator 2 and Jurassic Park were at their core not only about telling a compelling story but, more importantly, about pushing the boundaries, showing the unshowable. Of course, the CGI sequences in, say, Terminator 2 do look dated now and stick out like hell, but the effects still radiate ambition and the pure will to make an impact on the medium, which they undeniably ended up doing, to say the least.

However, the cost of producing decent CGI has fallen dramatically since the days of the aforementioned films, and consequently quality has not been able to keep up with the ever-increasing quantity. What’s worse, to come back to The Expendables 2, CGI is now being (mis)used to cut costs (say that to an ILM worker of 1992) and to salvage ill-conceived live-action sequences. And that’s the use of CGI that frustrates me the most: something ordinary made with CGI only because it’s more convenient that way, not because it would otherwise be impossible.

Conversely, if we look at the very high end of the modern CGI spectrum, we do see ambition, medium-pushing, and other traits historically associated with CGI, but there aren’t many players left playing that game. Indeed, it’s almost poetic that James Cameron, who started the mainstream CGI revolution with movies like The Abyss (1989), is the one who has kept that frontier spirit alive in the recent past.

That being said, I think the grand story of blockbuster CGI is coming to an end little by little, and I believe it was indeed Cameron’s Avatar (2009) that made it ultimately difficult to break genuinely new technological ground within the realm of commercial CGI. I do acknowledge that it’s always dangerous to say such a thing about a field so heavily involved with technology, but as David Cohen wrote in his Variety article, the demise of mainstream CGI in terms of artistic integrity and innovation is already evident and in full effect, and, I might add, the most interesting CGI can now be found in small-budget student projects all over Vimeo.

But I’m okay with that. I’ll continue to follow the non-real-time side of computer graphics, but I’m no longer excited about it per se or about where it’s heading. Indeed, if I could have a sneak peek into the future of any human endeavor, it would be real-time imagery, no question about it, especially now that we are on the brink of a new hardware generation.

I can’t wait for the next-gen Gran Turismo, but I can live without Avatar 2.

Cool but Untruthful Story, Bro

October 2, 2012

When it comes to the traditional narrative arc that consists of exposition, complication, climax, and resolution, it really bears no relation to reality whatsoever. The reason we love that structure, though, is that it feeds the belief that in the end all the random occurrences in our lives make sense, in an “everything happens for a reason” way. The yearning for reason and meaning is so profoundly built into us that billion-dollar industries are based on those premises, most prominently mainstream cinema, literature, and drama television.

In reality, this narrative structure can only be found in fiction, and every piece of supposed non-fiction that adheres perfectly to its logic should be viewed with extreme suspicion. History has shown time after time that the truth and a really good story tend to be mutually exclusive concepts, and the controversial cases of fabulists presented as truth-tellers, like James Frey or Mike Daisey, are telling of how upset people get when exceptionally compelling non-fiction turns out to be more or less fabricated.

As I have noted before, the medium of video games isn’t at its core a vehicle for traditional storytelling but rather, ideally speaking, a framework in which people can come up with their own little strings of events. Like kids do with their toys. This should be pretty evident by now even to the most hardcore narrative apologists who keep hoping that a Citizen Kane (1941) of video games will someday come along and legitimize the medium once and for all.

There is, of course, tremendous value beyond structured narrative, and, I would argue, the apparent inability to convey traditional stories is not a weakness of the video game as a medium but an inevitable outcome of its core strength and substance. The thing is, in video games, especially in ones like Grand Theft Auto IV (2008), the total and complete absurdity of life simply becomes more tangible than through any mainstream prewritten narrative.

Indeed, when accidentally driving over people in GTA IV, for instance, it is an empty, cold, meaningless occurrence without any redeeming factor or impact on the larger picture. It just happens. And that kind of representation resonates so much more with reality than, say, a mainstream movie where every little detail has to bear meaning and make sense. The movie Signs (2002), for instance, paints a picture in which all of life’s occurrences, even the most unfortunate ones, form one big jigsaw puzzle that only makes sense once the pieces come together. GTA IV shows us, however, that one wrong turn can result in the most nonsensical and meaningless (but sometimes hilarious) casualties, without any reason or redemption.

What makes video games less truthful is the fact that one can always start the game over after failing by, say, dying. Heavy Rain (2010) acknowledged this and was an attempt at making a narrative-based game without fail states or the need for saving. In fact, the game’s director David Cage explicitly advised everyone not to load a previous state even if events didn’t go as the player would have wanted. Still, Heavy Rain was more like a glorified choose-your-own-adventure book than the messianic, Oscar-winning interactive narrative some of us are still waiting to arrive.

It’s weird that when it comes to representations, the strongest emotions are evoked not by truth but by fabrications. Movies, even so-called documentaries, are excellent at that; games, not so much. But, like I said, games can be truer to the real than movies or any other form of representation will ever be, and that’s what makes video games such a subversive medium.

All in None

September 16, 2012

As much as we’d like to believe otherwise, we are not that much, if at all, smarter than the people who roamed this sphere before us, generally speaking. The thing is, every technological achievement we now manage to pull off rests on previous discoveries that we take for granted and consider self-evident, even though they took exceptionally creative minds to come up with in the first place. Technology-wise, we’d be rather helpless without our vast cultural heritage and the knowledge that has accumulated over centuries, or even millennia, and the old metaphor about “standing on the shoulders of giants” encapsulates the concept quite nicely.

That said, it’s difficult to imagine a more technologically involved medium than the real-time image. Granted, non-real-time computer graphics (CGI) takes an enormous amount of effort and expertise from multiple fields of knowledge, but I’d still assert that producing real-time imagery can be even more technologically demanding because of the additional dimension to account for: performance. People at Pixar can, in theory, tweak a single frame for weeks to make it look perfect without sacrificing anything but production time, whereas in games there are only fractions of a second (usually 1/60) in which to generate a frame, although the two processes aren’t exactly comparable. In addition, in the world of real-time, there’s no luxury of setting up a so-called render farm consisting of hundreds of nodes to distribute the rendering burden. We’ll see, though, whether cloud gaming will change that at some point in the future.
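
To put that budget in concrete terms (the offline render time below is merely a plausible ballpark, not a quoted figure):

```python
# At 60 fps a frame has roughly 16.7 ms for everything: simulation,
# audio, rendering. An offline renderer can spend hours on one frame.
fps = 60
frame_budget_ms = 1000.0 / fps                 # ~16.67 ms per frame
offline_frame_hours = 10                       # illustrative film-CGI figure
ratio = (offline_frame_hours * 3600 * 1000) / frame_budget_ms
print(f"{frame_budget_ms:.2f} ms per frame; offline spends ~{ratio:,.0f}x longer")
```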

So the art of the real-time image is first and foremost an art of understanding the hardware that is the very enabler of the image being produced and interacted with. Secondly, the art of the real-time image rests on the concept of simulation or, more specifically, on the expertise to understand and produce algorithms that connect various simulations to the real world. This is where things get complicated.

It indeed takes an enormous amount of skill and expertise from a number of people to produce a modern AAA game, and increasingly so as games become more and more sophisticated. In retrospect, it’s been fascinating to witness completely new areas emerge in gaming that were previously absent from the real-time discourse, like the simulation of physics circa the early 2000s. And each time the medium refines itself and introduces a new area of simulation, it gives birth to a whole new discipline to which a host of talented people will commit their professional and academic lives.

Consequently, a single developer team – let alone one lone man – cannot excel and push boundaries in every area of the medium anymore, but has to pick its battles instead, and that goes for the bigger studios too. In my mind, Doom (1993) was one of the last “perfect games” that recalibrated the expectations of what real-time imagery can do in almost every respect, and the phenomenal success and cultural impact of the original Doom derive essentially from that fact.

What this means in the grand scheme of things is that innovations and breakthroughs spread unevenly across games, making some do one or two things medium-pushingly well while lagging behind in others. Consider, for instance, how embarrassingly bad the simulation of physics is in one of the biggest franchises ever, Call of Duty, which at the same time continuously pushes the (ambivalent) art of the scripted spectacle forward release after release. Or how Gran Turismo 5 (2010) excels in the field of simulated light like no other but handles vehicular damage about as badly as it gets.

The very nature of technology dictates that once something enters the realm of possibility, it becomes a default soon after the novelty has worn off. In other words, we tend to take for granted things that didn’t exist a moment ago, which, of course, applies beyond gaming and has to do with the human condition in general.

In light of all this, one of the better things that has ever happened to the medium is the middleware industry, which commodifies innovation and liberates developers from reinventing the wheel over and over again. Still, it frustrates me to see one game do some particular thing extraordinarily well, like, say, the brilliant birds in Half-Life 2 (2004), and to realize that said technology may never end up in any future title. Ever.

Thus, it is often the gamer’s ultimate fantasy to imagine an all-in-one title that would cherry-pick the most advanced features across the medium and put them together, but that is indeed a mere daydream. Unfortunately, the gaming industry just isn’t some communist utopia where everyone works for the benefit of the medium at large, even if that does sound great.

So, the issue is that yes, knowledge and innovation do accumulate and spread across games to a degree over time, but not fast and systematically enough for my liking.

Me and a Gun

August 27, 2012

The reality is, the biggest franchises in the current video game space are so-called first-person shooters, as far as genre goes. The franchise that most likely comes to everyone’s mind is the Call of Duty series, which it is nowadays very common to label the lowest common denominator of interactive entertainment due to its popularity. Personally, I try my best to avoid seeing artifacts through their social stigmas, and I think it’s sometimes quite ridiculous how far some people go to make themselves appear superior by bashing popular pieces of entertainment or art.

Anyhow, it’s no coincidence that top-selling video games are more often than not about shooting people with firearms: people generally like shooting. And one doesn’t need real-life experience with an actual combat rifle to recognize that holding and using one is, in a way, the ultimate power trip, as in utter dominance over others. Moreover, the fact that we have this established, massively popular genre known as the first-person shooter is indeed telling that the act of shooting is a central theme in first-person games in general.

In fact, there really aren’t any other major genres with the first-person prefix, even though perhaps there should be. It seems that for a game to qualify as first-person, it needs not only to include a first-person view (most simulators are depicted from the first person) but also to offer a certain freedom for the player to wander around the 3D space as a person. That, I guess, is why Doom (1993) was considered a first-person game while the Microsoft Flight Simulators aren’t. Many times I’ve wished for a flight simulator or a racing game that meaningfully incorporated such freedom into the gameplay, but it’s always about guns, guns, guns.

So, shooting people is such a fundamental way of interacting with the virtual from the first-person point of view that it feels strange and out of place when an AAA game with that perspective comes along that barely involves weaponry, like Mirror’s Edge (2008). Mirror’s Edge was based on finding the right path over the obstacles, keeping the momentum going, and avoiding enemy fire at the same time. However, occasionally the player got hold of a gun and could fire back, which made the shooting feel that much more special and meaningful, if you will. Now the weapon wasn’t a fundamental part of the player’s character, like in most first-person games, but a luxurious object that one almost cherished and that radiated genuine authority.

This all comes down to the fact that I find it highly fascinating when a first-person game (read: shooter) introduces functionality that’s not directly connected to its core ethos, a fascination that dates back to Duke Nukem 3D (1996), which famously contained all kinds of extra stuff to play with. What’s amusing, then, is that in the case of Mirror’s Edge, that functionality was shooting itself. Also, I remember how exciting it was to be able to drive civilian cars in the original Operation Flashpoint (2001), which had little to do with the actual militaristic gameplay but transformed the game as a whole into something much cooler, even though it was quite cool to begin with.

I’m not saying shooting isn’t enough for a game like Call of Duty. I’m saying first-person games should aim a bit higher than being mere shooters in terms of functionality. Crysis (2007) was an ambitious endeavor in that direction, in that the player could pick up and hold almost any object, not just a gun (the system is unparalleled even today), and drive around freely in vehicles both military and civilian.

The first-person view is not an artistic statement but the most natural and obvious way of portraying the virtual, and it frustrates me that the most prevalent first-person genre is tagged with such a specific and limiting term as shooter. At the end of the day, I guess I want genuine first-person Grand Theft Auto-esque games that deliver on-par experiences on all fronts. Please.