Archive for the ‘Medium’ Category

Beauty in the Battlefield

October 28, 2013

The Battlefield series by DICE hadn’t really been on my radar until recently, when I finally familiarized myself with Battlefield 3 (2011). Unsurprisingly, on a decent PC, BF 3 is a thing of beauty most of the time, even though the heavy post-processing can be quite overblown and visually taxing for some people. Nevertheless, in terms of technology, art direction to a degree, and other systems like vehicles and destruction, BF 3 is so far ahead of the competition (Call of Duty) that it isn’t even funny, and the soon-to-be-released Battlefield 4 seems to widen that gap even further.

BF 3 gets so many things right visually, especially in the department of what I dubbed Simulation of Style in my thesis, meaning the simulation of how the world appears through devices such as thermal vision. Granted, Call of Duty did it first, but BF 3 does it so much better, with more dynamic rendering in place: more nuanced noise, and the cool over-exposure artifact that appears on the screen when explosions are depicted. The effect is brutal and beautiful.
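
To make the idea concrete, here’s a minimal sketch of that kind of thermal-vision treatment, assuming nothing fancier than a luminance pass, per-frame sensor noise, and hard over-exposure on hot spots; the function and its parameters are my own illustration, not anything from DICE’s actual renderer:

    import numpy as np

    def thermal_view(rgb, noise_sigma=0.04, blowout=0.85, rng=None):
        # Toy thermal-vision pass over an (H, W, 3) float image in [0, 1]:
        # luminance stands in for heat, noise is regenerated every frame
        # for that "live sensor feed" look, and hot sources blow out to white.
        rng = rng if rng is not None else np.random.default_rng()
        heat = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance
        heat += rng.normal(0.0, noise_sigma, heat.shape)  # per-frame sensor noise
        heat[heat > blowout] = 1.0                        # explosions wash out the screen
        return np.clip(heat, 0.0, 1.0)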

Also, the various HUD (Head-Up Display) elements are carefully designed to convey that gritty, functionalist feel through low-resolution graphics, low-frame-rate updating, and subtle flickering. I’m always impressed when I catch a subtle effect that reinforces the overall concept further. In addition, the HUDs genuinely appear to reside within the simulated world, clearly apart from the user-interface graphics, which isn’t always the case. In fact, I would argue that the HUD treatment BF 3 provides is one of those details that may go unnoticed but nevertheless demonstrates the developer’s profound understanding of the real-time image and of the concept of simulation at large. The devil, and the understanding, are in the details.
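
As a sketch of how that low-frame-rate HUD feel might be achieved, consider a main loop that re-draws the HUD texture at a fraction of the render rate. The structure and the numbers below are assumptions on my part, not BF 3’s actual code:

    RENDER_HZ = 60   # the world renders at full rate
    HUD_HZ = 12      # the diegetic HUD refreshes far less often

    def run(render_world, render_hud, frames=600):
        # Toy loop: the HUD texture is re-drawn only every few frames,
        # which is what produces that laggy, "device in the world" feel.
        hud_texture = None
        last_hud_time = float("-inf")
        for frame in range(frames):
            now = frame / RENDER_HZ                 # simulated clock, in seconds
            if now - last_hud_time >= 1.0 / HUD_HZ:
                hud_texture = render_hud(now)       # cheap, low-resolution redraw
                last_hud_time = now
            render_world(now, hud_texture)          # full-rate world render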

However, putting all the “superficial” visual excellence aside, I was surprised to find aesthetic pleasure in a place I never would have thought of: the online multiplayer. I’m sure beauty isn’t necessarily the first concept that comes to mind when bringing up a modern military multiplayer game, but for me it was. The thing is, I had never been an online multiplayer gamer until recently, only a casual observer every now and then over the years, so my mental image of what an online multiplayer is and can be was built on visions of technical glitches and other such difficulties.

Soon I realized that my view was remarkably outdated.

I just couldn’t believe how flawlessly a modern online multiplayer works at its best, and it struck me especially when watching smoothly gliding helicopters in the sky, knowing that real people were operating them. For some reason, it was indeed the motion of hovering helicopters that awed me the most, and as always, the fascination stems from the perceived framework of limitations through which the piece is experienced. Apparently, the smoothly fluctuating motion of a helicopter breaks, or at least pushes, the boundaries of what is possible in my mind within the online framework, which makes it so beautiful to look at. In the end, it basically comes down to the logic of magic: everything that seemingly goes beyond one’s understanding of reality is fascinating and remarkable.

Oftentimes, to appreciate the beauty of a thing, one must understand its history, or in the case of the real-time image, the technological struggle behind it. Online multiplayer gaming has come a long way, and personally it was a fascinating experience to jump in at a point where things are finally starting to come together seamlessly and, yes, beautifully.

Next-gen titles such as The Division and Destiny are strong signals that the purely single-player experience is dying, and if the tech is there, as it seems to be, I’m all for it.

CGI is Dying and It’s OK

November 5, 2012

After recently watching The Expendables 2 (2012) and subsequently reading a stomach-wrenching making-of piece about its VFX production (hat tip to @osulop), I think I’m finally starting to be done with CGI (i.e., non-real-time computer graphics) as the kind of ubiquitous, uninspired visual filler found in contemporary mainstream live-action cinema.

For me, it’s particularly sad, since the most exhilarating moments I’ve had watching movies involve CGI in one way or another, Terminator 2 (1991) and Jurassic Park (1993) being probably the most profound examples. Jurassic Park in particular completely altered my paradigm of what cinema can do, and seeing a credible living, breathing dinosaur on the screen for the first time was something that will unlikely ever be topped. To me, the experience must have been in the same ballpark as that of those who saw moving images for the first time, or at least it felt like it.

Both Terminator 2 and Jurassic Park were at their core not only about telling a compelling story but, more importantly, about pushing the boundaries, showing the unshowable. Of course, the CGI sequences in, say, Terminator 2 look dated now and stick out like hell, but the effects still radiate ambition and the pure will to make an impact on the medium, which they undeniably ended up doing, to say the least.

However, the cost of producing decent CGI has fallen dramatically since the days of the aforementioned films, and consequently the quality has not been able to keep up with the ever-increasing quantity. What’s worse, to come back to The Expendables 2, CGI is now being (mis)used to cut costs (say that to an ILM worker of 1992) and to salvage ill-conceived live-action sequences. And that’s the use of CGI that frustrates me the most: something ordinary made with CGI only because it’s more convenient that way, not because it would otherwise be impossible.

Conversely, if we look at the very high end of the modern CGI spectrum, we do see ambition, medium-pushing, and other traits historically associated with CGI, but there aren’t many players left playing that game. Indeed, it’s almost poetic that James Cameron, who started the mainstream CGI revolution with movies like The Abyss (1989), is the one who has kept that frontier spirit alive in the recent past.

That being said, I think the grand story of blockbuster CGI is coming to an end little by little, and I believe it was indeed Cameron’s Avatar (2009) that ultimately made it difficult to break genuinely new technological ground within the realm of commercial CGI. I acknowledge that it’s always dangerous to say such a thing about a field so heavily involved with technology, but as David Cohen wrote in his Variety article, the demise of mainstream CGI in terms of artistic integrity and innovation is already evident and in full effect, and, I might add, the most interesting CGI can now be found in small-budget student projects all over Vimeo.

But I’m okay with that. I’ll continue to follow the non-real-time side of computer graphics, but I’m no longer excited about it per se, or about where it’s heading. Indeed, if I could have a sneak peek into the future of any human endeavor, it would be real-time imagery, no question about it, especially now that we are on the brink of a new hardware generation.

I can’t wait for the next-gen Gran Turismo, but I can live without Avatar 2.

Cool but Untruthful Story, Bro

October 2, 2012

When it comes to the traditional narrative arc consisting of the exposition, the complication, the climax, and the resolution, it really bears no relation to reality whatsoever. The reason we love that structure, though, is that it feeds the belief that in the end all the random occurrences in our lives make sense, in an “everything happens for a reason” way. The yearning for reason and meaning is so profoundly built into us that billion-dollar industries are based on that premise, most prominently mainstream cinema, literature, and drama television.

In reality, this narrative structure can only be found in fiction, and every piece of supposed non-fiction that adheres perfectly to that logic should be viewed with extreme suspicion. History has shown time after time that the truth and a really good story tend to be mutually exclusive, and the controversial cases of fabulists presented as truth-tellers, like James Frey or Mike Daisey, are telling of how upset people get when exceptionally compelling non-fiction turns out to be more or less fabricated.

As I have noted before, the medium of video games isn’t at its core a vehicle for traditional storytelling but rather, ideally speaking, a framework in which people can come up with their own little strings of events, like kids do with their toys. This should be pretty evident by now even to the most hardcore narrative apologists, who keep hoping that a Citizen Kane (1941) of video games will someday come along and legitimize the medium once and for all.

There is, of course, tremendous value beyond structured narrative, and, I would argue, the apparent inability to convey traditional stories is not a weakness of video games as a medium but an inevitable outcome of their core strength and substance. The thing is, in video games, especially ones like Grand Theft Auto IV (2008), the total and complete absurdity of life simply becomes more tangible than through any mainstream prewritten narrative.

Indeed, when one accidentally drives over people in GTA IV, it is an empty, cold, meaningless occurrence without any redeeming factor or impact on the larger picture. It just happens. And this kind of representation of such an event resonates far more with reality than, say, a mainstream movie where every little detail has to bear meaning and make sense. The movie Signs (2002), for instance, paints a picture in which all of life’s occurrences, even the most unfortunate ones, form one big jigsaw puzzle that only makes sense once the pieces come together. GTA IV shows us, however, that one wrong turn can result in the most nonsensical and meaningless (but sometimes hilarious) casualties, without any reason or redemption.

What makes video games less truthful is the fact that one can always start over after failing by, say, dying. Heavy Rain (2010) acknowledged that and was an attempt at making a narrative-based game without fail states or the need for saving. In fact, the game’s director David Cage explicitly advised everyone not to load a previous state even if events didn’t go the way the player wanted. Still, Heavy Rain was more a glorified choose-your-own-adventure book than the messianic, Oscar-winning interactive narrative some of us are still waiting for.

It’s weird that when it comes to representations, the strongest emotions are evoked not by truth but by fabrications. Movies, even so-called documentaries, are excellent at that; games, not so much. But, as I said, games can be truer to the real than movies or any other form of representation will ever be, and that’s what makes video games such a subversive medium.

All in None

September 16, 2012

As much as we’d like to believe otherwise, we are not that much, if any, smarter than the people who roamed this sphere before us, generally speaking. The thing is, every technological achievement we now manage to pull off rests on previous discoveries that we take for granted and consider self-evident, even though they took exceptionally creative minds to come up with in the first place. Technology-wise, we’d be rather helpless without our vast cultural heritage and the knowledge that has accumulated over centuries, or even millennia, and the old metaphor about “standing on the shoulders of giants” encapsulates the concept quite nicely.

That said, it’s difficult to imagine a more technologically involved medium than the real-time image. Granted, non-real-time computer graphics (CGI) takes an enormous amount of effort and expertise from multiple fields of knowledge, but I’d still assert that producing real-time imagery can be even more technologically demanding because of the additional dimension that must be taken into account: performance. People at Pixar can, in theory, tweak a single frame for weeks to make it look perfect without sacrificing anything but production time, whereas in games there is only a fraction of a second (usually 1/60) to generate one frame, although the two processes aren’t exactly comparable. In addition, in the world of real-time, there’s no luxury of setting up a so-called render farm of hundreds of nodes to distribute the rendering burden. We’ll see, though, whether cloud gaming will change that at some point in the future.
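
To put that budget into numbers, here’s a back-of-the-envelope sketch, nothing more:

    # Per-frame time budget at common target frame rates; at 60 Hz,
    # simulation, AI, and rendering all share roughly 16.7 milliseconds.
    for hz in (30, 60, 120):
        print(f"{hz:>3} Hz -> {1000.0 / hz:.2f} ms per frame")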

So the art of the real-time image is first and foremost an art of understanding the hardware that is the very enabler of the image being produced and interacted with. Secondly, the art of the real-time image rests on the concept of simulation, or more specifically, on the expertise to understand and produce the algorithms that connect the various simulations to the real world. This is where things get complicated.

It indeed takes an enormous amount of skill and expertise from a number of people to produce a modern AAA game, and increasingly so as games become more and more sophisticated. In retrospect, it has actually been fascinating to witness completely new areas emerge in gaming that were completely absent from the real-time discourse before, like the simulation of physics circa the early 2000s. And each time the medium refines itself and introduces a new area of simulation, it gives birth to a whole new discipline to which a host of talented people will commit their professional and academic lives.

Consequently, a single developer team – let alone one lone man – cannot excel and push boundaries in every area of the medium anymore, but has to pick its battles instead, and this goes for the bigger studios too. In my mind, Doom (1993) was one of the last “perfect games” that recalibrated the expectations of what real-time imagery can do in almost every respect, and the phenomenal success and cultural impact of the original Doom derive essentially from that fact.

What this means in the grand scheme of things is that innovations and breakthroughs spread unevenly across games, making some do one or two things medium-pushingly well while lagging behind in others. Consider, for instance, how embarrassingly bad the simulation of physics is in one of the biggest franchises ever, Call of Duty, which at the same time keeps pushing the (ambivalent) art of the scripted spectacle forward release after release. Or how Gran Turismo 5 (2010) excels in the field of simulated light like no other, but handles vehicular damage just about as badly as it gets.

The very nature of technology dictates that once something enters the realm of possibility, it becomes a default soon after the novelty has worn off. In other words, we tend to take for granted things that didn’t exist a moment ago, which, of course, applies beyond gaming and has to do with the human condition in general.

In light of all this, one of the better things that has ever happened to the medium is the middleware industry, which commodifies innovation and liberates developers from reinventing the wheel over and over again. Still, it frustrates me to see one game do some particular thing extraordinarily well, like, say, the brilliant birds in Half-Life 2 (2004), and to realize that said technology may never end up in any future title. Ever.

Thus, the ultimate fantasy of a gamer is often to imagine an all-in-one title that would cherry-pick the most advanced features from across the medium and put them together, but that is indeed a mere daydream. Unfortunately, the gaming industry just isn’t some communist utopia where everyone works to benefit the medium at large, even if that would definitely sound great.

So, the issue is that, yes, knowledge and innovation do accumulate and spread across games over time, but not fast and systematically enough for my liking.

Me and a Gun

August 27, 2012

The reality is that the biggest franchises in the current video game space are so-called first-person shooters, as far as genre goes. The franchise that most likely comes to everyone’s mind is the Call of Duty series, which, due to its popularity, is nowadays commonly labeled the lowest common denominator of interactive entertainment. Personally, I try my best to avoid seeing artifacts through their social stigmas, and I think it’s sometimes quite ridiculous how far some people go to make themselves appear superior by bashing popular pieces of entertainment or art.

Anyhow, it’s no coincidence that top-selling video games are more often than not about shooting people with firearms: people generally like shooting. And one doesn’t need real-life experience with an actual combat rifle to recognize that holding and using one is, in a way, the ultimate power trip, as in utter dominance over others. Moreover, the fact that we have this established, massively popular genre known as the first-person shooter is telling: the act of shooting is a central theme in first-person games in general.

In fact, there really aren’t any other major genres with the first-person prefix, even though perhaps there should be. It seems that for a game to qualify as first-person, it needs to include not only a first-person view (most simulators are depicted from the first person) but also a certain level of freedom for the player to wander around the 3D space as a person. That, I guess, is why Doom (1993) was considered a first-person game but the Microsoft Flight Simulators aren’t. Many times I have wished for a flight simulator or a racing game that meaningfully incorporated such freedom into the gameplay, but it’s always about guns, guns, guns.

So, shooting people is such a profound way of interacting with the virtual from the first-person point of view that it feels strange and out of place when an AAA game with said perspective comes along that barely involves weaponry, like Mirror’s Edge (2008). Mirror’s Edge was based on finding the right path over the obstacles, keeping the momentum going, and avoiding enemy fire at the same time. Occasionally, however, the player got hold of a gun and could fire back, which made the shooting feel that much more special and meaningful, if you will. Now the weapon wasn’t a fundamental part of the player character, as in most first-person games, but a luxurious object that one almost cherished and that radiated genuine authority.

This all comes down to the fact that I find it highly fascinating when a first-person game (read: shooter) introduces functionality that isn’t directly connected to its core ethos, a fascination that dates back to Duke Nukem 3D (1996), which famously contained all kinds of extra stuff to play with. What’s amusing, then, is that in the case of Mirror’s Edge, that functionality was shooting itself. Also, I remember how exciting it was to be able to drive civilian cars in the original Operation Flashpoint (2001); it had little to do with the actual militaristic gameplay, but it transformed the game as a whole into something much cooler, even though the game was quite cool to begin with.

I’m not saying shooting isn’t enough for a game like Call of Duty. I’m saying first-person games should aim a bit higher than being mere shooters in terms of functionality. Crysis (2007) was an ambitious endeavor in that direction: the player could pick up and hold almost any object, not just a gun (the system is unparalleled even today), and drive around freely in vehicles both military and civilian.

The first-person view is not an artistic statement but the most natural and obvious way of portraying the virtual, and it frustrates me that the most prevalent first-person genre is tagged with as specific and limiting a term as shooter. At the end of the day, I guess, I want genuine first-person Grand Theft Auto-esque games that deliver on-par experiences on all fronts. Please.

Work, Play in Real-time

August 13, 2012

It’s pretty much a given that life consists not only of the everyday, mundane tasks and goals that make our day-to-day living possible in terms of pure existence, but of aspirations and ambitions of a higher order as well. As clichéd as it may sound, I believe it’s the latter form of endeavor that makes us human: that our existence can rise above mere survival and procreation, enabling our very being to connect to something universal, beautiful, or otherwise transcendental.

Sure, activities like self-expression or learning new things can be seen as a sort of survival of the mind, although the only way we can die from a lack of them is from the inside. So as long as our basic needs are met, we tend to require more sophisticated goals to strive toward, goals that cater to the creative and intellectual forms of our being.

One of the central goals of my earlier creative life was to learn the art of so-called 3D imaging, which was gaining serious popularity in the early 90s. Seeing the cool-looking (if poorly produced by today’s standards) music videos of the time that used 3D animation as a visual element, such as Swamp Thing by The Grid, really pushed me to pursue 3D graphics and leave pixel-based animation aside. I just had to find a way into that place where I could create computer-generated images like the ones that were so fascinating to look at on TV.

It was somewhere in the latter part of the 90s that I finally cracked the invisible wall between me and 3D imaging by getting hold of, and learning, 3D Studio 4 by Autodesk. 3D Studio 4 was a rather user-friendly 3D software for its time, relatively speaking, but looking at it now, 15 years later, it’s striking how dull, limited, stiff, and uninspiring the work environment that was supposed to feed the creative process really was. Everything was divided into separate modes and sub-programs that forced the user to constantly jump between them. Furthermore, the hardware 3D Studio 4 ran on was so sluggish that it struggled to even keep up with the wireframe rendering, making it sometimes quite frustrating to carry out even a slight adjustment to the camera or the geometry.

However, when 3D accelerator cards finally became everyday items, the whole 3D game, if one pardons the pun, changed in more ways than one. Using a 3D application like 3ds Max that took advantage of 3D acceleration was a completely different experience. The engineer-like work environment had turned into a sandbox that was a delight merely to play with: spinning the camera around a cube in 3D space at a buttery-smooth 60 frames per second only because one could, and because it looked so, so cool.

The creative process comes down to iterations, iterations, and iterations. So when everything happens in real-time and at a high frame rate, the speed at which new iterations can be made is limited only by the user. Every fraction of a second the user has to wait for the machine to comply with the input pulls him or her further away from the flow, which is why I generally prefer working in an environment like 3ds Max over After Effects whenever possible, even in the simplest animation cases.
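
A crude way to put numbers on that; the five seconds of “thinking time” between tweaks is purely my own assumption:

    # Iterations per hour as a function of the tool's response latency,
    # assuming the user needs ~5 s of thinking between tweaks.
    THINK_S = 5.0
    for latency_s in (0.0, 0.5, 2.0, 10.0):
        per_hour = 3600.0 / (THINK_S + latency_s)
        print(f"{latency_s:>4.1f} s wait -> {per_hour:4.0f} iterations/hour")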

To me, playing around in 3ds Max is in a way the purest real-time image experience. There’s no ludus controlling and limiting the play, only one’s imagination and creative skills. The act of 3D modeling, for instance, can be as immersive and captivating an experience as playing a high-end video game, and Minecraft (2011) proves, if anything does, that creativity and playfulness can be fused together quite successfully.

One Man Band

July 14, 2012

In retrospect, it seems unbelievable that there was a time when one man, just one, could produce an AAA game that not only took the hardware to its limits but delivered an intransigent artistic vision as well. The epitomic example in my mind is Andrew Braybrook, who designed and produced some of the brightest Commodore 64 hits, now considered milestones in home computing, namely Paradroid (1985) and Uridium (1986).

Even though I love both Paradroid and Uridium, I have a special relationship with the latter, and it still amazes me how well it took advantage of the C64’s hardware and even some of its disadvantages, like the horizontally doubled pixels. Uridium was indeed a looker in many regards, not least for its silky-smooth 50 Hz scrolling that put some arcade games of the time to shame. Also, the multi-phased ship explosion looked nothing like those in any game I had seen so far. Uridium was a visually perfect C64 game, if there is such a thing as perfect.

Uridium is one of those rare, magical occurrences where the right person collided with the right technology at the right time. Braybrook knew the C64 inside out and had a vision, along with the skills and determination to carry it through, which resulted in a game that basically blew the competition on that platform out of the water, at least as far as pure visuals go. Unfortunately, Braybrook’s success stayed on the C64 and didn’t translate to the more advanced systems that followed, like the Amiga 500, which is often the case with success stories built on right timing and profound knowledge of the right technology. Uridium 2, released in 1993 on the Amiga 500, was indeed just another shooter that barely left a mark on history.

Nevertheless, I can only imagine the power trip Braybrook must have been on when designing and coding the original Uridium: one man making such a big contribution to the gaming community and to real-time imagery at large. That, as said, is something that will most likely never happen again on any platform. Not even in the so-called indie scene that has found a new foothold in downloadable marketplaces such as the App Store, Xbox Live Arcade, and Steam.

Indeed, small one- or two-man operations today simply can’t push the medium forward through technology on multiple fronts the way id Software, Epic, or Crytek do. Instead, they can do it through a distinct visual style that makes it possible to produce, in a sense, an AAA game within that particular artistic framework. Consider some of the most celebrated indie games of late, such as Limbo (2010), Superbrothers: Sword & Sworcery EP (2011), or Fez (2012): what they all share is some novel, breakthrough visual paradigm that is easy on the hardware but at the same time pushes the medium artistically to its limits.

Small developers have to pick their battles if they plan to go up against the big boys; there’s no question about it. With Uridium, Andrew Braybrook didn’t have to. He was the big boy back then.