Archive for the ‘Medium’ Category

Back Off

December 2, 2010

I wrote earlier about the problems that freedom of choice poses for a real-time medium, and how every video game has a sweet spot in which it looks optimal. Kazunori Yamauchi, CEO of Polyphony Digital, the developer of the Gran Turismo series, must have read that piece, since in the newest installment of the series, Gran Turismo 5, the game arbitrarily and without any subtlety takes a portion of that freedom away from the player – a decision motivated by vanity and fear.

See, in certain conditions, GT 5 politely instructs the player to move away from the car before “taking a shot” in the otherwise gorgeous, print-quality-image-producing photo mode. GT 5 acts like a bodyguard pushing paparazzi away from a declining star who just couldn’t fade away with dignity.

But why on earth would GT 5 pull a stunt like that?

One part of the reason definitely stems from the crazy “over a thousand cars” premise, which inevitably led to the standard–premium division, making the majority of the car roster look like something out of the PlayStation 2 era, or even a generation earlier. Yes, it’s indeed the standard cars that GT 5 doesn’t allow people to photograph up close, which amusingly gives away the fact that even Yamauchi himself acknowledged there was something fundamentally wrong with putting low-definition assets in the highest-profile PlayStation 3 game so far.

The second part of the reason can be found in the mere existence of the Internet. Without the Internet, Yamauchi wouldn’t have had any problem whatsoever with people taking unflattering screenshots of his game for their own enjoyment. But thanks to the Internet, no stone is left unturned (and unshared, in the age of the Net) once the hardcore audience starts dissecting every possible flaw a much-hyped game may or may not contain. Kill your idols, and so forth.

It will be interesting to see whether this kind of screenshot-limiting becomes more of a standard in the future, with developers growing ever more paranoid about how their games appear in screenshots posted on hardcore forums like NeoGAF. It’s common knowledge that even today the gaming press receives strict guidelines from publishers if some poor soul wants to use manually taken screenshots as illustrations, and my understanding is that for that very reason reviewers often end up using glossy PR shots purely out of convenience.

Of course, on the PC side of gaming, such limiting endeavors would not get far, but as gaming by and large keeps drifting towards walled-garden approaches, like gaming consoles and app stores, developer-dictated screenshotting could indeed become a valid scenario somewhere down the line.

But let’s hope not.

*Playing* With Games

November 25, 2010

One quite popular strategy for criticizing a video game is to label it a glorified tech demo of some sort. I get that sentiment, but still, it comes across as a bit funny to me, since I actually love tech demos, or at least the core idea of them.

The thing is, tech demos often condense and encapsulate the essence that ultimately makes the whole medium tick and stand out from traditional media. It really is the technology that remains when everything else is stripped away, and it’s nothing short of exhilarating to watch every time a developer brings some novel, or in the best-case scenario groundbreaking, technology to the table in the form of a tech demo.

But before we go any further, we have to look at how Roger Caillois divides the act of playing into two separate concepts, which he calls ludus and paidea. Ludus refers to the type of play that involves rules and goals, whereas paidea is what a small child performs spontaneously, without any specific objectives or rules in mind.

So tech demos by definition lack the element of ludus (and story) altogether, which supposedly makes them boring and pretty much pointless endeavors. However, the mode in which I have played video games the most throughout my life actually resembles paidea more than ludus, which in a way makes my relationship to video games, to an extent, like my relationship to tech demos. I play with them like a child plays with his or her toys.

Yes, I have no problem admitting I have spent hours playing only with the shadows in Crysis, rotating 3D models in Starglider 2, and messing around with the Euphoria engine in Grand Theft Auto IV. Of course, it takes some level of childishness to allow oneself to behave that way, but isn’t that what video games are all about?

End of Story

October 19, 2010

I remember the time when gaming journalists started their reviews by summarizing the background story, while still being aware, to some extent, of the ludicrous nature of such a concept as stories in video games. Perhaps people back then knew better (or were just intellectually more honest): it really didn’t make a whole lot of difference what motivation a given sprite was supposed to have for traveling left or right, killing everything.

But then someone had the idea that video games as a medium must be taken more “seriously”, just like movies are, and apparently investing in story and characters is seen as the ticket to that. In effect, pressure emerged for developers to incorporate more thought-out characters, compelling stories, and engaging plots into their games, with varying success.

The problem, however, was and still is that a developer simply can’t force the player to choose according to the developer’s will, and because stories usually consist of characters making decisions about their lives, an interactive (scripted) story is nothing but an oxymoron. And because the very core of video games has everything to do with interactivity, there really is no way around it.

Gonzalo Frasca lays down a compelling argument that video games are based on a semiotic structure known as simulation,

which is a way of portraying reality that essentially differs from narrative. […] Simulation does not simply represent objects and systems, but it also models their behaviors.

I believe Frasca’s stance is key to understanding why scripted stories and pre-designed characters are in conflict with the very nature of video games. The thing is, video games don’t just show you stuff, they let you do stuff, and video games should therefore first and foremost provide the best possible means and environments for doing so through simulation. And what’s cool about simulation is that it allows people to do and experiment with things safely, without ethical, economic, or any other kind of repercussions whatsoever, which ultimately makes video games unique among the visual mediums.

Indeed, I happily admit to playing Grand Theft Auto IV from start to finish with only a vague concept of the overall storyline, yet enjoying the gaming experience nevertheless. GTA IV is a game with a highly simulated environment, which enables it to generate thousands of micro-stories on the fly when one plays outside the written storyline. It’s almost needless to say that those unique “happy accidents” that take place when wandering off the designated trail are usually the most exciting and hilarious events of the whole game, and the situations where the medium manifests itself most genuinely – at least in my mind.

I really can’t help but feel uneasy when someone speaks of video games as a platform for telling stories. Playing a video game for its story is like watching a movie for its editing: both are mere structural devices for something more profound, and in themselves quite meaningless. In movies, editing serves the story, but in video games, it’s the simulations (shooting, driving, fighting, flying, whatever) that count at the end of the day, not the story that excuses them.

Okay, Heavy Rain was a great piece of entertainment that emphasized a compelling scripted story, but wasn’t Heavy Rain more like a glorified “choose your own adventure” book than an actual video game? Were there any non-scripted events to be found in the game at all? No?

This may seem harsh, but in my mind all this story-and-character nonsense is basically conducted in order to impress the outsiders who don’t and won’t get the real-time medium in the first place.

So, who are those people whose acceptance we seek so desperately?

The Problem Is Freedom

September 23, 2010

In an earlier post, The Problem Is Choice, I contemplated the concept of the sweet spot, and how the player can choose to experience the real-time image in a way the developer, the author, would never have wanted the game to be experienced. This freedom of choice is exactly the reason why Roger Ebert says that video games can never be art, and I partly agree with him; art is too limiting and degrading a label for the real-time image, which is so much more.

As a continuation of that post, I started to think about how the amount of spatial freedom the player has in any given game relates to the visual fidelity of that game. It seems that a high level of freedom results in either low fidelity all over, like in Minecraft, or highly uneven fidelity, like in Microsoft’s Flight Simulator X. In FS X the planes and cockpits are top-notch, but the scenery at large is dull and mostly repeats itself, excluding particular landmarks, like the famous airports or certain parts of iconic cities. FS X is an example of a game with ultimate freedom that comes at the price of ultimate incoherence.

If we then look at the other end of the spectrum, like Street Fighter IV, the fidelity in SF IV is as high as it can get, universally, thanks to the extremely limited spatial freedom of the player (and thus of “the camera”). In contrast to FS X, it’s hard to take a bad screenshot of SF IV, even if you tried.

In a way, it’s sad that the childhood fantasy of “a game where you can go anywhere in the world and do anything” isn’t going to happen, not in our lifetime at least. Of course, one can create infinite worlds using procedural techniques, but procedural is never equal to what is made by design. I’m really not a big believer in procedurally generated content, though in some limited cases it has its place, for sure.

As our friends across the pond like to say, freedom is not free.

Are We There Yet?

September 19, 2010

Drawing with the Commodore 64’s classic Koala Painter wasn’t the easiest task: a joystick as the input device, lots of crashes, pixels as big as Lego bricks, and, not to mention, a highly limited color palette. In fact, everything else was somewhat tolerable and forgivable, even the crashing, if you just knew which procedures to avoid, but you simply couldn’t get around the small number of colors available. And because the resolution was equally poor, dithering techniques were essentially ruled out from the get-go due to the hideous results.

Luckily, the color palette has since steadily increased, from the C64’s 16 colors to modern hardware’s millions of colors. What this transition has caused, by and large, is that the number of colors has become basically a non-issue in contemporary mainstream real-time imagery discourse, as if the whole project of color were concluded. And it essentially is, since the 16.7-million-color palette has been a consumer standard for years now, and is obviously “good enough” for the majority of people.
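As an aside, those two figures, 16 and 16.7 million, come from the same simple arithmetic on color depth; here is a minimal Python sketch (the function name is my own) of how palette size grows with bits per pixel:

```python
# Palette size doubles with every added bit of color depth: 2 ** bits colors.
# The C64's fixed palette of 16 colors corresponds to 4 bits; "truecolor"
# spends 8 bits on each of the red, green, and blue channels, 24 in total.
def palette_size(bits_per_pixel: int) -> int:
    return 2 ** bits_per_pixel

print(palette_size(4))   # C64 era: 16 colors
print(palette_size(24))  # truecolor: 16777216 colors, the "16.7 million"
```

The exponential growth is also why the discussion ended so abruptly: each added bit doubled the palette, and 24 bits overshot what the eye can distinguish.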

So I started to think about what else has reached its evolutionary end in the realm of real-time imagery, and one instance I could think of was screen resolution. I’m highly skeptical that there will be demand for resolutions higher than 2560 × 1440 (the resolution of a typical 27” display for professional use) in the near future, since even 1920 × 1080 (Full HD) has been something of a gold standard for quite some time. And bigger resolutions would entail bigger displays, which is hard to imagine happening in the home environment for logistical reasons alone, given the enormous physical size of today’s flat-screen TVs.

Ok, I really didn’t see the iPhone 4’s Retina Display coming, but I guess only a few of us did. The Retina Display’s resolution is far beyond any reasonable need, so it’s rather safe to declare that pixel density has now officially hit the ceiling, or at least is about to in the very near future.

As said, color palette and resolution haven’t been issues for a while now, which raises the question: when do we have, for instance, enough onscreen polygons? Or when is lighting “good enough”? Perhaps I’m comparing apples to oranges here, since palette and resolution depend more directly on the technical features of hardware than the number of polygons or the quality of shadows does. Still, it stands to reason that there will be a day when we are no longer discussing polygons or shadows per se, but solely the artistic use of them. The technical discourse becomes obsolete.

In a way, I really don’t want to see that day, since the chase is always better than the catch.

Really Smooth

September 13, 2010

If someone came up to me and asked what the single most important aspect of the real-time image is from an aesthetic point of view, I would say frame-rate. Without blinking an eye. It’s so important that I’ve uninstalled a game solely because it was arbitrarily locked at 30 frames per second, a decision that made little sense to me.

Interestingly, frame-rate hasn’t always been an issue in video games, and I believe the discussion really fired up with the introduction of the simulated z-axis, that is, 3D graphics. The thing was, even the most rudimentary 3D was such a struggle for the early days’ hardware, the likes of the Commodore 64 and Amiga 500, that it took a relatively long time before one could see fluent 3D in a home environment without all the rendering errors and hiccups. And I’m not talking solely about polygons here, but about other methods of depicting the z-axis, too.

That being said, one can only imagine the pure bliss of seeing completely smooth frame-rates in arcade games like OutRun (1986) or Chase H.Q. (1988), back then with, I’d say, at least 40 frames per second of sprite scaling. As a matter of fact, it was exactly those games that taught me the value of a high and steady frame-rate, and how mesmerizing it really can be to see an image simulate the z-axis so smoothly in real time.

So, I believe it was not until the 3D hardware acceleration revolution of the late 90s that the paradigm for frame-rate truly changed, in the sense that people started to expect smooth frame-rates from games in a home environment. Seeing Jedi Knight: Dark Forces II (1997) running on a 3dfx Voodoo card in our home PC for the first time, with no stuttering whatsoever, is something I will probably never forget. In fact, most of the jaw-dropping experiences in general have been related to a high frame-rate, in one way or another.

Now that frame-rate in real-time imagery is settling at about 60 frames per second, it’s funny how old mediums, like film and television, are still pushing only 24-30 frames onto the screen per second. It baffles me why the film industry is concentrating on the obnoxious 3D technology while totally ignoring the very problem (besides the mandatory and horrible glasses) that makes contemporary stereoscopic 3D fail: the low frame-rate. James Cameron acknowledges that, but apparently it is nowhere near enough.
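The gap between those figures is easiest to see as a per-frame time budget, the time each frame stays on screen; a quick Python sketch (function name mine):

```python
# Frame budget: milliseconds available per frame at a given frame-rate.
def frame_budget_ms(fps: float) -> float:
    return 1000.0 / fps

print(round(frame_budget_ms(60), 1))  # 16.7 ms per frame (modern games)
print(round(frame_budget_ms(30), 1))  # 33.3 ms per frame (TV, locked games)
print(round(frame_budget_ms(24), 1))  # 41.7 ms per frame (film)
```

At 24 frames per second, each frame lingers two and a half times longer than at 60, which is exactly the judder that stereoscopic 3D makes so painfully visible.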

All in all, frame-rate is at the core of the real-time image, and if it fails to deliver, everything else falls apart, in my mind. That is especially true in modern games. There really are no excuses for a bad frame-rate, at least none that I’d be okay with.

Born In 1980

September 3, 2010

It is fair to say that the best thing the 80s ever had to offer people was the possibility of being born in them. I’m not saying the 80s were a particularly bad decade, but they definitely had their dark moments, which we don’t have to go into in detail.

From the perspective of real-time imagery, the 80s, and especially the early 80s, really were the best era to be born in, if you just happened to be interested in computers and such. And I, for one, learned quickly that I was.

To be born in 1980 means that I have an optimal vantage point, one that covers the most crucial developments, the Cambrian Explosion, in the history of real-time imagery, starting from the release of the Commodore 64 in 1982. I consider the C64 a tipping point, after which the strive for graphical excellence in the home environment really started happening. “But you were only two years old when the C64 was released.” Yes, but our family didn’t get a C64 until 1984, so the decent software was already there, just waiting to be played with. So I had lived only four years without any exposure to real-time imagery whatsoever, four years of which I can’t remember a thing. But I remember clearly the first time I saw Pitfall II: Lost Caverns running on the C64 at Christmas 1984.

This means that the evolution of real-time imagery is an integral part of my growing up, and of my very being. I have the privilege of watching the real-time medium evolve up close, of which future researchers can’t be anything but envious. It’s like being an Egyptian watching the pyramids being built.

The 80s weren’t such a bad place to be after all.