Archive for the ‘Medium’ Category

Sophie’s Choice

April 27, 2012

In his 2004 book, The Paradox of Choice – Why More Is Less, American psychologist Barry Schwartz argued that, in general, the more choice we as consumers are given, the less happy we become. It turns out the psychological burden that comes with an array of choices actually outweighs the hypothetical gain of the optimal decision, leading to anxiety and distress. Making choices, it seems, makes us by and large miserable.

What bothers us in choices are the trade-offs and, consequently, the feelings of loss we have to deal with when evaluating the options at hand. Yes, for some reason we tend to think, as Schwartz writes, that we lose all the alternatives as a whole when we have to pick just one, which is obviously an erroneous line of thinking. Still, it hurts to make a decision whenever there’s plenty to choose from, and one only has to observe a kid at McDonald’s choosing his or her Happy Meal toy to see how painful it can sometimes be when there are mutually exclusive but equally enchanting options on the table.

The McDonald’s kid leads us conveniently to the agony most PC gamers face every time they launch a freshly installed game: the host of sliders and drop-down menus found in the graphics options. Of course, the graphics options are a non-issue for players who happen to possess a top-of-the-line PC on which everything can be maxed out, quite joyfully so, I’d assume. But for the rest of us, the graphics sliders can be a serious source of misery and distress.

The agonizing trade-off here is the classic performance vs. image quality dichotomy, something that has defined the real-time image throughout its existence. On a given piece of hardware, we simply can’t have maximum performance and maximum image quality at the same time, both notions being, of course, theoretical ideals in and of themselves. Once there’s a single pixel drawn on the screen, we are trading performance for image quality.

The key question, then, arises: to what extent are we willing to sacrifice performance for image quality? The one thing I love about consoles is that, due to their fixed hardware specs, it’s the developers who solve the performance/image-quality equation for the player, which a) makes console games more auteur, and b) means the console gaming experience is identical throughout the platform, so no one feels left out.

However, as said, the PC player with lower-end hardware isn’t that fortunate. Adjusting the graphics settings to “Low” or “Medium” isn’t the source of anxiety per se, but rather the sheer awareness that there are also “High” and “Very High”, not to mention “Ultra”, settings available yet unattainable performance-wise. It’s indeed amusing, and sad, how the high-end settings can make the current settings seem worse by their mere existence. What’s worse, the “Low” and “High” settings bear no absolute value whatsoever. I believe people would’ve been much happier with Crysis (2007) if the “Medium” setting had simply been renamed “High”, just like at Starbucks “Small” is called “Tall”, I hear. It’s all relative.

In the end, the true agony of graphics options comes down to the heartbreaking optimization process in which the player struggles to find some magical combination among tens of sliders that affects performance the least while still keeping the image quality acceptable. Personally, I’m all about performance, so every frame below 60 per second is a compromise in my eyes, which keeps me jumping between the gameplay and the settings for quite some time.

There are indeed usually tens of choices and trade-offs to be made before the game can really start for the many playing on dated PC hardware. Don’t get me wrong, though: I’m all for options, tweaking, fine-tuning, and all that. Yet the freedom of choice tends to come with a considerable psychological price that console gamers are generally free of.

A price that keeps us mulling over what we can’t have instead of enjoying what we do have.

Speaking of Engine

March 30, 2012

One way of comprehending the software side of the real-time image is to divide it roughly into two basic components or layers. The first layer can be seen as consisting of art assets, such as 3D models, textures, sprites, and so forth, that populate the virtual space. The second, perhaps more fundamental, or even metaphysical, layer covers the collection of algorithms, popularly referred to as the graphics/game engine, which deals with the simulation of space, light, and motion, and in the end puts everything together. Put differently, the former layer deals with the static, and the latter with the dynamic.

The above dualism actually comes down to the earlier post regarding the dichotomy of algorithm vs. design, which was about the idea that algorithms cannot produce genuine design structures in and of themselves, only ones dictated by a set of rules. For instance, a physics engine does produce an unlimited number of different outcomes, which is the beauty of it, but they are all deterministic and thus predictable in nature. The same goes for everything else produced algorithmically, like shading, perspective, post-processing, or even fractals. So, sure, an engine can provide a solid foundation upon which to build a modern high-end video game, but overplaying a specific engine in marketing as a certificate of quality is usually just that: marketing.
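To make the determinism point concrete, here’s a minimal sketch (a toy integrator of my own, not any real engine’s code): a physics step produces endless different outcomes depending on its inputs, yet identical inputs always yield the identical trajectory.

```python
def simulate(pos, vel, steps, dt=1.0 / 60.0, gravity=-9.81):
    """A toy physics loop: integrate velocity and position under
    gravity. Run it twice with the same inputs and the result is
    bit-for-bit the same -- deterministic, hence predictable."""
    for _ in range(steps):
        vel += gravity * dt
        pos += vel * dt
    return pos, vel

# Two runs from the same state always agree:
# simulate(10.0, 0.0, 60) == simulate(10.0, 0.0, 60)
```

The variety a physics engine offers comes entirely from varying the inputs, never from the algorithm itself deviating from its rules.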

Still, I believe the relative importance of engines in general has increased exponentially over time as the technology behind games has become more sophisticated. If we look at the dawn of the real-time image, such as Pong (1972), there wasn’t much design or algorithmic complexity involved. It was the era of the Commodore 64 and the like when we really started to see actual art direction emerging in games, such as Andrew Braybrook’s Paradroid (1985) and Uridium (1986), both of which had a distinct visual style to them.

However, if we scrutinize said games merely through still images, the engine being used really contributed nothing to the ultimate look of the games. The art assets, such as sprites and background elements, appeared on the screen just as they were initially designed, pixel by pixel, in a paint program or what have you. Here, the visual impact of the engine came into play only after a piece like Uridium started to move and became dynamic. And, as said, in the end it’s the engine that makes things dynamic in the realm of real-time imagery, which in the C64’s case meant basically little more than moving art assets along the x- and y-axes.

In the C64 era no one talked about graphics engines. It was only after the art assets started to traverse the z-axis on a massive scale in games like Wolfenstein 3D (1992) and Doom (1993) that the serious engine discourse started to take place, I believe. Now the engine did contribute fundamentally to the overall outcome of the real-time image, which came across even in still images. Indeed, art assets were no longer displayed on the screen as-is, but went through algorithmic processes, like the simulation of perspective (i.e. space) and/or illumination, before hitting the player’s retina.

Suddenly marketing departments and people in general started to evangelize about the Doom engine, the Build engine, and not least the Quake engine(s), which led to a plethora of derivative games coming into existence through engine licensing. The more sophisticated the technology behind the games became, the more it made sense to license it instead of developing one’s own from the ground up, it seemed.

Yet the most relevant developers today generally don’t use off-the-shelf graphics engines, but proprietary engines instead, or at least heavily modified licensed ones. I guess it’s fitting here to adapt the famous saying of Alan Kay: people who are really serious about their games should make their own engine.

Except for the physics engine, that is. There’s still no better physics engine than Euphoria on the market.


March 19, 2012

Today I came across this image on the Internet, one that reflects the popular myth of graphics as a trivial aspect of gaming, which is, of course, wrong and dishonest on so many levels.

So, I prefer my edited version instead:

One can ask oneself which element has seen more evolution over the decades: the visuals or the stories?

The Princess is still in another castle.

CGA Hell

March 7, 2012

Before the millions of colors today’s hardware is able to push on the screen, we, as consumers of real-time imagery, have had to put up with a number of far more modest color configurations, starting all the way from a palette of two distinct colors.

It seems that the more limited a color palette is, the more recognizable the look and feel it tends to lend a given image. For instance, the Commodore 64’s 16-color palette is so iconic that I, for one, can quite easily single out C64 screenshots from those of other platforms with similar specs. And besides playing games on the system, the C64 palette burned somewhat permanently into my mind through extensive use of paint and animation software, which is what made me realize how restricted the 16-color palette really was.

Little did I know, though, that the 16 colors of my C64 were a luxury in comparison to the palettes found in the PCs of that era. Back then PCs meant strictly business, so they consequently lacked everything that had even remotely to do with producing aesthetic pleasure, both visually and aurally.

Personally, the PC palette I found most appalling at the time was the 4-color CGA variant that consisted of cyan and magenta as primary colors, in addition to black and white. Games using the CGA palette looked almost insultingly horrible, and combined with the screeching sound of the PC speaker, it now seems like a miracle that people, me included, willingly played such games in the first place.
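For the curious, forcing arbitrary RGB art down to that palette is just a nearest-color search. The sketch below is illustrative only; the RGB values are common approximations of the cyan/magenta CGA palette, not taken from any particular game or adapter.

```python
# Approximate RGB values for the 4-color CGA palette described above.
CGA_PALETTE = [
    (0x00, 0x00, 0x00),  # black
    (0x55, 0xFF, 0xFF),  # cyan
    (0xFF, 0x55, 0xFF),  # magenta
    (0xFF, 0xFF, 0xFF),  # white
]

def nearest_cga(pixel):
    """Map one RGB tuple to the nearest palette color
    by squared Euclidean distance."""
    return min(CGA_PALETTE,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, c)))

def quantize(image):
    """Quantize a 2D list of RGB tuples to the 4-color palette."""
    return [[nearest_cga(p) for p in row] for row in image]
```

Every shade in the source image collapses onto one of four colors, which is exactly why everything rendered in this mode shares that unmistakable cyan-and-magenta look.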

So basically my animation is a study, if you will, of something I wrote about a while ago, that is, using medium-specific artifacts as a means of artistic expression. The animation in question adheres not only to the now-obsolete CGA color palette, but to other technical characteristics and limitations (listed under Content) of that era as well.

On top of that, the animation utilizes the idea of an additional medium through which the imagery is presented, which renders, as I wrote, the content in some cases more authentic and credible. In this one, I decided to simulate a CRT monitor whose parameters are broken down under Simulated Screen. To me, the CRT look just fits perfectly with low-definition computer imagery, which is why I mostly prefer using a scanline filter when playing older games on a TFT screen.
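The simplest form of such a scanline filter just darkens every other pixel row, mimicking the gaps between a CRT’s scanlines. The sketch below is a bare-bones illustration of the idea, not the filter my animation actually uses.

```python
def scanline_filter(image, darken=0.5):
    """Darken every other row of a 2D list of RGB tuples to
    mimic CRT scanlines; `darken` scales the odd rows."""
    out = []
    for y, row in enumerate(image):
        if y % 2 == 0:
            out.append(list(row))  # even rows pass through untouched
        else:
            out.append([tuple(int(ch * darken) for ch in px) for px in row])
    return out
```

Real CRT simulations pile on more effects (phosphor glow, curvature, bloom), but even this one-liner of an idea goes a long way toward making low-definition imagery feel at home on a flat panel.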

And finally there is the Actual Screen on which the imagery is displayed to the viewer, which is, of course, beyond artistic control. For all I know, someone could be watching my animation on a real CRT monitor, which would be an interesting setting considering the animation attempts to simulate one.

The end goal was to create an exceedingly “mediumized” piece of animation, and as such, I’d consider it a reasonable success. And now, in 2012, I find the three-decade-old CGA palette actually not that appalling anymore when using it by choice, not because the technology says so.

Necessary Evil

February 20, 2012

Every creative person can readily affirm that a work of art is only rarely an exact manifestation of the author’s initial vision, but oftentimes very much about dreadful trade-offs and compromises. This is especially true of pieces pushing new boundaries in terms of technology. It is what it is, and accepting that reality as early as possible helps one deal with the frustration later on if and when the finalized product falls short of expectations.

From the perspective of the end user, there are basically two kinds of compromises, at least in the realm of real-time imagery, which is by nature very much about trade-offs: ones that are reasonably obvious to a spectator possessing general knowledge of the medium, and ones that become evident only when additional, specific information about the product is provided by the developer.

Let’s first examine the former variety of compromises, which are indeed fairly noticeable from the surface. An apt example can be found in a polygon-based racing game called Stunts (1990), which featured a rather compromised bitmap backdrop.

Indeed, due to the peculiar algorithm that handles the rotation of the bitmap milieu, the solution begins to fall apart the more the view is tilted. The way the backdrop is dealt with is rather odd, as the algorithm doesn’t actually rotate the bitmap at all, but rather skews it quite crudely. The backdrop is clearly divided vertically into 10-pixel-wide strips, which are then moved individually along the y-axis to create an appearance of rotation. And when the screen tilts past a certain point, the backdrop vanishes altogether.
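The strip trick above can be sketched in a few lines. This is my own reconstruction of the idea, not Stunts’ actual code: instead of rotating the bitmap, each fixed-width vertical strip is simply offset along the y-axis so that the strip centers follow the tilted horizon line.

```python
import math

def skew_backdrop(strip_count, strip_width, tilt_degrees):
    """Return the y-offset (in pixels) for each vertical strip of the
    backdrop, approximating rotation by shearing: offsets follow the
    line y = x * tan(tilt). Illustrative reconstruction only."""
    slope = math.tan(math.radians(tilt_degrees))
    offsets = []
    for i in range(strip_count):
        # x-position of this strip's center relative to screen center
        x = (i - strip_count / 2 + 0.5) * strip_width
        offsets.append(round(x * slope))
    return offsets
```

Because each strip only slides vertically, the image stays pixel-aligned and cheap to draw, but the stair-stepping between strips grows with the tilt, which is exactly why the illusion collapses at steeper angles.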

One can only speculate about the reason for such a bizarre algorithm. Perhaps it was genuinely the best the developer could come up with given the hardware limitations of the time. Or perhaps they just didn’t know how to code a proper bitmap-rotation algorithm within the time frame they had. In any case, the developer had to make a call: either include the inconsistent and unstable solution in the game, or simply leave the whole feature out.

Since the backdrop performs reasonably well most of the time, I believe the pros ultimately outweighed the cons. However, it’s quite obvious that the developer wasn’t particularly proud of the solution, as the horizon indeed disappears, I would argue by design, when it really starts to disintegrate.

Another, more recent instance of a compromise that comes across quite noticeably is Gran Turismo 5’s (2010) particle system, which, for some reason, gets exceedingly blocky when viewed from certain angles. Frankly, I’m not completely sure what’s going on there, since other games with similar particle effects generally don’t do that. However, I’m positive there’s some valid trade-off involved considering the super-ambitious developer, Polyphony Digital. Perhaps the particle system was put in place to future-proof the graphics engine for the next generation of hardware, since the smoke and dust perform beautifully in the Photo Mode.

As said, the above two cases are instances of compromises evident to the spectator simply from experiencing the product as-is. Recognizing the second form of compromise, then, requires specific information about the production process itself and the original vision, dreams, and hopes of the developer.

One example is the production of Alan Wake (2010), which initially was very much hyped for its supposed open-world structure. The end result, however, was a purely linear experience, which made the game seem like a compromise to those who had followed the production from the outset. People without such knowledge, though, saw Alan Wake merely as a kick-ass action thriller, which it was.

In the end, compromises are what actually get things done. In fact, one could argue that the whole concept of design is at its core about dealing with compromises and trade-offs. More than anything, though, the art of compromise is being able to step back and evaluate the big picture, to see the forest for the trees, and then do the right thing for the product as a whole and everyone involved.

Lost in Translation

February 10, 2012

For as long as I can remember, I have had a particular fondness for arcade games (and I mean the actual coin-operated ones), especially when growing up. Obviously we are now living in the post-arcade era, where sophisticated home systems finally made arcades obsolete, but fortunately at least classic arcade games continue to live on through collectors and, of course, emulation.

What made arcade games so special back then was that they offered, in a way, a window into the future of consumer real-time imagery, as in what could be possible in the home environment somewhere down the line. In fact, for me they acted like windows quite literally, since I rarely had the resources to actually play the games and instead awkwardly hung around them. Watching other people play was almost equally exciting nevertheless, which made me a rather lousy customer for the local arcade as a juvenile.

Even though arcade games by and large came from a variety of developers, one publisher was and, in a way, still is in a league of its own: Sega, and particularly the Sega AM2 team led by design genius Yu Suzuki. I’ve yet to encounter an entity that has broken ground in video game graphics as ambitiously as Sega has, with games that continuously redefined what the real-time image could do.

The games closest to my heart out of Sega’s overwhelming portfolio are the ones released in the 80s using so-called Super Scaler technology. These are the titles that simulate 3D space by algorithmically scaling bitmap art assets, creating an illusion of traversing the z-axis. The effect was nothing short of staggering and light years ahead of what home systems could do at the time. Games like Out Run (1986), After Burner (1987), and Thunder Blade (1988), to name but a few, all used this Super Scaler system, and the whole charm of them, I would argue, ultimately came down to the smooth scaling effect.
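The core of that scaling illusion can be sketched very simply. The function below is an illustrative approximation of pseudo-3D sprite scaling in general, not Sega’s actual hardware logic, and the `focal` constant is an assumption of mine: a sprite’s on-screen size shrinks in proportion to its distance along the z-axis.

```python
def super_scaler_size(base_width, base_height, z, focal=256):
    """Compute a bitmap sprite's on-screen size at distance z using
    a simple perspective factor focal / (focal + z). The constant
    256 is illustrative, not any real machine's value."""
    scale = focal / (focal + z)
    return (max(1, round(base_width * scale)),
            max(1, round(base_height * scale)))
```

Run this per sprite, per frame, with z decreasing as objects approach the player, and you get the roadside objects of Out Run rushing smoothly toward the screen; the magic of the arcade boards was doing this scaling in hardware for huge sprites at a steady frame rate.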

As said, the above-mentioned arcade games represented the absolute high end of the gaming spectrum at the time. Their commercial success naturally created financial pressure to bring the arcade experience to home systems, such as the Commodore 64, as well. The problem was that the C64 represented virtually the opposite end of the spectrum, with its lackluster hardware in terms of screen resolution, color palette, and computational horsepower in general.

Of course, that didn’t stop money-grabbing publishers, such as Ocean and U.S. Gold, from bringing Out Run and the like to low-end home systems. The issue was that, for instance, Out Run was not so much about the gameplay per se, but about the spectacle of driving smoothly through colorful scenery filled with eye-pleasing details. When those things were stripped away in the low-end versions, such as the one on the C64, there wasn’t much, if anything, of the original experience conveyed anymore due to the hardware limitations. All that was left was a really bad game, even by C64 standards.

The original Out Run was indeed a rock-solid fusion of software and hardware that gracefully carried through the visual concept Out Run was built on, with no hiccups whatsoever. The game ran beautifully at high and steady frame rates, contained striking transition effects when driving from one section to the next, and offered vast variation in terms of visuals in general. I’d say Out Run was the best the year 1986 had to offer in real-time imagery that the everyday audience had access to.

However, the way the C64 version was constructed was completely backward. The exercise here was to shove the concept of Out Run into the system in any way possible, regardless of the inherent hardware limitations. There was indeed nothing – not a single chip – inside the C64 that would’ve warranted or justified the ludicrous idea of porting a game like Out Run to such a weak system. Which is painfully obvious just by glancing at the end result, especially in motion.

In the end, everything comes down to the fact that the real-time image as a medium can’t be separated from the hardware platform it runs on; the real-time image is the software and the hardware. I can only imagine the disappointment of someone who actually paid real money for an arcade conversion like the C64’s Out Run, thinking they were getting nearly the same arcade experience at home. It was like buying Star Wars: Episode IV on DVD and getting Star Wars Uncut instead.

It goes without saying that in the end the logic of such endeavors had more than anything to do with the power of Intellectual Property and the “fraudulent” financial leverage that came along with it. In fact, all this makes me think of fast food joints where the pictures of the burgers above the counter represent nothing of the actual products people are shoving, rather happily, into their faces. What they are consuming is the simulacrum of the Big Mac, not the Big Mac depicted in the marketing materials.

The problem with horrible arcade conversions wasn’t the poor target hardware in and of itself. There were quite beautiful games on the C64 at the time, like, for instance, Uridium (1986), which utilized even the awkward shape of the C64’s pixels to its advantage. The problem was the completely backward and corrupt creative process. And I’m using the term creative very loosely here.

Yes, Size Did Matter

February 4, 2012

The younger audience of today probably has a hard time comprehending the concept of the sprite, which pretty much defined real-time imagery for more than a decade between the 80s and 90s. Sure, we still use the term to denote planar 2D images representing 3D objects in a 3D space, but it has a very different meaning today than it did back in the day.

In the era of 8- and 16-bit systems, like the Commodore 64, Super Nintendo, or Neo Geo, a sprite denoted specifically an independent, dynamic visual object (like a spaceship or a race car) whose properties, such as size and number, related directly to the capabilities of a certain piece of hardware. So, for instance, the C64 could display no more than 8 sprites at a time, which paled in comparison to the 96 sprites per scanline the more advanced Neo Geo was able to push to the screen. Put simply, sprites were Lego-like building blocks for the dynamic elements of a game, and very much of the visual outcome depended on them, whether one liked it or not.

As said, not only did the number of simultaneous sprites vary between systems, but so did their sizes, that is, how many pixels one sprite could consist of at maximum, which was relatively few on lower-end systems such as the C64. This limitation made larger game characters and other dynamic objects all the more fascinating at the time, even if such assets usually consisted of several sub-sprites to give the impression of one large one.
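The sub-sprite trick is essentially tiling: a big character is covered with a grid of hardware sprites moved in lockstep. The sketch below is a hypothetical illustration of the layout math (the 24x21 dimensions are the C64’s hardware sprite size; the function itself is mine, not any game’s code).

```python
SPRITE_W, SPRITE_H = 24, 21  # C64 hardware sprite dimensions

def metasprite_layout(char_width, char_height, origin_x, origin_y):
    """Return the (x, y) screen positions of the hardware sprites
    needed to cover one large character, tiled in a grid."""
    positions = []
    for row in range(-(-char_height // SPRITE_H)):     # ceil division
        for col in range(-(-char_width // SPRITE_W)):
            positions.append((origin_x + col * SPRITE_W,
                              origin_y + row * SPRITE_H))
    return positions
```

A 48x42-pixel fighter, for example, would eat four of the C64’s eight hardware sprites on its own, which shows how quickly a couple of large characters exhausted the machine’s entire sprite budget.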

An iconic example in my mind is The Way of the Exploding Fist (1985), originally developed on the C64, which was, in addition to the great gameplay, celebrated for its seemingly massive characters, which, of course, consisted of a number of separate sprites. Nevertheless, the size of the characters in and of itself bore so much aesthetic value that the technical side was secondary: big was big was big.

Since the fascination of real-time imagery is very much based on the recognition of technical limitations and, at the same time, the pushing of the envelope of what is considered possible, it’s no coincidence that some games, especially ones released at the launch of a new hardware platform, exploit this frame of thinking. The thing is, there is a short post-launch window within which a game can make an impression employing merely the basic features of the freshly released platform.

One instance that comes to mind is Super Mario World (1990), which came bundled with the Super Nintendo as a launch title. I believe the sole reason there was a huge Bullet Bill, aka Banzai Bill, right at the first level was indeed to impress new console owners, or people playing at stores, with the power of the SNES. And since the sizes of game characters were highly limited on the previous generation of hardware, it was only natural to put an oversized version of an already established character in to get the point across: kids, we can have Bullet Bills this big from now on.

It was the inevitable decline of sprite-based hardware that made the issue of size obsolete in the realm of real-time imagery. Once everything was constructed from polygons, the dimensions of art assets became totally relative, and thus a non-issue technically speaking. Which seems to escape people who marvel at, say, God of War III (2010) as a technological achievement for its colossal characters. I, too, can scale up a 3D model in a 3D software environment as much as I please, and technically it makes no difference whatsoever.
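The point is easy to demonstrate: in a polygon pipeline, scaling a model is a uniform multiply over its vertices, and the cost is identical whether the factor is 1 or 100. A minimal sketch:

```python
def scale_vertices(vertices, factor):
    """Uniformly scale a polygon model's vertices. The work is one
    multiply per coordinate regardless of the factor, which is why
    sheer size is no technical feat in a polygon renderer."""
    return [tuple(c * factor for c in v) for v in vertices]
```

The vertex count, and hence the rendering cost, is untouched by the scale factor; only the numbers change. That is the fundamental difference from sprite hardware, where a bigger object literally consumed more of a scarce physical resource.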

If, then, one “superficial” attribute has to be singled out that bears any technical meaning in art assets today, it’s definition, not size.