Perfect License

November 27, 2011

It may not come as a surprise that I was a huge fan of Legos as a kid and spent countless hours putting them together and fantasizing about ambitious structures that never ended up realized, just like every other kid. As time has gone by, I’ve started to see ever more clearly not only the connection between the fascination with video games and with Legos, but the structural similarities between the mediums as well.

The first and most obvious similarity is the “Legos as pixels” parallel, which can be seen, for example, in The White Stripes music video directed by Michel Gondry. Of course, using physical objects as pixels isn’t solely a Lego-specific exercise, since a myriad of other household items will do as well, such as Rubik’s Cubes, Post-it notes, or cross-stitching, to name but a few.

There is, however, a more substantial similarity of a higher order, if you will, that has to do with the reuse and recombination of a certain set of base components. This is most apparent in sprite-based games of the 8/16-bit era, where the effect was quite in-your-face, but the concept still exists even in modern graphics systems such as that of RAGE, which is indeed celebrated, and partly marketed, for its non-tiling nature.
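That reuse-and-recombine principle of tile-based graphics can be sketched in a few lines. The tiles and map below are made up purely for illustration (tiny 2×2 character blocks standing in for pixel art), but the mechanism is the same one the 8/16-bit systems relied on:

```python
# A minimal sketch of tile-based reuse: a handful of small "tiles"
# are indexed by a map and stamped repeatedly to build a much larger
# scene. Tile contents here are illustrative, not from any real game.

TILES = {
    0: ["..", ".."],    # sky
    1: ["##", "##"],    # brick
    2: ["/\\", "\\/"],  # roof
}

def render(tile_map, tiles, tile_h=2):
    """Compose a scene by stamping reused tiles row by row."""
    rows = []
    for map_row in tile_map:
        for y in range(tile_h):
            rows.append("".join(tiles[t][y] for t in map_row))
    return rows

scene = render([[0, 2, 0],
                [1, 1, 1]], TILES)
# six tile stamps on screen, but only three unique tiles in memory
```

Six stamps drawn from three unique assets: the memory savings scale with the scene, which is exactly why the brick-like repetition was so inescapable back then.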

The Lego brick-ish nature of video game graphics is more often than not an inescapable fact that stems from the limitations of system memory and the workload of the artists. And as I concluded in an earlier post, this phenomenon of tiling and reusing makes the unique assets or “blocks” appear so much cooler and more precious than they perhaps otherwise would. But that’s beside the point.

The point is that in light of everything said above, the Lego games of recent years, such as LEGO Star Wars or LEGO Harry Potter, developed by Traveller’s Tales, make so much sense that it’s almost ridiculous. In a way, Lego is a perfect license for a video game, and actually one of the few that doesn’t excessively compromise the original concept that is Lego. Obviously a strong movie IP doesn’t hurt, but the real magic lies not in the IP, but in the aforementioned correspondence between video game graphics and Lego as mediums. As a fan of both, it makes my imagination, at least, run wild with the possibilities.

And on top of it all, plastic, the sole material of Legos, must be the most straightforward material to simulate with real-time graphics. There’s just no need for sophisticated shaders, and for once the often-derogatory term “plasticky” works in favor of the title, not against it.

In the end, the brilliance of the Lego license comes down to the advantages of a low-fidelity visual principle I previously wrote about. Consider, for instance, how little effort must go into creating Lego versions of Harry Potter or Indiana Jones once the generic base character is done, in contrast to more realistic approaches. Minecraft says hello, too.

Of course, I’m not saying the Lego games by Traveller’s Tales are good merely by virtue of the Lego license, even though they have generally been good. I’m saying that in the right hands a license like Lego is an enormous benefit and an asset to the production as a whole, especially in the case of a smaller studio.

Simulation by Proxy

November 20, 2011

If there’s one concept regarding video games to which everything written here at my blog ultimately comes down, it has to be the concept of simulation. What’s so intriguing and unique about simulation in relation to other forms of representation is that simulation doesn’t replicate only how a certain entity looks, but how it functions beneath the appearance. Simulation is always a dynamic system of input and output, which is the beauty of it.

As I noted in my thesis, simulation as a notion basically has two sides: an educational/scientific, i.e. more “serious” side, and a playful side, with video games more closely representative of the latter. The main difference between the two is that simulations in the video game space are mostly aesthetics-driven and less accurate with regard to physical reality than educational/scientific simulations. So simulations for mere entertainment purposes need only to look good and be truthful enough to provide a credible impression, which makes such simulations all the more interesting to a visually oriented person like myself.

Bearing that in mind, video game simulations, such as those of lighting or physics, are more than anything about compromises and trade-offs, and thus generally far from perfection, though decreasingly so as technology moves forward. Consequently, it’s a given that practically every simulation technology that comes along is merely temporary, waiting to be replaced with a more advanced one. Of current technologies, Ambient Occlusion in all its forms, for instance, is a lighting solution whose obsolescence is only a matter of time once genuine indirect illumination schemes become more feasible.

As for the simulation of space, there was an endeavor in the 90s to use the voxel ray casting technique to depict 3D landscapes in a relatively detailed, but highly constrained manner. The first commercial video game using such a method was NovaLogic’s Comanche: Maximum Overkill, released in 1992, which visually blew the competition out of the water. There wasn’t anything near as impressive as Comanche on the market at the time, and everyone with a decent PC must have thought that the future belonged to some kind of voxel system, not filled polygons.

We all know what happened to the mainstream use of voxels later on, but there was that brief moment in the early 90s when voxels did an arguably better job of conveying the appearance of a bumpy 3D landscape. Of course, the visual glory of voxels came with the typical limitations associated with ray casting. Still, polygons in 1992 were too few and far between to depict even remotely the complexity provided by voxels, and as such, they were far behind purely aesthetically speaking. Eventually games like Flight Unlimited, released in 1995, started to establish the idea that texture-mapped polygons were really the way to go even for the detailed scenery in which voxels excelled so well, and today we really have no credible alternative paradigm as far as geometry is concerned.
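The core of the Comanche-style heightmap ray casting (often called “voxel space” rendering) is surprisingly compact. Here is a toy sketch of the idea for a single screen column, with all constants chosen arbitrarily for illustration: samples are taken along a ray marching away from the camera, each sample’s terrain height is perspective-projected to a screen row, and the column is filled using a y-buffer so that nearer terrain occludes farther terrain.

```python
# Toy heightmap ray casting for one screen column. Rows are numbered
# top-down; heights are sampled near-to-far along the view ray.
# All parameter values are illustrative, not from any real renderer.

def render_column(heights, cam_h=20, scale=1, horizon=0, screen_h=16):
    """Render one screen column from heightmap samples (near to far)."""
    column = [" "] * screen_h
    y_buffer = screen_h                 # lowest screen row not yet drawn
    for z, h in enumerate(heights, start=1):   # z = distance from camera
        # perspective projection: farther samples shrink toward the horizon
        y = max(int((cam_h - h) * scale / z) + horizon, 0)
        if y < y_buffer:                # sample pokes above nearer terrain
            for row in range(y, y_buffer):
                column[row] = "#"
            y_buffer = y                # remember the occlusion boundary
    return column

col = render_column([5, 12, 8, 16])
```

Note how the third sample (height 8 at distance 3) projects to a row already covered by nearer terrain and is simply skipped: occlusion falls out of the front-to-back march almost for free, which is part of what made the technique so fast in software.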

To further illustrate my case, Comanche 4, released in 2001, was the first in the series to abandon voxels for hardware-accelerated polygons, the technology that was the final nail in the coffin of software-rendered voxels.

The demise of voxels proves, if anything, that in the realm of simulation technologies, some solutions are hot at one point and obsolete the next. And who knows if the contemporary polygon paradigm, which we so certainly believe in, will somewhere down the line be substituted with some uncanny technology that our current minds can’t even grasp.

After all, technology by nature is not built to last, but to be replaced.

Ghost in the Machine

November 14, 2011

It’s thoroughly evident by now that the project of Artificial Intelligence (AI) has turned out to be a lot trickier, to say the least, than it was perceived to be around 50 years ago. It seems that genuine, creative intelligence is something of a mystery even for today’s scientific inquiry, and as the late John McCarthy, who coined the term Artificial Intelligence, famously put it, “We understand human mental processes only slightly better than a fish understands swimming.”

So, even if we are nowhere near replicating, or even understanding, human intelligence per se, we can in certain cases mimic some of the most basic cognitive functions more or less convincingly. Of course, video games are a major arena for this kind of exercise, in which the need for AI has been important from the very beginning of the medium. For instance, each ghost in Pac-Man has its own separate AI behavior following its own logic of how to get to the player. Unsurprisingly, those AI routines didn’t fool anyone, even at the time, into believing the ghosts were actually strategizing their patterns, but they regardless made the game all the more interesting, or even a classic, for all I know.
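Those per-ghost logics are a nice illustration of how simple the “intelligence” really is. The sketch below follows the commonly documented chase-mode targeting rules (tile coordinates, maze details and the original arcade quirks left out), so treat it as a paraphrase rather than arcade-accurate code:

```python
# Each Pac-Man ghost chases with its own trivial targeting rule rather
# than any real strategy. Positions and directions are (x, y) tiles.

def blinky_target(pac, pac_dir, blinky):
    return pac  # Blinky aims straight at Pac-Man's tile

def pinky_target(pac, pac_dir, blinky):
    # Pinky aims four tiles ahead of Pac-Man's current direction
    # (ignoring the original arcade's famous up-direction overflow quirk)
    return (pac[0] + 4 * pac_dir[0], pac[1] + 4 * pac_dir[1])

def inky_target(pac, pac_dir, blinky):
    # Inky doubles the vector from Blinky through a point two tiles
    # ahead of Pac-Man, producing a flanking target
    ax, ay = pac[0] + 2 * pac_dir[0], pac[1] + 2 * pac_dir[1]
    return (2 * ax - blinky[0], 2 * ay - blinky[1])

def clyde_target(pac, pac_dir, clyde, scatter_corner=(0, 30)):
    # Clyde chases only while more than eight tiles away from Pac-Man;
    # any closer and he retreats toward his scatter corner
    dx, dy = pac[0] - clyde[0], pac[1] - clyde[1]
    return pac if dx * dx + dy * dy > 64 else scatter_corner
```

Four one-liners, essentially, yet together with the shared pathing rules they produce pincer movements players happily read as coordinated strategy.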

Of modern games, the Portal series is now famous, among other great things, for its peculiar “AI” character GLaDOS, who is, interestingly, realized not by employing anything that could traditionally be described as AI routines at all, but merely by scripting. The thing is, GLaDOS is nothing but a plot device through which the overarching, prewritten narrative is conveyed, so the scripting is understandable, and even unavoidable. She is thus not interesting from the point of view of the actual AI that is under discussion here.

However, in addition to GLaDOS there were those cute little turrets that were also supposedly controlled by AI according to the fiction of the games. And in contrast to GLaDOS, the turrets actually possess genuine, although extremely basic, AI behavior, in that they react dynamically to the player’s actions.

Besides acid pools and abysses, the turrets are the sole enemies in the Portal universe so far, and what differentiates them, in my mind, from AI antagonists in other games is their exceptionally credible nature. As said, human intelligence is still beyond the grasp of today’s most sophisticated AI systems, which means every simulated human thought process in any video game is a far cry from the real thing. This isn’t, of course, the case with our Portal turrets.

Indeed, because the turrets are, according to the Portal narrative, controlled by “AI”, simulating such a turret using real AI routines (which are admittedly in a poor state) is so much more feasible, and thus believable, than simulating the behavior of any human(oid) character. And that is why the Portal turrets fascinate me, as they are, in a way, as credible as AI characters can be in a video game. It’s really not that much of a stretch to imagine something like those turrets actually becoming real at some point in the future, with similar AI behavior.

So it seems that the simulation of a certain type of fictitious “AI” is far more within the reach of the current state of actual AI technology than simulating genuine intelligence, which, said out loud, sounds quite obvious. But still, it struck me how such low-level AI can at the same time be the most convincing AI when wrapped in a cunningly designed context, like a sentient turret in this case.

Juvenile Textures

November 3, 2011

If I had to single out one game that has had the most profound impact on me purely in terms of visuals, I’d say Daytona USA by Sega, from the moment I first saw it at the local arcade back in 1993. I was 13 years old, and to describe the experience as religious to some extent isn’t that far-fetched. It rocked my world and irreversibly changed the way I perceive real-time imagery.

What I didn’t realize back then was that Daytona USA represented the first title in the so-called Model 2 lineup of games, many of which would more or less follow the success of Daytona USA. Model 2 was the first arcade system board by Sega to use filtered texture mapping, providing speed and detail never before seen in mainstream real-time imagery. That meant the team behind Daytona USA had to consider texturing an integral part of the creative process for the first time. And it shows.

There is, in my mind, a carefree, even childish, sentiment toward the texturing, in the sense that it is more stylistic than realistic. Granted, the visual design of that era of games at large was generally more grounded in visual styles than in a striving for photo-realism, due to technical limitations that were compensated for with immense artistic license. Still, considering the context of the game, which resembles real-life stock car racing, the texturing was perhaps more stylistic and fantastical than would’ve been technically necessary. In short, Daytona USA could’ve looked more (photo)real than it did.

For instance, the foliage textures are not only unrealistically inconsistent in hue throughout the game, but far too bright and colorful to be taken as credible representations of vegetation – even by 1993 standards. The next major Model 2 game, Sega Rally, released a year later, was already far more coherent and convincing in terms of visual realism through texturing, which underlines the fact that the fantastic and stylistic nature of Daytona USA wasn’t something born of technological reasons but of design (deliberate or not).

There are yet other examples of this rather creative and laid-back approach to texturing, such as the cliffs that look like the surface of the Moon in the Beginner Circuit, the yellow strobe lights in the Advanced Circuit’s tunnel, and the surveillance-monitor passageway in the Expert Circuit, to name but a few.

In the end, I believe the overall design of the textures in Daytona USA came down to two main factors.

The first factor, as hinted earlier, was the inexperience of the team, which manifested as a lack of proper art direction in terms of visual consistency. It seems as if each member of the texturing team designed their own subset of textures without seeing each other’s work until the end.

The second factor – and remember, this is me speculating – has to do with the technology itself, in the sense that back in 1993, when Daytona USA came out, real-time textures, let alone filtered ones, were a subject of amazement in themselves, meaning basically any design would’ve been somewhat remarkable. So, I guess it was to be expected that the developers would go a little crazy with the designs, which in fact happens with just about any novel technology that comes along.

But regardless of everything, I find the look of Daytona USA extremely charming in all its incoherence, even today. In addition, I really liked Sega’s overall vision back in the 90s of one virtual parallel universe that its arcade games supposedly shared, at the heart of which Daytona USA undeniably was.

Proper Definition

October 17, 2011

Every era seems to have its own hyperbolic prefixes, such as “super” in the 90s, which are then used ad nauseam by marketing professionals to sell people stuff they didn’t realize they needed. Super this and super that, and when everything is super, nothing is. Eventually, the buzzword becomes obsolete and is replaced with some other soon-to-be-meaningless word.

Over the past half-decade or so, one such word has been “HD”, which is, of course, an acronym for High Definition. It used to indicate a specific level of sharpness in video imagery, but today the term is slapped pretty loosely on everything imaginable, including mascara containers and contact lenses.

However, where the HD situation is really peculiar is the iOS App Store, where developers have mostly been running wild with HD labeling since the release of the iPad. It seemed for a while that no iPad app was safe from the magic touch of HD attached to its title (as if there were Standard Definition iPad apps), but luckily this trend is starting to fade out. In fact, titles like Flight Control HD, Fruit Ninja HD, Real Racing 2 HD, and Cut the Rope HD, to name but a few, are very much reminiscent of Super Nintendo games, many of which were labeled Super Something, or of the Nintendo 64 era with its Something 64 titling, which did nothing but diminish the impact of the prefix or suffix at a high rate.

HD labeling was and is obviously a way to differentiate iPad apps from iPhone ones, and an attempt to justify the higher pricing, as there are supposedly more pixels to work on, which is an absurd position to begin with. But what made the iPad’s HDness even more silly and arbitrary was the Retina Display introduced by the iPhone 4, which carries only about 22 % fewer pixels (960×640 versus 1024×768) than the displays found in both the first and second iPads.
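The arithmetic behind that comparison, for the record:

```python
# Pixel counts of the iPhone 4's Retina Display versus the display
# shared by the first two iPads.
retina = 960 * 640           # iPhone 4: 614,400 pixels
ipad = 1024 * 768            # iPad 1 and 2: 786,432 pixels
fewer = 1 - retina / ipad    # fraction the iPhone 4 lacks: 0.21875
```

Put the other way around, the iPad has 1.28× the pixels, i.e. 28 % more than the iPhone 4 – the same gap, just expressed from the other direction.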

Of course, HD isn’t a completely empty marketing ploy, but refers to an actual phenomenon of increased pixel count and density on displays at large. In the realm of technology, more is usually better, and in that regard resolution isn’t considered an exception. And in most cases, it isn’t.

So, I would argue that resolution isn’t necessarily an absolute value, but should be treated as an integral component of the visual landscape as a whole. In other words, sometimes resolution can be too high in relation to the actual content of the imagery.

The most obvious case of the above that comes to mind is the emulation of old hardware, where, in my opinion, the original resolution should always be kept intact even if higher resolutions are available. This has to do, more than anything, with the sanctity of a piece of art, which is fundamental. On a side note, I generally despise the idea of “HD remakes”, too.

The second case is something I realized just recently when playing Shadowgun by Madfinger Games on my iPod touch with a Retina Display. As I wrote earlier, Retina represents something of an end to the evolutionary advancement of pixel density on consumer displays, since it’s hard to imagine any human need for a much sharper image. And it isn’t about that “640K of RAM” this time. Sure, one can distinguish individual pixels in some cases even on a Retina Display, but with proper anti-aliasing applied, pixels become virtually unnoticeable to the naked eye.
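Why anti-aliasing hides pixels so effectively can be illustrated with the simplest scheme, supersampling: render more samples than there are display pixels, then average them down, so hard edges become intermediate shades instead of abrupt jumps. A toy one-dimensional example (values and edge placement made up for illustration):

```python
# Box-filter downsampling, the essence of supersampled anti-aliasing.

def downsample(samples, factor=4):
    """Average groups of `factor` sub-pixel samples into one pixel."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples), factor)]

# A hard black/white edge rendered at 4x resolution, deliberately
# misaligned with the final pixel grid:
hi_res = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
pixels = downsample(hi_res)
# pixels == [0.0, 0.5, 1.0]: the edge lands on a halfway-gray pixel
# instead of jumping straight from black to white
```

That halfway-gray pixel is precisely what keeps the eye from locking onto the pixel grid, which is why, past Retina-class densities, better filtering matters more than yet more pixels.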

So, Shadowgun, being something of a tour de force in iOS visuals otherwise, illustrated the disconnect between a super-sharp resolution and a relatively low polycount. What most iOS games still lack in terms of visual fidelity is indeed the number of polygons, so it’s rather peculiar to see fairly crude imagery, geometry-wise, through such a clear lens, i.e. resolution. It goes without saying that it’s better to mask deficiencies than to bring them forward, and the Retina Display does exactly the latter.

Consequently, it’s weird to say, but in my mind some of the more ambitious polygon-based games on iOS, such as Shadowgun or, say, Dead Space, don’t in a way “deserve” the Retina resolution. Yet. The visual landscape of a real-time product, or any visual product for that matter, should be first and foremost about authenticity (in regard to emulation), balance, and coherence. At the moment – and I do believe the situation is merely temporary – the resolution of the Retina Display is a bit too high in relation to the other visual structures, at least in the aforementioned instances and the like.

According to Script

September 30, 2011

The original Half-Life, released in 1998, was a genuine game changer for the FPS genre in more ways than one, there’s no question about it. The opening tram sequence alone made it very clear that this wasn’t your everyday shooter, but a novel and ambitious take on the genre.

What HL did so well back then, among other things, was to depict scripted events (i.e. predefined and animated subsets of the game) without employing any cut-scenes whatsoever. The tram sequence was the most prominent of these, including a fair amount of choreographed events with characters and robots minding their own business. And the player could wander around the tram freely and focus his/her sight on whatever seemed most eye-catching at the moment. It was limited freedom, but freedom nevertheless, unlike cut-scenes, which usually take away any latitude the player may otherwise have within a game. And you know how I feel about cut-scenes. Machinima, on the other hand, is cool.

Scripted events have since HL developed into an integral part of practically every AAA game, the most illustrious example being the Call of Duty series, whose incarnations are filled with scripted events similar to those of action movies. Interestingly, scripted experiences are more often than not described with pejorative terms and prefixes, such as “on-rails” or “lowest common denominator”, and I’m guilty of that myself, for my part.

But even though I think video games should always be first and foremost about simulating dynamic systems and natural phenomena in real time, I still can’t help but find some scripted events in certain games extremely fascinating. For instance, the aforementioned tram sequence in HL was and still is one of the most memorable moments in my personal gaming history, and it was indeed all thanks to the scripting. It was like being in the middle of a movie, which is neither a positive nor a negative stance, but more of an interesting observation.

Another fascinating, totally scripted scene, very similar to the one in HL, was the credit sequence in Call of Duty 4: Modern Warfare, in which, in contrast to HL, the player couldn’t in fact move at all, but only look around the scenery through the windows of a moving car. Since the player’s position was fixed, the sequence lost some of its charm, as, in theory, it would have been feasible to pull off using solely pre-rendered animation in a Google Street View style.

However, where I find scripted events most fascinating is when the player maintains his/her spatial freedom in full. The thing is, scripted events usually happen behind a window or some other see-through obstacle, but there are instances in which the player is relatively free to circle around one. The feeling is like watching live theatre happening right before you, and even though I’m generally against non-dynamic set pieces in games, well-done scripted events, and especially those using sophisticated motion capture, fascinate me nevertheless when pulled off outside of cut-scenes.

For instance, the first level of Call of Duty: Modern Warfare 2 is an apt example, in which the player can freely wander around the military base and watch soldiers playing basketball and doing other informal stuff at their leisure. I, for one, remember marveling especially at the basketball players, as if I’d never seen motion-captured characters in a game before, which speaks volumes of the power of natural human movement, even when predefined. But still, make no mistake: as said, I always prefer fully simulated motion, i.e. systems like Euphoria, over motion-captured motion, as those who have checked out my thesis will have already guessed.

The biggest problem with scripted events is, of course, their disposable nature compared to dynamic systems, whose outcome is unique each and every time. A scripted event can indeed be really amazing once or twice, but in the end it’s dynamic systems, simulations, that video games are all about and should be made of.

Dark and Stormy Night

September 12, 2011

Just as the eye adapts to various lighting conditions, such as extreme darkness, I believe that the eye, or perception, adapts in a similar manner to a certain visual principle, i.e. the means through which a given visual structure is realized. So, let me unpack this sentiment a little.

Consider, for instance, Lego-made structures. My argument is that the longer one observes, say, a fire truck made out of Legos and becomes familiar with it, the more the “representative fascination” of the object starts to wear off and non-representative aspects begin to rise to the surface, like the Lego bricks themselves and details (scratches, Lego logos) related to them. It’s indeed the visual structure behind the representation that starts to emerge when the representational layer loses its illusive charm through familiarity. In the end, one cannot escape the fact that the Lego fire truck is just a collection of Lego bricks put together to make it appear as a fire truck, while not being a fire truck at all, however self-evident that may sound.

What led me to this line of reasoning, and what my ultimate point is, were the numerous stormy sequences in video games of late that have had a particularly profound aesthetic effect on me in terms of visual realism. And it’s obviously not just me: GameTrailers, for instance, used exactly the thunderstorm imagery from Modern Warfare 2 in their review to emphasize the good looks of the game. The first Modern Warfare, too, was first demoed using the stormy intro sequence with a lot of rain and lightning. And the mother of all video game trailers, the Metal Gear Solid 2 reveal video, did the same exact thing back in 2001.

So, I believe it’s really not so much the rain component of the storm that makes such visuals so convincing, but the lightning that illuminates the scenes seemingly at random for just a couple of frames. It’s the flashing lightning that interrupts the aforementioned adaptation, i.e. the process of becoming familiar with a given visual structure. This is especially true in cases like MW 2, in which the lightning casts real-time shadows on the environment, completely altering the visual landscape for a moment. Consequently, the polygons, textures, and shadow maps regain enough of their representational power to trick an adapted eye, so it’s the fire truck once again instead of Lego bricks, even if just for a little while.

When talking about lighting, or in this case lightning, it often comes back to the original Doom. As I previously wrote, what made Doom particularly scary was its then-realistic illumination scheme, and I may add that the situations where Doom’s visuals were most credible had to do with flickering, strobe-like lighting conditions.

Of course, this kind of effect has to be employed wisely, and in no case overused, otherwise its appeal fades rapidly. Additionally, there should always be a natural phenomenon explaining the effect, such as a malfunctioning fluorescent tube in Doom or a lightning strike in MW 2, for it to work as described.