Miracle of Literacy

February 21, 2011

When it comes to the history of human enterprises, there's nothing quite like the one that involves conveying ideas through series of characters forming words and sentences, i.e. writing. Even though I love the concepts of reading and writing per se, and believe that the whole of human existence actually comes down to the said practices, I'm not a huge reader (or a writer) myself. I'm more of a listener, or a spectator, who just happens to spurt something out onto the keyboard every now and then, for better or for worse.

Interestingly, it seems that a medium requires a certain level of technological sophistication to uphold and convey legible information. Think, for instance, about the evolution of common paper: how much iteration it must have taken before we had anything near as convenient and suitable for written communication as modern paper is.

In the same sense, when texture-mapping finally broke through into the realm of real-time imagery, the most mind-blowing aspect for me was indeed the mere idea that, for instance, in a racing game you could now read things off other cars and trackside objects, instead of just watching solid rectangles glide past. The ability to read words like "Toyota" or "Valvoline" off objects was the ultimate indicator that something profoundly subversive had just happened to real-time polygon-based imagery as a medium. Stone tablets had now become, if not paper, then at least papyrus.

I'll never forget the experience at the local arcade when I encountered Ridge Racer and Daytona USA for the first time, the first cases of real-time texture-mapping I had ever seen in the early 90s. It was nothing short of surreal to see the graphics paradigm as you knew it shift before your very eyes. And when I got my first texture-mapped game, IndyCar Racing, a few years later, I remember constantly eyeing the billboards around the tracks, reading them and thinking how cool this new super-detailed "world" really was. And how there was no going back.

Yes, it really can't be stressed enough how big of a deal texture-mapping was to real-time imagery at large. The sheer amount of visual detail basically skyrocketed overnight in the wake of texture-mapping, and it's hard to imagine a breakthrough of equal caliber happening again in the foreseeable future.

World of Papercraft

February 18, 2011

Polygons are fascinating things. Even though they are the de facto building blocks of modern 3D imagery, they are in fact inherently 2D entities, bearing no structural thickness or depth whatsoever. Consequently, the logic of how polygon-based objects behave resembles origami more than actual, analog solid matter, even today. The only thing that has evolved in the end is the number of folds these hollow, papercraft-like objects are made of; the core nature has remained basically the same over the years.

Of course, polygons are only one side of the prevailing graphics paradigm, texture-mapping being the other. Texture-mapping has basically had two main functions: one is to add color and, well, texture to surfaces, and the other is to provide detail that would be unreasonable to carry out with polygons.

Regarding the latter function, texture-mapping was indeed used for a period of time to depict relatively large details like door handles, air intakes, body seams, and headlights, instead of polygons. And since bitmaps can't provide genuine depth or structure, such details often looked particularly flat and artificial, even if they were necessary at the time. In fact, they often reminded me of those stickers on some of my childhood toys that tried to depict extra detail, like buttons or other gadgetry, but came across as a cheap strategy to save effort and plastic.

The evolution of polygon-based graphics naturally discarded such decal-like approaches once hardware became capable of handling more than a handful of polygons on the screen at once. At some point it finally became feasible to model most of the details with polygons, even though bump/normal-mapping is obviously still used to handle the tiniest shape variations.
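To make the trick concrete, here is a minimal Python sketch of the idea behind normal-mapping (a toy illustration of my own, with made-up numbers, not any engine's actual code): the lighting is computed with a normal fetched from a texture rather than the polygon's own, so a perfectly flat surface can pretend to have a groove without a single extra polygon.

```python
def lambert(normal, light_dir):
    """Diffuse intensity from a unit surface normal and a unit light direction."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

light = (0.0, 0.0, 1.0)        # light shining straight at the surface

flat = (0.0, 0.0, 1.0)         # the polygon's real geometric normal
perturbed = (0.3, 0.0, 0.954)  # a tilted normal "fetched" from a normal map (roughly unit length)

print(lambert(flat, light))       # 1.0   -- fully lit, perfectly flat
print(lambert(perturbed, light))  # 0.954 -- darker, as if a groove were there
```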

So, where I'm going with all this is to set up the somewhat anachronistic moment I had with imagery from Gran Turismo 5, which resonated with something I wrote earlier. See, NASCAR cars have these stickers on them that emulate the headlights, grilles, and other details found in civilian cars, giving them a more familiar, identifiable look. However, when that concept is translated into a video game context, such as GT5 (or any other game featuring NASCAR cars, for that matter), it comes across as a nod to the said dawn of texture-mapped imagery, when complex details were indeed pulled off without the heavy use of polygons they are given now.

All things considered, I can only imagine the oddness the artist must have felt creating such an "unrealistic", "fakey" texture at the Polyphony Digital offices in the late 2000s. I wonder if a similar out-of-placeness was present when someone modeled the Lockheed F-117 Nighthawk for, say, Tom Clancy's H.A.W.X. back in 2009. Just take a brief look at it, and you'll grasp what I mean.

Unintelligent Property

February 2, 2011

If someone came up to me and asked what's the most irrelevant, infuriating aspect of gaming and gaming-related discussions, I would say, without blinking an eye: intellectual property (IP). For those of you who are clueless about IP, it basically covers the title of the game or the series; storylines; characters' names, appearances, and personas; and so on.

The thing is, general discussions regarding IPs are usually so distracting and beside the point that I almost feel fraudulent referring to game characters – or games themselves – by their IP names, as if the IP somehow defined the fundamental nature of a given game.

Well, it doesn’t.

For a start, video games using IPs from outside the realm of gaming (often referred to as licensed games) have had a bad reputation throughout their existence – for a reason. The situation mostly resembles a car manufacturer thinking that if they put these My Little Pony stickers on a sports car, they can leave the engine out. Of course, there are good licensed games, like the Lego ones, but they are good games regardless of the IP(s) they use.

Furthermore, even IPs that are born within gaming culture are problematic. If we look at, for instance, Need for Speed: Shift (2009) by Slightly Mad Studios and Need for Speed: Hot Pursuit (2010) by Criterion Games, they are both obviously driving games published by Electronic Arts. But is that really enough to justify the IP (Need for Speed) connection between the two?

Let's analyze: NfS: Shift is based on the technology used in the GTR simulations developed by SimBin, and NfS: HP on Criterion Games' own Burnout engine. So: different teams, different tech, different subgenres, and yet the IP associates them both as "Need for Speed games", which is not only unfair towards both of those titles, but brings confusion into the discussion as well.

It's important to understand that an IP isn't just a name or a title for a game, but a strategy to create additional meaning, a narrative, for a product. Yet, as I quoted Gonzalo Frasca saying earlier, video games are not based on narrative, but on a semiotic structure known as simulation, which allows you to do stuff instead of just watching stuff happen on the screen. Consequently, most of the efforts to inject scripted narration, meanings, metaphors, etc. into gaming have fallen short, to say the least, and IP often represents exactly that failed, incompatible component in gaming.

Of course, IPs like Need for Speed or Call of Duty ultimately come down to marketing and brand recognition, which is why we shouldn't swallow them as if they meant something to a game. Don't get me wrong, I'm not against IPs in games per se, but acting as if they added real, meaningful value to a video game gives me nausea.

On that note, think about all the different versions of Monopoly. There's Pirates of the Caribbean Monopoly, The Simpsons Monopoly, Pixar Monopoly, etc., but in the end, they are all just that: Monopoly games. The IPs don't affect the core structure of the game – which is Monopoly – in any way.

And let's be honest, what difference did it make that Niko Bellic was an immigrant with a troubled history full of violence, guilt, and deception, when you mowed down random pedestrians with a stolen fire truck?

I say this again: a video game in its purest form should be a tabula rasa, providing first and foremost an environment and the means for the player to create his/her own narrative and meanings. Currently, Minecraft does this pretty damn well.

Round’n Round

January 22, 2011

You know those moments when, driving down a long straight in a racing game, you simply have to mess around with the third-person camera for a while before the next curve? I do.

Okay, sudden 360-degree camera spins can create confusion from a gameplay standpoint, but aside from that, they provide a bizarre aesthetic pleasure at the same time – and exclusively in the third-person view.

In fact, I basically never fall into the same kind of excessive, unnecessary camera-play when playing from a first-person perspective. It just doesn't happen. Moving the camera around in first-person view usually serves only one function, which is to make sense of the environment by scanning it with your field of view. Of course, sometimes you take another look if there's something cool happening on the screen, but nevertheless, I would argue that the playfulness that's so often present in third-person view is completely absent in first-person mode.

I believe what explains this split is the fundamental difference in how the virtual space unfolds through these two viewing paradigms.

We all know that when rotating the camera in first-person view (while standing still), you could in theory replace the polygon-based 3D environment with 2D imagery and get exactly the same result. Google Street View, for instance, operates solely on 2D images that are merely distorted so that it looks like you're standing in the middle of the road – and not a single polygon, shader, or texture is needed to achieve that.
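To illustrate the point, here is a minimal Python sketch (my own toy example, not Street View's actual code) of how a rotation-only first-person camera can be served by a plain 2D panorama: the view direction simply becomes a pixel lookup in an equirectangular image.

```python
import math

def panorama_pixel(yaw, pitch, width, height):
    """Map a first-person view direction (yaw/pitch in radians) to pixel
    coordinates in an equirectangular panorama. A rotation-only camera
    never needs more than this kind of 2D lookup -- no polygons involved."""
    u = (yaw % (2.0 * math.pi)) / (2.0 * math.pi)  # longitude -> [0, 1)
    v = (pitch + math.pi / 2.0) / math.pi          # latitude  -> [0, 1]
    return int(u * (width - 1)), int((1.0 - v) * (height - 1))

# Looking 90 degrees to the right and slightly up in a 4096x2048 panorama:
print(panorama_pixel(math.pi / 2.0, 0.2, 4096, 2048))
```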

A third-person view, however, always requires a certain level of real-time imagery to work even in principle. So, spinning the camera around in third-person view brutally reveals the underlying structure of the imagery and, in a way, celebrates it in the process. Indeed, such a circular camera trajectory emphasizes exactly the depth and spatiality of the image (the reason Michael Bay uses it so much in his films), and really brings dynamic graphical entities like reflections to life.

I would then argue that people who have this tendency to play around with the camera in third-person view represent – without putting myself up on a pedestal – a deeper, more "medium-aware" layer of gaming, even if the person him/herself doesn't acknowledge it.

So, what may look like a random act of silliness can actually be a profound, philosophical journey into the very fabric of the real-time medium itself.

Or it can be just that: silliness.

Algorithm vs. Design

January 10, 2011

As we all know, the project of artificial intelligence has been nothing but an abysmal disappointment from its beginning in the 60s, and anyone asserting otherwise should check his/her facts. Charles Csuri and James Shaffer wrote back in 1968:

At M.I.T. and Stanford University considerable research is in progress which attempts to deal with artificial intelligence programs. Some researchers suggest that once we provide computer programs with sufficiently good learning techniques, these will improve to the point where they will become more intelligent than humans. [italics added]

More intelligent than humans, you say? A pretty bold statement, considering that, 43 years later, we are nowhere near replicating human intellect, or even that of an earthworm for that matter.

So, what makes this situation particularly interesting from the standpoint of real-time imagery is not that we have now, and will continue to have, dumb enemies in first-person shooters. No, that's secondary.

The real issue lies in procedurally generated content and its fundamental nature. The thing is, as game worlds in general tend to grow larger and more detailed year after year, the workload behind them often increases exponentially as a result. Developers have tackled this issue by using complementary procedural methods, algorithms, for their level and art asset creation, as in Fallout 3, Mass Effect 2, and other high-volume/density settings.
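To give a concrete taste of the approach, here is a minimal Python sketch (a toy of my own, not any studio's actual pipeline) of midpoint displacement, one of the oldest procedural tricks: a couple of rules grow a whole terrain profile out of two endpoints.

```python
import random

def midpoint_displacement(left, right, depth, scale=1.0):
    """Recursive 1D 'terrain': each midpoint gets a random offset whose
    magnitude halves per level. A few lines of rules stand in for hours
    of manual authoring -- the essence of procedural content."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2.0 + random.uniform(-scale, scale)
    first = midpoint_displacement(left, mid, depth - 1, scale / 2.0)
    second = midpoint_displacement(mid, right, depth - 1, scale / 2.0)
    return first + second[1:]  # drop the duplicated midpoint

profile = midpoint_displacement(0.0, 0.0, depth=6)
print(len(profile), min(profile), max(profile))  # 65 heights from two endpoints
```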

However, when a certain algorithm, i.e. a set of rules, is used as a means of artistic creation, it shines through like a supernova: everything has the same, unifying feel to it – and I'm not talking about style here, which is a completely different issue. I'm talking about the algorithm's inability to produce meaningful, genuinely novel structures, which reduces to the failure of artificial intelligence discussed above. Indeed, the human mind[1] is the sole artistically creative agent in the known universe, and as such, in a unique position.

Notice how the employed algorithm (NURMS subdivision) doesn't add design to the geometry, but merely refines it according to certain general principles, causing all the results to share the same look and feel.
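For intuition, here is a 2D stand-in for that kind of subdivision, written in Python: Chaikin corner-cutting (the real NURMS scheme works on 3D meshes, but the principle is the same). Note how it only ever rounds off whatever it is given.

```python
def chaikin(points, iterations=3):
    """Chaikin corner-cutting on a closed 2D polyline. Each pass replaces
    every edge with points at 1/4 and 3/4 along it, rounding the shape off.
    It only smooths the input; it never invents new design, which is why
    everything it touches ends up sharing one look."""
    for _ in range(iterations):
        smoothed = []
        closed = points + points[:1]  # wrap around to close the loop
        for (x0, y0), (x1, y1) in zip(closed, closed[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = smoothed
    return points

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(len(chaikin(square)))  # 4 corners become a 32-point rounded loop
```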

For some reason, our emotional response to procedurally generated content differs fundamentally from our response to things that originate from genuine design. We often find non-design uninteresting and boring, which I believe stems from the human's inherent ability to recognize patterns, especially when one is bombarded with them, as in a video game full of procedural material. And no algorithm will ever change that.

Interestingly, there are nevertheless cases in which algorithms are more suitable than design, usually involving some sort of undesigned natural occurrence. In theory, for instance, clouds would be more than suitable for procedural generation, since clouds are generally not designed but formed by natural forces. In practice, however, algorithms aren't yet sophisticated enough to provide interesting, convincing results, and I have yet to come across procedural clouds that I would be happy with. Procedural trees I have come across – but then, trees are a fairly easy target for algorithmic creation.
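Here is roughly why, as a minimal Python sketch: a classic Lindenmayer system, in which a single toy rewrite rule (my own choice of rule, purely for illustration) "designs" the entire branching structure.

```python
def lsystem(axiom, rules, depth):
    """Expand a Lindenmayer system by rewriting every symbol in parallel.
    The whole 'design' of the tree lives in one tiny rewrite rule."""
    for _ in range(depth):
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

# A classic toy branching rule: F = grow forward, [ and ] = push/pop a
# branch, + and - = turn the drawing turtle left/right.
tree = lsystem("F", {"F": "F[+F]F[-F]F"}, depth=3)
print(len(tree))  # three rewrites already yield hundreds of turtle commands
```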

Also, various simulations, such as those of light or physics, are better carried out using an algorithmic approach, since they don't include designed elements. In fact, before we had simulated, algorithmic physics, we had designed, animated physics, which were obviously far inferior to the simulated kind. Remember, before ragdolls, how awkward it was to see a body lying completely stiff on a staircase? That was indeed a horrible time in history.
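For comparison with keyframed animation, here is a bare-bones Python sketch of simulated physics: a single point mass under gravity, stepped with semi-implicit Euler integration. The resting pose emerges from the rule instead of being authored by anyone.

```python
GRAVITY = -9.81  # m/s^2

def step(position, velocity, dt=1.0 / 60.0, floor=0.0):
    """One semi-implicit Euler step for a falling point mass: update the
    velocity first, then the position, then resolve a crude floor contact."""
    velocity += GRAVITY * dt
    position += velocity * dt
    if position < floor:  # crude resting contact with the ground
        position, velocity = floor, 0.0
    return position, velocity

pos, vel = 2.0, 0.0        # drop the mass from two meters up
for _ in range(120):       # simulate two seconds at 60 Hz
    pos, vel = step(pos, vel)
print(pos, vel)            # it ends up resting on the floor: 0.0 0.0
```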

Bottom line is, design is not replaceable. And algorithms have their place.

[1] One could indeed make a similar case about certain animal minds too, but I wouldn’t.


The Next Generation

December 29, 2010

What has kept me following the real-time image industry, if you will, all these years is its rapid evolution, the way it constantly reinvents itself through the iteration of hardware and software. How everything looked so much better on the Amiga 500 after using and getting accustomed to the Commodore 64's visual offerings. Continuous iteration is at the core of the real-time medium, and really its magic and transcendental purpose, as with technology at large.

How come, then, it feels like the evolution of the real-time image has been plateauing for the past few years? What is so different now compared to, say, five years ago?

It's the decline of exclusive high-end PC gaming, that's what's different.

For instance, judging by trailers and screenshots, the upcoming multiplatform Crysis 2 is clearly a step down in almost every possible sense from its three-year-old PC-exclusive predecessor, Crysis. It really is, and no amount of post-processing (of which there's plenty) can hide that uncomfortable fact. And the sole reason for that is the long-obsolete console hardware Crysis 2 is primarily developed for. Consequently, I have next to zero excitement towards Crysis 2, which, said aloud, sounds as sad as it is distressing.

So, we are now in a situation where high-end PC hardware is practically a generation ahead of its console counterparts, but without any decent software to take genuine advantage of it. Instead of getting another Crysis, we get poorly optimized console ports, such as Call of Duty: Black Ops, which often perform nowhere near as well as they logically should on a capable PC.

Without high-end dedicated PC gaming, there are no more messianic games looming on the horizon, exciting and inspiring us like Doom, Quake, Half-Life 2, or Crysis did in their time. The PC is supposed to be about pushing the boundaries of the medium, not sweetening up console leftovers with ridiculously expensive hardware.

All things considered, I hate to say it, but it seems there really will be no major, groundbreaking developments in the video game space until the next console hardware cycle emerges, which may or may not happen sometime in 2012 at the earliest. It's as if consoles have taken real-time imagery hostage, and we, the enthusiasts, have no option but to wait for their next move.

Myth of 2D

December 21, 2010

From time to time, you encounter a discussion concerning the concepts of 2D and 3D, mainly in the realm of animated movies, but also in computer imagery at large. The general consensus on which imagery should be considered 2D and which 3D seems to be that if an image is drawn by hand, it's 2D imagery by default, whereas 3D imagery is always something realized with 3D software like Autodesk 3ds Max. Simple enough, right?

So this may sound crazy, but that hand-drawn "2D" Bambi over there on the left looks pretty three-dimensional to me, and in fact, I don't see any "dimensional differences" between him and the 3D-rendered wireframe ball next to him. Notice how Bambi's rear leg is behind the front one, and how the shapes of his head, ears, and the rest of his body are all properly aligned in perspective, no?

The thing is, the whole idea of the 2D/3D division is inherently broken, and it only distracts and limits our visual thinking in terms of representational imagery.

See, at the most fundamental level, it takes only two objects overlapping each other for an image to become 3D – and I'm not just splitting hairs here, but illustrating how empty and misused the notion of 2D really is. Basically, if the use of perspective is considered 3D imagery – as it logically should be – then at least 99.99% of all depictive imagery is in essence 3D, making the whole split meaningless. That Bambi is as 3D as any.

So, what people actually mean by "3D" these days, besides the obnoxious stereoscopy, is really the process by which the perspective distortion is achieved in a given image. If an algorithm in 3D software or a game engine handles the perspective, it's "3D", but if the perspective is the result of a well-coordinated hand, it's "2D" – which is of course a completely nonsensical train of thought.
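In fact, the "algorithmic" half of that split fits in a couple of lines. Here is a minimal Python sketch of pinhole perspective projection, the divide-by-depth that 3D software performs, and that a trained hand performs implicitly.

```python
def project(x, y, z, focal_length=1.0):
    """Pinhole perspective projection: map a 3D point to the image plane.
    The whole 'algorithmic 3D' of the debate boils down to this divide by
    depth; a skilled hand does the same foreshortening when drawing Bambi."""
    return focal_length * x / z, focal_length * y / z

# Two identical edges, the second twice as far away -- it projects smaller:
print(project(1.0, 1.0, 2.0))  # (0.5, 0.5)
print(project(1.0, 1.0, 4.0))  # (0.25, 0.25)
```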

Sure, 2D is a relevant concept in many fields, even within real-time imagery, as when talking about gameplay. But when discussing and evaluating computer-generated representations on a general level, the notion often loses enough of its descriptive power that it only confuses the discussion instead of adding to it.

So next time, instead of asking:

Gee *drool*, is that two-dee or three-dee graphics I see?

ask:

Pardon me, sir, but may I enquire how much of the perspective in this piece of art is conducted algorithmically, and to what extent artistically by hand?