Archive for the ‘Medium’ Category

Real Virtual Things

August 5, 2011

I have always found it fascinating when the reality of real-time imagery collides with the actual one. In an earlier post I discussed an artist called Susy Oliveira who, I would assume inadvertently, mimicked the aesthetics of polygonal graphics in some of her sculptures, and how peculiar it is to encounter such aesthetics outside the virtual realm. Then there is Aram Bartholl, an artist as well, who very deliberately exports visual concepts previously seen only in the digital realm, like video games, into physical reality, where they often produce an interesting visual conflict. One of his newest concepts, Dust, seems particularly impressive, especially if one has played Counter-Strike heavily at some point in his or her life.

My personal hero in this field, though, is Harrison Krix, who is best known for the Guy-Manuel helmet replica he made some years ago. However, a substantial portion of Krix’s body of work consists of constructing physical versions of props found in video games – everything from scratch. And it goes without saying that he’s ridiculously good at what he does. The attention to detail that goes into Krix’s projects is unrivaled, and it’s really freakish to see familiar digital objects, such as the Portal Gun, brought to life in such a meticulous fashion.

So, it was Krix’s work that got me thinking about the exact point in the history of real-time imagery after which making such physical replicas of virtual objects became reasonable.

The most obvious technological requirement must be that the virtual object has to consist of polygons, since polygons were the first genuine solution for simulating 3D space. So, in theory, one could make – and in fact, a gentleman named Niklas Roy has made – a physical artifact based on a mere wireframe model. However, the appearance of such a physical object doesn’t adhere to the logic of our common physical world the way Krix’s Portal Gun does, but to that of the digital realm; we don’t usually have green wireframe objects lying around. Of course, the prop maker could use some imagination and make the digital wireframe object appear more like an actual object by discarding the original visual principle, but that would defeat the whole purpose of such an endeavor, wouldn’t it?

So, my wonderment lies in the following question: when did it become reasonable to put together a credible physical object based on a digital one without resorting too much to artistic license? Obviously there is no definitive answer, since it depends a lot on the shape and form of the virtual object itself (among other things), but generally speaking, I would say it may have been the introduction of normal mapping – along with relatively high polygon counts and texture resolutions – that finally rendered virtual objects sophisticated enough to be replicated in the physical world. To put it in gaming terms, the line would lie roughly somewhere between Quake 3 and Half-Life 2: the art assets in the former were still too abstract to be constructed and sold as real-world objects, unlike those of the latter.
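For the technically curious, the trick that normal mapping pulls off can be sketched in a few lines: a texture stores a perturbed surface normal per pixel, and shading uses that instead of the flat polygon normal, which is where all the fine surface detail comes from. This is only an illustrative sketch; the function names and the 8-bit RGB encoding convention shown here are common but not tied to any particular engine.

```python
# Minimal sketch of tangent-space normal mapping: a normal-map texel
# stored as 8-bit RGB is decoded to a unit vector, which then drives
# per-pixel Lambertian (diffuse) shading.

def decode_normal(rgb):
    """Map an 8-bit RGB texel to a unit tangent-space normal in [-1, 1]^3."""
    n = [2.0 * (c / 255.0) - 1.0 for c in rgb]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

def lambert(normal, light_dir):
    """Diffuse intensity: clamped dot product of normal and light direction."""
    dot = sum(a * b for a, b in zip(normal, light_dir))
    return max(0.0, dot)

# The classic "flat" texel (128, 128, 255) decodes to roughly (0, 0, 1),
# so a light pointing straight along the z-axis gives nearly full intensity.
flat = decode_normal((128, 128, 255))
print(lambert(flat, [0.0, 0.0, 1.0]))
```

The point, as far as prop-making goes, is that the detail lives in the texture data, not the polygons – which is exactly why a normal-mapped asset carries enough surface information to be worth replicating physically.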

This all comes down to something that fascinates me more than anything: the indexes that correlate with the evolution of real-time imagery. Harrison Krix’s creations are indeed indexical in the sense that real-time imagery has had to evolve to a certain point to enable such a prop-making endeavor. I have a really hard time seeing Krix making abstract and blurry prop weapons from, say, Quake 2 and being equally passionate about it, but what do I know?

Think of the Children

July 7, 2011

It’s interesting that I have never had a problem with video game violence, considering how sensitive I am to seeing real blood and injuries in still images and videos. Even fictional violence in movies has made me look away – RoboCop’s, say, at the time – although not so much anymore as an adult. Okay, among recent movies the Saw series has made me fast-forward some over-the-top scenes, but otherwise I’m starting to be cool with almost all kinds of (fictional) movies.

In regard to actual violence, I’m so appalled by it that it gets to me even in a form as abstract as the thermal imaging camera footage one encounters in the media from time to time, in which barely recognizable, pixelated human silhouettes get shot to pieces. It’s not the graphical representation of violence in itself, but the mere idea – the belief, even if false – that someone is actually getting hurt that makes my guts shiver.

So when dealing with simulated violence like that found in video games, there’s not even the slightest chance that the object of the make-believe cruelty is a real, sentient being, in contrast to movie/video footage violence (cartoons etc. aside), which always leaves the door open on that question – at least in theory. A case in point is the cult movie Faces of Death, which mixes real and fictional acts of violence together in a fairly ambiguous fashion.

It’s pretty much a given that violence has always been part of video games in one form or another, if we consider violence as the act of disintegrating an opponent, which is basically what it is. However, if we date the official birth of video game violence to the moment when the mainstream media got interested in and “concerned” about it, I believe it was Doom that first raised some serious headlines back in 1993. In many senses, Doom was a perfect storm: a revolutionary rendering engine, killer playability, and, of course, unprecedented graphical violence.

Yes, there’s no denying that the splattering blood, the exploding bodies, and the controversy that followed had very much to do with the success of Doom. I remember reading reviews that made a big deal of the violence, making it clear that Doom was definitely not for kids – which wasn’t something a video game reviewer had the chance to say very often back then.

This is interesting, because while the concern about the violence in Doom surely was sincere to a certain extent, there was at the same time an underpinning sense of pride that came across: that video games were finally able to depict violence in such detail that it could actually harm the psyche of a growing kid – a somewhat alarmist sentiment, of course, considering how abstract and blocky Doom was. In addition, the pride also stemmed from the fact that the mainstream media finally acknowledged the existence of video games, even if in a somewhat unfavorable light. And video games and gamers have always craved acknowledgment from the “outside world” if anything, which Doom provided plenty of back then, thanks to its violent nature.

All of the above actually resonates with the case in which an Iraq veteran allegedly suffered flashbacks from playing Call of Duty 4: Modern Warfare. Of course, in principle, it’s a terrible thing that a man loses his marbles that way, but as a gamer, it brings me some strange pride that a video game can have that kind of effect on someone. As if reality is finally starting to intertwine with the simulation – which is the goal of the whole video game project, isn’t it?

All in all, video game violence, for me, is simply a sophisticated continuation of the childhood play and games in which toy soldiers got blown up by firecrackers, or mutilated in some other way. It’s just very difficult to see anything more to it.

As I stated in Chapter 4.2 of my thesis, there really is no ethical dimension whatsoever to simulated violence, as long as the analog is a non-sentient being.

Frame Rate Is a Feature

July 1, 2011

When discussing real-time graphics (or, as I like to put it, real-time imagery), we often associate the term with video games alone, which is, of course, a rather narrow view of the issue. Granted, video games are the most prominent vehicle for high-end real-time imagery, at least on the consumer side of things, but in recent years many everyday consumer objects have had their fair share of sophisticated real-time imagery as well.

Like mobile phones.

The real explosion in that space undeniably happened after the introduction of the original iPhone back in 2007, which pretty much established a new paradigm for how a user interface should look and feel. One major breakthrough from Apple was to implement a simple physics simulation [1] into the scrolling, making it look as if the list of text (or graphic elements) had mass and, thus, inertia. The scrolling was actually the second feature Steve Jobs demoed at the keynote, and it blew everyone away. It’s hard to imagine the impact now that we take such features pretty much for granted, but I for one could barely contain myself when I saw it.
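The core of that “mass and inertia” effect is surprisingly small. Here’s a toy sketch of inertial (“flick”) scrolling of the kind the iPhone popularized: after the finger lifts, the list keeps moving, with the velocity decaying by a friction factor every frame. The constants are purely illustrative – they are not Apple’s actual values.

```python
# Toy model of flick scrolling: integrate position frame by frame while
# an exponentially decaying velocity gives the list its "inertia" feel.

def simulate_flick(velocity, friction=0.95, dt=1.0 / 60.0, min_speed=1.0):
    """Simulate a scroll flick until the list settles.

    velocity  -- speed in pixels/second at the moment the finger lifts
    friction  -- per-frame velocity multiplier (0 < friction < 1)
    min_speed -- speed below which the scroll is considered stopped
    Returns the list of positions, one per 60 Hz frame.
    """
    position = 0.0
    positions = []
    while abs(velocity) > min_speed:
        position += velocity * dt
        velocity *= friction          # the decay is what reads as "mass"
        positions.append(position)
    return positions

trace = simulate_flick(1200.0)
print(f"settled after {len(trace)} frames at {trace[-1]:.1f} px")
```

Note that the whole illusion depends on this loop actually running once per display refresh – which is exactly where the frame-rate argument below comes in.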

But what made the scrolling on the iPhone a really huge deal, I would argue, was the high and steady frame rate it was presented at, especially by 2007 standards. As I have stated earlier, frame rate is aesthetically the most significant singular feature of real-time graphics, since it is, I might add, the very measure [2] of the real-timeness of given imagery.

This brings us to the importance of fluid imagery for mainstream, non-technical people. There really is no excuse for a jumpy experience when dealing with the everyday consumer, since he or she is (a) totally oblivious (as he or she is entitled to be) to the technical circumstances behind the imagery, and (b) compares the imagery to that of movies and television, which obviously offer a smooth stream of images. In other words, fluid imagery is the default position for the consumer, not a luxury item – which is why, to come back to video games, arcade games have always run at exceptionally high and steady frame rates compared to home systems. Arcade games are (= were) aimed at anyone who happened to walk by, whereas home systems were aimed chiefly at enlightened hobbyists.
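The arithmetic behind “high and steady” is worth spelling out: at 60 fps every frame has a budget of roughly 16.7 ms, and a single frame that overruns to, say, 40 ms is not averaged away – it is perceived as a distinct hitch. A small sketch (the helper names are mine, for illustration only):

```python
# Per-frame time budget and a crude count of "visible" overruns.

def frame_budget_ms(fps):
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

def dropped_frames(frame_times_ms, fps=60):
    """Count frames that blow past the per-frame budget."""
    budget = frame_budget_ms(fps)
    return sum(1 for t in frame_times_ms if t > budget)

times = [16.0, 16.5, 40.0, 16.2, 16.4]   # one visible stutter in the middle
print(frame_budget_ms(60))                # ~16.67 ms
print(dropped_frames(times))              # 1 overrun frame
```

This is why a steady 60 fps matters more than a high average: the viewer notices the single worst frame, not the mean.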

And finally we get to the point, which I was so eagerly preparing: The introduction of the Nokia N9.

First of all, as a Finn, it warmed my heart to see the positive buzz around a Nokia phone – there’s no way around it. The general consensus seems to be that since 2007 Nokia has produced nothing but disappointments, but now, finally, Nokia appears to be getting it right.

Funnily enough, the ultimate reason for all the excitement wasn’t any particular technological innovation per se – not even the rather cool “swipe” function – but the fluidity of the user experience, i.e. its high and steady frame rate. It’s sad to note that even Microsoft realized before Nokia, with its Windows Phone, that if you can’t do something at 60 frames per second, you don’t do it. Period. And presumably widgets and Flash are absent from the N9 for that very reason.

All in all, I would argue that it’s ultimately the high and steady frame rate that renders touch-based user interfaces, such as that of the N9 and the like, not only efficient conduits for interaction but something that is simply fun and engaging to mess around with. Put differently, if the frame rate fails to deliver, everything else about the user experience falls apart.

So, I would go so far as to say that frame rate is the first line of defense between the user and the machine carrying real-time imagery. Losing that battle might mean losing the war.

I retract the header. Frame rate isn’t a feature: It’s a killer feature.

[1] see my thesis, Chapter 5.5 Simulation of Motion
[2] see my thesis, Chapter 3.5 Frame Rate

High-end Low-end

June 8, 2011

To me, the most fascinating development in the evolution of real-time imagery has been, by far, the transition from 2D to 3D that took place in the late 80s and early 90s. As I stated earlier, the popular concept of a 2D/3D dichotomy is more often than not an arbitrary and even misleading division: by “3D” we usually mean algorithmically simulated depth, in contrast to “2D”, which refers to non-algorithmic (i.e. manually depicted) depth. However, for the sake of clarity, I will employ the 2D/3D split for now.

The shift from 2D to 3D was a fundamental transition from one graphics paradigm to another – there’s hardly any question about it. The algorithmic simulation of depth brought so many possibilities, literally opening a new dimension in real-time imagery, that there was no going back. Once I saw sprite-scaling games such as OutRun and Chase HQ, and later on polygon-based games like Virtua Racing that offered total freedom of camera movement, up and running at 50-60 frames per second, I knew the pure 2D paradigm was irreversibly gone – and rightfully so. The idea of graphical entities traversing the z-axis effortlessly, without any stuttering or jumpiness whatsoever, was and is something to marvel at even today, and it should not be taken for granted. I surely don’t.

Even though the new paradigm is usually superior in every way, sometimes the old one lives on, which can lead to interesting results. This occurred to me when I fired up Raiden Fighters Jet[1], an arcade top-down shooter released as late as 1998, which adheres completely to the 2D paradigm – no sprite-scaling, let alone polygons. It’s worth noting that by then 3D acceleration had already broken through into mainstream gaming, so 2D shooters were already considered relics.

More than anything, a game like RFJ is a fascinating example of an obsolete 2D paradigm taken to its logical extreme. When operating solely on 2D bitmap planes located at fixed depths, there’s only so much one can do in terms of technology, so the developer was able to aim its resources primarily at the actual content of the game instead of the tech.

And it shows. Sure, RFJ is far from mind-blowing even by 1998 standards, but either way, it’s so ridiculously filled with projectiles, massive explosions, and other visual hodgepodge that it puts some of the more advanced 3D games of the period to shame in terms of sheer spectacle. Obviously, the developer, Seibu Kaihatsu, had a long history of making top-down shooters, so they knew how to push the hardware (and consequently the 2D paradigm) to its very limits.

The most fascinating aspect, and the ultimate point of all this, is the fact that when operating within the 3D paradigm, there’s really no technological (or “paradigmatic”) limit on what a developer can pull off. In other words, there will never be a 3D game that could be considered as paradigm-pushing as a game like RFJ.

Indeed, there’s no next “new, revolutionary world” to look forward to in the realm of real-time graphics, like there was in the late 80s and early 90s when 3D was rolling onto the screens. No, it’s all about mere refinement from now on, but I’ll take it.

[1] Of course, there are a number of other high-end 2D examples.

Real-time Imagery That Wasn’t

May 8, 2011

Speaking of movies, I believe we can all agree that the 80s was a pretty decent decade in terms of popular cinema. Of course, being born in 1980 may have a slight distorting effect on my personal judgment, but who can genuinely say he or she is utterly immune to nostalgia? I personally very much dislike nostalgia as a concept, since it’s always a false, romanticized view of the past, but there’s just no escaping it: everything tends to appear nicer when relived from a distance.

The Last Starfighter, released in 1984, is one 80s movie of which I have vague but interesting memories. The main reason the movie has stuck with me all these years is the fictional arcade machine that kicked off the story arc. At the beginning of the movie, the protagonist plays a polygon-based space shooter that later turns out to be a recruiting machine in disguise for an alien defense force (or something like that).

I was about seven or eight when I saw the movie for the first time, and I remember how impressed I was by the graphics of the fake arcade machine. They easily surpassed everything I had seen so far in terms of video game graphics, but still, I couldn’t put my finger on exactly why that was.

On the surface, the flat-shaded polygons looked somewhat similar to those produced by the Amiga 500 (my frame of reference back then), but what ultimately set the graphics apart, in hindsight, was the high and steady frame rate, which was light years ahead of the stuttering, unstable polygons seen on home systems at the time. On a side note, I believe this was my first realization of what a high frame rate really meant to real-time imagery, which was a lot.

Of course, the graphics on the arcade machine were not genuine real-time imagery but computer animation made to look as if it were rendered in real time. Either way, it fascinated the hell out of me.

Funnily enough, I completely ignored the more sophisticated (and thus logically more impressive) computer animation that was employed heavily throughout the movie. Yes, I learned only later that a big part of the movie was indeed computer animated, but I simply had no concept of what computer animation was supposed to be or look like, so I didn’t know how to be impressed by it.

Later on I did fall in love with computer animation as well and learned to appreciate it as a separate visual entity. Not superior or inferior, only different.

Round’n Round

January 22, 2011

You know those moments when, driving down a long straight in a racing game, you simply have to mess around with the third-person camera for a while before the next curve? I do.

Okay, sudden 360-degree camera spins can create confusion from a gameplay standpoint, but aside from that, they provide a bizarre aesthetic pleasure at the same time – and exclusively in a third-person view.

In fact, I basically never fall into the same kind of excessive, unnecessary camera play when playing from a first-person perspective. It just doesn’t happen. Moving the camera around in a first-person view usually serves only one function: to make sense of the environment by scanning it with your field of view. Of course, sometimes you take another look if there’s something cool happening on the screen, but nevertheless, I would argue that the playfulness so often present in a third-person view is completely absent in first-person mode.

I believe what explains this split is the fundamental difference in how the virtual space unfolds through these two viewing paradigms.

As we all know, when rotating the camera in a first-person view (while standing still), you could in theory replace the polygon-based 3D environment with 2D imagery and get basically the exact same result. Google Street View, for instance, operates solely on 2D images that are merely distorted so that it looks like you are standing in the middle of the road – and not a single polygon, shader, or texture is needed to achieve that.
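That claim is easy to make concrete: a viewing direction from a fixed point simply indexes into a flat panorama image. The sketch below assumes an equirectangular panorama, which is one common way such imagery is stored (I’m not claiming it’s Street View’s exact internal format); the function name and parameters are mine, for illustration.

```python
# Map a view direction (yaw, pitch) to pixel coordinates in an
# equirectangular panorama -- pure 2D lookup, no geometry involved.

def panorama_pixel(yaw_deg, pitch_deg, width, height):
    """Return the (x, y) texel a camera ray hits in the panorama.

    yaw_deg   -- horizontal angle, 0..360 (0 = image left edge)
    pitch_deg -- vertical angle, -90 (straight down) .. +90 (straight up)
    """
    x = (yaw_deg % 360.0) / 360.0 * width
    y = (90.0 - pitch_deg) / 180.0 * height
    return int(x) % width, min(int(y), height - 1)

# Looking at the horizon, halfway around, lands mid-image.
print(panorama_pixel(180.0, 0.0, 4096, 2048))
```

Rotation in place is thus just a change of lookup coordinates – which is exactly why a third-person orbit, unlike this, cannot be faked with a flat image.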

A third-person view, however, always requires a certain level of real-time imagery to work even in principle. So, spinning the camera around in a third-person view brutally reveals the underlying structure of the imagery and, in a way, celebrates it in the process. Indeed, such a circular camera trajectory emphasizes exactly the depth and spatiality of the image (the reason Michael Bay uses it so much in his films), and it really brings dynamic graphical entities like reflections to life.

I would then argue that people who have this tendency to play around with the camera in a third-person view represent – without putting myself on a pedestal – a deeper, more “medium-aware” layer of gaming, even if they don’t acknowledge it themselves.

So, what may look like a random act of silliness can actually be a profound, philosophical journey into the very fabric of the real-time medium itself.

Or it can be just that: silliness.

Algorithm vs. Design

January 10, 2011

As we all know, the project of artificial intelligence has been nothing but an abysmal disappointment since its beginnings, and anyone asserting otherwise should check his or her facts. Charles Csuri and James Shaffer wrote back in 1968:

At M.I.T. and Standford [sic] University considerable research is in progress which attempts to deal with artificial intelligence programs. Some researchers suggest that once we provide computer programs with sufficiently good learning techniques, these will improve to the point where they will become more intelligent than humans. [italics added]

More intelligent than humans, you say? A pretty bold statement, considering that 43 years later we are nowhere near replicating human intellect – or even that of an earthworm, for that matter.

So what makes this situation particularly interesting from the standpoint of real-time imagery is not that we have now, and will continue to have, dumb enemies in first-person shooters. No, that’s secondary.

The real issue lies in procedurally generated content and its fundamental nature. The thing is, as game worlds in general grow larger and more detailed year after year, the workload behind them increases disproportionately as a result. Developers have tackled this by using complementary procedural methods – algorithms – for level and art asset creation, as in Fallout 3, Mass Effect 2, and other high-volume, high-density settings.

However, when a certain algorithm – a set of rules – is used as a means of artistic creation, it shines through like a supernova: everything has the same, unifying feel to it – and I’m not talking about style here, which is a completely different issue. I’m talking about the algorithm’s inability to produce meaningful, genuinely novel structures, which reduces to the failure of artificial intelligence discussed above. Indeed, the human mind[1] is the sole artistically creative agent in the known universe, and as such, in a unique position.

Notice how the employed algorithm (NURMS subdivision) doesn’t add design to the geometry but merely refines it according to certain general principles, causing all the results to share the same look and feel.
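The refinement-without-design point can be illustrated with Chaikin’s corner cutting, a much simpler relative of NURMS-style subdivision (I’m using it here only as a stand-in for the same idea): each pass replaces every edge with two points at 1/4 and 3/4, smoothing whatever shape it is given by the same fixed rule.

```python
# Chaikin's corner-cutting subdivision: a fixed rule that smooths any
# input polygon, adding detail but never adding design.

def chaikin(points, passes=1):
    """Smooth a closed polygon by repeated corner cutting.

    Each pass replaces every edge (P0, P1) with the two points
    0.75*P0 + 0.25*P1 and 0.25*P0 + 0.75*P1, doubling the point count.
    """
    for _ in range(passes):
        smoothed = []
        for i, (x0, y0) in enumerate(points):
            x1, y1 = points[(i + 1) % len(points)]
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = smoothed
    return points

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(len(chaikin(square, passes=3)))   # 4 -> 8 -> 16 -> 32 points
```

Feed it a square, a star, or a scribble: the output is always rounder and denser, but the rule itself never decides what the shape should be – that decision remains with the designer.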

For some reason, our emotional response to procedurally generated content differs fundamentally from our response to content born of genuine design. We often find non-design uninteresting and boring, which I believe stems from the human mind’s inherent ability to recognize patterns, especially when bombarded with them, as in a video game full of procedural material. And no algorithm will ever change that.

Interestingly, there are nevertheless cases in which algorithms are more suitable than design, and they usually involve some sort of undesigned natural occurrence. In theory, clouds, for instance, would be more than suitable for procedural generation, since clouds are generally not designed but formed by natural forces. In practice, however, algorithms aren’t yet sophisticated enough to provide interesting, convincing results, and I have yet to come across procedural clouds I would be happy with. Procedural trees I have – but they are indeed a fairly easy target for algorithmic creation.

Also, various simulations, such as those of light or physics, are better carried out algorithmically, since they don’t include designed elements. In fact, before we had simulated, algorithmic physics, we had designed, animated physics, which were obviously far inferior to the simulated kind. Remember, before ragdolls, how awkward it was to see a body lying completely stiff on a staircase? That was indeed a horrible time in history.
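Why simulation displaced hand-animated “physics” is easy to show: a few lines of semi-implicit Euler integration respond correctly to any starting state, whereas a keyframed fall only ever plays back the one motion an animator authored. A minimal sketch, with illustrative constants:

```python
# Semi-implicit Euler integration of a body dropped from rest: update
# velocity first, then position, stepping at 60 Hz until it lands.

GRAVITY = -9.81  # m/s^2

def simulate_fall(height, dt=1.0 / 60.0):
    """Return the time (in seconds) a body takes to fall from `height`."""
    y, vy, t = height, 0.0, 0.0
    while y > 0.0:
        vy += GRAVITY * dt       # velocity first: semi-implicit Euler
        y += vy * dt
        t += dt
    return t

# The analytic fall time from 10 m is sqrt(2h/g), about 1.43 s; the
# integrator lands close to that for any height we feed it.
print(round(simulate_fall(10.0), 2))
```

The same handful of lines handles a drop from 2 m or 200 m, onto a staircase or off a cliff – that generality is precisely what keyframed animation could never offer.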

The bottom line is: design is not replaceable. And algorithms have their place.

[1] One could indeed make a similar case about certain animal minds too, but I wouldn’t.