Frame Rate Is a Feature

July 1, 2011

When discussing real-time graphics (or, as I like to put it, real-time imagery), we often associate the term with video games alone, which is, of course, a rather narrow view of the matter. Granted, video games are the most prominent vehicle for high-end real-time imagery, at least on the consumer side of things, but in recent years many everyday consumer objects have had their fair share of sophisticated real-time imagery as well.

Like mobile phones.

The real explosion in that space undeniably happened after the introduction of the original iPhone back in 2007, which established pretty much a new paradigm for how a user interface should look and feel. One major breakthrough from Apple was to implement a simple physics simulation [1] in the scrolling, which made it look like the list of text (or graphical elements) had mass and thus inertia. Scrolling was actually the second feature Steve Jobs demoed at the keynote, and it blew everyone away. It's hard to imagine the impact now that we take such features pretty much for granted, but I for one could barely contain myself when I saw it.
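
Just to illustrate the idea (a rough sketch of my own, not Apple's actual implementation, and with made-up constants), the core of that kind of inertial scrolling can be boiled down to a few lines: the velocity the finger leaves behind keeps carrying the list, while a friction factor bleeds it off frame by frame.

# A minimal sketch of inertial scrolling, run once per frame (assuming ~60 fps).
# The constants are invented for illustration; the real values are unknown to me.
FRICTION = 0.95    # fraction of the velocity kept each frame
MIN_SPEED = 0.1    # below this, the list is considered to have stopped

def update_scroll(offset, velocity):
    """Advance the scroll offset by the current velocity, then apply friction."""
    offset += velocity
    velocity *= FRICTION
    if abs(velocity) < MIN_SPEED:
        velocity = 0.0
    return offset, velocity

# After the finger lifts, this gets called every frame with the velocity
# measured from the last touch movements, until the velocity decays to zero.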

But what made the scrolling on the iPhone such a huge deal, I would argue, was the high and steady frame rate it was presented at, especially by 2007 standards. As I have stated earlier, frame rate is aesthetically the most significant singular feature of real-time graphics, since it is, I might add, the very measure [2] of the real-timeness of given imagery.

This brings us to the importance of fluid imagery for mainstream, non-technical people. There really is no excuse for a jumpy experience when dealing with the everyday consumer, who is A: totally oblivious (as they are entitled to be) to the technical circumstances behind the imagery, and B: comparing the imagery to that of movies and television, which deliver an obviously smooth stream of images. In other words, fluid imagery is the default position for the consumer, not a luxury item, which is why – to come back to video games – arcade games have always run at exceptionally high and steady frame rates compared to home systems. Arcade games are (or rather were) aimed at anyone who happened to walk by, whereas home systems were aimed chiefly at enlightened hobbyists.

And finally we get to the point I have been so eagerly building up to: the introduction of the Nokia N9.

First of all, as a Finn, it warmed my heart to see the positive buzz around a Nokia phone; there's no way around it. The general consensus seems to be that since 2007 Nokia has produced nothing but disappointments, but now, finally, Nokia appears to have gotten it right.

Funnily enough, the ultimate reason for all the excitement wasn't any particular technological innovation per se – not even the rather cool "swipe" gesture – but the fluidity of the user experience, i.e. its high and steady frame rate. It's sad to note that even Microsoft realized before Nokia, with their Windows Phone, that if you can't do something at 60 frames per second, you don't do it. Period. And presumably widgets and Flash are absent from the N9 for that very reason.

All in all, I would argue that ultimately it's the high and steady frame rate that renders touch-based user interfaces, such as the N9's, not only efficient conduits for interaction but something that is simply fun and engaging to mess around with. Put differently, if the frame rate fails to deliver, everything else about the user experience falls apart with it.

So, I would go so far as to say that frame rate is the first line of defense between the user and a machine carrying real-time imagery. Losing that battle might well cost you the war.

I retract the title. Frame rate isn't a feature: it's a killer feature.

[1] see my thesis, Chapter 5.5 Simulation of Motion
[2] see my thesis, Chapter 3.5 Frame Rate

Pixel Perfect

June 23, 2011

Like it or not, mobile gaming seems to be the space where the most interesting developments in terms of real-time graphics are happening at the moment.

One reason for this recent jump in quality is, in my mind, the success of Apple's iOS platform and the competitive pressure that has followed it. As for the raw horsepower of mobile hardware, Apple has never played that game the way, say, Sony has with its upcoming super-performing PS Vita; instead, it has concentrated on making a compelling platform on which third-party developers can express their ideas and, in many cases, make a living.

One of the finest pieces of such self-expression, in the true meaning of the word, is Superbrothers: Sword & Sworcery EP, released first on the iPad and later on the iPhone and iPod Touch.

As a game it’s an obvious throwback to the now-bygone point-and-click adventure games, and as such not even a particularly good one. But S:S&S EP should not be judged merely as a game in the ludus sense but rather as an experience like the one you get from a movie or a book, as much of a tired cliché as that may sound.

S:S&S EP employs visual artifacts (pixelation, in this case) that position it in a visual frame of reference obviously based on, as said, decades-old adventure games. And it works beautifully, thanks to the visually consistent and imaginative art direction by Craig Adams, aka Superbrothers, who really is one half of the soul of S:S&S EP. The other half is, of course, Jim Guthrie, who made the cool soundtrack for the game.

So, adhering to this kind of low-fidelity visual principle offers many benefits to a developer, especially an indie one.

For one, the amount of work can be (though it isn’t necessarily) a fraction of what goes into a high-fidelity game, while still managing to look appealing and cool. And if S:S&S EP is anything, it’s hip and cool throughout.

Second, the visual effects don’t have to be that sophisticated, since our perception is calibrated to the low-fidelity frame of reference from the get-go. So just like with the “unnecessary polygons” I wrote about earlier, a technologically modest visual entity can look spectacular when encountered in the right retro-ish context, since the mind (of a long-time gamer, at least) has been conditioned over the years to expect a certain level of technological sophistication from a certain visual look. In other words, a game built on this kind of lo-fi visual principle can get away with a lot in terms of pure tech.

So, the times when S:S&S EP looks exceptionally good are exactly the moments when the game seemingly transcends the visual legacy it so cunningly mimics. This is especially true in the epic boss battles, in which the classic look is enhanced with modern effects such as color gradients, fluid vector animations, and a completely synchronized soundtrack.

And that’s really one part of the magic of the visual landscape of S:S&S EP: setting the player’s expectations with a low-fidelity frame of reference, and then exceeding them with tricks and effects only made possible by modern technology.

Of course, it’s far from easy to pull off this kind of visuals, and I believe it requires a skilled visual sense, like the one Mr. Adams seems to have. But consider how much harder it would be to transcend the visual frame of reference with a high-fidelity game.

Really, really hard.

Uneven Steven

June 14, 2011

I would make the case that Electronic Arts’ EA Hockey, released back in 1993, was one of the first sports games to replicate fairly convincingly the look and feel of the sport it was simulating, and I have some fond memories of playing it on my Sega Mega Drive (Genesis in the US). Later on, EA Hockey evolved into the famous NHL series, followed by other sports series as well, which have seen annual iterations ever since.

Indeed, release after release, player models, arenas, and crowds have become increasingly accurate and detailed in terms of polygon count, texture resolution, and simulation of light. In fact, the visual quality of the latest installment of any EA Sports title is closing in on photorealism under certain conditions, particularly in broadcast-like camera angles. It’s not uncommon to hear people say that while they were playing an EA Sports game, someone who walked into the room thought for a moment that the game was an actual TV broadcast.

However, when I was recently watching a trailer for NHL 12, one thing became painfully obvious that resonates with what Lev Manovich, in his opus The Language of New Media, describes as uneven realism. The gist of it is that the reality we occupy is infinite in terms of mathematical complexity, which has led graphics researchers to develop a host of unrelated solutions for dealing with different areas of synthetic imagery.

The problem that arises is twofold. Firstly, some portions of reality can be considered more complex to model than others, and secondly, human perception is tuned to recognize certain traits of reality as artificial exceptionally effortlessly.

So, while the lighting and geometry were fairly believable in that NHL 12 trailer, the movement of the players wasn’t quite so. The fact is, the simulation of human motion is still in its infancy, even though EA has now implemented a Euphoria-like physics engine to simulate body impacts to an extent. What makes the situation especially complicated for sports games is that they are more often than not based on the behavior of the human body. And there’s nothing more difficult to simulate than human muscles and the nervous system, since we, as human beings, are conditioned to spot any flaw or inaccuracy in how a human body moves.

Yes, the disconnect between how, for instance, NHL 12 looks on the surface and how it behaves is enormous, and I think the gap has done nothing but widen over the years. And it will continue to widen unless something radical happens in the field of (human) physics engines, which is unlikely.

Interestingly, the problem at hand is virtually absent in games like Gran Turismo 5 and other vehicle-based games, since the physics of, say, a racing car are fairly easy to carry out with a simple mathematical simulation – or at least that’s what our biased human perception lets us believe. Indeed, if we were cars, like in Pixar’s Cars, instead of humans, we would (or at least I would) moan about how unnaturally cars behave even in the most sophisticated racing games.
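
To give a sense of how little it can take to produce plausible-looking vehicle motion for a casual observer (a deliberately crude sketch of my own, with invented constants, and nothing to do with how Gran Turismo is actually built), a point-mass model with engine force, aerodynamic drag, and rolling resistance already behaves in a recognizably car-like way:

# A crude point-mass car model; the constants are invented for illustration only.
MASS = 1200.0       # kg
DRAG = 0.4257       # lumped aerodynamic drag coefficient
ROLLING = 12.8      # lumped rolling resistance coefficient

def step_car(speed, engine_force, dt):
    """Advance the car's forward speed (m/s) by one time step of dt seconds."""
    f_drag = -DRAG * speed * abs(speed)    # grows with the square of speed
    f_roll = -ROLLING * speed              # roughly linear in speed
    accel = (engine_force + f_drag + f_roll) / MASS
    return max(0.0, speed + accel * dt)

# Simulating a few seconds of full throttle at 60 updates per second:
speed = 0.0
for _ in range(60 * 5):
    speed = step_car(speed, engine_force=8000.0, dt=1.0 / 60.0)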

High-end Low-end

June 8, 2011

To me, the most fascinating development in the evolution of real-time imagery has been, by far, the transition from 2D to 3D that took place in the late 80s and early 90s. As I stated earlier, the popular 2D/3D dichotomy is more often than not an arbitrary and even misleading division: by “3D” we usually mean algorithmically simulated depth, in contrast to “2D”, which refers to non-algorithmic (i.e. manually depicted) depth. However, for the sake of clarity, I will employ the 2D/3D split for now.

The shift from 2D to 3D was a fundamental transition from one graphics paradigm to another; there’s hardly any question about it. The algorithmic simulation of depth brought so many possibilities, opening literally a new dimension in real-time imagery, that there was no going back. Once I saw sprite-scaling games such as OutRun and Chase HQ, and later polygon-based games like Virtua Racing that offered total freedom of camera movement, up and running at 50-60 frames per second, I knew the pure 2D paradigm was irreversibly gone – and rightfully so. The idea of graphical entities traversing the z-axis effortlessly, without any stuttering or jumpiness whatsoever, was and is something to marvel at even today, and it should not be taken for granted. I surely don’t.
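
Roughly speaking (this is only a back-of-the-envelope sketch of my own, with illustrative numbers, not how any particular game was actually written), that algorithmic depth boils down to dividing by distance: the farther an object lies along the z-axis, the smaller it is drawn and the closer to the vanishing point it lands.

# A minimal perspective projection; screen size and focal length are assumptions.
SCREEN_W, SCREEN_H = 320, 240
FOCAL = 200.0    # assumed focal length, in pixels

def project(x, y, z):
    """Map a point in camera space (z > 0 in front of the camera) to the screen."""
    scale = FOCAL / z                  # objects shrink as z grows
    sx = SCREEN_W / 2 + x * scale
    sy = SCREEN_H / 2 - y * scale
    return sx, sy, scale               # scale can also drive the sprite size

# A sprite 10 units to the right at depth 400 lands just right of center,
# drawn at half its original size:
print(project(10.0, 0.0, 400.0))       # -> (165.0, 120.0, 0.5)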

Even though the new paradigm is usually superior in every way, sometimes the old one persists, which can lead to interesting results. This occurred to me when I fired up Raiden Fighters Jet [1], an arcade top-down shooter released as late as 1998 that adheres completely to the 2D paradigm, meaning no sprite scaling, let alone polygons. It’s worth noting that by then 3D acceleration had already broken through into mainstream gaming, so 2D shooters were considered relics even at the time.

More than anything, a game like RFJ is a fascinating example of an obsolete 2D paradigm taken to its logical extreme. When operating solely on 2D bitmap planes located at fixed depths, there’s only so much one can do in terms of technology, so the developer was able to aim its resources primarily at the actual content of the game instead of the tech.

And it shows. Sure, RFJ is far from mind-blowing even by 1998 standards, but either way, it’s so ridiculously filled with projectiles, massive explosions, and other visual hodgepodge that it puts some of the more advanced 3D games of that period to shame in terms of sheer spectacle. Obviously, the developer, Seibu Kaihatsu, had a long history of making top-down shooters, so they knew how to push the hardware (and consequently the 2D paradigm) to its very limits.

The most fascinating aspect of all this, and the ultimate point, is however the fact that when operating within the 3D paradigm instead of the 2D one, there’s really no technological (or “paradigmatic”) limit on what a developer can pull off. In other words, there will never be a 3D game that could be considered as paradigm-pushing as a game like RFJ.

Indeed, there’s no next “new, revolutionary world” to look forward to in the realm of real-time graphics, like there was in the late 80s and early 90s, when 3D was rolling onto our screens. No, it’s all mere refinement from now on, but I’ll take it.

[1] Of course, there are a number of other high-end 2D examples.

Watch, Don’t Touch

May 20, 2011

It’s rather safe to say that video games are a visual medium first and foremost. It shouldn’t come as a surprise that I consider myself a substantially visual person as well; so much so, in fact, that I sometimes find it frustrating to, say, watch a movie, as my concentration constantly gears towards the mere visuals of the film at the expense of the story, motives, and characters. Consequently, I have no problem going to see a movie with a razor-thin plot, as long as the visual department delivers.

Besides being significantly visual entities, video games are, by definition, a highly interactive medium too, and the only way to do a video game justice is to play it. But while video games are primarily meant to be played, I for one have always found pleasure in just watching someone else play, or a game playing by itself, like in the arcades.

The reason for this may stem from a period in my early childhood (long before the Internet era) when my prime source of novel video game experiences was my big brother, who understandably wasn’t all that excited about me continuously hanging out in his room, let alone playing with his computer. So, when I was allowed to stay and watch him play from the sidelines, the moments were pure luxury to me. I can’t count the number of times I was blown away by some new game, sitting in a chair, trying to contain my excitement.

I believe this was the phase in my life when I learned not to play video games but to watch them, due to my limited access to them. There was something like a force field between the games and me, which must have made them even more mysterious and intriguing. Later on, when I bought a gaming system of my own and started to follow the industry myself, at least part of the magic was irreversibly lost. The force field was gone, so now I could squeeze every last bit out of the games in my possession, effectively making them more mundane and less exciting in the process (which resonates with something I wrote earlier).

All in all, the most fascinating video games to me have indeed always been the ones I have had restricted access to, and I think it has to do with something Walter Benjamin, in a 1936 essay, called the aura of a work of art. According to Benjamin, an object loses its charm when it’s mechanically reproduced and thus becomes more accessible to the audience.

In the era of instant access to everything humanity has ever produced, we could use some of that big-brother mentality (in a non-1984 sense) to make artifacts more mysterious, and thus exceedingly fascinating, again.

Texture Mapping Taken Literally

May 13, 2011

In 1946, the essayist and poet Jorge Luis Borges wrote a frequently cited fable about ambitious cartographers who drew up a map so detailed that it ended up covering the territory of an Empire exactly. This, of course, defeated the very purpose of the map, so it eventually decayed to shreds under the feet of successive generations.

In the digital realm, luckily, we don’t have spatial issues of that kind, which enables us to create representations of reality as detailed as we are willing and able to, without worrying about how much virtual space they may occupy. The world is ours to model.

And that is exactly what must have been on the minds of the people at C3 Technologies when they decided to develop their cutting-edge aerial 3D imaging technology. I saw the first demo videos of it a few years back and couldn’t believe the level of definition and overall quality of the imagery: it looked too good to be true.

But now that C3’s mapping technology has been put to more mainstream use by Nokia, its significance becomes clearer and clearer: this changes geographical cartography as we know it. And the true beauty of the tech is the presumably minimal amount of manual labor that goes into the creation process, so the cost and time per square kilometer should remain reasonable, which is crucial for the future of the technology.

Admittedly, Google Earth does include a similar effort to 3D-model at least the major cities, but the fact is, it isn’t even in the same ballpark when it comes to the level of photorealism. Currently, C3’s offering simply kills the competition as far as I know, and it’s hard to see anyone else coming up with a better (or even equal) solution in the foreseeable future.

Of course, C3’s maps are far from perfect 3D representations of their real-life counterparts, since the maps basically simulate only space [1] and not light, which becomes quite apparent and distracting when looking at metropolitan areas with highly reflective skyscrapers. Plus, the shadows are completely dependent on the weather conditions on the day of shooting, so the overall look may vary drastically within a large region. But these are known issues of photo-based modeling in general.

When I was younger, I was very much into scale models, like dioramas and such. They fascinated me beyond comprehension, and my theory is that real-time imagery, like these 3D maps, intrigues me for similar reasons. I have yet to pinpoint exactly what those reasons may be, but my gut feeling says it has to do with the god-like, omnipotent perspective on reality: that one can seemingly twist and turn a piece of reality as one pleases, which, of course, isn’t the case with “real reality”.

[1] see my thesis, Chapter V: Simulation of Visual

Real-time Imagery That Wasn’t

May 8, 2011

Speaking of movies, I believe we can all agree that the 80s was a pretty decent decade in terms of popular cinema. Of course, being born in 1980 may have a slight distorting effect on my personal judgment, but who can genuinely say he or she is utterly immune to nostalgia? I personally very much dislike nostalgia as a concept, for the reason that it’s always a false, romanticized view of the past, but there’s just no way of escaping it: everything tends to appear nicer when relived from a distance.

The Last Starfighter, released in 1984, is one 80s movie of which I have vague but interesting memories. The main reason the movie has stuck with me all these years is the fictional arcade machine that kicks off the story arc. At the beginning of the movie, the protagonist plays a polygon-based space shooter that later turns out to be a recruiting machine in disguise for an alien defense force (or something like that).

I was about seven or eight when I saw the movie for the first time, and I remember how impressed I was by the graphics of the fake arcade machine. They easily surpassed everything I had seen so far in terms of video game graphics, but still, I couldn’t put my finger on exactly why that was.

On the surface, the flat-shaded polygons looked somewhat similar to those produced by the Amiga 500 (my frame of reference back then), but what ultimately set the graphics apart, in hindsight, was the high and steady frame rate, light years ahead of the stuttering and unstable polygons seen on home systems at the time. On a side note, I believe this was my first realization of what a high frame rate really meant for real-time imagery, which was a lot.

Of course, the graphics on the arcade machine were not genuine real-time imagery but computer animation made to look as if it were rendered in real time. Either way, it fascinated the hell out of me.

Funnily enough, I completely ignored the more sophisticated (and thus logically more impressive) computer animation that was employed heavily throughout the movie. I learned only later that a big part of the movie was indeed computer-animated, but I just had no concept of what computer animation was supposed to be or look like, so I didn’t know how to be impressed by it.

Later on, I did fall in love with computer animation as well and learned to appreciate it as a separate visual entity. Not superior or inferior, only different.