Archive for the ‘Post-processing’ Category

Beauty in the Battlefield

October 28, 2013

The Battlefield series by DICE hadn’t really been on my radar until recently, when I finally familiarized myself with Battlefield 3 (2011). Unsurprisingly, on a decent PC, BF 3 is a thing of beauty most of the time, even though the heavy post-processing can be quite overblown and visually taxing for some people. Nevertheless, in terms of technology, art direction to some extent, and other systems like vehicles and destruction, BF 3 is so far ahead of the competition (Call of Duty) that it isn’t even funny, and the soon-to-be-released Battlefield 4 seems to widen that gap even further.

BF 3 gets so many things right visually, especially in the department of what I dubbed Simulation of Style in my thesis, meaning the simulation of how the world appears through devices such as thermal vision. Granted, Call of Duty did it first, but BF 3 does it so much better with more dynamic rendering in place, like more nuanced noise and the cool over-exposure artifact that appears on the screen when depicting explosions. The effect is brutal and beautiful.

Also, the various HUD (Head-Up Display) elements are carefully designed to convey that gritty, functionalist feel with low-resolution graphics, low-frame-rate updating, and subtle flickering. I’m always impressed when I catch a subtle effect that reinforces the overall concept further. In addition, the HUDs genuinely appear to reside within the simulated world, clearly set apart from the user interface graphics, which isn’t always the case. In fact, I would argue that the HUD treatment BF 3 provides is one of those details that may go unnoticed, yet demonstrates the developer’s profound understanding of the real-time image and the concept of simulation at large. The devil and the understanding are in the details.

However, putting all the “superficial” visual excellence aside, I was surprised to find aesthetic pleasure in a place I never would have thought of: the online multiplayer. I’m sure beauty isn’t necessarily the first concept that comes to mind when bringing up a modern military multiplayer, but for me it was. The thing is, I’ve never been an online multiplayer gamer until recently, only a casual observer every now and then over the years. So my mental image of what an online multiplayer is and can be was based on visions of technical glitches and other such difficulties.

Soon I realized that my view was remarkably outdated.

I just couldn’t believe how flawlessly a modern online multiplayer could work at its best, and it struck me especially when watching smoothly gliding helicopters in the sky, knowing that real people operated them. For some reason, it was indeed the motion of hovering helicopters that awed me the most, and as always, the fascination stems from the perceived framework of limitations through which the piece is experienced. Apparently, the smoothly fluctuating motion of a helicopter breaks, or at least pushes, the boundaries of what is possible in my mind within the online framework, which makes it so beautiful to look at. In the end, it basically comes down to the logic of magic: anything that seemingly goes beyond one’s understanding of reality is fascinating and remarkable.

Oftentimes, to appreciate the beauty of a thing, one must understand the history, or in the case of the real-time image, the technological struggle behind it. Online multiplayer gaming has come a long way, and personally it was a fascinating experience to jump in at a point where things are finally starting to come together seamlessly and, yes, beautifully.

Next-gen titles such as The Division and Destiny are strong signals of the dying single-player-only experience, and if the tech is there, as it seems to be, I’m all for it.

Lovely Noise

May 9, 2012

Post-processing can be, and often is, a pretty muddled place when it comes to the realm of creative imaging. Consider, for example, people who are new to Photoshop, and how they tend to apply every effect and filter there is to an image only because they can. Later on, hopefully, it becomes clear that not every photo needs a massive amount of lens flares and other Photoshop trademarks to justify its existence. What’s worse, the extensive use of filters, especially the gimmicky ones, is oftentimes carried out to mask the deficiencies of the original imagery, which is, of course, misguided and abusive behavior towards any visual piece.

Not always, though. If there’s one post-processing effect I’m okay with for the job described above, it’s noise, or film grain, if you will. I find noise as a visual effect extremely fascinating and eye-pleasing in the real-time context, as long as there’s at least some kind of rationale behind the effect and a certain subtlety to it, which obviously applies to post-processing in general.

One such example is Mass Effect 2 (2010), which shows how simple noise can be an elegant and yet powerful post-processing effect at the same time. The noise quite nicely disrupts the otherwise clean and sterile surface that we have come to expect from a modern synthetic image, and it’s in fact something of an antithesis of the digital medium, which is generally free from such phenomena, in contrast to film, for instance. And, as said, the subtle noise in ME 2 hides, or rather distracts from, the minor problems in the image, like those related to filtering, anti-aliasing and such. In addition, the noise makes the visuals in a way more lively and, to an extent, more coherent.
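The basic recipe behind the effect is simple: generate fresh pseudo-random noise every frame and add a scaled copy of it to the image. Here’s a minimal Python sketch of the idea – the function name and the strength value are purely illustrative, not anything from an actual engine:

```python
import random

def apply_film_grain(pixels, strength=0.04, seed=None):
    """Overlay subtle, per-pixel noise on a frame.

    `pixels` is a flat list of grayscale values in [0, 1], and
    `strength` caps how far the grain can push each value.
    Regenerating the noise on every frame is what gives the
    animated, film-like shimmer.
    """
    rng = random.Random(seed)
    out = []
    for p in pixels:
        grain = (rng.random() - 0.5) * 2.0 * strength  # in [-strength, strength]
        out.append(min(1.0, max(0.0, p + grain)))      # clamp to valid range
    return out
```

Keeping the strength low is the whole trick: the grain should read as texture, not as damage.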

On the other hand, if more games used the effect in question, I probably wouldn’t care that much about it. I believe it’s indeed in part the curiosity of the effect that fascinates me, especially given that this type of pixel-sharp noise is virtually absent from modern digital imagery at large, specifically when it comes to video. This is due to the compression algorithms involved, such as MPEG, which often get rid of the subtle noise the original, uncompressed imagery may have had. Funnily enough, the high-resolution, 60 frames per second noise of ME 2 therefore registers as something of a luxury to me, even though noise is generally perceived as an unwanted visual artifact.

The only thing that bothers me with the ME 2 noise is that BioWare didn’t have the balls to fully embrace the effect as a genuine artistic decision, ending up making it optional. Furthermore, in Mass Effect 3 (2012), the noise was just gone, so I guess in the end people didn’t like it that much.

Well, I did.

Extra Medium

April 6, 2011

When talking about the concept of simulation, we are always dealing with a highly idealized model of reality – otherwise, it wouldn’t be a simulation. Simply put, the logic of how reality behaves as a whole is far too convoluted to be fully understood and, as a result, fully replicated in a model. A simulation is, by definition, an inferior (i.e. simpler, cheaper, more practical) construction of its original referent, and as such, an instrument for experimentation and play. But most importantly, for play.

So even the most advanced scientific simulations today fall short of replicating reality as it is, and commercial simulations like video games must compromise the modeling even further. Of course, we have come a long way from abstract Lego-sized pixels to relatively credible visual depictions, but the gap between reality and simulation is there – and always will be. The question is: to what extent do we notice that gap, and what can we do about it?

On that note, I remember back in 1996 sometimes putting Grand Prix 2 into replay mode and then squinting my eyes so that my vision blurred enough to make the imagery look more or less photorealistic. I did acknowledge the stupidity of that exercise, but it nevertheless made me realize that convincing “synthetic realism” in real time was indeed possible, even if one had to alter one’s perception to achieve it.

This is the reason why I sometimes find off-screen YouTube gameplay footage fascinating, since the camera (especially when shaky) adds, in a way, an extra layer of realness to imagery that may otherwise be too crisp and sterile. Furthermore, 60 fps footage produces a cool motion blur-like effect when filmed at 30 fps or lower.
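That blur falls out of simple arithmetic: a camera running at half the frame rate effectively integrates light over two rendered frames, so fast-moving objects smear across the captured image. A toy Python sketch of the idea, with frames reduced to lists of pixel intensities and all names my own:

```python
def capture_at_half_rate(frames):
    """Average consecutive pairs of 60 fps frames into 30 fps frames.

    Each frame is a list of pixel intensities in [0, 1]. Averaging a
    pair is a crude stand-in for the exposure-time integration a real
    camera performs, which is roughly what produces the motion
    blur-like smearing in off-screen footage.
    """
    captured = []
    for i in range(0, len(frames) - 1, 2):
        a, b = frames[i], frames[i + 1]
        captured.append([(x + y) / 2.0 for x, y in zip(a, b)])
    return captured
```

The faster something moves between two source frames, the stronger the smear in the averaged result – free motion blur, courtesy of the recording medium.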

So even though I generally dislike over-the-top post-processing effects, sometimes they can create interesting results for the reasons presented above. For instance, in Call of Duty 4: Modern Warfare the player can turn on certain post-processing effects in the cheats menu that make the game look like over-saturated black-and-white photography, consequently decreasing the gap between simulation and reality to some degree.

And who could forget the notorious Death From Above scene from the same game, whose “thermal imaging” was almost indistinguishable from the real thing. The realness of the scene was very much due to the heavy noise and ghosting effects that masked the deficiencies inherent to real-time imagery, and thus made it appear more real.

It seems that an additional medium on top of the real-time imagery can really push the (photo)realism further, at least to a certain extent. I’m not sure if the whole game should be carried out this way – it could get exhausting fast (see Kane & Lynch 2: Dog Days) – but as the Death From Above scene proved, highly “mediumized” real-time imagery can work really well in small doses.

At the end of the day, I believe the Death From Above scene had the deepest impact on most of us out of the Call of Duty games at large, simply because it just looked so real. And even without the need to squint one’s eyes.

Drawing The Dark Knight

July 19, 2010

Rocksteady Studios’ Batman: Arkham Asylum seems to be (to my knowledge) one of the first games to address the ethical problem that comes with a high body count while you, the player, are the supposed good guy. The fact of the matter is, in this game you don’t kill anyone, you just knock them out, including the main villain. Even when you push a guy into an endless-looking abyss, you hear a reassuring splash of water whispering soothingly into your ear: “He’ll be okay, don’t you worry about that, big boy.” But that’s a minor and more or less trivial detail of the game.

A more interesting detail is that B:AA isn’t visually anything special… until you hit the pause screen. When you do, the game renders the screen as if it’s been taken straight from Sin City or the like. The effect is eye-meltingly good and makes you wonder what the whole game would have looked like rendered that way. Now the pause screen is like a nod to the player: “Yes, we could have done it this way, but we didn’t. Sorry.”

Ok, the effect in question is a bit extreme, and perhaps it would have rendered the game unplayable by making it too hard for the player to make sense of. This actually comes back to the problem of the freedom the player has in video games, in contrast to non-dynamic media like movies and graphic novels. When working with the older media, the director or artist has absolute control over what the spectator sees and hears, which enables highly stylistic and even abstract ways of depicting things while still keeping the viewer on board with what’s happening. Certain stylistic scenes in Samurai Jack are good examples of that.

I’m a big fan of cel shading, and I consider it one of the biggest breakthroughs in the history of the real-time image, especially the outline effect. I truly think there’s something magical about seeing something like B:AA’s pause screen being drawn in real time, knowing that you can affect the outcome by rotating the camera or adjusting the character’s position. It’s like having a personal Frank Miller at your disposal, but not quite.
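Under the hood, the look rests on two tricks: lighting quantized into a few flat bands, and outlines drawn where the depth (or normal) buffer jumps between neighboring pixels. A minimal Python sketch of both halves – the band count and threshold are made-up illustrative values, not anything from a real renderer:

```python
def cel_shade(intensity, bands=3):
    """Quantize a continuous lighting intensity in [0, 1] into a few
    flat tones -- the banded-shading half of the cel-shaded look."""
    if intensity >= 1.0:
        return 1.0
    step = 1.0 / bands
    return int(intensity / step) * step  # snap down to the band floor

def outline_mask(depth, threshold=0.1):
    """Mark pixels where the depth buffer jumps sharply against the
    right or bottom neighbor -- a minimal edge detector for the ink
    outlines. Real implementations also test surface normals, but
    the idea is the same."""
    h, w = len(depth), len(depth[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(depth[y][x] - depth[ny][nx]) > threshold:
                    mask[y][x] = True
    return mask
```

Run per frame on the live depth buffer, this is exactly why the outlines follow along when you rotate the camera: they are recomputed from scratch every time the view changes.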

Madworld, Borderlands, and not to mention cel shading granddaddy Jet Set Radio, have already proven that a stylistic rendering method can work and add a lot to the experience. Would cel shading have made B:AA a better game, then? Perhaps not, but it would definitely have made it a more interesting-looking game, that’s for sure.

Let There Be Less Focus

July 5, 2010

One of the biggest problems in the simulation of depth in real-time graphics is that everything is in focus by default, which is a pretty unnatural state of affairs considering our eyes’ limited ability to focus on objects at various depths. Even the newly hyped stereoscopic technology, which I very much dislike, doesn’t address the problem of depth of field (DOF), because one is still staring at flat images, only at two instead of one. At the end of the day, completely sharp images, especially when there’s stuff very close in front of you, don’t bear much depth without the DOF effect, which is why it has to be created artificially, i.e. simulated, by post-processing when necessary.

On that note, earlier I gave a bit of flak to Codemasters’ Operation Flashpoint: Dragon Rising for its over-the-top use of the HDR effect, but what I didn’t mention was the one thing OF:DR really does right, which is the DOF effect. It is spot on. What makes the effect so nice is that there is this constant and subtle DOF effect going on all the time, not only when the player is looking through the sights, as it often is with first-person shooters. I really can’t overstate the massive impact the constant DOF effect has on OF:DR’s visual landscape, and it makes me wonder why the effect isn’t used more often in other games. I’m sure this kind of DOF would have perfected, for instance, Modern Warfare 2’s already polished look. Ok, I don’t know exactly what the performance hit would be with the effect constantly on, but I suspect performance wouldn’t be a major issue. After all, we are talking about a rather straightforward post-processing effect here, and OF:DR handles it fine.

Furthermore, the constant DOF effect, besides providing an enhanced feeling of depth, conveniently blurs the parts of the weapon that are extremely close to the camera and would otherwise present particular weapon textures in an unflattering light, so to speak. As you can see, MW 2 clearly (pun intended) suffers from this with some of the weapons.

Interestingly, I would consider the first-person scene from the Doom movie a benchmark for how a first-person view should look in terms of DOF and overall feel, including the movement. Yes, I do acknowledge that, judging by the screenshots and gameplay videos, Guerrilla Games’ Killzone 2 (and 3, for that matter) does a pretty phenomenal job with constant DOF layering and everything, but I’m a little hesitant to comment on that since I haven’t played it personally and I don’t have access to a copy. For the record, gun loading and clip changing look awfully similar in the Doom movie (2005) and Killzone 2 (2009).

Artificial depth of field is a complicated thing to pull off, because in real life we have the freedom to focus our sight wherever we want along the z-axis. And because a flat image lacks the z-axis altogether, that freedom vanishes, and the developer has to choose the focus for you when using the DOF effect. But in some cases it works great and adds a lot to the overall immersion.
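In a typical post-process implementation, that chosen focus boils down to a per-pixel blur amount derived from the depth buffer: pixels on the focal plane stay sharp, and the blur grows the further a pixel sits from it. A hedged Python sketch of that mapping – the linear falloff and all parameter names are my own simplification, not any particular engine’s model:

```python
def blur_radius(pixel_depth, focus_depth, aperture=2.0, max_radius=8.0):
    """Map a pixel's depth to a blur radius in pixels (its "circle
    of confusion"). Pixels at `focus_depth` stay perfectly sharp;
    the radius grows with distance from the focal plane, scaled by
    `aperture` and capped at `max_radius` to bound the blur cost."""
    radius = aperture * abs(pixel_depth - focus_depth)
    return min(radius, max_radius)
```

A constant DOF effect like OF:DR’s just keeps evaluating this for every pixel, every frame, with the focal plane glued to whatever the player is looking at.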

Brown Is The New Black

June 18, 2010

I recently played through Doom 2 on the Ultra-Violence difficulty level. It wasn’t the easiest task, but it was the most pleasurable gaming experience I’ve had in a long time, I can tell you that. This playthrough was one of those rare cases when I wasn’t playing a Doom game with cheats on, and it strikes me how different a game can be with a little bit of challenge (really?). At the same time, it saddens me how I spoiled the game with cheats when it first came out back in 1994. But that’s beside the point.

My point is, look at how brown Doom 2 is. It’s far browner than its predecessor Doom ever was, or any of its contemporaries. This got me thinking: was Doom 2 the very first of the so-called “brown games”? The thing is, there has lately been this tendency to see brown as the dominant color in every other game, the most iconic example being Gears of War. Why is that? Some say, in GoW’s case, it’s the Unreal Engine 3, but engines generally don’t make such artistic decisions. People do.

Color management in visual arts is a tricky business, and it can get frustrating quickly. All those different colors that don’t match… what to do, what to do…? One obvious solution is color grading, that is, tinting the whole color scheme with one particular shade. That’s an easy and effective way to create a unified and coherent visual look, no matter what the underlying colors are; just take a look at The Matrix.
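At its crudest, this kind of grading is just a per-pixel interpolation toward the chosen tint. A toy Python sketch – the brown value and all the names are made up for illustration:

```python
def grade(pixel, tint, amount=0.3):
    """Pull an (r, g, b) pixel toward a single tint color.

    `amount` = 0.0 leaves the frame untouched and 1.0 replaces it
    with the tint outright; values in between give the unified cast
    described above, whatever the underlying colors were.
    """
    return tuple(c + (t - c) * amount for c, t in zip(pixel, tint))
```

Apply it to every pixel of every frame and the mismatched palette problem goes away – along with most of the palette.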

So, I believe Doom 2 is a victim of this easy way out. But why brown? Let me ask you a question: what color do you end up with when mixing complementary colors?

Brown. It’s like a meta-color! Brown is the least decisive color, a compromise, a safe bet, and on paper it should irritate people the least. At least that’s how they must be thinking:

“Guys, we need a unifying color to save this mess, now!

– How about Red?

Not everyone loves red!

– Blue?

Same thing!

– Brown?

Is it even a color?

– Yes, but it’s the least color-y color.

Then brown that is!”

I bet this kind of dialogue has been taking place at Codemasters lately (Race Driver: GRID, DIRT, Operation Flashpoint: Dragon Rising, the upcoming Formula 1 game). Boys and girls at NeoGAF refer to Codemasters’ current visual look as a “piss filter”.

But why was Doom 2 in particular so brown? This is just speculation: Doom 2 was a cash-in release to be sold at retail to complement shareware Doom’s sales figures (which, however, were great on their own), so the passion and creative fury that were present in the making of the original Doom just weren’t there anymore. Still, Doom 2 had to look different from Doom, so the brown look was the way to go.

For Your Eyes Only

April 24, 2010

Simulating human sight in video games is a tricky business. Usually it’s done by replicating features of human perception that can also be found in a camera: motion blur, depth of field, exposure control and so on. One particular effect that is very popular at the moment is the so-called HDR (High Dynamic Range) effect, or “bloom”, with which it is often erroneously confused. If you have played Battlefield: Bad Company 2, you know exactly what I’m talking about. In BC 2 the effect is so hideous and extreme that I had to uninstall the game partly because of it. Furthermore, there are two other games in which the effect is almost equally unbearable but which, however, don’t render the whole game unplayable: Techland’s Call of Juarez: Bound in Blood and Codemasters’ Operation Flashpoint: Dragon Rising. The screenshots below don’t really do the effect “justice”, since it’s even more irritating in motion.

The effect is supposed to simulate the human eye’s finite capacity to handle different volumes of light. The eye has this thing known as the iris, which regulates the amount of light hitting the retina in order to avoid over- or under-exposure. The eye is constantly adapting to new lighting conditions with a little delay, and this is what the HDR effect has come to simulate.
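That delayed adaptation is commonly modeled as an exponential lag: each frame, the adapted exposure moves a fraction of the way toward the scene’s current luminance, so a sudden change in brightness is caught up with gradually rather than instantly. A minimal Python sketch, where the `speed` constant is an assumed illustrative value:

```python
import math

def adapt_exposure(adapted, scene, dt, speed=1.5):
    """Move the viewer's adapted luminance toward the scene's current
    luminance with an exponential lag, mimicking the iris's delayed
    response. `speed` sets how fast the eye settles, and `dt` is the
    frame time in seconds."""
    return scene + (adapted - scene) * math.exp(-speed * dt)
```

Whether the result is subtle (Modern Warfare) or blinding (Bad Company 2) is entirely down to how aggressively constants like `speed` and the exposure range are tuned.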

However, it seems that the HDR effect, and post-processing effects by and large, have become masking devices for a game’s graphical deficiencies and shortcomings, which is really sad. I can imagine how easy and tempting it is to just crank up the post-processing effects when your game’s visual landscape doesn’t deliver otherwise. But when the effect is done right, it really is a valuable addition to the experience. In Call of Duty 4: Modern Warfare the effect is so subtle you barely notice it, but when you do, it works every time. Likewise, Crysis does the HDR effect exactly right, and the way Crysis handles all its other post-processing effects should be made mandatory study for every developer making video games.

The final verdict: do it, but don’t overdo it.