Double Buffered

A Programmer’s View of Game Design, Development, and Culture

Heavy Rain: Thank You For Supporting Interactive Drama

Posted by Ben Zeigler on April 19, 2010

I started playing Heavy Rain at around 7pm on a Saturday a few weeks ago, and finished it at 1am Monday morning. The thing was damn compelling, though it’s certainly heavily flawed. The walking controls suck, a few of the voice actors were pretty painful (looking at you, Lauren and Sean), and the aesthetic was uniformly European and foreign to the purported setting of quasi-Philadelphia, which apparently has two non-white people total, unlike real-world Philadelphia, a majority-minority city.

Why did Heavy Rain work for me? The key scenes for me came late in the game where I was being told, both by the game mechanics themselves and a driving character in the plot, to do things that my player character would never do. I only had a few seconds to make a decision, and damn if I didn’t start sweating a bit. I knew I wouldn’t be able to redo the decision if it turned out poorly, because the game makes it just difficult enough to restore old saves. In every thriller I’ve seen and every game I’ve ever played the hero would have gone one way, but I was going to try the other, unexplored path.

That particular decision ended up being very satisfying, as I was able to solve a puzzle using some subtle clues, and I avoided a fairly horrible fate. The simple existence of those decisions, and several others, is why Heavy Rain succeeds as an extremely inventive form of truly interactive drama. Sure, it’s just a choose your own adventure book, but at least in my playthrough it was a very consistent, well thought out, and challenging one. It also has the best implementation I’ve seen so far of quick time events, as the analog nature of the inputs really helps bring across subtle details and draw you into the character’s motivations.

Heavy Rain really reminded me of why I disliked Uncharted 2 so much. Uncharted 2 put tons of effort into recreating the film experience (even doing full performance capture) in real time, but what was the point? At no point were you able to influence the important actions of your characters. Uncharted 2 is two only loosely connected halves: a game and a movie. Frankly I’ve played better games and seen better movies. Heavy Rain, on the other hand, really is the full merging of Game and Movie, because your choices are strongly informed by the narrative, and likewise dramatically influence it in return.

Posted in Game Design, Uncategorized | Comments Off

Game Development vs. the Stock Market

Posted by Ben Zeigler on April 11, 2010

I read a pretty businessy article at Edge Online today, about the fact that there are no longer any publicly held game development studios in England, and how there are virtually none worldwide. The article talks a lot about why this is, but I think the best answer is given in the last paragraph, as a quote from the CEO of Climax: “I am very glad we didn’t list Climax now. There is an inherent tension between the short-term view of capital markets and the long-term nature of game development”. If you look around at the most successful game development studios in the industry you’ll see one of two patterns: completely private ownership like Valve, or elite studios inside very large public publishers that are mostly shielded from quarter-to-quarter financial issues (Blizzard appears to still be immune).

The current Activision vs. Infinity Ward debacle is a superb example of the direct conflict between short term and long term thinking. The heads of Infinity Ward, at least as far as I can tell from the rumor mill, really wanted to try and work on a new franchise, which would possibly fill an important long term goal and keep employee morale high. Activision corporate can’t deal with that kind of thinking, because they have to worry about what’s happening in the short term. Activision is looking about 2 years out (because that’s really the minimum dev cycle possible) because when you’re a publicly held company without incredibly strong leadership and market position that’s the most you can look forward before you get fired by your board of directors or large shareholders.

Game industry stocks are basically treated like Tech industry stocks, and what are the shareholders looking for? They want Growth, and that’s all they care about. They need the revenue numbers to keep getting bigger and bigger, because that means the company is more valuable, and they can resell with a profit. Dividends, which work to encourage long-term holding of stocks, are largely absent in the tech sector because of a lack of profits, and investors no longer look for them. For most of the people who would buy the public stock of a game publisher, any actions the management team takes that take even 5 years to bear fruit are 100% worthless.

This is why I wouldn’t feel too secure if I were John Riccitiello right now. He’s tried to renovate the image and developer prestige of EA, and these actions should bear fruit in the years to come. I feel like he’s done a good job of turning around EA as a long-term going concern. But that won’t be enough for him, because EA’s revenue is down a bit over the last year.

A public corporation is explicitly chartered to do what’s good for its stockholders. But what’s good for the stockholder is NOT good for the company, the employees, or the industry as a whole. Google can get away with largely ignoring the stock analysts, but not the smaller publishers or independent publicly owned developers. They HAVE to live quarter to quarter because that’s what their fickle owners demand. And working to maximize your revenue for the next fiscal year is simply not what game development is about. If you want to know why a certain company doesn’t seem to really care about its reputation or long term prospects, it’s because the very structure of the corporate funding model doesn’t let them, and they’re not talented or willing enough to resist.

Posted in Game Development | Tagged: , , , | 4 Comments »

GDC 2010: How to Honestly Lie To Your Players

Posted by Ben Zeigler on March 25, 2010

Looking back at this year’s GDC there was a single thread running through most of the sessions I attended: Deceiving the player. Both Sid Meier and Rob Pardo explicitly told us to lie to players about how we calculate random chance, because of the way human psychology interprets probabilities. Chris Zimmerman laid out in detail how to lie to the player about what their hands did. Chris Tector explained how to perform a deeply technical form of lying to build the illusion of a continuous world from streamed chunks.

Sid Meier talked about the “Unholy Alliance” between designer and player, and Ernest Adams talked about the “Tao of Game Design”. On reflection the concepts have much in common: they are about the collaboration between player and game designer to craft a shared experience. From both the player and the designer, a unique mix of deception and trust is required: Suspension of Disbelief.

It can be interesting to compare gaming to another form of popular entertainment: Professional Wrestling. Back in the dark days of carny scams, Pro Wrestling was presented as real with the explicit goal of bilking the consumer. Over the last 30 years or so the deception inherent to Pro Wrestling has shifted: The vast majority of fans are completely aware that it’s all fake and planned, but they don’t care. They are completely willing to suspend their disbelief, and in return become part of the show. The experiences are real even though they’re based on a foundation of deception, and that’s at the core of gaming as well.

So lying to your players with the goal of building a collaborative experience is key to the power of the medium, but where can it go wrong? Ernest clearly talks through the different design philosophies of different types of games and argues that designers should effectively be more truthful the more a game moves away from a conventional Player vs. Environment game. Jaime Griesemer talks about ignoring the literal feedback of players, but never proposes lying to them in a competitive PvP environment. Eskil Steenberg’s whole talk was about moving Procedural Generation away from its long history of deception and towards the front of a game’s design.

Finally, much of GDC was talking about a completely different form of player deception. Soren Johnson’s great blog post lays it all out: game designers are increasingly being asked to lie to players about the very process of playing a game. With a flat fee, subscription, or large chunk DLC model the goal of a designer is honest and out in the open: they want to make the player happy so they will recommend the products to their friends and buy future related products. The goal of a designer in a microtransaction-based game is instead to exploit the second to second emotional weaknesses of their players to sell as many individual bits as possible. But you can’t tell players this, so they brand the games as “free to play” or “social”. This deception (and others such as DRM) doesn’t have anything to do with improving the player’s experience; it is simply about maximizing short term profit at the possible expense of long term credibility.

Update: Just as I originally posted this, Soren’s Twitter feed pointed me to a great post by Frank Lantz on essentially the same topic. He takes a bit of an opposing view in holding that the kind of deception advocated by Sid and Rob can be destructive because it stops players from learning about actual truths in the world. Give it a read.

Posted in Game Design, GDC 2010 | Tagged: , | Comments Off

GDC 2010: Reading the Player’s Mind Through His Thumbs: Inferring Player Intent Through Controller Input

Posted by Ben Zeigler on March 21, 2010

Okay, time for my last set of session notes. Here’s what happened at Reading the Player’s Mind Through His Thumbs: Inferring Player Intent Through Controller Input presented by Chris Zimmerman from Sucker Punch. The main topic of the session was the controls of Infamous (sorry, I refuse to use the stupid capitalization), which I quite enjoyed as a game. It presented a few pretty innovative ideas for interpreting player input, which I’m currently trying to figure out how to apply to various projects I’m working on. Stuff I heard:

Controls Immersion

  • For Infamous one of the goals was to push immersion as far as possible. The objective was to make you feel like you WERE Cole, instead of just controlling Cole. The player should see and hear what Cole sees and hears, and do what Cole does.
  • However, players expect a certain amount of abstraction in their control interfaces. No player wants an accurate simulation of what it’s like to climb a rope, which is actually kind of hard. Direct control just won’t work, both because of the complication of the world and the fact that players are somewhat bad at using controllers. Players want to express their wishes, and feel challenged but also successful.
  • Chris performed an experiment asking players to point a PS3 analog stick at objects in a 3D scene. Players were fairly inaccurate, and their results formed a bell curve: around 70% were within 15 degrees of the target, but not accurate beyond that. A second experiment asked them to push a button when a ball bounces. Players were within 50 ms most of the time, but not any more accurate than that (plus any A/V lag, which is omnipresent).
  • The challenges for Infamous were large: active, small enemies and inexact controllers. The game featured jumping and climbing, and the design goal was that everything that looked climbable had to be climbable. It turns out the city has a lot of things that look climbable.
  • They considered going with a solution where you would lean on a joystick to climb, like in Assassin’s Creed 2. But, they decided that didn’t feel skillful, and would be a problem given the density of the city. So, the goal was to require the player to execute correctly, but to liberally provide aid in the background.

Aiming

  • For aiming, the goal was to make it so the game ALWAYS shot exactly where you were aiming the reticle. So the solution was to help the player move the reticle, rather than leave the reticle fully manual and correct the shot after firing. The game adjusts the “input” based on both the player’s real input and what the game thinks the player is trying to do. The movements have to be physically possible on the controller; it just helps the player out.
  • It takes the controller input and looks for targets that are near the direction the reticle is moving. It will adjust the reticle motion to be directly towards that object if it’s a good enough fit. It also slows down the reticle movement as you are close to the target, to give more time to fire. It also keeps track of where it was and where it’s going as well as button presses, so if the press happens within a certain time frame of passing over the target it counts as on target.
  • If there are no inferred targets the reticle is 100% player controlled. It excludes targets that are too far away, and all targets are treated as a line (because they have height) instead of a point. This means that if you’re heading towards a character’s feet you’ll hit their feet with your attack, instead of magically jumping to their midsection.
  • For evaluating targets, lexicographic scoring is used. It rates the targets on several dimensions in priority order, and only uses lower dimensions if the top ones are equal. For shooting, this means each target has an inner and an outer “egg” region extending out from it. Any hit in an inner egg rates higher than any hit in an outer egg, and within each group the closest target wins (see the rough sketch after this list).
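To make the lexicographic idea concrete, here is a minimal sketch of two-level scoring in C++. The AimCandidate struct, its fields, and the PickAimTarget helper are my own inventions for illustration, not Sucker Punch’s actual code; the real system obviously tracks far more state than this.

    // Hedged sketch of lexicographic target scoring: the inner/outer "egg"
    // region is the first dimension, angular closeness the tie-breaker.
    // All names here are hypothetical.
    #include <algorithm>
    #include <vector>

    struct AimCandidate {
        int   targetId;
        bool  insideInnerEgg;   // reticle heading passes through the inner egg region
        float angularDistance;  // radians between reticle direction and target
    };

    // Lexicographic compare: any inner-egg hit beats any outer-egg hit;
    // within the same group, the target closest to the reticle wins.
    bool BetterCandidate(const AimCandidate& a, const AimCandidate& b) {
        if (a.insideInnerEgg != b.insideInnerEgg)
            return a.insideInnerEgg;                  // dimension 1: egg region
        return a.angularDistance < b.angularDistance; // dimension 2: closeness
    }

    int PickAimTarget(const std::vector<AimCandidate>& candidates) {
        if (candidates.empty())
            return -1; // no inferred target: reticle stays 100% player controlled
        return std::min_element(candidates.begin(), candidates.end(),
                                BetterCandidate)->targetId;
    }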

Jumping and Climbing

  • For jumping the goal was the same as targeting. The game allows air steering while in midair, so the game will only adjust your controller input to match a physically possible controller input. But, jumping is way more complicated because of the larger number of possible targets, the commonality of linear targets such as ledges, and various animation affordance issues. Failing to correctly predict while jumping is worse than when aiming.
  • The basic algorithm looked ahead about 0.75 seconds. There was a set of “illegal” filters it would check for all possible locations, and one preference score. It would first apply the exclusion filters in performance order, running the cheap tests first. After passing all filters it would heuristically score the remaining targets.
  • To score the targets it uses a golden-section one-dimensional optimizer, which was picked for execution speed (a generic sketch of golden-section search follows this list). The scoring function was based on the trajectory relative to the final position. To modify the player input it computes the player’s time to land, and the relationship of the desired target and the current trajectory. It adjusts the player’s stick input to match the desired input. (Note: I missed a bit of the specific math, but there are a variety of ways to rate targets heuristically)
  • Ground landings have to be compared against features. You can’t just trace the feet because they will hit a wall before the ground in many situations (such as jumping up). It instead traces a set of polygons between the head and the feet, and clips them against geo. It stops the search if the polygons hit a segment the player can stand on.
  • The original design didn’t call for wall jumps, but they added them when players ended up hitting walls. When a player hits a wall they can “move” a certain amount or jump off, which ended up feeling more natural. To score wall collisions it traces a parabola at the knees.
  • They ended up doing some modification to running as well, to walk around small objects in the world. Super heroes don’t run into parking meters, why should the player?
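Since the notes above mention a golden-section one-dimensional optimizer, here is a generic, textbook-style sketch of one in C++, as it might be used to search along a ledge for the best landing spot. The scoring function, tolerance, and names are placeholders of mine; the talk did not share the actual implementation.

    // Hedged sketch: golden-section search for the maximum of a unimodal
    // scoring function over a 1D parameter (e.g. position along a ledge).
    #include <cmath>
    #include <functional>

    double GoldenSectionMaximize(const std::function<double(double)>& score,
                                 double lo, double hi, double tolerance = 1e-3) {
        const double invPhi = (std::sqrt(5.0) - 1.0) / 2.0; // ~0.618
        double a = lo, b = hi;
        double c = b - invPhi * (b - a);
        double d = a + invPhi * (b - a);
        while (std::fabs(b - a) > tolerance) {
            if (score(c) > score(d))
                b = d;   // the maximum lies in [a, d]
            else
                a = c;   // the maximum lies in [c, b]
            c = b - invPhi * (b - a);
            d = a + invPhi * (b - a);
        }
        return (a + b) / 2.0; // best parameter found
    }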

Conclusion

  • Why does this sound so complicated? At first they tried stealing from Uncharted, where the jump is computed at take off. But there were too many objects. Then they tried stealing from their own Sly Cooper games, with a “grab” button that, when activated, automatically air steers the player towards the nearest surface.
  • But, the realistic graphics of Infamous brought with them increased expectations. Players were not willing to accept the floatiness of the Sly Cooper solution, which worked just fine in a cartoony world. So, they had to come up with something that matched the world better.

Posted in GDC 2010 | Tagged: , , , | 2 Comments »

GDC 2010: Procedural, There is Nothing Random About it

Posted by Ben Zeigler on March 19, 2010

It’s starting to wind down, but here’s some more notes! These are from the session Procedural, There is Nothing Random About it by Eskil Steenberg. Eskil is working on the indie MMO Love, which is going to go live very shortly, and his talk comes from the perspective of integrating many procedural techniques into his work. It worked well both as an overview of the concept and as an explanation of specific techniques. This talk had a bunch of valuable visual aids (he opened the game live at several points) so these notes are not as useful as a video would be. Sorry. Anyway:

History of Procedural

  • Procedural content generation started as a purely practical pursuit, because many old systems were severely lacking in memory. Games like Rescue on Fractalus and Populous (Eskil said the high selling “Mission Disk” addon was purely a list of random seeds) generated their procedural data in engine, but that solution is fairly pointless in today’s world.  We now have tons of memory and storage space.
  • The next type of procedural content generation is offline generation. One early attempt at this was the Massive crowd simulation tech created for Lord of the Rings. It’s also been used in a variety of modern games such as Far Cry 2 or Eve. This technique is valuable because an imperfect procedural tool can be fixed up in post production to iron out the kinks. This is a valuable way to save time.
  • Last year Eskil told everyone to fire their designers, this year he’s telling everyone to fire their artists. The way you make a good game is to make a bad game and fix it, so you need as fast an iteration as possible. This means you need a super fast art pipeline, and procedural tools are a huge help for this.
  • Ken Levine has said that filmmakers get to make movies while game developers get stuck having to make the camera first. “Ken, I love you but you’re wrong”. Many of the most artistically interesting films have been made by filmmakers who DID make their own camera. Technology is not purely a means to an artistic end, but can in fact inspire new and interesting artistic expressions.
  • Eskil demoed his modeling tool. He showed how it allows artists to make fragments and then use “deploy” to recursively place those objects over any mesh or surface. It’s an example of how you can set it up so artists get to art direct, instead of just making tons of individual custom pieces.
  • In today’s game industry, Art is what is stifling innovation. Design, tech, and innovation are held back by art constraints. Destructible environments are easy, but the high visual requirements mean we can’t do them. “Chris Hecker, I love you but you’re wrong”, there are still interesting tech issues to solve.

Procedural Generation Back In Engine

  • The solution to the issues with the stifling art pipeline is to put procedural generation back into the engine. Ragdoll may not look as good as hand animation, but it reflects the player actions in a stronger way. This feedback and responsiveness is what is missing.
  • How would you procedurally build a labyrinth? You start with a block, carve out a solution, and then add embellishments once you’re sure it works. The traditional way to make a locked house is to make exactly one door that can be opened by exactly one key.  The emphasis is on logically correct structures.
  • But, how about we take a statistical solution? Perhaps we make a house that can be opened in any number of ways. You find a key, and then maybe you find the house. Life is lots of keys and lots of doors, and can be about improvising. Why can’t games be about this kind of improvisation?
  • If Eskil were an assassin, he could pickpocket the entire room and gain hundreds of possibilities. Games can be like that. Instead of enforcing logical consistency, we can build a house with 5 doors, and randomly placed keys. It will be statistically consistent because the odds are functionally 0 to have all 5 keys end up in the house.
  • To build interesting statistically consistent systems you need to take advantage of spatial dependencies. Applying a series of what are basically image filters can handle these relationships. Stochastic sampling is a good place to start.
  • Disney said to Pixar that Pixar would fail because computers can’t understand emotions/wants of consumers. But, the designers of said computers can. If a rule can be taught to a designer it can be taught to a computer.
  • As an example, Eskil had an algorithm to place bridges in his world. At first it made way too many bridges, so he kept refining the algorithm. Instead of just reducing the frequency he made the requirements more strict until he arrived at the best bridge he could think of (a toy sketch of this filtering approach follows this list). The bridges made by his algorithm were more interesting and logical than ones he would have hand placed, because the computer didn’t come into it with any biases.
  • Love is basically a complicated system of hierarchical filters that can construct objects of any type, such as buildings, cliffs, etc. The world is a grid, but subsections of the grid are replaced by custom artist assets as appropriate, so the world ends up not looking like a grid.
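As a toy illustration of the “tighten the requirements instead of lowering the frequency” idea from the bridge example, here is a sketch of a filter chain in C++. Every field name and threshold is invented for the example; this is the shape of the approach, not Love’s actual code.

    // Hypothetical bridge-placement filter chain: candidate sites pass through
    // progressively stricter requirements and only the survivors get bridges.
    #include <vector>

    struct Site {
        float riverWidth;   // how much water the bridge would need to span
        float bankSlope;    // how steep the approach is on each side
        float pathTraffic;  // how often generated paths want to cross here
    };

    std::vector<Site> PlaceBridges(const std::vector<Site>& candidates) {
        std::vector<Site> bridges;
        for (const Site& s : candidates) {
            if (s.riverWidth > 40.0f)  continue; // too wide to span
            if (s.bankSlope  > 0.5f)   continue; // banks too steep to connect
            if (s.pathTraffic < 0.8f)  continue; // nobody would actually cross here
            bridges.push_back(s);                // survives every filter: build it
        }
        return bridges;
    }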

Conclusion

  • Last year, Eskil felt alone. He didn’t share any of the problems of the rest of the industry. The PC is dead, except for Steam (but that doesn’t count). Free to play MMOs are all that matter, except for WoW (that doesn’t count). Eskil doesn’t want to count: that’s when you succeed.
  • Finally, Eskil wants us to all go out and explore. He wants us to say next year “Eskil, I love you but you’re wrong”.

Posted in GDC 2010 | Tagged: , , , , | 3 Comments »

GDC 2010: Streaming Massive Environments from 0 to 200 MPH

Posted by Ben Zeigler on March 17, 2010

Here are my notes for the talk Streaming Massive Environments from 0 to 200 MPH presented by Chris Tector from Turn 10 Studios. He’s listed as a Software Architect there, and obviously has a deep understanding of the streaming system they used on Forza 3. This talk was nice and deep technically, and touches all parts of the spectrum. I did get a bit lost when it got down to the deep GPU tricks, so I may have missed a bit. Anyway, here are the things that I think were probably said:

Overview

  • The requirements are to render the game at a constant 60 FPS, which includes tracks, cars, the UI, crowds, particles, etc. There’s lots to do, so very little CPU is available at runtime for streaming.
  • It has to look good at 0 mph, because that’s where marketing takes screenshots, and where the photo mode is enabled. It also has to look good at 200 mph, because that’s how people play the game.
  • As an example, the Le Mans track is 8.4 miles long, and has 6k models and 3k textures. Loading the entire track would take 200% of the console’s memory JUST for models and textures.
  • Much information can be gathered from the academic area of “Massive Model Visualization”. However, beware that academic “real time” does not equal game real time, because of all the other things a game has to do.

The Pipeline

  • First, the tracks are stored on disk in modified .zip files, using the LZX data format. Tracks take from 90MB to 300MB of space compressed. This data is read off disk in cache-sized blocks. The only actual I/O that is performed is done strictly in order, to avoid the horrible seek times of a DVD.
  • The next stage is the in-memory compressed data cache. The track data is stored in this cache in the same format as on disk. Forza 3 uses 56MB for this cache, and uses a simple Least Recently Used algorithm to push blocks out of the cache (a toy sketch of such a cache follows this list). Each block is 1MB large.
  • The next stage is a decompressed heap in memory. There’s a 360-specific LZX decompressor that runs at 20 MB/sec. They had to optimize the heap heavily to get really fast alloc and free operations. Forza 3 uses 194 MB for this heap and allocates everything aligned and contiguous.
  • The next stage is the GPU/CPU caching layer. They do something semi-tricky for textures. Textures can be present in either Mip 0 (full res), Mip chain (Mip 1 down to 32×32), or Small Texture (a single 32×32 texture) form. There is special 360 support to allow the Mip chain to be split up in different memory locations, so they can stream the Mip 0 in after the rest of the chain and it will display correctly.
  • A few special things happen in the GPU/CPU itself. First, there is NO runtime LOD calculation, as the streaming data gives the correct LOD to show, and they are separate objects in the stream. They did add a basic instancing system to allow a single shader variable. They spent a lot of time optimizing the GPU/CPU for the 360. He mentioned using Command Buffers as much as possible, and spending time right-sizing assets to fit optimal shader use. The 360 has special controls to reduce MIP memory access. (Note: This got a bit too deep for me)
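Here is a toy sketch of the kind of least-recently-used block cache described above, written as generic C++. The class, its names, and the stubbed disk read are my own assumptions; Forza’s real cache almost certainly manages its 1MB blocks and 56MB budget with far more care.

    // Hedged sketch: fixed budget of compressed blocks with LRU eviction.
    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <vector>

    class CompressedBlockCache {
    public:
        explicit CompressedBlockCache(size_t maxBlocks) : maxBlocks_(maxBlocks) {}

        // Returns the compressed block, loading it on a miss and evicting the
        // least recently used block if the cache is already full.
        const std::vector<uint8_t>& Get(uint32_t blockId) {
            auto it = blocks_.find(blockId);
            if (it != blocks_.end()) {
                lru_.splice(lru_.begin(), lru_, it->second.lruPos); // mark as recent
                return it->second.data;
            }
            if (blocks_.size() >= maxBlocks_) {
                uint32_t victim = lru_.back(); // least recently used block
                lru_.pop_back();
                blocks_.erase(victim);
            }
            lru_.push_front(blockId);
            Entry& e = blocks_[blockId];
            e.data = ReadBlockFromDisk(blockId);
            e.lruPos = lru_.begin();
            return e.data;
        }

    private:
        struct Entry {
            std::vector<uint8_t> data;
            std::list<uint32_t>::iterator lruPos;
        };

        // Stand-in for the strictly-in-order DVD read; returns an empty 1MB block.
        std::vector<uint8_t> ReadBlockFromDisk(uint32_t /*blockId*/) {
            return std::vector<uint8_t>(1 << 20);
        }

        size_t maxBlocks_;
        std::list<uint32_t> lru_;                   // front = most recently used
        std::unordered_map<uint32_t, Entry> blocks_;
    };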

Computing Visibility

  • Many projects use conservative occlusion to determine visibility, often because it can run in real time. However, Forza does per-pixel occlusion in an extensive pre-process step. It uses depth buffer rejection to figure out what’s occluded. It also does the LoD calculation at this point, and will exclude any objects that aren’t big enough to be visible (contribution rejection). Many games do LoD and contribution rejection at runtime, but the data set is huge so they end up having horrible cache performance. (Note: I asked later, and this whole process takes up to 8 hours offline, for a very large track)
  • First step in the process is to Sample the visibility information. The tracks have an inner and outer spline that defines the “Active” area, so the sampler picks a set of points inside those splines (and maps them to a grid relative to the center spline). At this point it creates “zones” which are chunks of track.
  • To actually sample at each point, it uses a constant height and 4 angle views, relative to track direction. Visibility results for each point are also applied to the adjacent points, because an object comes into view at some undefined spot between two sample points.
  • The engine then renders all of the models that are plausible (without textures). It then runs a D3D occlusion query to see what and how much is visible. Each model keeps track of its object ID, camera location, and pixels visible. The LoD calculation happens at this point, as it uses the distance info. It can do LoD, Occlusion, and Contribution in a single pass, after 2 renders, so it’s a fairly quick individual operation. It then keeps track of the pixel count of each object in a zone, as opposed to just a binary yes/no for visibility.
  • After sampling, a Splitting process takes place. Many of the artist-placed objects are extremely large in their source data, to avoid seams and such. So, it will break these large objects up and cluster smaller objects together into single draw calls. Instancing breaks the clustering, so artists have to be careful.
  • The next step is the Building process. At this point it maps the textures on to the models. There’s a pass that aggressively removes duplicate textures. It looks for renamed textures, exact copies, and MIP parent/child relationships and will combine them as necessary. It also computes the 32×32 “small textures” at this point. The small textures for an entire track are put into a separate chunk and are preloaded for the entire track. This chunk is from 20-60 MB depending on track and is the only track data that is preloaded. This is so when the low LoD for an object is up and running, it will at least be colored correctly.
  • Optimization is the next phase and is somewhat complicated. For each zone it finds the models and textures used in that zone as well as the two adjacent zones. It finds the models and then the textures, and sorts them by number of pixels visible. For any textures that are unneeded it does the trivial reduction first (if a texture is never seen larger than 32×32 it is stripped entirely; if Mip 0 is never needed, Mip 0 is stripped).
  • It then does two memory reduction passes. First, it has to lower the total number of models/textures loaded in a zone to fit within the decompressed heap. It removes models/textures as required, starting with the least pixels visible (see the rough sketch after this list). After that it computes a delta of models/textures relative to the zone before and after it. The delta has to be lower than the available streaming bandwidth, so it strips assets for that reason as well.
  • Once it computes the set of assets in a chunk, it has to package them. It places them in a cache efficient order and places the objects in “first seen” order. Objects that are frequently used end up near the front of the package and will stay in memory throughout, while objects for later in the track are farther back.
  • The last step is Runtime. The runtime code is responsible for keeping track of what individual objects to create and destroy, based on the zone descriptions. It could do a reference count, but it instead does a simple consolidate where it frees everything first. This reduces fragmentation.
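For the memory-reduction pass mentioned above, here is a rough sketch of sorting a zone’s assets by visible pixel count and dropping the least visible until the zone fits in the decompressed heap. All names, types, and the budget parameter are placeholders of mine, not Turn 10’s tool code.

    // Hedged sketch of the per-zone heap-budget pass.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct ZoneAsset {
        int    id;
        size_t bytes;          // decompressed size of the model or texture
        int    pixelsVisible;  // from the offline occlusion queries
    };

    void FitZoneToHeap(std::vector<ZoneAsset>& assets, size_t heapBudgetBytes) {
        // Most visible first, so the tail holds the cheapest things to lose.
        std::sort(assets.begin(), assets.end(),
                  [](const ZoneAsset& a, const ZoneAsset& b) {
                      return a.pixelsVisible > b.pixelsVisible;
                  });
        size_t total = 0;
        for (const ZoneAsset& a : assets) total += a.bytes;
        while (total > heapBudgetBytes && !assets.empty()) {
            total -= assets.back().bytes; // strip the least visible asset
            assets.pop_back();
        }
    }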

Summary

  • The keys to the Forza system are work ordering, heap efficiency, decompression efficiency, and disk efficiency. Each level of the data pipeline is critical, and anything that can be done to improve a level is worth doing. Don’t over-specialize on a particular aspect of the pipeline.
  • System isn’t perfect. Popping can happen either due to late arrival caused by memory/disk bandwidth not keeping up, or it can be caused by visibility errors. They did have to relax some of their visibility constraints eventually, because certain types of textures threw off the calculation. They provided artists with a manual knob that can tweak an individual object to be more visible at the expense of possibly showing up late. Finally, you have to deal with unrealistic expectations.
  • For future games, Chris had a few ideas. First, he would like to expand the system to work for non-linear environments. This would entail replacing linear zones with 3d zones, but would allow open world racing. There are probably more efficient forms of domain specific decompression that could up the decompression bandwidth. The system could do texture transcoding. It should be expanded to add another layer on top of the disk cache: network streaming (Note: Trust me when I say that’s a whole other lecture by itself)

Posted in Game Design, GDC 2010 | Tagged: , , , | 3 Comments »

GDC 2010: Single-Player, Multiplayer, MMOG: Design Psychologies for Different Social Contexts

Posted by Ben Zeigler on March 17, 2010

Here are the notes I have for Single-Player, Multiplayer, MMOG: Design Psychologies for Different Social Contexts as presented by Ernest Adams. Ernest has a long history of writing about and teaching game design, although primarily for single player games. Roughly, this talk is about him extending his previous concepts to encompass multiplayer games, with varying success. It works as a good overview of how social context affects design, but Ernest is a BIT out of date with the MMO world, as he himself admits. Blah blah, any transcription mistakes are purely my own.

Ernest’s General Philosophy

  • Intellectual pursuits can be vaguely separated into deductive (which he described as English) or inductive (French) thinking. The Classic or Romantic contexts. Game design basically straddles the line perfectly, and is a Craft instead of an Art or a Science. DaVinci should be our idol.
  • But game developers aren’t really very good at their craft. They kill 2/3 of projects they start. They never seem to think through the final goal, and generally lack a philosophical direction.
  • Player-Centric design is a solution to this. A designer must imagine a single, idealized player. The goal of a designer is to entertain them, and to empathize with them. The designer has a responsibility to think about how their game will make a player feel.
  • The Tao of Game Design is the model Ernest uses to describe the relationship between player and designer. They are collaborating to create an experience, and neither would exist without the other. Each has the other inside of them, as far as trying to build a mental model.
  • But, Ernest says this model is incorrect, because it specifies a singular player. Ernest said he was falling into a bias of writing about games he likes to play and create: single player games.

Player Versus Environment

  • The first type of game is PvE, which is not exactly the same as single player. A strictly cooperative game can be closer to PvE, and a single player game with a simulated AI player (such as football) is not PvE either.
  • In a PvE game, the designer’s job is to design interactions. It’s vital for the designer to maintain a fairness throughout. Difficulty spikes, learn-by-death, stalemates, insufficient information for critical decisions, and expecting outside information can all violate the player-designer pact and pull the player out of the game.
  • The relationship between player and designer is very intimate, and according to Ernest these kind of games can be Art (with a capital A) because they really have the concept of an artist.

Player Versus Player

  • In a pure PvP design, the job of a designer is to do competition design. The goal is to enable the fun that comes out of players interacting with each other, not over designing and trying to force the fun into the system. Fairness is fairly simple, and involves making sure that everyone has an equal start and can’t cheat.
  • Instead of a designer collaborating with a player, a designer is creating a system in which players will exist. Basically, a PvP designer is more of an Architect than an Artist. You can try to make all the rules you want, but players will add their own rules to the system.

Massively-Multiplayer Online

  • Ernest talked a bit about how he worked on one of the first online games, Rabbit Jack’s Casino at AOL. It was pay by the minute, so Ernest feels it kept him extremely honest as a designer. Everyone seemed really nice. If he didn’t keep the player engaged they would just leave. (Note: a cynical view here is that if he didn’t keep them psychologically addicted they would quit)
  • His recent MMO experience was to jump into Second Life, which was a very lackluster experience. Everyone was extremely rude to him, the game took forever to load, and it felt very unfamiliar. (Note: Yeah, that’s Second Life. Which is not a game.)
  • Designing fairness is basically impossible, as the starts are inherently uneven. The best anyone seemed to figure out was things like Raph Koster’s Laws. These laws are based on empirical evidence from existing communities, and tend to be about SURVIVING an online game, not having fun. Baron’s Law says that Hate is good because it brings people together. As long as Raph’s laws are true, MMOs will suck for the vast majority of potential players. (Note: Many of Raph’s laws are super cynical and really don’t apply to newer designs like World of Warcraft. Which is kind of why it’s successful.)
  • As an MMO designer, it’s about servicing a cloud of players, who really won’t care about you until you screw up. Your job is to be a social engineer.

Free to Play MMO

  • Much of Ernest’s material for this section is based on slides from a presentation Zhan Ye gave at Virtual Goods Summit 2009. That presentation is from the perspective of someone from the Chinese free to play MMO industry giving advice to western developers.
  • In a pay-per-time-period MMO, the only goal of individual features is to increase fun and general engagement, because specific actions are not monetized. However, in a Free to Play (i.e., not free at all) MMO the design goal ends up being to maximize revenue from specific actions. Every feature in an F2P game must directly add revenue, or do so secondarily.
  • Fairness is no longer a goal at all, because it doesn’t help revenue. Instead, the goal is to create drama, love, and other elements of the real world. These elements will spur people to purchase items. The larger the advantage an item provides, the more likely a player is to buy it.
  • As a result, in the first generation of successful Chinese F2P games, rich players would buy all the weapons and then use them to kill all the poor players. This ended up being too unbalanced, as all the poor players would immediately quit and not provide the player base needed to keep the rich players buying items.
  • So, the solution in the Chinese F2P community is to set up a series of family clans that will hire poorer players to fight for them. They would use gifts, threats, and extortion to control the poorer players. In other words, form in game criminal cartels.
  • Most successful items are based explicitly on exploiting human emotions. “Little Trumpet” is an item that can be purchased and used to publicly humiliate another player. That player can then pay money to have that curse removed, and is very likely to do so due to emotional distress.
  • Zhan compares F2P games to Las Vegas, but Ernest says they are worse because in Las Vegas you at least have the chance to make real money. F2P uses all the same psychological hooks of a slot machine, but with 0 chance of winning.
  • Ernest believes that these games are in fact evil. The designer has set up a system that explicitly subsidizes real hatred, because there is no such thing as virtual hatred. If a game is set up to incentivize players to inflict emotional harm, that game is evil.
  • There are two solutions to this problem. The first is to NOT make your game zero sum, and remove competition (Note: So Farmville is not evil in this SPECIFIC way, as it does not encourage hate). The other option is to institute various methods to restrict it to competition instead of hatred and destruction: something like the NFL salary cap, as opposed to the America’s Cup or F1, where the richest always wins.
  • In F2P the designer’s goal is to be an economist. They still need to entertain the players, but empathizing with them is strictly bad business. If these games continue on this path, Ernest asks that we shoot him.
  • In conclusion, the craft of game design is fragmenting, there is no longer a single unified philosophy.

Note: As a focused response, I found his discussion of F2P MMOs very interesting, although I think he restricts it a bit too much to that genre. I would expand it a bit, because hatred can happen in PvP or MMO environments just as easily. For instance, take your typical 360 shooter populated by teenagers: they clearly want to inflict emotional harm and there is nothing in the game systems to help ameliorate that. But I can definitely stand behind his basic conclusion: developing games that prey on the emotional weaknesses of players is basically evil, and F2P games are much more likely to incentivize such decisions because of the focus on revenue over empathy.

Posted in Game Development, GDC 2010 | Tagged: , , , | 5 Comments »

 