Double Buffered

A Programmer’s View of Game Design, Development, and Culture

Posts Tagged ‘gdc’

GDC 2011: Dynamics, The State of the Art

Posted by Ben Zeigler on March 15, 2011

The first session I attended at GDC 2011 was Dynamics: The State of the Art presented by Clint Hocking, newly of LucasArts. In broad overview, the talk was another attempt to answer the question “What/How Do Games Mean?”, and posits Dynamics as the base unit of meaning within the medium of video games as a whole. Overall it was a great talk, and it was a pleasure to see Clint speak for the first time. Like always, any mistakes are purely my own.

Meaning in Film

  • Initially, film was a curiosity, such as the film produced by Thomas Edison of an elephant being electrocuted. The goal was to inspire fear of AC current, but it failed to connect with the audience. Film didn’t know what it was doing yet.
  • The Kuleshov Effect was the first time anyone figured out how to really use film. An actor with a neutral expression was edited together with images of food, a woman, or a coffin. Audiences immediately attributed different emotions to the face of the actor, which was identical. Editing was creating meaning.
  • Editing is the basic tool of meaning in the medium of film. It’s what separates the medium from theater or radio. If you strip more and more of the editing out, eventually you get a televised play, and it’s not truly taking advantage of the medium.
  • Games are still in the curiosity phase, and are still looking for their base method of meaning.

Dynamics Show the Way

  • Mechanics, Dynamics, Aesthetics is an approach to analyzing games. Mechanics are the rules explicitly designed into the system. Dynamics are the runtime behavior of the system. Aesthetics is the wrapping of graphics/sound/story over top of both.
  • There’s a continuum of where meaning comes from in games, between mechanics and dynamics. On one end we have a message model of meaning, where the designer builds the meaning directly into the mechanics. On the other end we have abdication of ownership, where the meaning comes out of the dynamics. Dynamics is really the Kuleshov of games, the unique element.

The Continuum

  • The original Splinter Cell was very far towards a designed experience. The goal was to get across the themes of sensitivity, proximity, and fragility. The game mechanics are designed to reinforce this, so by necessity there was little player choice.
  • Splinter Cell: Chaos Theory built on these concepts, but added those of Exploration and Domination, which opened up the choices and pushed more of the meaning towards Dynamics. But, it was still fairly constrained.
  • Far Cry 2 was designed to be an exploration of human cruelty, comparing the savagery of humans to the savagery of animals. It was designed to be horrific, intimate, shameful. But, the dynamics didn’t always encourage this. For some people it was about being safe and boring, by being as efficient as possible. For others it was the chaos of lighting a field on fire. It turns out the chaos/paranoia of the dynamics overwhelmed the prescribed meaning of the mechanics, and the dynamics are where the meaning really came from.

Dynamics in Context

  • Tetris is about anticipation and keeping opportunity alive, but is largely abstract. The power of aesthetics can be seen by adding an aesthetic layer to add context, such as transporting prisoners. Then, new meaning is derived in the dynamics despite the mechanics being identical. Now the goal is to be less efficient or possibly to fail the game.
  • Competitive games such as fighting games are unique. In those games the true meaning derives from the conflict between the world perspectives of the two players. Each player has a conception of what the game is and should be, and that conflict builds over concepts such as what “cheap” is. The meaning is “synthetic” in that it comes from synthesis of other concepts. It’s “rigorous” in that the meaning depends on how much the players care about the match. And it’s “instantial” because the meaning comes from the individual match instance, not the mechanics as a whole.


Posted in Game Development, GDC 2011 | 1 Comment »

GDC 2011: A Mature Industry Reflects

Posted by Ben Zeigler on March 8, 2011

I know I haven’t posted in forever (Gears 3 Beta in a few weeks!), but last week was GDC 2011 and some thoughts are in order. First, I’ll be putting up some talk notes later, still collecting those. But, I wanted to lead with my general thoughts on the conference as a whole, as from my perspective the whole experience fit pretty well into a central theme. Officially this was the 25th annual Game Developers Conference, and it hosted a set of retrospective panels to reinforce the theme (which I avoided due to excessive lines). Educationally, most of the sessions I went to were about building upon previous research or practically applying previously experimental ideas. Personally, this was the first GDC where I spent more time meeting existing friends than making new ones (which was my fault). There wasn’t that much radically “new” at this year’s GDC, to be honest.

A large factor in this is the timing of technological advance. In terms of console lifecycle, 2011 seems to be paralleling 2003/4, with the prior generation trundling on, a bunch of great games coming out, and two handhelds on the horizon. PC gaming is on the rise (Minecraft won essentially all awards this year), and there’s plenty of focus on nonconventional business models (2004 was the year of MMOs). There are tantalizing glimpses of the future, but the focus right now is on games instead of tech.

The overused phrase “Paradigm Shift” comes originally from The Structure of Scientific Revolutions, which is an insightful analysis of the history of science. The part of Kuhn’s thesis that inevitably gets lost is that the paradigm shifts are only part of the equation: during the “normal science” period all of the actually useful work gets done. That’s what this GDC was about for me, incredibly useful work that incrementally built on the work of previous innovations.

Matthias Worch’s talk on The Identity Bubble is a great example of this. I’m not going to bother to put up my notes, because his annotated slides are far more comprehensive. Quickly, the talk was about techniques for keeping the “identity bubble” of the player intact, and synchronizing the identities and motivations of player, character, and person. The talk explicitly built on concepts from Rules of Play, Second Person, Shared Fantasy, and others. Even better, it integrated these concepts together in a way that can be directly applied to any game currently in development to make it better. This is exactly what a game developer conference should be for.

GDC will eventually return to a crazy world of apocalyptic change (probably not next year at this rate), but it’s nice to remember that sometimes it’s good to sit down, reflect on what has happened, and reconstruct a solid base of knowledge. This is just as true for individuals as for the industry as a whole.

Although, I really didn’t go to enough insane parties to meet crazily exciting new people. Oh well, there’s always next time!

Posted in Game Development, GDC 2011 | Comments Off

GDC 2010: How to Honestly Lie To Your Players

Posted by Ben Zeigler on March 25, 2010

Looking back at this year’s GDC there was a single thread running through most of the sessions I attended: Deceiving the player. Both Sid Meier and Rob Pardo explicitly told us to lie to players about how we calculate random chance, because of the way human psychology interprets probabilities. Chris Zimmerman laid out in detail how to lie to the player about what their hands did. Chris Tector explained how to perform a deeply technical form of lying to build the illusion of a continuous world from streamed chunks.

Sid Meier talked about the “Unholy Alliance” between designer and player, and Ernest Adams talked about the “Tao of Game Design”. On reflection the concepts have much in common: they are about the collaboration between player and game designer to craft a shared experience. From both the player and the designer, a unique mix of deception and trust is required: Suspension of Disbelief.

It can be interesting to compare gaming to another form of popular entertainment: Professional Wrestling. Back in the dark days of carny scams, Pro Wrestling was presented as real with the explicit goal of bilking the consumer. Over the last 30 years or so the deception inherent to Pro Wrestling has shifted: The vast majority of fans are completely aware that it’s all fake and planned, but they don’t care. They are completely willing to suspend their disbelief, and in return become part of the show. The experiences are real even though they’re based on a foundation of deception, and that’s at the core of gaming as well.

So lying to your players with the goal of building a collaborative experience is key to the power of the medium, but where can it go wrong? Ernest clearly talks through the different design philosophies of different types of games and argues that designers should effectively be more truthful the more a game moves away from a conventional Player vs. Environment game. Jaime Griesemer talks about ignoring the literal feedback of players, but never proposes lying to them in a competitive PvP environment. Eskil Steenberg’s whole talk was about moving Procedural Generation away from its long history of deception and towards the front of a game’s design.

Finally, much of GDC was talking about a completely different form of player deception. Soren Johnson’s great blog post lays it all out: game designers are increasingly being asked to lie to players about the very process of playing a game. With a flat fee, subscription, or large chunk DLC model, the goal of a designer is honest and out in the open: they want to make the player happy so they will recommend the products to their friends and buy future related products. The goal of a designer in a microtransaction-based game is instead to exploit the second to second emotional weaknesses of their players to sell as many individual bits as possible. But, you can’t tell players this, so they brand the games as “free to play” or “social”. This deception (and others such as DRM) doesn’t have anything to do with improving the player’s experience, it is simply about maximizing short term profit at the possible expense of long term credibility.

Update: Just as I posted this originally, Soren’s Twitter pointed me to a great post by Frank Lantz on essentially the same topic. He takes a bit of an opposing view in holding that the kind of deception advocated by Sid and Rob can be destructive because it stops players from learning about actual truths in the world. Give it a read.

Posted in Game Design, GDC 2010 | Comments Off

GDC 2010: Reading the Player’s Mind Through His Thumbs: Inferring Player Intent Through Controller Input

Posted by Ben Zeigler on March 21, 2010

Okay, time for my last set of session notes. Here’s what happened at Reading the Player’s Mind Through His Thumbs: Inferring Player Intent Through Controller Input presented by Chris Zimmerman from Sucker Punch. The main topic of the session was the controls of Infamous (sorry, I refuse to use the stupid capitalization), which I quite enjoyed as a game. It presented a few pretty innovative ideas for interpreting player input, which I’m currently trying to figure out how to apply to various projects I’m working on. Stuff I heard:

Controls Immersion

  • For Infamous one of the goals was to push immersion as far as possible. The objective was to make you feel like you WERE Cole, instead of just controlling Cole. The player should see and hear what Cole sees and hears, and do what Cole does.
  • However, players expect a certain amount of abstraction in their control interfaces. No player wants an accurate simulation of what it’s like to climb a rope, which is actually kind of hard. Direct control just won’t work, both because of the complication of the world and the fact that players are somewhat bad at using controllers. Players want to express their wishes, and feel challenged but also successful.
  • Chris performed an experiment asking players to point a PS3 analog stick at objects in a 3D scene. Players were fairly inaccurate, and formed a bell curve. Around 70% were within 15 degrees of the target, but not accurate beyond that. A second experiment asked them to push a button when a ball bounces. Players were within 50 ms most of the time, but not any more accurate than that (plus any A/V lag, which is omnipresent).
  • Challenges for Infamous were large: active, small enemies and inexact controllers. The game featured jumping and climbing, and they had the design goal that everything that looked climbable had to be climbable. Turns out the city has a lot of things that look climbable.
  • They considered going with a solution where you would lean on a joystick to climb, like in Assassin’s Creed 2. But, they decided that didn’t feel skillful, and would be a problem given the density of the city. So, the goal was to require the player to execute correctly, but to liberally provide aid in the background.

Aiming

  • For aiming, the goal was to make it so the game ALWAYS shot exactly where you were aiming the reticle. So, the solution was to help in moving the reticle, rather than leaving the reticle manual and helping after firing. The game adjusts the “input” based on both the player’s real input and what the game thinks the player is trying to do. The movements have to be physically possible on the controller; it just helps the player out.
  • It takes the controller input and looks for targets that are near the direction the reticle is moving. It will adjust the reticle motion to be directly towards that object if it’s a good enough fit. It also slows down the reticle movement as you are close to the target, to give more time to fire. It also keeps track of where it was and where it’s going as well as button presses, so if the press happens within a certain time frame of passing over the target it counts as on target.
  • If there are no inferred targets the reticle is 100% player controlled. It excludes targets that are too far away, and all targets are treated as a line (because they have height) instead of a point. This means that if you’re heading towards a character’s feet you’ll hit their feet with your attack, instead of magically jumping to their midsection.
  • For evaluating targets, lexicographic scoring is used (sketched below). It rates the targets on several dimensions in priority order, and only uses lower dimensions if the top ones are equal. For shooting, this means that each target has two parts of the “egg” that move out from them. All valid centers of any egg rate higher than any outer sections of the egg, but it will use the closest section out of the two groups.
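None of the code in this post comes from the talk; this is just my own minimal sketch of the lexicographic scoring idea, with invented names. The point is that the priority dimension (inner vs. outer section of the “egg”) is compared first, and distance only breaks ties within a tier:

    #include <algorithm>
    #include <vector>

    // Hypothetical target candidate. 'tier' is the priority dimension
    // (0 = the reticle path crosses the inner section of the target's
    // "egg", 1 = the outer section). 'distance' breaks ties in a tier.
    struct Target {
        int   tier;      // lower is better; compared first
        float distance;  // compared only when tiers are equal
    };

    // Lexicographic comparison: any inner-section target beats every
    // outer-section target, no matter how much closer the latter is.
    bool Better(const Target& a, const Target& b) {
        if (a.tier != b.tier) return a.tier < b.tier;
        return a.distance < b.distance;
    }

    // Pick the best candidate, or nullptr if none qualified (in which
    // case the reticle stays 100% player controlled).
    const Target* PickTarget(const std::vector<Target>& candidates) {
        if (candidates.empty()) return nullptr;
        return &*std::min_element(candidates.begin(), candidates.end(), Better);
    }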

Jumping and Climbing

  • For jumping the goal was the same as targeting. The game allows air steering while in midair, so the game will only adjust your controller input to match a physically possible controller input. But, jumping is way more complicated because of the larger number of possible targets, the commonality of linear targets such as ledges, and various animation affordance issues. Failing to correctly predict while jumping is worse than when aiming.
  • The basic algorithm looked ahead about 0.75 seconds. There was a set of “illegal” filters checked for all possible locations, plus one preference score. It would first apply the exclusion filters, in performance order, doing the cheap tests first. After passing all filters it would heuristically score the remaining targets.
  • To score the targets it uses a golden-section one-dimensional optimizer (see the sketch after this list), which was picked for execution speed. The scoring function was based on trajectory relative to final position. To modify the player input it computes the player’s time to land, and the relationship of the desired target and the current trajectory. It adjusts the player’s stick input to match the desired input. (Note: I missed a bit of the specific math, but there are a variety of ways to rate targets heuristically)
  • Ground landings have to be compared against features. You can’t just trace the feet because they will hit a wall before the ground in many situations (such as jumping up). It instead traces a set of polygons between the head and the feet, and clips them against geo. It stops the search if the polygons hit a segment the player can stand on.
  • The original design didn’t call for wall jumps, but they added them when players ended up hitting walls. When a player hits a wall they can “move” a certain amount or jump off, which ended up feeling more natural. To score wall collisions it traces a parabola at the knees.
  • They ended up doing some modification to running as well, to walk around small objects in the world. Super heroes don’t run into parking meters, why should the player?
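For reference, here is what a golden-section one-dimensional optimizer looks like. This is the textbook routine, not Sucker Punch’s actual code; in the talk’s terms, the function being minimized would be the trajectory-vs-target scoring function:

    #include <cmath>
    #include <functional>

    // Golden-section search: finds the minimum of a unimodal function
    // on [lo, hi]. Appealing for games because each iteration costs a
    // single function evaluation and no derivatives are needed.
    double GoldenSectionMinimize(const std::function<double(double)>& f,
                                 double lo, double hi, double tol = 1e-3) {
        const double invPhi = (std::sqrt(5.0) - 1.0) / 2.0;  // ~0.618
        double a = lo, b = hi;
        double c = b - invPhi * (b - a);  // lower probe point
        double d = a + invPhi * (b - a);  // upper probe point
        double fc = f(c), fd = f(d);
        while (b - a > tol) {
            if (fc < fd) {  // minimum lies in [a, d]
                b = d; d = c; fd = fc;
                c = b - invPhi * (b - a);
                fc = f(c);
            } else {        // minimum lies in [c, b]
                a = c; c = d; fc = fd;
                d = a + invPhi * (b - a);
                fd = f(d);
            }
        }
        return (a + b) / 2.0;
    }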

Conclusion

  • Why does this sound so complicated? At first they tried stealing from Uncharted, where it computes the jump at take off. But, there were too many objects. Then they tried stealing from their own Sly Cooper games, by having a “grab” button that, when activated, automatically air-steers the player towards the nearest surface.
  • But, the realistic graphics of Infamous brought with them increased expectations. Players were not willing to accept the floatiness of the Sly Cooper solution, which worked just fine in a cartoony world. So, they had to come up with something that matched the world better.

Posted in GDC 2010 | 2 Comments »

GDC 2010: Procedural, There is Nothing Random About it

Posted by Ben Zeigler on March 19, 2010

It’s starting to wind down, but here’s some more notes! These are from the session Procedural, There is Nothing Random About it by Eskil Steenberg. Eskil is working on the indie MMO Love, which is going to go live very shortly, and his talk comes from the perspective of integrating many procedural techniques into his work. It worked well both as an overview of the concept and as an explanation of specific techniques. This talk had a bunch of valuable visual aids (he opened the game live at several points) so these notes are not as useful as a video would be. Sorry. Anyway:

History of Procedural

  • Procedural content generation started as a purely practical pursuit, because many old systems were severely lacking in memory. Games like Rescue on Fractalus and Populous (Eskil said the high-selling “Mission Disk” add-on was purely a list of random seeds) generated their procedural data in engine, but that solution is fairly pointless in today’s world. We now have tons of memory and storage space.
  • The next type of procedural content generation is offline generation. One early attempt at this was the Massive crowd simulation tech created for Lord of the Rings. It’s also been used in a variety of modern games such as Far Cry 2 or Eve. This technique is valuable because an imperfect procedural tool can be fixed up in post production to iron out the kinks. This is a valuable way to save time.
  • Last year Eskil told everyone to fire their designers, this year he’s telling everyone to fire their artists. The way you make a good game is to make a bad game and fix it, so you need as fast an iteration as possible. This means you need a super fast art pipeline, and procedural tools are a huge help for this.
  • Ken Levine has said that filmmakers get to make movies while game developers get stuck having to make the camera first. “Ken, I love you but you’re wrong”. Many of the most artistically interesting films have been made by filmmakers who DID make their own camera. Technology is not purely a means to an artistic end, but can in fact inspire new and interesting artistic expressions.
  • Eskil demoed his modeling tool. He showed how it allows artists to make fragments and then use “deploy” to recursively place those objects over any mesh or surface. It’s an example of how you can set it up so artists get to art direct, instead of just make tons of individual custom pieces.
  • In today’s game industry, Art is what is stifling innovation. Design, tech, and innovation are held back by art constraints. Destructible environments are easy, but the high visual requirements mean we can’t do them. “Chris Hecker, I love you but you’re wrong”, there are still interesting tech issues to solve.

Procedural Generation Back In Engine

  • The solution to the issues with the stifling art pipeline is to put procedural generation back into the engine. Ragdoll may not look as good as hand animation, but it reflects the player actions in a stronger way. This feedback and responsiveness is what is missing.
  • How would you procedurally build a labyrinth? You start with a block, carve out a solution, and then add embellishments once you’re sure it works. The traditional way to make a locked house is to make exactly one door that can be opened by exactly one key. The emphasis is on logically correct structures.
  • But, how about we take a statistical solution? Perhaps we make a house that can be opened in any number of ways. You find a key, and then maybe you find the house. Life is lots of keys and lots of doors, and can be about improvising. Why can’t games be about this kind of improvisation?
  • If Eskil were an assassin, he could pickpocket the entire room and gain hundreds of possibilities. Games can be like that. Instead of enforcing logical consistency, we can build a house with 5 doors, and randomly placed keys. It will be statistically consistent because the odds are functionally 0 to have all 5 keys end up in the house (a toy simulation follows this list).
  • To build interesting statistically consistent systems you need to take advantage of spatial dependencies. Applying a series of what are basically image filters can handle these relationships. Stochastic sampling is a good place to start.
  • Disney said to Pixar that Pixar would fail because computers can’t understand emotions/wants of consumers. But, the designers of said computers can. If a rule can be taught to a designer it can be taught to a computer.
  • As an example, Eskil had an algorithm to place bridges in his world. At first it made way too many bridges, so he kept refining the algorithm. Instead of just reducing the frequency he made the requirements more strict until he arrived at the best bridge he could think of. The bridges made by his algorithm were more interesting and logical than ones he would have hand placed, because the computer didn’t come into it with any biases.
  • Love is basically complicated systems of hierarchical filters that can construct objects of any type, such as buildings, cliffs, etc. The world is a grid, but subsections of the grid are replaced by custom artist assets as appropriate, so the world ends up not looking like a grid.
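Here’s the toy simulation promised above (mine, not Eskil’s): scatter 5 keys uniformly across a world of 200 candidate locations, 5 of which are inside the house, and count how often the pathological all-keys-locked-inside case actually happens. The specific numbers are made up, but the shape of the argument is his:

    #include <cstdio>
    #include <cstdlib>

    // Toy check of "statistical consistency": 5 keys scattered among
    // 200 world locations, of which 5 are inside the house. Count how
    // often ALL keys land in the house (the case that would break the
    // design). Expected rate: (5/200)^5, about 1 in 100 million.
    int main() {
        const int kTrials = 1000000, kKeys = 5;
        const int kLocations = 200, kHouseSpots = 5;
        int pathological = 0;
        for (int t = 0; t < kTrials; ++t) {
            bool allInHouse = true;
            for (int k = 0; k < kKeys; ++k) {
                if (rand() % kLocations >= kHouseSpots) {
                    allInHouse = false;
                    break;
                }
            }
            if (allInHouse) ++pathological;
        }
        printf("%d of %d trials were pathological\n", pathological, kTrials);
        return 0;
    }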

Conclusion

  • Last year, Eskil felt alone. He didn’t share any of the problems of the rest of the industry. The PC is dead, except for Steam (but that doesn’t count). Free to play MMOs are all that matter, except for WoW (that doesn’t count). Eskil doesn’t want to count: that’s when you succeed.
  • Finally, Eskil wants us to all go out and explore. He wants us to say next year “Eskil, I love you but you’re wrong”.

Posted in GDC 2010 | 3 Comments »

GDC 2010: Streaming Massive Environments from 0 to 200 MPH

Posted by Ben Zeigler on March 17, 2010

Here’s my notes for the talk Streaming Massive Environments from 0 to 200 MPH presented by Chris Tector from Turn 10 Studios. He’s listed as a Software Architect there, and obviously has a deep understanding of the streaming system they used on Forza 3. This talk was nice and deep technically, and touches all parts of the spectrum. I did get a bit lost when it got down to the deep GPU tricks, so I may have missed a bit. Anyway, here’s things that I think were probably said:

Overview

  • The requirements are to render the game at a constant 60 FPS, which includes tracks, cars, the UI, crowds, particles, etc. Lots of things to do, so very little CPU is available at runtime for streaming.
  • It has to look good at 0 mph, because that’s where marketing takes screenshots, and where the photo mode is enabled. It also has to look good at 200 mph, because that’s how people play the game.
  • As an example, the Le Mans track is 8.4 miles long, and has 6k models and 3k textures. To have the entire track loaded would take 200% of the console’s memory JUST for models and textures.
  • Much information can be gathered from the academic area of “Massive Model Visualization”. However, beware that academic “real time” does not equal game real time, because of all the other things a game has to do.

The Pipeline

  • First, the tracks are stored on disk in modified .zip files, using the LZX data format. Tracks take from 90MB to 300MB of space compressed. This data is read off disk in cache-sized blocks. The only actual I/O that is performed is done strictly in order, to avoid the horrible seek times of a DVD.
  • The next stage is the in-memory compressed data cache. The track data is stored there in the same format as on disk. Forza 3 uses 56MB for this cache, and uses a simple Least Recently Used algorithm to push blocks out of this cache (a minimal sketch follows this list). Each block is 1MB large.
  • The next stage is a decompressed heap in memory. There’s a 360-specific LZX decompressor that runs at 20 MB/sec. They had to optimize the heap heavily to get really fast alloc and free operations. Forza 3 uses 194 MB for this heap and allocates everything aligned and contiguous.
  • The next stage is the GPU/CPU caching layer. They do something semi tricky for textures. Textures can be present in either Mip 0 (full res), Mip chain (Mip 1 down to 32×32), or Small Texture (a single 32×32 texture) form. There is special 360 support to allow the Mip chain to be split up in different memory locations, so they can stream the Mip 0 in after the rest of the chain and it will display correctly.
  • A few special things happen in the GPU/CPU itself. First, there is NO runtime LOD calculation, as the streaming data gives the correct LOD to show, and they are separate objects in the stream. They did add a basic instancing system to allow a single shader variable. They spent a lot of time optimizing the GPU/CPU for the 360. He mentioned using Command Buffers as much as possible. Spent time right-sizing assets to fit optimal shader use. The 360 has special controls to reduce MIP memory access. (Note: This got a bit too deep for me)
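The sketch promised above: a minimal LRU cache of fixed-size compressed blocks, in the spirit of the 56MB cache of 1MB blocks described in the talk. The class, names, and stubbed disk read are all mine, not Turn 10’s:

    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <vector>

    // Minimal LRU cache of fixed-size compressed blocks.
    class BlockCache {
    public:
        explicit BlockCache(std::size_t maxBlocks) : maxBlocks_(maxBlocks) {}

        // Returns the cached block, loading (and evicting) as needed.
        const std::vector<uint8_t>& Get(uint64_t blockId) {
            auto it = index_.find(blockId);
            if (it != index_.end()) {
                // Hit: move the block to the front of the recency list.
                lru_.splice(lru_.begin(), lru_, it->second);
                return it->second->data;
            }
            if (lru_.size() == maxBlocks_) {
                // Miss with a full cache: evict the least recently used.
                index_.erase(lru_.back().id);
                lru_.pop_back();
            }
            lru_.push_front({blockId, ReadBlockFromDisk(blockId)});
            index_[blockId] = lru_.begin();
            return lru_.front().data;
        }

    private:
        struct Entry { uint64_t id; std::vector<uint8_t> data; };

        static std::vector<uint8_t> ReadBlockFromDisk(uint64_t /*blockId*/) {
            return std::vector<uint8_t>(1 << 20);  // stub: one 1MB block
        }

        std::size_t maxBlocks_;
        std::list<Entry> lru_;  // front = most recently used
        std::unordered_map<uint64_t, std::list<Entry>::iterator> index_;
    };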

Computing Visibility

  • Many projects use conservative occlusion to determine visibility, often because it can run in real time. However, Forza does per-pixel occlusion in an extensive preprocess step. It uses depth buffer rejection to figure out what’s occluded. It also does the LoD calculation at this point, and will exclude any objects that aren’t big enough to be visible (contribution rejection). Many games do LoD and contribution rejection at runtime, but the data set is huge so they end up having horrible cache performance. (Note: I asked later, and this whole process takes up to 8 hours offline, for a very large track)
  • First step in the process is to Sample the visibility information. The tracks have an inner and outer spline that defines the “Active” area, so the sampler picks a set of points inside those splines (and maps them to a grid relative to the center spline). At this point it creates “zones” which are chunks of track.
  • To actually sample at each point, it uses a constant height and 4 angled views, relative to track direction. Visibility results for each point are also propagated to the adjacent points, because an object may come to exist at some undefined spot between two sample points.
  • The engine then renders all of the models that are plausible (without textures). It then runs a D3D occlusion query to see what and how much is visible. Each model keeps track of its object ID, camera location, and pixels visible. The LoD calculation happens at this point, as it uses the distance info. It can do LoD, Occlusion, and Contribution in a single pass, after 2 renders, so it’s a fairly quick individual operation. It then keeps track of the pixel count of each object in a zone, as opposed to just a binary yes/no for visibility.
  • After sampling, a Splitting process takes place. Many of the artist-placed objects are extremely large in their source data, to avoid seams and such. So, it will break these large objects up and cluster smaller objects together into single draw calls. Instancing breaks the clustering, so artists have to be careful.
  • The next step is the Building process. At this point it maps the textures on to the models. There’s a pass that aggressively removes duplicate textures. It looks for renamed textures, exact copies, and MIP parent/child relationships and will combine them as necessary. It also computes the 32×32 “small textures” at this point. The small textures for an entire track are put into a separate chunk and are preloaded for the entire track. This chunk is from 20-60 MB depending on track and is the only track data that is preloaded. This is so when the low LoD for an object is up and running, it will at least be colored correctly.
  • Optimization is the next phase and is somewhat complicated. For each zone it finds the models and textures used in that zone as well as the two adjacent zones. It finds the models and then the textures, and sorts them by number of pixels visible. For any texture detail that is unneeded it does the trivial reduction (if a texture is never seen larger than 32×32 it’s stripped down entirely, and if Mip 0 is never needed, Mip 0 is stripped).
  • It then does two memory reduction passes (the first is sketched below). First, it has to lower the total number of models/textures loaded in a zone to be < the decompressed heap. It removes models/textures as required, starting with those with the fewest pixels visible. After that it computes a delta of models/textures relative to the zone before and after it. The delta has to be lower than the available streaming bandwidth, so it strips for that reason as well.
  • Once it computes the set of assets in a chunk, it has to package them. It places them in a cache efficient order and places the objects in “first seen” order. Objects that are frequently used end up near the front of the package and will stay in memory throughout, while objects for later in the track are farther back.
  • The last step is Runtime. The runtime code is responsible for keeping track of what individual objects to create and destroy, based on the zone descriptions. It could do a reference count, but instead does a simple consolidate where it frees everything first. This reduces fragmentation.
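And here is my reconstruction of the first of those two memory-reduction passes, the heap-budget one. The asset record and names are invented, but it follows the described shape: sort by measured pixel visibility, then strip from the least-visible end until the zone fits:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical per-zone asset record: each model/texture carries
    // the pixel count measured by the offline visibility sampler.
    struct ZoneAsset {
        uint64_t sizeBytes;
        uint64_t pixelsVisible;
    };

    // Pass 1: drop the least-visible assets until the zone's total
    // footprint fits the decompressed heap budget. (The real pipeline
    // then runs a similar pass against the per-zone streaming delta.)
    void FitZoneToHeap(std::vector<ZoneAsset>& assets, uint64_t heapBudget) {
        // Sort most-visible first, so stripping happens from the back.
        std::sort(assets.begin(), assets.end(),
                  [](const ZoneAsset& a, const ZoneAsset& b) {
                      return a.pixelsVisible > b.pixelsVisible;
                  });
        uint64_t total = 0;
        for (const ZoneAsset& a : assets) total += a.sizeBytes;
        while (!assets.empty() && total > heapBudget) {
            total -= assets.back().sizeBytes;
            assets.pop_back();
        }
    }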

Summary

  • The keys to the Forza system are work ordering, heap efficiency, decompression efficiency, and disk efficiency. Each level of the data pipeline is critical, and anything that can be done to improve a level is worth doing. Don’t over-specialize on a particular aspect of the pipeline.
  • System isn’t perfect. Popping can happen either due to late arrival caused by memory/disk bandwidth not keeping up, or it can be caused by visibility errors. They did have to relax some of their visibility constraints eventually, because certain types of textures threw off the calculation. They provided artists with a manual knob that can tweak an individual object to be more visible at the expense of possibly showing up late. Finally, you have to deal with unrealistic expectations.
  • For future games, Chris had a few ideas. First, he would like to expand the system to work for non-linear environments. This would entail replacing linear zones with 3d zones, but would allow open world racing. There are probably more efficient forms of domain specific decompression that could up the decompression bandwidth. The system could do texture transcoding. It should be expanded to add another layer on top of the disk cache: network streaming (Note: Trust me when I say that’s a whole other lecture by itself)

Posted in Game Design, GDC 2010 | 3 Comments »

GDC 2010: Single-Player, Multiplayer, MMOG: Design Psychologies for Different Social Contexts

Posted by Ben Zeigler on March 17, 2010

Here’s the notes I have for Single-Player, Multiplayer, MMOG: Design Psychologies for Different Social Contexts as presented by Ernest Adams. Ernest has a long history of writing about and teaching game design, although primarily single player games. Roughly, this talk is about him extending his previous concepts to encompass multiplayer games, with varying success. It works as a good overview of how social context affects design, but Ernest is a BIT out of date with the MMO world, as he himself admits. Blah blah, any transcription mistakes are purely my own.

Ernest’s General Philosophy

  • Intellectual pursuits can be vaguely separated into deductive (which he described as English) or inductive (French) thinking. The Classic or Romantic contexts. Game design basically straddles the line perfectly, and is a Craft instead of an Art or a Science. Da Vinci should be our idol.
  • But game developers aren’t really very good at their craft. They kill 2/3 of projects they start. They never seem to think through the final goal, and generally lack a philosophical direction.
  • Player-Centric design is a solution to this. A designer must imagine a single, idealized player. The goal of a designer is to entertain them, and to empathize with them. The designer has a responsibility to think about how their game will make a player feel.
  • The Tao of Game Design is the model Ernest uses to describe the relationship between player and designer. They are collaborating to create an experience, and neither would exist without the other. Each has the other inside of them, as far as trying to build a mental model.
  • But, Ernest says this model is incorrect, because it specifies a singular player. Ernest said he was falling into a bias of writing about games he likes to play and create: single player games.

Player Versus Environment

  • The first type of game is PvE, which is not exactly the same as single player. A strictly cooperative game can be closer to PvE, and a single player game with a simulated AI player (such as football) is not PvE either.
  • In a PvE game, the designer’s job is to design interactions. It’s vital for the designer to maintain a fairness throughout. Difficulty spikes, learn-by-death, stalemates, insufficient information for critical decisions, and expecting outside information can all violate the player-designer pact and pull the player out of the game.
  • The relationship between player and designer is very intimate, and according to Ernest these kind of games can be Art (with a capital A) because they really have the concept of an artist.

Player Versus Player

  • In a pure PvP design, the job of a designer is to do competition design. The goal is to enable the fun that comes out of players interacting with each other, not over designing and trying to force the fun into the system. Fairness is fairly simple, and involves making sure that everyone has an equal start and can’t cheat.
  • Instead of a designer collaborating with a player, a designer is creating a system in which players will exist. Basically, a PvP designer is more of an Architect than an Artist. You can try to make all the rules you want, but players will add their own rules to the system.

Massively-Multiplayer Online

  • Ernest talked a bit about how he worked on one of the first online games, Rabbit Jack’s Casino at AOL. It was pay by the minute, so Ernest feels it kept him extremely honest as a designer. Everyone seemed really nice. If he didn’t keep the player engaged they would just leave. (Note: a cynical view here is that if he didn’t keep them psychologically addicted they would quit)
  • His recent MMO experience was to jump into Second Life, which was a very lackluster experience. Everyone was extremely rude to him, the game took forever to load, and it felt very unfamiliar. (Note: Yeah, that’s Second Life. Which is not a game.)
  • Designing fairness is basically impossible, as the starts are inherently uneven. The best anyone seemed to figure out was things like Raph Koster’s Laws. These laws are based on empirical evidence from existing communities, and tend to be about SURVIVING an online game, not having fun. Baron’s Laws say that Hate is good because it brings people together. As long as Raph’s laws are true, MMOs will suck for the vast majority of potential players. (Note: Many of Raph’s laws are super cynical and really don’t apply to newer designs like World of Warcraft. Which is kind of why it’s successful.)
  • As an MMO designer, it’s about servicing a cloud of players, who really won’t care about you until you screw up. Your job is to be a social engineer.

Free to Play MMO

  • Much of Ernest’s material for this section is based on slides from a presentation Zhan Ye gave at Virtual Goods Summit 2009. That presentation is from the perspective of someone from the Chinese free to play MMO industry giving advice to western developers.
  • In a pay-per-time-period MMO, the only goal of individual features is to increase fun and general engagement, because specific actions are not monetized. However, in a Free to Play (i.e., not free at all) MMO the design goal ends up being to maximize revenue from specific actions. Every feature in a F2P game must directly add revenue, or do so secondarily.
  • Fairness is no longer a goal at all, because it doesn’t help revenue. Instead, the goal is to create drama, love, and other elements of the real world. These elements will spur people to purchase items. The larger the advantage an item provides, the more likely a player is to buy it.
  • As a result, in the first generation of successful Chinese F2P games, rich players would buy all the weapons and then use them to kill all the poor players. This ended up being too unbalanced, as all the poor players would immediately quit and not provide the player base needed to keep the rich players buying items.
  • So, the solution in the Chinese F2P community is to set up a series of family clans that will hire poorer players to fight for them. They would use gifts, threats, and extortion to control the poorer players. In other words, form in game criminal cartels.
  • Most successful items are based explicitly on exploiting human emotions. “Little Trumpet” is an item that can be purchased and used to publicly humiliate another player. That player can then pay money to have that curse removed, and is very likely to do so due to emotional distress.
  • Zhan compares F2P games to Las Vegas, but Ernest says they are worse because in Las Vegas you at least have the chance to make real money. F2P uses all the same psychological hooks of a slot machine, but with 0 chance of winning.
  • Ernest believes that these games are in fact evil. The designer has set up a system that explicitly subsidizes real hatred, because there is no such thing as virtual hatred. If a game is set up to incentivize players to inflict emotional harm, that game is evil.
  • There are two solutions to this problem. The first one is to NOT make your game zero sum, and remove competition (Note: So Farmville is not evil in this SPECIFIC way as it does not encourage hate). The other option is to institute various methods to restrict it to competition instead of hatred and destruction, something like the NFL salary cap, as opposed to the America’s Cup or F1, where the richest always wins.
  • In F2P the designer’s goal is to be an economist. They still need to entertain the players, but empathizing with them is strictly bad business. If these games continue on this path, Ernest asks that we shoot him.
  • In conclusion, the craft of game design is fragmenting, there is no longer a single unified philosophy.

Note: As a focused response, I found his discussion of F2P MMOs very interesting, although I think he restricts it a bit too much to that genre. I would expand it a bit, because hatred can happen in PvP or MMO environments just as easily. For instance, take your typical 360 shooter populated by teenagers: they clearly want to inflict emotional harm and there is nothing in the game systems to help ameliorate that. But I can definitely stand behind his basic conclusion: Developing games that prey on the weak emotions of players is basically evil, and F2P games are much more likely to incentivize such decisions because of the focus on revenue over empathy.

Posted in Game Development, GDC 2010 | 5 Comments »

GDC 2010: Design in Detail: Changing the Time Between Shots for the Sniper Rifle from 0.5 to 0.7 Seconds for Halo 3

Posted by Ben Zeigler on March 14, 2010

Here’s some notes from the session Design in Detail: Changing the Time Between Shots for the Sniper Rifle from 0.5 to 0.7 Seconds for Halo 3 presented by Jaime Griesemer from Bungie. He was in charge of multiplayer balance for Halo 1, 2, and 3 so has a lot of relevant experience. The talk was jam packed with information, so odds are very high that I missed something. At the end he was going pretty dang quick to fit it all in the hour session. Oh, and at some point he had a few slides about the odds of monkeys with typewriters reproducing the talk being like 0.2%, but it was pretty out of place so I honestly can’t remember where it fit in.

Designing Balance

  • Longevity implies balance. If a game like Halo 2 has been played and enjoyed by millions for years, it is balanced.
  • Balance can’t happen until the end of development, but you can’t wait until the end to balance because you won’t get it done. The solution is to balance in iterative passes. Once you’ve balanced at a certain level, don’t go backwards until you absolutely have to.
  • Passes are roughly (Note: I think I missed one) Role -> Flow -> Strength -> Limitations -> Detail
  • Two cognitive halves to balance. First, you have to develop an intuitive sense of balance. Using the non-rational part of reasoning, your brain (orbitofrontal cortex) builds models and uses them to predict the future. If something feels wrong to you about the balance of the game, this is what tells you.
  • That part of the brain is great at telling you something is wrong, but not at telling you how to fix it. You have to use the other half to make the hard choices. Your brain (prefrontal cortex) needs to use reason to figure out what to change. But it can only work on so much information at a time. You have to work at a low detail level, and ONLY pay attention to info relevant to the current stage.
  • As an example, there’s an experiment from the Choice episode of Radiolab. Subjects were given either a long number or short number to remember, and then were ambushed with the offer of cake or an apple. People with the short numbers picked apples, but people with the long numbers picked cake, because they didn’t have enough reasoning capacity left to make a rational food choice and went with the emotional one. (Note: That episode of Radiolab is great, and as a fan I have to say you should all go read “How We Decide” by Jonah Lehrer. The evidence is really strong for Jaime’s point here).
  • The first part to each pass is going to be paper design. You need to plan out the behavior of all objects on paper before implementing them, so you can make sure they make sense. Figure out basic mechanics, desired feel, critical assets and important details.

Balancing Roles

  • When designing roles you have to balance simple against complex. The goal is to make the game barely manageable at its deepest.
  • Roles need to have actual functional differences. Rock-paper-scissors is not actually good design, as the 3 roles are completely identical. The depth in any multiplayer game comes from the roles and their interactions.
  • For a shooter, you should have no more than 1 weapon per role. If you add weapons that satisfy the same role but are different, you’re simply adding complexity and NOT depth. All shooters have the same weapons because they have the same roles. (Note: And players realize that now and get bored)
  • Similarly, you can’t leave any role without a weapon. Rock-paper-nothing is not even a game.
  • When cleaning up your paper design you should practice iterative deletion. Delete whatever isn’t necessary to fill a distinct role, and then delete everything that NOW isn’t necessary. And so forth.
  • You have to balance chaos against certainty at this point. You want players to be able to think about probable, but not inevitable future results.
  • In a large-scale multiplayer game you need to be careful about the levels of “yomi” needed to succeed. Basically you should stop at “I know what you know about me” and not get into too much recursion. If it looks like a gun it should be a gun.
  • Beware of positive and negative feedback loops. If doing well causes you to do even better this gets in the way of balance.
  • Use slots whenever possible instead of having to balance larger chunks. Always balance the core elements first. Cut half of whatever you do.

Balancing Flow

  • Flow can’t really be balanced until objects first start to come online during production. At this point, the designer is in charge and should be setting the tone. Feedback is not super important at this point in the process.
  • During this phase you need to get the cadence just right. If it’s too slow the player will get bored, and if it’s too fast the distinct events will start to blur together.
  • Verisimilitude is key at this point. Triggers should be for shooting and buttons are for punching, analogous to the real world action. Work on making it feel real.
  • This is where you add the first pass of spectacle. Think about sounds, control, animation. In the sniper rifle case it unzooms for reload 0.5 seconds late, just so you can see your target die satisfyingly.
  • Causality needs to be established. Your game has to look good on YouTube, so you can tell what is making what happen. Players need to understand the causality of game events or they will think the game cheats. Make this as obvious as possible, and throw out realism to establish it.
  • The flow of a game is fragile, so as a designer you’ll have to use your imagination to get in the flow state this early in dev. Make your own sound effects. Whatever works.
  • You want your game to have a low floor for flow so players can get into it, but a very high ceiling so they can always go deeper into an individual mechanic. Add as much detail as possible at the high end, to give them something to strive for.

Balancing Strength

  • To enter the next phase, gameplay needs to be functional and largely optimized. Framerate above graphics quality. It has to work.
  • Ketchup works because it’s 5 primary flavors, all pushed to the max. Halo is like ketchup (and by extension, all games should be).
  • Affordance is key at this stage. If the strength of something has to be explained, then it isn’t really a strength.
  • It can be tricky to balance, because designers can mistake competence (getting good at a weapon) for the weapon being balanced. We CANNOT use our intuition at this stage because it will lie to us. Changes will have to be done in larger batches, and we need to avoid bias effects.

Balancing Limitations

  • Once everything is strong and useful, it’s time to start adding limitations. Limitations are not weaknesses. You add limitations to restrict the situations in which a role is successful, not add randomness.
  • If the same role wins in too many situations, add limitations. If the outcome of a certain role in a situation is essentially random, there may be too many limitations for players to understand.
  • Work on serious playtesting at this point. You want players to play, and you shouldn’t argue with them. Look for their reactions, NOT their solutions. “I don’t like x” is useful, “I don’t like x because of y” is great, and “You should do x” is useless. Trust the player’s gut (intuition) but don’t trust their reasoning as they do NOT have the same mental context you do as the designer. (Note: I 100% agree with and endorse this feedback strategy)
  • Negative feedback generally means that the game in their head does not match the game as it actually exists. Either try to match it better, or do a better job of realistically setting expectations via teaching.
  • Identify the specific goals of your playtester and keep that context in mind. Optimizers look to find the best overall, Ragers quit when frustrated, Role players always try the same weapon, “Your mom” will get confused, Griefers will try to destroy it for others, and Pros will hate you for any randomness.

Detail Balancing, and the Sniper Rifle Specifically

  • Look to see if any weapons are being used outside of the roles you initially designed. See if any weapons are strictly dominating others.
  • Eventually the Sniper Rifle tripped the designers’ intuition. Using it felt wrong; something was out of balance. It was too effective at close range, and too effective at getting body shots when there was no nearby cover.
  • The first idea is to reduce the strength knobs. But you should avoid this as it’s going backwards, and it’ll make the weapon feel weak. Don’t reduce damage, range, or add random weaknesses. Don’t make the sniper rifle worse at its primary role.
  • Couldn’t reduce sniper rifle magazine to 3, as then you couldn’t kill 2 ranged enemies without a reload. Increasing reload or time to zoom only fixed half the problem. Modifying max ammo count would fix the average problem but NOT the instantaneous problem, which is what people actually notice.
  • In this case Flow made sense to modify. The cadence was picked to make the sniper rifle feel fast and rapid fire (Note: rapid fire sniper with one shot kill is a weird concept, he didn’t explain the original idea behind this) but it needed to change. So, the time between shots was increased, because this didn’t weaken the original role but made it worse in other roles.
  • It went from 0.5 to 0.7 because you should never change anything by less than 10%. Players won’t notice it at all, and a balance problem has to be fairly big to be noticeable in the first place. Overshoot and then come back if you can.

Posted in Game Development, GDC 2010 | 6 Comments »

GDC 2010: The AI of BioShock 2: Methods for Innovation and Iteration

Posted by Ben Zeigler on March 14, 2010

Here are my notes for the session The AI of BioShock 2: Methods for Innovation and Iteration presented by Kent Hudson at 2K Marin. Kent has a PDF of the presentation up on his site, with promised future annotation. The session didn’t go into much detail about specific AI details, but was a good overview of how the Little Sister and Brute enemies in BioShock 2 were designed. As always, these are my personal recollections and show what I got from the talk. Details may be incorrect and are my fault.

The Big Sister Wasn’t Fun

  • Initial design of the Big Sister was based on a traditional process: Concept -> Docs -> Prototype -> Production -> Polish/Find the fun.
  • Quality was high at the concept stage. It made sense within the context of the game, and got people excited.
  • A lot of time was spent on the Docs phase. Specifically, they documented each of the combat abilities in detail. At this point the designers got very, very attached to their paper designs.
  • The prototype phase was very quick and only proved that it was technically feasible.
  • Production was scheduled very early because it was a key focus of the game and they wanted to get it right. Needed assets for other uses as well.
  • By the time they got to Find the Fun the Big Sister was very brittle. The individual actions worked, but it felt like a grab bag of attacks. There was no consistent combat rhythm. The Big Sister worked, but she wasn’t fun.

New Process for the Brute

  • Changed process for other major BioShock 2 enemy, the Brute. Concept -> Docs -> (Prototype <-> Iteration) <-> Production. No longer a linear process. Concept phase the same as before.
  • At Docs phase, did not get into as much detail. Docs only described the high level behavior. Went into more detail for the various tuning knobs that could be used to tweak and modify the behavior if the doc specifics didn’t work out. Also specified the work flow for the rest of the process.
  • Team built proxy animations (rough quick and ugly) to allow iteration on timing and speed. These proxies helped design AND animation because they quickly determined the initial body shape design wouldn’t work.
  • Got the prototype running in a real world environment as quickly as possible. Not possible to prototype in abstract world. Discovered need to modify world to accommodate behavior, as had to tag objects with weights to make object throw behavior look correct.
  • Opportunistic addition occurs when production and prototype are in a loop. If 80% of the work is done you can add something cool. For Brute, added an “object swat” on top of object throw that filled a gameplay hole and was very quick to implement.

Back to the Big Sister

  • Big Sister had very inconsistent behaviors. She felt essentially random. Using knowledge from Brute, added more tells and predictability. (Personal Aside: I played through BioShock 2 and the Big Sisters still felt very inconsistent and hard to read to me. The first version must have been more so)
  • Big Sister was fast… ish. The animations had been done but the Sister didn’t really feel that fast. Fixed this by hackily speeding up animations to convince art it was possible. (Personal Aside: This was successful. The Big Sister felt impressively fast and very different than Big Daddies)
  • Opportunistic addition occurred with the addition of a “perch” attack where the Big Sister would jump off something, hit the floor, and stun everything nearby. It was based on work put in to make her faster that fell through because of a lack of AI support late in the project.
  • Summary: don’t over concept and over document. Prototype early and rigorously. Prove features are fun before production. Work iteratively as a group.

Posted in Game Development, GDC 2010 | 2 Comments »

GDC 2010: Officially a Thing

Posted by Ben Zeigler on March 10, 2010

Like last year, I’m headed off to GDC. I have my own conference pass this year (no more sleazy pass sharing!), so I’ll be hitting sessions all day Thursday, Friday, and Saturday. For some reason you can’t link directly to your conference schedule, but hey, I have a blog. Post session I’ll fill them in with notes. If the session is interesting I’ll write up a full post, but if the session is “interesting” I’ll just ignore it. Or possibly delete it off here so I can pretend I didn’t go. Oh, and there are no 9 AM sessions because there’s no way I’m driving up from the South Bay at 6 AM. Here’s The Cool Places To Be (with updated notes in bold as I get them transcribed):

Thursday

Friday

Saturday

Posted in GDC 2010 | 1 Comment »
