Posted by Ben Zeigler on May 12, 2010
Scott Rigby of Immersyve, Inc gave an extremely interesting lecture at today’s LOGIN conference entitled “Rewards that Retain: Understanding how rewards can either motivate or deflate sustained engagement with online games”. This one lecture has entirely justified my trip up here (which has been great btw, nice group of folks up in Seattle), and I picked up a bunch of super useful design ideas based on real psychological research (more research is summarized in a Gamasutra article from a few years ago). The talk was an overview of the concept of intrinsic vs extrinsic motivation and how it affects long-term engagement in a game. As a developer of MMOs, which live on long-term engagement, this is obviously relevant to my interests. Here are the notes I’ve got, and Scott promised to put up slides on his site in a week or two:
Rewards vs Rewarding
- The never-ending challenge of an online game is to get players and your game together with the hope of a nice long relationship. But how can you know ahead of time if it’s going to end in painful divorce? Figuring that out lets you maximize the lifetime value you can get from your players, either via subscription or microtransactions.
- We talk about fun, but it isn’t enough. There’s basically no correlation between a perception of “fun” and long-term engagement.
- Rewards (extrinsic motivation) are about reinforcement learning and short-term motivation, but Rewarding (intrinsic motivation) behavior is about long-term satisfaction. Counterintuitively, these can often be directly in conflict. Specifically, reinforcement learning can prevent long-term engagement from occurring.
- Extrinsic motivation means enticements added to encourage behavior. But, by encouraging behavior externally you increase churn and abandonment of your product, and undermine community. It also highly incentivizes cheating and exploits, because players are taught that only the reward matters and not the experience. Basically, pressure of any kind, including internal and social pressure, is also an extrinsic motivator. “I need to raid tonight because I’ll feel guilty otherwise” is an extrinsic motivator that doesn’t drive long-term engagement. Also, all punishments are extrinsic motivators.
- Intrinsic motivation is when performing an activity is in itself rewarding. Luckily, games and sports are the kings of this kind of motivation. If a player enjoys it enough, they’ll even play through the punishments (e.g. the social pressure of being a nerd gamer). These motivations tap into deep parts of our human nature that satisfy us, and aren’t “addicting”.
- Extrinsic motivation is correlated very strongly with whether a player will come back tomorrow, but is not at all correlated with whether they will come back next week, whether they will buy future games from the same developer, or whether they will evaluate your game positively for reviews and word of mouth. Intrinsic motivation is correlated very strongly with all 4 (although extrinsic still wins for day-to-day engagement).
Types of Intrinsic Motivation
- There are 3 basic needs that are satisfied by intrinsic motivation. The first is Competence. This is the ability to gain in skill and ability within a certain domain. For instance, FPSs and other high-skill games primarily push this motivation. It’s very rewarding to learn and become good at something.
- The second is Autonomy. This is basically when the player feels like they are in control of and agree with an experience. Complete freedom can provide this, but the actually important thing is volition. If a player feels like the choice the game goes with is one they agree with, then this need is satisfied. Another way to put this is how much the player matters to the world and the events of the game. Turn-based strategy games such as Civilization push this heavily.
- The third is Relatedness. Basically, this is the inherent need to relate to other human beings, so it is obviously increased in an MMO setting. However, this can also be faked: relating to NPCs as if they were real people drives this need in the same way. Anything you can do to make the NPCs feel more alive and integrated with the player helps this. This is about how much the player matters to other people, real or virtual. Actual social games, like your good ol’ board games, push this heavily.
- RPGs have a unique property as a genre, because they satisfy each of these 3 needs. Levelling, as long as it involves an improvement in the kinds of things you can do as opposed to a pure number change, drives Competence. Choices, such as an open world and non-linear quest structure, help autonomy, as does character customization. Guilds, teams, and NPC factions help drive relatedness.
Harnessing Intrinsic Motivation
- Intrinsic motivation only works if the player has a perception of volition. They must feel like they are in control of the activity and aren’t being guided too heavily. The danger is that extrinsic rewards can lower this perception of volition and autonomy, because a player starts to feel controlled by the game/designer.
- Intangible rewards such as verbal cues or animations are a great way to reward players without hurting intrinsic motivation at all. If a player does something specific, and is then rewarded with a specific non-gameplay reward (such as “Thanks for saving me from Bob the Pirate!”) it’s cheap and effective. You should avoid controlling language such as “must” and “should”, as even mentioning them will cue players to think of being controlled. Also, intangible rewards are LESS effective on children, as they are more skeptical in general of verbal rewards for doing a good job, very possibly due to over-exposure in the educational system.
- Unexpected rewards are another safe way to reward a player. Random drops are cool because the player doesn’t feel controlled by the process. You have to give relevant items, though, and any false choices between useful and non useful items brings the feeling of being controlled back. Also, farming for random drops is very bad for intrinsic motivation, because the feeling of being “controlled by the random number generator” is an insidious way of taking volition from the player.
- Contingent rewards can be used but are more dangerous. Engagement rewards given for participation feel less controlling than rewards for explicitly completing a task. When giving a reward for completion, it should drive competence directly, as that helps offset the loss of control.
- Performance-contingent rewards are the riskiest. If you reward players directly for how well they perform in a specific instance it can feel extremely controlling. All they care about is the end result and not the experience, and you get “learning the test”, exploits, and cheating. In fact it can directly de-emphasize the original task, because players realize that if they have to be rewarded for how well they do, it can’t actually be important how well they were doing.
- Giving useful and neutral informational feedback is a key way to offset the volition-hurting effects of rewards. If you tell a player exactly why they are or aren’t getting a reward, it helps alleviate that feeling of loss of control. For instance, end-of-round scoreboards are perfect for this. This is why need/greed rolls are visible. Basically, if you can give the players information you should.
Improving RPG Intrinsic Motivation
- Quests are a great mechanic in certain ways, because they provide clear goals (which helps improve autonomy) and provide competence-improving rewards. But, once the rewards are completely expected they lose effectiveness, and quest rewards are automatically of the more dangerous contingent type. To help with this, you should do anything you can to stop quest givers from feeling like vending machines.
- A key element for drops in RPGs is how logical they are. If you kill a rat and he drops a robot it feels arbitrary and out of place. And anything that is arbitrary and inconsistent decreases volition. Also, every drop must increase one of the intrinsic motivations. It should increase efficiency or tactical opportunity for competence, add choices for autonomy, or increase opportunities for socialization or team interaction for relatedness.
- Different activities in an RPG can help different motivations, and it can help a lot to vet the factors ahead of time. For instance, exploration is big on competence and autonomy. New weapons add autonomy by adding choice. Rare items increase autonomy and relatedness. Epic items do not increase autonomy but are great for competence and relatedness. Soloing is good for autonomy while grouping is great for relatedness.
- The epic example is interesting, because in WoW the acquisition of epic items is tied directly to raiding. Raiding is an activity that highly values competence (very hard to execute a complicated plan) and relatedness (requires social structures), but does absolutely nothing for autonomy. In order to be part of a tight raiding guild, individual choice necessarily suffers. So, as a major failing for World of Warcraft, there is no part of the end-game gameplay that drives autonomy (other than grinding for visible luxuries).
Q & A
- Abstract but useless “points” are an extrinsic motivator but can be very effective in providing an informational loop. If points increase when you do intrinsically motivated things it lets you know you’re on the right track. No specific research on achievement points yet, but they want to start on it soon. Giving out achievements specifically for failing can help mitigate the loss felt at failure.
- Outdoor environments automatically increase the perceived autonomy, with the exception of specifically maze-like environments. This is because you have more opportunities to choose where to go and don’t feel as funneled. This is why jumping feels intrinsically good to most players: jumping allows you to express your will on the environment in a cheap and easy fashion. If you want to jump and can’t jump it feels very limiting.
- Overall, any inconsistencies of any sort in the degree of autonomy automatically decrease intrinsic motivation. For instance, if there are hundreds of doors and you can only open one, this feels very controlling. Giving super obvious clues such as glowy objects and beacons of light go a long way to alleviate this loss of control, despite being technically “immersion breaking”. Remember the goal is autonomy, not specifically immersion.
Posted in Game Design, Game Development | Tagged: login, login2010, psychology, rewards | 2 Comments »
Posted by Ben Zeigler on April 19, 2010
I started playing Heavy Rain at around 7pm on a Saturday a few weeks ago, and finished it at 1am Monday morning. The thing was damn compelling, though it’s certainly heavily flawed. The walking controls suck, a few of the voice actors were pretty painful (looking at you Lauren and Sean), and the aesthetic was uniformly European and foreign to the purported setting of quasi-Philadelphia. Which apparently has two non-white people total, unlike real world Philadelphia which is a majority minority city.
Why did Heavy Rain work for me? The key scenes for me came late in the game, where I was being told, both by the game mechanics themselves and a driving character in the plot, to do things that my player character would never do. I only had a few seconds to make a decision, and damn if I didn’t start sweating a bit. I knew I wouldn’t be able to redo the decision if it turned out poorly, because the game makes it just difficult enough to restore old saves. In every thriller I’ve seen and every game I’ve played the hero would have gone one way, but I was going to try the other, unexplored path.
That particular decision ended up being very satisfying, as I was able to solve a puzzle using some subtle clues, and I avoided a fairly horrible fate. The simple existence of those decisions, and several others, are why Heavy Rain succeeds as an extremely inventive form of truly interactive drama. Sure, it’s just a choose your own adventure book, but at least in my play through it was a very consistent, well thought out, and challenging one. It also has the best implementation I’ve seen so far of quick time events, as the analog nature of the inputs really helps bring across subtle details and draw you into the character’s motivations.
Heavy Rain really reminded me of why I disliked Uncharted 2 so much. Uncharted 2 put tons of effort into recreating the film experience (even doing full performance capturing) in real time, but what was the point? At no point were you able to influence the important actions of your characters. Uncharted 2 is two only loosely connected halves: a game and a movie. Frankly I’ve played better games and seen better movies. Heavy Rain on the other hand, really is the full merging of Game and Movie, because your choices are strongly informed by the narrative, and likewise dramatically influence it in return.
Posted in Game Design, Uncategorized | Leave a Comment »
Posted by Ben Zeigler on April 11, 2010
I read a pretty businessy article at Edge Online today, about the fact that there are no longer any publicly held game development studios in England, and how there are virtually none worldwide. The article talks a lot about why this is, but I think the best answer is given in the last paragraph, as a quote from the CEO of Climax: “I am very glad we didn’t list Climax now. There is an inherent tension between the short-term view of capital markets and the long-term nature of game development”. If you look around at the most successful game development studios in the industry you’ll see one of two patterns: completely private ownership like Valve, or elite studios inside very large public publishers that are mostly shielded from quarter-to-quarter financial issues (Blizzard appears to still be immune).
The current Activision vs. Infinity Ward debacle is a superb example of the direct conflict between short-term and long-term thinking. The heads of Infinity Ward, at least as far as I can tell from the rumor mill, really wanted to try to work on something new, which would possibly fill an important long-term goal by spawning a new franchise and keeping employee morale high. Activision corporate can’t deal with that kind of thinking, because they have to worry about what’s happening in the short term. Activision is looking about 2 years out (because that’s really the minimum dev cycle possible): when you’re a publicly held company without incredibly strong leadership and market position, that’s the furthest you can look ahead before you get fired by your board of directors or large shareholders.
Game industry stocks are basically treated like Tech industry stocks, and what are the shareholders looking for? They want Growth, and that’s all they care about. They need the revenue numbers to keep getting bigger and bigger, because that means the company is more valuable, and they can resell at a profit. Dividends, which work to encourage long-term holding of stocks, are largely absent in the tech sector because of a lack of profits, and investors don’t look for them anymore. For most of the people who would buy the public stock of a game publisher, any actions the management team takes that take even 5 years to bear fruit are 100% worthless.
This is why I wouldn’t feel too secure if I was John Riccitiello right now. He’s tried to renovate the image and developer prestige of EA, and these actions will bring great fruits in the years to come. I feel like he’s done a good job of turning around EA as a long-term going concern. But, that won’t be enough for him because EA’s revenue is down a bit over the last year.
A public corporation is explicitly chartered to do what’s good for its stockholders. But, what’s good for the stockholder is NOT good for the company, the employees, or the industry as a whole. Google can get away with largely ignoring the stock analysts, but not the smaller publishers or independent publicly owned developers. They HAVE to live quarter to quarter because that’s what their fickle owners demand. And working to maximize your revenue for the next fiscal year is simply not what game development is about. If you want to know why a certain company doesn’t seem to really care about its reputation or long-term prospects, it’s because the very structure of the corporate funding model doesn’t let them, and they’re not talented or willing enough to resist.
Posted in Game Development | Tagged: activision, business, ea, gamedevelopment | 4 Comments »
Posted by Ben Zeigler on March 25, 2010
Looking back at this year’s GDC there was a single thread running through most of the sessions I attended: Deceiving the player. Both Sid Meier and Rob Pardo explicitly told us to lie to players about how we calculate random chance, because of the way human psychology interprets probabilities. Chris Zimmerman laid out in detail how to lie to the player about what their hands did. Chris Tector explained how to perform a deeply technical form of lying to build the illusion of a continuous world from streamed chunks.
Sid Meier talked about the “Unholy Alliance” between designer and player, and Ernest Adams talked about the “Tao of Game Design”. On reflection the concepts have much in common: they are about the collaboration between player and game designer to craft a shared experience. From both the player and the designer, a unique mix of deception and trust is required: Suspension of Disbelief.
It can be interesting to compare gaming to another form of popular entertainment: Professional Wrestling. Back in the dark days of carny scams, Pro Wrestling was presented as real with the explicit goal of bilking the consumer. Over the last 30 years or so the deception inherent to Pro Wrestling has shifted: The vast majority of fans are completely aware that it’s all fake and planned, but they don’t care. They are completely willing to suspend their disbelief, and in return become part of the show. The experiences are real even though they’re based on a foundation of deception, and that’s at the core of gaming as well.
So lying to your players with the goal of building a collaborative experience is key to the power of the medium, but where can it go wrong? Ernest clearly talks through the different design philosophies of different types of games and argues that designers should effectively be more truthful the more a game moves away from a conventional Player vs. Environment game. Jaime Griesemer talks about ignoring the literal feedback of players, but never proposes lying to them in a competitive PvP environment. Eskil Steenberg’s whole talk was about moving Procedural Generation away from its long history of deception and towards the front of a game’s design.
Finally, much of GDC was talking about a completely different form of player deception. Soren Johnson’s great blog post lays it all out: game designers are increasingly being asked to lie to players about the very process of playing a game. With a flat fee, subscription, or large chunk DLC model the goal of a designer is honest and out in the open: they want to make the player happy so they will recommend the products to their friends and buy future related products. The goal of a designer in a microtransaction-based game is instead to exploit the second to second emotional weaknesses of their players to sell as many individual bits as possible. But, you can’t tell players this so they brand the games as “free to play” or “social”. This deception (and others such as DRM) doesn’t have anything to do with improving the player’s experience, it is simply about maximizing short term profit at the possible expense of long term credibility.
Update: Just as I posted this originally Soren’s twitter pointed me to a great post by Frank Lantz on essentially the same topic. He takes a bit of an opposing view in holding that the kind of deception advocated by Sid and Rob can be destructive because it stops players from learning about actual truths in the world. Give it a read.
Posted in Game Design, GDC 2010 | Tagged: gdc, GDC 2010 | Leave a Comment »
Posted by Ben Zeigler on March 21, 2010
Okay, time for my last set of session notes. Here’s what happened at Reading the Player’s Mind Through His Thumbs: Inferring Player Intent Through Controller Input presented by Chris Zimmerman from Sucker Punch. The main topic of the session was the controls of Infamous (sorry, I refuse to use the stupid capitalization), which I quite enjoyed as a game. It presented a few pretty innovative ideas for interpreting player input, which I’m currently trying to figure out how to apply to various projects I’m working on. Stuff I heard:
- For Infamous one of the goals was to push immersion as far as possible. The objective was to make you feel like you WERE Cole, instead of just controlling Cole. The player should see and hear what Cole sees and hears, and do what Cole does.
- However, players expect a certain amount of abstraction in their control interfaces. No player wants an accurate simulation of what it’s like to climb a rope, which is actually kind of hard. Direct control just won’t work, both because of the complication of the world and the fact that players are somewhat bad at using controllers. Players want to express their wishes, and feel challenged but also successful.
- Chris performed an experiment asking players to point a PS3 analog stick at objects in a 3D scene. Players were fairly inaccurate, and formed a bell curve. Around 70% were within 15 degrees of the target, but not accurate beyond that. A second experiment asked them to push a button when a ball bounces. Players were within 50 ms most of the time, but not any more accurate than that (plus any A/V lag, which is omnipresent).
- The challenges for Infamous were large: small, active enemies and inexact controllers. It featured jumping and climbing, and they had the design goal that everything that looked climbable had to be climbable. Turns out the city has a lot of things that look climbable.
- They considered going with a solution where you would lean on a joystick to climb, like in Assassin’s Creed 2. But, they decided that didn’t feel skillful, and would be a problem given the density of the city. So, the goal was to require the player to execute correctly, but to liberally provide aid in the background.
- For aiming, the goal was to make it so the game ALWAYS shot exactly where you were aiming the reticle. So, the solution was to help the player move the reticle, rather than leaving the reticle fully manual and adjusting shots after firing. The game adjusts the “input” based on both the player’s real input and what the game thinks the player is trying to do. The movements have to be physically possible on the controller, it just helps the player out.
- It takes the controller input and looks for targets that are near the direction the reticle is moving. It will adjust the reticle motion to be directly towards that object if it’s a good enough fit. It also slows down the reticle movement as you are close to the target, to give more time to fire. It also keeps track of where it was and where it’s going as well as button presses, so if the press happens within a certain time frame of passing over the target it counts as on target.
- If there are no inferred targets the reticle is 100% player controlled. It excludes targets that are too far away, and all targets are treated as a line (because they have height) instead of a point. This means that if you’re heading towards a character’s feet you’ll hit their feet with your attack, instead of magically jumping to their midsection.
- For evaluating targets, lexicographic scoring is used. It rates the targets on several dimensions in priority order, and only uses lower dimensions if the top ones are equal. For shooting, this means that each target has two parts of the “egg” that move out from them. Any hit in the inner section of an egg rates higher than any hit in an outer section, and within each group the closest target wins.
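Lexicographic scoring is easy to sketch because tuple comparison already works this way. This is my own illustration of the idea, not Sucker Punch's actual code, and the dimension names are invented:

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    in_inner_egg: bool   # does the reticle path pass through the target's inner zone?
    distance: float      # distance from the current reticle position

def lexicographic_score(target: Target) -> tuple:
    # Python compares tuples element by element, which is exactly
    # lexicographic scoring: the second dimension (distance) only
    # breaks ties on the first (inner vs. outer egg zone).
    return (0 if target.in_inner_egg else 1, target.distance)

targets = [
    Target("thug_far_inner", in_inner_egg=True, distance=40.0),
    Target("thug_near_outer", in_inner_egg=False, distance=5.0),
]

best = min(targets, key=lexicographic_score)
# Any inner-zone hit beats any outer-zone hit, no matter the distance.
print(best.name)  # -> thug_far_inner
```

The nice property is that adding a new, higher-priority dimension is just prepending an element to the tuple, with no re-weighting of the dimensions below it.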
Jumping and Climbing
- For jumping the goal was the same as targeting. The game allows air steering while in midair, so the game will only adjust your controller input to match a physically possible controller input. But, jumping is way more complicated because of the larger number of possible targets, the commonality of linear targets such as ledges, and various animation affordance issues. Failing to correctly predict while jumping is worse than when aiming.
- The basic algorithm looked ahead about 0.75 seconds. There was a set of “illegal” filters it would check for all possible locations, and one preference score. It would first apply the exclusion filters in performance order, doing the cheap tests first. After passing all filters it would heuristically score the remaining targets.
- To score the targets it uses a golden-section one-dimensional optimizer, which was picked for execution speed. The scoring function was based on trajectory relative to final position. To modify the player input it computes the player’s time to land, and the relationship of the desired target and the current trajectory. It adjusts the player’s stick input to match the desired input. (Note: I missed a bit of the specific math, but there are a variety of ways to rate targets heuristically)
- Ground landings have to be compared against features. You can’t just trace the feet because they will hit a wall before the ground in many situations (such as jumping up). It instead traces a set of polygons between the head and the feet, and clips them against geo. It stops the search if the polygons hit a segment the player can stand on.
- The original design didn’t call for wall jumps, but they added them when players ended up hitting walls. When a player hits a wall they can “move” a certain amount or jump off, which ended up feeling more natural. To score wall collisions it traces a parabola at the knees.
- They ended up doing some modification to running as well, to walk around small objects in the world. Super heroes don’t run into parking meters, why should the player?
- Why does this sound so complicated? At first they tried stealing from Uncharted, where it computes the jump at take off. But, there were too many objects. Then they tried stealing from their own Sly Cooper games, which have a “grab” button that, when activated, automatically air-steers the player towards the nearest surface.
- But, the realistic graphics of Infamous brought with them increased expectations. Players were not willing to accept the floatiness of the Sly Cooper solution, which worked just fine in a cartoony world. So, they had to come up with something that matched the world better.
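The golden-section optimizer mentioned above for scoring jump targets is a standard derivative-free method for minimizing a unimodal function over an interval. Here's a minimal sketch under my own assumptions; the landing-cost function is invented for illustration (Chris didn't give the actual scoring math):

```python
import math

def golden_section_minimize(f, lo, hi, tol=1e-5):
    """Minimize a unimodal function f on [lo, hi] via golden-section search.

    Each iteration shrinks the bracket by the inverse golden ratio,
    so it converges quickly with only function evaluations (no gradients),
    which is why it's attractive for a per-frame game budget.
    """
    inv_phi = (math.sqrt(5) - 1) / 2  # ~0.618
    a, b = lo, hi
    while abs(b - a) > tol:
        c = b - inv_phi * (b - a)
        d = a + inv_phi * (b - a)
        if f(c) < f(d):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2

# Hypothetical cost: squared distance between where the current jump
# trajectory lands and a candidate point parameterized along a ledge.
landing_x = 3.2
cost = lambda t: (t - landing_x) ** 2
best_t = golden_section_minimize(cost, 0.0, 10.0)
print(round(best_t, 3))  # -> 3.2
```

In a real implementation you'd cache one of the two interior evaluations between iterations (the classic golden-section trick) to halve the number of scoring calls.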
Posted in GDC 2010 | Tagged: gdc, GDC2010, infamous, sucker punch | 2 Comments »
Posted by Ben Zeigler on March 19, 2010
It’s starting to wind down, but here’s some more notes! These are from the session Procedural, There is Nothing Random About it by Eskil Steenberg. Eskil is working on the indie MMO Love, which is going to go live very shortly, and his talk comes from the perspective of integrating many procedural techniques into his work. It worked well both as an overview of the concept and as an explanation of specific techniques. This talk had a bunch of valuable visual aids (he opened the game live at several points) so these notes are not as useful as a video would be. Sorry. Anyway:
History of Procedural
- Procedural content generation started as a purely practical pursuit, because many old systems were severely lacking in memory. Games like Rescue on Fractalus and Populous (Eskil said the high selling “Mission Disk” addon was purely a list of random seeds) generated their procedural data in engine, but that solution is fairly pointless in today’s world. We now have tons of memory and storage space.
- The next type of procedural content generation is offline generation. One early attempt at this was the Massive crowd simulation tech created for Lord of the Rings. It’s also been used in a variety of modern games such as Far Cry 2 or Eve. This technique is valuable because an imperfect procedural tool can be fixed up in post production to iron out the kinks. This is a valuable way to save time.
- Last year Eskil told everyone to fire their designers, this year he’s telling everyone to fire their artists. The way you make a good game is to make a bad game and fix it, so you need as fast an iteration as possible. This means you need a super fast art pipeline, and procedural tools are a huge help for this.
- Ken Levine has said that filmmakers get to make movies while game developers get stuck having to make the camera first. “Ken, I love you but you’re wrong”. Many of the most artistically interesting films have been made by filmmakers who DID make their own camera. Technology is not purely a means to an artistic end, but can in fact inspire new and interesting artistic expressions.
- Eskil demoed his modeling tool. He showed how it allows artists to make fragments and then use “deploy” to recursively place those objects over any mesh or surface. It’s an example of how you can set it up so artists get to art direct, instead of just make tons of individual custom pieces.
- In today’s game industry, Art is what is stifling innovation. Design, tech, and innovation are held back by art constraints. Destructible environments are easy, but the high visual requirements mean we can’t do them. “Chris Hecker, I love you but you’re wrong”, there are still interesting tech issues to solve.
Procedural Generation Back In Engine
- The solution to the issues with the stifling art pipeline is to put procedural generation back into the engine. Ragdoll may not look as good as hand animation, but it reflects the player actions in a stronger way. This feedback and responsiveness is what is missing.
- How would you procedurally build a labyrinth? You start with a block, carve out a solution, and then add embellishments once you’re sure it works. The traditional way to make a locked house is to make exactly one door that can be opened by exactly one key. The emphasis is on logically correct structures.
- But, how about we take a statistical solution? Perhaps we make a house that can be opened in any number of ways. You find a key, and then maybe you find the house. Life is lots of keys and lots of doors, and can be about improvising. Why can’t games be about this kind of improvisation?
- If Eskil were an assassin, he could pickpocket the entire room and gain hundreds of possibilities. Games can be like that. Instead of enforcing logical consistency, we can build a house with 5 doors, and randomly placed keys. It will be statistically consistent because the odds are functionally 0 to have all 5 keys end up in the house.
- To build interesting statistically consistent systems you need to take advantage of spatial dependencies. Applying a series of what are basically image filters can handle these relationships. Stochastic sampling is a good place to start.
- Disney said to Pixar that Pixar would fail because computers can’t understand emotions/wants of consumers. But, the designers of said computers can. If a rule can be taught to a designer it can be taught to a computer.
- As an example, Eskil had an algorithm to place bridges in his world. At first it made way too many bridges, so he kept refining the algorithm. Instead of just reducing the frequency he made the requirements more strict until he arrived at the best bridge he could think of. The bridges made by his algorithm were more interesting and logical than ones he would have hand placed, because the computer didn’t come into it with any biases.
- Love is basically complicated systems of hierarchical filters that can construct objects of any type, such as buildings, cliffs, etc. The world is a grid, but subsections of the grid are replaced by custom artist assets as appropriate, so the world ends up not looking like a grid.
- Last year, Eskil felt alone. He didn’t share any of the problems of the rest of the industry. The PC is dead, except for steam (but that doesn’t count). Free to play MMOs are all that matter, except for WoW (that doesn’t count). Eskil doesn’t want to count: that’s when you succeed.
- Finally, Eskil wants us to all go out and explore. He wants us to say next year “Eskil, I love you but you’re wrong”.
Posted in GDC 2010 | Tagged: Eskil Steenberg, gdc, GDC2010, Love, mmo | 3 Comments »
Posted by Ben Zeigler on March 17, 2010
Here’s my notes for the talk Streaming Massive Environments from 0 to 200 MPH presented by Chris Tector from Turn 10 Studios. He’s listed as a Software Architect there, and obviously has a deep understanding of the streaming system they used on Forza 3. This talk was nice and deep technically, and touches all parts of the spectrum. I did get a bit lost when it got down to the deep GPU tricks, so I may have missed a bit. Anyway, here’s things that I think were probably said:
- The requirements are to render the game at a constant 60 FPS, which includes tracks, cars, the UI, crowds, particles, etc. Lots of things to do, so very little CPU is available at runtime for streaming.
- It has to look good at 0 mph, because that’s where marketing takes screenshots, and where the photo mode is enabled. It also has to look good at 200 mph, because that’s how people play the game.
- As an example, the Le Mans track is 8.4 miles long, with 6k models and 3k textures. Keeping the entire track loaded would take 200% of the console's memory JUST for models and textures.
- Much information can be gathered from the academic area of “Massive Model Visualization”. However, beware that academic “real time” does not equal game real time, because of all the other things a game has to do.
- First, the tracks are stored on disk in modified .zip files, using the LZX data format. Tracks take from 90MB to 300MB of space compressed. This data is read off disk in cache-sized blocks. The only actual I/O that is performed is done strictly in order, to avoid the horrible seek times of a DVD.
- The next stage is the in-memory compressed data cache. Track data sits in this cache in the same compressed format as on disk. Forza 3 uses 56MB for this cache, with a simple Least Recently Used algorithm to push blocks out. Each block is 1MB.
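A toy version of such a block cache is easy to sketch. The capacity mirrors the numbers above, but the code is an illustrative sketch, not Turn 10's implementation:

```python
from collections import OrderedDict

class BlockCache:
    """LRU cache of fixed-size compressed blocks, modeled on the
    56 MB cache of 1 MB blocks described above."""

    def __init__(self, capacity_blocks=56):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> compressed bytes

    def get(self, block_id, load_block):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark most recently used
            return self.blocks[block_id]
        data = load_block(block_id)            # miss: read from disk
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data
```

Fixed-size blocks keep the eviction decision trivial, which matters when there is almost no CPU budget left for streaming at runtime.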
- The next stage is a decompressed heap in memory. There's a 360-specific LZX decompressor that runs at 20 MB/sec. They had to optimize the heap heavily to get really fast alloc and free operations. Forza 3 uses 194 MB for this heap and allocates everything aligned and contiguous.
- The next stage is the GPU/CPU caching layer. They do something semi tricky for textures. Textures can be present in either Mip 0 (full res), Mip chain (Mip 1 down to 32×32), or Small Texture (a single 32×32 texture) form. There is special 360 support to allow the Mip chain to be split up in different memory locations, so they can stream the Mip 0 in after the rest of the chain and it will display correctly.
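Choosing which of the three texture forms to stream might look something like this. The pixel thresholds are invented for illustration; the real pipeline derives the decision from its sampled visibility data:

```python
def texture_form(pixels_visible):
    """Pick which of the three texture representations to stream for a
    given on-screen footprint, mirroring the forms described above."""
    if pixels_visible <= 32 * 32:
        return "small"      # the preloaded 32x32 stand-in
    if pixels_visible <= 256 * 256:
        return "mip_chain"  # mip 1 down to 32x32
    return "mip0"           # full-resolution mip, streamed in last
```

The 360's split-mip support is what makes the third case practical: the object renders from the mip chain immediately and upgrades in place when Mip 0 arrives.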
- A few special things happen in the GPU/CPU itself. First, there is NO runtime LoD calculation, as the streaming data gives the correct LoD to show, and the LoDs are separate objects in the stream. They did add a basic instancing system to allow a single shader variable. They spent a lot of time optimizing the GPU/CPU for the 360. He mentioned using Command Buffers as much as possible, and spending time right-sizing assets to fit optimal shader use. The 360 has special controls to reduce MIP memory access. (Note: This got a bit too deep for me)
- Many projects use conservative occlusion to determine visibility, often because it can run in real time. However, Forza does per-pixel occlusion in an extensive preprocess step, using depth buffer rejection to figure out what's occluded. It also does the LoD calculation at this point, and will exclude any objects that aren't big enough to be visible (contribution rejection). Many games do LoD and contribution rejection at runtime, but with a data set this huge they end up with horrible cache performance. (Note: I asked later, and this whole process takes up to 8 hours offline for a very large track)
- First step in the process is to Sample the visibility information. The tracks have an inner and outer spline that defines the “Active” area, so the sampler picks a set of points inside those splines (and maps them to a grid relative to the center spline). At this point it creates “zones” which are chunks of track.
- To actually sample at each point, it uses a constant height and 4 view angles relative to track direction. Visibility results for each point are also applied to the adjacent points, because an object becomes visible at some undefined spot between two sample points.
- The engine then renders all of the models that are plausible (without textures). It then runs a D3D occlusion query to see what is visible and how much. Each model keeps track of its object ID, camera location, and pixels visible. The LoD calculation happens at this point, since it uses the distance info. It can do LoD, Occlusion, and Contribution in a single pass, after 2 renders, so each individual operation is fairly quick. It keeps track of the pixel count of each object in a zone, as opposed to just a binary yes/no for visibility.
- After sampling, a Splitting process takes place. Many of the artist-placed objects are extremely large in their source data, to avoid seams and such. So, it will break these large objects up and cluster smaller objects together into single draw calls. Instancing breaks the clustering, so artists have to be careful.
- The next step is the Building process. At this point it maps the textures on to the models. There’s a pass that aggressively removes duplicate textures. It looks for renamed textures, exact copies, and MIP parent/child relationships and will combine them as necessary. It also computes the 32×32 “small textures” at this point. The small textures for an entire track are put into a separate chunk and are preloaded for the entire track. This chunk is from 20-60 MB depending on track and is the only track data that is preloaded. This is so when the low LoD for an object is up and running, it will at least be colored correctly.
- Optimization is the next phase and is somewhat complicated. For each zone it finds the models and textures used in that zone as well as the two adjacent zones, and sorts them by number of pixels visible. It then does the trivial reductions for unneeded texture detail (if a texture is only ever seen below 32×32, strip it entirely; if its full resolution is never seen, strip Mip 0).
- It then does two memory reduction passes. First, it has to lower the total size of models/textures loaded in a zone to be less than the decompressed heap. It removes models/textures as required, starting with the fewest pixels visible. After that it computes a delta of models/textures relative to the zones before and after it. The delta has to be lower than the available streaming bandwidth, so it strips further for that reason too.
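The first reduction pass can be sketched as stripping the least-visible assets until the zone fits the heap. This is a simplified illustration; the asset tuples and budget units are hypothetical:

```python
def fit_heap_budget(assets, heap_budget):
    """Strip assets starting with the fewest pixels visible until the
    zone's total decompressed size fits the heap budget.
    'assets' is a list of (name, size, pixels_visible) tuples."""
    assets = sorted(assets, key=lambda a: a[2])  # least visible first
    total = sum(size for _, size, _ in assets)
    while assets and total > heap_budget:
        _, size, _ = assets.pop(0)               # drop least visible
        total -= size
    return [name for name, _, _ in assets]
```

The second pass would run the same kind of loop, but against the size of the asset *delta* between adjacent zones rather than the zone total.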
- Once it computes the set of assets in a chunk, it has to package them. It places them in a cache efficient order and places the objects in “first seen” order. Objects that are frequently used end up near the front of the package and will stay in memory throughout, while objects for later in the track are farther back.
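First-seen ordering is simple to sketch (a hypothetical helper, with zones represented as ordered lists of asset names):

```python
def package_order(zones):
    """Lay out assets in the order a zone first needs them, so assets
    used early (and throughout) sit at the front of the package."""
    seen, order = set(), []
    for zone_assets in zones:       # zones in track order
        for asset in zone_assets:
            if asset not in seen:   # keep only the first occurrence
                seen.add(asset)
                order.append(asset)
    return order
```

Combined with strictly in-order disk reads, this layout means the drive head sweeps forward through the package as the car moves forward through the track.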
- The last step is Runtime. The runtime code is responsible for keeping track of which individual objects to create and destroy, based on the zone descriptions. It could reference count, but instead does a simple consolidation where it frees everything first, which reduces fragmentation.
- The keys to the Forza system are work ordering, heap efficiency, decompression efficiency, and disk efficiency. Each level of the data pipeline is critical, and anything that can be done to improve a level is worth doing. Don't over-specialize on a particular aspect of the pipeline.
- System isn’t perfect. Popping can happen either due to late arrival caused by memory/disk bandwidth not keeping up, or it can be caused by visibility errors. They did have to relax some of their visibility constraints eventually, because certain types of textures threw off the calculation. They provided artists with a manual knob that can tweak an individual object to be more visible at the expense of possibly showing up late. Finally, you have to deal with unrealistic expectations.
- For future games, Chris had a few ideas. First, he would like to expand the system to work for non-linear environments. This would entail replacing linear zones with 3d zones, but would allow open world racing. There are probably more efficient forms of domain specific decompression that could up the decompression bandwidth. The system could do texture transcoding. It should be expanded to add another layer on top of the disk cache: network streaming (Note: Trust me when I say that’s a whole other lecture by itself)
Posted in Game Design, GDC 2010 | Tagged: Forza, gdc, GDC2010, Turn 10 | 3 Comments »
Posted by Ben Zeigler on March 17, 2010
Here’s the notes I have for Single-Player, Multiplayer, MMOG: Design Psychologies for Different Social Contexts as presented by Ernest Adams. Ernest has a long history of writing about and teaching game design, although primarily single player games. Roughly, this talk is about him extending his previous concepts to encompass multiplayer games, with varying success. It works as a good overview of how social context affects design, but Ernest is a BIT out of date with the MMO world, as he himself admits. Blah blah, any transcription mistakes are purely my own.
Ernest’s General Philosophy
- Intellectual pursuits can be vaguely separated into deductive (which he described as English) or inductive (French) thinking: the Classic or Romantic contexts. Game design basically straddles the line perfectly, and is a Craft instead of an Art or a Science. Da Vinci should be our idol.
- But game developers aren’t really very good at their craft. They kill 2/3 of projects they start. They never seem to think through the final goal, and generally lack a philosophical direction.
- Player-Centric design is a solution to this. A designer must imagine a single, idealized player. The goal of a designer is to entertain them, and to empathize with them. The designer has a responsibility to think about how their game will make a player feel.
- The Tao of Game Design is the model Ernest uses to describe the relationship between player and designer. They are collaborating to create an experience, and neither would exist without the other. Each has the other inside of them, as far as trying to build a mental model.
- But, Ernest says this model is incorrect, because it specifies a singular player. Ernest said he was falling into a bias of writing about games he likes to play and create: single player games
Player Versus Environment
- The first type of game is PvE, which is not exactly the same as single player. A strictly cooperative multiplayer game can still be PvE, while a single-player game against a simulated AI opponent (such as football) is not really PvE.
- In a PvE game, the designer’s job is to design interactions. It’s vital for the designer to maintain a fairness throughout. Difficulty spikes, learn-by-death, stalemates, insufficient information for critical decisions, and expecting outside information can all violate the player-designer pact and pull the player out of the game.
- The relationship between player and designer is very intimate, and according to Ernest these kind of games can be Art (with a capital A) because they really have the concept of an artist.
Player Versus Player
- In a pure PvP design, the job of a designer is to do competition design. The goal is to enable the fun that comes out of players interacting with each other, not over designing and trying to force the fun into the system. Fairness is fairly simple, and involves making sure that everyone has an equal start and can’t cheat.
- Instead of a designer collaborating with a player, a designer is creating a system in which players will exist. Basically, a PvP designer is more of an Architect than an Artist. You can try to make all the rules you want, but players will add their own rules to the system.
- Ernest talked a bit about how he worked on one of the first online games, Rabbit Jack’s Casino at AOL. It was pay by the minute, so Ernest feels it kept him extremely honest as a designer. Everyone seemed really nice. If he didn’t keep the player engaged they would just leave. (Note: a cynical view here is that if he didn’t keep them psychologically addicted they would quit)
- His recent MMO experience was to jump into Second Life, which was a very lackluster experience. Everyone was extremely rude to him, the game took forever to load, and it felt very unfamiliar. (Note: Yeah, that’s Second Life. Which is not a game.)
- Designing fairness is basically impossible, as the starts are inherently uneven. The best anyone seems to have figured out is things like Raph Koster's Laws. These laws are based on empirical evidence from existing communities, and tend to be about SURVIVING an online game, not having fun. Baron's Law says that Hate is good because it brings people together. As long as Raph's laws are true, MMOs will suck for the vast majority of potential players. (Note: Many of Raph's laws are super cynical and really don't apply to newer designs like World of Warcraft. Which is kind of why it's successful.)
- As an MMO designer, it’s about servicing a cloud of players, who really won’t care about you until you screw up. Your job is to be a social engineer.
Free to Play MMO
- Much of Ernest’s material for this section is based on slides from a presentation Zhan Ye gave at Virtual Goods Summit 2009. That presentation is from the perspective of someone from the Chinese free to play MMO industry giving advice to western developers.
- In a pay-per-time-period MMO, the only goal of individual features is to increase fun and general engagement, because specific actions are not monetized. However, in a Free to Play (ie, not free at all) MMO the design goal ends up being to maximize revenue from specific actions. Every feature in an F2P game must either directly add revenue or indirectly support something that does.
- Fairness is no longer a goal at all, because it doesn't help revenue. Instead, the goal is to create drama, love, and other elements of the real world. These elements will spur people to purchase items. The larger the advantage an item confers, the more likely a player is to buy it.
- As a result, in the first generation of successful Chinese F2P games, rich players would buy all the weapons and then use them to kill all the poor players. This ended up being too unbalanced, as all the poor players would immediately quit and not provide the player base needed to keep the rich players buying items.
- So, the solution in the Chinese F2P community is to set up a series of family clans that will hire poorer players to fight for them. They would use gifts, threats, and extortion to control the poorer players. In other words, form in game criminal cartels.
- Most successful items are based explicitly on exploiting human emotions. “Little Trumpet” is an item that can be purchased and used to publicly humiliate another player. That player can then pay money to have that curse removed, and is very likely to do so due to emotional distress.
- Zhan compares F2P games to Las Vegas, but Ernest says they are worse because in Las Vegas you at least have the chance to make real money. F2P uses all the same psychological hooks of a slot machine, but with 0 chance of winning.
- Ernest believes that these games are in fact evil. The designer has set up a system that explicitly subsidizes real hatred, because there is no such thing as virtual hatred. If a game is set up to incentivize players to inflict emotional harm, that game is evil.
- There are two solutions to this problem. The first is to NOT make your game zero-sum and remove competition (Note: So Farmville is not evil in this SPECIFIC way, as it does not encourage hate). The other is to institute various methods that restrict things to competition instead of hatred and destruction: something like the NFL salary cap, versus the America's Cup or F1 where the richest always wins.
- In F2P the designer’s goal is to be an economist. They still need to entertain the players, but empathizing with them is strictly bad business. If these games continue on this path, Ernest asks that we shoot him.
- In conclusion, the craft of game design is fragmenting, there is no longer a single unified philosophy.
Note: As a focused response, I found his discussion of F2P MMOs very interesting, although I think he restricts it a bit too much to that genre. I would expand it a bit, because hatred can happen in PvP or MMO environments just as easily. For instance, take your typical 360 shooter populated by teenagers: they clearly want to inflict emotional harm, and there is nothing in the game systems to help ameliorate that. But I can definitely stand behind his basic conclusion: developing games that prey on the emotional weaknesses of players is basically evil, and F2P games are much more likely to incentivize such decisions because of the focus on revenue over empathy.
Posted in Game Development, GDC 2010 | Tagged: ernest adams, gdc, GDC2010, mmo | 5 Comments »
Posted by Ben Zeigler on March 14, 2010
Here’s some notes from the session Design in Detail: Changing the Time Between Shots for the Sniper Rifle from 0.5 to 0.7 Seconds for Halo 3 presented by Jaime Griesemer from Bungie. He was in charge of multiplayer balance for Halo 1, 2, and 3 so has a lot of relevant experience. The talk was jam packed with information, so odds are very high that I missed something. At the end he was going pretty dang quick to fit it all in the hour session. Oh, and at some point he had a few slides about the odds of monkeys with typewriters reproducing the talk being like 0.2%, but it was pretty out of place so I honestly can’t remember where it fit in.
- Longevity means balanced. If a game like Halo 2 has been played and enjoyed by millions for years, it is balanced.
- Balance can’t happen until the end of development, but you can’t wait until the end to balance because you won’t get it done. The solution is to balance in iterative passes. Once you’ve balanced at a certain level, don’t go backwards until you absolutely have to.
- Passes are roughly (Note: I think I missed one) Role -> Flow -> Strength -> Limitations -> Detail
- There are two cognitive halves to balance. First, you have to develop an intuitive sense of balance. Using the non-rational part of reasoning, your brain (orbitofrontal cortex) builds models and uses them to predict the future. If something feels wrong to you about the balance of the game, this is what tells you.
- That part of the brain is great at telling you something is wrong, but not at telling you how to fix it. You have to use the other half to make the hard choices. Your brain (prefrontal cortex) needs to use reason to figure out what to change. But it can only work on so much information at a time. You have to work at a low detail level, and ONLY pay attention to info relevant to the current stage.
- As an example, there’s an experiment from the Choice episode of Radiolab. Subjects were given either a long number or short number to remember, and then were ambushed with the offer of cake or an apple. People with the short numbers picked apples, but people with the long numbers picked cake, because they didn’t have enough reasoning capacity left to make a rational food choice and went with the emotional one. (Note: That episode of Radiolab is great, and as a fan I have to say you should all go read “How We Decide” by Jonah Lehrer. The evidence is really strong for Jaime’s point here).
- The first part to each pass is going to be paper design. You need to plan out the behavior of all objects on paper before implementing them, so you can make sure they make sense. Figure out basic mechanics, desired feel, critical assets and important details.
- When designing roles you have to balance simple against complex. The goal is to make the game barely manageable at its deepest.
- Roles need to have actual functional differences. Rock-paper-scissors is not actually good design, as the 3 roles are completely identical. The depth in any multiplayer game comes from the roles and their interactions
- For a shooter, you should have no more than 1 weapon per role. If you add weapons that satisfy the same role but are different, you’re simply adding complexity and NOT depth. All shooters have the same weapons because they have the same roles. (Note: And players realize that now and get bored)
- Similarly, you can’t leave any role without a weapon. Rock-paper-nothing is not even a game.
- When cleaning up your paper design you should practice iterative deletion. Delete whatever isn’t necessary to fill a distinct role, and then delete everything that NOW isn’t necessary. And so forth.
- You have to balance chaos against certainty at this point. You want players to be able to think about probable, but not inevitable future results.
- In a large-scale multiplayer game you need to be careful about the levels of "yomi" needed to succeed. Basically you should stop at "I know what you know about me" and not get into too much recursion. If it looks like a gun it should be a gun.
- Beware of positive and negative feedback loops. If doing well causes you to do even better this gets in the way of balance.
- Use slots whenever possible instead of having to balance larger chunks. Always balance the core elements first. Cut half of whatever you do.
- Flow can’t really be balanced until objects first start to come online during production. At this point, the designer is in charge and should be setting the tone. Feedback is not super important at this point in the process.
- During this phase you need to get the cadence just right. If it's too slow the player will get bored, and if it's too fast the distinct events will start to blur together.
- Verisimilitude is key at this point. Triggers should be for shooting and buttons are for punching, analogous to the real world action. Work on making it feel real.
- This is where you add the first pass of spectacle. Think about sounds, control, animation. In the sniper rifle case it unzooms for reload 0.5 seconds late, just so you can see your target die satisfyingly.
- Causality needs to be established. Your game has to look good on YouTube, so you can tell what is making what happen. Players need to understand the causality of game events or they will think the game cheats. Make this as obvious as possible, and throw out realism to establish it.
- The flow of a game is fragile, so as a designer you’ll have to use your imagination to get in the flow state this early in dev. Make your own sound effects. Whatever works.
- You want your game to have a low floor for flow so players can get into it, but a very high ceiling so they can always go deeper into an individual mechanic. Add as much detail as possible at the high end, to give them something to strive for.
- To enter the next phase, gameplay needs to be functional and largely optimized. Framerate above graphics quality. It has to work.
- Ketchup works because it's 5 primary flavors, all pushed to the max. Halo is like ketchup (and by extension, all games should be).
- Affordance is key at this stage. If the strength of something has to be explained, then it isn’t really a strength.
- It can be tricky to balance, because designers can mistake competence (getting good at a weapon) for the weapon being balanced. We CANNOT use our intuition at this stage because it will lie to us. Changes will have to be done in larger batches, and we need to avoid bias effects.
- Once everything is strong and useful, it’s time to start adding limitations. Limitations are not weaknesses. You add limitations to restrict the situations in which a role is successful, not add randomness.
- If the same role wins in too many situations, add limitations. If the outcome of a certain role in a situation is essentially random, there may be too many limitations for players to understand.
- Work on serious playtesting at this point. You want players to play, and you shouldn’t argue with them. Look for their reactions, NOT their solutions. “I don’t like x” is useful, “I don’t like x because of y” is great, and “You should do x” is useless. Trust the player’s gut (intuition) but don’t trust their reasoning as they do NOT have the same mental context you do as the designer. (Note: I 100% agree with and endorse this feedback strategy)
- Negative feedback generally means that the game in their head does not match the game as it actually exists. Either try to match it better, or do a better job of realistically setting expectations via teaching.
- Identify the specific goals of your playtester and keep that context in mind. Optimizers look to find the best overall, Ragers quit when frustrated, Role players always try the same weapon, “Your mom” will get confused, Griefers will try to destroy it for others, and Pros will hate you for any randomness.
Detail Balancing, and the Sniper Rifle Specifically
- Look to see if any weapons are being used outside of the roles you initially designed. See if any weapons are strictly dominating others.
- Eventually, the Sniper Rifle tripped the designers' intuition. Using it felt wrong; something was out of balance. Specifically, the sniper rifle was too effective at close range, and too effective at landing body shots without nearby cover.
- The first idea is to reduce the strength knobs. But you should avoid this, as it's going backwards and it'll make the weapon feel weak. Don't reduce damage or range, or add random weaknesses. Don't make the sniper rifle worse at its primary role.
- Couldn’t reduce sniper rifle magazine to 3, as then you couldn’t kill 2 ranged enemies without a reload. Increasing reload or time to zoom only fixed half the problem. Modifying max ammo count would fix the average problem but NOT the instantaneous problem, which is what people actually notice.
- In this case Flow made sense to modify. The cadence was picked to make the sniper rifle feel fast and rapid fire (Note: rapid fire sniper with one shot kill is a weird concept, he didn’t explain the original idea behind this) but it needed to change. So, the time between shots was increased, because this didn’t weaken the original role but made it worse in other roles.
- It went from 0.5 to 0.7 because you should never change anything by less than 10%. Players won't notice it at all, and a balance problem has to be fairly big to be noticeable in the first place. Overshoot and then come back if you can.
Posted in Game Development, GDC 2010 | Tagged: bungie, gdc, GDC2010, halo | 6 Comments »