Double Buffered

A Programmer’s View of Game Design, Development, and Culture

Archive for the ‘Game Development’ Category

5 Years Is A Long Time

Posted by Ben Zeigler on July 21, 2010

5 Years and 1 month ago I entered the gaming industry. Right out of college Cryptic Studios took a huge chance and hired me. Luckily it worked out, and Cryptic has always been a great place to get started in the industry. I had some awesome mentors during my first year there, and the working conditions were uniformly excellent throughout. For the first few years I had an absolutely perfect job. But after 5 years my passion for the job has completely faded and for my own happiness it’s time to move on. Officially as of last Friday, I have resigned from my position as a Lead Programmer at Cryptic Studios. I am doing this largely for personal reasons. Cryptic is still an awesome studio to work for and has many exciting projects in the works, but it is no longer the studio for me. I am currently taking a few months off to focus on my personal life (travelling to mainland Europe for the first time) and evaluate all of my future career options. I did not make this choice lightly, but I feel strongly that it is the right one at this point in time.

Reading the thoughts of Manveer Heir and Clint Hocking has helped me to clarify my own. If I was personally motivated by financial interests or quality of working conditions I would have absolutely no reason to quit, as Cryptic has treated me very well. But I am motivated by 2 things: the opportunity to solve interesting problems, and the satisfaction of seeing my work appreciated and enjoyed by others. If I can’t get those out of my life, I am objectively not a happy person.

Halcyon Days Gone By

During my first year I was dropped directly into the fire, working hard to get City of Villains shipped on time. For a kid right out of college this was an exhilarating thrill, and I was more than happy to work some overtime to help craft a great game. The final product had a few issues, but overall I was very proud of what we had accomplished in a short period of time. During this time I volunteered to fix a few tricky database corruption issues and I somehow got stuck with maintaining the database despite not having any official training. But hey, it was a new challenge and I took to it, reading up on the intricacies of MSSQL and database transaction theory. Despite being hired as a gameplay programmer, I started diving into the deeply technical infrastructure systems and learned more by the minute. Problems to solve abounded, and I could check the forums every day to see the real improvements I was bringing to players.

Going into my second year, the software team had some high ambitions. Coming off the largely successful launch of City of Villains the focus turned towards future projects, and that meant some extensive changes to the server infrastructure. The server team as a whole decided that a new system based on an Object Database would make the most sense, and because I was too inexperienced to know how hard it was, I signed up to construct a database from scratch. It turns out it actually IS possible to construct a database from scratch, and I spent the next year and a half doing exactly that. Here was my chance to really make an impact, and build a critical component that would be at the center of an entire community of users. Despite the naysayers I can report that the Object Database has held up just fine under load from tens of thousands of active players at once. I was solving a deeply interesting technical problem, and I knew my work would enable new experiences (dealing with larger per-shard concurrency) that were otherwise impossible.

Heading into year 3 I started to get tired of the whole database thing. I hadn’t entered the games industry to write game-agnostic infrastructure code. I could have worked at Oracle for better pay and less satisfaction if I was going to do that! My interest in game design is after all why this site exists in the first place. By this time I’d proven my technical chops so I was able to spend half of my time being the principal gameplay programmer on a new project. This was REALLY what I wanted to do! Work directly with designers to make a truly great game! It was a stressful but rewarding experience, trying out new gameplay prototypes, learning to manage the expectations of my teammates and superiors, and dealing with horrible game-destroying fires whenever they came up. I was learning how to actually design a game, and I could see the fruits of my labor every week during our playtests.

Sorry, That Was Pretentious

But then things started to go wrong. A newly acquired project delayed my project’s release window. The project’s thematic vision shifted while maintaining most of the same personnel. And then it shifted a second time, with commensurate delays. Eventually I was the only team member left from the original incarnation, and I had spent 3 straight years working on the “third unannounced project”. During those 3 years I had implemented several different combat systems from scratch, sat through hundreds of meetings, and fixed thousands of issues created by the rest of the company forgetting my project existed. This is all while I was spending the other half of my time maintaining and optimizing back end functionality that was about to ship in 2 commercial titles. I was spending more time fixing the same broken sink over and over than solving interesting problems, and I was increasingly skeptical of my gameplay work ever seeing the light of day. I wasn’t happy but I figured that was because I was so stressed out all the time.

Over the last year I made a conscious effort to try and achieve a better work-life balance, and I did a better job of delegating to some of the newer, very talented programmers. I worked hard at my collaboration skills and really focused on my primary game project. But over time something curious happened: as the adrenaline high wore off I realized I hadn’t actually enjoyed work for the last 2 years. What kept me going through the day was the sense of obligation I felt to the company, and my refusal to slack off and release poor quality work. But a huge chunk of my high-quality work had been completely trashed due to company reorganizations and personnel switches, and there were no interesting technical problems left to solve within the constraints of the organization. As ties of obligation loosened I realized that the structure of the company itself was keeping me from doing my best quality work. Issues that were minor and ignorable 5 years ago were now highly irritating and galling. After spending copious time attempting and failing to “fix” the company’s issues, it was time for me to “fix” my own personal issue of working a job I hated.

Creative Differences

5 years is a long time for any creative collaboration. I consider myself to be a creative person, and I also consider myself to be a slightly odd person. It turns out these are very related, and every truly creative person I’ve worked with has had a variety of personality quirks that can either be interpreted as endearing or highly irritating. The Beatles managed to go a whole decade, but only by constantly switching up the dynamics through reinvention. Hollywood movies are made on a contract basis over 2-3 years, which I think has more to do with the creative process than it does labor politics. Some of the best game ideas I have been involved with have come directly from spirited and slightly emotional arguments. Eventually the emotional weight of those arguments adds up. When a band breaks up over “creative differences” you can bet they’ve had those same creative differences for years. What has changed is that the level of resentment and anger over those differences has finally boiled over.

So I guess what I’m saying is: it’s time for me to leave the band. You guys can probably find a bass player who fits in a bit better and I want to explore a few more exotic styles of music. But thanks for the years of great memories and maybe down the line we can get together for a reunion tour. Until then I’m really looking forward to hearing that new CD you guys are working on, although I kinda wonder how much of my bass part will be in the mix by the time it comes out. I’m proud of what I put together, and you guys are welcome over any time to share a few beers.

Posted in Game Development | 2 Comments »

Why Can’t I Jump? The Perils of Player Autonomy

Posted by Ben Zeigler on June 8, 2010

A few years ago, I bought Guild Wars and mostly enjoyed it. It had a well crafted world and an interesting combat system, and should have been right up my alley. But every few minutes I would instinctively hit the space bar and deflate when my avatar failed to jump. Having come right off City of Heroes and a series of FPSs, the game’s rejection of my will instantly pulled me out of the experience. I know I’m not the only one, as most reviews of Guild Wars mentioned the inability to jump, and Guild Wars 2 previews inevitably emphasize the ability to leave the ground on command.

Research I’ve been exposed to recently has made it abundantly clear why this disturbed me so: Guild Wars was not meeting my need for Autonomy. Basically Autonomy or Volition (well named game company!) in this context refers to the need of players to feel like they can make real choices. Individual choice and open ended game design is associated with increased autonomy but is not required, because research (working to cite a source, this is based on my notes from a presentation) has shown that the important bit is that a player feel like they made a choice, and not that they actually did. It is incredibly vital to do as much as you can to align the game’s available choices and the player’s expectations. When they get out of sync, the long-term engagement of players with your game will plummet, which basically means no word of mouth or sustainability.

Despite being a supposedly open-ended game with lots of player choice, Grand Theft Auto 4 violated my Autonomy repeatedly. It introduced a compelling character interested in changing his life, and I bought into the premise. But then the game forced me to murder hundreds of people for threadbare reasons. Sure I could run around and shoot pigeons if I felt like it, but when it came to anything important I was straitjacketed into highly scripted and linear missions. This is a very real problem, as it has recently popped up in other games (Uncharted 2 left me cold for the same reason) that attempt to mix real character motivations with slaughterhouse gameplay contexts.

Games that focus on satisfying player Autonomy can create drastically variable responses in different players. Let’s take a game like Alpha Protocol, which by all accounts is quite bad at satisfying the Competence need (the action is pretty bad) but like every Obsidian game tries to really embrace player Autonomy. For a reviewer like Scott Sharkey of 1UP, the game obviously satisfied his Autonomy in compelling ways, while for a reviewer like Jim Sterling of Destructoid it completely missed the mark. Recent games like Deadly Premonition and Nier share identical review score profiles for arguably similar reasons. If you want a universally well reviewed game, you’re going to have to work overtime to craft the expectations of players (and reviewers) to match with the choices the game provides.

Back to Jumping, I think there’s one thing every game designer needs to learn about Autonomy: If some reasonably large percentage of your audience keeps trying to do something and is frustrated when they can’t, you either need to let them do it or change your presentation so they stop trying. For instance Gears of War does a great job of setting the expectations properly (the physicality of the characters and terrain flatness make it so you never want to jump), but if your game looks and controls like a PC MMO your audience is going to need to jump. Yes, this will mean a reduction in the autonomy of the designer, but hopefully we can learn to deal with that.

Posted in Game Design, Game Development | 2 Comments »

Game Development vs. the Stock Market

Posted by Ben Zeigler on April 11, 2010

I read a pretty businessy article at Edge Online today, about the fact that there are no longer any publicly held game development studios in England, and how there are virtually none worldwide. The article talks a lot about why this is, but I think the best answer is given in the last paragraph, as a quote from the CEO of Climax: “I am very glad we didn’t list Climax now. There is an inherent tension between the short-term view of capital markets and the long-term nature of game development”. If you look around at the most successful game development studios in the industry you’ll see one of two patterns: completely private ownership like Valve, or elite studios inside very large public publishers that are mostly shielded from quarter-to-quarter financial issues (Blizzard appears to still be immune).

The current Activision vs. Infinity Ward debacle is a superb example of the direct conflict between short term and long term thinking. The heads of Infinity Ward, at least as far as I can tell from the rumor mill, really wanted to work on a new IP, which would serve an important long term goal by spawning a new franchise and keeping employee morale high. Activision corporate can’t deal with that kind of thinking, because they have to worry about what’s happening in the short term. Activision is looking about 2 years out (because that’s really the minimum dev cycle possible), because when you’re a publicly held company without incredibly strong leadership and market position that’s the furthest ahead you can look before you get fired by your board of directors or large shareholders.

Game industry stocks are basically treated like Tech industry stocks, and what are the shareholders looking for? They want Growth, and that’s all they care about. They need the revenue numbers to keep getting bigger and bigger, because that means the company is more valuable, and they can resell with a profit. Dividends, which work to encourage long-term holding of stocks, are largely absent in the tech sector because of a lack of profits, and investors no longer look for them. For most of the people who would buy the public stock of a game publisher, any actions the management team takes that take even 5 years to bear fruit are 100% worthless.

This is why I wouldn’t feel too secure if I was John Riccitiello right now. He’s tried to renovate the image and developer prestige of EA, and these actions will bring great fruits in the years to come. I feel like he’s done a good job of turning around EA as a long-term going concern. But, that won’t be enough for him because EA’s revenue is down a bit over the last year.

A public corporation is explicitly chartered to do what’s good for its stockholders. But, what’s good for the stockholder is NOT good for the company, the employees, or the industry as a whole. Google can get away with largely ignoring the stock analysts, but not the smaller publishers or independent publicly owned developers. They HAVE to live quarter to quarter because that’s what their fickle owners demand. And working to maximize your revenue for the next fiscal year is simply not what game development is about. If you want to know why a certain company doesn’t seem to really care about its reputation or long term prospects, it’s because the very structure of the corporate funding model doesn’t let them, and they’re not talented or willing enough to resist.

Posted in Game Development | 4 Comments »

GDC 2010: How to Honestly Lie To Your Players

Posted by Ben Zeigler on March 25, 2010

Looking back at this year’s GDC there was a single thread running through most of the sessions I attended: Deceiving the player. Both Sid Meier and Rob Pardo explicitly told us to lie to players about how we calculate random chance, because of the way human psychology interprets probabilities. Chris Zimmerman laid out in detail how to lie to the player about what their hands did. Chris Tector explained how to perform a deeply technical form of lying to build the illusion of a continuous world from streamed chunks.
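Neither of those sessions is quoted in enough detail here to reconstruct the exact formulas, but one widely discussed version of lying about random chance is to quietly boost the real success probability after a run of misses, so the losing streaks players read as “broken dice” almost never happen. A minimal sketch of that idea (the function name and boost factor are my own, not Sid’s or Rob’s):

```python
import random

def biased_roll(displayed_chance, recent_misses, boost_per_miss=0.1):
    """Return True on a successful roll. The actual chance drifts above
    the displayed chance as consecutive misses pile up, so long unlucky
    streaks (which players perceive as unfair) become very rare."""
    actual = min(1.0, displayed_chance + recent_misses * boost_per_miss)
    return random.random() < actual
```

The caller would reset the miss counter on each success; the player only ever sees `displayed_chance`.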

Sid Meier talked about the “Unholy Alliance” between designer and player, and Ernest Adams talked about the “Tao of Game Design”. On reflection the concepts have much in common: they are about the collaboration between player and game designer to craft a shared experience. From both the player and the designer, a unique mix of deception and trust is required: Suspension of Disbelief.

It can be interesting to compare gaming to another form of popular entertainment: Professional Wrestling. Back in the dark days of carny scams, Pro Wrestling was presented as real with the explicit goal of bilking the consumer. Over the last 30 years or so the deception inherent to Pro Wrestling has shifted: The vast majority of fans are completely aware that it’s all fake and planned, but they don’t care. They are completely willing to suspend their disbelief, and in return become part of the show. The experiences are real even though they’re based on a foundation of deception, and that’s at the core of gaming as well.

So lying to your players with the goal of building a collaborative experience is key to the power of the medium, but where can it go wrong? Ernest clearly talks through the different design philosophies of different types of games and argues that designers should effectively be more truthful the more a game moves away from a conventional Player vs. Environment game. Jaime Griesemer talks about ignoring the literal feedback of players, but never proposes lying to them in a competitive PvP environment. Eskil Steenberg’s whole talk was about moving Procedural Generation away from its long history of deception and towards the front of a game’s design.

Finally, much of GDC was talking about a completely different form of player deception. Soren Johnson’s great blog post lays it all out: game designers are increasingly being asked to lie to players about the very process of playing a game. With a flat fee, subscription, or large chunk DLC model the goal of a designer is honest and out in the open: they want to make the player happy so they will recommend the products to their friends and buy future related products. The goal of a designer in a microtransaction-based game is instead to exploit the second-to-second emotional weaknesses of their players to sell as many individual bits as possible. But, you can’t tell players this so they brand the games as “free to play” or “social”. This deception (and others such as DRM) doesn’t have anything to do with improving the player’s experience, it is simply about maximizing short term profit at the possible expense of long term credibility.

Update: Just as I posted this originally Soren’s twitter pointed me to a great post by Frank Lantz on essentially the same topic. He takes a bit of an opposing view in holding that the kind of deception advocated by Sid and Rob can be destructive because it stops players from learning about actual truths in the world. Give it a read.

Posted in Game Design, GDC 2010 | Comments Off

GDC 2010: Reading the Player’s Mind Through His Thumbs: Inferring Player Intent Through Controller Input

Posted by Ben Zeigler on March 21, 2010

Okay, time for my last set of session notes. Here’s what happened at Reading the Player’s Mind Through His Thumbs: Inferring Player Intent Through Controller Input presented by Chris Zimmerman from Sucker Punch. The main topic of the session was the controls of Infamous (sorry, I refuse to use the stupid capitalization), which I quite enjoyed as a game. It presented a few pretty innovative ideas for interpreting player input, which I’m currently trying to figure out how to apply to various projects I’m working on. Stuff I heard:

Controls Immersion

  • For Infamous one of the goals was to push immersion as far as possible. The objective was to make you feel like you WERE Cole, instead of just controlling Cole. The player should see and hear what Cole sees and hears, and do what Cole does.
  • However, players expect a certain amount of abstraction in their control interfaces. No player wants an accurate simulation of what it’s like to climb a rope, which is actually kind of hard. Direct control just won’t work, both because of the complication of the world and the fact that players are somewhat bad at using controllers. Players want to express their wishes, and feel challenged but also successful.
  • Chris performed an experiment asking players to point a PS3 analog stick at objects in a 3D scene. Players were fairly inaccurate, and formed a bell curve. Around 70% were within 15 degrees of the target, but not accurate beyond that. A second experiment asked them to push a button when a ball bounced. Players were within 50 ms most of the time, but not any more accurate than that (plus any A/V lag, which is omnipresent).
  • Challenges for Infamous were large. Active and small enemies, inexact controllers. Featured jumping and climbing, and they had the design goal of everything that looked climbable had to be climbable. Turns out the city has a lot of things that look climbable.
  • They considered going with a solution where you would lean on a joystick to climb, like in Assassin’s Creed 2. But, they decided that didn’t feel skillful, and would be a problem given the density of the city. So, the goal was to require the player to execute correctly, but to liberally provide aid in the background.

Aiming

  • For aiming, the goal was to make it so the game ALWAYS shot exactly where you were aiming the reticle. So, the solution was to help the player move the reticle, rather than leaving the reticle fully manual and correcting shots after firing. The game adjusts the “input” based on both the player’s real input and what the game thinks the player is trying to do. The movements have to be physically possible on the controller; it just helps the player out.
  • It takes the controller input and looks for targets that are near the direction the reticle is moving. It will adjust the reticle motion to be directly towards that object if it’s a good enough fit. It also slows down the reticle movement as you are close to the target, to give more time to fire. It also keeps track of where it was and where it’s going as well as button presses, so if the press happens within a certain time frame of passing over the target it counts as on target.
  • If there are no inferred targets the reticle is 100% player controlled. It excludes targets that are too far away, and all targets are treated as a line (because they have height) instead of a point. This means that if you’re heading towards a characters feet you’ll hit their feet with your attack, instead of magically jumping to their midsection.
  • For evaluating targets, lexicographic scoring is used. It rates the targets on several dimensions in priority order, and only uses lower dimensions if the top ones are equal. For shooting, this means that each target has two parts of the “egg” that move out from them. All valid centers of any egg rate higher than any outer sections of the egg, but it will use the closest section out of the two groups.
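The lexicographic trick is easy to express in code: Python tuples already compare element by element, so “any inner-egg hit beats any outer-egg hit, and ties break by angular distance” falls out of ordinary tuple comparison. This is my own reconstruction with made-up angular widths, not Sucker Punch’s actual code:

```python
def score_target(angle_off_axis, core_halfwidth=5.0, outer_halfwidth=15.0):
    """Return a lexicographic score tuple for a candidate target, or None
    if the reticle direction misses the target's 'egg' entirely. Zone 0
    (inner egg) always beats zone 1 (outer egg); within a zone, a smaller
    angular distance wins. The widths are illustrative, in degrees."""
    off = abs(angle_off_axis)
    if off <= core_halfwidth:
        return (0, off)
    if off <= outer_halfwidth:
        return (1, off)
    return None  # outside both zones: not a candidate at all

def pick_target(candidates):
    """candidates: list of (name, angle_off_axis) pairs.
    Returns the best target name, or None if nothing qualifies."""
    scored = [(score_target(angle), name) for name, angle in candidates]
    scored = [(s, name) for s, name in scored if s is not None]
    return min(scored)[1] if scored else None
```

Note how a target 4 degrees off axis in the inner zone beats one 6 degrees off in the outer zone, even though both are “close” — that is the priority-order behavior the talk described.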

Jumping and Climbing

  • For jumping the goal was the same as targeting. The game allows air steering while in midair, so the game will only adjust your controller input to match a physically possible controller input. But, jumping is way more complicated because of the larger number of possible targets, the commonality of linear targets such as ledges, and various animation affordance issues. Failing to correctly predict while jumping is worse than when aiming.
  • The basic algorithm looked ahead about 0.75 seconds. There were a set of “illegal” filters it would check for all possible locations, and one preference score. It would first apply exclusion filters, in performance order where it would do the cheap tests first. After passing all filters it would heuristically score the remaining targets.
  • To score the targets it uses a golden section 1 dimensional optimizer, which was picked for execution speed. Scoring function was based on trajectory relative to final position. To modify the player input it computes the player’s time to land, and the relationship of the desired target and the current trajectory. It adjusts the player’s stick input to match the desired input. (Note: I missed a bit of the specific math, but there are a variety of ways to rate targets heuristically)
  • Ground landings have to be compared against features. You can’t just trace the feet because they will hit a wall before the ground in many situations (such as jumping up). It instead traces a set of polygons between the head and the feet, and clips them against geo. It stops the search if the polygons hit a segment the player can stand on.
  • The original design didn’t call for wall jumps, but they added them when players ended up hitting walls. When a player hits a wall they can “move” a certain amount or jump off, which ended up feeling more natural. To score wall collisions it traces a parabola at the knees.
  • They ended up doing some modification to running as well, to walk around small objects in the world. Super heroes don’t run into parking meters, why should the player?
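The “golden section 1 dimensional optimizer” mentioned above is a standard derivative-free search over a unimodal function, which fits scoring positions along a linear target like a ledge. Here is a generic sketch (the score function below is a stand-in; the real one rated trajectory relative to final position):

```python
import math

INV_PHI = (math.sqrt(5) - 1) / 2  # ~0.618, the inverse golden ratio

def golden_section_max(f, lo, hi, tol=1e-5):
    """Find the maximizer of a unimodal score function f on [lo, hi]
    using golden-section search: each step shrinks the bracket by the
    golden ratio and reuses one interior evaluation, so it needs only
    one new call to f per iteration."""
    a, b = lo, hi
    c, d = b - INV_PHI * (b - a), a + INV_PHI * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc > fd:           # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - INV_PHI * (b - a)
            fc = f(c)
        else:                 # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + INV_PHI * (b - a)
            fd = f(d)
    return (a + b) / 2

# Hypothetical usage: find the best landing parameter along a ledge,
# where 0.0 and 1.0 are the ledge ends and the score peaks mid-ledge.
best_t = golden_section_max(lambda t: -(t - 0.3) ** 2, 0.0, 1.0)
```

The appeal for a game running at 30-60 FPS is exactly what the notes say: execution speed, with a small, predictable number of score evaluations per candidate.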

Conclusion

  • Why does this sound so complicated? At first they tried stealing from Uncharted, which computes the jump at take off. But, there were too many objects. Then they tried stealing from their own Sly Cooper games, which had a “grab” button that, when activated, automatically air steers the player towards the nearest surface.
  • But, the realistic graphics of Infamous brought with them increased expectations. Players were not willing to accept the floatiness of the Sly Cooper solution, which worked just fine in a cartoony world. So, they had to come up with something that matched the world better.

Posted in GDC 2010 | 2 Comments »

GDC 2010: Procedural, There is Nothing Random About it

Posted by Ben Zeigler on March 19, 2010

It’s starting to wind down, but here’s some more notes! These are from the session Procedural, There is Nothing Random About it by Eskil Steenberg. Eskil is working on the indie MMO Love, which is going to go live very shortly, and his talk comes from the perspective of integrating many procedural techniques into his work. It worked well both as an overview of the concept and as an explanation of specific techniques. This talk had a bunch of valuable visual aids (he opened the game live at several points) so these notes are not as useful as a video would be. Sorry. Anyway:

History of Procedural

  • Procedural content generation started as a purely practical pursuit, because many old systems were severely lacking in memory. Games like Rescue on Fractalus and Populous (Eskil said the high selling “Mission Disk” addon was purely a list of random seeds) generated their procedural data in engine, but that solution is fairly pointless in today’s world.  We now have tons of memory and storage space.
  • The next type of procedural content generation is offline generation. One early attempt at this was the Massive crowd simulation tech created for Lord of the Rings. It’s also been used in a variety of modern games such as Far Cry 2 or Eve. This technique is valuable because an imperfect procedural tool can be fixed up in post production to iron out the kinks. This is a valuable way to save time.
  • Last year Eskil told everyone to fire their designers, this year he’s telling everyone to fire their artists. The way you make a good game is to make a bad game and fix it, so you need as fast an iteration as possible. This means you need a super fast art pipeline, and procedural tools are a huge help for this.
  • Ken Levine has said that filmmakers get to make movies while game developers get stuck having to make the camera first. “Ken, I love you but you’re wrong”. Many of the most artistically interesting films have been made by filmmakers who DID make their own camera. Technology is not purely a means to an artistic end, but can in fact inspire new and interesting artistic expressions.
  • Eskil demoed his modeling tool. He showed how it allows artists to make fragments and then use “deploy” to recursively place those objects over any mesh or surface. It’s an example of how you can set it up so artists get to art direct, instead of just make tons of individual custom pieces.
  • In today’s game industry, Art is what is stifling innovation. Design, tech, and innovation are held back by art constraints. Destructible environments are easy, but the high visual requirements mean we can’t do them. “Chris Hecker, I love you but you’re wrong”, there are still interesting tech issues to solve.

Procedural Generation Back In Engine

  • The solution to the issues with the stifling art pipeline is to put procedural generation back into the engine. Ragdoll may not look as good as hand animation, but it reflects the player actions in a stronger way. This feedback and responsiveness is what is missing.
  • How would you procedurally build a labyrinth? You start with a block, carve out a solution, and then add embellishments once you’re sure it works. The traditional way to make a locked house is to make exactly one door that can be opened by exactly one key.  The emphasis is on logically correct structures.
  • But, how about we take a statistical solution? Perhaps we make a house that can be opened in any number of ways. You find a key, and then maybe you find the house. Life is lots of keys and lots of doors, and can be about improvising. Why can’t games be about this kind of improvisation?
  • If Eskil were an assassin, he could pickpocket the entire room and gain hundreds of possibilities. Games can be like that. Instead of enforcing logical consistency, we can build a house with 5 doors, and randomly placed keys. It will be statistically consistent because the odds are functionally 0 to have all 5 keys end up in the house.
  • To build interesting statistically consistent systems you need to take advantage of spatial dependencies. Applying a series of what are basically image filters can handle these relationships. Stochastic sampling is a good place to start.
  • Disney said to Pixar that Pixar would fail because computers can’t understand emotions/wants of consumers. But, the designers of said computers can. If a rule can be taught to a designer it can be taught to a computer.
  • As an example, Eskil had an algorithm to place bridges in his world. At first it made way too many bridges, so he kept refining the algorithm. Instead of just reducing the frequency he made the requirements more strict until he arrived at the best bridge he could think of. The bridges made by his algorithm were more interesting and logical than ones he would have hand placed, because the computer didn’t come into it with any biases.
  • Love is basically complicated systems of hierarchical filters, that can construct objects of any type, such as buildings cliffs etc. The world is a grid, but subsections of the grid are replaced by custom artist assets as appropriate, so the world ends up not looking like a grid.
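The “statistically consistent” argument about keys and doors is easy to make concrete. If keys are scattered uniformly and independently over N candidate locations, the chance that all k of them happen to land in the same location is N·(1/N)^k, which collapses toward zero very fast. The function name and the numbers below are mine, not Eskil’s:

```python
def prob_all_keys_in_one_place(num_locations, num_keys):
    """Probability that keys scattered uniformly and independently over
    num_locations all end up in the same location: there are
    num_locations ways for that to happen, each with probability
    (1/num_locations) ** num_keys."""
    return num_locations * (1.0 / num_locations) ** num_keys

# With 1000 candidate locations and 5 keys, the odds that every key
# ends up in the same place are about one in a trillion.
p = prob_all_keys_in_one_place(1000, 5)
```

So the designer never has to enforce “exactly one key opens exactly one door” — with enough doors and keys, the degenerate layouts are statistically impossible rather than logically forbidden.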

Conclusion

  • Last year, Eskil felt alone. He didn’t share any of the problems of the rest of the industry. The PC is dead, except for steam (but that doesn’t count). Free to play MMOs are all that matter, except for WoW (that doesn’t count). Eskil doesn’t want to count: that’s when you succeed.
  • Finally, Eskil wants us to all go out and explore. He wants us to say next year “Eskil, I love you but you’re wrong”.

Posted in GDC 2010 | Tagged: , , , , | 3 Comments »

GDC 2010: Streaming Massive Environments from 0 to 200 MPH

Posted by Ben Zeigler on March 17, 2010

Here are my notes for the talk Streaming Massive Environments from 0 to 200 MPH, presented by Chris Tector from Turn 10 Studios. He’s listed as a Software Architect there, and obviously has a deep understanding of the streaming system they used on Forza 3. This talk was nice and deep technically, and touches all parts of the spectrum. I did get a bit lost when it got down to the deep GPU tricks, so I may have missed a bit. Anyway, here are the things I think were probably said:

Overview

  • The requirements are to render the game at a constant 60 FPS, which includes tracks, cars, the UI, crowds, particles, etc. Lots of things to do, so very little CPU available at runtime for streaming.
  • It has to look good at 0 mph, because that’s where marketing takes screenshots, and where the photo mode is enabled. It also has to look good at 200 mph, because that’s how people play the game.
  • As an example, the Le Mans track is 8.4 miles long, with 6k models and 3k textures. Loading the entire track would take 200% of the console’s memory JUST for models and textures.
  • Much information can be gathered from the academic area of “Massive Model Visualization”. However, beware that academic “real time” does not equal game real time, because of all the other things a game has to do.

The Pipeline

  • First, the tracks are stored on disk in modified .zip files, using the LZX data format. Tracks take from 90MB to 300MB of space compressed. This data is read off disk in cache-sized blocks. The only actual I/O that is performed is done strictly in order, to avoid the horrible seek times of a DVD.
  • The next stage is the in-memory compressed data cache. The track data is stored here in the same format as on disk. Forza 3 uses 56MB for this cache, with a simple Least Recently Used algorithm to push blocks out. Each block is 1MB large.
  • The next stage is a decompressed heap in memory. There’s a 360-specific LZX decompressor that runs at 20 MB/sec. They had to optimize the heap heavily to get really fast alloc and free operations. Forza 3 uses 194 MB for this heap and allocates everything aligned and contiguous.
  • The next stage is the GPU/CPU caching layer. They do something semi-tricky for textures. Textures can be present in either Mip 0 (full res), Mip chain (Mip 1 down to 32×32), or Small Texture (a single 32×32 texture) form. There is special 360 support to allow the Mip chain to be split up across different memory locations, so they can stream the Mip 0 in after the rest of the chain and it will display correctly.
  • A few special things happen in the GPU/CPU itself. First, there is NO runtime LOD calculation, as the streaming data gives the correct LOD to show, and the LODs are separate objects in the stream. They did add a basic instancing system to allow a single shader variable. They spent a lot of time optimizing the GPU/CPU for the 360. He mentioned using Command Buffers as much as possible. They spent time right-sizing assets to fit optimal shader use, and the 360 has special controls to reduce MIP memory access. (Note: This got a bit too deep for me)
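
The compressed block cache in the second stage is essentially an LRU cache over fixed-size blocks. Here’s a minimal Python sketch of that idea (class and function names are mine, not Turn 10’s, and the real system obviously operates on raw memory rather than Python objects):

```python
from collections import OrderedDict

BLOCK_SIZE = 1 << 20            # 1 MB blocks, as described in the talk
CACHE_BUDGET = 56 * (1 << 20)   # Forza 3's 56 MB compressed cache

class CompressedBlockCache:
    """LRU cache of fixed-size compressed track blocks. Sits between the
    in-order disk reader and the decompressor."""

    def __init__(self, budget=CACHE_BUDGET):
        self.capacity = budget // BLOCK_SIZE
        self.blocks = OrderedDict()  # block_id -> compressed bytes

    def get(self, block_id, read_from_disk):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # mark as recently used
            return self.blocks[block_id]
        data = read_from_disk(block_id)         # miss: pull from disk
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data
```

Because every block is the same size, eviction is a trivial pop rather than a best-fit search, which is presumably part of why the cache layer stays cheap at runtime.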

Computing Visibility

  • Many projects use conservative occlusion to determine visibility, often because it can run in real time. However, Forza does per-pixel occlusion in an extensive pre-process step, using depth buffer rejection to figure out what’s occluded. It also does the LoD calculation at this point, and will exclude any objects that aren’t big enough to be visible (contribution rejection). Many games do LoD and contribution rejection at runtime, but the data set is huge so they end up with horrible cache performance. (Note: I asked later, and this whole process takes up to 8 hours offline, for a very large track)
  • First step in the process is to Sample the visibility information. The tracks have an inner and outer spline that defines the “Active” area, so the sampler picks a set of points inside those splines (and maps them to a grid relative to the center spline). At this point it creates “zones” which are chunks of track.
  • To actually sample at each point, it uses a constant height and 4 angled views, relative to track direction. Visibility results for each point are also applied to the adjacent points, because an object may have come into view at some undefined spot between two sample points.
  • The engine then renders all of the models that are plausible (without textures). It then runs a D3D occlusion query to see what is visible and how much. Each model keeps track of its object ID, camera location, and pixels visible. The LoD calculation happens at this point, as it uses the distance info. It can do LoD, Occlusion, and Contribution in a single pass, after 2 renders, so it’s a fairly quick individual operation. It then keeps track of the pixel count of each object in a zone, as opposed to just a binary yes/no for visibility.
  • After sampling, a Splitting process takes place. Many of the artist-placed objects are extremely large in their source data, to avoid seams and such. So it will break these large objects up and cluster smaller objects together into single draw calls. Instancing breaks the clustering, so artists have to be careful.
  • The next step is the Building process. At this point it maps the textures on to the models. There’s a pass that aggressively removes duplicate textures. It looks for renamed textures, exact copies, and MIP parent/child relationships and will combine them as necessary. It also computes the 32×32 “small textures” at this point. The small textures for an entire track are put into a separate chunk and are preloaded for the entire track. This chunk is from 20-60 MB depending on track and is the only track data that is preloaded. This is so when the low LoD for an object is up and running, it will at least be colored correctly.
  • Optimization is the next phase and is somewhat complicated. For each zone it finds the models and textures used in that zone as well as the two adjacent zones. It finds the models and then the textures, and sorts them by number of pixels visible. It then does the trivial reductions for unneeded textures: if a texture is visible at less than 32×32 pixels it is stripped entirely, and if it’s visible at less than Mip 0 resolution, Mip 0 is stripped.
  • It then does two memory reduction passes. First, it has to lower the total number of models/textures loaded in a zone to be < the decompressed heap. It removes models/textures as required, starting with the least pixels visible. After that it computes a delta of models/textures relative to the zone before and after it. The delta has to be lower than the available streaming bandwidth, so it strips for that reason.
  • Once it computes the set of assets in a chunk, it has to package them. It places them in a cache efficient order and places the objects in “first seen” order. Objects that are frequently used end up near the front of the package and will stay in memory throughout, while objects for later in the track are farther back.
  • The last step is Runtime. The runtime code is responsible for keeping track of which individual objects to create and destroy, based on the zone descriptions. It could do reference counting, but instead does a simple consolidate where it frees everything first. This reduces fragmentation.
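
The first memory reduction pass described above amounts to a greedy strip by visible pixel count. Here’s a hypothetical reconstruction in Python (the tuple layout and names are my own, and the real tool handles models and textures separately and also enforces the inter-zone streaming-bandwidth delta):

```python
def fit_zone_to_heap(assets, heap_budget):
    """Strip the least-visible assets from a zone until its resident set
    fits the decompressed heap budget.

    `assets` is a list of (name, size_bytes, pixels_visible) tuples,
    a simplified stand-in for the zone's models and textures."""
    kept = sorted(assets, key=lambda a: a[2], reverse=True)  # most visible first
    total = sum(size for _, size, _ in kept)
    while kept and total > heap_budget:
        _, size, _ = kept.pop()  # drop the asset with the fewest visible pixels
        total -= size
    return kept
```

Sorting by measured pixel contribution means the pass always sacrifices the assets the player is least likely to notice, which is only possible because the offline sampler recorded per-object pixel counts instead of a binary visible/hidden flag.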

Summary

  • The keys to the Forza system are work ordering, heap efficiency, decompression efficiency, and disk efficiency. Each level of the data pipeline is critical, and anything that can be done to improve a level is worth doing. Don’t over-specialize on a particular aspect of the pipeline.
  • The system isn’t perfect. Popping can happen either due to late arrival caused by memory/disk bandwidth not keeping up, or it can be caused by visibility errors. They did have to relax some of their visibility constraints eventually, because certain types of textures threw off the calculation. They provided artists with a manual knob that can tweak an individual object to be more visible at the expense of possibly showing up late. Finally, you have to deal with unrealistic expectations.
  • For future games, Chris had a few ideas. First, he would like to expand the system to work for non-linear environments. This would entail replacing linear zones with 3d zones, but would allow open world racing. There are probably more efficient forms of domain specific decompression that could up the decompression bandwidth. The system could do texture transcoding. It should be expanded to add another layer on top of the disk cache: network streaming (Note: Trust me when I say that’s a whole other lecture by itself)

Posted in Game Design, GDC 2010 | Tagged: , , , | 3 Comments »

 