Development Blog


Z-fighting and tile-based rendering

When you draw two things exactly on top of each other, the GPU can't decide which one draws in front, and you get an effect called z-fighting. On most GPUs z-fighting manifests as an even pattern across all the affected pixels. But on the iPhone and iPad's tile-based GPUs, each tile has its own take on the pattern. It's like a quilt of roundoff error.

I was fooling with Degrees this morning, and I saw z-fighting where an asteroid was being drawn inside the solid part of a level. Tiles are just units of work for the GPU, so they're not supposed to be visible. But here they are. You can feel out how the inputs (shaded, clipped triangles) are different across each tile, and how that can have real consequences for how things are drawn.

Z-fighting is actually much more distracting to the player on tile-based GPUs than on standard GPUs. These patterns flicker and rearrange themselves if you rotate even a little, because the roundoff error resolves differently per tile, per angle. So the quilt has a time axis. It's a hyperquilt.

Actually I think it kind of looks grungy and cool, and I could use it as part of Degrees's style for occasional effects. But it's probably a health hazard for some players, so I'll pass.

The scaffolding approach and background loading

Scaffolding

When I started building Degrees, the first thing I had to get right was the gyroscope flight control. In a flying game, motion is where the fun comes from. But you can't feel motion unless you have a point of reference, so I had to make some geometry for the level. The easy way was to scaffold it: Degrees would run some throwaway code and generate a level when it launched. That was good enough, and I could focus on flight.

After the flight mechanics felt good, I came up with Degrees's voxel art style, and I tried to implement and optimize it in order to find out just how much I could draw. This would inform the gameplay: Could the game take place outdoors? Indoors? How many enemies could I put on the screen? During that phase, I would periodically need to test some aspect of rendering that demanded a different shape of level. I would just change the scaffold code and get back to the real task. I like working this way because I can concentrate all my energy on one little detail and get it right, and that detail can be anywhere in the program.

At some point I'm going to have a level editor, and that means Degrees needs to load levels off the disk. I could have built the editor, so I could save levels, so I could load levels. But the level editor wasn't the thing to get right at the moment. I have a design goal that the game should never pause while it loads a level: as in SSX 3 and Metroid Prime, you should be able to move from level to level with no interruptions in the animation. So the level-loading code had to come before the editor, because the performance of background loading affects the way a level ought to be built in the first place. I just adjusted the scaffold: instead of running the level-generating code at launch, Degrees runs it as a separate executable before each build. It generates the same levels as before, but now it saves them as files, which meant I could write the code that loads levels too.
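
To make that concrete, a build-time generator can be as dumb as this toy version. The file format and names here are invented for illustration, not Degrees's real ones; the game-side loader just reads the same layout back.

    /* generate_levels.c — toy build-time level generator. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        FILE *out = fopen("level0.bin", "wb");
        if (!out) return 1;

        /* One triangle stands in for a whole generated level. */
        float vertices[] = { 0.0f, 0.0f, 0.0f,
                             1.0f, 0.0f, 0.0f,
                             0.0f, 1.0f, 0.0f };
        uint32_t count = 3;

        fwrite(&count, sizeof count, 1, out);
        fwrite(vertices, sizeof vertices, 1, out);
        fclose(out);
        return 0;
    }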

I love the scaffolding approach because you can see progress every day. You don't have to write for a week before your code works. It always works! And you can do quick experiments without getting over-invested.

Background loading

Following the scaffolding approach, at first I just wrote the loading code to run on the main thread. That helped me get saving and loading right, but the final step was to move it to a background thread so it wouldn't interrupt animation. I did that today, and I learned that I've got an upper limit on the order of 10,000 vertices per file if I really want it not to pause. I think I will end up with lots of tiny, interconnected zones in separate files. Metroid Prime loads one room at a time, and that worked well. Maybe I'll have the same kind of structure. So, I discovered a design goal for the editor today.

I was pretty happy with the design pattern I came up with for background loading, and I think I'll be using it elsewhere. Here's the shape of it (the Level helpers below are stand-ins for the real thing):
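
    /* Parse the file off the main thread, then hop back to the main
       thread to hand the result to the renderer, so GL state is only
       ever touched from one thread. */
    #include <dispatch/dispatch.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct Level Level;
    Level *LevelCreateFromFile(const char *path);  /* no GL calls inside */
    void GameInstallLevel(Level *level);           /* main thread only   */

    void LoadLevelInBackground(const char *path) {
        char *pathCopy = strdup(path);  /* the block outlives the caller */
        dispatch_async(dispatch_get_global_queue(
                           DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            Level *level = LevelCreateFromFile(pathCopy);
            free(pathCopy);
            dispatch_async(dispatch_get_main_queue(), ^{
                GameInstallLevel(level);
            });
        });
    }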

Things learned at ECGC 2012, part one

This week I attended the East Coast Game Conference, and you better believe I was taking notes. In this post I'll pick out the things I learned that directly affect the course of Degrees's development. I'm surprised by how much there is; this will actually be a series of two posts.

A crash course in indie game audio — Ben Crews and Thomas Dahlberg, Digital Roar

  • Sound design and game design inform one another, so you should do them at the same time.
  • Especially when you work on a team, make your ideas concrete as you work. Get sounds into the game and pair them up with visuals.

In other words, I should start doing sound design in parallel with the rest of the game. You can't leave it for the end of development if you want sound to substantially improve the game experience. Most of my favorite games have amazing sound, and it's no accident. Consider Half-Life.

Zynga keynote — Paul Stephanouk

"Games with community have long tails and change lives," he said. A social game is not just a multiplayer game. For many this means visiting your mom's farm. For me it means Team Fortress clans and the mod community. Also, "user-generated content" should contribute unique gameplay, so that by connecting with a person you gain a unique experience.

Well, I'm trying to bootstrap a community via this blog, but maybe when the game is done I can tidy up the level editor and publish it? Maybe Degrees will have a corner for user-created levels, and I'll make a little web service to host them? I'm sure I can do better than Game Center achievements.

Paul also says that free games need to be more free (as in unconstrained). I'm thinking about in-app purchases in Degrees, but I don't want a player to feel like a second-class citizen until they buy the extras. Remember shareware episodes? As a player I was really happy with that model.

Information, conveyance, and surprisal: How game mechanics communicate strategic choice to players — Christopher Hazard, Hazardous Software

There is such a thing as information theory, and it has direct applications in game design. My understanding is that if you can handle the math, information theory can quantify some things that feel very qualitative. For instance, how difficult is this level? How ambiguous is this interface? Will the player have an easier time understanding that they need to jump over this thing if I color obstacles red, or is that a waste? (It's not a waste.)
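
The one definition that stuck with me: the surprisal of an outcome with probability p is −log₂(p) bits, so a fair coin flip carries one bit of information and a one-in-eight event carries three. Entropy is just the average surprisal over all outcomes. (That part is my own reading-up afterward, not the talk's slides.)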

I didn't have the baseline knowledge to understand the first several minutes of the talk, so my notes are short and sad. But I talked to someone about a video he was shooting. Maybe I'll be able to share a link to that later.

As I move out of exploring technical limits and move into game design, this may help me make decisions with more confidence. Maybe I'll be able to show you the game's learning curve by computing and graphing it. Maybe I can have the level editor grade your level's difficulty, and then when you look through user-created levels, I can rank them and suggest some for your skill level.

Breaking in and making it to AAA — Tim Johnson, Epic

A great place to show off game art and get feedback is Polycount (and its forum). Although I've designed away the need for most texturing and 3D modeling, the look of Degrees is in flux and I could use some professional opinions.

In a portfolio, bad work hides good work. Only show your best. I won't be showing finished products here, but maybe I should have a certain standard.

It's true that the gaming industry is competitive, but I define "making it" as building the thing you want to build and getting paid enough to continue. I'm not necessarily interested in working on a big team on a high-profile product. If it's the right product, though…

Bringing AAA graphics to mobile platforms — Niklas Smedberg, “Smedis”, Epic

Mobile GPUs are different. The iPhone, iPad, and most others use tile-based deferred rendering. This is not some implementation detail. It fundamentally changes the graphics pipeline, which fundamentally changes how you design your method of drawing.

You want to make as few GL draw calls as possible, because they are very, very slow. In particular, since iOS processes are sandboxed, your app can't write to GPU memory, so the OS will do a memcpy of all your vertex data. This isn't great news for the geometry I'm creating dynamically from voxels, because I'm submitting new vertices every frame. But it does suggest that one of my optimizations was not so smart: Instead of doing lighting calculations per-vertex, I'm doing them once per cardinal direction, submitting the result in a uniform, and splitting my objects into six draw calls each (facing -X, facing +Z, etc). This means I'm actually creating six times as many vertex buffers, and the OS is doing six times as many memcpys. Maybe I can save a lot of time if I just let the shader unit do what it's good at.
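
If I make that change, the face direction goes back into the vertex data and the vertex shader does the lighting, so all six directions fit in one draw call. A sketch of the idea (GLSL ES, names illustrative, not Degrees's actual shader):

    attribute vec3 a_position;
    attribute vec3 a_normal;            // one of the six axis directions
    uniform mat4 u_modelViewProjection;
    uniform vec3 u_lightDirection;      // normalized, pointing at the scene
    varying lowp vec4 v_color;

    void main() {
        // The same per-direction diffuse term I was precomputing on the
        // CPU, now evaluated per vertex by the shader unit.
        float diffuse = max(dot(a_normal, -u_lightDirection), 0.0);
        v_color = vec4(vec3(0.2 + 0.8 * diffuse), 1.0);
        gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
    }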

The results of the vertex shader are written out to an area of RAM called the parameter buffer. It has a fixed size and you don't want to overflow it. If you do, then the renderer will have to make multiple passes, which is much slower. For me this means I need a way to tune the number of vertices that I'm drawing in a scene. Degrees's level editor will need a way to preview that.

The fragment (pixel) shader works on a tile of the screen at a time, and each core of the GPU will fit a whole tile's worth of results in registers. This means that multisample antialiasing is nearly free, so why not do it? I'm going to try it and measure the performance hit.
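
The setup with Apple's multisample extension looks roughly like this; it's the part I'll be benchmarking (names from the ES 2.0 headers, error checking omitted):

    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    GLuint CreateMultisampleFramebuffer(GLsizei width, GLsizei height) {
        GLuint framebuffer, colorBuffer, depthBuffer;
        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

        glGenRenderbuffers(1, &colorBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4,
                                              GL_RGBA8_OES, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, colorBuffer);

        glGenRenderbuffers(1, &depthBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4,
                                              GL_DEPTH_COMPONENT16,
                                              width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depthBuffer);
        return framebuffer;
    }

    /* Each frame: draw into it, then resolve to the on-screen buffer. */
    void ResolveToScreen(GLuint sampleFramebuffer, GLuint viewFramebuffer) {
        glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
        glResolveMultisampleFramebufferAPPLE();
    }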

GPU shader units are specialized 4x4 matrix processors. The shader compiler can often arrange instructions so as to process four vertices at a time, basically by treating each vector as a matrix row. But if you have conditionals that would run different instructions for different vertices, that can't be optimized in this way, so vertices will be processed at one quarter the speed. In other words, don't use conditional branching in shaders at all. It may actually be faster to compute both results and then interpolate between them using the same conditional that you would branch on.
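
For instance, instead of an if/else per fragment, something like this (illustrative, not Degrees code):

    precision mediump float;
    varying float v_height;
    uniform float u_threshold;

    void main() {
        vec4 below = vec4(0.4, 0.35, 0.3, 1.0);
        vec4 above = vec4(0.2, 0.6, 0.2, 1.0);
        // step() yields 0.0 or 1.0, so mix() selects a result
        // without ever branching.
        gl_FragColor = mix(below, above, step(u_threshold, v_height));
    }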

This validates the approach I chose for creating multiple vertex shader effects: Rather than using a uniform to select an effect, I'm compiling multiple programs from the same source and just screening off sections of code with #ifdefs. But he mentioned something else. Your shaders don't actually get compiled until you draw something with them. So during my initial setup I need to draw a little black triangle or something for each shader. I would never have thought of that.
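
Concretely, the two tricks look something like this (hypothetical helpers; error handling and vertex attribute setup omitted):

    #include <OpenGLES/ES2/gl.h>

    /* Build one shader from shared source plus a per-variant preamble,
       e.g. CompileVariant(GL_VERTEX_SHADER, "#define RIPPLE 1\n", src). */
    GLuint CompileVariant(GLenum type, const char *defines,
                          const char *source) {
        GLuint shader = glCreateShader(type);
        const GLchar *strings[2] = { defines, source };
        glShaderSource(shader, 2, strings, NULL);
        glCompileShader(shader);
        return shader;
    }

    /* After linking each program, force the driver to really compile it
       by drawing one throwaway triangle while the program is bound. */
    void WarmUpProgram(GLuint program, GLuint dummyVertexBuffer) {
        glUseProgram(program);
        glBindBuffer(GL_ARRAY_BUFFER, dummyVertexBuffer);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }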

Texture lookups are kicked off before the fragment shader runs for the sake of speed. But dependent texture reads can't be optimized in this way, so you should not calculate texture coordinates in the fragment shader. Finalize them in the vertex shader. I've been thinking about compacting Degrees's palette textures into a texture atlas, because switching textures is a GL state change and that's expensive. If I do that I'll need to calculate texture coordinates live, and it's good to know there are right and wrong places to do it.
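
If the atlas happens, the coordinate math belongs in the vertex shader, roughly like so (illustrative):

    attribute vec3 a_position;
    attribute vec2 a_texCoord;      // 0..1 within the sub-image
    uniform mat4 u_modelViewProjection;
    uniform vec2 u_atlasOffset;     // sub-image origin within the atlas
    uniform vec2 u_atlasScale;      // sub-image size within the atlas
    varying mediump vec2 v_texCoord;

    void main() {
        // Finalized here, so the fragment shader's texture2D() call
        // reads the varying directly and stays non-dependent.
        v_texCoord = u_atlasOffset + a_texCoord * u_atlasScale;
        gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
    }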

Also, only the .xy coordinate pair is optimized for doing texture reads ahead of time. Don't try to pack another set of coordinates into .zw, because that will cause a dependent texture read and slow you down. But maybe you can pack them in your vertex data, unpack them in the vertex shader into varying vec2s, and still get good results? I wish I had thought to ask after the talk. (By the way, that's a great reason to attend the conference. You can just ask!)

Alpha testing is expensive, but alpha blending is cheap. It's better to draw the whole polygon's worth of pixels even if many of them are blank. If you do want to save pixel processing time, a better way than alpha testing is to add vertices and reshape the object to fit the texture. The voxel world in Degrees is totally opaque today, but when I do particles and the HUD this may come in handy.
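
In GL terms, that advice boils down to this (ES 2.0 has no alpha-test state; "alpha testing" there means discard in the fragment shader, which is the thing to avoid):

    // Cheap: ordinary blending, handled by the tile hardware.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // Expensive: a fragment shader containing
    //     if (color.a < 0.5) discard;
    // which defeats the GPU's early depth optimizations.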

When you render to multiple targets, such as for a bloom filter, you never want to render to one target, then another, and then back to the first one: restoring a buffer's previous contents into the GPU's tile memory is very slow. So just create a new buffer and render onto that instead. If you need the colors from the one you're abandoning, sample it as a texture. Also, clearing buffers on tile-based GPUs is nearly free, and it helps the GPU avoid those restores. So always clear every buffer: color, depth, and stencil. I'll try this and see if it has any impact, though I imagine it won't, since I'm only using one render target so far. Still, the visual style of Degrees is meant to dazzle the player, so I may want some full-screen effects down the line, and these techniques would make them much cheaper.
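
The clearing advice is nearly a one-liner at the top of the frame:

    // Clearing all three buffers tells the tile hardware it can skip
    // restoring last frame's contents into tile memory.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);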

Intermission

Attending ECGC cost me only one Benjamin and two vacation days. It helped that it was in my town, and in fact I probably wouldn't have gone otherwise. When you learn from people who really know what they're talking about, they will surprise you by teaching you things you didn't know you needed to hear. So I got much more out of ECGC than I expected to. Next year, if you're within a day's drive, just get here.

Part two is posted.

Screenshots

I wanted to write a thorough post, but I've been hustling elsewhere. Now I'm out of time, so you get screenshots.

Why the rush: The East Coast Game Conference is underway, and I've been showing the game around to lots of people who care about all the same things I do. I need them to come back for more. I need their brains! (Not like that.)

These shots are from a current dev build running on an iPhone 4. Yeah, everything is voxels! No, nothing is textured. The blue rectangles are buttons for controlling the game. The terrain is just some Perlin noise I tossed together to test ambient occlusion shadowing. The ship and the little red asteroids are rebuilt as voxel solids every frame, depending on how they're rotated, so as they turn you see interference patterns running over them from aliasing. Instead of spending GPU time fighting that, Degrees embraces aliasing as part of its visual style. I think it looks nice in the same way Atari games do.

I'll try to write again soon in more detail.

Going public

OK hi, I'm writing this game called Degrees. I've been at it for months in my spare time, and now I'm going to develop in public. The game is for you! I want you to get it and have it and love it, but first I have to finish it. So please do stick around. The best way to stick around is with RSS.

On this blog you'll see what happens day to day as I build Degrees. I'll write about the game's design, technology, influences, challenges, that kind of thing. I'll try to post a lot of screenshots and maybe some videos, and I'm really hoping to learn from your comments.

This is just a quick post to kick off the blog. For a proper introduction, check the About page. In my next post I'll show you the state of the app.
