This week I attended the East Coast Game Conference, and you better believe I was taking notes. In this post I'll pick out the things I learned that directly affect the course of Degrees's development. There turned out to be so much of it that this will be a series of two posts.
A crash course in indie game audio — Ben Crews and Thomas Dahlberg, Digital Roar
- Sound design and game design inform one another, so you should do them at the same time.
- Especially when you work on a team, make your ideas concrete as you work. Get sounds into the game and pair them up with visuals.
In other words, I should start doing sound design in parallel with the rest of the game. You can't leave it for the end of development if you want sound to substantially improve the game experience. Most of my favorite games have amazing sound, and it's no accident. Consider Half-Life.
Zynga keynote — Paul Stephanouk
"Games with community have long tails and change lives," he said. A social game is not just a multiplayer game. For many this means visiting your mom's farm. For me it means Team Fortress clans and the mod community. Also, "user-generated content" should contribute unique gameplay, so that by connecting with a person you gain a unique experience.
Well, I'm trying to bootstrap a community via this blog, but maybe when the game is done I can tidy up the level editor and publish it? Maybe Degrees will have a corner for user-created levels, and I'll make a little web service to host them? I'm sure I can do better than Game Center achievements.
Paul also says that free games need to be more free (as in unconstrained). I'm thinking about in-app purchases in Degrees, but I don't want a player to feel like a second-class citizen until they buy the extras. Remember shareware episodes? As a player I was really happy with that model.
Information, conveyance, and surprisal: How game mechanics communicate strategic choice to players — Christopher Hazard, Hazardous Software
There is such a thing as information theory, and it has direct applications in game design. My understanding is that if you can handle the math, information theory can quantify some things that feel very qualitative. For instance, how difficult is this level? How ambiguous is this interface? Will the player have an easier time understanding that they need to jump over this thing if I color obstacles red, or is that a waste? (It's not a waste.)
I didn't have the baseline knowledge to understand the first several minutes of the talk, so my notes are short and sad. But I talked to someone about a video he was shooting. Maybe I'll be able to share a link to that later.
As I move out of exploring technical limits and move into game design, this may help me make decisions with more confidence. Maybe I'll be able to show you the game's learning curve by computing and graphing it. Maybe I can have the level editor grade your level's difficulty, and then when you look through user-created levels, I can rank them and suggest some for your skill level.
Breaking in and making it to AAA — Tim Johnson, Epic
A great place to show off game art and get feedback is Polycount (and its forum). Although I've designed away the need for most texturing and 3D modeling, the look of Degrees is in flux and I could use some professional opinions.
In a portfolio, bad work hides good work. Only show your best. I won't be showing finished products here, but maybe I should have a certain standard.
It's true that the gaming industry is competitive, but I define "making it" as building the thing you want to build and getting paid enough to continue. I'm not necessarily interested in working on a big team on a high-profile product. If it's the right product, though…
Bringing AAA graphics to mobile platforms — Niklas Smedberg, “Smedis”, Epic
Mobile GPUs are different. The iPhone, iPad, and most others use tile-based deferred rendering. This is not some implementation detail. It fundamentally changes the graphics pipeline, which in turn changes how you should design your method of drawing.
You want to make as few GL draw calls as possible, because they are very, very slow. In particular, since iOS processes are sandboxed, your app can't write to GPU memory, so the OS will do a memcpy of all your vertex data. This isn't great news for the geometry I'm creating dynamically from voxels, because I'm submitting new vertices every frame. But it does suggest that one of my optimizations was not so smart: instead of doing lighting calculations per vertex, I'm doing them once per cardinal direction, submitting the result in a uniform, and splitting my objects into six draw calls each (facing -X, facing +Z, and so on). That means I'm actually creating six times as many vertex buffers, and the OS is doing six times as many memcpys. Maybe I can save a lot of time if I just let the shader unit do what it's good at.
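One way that merge might look, sketched on the CPU side: compute the Lambert term for each cardinal face normal once per chunk rebuild and bake it into the vertex data, so the whole chunk becomes a single buffer and a single draw call. The face ordering and names here are my own illustration, not how Degrees actually does it:

```c
#include <assert.h>

/* Six cardinal face normals for voxel geometry.
   Assumed ordering: -X, +X, -Y, +Y, -Z, +Z. */
static const float face_normal[6][3] = {
    {-1, 0, 0}, {1, 0, 0}, {0, -1, 0}, {0, 1, 0}, {0, 0, -1}, {0, 0, 1}
};

/* Lambert diffuse term for one face: dot(normal, light direction),
   clamped so backfacing surfaces get zero. Store this per vertex when
   building the chunk mesh, instead of one uniform per direction. */
static float lambert(const float n[3], const float light_dir[3]) {
    float d = n[0] * light_dir[0] + n[1] * light_dir[1] + n[2] * light_dir[2];
    return d > 0.0f ? d : 0.0f;
}
```

The trade is one extra float of attribute data per vertex in exchange for one buffer, one memcpy, and one draw call per chunk instead of six.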
The results of the vertex shader are written out to an area of RAM called the parameter buffer. It has a fixed size and you don't want to overflow it. If you do, then the renderer will have to make multiple passes, which is much slower. For me this means I need a way to tune the number of vertices that I'm drawing in a scene. Degrees's level editor will need a way to preview that.
The fragment (pixel) shader works on a tile of the screen at a time, and each core of the GPU will fit a whole tile's worth of results in registers. This means that multisample antialiasing is nearly free, so why not do it? I'm going to try it and measure the performance hit.
GPU shader units are specialized 4x4 matrix processors. The shader compiler can often arrange instructions so as to process four vertices at a time, basically by treating each vector as a matrix row. But if you have conditionals that would run different instructions for different vertices, that can't be optimized in this way, so vertices will be processed at one quarter the speed. In other words, don't use conditional branching in shaders at all. It may actually be faster to compute both results and then interpolate between them using the same conditional that you would branch on.
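The compute-both-then-blend trick looks like this. Here it is in plain C for clarity (in GLSL the last line would typically be a mix() with a 0-or-1 mask, e.g. from step()):

```c
#include <assert.h>

/* Branch-free select: compute both candidate results, then blend with a
   0-or-1 mask instead of taking a conditional branch. This keeps all
   vertices in a batch running the same instructions. */
static float select_branchless(float a, float b, float mask) {
    /* mask is expected to be exactly 0.0 or 1.0 */
    return a * (1.0f - mask) + b * mask;
}
```

The branchy version would pick a or b with an if; this version does slightly more arithmetic but never diverges, which is exactly the trade the talk recommended.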
This validates the approach I chose for creating multiple vertex shader effects: rather than using a uniform to select an effect, I'm compiling multiple programs from the same source and just screening off sections of code with #ifdefs. But he mentioned something else. Your shaders don't actually get compiled until you draw something with them. So during my initial setup I need to draw a little black triangle or something with each shader. I would never have thought of that.
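The #ifdef approach boils down to prepending a #define to one shared shader source before each compile. A minimal sketch — the macro name GLOW_EFFECT and the buffer handling are my own invented examples:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build one shader-source variant by prepending a #define, so the same
   source file yields several compiled programs, each with a different
   section of code enabled by #ifdef. */
static void variant_source(const char *define, const char *source,
                           char *out, size_t out_size) {
    snprintf(out, out_size, "#define %s 1\n%s", define, source);
}
```

Each resulting string gets compiled as its own program; remember that none of them is truly compiled until you issue that first warm-up draw.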
Texture lookups are kicked off before the fragment shader runs for the sake of speed. But dependent texture reads can't be optimized in this way, so you should not calculate texture coordinates in the fragment shader. Finalize them in the vertex shader. I've been thinking about compacting Degrees's palette textures into a texture atlas, because switching textures is a GL state change and that's expensive. If I do that I'll need to calculate texture coordinates live, and it's good to know there are right and wrong places to do it.
Also, only the .xy coordinate pair is optimized for doing texture reads ahead of time. Don't try to pack another set of coordinates into .zw, because that will cause a dependent texture read and slow you down. But maybe you can pack them in your vertex data, unpack them in the vertex shader into varying vec2s, and still get good results? I wish I had thought to ask after the talk. (By the way, that's a great reason to attend the conference. You can just ask!)
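For the atlas idea, the coordinate math that would move into the vertex shader is small. Here's the shape of it in C, assuming a hypothetical layout of square tiles packed four per row (the layout and names are my own illustration):

```c
#include <assert.h>

#define TILES_PER_ROW 4  /* assumed atlas layout: 4x4 square tiles */

/* Map a tile index plus tile-local UV into atlas UV. Doing this in the
   vertex shader keeps the fragment shader's texture read non-dependent,
   so the hardware can still prefetch it. */
static void atlas_uv(int tile, float u, float v, float *out_u, float *out_v) {
    float scale = 1.0f / TILES_PER_ROW;
    *out_u = (float)(tile % TILES_PER_ROW + u) * scale;
    *out_v = (float)(tile / TILES_PER_ROW + v) * scale;
}
```

The fragment shader then receives the finished coordinates in a varying and does nothing but sample, which is the "right place" the talk described.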
Alpha testing is expensive, but alpha blending is cheap. It's better to draw the whole polygon's worth of pixels even if many of them are blank. If you do want to save pixel processing time, a better way than alpha testing is to add vertices and reshape the object to fit the texture. The voxel world in Degrees is totally opaque today, but when I do particles and the HUD this may come in handy.
When you render to multiple targets, such as for making a bloom filter, you never want to render to one target, then another, and then back to the first one. Restoring a buffer to GPU memory is very slow. So just create a new buffer and render onto that instead. If you need the colors from the one you're abandoning, sample it as a texture. Also, clearing buffers on tile-based GPUs is nearly free, and it helps the GPU avoid buffer restores. So always clear every buffer: color, depth, and stencil. I'll try this and see if it has any impact, but I imagine it won't, since I'm only using one render target so far. Still, the visual style of Degrees is meant to dazzle the player, so I may want to do some full-screen effects down the line. These techniques would make such effects much cheaper.
Attending ECGC cost me only one Benjamin and two vacation days. It helped that it was in my town, and in fact I probably wouldn't have gone otherwise. When you learn from people who really know what they're talking about, they will surprise you by teaching you things you didn't know you needed to hear. So I got much more out of ECGC than I expected to. Next year, if you're within a day's drive, just get here.
Part two is posted.