In my last post I talked about our new GPU-based 3D procedural planets. Those are still being tweaked, tuned, and enhanced before they are ready to go live, but they aren't the only massive overhaul we have planned for the 0.3.2.00 patch. Let's talk about our new particle system.
I suppose "particle system" is something you hear just about every game developer talk about at one point or another. The reason everyone ends up making a particle system, or dealing with one, is that a particle system covers a core need that almost every game has: moving a whole lot of animated things around on the screen.
You want a fire to emit sparks and smoke? You think that engines should leave a trail? Do bullets have glowing bits that spin for a while? Generally these things end up having a lot of code in common with each other.
So in order to keep your code simple you create a class that does those things, and then you extend that class with unique code for each animation. Congratulations, you have just reinvented one of the various methods for building a particle system.
Except then Jan wants a 100% GPU-driven, state-of-the-art system capable of millions of things wiggling about on screen at the same time, in order to support silly levels of real-time visual fidelity that take full advantage of hardware and software that didn't exist when XNA was created. Go figure.
So you’ve decided you want your particle system to live entirely on the GPU. How hard could it be?
Well, actually pretty hard, since the GPU does logic upside down. I say the GPU does logic upside down because I'm used to how the CPU does logic: it comes along and figures out what time it is, then it goes through each thing, spending a moment updating each one to be in the correct spot for that time, and once it has everything organized it goes down the list drawing everything, a single particle at a time. The GPU then does its dark magic to place your particles on the screen in the shape and position that you tell it, one particle at a time, until all of the pixels are the correct color and can go to the screen.
The upside-down version, on the other hand, starts with the last step and works backwards: it starts (in some sense) with the position of the pixel. I know that isn't exactly how the rendering process works inside the GPU hardware, but when you write a pixel shader you are essentially writing a function that takes the location of the pixel you will output as the initial parameter, and you have to work backwards to get its color. The shader needs to figure out the color of that pixel by figuring out what is behind it and what that thing looks like, which involves figuring out how that thing would have been updated, which requires looking up what time it is. See how that process is exactly reversed from the CPU logic?
Doing logic in reverse order like this requires all of your data and all of your math to work in different ways, and to be perfectly frank even I have trouble wrapping my head around it. The advantage, however, is obvious: since the number of pixels is constant and the work per pixel is constant, it suddenly matters far less how many particles are on the screen. Modern GPU hardware is also capable of some pretty silly amounts of concurrent processing.
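To make that reversal concrete, here is a minimal sketch in C++ (the names and formulas are mine, not the game's actual shader code, which would live in HLSL): instead of a CPU-style loop that mutates each particle every frame, each particle's state is a pure function of the current time and the parameters it was spawned with.

```cpp
#include <cassert>
#include <cmath>

// Everything a particle will ever need is fixed at spawn time.
struct SpawnParams {
    float spawnTime;    // when the particle was emitted
    float x0, y0;       // where it was emitted
    float vx, vy;       // initial velocity
    float lifetime;     // seconds until it disappears
};

struct ParticleState {
    float x, y;
    float alpha;        // fades out over the particle's life
    bool  alive;
};

// No stored per-frame state: the current state is derived entirely
// from (params, now), so it can be evaluated for any particle, at any
// time, in any order -- which is exactly what the GPU wants.
ParticleState Evaluate(const SpawnParams& p, float now) {
    float age = now - p.spawnTime;
    ParticleState s{};
    s.alive = (age >= 0.0f && age < p.lifetime);
    s.x = p.x0 + p.vx * age;
    s.y = p.y0 + p.vy * age - 0.5f * 9.8f * age * age; // toy gravity
    s.alpha = s.alive ? 1.0f - age / p.lifetime : 0.0f;
    return s;
}
```

Because `Evaluate` depends only on its inputs, thousands of GPU threads can run it simultaneously with no shared mutable state to fight over.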
The results are spectacular.
Much like with our new 3D planets, this system was Jan's creation, and I was mostly employed to fit it into our existing rendering framework. More and more I find that Jan and I work well together by looking at everything from exactly opposite perspectives. In this case I'm handling the forward, right-side-up logic of the CPU while he handles the upside-down logic of the GPU, and together that produces our new GPU particle system framework.
GPU particles store each particle's unique information in vertex data, and they treat all particles of the same type as if they were a single piece of 3D geometry. That means that while in the old system 3 specks of dust were 3 draw calls, in the new system 3 types of dust speck are 3 draw calls, and the individual specks can be as numerous as points in a 3D model. We can now have hundreds of thousands of specks of dust, or glowing bits, or blobs of smoke animated in real time, and it all runs far faster than the old system while using less memory and looking better.
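A rough sketch of that batching idea, with hypothetical names of my own choosing (the engine's real vertex layout will differ): each particle becomes one vertex, and all particles of a type share one buffer, so draw calls scale with the number of particle types rather than the number of particles.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Per-particle data packed into a vertex; the GPU-side shader reads
// these fields and does all of the animation itself.
struct ParticleVertex {
    float spawnTime;
    float position[3];
    float velocity[3];
    float seed;          // per-particle randomness, sampled in the shader
};

// One batch == one vertex buffer == one draw call.
struct ParticleBatch {
    std::string type;                    // e.g. "dust", "smoke", "spark"
    std::vector<ParticleVertex> verts;   // one vertex per particle
};

// Group individual particles by type so that 3 *types* of dust speck
// cost 3 draw calls, no matter how many specks each type contains.
std::vector<ParticleBatch> BuildBatches(
        const std::vector<std::pair<std::string, ParticleVertex>>& particles) {
    std::map<std::string, ParticleBatch> byType;
    for (const auto& [type, v] : particles) {
        auto& batch = byType[type];
        batch.type = type;
        batch.verts.push_back(v);
    }
    std::vector<ParticleBatch> out;
    for (auto& [type, batch] : byType) out.push_back(std::move(batch));
    return out;
}
```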
Now we can do things like make an explosion where every speck of flaming debris and every cloud of hot gas has its own particle. We can make trails behind projectiles that are endless streams of particles instead of big ugly rectangles. More importantly, we can drastically increase the amount of stuff that flies off your ship while it is being shot into little pieces. The possibilities are endless and the benefits are numerous.
Beyond all of that, my favorite aspect of GPU particles is how they will improve our workflow. They interact with CPU code in a very friendly, straightforward sort of way, so I can keep focusing on the CPU side of the logic, on how things move around in a scene, and just call a simple function when I want particles to show up. Jan then has complete control over how those particles look and behave on the GPU side, and neither of us needs to worry much about side effects, performance, or irritating unexpected interactions from our code trying to coexist.
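The "just call a simple function" interface could look something like the following sketch (again, the type and function names are my own illustration, not the game's actual API): gameplay code only says where and what kind of particles should appear, and everything after that is the GPU's problem.

```cpp
#include <cassert>
#include <vector>

// A queued request from gameplay code: which GPU-side effect to run,
// where, and how many particles to emit.
struct SpawnRequest {
    int   effectId;
    float position[3];
    int   count;
};

class ParticleSystem {
public:
    // Called from gameplay (CPU) code whenever something should emit
    // particles; cheap, since it only records the request.
    void Emit(int effectId, float x, float y, float z, int count) {
        pending_.push_back({effectId, {x, y, z}, count});
    }

    // Called once per frame by the renderer. In the real system this
    // would write spawn data into vertex buffers and issue one draw
    // call per effect type; here we just drain the queue and report
    // how many requests were handed off.
    int Flush() {
        int uploaded = static_cast<int>(pending_.size());
        pending_.clear();
        return uploaded;
    }

private:
    std::vector<SpawnRequest> pending_;
};
```

The CPU side never touches per-particle state after `Emit`, which is exactly why the two halves of the system can evolve independently.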
In the future GPU particles will let us crank the graphical fidelity of all game mechanics up to 11. We are going to be able to make better-looking stuff with less effort and less concern over performance drawbacks. For the 0.3.2.00 update we have already included some new animations using the system, and we are planning so much more. Nearly every impact and explosion, as well as a huge list of planned future effects, could eventually take advantage of GPU particles. My favorite systems are the ones that make it easier to create cool stuff in the future, and GPU particles are the best example of that yet.