The Stanford Bunny is a great model for testing model loading and rendering. For those of you who don’t know, the bunny is quite famous among graphics developers due to its use in education and testing.
In this post I would like to give a short update on the graphics pipeline: how work is progressing and how it actually works.
First of all, the techniques I have used for the image above are the fundamental parts of deferred shading. I won’t go into much detail about how deferred shading works, mostly because others have described it much better than I can.
The basics are:
1. Render the geometry, without shading, to multiple render targets for color, normal and depth (the G-buffer).
2. Render light(s) as geometry, with a shader to recreate the world position of the pixels and shade them. Render these to a new target (or the back buffer) using additive blending.
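The two steps above can be sketched on the CPU in plain Python. This is purely illustrative: names like `geometry_pass` and `lighting_pass` are hypothetical and stand in for GPU passes, not the engine's actual API.

```python
# Minimal CPU-side sketch of the two deferred passes. The G-buffer is
# modeled as a dict from pixel coordinate to (albedo, normal, depth).
gbuffer = {}

def geometry_pass(fragments):
    # Step 1: write albedo, normal and depth without any shading.
    for px, albedo, normal, depth in fragments:
        # Standard depth test: keep the fragment closest to the camera.
        if px not in gbuffer or depth < gbuffer[px][2]:
            gbuffer[px] = (albedo, normal, depth)

def lighting_pass(lights):
    # Step 2: each light reads the G-buffer; its contribution is
    # accumulated with additive blending into the output target.
    out = {px: (0.0, 0.0, 0.0) for px in gbuffer}
    for light_dir, light_color, intensity in lights:
        for px, (albedo, normal, _depth) in gbuffer.items():
            # Simple Lambertian term: N dot L, clamped to zero.
            ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
            contrib = tuple(a * c * intensity * ndotl
                            for a, c in zip(albedo, light_color))
            # Additive blend: new contribution is summed onto the target.
            out[px] = tuple(o + c for o, c in zip(out[px], contrib))
    return out
```

The key point is that geometry cost and lighting cost are decoupled: the scene is rasterized once, and each additional light only touches the G-buffer, not the meshes.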
If you would like to know more about the technique, I recommend Catalin Zima’s blog.
Now, on to my graphics pipeline.
The basic idea of the pipeline is that the user should have full control over the techniques used in it, while still keeping it simple. This is done by abstracting rendering down to a few simple tasks.
First, we set up a number of render targets: textures that we can render to. If no advanced techniques will be used, we could create only the depth-stencil buffer at this stage and simply use the back buffer later.
Layers define the order in which things render in the scene and which targets they render to. We could set up layers for G-buffer objects, the GUI and post-processing effects. The shaders for individual objects define which layer they render to.
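The layer idea can be sketched in a few lines of Python. The `Layer` class and its fields here are illustrative names I have chosen, not the engine's actual types; the point is just the sort order and the target binding.

```python
# Hypothetical sketch of the layer concept: each layer has a sort order
# and a set of render targets, and collects the objects whose shaders
# chose it.
class Layer:
    def __init__(self, name, order, targets):
        self.name = name        # e.g. "gbuffer", "gui"
        self.order = order      # lower orders render first
        self.targets = targets  # render targets this layer writes to
        self.objects = []       # objects queued for this layer

def render_frame(layers):
    # Draw layers in ascending order; a real renderer would bind
    # layer.targets here before drawing each queued object.
    drawn = []
    for layer in sorted(layers, key=lambda l: l.order):
        drawn.extend((layer.name, obj) for obj in layer.objects)
    return drawn
```

With this structure, rendering the G-buffer before the GUI is just a matter of giving the G-buffer layer a lower order value.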
Resource generators are where the magic happens. A layer can be configured to run a set of resource generators before it renders any geometry. These are special objects that handle tasks like rendering shadow maps and full-screen effects. The image above uses three resource generators: one to convert the z-buffer into linear depth, one to render the light, and lastly one pass to draw all four buffers (albedo, normal, depth and shaded result) to the screen.
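As an example of what such a generator computes, converting the z-buffer into linear depth usually comes down to one formula. The sketch below assumes a D3D-style perspective projection with depth in [0, 1]; the actual shader in the engine may use a different convention.

```python
def linearize_depth(z, near, far):
    # Recover view-space distance from a non-linear depth-buffer value z
    # in [0, 1]. One common formulation for a D3D-style perspective
    # projection: z = 0 maps back to the near plane, z = 1 to the far
    # plane, with most depth precision concentrated near the camera.
    return near * far / (far - z * (far - near))
```

Having linear depth available as a texture makes the later world-position reconstruction in the lighting pass much simpler.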
By giving a developer access to these simple things, it’s possible to implement techniques like forward or deferred shading, depth of field, and bloom. The possibilities are virtually endless.
So now that the basics of the graphics pipeline are in place, I will start working on tools for material editing. Then comes the hard part: recreating the art style of the show.
If you’re not into ponies, don’t worry. The configuration built into the engine will be a more standard pipeline, leaning towards realism rather than Flash-animated cartoons. Rainbow Factory will show how the pipeline can be customized to a completely different style.