The Stanford Bunny

The one and only Stanford Bunny.

The Stanford Bunny is a great test case for model loading and rendering. For those of you who don’t know, the bunny is quite famous among graphics developers due to its widespread use in education and testing.

In this post I would like to give a short update on the graphics pipeline: how work is progressing and how it works.

First of all, the techniques I have used for the image above are the fundamental parts of deferred shading. I won’t go into much detail about how deferred shading works, mostly because others have described it much better than I can.

The basics are:
1. Render the geometry, without shading, to multiple render targets for color, normal and depth (the G-buffer).
2. Render each light as geometry, using a shader that reconstructs the world position of each pixel from the depth buffer and shades it. Render the results to a new target (or the back buffer) using additive blending, as in the sketch below.
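
To make those two steps a bit more concrete, here is a rough C# sketch of how a frame could be structured. Everything here is a placeholder of my own: the interfaces and method names are not the actual classes from the engine or from SlimDX.

```csharp
using System.Collections.Generic;

// Placeholder types only; the real engine and SlimDX classes look different.
public interface IRenderTarget { }
public interface IShader { }
public interface ICamera { }
public interface IMesh  { void Draw(IShader shader, ICamera camera); }
public interface ILight { IMesh Volume { get; } }  // e.g. a sphere for a point light

public abstract class DeferredSketch
{
    // Step 1 outputs: the G-buffer.
    protected IRenderTarget Albedo, Normal, Depth;
    // Step 2 output: the accumulation target (could also be the back buffer).
    protected IRenderTarget LightAccumulation;

    protected IShader GBufferShader;  // writes color/normal/depth, no lighting
    protected IShader LightShader;    // reconstructs world position from depth and shades

    public void RenderFrame(IEnumerable<IMesh> scene, IEnumerable<ILight> lights, ICamera camera)
    {
        // Step 1: render all geometry, unshaded, into the G-buffer.
        SetRenderTargets(Albedo, Normal, Depth);
        foreach (var mesh in scene)
            mesh.Draw(GBufferShader, camera);

        // Step 2: render each light's volume into the accumulation target,
        // with additive blending so overlapping lights sum up.
        SetRenderTargets(LightAccumulation);
        SetAdditiveBlending(true);
        foreach (var light in lights)
            light.Volume.Draw(LightShader, camera);
        SetAdditiveBlending(false);
    }

    // Device-specific plumbing, left abstract in this sketch.
    protected abstract void SetRenderTargets(params IRenderTarget[] targets);
    protected abstract void SetAdditiveBlending(bool enabled);
}
```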

If you would like to know more about the technique, I recommend Catalin Zima’s blog.

The basic steps of deferred shading.

Now, on to my graphics pipeline.

The basic idea of the pipeline is that the user should have full control over the techniques used in it, while still keeping it simple. This is done by abstracting rendering down to a few simple tasks.

Targets:
First, we set up a number of render targets: textures that we can render to. If no advanced techniques are needed, we could create only the depth-stencil buffer at this stage and simply use the back buffer later.
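
As a rough illustration (the names and types below are only placeholders, not the engine’s real API), the target setup boils down to a list of descriptions like this:

```csharp
// Placeholder sketch of the target-setup step; these names are illustrative,
// not the engine's real API.
public enum TargetFormat { Color, SingleChannelFloat, DepthStencil }

public sealed class TargetDescription
{
    public string Name;        // how layers and resource generators refer to the target
    public TargetFormat Format;
}

public static class ExampleTargets
{
    // G-buffer targets plus the depth-stencil buffer.
    public static readonly TargetDescription[] Deferred =
    {
        new TargetDescription { Name = "Albedo",     Format = TargetFormat.Color },
        new TargetDescription { Name = "Normal",     Format = TargetFormat.Color },
        new TargetDescription { Name = "Depth",      Format = TargetFormat.SingleChannelFloat },
        new TargetDescription { Name = "SceneDepth", Format = TargetFormat.DepthStencil },
    };
}
```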

Layers:
Layers define the order in which things render in the scene and which targets they render to. We could set up layers for G-buffer objects, the GUI and post-processing effects. The shaders for individual objects define which layer they should render to.
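
Continuing with the same placeholder style (not the real API), a layer could be described by little more than a name, an order and the targets it writes to:

```csharp
// Placeholder sketch of how layers could be described; again, not the real API.
public sealed class LayerDescription
{
    public string Name;             // shaders pick a layer by name
    public int Order;               // layers render in ascending order
    public string[] OutputTargets;  // which targets this layer writes to
}

public static class ExampleLayers
{
    public static readonly LayerDescription[] Deferred =
    {
        // Scene geometry writes into the G-buffer targets defined earlier.
        new LayerDescription { Name = "GBuffer",     Order = 0, OutputTargets = new[] { "Albedo", "Normal", "Depth" } },
        // Post-processing and the GUI render later, on top of the shaded result.
        new LayerDescription { Name = "PostProcess", Order = 1, OutputTargets = new[] { "BackBuffer" } },
        new LayerDescription { Name = "GUI",         Order = 2, OutputTargets = new[] { "BackBuffer" } },
    };
}
```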

Resource Generators:
This is where the magic happens. A layer can be configured to run a set of resource generators before it renders any geometry. Resource generators are special objects that handle things like rendering shadow maps and full-screen effects. The image above uses three resource generators for full-screen effects: the first converts the z-buffer into linear depth, the second renders the lighting, and the last one composites all four buffers (albedo, normal, depth and the shaded result) to the screen.
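
Sketched with placeholder names one more time (the interface and the generator classes below are mine, not the engine’s), a resource generator is essentially an object with a single generate step, and the three used for the image above could look like this:

```csharp
using System.Collections.Generic;

// Placeholder sketch: a resource generator is an object the layer runs before
// drawing its geometry. The interface and class names here are illustrative only.
public interface IResourceGenerator
{
    // Reads existing targets and writes new ones (shadow maps, full-screen passes, ...).
    void Generate(IDictionary<string, object> targets);
}

public sealed class LinearDepthGenerator : IResourceGenerator
{
    // Full-screen pass: convert the non-linear z-buffer into linear depth.
    public void Generate(IDictionary<string, object> targets) { /* ... */ }
}

public sealed class DeferredLightingGenerator : IResourceGenerator
{
    // Renders the lights against the G-buffer with additive blending.
    public void Generate(IDictionary<string, object> targets) { /* ... */ }
}

public sealed class DebugCompositeGenerator : IResourceGenerator
{
    // Draws albedo, normal, depth and the shaded result side by side,
    // as in the screenshot at the top of the post.
    public void Generate(IDictionary<string, object> targets) { /* ... */ }
}
```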

By giving a developer access to these simple things, it’s possible to implement techniques like forward or deferred shading, depth of field, and bloom. The possibilities are virtually endless.

Now that the basics of the graphics pipeline are in place, I will start working on tools for material editing. Then comes the hard part: recreating the art style of the show.

If you’re not into ponies, don’t worry. The configuration built into the engine will be a more standard pipeline, leaning towards realism rather than Flash-animated cartoons. Rainbow Factory will show how the pipeline can be customized for a completely different style.

3 thoughts on “The Stanford Bunny”

  1. Maybe it’s time to change the “nature” of the commentators xD I was just wondering which physics engine you were using and found it on the “What is Harmony?” page. I tried Farseer once but eventually moved to BEPUphysics (http://bepuphysics.codeplex.com/), mainly to get support for 3D environments too. SlimDX was an interesting discovery; I hadn’t heard of it, and it seems like a good option given Microsoft’s recent decision to “kill” XNA.

    PS: still not a worthy comment, somewhat in the vein of the spam messages without much substance, but at least a bit more related to the post.

    • Hi, and thanks!
      I’ve actually been thinking of implementing a 3D physics engine alongside the 2D physics, so the game developer can choose what type of environment they want. That will have to come later though.
      I’ll definitely check BEPUphysics out. Just read their front page and it looks interesting.
