Finding the right rendering solution for our games

made in GameMaker:Studio

#1 - Light Pre-Pass aka Deferred Lighting
28th of January 2015
What is up guys,

today we decided to start a little "tech blog" about our work on a new rendering system for our current and future games in GameMaker: Studio.

As you may have noticed, for example on screenshots from BBP, we already have vertex lighting, real-time soft shadows and, newly, rim lighting, but some important aspects like specular reflections, ambient occlusion and environment mapping are completely absent. Vertex lighting also does not look good on models with a low polygon count. This is the main reason why we are trying to find a better way to handle lighting...

Forward rendering
So forward rendering would give us per-pixel lighting, but in our case it is definitely not the solution, because we want as many dynamic lights as possible. In forward rendering, every fragment loops over all the lights and accumulates their contributions. With many lights this gets slow very quickly.
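A minimal sketch of what such a forward-lit fragment shader might look like (the uniform names and Lambertian-only shading are just for illustration, not our actual shader):

```hlsl
#define MAX_LIGHTS 8

float3 uLightPos[MAX_LIGHTS];    // light positions (view space)
float3 uLightColour[MAX_LIGHTS]; // light colours
int    uLightCount;

float4 main(float3 normal : TEXCOORD0, float3 viewPos : TEXCOORD1) : COLOR0
{
    float3 N = normalize(normal);
    float3 lighting = 0.0;

    // Every fragment walks the whole light list...
    for (int i = 0; i < MAX_LIGHTS; ++i)
    {
        if (i >= uLightCount) break;
        float3 L = normalize(uLightPos[i] - viewPos);
        lighting += uLightColour[i] * max(dot(N, L), 0.0);
    }
    return float4(lighting, 1.0);
}
```

The cost grows with fragments × lights, which is exactly why this approach falls apart with many dynamic lights.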

Deferred rendering
Deferred rendering is a popular way to handle many dynamic lights. The basic idea is to draw the opaque geometry and fill up a G-buffer (in our case surfaces) with all the necessary data, like diffuse colour, depth, normals and specular power. Then, to another surface, you draw only the light models, and in a fragment shader you calculate the fragment's position on the screen, read data from all the buffers and accumulate lighting based on that data. Then you can just combine the surface with diffuse colour and the surface with lighting. This way you can have a great amount of lights in the scene. The problem is that having many surfaces eats up video memory, and reading from many surfaces is also slow. Another problem is that deferred rendering does not work for translucent objects, so we would have to handle stuff like particles some other way.
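The lighting pass for a single point light could look roughly like this (a sketch only; the sampler and uniform names are made up, and the view-space position reconstruction is left out):

```hlsl
sampler2D uTexDiffuse; // G-buffer: diffuse colour
sampler2D uTexNormal;  // G-buffer: view-space normals
sampler2D uTexDepth;   // G-buffer: depth

float3 uLightPos;      // light position (view space)
float3 uLightColour;

float4 main(float4 screenPos : TEXCOORD0) : COLOR0
{
    // Fragment position on screen -> G-buffer texture coordinate
    float2 uv = screenPos.xy / screenPos.w * 0.5 + 0.5;

    float3 albedo = tex2D(uTexDiffuse, uv).rgb;
    float3 N      = tex2D(uTexNormal, uv).xyz * 2.0 - 1.0;
    float3 viewPos = 0.0; // ...reconstruct from tex2D(uTexDepth, uv) here...

    float3 L       = normalize(uLightPos - viewPos);
    float3 diffuse = uLightColour * max(dot(N, L), 0.0);
    return float4(albedo * diffuse, 1.0);
}
```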

Deferred lighting
Deferred lighting (also known as light pre-pass) is something between forward and deferred rendering. The difference from deferred rendering is that instead of two passes (geometry, lighting) we have three. In the first pass we write only depth and normals to the G-buffer. In the second pass we do the lighting. In the third pass we draw the geometry again and apply the accumulated lighting to it. Now, why would anyone want to do that? Why would you draw the whole scene twice instead of once? The thing is that we don't have to store a big amount of data in many different surfaces, and we can even assign a different material to any object we want. We also don't have to use the same shader for every object, which means we can easily handle translucent geometry, particles etc. So far this looks very promising. The only disadvantage is that we have to draw the whole scene twice, but with certain optimizations we can draw a lower amount of geometry.
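In the third pass each object just samples the accumulated light buffer while being drawn again (a sketch under our assumptions; uTexLighting is a hypothetical name for the lighting surface):

```hlsl
sampler2D uTexLighting; // accumulated lighting from pass 2
sampler2D uTexBase;     // the object's own base texture

float4 main(float4 screenPos : TEXCOORD0,
            float2 texCoord  : TEXCOORD1) : COLOR0
{
    // Screen position -> lighting buffer texture coordinate
    float2 uv = screenPos.xy / screenPos.w * 0.5 + 0.5;

    float3 albedo   = tex2D(uTexBase, texCoord).rgb;
    float3 lighting = tex2D(uTexLighting, uv).rgb;

    // Each object is free to combine the lighting with its own material here
    return float4(albedo * lighting, 1.0);
}
```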

This approach is also used in CryEngine 3 (one of our inspirations :D )

Here you can see our implementation of deferred lighting in GM:S (click to enlarge).

It is just a first working version, so there are many things left to implement. So far we only have point lights, normal mapping, "materials" (just an attempt: the specular reflection of metals is tinted by their base colour, while other materials reflect the original light colour; it may not be very noticeable because of the low specular power of the present materials), SSAO and vignetting. If everything goes well, stuff like directional lights, shadows, environment mapping, particles, vegetation, post-processing etc. will come later.

We have also made an exe for you, so if you want to see it in action, just download it from the link below.

#2 - Shadows, SSAO, Translucency
14th of February 2015

Watsup guyz,

after a longer delay, I am back with another part of our tech blog. The reason I am writing this after such a long time is that I have spent a while researching and reading a lot of papers on realistic real-time rendering, global illumination and similar topics. From the beginning I wanted to go for physically based rendering, which definitely is possible (we have already experimented with it, though that was just a Cook-Torrance BRDF), but it takes tons of work for ten people and a really good insight into real-world physics. And I am just one person with a lot of other work to do. So I decided to go with good old Phong and Lambertian shading and make our engine graphically appealing, not necessarily realistic. Still, PBR might be our next step sometime in the future.
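For reference, the Lambertian + Blinn-Phong combination boils down to just a few lines per light (a generic sketch, not our exact shader code):

```hlsl
// N = surface normal, L = direction to light, V = direction to camera
float3 shade(float3 N, float3 L, float3 V,
             float3 lightColour, float3 albedo,
             float3 specColour, float specPower)
{
    float3 H    = normalize(L + V);                     // half vector
    float  diff = max(dot(N, L), 0.0);                  // Lambertian diffuse
    float  spec = pow(max(dot(N, H), 0.0), specPower);  // Blinn-Phong specular
    return lightColour * (albedo * diff + specColour * spec);
}
```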

Ok, now let's finally have a look at what we have done so far...

Deferred Directional Light and Real-Time Shadows
This is something we had actually finished before in forward rendering, so it was only a matter of implementing it in our deferred lighting engine, which I guess is what would interest you the most.

So the approach is similar to deferred point lights, but instead of applying the lighting to a 3D model, we use a separate shader on a full-screen quad drawn in an orthographic projection, and we pass all the needed matrices as uniforms. For the light itself you just need the normals from the G-buffer and the light vector transformed into view space (or world space, depending on your normals) by multiplying it with the view (world) matrix.
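A sketch of such a full-screen directional light shader (uniform names are our own, illustrative choices):

```hlsl
sampler2D uTexNormal;   // G-buffer: view-space normals
float3    uLightDir;    // light direction, already transformed to view space
float3    uLightColour;

float4 main(float2 texCoord : TEXCOORD0) : COLOR0
{
    float3 N = normalize(tex2D(uTexNormal, texCoord).xyz * 2.0 - 1.0);
    float3 L = normalize(-uLightDir); // from the surface towards the light
    float3 lighting = uLightColour * max(dot(N, L), 0.0);
    return float4(lighting, 1.0);
}
```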

The shadows might be a little trickier. I can't tell if the way we are doing it is the right way, so if you know a better one, feel free to share it with us :) What we need is the depth from our G-buffer, the depth from the shadow map projection, the inverse world-view matrix and the projection matrix used for shadow mapping. First we reconstruct the view-space position and then multiply it with the inverse world-view matrix to get the world-space position.

Code: Select all

// Reconstruct the view-space position from the stored depth
float depth = decode_depth(tex2D(uTexDepth, IN.TexCoord));
float3 viewPos  = float3(uTanAspect * (IN.TexCoord * 2.0 - 1.0) * depth, depth) * uClipFar;
// Back to world space via the inverse world-view matrix
float4 worldPos = mul(uMatInverse, float4(viewPos, 1.0));

When we have the world position, we can just multiply it with our shadow map projection matrix (an orthographic projection fits shadows from a directional light best).

Code: Select all

float3 OrthoPos = mul(uMatOrtho, worldPos).xyz;

Then we use the result to compare against the depth from our shadow map. If the fragment's depth in light space is greater than the stored depth, the point is in shadow. But if we use only one sample for the comparison, the shadows will have a hard edge, and that is not really appealing. Unfortunately GM:S does not support hardware PCF (percentage-closer filtering), which automatically takes 4 samples around the texel and compares them to get the final result (=> shadows with a softer edge), so we had to solve it another way. We decided to go for Stratified Poisson Sampling. For Poisson sampling we need a Poisson disk: a set of points covering the whole unit disk around our current pixel. Example of a Poisson disk:

Code: Select all

const float2 poissonDisk[8] = {
    float2(-0.6622922, -0.5487083),
    float2(-0.8485804,  0.3489894),
    float2(-0.2681293, -0.1692538),
    float2(0.02454568, -0.9016637),
    float2(0.2056526,   0.2451244),
    float2(-0.220465,   0.7909056),
    float2(0.5490856,  -0.3020224),
    float2(0.724205,    0.405656)
};

Then for every pixel we take multiple samples offset by the Poisson disk. The "stratified" part comes from indexing poissonDisk[] with a (pseudo-)random index for every sample. This basically means the result will have a noisy edge, as you can see below.
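Put together, the sampling loop could look something like this (a sketch; rand() stands in for whatever pseudo-random function you use, and uShadowMapTexel is a hypothetical texel-size uniform):

```hlsl
float shadow = 1.0;
float bias   = 0.005;            // small offset against shadow acne

for (int i = 0; i < 4; ++i)
{
    // "Stratified": pick a pseudo-random disk entry per sample
    int index = int(8.0 * rand(OrthoPos.xy + i)) % 8;
    float2 offset = poissonDisk[index] * uShadowMapTexel;

    float stored = decode_depth(tex2D(uTexShadowMap, OrthoPos.xy + offset));
    if (OrthoPos.z - bias > stored)
        shadow -= 0.25;          // each failing sample darkens by 1/4
}
```

Averaging a few randomized disk samples like this is what turns the hard shadow edge into the soft, slightly noisy one shown below.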

The result:

Screen Space Ambient Occlusion (SSAO)
SSAO is something we already had in the previous tech blog (you can see it if you download the exe), but we didn't really like its look, so we decided to make a little upgrade. (There are a lot of documents on SSAO all around the internet, so I am not going to write here exactly how it works...)

At first we used CryTek's original approach, where you only use the depth buffer to accumulate the occlusion. But that leads to a lot of self-occlusion, because half of the samples end up inside the geometry, so those samples are basically wasted.

An improvement on CryTek's original method is orienting the sample vectors around the normal of the surface => a normal-oriented hemisphere. There are multiple ways to get such a hemisphere, and we liked Blizzard's approach (see slide 16), which they used in StarCraft II: any sample vector facing away from the surface normal is flipped, so it ends up above the surface.
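The flip itself is just one comparison per sample (sketch; uSampleKernel is an assumed array of random vectors inside the unit sphere):

```hlsl
// N = surface normal (view space), uSampleKernel[i] = random unit-sphere vector
float3 s = uSampleKernel[i];
if (dot(s, N) < 0.0)
    s = -s;   // flip into the hemisphere above the surface
// ...then offset the fragment's position by s and test occlusion as usual...
```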

This is the result. It is still not perfect, but it looks much better than the old SSAO:

Real-Time Translucency Approximation
Translucency and subsurface scattering is one of the topics I have been researching for the last week. It is something that simply adds to the graphical quality and gives a better feel of realism, so I just wanted to have it in our engine. But it is also quite heavy to compute in real time. Techniques like ray tracing are not really acceptable for us, and we don't think we even need such an approach for our simple purposes. We just wanted something that is not computationally demanding but still has the look. Fortunately for us, DICE (Battlefield series) has already come up with such an approach, and it is really well described in their presentation.
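The core of DICE's approximation fits in a few lines; this sketch follows the idea from their presentation, with parameter names and values of our own choosing (the thickness would come from a precomputed local-thickness texture):

```hlsl
// L = direction to light, V = direction to camera, N = surface normal
// thickness = local thickness at this point (0 = thin, 1 = thick)
float3 translucency(float3 L, float3 V, float3 N, float3 lightColour,
                    float thickness)
{
    float distortion = 0.2;  // how much the normal bends the transmitted light
    float power      = 4.0;  // sharpness of the see-through highlight
    float ambient    = 0.1;  // light bleeding through regardless of angle

    // Light shining "through" the object towards the viewer
    float3 LTLight = L + N * distortion;
    float  LTDot   = pow(saturate(dot(V, -LTLight)), power);

    // Thin parts transmit more light than thick ones
    return lightColour * (LTDot + ambient) * (1.0 - thickness);
}
```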

There is a huge collection of screenshots from our implementation:

(Sorry I just loved the effect :D)

If you want to see the effect in action, you can download the exe from the link below :) But it is not optimized yet, so expect low fps / fps drops!

Download EXE (#n represents the number from which tech blog the exe is)
#2: (not optimized!)

Used models and textures:
Phong reflection model:
Blinn-Phong shading model:
Encoding floats to RGBA:
Normal mapping without precomputed tangents:
Compact normal storage for small G-buffers:
Light Pre-Pass:
Shadow filtering:
Texture lookup in cube map:
CryTek's presentation:
DICE Translucency Approximation:
Cook-Torrance BRDF:
Poisson Disk Generator:
SSAO Tutorial by John Chapman:
