
Shading Toolbox #3 – Unity Depth Buffer and Depth Blur

Today's end result will be a very basic Depth Blur, similar to a Depth of Field effect in Unity3D. For this purpose we are going to use the Depth Buffer to figure out how far away a pixel is from our camera. We will then blur it using a very simple "Average Blur".

Other interesting tidbits in this post:

  • Writing a custom Blit function (allows us to pass more data to our post-processing shader, even on a per-vertex basis!)
  • How to prevent a flipped screen when post-processing in Deferred Shading mode (and using a custom Blit)
  • What is a Texel? (and why should I cancel my next vacation?)

Setting up our PostProcessing Shader

We have already talked about how to do this in Shading Toolbox #1 – Chromatic Split / Aberration, so let's just boot up a standard set-up like we did before. In Unity, right-click anywhere in your Project view and select Create → Shader → Image Effect Shader.
Name it DepthBlur and then proceed to create a C# script to host our shader. I called mine IE_DepthBlur.cs (IE = Image Effect, not Internet Explorer!)

using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class IE_DepthBlur : MonoBehaviour {

    private Shader m_shader;
    private Material m_material;

    void Awake()
    {
        // force the camera to render the depth texture; that's usually true anyway, but just in case...
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
        m_shader = Shader.Find("Hidden/DepthBlur");
        m_material = new Material(m_shader);
    }

    void OnRenderImage(RenderTexture source, RenderTexture dest)
    {
        Graphics.Blit(source, dest, m_material);
    }
}

Attach it to a camera and we are done with our basic set-up. Note that I added a line where I set the depthTextureMode of the camera. This is necessary to ensure that, no matter the rendering mode, we always have a depth texture.
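As a small aside: Shader.Find returns null when the shader cannot be located (for example, when it is not referenced by anything in a build and gets stripped). If you want to guard against that, a defensive variant of Awake might look like this (just a sketch; the guard and its error message are my own addition, not part of the original script):

void Awake()
{
    // force the camera to render the depth texture
    GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;

    m_shader = Shader.Find("Hidden/DepthBlur");
    if (m_shader == null)
    {
        // hypothetical guard: bail out instead of creating a material from a missing shader
        Debug.LogError("Hidden/DepthBlur shader not found, disabling IE_DepthBlur.");
        enabled = false;
        return;
    }
    m_material = new Material(m_shader);
}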

Let's go deeper: the Depth Buffer

There are several ways to access the Depth Buffer of our camera. However, I prefer to simply declare the _CameraDepthTexture property, which is of type sampler2D; Unity will then be a dear and provide us with the current depth texture.
Let us also add a float2 uv_depth to our input structs, which we will use to pass along a copy of our current UV values. Why we do this will become clear a b(l)it later (pardon the pun).

SubShader
{
	// No culling or depth
	Cull Off ZWrite Off ZTest Always

	Pass
	{
		CGPROGRAM
		#pragma vertex vert
		#pragma fragment frag
			
		#include "UnityCG.cginc"

		struct appdata
		{
			float4 vertex : POSITION;
			float2 uv : TEXCOORD0;
			float2 uv_depth : TEXCOORD1;
		};

		struct v2f
		{
			float2 uv : TEXCOORD0;
			float4 vertex : SV_POSITION;
			float2 uv_depth : TEXCOORD1;
		};

		v2f vert (appdata v)
		{
			v2f o;
			o.vertex = UnityObjectToClipPos(v.vertex);
			o.uv = v.uv;
			// pass a copy of our UV's because depending on our rendering mode
			// we might have to flip our original uv's
			o.uv_depth = v.uv;
			return o;
		}
			
		sampler2D _MainTex;
		sampler2D _CameraDepthTexture; // Grab the depth texture

		fixed4 frag (v2f i) : SV_Target
		{
			return tex2D(_MainTex, i.uv);	
		}
	}
}

Now, those who actually read my code might have inferred from the comment that we copy the uv value to uv_depth because deferred shading tends to flip our image. That is no problem as long as we use the Graphics.Blit function in our C# script, but it will become one once we write our own custom Blit function a bit later: even if our camera's render texture is flipped, the depth texture is not. So we need a second, separate pair of UVs for depth sampling that we leave untouched.

And speaking of depth sampling, let's get to it. Add the following two lines to your fragment shader.

// sample the raw depth from the depth texture
float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv_depth);
// linearize this to 0 .. 1, where 0 is at the near clipping plane and 1 at the far clipping plane
float linDepth = Linear01Depth(rawDepth);

We can easily visualize these depth values by temporarily adding the following return statement.

return linDepth * _ProjectionParams.z * 0.1;
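
Putting those pieces together, the temporary debug version of the fragment function could look like this (just a sketch; _ProjectionParams.z holds the far clipping plane distance, and the 0.1 is an arbitrary scale factor so that everything beyond roughly ten units doesn't immediately clamp to white):

fixed4 frag (v2f i) : SV_Target
{
    // sample and linearize the depth as described above
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv_depth);
    float linDepth = Linear01Depth(rawDepth);

    // grayscale output: near = black, far = white
    return linDepth * _ProjectionParams.z * 0.1;
}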

Blit()

Before we get to the meat of things and implement the depth blur, let's take a moment to talk about custom blitting. (Meme-culture people: Cyka Blit jokes are NOT appreciated here.)
Writing your own Blit function is not strictly necessary for this shader, but I think it's important for us to discuss it, since we will need it in the near future when we cover advanced post-processing effects that require per-vertex parameters to be passed to our shader.

A while ago, Unity Technologies granted us the privilege of using the GL class, which is the interface to the low-level graphics library. Using this we can do really fun stuff, like drawing shapes and forms without bothering with meshes! (A silent yay for this!)
In a previous post I mentioned that post-processing in Unity is quite simply the shading of a quad that just so happens to be textured with the things our camera sees. Utilizing this knowledge, we will use the GL class to draw a quad and shade it with our shader.

In the IE_DepthBlur class, add the following method.

void CustomBlit(RenderTexture source, RenderTexture dest, Material mat)
{
    // When applying an image effect we shade a Quad with our rendered screen output (rendertexture) on it.

    // Set new rendertexture as active and feed the source texture into the material
    RenderTexture.active = dest;
    mat.SetTexture("_MainTex", source);

    // Low-Level Graphics Library calls

    GL.PushMatrix(); // save the current transformation matrices on the GL matrix stack
    GL.LoadOrtho(); // load an orthographic projection that maps (0,0)-(1,1) to the full screen

    mat.SetPass(0); // activate the first pass of our material for rendering

    GL.Begin(GL.QUADS); // Begin rendering quads

    GL.MultiTexCoord2(0, 0.0f, 0.0f); // prepare input struct (Texcoord0 (UV's)) for this vertex
    GL.Vertex3(0.0f, 0.0f, 0.0f); // Finalize and submit this vertex for rendering (bottom left)

    GL.MultiTexCoord2(0, 1.0f, 0.0f); // prepare input struct (Texcoord0 (UV's)) for this vertex
    GL.Vertex3(1.0f, 0.0f, 0.0f); // Finalize and submit this vertex for rendering  (bottom right)

    GL.MultiTexCoord2(0, 1.0f, 1.0f); // prepare input struct (Texcoord0 (UV's)) for this vertex
    GL.Vertex3(1.0f, 1.0f, 0.0f); // Finalize and submit this vertex for rendering  (top right)

    GL.MultiTexCoord2(0, 0.0f, 1.0f); // prepare input struct (Texcoord0 (UV's)) for this vertex
    GL.Vertex3(0.0f, 1.0f, 0.0f); // Finalize and submit this vertex for rendering (top left)

    // Finalize drawing the Quad
    GL.End();
    // Pop the matrices off the stack
    GL.PopMatrix();
}

Now replace the Graphics.Blit call with CustomBlit and we are done.

But here's where we run into a little problem. Using a custom Blit function allows us to manually pass parameters to the four vertices of our quad, which is awesome, but we passed the UVs assuming that the screen space UVs start at the bottom left. This, however, is NOT the case in deferred shading, and our image ends up flipped upside down. Now, we could of course check which rendering mode the camera is currently using and pass different UVs if we are rocking deferred shading. But since we are planning to use the depth buffer, which is not flipped in deferred shading, we would then also have to expose the depth UVs in the vertex input struct and set those manually ... and all that is way too much work.

There is, of course, a simpler solution.

In our shader, declare another Unity built-in variable, _MainTex_TexelSize, which is a float4.
In our fragment function we can then conveniently check for the UNITY_UV_STARTS_AT_TOP compile flag, which is true on platforms where texture coordinates start at the top left. If, on such a platform, _MainTex_TexelSize.y is smaller than 0, the main texture has been flipped, so we flip the uv coordinate to match.

Your code should look a little like this:

[… SNIP …] 
sampler2D _MainTex;
sampler2D _CameraDepthTexture;
float4 _MainTex_TexelSize;

fixed4 frag (v2f i) : SV_Target
{
    // Prevent Flipping Forward/Deferred UVs on D3D
    // https://docs.unity3d.com/Manual/SL-PlatformDifferences.html
#if UNITY_UV_STARTS_AT_TOP
    if (_MainTex_TexelSize.y < 0)
        i.uv.y = 1 - i.uv.y;
#endif
[… SNIP …]

But what the hell is a Texel?

Well, Texel is a municipality and an island with a population of 13,641 in the province of North Holland in the Netherlands. It is the largest and most populated island of the West Frisian Islands in the Wadden Sea.

More importantly (sorry, Texelians?), a Texel is a Texture-Pixel, or Texture Element. When we map textures to objects in our virtual 3D world, we can't always display all the pixels that a texture has. For example, my over-ferocious colleague made a 4096 x 4096 pixel texture for a box of matches; that box, however, will never be bigger than 100 pixels on screen. Textures are therefore split into texels, whose size is relative to the width and height of the surface we are drawing the texture on. Essentially, the texel is what we use in place of a pixel. And since a texel, depending on the on-screen size of our object, can be bigger or smaller than an actual pixel, magical filters are applied to these texture elements to fit the size of the surface we are drawing on.

For screen post-processing this means that _MainTex_TexelSize.x equals 1 / ScreenWidth (and .y equals 1 / ScreenHeight), so our texel is in fact one pixel (assuming the render texture we get is not super-sampled to 4K or something).
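
In shader terms, stepping from the current fragment to a neighboring texel is just a UV offset. A minimal sketch (n and m are hypothetical step counts chosen purely for illustration):

// move n texels right and m texels up from the current fragment;
// _MainTex_TexelSize.xy is (1/width, 1/height) of _MainTex
int n = 3, m = 2; // hypothetical values for illustration
float2 neighborUV = i.uv + float2(n * _MainTex_TexelSize.x,
                                  m * _MainTex_TexelSize.y);
float4 neighborCol = tex2D(_MainTex, neighborUV);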

Depth Blur

Now let us implement a depth blur akin to a depth of field effect, where distant objects are going to be blurred out, whereas close objects will remain focused.

For this purpose let us extend our C# script by a public value called DepthFade and set this value in our shader from within our custom Blit function.

[… snip …]
[Range(0f, 1f)]
public float DepthFade; // added

void Awake()
{
    // force the camera to render the depth texture
    GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
    m_shader = Shader.Find("Hidden/DepthBlur");
    m_material = new Material(m_shader);
}

void OnRenderImage(RenderTexture source, RenderTexture dest)
{
    CustomBlit(source, dest, m_material);
}

void CustomBlit(RenderTexture source, RenderTexture dest, Material mat)
{
    // When applying an image effect we shade a Quad with our rendered screen output (rendertexture) on it.
    // Set new rendertexture as active and feed the source texture into the material
    RenderTexture.active = dest;
    mat.SetTexture("_MainTex", source);

    mat.SetFloat("_DepthFade", DepthFade); // added

[… snip …]

Now we still have to declare _DepthFade inside the shader, so add the following above the fragment function.

float _DepthFade;

Remember when we debugged the depth? We can use this value, or one like it, to determine how strong our blur should be. So after we have successfully calculated our linDepth, add the following line.

// scale the blur strength with the linear depth; _DepthFade is squared
// so that the 0..1 slider gives finer control at low values
float BlurStrength = linDepth * _ProjectionParams.z * _DepthFade * _DepthFade;

Now the _DepthFade value will determine how quickly the blur increases in strength.
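
To get a feel for the numbers: with a far clipping plane at 1000 units and _DepthFade = 0.1, a fragment halfway to the far plane (linDepth = 0.5) gets BlurStrength = 0.5 * 1000 * 0.1 * 0.1 = 5, which the saturate we apply below clamps to 1, i.e. full blur. This is why the interesting _DepthFade range sits close to 0.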

Finally, let us blur some stuff. For this we will implement a very basic Box Blur with a 5×5 kernel (or convolution matrix, if that suits your fancy). We will go over convolution matrices and what we can do with them in a different post; for now, suffice it to say that we take all fragments in a 5×5 grid around the fragment we are currently looking at and average their color values.

// blur
float4 col = tex2D(_MainTex, i.uv); // the original, sharp color
float4 c10 = tex2D(_MainTex, i.uv + float2(-2 * _MainTex_TexelSize.x,	2 * _MainTex_TexelSize.y));
float4 c11 = tex2D(_MainTex, i.uv + float2(-1 * _MainTex_TexelSize.x,	2 * _MainTex_TexelSize.y));
float4 c12 = tex2D(_MainTex, i.uv + float2(0,							2 * _MainTex_TexelSize.y));
float4 c13 = tex2D(_MainTex, i.uv + float2(1  * _MainTex_TexelSize.x,	2 * _MainTex_TexelSize.y));
float4 c14 = tex2D(_MainTex, i.uv + float2(2  * _MainTex_TexelSize.x,	2 * _MainTex_TexelSize.y));

float4 c20 = tex2D(_MainTex, i.uv + float2(-2 * _MainTex_TexelSize.x, 1 * _MainTex_TexelSize.y));
float4 c21 = tex2D(_MainTex, i.uv + float2(-1 * _MainTex_TexelSize.x, 1 * _MainTex_TexelSize.y));
float4 c22 = tex2D(_MainTex, i.uv + float2(0,						  1 * _MainTex_TexelSize.y));
float4 c23 = tex2D(_MainTex, i.uv + float2(1 * _MainTex_TexelSize.x,  1 * _MainTex_TexelSize.y));
float4 c24 = tex2D(_MainTex, i.uv + float2(2 * _MainTex_TexelSize.x,  1 * _MainTex_TexelSize.y));

float4 c30 = tex2D(_MainTex, i.uv + float2(-2 * _MainTex_TexelSize.x, 0 * _MainTex_TexelSize.y));
float4 c31 = tex2D(_MainTex, i.uv + float2(-1 * _MainTex_TexelSize.x, 0 * _MainTex_TexelSize.y));
float4 c32 = tex2D(_MainTex, i.uv + float2(0,						  0));
float4 c33 = tex2D(_MainTex, i.uv + float2(1 * _MainTex_TexelSize.x,  0 * _MainTex_TexelSize.y));
float4 c34 = tex2D(_MainTex, i.uv + float2(2 * _MainTex_TexelSize.x,  0 * _MainTex_TexelSize.y));

float4 c40 = tex2D(_MainTex, i.uv + float2(-2 * _MainTex_TexelSize.x, -1 * _MainTex_TexelSize.y));
float4 c41 = tex2D(_MainTex, i.uv + float2(-1 * _MainTex_TexelSize.x, -1 * _MainTex_TexelSize.y));
float4 c42 = tex2D(_MainTex, i.uv + float2(0,						  -1 * _MainTex_TexelSize.y));
float4 c43 = tex2D(_MainTex, i.uv + float2(1 * _MainTex_TexelSize.x,  -1 * _MainTex_TexelSize.y));
float4 c44 = tex2D(_MainTex, i.uv + float2(2 * _MainTex_TexelSize.x,  -1 * _MainTex_TexelSize.y));

float4 c50 = tex2D(_MainTex, i.uv + float2(-2 * _MainTex_TexelSize.x, -2 * _MainTex_TexelSize.y));
float4 c51 = tex2D(_MainTex, i.uv + float2(-1 * _MainTex_TexelSize.x, -2 * _MainTex_TexelSize.y));
float4 c52 = tex2D(_MainTex, i.uv + float2(0,						  -2 * _MainTex_TexelSize.y));
float4 c53 = tex2D(_MainTex, i.uv + float2(1 * _MainTex_TexelSize.x,  -2 * _MainTex_TexelSize.y));
float4 c54 = tex2D(_MainTex, i.uv + float2(2 * _MainTex_TexelSize.x,  -2 * _MainTex_TexelSize.y));

float4 blurredCol = (c10 + c11 + c12 + c13 + c14 +
			c20 + c21 + c22 + c23 + c24 +
			c30 + c31 + c32 + c33 + c34 +
			c40 + c41 + c42 + c43 + c44 +
			c50 + c51 + c52 + c53 + c54
			) / 25; // average

return lerp(col, blurredCol, saturate(BlurStrength));

As you can see, we use the TexelSize to get the UV position of the fragment right next to the one we are calculating, always keeping in mind that the actual resolution of our screen might change. Since _MainTex_TexelSize.x is the UV width of one texel, or pixel in this case, we can multiply it by any number n to get the UV offset we need to travel to reach the texel that is n texels away.

And then, finally, we return a color interpolated between the actual color of the pixel and the blurred color, resulting in a smooth, increasingly blurry depth. Make sure to saturate the interpolation value, however, since anything outside of the 0 to 1 range would really mess up the returned color.
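
As a side note, the unrolled 25-sample block above can be written more compactly as two nested loops. A functionally equivalent sketch (with literal loop bounds the compiler will typically unroll this anyway; the [unroll] attributes just make that explicit in case your target complains about tex2D inside a loop):

// 5x5 box blur, loop form - equivalent to the unrolled version above
float4 blurredCol = 0;
[unroll]
for (int x = -2; x <= 2; x++)
{
    [unroll]
    for (int y = -2; y <= 2; y++)
    {
        blurredCol += tex2D(_MainTex, i.uv + float2(x, y) * _MainTex_TexelSize.xy);
    }
}
blurredCol /= 25.0; // average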

That's it for today. Thank you for reading my random ramblings about stuff that makes other stuff pretty, and see you next time.

P.S. I am canceling my next vacation to go to Texel instead. What about you?

Alexander
