
Shading Toolbox #4 – Reconstructing World Positions in Post Processing

Hello again everyone!

Image effects that respond to your game world are amazing! Are we in agreement? Good! To do that, however, we often have to reconstruct world positions from the pixels on our screen. Since this may or may not be a rather mysterious process to some, we will now demystify it! And implement a nice visual effect in the process!

Let’s start with an anecdote. I remember back when I was a gamedev student I went to a convention where students showed off their work to the industry. One team made a 3D Snake’ish arcade game with seriously amazing visuals. The most impressive thing to me at the time was the sweeping trails that illuminated the level like a topographical scan. Pretty much the same way that the game No Man’s Sky did it years later.

Well, let me show you how to do that very same thing… Meaning, the effect that made all of us first-semesters as jelly as a jellyfish! If you’re in your 1st semester and want to make your peers jelly as a jellyfish too, read right on! Topographical sweeps à la that envy-inducing student game (or No Man’s Sky…) coming right up!

Note:
If you haven’t already, I strongly advise you to read Toolbox of Shading #3 – Depth Buffer, Blurring and You! before going on, since this article will build upon techniques from that one.

The Theory

The effect is actually ridiculously easy to achieve. Still, after that convention, it never really occurred to me to try it until Dan Morran (the guy from „Making Stuff Look Good with Unity“ on YouTube) released his tutorial for the No Man’s Sky topographic scanner effect. I watched the video and was like: „Well eff me sideways, duh! Of course!“ I ended up implementing the thing myself, compared notes with his git repo and, surprise, the code was pretty much the same… Of course I was not satisfied, so I started a lengthy research session and came up with (or rather found) a solution to the problem that I consider nicer. (And I’ll be damned on the day I write an article that basically mirrors someone else’s tutorial!)

Reconstructing World Space Positions

Let’s review Dan’s approach first. It is a very straightforward approach, following these steps:

  1. Calculate the local corners of our view frustum’s far plane.
  2. Pass these to the vertex part of the image effect.
  3. Let the shader interpolate them in the fragment step; the result is the local coordinate of every fragment on the far clipping plane.
  4. Sample the fragment’s depth.
  5. Normalize the fragment’s depth between the near and far clipping plane.
  6. Use linear interpolation to get the local position of said fragment between the two planes.
  7. Add the world position of the camera to translate from local to world space.

In Dan’s case, finding the frustum corners was done in a rather lengthy process on the CPU. These values were then passed to the shader.
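To give you a rough idea of that CPU-side step, here is a minimal sketch (this is not Dan’s actual code; it uses Unity’s Camera.CalculateFrustumCorners helper, and the _FrustumCorners property name is just made up for illustration):

// Sketch of the CPU-side step: grab the four far-plane corners in camera
// space and hand them to the image effect material before blitting.
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class FrustumCornerPass : MonoBehaviour
{
    public Material effectMaterial; // the image effect material

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Camera cam = GetComponent<Camera>();
        Vector3[] corners = new Vector3[4];
        cam.CalculateFrustumCorners(
            new Rect(0, 0, 1, 1),             // full viewport
            cam.farClipPlane,                 // evaluate corners at the far plane
            Camera.MonoOrStereoscopicEye.Mono,
            corners);

        // Pack the corners into a matrix so the vertex shader can pick the
        // matching one per screen corner and let interpolation do the rest.
        Matrix4x4 frustum = Matrix4x4.identity;
        for (int i = 0; i < 4; i++)
            frustum.SetRow(i, corners[i]);

        effectMaterial.SetMatrix("_FrustumCorners", frustum);
        Graphics.Blit(src, dst, effectMaterial);
    }
}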
While this works, I find the approach a little inconvenient, mainly because of its length and surface complexity. It is, however, a very intuitive approach that allows for quite a bit of customization later on. At this point you might want to go check out his channel; his tutorials are really quite cool and very informative!

An approach by a certain Keijiro, from Unity Technologies no less, is way shorter and, in Unity, requires only one step outside of the shader itself. It is also the way Unity 5.4’s Cinematic Image Effects calculate world space positions. However, Keijiro’s approach might be hard to understand when you are confronted with his condensed piece of code, so let’s break the theory down before even looking at it.

Assuming you have the view, inverse view and projection matrices:

  1. Sample a fragment’s linear eye depth.
  2. Translate the fragment from screen space into view space using reverse projection and the depth.
  3. Translate the fragment from view space into world space using the inverse view matrix.

As you can tell, the individual steps appear to imply a lot more work than the steps in Dan’s approach. Code-wise, however, we get away with only about a tenth of the code (and moar speed!), thanks to the heavy use of matrices.

Usually, when shading an object in 3D space, its vertex positions undergo several matrix multiplications that translate them from one space to another (usually from local, to world, to view, to projection space, with a few intermediate steps). You can check out the LearnOpenGL website for further reading on the topic. In this case, we traverse the chain backwards and obtain the world space position.
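In matrix form, the forward chain and the backwards step we are about to take look like this (with M, V and P being the model, view and projection matrices):

p_{clip} = P \cdot V \cdot M \cdot p_{local}

p_{world} = V^{-1} \cdot p_{view}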

Let’s get our hands dirty!

If you read that with the voice of a blonde annoyance, chances are you are playing too much of a certain Riot Games™ product.

But more to the point, let’s see some code. For all of you folks not working with the Unity Shading API, don’t worry, I’ll come to that in a second.

// Calculate the linear eye depth from the depth buffer.
// This function works only with 0 - 1 values, so make sure to convert depth
// values to that format. The function itself is defined as:
//
// float LinearEyeDepth( float z ) { return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w); }
//
// where	    _ZBufferParams.x = (1 - farClipDistance / nearClipDistance)
// and where    _ZBufferParams.y = (farClipDistance / nearClipDistance)
// and where    _ZBufferParams.z = (x / farClipDistance)
// and where    _ZBufferParams.w = (y / farClipDistance)

float linDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));

// Get a vector that holds the FOV multipliers for our uv's
float2 projectionMultipliers = float2(_ProjectionMatrix._11, _ProjectionMatrix._22);

// Convert from screen space to view space by applying a reverse projection procedure:
// - (i.uv * 2 - 1) converts the uv's so they represent a coordinate system with its origin in the middle
// - the division translates the uv's back from our screen's aspect ratio to a quadratic space
// - -1 denotes a depth of -1, so in the next step we translate AWAY from the origin
// - multiplying by linDepth slides the whole coordinate by the depth, reversing the projection
float3 vpos = float3((i.uv * 2 - 1) / projectionMultipliers, -1) * linDepth;

// Convert from view space to world space
float4 wsPos = mul(_InverseViewMatrix, float4(vpos, 1));

The first step is quite straightforward: we sample our depth texture and linearize the depth value. Note that I use a Unity macro here; depending on your rendering framework you will have to implement this function yourself. The important thing to note is that the function I am going to describe takes depth values between 0 and 1 only, so if you are running a framework that generates depth values from -1 to 1, remember to convert them to the 0 to 1 format.
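(For reference, that remapping from a -1 to 1 depth buffer to the 0 to 1 range is simply:)

d_{0..1} = \frac{d_{-1..1} + 1}{2}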

After that, the math looks like this (it is just the comments from the previous code sample written out):

x = 1 - \frac{far}{near}

y = \frac{far}{near}

f(z) = \frac{1}{\frac{x}{far} \cdot z + \frac{y}{far}}
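As a quick sanity check, plugging in both ends of the 0 to 1 depth range gives exactly the near and far clipping distances:

f(0) = \frac{far}{y} = \frac{far}{\frac{far}{near}} = near

f(1) = \frac{far}{x + y} = \frac{far}{1 - \frac{far}{near} + \frac{far}{near}} = far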

The second step is to extract the necessary projection values from the projection matrix. The projection matrix usually transforms view space coordinates into a projected space based on the camera’s FoV and the frustum parameters (read: far and near clipping plane locations). In short, this matrix defines what the frustum sees and applies perspective distortion to vertices. A typical projection matrix looks a bit like this:

\begin{bmatrix} \frac{\tan(fov/2)^{-1}}{a} & 0 & 0 & 0\\ 0 & \tan(fov/2)^{-1} & 0 & 0 \\ 0 & 0 & \frac{-zp}{zm} & \frac{-2 \cdot far \cdot near}{zm}\\ 0 & 0 & -1 & 0 \end{bmatrix}

Where the values are defined as:

fov = Field of View
a = Screen Aspect Ratio
zp = far + near
zm = far - near

In our code sample we take the (1,1) and (2,2) values from the matrix, which are responsible for scaling vertices along the x and y axes according to the field of view and the screen’s aspect ratio. In the next step we do exactly that, but instead of multiplying we divide, which reverses the process.

Before we do that, however, we convert the screen space uv’s from a lower-left based coordinate system to one that has its origin in the center, by multiplying the uv’s by two and then subtracting one.
Then we slide the calculated position away from the center along the negative Z axis by multiplying the whole construct with our calculated depth. The result is the exact position of the fragment in view space.
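Written out for the x coordinate (y works the same way with the (2,2) entry), the forward projection and its reversal look like this, with x_{ndc} = uv_x \cdot 2 - 1 and linDepth = -z_{view}:

x_{ndc} = \frac{P_{11} \cdot x_{view}}{-z_{view}} \quad\Rightarrow\quad x_{view} = \frac{x_{ndc}}{P_{11}} \cdot (-z_{view}) = \frac{x_{ndc}}{P_{11}} \cdot linDepth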

Now, assuming we have the inverse view matrix, we can simply multiply it with our view space position, which gives us the world space position (since multiplying with the inverse matrix undoes the world-to-view transformation).

The Sweep

Having acquired our world space position, the rest of the shader is… a fragment of cake!

First, let’s declare some parameters for this shader.

sampler2D _ScanTexture;
float4 _ScanColor;
float _ScanHeadPct;
float _TrailWidth;

float4 _PulseOrigin;
float _PulseDistance;

We define a texture and a color, as well as a width and a head percentage. Color and texture should be quite clear; the trail width is the width of the whole scan, and the head percentage denotes how much of the trail should be filled with a solid color line.

Having defined and set these values from the outside, we can now do:

fixed4 col = tex2D(_MainTex, i.uv);
			
if (_PulseDistance == 0) // we don't use the effect, just return the normal screen color
	return col;

[... snip ...] find wsPos [... snip ...]

// calculate the world distance between our fragment and the hit position of whatever
// we want to highlight
float dist = distance(wsPos.xyz, _PulseOrigin.xyz);
float distanceToPulse = dist - _PulseDistance;
float distanceIterator = saturate(distanceToPulse / _TrailWidth);

if (distanceIterator < 1) {
	if (distanceIterator > 1 - _ScanHeadPct)
		return _ScanColor;
	else
		col = lerp(col, tex2D(_ScanTexture, i.uv * 5) * _ScanColor, distanceIterator);
}

return col;

And we’re done. All that is left is to fill in the necessary values at runtime to animate the sweep.
In my case I went ahead and bound the mouse click to set the _PulseOrigin value to the clicked world position, after which I executed a little routine that incremented _PulseDistance over a certain amount of time until it reached a certain value.
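My runtime code lives in the repo, but a minimal sketch of that routine could look something like this (class and field names here are placeholders, not the actual names from the repo):

// Sketch of driving the sweep at runtime: on click, raycast into the scene,
// set the pulse origin to the hit point and grow the pulse distance over time.
using System.Collections;
using UnityEngine;

public class ScanDriver : MonoBehaviour
{
    public Material effectMaterial;   // material using the sweep shader
    public float maxDistance = 50f;   // how far the sweep travels
    public float duration = 3f;       // how long the sweep takes

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                StopAllCoroutines();
                effectMaterial.SetVector("_PulseOrigin", hit.point);
                StartCoroutine(AnimatePulse());
            }
        }
    }

    IEnumerator AnimatePulse()
    {
        float t = 0f;
        while (t < duration)
        {
            t += Time.deltaTime;
            effectMaterial.SetFloat("_PulseDistance", (t / duration) * maxDistance);
            yield return null;
        }
        effectMaterial.SetFloat("_PulseDistance", 0f); // 0 disables the effect again
    }
}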

Unity Implementation

There are a few things to note for implementing this effect inside Unity. First, inside the shader we have access to the projection matrix natively: we can simply use unity_CameraProjection instead of providing a projection matrix from external sources or via a cBuffer. We also have the view matrix natively. However, since the Cg variant Unity uses is not state of the art, as in, it doesn’t have all the features the official nVidia release has (at the time this post was written), we can’t easily invert the view matrix. We COULD write an inverse matrix function inside the shader… that however is extremely tedious, and since we are lazy we can exploit the fact that the C# side of Unity has access to the view matrix as well AND it has an inverse property for 4×4 matrices. So we calculate the inverse view matrix in C# and pass it to the shader manually. Watch out though: the view matrix on the camera object is actually called worldToCameraMatrix.
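The relevant part of the C# side boils down to something like this (a minimal sketch; effectMaterial is assumed to reference the material using the sweep shader, and _InverseViewMatrix matches the name used in the shader snippet above):

// Inside the image effect's OnRenderImage: compute the inverse view matrix
// on the C# side and hand it to the shader before blitting.
void OnRenderImage(RenderTexture src, RenderTexture dst)
{
    Camera cam = GetComponent<Camera>();
    Matrix4x4 inverseView = cam.worldToCameraMatrix.inverse;
    effectMaterial.SetMatrix("_InverseViewMatrix", inverseView);
    // unity_CameraProjection is available inside the shader, but if you prefer
    // passing the projection matrix yourself, this is the place:
    // effectMaterial.SetMatrix("_ProjectionMatrix", cam.projectionMatrix);
    Graphics.Blit(src, dst, effectMaterial);
}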

The full Unity implementation for this effect can, as always, be found in the GitRepo. Leave a comment if you feel the documentation is lacking and, as always, thanks for reading my ramblings!
See you in the next post.

Alexander
