# Practical Edge Preserving Depth of Field

I wanted to take a stab at depth of field, and I had a few goals in mind:

• The bokeh should be a natural part of the algorithm; I didn’t want to use draw indirect to layer a bokeh effect on top.
• The algorithm should be fast enough to use in a 16ms or 33ms production environment.
• It should not have artifacts or halos in the near blur or the far blur; it should look much like reference DOF shots.
• The algorithm should work in HDR space and be compatible with other post effects such as SSR.

I call it Practical Edge Preserving Depth of Field because I feel the name is descriptive of what is going on:

• Practical: It’s easy to implement, understand, and it runs at ~0.5ms on current GPUs
• Edge Preserving: Nothing fancy here; I wanted to make sure the near blur, focal range, and far blur did not incorrectly halo or bleed into each other. For the most part I accomplished this.

## Visuals
Overall I’m happy with the results – you can judge for yourself. I did the majority of profiling on an Nvidia GTX 1080 Ti; the entire DOF time is ~0.45ms at 1080p and ~1.8ms at 4K. I was hoping to keep it under 1ms at 4K but…oh well, maybe I’ll revisit it later.

## Near Blur
In the first screenshot, pay attention to where the blurred foreground leaves bleed into the focal range. Notice that the focal range in this area stays perfectly in focus and does not suffer from unnatural darkening or lightening.

I believe this behavior closely matches these reference pictures I found:

Notice how the rock in the foreground has a subtle blur which bleeds into the focal range.

Similar to the rock, notice how the blurred leaf bleeds into the focal range of the strawberry.

## Far Blur
The next set of screenshots shows off the far bokeh variants you can get by adjusting the kernel size. The kernel sizes for the images are 5, 10, and 20.

These are similar to an indoor restaurant shot I found:

Overall I think it turned out well, but it’s not perfect. I’d like it to be customizable so it can have a more hexagonal shape if desired. I’ll touch on that later in my “Areas for Improvement” section.

Finally, it seems every depth of field technique has an extreme screenshot – which is what you see here; I flew far away from the Bistro:

Very close to this reference shot, I just wish I had more colored lights 🙂

## Technique
The Practical Edge Preserving Depth of Field works in three stages:

1. Circle of Confusion (CoC) / Split – For each texel the near/far CoC is computed from the depth buffer, then two quarter resolution color buffers (near/far) are computed, and finally the far CoC is saved to a full resolution buffer.
2. Blur – A custom bokeh blur is performed on the near and far field, then a two pass Gaussian blur is performed to smooth out the bokeh. I say custom because there is some additional logic to prevent incorrect bleeding and haloing.
3. Composite – The near/far blurs are composited onto our HDR buffer.

## Stage 1: CoC/Split
The first part of the stage is computing the CoC – I compute a floating point value for the near and far plane’s CoC. The second part is saving the color buffer data out to two 1/4 resolution color buffers, one for the near plane and one for the far plane. However, there is a key part here: these color buffers include an alpha channel, and that alpha is set to the respective CoC value. The subsequent blur passes will use this value to decide how much a texel should contribute to the blur. I’ll cover this in detail during the blur description. Finally, the far CoC value is written out to a full resolution R8_UNORM target which will be used during the blur and composite steps.

Here’s the code for that logic:

```hlsl
// write to full resolution coc
g_coc[ d_pixel ] = coc.y;

// write to quarter res color buffers
if ( (d_pixel.x & 0x1) == 0 && (d_pixel.y & 0x1) == 0 )
{
   const float2 uv = (d_pixel + .5) / (float2) dispatch_res;
   const float3 color = g_hdr.SampleLevel( g_sampler, uv, 0 );

   int2 dof_res;
   g_dof.GetDimensions( dof_res.x, dof_res.y );
   float2 dof_pixel = d_pixel * .5;

   // near coc
   g_dof[ dof_pixel ] = float4( color, coc.x );

   // far coc
   g_dof[ dof_pixel + int2( dof_res.x * .5, 1 ) ] = float4( color, coc.y );
}
```
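The snippet above covers the split, but not how `coc` itself is derived. The post doesn’t show that math, so here’s a minimal CPU-side Python sketch of one common approach (a linear ramp away from a focal band). The `focus_start`, `focus_end`, and range parameters are my assumption, not necessarily what the engine uses:

```python
def compute_coc(depth, focus_start, focus_end, near_range, far_range):
    """Hypothetical near/far CoC ramp; a sketch, not the shader's exact math.

    Returns (near_coc, far_coc), each in [0, 1]:
    - near_coc rises as the texel gets closer than focus_start
    - far_coc rises as the texel gets farther than focus_end
    """
    near_coc = min(max((focus_start - depth) / near_range, 0.0), 1.0)
    far_coc = min(max((depth - focus_end) / far_range, 0.0), 1.0)
    return near_coc, far_coc
```

An in-focus depth returns (0, 0), so those texels get alpha 0 in the quarter-res buffers and are excluded from the blurs.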

## Stage 2: Blur
There are two separate blurs, a bokeh blur and a two pass Gaussian blur; however they are handled differently depending on whether they’re blurring the near field or the far field. The goal of the near blur is to correctly bleed the blurred texels into the focal range without incorrect lightening, darkening, haloing, or smeared focal range texels. The goal of the far blur is to not include any texels in the focal range or bleed into the focal range; otherwise there would be unnecessary haloing, blurring, or smearing around objects in focus.

For the near blur, I use a bokeh blur pattern to sample the texels. This pattern was created offline by using sin/cos to generate sampling points at an increasing frequency as their distance to the center increases (this pattern was inspired by Unity’s bokeh). When blurring, I ignore any texels with 0 alpha values; these texels have no CoC and should not be included in the blur. If we included them, it would cause smearing of focal range texels as part of the bokeh.

```hlsl
// only use pixels which should have some blur (alpha > 0)
// this prevents smearing of focal pixels in the foreground blur
float4 blur_color = color[ 0 ];
int valid_count = 1;

for ( int i = 0; i < total_colors; i++ )
{
   if ( color[ i ].a > 0 )
   {
      blur_color += color[ i ];
      valid_count++;
   }
}

blur_color /= valid_count;
blur_color.a = 1;

return blur_color;
```
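The offline pattern generation mentioned earlier can be sketched like this; a Python approximation with hypothetical `rings`/`base_samples` parameters (the actual offline tool and sample counts aren’t shown in the post):

```python
import math

def bokeh_pattern(rings=4, base_samples=6):
    """Hypothetical offline sin/cos sample pattern.

    Points per ring grow with distance from the center, roughly matching
    the post's description of increasing sampling frequency.
    """
    points = [(0.0, 0.0)]  # center sample
    for ring in range(1, rings + 1):
        radius = ring / rings        # normalized distance from center
        count = base_samples * ring  # more samples on outer rings
        for s in range(count):
            angle = 2.0 * math.pi * s / count
            points.append((math.cos(angle) * radius, math.sin(angle) * radius))
    return points
```

The resulting offsets would be scaled by the kernel size (5, 10, or 20 in the screenshots) when sampling.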

After the near bokeh blur, I perform a 2 pass custom Gaussian blur to smooth out the bokeh. I say custom because I use a standard Gaussian blur routine to sample each texel in the near field, but I also compute the average color of all the texels which have an alpha > 0 (anything with a non-zero alpha value should contribute to the blur). Then, when computing the final blur color with the Gaussian weights, I use the color at a texel if its alpha is > 0, or the average color (discussed previously) if that texel shouldn’t be in the blur. This allows each texel to maintain the color energy of the original texel but prevents texels outside of the near blur field from being included in the blur. Technically the sampler will do a bilinear blend, so we will blur a small number of focal texels, but because we also use the alpha as our ‘bleed amount’ over the focal range (see the composite section), there aren’t any apparent artifacts.

```hlsl
// replace 0 alpha colors with color at sample 0
// (prevents bringing in unwanted focal colors)
float3 avg_color = 0;
int i;

for ( i = 0; i < total_colors; i++ )
   avg_color += color[ i ].a > 0 ? color[ i ].rgb : color[ 0 ].rgb;

avg_color = avg_color / total_colors;

float4 blur_color = 0;

for ( i = 0; i < total_colors; i++ )
   blur_color += (color[ i ].a > 0 ? color[ i ] : float4( avg_color, color[ i ].a )) * weights[ i ];

return blur_color;
```

The far blur is very similar to the near blur, with a few notable exceptions:

First, if the CoC is 0 I early out and perform no blurring, whereas the near blur will still perform a blur if any texels sampled should be included in a blur. This is because the near blur should have some bleeding into the focal range whereas the far blur must stay distinct from the focal range (refer back to the reference shots).

Second, for both the bokeh and Gaussian blurs, the far blur weights the texels by their CoC (the alpha value): basically, how much the energy from those texels should “spread” to the texel we’re blurring. This prevents unnecessary bleeding or haloing. You can see this in the two code snippets below.

So, for the bokeh blending:

```hlsl
float4 blur_color = color[ 0 ];

// a little extra weight for our center color
// helps prevent holes in the middle
float valid_count = 2;
blur_color *= valid_count;

for ( int i = 0; i < total_colors; i++ )
{
   blur_color += color[ i ] * color[ i ].a;
   valid_count += color[ i ].a;
}

blur_color /= valid_count;

// this pixel has blur
// mark it as such (with the alpha) so the gauss blur passes
// will make sure to incorporate it
blur_color.a = 1;

return blur_color;
```

And for the Gaussian blending:

```hlsl
// replace colors with alpha != 1 with color at sample 0
// (prevents haloing of unwanted focal colors)
float3 avg_color = 0;
int i;

for ( i = 0; i < total_colors; i++ )
   avg_color += color[ i ].a == 1 ? color[ i ].rgb : color[ 0 ].rgb;

avg_color = avg_color / total_colors;

float4 blur_color = 0;

for ( i = 0; i < total_colors; i++ )
   blur_color += (color[ i ].a == 1 ? color[ i ] : float4( avg_color, color[ i ].a )) * weights[ i ];

// this pixel has blur
// mark it as such (with the alpha) so the gauss blur passes
// will make sure to incorporate it
blur_color.a = 1;

return blur_color;
```

## Stage 3: Composite
Composite is pretty straightforward: first I sample the current HDR texel and the far CoC value at the current texel, then I blend in the far blur texel based on the CoC value.

Finally, I sample the near blur and blend based on its alpha and the far CoC. The alpha allows the near blur to bleed into the focal range (correct behavior), and the CoC allows the far blur to always trump the near blur. This prevents the near-blur-to-focal transition range (e.g. the soft edges on the leaves) from drawing over the far blur, which would cause a weird, brief ‘in focus’ halo at the edges of the near blur when it overlaps a far blur. Unfortunately, this has the negative trade-off of the near blur not bleeding and fading out over the far blur; you can see that in these two plant screenshots. In the first one the leaves correctly blur over the focal range:

But in the second one the far blur is behind the plant and the leaves do not correctly fade into the far blur; they have a more abrupt cutoff. This is because the ‘fading’ part of the leaves also contains focal range texels, and those would cause the weird ‘in focus’ halo:

Here’s the composite:

```hlsl
float coc = g_coc_map[ d_pixel ];
float4 dof_near = g_dof_map.SampleLevel( g_input_sampler, near_uv, 0 );
float3 color = g_hdr[ d_pixel ];

if ( coc > 0 )
{
   float4 far_color = g_dof_map.SampleLevel( g_input_sampler, far_uv, 0 );
   color = lerp( color, far_color.rgb, coc );
}

// fade the more we're going into far blur territory
float alpha = dof_near.a * (1.0 - coc);
color = lerp( color, dof_near.rgb, alpha );

g_hdr[ d_pixel ] = color;
```

## Areas for Improvement
There are some performance and visual improvements I’d like to explore in the future:

• Preblur: There is shimmering during slow movement due to the near and far buffers being quarter resolution (1/2 x 1/2); this can be mostly resolved by slightly softening (i.e. blurring) the color buffer before writing to the near and far buffers. I’ve done this in a separate implementation but haven’t had the time to do it here.
• Performance: The blur texture is twice as wide as the scene, the first half holding the near blur and the second half holding the far blur. In theory the far blur doesn’t need an alpha value so I could try separating the textures which would allow the near blur to be R11G11B10 instead of the current R16G16B16A16.
• Visual: It bugs me that the near blur doesn’t bleed into the far blur (see the leaves in the last screen shot); I’ll probably investigate some sort of smarter near/far logic when splitting the blur planes.
• Visual: A more customizable bokeh blur; I’d like to have the option of a more hexagonal look with hard edges. I might need to do some sort of indirect draw and specify specific bokeh locations.
• Visual: Customize the bokeh kernel size based on the CoC values. I attempted this, but in practice it didn’t look any better and was more expensive; however, it’s worth revisiting in the future.
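For the preblur idea in the first bullet, a ‘slight soften’ could be as simple as a 3x3 tent filter run before the downsample. This Python sketch is my assumption of such a pass, not the author’s separate implementation:

```python
def tent_soften(img):
    """Apply a 3x3 tent (1-2-1 separable) filter to a 2D list of floats,
    clamping samples at the edges. A hypothetical 'slight soften' before
    writing the quarter-res near/far buffers."""
    h, w = len(img), len(img[0])
    weights = [1, 2, 1]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, wsum = 0.0, 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    sy = min(max(y + dy, 0), h - 1)  # clamp to edge
                    sx = min(max(x + dx, 0), w - 1)
                    wgt = weights[dy + 1] * weights[dx + 1]
                    total += img[sy][sx] * wgt
                    wsum += wgt
            out[y][x] = total / wsum
    return out
```

In a shader this would be a cheap extra read pattern in the split pass rather than a separate dispatch; the point is just to knock the highest frequencies down before the 2x2 decimation so they don’t shimmer under motion.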

## Code
If you’re interested in checking out the results, you can find the code+data+binaries here:
https://github.com/mcferront/anttrap-engine/pep_dof