12 The Depth Buffer
12.010 How do I make depth buffering work?
Your application needs to do at least the following to get depth buffering to work:

- Ask for a depth buffer when you create your window.
- Place a call to glEnable(GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.
- Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.
- Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT.
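Taken together, a minimal sketch of those four steps might look like the following, assuming GLUT for window creation (any toolkit that lets you request a depth buffer at window creation works the same way):

```c
/* Minimal sketch of the checklist above; requires a GL/GLUT
   development environment to build and run. */
#include <GL/glut.h>

static void display(void)
{
    /* Step 4: clear the depth buffer along with the color buffer. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the scene ... */
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* Step 1: ask for a depth buffer when creating the window. */
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("depth buffering");

    /* Step 2: enable depth testing once the context is current. */
    glEnable(GL_DEPTH_TEST);

    /* Step 3: positive zNear and zFar, with a modest far/near ratio. */
    glMatrixMode(GL_PROJECTION);
    gluPerspective(45.0, 1.0, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```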
There are a number of OpenGL example programs available on the Web that use depth buffering. If you're having trouble getting depth buffering to work correctly, you might benefit from looking at an example program to see what it does differently. This FAQ contains links to several web sites that have example OpenGL code.
12.020 Depth buffering doesn't work in my
perspective rendering. What's going on?
Make sure the zNear and zFar clipping
planes are specified correctly in your calls to glFrustum()
or gluPerspective().
A mistake many programmers make is to specify a zNear clipping plane value of 0.0 or a negative value, neither of which is allowed. Both the zNear and zFar clipping planes must be positive (not zero or negative) values that represent distances in front of the eye.
Specifying a zNear clipping plane value of 0.0 to gluPerspective() won't generate an OpenGL error, but it might cause depth buffering to act as if it's disabled. A negative zNear or zFar clipping plane value produces undesirable results.
A zNear or zFar clipping plane value of zero or a negative value, when passed to glFrustum(), generates an error that you can retrieve by calling glGetError(); the function then acts as a no-op.
12.030 How do I write a previously stored depth
image to the depth buffer?
Use the glDrawPixels() command, with the
format parameter set to GL_DEPTH_COMPONENT. You may want to
mask off the color buffer when you do this, with a call to glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE).
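As a sketch, assuming the saved depth image is an array of GLfloat values in [0, 1] and that the lower-left corner is the desired destination (the function name restore_depth_image is made up for illustration):

```c
/* Requires a current GL context; not a standalone program. */
void restore_depth_image(GLsizei width, GLsizei height, const GLfloat *depth)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* mask off color writes */
    glRasterPos2i(0, 0);                                  /* destination of the image */
    glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      /* restore color writes */
}
```

Note that the incoming depth values are still subject to the depth test; if you want them written unconditionally, set glDepthFunc(GL_ALWAYS) first and restore your usual depth function afterward.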
12.040 Depth buffering seems to work, but
polygons seem to bleed through polygons that are in front of them.
What's going on?
You may have configured your zNear and
zFar clipping planes in a way that severely limits
your depth buffer precision. Generally, this is caused by a zNear
clipping plane value that's too close to 0.0. As the zNear
clipping plane is set increasingly closer to 0.0, the
effective precision of the depth buffer decreases
dramatically. Moving the zFar clipping plane further from the eye also reduces depth buffer precision, but the effect is far less dramatic than moving the zNear clipping plane closer to 0.0.
The OpenGL Reference Manual description of glFrustum() relates depth precision to the zNear and zFar clipping planes by noting that roughly log2(zFar/zNear) bits of precision are lost. Clearly, as zNear approaches zero, this expression approaches infinity.
While the blue book description is good at
pointing out the relationship, it's somewhat inaccurate. As
the ratio (zFar/zNear) increases, less precision is
available near the back of the depth buffer and more
precision is available close to the front of the depth buffer.
So primitives are more likely to interact in Z if they are
further from the viewer.
It's possible that you simply don't have
enough precision in your depth buffer to render your scene.
See the last question
in this section for more info.
It's also possible that you are drawing
coplanar primitives. Roundoff errors or differences in
rasterization typically create "Z fighting" for
coplanar primitives. Here are some options to assist you
when rendering coplanar primitives.
12.050 Why is my depth buffer precision so poor?
The depth buffer precision in eye
coordinates is strongly affected by the ratio of zFar to
zNear, the zFar clipping plane, and how far
an object is from the zNear clipping plane.
You need to do whatever you can to push the
zNear clipping plane out and pull the zFar plane
in as much as possible.
To be more specific, consider the transformation of depth from eye coordinates (x_e, y_e, z_e, w_e) to window coordinates (x_w, y_w, z_w) with a perspective projection matrix specified by

    glFrustum(l, r, b, t, n, f);

and assume the default viewport transform. The clip coordinates z_c and w_c are:

    z_c = -z_e * (f+n)/(f-n) - w_e * 2*f*n/(f-n)
    w_c = -z_e

Why the negations? OpenGL wants to present to the programmer a right-handed coordinate system before projection and a left-handed coordinate system after projection.

The NDC coordinate is:

    z_ndc = z_c / w_c = [ -z_e * (f+n)/(f-n) - w_e * 2*f*n/(f-n) ] / -z_e
          = (f+n)/(f-n) + (w_e / z_e) * 2*f*n/(f-n)

The viewport transformation scales and offsets by the depth range (assume it to be [0, 1]) and then scales by s = 2^b - 1, where b is the bit depth of the depth buffer:

    z_w = s * [ (w_e / z_e) * f*n/(f-n) + 0.5 * (f+n)/(f-n) + 0.5 ]

Let's rearrange this equation to express z_e / w_e as a function of z_w:

    z_e / w_e = f*n/(f-n) / ((z_w / s) - 0.5 * (f+n)/(f-n) - 0.5)
              = f * n / ((z_w / s) * (f-n) - 0.5 * (f+n) - 0.5 * (f-n))
              = f * n / ((z_w / s) * (f-n) - f)                        [*]

Now let's look at two points, the zNear clipping plane and the zFar clipping plane:

    z_w = 0  =>  z_e / w_e = f * n / (-f) = -n
    z_w = s  =>  z_e / w_e = f * n / ((f-n) - f) = -f

In a fixed-point depth buffer, z_w is quantized to integers. The next representable depth buffer values in from the clip planes are 1 and s-1:

    z_w = 1    =>  z_e / w_e = f * n / ((1/s) * (f-n) - f)
    z_w = s-1  =>  z_e / w_e = f * n / (((s-1)/s) * (f-n) - f)

Now let's plug in some numbers: for example, n = 0.01, f = 1000, and s = 65535 (i.e., a 16-bit depth buffer):

    z_w = 1    =>  z_e / w_e = -0.01000015
    z_w = s-1  =>  z_e / w_e = -395.90054

Think about this last line. Everything at eye coordinate depths from -395.9 to -1000 (i.e., distances 395.9 to 1000 in front of the eye) has to map into either 65534 or 65535 in the depth buffer. Almost two thirds of the distance between the zNear and zFar clipping planes will have one of two depth buffer values!

To further analyze the depth buffer resolution, let's take the derivative of [*] with respect to z_w:

    d (z_e / w_e) / d z_w = -f * n * (f-n) * (1/s) / ((z_w / s) * (f-n) - f)^2

Now evaluate it at z_w = s:

    d (z_e / w_e) / d z_w = -f * (f-n) * (1/s) / n
                          = -f * (f/n - 1) / s                         [**]
If you want your depth buffer to be useful near the zFar clipping plane, you need to keep this value smaller than the size of your objects in eye space (for most practical uses, world space).
12.060 How do I turn off the zNear
clipping plane?
See this
question in the Clipping section.
12.070 Why is there more precision at the front
of the depth buffer?
The projection matrix transforms vertices into clip coordinates; the X, Y, and Z vertex values are then divided by their clip coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.
As in reality, motion toward or away from
the eye has a less profound effect for objects that are
already in the distance. For example, if you move six inches
closer to the computer screen in front of your face, its apparent size should increase quite dramatically. On the
other hand, if the computer screen were already 20 feet away
from you, moving six inches closer would have little
noticeable impact on its apparent size. The perspective
divide takes this into account.
As part of the perspective divide, Z is
also divided by W with the same results. For objects that are
already close to the back of the view volume, a change in
distance of one coordinate unit has less impact on Z/W than
if the object is near the front of the view volume. To put it
another way, an object coordinate Z unit occupies a larger
slice of NDCdepth space close to the front of the view
volume than it does near the back of the view volume.
In summary, the perspective divide, by its
nature, causes more Z precision close to the front of the
view volume than near the back.
A previous question in this
section contains related information.
12.080 There is no way that a standardsized
depth buffer will have enough precision for my astronomically
large scene. What are my options?
The typical approach is to use a multipass
technique. The application might divide the geometry database
into regions that don't interfere with each other in Z. The
geometry in each region is then rendered, starting at the
furthest region, with a clear of the depth buffer before each
region is rendered. This way the precision of the entire
depth buffer is made available to each region.
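A sketch of that render loop, assuming the application provides the hypothetical helpers set_projection_for_region() (which sets a tight zNear/zFar pair for its region) and draw_region(), with region 0 nearest the eye:

```c
/* Requires a current GL context; helper functions are hypothetical. */
void draw_scene_multipass(int num_regions)
{
    glClear(GL_COLOR_BUFFER_BIT);                /* clear color once */
    for (int i = num_regions - 1; i >= 0; --i) { /* render furthest region first */
        glClear(GL_DEPTH_BUFFER_BIT);            /* full depth precision per region */
        set_projection_for_region(i);            /* tight zNear/zFar for region i */
        draw_region(i);
    }
}
```

Because each region gets a fresh depth buffer and its own clipping planes, nearer regions simply overwrite farther ones, which is what makes the far-to-near ordering essential.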