Part I: Basics of Computer Graphics
Chapter 4: Rendering Polygonal Objects
(Read Chapter 1 of Advanced Animation and Rendering Techniques)
Polygonal Objects
Most renderers work with objects that are represented by a set of polygons.
Advantages:
- Geometric information can be stored at the vertices.
- Other geometric representations (e.g., spline surfaces) can be converted to polygons by tessellation.
- Fast shading is available in graphics hardware.
Disadvantages:
- Texture mapping 2D images onto an arbitrary polygonal object is difficult.
- Converting other geometric representations, such as bicubic surface patches, to polygons is a sampling process. Problem: aliasing upon closer examination.
- Many polygons are needed to represent complex objects; e.g., a human skull requires 500,000+ triangles.
Rendering Steps
1. Polygons are extracted from the database and transformed to world space
2. The 3D scene is then transformed into eye/camera space
3. Visibility test: backface culling
4. Unculled polygons are clipped against the 3D viewing frustum
5. Clipped polygons are projected onto a view plane or image plane
6. Hidden surface removal (Z-buffering): projected polygons are shaded by an incremental shading algorithm, which consists of:
   - rasterization (scan conversion)
   - hidden surface calculation (depth buffering or depth sorting)
   - shading calculation (what is the color of the pixel)
Note: Steps 1 to 5 are standard for most renderers, while there are many variations for Step 6, e.g. ray tracing, radiosity solvers.
From Database to World Space
Information stored:
- List of polygon vertices (in object space)
- Connectivity of the vertices
- Other attributes at each vertex:
  - vertex normal (in object space)
  - color
  - texture coordinates (in texture space)
  - any other attributes needed for interpolation
It is easy to transform the vertices, but how about the vertex normals? In general, while vertices transform as v' = M v, normals do not: n' ≠ M n.
For example, under a nonuniform scale the naively transformed normal is no longer perpendicular to the transformed surface.
One solution: recalculate the vertex normals after all vertices have been transformed. Time consuming.
Transforming Vertex Normal
A tangent vector is the difference between two points on the surface:

t = p1 - p0

Property of the tangent vector: it is perpendicular to the surface normal, i.e. n^T t = 0.

Points transform as p' = M p, where M is a 4x4 transformation whose last row is (0, 0, 0, 1):

M = | M00 M01 M02 M03 |
    | M10 M11 M12 M13 |
    | M20 M21 M22 M23 |
    |  0   0   0   1  |

Try transforming two points on the surface:

t' = p1' - p0' = M p1 - M p0 = M0 t

(the translation column cancels in the difference), where M0 is the upper-left 3x3 submatrix of M:

M0 = | M00 M01 M02 |
     | M10 M11 M12 |
     | M20 M21 M22 |

Define the transformed normal as

n' = (M0^-1)^T n

Proof that n' is perpendicular to the transformed tangent:

n'^T t' = ((M0^-1)^T n)^T (M0 t) = n^T M0^-1 M0 t = n^T t = 0
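The derivation above can be sketched in Python. This is an illustrative snippet, not part of the original notes; the helper names and the matrix-as-nested-lists representation are our own choices.

```python
def mat3_mul_vec(m, v):
    # Row-major 3x3 matrix times column vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat3_transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def mat3_inverse(m):
    # Cofactor (adjugate) inverse; the cyclic index trick builds the signs in.
    det = (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    inv = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            c = (m[(i+1) % 3][(j+1) % 3] * m[(i+2) % 3][(j+2) % 3]
               - m[(i+1) % 3][(j+2) % 3] * m[(i+2) % 3][(j+1) % 3])
            inv[j][i] = c / det  # transpose of the cofactor matrix
    return inv

def transform_normal(m0, n):
    # n' = (M0^-1)^T n, so n'.t' = 0 is preserved for t' = M0 t.
    return mat3_mul_vec(mat3_transpose(mat3_inverse(m0)), n)

# Nonuniform scale (2, 1, 1): the naively transformed normal M0 n would not
# stay perpendicular to the transformed tangent, but (M0^-1)^T n does.
m0 = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
n_prime = transform_normal(m0, [1.0, 1.0, 0.0])  # normal of the plane x + y = c
t_prime = mat3_mul_vec(m0, [1.0, -1.0, 0.0])     # a transformed tangent of that plane
```

With the naive transform, M0 n = (2, 1, 0) and its dot product with t' = (2, -1, 0) is 3; with the inverse-transpose, n' = (0.5, 1, 0) and the dot product is exactly 0.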
The inverse of an arbitrary 3x3 matrix may be obtained from the cross products of pairs of its columns:

M0^-1 = 1 / (m1 . (m2 x m3)) | (m2 x m3)^T |
                             | (m3 x m1)^T |
                             | (m1 x m2)^T |

where m1, m2, m3 are the columns of M0:

m1 = (M00, M10, M20)^T
m2 = (M01, M11, M21)^T
m3 = (M02, M12, M22)^T
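A minimal Python sketch of this cross-product formula (illustrative only; the function names are ours):

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat3_inverse_by_columns(m):
    # Extract the columns m1, m2, m3 of the (row-major) matrix.
    m1, m2, m3 = ([m[r][c] for r in range(3)] for c in range(3))
    det = dot(m1, cross(m2, m3))  # scalar triple product m1 . (m2 x m3)
    # Rows of the inverse are the scaled cross products of column pairs.
    return [[x / det for x in cross(m2, m3)],
            [x / det for x in cross(m3, m1)],
            [x / det for x in cross(m1, m2)]]

# A shear matrix whose inverse is known in closed form:
shear = [[1.0, 2.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
inv = mat3_inverse_by_columns(shear)
```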
Backface elimination or culling
Eye space is the most convenient space in which to 'cull' polygons.
Remove all polygons that face away from the viewer.
For a scene that consists of only a single closed object, this culling solves the hidden surface problem completely.
In most cases, it is a preprocess to eliminate invisible polygons.
Visibility test: Np . V > 0, where Np is the polygon normal and V is the line-of-sight (viewing) vector.
How to define the polygon normal? One common choice is the cross product of two edge vectors of the polygon.
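The test above can be sketched in Python. This is an illustrative snippet with assumptions made explicit: counter-clockwise vertex order as seen from outside the object, the eye at the origin of eye space, and +z pointing into the scene.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def polygon_normal(v0, v1, v2):
    # Cross product of two edge vectors; assumes counter-clockwise vertex
    # order as seen from outside the object.
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    return cross(e1, e2)

def is_backfacing(v0, v1, v2):
    # In eye space the eye sits at the origin, so the line-of-sight vector V
    # to the polygon is just a vertex position: cull when Np . V > 0.
    return dot(polygon_normal(v0, v1, v2), v0) > 0

tri = [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.0, 5.0)]
```

With this winding, `tri` has normal (0, 0, 1), which points away from the eye, so it is culled; reversing the vertex order flips the normal and keeps the polygon.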
To Screen Space
Perspective projection describes how light rays reach our eye (or camera).
Basic principle of perspective projection (using similar triangles):

xs = D xe / ze
ys = D ye / ze

Screen space is defined to be within a closed volume - the viewing frustum (the volume of space which is to be rendered).
Why don’t we simply drop the z coordinate as we project everything onto the screen at z=D?
We need the depth values (the distance to the eye) to perform hidden surface calculation
Consistent with the transformation equations for xs and ys given above, it would be nice to map ze to zs in the following form:
zs = A + B / ze , A, B are constants
Constraints:
1. B < 0, so that when ze increases, zs also increases; i.e., if one point is farther than another in eye space (it has larger ze), it also has a larger z value in screen space. Hence, hidden surface removal can be done correctly.
2. Normalize the range of zs values so that ze ∈ [D, F] maps into the range zs ∈ [0, 1].
Full perspective transformation:
xs = D xe / (h ze)
ys = D ye / (h ze)
zs = F (1 - D / ze) / (F - D)

Check:
when ze = D then zs = 0;  ze = F then zs = 1
when xe = -h ze / D then xs = -1;  xe = h ze / D then xs = 1
when ye = -h ze / D then ys = -1;  ye = h ze / D then ys = 1
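These checks can be verified with a short Python sketch (not part of the original notes; parameter names follow the slides: D is the view-plane distance, F the far-plane distance, h the half-width of the viewing window at z = D):

```python
def eye_to_screen(xe, ye, ze, D, F, h):
    # Full perspective transformation from eye space to screen space.
    xs = D * xe / (h * ze)
    ys = D * ye / (h * ze)
    zs = F * (1.0 - D / ze) / (F - D)
    return xs, ys, zs

D, F, h = 1.0, 10.0, 1.0
near = eye_to_screen(0.0, 0.0, D, D, F, h)   # ze = D: zs should be 0
far  = eye_to_screen(10.0, 0.0, F, D, F, h)  # ze = F: zs should be 1; xe = h ze / D: xs should be 1
```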
Perspective projection is non-linear, so how can it be expressed in matrix form?
Separate the transformation into 2 steps:
1. A linear step:
   x = xe
   y = ye
   z = h F ze / (D (F - D)) - h F / (F - D)
   w = h ze / D
2. A non-linear perspective division:
   xs = x / w, i.e. xs = D xe / (h ze)
   ys = y / w, i.e. ys = D ye / (h ze)
   zs = z / w, i.e. zs = F (1 - D / ze) / (F - D)
One extra coordinate w is added: (x, y, z, w) are homogeneous coordinates.
That is also why we need the fourth row in the transformation matrix. We call this a homogeneous transformation.
In matrix form, (x, y, z, w)^T = P (xe, ye, ze, 1)^T, where

P = | 1   0        0               0          |
    | 0   1        0               0          |
    | 0   0   hF / (D(F-D))   -hF / (F-D)     |
    | 0   0      h / D             0          |
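The two-step transform (matrix, then division by w) can be sketched in Python; this is an illustration, not part of the original notes, and it should reproduce the xs, ys, zs formulas above.

```python
def perspective_matrix(D, F, h):
    # The 4x4 homogeneous matrix P (row-major).
    return [[1.0, 0.0, 0.0,               0.0],
            [0.0, 1.0, 0.0,               0.0],
            [0.0, 0.0, h*F / (D*(F - D)), -h*F / (F - D)],
            [0.0, 0.0, h / D,             0.0]]

def project(m, p):
    # Step 1: linear homogeneous transform of (xe, ye, ze, 1).
    ph = [p[0], p[1], p[2], 1.0]
    x, y, z, w = (sum(m[i][j] * ph[j] for j in range(4)) for i in range(4))
    # Step 2: non-linear perspective division.
    return x / w, y / w, z / w

P = perspective_matrix(1.0, 10.0, 1.0)
xs, ys, zs = project(P, (10.0, 0.0, 10.0))  # a far-plane corner point
```

The far-plane corner should land at xs = 1, ys = 0, zs = 1, matching the direct formulas.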
Now all transformation in the rendering pipeline can be expressed as 4 x 4 matrices.
Interpolating along a line in the eye space is not the same as interpolating this line in the screen space.
As ze approaches the far clipping plane, zs approaches 1 more rapidly. Objects in screen space thus get pushed and distorted towards the back of the viewing frustum.
Why is screen space suited to perform the hidden surface calculation?
Hidden surface calculation need only be performed on those points that have the same xs, ys coordinates. Just a simple comparison between zs values to determine which point is in front.
Clipping
Why clipping?
- The view point is an arbitrary point in world space; we don't want to handle objects that do not contribute to the final image.
- The screen space transformation is not well-defined outside the viewing frustum, e.g. when ze = 0.
Three possible cases in a clipping process:
- The object lies completely outside the viewing frustum: discard it!
- The object lies completely inside: transform and render it.
- The object intersects the viewing frustum: clip, then transform the inside portion and render.
The clipping operation must be performed on the homogeneous coordinates before the perspective division:
-w <= x <= w,  -w <= y <= w,  0 <= z <= w
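A minimal sketch of the homogeneous inside test and the three-way polygon classification (illustrative only; a real clipper such as Sutherland-Hodgman would also compute the intersection geometry, which is omitted here):

```python
def trivially_inside(x, y, z, w):
    # A homogeneous point survives clipping iff
    # -w <= x <= w, -w <= y <= w, 0 <= z <= w.
    return -w <= x <= w and -w <= y <= w and 0.0 <= z <= w

def classify(vertices):
    # Per-polygon decision based on per-vertex tests.
    flags = [trivially_inside(*v) for v in vertices]
    if all(flags):
        return "inside"
    if not any(flags):
        # Conservative: a polygon with all vertices outside can still cut
        # across a corner of the frustum; real clippers use outcodes here.
        return "outside"
    return "clip"
```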
Pixel-Level Processes
- Rasterization
- Hidden surface removal (assume z-buffering)
- Shading calculation
All three processes can be viewed as 2D linear (bilinear) interpolation problems.
Rasterization: interpolation between vertices to find the x-coordinates that define the limits of a span.
Hidden surface removal: interpolating screen space z-values to obtain a depth value for each pixel from the given vertices' depth values.
Shading: interpolating from vertex intensities to find an intensity for each pixel.
Rasterization
Overall structure of the rendering process:
for each polygon
    transform each vertex to screen space
    for each scanline within the polygon
        find the x-span by interpolation and rasterize the span
        for each pixel within the span
            perform hidden surface removal
            shade it
Hidden Surface Removal
The practical solution and de facto standard in the graphics community: Z-buffering.
Depth-buffering:
1) Initialize the Z-buffer to the maximum depth value.
   Initialize the frame buffer to the background color.
2) For each polygon
       For each point (x, y) on that polygon:
       a) calculate the depth value z by interpolation
       b) if z is less than the current z-buffer value:
              store z in the z-buffer
              calculate the pixel color and store it in the frame buffer
3) Display the frame buffer
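The algorithm above can be sketched in a few lines of Python over a tiny image (illustrative only; real renderers feed `plot` from the rasterizer's interpolated fragments rather than hand-written calls):

```python
WIDTH, HEIGHT = 4, 4
MAX_DEPTH = 1.0

zbuf  = [[MAX_DEPTH] * WIDTH for _ in range(HEIGHT)]  # step 1: init to max depth
frame = [["bg"] * WIDTH for _ in range(HEIGHT)]       #         and background color

def plot(x, y, z, color):
    # Steps 2a/2b: keep the fragment only if it is nearer than the stored one.
    if z < zbuf[y][x]:
        zbuf[y][x] = z
        frame[y][x] = color

plot(1, 1, 0.8, "red")    # far fragment drawn first
plot(1, 1, 0.3, "blue")   # nearer fragment overwrites it
plot(1, 1, 0.9, "green")  # farther fragment is rejected
```

Note that the result is independent of drawing order, which is exactly why z-buffering became the de facto standard.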
Shading
Interpolative shading
To recover (approximately) the visual appearance of the curved surface which is now represented by flat polygons.
Assumptions:
1) An approximate normal to the original smooth surface is given or can be computed at each vertex by averaging the normals of the polygons sharing that vertex, e.g. for a vertex shared by four polygons: Np = (N1 + N2 + N3 + N4) / 4
2) The shading of a particular pixel can be obtained by a bilinear interpolation of the appropriate quantities from adjacent vertices.
Gouraud Shading
Calculate the intensity at each vertex using a local reflection model.
The intensities of interior pixels are determined by linearly interpolating the vertex intensities.
The interpolation equations are:
Ia = [I1 (ys - y2) + I2 (y1 - ys)] / (y1 - y2)
Ib = [I1 (ys - y3) + I3 (y1 - ys)] / (y1 - y3)
Is = [Ia (xb - xs) + Ib (xs - xa)] / (xb - xa)
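The interpolation equations translate directly into code; a minimal sketch (illustrative, with our own function names):

```python
def edge_intensity(i1, i2, y1, y2, ys):
    # Ia = [I1 (ys - y2) + I2 (y1 - ys)] / (y1 - y2)
    return (i1 * (ys - y2) + i2 * (y1 - ys)) / (y1 - y2)

def span_intensity(ia, ib, xa, xb, xs):
    # Is = [Ia (xb - xs) + Ib (xs - xa)] / (xb - xa)
    return (ia * (xb - xs) + ib * (xs - xa)) / (xb - xa)

# Vertex 1 at y = 10 with intensity 1.0, vertex 2 at y = 0 with intensity 0.0:
ia = edge_intensity(1.0, 0.0, 10.0, 0.0, 5.0)  # halfway down the edge
```

At ys = y1 the edge formula returns I1, at ys = y2 it returns I2, and it blends linearly in between; the span formula does the same along x.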
Flaws of Gouraud shading:
1) Highlight anomalies
   If a highlight falls in the interior of a polygon, Gouraud shading may miss it, because no highlighted intensities are recorded/calculated at the vertices.
2) Mach banding
   The human visual system emphasizes intensity changes occurring at a boundary, which creates a banding effect. The bands can be obvious if insufficient polygons are used to model areas of high curvature.
Phong Shading
Instead of interpolating the vertex intensities, Phong shading interpolates the vertex normal vectors.
This solves the interior highlight problem.
The interpolation equations are:
Na = [N1 (ys - y2) + N2 (y1 - ys)] / (y1 - y2)
Nb = [N1 (ys - y3) + N3 (y1 - ys)] / (y1 - y3)
Ns = [Na (xb - xs) + Nb (xs - xa)] / (xb - xa)
(The interpolated normal must be renormalized before use.)
Since the illumination calculation has to be invoked at each interior surface point, Phong shading is more expensive than Gouraud shading.
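The per-pixel work can be sketched in Python. This is illustrative: a simple Lambertian diffuse term stands in for the full local reflection model, and the function names are our own.

```python
import math

def interp_normal(na, nb, t):
    # Component-wise linear interpolation of two vertex normals, t in [0, 1].
    return [a * (1.0 - t) + b * t for a, b in zip(na, nb)]

def normalize(n):
    # Interpolated normals shrink below unit length and must be renormalized.
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def lambert(n, light, kd=1.0):
    # Stand-in for the local reflection model: kd * max(0, N . L).
    return kd * max(0.0, sum(a * b for a, b in zip(normalize(n), light)))

ns = interp_normal([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], 0.5)  # midpoint normal
intensity = lambert(ns, [0.0, 0.0, 1.0])
```

Because `lambert` runs once per pixel here (versus once per vertex for Gouraud), the extra cost of Phong shading is visible directly in the structure of the code.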
Defects in Phong Shading
1) Interpolation inaccuracies:
   Interpolation done in screen space is not equivalent to interpolation in world space; remember that perspective projection is non-linear.
   Hence the interpolation is orientation dependent.
2) Vertex normal inaccuracies:
   The correct interior normal vectors (red) cannot be found by linear interpolation of the vertex normal vectors (black).
   There is no guarantee that the intensity transitions smoothly across such an edge.
   The interpolated normal vectors on the left patch might not be equivalent to the normal vector at vertex P, causing a discontinuity in the calculated intensities.