Post on 18-Dec-2015
transcript
CS 445 / 645: Introduction to Computer Graphics
Lecture 12: Camera Models
Paul Debevec
Top Gun Speaker
Wednesday, October 9th at 3:30 – OLS 011
http://www.debevec.org
MIT Technology Review’s “100 Young Innovators”
Rendering with Natural Light
Fiat Lux
Light Stage
Moving the Camera or the World?
Two equivalent operations
• Initial OpenGL camera position is at the origin, looking along -Z
• Now create a unit square parallel to the camera at z = -10
• If we put a z-translation matrix of 3 on the stack, what happens?
  – Camera moves to z = -3
    Note: OpenGL models viewing in left-hand coordinates
  – Camera stays put, but the square moves to z = -7
• The image at the camera is the same with both
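The equivalence can be checked with plain matrix arithmetic; a minimal NumPy sketch (the point coordinates and the hand-built 4×4 translation are illustrative, not OpenGL calls):

```python
import numpy as np

# A corner of the unit square, in homogeneous coordinates (square at z = -10).
p = np.array([0.5, 0.5, -10.0, 1.0])

# The 4x4 matrix that a z-translation of 3 pushes on the stack.
T = np.eye(4)
T[2, 3] = 3.0

# Reading 1: the camera stays put and the square moves to z = -7.
p_world = T @ p
assert p_world[2] == -7.0

# Reading 2: the square stays put and the camera moves to z = -3.
cam_moved = np.array([0.0, 0.0, -3.0])

# Either way the square sits 7 units in front of the camera,
# so the image formed at the camera is identical.
assert p_world[2] - 0.0 == p[2] - cam_moved[2] == -7.0
```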
A 3D Scene
Notice the presence of the camera, the projection plane, and the world coordinate axes
Viewing transformations define how to acquire the image on the projection plane
Viewing Transformations
Goal: To create a camera-centered view
• Camera is at the origin
• Camera is looking along the negative z-axis
• Camera’s ‘up’ is aligned with the y-axis (what does this mean?)
2 Basic Steps
Step 1: Align the world’s coordinate frame with the camera’s by rotation
2 Basic Steps
Step 2: Translate to align the world and camera origins
Creating Camera Coordinate Space
Specify a point where the camera is located in world space: the eye point (View Reference Point = VRP)
Specify a point in world space that we wish to become the center of view: the lookat point
Specify a vector in world space that we wish to point up in the camera image: the up vector (VUP)
This gives intuitive camera movement
Constructing Viewing Transformation, V
Create a vector from the eye-point to the lookat-point
Normalize the vector
The desired rotation matrix should map this vector to [0, 0, -1]^T (Why?)
Constructing Viewing Transformation, V
Construct another important vector from the cross product of the lookat-vector and the vup-vector
This vector, when normalized, should align with [1, 0, 0]^T (Why?)
Constructing Viewing Transformation, V
One more vector to define…
This vector, when normalized, should align with [0, 1, 0]^T
Now let’s compose the results
Composing Matrices to Form V
We know the three world axis vectors (x, y, z)
We know the three camera axis vectors (u, v, n)
The viewing transformation, V, must convert from the world to the camera coordinate system
Composing Matrices to Form V
Remember:
• Each camera axis vector is unit length
• Each camera axis vector is perpendicular to the others
The camera matrix is orthogonal and normalized
• Orthonormal
Therefore, M^-1 = M^T
Composing Matrices to Form V
Therefore, the rotation component of the viewing transformation is just the transpose of the computed vectors
Composing Matrices to Form V
Translation component too
Multiply it through
Final Viewing Transformation, V
To transform vertices, use this matrix:
And you get this:
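The whole construction can be collected into a small gluLookAt-style routine; a hedged NumPy sketch (the function name is mine, and it follows the slides’ u, v, n camera axes: rotation rows are the transposed axes, then the eye is translated to the origin):

```python
import numpy as np

def look_at(eye, center, up):
    """Build viewing matrix V from eye point, lookat point, and up vector."""
    eye, center, up = (np.asarray(a, float) for a in (eye, center, up))
    n = eye - center                        # camera looks along -n
    n /= np.linalg.norm(n)
    u = np.cross(up, n)                     # camera x-axis
    u /= np.linalg.norm(u)
    v = np.cross(n, u)                      # camera y-axis (true 'up')
    V = np.eye(4)
    V[0, :3], V[1, :3], V[2, :3] = u, v, n  # rotation rows = transposed axes
    V[:3, 3] = -V[:3, :3] @ eye             # then translate eye to origin
    return V

V = look_at([0, 0, 5], [0, 0, 0], [0, 1, 0])
# The lookat point lands on the -z axis, 5 units in front of the camera:
assert np.allclose(V @ [0, 0, 0, 1], [0, 0, -5, 1])
# A point one unit 'above' the eye maps onto the +y axis:
assert np.allclose(V @ [0, 1, 5, 1], [0, 1, 0, 1])
```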
Canonical View Volume
A standardized viewing volume representation
Parallel (Orthogonal)        Perspective
[Figure: both view volumes drawn in the x-or-y vs. -z plane, bounded by front and back planes; the parallel volume’s sides lie at x or y = ±1, the perspective volume’s sides satisfy x or y = ±z]
Why do we care?
The canonical view volume permits standardization
• Clipping
  – Easier to determine if an arbitrary point is enclosed in the volume
  – Consider clipping to six arbitrary planes of a viewing volume versus the canonical view volume
• Rendering
  – Projection and rasterization algorithms can be reused
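To see why clipping gets easier, compare the point-inside test for the canonical cube (six comparisons against ±1) with the general six-plane test; a small sketch (the (normal, offset) plane representation is my own choice for illustration):

```python
import numpy as np

def inside_canonical(p):
    """Canonical-cube test: six trivial comparisons against +/-1."""
    return all(-1.0 <= c <= 1.0 for c in p)

def inside_planes(p, planes):
    """General test: evaluate six arbitrary plane equations n.p + d >= 0."""
    return all(np.dot(n, p) + d >= 0.0 for n, d in planes)

# The canonical cube expressed as six (normal, offset) pairs:
cube = [(np.array(n), 1.0) for n in
        [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]

# Both tests agree, but the canonical one needs no dot products:
assert inside_canonical([0.3, -0.9, 0.5]) == inside_planes([0.3, -0.9, 0.5], cube) == True
assert inside_canonical([1.5, 0.0, 0.0]) == inside_planes([1.5, 0.0, 0.0], cube) == False
```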
Projection Normalization
One additional step of standardization
• Convert the perspective view volume to an orthogonal view volume to further standardize the camera representation
  – Convert all projections into orthogonal projections by distorting points in three-space (actually four-space, because we include the homogeneous coordinate w)
• Distort objects using a transformation matrix
Projection Normalization
Building a transformation matrix
• How do we build a matrix that
  – Warps any view volume to the canonical orthographic view volume
  – Permits rendering with an orthographic camera
All scenes are then rendered with an orthographic camera
Projection Normalization - Ortho
Normalizing Orthographic Cameras
• Not all orthographic cameras define viewing volumes of the right size and location (the canonical view volume)
• The transformation must map:
Projection Normalization - Ortho
Two steps
• Translate the center to (0, 0, 0)
  – Move x by –(x_max + x_min) / 2
• Scale the volume to a cube with sides = 2
  – Scale x by 2 / (x_max – x_min)
• Compose these transformation matrices
  – The resulting matrix maps the orthogonal volume to the canonical one
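The two steps compose into a single matrix; a NumPy sketch (the particular volume bounds below are made up for the example):

```python
import numpy as np

def ortho_normalize(xmin, xmax, ymin, ymax, zmin, zmax):
    """Translate the volume's center to the origin, then scale each side to 2."""
    T = np.eye(4)
    T[:3, 3] = [-(xmax + xmin) / 2, -(ymax + ymin) / 2, -(zmax + zmin) / 2]
    S = np.diag([2 / (xmax - xmin), 2 / (ymax - ymin), 2 / (zmax - zmin), 1.0])
    return S @ T   # compose: translate first, then scale

M = ortho_normalize(0, 4, -1, 1, -10, -2)
# Opposite corners of the volume map to corners of the canonical cube,
# and its center maps to the origin:
assert np.allclose(M @ [0, -1, -10, 1], [-1, -1, -1, 1])
assert np.allclose(M @ [4, 1, -2, 1], [1, 1, 1, 1])
assert np.allclose(M @ [2, 0, -6, 1], [0, 0, 0, 1])
```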
Projection Normalization - Persp
Perspective normalization is trickier
Perspective Normalization
Consider N =

    [ 1   0   0   0
      0   1   0   0
      0   0   α   β
      0   0  -1   0 ]

After multiplying:
• p’ = Np
Perspective Normalization
After dividing by w’ = -z, p’ -> p’’ = [-x/z, -y/z, -(αz + β)/z, 1]^T
Perspective Normalization
Quick Check
• If x = z
  – x’’ = -1
• If x = -z
  – x’’ = 1
Perspective Normalization
What about z?
• If z = z_max
• If z = z_min
• Solve for α and β such that z_min -> -1 and z_max -> 1
• The resulting z’’ is nonlinear, but preserves the ordering of points
  – If z_1 < z_2 then z’’_1 < z’’_2
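α and β can be found by imposing the two conditions and solving the resulting 2×2 linear system; a sketch assuming near/far z values of -1 and -10 (negative, since the camera looks along -z):

```python
import numpy as np

# Assumed near/far z values (negative: the camera looks along -z).
zmin, zmax = -10.0, -1.0

# After the perspective divide, z'' = -alpha - beta / z.
# Impose zmin -> -1 and zmax -> +1 and solve for alpha, beta:
A = np.array([[-1.0, -1.0 / zmin],
              [-1.0, -1.0 / zmax]])
alpha, beta = np.linalg.solve(A, np.array([-1.0, 1.0]))

z_pp = lambda z: -alpha - beta / z
assert abs(z_pp(zmin) + 1.0) < 1e-9     # far plane  -> -1
assert abs(z_pp(zmax) - 1.0) < 1e-9     # near plane -> +1

# Nonlinear in z, but order-preserving: z1 < z2  =>  z1'' < z2''
zs = np.linspace(zmin, zmax, 50)
assert np.all(np.diff(z_pp(zs)) > 0)
```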
Perspective Normalization
We did it. Using matrix N:
• The perspective viewing frustum is transformed to a cube
• Orthographic rendering of the cube produces the same image as perspective rendering of the original frustum
Color
Next topic: Color
To understand how to make realistic images, we need a basic understanding of the physics and physiology of vision. Here we step away from the code and math for a bit to talk about basic principles.
Basics Of Color
Elements of color:
Basics of Color
Physics:
• Illumination
  – Electromagnetic spectra
• Reflection
  – Material properties
  – Surface geometry and microgeometry (e.g., polished versus matte versus brushed)
Perception:
• Physiology and neurophysiology
• Perceptual psychology
Physiology of Vision
The eye:
The retina
• Rods
• Cones
  – Color!
Physiology of Vision
The center of the retina is a densely packed region called the fovea.
• Cones are much denser here than in the periphery
Physiology of Vision: Cones
Three types of cones:
• L or R, most sensitive to red light (610 nm)
• M or G, most sensitive to green light (560 nm)
• S or B, most sensitive to blue light (430 nm)
• Color blindness results from missing cone type(s)
Physiology of Vision: The Retina
Strangely, rods and cones are at the back of the retina, behind a mostly-transparent neural structure that collects their response.
http://www.trueorigin.org/retina.asp
Perception: Metamers
A given perceptual sensation of color derives from the stimulus of all three cone types
Identical perceptions of color can thus be caused by very different spectra
Perception: Other Gotchas
Color perception is also difficult because:
• It varies from person to person
• It is affected by adaptation (stare at a light bulb… don’t)
• It is affected by surrounding color
Perception: Relative Intensity
We are not good at judging absolute intensity
Let’s illuminate pixels with white light on a scale of 0 – 1.0
The intensity difference of neighboring colored rectangles with intensities:
    0.10 -> 0.11 (10% change)        0.50 -> 0.55 (10% change)
will look the same
We perceive relative intensities, not absolute
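A quick arithmetic check of the two pairs above:

```python
# Both neighboring-rectangle pairs step intensity by the same 10% ratio,
# which is why the two differences look the same:
low, high = (0.10, 0.11), (0.50, 0.55)
assert abs(high[1] / high[0] - low[1] / low[0]) < 1e-9
# ...even though the absolute change differs by a factor of 5:
assert abs((high[1] - high[0]) / (low[1] - low[0]) - 5.0) < 1e-9
```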
Representing Intensities
Remaining in the world of black and white…
Use a photometer to obtain the min and max brightness of the monitor
This is the dynamic range
Intensity ranges from the min, I_0, to the max, 1.0
How do we represent 256 shades of gray?
Representing Intensities
Equal distribution between min and max fails
• The relative change near the max is much smaller than near I_0
• Ex: 1/4, 1/2, 3/4, 1
Preserve % change instead
• Ex: 1/8, 1/4, 1/2, 1
• I_n = I_0 * r^n, for n > 0
I_0 = I_0
I_1 = rI_0
I_2 = rI_1 = r^2 I_0
…
I_255 = rI_254 = r^255 I_0
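Picking r so the ladder tops out at 1.0 after 255 steps gives r = (1/I_0)^(1/255); a sketch with an assumed monitor minimum intensity of I_0 = 0.02 (a 50:1 dynamic range):

```python
# Geometric intensity ladder: I_n = I_0 * r**n with I_255 = 1.0,
# so r = (1 / I_0) ** (1/255).  I_0 = 0.02 is an assumed monitor minimum.
I0 = 0.02
r = (1.0 / I0) ** (1.0 / 255.0)
ladder = [I0 * r**n for n in range(256)]

assert abs(ladder[0] - I0) < 1e-12
assert abs(ladder[255] - 1.0) < 1e-9

# Every step is the same *relative* change, unlike an equal-spaced ramp:
steps = {round(ladder[n + 1] / ladder[n], 6) for n in range(255)}
assert len(steps) == 1
```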
Dynamic Ranges

Display          Dynamic Range        Max # of Perceived
                 (max / min illum)    Intensities (r = 1.01)
CRT              50 – 200             400 – 530
Photo (print)    100                  465
Photo (slide)    1000                 700
B/W printout     100                  465
Color printout   50                   400
Newspaper        10                   234
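The “Max # of Perceived Intensities” column follows from n = ln(dynamic range) / ln(1.01), the number of 1% steps needed to climb from min to max luminance; the computed values land close to (though not exactly on) the slide’s rounded figures:

```python
import math

# n = ln(dynamic range) / ln(1.01): how many 1% steps fit in the range.
table = {"Photo (print)": 100, "Photo (slide)": 1000,
         "B/W printout": 100, "Color printout": 50, "Newspaper": 10}
for name, dyn_range in table.items():
    n = math.log(dyn_range) / math.log(1.01)
    print(f"{name}: {n:.0f} perceived intensities")

# Each computed count is within ~10 of the slide's figure:
slide = {100: 465, 1000: 700, 50: 400, 10: 234}
for dyn_range, figure in slide.items():
    assert abs(math.log(dyn_range) / math.log(1.01) - figure) < 10
```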
Gamma Correction
But most display devices are inherently nonlinear: Intensity = k(voltage)^γ
• i.e., brightness * voltage != (2 * brightness) * (voltage / 2)
• γ is between 2.2 and 2.5 on most monitors
Common solution: gamma correction
• Post-transformation on intensities to map them to a linear range on the display device: I’ = I^(1/γ)
• Can have a separate γ for R, G, B
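A minimal sketch of the correction (γ = 2.2 and k = 1 are assumed values for the example):

```python
# Gamma correction: pre-distort intensities by 1/gamma so the display's
# I = k * V**gamma nonlinearity cancels out.  gamma = 2.2 is assumed.
gamma = 2.2

def gamma_correct(intensity):
    return intensity ** (1.0 / gamma)   # value sent to the display

def display_response(voltage, k=1.0):
    return k * voltage ** gamma         # what the monitor actually emits

# The corrected signal comes out of the display linear in the input:
for i in (0.1, 0.5, 0.9):
    assert abs(display_response(gamma_correct(i)) - i) < 1e-12
```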
Gamma Correction
Some monitors perform the gamma correction in hardware (SGIs)
Others do not (most PCs)
It is tough to generate images that look good on both platforms (e.g., images from web pages)