
1

Graphics, Day 2

Based on lecture by Ed Angel

© Ed Angel

Uncredited images from Ed Angel

2

Objectives

Administrivia

Brief historical introduction

Meet our first Pioneer – Ivan Sutherland

Fundamental imaging notions

Physical basis for image formation

Light

Color

Perception

Synthetic camera model

Other models – Ray Tracing

Mathematics: meaning of Dot product, applications

Sierpinski Gasket and the Chaos Game

Escape to 3D!

3

Notifications

4

Questions

Must we use shaders for all our projects?

I haven't converted my examples yet: I can't ask you to do what I haven't done

But if your purpose in taking the course is to learn GPU programming, it would be good practice…

Can we use Java?

In theory, yes. In practice, you have enough to worry about already

What can you do with shaders?

I'll give some examples tonight

Why must we link the Foundations when we #included OpenGL?

This is the difference between Circle.h and Circle.cpp

5

Basic Graphics System

Input devices

Output device

Image formed in Frame Buffer

6

CRT

Can be used either as a line-drawing device (calligraphic) or to display contents of frame buffer (raster mode)

7

Computer Graphics: 1950-1960

Computer graphics goes back to the earliest days of computing

Strip charts

Pen plotters

Simple displays using A/D converters to go from computer to calligraphic CRT

Cost of refresh for CRT too high

Computers slow, expensive, unreliable

API very device specific

8

Computer Graphics: 1960-1970

Wireframe graphics

Draw only lines

Ivan Sutherland's Sketchpad

Display Processors

Storage tube

wireframe representation of sun object

9

Sketchpad

Ivan Sutherland’s PhD thesis at MIT (1963)

Recognized the potential of man-machine interaction

Loop

Display something

User moves light pen

Computer generates new display

Sutherland also created many of the now common algorithms for computer graphics

Alan Kay presents Sutherland's sketchpad

http://www.youtube.com/watch?v=mOZqRJzE8xg

10

Display Processor

Rather than have the host computer try to refresh the display, use a special-purpose computer called a display processor (DPU)

Graphics stored in display list (display file) on display processor

Host compiles display list and sends to DPU

11

Direct View Storage Tube

Created by Tektronix

Did not require constant refresh

Standard interface to computers

Allowed for standard software

Plot3D in Fortran

Relatively inexpensive

Opened door to use of computer graphics for CAD community

Drew lines - vector graphics

12

Computer Graphics: 1970-1980

Raster Graphics

Beginning of graphics standards

IFIPS

GKS: European effort

Becomes ISO 2D standard

Core: North American effort

3D but fails to become ISO standard

Workstations and PCs

13

Raster Graphics

Image produced as an array (the raster) of picture elements (pixels) in the frame buffer

14

Raster Graphics

Allows us to go from lines and wire frame images to filled polygons

Note different patches have different shading

Can see edges and vertices

15

PCs and Workstations

Although we no longer make the distinction between workstations and PCs, historically they evolved from different roots

Early workstations characterized by

Networked connection: client-server model

High-level of interactivity

Early PCs included frame buffer as part of user memory

Easy to change contents of frame buffer and create images

16

Computer Graphics: 1980-1990

Realism comes to computer graphics

smooth shading, environment mapping

bump mapping

17

Bump Map

Change surface plane

Normal vector

18

Computer Graphics: 1980-1990

Special purpose hardware

Silicon Graphics geometry engine

VLSI implementation of graphics pipeline

Industry-based standards

PHIGS

Pixar's RenderMan API

Networked graphics: X Window System

Human-Computer Interface (HCI)

19

Computer Graphics: 1990-2000

OpenGL API

Completely computer-generated feature-length movies are successful

New hardware capabilities

Texture mapping

Blending

Accumulation, stencil buffers

Pixar

20

Computer Graphics: 2000-

Photorealism

Graphics cards for PCs dominate market

Nvidia, ATI, 3DLabs

Game boxes and game players determine direction of market

Computer graphics routine in movie industry: Maya, Lightwave

Programmable pipelines

Grand Theft Auto

21

Image Formation

In computer graphics, we form images, which are generally two-dimensional, using a process analogous to the way images are formed by physical imaging systems

Cameras

Microscopes

Telescopes

Human visual system

Wikipedia

22

Elements of Image Formation

Objects

Viewer

Light source(s)

How light interacts with material

Glossy or matt?

Note the independence of the objects, the viewer, and the light source(s)

Generating shadows is not easy – a theme we will return to

23

Light

Light is the part of the electromagnetic spectrum that causes a reaction in our visual systems

Generally these are wavelengths in the range of about 350-750 nm (nanometers)

Long wavelengths appear as reds and short wavelengths as blues

24

Luminance and Color Images

Luminance Image

Monochromatic

Values are gray levels

Analogous to working with black and white film or television

Color Image

Has perceptual attributes of hue, saturation, and lightness

Do we have to match every frequency in visible spectrum? No!

25

Three-Color Theory

Human visual system has two types of sensors

Rods: monochromatic, night vision

Cones

Color sensitive

Three types of cones

Only three values (the tristimulus values) are sent to the brain

Need only match these three values to fool eye

Need only three primary colors

26

Shadow Mask CRT

27

Additive and Subtractive Color

Additive color: form a color by adding amounts of three primaries

CRTs, projection systems, positive film

Primaries are Red (R), Green (G), Blue (B)

Subtractive color: form a color by filtering white light with Cyan (C), Magenta (M), and Yellow (Y) filters

Light-material interactions

Printing

Negative film

28

Synthetic Camera Model

center of projection

image plane

projector

p

projection of p

29

Advantages

Separation of objects, viewer, light sources

Two-dimensional graphics is a special case of three-dimensional graphics

Leads to simple software API

Specify objects, lights, camera, attributes

Let implementation determine image

Leads to fast hardware implementation

30

Global vs Local Lighting

Cannot compute color or shade of each object independently

Some objects are blocked from light

Light can reflect from object to object

Some objects might be translucent

31

Alternatives

Ray Tracing and Ray Casting

Wikipedia

32

Why not ray tracing?

Ray tracing is more physically based

Simpler for objects such as polygons and quadrics

Can produce global lighting effects such as shadows and multiple reflections

Why don't we use it to design a graphics system?

Ray tracing is slow and not well-suited for many interactive applications

Wikipedia

33

Dot Product

Let C = A − B

C · C = (A − B) · (A − B)

C · C = A · A + B · B − 2 A · B

|C|² = |A|² + |B|² − 2 A · B

But the Law of Cosines tells us that

|C|² = |A|² + |B|² − 2 |A| |B| cos θ

Putting these together, we get

A · B = |A| |B| cos θ

cos θ = (A · B) / (|A| |B|)
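As a quick sanity check (not part of the lecture), here is a small C++ sketch that recovers the angle between two vectors from this formula; the specific vectors are only illustrative.

#include <cmath>
#include <cstdio>

int main()
{
    double ax = 1.0, ay = 0.0;   // A along the x axis
    double bx = 1.0, by = 1.0;   // B at 45 degrees to A
    double dot  = ax * bx + ay * by;
    double lenA = std::sqrt( ax * ax + ay * ay );
    double lenB = std::sqrt( bx * bx + by * by );
    double theta = std::acos( dot / ( lenA * lenB ) );                        // in radians
    printf( "angle = %.1f degrees\n", theta * 180.0 / std::acos( -1.0 ) );    // prints 45.0
    return 0;
}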

34

More Vector Mathematics

Dot product – measures length and angle – can tease out one term

A · B = |A| |B| cos θ

cos θ = (A · B) / (|A| |B|)

Perpendicular vectors have a dot product of 0

Given vector v1 = (a, b)

Build vector v2 = (-b, a)

v1 · v2 = -ab + ba = 0, so they are perpendicular

Could also use (b, -a) = -v2

Given vector v = (a, b, c) we have many perpendiculars

(-b, a, 0), (0, -c, b), (-c, 0, a) and combinations of these
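A minimal C++ check of the 2D construction above (the values are only illustrative): build v2 = (-b, a) from v1 = (a, b) and confirm the dot product is zero.

#include <cstdio>

int main()
{
    double a = 3.0, b = 4.0;          // v1 = (a, b)
    double v2x = -b, v2y = a;         // v2 = (-b, a)
    double dot = a * v2x + b * v2y;   // -ab + ba
    printf( "v1 . v2 = %g\n", dot );  // prints 0
    return 0;
}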

35

Applications

If A · B = 0 and |A| ≠ 0 and |B| ≠ 0, then A and B are perpendicular

We can define a line through the origin as the set of points that are the endpoints of vectors perpendicular to a fixed vector v

The vector v is called the normal vector to the line

If v = (a, b), then the line is ax + by = 0

All (x, y) such that (x, y) · (a, b) = ax + by = 0

We can translate the line away from the origin to get ax + by = c

A normal vector can also be used to define a plane in 3D

Wolfram Mathworld

36

Applications

Backface culling – we use the fact that A · B < 0 means that the angle between A and B is greater than 90°

When deciding if a triangle is facing the camera or is facing away, we compute the dot product of the normal vector to the polygon with the vector from the camera to one of the polygon's vertices.

Gives fast test to see if we need to draw the triangle: those that are facing away are culled (removed)
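A minimal sketch of this test in C++ (the struct and function names are illustrative, not from the course code): a face is kept when the dot product of its normal with the camera-to-vertex vector is negative.

#include <cstdio>

struct Vec3 { double x, y, z; };

double dot( const Vec3& a, const Vec3& b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   sub( const Vec3& a, const Vec3& b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

bool facesCamera( const Vec3& normal, const Vec3& vertex, const Vec3& camera )
{
    Vec3 toVertex = sub( vertex, camera );    // vector from camera to one vertex
    return dot( normal, toVertex ) < 0.0;     // < 0: angle > 90 degrees, so front-facing
}

int main()
{
    Vec3 n      = { 0.0, 0.0, 1.0 };          // normal points toward +z
    Vec3 v      = { 0.0, 0.0, 0.0 };          // one vertex of the triangle
    Vec3 camera = { 0.0, 0.0, 5.0 };          // camera in front of the triangle
    printf( "front-facing: %d\n", facesCamera( n, v, camera ) );   // prints 1
    return 0;
}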

37

Applications

We want to know the distance between a point x and a line L

Take a point y on the line L, and draw a vector A from y to x

Project the vector A onto the vector B (a vector along L)

The distance from x to the line is the length of the perpendicular component A − P

P = ( (A · B) / |B|² ) B
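The same computation in a small C++ sketch (values chosen only for illustration): project A onto B, then measure the length of A − P.

#include <cmath>
#include <cstdio>

int main()
{
    // Line L: passes through y = (0, 0) with direction B = (1, 0), i.e. the x axis
    double Bx = 1.0, By = 0.0;
    // Point x = (3, 4); its distance to the x axis should be 4
    double Ax = 3.0, Ay = 4.0;                                    // A = x - y

    double scale = ( Ax * Bx + Ay * By ) / ( Bx * Bx + By * By );
    double Px = scale * Bx, Py = scale * By;                      // P = ((A.B) / |B|^2) B
    double dist = std::sqrt( (Ax - Px) * (Ax - Px) + (Ay - Py) * (Ay - Py) );
    printf( "distance = %g\n", dist );                            // prints 4
    return 0;
}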

38

Applications

Does a line intersect a circle?

What do you need to answer the question?
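One possible answer (my sketch, not the lecture's): reuse the point-to-line distance from the previous slide; the line intersects the circle exactly when the distance from the circle's center to the line is at most the radius.

#include <cmath>
#include <cstdio>

// Line given by a point (yx, yy) and a direction (Bx, By); circle by center and radius.
bool lineHitsCircle( double yx, double yy, double Bx, double By,
                     double cx, double cy, double radius )
{
    double Ax = cx - yx, Ay = cy - yy;                             // center relative to the line point
    double scale = ( Ax * Bx + Ay * By ) / ( Bx * Bx + By * By );
    double Px = scale * Bx, Py = scale * By;                       // projection onto the line
    double dist = std::sqrt( (Ax - Px) * (Ax - Px) + (Ay - Py) * (Ay - Py) );
    return dist <= radius;
}

int main()
{
    // x axis vs. unit circle at (0, 2): distance 2 > radius 1, no intersection
    printf( "%d\n", lineHitsCircle( 0, 0, 1, 0, 0, 2.0, 1.0 ) );   // prints 0
    // x axis vs. unit circle at (0, 0.5): distance 0.5 <= 1, intersection
    printf( "%d\n", lineHitsCircle( 0, 0, 1, 0, 0, 0.5, 1.0 ) );   // prints 1
    return 0;
}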

Vertex Shader for color

Vertex Shader for Rotation

chapter 3 example6 – vertex shader

attribute vec4 vPosition;

attribute vec4 vColor;

varying vec4 color;

uniform vec3 theta;

void main()

{

// Compute the sines and cosines of theta for each of

// the three axes in one computation.

vec3 angles = radians( theta );

vec3 c = cos( angles );

vec3 s = sin( angles );

Rotation (cont)

// Remember: these matrices are column-major

mat4 rx = mat4( 1.0, 0.0, 0.0, 0.0,

0.0, c.x, s.x, 0.0,

0.0, -s.x, c.x, 0.0,

0.0, 0.0, 0.0, 1.0 );

mat4 ry = mat4( c.y, 0.0, -s.y, 0.0,

0.0, 1.0, 0.0, 0.0,

s.y, 0.0, c.y, 0.0,

0.0, 0.0, 0.0, 1.0 );

// Workaround for bug in ATI driver

ry[1][0] = 0.0;

ry[1][1] = 1.0;

Rotation (cont)

mat4 rz = mat4( c.z, -s.z, 0.0, 0.0,

s.z, c.z, 0.0, 0.0,

0.0, 0.0, 1.0, 0.0,

0.0, 0.0, 0.0, 1.0 );

// Workaround for bug in ATI driver

rz[2][2] = 1.0;

color = vColor;

gl_Position = rz * ry * rx * vPosition;

}

Vertex Shader: Rotation & Shading

// www.lighthouse3d.com

varying vec3 normal, lightDir;

void main()

{

lightDir = normalize(vec3(gl_LightSource[0].position));

normal = normalize(gl_NormalMatrix * gl_Normal);

gl_Position = ftransform();

}

Fragment Shader: NPR (Non-Photorealistic Rendering)

varying vec3 normal, lightDir;

void main() {

float intensity;

vec3 n;

vec4 color;

n = normalize(normal);

intensity = max(dot(lightDir,n),0.0);

if (intensity > 0.98)

color = vec4(0.8,0.8,0.8,1.0);

else if (intensity > 0.5)

color = vec4(0.4,0.4,0.8,1.0);

else if (intensity > 0.25)

color = vec4(0.2,0.2,0.4,1.0);

else

color = vec4(0.1,0.1,0.1,1.0);

gl_FragColor = color;

}

Sierpinski Gasket

Discovered by Waclaw Sierpinski in 1915

Start with a triangle

In each step, remove 25% of the remaining area

We are left with a shape with area 0
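To make the "area 0" claim concrete (a one-line addition, not on the slide): each step keeps 3/4 of the remaining area, so after n steps the area is A_n = (3/4)^n · A_0, which tends to 0 as n grows.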

Sierpinski Gasket

The odd numbers in Pascal’s Triangle are in bold

Can view trios of three bold numbers as a black triangle

See also S. Wolfram’s Cellular Automata rule 90

1

1 1

1 2 1

1 3 3 1

1 4 6 4 1

1 5 10 10 5 1

1 6 15 20 15 6 1

1 7 21 35 35 21 7 1

Drawing Triangle

const int NumTimesToSubdivide = 5;
const int NumTriangles = 243;    // 3^5 triangles generated by 5 subdivisions
const int NumVertices = 3 * NumTriangles;

vec2 points[NumVertices];
int Index = 0;

void
triangle( const vec2& a, const vec2& b, const vec2& c )
{
    points[Index++] = a;
    points[Index++] = b;
    points[Index++] = c;
}

Drawing Fractal

void
divide_triangle( const vec2& a, const vec2& b,
                 const vec2& c, int count )
{
    if ( count > 0 ) {
        vec2 v0 = ( a + b ) / 2.0;
        vec2 v1 = ( a + c ) / 2.0;
        vec2 v2 = ( b + c ) / 2.0;
        divide_triangle( a, v0, v1, count - 1 );
        divide_triangle( c, v1, v2, count - 1 );
        divide_triangle( b, v2, v0, count - 1 );
    }
    else {
        triangle( a, b, c );    // end of recursion
    }
}

We are representing points as ordered pairs

Find the midpoints of the three sides, and recurse

Drawing Triangles

void display( void )
{
    glClear( GL_COLOR_BUFFER_BIT );
    glDrawArrays( GL_TRIANGLES, 0, NumVertices );
    glFlush();
}

// Display from example1.cpp
void display( void )
{
    glClear( GL_COLOR_BUFFER_BIT );
    glDrawArrays( GL_POINTS, 0, NumPoints );
    glFlush();
}

50

OpenGL Primitives

(figure: the OpenGL primitives: GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUAD_STRIP, GL_POLYGON)

51

Polygon Issues

OpenGL will only display polygons correctly that are

Simple: edges cannot cross

Convex: all points on the line segment between two points in the polygon are also in the polygon

Flat: all vertices are in the same plane

The user program can check whether these conditions hold (a simple convexity check is sketched below)

OpenGL will produce output if these conditions are violated, but it may not be what is desired

Triangles satisfy all conditions: we can always use triangles

(figure: a nonsimple polygon and a nonconvex polygon)
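As one illustration of the "user program can check" point, here is a hedged C++ sketch of a 2D convexity test (not from the textbook): a polygon is convex when the cross products of consecutive edges never change sign.

#include <cstdio>
#include <utility>
#include <vector>

bool isConvex( const std::vector<std::pair<double, double>>& poly )
{
    int n = (int) poly.size();
    bool hasPos = false, hasNeg = false;
    for ( int i = 0; i < n; ++i ) {
        auto [ax, ay] = poly[i];
        auto [bx, by] = poly[(i + 1) % n];
        auto [cx, cy] = poly[(i + 2) % n];
        double cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx);   // z of edge1 x edge2
        if ( cross > 0 ) hasPos = true;
        if ( cross < 0 ) hasNeg = true;
    }
    return !( hasPos && hasNeg );    // mixed signs mean a reflex vertex, so not convex
}

int main()
{
    std::vector<std::pair<double, double>> square = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
    std::vector<std::pair<double, double>> arrow  = { {0, 0}, {2, 1}, {0, 2}, {1, 1} };
    printf( "square convex: %d\n", isConvex( square ) );   // prints 1
    printf( "arrow  convex: %d\n", isConvex( arrow ) );    // prints 0
    return 0;
}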

52

Chaos Game

http://math.bu.edu/DYSYS/applets/chaos-game.html

53

Chaos Game Routine

vec2 points[NumPoints];
...
// Select an initial point inside of the triangle
points[0] = vec2( 0.25, 0.50 );

// compute and store N-1 new points
for ( int i = 1; i < NumPoints; ++i ) {
    int j = rand() % 3;    // pick a vertex at random

    // Compute point halfway between the selected vertex
    // and the previous point
    points[i] = ( points[i - 1] + vertices[j] ) / 2.0;
}

54

Chaos Game Main Loop

// compute and store N-1 new points
for ( int i = 1; i < NumPoints; ++i ) {
    int j = rand() % 3;    // pick a vertex at random

    // Compute point halfway between the selected vertex
    // and the previous point
    points[i] = ( points[i - 1] + vertices[j] ) / 2.0;
}

(figure callouts: starting point, second point, second random vertex)

55

Why does this work?

In my version the starting point is in the center of the middle triangle

Each time we move halfway towards a corner, we move into a blank triangle

We will never land in a shaded triangle

So why does the resulting set look like the Sierpinski Gasket?

56

Why does this work?

Imagine our random vertex is on the north, so we map all points north

At each step, our point is in the center of a blank triangle

But at every step, the new triangle is half as big, so point is closer to curve

In the limit, point lies within epsilon of the Sierpinski curve

If the choice of next vertex is not random, we do not get the full set

(figure: triangle with vertices A, B, C)

57

Attributes

Attributes are part of the OpenGL state and determine the appearance of objects

Color (points, lines, polygons)

Size and width (points, lines)

Stipple pattern (lines, polygons)

Polygon mode

Display as filled: solid color or stipple pattern

Display edges

Display vertices

58

RGB color

Each color component is stored separately in the frame buffer

Usually 8 bits per component in buffer: 24 bit color

8 bits can store numbers from 0 to 255 as Unsigned Bytes

Note in glColor3f the color values range from 0.0 (none) to 1.0 (all), whereas in glColor3ub the values range from 0 to 255
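A minimal sketch of the two conventions, using the legacy fixed-function calls named above (the shader-based examples pass colors differently): both calls below request the same orange, up to rounding.

#include <GL/glut.h>   // assumes a desktop OpenGL/GLUT setup

void setOrangeTwoWays()
{
    glColor3f( 1.0f, 0.5f, 0.0f );   // floats in the range 0.0 (none) to 1.0 (all)
    glColor3ub( 255, 128, 0 );       // unsigned bytes in the range 0 to 255
}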

59

Indexed Color

An alternative is indexed color

Colors are indices into tables of RGB values (see next slide for GIF scheme)

Requires less memory (indices are usually 8 bits)

not as important now

Memory inexpensive

Need more colors for shading

60

GIF Color Map

GIF, introduced by CompuServe, uses the idea of indirection

Rather than store the value, store a reference to the value

We have an image with 24-bit colors for each pixel

To save storage, store an index into a small table of colors (the color map)

Rather than store 24 bits, we typically pick 256 colors and store 8 bits

Lossy if the original has more than 256 colors

Works better for the Simpsons than the Sopranos

Store the color map first, then the array of references to the map

Original image (24-bit values per pixel):

0x00ED9D 0x00ED9D 0x00ED9D 0x00ED9D 0x00ED9D
0x00ED9D 0x00ED9D 0xFF69F0 0xFF69F0 0xFF69F0
0x00ED9D 0xFF69F0 0x00ED9D 0x00ED9D 0x00ED9D
0xFF69F0 0x00ED9D 0x00ED9D 0x00ED9D 0x00ED9D
0xFF69F0 0x00ED9D 0xFF0000 0x00ED9D 0x00ED9D
0xFF69F0 0x00ED9D 0x00ED9D 0x00ED9D 0x00ED9D
0xFF69F0 0x00ED9D 0x00ED9D 0x00ED9D 0x00ED9D
0xFF69F0 0x00ED9D 0x00ED9D 0xFF0000 0xFF0000

Color map:

0 -> 0x00ED9D
1 -> 0xFF0000
2 -> 0xFF69F0

Compressed image (8-bit indices per pixel):

0 0 0 0 0
0 0 2 2 2
0 2 0 0 0
2 0 0 0 0
2 0 1 0 0
2 0 0 0 0
2 0 0 0 0
2 0 0 1 1
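A small C++ sketch of the indirection idea, using the table above (the color-map ordering is inferred from the slide's index data and is illustrative): decoding is one table lookup per pixel.

#include <cstdint>
#include <cstdio>

int main()
{
    const uint32_t colorMap[3] = { 0x00ED9D, 0xFF0000, 0xFF69F0 };   // index -> 24-bit color
    const uint8_t  indices[8][5] = {
        { 0, 0, 0, 0, 0 }, { 0, 0, 2, 2, 2 }, { 0, 2, 0, 0, 0 }, { 2, 0, 0, 0, 0 },
        { 2, 0, 1, 0, 0 }, { 2, 0, 0, 0, 0 }, { 2, 0, 0, 0, 0 }, { 2, 0, 0, 1, 1 }
    };

    for ( int row = 0; row < 8; ++row ) {      // expand the compressed image
        for ( int col = 0; col < 5; ++col )
            printf( "0x%06X ", (unsigned) colorMap[ indices[row][col] ] );
        printf( "\n" );
    }
    return 0;
}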

61

Color and State

The color as set by glColor becomes part of the state and will be used until changed

Colors and other attributes are not part of the object but are assigned when the object is rendered

We can create conceptual vertex colors by code such as

glColor(…); glVertex(…);
glColor(…); glVertex(…);

We will see an example in example4 of chapter 2

62

3D Example4

/* sierpinski gasket with vertex arrays */

#include "Angel.h"

const int NumTimesToSubdivide = 4;
const int NumTetrahedrons = 256;
const int NumTriangles = 4 * NumTetrahedrons;
const int NumVertices = 3 * NumTriangles;

vec3 points[NumVertices];
vec3 colors[NumVertices];

int Index = 0;

63

3D Example

void
triangle( const vec3& a, const vec3& b, const vec3& c, const int color )
{
    static vec3 base_colors[] = {
        vec3( 1.0, 0.0, 0.0 ), vec3( 0.0, 1.0, 0.0 ),
        vec3( 0.0, 0.0, 1.0 ), vec3( 0.0, 0.0, 0.0 )
    };

    points[Index] = a;  colors[Index] = base_colors[color];  Index++;
    points[Index] = b;  colors[Index] = base_colors[color];  Index++;
    points[Index] = c;  colors[Index] = base_colors[color];  Index++;
}

64

3D Example

void
tetra( const vec3& a, const vec3& b, const vec3& c, const vec3& d )
{
    triangle( a, b, c, 0 );
    triangle( a, c, d, 1 );
    triangle( a, d, b, 2 );
    triangle( b, d, c, 3 );
}

65

Position and Color

// First, we create an empty buffer of the size we need by passing
// a NULL pointer for the data values
glBufferData( GL_ARRAY_BUFFER, sizeof(points) + sizeof(colors),
              NULL, GL_STATIC_DRAW );

// Next, we load the real data in parts. We need to specify the
// correct byte offset for placing the color data after the point
// data in the buffer. Conveniently, the byte offset we need is
// the same as the size (in bytes) of the points array, which is
// returned from "sizeof(points)".
glBufferSubData( GL_ARRAY_BUFFER, 0, sizeof(points), points );
glBufferSubData( GL_ARRAY_BUFFER, sizeof(points), sizeof(colors), colors );

66

Position and Color

// Initialize the vertex position attribute from the vertex shader
GLuint vPosition = glGetAttribLocation( program, "vPosition" );
glEnableVertexAttribArray( vPosition );
glVertexAttribPointer( vPosition, 3, GL_FLOAT, GL_FALSE, 0,
                       BUFFER_OFFSET(0) );

// Likewise, initialize the vertex color attribute. We
// need to specify the starting offset (in bytes) for the color
// data. Just like loading the array, we use "sizeof(points)"
// to determine the correct value.
GLuint vColor = glGetAttribLocation( program, "vColor" );
glEnableVertexAttribArray( vColor );
glVertexAttribPointer( vColor, 3, GL_FLOAT, GL_FALSE, 0,
                       BUFFER_OFFSET(sizeof(points)) );

67

Vertex Shader

attribute vec3 vPosition;
attribute vec3 vColor;
varying vec4 color;

void
main()
{
    gl_Position = vec4( vPosition, 1.0 );
    color = vec4( vColor, 1.0 );
}

68

3D Example

void
display( void )
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glDrawArrays( GL_TRIANGLES, 0, NumVertices );
    glFlush();
}

There is added complexity handling the color as well as the position

But the switch from 2 to 3 dimensions is easy:

1) Example4 uses the default camera settings

2) The depth buffer (z-buffer) is used for hidden-surface removal

69

Depth Buffer

void init( void )
{
    ...
    glEnable( GL_DEPTH_TEST );
    ...
}

void display( void )
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glDrawArrays( GL_TRIANGLES, 0, NumVertices );
    glFlush();
}

int main( int argc, char **argv )
{
    glutInit( &argc, argv );
    glutInitDisplayMode( GLUT_RGBA | GLUT_DEPTH );

70

Coordinate Systems

The units in glVertex are determined by the application and are called object or problem coordinates

The viewing specifications are also in object coordinates and it is the size of the viewing volume that determines what will appear in the image

Internally, OpenGL will convert to camera (eye) coordinates and later to screen coordinates

OpenGL also uses some internal representations that usually are not visible to the application

71

OpenGL Camera

OpenGL places a camera at the origin in object space pointing in the negative z direction

The default viewing volume is a box centered at the origin with a side of length 2

72

Orthographic Viewing

In the default orthographic view, points are projected forward along the z axis onto the plane z = 0

73

Transformations and Viewing

In OpenGL, projection is carried out by a projection matrix (transformation)

There is only one set of transformation functions, so we must set the matrix mode first: glMatrixMode( GL_PROJECTION )

Transformation functions are incremental so we start with an identity matrix and alter it with a projection matrix that gives the view volume

glLoadIdentity(); glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

74

glMatrixMode

void glMatrixMode(GLenum mode);

mode

Specifies which matrix stack is the target for subsequent matrix operations.

Three values are accepted: GL_MODELVIEW, GL_PROJECTION, and GL_TEXTURE. The initial value is GL_MODELVIEW.

Additionally, if the ARB_imaging extension is supported, GL_COLOR is also accepted.

75

3- and 2-dimensional viewing

glLoadIdentity(); glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

In glOrtho(left, right, bottom, top, near, far)

The near and far distances are measured from the camera

Two-dimensional vertex commands place all vertices in the plane z=0

If the application is in two dimensions, we can use the function

gluOrtho2D(left, right, bottom, top)

In two dimensions, the view or clipping volume becomes a clipping window
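A minimal fixed-function sketch of the 2D case (assuming a GLUT/GLU setup; the shader-based examples build this matrix themselves):

#include <GL/glut.h>   // pulls in GL and GLU on most platforms

void init2DView( void )
{
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    gluOrtho2D( -1.0, 1.0, -1.0, 1.0 );   // same volume as glOrtho with near = -1.0, far = 1.0
    glMatrixMode( GL_MODELVIEW );         // return to the mode most code expects
}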

76

Viewports

We do not have to use the entire window for the image: glViewport( x, y, w, h )

Values in pixels (screen coordinates)
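For example (a hypothetical reshape callback, not from the course code), this draws into only the lower-left quarter of the window:

#include <GL/glut.h>

void myReshape( int w, int h )
{
    glViewport( 0, 0, w / 2, h / 2 );   // x, y, width, height in pixels
}

// registered in main() with: glutReshapeFunc( myReshape );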

77

Summation

We can define a number of different attributes in OpenGL

Color is the simplest to see

We can define a number of different objects:

points, lines, triangles, …

We can move the objects, and move the camera independently

3D is a simple extension of 2D drawing

Currently, our camera uses orthographic projection

We will be able to define a perspective camera shortly

We can use Shaders to perform computations on the GPU