
Government Arts College for Women, Salem-8

Department of Computer Science, 2020-2021

III-BCA

Online Class

Subject Name: Elective-I: Computer Graphics

Paper Code: 17UCAE03

Handled By

Mrs. P. KANAGAVALLI, M.Sc., M.Phil., B.Ed.,

II-Shift, Guest Lecturer.

UNIT-I

• Graphics
• Graphics Applications
• Graphics Systems

Graphics: Computer graphics deals with all aspects of creating images with a computer.

Graphics Applications:

1. Computer-Aided Design: automobiles, watercraft, textiles.

2. Presentation Graphics: bar charts, line graphs, pie charts, and so on.

3. Computer Art: CAD, DTP.

4. Entertainment: motion pictures, music videos, graphics objects.

5. Education and Training: models of physical systems, training applications.

6. Visualization: medical data sets and other forms of data.

7. Image Processing: improving picture quality.

8. GUI (Graphical User Interface): menus and icons.

Graphics Systems

Video Display Devices: Cathode Ray Tube (CRT), Raster-Scan Displays, Random-Scan Displays, Color CRT Monitors, Direct-View Storage Tubes (DVST), 3D Viewing Devices, Stereoscopic and Virtual-Reality Systems.

Basic Graphics System:

Input devices → image formed in the frame buffer (FB) → output device.

Cathode Ray Tube (CRT):
• The cathode-ray tube is a vacuum tube that contains one or more electron guns and a phosphorescent screen, and is used to display images. It modulates, accelerates, and deflects an electron beam onto the screen to create the images.
• CRT is the technology used in traditional computer monitors and televisions. The image on a CRT display is created by firing electrons from the back of the tube at phosphors located towards the front of the display.

• A CRT monitor contains millions of tiny red, green, and blue phosphor dots that glow when struck by an electron beam that travels across the screen to create a visible image. Electrons are negatively charged.
• The anode is positive, so it attracts the electrons pouring off the cathode.

Raster-Scan Display:
• A raster-scan display is based on intensity control of pixels in the form of a rectangular box, called a raster, on the screen.

Raster-Scan System

• In computer graphics, a raster graphics or bitmap image is a dot matrix data structure that represents a generally rectangular grid of pixels (points of color), viewable via a monitor, paper, or other display medium. Raster images are stored in image files with varying formats.

• Random scan Displays: Random scan monitors draw a picture one line at a time and for this reason are also referred to as vector displays (or stroke-writing or calligraphic displays).

• A pen plotter operates in a similar way and is an example of a random-scan, hard-copy device.

Color CRT Monitors:
• A color CRT monitor displays color pictures by using a combination of phosphors that emit different-colored light. There are two popular approaches for producing color displays with a CRT:
1. Beam Penetration Method
2. Shadow-Mask Method

Beam Penetration Method:
• The beam-penetration method has been used with random-scan monitors. In this method, the CRT screen is coated with two layers of phosphor, red and green, and the displayed color depends on how far the electron beam penetrates the phosphor layers.

• This method produces four colors only, red, green, orange and yellow.

• A beam of slow electrons excites the outer red layer only; hence screen shows red color only.

• A beam of high-speed electrons excites the inner green layer. Thus screen shows a green color.

• This method is commonly used in random-scan monitors.

Advantages: Inexpensive.

Disadvantages:
• Only four colors are possible.
• Quality of pictures is not as good as with other methods.

Shadow-Mask Method:
• The shadow-mask method is commonly used in raster-scan systems because it produces a much wider range of colors than the beam-penetration method.
• It is used in the majority of color TV sets and monitors.

Magnified phosphor dot triangle

Shadow mask CRT

Construction:
• A shadow-mask CRT has three phosphor color dots at each pixel position: one phosphor dot emits red light, another emits green light, and the third emits blue light.
• This type of CRT has three electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen.
• The shadow-mask grid is pierced with small round holes in a triangular pattern.

Advantages:
• Realistic image
• Millions of different colors can be generated
• Shadow scenes are possible

Disadvantages:
• Relatively expensive compared with the monochrome CRT
• Relatively poor resolution
• Convergence problem

Direct-View Storage Tubes (DVST):
• A direct-view storage tube (DVST) stores the picture information as a charge distribution just behind the phosphor-coated screen.
• Two electron guns are used in a DVST.

Flat-Panel Displays:
• The term flat-panel display refers to a class of video devices that have reduced volume, weight, and power requirements compared to a CRT.
• Flat-panel displays fall into two categories: emissive displays and non-emissive displays.

3D Viewing Devices:
• These systems are used in medical applications to view data from ultrasonography, and in geological and topological applications.

Stereoscopic and Virtual-Reality Systems:
• These provide a 3D effect by presenting a different view to each eye of an observer, so that scenes appear to have depth.
• Stereoscopic viewing is a component in virtual-reality systems, where users can step into a scene and interact with the environment.

***There is no way to Beat Us***


UNIT-II

Raster-Scan Systems and Random-Scan Systems | Input Devices and Hard-Copy Devices | Output Primitives and their Attributes

• Line Drawing Algorithms: DDA Algorithm
• Circle Generating Algorithm
• Properties of Ellipses

Raster-Scan Systems:

An interactive raster graphics system contains several processing units. Apart from the CPU, it contains a special processing unit called the video controller or display controller, which is used to control the operation of the display device.
• Video controller
• Raster-scan display processor

Video Controller:
• A fixed area of the system memory is reserved for the frame buffer, and the video controller is given direct access to the frame-buffer memory.

Architecture of a Raster Graphic System

(CPU, System Memory, Frame Buffer, Video Controller, Monitor)

• Frame-buffer locations, and the corresponding screen positions, are referenced in Cartesian coordinates.


Co-ordinate System

• The screen surface is then represented as the first quadrant of a two-dimensional system, with positive x values increasing to the right and positive y values increasing from bottom to top.
• The basic refresh operations of the video controller are diagrammed.
• Two registers are used to store the coordinates of the screen pixels. Initially, the x register is set to 0 and the y register is set to ymax.


Basic video-controller refresh operations

• A number of other operations can be performed by the video controller, besides the basic refreshing operations.

• For various applications, the video controller can retrieve pixel intensities from different memory areas on different refresh cycles.

Raster-Scan Display Processor:
• The organization of a raster system containing a separate display processor, sometimes referred to as a graphics controller or a display coprocessor.

Raster Graphics system with a display Processor

• The purpose of the display processor is to free the CPU from the graphics chores. In addition to the system memory, a separate display processor memory area can also be provided.


• A major task of the display processor is digitizing a picture definition given in an application program into a set of pixel-intensity values for storage in the frame buffer.

• This digitization process is called scan conversion. Graphics commands specifying straight lines and other geometric objects are scan converted into a set of discrete intensity points.

• Scan converting a straight-line segment, for example, means that we have to locate the pixel positions closest to the line path and store the intensity for each position in the frame buffer.

Random-Scan Systems:
• An application program is input and stored in the system memory along with a graphics package. Graphics commands in the application program are translated by the graphics package into a display file stored in the system memory.

Architecture of a simple random scan system

• Graphics patterns are drawn on a random-scan system by directing the electron beam along the component lines of the picture.
• Lines are defined by the values of their coordinate endpoints, and these input coordinate values are converted to x and y deflection voltages.
• A scene is then drawn one line at a time by positioning the beam to fill in the line between specified endpoints.

Input Devices:
• Various devices are available for data input on graphics workstations.
• Most systems have a keyboard and one or more additional devices specially designed for interactive input.
• The following are the commonly used input devices:

1. Keyboard 2. Mouse 3. Image Scanner 4. Touch Panels 5. Light Pen 6. Voice Systems 7. Trackball and Spaceball 8. Joysticks 9. Data Glove 10. Digitizers

Hard-Copy Devices

There are two major categories of hard-copy devices:

1. Printers: impact or non-impact methods (dot matrix, laser, inkjet, electrostatic, electrothermal).

2. Plotters: drum plotter, flatbed plotter, electrostatic plotter.

Output Primitives and their Attributes

Output Primitives:

Graphics programming packages provide functions to describe a scene in terms of basic geometric structures called output primitives; sets of output primitives are grouped to form more complex structures.

Points and Lines:
• Point plotting is accomplished by converting a single coordinate position furnished by an application program into appropriate operations for the output device, for example a CRT, a random-scan system, a B&W raster system, or an RGB system.


Line Drawing:
• Line drawing is accomplished by calculating intermediate positions along the line path between two specified endpoint positions.
• An output device is then directed to fill in these positions between the endpoints.
• For analog devices, such as a vector pen plotter or a random-scan display, a straight line can be drawn smoothly from one endpoint to the other.
• For a raster device, we load a specified color into the frame buffer at the position corresponding to column x along scan line y.
• A line is a straight one-dimensional figure having no thickness and extending infinitely in both directions.


Pixel positions are referenced by scan-line number and column number:

setPixel(x, y, intensity), where intensity is the color.

• The function for retrieving the current frame-buffer intensity for a specified location is: getPixel(x, y)

Line Drawing Algorithms:
• The Cartesian slope-intercept equation for a straight line is y = m·x + b, with m representing the slope of the line and b the y intercept. Given that the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), the slope is m = (y2 − y1)/(x2 − x1) and the intercept is b = y1 − m·x1.


DDA Algorithm:
• The DDA (Digital Differential Analyzer) algorithm is a scan-conversion method for drawing a line which follows an incremental approach.
• In this algorithm, to draw a line the difference in the pixel points is analyzed, and the line is drawn accordingly.
• The method is said to be incremental because it performs computations at each step using the outcome of the previous step.


• A line segment connects two points in a plane; every point on the segment satisfies the line equation.
• The line equation mentioned above is y = m·x + b, where m is the slope (i.e., m = Δy/Δx) and b is the y intercept of the line.

The incremental calculations are:

y_{i+1} = y_i + m·Δx or x_{i+1} = x_i + Δy/m
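A minimal Python sketch of these DDA increments, assuming integer endpoints (the function name and point-list output are illustrative, not from the course material):

def dda_line(x1, y1, x2, y2):
    # Sample once per unit step along the major axis, so the per-step
    # increment is m*dx (slope <= 1) or dy/m (slope > 1) folded into one value.
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(round(x1), round(y1))]
    x_inc, y_inc = dx / steps, dy / steps
    points, x, y = [], float(x1), float(y1)
    for _ in range(steps + 1):
        points.append((round(x), round(y)))  # round-off: the source of the inaccuracy noted below
        x += x_inc
        y += y_inc
    return points

print(dda_line(0, 0, 6, 3))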

Advantages of the Algorithm:
• It is a simple algorithm.
• It is easy to implement.
• It avoids the multiplication operation, which is costly in terms of time complexity.

Disadvantages of the Algorithm:
• There is the extra overhead of using the round-off function.
• Using the round-off function increases the time complexity of the algorithm.
• The resulting lines are not smooth because of the round-off function.
• The points generated by this algorithm are not perfectly accurate.

Circle Generating Algorithm:

• The circle is a frequently used component in pictures and graphs, so a procedure for generating either full circles or circular arcs is included in most graphics packages.
• More generally, a single procedure can be provided to display either circular or elliptical curves.
• Drawing a circle on the screen is a little more complex than drawing a line.

Properties of a Circle:
• A circle is defined as the set of points that are all at a given distance r from a center position (xc, yc).
• This distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as (x − xc)² + (y − yc)² = r².
• For a circle centered at the origin the equation is x² + y² = r², where r is the radius.
• Solving for y, the equation with the positive square root describes the upper semicircle, and the equation with the negative square root describes the lower semicircle: y = yc ± √(r² − (x − xc)²).
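A small Python sketch of plotting a circle directly from the Cartesian equation, taking the positive and negative square roots for the upper and lower semicircles as described above (practical packages instead use incremental methods such as the midpoint circle algorithm; this version is only illustrative and assumes an integer radius):

import math

def circle_points(xc, yc, r):
    pts = []
    for x in range(-r, r + 1):                # step x across the circle
        y = math.sqrt(r * r - x * x)
        pts.append((xc + x, yc + round(y)))   # positive root: upper semicircle
        pts.append((xc + x, yc - round(y)))   # negative root: lower semicircle
    return pts

print(circle_points(0, 0, 5)[:4])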


Ellipse Generating Algorithms

Properties of Ellipses:
• The midpoint ellipse algorithm is used to draw an ellipse in computer graphics.
• The midpoint ellipse algorithm plots (finds) points of an ellipse in the first quadrant by dividing the quadrant into two regions.
• Each point (x, y) is then projected into the other three quadrants: (−x, y), (x, −y), (−x, −y).


• If the distances to the two foci from any point P = (x, y) on the ellipse are labeled d1 and d2, then the general equation of the ellipse can be stated as d1 + d2 = constant.

ALL IS WELL


UNIT-III

Two-Dimensional Geometric Transformations
• Basic Transformations
• Other Transformations

Two-Dimensional Viewing | Clipping Operations

Basic Transformations:

Definition:
• Changes in orientation, size, and shape are accomplished with geometric transformations that alter the coordinate descriptions of objects.
• The basic geometric transformations are translation, rotation, and scaling.
• Other transformations that are often applied to objects include reflection and shear.

Translation:
• A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another.
• A two-dimensional point is translated by adding translation distances tx and ty to the original coordinate position (x, y) to move the point to a new position (x', y'):

x' = x + tx
y' = y + ty

Translation


• The translation pair (tx, ty) is called a translation vector or shift vector.
• The above equations can also be represented using column vectors, with P = (x, y) and T = (tx, ty).
• The 2D translation equations can then be written in matrix form as:

P' = P + T


Example: Translate the points A(2,2), B(3,3), C(6,3) by 2 units in the x direction and 5 units in the y direction. The translated points are A'(4,7), B'(5,8), C'(8,8).

Rotation:
• A two-dimensional rotation is applied to an object by repositioning it along a circular path in the xy plane.
• To generate a rotation, specify a rotation angle θ and the position (xr, yr) of the rotation point (or pivot point) about which the object is to be rotated.
• This transformation can also be described as a rotation about a rotation axis that is perpendicular to the xy plane and passes through the pivot point.
• The transformation equations apply for rotation of a point position P when the pivot point is at the coordinate origin. Here r is the constant distance of the point from the origin, φ is the original angular position of the point from the horizontal, and θ is the rotation angle.
• Using standard trigonometric identities, we can express the transformed coordinates in terms of the angles θ and φ as:

x' = x cosθ − y sinθ
y' = x sinθ + y cosθ


The rotation matrix is

R(θ) = | cosθ  −sinθ |
       | sinθ   cosθ |

For a clockwise (negative-angle) rotation, the identities cos(−θ) = cosθ and sin(−θ) = −sinθ give

R(−θ) = | cosθ   sinθ |
        | −sinθ  cosθ |


The following figure explains the rotation about various axes:

Scaling: Scaling changes the size of an object.
• A scaling transformation alters the size of an object.
• This operation can be carried out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors Sx and Sy to produce the transformed coordinates (x', y'):

x' = x · Sx
y' = y · Sy

• Scaling factor Sx scales objects in the x direction, while scaling factor Sy scales in the y direction.

In matrix form:

[x'  y'] = [x  y] · | Sx  0  |
                    | 0   Sy |
         = [x·Sx  y·Sy]

or P' = P·S


• Any positive values are valid for the scaling factors Sx and Sy.
• Values < 1 reduce the size of the object.
• Values > 1 produce an enlarged object.
• If both Sx and Sy = 1, the size of the object does not change.
• Example: Scale the polygon with coordinates a(2,5), b(7,10), c(10,2) by 3 units in the x and 3 units in the y direction. The scaled vertices are a'(6,15), b'(21,30), c'(30,6).


Original image After scaling

Scaling


Matrix Representations and Homogeneous Coordinates:
• Many graphics applications involve sequences of geometric transformations. An animation, for example, might require an object to be translated and rotated at each increment of the motion:
1. The coordinates are translated.
2. The translated coordinates are scaled.
3. The scaled coordinates are rotated.
A sketch of this matrix composition follows.
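A minimal sketch of this composition with 3x3 homogeneous matrices in Python/NumPy (the helper names are illustrative; the factor order follows the translate-scale-rotate sequence above, applied right to left):

import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

# Composite matrix: first translate, then scale, then rotate.
M = rotate(np.pi / 4) @ scale(2, 2) @ translate(3, 1)
p = np.array([2, 2, 1])    # homogeneous point (x, y, 1)
print(M @ p)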


Other Transformations:
• Other transformations that are often applied to objects include reflection and shear.

Reflection:
• A reflection is a transformation that produces a mirror image of an object.
• The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis.
• Reflection is the mirror image of the original object. In other words, it is a rotation operation through 180°. In a reflection transformation, the size of the object does not change.
• For reflection axes that are perpendicular to the xy plane, the rotation path is in the xy plane.


The following figures show reflections with respect to the x and y axes, and about the origin, respectively.

Reflection of an object about the x axis:
• Reflection about the line y = 0 (the x axis) is accomplished with the transformation matrix

| 1   0  0 |
| 0  −1  0 |
| 0   0  1 |

• This transformation keeps the x values the same but 'flips' the y values of coordinate positions, giving the resulting orientation of an object after it has been reflected about the x axis.

Reflection of an object about the y axis:
• Reflection about the y axis flips the x coordinates while keeping the y coordinates the same, using the matrix

| −1  0  0 |
|  0  1  0 |
|  0  0  1 |

• The equivalent rotation in this case is 180° through 3D space about the y axis.


Reflection of an object about the x and y axes (i.e., about the origin)

Shearing:
• A transformation that distorts the shape of an object such that the transformed shape appears as if the object were composed of internal layers that had been caused to slide over each other is called a shear.
• A transformation that slants the shape of an object is called a shear transformation.
• There are two shear transformations: x-shear and y-shear.

Shearing

An x-direction shear relative to the x axis:

x' = x + shx · y
y' = y


• One shear shifts x coordinate values and the other shifts y coordinate values.
• An x-direction shear relative to the x axis is produced with the transformation matrix

| 1  shx  0 |
| 0   1   0 |
| 0   0   1 |

Shearing


Two-Dimensional Viewing: The Viewing Pipeline:

Window:
• A world-coordinate area selected for display is called a window. The window defines what is to be viewed.

Viewport:
• An area on a display device to which a window is mapped is called a viewport. The viewport defines where it is to be displayed.

Viewing Transformation:
• The mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation.
• It is otherwise known as the window-to-viewport transformation or the windowing transformation.
• Unfortunately, the same term is now used in window-manager systems to refer to any rectangular screen area that can be moved about, resized, and made active or inactive.

Viewing Transformation

2D Viewing Transformation Pipeline:
• First, we construct the scene in world coordinates using the output primitives and attributes.
• Next, to obtain a particular orientation for the window, we can set up a two-dimensional viewing-coordinate system in the world-coordinate plane and define a window in the viewing-coordinate system.

Viewing Transformation Pipeline

• Transform descriptions in world coordinates to viewing coordinates.

• Define a viewport in normalized coordinates and map the viewing-coordinate description of the scene to normalized coordinates.

• All parts of the picture that lie outside the viewport are clipped, and the contents of the viewport are transferred to device coordinates.

• By changing the position of the viewport, we can view objects at different positions on the display area of an output device.

Window-to-Viewport Coordinate Transformation:
• Once object descriptions have been transferred to the viewing reference frame, we choose the window extents in viewing coordinates and select the viewport limits in normalized coordinates.

• A point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport.
• To maintain the same relative placement in the viewport as in the window, we require

(xv − xvmin)/(xvmax − xvmin) = (xw − xwmin)/(xwmax − xwmin)
(yv − yvmin)/(yvmax − yvmin) = (yw − ywmin)/(ywmax − ywmin)

• Solving these expressions for the viewport position (xv, yv) gives

xv = xvmin + (xw − xwmin)·sx
yv = yvmin + (yw − ywmin)·sy

where the scaling factors are

sx = (xvmax − xvmin)/(xwmax − xwmin)
sy = (yvmax − yvmin)/(ywmax − ywmin)

This conversation is performed with the following sequence of transformations:•Perform a scaling transformation using a fixed point position (xwmin,ywmin) that scales the window area to the size of the viewport.•Translate the scaled window area to the position of the viewport. Relative proportions of objects are maintained if the scaling factors are the same (sx=sy).

Workstation Transformation:
• A workstation transformation is done by selecting a window area in normalized space and a viewport area in the coordinates of the display device.
• Any number of output devices can be open in a particular application, and a window-to-viewport transformation can be performed for each open output device.
• This mapping, called the workstation transformation, is accomplished by selecting a window area in normalized space and a viewport area in the coordinates of the display device.
• Workstation transformations can be used to partition a view so that different parts of normalized space can be displayed on different output devices.

Clipping Operations:
• Any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping.
• The region against which an object is to be clipped is called a clip window.

Types of Clipping:
• Point Clipping
• Line Clipping
• Area Clipping (Polygon)
• Curve Clipping
• Text Clipping
• Exterior Clipping


Point Clipping

Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the following inequalities are satisfied:

xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax

• Here the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate window boundaries or the viewport boundaries.


Line Clipping:
• Line clipping is the process of removing lines or portions of lines outside an area of interest.

Line Clipping

• There are two common algorithms for line clipping:
1. Cohen-Sutherland and 2. Liang-Barsky.

Cohen-Sutherland Line Clipping

• This is one of the oldest and most popular line-clipping procedures.
• Every line endpoint in a picture is assigned a four-digit binary code, called a region code, that identifies the location of the point relative to the boundaries of the clipping rectangle.
• Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window: to the left, right, top, or bottom:

bit 1: left
bit 2: right
bit 3: below
bit 4: above

• A value of 1 in any bit position indicates that the point is in that relative position; otherwise, the bit position is set to 0.

• If a point is within the clipping rectangle, the region code is 0000.

Binary region codes assigned to line endpoints according to relative position with respect to the clipping rectangle.

• A point that is below and to the left of the rectangle has a region code of 0101.

• Starting with the bottom endpoint of the line from P1 to P2, check P1 against the left, right, and bottom boundaries in turn, finding that this point is below the clipping rectangle.
• Find the intersection point P1′ with the bottom boundary and discard the line section from P1 to P1′.
• The line has now been reduced to the section from P1′ to P2. Since P2 is outside the clip window, we check this endpoint against the boundaries and find that it is to the left of the window.

• Intersection points with a clipping boundary can be calculated using the slope-intercept form of the line equation.
• For the second line, find the intersection P3′ with the boundary and eliminate the line section from P3 to P3′.
• By checking region codes for the line section from P3′ to P4, we find that the remainder of the line can be discarded as well.

• For a line with endpoint coordinates (x1, y1) and (x2, y2), the y coordinate of the intersection point with a vertical boundary can be obtained as

y = y1 + m(x − x1)

where the x value is set either to xwmin or xwmax, and the slope of the line is calculated as m = (y2 − y1)/(x2 − x1).
• For a horizontal boundary, x = x1 + (y − y1)/m, with y set either to ywmin or ywmax.
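A compact Python sketch of the Cohen-Sutherland procedure using the region codes and intersection formulas above (the bit constants and function names are illustrative):

INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def region_code(x, y, xwmin, xwmax, ywmin, ywmax):
    code = INSIDE
    if x < xwmin:   code |= LEFT
    elif x > xwmax: code |= RIGHT
    if y < ywmin:   code |= BOTTOM
    elif y > ywmax: code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xwmin, xwmax, ywmin, ywmax):
    c1 = region_code(x1, y1, xwmin, xwmax, ywmin, ywmax)
    c2 = region_code(x2, y2, xwmin, xwmax, ywmin, ywmax)
    while True:
        if not (c1 | c2):        # both codes 0000: trivially accept
            return (x1, y1, x2, y2)
        if c1 & c2:              # endpoints share an outside region: trivially reject
            return None
        c = c1 or c2             # pick an endpoint that lies outside
        if c & TOP:
            x, y = x1 + (x2 - x1) * (ywmax - y1) / (y2 - y1), ywmax
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (ywmin - y1) / (y2 - y1), ywmin
        elif c & RIGHT:
            x, y = xwmax, y1 + (y2 - y1) * (xwmax - x1) / (x2 - x1)
        else:                    # LEFT
            x, y = xwmin, y1 + (y2 - y1) * (xwmin - x1) / (x2 - x1)
        if c == c1:
            x1, y1 = x, y
            c1 = region_code(x1, y1, xwmin, xwmax, ywmin, ywmax)
        else:
            x2, y2 = x, y
            c2 = region_code(x2, y2, xwmin, xwmax, ywmin, ywmax)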

Liang-Barsky Algorithm:
• Faster line clippers have been developed based on analysis of the parametric equations of a line segment, which we can write in the form shown below.

• The parametric equations of a line are

x = x1 + t·Δx
y = y1 + t·Δy, 0 ≤ t ≤ 1

where Δx = x2 − x1 and Δy = y2 − y1.

• A point on the line is inside the clip window if t·pk ≤ qk for k = 1, 2, 3, 4 (corresponding to the left, right, bottom, and top boundaries, respectively); the intersection with boundary k occurs at the parameter value t = qk/pk.

• The p and q values are defined as:

p1 = −Δx, q1 = x1 − xwmin (left boundary)
p2 = Δx, q2 = xwmax − x1 (right boundary)
p3 = −Δy, q3 = y1 − ywmin (bottom boundary)
p4 = Δy, q4 = ywmax − y1 (top boundary)


• When the line is parallel to a view-window boundary, the p value for that boundary is zero.
• When pk < 0, as t increases the line goes from outside to inside (entering).
• When pk > 0, the line goes from inside to outside (exiting).
• When pk = 0 and qk < 0, the line is trivially invisible because it lies outside the view window.
• When pk = 0 and qk > 0, the line is inside the corresponding window boundary.
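These tests translate almost line for line into Python; a sketch (the function name is illustrative):

def liang_barsky(x1, y1, x2, y2, xwmin, xwmax, ywmin, ywmax):
    dx, dy = x2 - x1, y2 - y1
    p = [-dx, dx, -dy, dy]                               # left, right, bottom, top
    q = [x1 - xwmin, xwmax - x1, y1 - ywmin, ywmax - y1]
    t0, t1 = 0.0, 1.0                                    # visible parameter range
    for pk, qk in zip(p, q):
        if pk == 0:
            if qk < 0:
                return None          # parallel to and outside this boundary
        else:
            t = qk / pk
            if pk < 0:
                t0 = max(t0, t)      # entering intersection
            else:
                t1 = min(t1, t)      # exiting intersection
    if t0 > t1:
        return None                  # line lies entirely outside the window
    return (x1 + t0 * dx, y1 + t0 * dy, x1 + t1 * dx, y1 + t1 * dy)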

Area Clipping (Polygon)

• A polygon is a collection of lines. Since a polygon is a closed solid area, it should remain closed after clipping.
• A polygon can be clipped by specifying the clipping window.
• First the polygon is clipped against the left edge of the clipping window to get the new vertices of the polygon.
• The Sutherland-Hodgeman polygon clipping algorithm is used for polygon clipping.
• There are four possible cases when processing vertices in sequence around the perimeter of a polygon.


• The output of the algorithm is a list of polygon vertices, all of which are on the visible side of a clipping plane.
• This is achieved by processing the two vertices of each edge of the polygon against the clipping boundary or plane.
• This results in four possible relationships between the edge and the clipping boundary or plane.
• There are two key processes in this algorithm:
1. Determining the visibility of a point or vertex (inside-outside test), and
2. Determining the intersection of the polygon edge and the clipping plane.
• If the plane is considered to lie in the xy plane, then for a boundary edge AB and a point V the vector cross product AV × AB has only a z component, given by z = (xV − xA)(yB − yA) − (yV − yA)(xB − xA).


If z is:
• Positive: the point is on the right side of the window boundary.
• Zero: the point is on the window boundary.
• Negative: the point is on the left side of the window boundary.

Sutherland-Hodgeman Polygon Clipping Algorithm:

Step 1: Read the coordinates of all vertices of the polygon.

Step 2: Read the coordinates of the clipping window.

Step 3: Consider the left edge of the window.

Step 4: Compare the vertices of each edge of the polygon, individually, with the clipping plane.

Step 5: Save the resulting intersections and vertices in the new list of vertices, according to the four possible relationships between the edge and the clipping boundary.

Step 6: Repeat steps 4 and 5 for the remaining edges of the clipping window. Each time, the resulting list of vertices is passed on to process the next edge of the clipping window.

Step 7: Stop.
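A single pass of the algorithm, against the left window edge, sketched in Python (the full clipper feeds the output list successively to the right, bottom, and top edges in the same way; names are illustrative):

def clip_left(vertices, xwmin):
    out = []
    for i, (cx, cy) in enumerate(vertices):
        px, py = vertices[i - 1]                 # previous vertex (wraps around)
        cur_in, prev_in = cx >= xwmin, px >= xwmin
        if cur_in != prev_in:                    # edge crosses the boundary: save intersection
            t = (xwmin - px) / (cx - px)
            out.append((xwmin, py + t * (cy - py)))
        if cur_in:                               # inside vertices are always saved
            out.append((cx, cy))
    return out

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(clip_left(square, 1))    # [(1, 0.0), (4, 0), (4, 4), (1, 4.0)]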


Curve Clipping:
• Curve-clipping procedures involve nonlinear equations.

Step 1: The bounding rectangle for a circle or other curved object can be used first to test for overlap with a rectangular clip window.

Step 2: If the bounding rectangle for the object is completely inside the window, the object is saved.

Step 3: If the bounding rectangle is completely outside the window, the object is discarded; otherwise, intersection calculations with the window boundaries are needed.

Before Clipping

After Clipping


Text Clipping:
• Various techniques are used to provide text clipping in computer graphics.
• The simplest method for processing character strings relative to a window boundary is all-or-none string clipping.

All-or-None Character Clipping:
• In this case, the boundary limits of individual characters are compared to the window; any character that overlaps or is outside a window boundary is clipped.


Clipping Individual Character Components:
• Here, the characters are treated as lines.

Clipping individual characters

• If a character is on the boundary of the clipping window, we discard only the portion of the character that is outside of the clipping window.


Exterior Clipping:
• Clipping a picture to the exterior (outside) of a specified region is called exterior clipping.
• In this clipping method, the parts of the picture to be saved are those that lie outside the region.

Application areas of exterior clipping:

1. Multiple-window systems.

2. Any application that requires overlapping pictures.

Procedure: A line P1P2 can be clipped in two passes:

1. First, P1P2 is clipped to the interior of the convex polygon V1, V2, V3, V4 to get a clipped segment P1′P2′.
2. An exterior clip of P1′P2′ is then performed against the convex polygon V1, V3, V5 to get the clipped line P1′′P2′.

• The line segment resulting from exterior clipping is P1′′P2′.

***Time is Gold***


UNIT-IV

Three-Dimensional Concepts
• Three-Dimensional Display Methods
• Three-Dimensional Geometric and Modeling Transformations
• Three-Dimensional Viewing
• Projections

Three-Dimensional Display Methods:
• To obtain a display of a three-dimensional scene that has been modeled in world coordinates, we must first set up a coordinate reference for the "camera".
• Object descriptions are then transferred to the camera reference coordinates and projected onto the selected display plane.

1. Parallel Projection
2. Perspective Projection
3. Depth Cueing
4. Visible-Line and Surface Identification
5. Surface Rendering
6. Exploded and Cutaway Views
7. Three-Dimensional and Stereoscopic Views

Parallel Projection:
• This technique is used in engineering and architectural drawings to represent an object with a set of views that maintain the relative proportions of the object.
• In a parallel projection, parallel lines in the world-coordinate scene project into parallel lines on the two-dimensional display plane.
• The appearance of the solid object can be reconstructed from the major views.

Perspective Projection:
• In a perspective projection, parallel lines in a scene that are not parallel to the display plane are projected into converging lines.

• Scenes displayed using perspective projections appear more realistic, since this is the way that our eyes and a camera lens form images.

Depth Cueing:
• Depth cueing supplies depth information so that, for a given viewing direction, we can tell which is the front and which is the back of displayed objects.


Visible-Line and Surface Identification:
• The simplest method is to highlight the visible lines or to display them in a different color.
• Another technique, commonly used for engineering drawings, is to display the nonvisible lines as dashed lines.

Surface Rendering:
• Surface properties include the degree of transparency and how rough or smooth the surfaces are to be.

Wireframe => Surface Rendering => Light, Texture, etc.

• Added realism is attained in displays by setting the surface intensity of objects according to the lighting conditions in the scene and according to assigned surface characteristics.


Exploded and Cutaway Views:

• Exploded and cutaway views of such objects can then be used to show the internal structure and relationships of the object parts.


Three-Dimensional and Stereoscopic Views:
• Three-dimensional views can be obtained by reflecting a raster image from a vibrating flexible mirror.
• The vibrations of the mirror are synchronized with the display of the scene on the CRT.

• Stereoscopic devices present two views of a scene; one for the left eye and the other for the right eye.


Polygon Surfaces:
• The most commonly used boundary representation for a three-dimensional graphics object is a set of surface polygons that enclose the object interior.
• Many graphics systems store all object descriptions as sets of surface polygons.
• A polygon representation for a polyhedron precisely defines the surface features of the object.
• Polygon descriptions are referred to as "standard graphics objects".

Polygon Tables:
• The polygon surface is specified with a set of vertex coordinates and associated attribute parameters.
• Polygon data tables can be organized into two groups:
1. Geometric tables and 2. Attribute tables.

Geometric tables contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces.

Attribute tables contain attribute information for an object, such as parameters specifying the degree of transparency of the object and its surface reflectivity and texture characteristics.

Three lists are used:
• Vertex table (the coordinate values for each vertex are stored in this table)
• Edge table (pointers back into the vertex table to identify the vertices for each edge)
• Polygon table (pointers back into the edge table to identify the edges for each polygon)
A sketch of these tables follows.
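A minimal Python sketch of the three geometric tables for two triangles sharing an edge (the labels V1..V4, E1..E5, P1..P2 are illustrative):

vertex_table = {                     # coordinate values for each vertex
    "V1": (0.0, 0.0, 0.0), "V2": (1.0, 0.0, 0.0),
    "V3": (1.0, 1.0, 0.0), "V4": (0.0, 1.0, 0.0),
}
edge_table = {                       # each edge points back into the vertex table
    "E1": ("V1", "V2"), "E2": ("V2", "V3"), "E3": ("V3", "V1"),
    "E4": ("V3", "V4"), "E5": ("V4", "V1"),
}
polygon_table = {                    # each polygon points back into the edge table
    "P1": ("E1", "E2", "E3"),
    "P2": ("E3", "E4", "E5"),
}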


Plane Equations:

The equation for a plane surface is Ax + By + Cz + D = 0, where (x, y, z) is any point on the plane and the coefficients A, B, C, and D are constants describing its spatial properties.

Polygon Meshes:
• A polygon mesh is the collection of vertices, edges, and faces that make up a 3D object.
• A polygon mesh defines the shape and contour of every 3D character and object, whether it is used for 3D animated film, advertising, or video games.
• One type of polygon mesh is the triangle strip.
• Another similar function is the quadrilateral mesh, which generates a mesh of (n − 1) by (m − 1) quadrilaterals, given the coordinates for an n by m array of vertices.

Triangle Strip / Quadrilateral Mesh


Quadric Surfaces:
• A frequently used class of objects are the quadric surfaces, which are described with second-degree equations (quadratics).
• They include spheres, ellipsoids, tori, paraboloids, and hyperboloids.


Sphere:
• In Cartesian coordinates, a spherical surface with radius r centered on the coordinate origin is defined as the set of points (x, y, z) that satisfy the equation x² + y² + z² = r².
• In parametric form, using latitude angle φ and longitude angle θ:

x = r cosφ cosθ, −π/2 ≤ φ ≤ π/2
y = r cosφ sinθ, −π ≤ θ ≤ π
z = r sinφ

Ellipsoid:
• An ellipsoid surface is an extension of a spherical surface where the radii in three mutually perpendicular directions can have different values.
• The Cartesian representation for points over the surface of an ellipsoid centered on the origin is

(x/rx)² + (y/ry)² + (z/rz)² = 1

• The parametric representation for the ellipsoid in terms of the latitude angle φ and the longitude angle θ is

x = rx cosφ cosθ, −π/2 ≤ φ ≤ π/2
y = ry cosφ sinθ, −π ≤ θ ≤ π
z = rz sinφ

Torus:
• A torus is a doughnut-shaped object.
• It can be generated by rotating a circle or another conic about a specified axis.

• The Cartesian representation for points over the surface of a torus can be written in the form

(r − √((x/rx)² + (y/ry)²))² + (z/rz)² = 1

where r is any given offset value.


• The parametric representation for a torus is similar to that for an ellipse, except that the angle φ extends over 360°.
• Using latitude and longitude angles φ and θ, we can describe the torus surface as the set of points that satisfy

x = rx (r + cosφ) cosθ, −π ≤ φ ≤ π
y = ry (r + cosφ) sinθ, −π ≤ θ ≤ π
z = rz sinφ

Blobby Objects:
• Some objects do not maintain a fixed shape but change their surface characteristics in certain motions or when in proximity to other objects.
• Examples in this class of objects include molecular structures, water droplets and other liquid effects, melting objects, and muscle shapes in the human body.


Spline Representations:
• A spline is a flexible strip used to produce a smooth curve through a designated set of points.


Three-Dimensional Geometric and Modeling Transformations

Basic Transformations:
• Geometric transformations and object modeling in three dimensions are extended from two-dimensional methods by including considerations for the z coordinate.
• The basic geometric transformations are translation, rotation, and scaling.
• Other transformations that are often applied to objects include reflection and shear.

Translation:
• In a three-dimensional homogeneous coordinate representation, a point or an object is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation shown below.


| x' |   | 1 0 0 tx |   | x |
| y' | = | 0 1 0 ty | · | y |        --------(1)
| z' |   | 0 0 1 tz |   | z |
| 1  |   | 0 0 0 1  |   | 1 |

This gives the three equations x' = x + tx, y' = y + ty, z' = z + tz, or P' = T·P.

• Parameters tx, ty, and tz, specifying translation distances for the coordinate directions x, y, and z, can be assigned any real values.
• Translating a point with translation vector T = (tx, ty, tz).

Translation of a point

Rotation:
• To generate a rotation transformation for an object, an axis of rotation must be designated (about which the object is to be rotated), and the amount of angular rotation must also be specified.
• Positive rotation angles produce counterclockwise rotations about a coordinate axis.

Coordinate-Axis Rotations:
• The 2D z-axis rotation equations are easily extended to 3D:

x' = x cosθ − y sinθ
y' = x sinθ + y cosθ
z' = z        --------(2)

Parameter θ specifies the rotation angle.

| x' |   | cosθ −sinθ 0 0 |   | x |
| y' | = | sinθ  cosθ 0 0 | · | y |
| z' |   | 0     0    1 0 |   | z |
| 1  |   | 0     0    0 1 |   | 1 |

which can be written more compactly as P' = Rz(θ)·P.


Rotation of an object about the z axis

Rotation of an object about the x axis:

y' = y cosθ − z sinθ
z' = y sinθ + z cosθ
x' = x

or P' = Rx(θ)·P

Rotation of an object about the y axis:

z' = z cosθ − x sinθ
x' = z sinθ + x cosθ
y' = y

or P' = Ry(θ)·P
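A NumPy sketch of the three coordinate-axis rotation matrices in 4x4 homogeneous form (the function names are illustrative):

import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

p = np.array([1.0, 0.0, 0.0, 1.0])     # homogeneous point (x, y, z, 1)
print(rot_z(np.pi / 2) @ p)            # rotates (1,0,0) to approximately (0,1,0)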


Scaling: Scaling is used to change the size of an object.
• The size can be increased or decreased.
• Three scaling factors are required: Sx, Sy, and Sz:

Sx = scaling factor in the x direction
Sy = scaling factor in the y direction
Sz = scaling factor in the z direction

Scaling of an object relative to a fixed point:
• Translate the fixed point to the origin.
• Scale the object relative to the origin.
• Translate the object back to its original position.
• Scaling of an object with fixed point (a, b, c) can therefore be represented as the composition T(a, b, c) · S(Sx, Sy, Sz) · T(−a, −b, −c).

Scaling with a sequence of transformations:

P' = (x', y', z', 1), P = (x, y, z, 1), and S is the matrix for scaling; in short, P' = S·P.


Other Transformations:
• Transformations often applied to objects include reflection and shear.

Reflections: 3D reflections are similar to 2D.
• A reflection is also called a mirror image of an object. For this, a reflection axis or reflection plane is selected.
• Three-dimensional reflections are similar to two-dimensional ones; a reflection is a 180° rotation about the given axis.
• For a reflection, a plane is selected (xy, xz, or yz). The following matrices show reflection with respect to each of these three planes.

Reflection relative to the xy plane


Reflection relative to the yz plane

Reflection relative to the zx plane


Shearing:
• Shearing is a change in the shape of an object; it is also called deformation. In 2D, the change can be in the x direction, the y direction, or both directions.
• If shear occurs in both directions, the object will be distorted. In 3D, shear can occur in three directions.

Matrix for shear:
• Parameters a and b can be assigned any real values to produce a z-axis shear, which alters x and y values by an amount proportional to z while leaving z unchanged.

Shearing of a unit cube

Composite Transformations:
• A number of transformations, or a sequence of transformations, can be combined into a single one, called a composition. The resulting matrix is called a composite matrix. The process of combining is called concatenation.

Example showing composite transformations:
• The enlargement is with respect to the object's center.
• For this, the following sequence of transformations is performed, and all are combined into a single one:

Step 1: The object is kept at its position, as in fig (a).

Step 2: The object is translated so that its center coincides with the origin, as in fig (b).

Step 3: Scaling of the object, keeping the object at the origin, is done in fig (c).

Step 4: A second translation is done. This is called a reverse translation; it returns the object to its original location.


Three-Dimensional Viewing

Three-dimensional graphics applications:
• View an object from any spatial position: from the front, from above, or from the back.
• Generate a view of what we could see if we were standing in the middle of a group of objects or inside a single object, such as a building.

Viewing Pipeline Steps:

1. Position the camera at a particular point in space.

2. Decide the camera orientation, i.e., point the camera and rotate it around the line of sight to set up the direction for the picture.

3. When the shutter is snapped, the scene is cropped to the size of the "window" of the camera, and light from the visible surfaces is projected onto the camera film.

Three-dimensional transformation pipeline

Viewing Coordinates: Specifying the view plane.
• The view for a scene is chosen by establishing the viewing-coordinate system, also called the view reference coordinate system.

View coordinate system:
• A view plane, or projection plane, is set up perpendicular to the viewing zv axis.

Transformation from world to viewing coordinates. This transformation sequence is:

1. Translate the view reference point to the origin of the world-coordinate system.

2. Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes, respectively.

• Given vectors N and V, the unit vectors are calculated as

n = N / |N| = (n1, n2, n3)
u = (V × N) / |V × N| = (u1, u2, u3)
v = n × u = (v1, v2, v3)

• The composite rotation matrix for the viewing transformation is

    | u1 u2 u3 0 |
R = | v1 v2 v3 0 |
    | n1 n2 n3 0 |
    | 0  0  0  1 |

which transforms u onto the world xw axis, v onto the yw axis, and n onto the zw axis. The complete world-to-viewing transformation matrix is obtained as the matrix product

Mwc,vc = R·T


Projections:
• Projection is the process of converting a 3D object into a 2D representation. It is also defined as the mapping or transformation of the object onto a projection plane or view plane.
• The view plane is the display surface.
• There are two basic types of projection:
1. Parallel Projection
2. Perspective Projection

Parallel Projection:
• Parallel projections are specified with a projection vector that defines the direction for the projection lines.
• There are two types of parallel projection:
1. Orthographic
2. Oblique


Orthographic Projection:
• Orthographic projections are used to produce the front, side, and top views of an object.
• Front, side, and rear orthographic projections of an object are called elevations.
• A top orthographic projection is called a plan view.
• An orthographic projection that displays more than one face of an object is called an axonometric orthographic projection.
• The most commonly used axonometric projection is the isometric projection.


• The transformation equations for an orthographic parallel projection are straightforward.

Orthographic projection of a point

• If the view plane is placed at position zvp along the zv axis, then any point (x, y, z) in viewing coordinates is transformed to projection coordinates as xp = x, yp = y.


Oblique Projection:
• An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane.
• In the figure below, α and φ are the two angles involved.

Oblique Projection

• Point (x, y, z) is projected to position (xp, yp) on the view plane.
• The oblique projection line from (x, y, z) to (xp, yp) makes an angle α with the line on the projection plane that joins (xp, yp) and (x, y).

• The projection coordinates are expressed in terms of x, y, L, and φ as

xp = x + L cosφ
yp = y + L sinφ

• Length L depends on the angle α and the z coordinate of the point to be projected:

tanα = z / L

thus

L = z / tanα = z·L1, where L1 = 1/tanα is the value of L when z = 1.

• The oblique projection equations are:

xp = x + z(L1 cosφ)
yp = y + z(L1 sinφ)

• The transformation matrix for producing any parallel projection onto the xv yv plane is

             | 1 0 L1cosφ 0 |
M_parallel = | 0 1 L1sinφ 0 |
             | 0 0 1      0 |
             | 0 0 0      1 |

• An orthographic projection is obtained when L1 = 0 (which occurs at a projection angle α of 90°).
• Oblique projections are generated with nonzero values for L1.

Perspective Projection:
• "Perspective" derives from the Latin word "perspicere", which means to see through.
• Perspective in the graphic arts, such as drawing, is an approximate representation, on a flat surface, of an image as it is perceived by the eye.

• To obtain a perspective projection of a 3D object, we transform points along projection lines that meet at the projection reference point.
• Coordinate positions along a perspective projection line can be described in parametric form as

x' = x − x·u
y' = y − y·u
z' = z − (z − zprp)·u

• Parameter u takes values from 0 to 1, and the coordinate position (x', y', z') represents any point along the projection line. When u = 0, the point is at P = (x, y, z).
• In this representation, the homogeneous factor is

h = (zprp − z)/dp

and the projection coordinates on the view plane are calculated from the homogeneous coordinates as,


xp = xh / h
yp = yh / h

where (xh, yh) are the homogeneous coordinates, and the original z coordinate value is retained in the projection coordinates for depth processing.

Perspective Projection

***Thank You***

UNIT-V

Visible-Surface Detection Methods
• Visible-Surface Detection Algorithms
• Back-Face Detection
• Depth-Buffer Method (or Z-Buffer Algorithm)
• A-Buffer Method
• BSP Tree Method
• Area-Subdivision Method

Visible Surfaces:
• The various algorithms are referred to as visible-surface detection methods.
• These are the most commonly used methods for detecting visible surfaces in a three-dimensional scene.

Classification of Visible-Surface Detection Algorithms:
• Visible-surface detection algorithms are broadly classified according to whether they deal with object definitions directly or with their projected images.
• These two approaches are called object-space methods and image-space methods, respectively.

Object space:
• An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
• The computation time of such an algorithm tends to grow with the number of objects in the scene, whether visible or not.
• Object-space methods are used to identify visible lines in wireframe displays.

Image space:
• In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane.
• The algorithm calculates an intensity for each of the 250,000 to 1 million distinct dots on the screen.
• Image-space visible-surface algorithms can be adapted easily to visible-line detection.

Back-Face Detection:
• A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" test.
• A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0.
• The test is done by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C).

• If V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back face if

V·N > 0

• If object descriptions have been converted to projection coordinates and the viewing direction is parallel to the viewing zv axis, then V = (0, 0, Vz) and V·N = Vz·C.

Vector V in the viewing direction and a back-face normal vector N of a polyhedron


• In a right-handed viewing system with the viewing direction along the negative zv axis, the polygon is a back face if C < 0.
• In general, a polygon is a back face if its normal vector has a z component value C ≤ 0.

Back face when the viewing direction is along the negative zv axis

• Back faces have normal vectors that point away from the viewing position and are identified by C ≥ 0 when the viewing direction is along the positive zv axis.
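The V·N test reduces to a one-line check; a Python sketch (the names are illustrative):

import numpy as np

def is_back_face(normal, view_dir):
    # Back face when V . N > 0: the surface normal points away from the viewer.
    return np.dot(view_dir, normal) > 0

# Viewing along the negative zv axis, V = (0, 0, -1), so the test reduces
# to the sign of the normal's z component C (back face if C < 0).
print(is_back_face(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, -1.0])))   # True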


Depth-Buffer Method (or Z-Buffer Algorithm):
• A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane. This procedure is also referred to as the z-buffer method.
• Surface depth is measured from the view plane along the z axis of the viewing system.
• Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position.


Depth buffer Method

The algorithm is:

for each polygon P
    for each pixel (x, y) in P
        compute z_depth at (x, y)
        if z_depth < z_buffer(x, y) then
            set_pixel(x, y, color)
            z_buffer(x, y) = z_depth

To calculate the z values, the plane equation

Ax + By + Cz + D = 0

is used, where (x, y, z) is any point on the plane and the coefficients A, B, C, and D are constants describing the spatial properties of the plane; solving for depth gives z = (−Ax − By − D)/C.
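A NumPy sketch of the per-pixel depth test (the buffer sizes and the "smaller z is closer" convention follow the pseudocode above; this is illustrative, not a full scan-conversion loop):

import numpy as np

WIDTH, HEIGHT = 640, 480
z_buffer = np.full((HEIGHT, WIDTH), np.inf)     # initialized to "farthest possible"
frame_buffer = np.zeros((HEIGHT, WIDTH, 3))     # RGB refresh buffer

def plot(x, y, z_depth, color):
    if z_depth < z_buffer[y, x]:                # closer than what is stored?
        frame_buffer[y, x] = color
        z_buffer[y, x] = z_depth

plot(10, 20, 0.5, (1.0, 0.0, 0.0))
plot(10, 20, 0.9, (0.0, 1.0, 0.0))              # farther: rejected by the depth test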


Advantages:
• It is very easy to implement.
• It can be implemented in hardware to overcome the speed problem.

Disadvantages:
• It requires an additional buffer and hence a large amount of memory. E.g., for 640 x 480:

1. Using a real (4-byte) depth per pixel: 640 x 480 x 4 bytes = 1,228,800 bytes.

2. Usually a 24-bit z-buffer is used: 640 x 480 x 3 bytes = 921,600 bytes.

3. Additional z-buffers may be needed for special effects, e.g., shadows.


A-Buffer Method:
• An extension of the ideas in the depth-buffer method is the A-buffer method (at the other end of the alphabet from "z-buffer", where z represents depth).
• The A-buffer method represents an antialiased, area-averaged accumulation-buffer method developed by Lucasfilm for implementation in the surface-rendering system called REYES (an acronym for "Renders Everything You Ever Saw").
• Each position in the A-buffer has two fields:

1. Depth field: stores a positive or negative real number.

2. Intensity field: stores surface-intensity information or a pointer value.

• The surface color field contains a pointer to a linked list of surface data comprising: RGB intensity components, opacity parameter, depth, percent of area coverage, surface identifier, and other surface-rendering parameters.

Scanline Method:
• The scanline method of hidden-surface removal is another image-space approach. This method deals with more than one surface at a time.
• The scanline algorithm maintains an active edge list.
• Scan lines are processed from left to right.
• At the leftmost boundary of a surface, the surface flag is turned ON, and at the rightmost boundary it is turned OFF.

Scanline crossing the projection of the surfaces

• For scan line 2, the active edge list contains edges AD, EH, BC, and FG.
• Between edges EH and BC, the flags for both surfaces are ON.

Depth-Sorting Method:
• The algorithm begins by sorting surfaces by depth.
• The basic idea of the painter's algorithm is to paint the polygons into the frame buffer in order of decreasing distance from the viewpoint.

• Using both image-space and object-space operations, the depth-sorting method performs the following basic functions:

1. Surfaces are sorted in order of decreasing depth.

2. Surfaces are scan converted in order, starting with the surface of greatest depth.

• In creating an oil painting, an artist first paints the background colors.


• There is no need to erase portions of the background.
• This process is continued as long as no overlaps occur.
• The idea behind splitting is that a split polygon may not obscure other polygons.
• Sort all polygons in order of decreasing depth.
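The initial depth sort of the painter's algorithm, sketched in Python (the depth key assumes larger z means farther from the viewer; the overlap tests and polygon splitting described above are omitted):

def painters_order(polygons):
    # Each polygon is a list of (x, y, z) vertices; sort farthest first,
    # so nearer polygons are painted over farther ones.
    return sorted(polygons, key=lambda poly: max(v[2] for v in poly), reverse=True)

far_tri  = [(0, 0, 9), (1, 0, 9), (0, 1, 9)]
near_tri = [(0, 0, 1), (1, 0, 1), (0, 1, 1)]
print(painters_order([near_tri, far_tri])[0] is far_tri)   # True: painted first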


BSP Tree Method:
• A binary space-partitioning (BSP) tree is an efficient method for determining object visibility by painting surfaces onto the screen from back to front, as in the painter's algorithm.
• The BSP tree is particularly useful when the view reference point changes, but the objects in a scene are at fixed positions.

• Applying a BSP tree to visibility testing involves identifying surfaces that are "inside" and "outside" the partitioning plane at each step of the space subdivision, relative to the viewing direction.
• The space is partitioned into two sets of objects by using the plane P1.
• Objects A and C are in front of P1, and objects B and D are behind P1.
• Similarly, the space is partitioned again with plane P2.

Binary Tree Construction:
• In a BSP tree, the objects are represented as terminal nodes.
• Front objects are represented as left branches, and back objects are represented as right branches.
• The polygon equations are then used to identify "inside" and "outside" polygons.


• Any polygon intersected by a partitioning plane is split into two parts.

BSP Tree Representation


Area-Subdivision Method:
• The area-subdivision method is an image-space method, but object-space operations can be used to accomplish depth ordering of surfaces.
• This method is applied by successively dividing the total viewing area into smaller and smaller rectangles until each small area is the projection of part of a single visible surface, or of no surface at all.

Implementation:
• Tests must be done to determine whether the total area should be subdivided into smaller rectangles.
• If the tests indicate that the view is complex, the area is subdivided.
• This process is continued until the subdivided areas are reduced to the size of a single pixel.

• A viewing area with a resolution of 1024 x 1024 could be subdivided ten times in this way before a subarea is reduced to a pixel.
• A surface can have four possible relationships with an area boundary. They are as follows:

1. Surrounding surface 2. Overlapping surface 3. Inside surface 4. Outside surface


Surrounding surface: one that completely encloses the area.

Overlapping surface: one that is partly inside and partly outside the area.

Inside surface: one that is completely inside the area.

Outside surface: one that is completely outside the area.


No further subdivision of an area is needed if one of the following conditions is true:

1. All surfaces are outside surfaces with respect to the area.

2. Only one inside, overlapping, or surrounding surface is in the area.

3. A surrounding surface obscures all other surfaces within the area boundaries.

RGB Color Model:
• The RGB color model is one of the most widely used color representation methods in computer graphics. Its primaries are R (red), G (green), and B (blue).
• Each primary color can take an intensity value ranging from 0 (lowest) to 1 (highest).

A color C is expressed in terms of the RGB components as

C = R·R + G·G + B·B

RGB Color Model

Examples: (0, 0, 0) for black, (1, 1, 1) for white, (1, 1, 0) for yellow, (0.7, 0.7, 0.7) for gray.

CMY Color Model:
• The CMY color model uses a subtractive process; this concept is used in printers.
• In the CMY model, we begin with white and take away the appropriate primary components to yield a desired color.
• The coordinate system of the CMY model uses the three primaries' complementary colors: C (cyan), M (magenta), and Y (yellow).

CMY Color Model

• The corner of the CMY color cube at (0, 0, 0) corresponds to white, whereas the corner of the cube at (1, 1, 1) represents black.
• The conversion from RGB to CMY is expressed as (C, M, Y) = (1, 1, 1) − (R, G, B), where white is represented in the RGB system as the unit column vector.
• Similarly, the conversion from CMY to RGB is expressed as (R, G, B) = (1, 1, 1) − (C, M, Y), where black is represented in the CMY system as the unit column vector.
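Both conversions are one-line vector subtractions; a Python sketch:

def rgb_to_cmy(r, g, b):
    return (1 - r, 1 - g, 1 - b)    # CMY = (1, 1, 1) - RGB

def cmy_to_rgb(c, m, y):
    return (1 - c, 1 - m, 1 - y)    # RGB = (1, 1, 1) - CMY

print(rgb_to_cmy(1, 1, 0))          # yellow in RGB -> (0, 0, 1) in CMY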

***ALL THE BEST***


