CGM PPT

Post on 03-Feb-2016


Also see viva questions: http://engglabmanuals.blogspot.in/2011/09/cs2405-computer-graphics-viva.html


DCCS -503

COMPUTER GRAPHICS AND MULTIMEDIA

BY MUKTESH GUPTA

Definition of Computer Graphics

Computer graphics is one of the most effective and commonly used ways to communicate processed information to the user. It displays information in the form of graphics objects such as pictures, charts, graphs and diagrams instead of simple text.

We can say that computer graphics makes it possible to express data in pictorial form. It involves the display, manipulation and storage of pictures and experimental data for proper visualization using a computer.

APPLICATION OF COMPUTER GRAPHICS

• Graphical user interface (GUI): menus, dialog boxes, icons, scroll boxes, cursors, buttons etc.

• Graphics plotting in commercial applications and technology.

• Plotting in business.

• Office Automation.

• Scientific visualization

• Electronic Publishing.

• Desktop Publishing .

• Computer Aided Design (CAD).

• In simulation.

• In multimedia.

• In animation.

• In video games.

• In Education field.

GRAPHICS PRIMITIVES

GRAPHICS PRIMITIVES:- A graphics primitive is a basic, indivisible graphical element used for input or output within a computer graphics system.

GRAPHICS BASIC PRIMITIVES:- In order to display the image on various display devices, special procedures are required.

The basic graphics primitives are-(1) points,(2)lines,(3)circles,(4)ellipse,(5)other curves.

(1) POINTS:- A point is the smallest unit of graphics. It represents a single location on a coordinate system.

(2) LINE:- A line is the simplest geometrical structure in graphics. Line drawing is done by calculating the intermediate positions between two specified end-points.

ALGORITHMS FOR LINE DRAWING:- (a) digital differential analyzer (DDA), (b) Bresenham's line drawing algorithm, (c) parallel line algorithm.

(3) CIRCLE GENERATION:- A circle can be defined in two ways: (a) polynomial method, (b) DDA method.

Let a calculated point is (x,y),then we can get eight different points-

(x,y) (y,x)

(x,-y) (-y,x)

(-x,y) (y,-x)

(-x,-y) (-y,-x)

ALGORITHMS FOR CIRCLE:- (a) midpoint circle algorithm, (b) Bresenham's algorithm.

(4)ELLIPSE:- An ellipse is defined as the set of points, such that the sum of distances from two fixed points is same for all points.

METHODS FOR DEFINING AN ELLIPSE:- (a) polynomial method

(b) trigonometric method

(c) ellipse axis rotation

(d) midpoint ellipse algorithm

(5) OTHER CURVES:- Different curve functions play a major role in object modeling, animation path specification, data and function graphing and other graphics applications.

Similar to circles and ellipses, various functions possess symmetries that can be exploited to minimize the computation of coordinate positions along the curve path.

DIRECT VIEW STORAGE TUBE

A Direct View Storage Tube (DVST) stores the picture information as a charge distribution just behind the phosphor-coated screen.

This is an alternate method of maintaining a screen image, as it stores the picture information inside the CRT instead of refreshing the screen.


Structure of DVST

• Two guns are used: one is the primary gun and the second is the flood gun.

• The first one stores the picture pattern and the second one maintains the picture display.

• The primary electron gun is used to draw the picture definition on the storage grid, a non-conducting material.

• High-speed electrons from the primary gun strike the storage grid and knock out electrons, which are attracted to the collector grid.

• The storage grid being non-conducting, the areas where electrons have been removed keep a net positive charge.

• The stored positive-charge pattern on the storage grid is the picture definition.

• The flood gun produces a continuous stream of low-speed electrons that pass through the control grid and are attracted to the positive areas of the storage grid.

• These low-speed electrons penetrate through the storage grid to the phosphor coating without affecting the charge pattern on the storage surface.

Advantages of DVST

• No refreshing is needed.

• Very complex pictures can be displayed at very high resolution without flicker.

Drawbacks of DVST

• They ordinarily do not display colour.

• Selected parts of the picture cannot be erased.

• The erasing and redrawing process can take several seconds for complex pictures.

• No animation is possible on a DVST.

• Modifying any part of the image requires redrawing the entire image.

PIXELS

'Picture element.' The smallest square element of an image (called a 'dot') that can be turned on or off on a computer monitor. The detail (resolution) of an image depends on the number of pixels a monitor can show. VGA monitors display 640 x 480 (307,200) pixels, SVGA monitors display 1,024 x 768 (786,432) pixels, and newer monitors can display 1,000 x 1,000 (one million) or more pixels.

RESOLUTION

Resolution is the term used to describe the number of dots, or pixels, used to display an image. In computers, resolution is the number of pixels (individual points of colour) contained on a display monitor, expressed in terms of the number of pixels on the horizontal axis and the number on the vertical axis. The sharpness of the image on a display depends on the resolution and the size of the monitor.

FRAME BUFFER

A frame buffer is a large, contiguous piece of computer memory. At a minimum there is one memory bit for each pixel in the raster; this amount of memory is called a bit plane. A 512*512 element square raster needs 2^18 (2^9 = 512, 2^18 = 512*512) or 262,144 memory bits in a single bit plane. The picture is built up in the frame buffer one bit at a time. A memory bit has only two states, therefore a single bit plane yields a black-and-white display.
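The bit-plane arithmetic above can be sketched in C; the function names are illustrative, not from the text:

```c
#include <stdio.h>

/* Bits of frame-buffer memory needed for one bit plane:
   one bit per pixel of the raster. */
long bitplane_bits(long width, long height) {
    return width * height;
}

/* Total bits for n bit planes (n bits per pixel). */
long framebuffer_bits(long width, long height, int planes) {
    return bitplane_bits(width, height) * planes;
}
```

For the 512*512 raster of the text, bitplane_bits(512, 512) gives the 262,144 bits quoted above; a hypothetical 8-plane (256-colour) buffer at the same size would need eight times as much.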

ASPECT RATIO

The aspect ratio of a geometric shape is the ratio of its sizes in different dimensions. It is expressed as two numbers separated by a colon (x:y). The values x and y do not represent the actual width and height but, rather, the relation between width and height. If the width is divided into x units of equal length and the height is measured using this same length unit, the height will measure y units.

Suppose the width is divided into 3 units of length L, and the height is divided into 2 units of the same length L; then the aspect ratio is 3:2.

COMMON ASPECT RATIOS

PHOSPHORESCENCE

Phosphorescence is a specific type of photoluminescence related to fluorescence. Unlike

fluorescence, a phosphorescent material does not immediately re-emit the radiation it

absorbs. The slower time scales of the re-emission are associated with "forbidden" energy

state transitions in quantum mechanics. As these transitions occur very slowly in certain

materials , absorbed radiation may be re-emitted at a lower intensity for up to several

hours after the original excitation.

(Figure: a phosphorescent bird figure.)

Commonly seen examples of phosphorescent

materials are the glow-in-the-dark toys, paint,

and clock dials that glow for some time after

being charged with a bright light such as in any

normal reading or room light.

PERSISTENCE

PERSISTENCE is defined as the time it takes the emitted light from the screen to decay to one tenth of its original intensity. PHOSPHOR PERSISTENCE is the tendency of the phosphor to continue to emit light when no longer excited by the beam. The higher the persistence, the higher the quality of the display. Various phosphors are available depending upon the needs of the measurement or display application, with persistence ranging from less than one microsecond to a few seconds.

For visual observation of brief transient events, a high-persistence phosphor is preferable, that is, for displaying highly complex and static pictures. For fast, repetitive or high-frequency displays, phosphors with low persistence are used, e.g. for animation films.

Types of line drawing algorithms : There are two types of line drawing algorithms-

1) DDA line drawing algorithm

2) Bresenham's line drawing algorithm

1) DDA line drawing algorithm : The Digital Differential Analyzer (DDA)

is an incremental scan – conversion method, characterized by performing

calculations at each step using results from the preceding step . Let (xi,yi)

be the calculated point on the line at step i. Now, since the next point

(xi+1,yi+1) should specify the condition dy/dx=m , we have ,

yi+1 = yi + m*dx   (where dx = xi+1 - xi)

xi+1 = xi + dy/m   (where dy = yi+1 - yi)

When m<= 1, we start with x=x1 and y=y1 and set dx=1(unit increment

in x direction ). At each successive point , the y coordinate can be

calculated as: yi+1=yi+m.

When m> 1 , we start with x=x1 and y=y1 and set dy=1 (unit increment in y direction). Now, the x coordinate at each successive point on the line can be calculated as: xi+1=xi+1/m . This process continues until x reaches x2 or y reaches y2 , and all points on the line are scan converted.

Procedure DDA (x1, y1, x2, y2 : integer);

var length, i : integer; x, y, xincr, yincr : real;

begin

length := abs(x2 - x1);

if abs(y2 - y1) > length then length := abs(y2 - y1);

xincr := (x2 - x1) / length;

yincr := (y2 - y1) / length;

x := x1 + 0.5;

y := y1 + 0.5;

for i := 1 to length do

begin

plot(trunc(x), trunc(y));

x := x + xincr;

y := y + yincr;

end;

end;

The DDA algorithm is faster than the previous approach, since it calculates the points without any floating-point multiplication, as is done in the previous one. However, floating-point addition is still needed for determining each successive point.

DEFINITION

• Bresenham's algorithm is a very efficient method for scan converting a line.

• It is based on the principle of finding the optimum raster locations to represent the straight line.

• The basic concept behind this algorithm is to find the decision variable, or error term.

• This is defined as the distance between the real line location and the nearest pixel.

• According to the slope of the line, the algorithm increments either x or y by one unit.

• Once this is performed, the increment of the other variable is decided on the basis of the error term, or decision variable.

ALGORITHM

• Input the two line endpoints and store the left endpoint in (x0, y0).

• Load (x0, y0) into the frame buffer, that is, plot the first point.

• Calculate the constants Δx, Δy, 2Δy and 2Δy-2Δx, and obtain the starting value for the decision parameter as d0 = 2Δy - Δx.

• At each xk along the line, starting at k = 0, perform the following test: if dk < 0 then the next point to plot is (xk+1, yk) and dk+1 = dk + 2Δy. Otherwise the next point to plot is (xk+1, yk+1) and

dk+1 = dk + 2Δy - 2Δx

• Repeat step 4 Δx times.

Bresenham's line drawing program

#include <graphics.h>
#include <iostream.h>
#include <dos.h>
#include <math.h>
#include <conio.h>

void main()
{
    int gd = DETECT, gm;
    int x1, x2, y1, y2;
    int i, flag, d;
    clrscr();
    cout << "enter value of x1, y1 = "; cin >> x1 >> y1;
    cout << "enter value of x2, y2 = "; cin >> x2 >> y2;
    initgraph(&gd, &gm, "c:\\tc\\bgi");
    int dx, dy;
    dx = abs(x2 - x1);
    dy = abs(y2 - y1);
    int x, y, t, s1, s2;
    x = x1;
    y = y1;
    if ((x2 - x1) > 0)
        s1 = 1;
    else
        s1 = -1;
    if ((y2 - y1) > 0)
        s2 = 1;
    else
        s2 = -1;
    if (dy > dx)
    {
        t = dx;
        dx = dy;
        dy = t;
        flag = 1;
    }
    else
        flag = 0;
    d = 2 * dy - dx;
    i = 1;
a:
    putpixel(x, y, 3);
    delay(40);
    while (d >= 0)
    {
        if (flag == 1)
            x = x + s1;
        else
            y = y + s2;
        d = d - 2 * dx;
    }
    if (flag == 1)
        y = y + s2;
    else
        x = x + s1;
    d = d + 2 * dy;
    i++;
    if (i <= dx)
        goto a;
    getch();
    closegraph();
}
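The program above depends on Turbo C's graphics.h. As a hedged sketch, the same decision-variable logic can be written portably, collecting the pixels into arrays instead of calling putpixel; the function name and array interface are illustrative:

```c
#include <stdlib.h>

/* Bresenham's line with an integer-only decision variable d.
   Collects pixels into xs/ys and returns the count (endpoint included). */
int bresenham_line(int x1, int y1, int x2, int y2, int xs[], int ys[]) {
    int dx = abs(x2 - x1), dy = abs(y2 - y1);
    int s1 = (x2 >= x1) ? 1 : -1;       /* step direction in x */
    int s2 = (y2 >= y1) ? 1 : -1;       /* step direction in y */
    int swapped = 0;
    if (dy > dx) {                      /* steep line: swap roles of x, y */
        int t = dx; dx = dy; dy = t;
        swapped = 1;
    }
    int d = 2 * dy - dx;                /* initial decision variable */
    int x = x1, y = y1, n = 0;
    int i;
    for (i = 0; i <= dx; i++) {
        xs[n] = x; ys[n] = y; n++;
        while (d >= 0) {                /* step in the minor direction */
            if (swapped) x += s1; else y += s2;
            d -= 2 * dx;
        }
        if (swapped) y += s2; else x += s1;
        d += 2 * dy;
    }
    return n;
}
```

For (0,0) to (5,3) this yields (0,0), (1,1), (2,1), (3,2), (4,2), (5,3), using only integer additions and comparisons, which is the whole point of Bresenham's method.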

Bresenham's circle drawing

• Bresenham's circle drawing algorithm basically uses the eight-way symmetry of a circle to generate it. The 1/8th part of the circle is drawn, starting from 90 degrees down to 45 degrees.

• In this octant the x coordinate moves in the positive direction while y moves in the negative direction.

TRACKING A POINT

• Let a point P on the circle be scanned; now pixel selection is done. Q and R are the two candidate pixels, Q = (x+1, y) and R = (x+1, y-1), and their squared distances from the centre are

D(Q) = (x+1)^2 + y^2 and

D(R) = (x+1)^2 + (y-1)^2

• The deviations of pixels Q and R from the true circle are given as

delta(Q) = D(Q) - r^2 and

delta(R) = D(R) - r^2

• Now the decision variable is

• di = delta(Q) + delta(R)

• Now if di < 0 (Q is closer to the circle), only the x coordinate will increase; else x will increase in the positive direction and y will decrease (move in the negative direction).

• So for

• di < 0: xi+1 = xi + 1, and

• for

• di >= 0: xi+1 = xi + 1 and yi+1 = yi - 1

DIAGRAM

• Hence at the starting point, i.e. x = 0 and y = r,

• taking the equation

• di = delta(Q) + delta(R)

• d0 = (0+1)^2 + r^2 - r^2 + (0+1)^2 + (r-1)^2 - r^2

• = 3 - 2r

• In the same way, di+1 will be

• for di < 0: di+1 = di + 4xi + 6

• for di >= 0: di+1 = di + 4(xi - yi) + 10

ALGORITHM

• Read the radius and name it r.

• d = 3 - 2r

• x = 0; y = r;

• [initialize the starting point]

• do

• {

• plot(x,y)

• if(d<0)

• { d=d+4x+6

• }

• else

• {

• d=d+4(x-y)+10

• y=y-1

• }

• x=x+1;

• }while(x<y)

• stop

• The remaining points can be obtained through reflection about the x and y axes and the line y = x; they are (y,x), (y,-x), (x,-y), (-x,-y), (-y,-x), (-y,x) and (-x,y) as shown ---
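The octant loop above can be sketched in C. Note one small judgment call: the loop condition here is x <= y rather than x < y, so the 45-degree point is also emitted; the function name and array interface are illustrative:

```c
/* Bresenham's circle, one octant (90 degrees down to 45), d = 3 - 2r.
   Writes the (x, y) points into xs/ys and returns the count. */
int circle_octant(int r, int xs[], int ys[]) {
    int x = 0, y = r, d = 3 - 2 * r, n = 0;
    do {
        xs[n] = x; ys[n] = y; n++;
        if (d < 0) {
            d = d + 4 * x + 6;          /* move east */
        } else {
            d = d + 4 * (x - y) + 10;   /* move south-east */
            y = y - 1;
        }
        x = x + 1;
    } while (x <= y);
    return n;
}
```

For r = 8 this produces (0,8), (1,8), (2,8), (3,7), (4,7), (5,6); the other seven octants follow by the symmetry listed above.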

Eight-Way Symmetry

The first thing we can notice to make our circle drawing algorithm more efficient is that circles centred at (0, 0) have eight-way symmetry:

(x, y)

(y, x)

(y, -x)

(x, -y) (-x, -y)

(-y, -x)

(-y, x)

(-x, y)


Index

What is polygon filling

Types of level at which region is defined

Algorithms used for polygon filling at different levels

What is polygon filling?

A polygon is a closed figure with three or more sides, and filling is the process of colouring in a fixed area or region.

Therefore polygon filling is the colouring of a polygon.

For polygon filling we use different types of algorithms according to the situation.

For colouring, first we have to fix the area or region.

We can define region at pixel level or geometric level.

If the region is defined at the pixel level, then for filling we have two algorithms, named:

Boundary fill

Flood fill

And if the region is defined at the geometric level, we have one algorithm, named:

Scan line

4-connected pixel

8-connected pixel

The 4-connected pixel method is good for interior filling, but it is not efficient at the boundary. Therefore at the boundary we use the 8-connected pixel method.

BOUNDARY FILL ALGORITHM
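The 4-connected boundary fill described above can be sketched in C on a small raster; the grid size, function name and colour codes are illustrative assumptions:

```c
#define W 8
#define H 8

/* 4-connected boundary fill: starting from the seed (x, y), recolour
   every pixel that is neither the boundary colour nor already the fill
   colour, spreading recursively to the four neighbours. */
void boundary_fill(int grid[H][W], int x, int y, int fill, int boundary) {
    if (x < 0 || x >= W || y < 0 || y >= H) return;
    if (grid[y][x] == boundary || grid[y][x] == fill) return;
    grid[y][x] = fill;
    boundary_fill(grid, x + 1, y, fill, boundary);
    boundary_fill(grid, x - 1, y, fill, boundary);
    boundary_fill(grid, x, y + 1, fill, boundary);
    boundary_fill(grid, x, y - 1, fill, boundary);
}
```

Seeding inside a rectangular boundary drawn in colour 1 fills exactly the interior pixels. An 8-connected variant would simply add the four diagonal neighbours to the recursion.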

POLYFILL

SCAN CONVERSION OF A POLYGON

Conceptual Steps

• Find minimum enclosed rectangle.

• No. of scan lines = Y max - Y min +1

• For each scan line do

Obtain intersection point of scan line with polygon edges.

Sort intersection from left to right.

• Form pairs of intersections from the list.

• Fill between the pairs.

• Intersection points are updated for each scan line.

• Stop when the scan line has reached Y max.

• The data structures required are:

• ACTIVE EDGE TABLE (AET):

Contains all edges crossed by the scan line at the current stage of iteration.

This is the list of edges that are active for this scan line, sorted by increasing x intersections.

• SORTED EDGE TABLE (SET)

(Figure: a polygon with vertices A-F plotted on a grid, x running from 2 to 13 and y from 1 to 11.)

Sorted Edge Table

SET contains all the information necessary to process the scan line effectively.

SET is typically built by using bucket sort, with as many buckets as there are scan lines. All edges are sorted by their minimum y coordinate, with a separate bucket for each scan line. Each entry holds (Ymax, Xmin, 1/m):

y = 2:  AB (3, 7, -5/2) -> BC (5, 7, 6/4)

y = 4:  FA (9, 2, 0)

y = 6:  CD (11, 13, 0)

y = 7:  EF (9, 7, -5/2) -> DE (11, 7, 6/4)

STEPS

1. Set y to the smallest y in the SET entries.

2. Initialize AET to be empty.

3. Repeat until both AET and SET are empty:

(a) Move from SET bucket y into AET those edges whose ymin = y. Sort AET on x.

(b) Fill pixels on scan line y using pairs of x coordinates from AET.

4. Increment the scan line by 1.

5. Remove from AET those entries for which ymax = y.

6. For each non-vertical edge in AET, update x for the new y.

7. End loop.

SCAN LINE 6

AET: FA (9, 2, 0) -> CD (11, 13, 0)

As slope = 0 for both edges, the value of xmin remains the same.

SCAN LINE 7

AET: FA (9, 2, 0) -> EF (9, 7, -5/2) -> DE (11, 7, 6/4) -> CD (11, 13, 0)

For edges EF and DE the slope is not 0, but as we have only just included them, their x values remain the same.

SCAN LINE 8

AET: FA (9, 2, 0) -> EF (9, 5, -5/2) -> DE (11, 9, 6/4) -> CD (11, 13, 0)

For edge EF, dx/dy < 0, so the value of x decrements: x = 7 - 5/2 = 4.5, rounded off to 5.

For edge DE, dx/dy > 0, so the value of x increments: x = 7 + 6/4 = 8.5, rounded off to 9.

SCAN LINE 9

AET: FA (9, 2, 0) -> EF (9, 2, -5/2) -> DE (11, 10, 6/4) -> CD (11, 13, 0)

Remove from AET the edges for which Ymax = scan line; therefore edges EF and FA are removed.

SCAN LINE 10

AET: DE (11, 12, 6/4) -> CD (11, 13, 0)

SCAN LINE 11

AET: DE (11, 13, 6/4) -> CD (11, 13, 0)

As Ymax = scan line for edges DE and CD, these edges are removed. At each scan line, fill between the pairs of x in the AET.

ALGORITHM

Build the edge table (ET)

ymin = min (all y in the ET)

AET = null

for y = ymin to ymax

    merge_sort ET[y] into AET by x value

    fill between pairs of x in AET

    for each edge in AET

        if edge ymax = y then

            remove edge from AET

        else

            edge x = edge x + dx/dy

        end if

    sort AET by x value

end scan_fill
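As a simplified sketch of the same even-odd idea in C, the version below recomputes the intersections for every scan line instead of maintaining the incremental SET/AET tables, and counts the filled pixels rather than plotting them. The function name and half-open [ymin, ymax) vertex rule are illustrative assumptions:

```c
/* Even-odd scan-line fill: for each scan line, intersect it with every
   non-horizontal polygon edge, sort the x hits, fill between pairs.
   Returns the total number of pixels that would be filled. */
int scanline_fill(const double px[], const double py[], int nv,
                  int ymin, int ymax) {
    int filled = 0;
    int y, i, a, b, p;
    for (y = ymin; y <= ymax; y++) {
        double xhit[32];
        int n = 0;
        for (i = 0; i < nv; i++) {
            int j = (i + 1) % nv;
            double y1 = py[i], y2 = py[j];
            if (y1 == y2) continue;                 /* skip horizontal edges */
            /* half-open span avoids double-counting shared vertices */
            if ((y >= y1 && y < y2) || (y >= y2 && y < y1))
                xhit[n++] = px[i] + (y - y1) * (px[j] - px[i]) / (y2 - y1);
        }
        for (a = 1; a < n; a++)                     /* insertion sort on x */
            for (b = a; b > 0 && xhit[b] < xhit[b - 1]; b--) {
                double t = xhit[b];
                xhit[b] = xhit[b - 1];
                xhit[b - 1] = t;
            }
        for (p = 0; p + 1 < n; p += 2)              /* fill between pairs */
            filled += (int)xhit[p + 1] - (int)xhit[p] + 1;
    }
    return filled;
}
```

The incremental algorithm above does the same work more cheaply: instead of re-intersecting every edge per scan line, it updates each active edge's x by 1/m.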

2D TRANSFORMATIONS

SCALING

SHEARING

2D TRANSFORMATIONS

• A transformation is any operation on a point in space that maps the point's coordinates (x, y) into a new set of coordinates (x1, y1).

SCALING

• Scaling is the process of expanding or compressing the dimensions of an object. Positive scaling constants Sx and Sy are used to describe changes in length with respect to the x direction and y direction. A scaling constant > 1 creates an expansion of length, and < 1 a compression of length. Scaling occurs along the x-axis and y-axis to create a new point from the original. This is achieved using the following transformation:

P' = Tsx,sy(P),

where x' = Sx*x and y' = Sy*y

EXAMPLE ON SCALING

Shown below are a triangle and a house that have been doubled in both width and height.
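The scaling equations can be sketched in C; the function name is illustrative:

```c
/* Scale a point about the origin: x' = Sx*x, y' = Sy*y. */
void scale_point(double sx, double sy, double *x, double *y) {
    *x = sx * *x;
    *y = sy * *y;
}
```

Doubling the width and height of the figures above just means calling scale_point(2, 2, ...) on every vertex; for example (2, 3) becomes (4, 6).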

SHEARING

Shearing in the x direction displaces each point in proportion to its y coordinate:

x1 = x + ay

y1 = y

Shearing in the y direction is similar except the roles are reversed:

x1 = x

y1 = y + bx

where a = 0. Here x1 and y1 are the new values, x and y are the original values, and b is the shearing factor in the y direction.
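Both shear directions can be sketched in C; the function names are illustrative:

```c
/* Shear in the x direction: x' = x + a*y, y' = y. */
void shear_x(double a, double *x, double *y) {
    *x = *x + a * *y;
}

/* Shear in the y direction: x' = x, y' = y + b*x. */
void shear_y(double b, double *x, double *y) {
    *y = *y + b * *x;
}
```

For example, shearing (3, 1) in y with b = 2 gives (3, 7): the x coordinate is untouched and y picks up b times x.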

REFLECTION

AND ROTATION

IN 2D

3) Reflection

Reflection is the transformation which generates the mirror image

of an object. For Reflection we need to know the Reference axis.

A Reflection is nothing but 180 degree Rotation. Therefore we

use the identity matrix with positive and negative signs

according to the situation respectively.

Reflection with respect to the X-axis.

The reflection about the x-axis can be shown as:

|  1   0 |
|  0  -1 |

Reflection with respect to the origin.

The reflection about the origin can be shown as:

| -1   0 |
|  0  -1 |

Reflection with respect to the Y-axis.

The reflection about the y-axis can be shown as:

| -1   0 |
|  0   1 |

Reflection about an Arbitrary Line:-

Reflection about any line y= mx + c can be

accomplished with a combination of

translate-rotate-reflect transformations.

The Steps are as follows:-

1. Translate the system so that the line

passes through the origin.

2. Rotate it such that one of the coordinate axes lies along the line.

3. Reflect about the aligned axis.

4. Restore the System back by using the

inverse rotation and translation

transformation.

4) Rotation

In rotation, we rotate the object by a particular angle θ (theta).

Things to keep in mind while transforming:

1. Rotation is with respect to the origin, not the centre of the object.

2. Let the point to be rotated be P(x, y).

3. Let the destination point be P'(x', y').

4. Let the angle of rotation be θ (theta), in the counter-clockwise direction.

Derivation of the 2D Rotation Equations

P(x, y) is located r away from (0, 0), at angle φ from the x-axis.

P'(x', y') is located r away from (0, 0), at angle (φ + θ) from the x-axis.

From trigonometry we have:

x = r * cos(φ)

y = r * sin(φ)

and

x' = r * cos(θ + φ)

y' = r * sin(θ + φ)

Now making use of the following trigonometric identities:

cos(a+b) = cos(a) * cos(b) - sin(a) * sin(b)

sin(a+b) = sin(a) * cos(b) + cos(a) * sin(b)

and substituting into the above equations for x' and y', we get:

x' = r * cos(θ) * cos(φ) - r * sin(θ) * sin(φ)

y' = r * sin(θ) * cos(φ) + r * cos(θ) * sin(φ)

Then we substitute in x and y from their definitions above, and the final result simplifies to:

x' = x * cos(θ) - y * sin(θ)

y' = x * sin(θ) + y * cos(θ)

Thus we have obtained the new coordinates of point P after the rotation.

For the clockwise direction, take (-θ).

Homogeneous Co-ordinates:-

For the counter-clockwise direction, the rotation matrix in homogeneous form is:

| cos θ  -sin θ  0 |
| sin θ   cos θ  0 |
|   0       0    1 |

TRANSLATION

TRANSLATION means simply moving without resizing,rotating or doing

anything else.

To translate a shape,every point of the shape must move –

a) the same distance.

b) in the same direction.

The coordinates of just one pixel, the bottom-right corner of the structure, are given as (4, 2) on the left side here. You add 4 to the x coordinate 4 to get 8, and subtract 1 from the y coordinate 2 to get 1.

So (4, 2) is displaced to (8, 1), and all other coordinates are displaced in a similar manner. Because we are talking of a rigid-body transformation, all of these points, indeed the entire object, undergo the same translation.

If you want to represent the translation by a matrix equation, you can easily do so: the coordinates of A and B are 2 x 1, basically vertical column vectors with just two elements, and you add tx to x and ty to y to get the coordinates of the displaced point, or of the entire object.

Translations happen inherently in almost all other sorts of transformations, even when you do not write them as an explicit expression. Rotations of any objects which are not centered at the origin involve a translation, and the same holds for scaling when objects and lines are not centered at the origin. Anything which is not at the origin and undergoes rotation or scaling, including lines and points, also undergoes a translation in some form.

We have seen examples of rotation earlier. We will again see some

examples today and it will be clear to you that translation inherently happens

with most other types of transformations as well.

But of course, if the object or the point is at the origin, or the line passes through the origin, there may not be any translation when you apply a rotation or a scaling.

We can say that the origin by itself is invariant to scaling, reflection and shear, but it is not invariant to translation. That is indeed a very nice observation: if you take the expression at the top of the slide and substitute A = 0, the coordinates of the origin, you get B = Td, which is (tx, ty). So the origin is shifted by the displacement vector.

Basically, the translation components shift things: you should be able to shift something to the origin, and also the origin to somewhere else. But of course you cannot scale the origin, you cannot reflect the origin and you cannot shear the origin; that is the key.

So, you see the expression which we talked about in the last slide in terms of the transformation. We use matrix multiplication in general to represent all transformations in 2D, but for translation we have an addition instead of a multiplication.

So we have a mathematical problem: we cannot directly represent translations as matrix multiplication, as we can for, say, the scaling and rotation shown in the figures.

Again, the same structure, like a house, can be scaled up or scaled down, meaning it is expanded or contracted, or it can be made to rotate about the origin. These may inherently involve a translation, but when you represent these transformations you use the matrix multiplication operation as the mathematical model.

For the translation given in the previous slide, however, we are forced to use an addition.

Composite Transformation

Role: Composite transformations are used to perform transformations (reflection, rotation, scaling, shear and translation) at low computational cost and time.

Instead of applying a series of transformations one by one, we apply them together.

**in series transformations we have to take care of sequence

Composite Transformation (Translation)

To apply a series of transformations T1, T2 on a point p:

By the regular method: first calculate p' = T1*p, then p'' = T2*p'.

By composite transformation: first calculate T = T2*T1, then p'' = T*p.

The composite transformation method saves computation, as in most cases we know the form of the resultant T matrix; thus we concatenate, or compose, the matrices T1 and T2 into one.

Composite Transformation (Rotation)

To rotate by Ө1 and then Ө2:

By the regular method: first calculate T1 = R(Ө1) and T2 = R(Ө2), and multiply them.

By composite transformation: add the angles (Ө1 + Ө2) and replace Ө in the rotation matrix by this sum.

** In this case the regular method can look simpler, since putting Ө1 + Ө2 into the original rotation equations, i.e. sin(Ө1 + Ө2) and cos(Ө1 + Ө2), makes them more complex. But both methods give the same result.

Composite Transformation (Rotation about arbitrary point)

To rotate about an arbitrary point P in space:

By the regular method: first translate P to the origin, then rotate by the given angle, then translate back.

By composite transformation: first calculate the general-purpose rotation matrix

T = T1(-Px, -Py) * T2(Ө) * T3(Px, Py)

Using this composite matrix we can make our rotation computations easy.

Composite Transformation (Scaling)

To scale about an arbitrary point P in space:

By the regular method:

1. Translate P to the origin.

2. Scale.

3. Translate P back to its original position.

By composite transformation: first calculate the general-purpose scaling matrix:

T = T1(-Px, -Py) * T2(Sx, Sy) * T3(Px, Py)

Using this matrix we can easily do the scaling instead of performing a series of transformations.
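The translate-scale-translate composition can be sketched in C with 3x3 homogeneous matrices; the type and function names (Mat3, mul, apply) are illustrative, and the column-vector convention means the first transformation applied is the rightmost factor:

```c
/* 3x3 homogeneous matrices, row i / column j stored as m[i][j]. */
typedef struct { double m[3][3]; } Mat3;

Mat3 mat3_identity(void) {
    Mat3 r = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    return r;
}
Mat3 mat3_translate(double tx, double ty) {
    Mat3 r = mat3_identity();
    r.m[0][2] = tx; r.m[1][2] = ty;
    return r;
}
Mat3 mat3_scale(double sx, double sy) {
    Mat3 r = mat3_identity();
    r.m[0][0] = sx; r.m[1][1] = sy;
    return r;
}
/* matrix product a*b */
Mat3 mat3_mul(Mat3 a, Mat3 b) {
    Mat3 r;
    int i, j, k;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++) {
            r.m[i][j] = 0;
            for (k = 0; k < 3; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
        }
    return r;
}
/* apply t to the column vector (x, y, 1) */
void mat3_apply(Mat3 t, double *x, double *y) {
    double nx = t.m[0][0] * *x + t.m[0][1] * *y + t.m[0][2];
    double ny = t.m[1][0] * *x + t.m[1][1] * *y + t.m[1][2];
    *x = nx; *y = ny;
}
```

For example, scaling by 2 about the point (2, 2) composes to one matrix that maps (4, 4) to (6, 6); the composite is built once and then applied cheaply to every vertex.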

WINDOW

An area of the world coordinate scene that has been selected for display.

This is a rectangle surrounding the object, or a part of it, that we wish to draw on the screen.

The window is associated with the object rather than with the image; it is the viewport which brings the window content to the screen.

The window defines what is to be viewed.

VIEW PORT

A rectangular region of the screen which is selected for displaying the object, or a part of it, described in a window.

The part of the object inside the window is displayed in the view port of the computer screen.

(Figure: a window in world coordinates mapped to a viewport in screen coordinates.)

SCREEN COORDINATE

A coordinate system used to address the screen.

Screen coordinates are also called device coordinates.

This two-dimensional coordinate system refers to the physical coordinates of the pixels on the computer screen.

CONTENTS

• Viewing Transformation

• Viewing Coordinates

• Specifying The View Coordinates

• Transformation From World To Viewing Coordinates

• Transformation From World To Viewing Coordinates: An Example For 2d System

Viewing Transformation

Viewing transformation is an important aspect of 2D graphics. An image of a picture is formed using picture coordinates; when this picture is required to be displayed on a display device, we need to convert these picture coordinates to the display device coordinates. This task is done by the viewing transformation.

Viewing Coordinates

• Generating a view of an object in 3D is similar to

photographing the object.

• Whatever appears in the viewfinder is projected onto the flat

film surface.

• Depending on the position, orientation and aperture size of

the camera corresponding views of the scene is obtained.

Specifying The View Coordinates

• To establish the viewing reference frame, we first

pick a world coordinate position called the view

reference point.

• This point is the origin of our viewing coordinate

system. If we choose a point on an object we can

think of this point as the position where we aim a

camera to take a picture of the object.

Specifying The View Coordinates

• Finally, we choose the up direction for the view by specifying view-up vector V.

• This vector is used to establish the positive direction for the yv axis.

• The vector V is perpendicular to N.

• Using N and V, we can compute a third vector U, perpendicular to both N and V, to define the direction for the xv axis.

(Figure: world axes (xw, yw, zw) and viewing axes (xv, yv, zv), with view reference point P0, view-plane normal N and view-up vector V.)

Specifying The View Coordinates

To obtain a series of views of a scene, we can keep the view reference point fixed and change the direction of N. This corresponds to generating views as we move around the viewing coordinate origin.

(Figure: changing the direction of N about the fixed point P0.)

Transformation From World To Viewing Coordinates

Conversion of object descriptions from world to viewing coordinates is equivalent to a transformation that superimposes the viewing reference frame onto the world frame, using translation and rotation.

3D- SCALING

• Scaling refers to enlarging or shrinking the size of the object.

• Scaling a 3D point or object is similar to scaling in 2D.

• The formulas for 2D and 3D scaling are same with the addition of z co-ordinate.

• Due to this, instead of the 3x3 matrix used in 2D, we use a 4x4 matrix in 3D.

• 3D scaling matrix with homogeneous co-

ordinate system is as follows-

S = | Sx  0   0   0 |
    | 0   Sy  0   0 |
    | 0   0   Sz  0 |
    | 0   0   0   1 |

where Sx is the scaling factor for x coordinate,

Sy is the scaling factor for y coordinate,

Sz is the scaling factor for z coordinate.

• Scaling a point in 3D is done by multiplying the point with the scaling matrix:

[x' y' z' 1] = [x y z 1] * | Sx  0   0   0 |
                           | 0   Sy  0   0 |
                           | 0   0   Sz  0 |
                           | 0   0   0   1 |

so that x' = Sx*x, y' = Sy*y, z' = Sz*z.
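Since the scaling matrix is diagonal, applying it reduces to three multiplications, which can be sketched in C; the function name is illustrative:

```c
/* Scale a homogeneous 3D point [x y z 1] by Sx, Sy, Sz:
   x' = Sx*x, y' = Sy*y, z' = Sz*z (the diagonal 4x4 matrix at work). */
void scale3d(double sx, double sy, double sz,
             double *x, double *y, double *z) {
    *x *= sx;
    *y *= sy;
    *z *= sz;
}
```

For example, scaling (1, 2, 3) by (2, 3, 4) gives (2, 6, 12); the fourth (homogeneous) coordinate stays 1 because the bottom-right matrix entry is 1.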

3D TRANSFORMATION

INTRODUCTION:-

Here we have two types of views:-

1. The object/picture is moved or manipulated directly with the geometric transformations.

2. The object is manipulated by changing the viewer's co-ordinates.

There are four basic 3D transformations:-

3D translation,3D scaling,3D rotation,3D reflection.

Translation

Let tx, ty, tz be the translations in the x, y, z directions respectively; then the translation matrix is given by:

T = | 1  0  0  tx |
    | 0  1  0  ty |
    | 0  0  1  tz |
    | 0  0  0  1  |

Scaling

If Sx, Sy, Sz are the scaling units in the x, y and z directions respectively, then the scaling matrix is given by:

S = | Sx  0   0   0 |
    | 0   Sy  0   0 |
    | 0   0   Sz  0 |
    | 0   0   0   1 |

Rotation

Let the rotation be by angle θ.

About the X-axis:

| x' |   | 1    0       0     0 | | x |
| y' | = | 0  cos θ  -sin θ   0 | | y |
| z' |   | 0  sin θ   cos θ   0 | | z |
| w' |   | 0    0       0     1 | | w |

About the Y-axis:

| x' |   |  cos θ  0  sin θ  0 | | x |
| y' | = |    0    1    0    0 | | y |
| z' |   | -sin θ  0  cos θ  0 | | z |
| w' |   |    0    0    0    1 | | w |

About the Z-axis:

| x' |   | cos θ  -sin θ  0  0 | | x |
| y' | = | sin θ   cos θ  0  0 | | y |
| z' |   |   0       0    1  0 | | z |
| w' |   |   0       0    0  1 | | w |

About an arbitrary axis:

Assume we want to perform a rotation by θ degrees about an axis in space passing through the point (x0, y0, z0) with direction cosines (cx, cy, cz).

1. Translate all points so that the axis passes through the origin: |T| = T(-x0, -y0, -z0).

2. Rotate the axis onto one of the principal axes; let us pick z (|Rx| |Ry|).

3. Rotate next by θ degrees about z: |Rz(θ)|.

4. Undo the axis-aligning rotations: |Ry|^-1 |Rx|^-1.

5. Undo the translation: |T|^-1 = T(x0, y0, z0).

• M = |T| |Rx| |Ry| |Rz(θ)| |Ry|^-1 |Rx|^-1 |T|^-1

Aligning the axis with the z-axis:

With direction cosines (cx, cy, cz) and d = sqrt(cy^2 + cz^2):

For the rotation about the x-axis (angle a): cos a = cz/d, sin a = cy/d

For the rotation about the y-axis (angle b): cos b = d, sin b = cx

These angles give the matrices |Rx| and |Ry|; their inverses |Rx|^-1 and |Ry|^-1 are obtained by negating the sine terms. Together with the translation matrices |T| and |T|^-1 and the rotation |Rz(θ)|, they are composed into the matrix M above.

Reflection

In 3D, reflection takes place about a plane. The matrices for pure reflections about the basic planes, viz. the X-Y plane, Y-Z plane and Z-X plane, are given below.

Transformation matrix for reflection through the x-y plane (z changes sign), Txy:

| x' |   | 1  0   0  0 | | x |
| y' | = | 0  1   0  0 | | y |
| z' |   | 0  0  -1  0 | | z |
| w' |   | 0  0   0  1 | | w |

Transformation matrix for reflection through the y-z plane (x changes sign), Tyz:

| x' |   | -1  0  0  0 | | x |
| y' | = |  0  1  0  0 | | y |
| z' |   |  0  0  1  0 | | z |
| w' |   |  0  0  0  1 | | w |

Transformation matrix for reflection through the z-x plane (y changes sign), Tzx:

| x' |   | 1   0  0  0 | | x |
| y' | = | 0  -1  0  0 | | y |
| z' |   | 0   0  1  0 | | z |
| w' |   | 0   0  0  1 | | w |

Translation

B = A + Td , where Td = [tx ty]T

• We cannot represent translation using only the general 2×2 transformation matrix [a b; c d] with its four parameters.

• Including the translation elements in the transformation matrix is made possible with the help of homogeneous coordinates.

• Homogeneous coordinates are the representation of a point in a homogeneous coordinate system.

Homogeneous Coordinates

• Use a 3×3 matrix:

  | x' |   | a  c  tx |   | x |
  | y' | = | b  d  ty | . | y |
  | w  |   | 0  0  1  |   | 1 |

We have:

x' = ax + cy + tx
y' = bx + dy + ty

Each point is now represented by a triplet (x, y, w).

(x/w, y/w) are called the Cartesian coordinates of the homogeneous point.

Interpretation of Homogeneous Coordinates

• Ph is a point in this homogeneous (x, y, w) space; it can also be viewed as a vector pointing along a certain direction.

• Any point on the line through the origin along this vector represents a single point in the Cartesian coordinate system.

• Two homogeneous coordinates (x1, y1, w1) & (x2, y2, w2) represent the same point iff they are multiples of one another: say, (1, 2, 3) & (3, 6, 9).

• There is no unique homogeneous representation of a point.

• All triples of the form (t·x, t·y, t·w) form a line in x, y, w space.

• Cartesian coordinates are just the plane w = 1 in this space.

• Points with w = 0 are the points at infinity.
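A one-line check of the "multiples represent the same point" rule (an illustrative sketch, not from the slides):

```python
def to_cartesian(x, y, w):
    # Drop from homogeneous (x, y, w) to Cartesian (x/w, y/w)
    if w == 0:
        raise ValueError("w = 0 is a point at infinity, not a Cartesian point")
    return (x / w, y / w)

# (1, 2, 3) and (3, 6, 9) are multiples of one another, hence the same point:
print(to_cartesian(1, 2, 3) == to_cartesian(3, 6, 9))   # -> True
```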

General purpose 2D transformation in homogeneous coordinate representation:

      | a  b  p |
T  =  | c  d  q |
      | m  n  s |

• The parameters involved in scaling, rotation, reflection and shear are a, b, c, d.

• If B = T·A, then the translation parameters are (p, q).

• If B = A·T, then the translation parameters are (m, n).

Composite Transformation

Role: Composite transformations are used to perform a series of transformations (reflection, rotation, scaling, shear and translation) at low computational cost and time.

Here, instead of applying a series of transformations one by one, we apply them together.

**In a series of transformations we have to take care of the sequence.

Composite Transformation (Translation)

To apply a series of transformations T1, T2 to a point p:

By the regular method: first calculate p′ = T1·p, then p″ = T2·p′.

By composite transformation: first calculate T = T2·T1, then p″ = T·p.

The composite transformation method saves computations: in most cases we know the form of the resultant T matrix, so we concatenate (compose) the matrices, i.e. we multiply T1 and T2 together once.
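A small numeric check of this claim (a sketch using example matrices assumed here; note that with column vectors the first transformation applied appears rightmost in the product):

```python
import numpy as np

T1 = np.array([[1, 0, 4], [0, 1, 2], [0, 0, 1.0]])    # translate by (4, 2)
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T2 = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])   # rotate by 90 degrees

p = np.array([1.0, 1.0, 1.0])

# Regular method: apply the transformations one by one
p1 = T1 @ p
p2 = T2 @ p1

# Composite method: concatenate once, then apply to the point
T = T2 @ T1
print(np.allclose(T @ p, p2))   # -> True
```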

Composite Transformation (Rotation)

To rotate by θ1 and then by θ2:

By the regular method: first calculate T1 = R(θ1) and T2 = R(θ2), and multiply them.

By composite transformation: add the angles (θ1 + θ2) and replace θ by this sum in the rotation matrix.

** In this case the regular method is more effective, since substituting θ1 + θ2 into the original rotation equations, i.e. sin(θ1 + θ2) and cos(θ1 + θ2), makes them more complex. But still both methods give the same result.

Composite Transformation (Rotation about an arbitrary point)

To rotate about an arbitrary point P in space:

By the regular method: first translate P to the origin, then rotate by the given angle, then translate back.

By composite transformation: first calculate the general-purpose rotation matrix

T = T1(-Px, -Py) * T2(θ) * T3(Px, Py)

Using the above composite matrix we can make our rotation computations easy.
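As a sketch (helper names assumed, column-vector convention so the matrix order reads right to left), the pivot point itself must be a fixed point of the composite:

```python
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def rotate_about(px, py, theta):
    # translate the pivot to the origin, rotate, translate back
    return translate(px, py) @ rotate(theta) @ translate(-px, -py)

M = rotate_about(3, 4, 1.2)
print(np.allclose(M @ [3, 4, 1], [3, 4, 1]))   # -> True: the pivot stays fixed
```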

Composite Transformation (Scaling)

To scale about an arbitrary point P in space:

By the regular method:
1. Translate P to the origin
2. Scale it
3. Translate P back from the origin

By composite transformation: first calculate the general-purpose scaling matrix

T = T1(-Px, -Py) * T2(Sx, Sy) * T3(Px, Py)

Using this matrix we can easily do the scaling instead of performing a series of transformations.

INVERSE TRANSFORMATION

When we apply any transformation to a point (x, y) we get a new point (x′, y′). Sometimes it may be required to undo the applied transformation. In such a case we have to get the original point (x, y) back from (x′, y′). This can be achieved by the inverse transformation.

INVERSE TRANSLATION

In inverse translation, point P(x, y) is translated in the opposite direction.

The inverse of T(tx, ty) is T(-tx, -ty), i.e. point P(x, y) is translated by tx in the left direction and ty in the downward direction.

T(-tx, -ty) = T(tx, ty)⁻¹ , so that T·T⁻¹ = I.

INVERSE ROTATION

Here the rotation takes place in the opposite (clockwise) direction.

In the rotation matrix, put θ = (-θ); then we get the inverse rotation matrix:

R(θ)⁻¹ = R(-θ) , so that R·R⁻¹ = I.

INVERSE SCALING

Here the already scaled coordinates, i.e. the point (x′, y′), are converted back into the original point (x, y).

As x′ = Sx·x and y′ = Sy·y,

we have x = x′/Sx and y = y′/Sy.

So the scaling factors are 1/Sx and 1/Sy, and the scaling matrix is:

S(Sx, Sy)⁻¹ = S(1/Sx, 1/Sy)

INVERSE SHEARING

Here, shearing takes place in the opposite direction, i.e. the tangential force is applied by p in the left direction and by q in the downward direction.

        |  1  -p  0 |
Sh⁻¹ =  | -q   1  0 |
        |  0   0  1 |

(This simple negation is the exact inverse when only one of p, q is non-zero.)
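Two quick checks of the inverse pairs above (an illustrative sketch; the shear check uses an x-direction shear, q = 0, for which negating p is the exact inverse):

```python
import numpy as np

def R(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def shear_x(p):
    # shear along x only (q = 0)
    return np.array([[1, p, 0], [0, 1, 0], [0, 0, 1.0]])

print(np.allclose(R(0.9) @ R(-0.9), np.eye(3)))          # -> True: R(θ)·R(−θ) = I
print(np.allclose(shear_x(2) @ shear_x(-2), np.eye(3)))  # -> True
```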

MID-POINT LINE CLIPPING ALGORITHM :- This algorithm is based on binary search. The line is divided equally into two shorter line segments at its mid-point. The clipping categories of the two line segments can then be determined from their region codes. Each segment which still needs to be clipped is again divided into smaller segments and categorized. This bisection and categorization process goes on until each line segment that spans a window boundary reaches a threshold line size, and all other segments are either completely visible or completely invisible.

The mid-point coordinates (Xm, Ym) of any line with end points (X1, Y1) and (X2, Y2) are

Xm = (X1 + X2)/2 , Ym = (Y1 + Y2)/2

Let us take an example to illustrate the mid-point subdivision algorithm. The steps in the process are as follows:

(i) We test whether B is visible. If so, it is the farthest visible point from A and the process is complete. Otherwise we proceed to the next step.

(ii) We check whether AB can be trivially rejected, in which case the process is complete and no output is generated. Otherwise, we continue to the next step.

(iii) We divide AB at its mid-point Pm. This is a guess at the farthest visible point. If the segment PmB can be trivially rejected, we have overestimated and we repeat from step (ii) using the segment APm; otherwise we repeat from step (ii) with the segment PmB.

Fig.: Mid-point subdivision method of line clipping
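The bisection can be sketched as follows (a hedged illustration, not the slides' code; it assumes end point `a` is visible and finds the farthest visible point toward `b`):

```python
def outcode(x, y, xmin, ymin, xmax, ymax):
    # 4-bit region code: top, bottom, right, left
    code = 0
    if y > ymax: code |= 8
    if y < ymin: code |= 4
    if x > xmax: code |= 2
    if x < xmin: code |= 1
    return code

def farthest_visible(a, b, win, eps=1e-9):
    # Assumes a is visible; bisect until the remaining segment is below eps
    if outcode(*b, *win) == 0:
        return b                                  # whole segment is visible
    while abs(b[0] - a[0]) > eps or abs(b[1] - a[1]) > eps:
        m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        if outcode(*m, *win) == 0:
            a = m                                 # mid-point visible: advance
        else:
            b = m                                 # overestimated: pull back
    return a

win = (5, 10, 25, 30)                             # xmin, ymin, xmax, ymax
x, y = farthest_visible((18, 22), (30, 32), win)
print(round(x, 3), round(y, 3))                   # -> 25.0 27.833
```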

WHAT IS CLIPPING

CLIPPING :- Clipping or clipping algorithm is the procedure which identifies the parts of a

picture that are either inside or outside of the specified region of the space. The region

against which an object is to be clipped is called a clip window.

WHY IS CLIPPING REQUIRED IN COMPUTER GRAPHIC

Clipping is required to specify a localized view along with the convenience and the

flexibility of using a window, because objects in the scene may be completely inside the

window, completely outside the window, or partially visible through the window. The

clipping operation eliminates objects, or portions of objects, that are not visible through the window, to ensure the proper construction of the corresponding image.

Clipping algorithms can be applied in the world coordinates, such that only the contents of

the window interior are mapped to device coordinates. The entire world-coordinates

picture can be mapped first to device coordinates, or normalized device coordinates, then

clipped against the viewport boundaries.

World-coordinate clipping removes the primitives outside the window from further consideration, thus eliminating the processing necessary to transform those primitives to device space. On the other hand, viewport clipping can reduce calculations by allowing concatenation of the viewing and geometric transformation matrices.

Cohen-Sutherland Algorithm

• Developed by Dan Cohen and Ivan Sutherland.

• This algorithm uses a four-bit code to determine which of the nine regions contains each end point of the line.

• Four bit codes are called region codes or

outcodes.

Bit codes

Bit                             Value 1                          Value 0
------------------------------  -------------------------------  -------------------------------
First (most significant) bit    Above top edge: y > y(max)       On or below top edge: y <= y(max)
Second bit                      Below bottom edge: y < y(min)    On or above bottom edge: y >= y(min)
Third bit                       Right of right edge: x > x(max)  On or left of right edge: x <= x(max)
Fourth (least significant) bit  Left of left edge: x < x(min)    On or right of left edge: x >= x(min)

The window and the nine region codes:

  1001 | 1000 | 1010
  -----+------+-----
  0001 | 0000 | 0010
  -----+------+-----
  0101 | 0100 | 0110

(0000 is the window itself.)

How to take bit values

• A bit is set to 0 if the corresponding expression below is positive (or zero); it is set to 1 otherwise.

• B1 = sign { y(max) – y }

• B2 = sign { y – y(min) }

• B3 = sign { x(max) – x }

• B4 = sign { x – x(min) }

Selecting lines

• Lines whose both end-point codes are 0000 are completely inside the window boundaries. They need not be clipped.

• On logically ANDing the codes of the end points of a line, if the result is not 0000, the line is completely outside the window. It is ignored.

• If the logical AND gives 0000 but at least one end point lies outside, the line may be partially visible and needs to be clipped.

Clipping the partially visible line

• Coordinates at which line will be clipped at top, bottom, left,

right edge are determined by-

• Top edge

X = X(o) + [{ X(1) – X(o)} * { Y(max) – Y(o) } / { Y(1) – Y(o) }]

• Bottom edge

X = X(o) + [ { X(1) – X(o)} * { Y(min) – Y(o)} / { Y(1) – Y(o) }]

• Right edge

Y = Y(o) + [{ Y(1) – Y(o)} * { X(max) – X(o)} / { X(1) – X(o) }]

• Left edge

Y = Y(o) + [{ Y(1) – Y(o)} * { X(min) – X(o)} / { X(1) – X(o) }]

Example

• if window is (5,30) (5,10) (25,30) (25,10) and line to be clipped is B(18,22) C(30,32).

• Bits for B

B1 = 30-22 = (positive) = 0

B2 = 22-10 = (positive) = 0

B3 = 25-18 = (positive) = 0

B4 = 18-5 = (positive) = 0

• Bits for C

B1 = 30-32 = (negative) = 1

B2 = 32-10 = (positive) = 0

B3 = 25-30 = (negative) = 1

B4 = 30-5 = (positive) = 0

• To clip the line against the right edge:

Y = Y(o) + [{ Y(1) – Y(o)} * { X(max) – X(o)} / { X(1) – X(o) }]
  = 22 + (32 – 22) * (25 – 18)/(30 – 18) = 27.8

• Thus the clipped end of line BC is at (25, 27.8) in the window.
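The worked example can be reproduced in a few lines (a sketch, not the slides' code):

```python
def region_code(x, y, xmin, ymin, xmax, ymax):
    # B1 B2 B3 B4 = top, bottom, right, left: 0 when the sign test is positive
    b1 = 0 if ymax - y >= 0 else 1
    b2 = 0 if y - ymin >= 0 else 1
    b3 = 0 if xmax - x >= 0 else 1
    b4 = 0 if x - xmin >= 0 else 1
    return (b1, b2, b3, b4)

xmin, ymin, xmax, ymax = 5, 10, 25, 30
B, C = (18, 22), (30, 32)
print(region_code(*B, xmin, ymin, xmax, ymax))   # -> (0, 0, 0, 0): B is inside
print(region_code(*C, xmin, ymin, xmax, ymax))   # -> (1, 0, 1, 0): C is outside

# Clip against the right edge: Y = Y0 + (Y1 - Y0) * (Xmax - X0) / (X1 - X0)
y = B[1] + (C[1] - B[1]) * (xmax - B[0]) / (C[0] - B[0])
print(round(y, 1))                               # -> 27.8
```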

B-Spline method

The term B-spline goes back to the long flexible strips of metal used by draftsmen to lay out the surfaces of airplanes, cars and ships. The mathematical equivalent of these strips, the natural cubic spline, is a C0, C1, C2 continuous cubic polynomial that interpolates (passes through) the control points.

In the parametric representation, any point on the curve between two successive control points Pi, Pi+1 has coordinates x(u), y(u) for 0 ≤ u ≤ 1. For B-spline curve segments:

X(u) = {(a3·u + a2)·u + a1}·u + a0
Y(u) = {(b3·u + b2)·u + b1}·u + b0

The coefficients (ai, bi) are evaluated using four consecutive control points, and form the constraints that enforce the C0, C1 and C2 continuities mentioned above.

The coefficient values for the points (X[i-1], Y[i-1]), (X[i], Y[i]), (X[i+1], Y[i+1]) and (X[i+2], Y[i+2]) are:

• a3 = (-x[i-1] + 3x[i] - 3x[i+1] + x[i+2]) / 6
• a2 = (x[i-1] - 2x[i] + x[i+1]) / 2
• a1 = (-x[i-1] + x[i+1]) / 2
• a0 = (x[i-1] + 4x[i] + x[i+1]) / 6

• Similarly b0, b1, b2 and b3 are evaluated from the y-coordinates of those points. Throughout the B-spline curve drawing process, the coefficients are calculated only once for a single curve segment.
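The coefficient formulas can be evaluated directly with Horner's rule, exactly as in X(u) above (an illustrative sketch; for collinear, evenly spaced control values the segment stays on the line):

```python
def bspline_segment(x_prev, x_i, x_next, x_next2, u):
    # Uniform cubic B-spline coefficients from four consecutive control values
    a3 = (-x_prev + 3 * x_i - 3 * x_next + x_next2) / 6
    a2 = (x_prev - 2 * x_i + x_next) / 2
    a1 = (-x_prev + x_next) / 2
    a0 = (x_prev + 4 * x_i + x_next) / 6
    return ((a3 * u + a2) * u + a1) * u + a0     # X(u) by Horner's rule

print(bspline_segment(0, 1, 2, 3, 0.0))   # -> 1.0 (segment starts at x_i)
print(bspline_segment(0, 1, 2, 3, 1.0))   # -> 2.0 (and ends at x_next)
```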

B-splines have two advantages over Bezier splines:

• The degree of the B-spline polynomial can be set independently of the number of control points.

• B-splines allow local control over the shape of a spline curve or surface.

The trade-off is that B-splines are more complex than Bezier splines.

Bezier Curve

• This curve generally follows the shape of the defining polygon.

In a Bezier curve:

• The first and last points of the curve coincide with the first and last points of the defining polygon.

• The tangent vectors at the ends of the curve have the same direction as the first and last sides of the polygon.

• The equation of the Bezier curve is:

P(t) = Σ Bi · Jn,i(t) , 0 ≤ t ≤ 1

where the Bi are the control points and Jn,i is the Bezier (Bernstein) blending function:

Jn,i(t) = C(n, i) · tⁱ · (1 − t)ⁿ⁻ⁱ , where C(n, i) = n! / (i! (n − i)!)

• E.g.:

• Here, the number of control points = 4; therefore the degree of the polynomial = 3.

The Bezier blending functions are given by:

J3,0 = (1 − t)³
J3,1 = 3t(1 − t)²
J3,2 = 3t²(1 − t)
J3,3 = t³

• (Fig.: plots of the Bezier blending functions.)
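The blending functions can be evaluated for any degree with the general Bernstein form (an illustrative sketch; the end-point property from the first bullet falls out immediately):

```python
from math import comb

def bezier_point(control, t):
    # P(t) = sum_i B_i * J_{n,i}(t), with J_{n,i}(t) = C(n,i) t^i (1-t)^(n-i)
    n = len(control) - 1
    return sum(b * comb(n, i) * t**i * (1 - t)**(n - i)
               for i, b in enumerate(control))

ctrl = [0.0, 1.0, 3.0, 4.0]            # four control values -> degree 3
print(bezier_point(ctrl, 0.0))         # -> 0.0: curve starts at the first point
print(bezier_point(ctrl, 1.0))         # -> 4.0: and ends at the last
print(bezier_point(ctrl, 0.5))         # -> 2.0
```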

INDEX

• Illumination models

• Diffuse Reflection

ILLUMINATION MODELS

1. DIFFUSE ILLUMINATION -
An object may be illuminated by light which does not come from a single direction or a particular source, but from all directions, so that the illumination is uniform from every direction. This is called diffuse illumination.

2. POINT-SOURCE ILLUMINATION -
A point source emits rays from a single point, and the emitted rays diverge radially from the source position. The dimensions of the light source are small compared to the size of the object.

DIFFUSE REFLECTION

Diffuse reflection is the reflection of light from a surface in such a way that an incident ray is reflected at many angles rather than at just one angle.

Consider the effect of ambient light (ambient light is the combination of reflections from different surfaces that produces a uniform illumination): when it is reflected from a surface, it produces illumination of the surface at any position from which the surface is visible.

If we assume the uniform intensity of the ambient light to be La, then the intensity of diffuse reflection is

I = Ka · La

where Ka is the ambient diffuse coefficient of reflection (0 ≤ Ka ≤ 1).

Submitted by-

Vipin Thakur

0101cs131120

Cse b, Vsem

SPECULAR REFLECTION

• Specular reflection is the mirror-like reflection of light from a surface, in which light from a single incoming direction is reflected into a single outgoing direction.

• Such behavior is described by the law of reflection.

• The law states that the incident ray and the reflected ray make the same angle with respect to the surface normal, i.e.

Ɵi = Ɵr

• Thus the incident , normal and reflected directions are coplanar.

• Shiny materials have specular

properties, that give highlights

from light sources.

• The highlights we see depend on our position relative to the surface from which the light is reflecting.

• For an ideal mirror, a perfectly

reflected ray is symmetric with

the incident ray about the

normal.

• But as before, surfaces are not perfectly smooth, so there will

be variations around the ideal reflected ray.

• The angle ɸ between the reflection vector R and the view vector V is called the viewing angle.

• For an ideal reflector (a perfect mirror), ɸ = 0.

• Phong modelled these variations through empirical observations.

• As a result we have:

Iout = kspecular · Ilight · cosˢ(ɸ)

where Iout = intensity of specular reflection, ɸ = viewing angle, Ilight = intensity of the light source, kspecular = material specular reflection coefficient

• s is the shininess factor due to the surface material.
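The empirical formula is one line of code (an illustrative sketch with assumed example values):

```python
from math import cos, radians

def specular_intensity(k_specular, i_light, phi_degrees, s):
    # I_out = k_specular * I_light * cos^s(phi)
    return k_specular * i_light * cos(radians(phi_degrees)) ** s

# Viewing straight along the reflection direction gives the full highlight:
print(specular_intensity(0.8, 1.0, 0, 50))          # -> 0.8
# For a shiny surface (large s) the highlight decays fast as phi grows:
print(specular_intensity(0.8, 1.0, 10, 50) < 0.5)   # -> True
```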

PHONG SHADING

• Phong shading is a method for rendering a polygon surface by interpolating normal vectors and then applying the illumination model at each surface point.

• This method was developed by Phong Bui Tuong. It is also called normal-vector interpolation shading.

• Surface highlights are more realistic, and the Mach-band effect is greatly reduced.

Steps for rendering polygon surface:-

1. Find the average unit normal vector at each

polygon vertex.

2. Linearly interpolate the vertex normals over the surface of the polygon.

3. Apply an illumination model along each scan line

to calculate projected pixel intensities for the

surface points.

(Fig. 1: a scan line crossing a polygon whose vertices carry normals N1, N2, N3.)

• In Fig. 1, the normal vector N at the scan-line intersection point along the edge between vertices 1 and 2 can be obtained by vertically interpolating between the edge's end-point normals.

• Interpolation of surface normals along a polygon edge between two vertices is given by:

N = ((Y - Y2)/(Y1 - Y2)) · N1 + ((Y1 - Y)/(Y1 - Y2)) · N2

• Incremental methods are used to evaluate normals

between scan lines and along each individual scan

line.

• At each pixel position along the scan line, the

illumination model is applied to determine the

surface intensity at that point.

• Intensity calculations using an approximated normal

vector at every point along the scan line produce

more accurate results than the direct interpolation of

intensities, as in Gouraud shading.

• The trade-off, however, is that Phong shading

requires considerably more calculations.
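The vertex-normal interpolation of step 2 can be sketched as below (names assumed; the result is renormalized, since a linear blend of unit normals is generally not unit length):

```python
import numpy as np

def interpolate_normal(n1, n2, y1, y2, y):
    # N = ((Y - Y2)/(Y1 - Y2)) N1 + ((Y1 - Y)/(Y1 - Y2)) N2, then renormalize
    n = ((y - y2) / (y1 - y2)) * np.asarray(n1) \
        + ((y1 - y) / (y1 - y2)) * np.asarray(n2)
    return n / np.linalg.norm(n)

n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([1.0, 0.0, 0.0])
mid = interpolate_normal(n1, n2, y1=10, y2=0, y=5)   # halfway down the edge
print(np.round(mid, 4))   # -> [0.7071 0.     0.7071]
```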

Advantages:-

• Usually very smooth-looking results.

• High quality, narrow specularities.

Disadvantages:-

• But, considerably more expensive.

• Still an approximation for most surfaces.

Color Model

A color model is an orderly system for creating a whole range of colors from a small set of primary colors. It gives us a convenient specification of colors within a specific range, or gamut.

In order to understand it properly, especially the primary colors and the color gamut, we first have to look at the properties of light involved in this phenomenon.

Property of Light

A light source such as the sun or a bulb emits all frequencies within the visible range, giving white light. Each frequency in the visible band corresponds to a different color. Our eye can perceive 400,000 distinct frequencies. Now, when this light falls on an object, some frequencies are absorbed while some are reflected. The reflected frequencies combine to decide the color of the object. If the lower frequencies are predominant in the reflected light, the color of the object will be red (as in the visible range red has the lowest frequency, i.e. VIBGYOR). Therefore, we can say that the dominant frequency decides the color of the object.

That is why the dominant frequency is also known as the hue, or simply the color.

Principle Used-

We know that two differently colored light sources with suitably chosen intensities can be used to produce a range of other colors. This is the principle used by a color model: a color model uses a combination of three or more colors to produce a wide range of colors, called the color gamut of that model. The basic colors used to produce the color gamut are known as the primary colors.

Types of Color Model

There are basically two types of color model, and all other color models are based on one of these two.

1. Additive color model - This type of model uses light to display color. Here the mixing begins with black and ends with white: as more color is added, the result gets lighter and tends toward white. This is used for computer displays.

E.g. the RGB color model.

2. Subtractive color model - This model uses ink to display color. Here the mixing begins with white and ends with black: the result gets darker as more color is added. This is used for printed material.

It is called subtractive because each added pigment subtracts (absorbs) some wavelengths from the light reflected to the viewer.

E.g. the CMYK system used for printing.

Different types of color Model

1.RGB Color Model

2.YIQ Color Model

3.CMY Color Model

4.HSV Color Model and so on..

All these models use different ranges of colors, but they are all either additive or subtractive.


PARALLEL PROJECTION

During projection, when we want to preserve the object's shape and size, we make use of parallel projection. Parallel projection needs at least two views of the object onto different view planes to obtain a complete representation of the final, required object.

Here, image points are found as the intersection of the view plane with a projector drawn from the object point in a fixed direction [Direction Of Projection = DOP], i.e. parallel projections have a DOP instead of a Centre Of Projection [COP].

The DOP is the same for all points.

A parallel projective transformation is determined by the DOP vector V and the view plane. The view plane is specified by its reference point Ro and normal N.

(Fig.: point P(x1, y1, z1) is projected along V onto the view plane, giving P′(x′, y′, z′); the view plane is specified by Ro and N.)

Now our aim is to find the projection of point P onto the view plane, i.e. the point P′. To obtain the equations for projection onto the xy plane, let the DOP be V = Xp·i + Yp·j + Zp·k.

From the figure it is clear that PP′ and V are in the same direction. Since the projection is in the xy plane, P′ has only x and y values, and z′ = 0.

Parallel Projection Transformation

Therefore, vector PP′ = u · V [as the directions are the same].

Comparing the components: x′ = x1 + u·Xp, y′ = y1 + u·Yp and 0 = z1 + u·Zp, so u = -z1/Zp, giving

x′ = x1 - z1·(Xp/Zp) ,  y′ = y1 - z1·(Yp/Zp)

For a general view plane, after that:

1) Translate view reference point Ro of view plane to origin using translation matrix.

2) Perform alignment transformation Rxy so that view normal vector N of view plane

points in direction K, normal to XY plane.

3) Project point P1 on xy plane.

4) Perform inverse of steps 2 and 1.

General Parallel Projection

THIS IS THE REQUIRED SOLUTION
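For the special case worked above (projecting onto the xy plane along the DOP), the solution is only a couple of lines (a sketch with assumed names):

```python
def parallel_project_xy(p, v):
    # P' = P + u*V with z' = 0, so u = -z1 / Zp
    x1, y1, z1 = p
    xp, yp, zp = v
    u = -z1 / zp
    return (x1 + u * xp, y1 + u * yp, 0.0)

# Orthographic special case: V along z simply drops the z-coordinate
print(parallel_project_xy((2, 3, 5), (0, 0, 1)))    # -> (2.0, 3.0, 0.0)
# An oblique DOP shifts x and y as it projects:
print(parallel_project_xy((2, 3, 5), (1, 1, -1)))   # -> (7.0, 8.0, 0.0)
```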

Video File Formats

These are used to store digital video data on computers. They

are always stored in compressed form.

They normally consist of a container format (e.g. Matroska) containing video data in a video coding format (e.g. VP9) alongside audio data in an audio coding format (e.g. Opus).

The container format may contain synchronization info.,

subtitles, and metadata like title etc.

Essence-The coded video and audio inside a video file

container (i.e. not headers, footers and metadata)

Codec-A program (or hardware) which can decode video or

audio

File extensions- .webm .wmv .avi .mov

Video Compression

WHAT?- It means reducing the quantity of data to represent

video images keeping in mind the original quality.

WHY?- It can effectively reduce the bandwidth required to

transmit digital video via terrestrial broadcast, via cable, via

satellite services.

Mostly they are lossy, i.e. they operate on the basis that much of the data originally present is not necessary for achieving good perceptual quality.

Example - DVDs use a video coding standard called MPEG-2 (which compresses 2 hours of video data by 15 to 30 times) that is still high quality for standard-definition video.

It's a trade-off between space, quality, and the cost of hardware required to decompress the video in a reasonable time.

3GP

It is a multimedia container format defined by the 3rd Generation Partnership Project (3GPP), used on 3G mobile phones but also on some 2G and 4G phones.

It was designed for GSM-based phones. 3GP is big-endian, storing and transferring the most significant bytes first.

Device support - 3G mobile phones, Nintendo DSi, some Apple iDevices.

Software support - Windows, Mac OS X and Linux operating systems.

.AVI

Audio Video Interleaved is a multimedia container format introduced by Microsoft in November 1992 as part of its Video for Windows software.

Format - a derivative of RIFF, which divides the data into blocks, or chunks. Each chunk is identified by a tag.

An AVI file takes the form of a single chunk in a RIFF-formatted file, which is further subdivided into two mandatory chunks and one optional chunk. The format is described using the following image: