Multimedia Technology - Chapter 5: Video
Page 1: Multimedia Technology - Chapter 5_ Video

MULTIMEDIA TECHNOLOGYVIDEO

Dr. Zeeshan Bhatti

BSIT-III

Chapter 5

BY: DR. ZEESHAN BHATTI 1

Page 2: Multimedia Technology - Chapter 5_ Video

VIDEO

Video is somewhat like a series of still images

Video uses Red-Green-Blue color space

Pixel resolution (width x height), number of bits per pixel, and frame rate are factors in quality

But there’s much more to it

BY: DR. ZEESHAN BHATTI 2

Page 3: Multimedia Technology - Chapter 5_ Video

VIDEO STRUCTURING

A video can be decomposed into a well-defined structure consisting of five levels

1. Video shot: an unbroken sequence of frames recorded from a single camera. It is the building block of a video.

2. Key frame: the frame that best represents the salient content of a shot.

3. Video scene: a collection of semantically related and temporally adjacent shots; it depicts and conveys the concept or story of the video.

4. Video group: an intermediate entity between the physical shots and the video scenes. The shots in a video group are visually similar and temporally close to each other.

5. Video: the root level, which contains all the components defined above.
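
The five-level hierarchy above maps naturally onto a nested data structure. The sketch below is a minimal illustration in Python, assuming shots are stored as frame-index ranges; the class and attribute names are hypothetical, not taken from the slides.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Shot:
    """Unbroken sequence of frames from a single camera (stored as a frame-index range)."""
    start_frame: int
    end_frame: int
    key_frame: int  # index of the frame that best represents the shot

@dataclass
class Group:
    """Visually similar, temporally close shots."""
    shots: List[Shot] = field(default_factory=list)

@dataclass
class Scene:
    """Semantically related, temporally adjacent groups of shots telling one story unit."""
    groups: List[Group] = field(default_factory=list)

@dataclass
class Video:
    """Root level: the complete video."""
    scenes: List[Scene] = field(default_factory=list)
```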

BY: DR. ZEESHAN BHATTI 3

Page 4: Multimedia Technology - Chapter 5_ Video

TYPES OF VIDEO SIGNALS

1. Component video

2. Composite Video

3. S-video

BY: DR. ZEESHAN BHATTI 4

Page 5: Multimedia Technology - Chapter 5_ Video

COMPONENT VIDEO SIGNALS

Component video: Higher-end video systems make use of three separate video signals for the red, green, and blue image planes. Each color channel is sent as a separate video signal.

Most computer systems use Component Video, with separate signals for R, G, and B signals.

For any color separation scheme, Component Video gives the best color reproduction since there is no “crosstalk” between the three channels.

This is not the case for S-Video or Composite Video, discussed next. Component video, however, requires more bandwidth and good synchronization of the three components.

Makes use of three separate video signals for Red, Green and Blue.

BY: DR. ZEESHAN BHATTI 5

Page 6: Multimedia Technology - Chapter 5_ Video

COMPOSITE VIDEO | 1 SIGNAL

Composite video: color ("chrominance") and intensity ("luminance") signals are mixed into a single carrier wave.

Chrominance is a composition of two color components (I and Q, or U and V).

Composite video is used by broadcast TV. In NTSC TV, for example, I and Q are combined into a chroma signal, and a color subcarrier is then employed to put the chroma signal at the high-frequency end of the band shared with the luminance signal.

The chrominance and luminance components can be separated at the receiver end, and then the two color components can be further recovered.

When connecting to TVs or VCRs, Composite Video uses only one wire, and video color signals are mixed, not sent separately. The audio and sync signals are additions to this one signal.

Since color and intensity are wrapped into the same signal, some interference between the luminance and chrominance signals is inevitable.

BY: DR. ZEESHAN BHATTI 6

Page 7: Multimedia Technology - Chapter 5_ Video

S-VIDEO | 2 SIGNALS

S-Video (Separated video, or Super-video, e.g., in S-VHS) is a compromise: it uses two wires, one for luminance and another for a composite chrominance signal.

As a result, there is less crosstalk between the color information and the crucial gray-scale information.

The reason for placing luminance into its own part of the signal is that black-and-white information is most crucial for visual perception.

In fact, humans are able to differentiate spatial resolution in grayscale images with a much higher acuity than for the color part of color images.

As a result, we can send less accurate color information than must be sent for intensity information: we can only see fairly large blobs of color, so it makes sense to send less color detail.

BY: DR. ZEESHAN BHATTI 7

Page 8: Multimedia Technology - Chapter 5_ Video

ANALOG VIDEO

BY: DR. ZEESHAN BHATTI 8

Page 9: Multimedia Technology - Chapter 5_ Video

ANALOG VIDEO

An analog signal f(t) samples a time-varying image. So-called "progressive" scanning traces through a complete picture (a frame) row-wise for each time interval.

In TV, and in some monitors and multimedia standards as well, another system, called "interlaced" scanning, is used:

a) The odd-numbered lines are traced first, and then the even-numbered lines are traced. This results in "odd" and "even" fields; two fields make up one frame.

b) In fact, the odd lines (starting from 1) end up at the middle of a line at the end of the odd field, and the even scan starts at a half-way point.

BY: DR. ZEESHAN BHATTI 9

Page 10: Multimedia Technology - Chapter 5_ Video

INTERLACING

The image is separated into two series of lines (odd and even), called "fields".

At any instant only one field is displayed; two successive fields make up one complete frame.

NTSC = about 60 fields / sec

PAL & SECAM = 50 fields / sec

Interlacing was invented because it was difficult to transmit the amount of information in a full frame quickly enough to avoid flicker

Doubling the number of fields presented to the eye per second reduces perceived flicker.
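
As a rough illustration of how two fields recombine into one frame, the sketch below "weaves" an odd field and an even field together. It is a simplified model (NumPy arrays of scan lines, no timing), not broadcast-accurate code.

```python
import numpy as np

def weave_fields(odd_field: np.ndarray, even_field: np.ndarray) -> np.ndarray:
    """Interleave two fields (each holding every other scan line) into one full frame."""
    lines, width = odd_field.shape
    frame = np.empty((2 * lines, width), dtype=odd_field.dtype)
    frame[0::2] = odd_field   # odd-numbered lines (1, 3, 5, ... counted from 1)
    frame[1::2] = even_field  # even-numbered lines
    return frame

# Example: two 240-line NTSC-like fields combine into one 480-line frame.
odd = np.zeros((240, 640), dtype=np.uint8)
even = np.ones((240, 640), dtype=np.uint8)
print(weave_fields(odd, even).shape)  # (480, 640)
```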

BY: DR. ZEESHAN BHATTI 10

Page 11: Multimedia Technology - Chapter 5_ Video

Figure 5.1 shows the scheme used. First the solid (odd) lines are traced, P to Q, then R to S, etc., ending at T; then the even field starts at U and ends at V.

The jump from Q to R, etc. in Figure 5.1 is called the horizontal retrace, during which the electronic beam in the CRT is blanked. The jump from T to U or V to P is called the vertical retrace.

BY: DR. ZEESHAN BHATTI 11

Figure: Interlaced raster scan

Page 12: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 12

Figure: Field 1 and Field 2 of an interlaced frame

Page 13: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 13

Figure: The combined fields as finally displayed on a TV/computer screen

Page 14: Multimedia Technology - Chapter 5_ Video

NTSC

The NTSC (National Television System Committee) TV standard is mostly used in North America and Japan.

It uses the familiar 4:3 aspect ratio (i.e., the ratio of picture width to its height).

Uses 525 scan lines per frame at 30 frames per second (fps).

a) NTSC follows the interlaced scanning system, and each frame is divided into two fields, with 262.5 lines/field.

b) Each line takes 63.5 microseconds to scan. Horizontal retrace takes 10 microseconds (with a 5-microsecond horizontal sync pulse embedded), so the active line time is 53.5 microseconds.

c) Since the horizontal retrace takes about 10.9 microseconds, this leaves roughly 52.7 microseconds for the active line signal during which image data is displayed (see Fig. 5.3).

BY: DR. ZEESHAN BHATTI 14

Page 15: Multimedia Technology - Chapter 5_ Video

Vertical retrace takes place during 20 lines reserved for control information at the beginning of each field. Hence, the number of active video lines per frame is only 485.

Similarly, almost 1/6 of the raster at the left side is blanked for horizontal retrace and sync. The non-blanking pixels are called active pixels.

Since the horizontal retrace takes about 10.9 microseconds, this leaves roughly 52.7 microseconds for the active line signal during which image data is displayed.

NTSC video is an analog signal with no fixed horizontal resolution. Therefore one must decide how many times to sample the signal for display: each sample corresponds to one pixel output.

A "pixel clock" is used to divide each horizontal line of video into samples. The higher the frequency of the pixel clock, the more samples per line there are.
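
A back-of-the-envelope sketch of the relationship between scan timing and the pixel clock, using the NTSC figures quoted above (525 lines at 30 fps, roughly 52.7 microseconds of active line time). The 13.5 MHz figure is the CCIR-601 sampling rate, used here only as an example; the function name is made up.

```python
def ntsc_samples_per_line(pixel_clock_hz: float,
                          active_line_time_s: float = 52.7e-6) -> float:
    """Samples (pixels) produced per active line for a given pixel clock."""
    return pixel_clock_hz * active_line_time_s

# Total line time follows from the scan parameters: 1 / (30 fps * 525 lines) ~= 63.5 us.
line_time = 1.0 / (30 * 525)
print(f"line time ~= {line_time * 1e6:.1f} us")

# A ~13.5 MHz pixel clock (the CCIR-601 sampling rate) yields ~711 samples per active line.
print(ntsc_samples_per_line(13.5e6))
```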

BY: DR. ZEESHAN BHATTI 15

Page 16: Multimedia Technology - Chapter 5_ Video

PAL

PAL (Phase Alternating Line) is a TV standard widely used in Western Europe, China, India, and many other parts of the world; it was developed by the German engineer Walter Bruch.

PAL uses 625 scan lines per frame, at 25 frames/second, with a 4:3 aspect ratio and interlaced fields.

(a) PAL uses the YUV color model. It uses an 8 MHz channel and allocates a bandwidth of 5.5 MHz to Y, and 1.8 MHz each to U and V. The color subcarrier frequency is fsc ≈ 4.43 MHz.

(b) In order to improve picture quality, chroma signals have alternate signs (e.g., +U and -U) in successive scan lines, hence the name "Phase Alternating Line".

(c) PAL is interlaced: each frame is divided into 2 fields, with 312.5 lines/field.

PAL broadcast TV signals are also transmitted as composite video.

BY: DR. ZEESHAN BHATTI 16

Page 17: Multimedia Technology - Chapter 5_ Video

SECAM

SECAM, which stands for Système Électronique Couleur Avec Mémoire, is the third major broadcast TV standard.

SECAM also uses 625 scan lines per frame, at 25 frames per second, with a 4:3 aspect ratio and interlaced fields.

SECAM and PAL are very similar. They differ slightly in their color coding scheme:

(a) In SECAM, U and V signals are modulated using separate color subcarriers at 4.25 MHz and 4.41 MHz respectively.

(b) They are sent in alternate lines, i.e., only one of the U or V signals will be sent on each scan line.

BY: DR. ZEESHAN BHATTI 17

Page 18: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 18

Comparison of Analog Broadcast TV Systems

Page 19: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 19

DIGITAL VIDEO

Page 20: Multimedia Technology - Chapter 5_ Video

DIGITAL VIDEO

The output is digitized by the camera into a sequence of single frames.

The video and audio data are compressed before being written to a tape or digitally stored.

BY: DR. ZEESHAN BHATTI 20

Page 21: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 21

Page 22: Multimedia Technology - Chapter 5_ Video

DIGITAL VIDEO BASICS

A video signal consists of luminance and chrominance information

Luminance – brightness, varying from white to black (abbreviated as Y)

Chrominance – color (hue & saturation), conveyed as a pair of color difference signals:

R-Y (hue & saturation for red, without luminance)

B-Y (hue & saturation for blue, without luminance)
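
A minimal sketch of how Y and the two color-difference signals can be derived from RGB, assuming the BT.601 luma weights (0.299, 0.587, 0.114); the slides themselves do not specify particular coefficients.

```python
def to_luma_chroma(r: float, g: float, b: float):
    """Split an RGB value (0..1) into luminance Y and the color differences R-Y, B-Y."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luma weights (assumption)
    return y, r - y, b - y

print(to_luma_chroma(1.0, 0.0, 0.0))  # pure red: Y=0.299, R-Y=0.701, B-Y=-0.299
```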

BY: DR. ZEESHAN BHATTI 22

Page 23: Multimedia Technology - Chapter 5_ Video

DIGITAL VIDEO

The advantages of digital representation for video are many.

For example:

(a) Video can be stored on digital devices or in memory, ready to be processed (noise removal, cut and paste, etc.), and integrated to various multimedia applications;

(b) Direct access is possible, which makes nonlinear video editing achievable as a simple, rather than a complex, task;

(c) Repeated recording does not degrade image quality;

(d) Ease of encryption and better tolerance to channel noise.

BY: DR. ZEESHAN BHATTI 23

Page 24: Multimedia Technology - Chapter 5_ Video

CHROMA SUBSAMPLING

Since humans see color with much less spatial resolution than they see black and white, it makes sense to "decimate" the chrominance signal.

Interesting (but not necessarily informative!) names have arisen to label the different schemes used.

To begin with, numbers are given stating how many pixel values, per four original pixels, are actually sent:

(a) The chroma subsampling scheme "4:4:4" indicates that no chroma subsampling is used: each pixel's Y (luminance), Cb (blue difference), and Cr (red difference) values are transmitted, 4 for each of Y, Cb, Cr.

BY: DR. ZEESHAN BHATTI 24

Page 25: Multimedia Technology - Chapter 5_ Video

(b) The scheme “4:2:2" indicates horizontal subsampling of the Cb, Cr signals by a factor of 2. That is, of four pixels horizontally labelled as 0 to 3, all four Ys are sent, and every two Cb's and two Cr's are sent, as (Cb0, Y0)(Cr0, Y1)(Cb2, Y2)(Cr2, Y3)(Cb4, Y4), and so on (or averaging is used).

(c) The scheme “4:1:1" subsamples horizontally by a factor of 4.

(d) The scheme "4:2:0" subsamples in both the horizontal and vertical dimensions by a factor of 2. Theoretically, an average chroma pixel is positioned between the rows and columns, as shown in the figure.
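
One common way to realize 4:2:0 subsampling is to keep every luma sample and average each 2x2 block of chroma samples (the averaging variant mentioned above). The sketch below assumes full-resolution Cb and Cr planes stored as NumPy arrays; the function name is illustrative, not a standard API.

```python
import numpy as np

def subsample_420(cb: np.ndarray, cr: np.ndarray):
    """Average 2x2 blocks of each chroma plane; luma is left untouched (4:2:0)."""
    def avg_2x2(plane: np.ndarray) -> np.ndarray:
        h, w = plane.shape
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return avg_2x2(cb), avg_2x2(cr)

cb = np.arange(16, dtype=float).reshape(4, 4)
cr = np.ones((4, 4))
small_cb, small_cr = subsample_420(cb, cr)
print(small_cb.shape)  # (2, 2): half the resolution in both dimensions
```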

BY: DR. ZEESHAN BHATTI 25

Page 26: Multimedia Technology - Chapter 5_ Video

CHROMA SUBSAMPLING.

BY: DR. ZEESHAN BHATTI 26

Page 27: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 27

The mapping examples given are only theoretical and for illustration

Page 28: Multimedia Technology - Chapter 5_ Video

CCIR STANDARDS FOR DIGITAL VIDEO

CCIR is the Consultative Committee for International Radio, and one of the most important standards it has produced is CCIR-601, for component digital video.

This standard has since become ITU-R Rec. 601, an international standard for professional video applications

adopted by certain digital video formats including the popular DV video.

The table below shows some of the digital video specifications, all with an aspect ratio of 4:3. The CCIR 601 standard uses an interlaced scan, so each field has only half as much vertical resolution (e.g., 240 lines in NTSC).

BY: DR. ZEESHAN BHATTI 28

Page 29: Multimedia Technology - Chapter 5_ Video

CIF stands for Common Intermediate Format, specified by the CCITT.

(a) The idea of CIF is to specify a common format for lower-bitrate video.

(b) CIF is about the same as VHS quality. It uses a progressive (non-interlaced) scan.

(c) QCIF stands for “Quarter-CIF". All the CIF/QCIF resolutions are evenly divisible by 8, and all except 88 are divisible by 16; this provides convenience for block-based video coding in H.261 and H.263.

BY: DR. ZEESHAN BHATTI 29

Page 30: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 30

Table: Digital video specifications

Page 31: Multimedia Technology - Chapter 5_ Video

HDTV (HIGH DEFINITION TV)

The main thrust of HDTV (High Definition TV) is not to increase the "definition" in each unit area, but rather to increase the visual field, especially in its width.

(a) The first generation of HDTV was based on an analog technology developed by Sony and NHK in Japan in the late 1970s.

(b) MUSE (MUltiple sub-Nyquist Sampling Encoding) was an improved NHK HDTV with hybrid analog/digital technologies that was put in use in the 1990s. It has 1,125 scan lines, interlaced (60 fields per second), and a 16:9 aspect ratio.

(c) Since uncompressed HDTV will easily demand more than 20 MHz bandwidth, which will not fit in the current 6 MHz or 8 MHz channels, various compression techniques are being investigated.

(d) It is also anticipated that high quality HDTV signals will be transmitted using more than one channel even after compression.

BY: DR. ZEESHAN BHATTI 31

Page 32: Multimedia Technology - Chapter 5_ Video

For video, MPEG-2 is chosen as the compression standard.

For audio, AC-3 is the standard. It supports the so-called 5.1 channel Dolby surround sound, i.e., five surround channels plus a subwoofer channel.

The salient differences between conventional TV and HDTV:

(a) HDTV has a much wider aspect ratio of 16:9 instead of 4:3.

(b) HDTV moves toward progressive (non-interlaced) scan. The rationale is that interlacing introduces serrated edges to moving objects and flickers along horizontal edges.

BY: DR. ZEESHAN BHATTI 32

Page 33: Multimedia Technology - Chapter 5_ Video

The FCC (Federal Communications Commission) has planned to replace all analog broadcast services with digital TV broadcasting by the year 2006. The services provided will include:

o SDTV (Standard Definition TV): the current NTSC TV or higher.

o EDTV (Enhanced Definition TV): 480 active lines or higher, i.e., the third and fourth rows in Table 5.4.

o HDTV (High Definition TV): 720 active lines or higher.

BY: DR. ZEESHAN BHATTI 33

Page 34: Multimedia Technology - Chapter 5_ Video

USES OF DIGITAL VIDEO

The uses of digital video are widespread. People who are teaching, training, and selling can all use digital video to improve their jobs.

For teaching, digital video is particularly helpful in teaching multiculturalism: from the classroom, the class is able to visit distant places, and students can see the local sights and hear the local sounds.

This aspect of digital video is also particularly helpful in the Real Estate field. Customers are able to visit properties without having to actually be there.

Digital video is also a useful tool for training. Because digital video is capable of constructing images through the use of a three-dimensional model, producing simulated training situations with digital video can be an effective tool. For example, training an airline pilot with a flight simulator allows the operator to experience what flying a jet is really like.

BY: DR. ZEESHAN BHATTI 34

Page 35: Multimedia Technology - Chapter 5_ Video

USES OF DIGITAL VIDEO

Interior designers and landscape architects can also use this tool to simulate what a final project will look like when completed.

Selling products can also be enhanced through digital video. Video of products allows the customer to get a realistic glimpse of a product and its capabilities. Digital video provides product demonstrations and can often answer customers' questions.

Video conferencing allows salespeople to pitch their products to people around the world without the travel expenses.

The medical field has many different uses of interactive digital video. Digital video provides doctors with current information that is continuously changing. It also allows doctors to learn from colleagues who are far away and from various sources of information.

BY: DR. ZEESHAN BHATTI 35

Page 36: Multimedia Technology - Chapter 5_ Video

PROCESS OF CONVERTING AN ANALOG VIDEO SIGNAL TO A DIGITAL VIDEO SIGNAL

Video data normally occurs as continuous, analog signals. In order for a computer to process this video data, we must convert the analog signals to a non-continuous, digital format. In a digital format, the video data can be stored as a series of bits on a hard disk or in computer memory.

The process of converting a video signal to a digital bitstream is called analog-to-digital conversion (A/D conversion), or digitizing. A/D conversion occurs in two steps:

1. Sampling captures data from the video stream.

2. Quantizing converts each captured sample into a digital format.
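
The two steps can be sketched in a few lines: sample the continuous signal at a fixed rate, then quantize each sample to a fixed number of bits. The sine wave below stands in for a real video signal; actual digitizers work the same way in principle, just at millions of samples per second.

```python
import numpy as np

def digitize(signal_fn, duration_s: float, sample_rate_hz: float, bits: int = 8):
    """Step 1: sample at sample_rate_hz.  Step 2: quantize each sample to 2**bits levels."""
    t = np.arange(0.0, duration_s, 1.0 / sample_rate_hz)
    samples = signal_fn(t)                       # sampling
    levels = 2 ** bits
    # Map the signal's [-1, 1] range onto integer codes 0 .. levels-1.
    codes = np.round((samples + 1.0) / 2.0 * (levels - 1)).astype(int)
    return codes

codes = digitize(np.sin, duration_s=0.001, sample_rate_hz=1_000_000, bits=8)
print(len(codes), codes[:5])  # ~1000 samples, each stored as an 8-bit code
```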

BY: DR. ZEESHAN BHATTI 36

Page 37: Multimedia Technology - Chapter 5_ Video

SAMPLING

Each sample captured from the video stream is typically stored as a 16-bit integer.

The rate at which samples are collected is called the sampling rate.

The sampling rate is measured in the number of samples captured per second (samples/second).

For digital video, it is necessary to capture millions of samples per second.

BY: DR. ZEESHAN BHATTI 37

Page 38: Multimedia Technology - Chapter 5_ Video

QUANTIZING

Quantizing converts the level of a video signal sample into a discrete, binary value.

This value approximates the level of the original video signal sample.

The value is selected by comparing the video sample to a series of predefined threshold values.

The value of the threshold closest to the amplitude of the sampled signal is used as the digital value.
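
A small sketch of the "closest predefined level" idea described above: each sample is replaced by whichever of a list of threshold levels is nearest in amplitude. The levels and sample values are made up for illustration.

```python
def quantize_to_nearest(sample: float, levels: list[float]) -> float:
    """Return the predefined level closest in amplitude to the sample."""
    return min(levels, key=lambda level: abs(level - sample))

levels = [0.0, 0.25, 0.5, 0.75, 1.0]
print(quantize_to_nearest(0.31, levels))  # -> 0.25
print(quantize_to_nearest(0.40, levels))  # -> 0.5
```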

BY: DR. ZEESHAN BHATTI 38

Page 39: Multimedia Technology - Chapter 5_ Video

A video signal contains several different components which are mixed together in the same signal.

This type of signal is called a composite video signal and is not really useful in high-quality computer video.

Therefore, a standard composite video signal is usually separated into its basic components before it is digitized.

The composite video signal format defined by the NTSC (National Television Standards Committee) color television system is used in the United States.

The PAL (Phase Alternation Line) and SECAM (Séquentiel Couleur Avec Mémoire) color television systems are used in Europe and are not compatible with NTSC.

Most computer video equipment supports one or more of these system standards.

BY: DR. ZEESHAN BHATTI 39

Page 40: Multimedia Technology - Chapter 5_ Video

VIDEO COMPRESSION

A single frame of video data can be quite large in size.

A video frame with a resolution of 512 x 482 will contain 246,784 pixels.

If each pixel contains 24 bits of color information, the frame will require 740,352 bytes of memory or disk space to store.

Assuming there are 30 frames per second for real-time video, a 10-second video sequence would be more than 222 megabytes in size!

It is clear there can be no computer video without at least one efficient method of video data compression.

BY: DR. ZEESHAN BHATTI 40

Page 41: Multimedia Technology - Chapter 5_ Video

41

DIGITAL VIDEO

Example:

NTSC: 640 x 480 px, 24-bit (3-byte) colour pictures

= 900 Kbytes / frame

x 30 frames / sec = 26 Mbytes / sec

x 60 seconds = 1.6 GBytes / min !!! Without sound !!!

Even worse with PAL / SECAM

Only studios (film or TV) can deal with such sizes and data rates
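
The arithmetic on this slide can be reproduced directly; the helper below simply restates the numbers (640 x 480 pixels, 3 bytes per pixel, 30 fps) and reports the per-frame, per-second, and per-minute sizes.

```python
def uncompressed_video_size(width=640, height=480, bytes_per_pixel=3,
                            fps=30, seconds=60):
    frame_bytes = width * height * bytes_per_pixel   # 921,600 B ~= 900 KB per frame
    per_second = frame_bytes * fps                   # ~26 MB per second
    total = per_second * seconds                     # ~1.6 GB per minute
    return frame_bytes, per_second, total

frame_b, rate_b, minute_b = uncompressed_video_size()
print(frame_b / 1024, rate_b / 2**20, minute_b / 2**30)  # ~900 KiB, ~26.4 MiB/s, ~1.5 GiB/min
```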

Page 42: Multimedia Technology - Chapter 5_ Video

VIDEO COMPRESSION (CONTINUED)

There are many encoding methods available that will compress video data.

The majority of these methods involve the use of a transform coding scheme, usually employing a Fourier or Discrete Cosine Transform (DCT).

These schemes reduce the size of the video data by selectively throwing away parts of the digitized information that are not needed visually, depending largely on the content of the video data.

Transform compression schemes usually discard 10 percent to 25 percent or more of the original video data, depending largely on the content of the video data and upon what image quality is considered acceptable.

BY: DR. ZEESHAN BHATTI 42

Page 43: Multimedia Technology - Chapter 5_ Video

VIDEO COMPRESSION (CONTINUED)

Usually a transform is performed on an individual video frame.

The transform itself does not produce compressed data. It discards only data not used by the human eye.

The transformed data, called coefficients, must have compression applied to reduce the size of the data even further.

Each frame of data may be compressed using a Huffman or arithmetic encoding algorithm, or even a more complex compression scheme such as JPEG.

This type of intraframe encoding usually results in compression ratios between 20:1 and 40:1, depending on the data in the frame. However, even higher compression ratios may result if, rather than looking at single frames as if they were still images, we look at multiple frames as temporal images.
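
A toy version of the transform-then-quantize idea for a single 8x8 block, using SciPy's DCT. The quantization step and block contents are arbitrary assumptions; real intraframe coders such as JPEG add perceptual quantization tables and entropy coding on top.

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

# Forward 2-D DCT, coarse quantization, then inverse DCT to reconstruct.
coeffs = dctn(block, norm='ortho')
step = 16.0                             # arbitrary quantization step (assumption)
quantized = np.round(coeffs / step)
reconstructed = idctn(quantized * step, norm='ortho')

print(np.count_nonzero(quantized), "of 64 coefficients remain nonzero")
print("max reconstruction error:", np.abs(block - reconstructed).max())
```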

BY: DR. ZEESHAN BHATTI 43

Page 44: Multimedia Technology - Chapter 5_ Video

MOTION COMPRESSION TECHNIQUE

In a typical video sequence, very little data changes from frame to frame.

If we encode only the pixels that change between frames, the amount of data required to store a single video frame drops significantly.

This type of compression is known as interframe delta compression, or in the case of video, motion compensation.

Typical motion compensation schemes that encode only frame deltas (data that has changed between frames) can, depending on the data, achieve compression ratios upwards of 200:1.

This is only one possible type of video compression method. There are many other types of video compression schemes, some of which are similar and some of which are different.
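
A bare-bones frame-delta sketch: store only the pixels that changed (beyond a small threshold) relative to the previous frame. This illustrates the "encode only frame deltas" idea; true motion compensation additionally searches for block motion vectors rather than working pixel by pixel.

```python
import numpy as np

def frame_delta(prev: np.ndarray, curr: np.ndarray, threshold: int = 4):
    """Return the coordinates and new values of pixels that changed noticeably."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    ys, xs = np.nonzero(changed)
    return ys, xs, curr[changed]

def apply_delta(prev: np.ndarray, delta) -> np.ndarray:
    """Reconstruct the current frame from the previous frame plus the stored delta."""
    ys, xs, values = delta
    frame = prev.copy()
    frame[ys, xs] = values
    return frame

prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:110, 200:210] = 255            # a small moving object
delta = frame_delta(prev, curr)
print(len(delta[2]), "changed pixels stored instead of", curr.size)
```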

BY: DR. ZEESHAN BHATTI 44

Page 45: Multimedia Technology - Chapter 5_ Video

45

CAPTURE AND COMPRESSION

Uncompressed video can be handled only by high-end equipment, hence the need for compression.

Page 46: Multimedia Technology - Chapter 5_ Video

46

CAPTURE AND COMPRESSION - CAPTURE CARD

Analogue signal path: TV, VCR, or analogue camera -> capture card -> computer

The analogue path suffers noise losses; generational loss and noise lead to bad quality and compression problems.

Usually compression is performed in the capture card, mostly in hardware, but software is possible.

The capture card must be compatible with the signal type (NTSC, PAL, VHS, S-VHS, Hi8, ...).

Page 47: Multimedia Technology - Chapter 5_ Video

47

CAPTURE AND COMPRESSION - DIGITAL SIGNAL

Digital signal path: digital TV or digital camera -> computer

No generational loss or noise, so good quality and better compression.

Usually compression is performed in the camera, with no control over such compression (typically DV for us).

Commonly a FireWire connection (IEEE 1394) is used.

Page 48: Multimedia Technology - Chapter 5_ Video

CODECS

A codec is the algorithm used to compress (code) a video for delivery, and to decode the compressed video in real time for fast playback.

Streaming audio and video starts playback as soon as enough data has transferred to the user’s computer to sustain this playback.

MPEG is a real-time video compression algorithm.

MPEG-4 includes numerous multimedia capabilities and is a preferred standard.

Browser support varies

BY: DR. ZEESHAN BHATTI 48

Page 49: Multimedia Technology - Chapter 5_ Video

VIDEO FORMAT CONVERTERS

Produce more than one version of your video to ensure that video will play on all the devices and in all the browsers necessary for your project’s distribution

BY: DR. ZEESHAN BHATTI 49

Page 50: Multimedia Technology - Chapter 5_ Video

VIDEO: SOURCE FORMATS

Analog

Tape: VHS, Betamax, 8mm, Hi8, Umatic, Betacam SP

Disc: Laserdisc, SelectaVision

Digital

Tape: MiniDV, Digital8, DVCAM, Digital Betacam

Disc: Video CD, DVD, Blu-ray

BY: DR. ZEESHAN BHATTI 50

Page 51: Multimedia Technology - Chapter 5_ Video

CHROMA KEYS

Blue or Green screen or chroma key editing is used to superimpose subjects over different backgrounds.
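
A crude green-screen sketch: pixels whose green channel clearly dominates are replaced by the background image. The threshold and channel test are simplistic assumptions; production keyers work in YUV/HSV with soft edges and spill suppression.

```python
import numpy as np

def chroma_key(foreground: np.ndarray, background: np.ndarray,
               dominance: float = 1.4) -> np.ndarray:
    """Replace strongly green foreground pixels with the background image."""
    r = foreground[..., 0].astype(float)
    g = foreground[..., 1].astype(float)
    b = foreground[..., 2].astype(float)
    is_green = (g > dominance * r) & (g > dominance * b)
    out = foreground.copy()
    out[is_green] = background[is_green]
    return out

fg = np.zeros((4, 4, 3), dtype=np.uint8); fg[...] = (10, 200, 10)   # all green screen
bg = np.full((4, 4, 3), 128, dtype=np.uint8)
print(chroma_key(fg, bg)[0, 0])  # -> [128 128 128], background shows through
```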

BY: DR. ZEESHAN BHATTI 51

Page 52: Multimedia Technology - Chapter 5_ Video

CREATING AND SHOOTING VIDEO

Shooting platform

–A steady shooting platform should always be used.

–Use an external microphone.

–Know the features of your camera and software.

–Decide on the aspect ratio up front.

BY: DR. ZEESHAN BHATTI 52

Page 53: Multimedia Technology - Chapter 5_ Video

CREATING AND SHOOTING VIDEO -STORYBOARDING

–Successful video production requires planning.

Storyboards are very important, as they form the basis of the work that is carried out on the movie, describing most of the major features as well as the plot and its development.

BY: DR. ZEESHAN BHATTI 53

Page 54: Multimedia Technology - Chapter 5_ Video

CREATING AND SHOOTING VIDEO -COMPOSITION

–Consider the delivery medium when composing shots.

–Use close-up and medium shots when possible.

–Move the subject, not the lens.

–Beware of backlighting.

–Adjust the white balance.

BY: DR. ZEESHAN BHATTI 54

Page 55: Multimedia Technology - Chapter 5_ Video

CREATING AND SHOOTING VIDEO - TITLES AND TEXT

• Titles and text (continued)

–Use plain, sans serif fonts that are easy to read.

–Choose colors wisely.

–Provide ample space.

–Leave titles on screen long enough so that they can be read.

–Keep it simple.

BY: DR. ZEESHAN BHATTI 55

Page 56: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 56

Page 57: Multimedia Technology - Chapter 5_ Video

BY: DR. ZEESHAN BHATTI 57

Page 58: Multimedia Technology - Chapter 5_ Video

NONLINEAR EDITING

–High-end software has a steep learning curve.

•Adobe’s Premiere, Apple’s Final Cut, Avid’s Media Composer

–Simple editing software is free with the operating system.

•Microsoft’s Windows Live Movie Maker, Apple’s iMovie.

–Remember video codecs are lossy; avoid re-editing.

BY: DR. ZEESHAN BHATTI 58

Page 59: Multimedia Technology - Chapter 5_ Video

NONLINEAR EDITING (CONTINUED)

BY: DR. ZEESHAN BHATTI 59

Page 60: Multimedia Technology - Chapter 5_ Video

NONLINEAR EDITING (CONTINUED)

BY: DR. ZEESHAN BHATTI 60

Page 61: Multimedia Technology - Chapter 5_ Video

THANK YOU

Q & A

BY: DR. ZEESHAN BHATTI 61

For My Slides and Handouts

http://zeeshanacademy.blogspot.com/

https://www.facebook.com/drzeeshanacademy

Website:

https://sites.google.com/site/drzeeshanacademy/

