A 3D television (3D-TV) is a television set that employs techniques of 3D presentation, such
as stereoscopic capture, multi-view capture, or 2D plus depth, and a 3D display—a special
viewing device to project a television program into a realistic three-dimensional field.
History
It was Sir Charles Wheatstone who in 1833 first came up with the idea of presenting slightly
different images to the two eyes, using a device he called a reflecting mirror stereoscope. He
showed that, when viewed stereoscopically, the two images are combined in the brain to produce
the perception of 3D depth. The invention of the Brewster Stereoscope by the Scottish scientist Sir David
Brewster in 1849 provided a template for all later stereoscopes. This in turn stimulated the mass
production of stereo photography which flourished alongside mono-photography. Stereo
photography peaked around the turn of the century and went out of fashion as movies increased
in popularity.
The stereoscope was improved by Louis Jules Duboscq and a famous picture of Queen
Victoria was displayed at The Great Exhibition in 1851. In 1855 the Kinematoscope, a stereo
animation camera, was invented. The first anaglyph movie (viewed with red-and-blue glasses, a
method invented by Louis Ducos du Hauron) was produced in 1915, and in 1922 the first public
3D movie was shown.
Stereoscopic 3D television was demonstrated for the first time on August 10, 1928, by John Logie
Baird in his company's premises at 133 Long Acre, London.[1] Baird pioneered a variety of 3D
television systems using electro-mechanical and cathode-ray tube techniques. In 1935 the first
3D color movie was produced. By the Second World War, stereoscopic 3D still cameras for
personal use were already fairly common.
In the fifties, when TV became popular in the United States, many 3D movies were produced.
The first such movie was Bwana Devil from United Artists, which was shown across the US in
1952. One year later, in 1953, came the 3D movie House of Wax which also featured
stereophonic sound. Alfred Hitchcock originally made his film Dial M for Murder in 3D, but for the
purpose of maximizing profits the movie was released in 2D because not all cinemas were able to
display 3D films. The Soviet Union also developed 3D films, with Robinzon Kruzo being their first
full-length 3D movie in 1946.[2]
Subsequently, television stations started airing 3D serials in 2009 based on the same technology
as 3D movies.[citation needed] In 2010, video games began to utilize 3D as a new way to play the
games.[3]
Technologies
See also: Stereoscopy, 3D display, 3-D film, and List of emerging technologies
There are several techniques to produce and display 3D moving pictures.
Common 3D display technologies for projecting stereoscopic image pairs to the viewer include:[4]
With lenses:
Anaglyphic 3D (with passive red-cyan lenses)
Polarization 3D (with passive polarized lenses)
Alternate-frame sequencing (with active shutter lenses)
Without lenses: Autostereoscopic displays, sometimes referred to commercially as Auto
3D.
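The anaglyph method above amounts to a simple per-pixel channel mix: the left view supplies the red channel and the right view supplies green and blue (cyan), so the colored lenses route each view to the correct eye. A minimal sketch in Python (function names are illustrative; real implementations also compensate for retinal rivalry and color distortion):

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Combine one pixel from each view into a red-cyan anaglyph pixel:
    red comes from the left image, green and blue (cyan) from the right."""
    return (left_rgb[0], right_rgb[1], right_rgb[2])

def anaglyph(left, right):
    """Apply the channel mix over two same-sized images,
    represented here as lists of rows of (r, g, b) tuples."""
    return [[anaglyph_pixel(l, r) for l, r in zip(row_l, row_r)]
            for row_l, row_r in zip(left, right)]

# A pure-red left pixel and a cyan-ish right pixel merge their channels:
pixel = anaglyph_pixel((255, 0, 0), (0, 128, 64))  # → (255, 128, 64)
```

Viewed through red-cyan glasses, the red channel reaches only the left eye and the cyan channels only the right, reconstructing the stereo pair.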
Single-view displays project only one stereo pair at a time. Multi-view displays either use head
tracking to change the view depending on the viewing angle, or simultaneously project multiple
independent views of a scene for multiple viewers (automultiscopic); such multiple views can be
created on the fly using the 2D plus depth format.
Various other display techniques have been described, such as holography, volumetric
display and the Pulfrich effect, which was used by Doctor Who for Dimensions in Time in 1993,
by 3rd Rock From The Sun in 1997, and by the Discovery Channel's Shark Week in 2000, among
others. Real-time 3D TV is essentially a form of autostereoscopic display.
Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It
involves capturing stereo pairs in a two-view setup, with cameras mounted side by side,
separated by the same distance as between a person's pupils. If we imagine projecting an object
point in a scene along the line-of-sight (for each eye, in turn) to a flat background screen, we may
describe the location of this point mathematically using simple algebra. In rectangular coordinates
with the screen lying in the Y-Z plane (the Z axis upward and the Y axis to the right) and the
viewer centered along the X axis, we find that the screen coordinates are simply the sum of two
terms, one accounting for perspective and the other for binocular shift. Perspective modifies the Z
and Y coordinates of the object point by a factor of D/(D-x), while binocular shift contributes an
additional term (to the Y coordinate only) of s*x/(2*(D-x)), where D is the distance from the
selected system origin to the viewer (right between the eyes), s is the eye separation (about 7
centimeters), and x is the true x coordinate of the object point. The binocular shift is positive for
the left-eye view and negative for the right-eye view. For very distant object points, the two lines
of sight become effectively parallel. For very near objects, the eyes may
become excessively "cross-eyed". However, for scenes in the greater portion of the field of view,
a realistic image is readily achieved by superposition of the left and right images (using the
polarization method or synchronized shutter-lens method) provided the viewer isn't too near the
screen and the left and right images are correctly positioned on the screen. Digital technology has
largely eliminated inaccurate superposition that was a common problem during the era of
traditional stereoscopic films.[5][6]
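The screen-coordinate formulas above translate directly into code. The sketch below implements only the stated math, using the text's symbols D (viewer distance), s (eye separation) and x (true x coordinate); the function name is illustrative:

```python
def screen_coords(x, y, z, D, s=0.07):
    """Project an object point (x, y, z) onto the screen plane for each eye.
    Screen coordinates are the sum of a perspective term, which scales Y and Z
    by D/(D - x), and a binocular-shift term s*x/(2*(D - x)) applied to Y only,
    positive for the left eye and negative for the right.
    Returns ((yL, zL), (yR, zR))."""
    persp = D / (D - x)              # perspective scale factor
    shift = s * x / (2 * (D - x))    # binocular shift (Y coordinate only)
    y_left = y * persp + shift
    y_right = y * persp - shift
    z_screen = z * persp             # Z gets perspective but no shift
    return (y_left, z_screen), (y_right, z_screen)

# A point in the screen plane (x = 0) projects identically for both eyes:
left, right = screen_coords(0.0, 0.2, 0.3, D=2.0)
```

For a point at x = 1 m with D = 2 m, the left and right Y coordinates differ by s*x/(D - x) = 0.07 m, which is exactly the on-screen disparity the viewer's eyes fuse into depth.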
Multi-view capture uses arrays of many cameras to capture a 3D scene through multiple
independent video streams. Plenoptic cameras, which capture the light field of a scene, can also
be used to capture multiple views with a single main lens.[7] Depending on the camera setup, the
resulting views can either be displayed on multi-view displays, or passed for further image
processing.
After capture, stereo or multi-view image data can be processed to extract 2D plus
depth information for each view, effectively creating a device-independent representation of the
original 3D scene. This data can be used to aid inter-view image compression or to generate
stereoscopic pairs for multiple different view angles and screen sizes.
2D plus depth processing can be used to recreate 3D scenes even from a single view and
convert legacy film and video material to a 3D look, though a convincing effect is harder to
achieve and the resulting image will likely look like a cardboard miniature.
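A toy sketch of the 2D-plus-depth idea described above: each pixel is shifted horizontally in proportion to its depth value to synthesize a second view. This is a deliberately simplified depth-image-based rendering, with an illustrative disparity scale; production systems must also fill the disocclusion holes the shift creates:

```python
def synthesize_view(row, depth_row, max_disparity=4, direction=1):
    """Shift each pixel of a scanline horizontally by an amount proportional
    to its depth value (0.0 = far, 1.0 = near). Holes left behind keep the
    sentinel None; a real renderer would inpaint them. Pixels shifted onto
    the same target simply overwrite earlier ones in this sketch."""
    out = [None] * len(row)
    for i, (pix, depth) in enumerate(zip(row, depth_row)):
        j = i + direction * round(depth * max_disparity)
        if 0 <= j < len(out):
            out[j] = pix
    return out

# Shifting the one near pixel (depth 1.0) right by one slot
# leaves a hole at its old position:
line = synthesize_view([10, 20, 30, 40], [0.0, 0.0, 1.0, 0.0], max_disparity=1)
# → [10, 20, None, 40]
```

The unfilled holes are precisely why single-view conversions tend toward the "cardboard" look mentioned above: the synthesized view has no real information behind foreground objects.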
TV sets
These TV sets are high-end and generally include Ethernet connectivity, USB playback and
recording, Bluetooth, and USB Wi-Fi adapters.
3D-ready TV sets
3D-ready TV sets are those that can operate in 3D mode (in addition to regular 2D mode), in
conjunction with a set-top-box and LCD shutter glasses, where the TV tells the glasses which eye
should see the image being exhibited at the moment, creating a stereoscopic image. These TV
sets usually support HDMI 1.4 and a minimum (input and output) refresh rate of 120 Hz; glasses
may be sold separately.
Panasonic already has several sets on the market (such as the Panasonic Viera TC-P50VT200)
that are 3D capable and ship with glasses; that model retails for approximately US$2,500.
The Samsung UN46C7000 46-inch 3D TV can be purchased for US$2,000 or less. There are
numerous, relatively inexpensive models available from a number of manufacturers already in the
summer of 2010.
Mitsubishi and Samsung utilize DLP technology from Texas Instruments.[8] As of January 2010,
Samsung, LG,[9] Toshiba, Sony, and Panasonic all had plans to introduce 3D capabilities (mostly
in higher-end models) in TVs available sometime in 2010.[10] 3D Blu-ray players went on sale in
2010, and Sky began 3D broadcasts in the UK on 3 April 2010. DirecTV broadcasts began with
the 2010 FIFA World Cup in June 2010.[10] Samsung began selling the UN55C7000, its first 3D
ready TV, late in February 2010.[11]
Philips was developing 3D television sets that would have been available for the consumer market
by about 2011 without the need for special glasses (autostereoscopy).[12] However, the project
was canceled due to slow consumer adoption of 3D over 2D.
In August 2010, Toshiba announced plans to bring a range of autostereoscopic TVs to market by
the end of the year.[13]
The Chinese manufacturer TCL Corporation has developed a 42-inch (110 cm) LCD 3D TV called
the TD-42F, which is currently available in China. This model uses a lenticular system and does
not require any special glasses (autostereoscopy). It currently sells for approximately $20,000.
[14] The biggest problem with lenticular lenses, as also used by Philips' Dimenco, is the
sharpness of the display. Although the panel uses 4K resolution (four times Full HD), the image
appears coarse because the lenticular lens must refract separate left and right images toward
each eye, so the technology is better suited to non-stationary viewing. Borders around objects on
the screen tend to shift quickly and blur.[15]
LG, Samsung, Sony and Philips intend to increase their 3D TV offerings, with plans to make 3D
TV sales account for over 50% of their respective TV sales by 2012. The screens are expected to
use a mixture of technologies until there is standardisation across the industry.[16]
Samsung offers the LED 7000, LCD 750, PDP 7000 TV sets and the Blu-ray 6900.[17]
On June 9, 2010, Panasonic unveiled a 152-inch (390 cm) 3D-capable TV (the largest so far)
that will go on sale in 2010. The TV, which is about the size of nine 50-inch TVs, will cost
more than 50 million yen (US$576,000).[18]
Full 3D TV sets
Full 3D TV sets include the Panasonic Full HD 3D range (1920×1080p, that is, about 2
megapixels, at 600 Hz). These TVs are expensive.
Toshiba has shown 20-inch and 12-inch autostereoscopic (that is, glasses-free) LCD 3D TV sets
for commercial launch, with a 1,280×720 resolution. By systematically aligning pixels and
adopting a perpendicular lenticular sheet, Toshiba's LCD panel eliminates the blurring, or vertical
wave pattern (caused by interference in the display cycle), that plagues other autostereoscopic
3D technologies. The viewing angle is about 40°, double that of previous approaches. The
12-inch model will sell for roughly $1,400.
Standardization efforts
The entertainment industry is expected to adopt a common, compatible standard for 3D in home
electronics. Unresolved issues include presenting faster frame rates in high definition to avoid
judder (benefiting 3D film, televisions and broadcasting), the choice between passive and active
3D glasses, bandwidth considerations, subtitles, recording formats, and a Blu-ray standard.
With improvements in digital technology in the late 2000s, 3D movies became more practical to
produce and display, putting competitive pressure behind the creation of 3D television standards.
There are several techniques for stereoscopic video coding, and stereoscopic
distribution formatting including anaglyph, quincunx, and 2D plus Delta.
Content providers, such as Disney, DreamWorks, and other Hollywood studios, and technology
developers, such as Philips, asked[when?] SMPTE to develop a 3DTV standard in order to avoid
a format war and to guarantee consumers that they would be able to view the 3D content they
purchase, with 3D home solutions for all budgets. In August
2008, SMPTE established the "3-D Home Display Formats Task Force" to define the parameters
of a stereoscopic 3D mastering standard for content viewed on any fixed device in the home, no
matter the delivery channel. It explored the standards that need to be set for 3D content
distributed via broadcast, cable, satellite, packaged media, and the Internet to be played out on
televisions, computer screens and other tethered displays. After six months, the committee
produced a report to define the issues and challenges, minimum standards, and evaluation
criteria, which the Society said would serve as a working document for SMPTE 3D standards
efforts to follow. A follow-on effort to draft a standard for 3D content formats was expected to take
another 18 to 30 months.[citation needed]
Production studios are developing an increasing number of 3D titles for the cinema and as many
as a dozen companies are actively working on the core technology behind the product. Many
have technologies available to demonstrate, but no clear road forward for a mainstream offering
has emerged.
Under these circumstances, SMPTE's inaugural meeting was essentially a call for proposals for
3D television; more than 160 people from 80 companies signed up for this first meeting. Vendors
that presented their respective technologies at the task force meeting included Sensio,[19]
Philips, Dynamic Digital Depth (DDD), TDVision, and RealD, all of which had 3D
distribution technologies.
However, SMPTE is not the only 3D standards group. Other organizations such as the Consumer
Electronics Association (CEA), 3D@home Consortium, ITU and the Entertainment Technology
Center at USC's School of Cinematic Arts (ETC), have created their own investigation groups and
have already offered to collaborate to reach a common solution. The Digital TV Group (DTG) has
committed to profiling a UK standard for 3DTV products and services. Other standard groups
such as DVB, BDA, ARIB, ATSC, DVD Forum, IEC and others are to be involved in the process.
[citation needed]
MPEG has been researching multi-view, stereoscopic, and 2D plus depth 3D video coding since
the mid-1990s;[20] the first result of this research is the Multiview Video Coding extension
for MPEG-4 AVC, which is currently undergoing standardization. MVC has been chosen by the
Blu-ray Disc Association for 3D distribution. The format offers backwards compatibility with 2D Blu-ray
players.[21]
HDMI version 1.4, released in June 2009, defines a number of 3D transmission formats. The
format "Frame Packing" (left and right image packed into one video frame with twice the normal
bandwidth) is mandatory for HDMI 1.4 3D devices. All three resolutions (720p50, 720p60, and
1080p24) have to be supported by display devices, and at least one of those by playback
devices. Other resolutions and formats are optional.[22] While HDMI 1.4 devices will be capable of
transmitting 3D pictures in full 1080p, HDMI 1.3 does not include such support. As an out-of-spec
solution for the bitrate problem, a 3D image may be displayed at a lower resolution, such as
interlaced or standard definition.
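The bandwidth doubling of Frame Packing can be illustrated numerically. The two views are stacked vertically in one oversized frame, separated by a blank "active space"; the gap sizes used below (45 lines for 1080p, 30 for 720p) are the commonly cited HDMI 1.4 values, so treat them as an assumption rather than a normative statement:

```python
# Active-space sizes (in lines) between the stacked left and right views,
# per the commonly cited HDMI 1.4 frame-packing figures (assumption).
ACTIVE_SPACE = {1080: 45, 720: 30}

def frame_packed_height(lines):
    """Total active height of a frame-packed 3D frame for a given 2D format:
    two full views stacked vertically plus the blank active space between them."""
    return 2 * lines + ACTIVE_SPACE[lines]

# 1080p frame packing carries a 1920 x 2205 active frame per eye pair,
# which is why the link needs roughly twice the normal video bandwidth.
packed = frame_packed_height(1080)
```

Because each packed frame carries a bit more than two full frames' worth of lines at the original frame rate, an HDMI 1.3 link lacks the headroom, matching the paragraph's note that full-resolution 3D needs HDMI 1.4.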
DVB 3D-TV standard
See also: High-definition television
DVB has established the DVB 3D-TV Specification. The following 3D-TV consumer configurations
will be available to the public:[23]
3D-TV connected to 3D Blu-ray Player for packaged media.
3D-TV connected to HD Games Console, e.g. PS3 for 3D gaming.
3D-TV connected to HD STB for broadcast 3D-TV.
3D-TV receiving a 3D-TV broadcast directly via a built-in tuner and decoder.
For the two broadcast scenarios above, initial requirements are for Pay-TV broadcasters to
deliver 3D-TV services over existing HD broadcasting infrastructures, and to use existing
receivers (with a firmware upgrade, as required) to deliver 3D content to 3D-TV sets, via an HDMI or
equivalent connection, if needed. This is termed Frame Compatible. There are a range of Frame
Compatible formats. They include the Side by Side (SbS) format, the Top and Bottom (TaB)
format, and others.
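The Frame Compatible idea can be sketched directly: each view is subsampled to half resolution so the pair fits in one conventional HD frame, which the existing broadcast chain carries unchanged. A toy illustration on images represented as nested lists (real encoders filter before decimating; names are illustrative):

```python
def side_by_side(left, right):
    """Pack two views into one frame at half horizontal resolution:
    keep every other column of each view, left half then right half."""
    return [row_l[::2] + row_r[::2] for row_l, row_r in zip(left, right)]

def top_and_bottom(left, right):
    """Pack two views at half vertical resolution:
    keep every other row of each view, left view on top."""
    return left[::2] + right[::2]

# Both packings preserve the original frame dimensions, so an ordinary HD
# receiver can pass the signal through; the 3D set unpacks and rescales.
frame = side_by_side([[1, 2, 3, 4]], [[5, 6, 7, 8]])  # → [[1, 3, 5, 7]]
```

The trade-off is visible in the sketch: each eye sees only half the spatial resolution, which is the price Frame Compatible formats pay for reusing existing HD infrastructure.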
Broadcasts
3D Channels
As of 2008, 3D programming is broadcast on Japanese cable channel BS 11 approximately four
times per day.[24]
Cablevision launched a 3D version of its MSG channel on March 24, 2010, available only to
Cablevision subscribers on channel 1300.[25][26]The channel is dedicated primarily to sports
broadcasts, including MSG's 3D broadcast of a New York Rangers-New York Islanders game,
limited coverage of the 2010 Masters Tournament, and (in cooperation with YES Network) a
game between the New York Yankees and Seattle Mariners.[27]
The first Australian program broadcast in high-definition 3D was Fox Sports coverage of the
soccer game Australia-New Zealand on 24 May 2010.[28]
Also in Australia, the Nine Network and Special Broadcasting Service brought the State of Origin
matches (26 May, 16 June and 7 July 2010, on Nine) and the FIFA World Cup (on SBS) in 3D on
Channel 40.[29]
Earlier in 2010, Discovery Communications, Imax and Sony announced plans to launch a 3D TV
channel in the US in early 2011.[30] At the same time, the Russian company Platform HD and its
partners, General Satellite and Samsung Electronics, announced their own 3D television project,
the first of its kind in Russia.
In Brazil, Rede TV! became the first terrestrial broadcaster to transmit a free 3D signal to all
3D-enabled viewers, on 21 May 2010, though its 3D programming remained of poor quality.
[31][32][33][34]
Starting on June 11, 2010 ESPN launched a new channel, ESPN 3D, dedicated to 3D sports with
up to 85 live events a year in 3D.[35]
On 1 January 2010, the world's first 3D channel, SKY 3D, started broadcasting nationwide
in South Korea by Korea Digital Satellite Broadcasting. The channel's slogan is "World No.1 3D
Channel". This 24/7 channel uses the side-by-side format at a resolution of 1920×1080i. Its 3D
content includes education, animation, sport, documentaries and performances.[36]
A full 24 hour broadcast channel was announced at the 2010 Consumer Electronics show as a
joint venture from IMAX, Sony, and the Discovery channel.[37] The intent is to launch the channel
in the United States by year end 2010.
DirecTV and Panasonic plan to launch 2 broadcast channels and 1 Video on demand channel
with 3D content[38] in June 2010. DirecTV previewed a live demo of their 3D feed at the Consumer
Electronics Show held January 7–10, 2010.[39]
In Europe, British Sky Broadcasting (Sky) launched a limited 3D TV broadcast service on April 3,
2010. Transmitting from the Astra 2A satellite at 28.2° east, Sky 3D broadcast a selection of live
UK Premier League football matches to over 1000 British pubs and clubs equipped with a
Sky+HD Digibox and 3D Ready TVs, and preview programmes provided for free to top-tier Sky
HD subscribers with 3D TV equipment. This was later expanded to include a selection of films,
sports, and entertainment programming launched to Sky subscribers on 1 October 2010.[40]
On September 28, 2010, Virgin Media launched a 3D TV on Demand service.[41]
Several other European pay-TV networks are also planning 3D TV channels[42] and some have
started test transmissions on other Astra satellites, including French pay-TV
operator Canal+, which announced its first 3D channel would launch in December 2010. The
Spanish Canal+ also started its first broadcasts on May 18, 2010, and included 2010 FIFA World
Cup matches in the new Canal+ 3D channel.[43] Satellite operator SES
Astra started a free-to-air 3D demonstration channel on the Astra satellite at 23.5° east on May 4,
2010 for the opening of the 2010 ANGA Cable international trade fair[44] using 3D programming
supplied by 3D Ready TV manufacturer Samsung under an agreement between Astra and
Samsung to co-promote 3D TV.[45]
3D episodes and shows
There have been several notable examples in television where 3D episodes have been produced,
typically as one hour specials or special events.
The first-ever 3D broadcast in the UK was an episode of the weekly science magazine The Real
World, made by Television South and screened only in the south-east region of the UK in
February 1982. The programme included excerpts of test footage shot by Philips in the
Netherlands. Red/green 3D glasses were given away free with copies of the TV Times listings
magazine, but the 3D sections of the programme were shown in monochrome. The experiment
was repeated nationally in December 1982, with red/blue glasses allowing colour 3D to be shown
for the first time. The programme was repeated the following weekend followed by a rare
screening of the Western Fort Ti starring George Montgomery and Joan Vohs.
The sitcom 3rd Rock From The Sun aired a two-part episode, "Nightmare On Dick Street", in which several
of the characters' dreams are shown in 3D. The episode cued its viewers to put on their 3D
glasses by including "3D on" and "3D off" icons in the corner of the screen as a way to alert them
as to when the 3D sequences would start and finish. The episode used the Pulfrich 3D technique.
Recent uses of 3D in television include the drama Medium and the comedy Chuck. The
show Arrested Development briefly used 3D in an episode.
Channel 4 in the UK ran a short season of 3D programming in November 2009 including Derren
Brown and The Queen in 3D.[46]
On 31 January 2010, BSkyB became the first broadcaster in the world to show a live sports
event in 3D when Sky Sports screened a football match between Manchester
United and Arsenal to a public audience in several selected pubs.[47]
The 2010 52nd Grammy Awards featured a Michael Jackson Tribute Sequence in 3D, using
anaglyph format.
In April 2010, the Masters Tournament was broadcast in live 3D on DirecTV, Comcast, and Cox.
On 29 May 2010, Sky broadcast the Guinness Premiership Final in 3D in selected pubs and clubs.
[48]
Fox Sports broadcast the first 3D program in Australia when the Socceroos played the New
Zealand All Whites at the MCG on May 24, 2010.
The Nine Network broadcast the first free-to-air 3D telecast when the Queensland
Maroons faced the New South Wales Blues at ANZ Stadium on May 26, 2010.
The Roland Garros tennis tournament in Paris, from May 23 to June 6, 2010, was filmed in 3D
(center court only) and broadcast live via ADSL and fiber to Orange subscribers throughout
France in a dedicated Orange TV channel.[49]
25 matches in the FIFA World Cup 2010 were broadcast in 3D.
The Inauguration of Philippine President Noynoy Aquino on June 30, 2010 was the first
presidential inauguration to be telecast live in 3D, by GMA Network. However, the telecast was only
available in select places.
The 2010 Coke Zero 400 will be broadcast in 3D on July 3 on NASCAR.com and DirecTV along
with Comcast, Time Warner, and Bright House cable systems.
The 2010 AFL Grand Final will be broadcast in 3D from the Seven Network.
Avi Arad is currently developing a 3D Pac-Man TV show.
Satellite provider Bell TV in Canada began to offer a full-time pay-TV 3D channel to its
subscribers on 27 July 2010. In September 2010, the Canadian Broadcasting Corporation's first
3D broadcast will be a special about the Canadian monarch, Queen Elizabeth II, and will include
3-D film footage of the Queen's 1953 coronation as well as 3D video of her 2010 tour of Canada.
This will mark the first time the historical 3D images have been seen anywhere on television as
well as the first broadcast of a Canadian produced 3D programme in Canada.[50]
The 2010 PGA Championship was broadcast in 3D for four hours on August 13, 2010, from 3–
7 pm EDT. The broadcast was available on DirecTV, Comcast, Time Warner Cable, Bright House
Networks, Cox Communications, and Cablevision.[51]
FiOS and the NFL partnered to broadcast the September 2, 2010, pre-season game between the
New England Patriots and the New York Giants in 3D. The game was only broadcast in 3D in
the northeast.[52]
Singapore-based Tiny Island Productions is currently producing Dream Defenders, which will be
available in both autostereoscopic and stereoscopic 3D formats.[53]
Rachael Ray aired a 3D Halloween Bash episode on October 29, 2010.
Health effects
Some viewers have complained of headaches and visual problems after watching 3D TV and
films, and several health warnings have been issued, especially concerning children.
Rear projection television
Rear projection television (RPTV) is a type of large-screen television display technology. Up
until the mid-2000s, most of the relatively affordable consumer large screen TVs (up to
100 in (2,500 mm)) used rear projection technology. A variation is a video projector, using similar
technology, which projects onto a screen.
Rear projection television has been commercially available since the 1970s, but at that time could
not match the image sharpness of the CRT. Current models are vastly improved, and offer a cost-
effective HDTV large-screen display. While still thicker than LCD and plasma flat panels, modern
rear projection TVs have a smaller footprint than their predecessors. The latest models are light
enough to be wall-mounted.[1]
Three types of projection systems are used in projection TVs. CRT rear projection TVs were the
earliest, and while they were the first to exceed 40", they were also bulky and the picture was
unclear at close range. Newer technologies include DLP (reflective micromirror chips), LCD
projectors, and LCoS, all capable of 1080p resolution; examples include Sony's SXRD (Silicon
X-tal Reflective Display), JVC's D-ILA (Digital Direct Drive Image Light Amplifier),
and MicroDisplay Corporation's Liquid Fidelity.
While popular in the early 2000s as an alternative to more expensive LCD and plasma flat panels,
the falling price and improvements to LCDs have led
to Sony, Philips, Toshiba and Hitachi planning to drop rear projection TVs from their lineup.[2]
[3] Currently, Samsung, Mitsubishi, ProScan, RCA, Panasonic and JVC remain in the market.
The bulk of earlier rear-projection TVs meant that they could not be wall-mounted, and while
most consumers of flat panels do not hang up their sets, the ability to do so is considered a key
selling point.[4] On June 6, 2007, Sony unveiled a 70" rear-projection SXRD model, the
KDS-Z70XBR5, which was 40% slimmer than its predecessor, weighed 200 lbs, and was
somewhat wall-mountable; however, on December 27, 2007, Sony decided to exit the RPTV
market.[5][6][7] Mitsubishi began offering their LaserVue line of wall-mountable rear projection
TVs in 2009.[8]
Types of rear projection technologies
A projection television uses a projector to create a small image from a video signal and magnify
this image onto a viewable screen. The projector uses a bright beam of light and a lens system to
project the image to a much larger size. A front-projection television uses a projector that is
separate from the screen, and the projector is placed in front of the screen. The setup of a rear-
projection television is in some ways similar to that of a traditional television. The projector is
contained inside the television box and projects the image from behind the screen.
The following are different types of projection televisions, which differ based on the type of
projector and how the image (before projection) is created:
CRT projector: Small cathode ray tubes create the image in the same manner as a
traditional CRT television, by firing a beam of electrons onto a phosphor-coated screen; the
image is then projected to a large screen. This overcomes the size limit of the cathode ray
tube, about 40 inches, the maximum for a normal CRT television set. Normally three CRTs
are used, one red, one green and one blue, aligned so the colours mix correctly in the
projected image.
LCD projector: A lamp transmits light through a small LCD chip made up of individual
pixels to create an image. The LCD projector uses mirrors to take the light and create three
separate red, green, and blue beams, which are then passed through three separate LCD
panels. The liquid crystals are manipulated using electric current to control the amount of light
passing through. The lens system takes the three color beams and projects the image.
Digital Light Processing (DLP) projector: A DLP projector creates an image using a digital
micromirror device (DMD chip), which on its surface contains a large matrix of microscopic
mirrors, each corresponding to one pixel in an image. Each mirror can be rotated to reflect
light such that the pixel appears bright, or the mirror can be rotated to direct light elsewhere
and make the pixel appear dark. The mirror is made of aluminum and is rotated on an axle
hinge. There are electrodes on both sides of the hinge controlling the rotation of the mirror
using electrostatic attraction. The electrodes are connected to an SRAM cell located under
each pixel, and charges from the SRAM cell drive the movement of the mirrors. Color is
added to the image-creation process either through a spinning color wheel (used with a
single-chip projector) or a three-chip (red, green, blue) projector. The color wheel is placed
between the lamp light source and the DMD chip such that the light passing through is
colored and then reflected off a mirror to determine the level of darkness. A color wheel
consists of a red, green, and blue sector, as well as a fourth sector to either control
brightness or include a fourth color. This spinning color wheel in the single-chip arrangement
can be replaced by red, green, and blue light-emitting diodes (LED). The three-chip projector
uses a prism to split up the light into three beams (red, green, blue), each directed towards its
own DMD chip. The outputs of the three DMD chips are recombined and then projected.
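One detail implied by the description above is how a binary mirror produces intermediate gray levels: the mirror is switched on and off many times per frame, and the fraction of time it points toward the lens sets the perceived brightness (pulse-width modulation). A minimal sketch under that assumption; the slot count is illustrative, not a DLP specification:

```python
def mirror_duty_cycle(gray_level, frame_slots=256):
    """A DMD mirror is binary (light toward the lens, or dumped elsewhere),
    so intermediate brightness comes from rapid on/off switching: the
    fraction of time slots spent 'on' during a frame sets the perceived
    gray level. gray_level is an 8-bit value (0-255)."""
    on_slots = round(gray_level / 255 * frame_slots)
    return on_slots / frame_slots

# Mid-gray: the mirror directs light toward the screen about half the time.
duty = mirror_duty_cycle(128)
```

In a single-chip projector this time-slicing is further interleaved with the color wheel's red, green and blue sectors, so each color's duty cycle is modulated within its own slice of the frame.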
Camcorder
A camcorder (video camera recorder) is an electronic device that combines a video camera and
a video recorder into one unit.[1][2][3] Equipment manufacturers do not seem to have strict
guidelines for usage of the term: marketing materials may present a video recording device as a
camcorder, while the delivery package identifies the content as a video camera recorder.
In order to differentiate a camcorder from other devices that are capable of recording video, like cell
phones and compact digital cameras, a camcorder is generally identified as a portable, self-contained
device having video capture and recording as its primary function.[4][5]
The earliest camcorders employed analog recording onto videotape. Digital recording has since
become the norm, but tape long remained the primary recording medium, only gradually being
replaced by other storage media including optical discs, hard disk drives and flash memory.
All tape-based camcorders use removable media in the form of video cassettes. Camcorders that do not
use magnetic tape are often called tapeless camcorders and may use optical discs (removable), solid-
state flash memory (removable or built-in) or a hard disk drive (removable or built-in).
Camcorders that permit using more than one type of medium, like built-in hard disk drive and memory
card, are often called hybrid camcorders.
History
An arrangement of a separate portable recorder (such as a Betamax unit) and a video camera is
still considered a camcorder by some sources.[6]
Video cameras originally designed for television broadcast were large and heavy, mounted on special
pedestals, and wired to remote recorders located in separate rooms.
As technology advanced, out-of-studio video recording was made possible by means of compact video
cameras and portable video recorders. The recording unit could be detached from the camera and
carried to a shooting location. While the camera itself could be quite compact, the fact that a separate
recorder had to be carried along made on-location shooting a two-man job.[7] Specialized video
cassette recorders were introduced by both JVC (VHS) and Sony (U-matic and Betamax) to be used
for mobile work. The advent of the portable recorders helped to eliminate the phrase "film at eleven"—
rather than wait for the lengthy process of film developing, recorded video could be shown during the 6
o'clock news.
In 1982 Sony released the Betacam system. A part of this system was a single camera-recorder unit,
which eliminated the cable between camera and recorder and dramatically improved the freedom of a
cameraman. Betacam quickly became the standard for both news-gathering and in-studio video
editing.
In 1983 Sony released the first consumer camcorder—the Betamovie BMC-100P. It used
a Betamax cassette and could not be held in one hand, so it typically rested on the shoulder. In
the same year JVC released the first camcorder based on VHS-C format.[8] In 1985 Sony came up with
its own compact video cassette format—Video8. Both formats had their benefits and drawbacks, and
neither won the format war.
In 1985, Panasonic, RCA, and Hitachi began producing camcorders that recorded to full-sized VHS
cassette and offered up to 3 hours of record time. These shoulder mount camcorders found a niche
with videophiles, industrial videographers, and college TV studios. Super VHS full-sized camcorders
were released in 1987 which exceeded broadcast quality and provided an inexpensive way to collect
news segments or videographies.
In 1986 Sony introduced the first digital video format, D1. Video was recorded in uncompressed form
and required enormous bandwidth for its time. In 1992 Ampex used the D1 form factor to create DCT,
the first digital video format to utilize data compression. The compression was based on the discrete
cosine transform algorithm, which is used in most modern commercial digital video formats.
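The discrete cosine transform mentioned above can be illustrated with a short sketch. This is a naive, unoptimized 2-D DCT-II (real codecs use fast factorizations); the 8x8 block size and the flat test block are illustrative assumptions, not specifics of the DCT tape format.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an n x n block (the transform behind most video codecs)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            # Orthonormal scaling factors
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out

# A flat 8x8 block of pixel value 128: after the transform, all the energy
# sits in the single DC coefficient and the 63 AC coefficients are ~0,
# which is exactly what makes the DCT attractive for compression.
coeffs = dct2([[128.0] * 8 for _ in range(8)])
print(round(coeffs[0][0]))  # DC term: 1024
```

Quantizing the many near-zero AC coefficients away, rather than the pixels themselves, is where the compression gain comes from.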
In 1995 Sony, JVC, Panasonic and other video camera manufacturers launched DV. Its variant using a
smaller MiniDV cassette quickly became a de facto standard for home and semi-professional video
production, for independent filmmaking and for citizen journalism.
In 2000 Panasonic launched DVCPRO HD, expanding the DV codec to support high definition. The format
was intended for use in professional camcorders and used full-size DVCPRO cassettes. In 2003 Sony,
JVC, Canon and Sharp introduced HDV, the first truly affordable high definition video format, which
used inexpensive MiniDV cassettes.
In 2003 Sony pioneered XDCAM, the first tapeless video format, which uses Professional Disc as its
recording medium. Panasonic followed the next year, offering P2 solid-state memory cards as a recording
medium for DVCPRO HD video.
In 2006 Panasonic and Sony introduced AVCHD as an inexpensive consumer-grade tapeless high
definition video format. Presently AVCHD camcorders are manufactured by Sony, Panasonic, Canon,
JVC and Hitachi.
In 2007 Sony introduced XDCAM EX, which offers similar recording modes to XDCAM HD, but records
on SxS memory cards.
With the proliferation of file-based digital formats, the relationship between recording media and recording
format became weaker than ever: the same video can be recorded onto different media. With tapeless
formats, the recording medium has become a storage device for digital files, signifying the convergence
of the video and computer industries.
JVC KY D29 Digital-S pro camcorder.
[edit]Overview
Camcorders contain three major components: lens, imager, and recorder. The lens gathers and focuses
light on the imager. The imager (usually a CCD or CMOS sensor on modern camcorders; earlier
examples often used vidicon tubes) converts incident light into an electrical signal. Finally, the recorder
converts the electric signal into video and encodes it into a storable form. Collectively, the optics
and imager are referred to as the camera section.
[edit]Lens
The lens is the first component in the light path. The camcorder's optics generally have one or more of
the following adjustments:
aperture or iris to regulate the exposure and to control depth of field;
zoom to control the focal length and angle of view;
shutter speed to regulate the exposure and to maintain desired motion portrayal;
gain to amplify signal strength in low-light conditions;
neutral density filter to regulate the exposure.
In consumer units, the above adjustments are often automatically controlled by the camcorder's
electronics, but can be adjusted manually if desired. Professional units offer direct user control of all
major optical functions.
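As a rough illustration of how these controls trade off against each other, the standard exposure-value (EV) formula ties aperture and shutter speed together; each full stop doubles or halves the light reaching the imager. The f-numbers and shutter speeds below are arbitrary example values, not taken from any particular camcorder.

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV = log2(N^2 / t): higher EV means less light reaches the imager."""
    return math.log2(f_number ** 2 / shutter_seconds)

# Opening the iris by one stop (f/4 -> f/2.8) admits twice the light,
# compensating almost exactly for a shutter twice as fast.
ev_a = exposure_value(4.0, 1 / 60)
ev_b = exposure_value(2.8, 1 / 120)
```

Automatic exposure in a consumer camcorder is essentially a control loop that juggles these parameters (plus gain and any ND filter) to hold a target EV.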
[edit]Imager
The imager converts light into an electric signal. The camera lens projects an image onto the imager
surface, exposing the photosensitive array to light. The light exposure is converted into electrical
charge. At the end of the timed exposure, the imager converts the accumulated charge into a
continuous analog voltage at the imager's output terminals. After scan-out is complete, the photosites
are reset to start the exposure-process for the next video frame.
[edit]Recorder
The third section, the recorder, is responsible for writing the video signal onto a recording medium
(such as magnetic videotape). The record function involves many signal-processing steps, and
historically the recording process introduced some distortion and noise into the stored video, such that
playback of the stored signal may not retain the same characteristics and detail as the live video feed.
All but the most primitive camcorders also have a recorder-controlling section, which allows the user
to control the camcorder and switch the recorder into playback mode for reviewing the recorded
footage, and an image-control section, which controls exposure, focus and white balance.
The recorded image need not be limited to what appeared in the viewfinder. For documentation of
events, such as by police, the camcorder can overlay information such as the time and date of the
recording along the top and bottom of the image. The police car or constable to which the recorder
has been allotted may also be identified, as may the speed of the car, the compass direction and the
geographical coordinates at the time of recording. Date overlays do not follow a single world
standard: "month/day/year" may be seen, as well as "day/month/year", besides the ISO standard
"year-month-day".
[edit]Consumer camcorders
[edit]Analog vs. digital
Camcorders are often classified by their storage device: VHS, VHS-C, Betamax, Video8 are examples
of 20th century videotape-based camcorders which record video in analog form. Newer digital
video camcorder formats include Digital8, MiniDV, DVD, Hard Disk and solid-state (flash)
semiconductor memory, which all record video in digital form. In older digital camcorders the imager
chip (the CCD) was considered an analog component, so the "digital" name refers to the camcorder's
processing and recording of the video. Many next-generation camcorders use a CMOS imager, which
registers photons as binary data as they hit the sensor, tightly integrating the imager and recorder sections.
The adoption of digital video storage improved quality. MiniDV storage allows full-resolution video
(720x576 for PAL, 720x480 for NTSC), unlike previous analog consumer video standards. Digital
video does not suffer colour bleeding, jitter, or fade, although some users still prefer the analog
nature of Hi8 and Super VHS-C, since neither produces the "background blur" or "mosquito
noise" of digital compression. In many cases, a high-quality analog recording shows more detail (such
as rough textures on a wall) than a compressed digital recording (which would show the same wall as
flat and featureless), although the lower resolution of analog camcorders may negate any such
benefit.
The highest-quality digital formats, such as Digital Betacam and DVCPRO HD, have the advantage
over analog of suffering little generation loss in recording, dubbing, and editing (MPEG-2 and MPEG-
4 do suffer from generation loss in the editing process only). Whereas noise and bandwidth problems
relating to cables, amplifiers, and mixers can greatly affect analog recordings, such problems are
minimal in digital formats using digital connections (generally IEEE 1394, SDI/SDTI, or HDMI).
Although both analog and digital can suffer from archival problems, digital is more prone to complete
loss. Theoretically digital information can be stored indefinitely with zero deterioration on a digital
storage device (such as a hard drive); however, since some digital formats (like MiniDV) often squeeze
tracks only ~10 micrometers apart (versus ~500 μm for VHS), a digital recording is more vulnerable to
wrinkles or stretches in the tape that could permanently erase several scenes' worth of digital data,
though the additional tracking and error-correction code on the tape will generally compensate for most
defects. On analog media similar damage barely registers as "noise" in the video, still leaving a
deteriorated but watchable video. The only limitation is that this video has to be played on a completely
analogue viewing system, otherwise the tape will not display any video due to the damage and sync
problems. Even digital recordings on DVD are known to suffer from DVD rot, which permanently erases
huge chunks of data. Thus the one advantage analog seems to have in this respect is that an analog
recording may be "usable" even after the medium it is stored on has suffered severe deterioration,
whereas it has been noticed[9] that even slight media degradation in digital recordings may cause an
"all-or-nothing" failure, i.e. the digital recording will end up totally unplayable without very expensive
restoration work.
[edit]Modern recording media
For more information, see tapeless camcorder.
Most recent camcorders record video on flash memory devices, Microdrives, small hard disks, and
size-reduced DVD-RAM or DVD-Rs using MPEG-1, MPEG-2 or MPEG-4 formats. However, because
these codecs use inter-frame compression, frame-specific-editing requires frame regeneration, which
incurs additional processing and can cause loss of picture information. (In professional usage, it is
common to use a codec that will store every frame individually. This provides easier and faster frame-
specific editing of scenes.)
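The editing penalty of inter-frame compression can be sketched with a toy group-of-pictures (GOP) model. The 15-frame GOP length and I/P-only structure are simplifying assumptions for illustration, not a property of any particular codec.

```python
GOP_LENGTH = 15  # assumed: one self-contained I-frame every 15 frames

def frames_to_regenerate(cut_frame):
    """Extra frames that must be decoded to make a frame-accurate cut.

    Only I-frames are self-contained; a cut landing mid-GOP forces the
    editor to decode forward from the preceding I-frame, regenerating
    every intervening predicted frame.
    """
    return cut_frame % GOP_LENGTH

print(frames_to_regenerate(30))  # cut on an I-frame boundary: 0 extra frames
print(frames_to_regenerate(37))  # cut mid-GOP: 7 frames must be regenerated
```

This is why intra-only codecs, where every frame is an "I-frame", make frame-specific editing trivial at the cost of larger files.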
Other digital consumer camcorders record in DV or HDV format on tape and transfer content
over FireWire (some also use USB 2.0) to a computer, where the huge files (for DV, 1GB for 4 to 4.6
minutes in PAL/NTSC resolutions) can be edited, converted, and (with many camcorders) also
recorded back to tape. The transfer is done in real time, so the complete transfer of a 60 minute tape
needs one hour to transfer and about 13GB disk space for the raw footage only—excluding any space
needed for render files and other media. Time spent in post-production (editing) to select and cut the
best shots varies from instantaneous "magic" movies to hours of tedious selection, arrangement and
rendering.
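The figures above can be checked with simple arithmetic. Assuming the commonly cited total DV stream rate of about 3.6 MB/s (video plus audio and subcode), a 60-minute tape works out to roughly 13 GB:

```python
DV_BYTES_PER_SECOND = 3_600_000  # assumption: ~3.6 MB/s total DV stream rate

def dv_storage_gb(minutes):
    """Raw footage size in decimal gigabytes for a given tape length."""
    return DV_BYTES_PER_SECOND * minutes * 60 / 1e9

print(round(dv_storage_gb(60), 1))  # 60-minute tape -> ~13.0 GB
print(round(dv_storage_gb(4.6), 2)) # ~4.6 minutes -> ~1 GB, matching the text
```

The same arithmetic explains the real-time transfer: FireWire delivers the stream at playback speed, so an hour of tape takes an hour to capture.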
[edit]Consumer market
As the mass consumer market favors ease of use, portability, and price, most of the consumer-grade
camcorders sold today emphasize handling and automation features over raw audio/video
performance. Thus, the majority of devices capable of functioning as camcorders are camera
phones or compact digital cameras, for which video is only a feature or a secondary capability.
Even for separate devices intended primarily for motion video, this segment has followed an
evolutionary path driven by relentless miniaturization and cost reduction, made possible by progress in
design and manufacturing. Miniaturization conflicts with the imager's ability to gather light, and
designers have delicately balanced improvements in sensor sensitivity with sensor size reduction,
shrinking the overall camera imager & optics, while maintaining reasonably noise-free video in broad
daylight. Indoor or dim light shooting is generally unacceptably noisy, and in such conditions, artificial
lighting is highly recommended. Mechanical controls cannot scale below a certain size, and manual
camera operation has given way to camera-controlled automation for every shooting parameter (focus,
aperture, shutter speed, white balance, etc.). The few models that do retain manual override frequently
require the user to navigate a cumbersome menu interface. Outputs include USB 2.0, Composite and
S-Video, and IEEE 1394/Firewire (for MiniDV models). On the plus side, today's camcorders are
affordable to a wider segment of the consumer market, and available in a wider variety of form factors
and functionality, from the classic camcorder shape, to small flip-cameras, to video-capable camera-
phones and "digicams."
At the high-end of the consumer market, there is a greater emphasis on user control and advanced
shooting modes. Feature-wise, there is some overlap between the high-end consumer and "prosumer"
markets. More expensive consumer camcorders generally offer manual exposure control, HDMI output
and external audio input, progressive-scan framerates (24fps, 25fps, 30fps), and better lenses than
basic models. In order to maximize low-light capability, color reproduction, and frame resolution, a few
manufacturers offer multi-CCD/CMOS camcorders, which mimic the 3-element imager design used in
professional equipment. Field tests have demonstrated that most consumer camcorders, regardless of
price, produce noisy video in low light.
Before the 21st century, video editing was a difficult task requiring a minimum of two recorders and
possibly a desktop video workstation to control them. Now, the typical home personal computer can
hold several hours of standard-definition video, and is fast enough to edit footage without additional
upgrades. Most consumer camcorders are sold with basic video editing software, so users can easily
create their own DVDs, or share their edited footage online.
JVC GZ-MG555 hybrid camcorder (MPEG-2 SD Video)
In the first-world market, nearly all camcorders sold today are digital. Tape-based (MiniDV/HDV)
camcorders are declining in popularity, as tapeless models (miniDVD, SD card, hard drive) cost almost
the same, but offer greater convenience. For example, video captured on an SD card can be
transferred to a computer much faster than from digital tape. Hard disk camcorders feature the longest
continuous recording time, though the durability of the hard drive is a concern for harsh and high-
altitude environments. Footage from miniDVD camcorders can be dropped into and played on a DVD
player.
As of 2007, analog camcorders are still available but no longer widely marketed. Even with a street
price below US$200, both digital tape and basic tapeless technology have reached price parity with
the older analog tape, which suffers many disadvantages compared to the newer units, and all low-end
camcorders face market pressure from the rising popularity of multi-function devices (camera phones,
digicams) with basic video-recording capability.
[edit]Other devices with video-capture capability
Video-capture capability is not confined to camcorders. Cellphones, digital single lens reflex and
compact digicams, laptops, and personal media players frequently offer some form of video-capture
capability. In general, these multipurpose devices offer less video-capture functionality than a
traditional camcorder. The absence of manual adjustments, external audio input, and even basic
usability functions (such as autofocus and lens-zoom) are common limitations. Few can capture to
standard TV-video formats (480p60, 720p60, 1080i30), and instead record in either non-TV resolutions
(320x240, 640x480, etc.) or slower frame rates (15fps, 30fps).
When used in the role of a camcorder, a multipurpose-device tends to offer inferior handling and
audio/video performance, which limits its usability for extended and/or adverse shooting situations.
However, much as camera-equipped cellphones are now ubiquitous, video-equipped electronic
devices will likely become commonplace, replacing the market for low-end camcorders.
The past few years have seen the introduction of DSLR cameras with high-definition video. Although
they still suffer from the typical handling and usability deficiencies of other multipurpose-
devices, HDSLR video offers two videographic features unavailable on consumer camcorders, shallow
depth-of-field and interchangeable lenses. Professional video cameras possessing these capabilities
are currently more expensive than even the most expensive video-capable DSLR. In video
applications where the DSLR's operational deficiencies can be mitigated by meticulous planning of
each shooting location, a growing number of video productions are employing DSLRs, such as
the Canon 5D Mark II, to fulfill the desire for depth-of-field and optical-perspective control. Whether in a
studio or on-location setup, the scene's environmental factors and camera placement are known
beforehand, allowing the director of photography to determine the proper camera/lens setup and apply
any necessary environmental adjustments, such as lighting.
A recent development, the combo camera, combines the feature sets of a full-featured still camera
and a camcorder in a single unit. The Sanyo Xacti HD1 was the first such combo unit, combining the
features of a 5.1-megapixel still camera with a 720p video recorder. Overall, the product was a step
forward in terms of a single device's combined level of handling and usability. The combo camera's
concept has caught on with competing manufacturers; Canon and Sony have introduced camcorders
with still-photo performance approaching a traditional digicam, while Panasonic has introduced a
DSLR-body with video features approaching a traditional camcorder.
[edit]Uses
[edit]Media
Operating a camcorder
Camcorders have found use in nearly all corners of electronic media, from electronic news
organizations to TV/current-affairs productions. In locations away from a distribution infrastructure,
camcorders are invaluable for initial video acquisition. Subsequently, the video is transmitted
electronically to a studio/production center for broadcast. Scheduled events such as official press
conferences, where a video infrastructure is readily available or can be feasibly deployed in advance,
are still covered by studio-type video cameras (tethered to "production trucks").
[edit]Home video
For casual use, camcorders often cover weddings, birthdays, graduation ceremonies, kids growing up,
and other personal events. The rise of the consumer camcorder in the mid to late '80s led to the
creation of shows such as the long-running America's Funniest Home Videos, where people could
showcase homemade video footage.
[edit]Politics
Political protestors who have capitalized on the value of media coverage use camcorders to film things
they believe to be unjust. Animal rights protesters who break into factory farms and animal testing labs
use camcorders to film the conditions the animals are living in. Anti-hunting protesters film fox hunts.
People expecting to witness political crimes use cameras for surveillance to collect evidence. Activist
videos often appear on Indymedia.
The police use camcorders to film riots, protests and the crowds at sporting events. The film can be
used to spot and pick out troublemakers, who can then be prosecuted in court.
[edit]Entertainment and movies
Camcorders are often used in the production of low-budget TV shows where the production crew does
not have access to more expensive equipment. There are even examples of movies shot entirely on
consumer camcorder equipment (such as The Blair Witch Project and 28 Days Later). In addition,
many academic filmmaking programs have switched from 16mm film to digital video, due to the vastly
reduced expense and ease of editing of the digital medium as well as the increasing scarcity of film
stock and equipment. Some camcorder manufacturers cater to this market,
particularly Canon and Panasonic, who both support "24p" (24 frame/s, progressive scan; same frame
rate as standard cinema film) video in some of their high-end models for easy film conversion.
Even high-budget cinema is done using camcorders in some cases; George Lucas used
Sony CineAlta camcorders in two of his three Star Wars prequel movies. This process is referred to
as digital cinematography.
[edit]Formats
The following list covers consumer equipment only. (For other formats see Videotape)
[edit]Analog
8 mm Camcorder
Lo-Band: Approximately 3 megahertz bandwidth (250 lines EIA resolution or ~333x480 edge-
to-edge)
BCE (1954): First tape storage for video, manufactured by Bing Crosby Entertainment
from Ampex equipment.
BCE Color (1955): First color tape storage for video, manufactured by Bing Crosby
Entertainment from Ampex equipment.
Simplex (1955): Developed commercially by RCA and used to record several live
broadcasts by NBC.
Quadruplex (1955): Developed formally by Ampex, and this became the recording
standard for the next 20 years.
Vera (1955): An experimental recording standard developed by the BBC; it was never
used or sold commercially.
U-matic (1971): The initial tape used by Sony to record video.
U-matic S (1974): A small sized version of U-matic used for portable recorders.
Betamax (1975): Only used on very old Sony and Sanyo camcorders and portables;
obsolete by the mid/late-80s in the consumer market.
Type B (1976): Developed by Bosch; this became the broadcast standard in Europe for
most of the 1980s.
Type C (1976): Co-developed by Sony and Ampex.
VHS (1976): Compatible with VHS standard VCRs, though VHS camcorders are no
longer made.
VHS-C (1982): Originally designed for portable VCRs, this standard was later adapted for
use in compact consumer camcorders; identical in quality to VHS; cassettes play in
standard VHS VCRs using an adapter. Still available in the low-end consumer market
(JVC model GR-AXM18 is VHS-C; see page 19 of the owner's manual). Relatively short
running time compared to other formats.
Betacam (1982): Introduced by Sony as a 1/2 inch tape for professional video recorders.
Video8 (1985): Small-format tape developed by Sony to combat VHS-C's compact palm-
sized design; equivalent to VHS or Betamax in picture quality, but not compatible. High
quality audio as standard.
Hi-Band: Approximately 5 megahertz bandwidth (420 lines EIA resolution or ~550x480 edge-
to-edge)
U-matic BVU (1982): Largely used in high-end consumer and professional equipment.
The introduction of U-matic BVU spelled the end of 16mm film recordings.
U-matic BVU-SP (1985): An improved version of BVU, largely used in high-end consumer
and professional equipment.
Betacam-SP (1986): A minor upgrade to the Betacam format which nonetheless became a
broadcast standard.
MII (1986): Panasonic's answer to Betacam-SP.
S-VHS (1987): Largely used in medium-end consumer and prosumer equipment; rare
among mainstream consumer equipment, and rendered obsolete by digital gear like
DigiBetacam and DV.
S-VHS-C (1987): An upgrade to provide near-laserdisc quality. Now limited to the low-end
consumer market (example: JVC SXM38). As per VHS-C, relatively short running time
compared to other formats.
Hi8 (1988): Enhanced-quality Video8; roughly equivalent to Super VHS in picture quality,
but not compatible. High quality audio as standard. Now limited to low-end consumer
market (example: Sony TRV138)
[edit]Digital
U-matic (1982): An experimental overhaul of U-matic to record digital video; this proved
impractical, and the tapes were used as a transport for digital audio only. It led to the
D series of tapes about 4 years later.
D1 (Sony) (1986): The first digital video recorder. It used digitized component video,
encoded as Y'CbCr 4:2:2 using the CCIR 601 raster format, and experimentally supported
full HD broadcasts.
D2 (video format) (1988): A cheaper alternative to the D1 tape, created by Ampex; it
digitized composite video rather than D1's component video, and experimentally
supported full HD broadcasts.
D3 (1991): Created by Panasonic to compete with the Ampex D2 and experimentally
supported full HD broadcasts.
DCT (videocassette format) (1992): The first compressed video tape format, created by
Ampex and based on the D1 format. It used the discrete cosine transform as its codec.
DST was a companion data-only format aimed at the rapidly growing IT industry.
D5 HD (1994): 1080i digital standard introduced by Panasonic.
Editcam (1995): First drive-based recording standard, introduced by Ikegami. The FieldPak
used an IDE hard disk and the RAMPak a set of flash RAM modules. It can record in DV25,
Avid JFIF, DV, MPEG IMX, DVCPRO50, or Avid DNxHD format, depending on generation.
Digital-S (1995): JVC's digital tape format, using a cassette similar to VHS but with
different tape inside. Widely used by FOX broadcasting. Also called D-9.
MiniDV (1995): Smaller cassette version of the DV standard. Became the most
widespread standard-definition digital camcorder technology for several years.
DVD (1995): Uses either Mini DVD-R or DVD-RAM. This is a multi-manufacturer standard
that uses 8 cm DVD discs for 30 minutes of video. DVD-R can be played on consumer
DVD players but cannot be added to or recorded over once finalized for viewing. DVD-
RAM can be added to and/or recorded over, but cannot be played on many consumer
DVD players, and costs a lot more than other types of DVD recordable media. The DVD-
RW is another option allowing the user to re-record, but only records sequentially and
must be finalized for viewing. The discs do cost more than the DVD-R format, which only
records once. DVD discs are also very vulnerable to scratches. DVD camcorders are
generally not designed to connect to computers for editing purposes, though some high-
end DVD units do record surround sound, a feature not standard with DV equipment.
DV (1996): The full-size DV format tape, with DVCAM being Sony's professional variant
and DVCPRO Panasonic's.
D-VHS (1998): JVC's digital standard for VHS tape, which supported 1080p HD. Many
units also supported IEEE 1394 recording.
Digital8 (1999): Uses Hi8 tapes (Sony is the only company currently producing D8
camcorders, though Hitachi once also did). Most, but not all models of Digital 8 cameras
have the ability to read older Video8 and Hi8 analog format tapes. The format's technical
specifications are of the same quality as MiniDV (both use the same DV codec), and
although no professional-level Digital8 equipment exists, D8 has been used to make TV
and movie productions (example: Hall of Mirrors).
MICROMV (2001): Uses a matchbox-sized cassette. Sony was the only
electronics manufacturer for this format, and editing software was proprietary to Sony and
only available on Microsoft Windows; however, open source programmers did manage to
create capture software for Linux [1]. The hardware is no longer in production, though
tapes are still available through Sony.
XDCAM (2003): A professional Blu-ray-based standard introduced by Sony. Similar to a
regular Blu-ray Disc but using different codecs, namely MPEG IMX, DV25 (DVCAM),
MPEG-4, MPEG-2, and HD422.
Blu-ray Disc (2003): Presently, Hitachi is the only manufacturer of Blu-ray Disc
camcorders.
P2 (2004): First solid state recording medium of professional quality, introduced by
Panasonic. Recorded DVCPRO, DVCPRO50, DVCPRO-HD, or AVC-Intra stream onto
the card.
HDV (2004): Records up to an hour of HDTV MPEG-2 signal roughly equal to broadcast
quality HD on a standard MiniDV cassette.
SxS (2007): Jointly developed by Sony and Sandisk. This is a solid state format of
XDCAM and is known as XDCAM EX.
MPEG-2 codec-based format: records MPEG-2 program stream or MPEG-2 transport
stream to various kinds of tapeless media (hard disks, solid-state memory, etc.).
Used both for standard-definition (JVC, Panasonic) and high-definition (JVC) recording.
H.264: shorthand for compressed video using the H.264 codec, part of the MPEG-4
standard, in an MPEG-4 file most often stored to tapeless media.
AVCHD: a format that puts H.264 video into a transport-stream file format. The video is
compressed according to the MPEG-4 AVC (aka H.264) standard, but the file format is
not MPEG-4.
[edit]Digital camcorders and operating systems
Since most manufacturers focus their support on Windows and Mac users, users of
other operating systems often are unable to receive support for these devices.
However, open source products such as Cinelerra and Kino (written for
the Linux operating system) do allow full editing of some digital formats on alternative
operating systems, and software to edit DV streams in particular is available on most
platforms.
[edit]Handycam
Sony DVD-Handycam
Handycam is a Sony brand used to market its camcorder range. It was launched in
1985 as the name of the first Video8 camcorder, replacing Sony's previous line
of Betamax-based models, and the name was intended to emphasize the "handy"
palm size nature of the camera, made possible by the new miniaturized tape format.
This was in marked contrast to the larger, shoulder mounted cameras available
before the creation of Video8, and competing smaller formats such as VHS-C.
Sony has continued to produce Handycams [10] in a variety of guises ever since,
developing the Video8 format to produce Hi8 (equivalent to S-VHS quality) and
later Digital8, using the same basic format to record digital video. The Handycam
label continues to be applied as recording formats evolve.
A commercial for the Sony Handycam was made in June 2005 in Europe with the
song "I Love You, ONO" by Stereo Total.
[edit]Handycam models
Handycam (Video8 (1985~))
Hi8 Handycam
Digital8 Handycam
DV Handycam (1995~)
HDV Handycam
DVD-Handycam
HDD Handycam
Memory Stick Handycam (using Memory Stick Pro Duo. Up to 16GB)
Sony Handycam NEX-VG10
Digital television
List of digital television broadcast standards
DVB family (Europe)
DVB-S (satellite)
DVB-S2
DVB-T (terrestrial)
DVB-T2
DVB-C (cable)
DVB-C2
DVB-H (handheld)
DVB-SH (satellite)
ATSC family (North America)
ATSC (terrestrial/cable)
ATSC-M/H (mobile/handheld)
ISDB family (Japan/Latin America)
ISDB-S (satellite)
ISDB-T (terrestrial)
1seg (handheld)
ISDB-C (cable)
SBTVD/ISDB-Tb (Brazil)
Chinese Digital Video Broadcasting standards
DMB-T/H (terrestrial/handheld)
ADTB-T (terrestrial)
CMMB (handheld)
DMB-T (terrestrial)
DMB Family (Korean handheld)
T-DMB (terrestrial)
S-DMB (satellite)
MediaFLO
Codecs
Video
MPEG-2
H.264/MPEG-4 AVC
AVS
Audio
MP2
MP3
AC-3
AAC
HE-AAC
Frequency bands
VHF
UHF
SHF
Digital television (DTV) is the transmission of audio and video by discrete (digital) signals, in contrast
to the analog signals used by analog TV.
Digital television is replacing analog television in several industrialized nations.
[edit]Technical information
Digital terrestrial television broadcasting systems by country
[edit]Formats and bandwidth
Digital television supports many different picture formats defined by the combination of size, aspect
ratio (width to height ratio) and interlacing. With digital terrestrial television broadcasting in the USA,
the range of formats can be broadly divided into two categories: HDTV and SDTV. These terms by
themselves are not very precise, and many subtle intermediate cases exist.
High-definition television (HDTV), one of several different formats that can be transmitted over DTV,
uses formats including 1280 × 720 pixels in progressive scan mode (abbreviated 720p) and
1920 × 1080 pixels in interlaced mode (1080i). Each of these uses a 16:9 aspect ratio. (Some
televisions can receive an HD resolution of 1920 × 1080 at a 60 Hz progressive-scan frame
rate — known as 1080p.) HDTV cannot be transmitted over current analog channels.
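A quick calculation shows why 720p60 and 1080i30 are treated as comparable: their raw, uncompressed pixel rates are within about 12% of each other. The nominal 60 and 30 frame rates are the ones quoted above.

```python
def pixel_rate(width, height, frames_per_second):
    """Raw pixels delivered per second, before any compression."""
    return width * height * frames_per_second

p720 = pixel_rate(1280, 720, 60)    # 720p60: 60 full progressive frames/s
i1080 = pixel_rate(1920, 1080, 30)  # 1080i30: 30 full frames' worth of fields/s
print(p720, i1080)  # 55296000 62208000
```

720p trades spatial resolution for temporal resolution, which is why it was often preferred for sports broadcasting.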
Standard definition TV (SDTV), by comparison, may use one of several different formats taking the
form of various aspect ratios depending on the technology used in the country of broadcast.
For 4:3 aspect-ratio broadcasts, the 640 × 480 format is used in NTSC countries, while 720 × 576 is
used in PAL countries. For 16:9 broadcasts, the 704 × 480 format is used in NTSC countries, while
720 × 576 is used in PAL countries. However, broadcasters may choose to reduce these resolutions to
save bandwidth (e.g., many DVB-T channels in the United Kingdom use a horizontal resolution of 544
or 704 pixels per line).[1]
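Because the same SD raster can carry either a 4:3 or a 16:9 picture, the pixels themselves are non-square. A minimal sketch of the relationship (the helper function is illustrative, not part of any broadcast standard):

```python
from fractions import Fraction

def pixel_aspect_ratio(width, height, display_aspect):
    """Pixel aspect ratio a storage raster needs so that it fills
    a given display aspect ratio (i.e. with non-square pixels)."""
    return Fraction(*display_aspect) / Fraction(width, height)

# A 704x480 NTSC raster shown at 16:9 needs wide (40:33) pixels:
print(pixel_aspect_ratio(704, 480, (16, 9)))   # 40/33
# The same raster shown at 4:3 needs narrow (10:11) pixels:
print(pixel_aspect_ratio(704, 480, (4, 3)))    # 10/11
```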
Each commercial terrestrial DTV channel in North America is permitted to be broadcast at a data rate
up to 19 megabits per second, or 2.375 megabytes per second. However, the broadcaster does not
need to use this entire bandwidth for just one broadcast channel. Instead the broadcast can be
subdivided across several video subchannels (aka feeds) of varying quality and compression rates,
including non-video datacasting services that allow one-way high-bandwidth streaming of data to
computers.
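The arithmetic above is simply bits to bytes; how a broadcaster might divide the multiplex is sketched below (the subchannel figures are hypothetical, chosen only to illustrate the budgeting):

```python
# 19 Mbit/s channel budget expressed in megabytes per second:
channel_mbit = 19
print(channel_mbit / 8)   # 2.375

# Hypothetical split of the multiplex into subchannels (illustrative
# numbers, not any real broadcaster's allocation):
subchannels = {"main HD feed": 12.0, "SD subchannel": 3.5,
               "weather feed": 2.0, "datacast": 1.5}
assert sum(subchannels.values()) <= channel_mbit
```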
A broadcaster may opt to use a standard-definition digital signal instead of an HDTV signal, because
current convention allows the bandwidth of a DTV channel (or "multiplex") to be subdivided into
multiple subchannels (similar to what most FM stations offer with HD Radio), providing multiple feeds of
entirely different programming on the same channel. This ability to provide either a single HDTV feed
or multiple lower-resolution feeds is often referred to as distributing one's "bit budget" or multicasting.
This can sometimes be arranged automatically, using a statistical multiplexer (or "stat-mux"). With
some implementations, image resolution may be less directly limited by bandwidth; for example
in DVB-T, broadcasters can choose from several different modulation schemes, giving them the option
to reduce the transmission bitrate and make reception easier for more distant or mobile viewers.
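The capacity/robustness trade-off can be made concrete with a rough DVB-T bitrate estimate for an 8 MHz channel in 8K mode (a simplified sketch using constants from ETSI EN 300 744; real planning uses the standard's full tables):

```python
def dvbt_bitrate_mbps(bits_per_symbol, code_rate, guard):
    """Approximate DVB-T net bitrate for an 8 MHz channel, 8K mode:
    6048 data carriers, 896 us useful symbol, 188/204 Reed-Solomon."""
    symbol_time = 896e-6 * (1 + guard)
    return 6048 * bits_per_symbol * code_rate * (188 / 204) / symbol_time / 1e6

# Robust mode: QPSK, rate 1/2, 1/4 guard interval -> ~4.98 Mbit/s
print(round(dvbt_bitrate_mbps(2, 1/2, 1/4), 2))
# High-capacity mode: 64-QAM, rate 2/3, 1/32 guard -> ~24.13 Mbit/s
print(round(dvbt_bitrate_mbps(6, 2/3, 1/32), 2))
```

Dropping from 64-QAM to QPSK costs roughly four fifths of the capacity, but makes the signal far easier to receive at distance or in motion.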
[edit]Reception
There are a number of different ways to receive digital television. One of the oldest means of receiving
DTV (and TV in general) is using an antenna (known as an aerial in some countries). This way is
known as Digital Terrestrial Television (DTT). With DTT, viewers are limited to whatever channels the
antenna picks up. Signal quality will also vary.
Other ways have been devised to receive digital television. Among the most familiar to people
are digital cable and digital satellite. In some countries where transmissions of TV signals are normally
achieved by microwaves, digital MMDS is used. Other standards, such as DMB and DVB-H, have
been devised to allow handheld devices such as mobile phones to receive TV signals. Another way
is IPTV, that is, receiving TV via Internet Protocol over a DSL or optical-fiber line. Finally, an
alternative way is to receive digital TV signals via the open Internet. For example, there is P2P (peer-
to-peer) Internet television software that can be used to watch TV on a computer.
Some signals carry encryption and specify use conditions (such as "may not be recorded" or "may not
be viewed on displays larger than 1 m in diagonal measure") backed up with the force of law under
the WIPO Copyright Treaty and national legislation implementing it, such as the U.S. Digital Millennium
Copyright Act. Access to encrypted channels can be controlled by a removable smart card, for
example via the Common Interface (DVB-CI) standard in Europe, or via the Point Of Deployment (POD)
module, also known as CableCARD, in North America.
[edit]Protection parameters for terrestrial DTV broadcasting
[clarification needed]
Digital television signals must not interfere with each other, and they must also coexist
with analog television until it is phased out. The following table gives allowable signal-
to-noise and signal-to-interference ratios for various interference scenarios. This table is a
crucial regulatory tool for controlling the placement and power levels of stations. Digital
TV is more tolerant of interference than analog TV, and this is the reason a smaller range
of channels can carry an all-digital set of television stations.
System parameters (protection ratios)         Canada [13]    USA [5]     EBU [9, 12]      Japan & Brazil
                                                                         ITU-mode M3      [36, 37][2]
C/N for AWGN Channel                          +19.5 dB       +15.19 dB   +19.3 dB         +19.2 dB
                                              (16.5 dB[3])
Co-Channel DTV into Analog TV                 +33.8 dB       +34.44 dB   +34 ~ 37 dB      +38 dB
Co-Channel Analog TV into DTV                 +7.2 dB        +1.81 dB    +4 dB            +4 dB
Co-Channel DTV into DTV                       +19.5 dB       +15.27 dB   +19 dB           +19 dB
                                              (16.5 dB[3])
Lower Adjacent Channel DTV into Analog TV     −16 dB         −17.43 dB   −5 ~ −11 dB[4]   −6 dB
Upper Adjacent Channel DTV into Analog TV     −12 dB         −11.95 dB   −1 ~ −10 dB[4]   −5 dB
Lower Adjacent Channel Analog TV into DTV     −48 dB         −47.33 dB   −34 ~ −37 dB[4]  −35 dB
Upper Adjacent Channel Analog TV into DTV     −49 dB         −48.71 dB   −38 ~ −36 dB[4]  −37 dB
Lower Adjacent Channel DTV into DTV           −27 dB         −28 dB      −30 dB           −28 dB
Upper Adjacent Channel DTV into DTV           −27 dB         −26 dB      −30 dB           −29 dB
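The ratios in the table act as thresholds on the desired-to-undesired (D/U) signal ratio at the receiver. A minimal sketch (the signal levels below are made up for illustration):

```python
def meets_protection(desired_db, undesired_db, required_ratio_db):
    """True if the desired-to-undesired power ratio (in dB) meets
    the required protection ratio from the table."""
    return (desired_db - undesired_db) >= required_ratio_db

# Co-channel analog TV into DTV, US figure (+1.81 dB):
print(meets_protection(-60, -65, 1.81))   # True: 5 dB of margin
# Lower adjacent channel DTV into DTV (-28 dB): a much stronger
# neighbour is tolerable because the requirement is negative.
print(meets_protection(-60, -40, -28))    # True: D/U is only -20 dB
```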
[edit]Interaction
Interaction happens between the TV watcher and the DTV system. It can be understood in different
ways, depending on which part of the DTV system is concerned. It can also be an interaction with the
STB only (to tune to another TV channel or to browse the EPG).
Modern DTV systems are able to provide interaction between the end-user and the broadcaster
through the use of a return path. Coaxial and fiber-optic cable can be bidirectional; with
unidirectional networks such as satellite or terrestrial broadcast, a dialup modem, Internet
connection, or other method is typically used for the return path.
In addition to not needing a separate return path, cable also has the advantage of a communication
channel localized to a neighborhood rather than a city (terrestrial) or an even larger area (satellite).
This provides enough customizable bandwidth to allow true video on demand.
[edit]Conversion from analog to digital
Further information: Analog television
DTV has several advantages over analog TV, the most significant being that digital channels take up
less bandwidth, and the bandwidth needs are continuously variable, at a corresponding reduction in
image quality depending on the level of compression as well as the resolution of the transmitted
image. This means that digital broadcasters can provide more digital channels in the same space,
provide high-definition television service, or provide other non-television services such as multimedia
or interactivity. DTV also permits special services such as multiplexing (more than one program on the
same channel), electronic program guides and additional languages (spoken or subtitled). The sale of
non-television services may provide an additional revenue source.
Digital signals react differently to interference than analog signals. For example, common problems
with analog television include ghosting of images, noise from weak signals, and many other potential
problems which degrade the quality of the image and sound, although the program material may still
be watchable. With digital television, the audio and video must be synchronized digitally, so reception
of the digital signal must be very nearly complete; otherwise, neither audio nor video will be usable.
Short of this complete failure, "blocky" video is seen when the digital signal experiences interference.
[edit]Effect on existing analog technology
Analog switch-off would render a non-digital television obsolete unless it is connected to an external
digital tuner. An external converter box can be added to non-digital televisions to receive the new
digital signals. Several of these devices have already been introduced, with availability on the
increase. In the United States, a government-sponsored coupon was available to offset the cost of an
external converter box. Analog switch-off took place on June 12, 2009 in the United States[5] and is
scheduled for August 31, 2011 in Canada[6], July 24, 2011 in Japan[7], by 2012 in the United
Kingdom[8] and Ireland[9], by 2013 in Australia[10], and by 2015 in the Philippines.
[edit]Environmental issues
The adoption of a broadcast standard incompatible with existing analog receivers has created the
problem of large numbers of analog receivers being discarded during digital television transition. An
estimated 99 million unused analog TV receivers are currently in storage in the US alone[11] and, while
some obsolete receivers are being retrofitted with converters, many more are simply dumped
in landfills [12] where they represent a source of toxic metals such as lead as well as lesser amounts of
materials such as barium, cadmium and chromium.[13]
While the glass in cathode ray tubes contains an average of 3.62 kilograms (8.0 lb) of lead[14] (the
amount varies from 1.08 lb to 11.28 lb depending on screen size, though the lead is "stable and
immobile"[15]), which can have long-term negative effects on the environment if dumped as landfill,[16] the glass
envelope can be recycled at suitably-equipped facilities.[17] Other portions of the receiver may be
subject to disposal as hazardous material.
Local restrictions on disposal of these materials vary widely; in some cases second-hand stores have
refused to accept working color television receivers for resale due to the increasing costs of disposing
of unsold TV's. Those thrift stores which are still accepting donated TV's have reported significant
increases in good-condition working used television receivers abandoned by viewers who often expect
them not to work after digital transition.[18]
In Michigan, one recycler has estimated that as many as one household in four will dispose of or
recycle a TV set in the next year.[19] The digital television transition, migration to high-definition
television receivers and the replacement of CRTs with flatscreens are all factors in the increasing
number of discarded analog CRT-based television receivers.
[edit]Technical limitations
[edit]Compression artifacts and allocated bandwidth
DTV images have some picture defects that are not present on analog television or motion picture
cinema, because of present-day limitations of bandwidth and compression algorithms such as
MPEG-2. One such defect is sometimes referred to as "mosquito noise".[20]
Because of the way the human visual system works, defects in an image that are localized to particular
features of the image or that come and go are more perceptible than defects that are uniform and
constant. However, the DTV system is designed to take advantage of other limitations of the human
visual system to help mask these flaws, e.g. by allowing more compression artifacts during fast motion
where the eye cannot track and resolve them as easily and, conversely, minimizing artifacts in still
backgrounds that may be closely examined in a scene (since time allows).
[edit]Effects of poor reception
Changes in signal reception from factors such as degrading antenna connections or changing weather
conditions may gradually reduce the quality of analog TV. The nature of digital TV results in a
perfectly-decodable video initially, until the receiving equipment starts picking up interference that
overpowers the desired signal or if the signal is too weak to decode. Some equipment will show a
garbled picture with significant damage, while other devices may go directly from perfectly-decodable
video to no video at all or lock up. This phenomenon is known as the digital cliff effect.
For remote locations, distant channels that, as analog signals, were previously usable in a snowy and
degraded state may, as digital signals, be perfectly decodable or may become completely unavailable.
In areas where transmitting antennas are located on mountains, viewers who are too close to the
transmitter may find reception difficult or impossible because the strongest part of the broadcast signal
passes above them. The use of higher frequencies will add to these problems, especially in cases
where a clear line-of-sight from the receiving antenna to the transmitter is not available.
Multi-path interference is a much more significant problem for DTV than for analog TV and affects
reception, particularly when using simple antennas such as rabbit ears. This is perceived
as ghosting with analog broadcasts, but this same problem manifests itself in a different way with DTV.
Multi-path can be worse for DTV under high signal conditions. If the problem is severe enough, multi-
path can be perceived by the viewer as a spotty loss of audio or picture freezing and pixelation.
Dynamic multipath interference, in which the delay and magnitude of reflections are rapidly changing,
is particularly problematic for digital reception. While this just produces moving and changing ghost
images for analog TV, it can render a digital signal impossible to decode. The 8VSB-based standards
in use in North American ATSC broadcasts are particularly vulnerable to problems from dynamic
multipath; this has the potential to severely limit mobile or portable use of digital television receivers.
High-definition television (or HDTV, or just HD) refers to video
having resolution substantially higher than traditional television systems (standard-definition TV,
or SDTV, or SD). HD has one or two million pixels per frame, roughly five times that of SD. Early
HDTV broadcasting used analog techniques, but today HDTV is digitally broadcast using video
compression. Some personal video recorders (PVRs) with hard disk storage but without high-
definition tuners are legitimately described as "HD", for "Hard Disk", which can be a cause of
confusion.
[edit]History of high-definition television
Further information: Analog high-definition television system
The term high definition once described a series of television systems originating from the late
1930s; however, these systems were only high definition when compared to earlier systems that
were based on mechanical systems with as few as 30 lines of resolution.
The British high definition TV service started trials in August 1936 and a regular service in
November 1936 using both the (mechanical) Baird 240 line and (electronic) Marconi-EMI 405
line (377i) systems. The Baird system was discontinued in February 1937. In 1938 France
followed with their own 441 line system, variants of which were also used by a number of other
countries. The US NTSC system joined in 1941. In 1949 France introduced an even higher
resolution standard at 819 lines (768i), a system that would be high definition even by today's
standards, but it was monochrome only. All of these systems used interlacing and a 4:3 aspect
ratio except the 240 line system which was progressive (actually described at the time by the
technically correct term 'sequential') and the 405 line system which started as 5:4 and later
changed to 4:3. The 405 line system adopted the (at that time) revolutionary idea of interlaced
scanning to overcome the flicker problem of the 240 line with its 25 Hz frame rate. The 240 line
system could have doubled its frame rate but this would have meant that the transmitted signal
would have doubled in bandwidth, an unacceptable option.
Color broadcasts started at similarly higher resolutions, first with the US NTSC color system in
1953, which was compatible with the earlier B&W systems and therefore had the same 525 lines
(480i) of resolution. European standards did not follow until the 1960s, when
the PALand SECAM colour systems were added to the monochrome 625 line (576i) broadcasts.
Since the formal adoption of Digital Video Broadcasting's (DVB) widescreen HDTV transmission
modes in the early 2000s, the 525-line NTSC (and PAL-M) systems as well as the European 625-
line PAL and SECAM systems are now regarded as standard definition television systems. In
Australia, the 625-line digital progressive system (with 576 active lines) is officially recognized as
high definition.[1]
[edit]Analog systems
In 1949, France started its transmissions with an 819-line system (768i). It was monochrome
only, was used only on VHF for the first French TV channel, and was discontinued in 1985.
In 1958, the Soviet Union developed Тransformator (Russian: Трансформатор, Transformer),
the first high-resolution (definition) television system capable of producing an image composed of
1,125 lines of resolution aimed at providing teleconferencing for military command. It was a
research project and the system was never deployed in the military or broadcasting.[2]
In 1969, the Japanese state broadcaster NHK first developed consumer high-definition television
with a 5:3 display aspect ratio.[3] The system, known as Hi-Vision or MUSE after its Multiple
sub-Nyquist Sampling Encoding, required about twice the bandwidth of the existing NTSC system
but provided about four times the resolution (1080i/1125 lines). Satellite test broadcasts started
in 1989, regular testing began in 1991, and regular broadcasting on BS-9ch, featuring commercial
and NHK programming, commenced on 25 November 1994.
In 1981, the MUSE system was demonstrated for the first time in the United States, using the
same 5:3 aspect ratio as the Japanese system.[4] Upon visiting a demonstration of MUSE in
Washington, US President Ronald Reagan was most impressed and officially declared it "a
matter of national interest" to introduce HDTV to the USA.[5]
Several systems were proposed as the new standard for the USA, including the Japanese MUSE
system, but all were rejected by the FCC because of their higher bandwidth requirements. At this
time, the number of television channels was growing rapidly and bandwidth was already a
problem. A new standard had to be more efficient, needing less bandwidth for HDTV than the
existing NTSC.
[edit]Rise of digital compression
Since 1972, the International Telecommunication Union's radiocommunication sector (ITU-R)
has been working on creating a global recommendation for analogue HDTV. These
recommendations, however, did not fit in the broadcasting bands that could reach home users.
The standardization of MPEG-1 in 1993 also led to the acceptance of recommendations ITU-R
BT.709.[6] In anticipation of these standards the Digital Video Broadcasting (DVB) organisation
was formed, an alliance of broadcasters, consumer electronics manufacturers and regulatory
bodies. The DVB develops and agrees on specifications which are formally standardised by
ETSI.[7]
DVB created first the standard for DVB-S digital satellite TV, DVB-C digital cable TV and DVB-
T digital terrestrial TV. These broadcasting systems can be used for both SDTV and HDTV. In the
USA the Grand Alliance proposed ATSC as the new standard for SDTV and HDTV. Both ATSC
and DVB were based on the MPEG-2 standard. The DVB-S2 standard is based on the newer and
more efficient H.264/MPEG-4 AVC compression standards. Common for all DVB standards is the
use of highly efficient modulation techniques for further reducing bandwidth, and foremost for
reducing receiver-hardware and antenna requirements.
In 1983, the International Telecommunication Union's radio telecommunications sector (ITU-R)
set up a working party (IWP11/6) with the aim of setting a single international HDTV standard.
One of the thornier issues concerned a suitable frame/field refresh rate, the world already having
split into two camps, 25/50 Hz and 30/60 Hz, tied for reasons of picture stability to the frequency
of their mains electrical supplies.
The IWP11/6 working party considered many views and through the 1980s served to encourage
development in a number of video digital processing areas, not least conversion between the two
main frame/field rates using motion vectors, which led to further developments in other areas.
While a comprehensive HDTV standard was not in the end established, agreement on the aspect
ratio was achieved.
Initially the existing 5:3 aspect ratio had been the main candidate but, due to the influence of
widescreen cinema, the aspect ratio 16:9 (1.78) eventually emerged as being a reasonable
compromise between 5:3 (1.67) and the common 1.85 widescreen cinema format. (Bob Morris
explained that the 16:9 ratio was chosen as being the geometric mean of 4:3, Academy ratio, and
2.4:1, the widest cinema format in common use, in order to minimize wasted screen space when
displaying content with a variety of aspect ratios.[8])
An aspect ratio of 16:9 was duly agreed at the first meeting of the IWP11/6 working party at
the BBC's Research and Development establishment in Kingswood Warren. The resulting ITU-R
Recommendation ITU-R BT.709-2 ("Rec. 709") includes the 16:9 aspect ratio, a
specified colorimetry, and the scan modes 1080i (1,080 actively interlaced lines of resolution)
and 1080p (1,080 progressively scanned lines). The current Freeview HD trials use MBAFF,
which contains both progressive and interlaced content in the same encoding.
It also includes the alternative 1440×1152 HD-MAC scan format. (According to some reports, a
mooted 750-line (720p) format (720 progressively scanned lines) was viewed by some at the ITU
as an enhanced television format rather than a true HDTV format,[9] and so was not included,
although 1920×1080i and 1280×720p systems for a range of frame and field rates were defined
by several US SMPTE standards.)
[edit]Demise of analog HD systems
Even this limited standardization of HDTV did not lead to its adoption, principally for technical and
economic reasons. Early HDTV commercial experiments such as NHK's MUSE required over four
times the bandwidth of a standard-definition broadcast, and despite efforts made to reduce it to
about twice that of SDTV, it was still only distributable by satellite with one channel shared on a
daily basis between seven broadcasters. In addition, recording and reproducing an HDTV signal
was a significant technical challenge in the early years of HDTV. Japan remained the only country
with successful public broadcast analog HDTV. Digital HDTV broadcasting started in 2000 in
Japan, and the analog service ended in the early hours of 1 October 2007.
[edit]Inaugural HDTV broadcast in the United States
HDTV technology was introduced in the United States in the 1990s by the Digital HDTV Grand
Alliance, a group of television companies and MIT.[10][11] Field testing of HDTV at 199 sites in the
United States was completed August 14, 1994.[12] The first public HDTV broadcast in the United
States occurred on July 23, 1996 when the Raleigh, North Carolina television station WRAL-
HD began broadcasting from the existing tower of WRAL-TV south-east of Raleigh, winning a
race to be first with the HD Model Station in Washington, D.C., which began broadcasting July
31, 1996 with the callsign WHD-TV, based out of the facilities of NBC owned and operated
station WRC-TV.[13][14][15] The American Advanced Television Systems Committee (ATSC) HDTV
system had its public launch on October 29, 1998, during the live coverage of astronaut John
Glenn's return mission to space on board the Space Shuttle Discovery.[16] The signal was
transmitted coast-to-coast, and was seen by the public in science centers, and other public
theaters specially equipped to receive and display the broadcast.[16][17]
[edit]European HDTV broadcasts
Although HDTV broadcasts had been demonstrated in Europe since the early 1990s, the first
regular broadcasts started on January 1, 2004 when the Belgian company Euro1080 launched
the HD1 channel with the traditional Vienna New Year's Concert. Test transmissions had been
active since the IBC exhibition in September 2003, but the New Year's Day broadcast marked the
official start of the HD1 channel, and the start of HDTV in Europe.[18]
Euro1080, a division of the Belgian TV services company Alfacam, broadcast HDTV channels to
break the pan-European stalemate of "no HD broadcasts mean no HD TVs bought means no HD
broadcasts..." and kick-start HDTV interest in Europe.[19] The HD1 channel was initially free-to-air
and mainly comprised sporting, dramatic, musical and other cultural events broadcast with a
multi-lingual soundtrack on a rolling schedule of 4 or 5 hours per day.
These first European HDTV broadcasts used the 1080i format with MPEG-2 compression on a
DVB-S signal from SES Astra's 1H satellite. Euro1080 transmissions later changed to
MPEG-4/AVC compression on a DVB-S2 signal in line with subsequent broadcast channels in
Europe.
The first Russian HDTV broadcast commenced in 2007 by NTV Plus, followed by Platform HD in
2008. Both companies broadcast via satellite using MPEG-4/AVC video compression.
In December 2009 the UK became the first European country to deploy high definition content on
digital terrestrial television (branded as Freeview) using the new DVB-T2 transmission standard as
specified in the Digital TV Group (DTG) D-Book.
The Freeview HD service currently contains 4 HD channels and is now rolling out region by
region across the UK in accordance with the digital switchover process. Some transmitters such
as the Crystal Palace and Emley Moor transmitters are broadcasting the Freeview HD service
ahead of the digital switchover by means of a temporary, low-power pre-DSO multiplex.
[edit]Indian HD broadcasts
HDTV broadcasts hit India in early 2010. Currently, only a limited number of channels are
available in HD format depending on the dish television service provider. The channels broadcast
in HD comprise the most-watched channels such as Colors TV, Star Plus, Zee TV, SET
Max, STAR Sports, Zee Cinema, Discovery Channel, ESPN, National Geographic Channel and
other local players. With the commencement of the Delhi Commonwealth Games (October 2010),
Doordarshan also started HD broadcasts. Considered an expensive revolution in the market,
HDTV is seeing gradual growth in India.
[edit]Notation
HDTV broadcast systems are identified with three major parameters:
Frame size in pixels is defined as number of horizontal pixels × number of vertical pixels,
for example 1280 × 720 or 1920 × 1080. Often the number of horizontal pixels is implied from
context and is omitted, as in the case of 720p and 1080p.
Scanning system is identified with the letter p for progressive scanning or i for interlaced
scanning.
Frame rate is identified as number of video frames per second. For interlaced systems
an alternative form of specifying number of fields per second is often used.[citation needed]
If all three parameters are used, they are specified in the following form: [frame size][scanning
system][frame or field rate] or [frame size]/[frame or field rate][scanning system].[citation needed] Often,
frame size or frame rate can be dropped if its value is implied from context. In this case the
remaining numeric parameter is specified first, followed by the scanning system.
For example, 1920×1080p25 identifies progressive scanning format with 25 frames per second,
each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i25 or 1080i50 notation
identifies interlaced scanning format with 25 frames (50 fields) per second, each frame being
1,920 pixels wide and 1,080 pixels high.[citation needed] The 1080i30 or 1080i60 notation identifies
interlaced scanning format with 30 frames (60 fields) per second, each frame being 1,920 pixels
wide and 1,080 pixels high.[citation needed] The 720p60 notation identifies progressive scanning format
with 60 frames per second, each frame being 720 pixels high; 1,280 pixels horizontally are
implied.
50Hz systems support three scanning rates: 25i, 25p and 50p. 60Hz systems support a much
wider set of frame rates: 23.976p, 24p, 29.97i/59.94i, 29.97p, 30p, 59.94p and 60p. In the days of
standard definition television, the fractional rates were often rounded up to whole numbers, e.g.
23.976p was often called 24p, or 59.94i was often called 60i. 60Hz high definition television
supports both fractional and slightly different integer rates, therefore strict usage of notation is
required to avoid ambiguity. Nevertheless, 29.97i/59.94i is almost universally called 60i, likewise
23.976p is called 24p.[citation needed]
For commercial naming of a product, the frame rate is often dropped and is implied from context
(e.g., a 1080i television set). A frame rate can also be specified without a resolution. For example,
24p means 24 progressive scan frames per second, and 50i means 25 interlaced frames per
second.[citation needed]
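The notation described above is regular enough to parse mechanically; the sketch below is an illustration only (the regex and the returned tuple shape are not defined by any standard):

```python
import re

def parse_hdtv_notation(s):
    """Parse notation such as '1080i25', '720p60' or '1920x1080p25'
    into (width or None, height, scan, rate or None)."""
    m = re.fullmatch(r"(?:(\d+)[x×])?(\d+)([pi])(\d+(?:\.\d+)?)?", s)
    if m is None:
        raise ValueError("not HDTV notation: " + s)
    w, h, scan, rate = m.groups()
    return (int(w) if w else None, int(h), scan,
            float(rate) if rate else None)

print(parse_hdtv_notation("1080i25"))        # (None, 1080, 'i', 25.0)
print(parse_hdtv_notation("1920x1080p25"))   # (1920, 1080, 'p', 25.0)
print(parse_hdtv_notation("720p"))           # (None, 720, 'p', None)
```

Note that, as the text explains, an interlaced rate may be given in frames or in fields, so 1080i25 and 1080i50 can describe the same signal; a parser cannot disambiguate that from the string alone.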
There is no standard for HDTV color support. Until recently the color of each pixel was regulated
by three 8-bit color values, each representing the level of red, blue, and green which defined a
pixel color. Together the 24 total bits defining color yielded just under 17 million possible pixel
colors. Recently[when?] some manufacturers have produced systems that employ 10 bits for
each color (30 bits total), providing a palette of over 1 billion colors; they say this gives a
much richer picture, but there is no agreed way to specify that a piece of equipment supports this
feature.
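The palette sizes quoted above follow directly from the bit depths:

```python
def palette_size(bits_per_channel, channels=3):
    """Number of distinct pixel colors for a given per-channel depth."""
    return 2 ** (bits_per_channel * channels)

print(palette_size(8))    # 16777216  ("just under 17 million")
print(palette_size(10))   # 1073741824  (about 1 billion)
```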
Most HDTV systems support resolutions and frame rates defined either in the ATSC table 3, or in
EBU specification. The most common are noted below.
[edit]High-definition display resolutions

Video format   Native resolution     Pixels                   Aspect ratio (W:H)       Description
supported      (W×H)                 Actual     Advertised    Image    Pixel
                                                (Mpixel)
720p           1024×768 (XGA)        786,432    0.8           16:9     4:3             Typically a PC resolution (XGA); also a native resolution on many entry-level plasma displays with non-square pixels.
720p           1280×720              921,600    0.9           16:9     1:1             Standard HDTV resolution and a typical PC resolution (WXGA) used by video projectors; also used for 750-line video, as defined in SMPTE 296M, ATSC A/53, ITU-R BT.1543.
720p           1366×768 (WXGA)       1,049,088  1.0           683:384  approx. 1:1     A typical PC resolution (WXGA); also used by many HD ready displays based on LCD technology.
                                                              (approx. 16:9)
1080p/1080i    1920×1080             2,073,600  2.1           16:9     1:1             Standard HDTV resolution, used by Full HD and HD ready 1080p displays such as high-end LCD, plasma and rear-projection sets; a typical PC resolution (lower than WUXGA); also used for 1125-line video, as defined in SMPTE 274M, ATSC A/53, ITU-R BT.709.

Video format   Screen resolution   Supported image resolution   Pixels                  Aspect ratio (W:H)   Description
               (W×H)               (W×H)                        Actual     Advertised   Image    Pixel
                                                                           (Mpixel)
720p           1780×956            1780×956 (clean aperture)    876,096    0.9          16:9     1:1         Used for 750-line video with raster artifact/overscan compensation, as defined in SMPTE 296M.
1080p          1920×1080           1888×1062 (clean aperture)   2,001,280  2.0          16:9     1:1         Used for 1125-line video with raster artifact/overscan compensation, as defined in SMPTE 274M.
1080i          1920×1080           1440×1080 (HDCAM/HDV)        1,555,200  1.6          16:9     4:3         Used for anamorphic 1125-line video in the HDCAM and HDV formats introduced by Sony and defined (also as a luminance subsampling matrix) in
[edit]Standard frame or field rates
23.976 Hz (film-looking frame rate compatible with NTSC clock speed standards)
24 Hz (international film and ATSC high definition material)
25 Hz (PAL, SECAM film, standard definition, and high definition material)
29.97 Hz (NTSC standard definition material)
50 Hz (PAL & SECAM high definition material)
59.94 Hz (ATSC high definition material)
60 Hz (ATSC high definition material)
[Image: a comparison of multiple TV resolution standards as viewed on a fixed-pixel display at
full 1080p resolution.]
At a minimum, HDTV has twice the linear resolution of standard-definition television(SDTV), thus
showing greater detail than either analog television or regular DVD. The technical standards for
broadcasting HDTV also handle the 16:9 aspect ratio images without
using letterboxing or anamorphic stretching, thus increasing the effective image resolution.
The optimum format for a broadcast depends upon the type of videographic recording medium
used and the image's characteristics. The field and frame rate should match the source and the
resolution. A very high resolution source may require more bandwidth than available in order to
be transmitted without loss of fidelity. The lossy compression that is used in all digital HDTV
storage and transmission systems will distort the received picture, when compared to the
uncompressed source.
There is widespread confusion in applying terms like PAL, SECAM, or NTSC to HD material;
these are standard-definition standards only, not HD. There is no specific technical reason to
keep 25 Hz as the HD frame rate in a former PAL country (except where compatibility with both
HD and standard-definition television systems is needed).
[edit]Types of media
Standard 35 mm photographic film used for cinema projection has higher resolution than HDTV
systems, and is exposed and projected at a rate of 24 frames per second. To be shown on
standard television, in PAL-system countries, cinema film is scanned at the TV rate of 25 frames
per second, causing an acceleration of 4.1 percent, which is generally considered acceptable. In
NTSC-system countries, the TV scan rate of 30 frames per second would cause a perceptible
acceleration if the same were attempted, and the necessary correction is performed by a
technique called 3:2 pull-down: over each successive pair of film frames, one is held for three
video fields (1/20 of a second) and the next is held for two video fields (1/30 of a second), giving a
total time for the two frames of 1/12 of a second and thus achieving the correct average film
frame rate.
See also: Telecine
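The 3:2 pull-down arithmetic above can be checked directly: holding alternating film frames for three and two video fields gives five fields, i.e. 1/12 of a second, per pair of frames, which restores the 24 frame-per-second average. A minimal sketch (not from the article):

```python
# Sketch: verify the 3:2 pull-down arithmetic.
# NTSC video shows ~60 fields per second; film runs at 24 frames per second.

FIELD_RATE = 60.0   # nominal NTSC field rate (the exact rate is 60000/1001)

def pulldown_pattern(num_film_frames):
    """Fields each film frame is held for: 3, 2, 3, 2, ..."""
    return [3 if i % 2 == 0 else 2 for i in range(num_film_frames)]

pair = pulldown_pattern(2)             # one pair of film frames
pair_seconds = sum(pair) / FIELD_RATE  # 5 fields -> 1/12 of a second
effective_rate = 2 / pair_seconds      # film frames conveyed per second

print(sum(pair))        # 5 fields per two frames
print(effective_rate)   # 24.0 -> the correct average film frame rate
```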
Non-cinematic HDTV video recordings intended for broadcast are typically recorded either
in 720p or 1080i format as determined by the broadcaster. 720p is commonly used for Internet
distribution of high-definition video, because most computer monitors operate in progressive-scan
mode. 720p also imposes less strenuous storage and decoding requirements compared to both
1080i and 1080p. 1080p is usually used for Blu-ray Disc.
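One way to see why 720p imposes lighter storage and decoding requirements is to compare raw, pre-compression data rates. The sketch below assumes 8-bit 4:2:0 sampling (an average of 12 bits per pixel); exact figures depend on bit depth and chroma format:

```python
# Sketch (assumed 8-bit 4:2:0 sampling): compare raw data rates of
# common HDTV formats, in megabits per second.

def raw_mbps(width, height, frames_per_sec, bits_per_pixel=12):
    # 4:2:0 with 8-bit samples averages 12 bits/pixel (8 luma + 4 chroma)
    return width * height * frames_per_sec * bits_per_pixel / 1e6

print(round(raw_mbps(1280, 720, 60)))    # 720p60  -> 664
print(round(raw_mbps(1920, 1080, 30)))   # 1080i60 (~30 full frames/s) -> 746
print(round(raw_mbps(1920, 1080, 60)))   # 1080p60 -> 1493
```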
Contemporary systems
Main article: Large-screen television technology
Besides an HD-ready television set, other equipment may be needed to view HD television. In the
US, Cable-ready TV sets can display HD content without using an external box. They have
a QAM tuner built-in and/or a card slot for inserting a CableCARD.[20]
High-definition image sources include terrestrial broadcast, direct broadcast satellite, digital
cable, IPTV, the high definition Blu-ray video disc (BD), internet downloads, the Blu-ray disc
compatible Sony PlayStation 3 video game console (PS3), and the Microsoft Xbox 360 video
game console.
Recording and compression
Main article: High-definition pre-recorded media and compression
HDTV can be recorded to D-VHS (Digital-VHS or Data-VHS), W-VHS (analog only), to an HDTV-
capable digital video recorder (for example DirecTV's high-definition digital video recorder, Sky
HD's set-top box, Dish Network's VIP 622 or VIP 722 high-definition digital video recorder
receivers, or TiVo's Series 3 or HD recorders), or an HDTV-ready HTPC. Some cable
boxes are capable of receiving or recording two or more broadcasts at a time in HDTV format,
and HDTV programming, some free, some for a fee, can be played back with the cable
company's on-demand feature.
The massive amount of data storage required to archive uncompressed streams meant that
inexpensive uncompressed storage options were not available in the consumer market until
recently. In 2008 the Hauppauge 1212 Personal Video Recorder was introduced. This device
accepts HD content through component video inputs and stores the content in an
uncompressed MPEG transport stream (.ts) file or Blu-ray format .m2ts file on the hard drive or
DVD burner of a computer connected to the PVR through a USB 2.0 interface.
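The scale of the storage problem mentioned above is easy to estimate. Assuming an 8-bit 4:2:0 stream at 1920×1080 with 30 full frames per second (≈746 Mbit/s raw), one hour occupies roughly a third of a terabyte:

```python
# Sketch (assumed 8-bit 4:2:0 sampling, 1920x1080 at 30 full frames/s):
# estimate how much disk one hour of uncompressed HD video occupies.

bits_per_second = 1920 * 1080 * 30 * 12   # 12 bits/pixel for 8-bit 4:2:0
bytes_per_hour = bits_per_second / 8 * 3600

print(round(bytes_per_hour / 1e9))        # ~336 GB per hour
```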
Realtime MPEG-2 compression of an uncompressed digital HDTV signal is prohibitively
expensive for the consumer market at this time, but should become inexpensive within several
years (although this is more relevant for consumer HD camcorders than recording HDTV). Analog
tape recorders with bandwidth capable of recording analog HD signals such as W-VHS recorders
are no longer produced for the consumer market and are both expensive and scarce in the
secondary market.
In the United States, as part of the FCC's plug and play agreement, cable companies are required
to provide customers who rent HD set-top boxes with a set-top box with
"functional" Firewire (IEEE 1394) upon request. None of the direct broadcast satellite providers
have offered this feature on any of their supported boxes, but some cable TV companies have.
As of July 2004, boxes are not included in the FCC mandate. This content is protected by
encryption known as 5C.[21] This encryption can prevent duplication of content or simply limit the
number of copies permitted, thus effectively denying most if not all fair use of the content.