

1 Next-generation broadcasting media

STRL is advancing with its research on Super Hi-Vision (SHV) and three-dimensional television for next-generation broadcast media.

We studied the system parameters of SHV and examined the frame frequency and colorimetry (as well as the pixel count, aspect ratio, and bit depth). After conducting thorough assessment tests, we confirmed that SHV will provide a sense of realness for households even on a midsize screen of approximately 70 inches. Our R&D on SHV cameras has led to a three-chip, full-resolution SHV camera system using 33-megapixel imaging devices. Our work on SHV displays has yielded a practical, compact projector and real-time signal processing for a high-dynamic-range projector. We also developed an 85-inch full-resolution LCD display in collaboration with a commercial electronics manufacturer and produced a prototype 58-inch high-resolution PDP as a milestone toward our goal of a 100-inch SHV display.

We conducted successful test transmissions between London and Tokyo using an improved AVC/H.264 SHV codec. We also submitted proposals for standardization of a new video coding scheme, HEVC, to MPEG and researched a new video coding technology called "reconstructive video coding".

Regarding the audio system for SHV, we confirmed the effectiveness of the loudspeaker arrangement for 22.2 multichannel sound and developed a sophisticated and practical production system. We also studied a home audio system that uses fewer loudspeakers than channels, as well as a transmission coding system.

We are studying transmission technologies with improved bandwidth efficiency for the next generation of wideband satellite broadcasting and digital terrestrial broadcasting to provide high-capacity transmissions such as SHV. We also produced a prototype of cable transmission equipment that uses multiple 6 MHz channels conforming to existing standards. In other research, we studied methods of transmitting uncompressed SHV signals over optical fiber as a means of transmitting program materials and conducted transmission tests at IBC 2010.

We submitted proposals for standardizing SHV to ITU-R, SMPTE, and other standards organizations. We also issued our R&D roadmap for the future of SHV.

We will continue to research integral 3D television (IP 3DTV), with the ultimate goal being a form of video that shows natural 3D images without special glasses. We increased the resolution of IP 3DTV using full-resolution SHV devices with pixel-offset technology. We also studied ways of converting multi-viewpoint images captured with multiple cameras into IP 3D imagery, so that we can capture subjects that would otherwise be difficult to capture with ordinary IP techniques. We submitted research results on 3D video (studies on the dependency of the visual comfort and naturalness of 3D images on display conditions) to the ITU-R for standardization.

1.1 Super Hi-Vision

1.1.1 Super Hi-Vision format

We are conducting studies on the video parameters of Super Hi-Vision (SHV).

■ Evaluating the sense of realness

SHV is a video system that is able to provide a very strong sense of presence due to its extremely high resolution and wide field of view. SHV also provides a sense of realness (as though the real object were actually there). In FY2010, we evaluated the relationship between image resolution and the sense of realness in experiments where subjects compared actual objects and video imagery. Our goal is to achieve a level of realism whereby images cannot be distinguished from the actual objects(1). Figure 1 shows the relationship between angular resolution (cycles per degree: cpd) and the sense of realness when viewing images. The higher the angular resolution, the greater the sense of realness; the sense saturates gently above about 60 cpd, and above 155 cpd images are indistinguishable from the real object.

Figure 1. Relationship between image resolution and sense of realness (sense of realness, ln(p), versus resolution in cpd for a plaster bust, a model ship, and butterflies, with the real object as reference)



The results of converting the angular resolution to field-of-view (FOV) angle or viewing distance for SHV (7680-pixel horizontal resolution) are shown in Figure 2.

Combining this study's results with those of a past study on the relationship between the sense of being there and FOV shows that the SHV system can provide both a sense of being there and a sense of realness for FOV angles ranging from 30 to 100 degrees (viewing distances of 0.75 to 3 times the picture height). SHV is expected to be viewed on large screens in theatres, medium screens in homes, and on small, portable screens.
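As a rough illustration of how these figures relate, the sketch below converts an angular-resolution target into the corresponding horizontal FOV and viewing distance in picture heights. It assumes a flat 16:9 screen viewed on-axis and defines one cycle as two pixels; the function names are ours, and the exact conversion used in the experiments may differ.

```python
import math

H_PIXELS = 7680      # SHV horizontal pixel count
ASPECT = 16 / 9      # assumed picture aspect ratio

def fov_for_resolution(cpd, h_pixels=H_PIXELS):
    """Horizontal field of view (deg.) at which h_pixels gives the requested
    angular resolution in cycles per degree (one cycle = two pixels)."""
    return (h_pixels / 2) / cpd

def distance_in_picture_heights(fov_deg, aspect=ASPECT):
    """Viewing distance in picture heights (H) for a horizontal FOV on a flat screen."""
    return aspect / (2 * math.tan(math.radians(fov_deg / 2)))

for cpd in (60, 155):
    fov = fov_for_resolution(cpd)
    print(f"{cpd:3d} cpd -> FOV {fov:5.1f} deg., viewing distance {distance_in_picture_heights(fov):.1f} H")
# 60 cpd -> about 64 deg. FOV (about 1.4 H); 155 cpd -> about 25 deg. FOV (about 4.0 H)
```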

■ Frame frequency

In the past, we studied the relationship between temporal sampling parameters and degradation in motion portrayal, including motion blur, stroboscopic effects, and flicker, as factors for determining the frame frequency. Taking the results of these studies into consideration, we have evaluated the relationship between frame frequency and motion video quality on a 100-inch screen. The improvement in image quality as a result of increasing the frame frequency was found to be 0.5 (on a five-grade interval scale) in going from 60 Hz to 120 Hz and a further 0.2 in going from 120 Hz to 240 Hz.

■ System colorimetry

We conducted experiments on capturing and displaying wide-color-gamut natural images based on a wide-gamut system colorimetry that uses RGB primaries equivalent to monochromatic light sources on the spectral locus(2). Figure 3 shows color distributions of the objects. These objects include colors that cannot be reproduced by the HDTV color gamut. We confirmed that they could be covered by the proposed wide-gamut system colorimetry for SHV. A comparison of images reproduced by a laser display color gamut and those limited by the HDTV color gamut confirmed that the wider color gamut reproduces colors with higher saturation, enabling better color reproduction that is closer to reality. This research was conducted in cooperation with Mitsubishi Electric Corporation.
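To make the gamut comparison concrete, here is a minimal sketch that tests whether a CIE 1931 (x, y) chromaticity lies inside the SHV gamut triangle (primaries as listed in Table 1 below) or the HDTV (Rec. 709) triangle. The sample colour is illustrative only; this is not the evaluation procedure used in the experiments.

```python
# Point-in-triangle test on the CIE 1931 chromaticity diagram.
SHV_PRIMARIES = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]   # R, G, B from Table 1
HDTV_PRIMARIES = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # Rec. 709 R, G, B

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_gamut(xy, primaries):
    """True if chromaticity xy lies inside (or on) the triangle spanned by the primaries."""
    r, g, b = primaries
    signs = [_cross(r, g, xy), _cross(g, b, xy), _cross(b, r, xy)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# A highly saturated green that falls outside the HDTV gamut but inside the SHV gamut:
sample = (0.17, 0.60)
print("HDTV:", in_gamut(sample, HDTV_PRIMARIES), "SHV:", in_gamut(sample, SHV_PRIMARIES))
```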

■ Standardization

We continued with our efforts to standardize the SHV format (UHDTV in ITU-R terminology) at ARIB and ITU-R. In addition to contributing the system colorimetry research results described above, we have proposed a UHDTV video parameter set based on research to date (see Table 1).

[References]
(1) K. Masaoka, Y. Nishida, M. Sugawara and E. Nakasu: "Comparing Visual Realness between High Resolution Images and Real Objects," ITE Technical Report, Vol. 35, No. 16, HI2011-62, 3DIT2011-50, pp. 133-135 (2011) (in Japanese)
(2) K. Masaoka, K. Omura, Y. Nishida, M. Sugawara, Y. Nojiri, E. Nakasu, S. Kagawa, A. Nagase, T. Kuno and H. Sugiura: "Demonstration of a wide-gamut system colorimetry for UHDTV," ITE Annual Convention, 6-2 (2010) (in Japanese)

Figure 2. Sense of being there and sense of realness for different field-of-view angles (corresponding viewing distances given in picture heights, H)

Figure 3. Color distribution of wide-color objects and RGB primaries of SHV (outer) and HDTV (inner)

Table 1. Basic video parameters of SHV
Spatial resolution (H × V): 7680 × 4320 pixels
Frame frequency: 60 Hz (under study for over 60 Hz)
Bit depth: 12 bit
System colorimetry (CIE 1931):
  Red:   x = 0.708, y = 0.292
  Green: x = 0.170, y = 0.797
  Blue:  x = 0.131, y = 0.046
  Reference white: D65 (x = 0.3127, y = 0.3290)


1.1.2 Cameras

We are making advances on a full-resolution Super Hi-Vision (SHV) camera with a 7680-pixel by 4320-line resolution for red, green and blue(1) (Figure 1). In FY2010, we reduced the size of the equipment used to transmit the video signal between the camera head and the camera control unit prototyped in FY2009, so that it can be incorporated into the camera head for greater portability. This development also helped to reduce the total weight of the camera head, including the lens, to 65 kg.

We are developing a lateral chromatic aberration correction system that uses real-time signal processing. The accuracy of correction was increased by adding a function to support not only the zoom parameter but also the iris and focus parameters.

We also devised a method for generating and presenting a focus-assist signal to help camera operators focus using a low-resolution viewfinder, and we implemented this function in the camera. The assist signal is generated by applying a maximum filter to a pre-selected high-frequency component of the signal, and this is overlaid on a viewfinder image generated by low-pass filtering. This method enables focusing to an accuracy equal to that of using an ultrahigh-resolution monitor, and it has a low computational cost, making it very practical.
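The sketch below illustrates this kind of focus-assist overlay on a grayscale frame using NumPy/SciPy. The filter sizes and gain are illustrative assumptions, not the camera's actual parameters.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def focus_assist_overlay(frame, hp_size=3, peak_size=9, gain=4.0):
    """Overlay a focus-assist signal on a low-pass-filtered viewfinder image.

    frame: 2-D float array (full-resolution luminance, 0..1).
    A high-frequency component is extracted, expanded with a maximum filter so
    that thin in-focus edges remain visible on a low-resolution viewfinder, and
    added onto the low-pass-filtered base image.
    """
    lowpass = uniform_filter(frame, size=hp_size)
    highfreq = np.abs(frame - lowpass)                   # pre-selected HF component
    assist = maximum_filter(highfreq, size=peak_size)    # spread edge peaks
    viewfinder = uniform_filter(frame, size=peak_size)   # low-resolution base image
    return np.clip(viewfinder + gain * assist, 0.0, 1.0)
```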

We were able to shoot outdoor scenes using this prototype camera and to exhibit SHV video that was captured and displayed at full resolution at the STRL Open House 2010.

We also proceeded with our work on single-chip color imaging devices using 33-megapixel image sensors for full-resolution SHV, aimed at compact SHV cameras, and made advances on demosaicing(2) to be applied to ultrahigh-resolution single-chip color imaging.

Figure 1. Full-resolution SHV prototype camera

[References]
(1) T. Yamashita, R. Funatsu, T. Yanagi, K. Mitani, Y. Nojiri and T. Yoshida: "A Camera System Using Three 33-M-pixel CMOS Image Sensors for UHDTV2," SMPTE Annual Technical Conference & Expo (2010)
(2) R. Funatsu, T. Soeno, K. Omura, T. Yamashita and M. Sugawara: "High-resolution Demosaicing Using an Even-tap Symmetric FIR Filter," ITE Winter Annual Convention, 4-9 (2010) (in Japanese)

1.1.3 Displays

■ Projectors

We have been developing a high-dynamic-range projector with dual modulation since FY2006. The dual modulation method requires processing of the outputs of the primary and secondary modulators to obtain the optical output, and initially the video signals had to be created computationally. In FY2010, we implemented real-time video signal processing equipment to directly display Super Hi-Vision (SHV) video signals.
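As an illustration of the kind of processing dual modulation requires, the sketch below factors a linear-light target image into primary- and secondary-modulator drive signals. This is a generic dual-modulation split (a blurred square root for the primary and a ratio image for the secondary), assumed for illustration; it is not NHK's real-time algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dual_modulation(target, blur_sigma=8.0, eps=1e-4):
    """Split a linear-light target image into primary and secondary drive signals.

    target: 2-D array of linear luminance in [0, 1].
    The primary modulator is driven with a blurred square root of the target;
    blurring the primary again models the optical spread of its light field,
    and the secondary modulator is driven with the ratio target / light_field.
    """
    primary = gaussian_filter(np.sqrt(np.clip(target, 0.0, 1.0)), blur_sigma)
    light_field = gaussian_filter(primary, blur_sigma)
    secondary = np.clip(target / (light_field + eps), 0.0, 1.0)
    return primary, secondary
```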

We also developed a compact projector capable of displaying SHV by using 4K display devices for each of the R, G and B signals and shifting them diagonally by a half pixel every second frame(1) (Figure 1). This projector was jointly developed with NHK Engineering Services and JVC Kenwood Holdings.

■ Direct-view displays

We are developing self-emissive direct-view SHV displays for home use. In collaboration with Panasonic, we successfully prototyped a 58-inch ultra-high-resolution plasma display panel (PDP) with a pixel pitch of 0.33 mm (Figure 2). It has four times the number of pixels of Hi-Vision displays, the same as the 103-inch PDP display that we developed previously, and it matches the pixel pitch required for implementing a 100-inch full-resolution SHV display.

Figure 1. Compact projector
Figure 2. Ultra-high-definition, 58-inch, 0.33 mm pixel-pitch PDP display exhibited at the STRL Open House 2010 and IBC 2010 in Europe


In cooperation with Sharp Corporation, we developed an 85-inch, full-resolution SHV liquid crystal display (LCD) with a pixel pitch of 0.25 mm.

■ Ultra-realistic communication system

This is the final year of our research undertaken for the National Institute of Information and Communications Technology (NICT). This project began in FY2009 and is entitled "Ultra-realistic communication system." We verified the effectiveness of a binaural audio system designed for a dome theatre, conducted a subjective assessment of the relationship between the brightness, contrast and screen size of the display(2), and outlined video and audio specifications for dome theatres.

[References]
(1) F. Okano, M. Kanazawa, Y. Kusakabe, Y. Haino, M. Furuya and M. Sato: "Proposal for a scanning method based on complementary field offset sampling and its application to Super Hi-Vision projector," ITE Technical Report, Vol. 35, No. 4, IDY2011-9, pp. 115-118 (2011) (in Japanese)
(2) M. Kanazawa, J. Nishigaki, K. Takeuchi, R. Harada and T. Imamura: "Image Presentation for Dome Theater," ITE Technical Report, Vol. 35, No. 16, HI2011-61, 3DIT2011-49, pp. 129-132 (2011) (in Japanese)

1.1.4 Coding

We are studying video compression coding methods for Super Hi-Vision (SHV) broadcasting.

■ SHV video encoder

We improved the AVC/H.264 SHV video encoder developed in FY2009. In FY2010, we improved the video format converter that divides up the SHV video signal. The conventional system does not transmit the periphery of the image due to overlap processing at the partition boundaries. As a remedy, we added an interpolation function that allows pixels at the edge of the image to be transmitted. This research was conducted in cooperation with Fujitsu Laboratories, Ltd.

On September 29 and 30, 2010, we conducted live international SHV transmission experiments in cooperation with NTT, the BBC, and academic networks. In these experiments, video captured at a BBC studio in London was compressed using an encoder developed by NHK and transmitted to STRL in Tokyo via an advanced internet (Figures 1, 2). We confirmed that high-quality SHV can be transmitted using multiple networks managed by different operators and without any guaranteed bandwidth.

Figure 1. Encoder (London)
Figure 2. Decoder (Tokyo)

In parallel with these efforts, we are moving forward with our research on video coding methods for program production. We developed functions for extending the JPEG2000-standard video encoders developed in FY2009 and earlier, including a reversible RGB-YUV conversion system and an extended pixel dynamic range system.

■ Next-generation video coding

Standardization of High Efficiency Video Coding (HEVC) for ultra-high-definition video is in progress at ISO/IEC and ITU-T. We are studying next-generation video coding methods to be used for this standard(1).

We developed a technology that adaptively uses a discrete cosine transform (DCT) or a discrete sine transform (DST). HEVC's advanced intra-prediction and motion-compensated prediction techniques de-correlate the residual signals after the predictions, which means that the DCT is not necessarily suitable for HEVC. By using a DCT and a DST together, we were able to improve performance for images that include complex textures. We also developed a new intra-prediction technology that improves video coding efficiency while maintaining continuity between boundary regions and predicted regions. This research was conducted in cooperation with Mitsubishi Electric Corporation.
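The sketch below illustrates the selection principle on a prediction-residual block: whichever transform packs more of the block's energy into a few coefficients is chosen. SciPy's type-II DCT/DST are used here as stand-ins; HEVC's sine transform is a DST-VII variant, but the principle is the same.

```python
import numpy as np
from scipy.fft import dctn, dstn

def pick_transform(residual_block, keep=8):
    """Choose between a 2-D DCT and a 2-D DST for a residual block by comparing
    how much of the total coefficient magnitude the largest `keep` coefficients hold."""
    candidates = {
        "DCT": dctn(residual_block, type=2, norm="ortho"),
        "DST": dstn(residual_block, type=2, norm="ortho"),
    }
    def compaction(coeffs):
        mags = np.sort(np.abs(coeffs).ravel())[::-1]
        return mags[:keep].sum() / (mags.sum() + 1e-12)
    return max(candidates, key=lambda name: compaction(candidates[name]))
```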

■ Reconstructive video coding

For transmitting high-data-volume video such as SHV, we are looking into a new concept called reconstructive video coding, in which video is coded after being reduced in size according to parameters such as transmission bandwidth and video content, and is reconstructed using super-resolution techniques.

This method uses existing conventional video coding schemes such as AVC/H.264 as its core, but in a pre-processing stage it optimizes the super-resolution parameters with adaptive processing according to texture and local reconstruction. These parameters are sent to the post-processing stage as auxiliary data, and they make possible high-quality image reconstructions (Figure 3).

In FY2010, we added parameter optimization to the pre-processor so that it can control non-uniform spatial and temporal subsampling and the aliasing due to pixel decimation. The parameter optimization uses super-resolution in-loop simulation.

We developed two reconstruction methods for the post-processor: a sequential Monte Carlo method(2) and a method based on wavelet analysis(3). We also developed techniques for iterative synthesis of high-frequency components based on self-similarity.

The sequential Monte Carlo-based reconstruction method generates a super-resolution image using weightings and by choosing from multiple hypothetical images. The resolution of the images is increased by iteratively processing them based on the similarity between reduced hypothetical images and the decoded image. We confirmed the effectiveness of this method for reconstructing images (Figure 4).

Figure 3. Reconstructive video coding (pre-processor: geometric transforms, pixel decimation, optimization/image analysis, and side-information coding; core coding: video coding and decoding; post-processor: side-information decoding, inverse geometric transforms, super-resolution, and pixel reconstruction)
Figure 4. Test results (left: conventional coding method; right: reconstructive coding method)
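A highly simplified sketch of this idea is shown below: candidate high-resolution images are repeatedly reduced, weighted by their similarity to the decoded low-resolution image, and recombined. The hypothesis generation, weighting, and resampling in the actual sequential Monte Carlo method are considerably more elaborate; the parameters here are illustrative only.

```python
import numpy as np

def downscale(img, f):
    """Box-average downscaling by an integer factor f."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def hypothesis_weighted_sr(decoded_lr, hypotheses, factor, iters=5, sigma=0.05, jitter=0.01):
    """Iteratively refine candidate high-resolution images: reduce each candidate,
    weight it by similarity to the decoded low-resolution image, and use the
    weighted mean (plus small perturbations) as the next candidate set."""
    hyps = [h.astype(float) for h in hypotheses]
    rng = np.random.default_rng(0)
    for _ in range(iters):
        errs = np.array([np.mean((downscale(h, factor) - decoded_lr) ** 2) for h in hyps])
        weights = np.exp(-(errs - errs.min()) / (2 * sigma ** 2))
        weights /= weights.sum()
        estimate = sum(w * h for w, h in zip(weights, hyps))
        hyps = [estimate] + [estimate + rng.normal(0, jitter, estimate.shape)
                             for _ in range(len(hyps) - 1)]
    return estimate
```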


The wavelet reconstruction method computes super-resolution parameters that minimize the difference between the source input image and the locally reconstructed image on the encoding side. It uses these parameters to obtain a highly accurate super-resolution image.

The technique for generating high-frequency components focuses on the similarity between image blocks and estimates high-frequency components upon magnification from other blocks to generate a super-resolution image.

[References]
(1) Y. Shishikui and K. Iguchi: "Iterative Adjustment Intra Prediction for High Performance Video Coding," IWAIT2011 (2011)
(2) S. Sekiguchi, A. Minezawa, K. Sugimoto, A. Ichigaya, K. Iguchi and Y. Shishikui: "A novel video coding scheme for Super Hi-Vision," Proc. PCS2010, O3-4, pp. 322-325 (2010)
(3) T. Misu, Y. Matsuo, S. Sakaida, Y. Shishikui and E. Nakasu: "Novel Video Coding Paradigm with Reduction/Restoration Processes," Proc. PCS2010, S2-2, pp. 466-469 (2010)

1.1.5 Satellite transmission technology

We have continued with our studies on Super Hi-Vision (SHV) broadcasting via satellite using the 12- and 21-GHz bands.

■ Transmission systems for next-generation satellite broadcasting

We evaluated the performance of a transmitter incorporating a pre-distortion compensator built in FY2009. The compensator can reduce the transmission power needed for SHV broadcast signals by improving the carrier-to-noise power ratio (C/N) required for multi-level modulation on a satellite channel. It estimates distortion vectors in the satellite channel at the transmitter and adds the opposite vectors to the signal before transmission. We conducted transmission tests on a 12-GHz-band satellite transponder simulator and found that incorporation of the compensator yielded an improvement of 0.4 dB for 16APSK (code rate: 3/4) and 1.3 dB for 32APSK (code rate: 4/5) in the required C/N + output back-off (OBO: the ratio of maximum transmission power for the unmodulated signal to the actual modulated-signal power in the satellite transponder)(1).
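The sketch below shows the basic idea in a memoryless form: measure, per constellation point, the average displacement the channel applies, then transmit each symbol shifted by the opposite vector. The actual compensator's signal-point error estimation over the satellite channel is more involved, and the function names here are ours.

```python
import numpy as np

def estimate_distortion_vectors(tx, rx, constellation):
    """For each ideal constellation point, estimate the mean complex displacement
    the channel applies to it (averaging over many symbols to suppress noise)."""
    return {p: (rx[np.isclose(tx, p)] - p).mean() for p in constellation}

def predistort(symbols, distortion):
    """Shift each symbol by the opposite of the distortion measured for its
    nearest constellation point before transmission."""
    return np.array([s - distortion[min(distortion, key=lambda p: abs(p - s))]
                     for s in symbols])
```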

To improve the required C/N on the receiving side, we began prototyping a receiver conforming to ARIB STD-B44 (Transmission System for Advanced Wide Band Digital Satellite Broadcasting). The receiver incorporates an adaptive equalizer utilizing a pilot signal (Figure 1). To evaluate its performance, we will conduct tests on it over the satellite transponder simulator.

To evaluate the effect of the satellite channel characteristics, we prototyped a 21-GHz-band satellite transponder simulator composed of input and output filters, amplifiers and other components. We will use this simulator to evaluate the wide-band modulators and demodulators.
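As a generic illustration of a pilot-trained adaptive equalizer front end (not the actual structure of the ARIB STD-B44 receiver), the sketch below adapts complex FIR taps with the LMS rule over a known pilot/training sequence; tap count and step size are illustrative.

```python
import numpy as np

def train_lms_equalizer(received, pilots, n_taps=15, mu=0.01):
    """Adapt complex FIR equalizer taps on a known pilot/training sequence.

    received: complex samples at the equalizer input.
    pilots:   the corresponding known transmitted pilot symbols.
    """
    taps = np.zeros(n_taps, dtype=complex)
    buf = np.zeros(n_taps, dtype=complex)
    for r, d in zip(received, pilots):
        buf = np.roll(buf, 1)
        buf[0] = r
        y = np.vdot(taps, buf)          # filter output: taps^H . buf
        e = d - y                       # error against the known pilot
        taps += mu * buf * np.conj(e)   # LMS coefficient update
    return taps
```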

■ 21-GHz-band broadcasting satellite system

We continued with our research on an onboard phased-array antenna that can control the radiation pattern to compensate for rainfall attenuation in the 21-GHz band (21.4-22.0 GHz). The antenna is an array-fed imaging reflector antenna composed of a main reflector with a 2.2 m diameter aperture, a subreflector with a 0.22 m diameter aperture, and 61 feed elements. We studied how to design radiation patterns that could be controlled with only the phase of the feeds in order to minimize broadcast outages due to rainfall over the entire service area. We simulated outages by using radar and rain-gauge combined analysis precipitation data provided by the Japan Meteorological Agency. The simulation showed that outage time over the entire service area could be reduced to approximately 40% of that obtained with a uniform-power radiation pattern using a conventional reflector antenna(2).

For the 21-GHz-band receiver, we developed offset parabola antennas with 45, 60 and 120 cm diameter apertures and measured their radiation patterns. We confirmed that the measured co-polarization radiation pattern satisfies the reference pattern being considered by the ITU-R(3). We also studied the power feed for a dual 12- and 21-GHz-band receiving antenna, assuming that the broadcast satellite's geostationary orbit position would be 110 degrees east longitude. In the future, we will build an antenna based on the results of this study and evaluate its performance.



■ Engineering model of 21-GHz-band broadcasting satellite transponder

We are developing a test model of a satellite transponder, called the engineering model, to evaluate the performance of 21-GHz-band onboard equipment in a space environment. It is composed of a shaped reflector antenna for transmission and reception, a branch filter, a receiver, an input filter, an amplifier, an output filter, and a multiplexer (Figure 2). The shaped reflector antenna is a membrane structure made from tri-axial woven carbon fiber for lightness and surface accuracy. Assuming the band from 21.4 GHz to 22.0 GHz can be divided in two and used for wide-band signal transmissions, the bandwidths of the input and output filters should be approximately 300 MHz. The filters were designed to reduce the group delay deviation, which degrades the transmitted signal. The output filter was designed to attenuate unwanted emissions by 80 dB in the neighboring frequency band (22.21 to 22.5 GHz) used for radio astronomy. A travelling-wave tube (TWT) amplifier with an output power of 130 W was also developed.

We will evaluate the electrical performance of each of these devices and verify their electrical and mechanical performance in a thermal vacuum test that simulates a space environment.

Figure 1. Adaptive equalizer using a pilot signal (LDPC: low-density parity check)
Figure 2. 21-GHz-band broadcast satellite engineering model

[References]
(1) M. Kojima, A. Hashimoto, Y. Suzuki, T. Kimura and S. Tanaka: "Performance Evaluation of Predistortion Transmitter Based on Signal Point Error Estimation over Satellite Channel," IEICE Society Conference, B-3-9 (2010) (in Japanese)
(2) S. Nakazawa, M. Nagasaka, S. Tanaka and K. Shogen: "Onboard Antennas and Simulation of Service Availability for 21-GHz Band Broadcasting Satellite," IEICE Technical Report, Vol. 110, No. 23, AP2010-29, pp. 87-92 (2010)
(3) M. Nagasaka, S. Nakazawa, S. Tanaka and K. Shogen: "A Study on Receiving Antenna Patterns for the 21 GHz-band Broadcasting-satellite Service," IEICE Society Conference, B-1-164 (2010) (in Japanese)

1.1.6 Terrestrial transmission technology

We are studying next-generation digital terrestrial broadcasting systems for delivering large-volume content services such as Super Hi-Vision.

■ High-capacity transmissions for the next generation of terrestrial broadcasting

In FY2010, we developed a dual-polarized MIMO (multiple-input multiple-output) transmission system to transmit two ultra-multi-level OFDM (Orthogonal Frequency Division Multiplexing) signals based on the ISDB-T signal format, using 4096QAM as the carrier modulation scheme. We also conducted laboratory and field tests and evaluated the transmission characteristics of our prototype system. The transmission capacity was about 73 Mbps (PES rate) using 4096QAM with an inner code rate of 3/4 for FEC and dual polarized waves. Transmitter and receiver experimental stations were set up on the STRL premises; they were separated by approximately 100 m and had dual polarized Yagi antennas mounted on top of them (Figure 1).
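The quoted capacity can be sanity-checked with the back-of-envelope calculation below. The OFDM parameters (an ISDB-T Mode 3-like data-carrier count and symbol length, a 1/8 guard interval) and the 188/204 TS framing factor are our assumptions, not values given in this report.

```python
# Rough check of the ~73 Mbps figure for dual-polarized 4096QAM-OFDM.
data_carriers = 4992                  # assumed: 13 segments x 384 data carriers (ISDB-T Mode 3)
bits_per_carrier = 12                 # 4096QAM
inner_code_rate = 3 / 4
symbol_time = 1.008e-3 * (1 + 1 / 8)  # assumed: Mode 3 useful symbol plus 1/8 guard interval
ts_overhead = 188 / 204               # assumed TS packet framing overhead

per_polarization = data_carriers * bits_per_carrier * inner_code_rate / symbol_time
total = 2 * per_polarization * ts_overhead
print(f"{total / 1e6:.1f} Mbps")      # about 73 Mbps
```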



The bit-error rates at various receiver input levels were measured by attenuating the receiver input level and by varying transmission parameters such as the carrier modulation scheme and inner FEC code rate(1) (Figure 2).

To reduce the degradation due to level differences between the polarized waves, we conducted tests using skew polarized waves at ±45 degrees and right/left-handed circularly polarized waves. We found that the required C/N could be improved by approximately 2 dB by using skew or circularly polarized waves instead of horizontally and vertically polarized waves when the difference in levels between the polarized signals was 8 dB.

We also transmitted two ultra-multi-level OFDM signals, each modulated by different data streams, at 1 W from vertically and horizontally polarized transmitter antennas mounted at different locations on the roof of STRL. We measured the transmission characteristics at 27 reception points at distances of 5 km or less around STRL. We obtained error-free transmission with an inner code rate of 3/4 at approximately 53 dBf or greater for 1024QAM and 59 dBf or greater for 4096QAM.

We are also making progress in our work to lower the required C/N by changing the error correction scheme from a concatenation of convolutional and Reed-Solomon coding, as is used in ISDB-T, to a concatenation of low-density parity-check coding and BCH coding.

Figure 1. A view of the field test within the STRL premises
Figure 2. Field test results (bit error rate versus reception power in dBm for 1024QAM, 2048QAM, and 4096QAM)

■ Study of the UHF band for next-generation terrestrial broadcasting

To increase the likelihood that the UHF band will be secured for next-generation terrestrial broadcasting, we conducted computational simulations based on the supposed state of frequency usage after the switchover to all-digital terrestrial broadcasting takes place. We studied frequencies available for new stations and began a study of possible interference from transmitters that use white spaces in the UHF band. The interference model included two types of interference from stations using white space: interference with the broadcast area and interference with reception at relay stations. We also built a UHF-band experimental test station conforming to the ISDB-T transmission format at the Nabeta radio broadcasting station in Aichi Prefecture. This station was used to test limited-area One-Seg use in disaster-stricken areas and is a model for the "Special Whitespace Areas" selected by the "Study team for a new radio-frequency-use vision" of the Ministry of Internal Affairs and Communications.

[References]
(1) S. Asakura, K. Murayama, M. Taguchi, T. Shitomi and K. Shibuya: "Technology for the next generation of digital terrestrial broadcasting - Transmission characteristics of 4096QAM-OFDM -," ITE Technical Report, Vol. 35, No. 10, BCT2011-40, pp. 43-46 (2011) (in Japanese)

1.1.7 Wired transmission technology

We are proceeding with our research on wired transmission technologies using fiber-optic and coaxial television cables for transmitting uncompressed Super Hi-Vision (SHV) signals and distributing SHV to homes.

■ Transmission of uncompressed SHV signals

Uncompressed video signals, unlike compressed video, have minimal delay and no image degradation. They are thus used for contribution links between broadcast stations or from on-site locations to stations. To allow the use of Wide Area Networks (WANs) as contribution links, the SHV signal must be converted into a WAN signal. We have developed a technology to do this conversion and have built optical transmission equipment for handling full-resolution SHV signals.

At the IBC 2010 broadcasting equipment trade show held in The Netherlands in September 2010, we transmitted live, uncompressed, 24 Gbps dual-green SHV signals over optical fiber within the city of Amsterdam (over a distance of approximately 16.5 km).

We are also conducting research on ultra-high-speed networks for transmitting SHV within broadcast stations. As part of a study on a system to accommodate two 72 Gbps full-resolution SHV signals on a 160 Gbps optical LAN signal, we built equipment that can carry a single full-resolution SHV signal on two 40 Gbps signals and carried out laboratory tests on it(1).
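The bit rates quoted in this subsection follow from simple arithmetic, shown below under the assumptions of 60 Hz frames, 12-bit samples, no blanking or ancillary-data overhead, and a dual-green format carrying G1, G2, R, and B each at 3840 × 2160.

```python
def video_gbps(h, v, components, bits=12, fps=60):
    """Raw video payload rate in Gbit/s, ignoring blanking and ancillary data."""
    return h * v * components * bits * fps / 1e9

print(video_gbps(7680, 4320, 3))   # full-resolution SHV (R, G, B): ~71.7 Gbps
print(video_gbps(3840, 2160, 4))   # dual-green SHV (G1, G2, R, B): ~23.9 Gbps
```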



We collaborated with the National Institute of Advanced Industrial Science and Technology and the National Institute of Information and Communications Technology in a demonstration of an optical packet and optical path integrated network. This demonstration successfully transmitted a dual-green SHV signal over an optical path network.

Part of this research was conducted under contract with the New Energy and Industrial Technology Development Organization for the project titled "Development on Next-generation High-efficiency Network Devices Technology."

■ Cable TV transmission of SHV

We are also researching technology for transmitting compressed SHV signals over cable television networks to households. We developed a multiplexing method that divides up a compressed SHV signal and efficiently multiplexes the divided signals on multiple carriers. The system supports Transport Stream (TS) signals of various bit rates, including those of high-capacity MPEG-2 TS, and it also reduces the power consumption of receivers. We built test equipment(2) and exhibited it at the STRL Open House 2010 (Figure 1). Our technology using multiple carriers has the benefits of being easy to introduce into existing facilities and of facilitating early introduction of SHV broadcasting.

Figure 1. Prototype test equipment
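A minimal sketch of the division-and-reassembly idea is given below: TS packets are distributed round-robin over several 6 MHz channel streams and tagged with a sequence number so the receiver can restore their order. The tagging and scheduling here are illustrative only, not the multiplexing format actually developed.

```python
from itertools import cycle

def divide_ts(ts_packets, n_channels):
    """Distribute 188-byte TS packets round-robin over n_channels carrier streams,
    tagging each packet with a sequence number for reassembly."""
    streams = [[] for _ in range(n_channels)]
    for seq, (ch, pkt) in enumerate(zip(cycle(range(n_channels)), ts_packets)):
        streams[ch].append((seq, pkt))
    return streams

def reassemble_ts(streams):
    """Merge the per-channel streams back into the original packet order."""
    return [pkt for _, pkt in sorted(pair for stream in streams for pair in stream)]
```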

[References]
(1) T. Nakatogawa, M. Nakamura and K. Oyamada: "Fundamental experiment of converting a full resolution Super Hi-Vision signal into two OTU3 signals," ITE Winter Annual Convention, 10-7 (2010) (in Japanese)
(2) T. Kusakabe, T. Kurakake and K. Oyamada: "A division and transmission method of various bitrate transport streams over coax cable television," ITE Annual Convention, 15-1 (2010) (in Japanese)

1.1.8 Audio systems providing a strong sense of presence

We are making progress with our R&D and standardization efforts on 22.2 multichannel audio(1) (Figure 1) to be used by Super Hi-Vision (SHV).

Figure 1. Labels and positions of 22.2 multichannel audio channels (upper level: 9 channels, mid level: 10 channels, lower level: 3 channels, LFE: 2 channels; based on SMPTE ST2036-2-2008)

■ 22.2 multichannel audio production systems

We are conducting R&D on easier and more sophisticated production systems for 22.2 multichannel audio. In FY2010, based on the results of research that we contracted to McGill University in Canada, we developed a system to add 3D reverberation to audio and used the system during the production of SHV content. We also developed a man-machine interface to make 3D audio panning operations easier and exhibited a 22.2 multichannel audio production system at IBC 2010 for the first time. To control the virtual distance of the audio image in the 3D audio space, we developed an algorithm that simultaneously changes the frequency response, amplitude, amount of reverb added, and distance of sound sources.

■ Subjective evaluation of 22.2 multichannel audio

To confirm the effectiveness of 22.2 multichannel loudspeaker arrangements, we arranged multiple loudspeakers on the surface of a sphere and conducted a subjective evaluation of the sense of spatial uniformity of sound, especially as regards the location of the speakers in the vertical direction. We found that if the interval between adjacent loudspeakers is less than 45 degrees, there is a good sense of uniformity in the distribution of sound levels in the vertical direction (Figure 2).

Figure 2. Vertical spacing of speakers on a spherical surface and sense of sound connectedness (spatial uniformity degradation metric versus speaker spacing in degrees, for vertical circles of loudspeakers at azimuthal angles of 0, 45, and 90 degrees)

■ 22.2 multichannel sound for households

Households in Japan have little space available for 22.2 multichannel audio when one speaker is used for each channel, so we are researching signal processing that will enable the high sense of presence of 22.2 multichannel audio to be conveyed by fewer speakers. In FY2010, we developed a method for downmixing 22.2 channels to 8 channels while maintaining the pressure and direction of sound at the listening point(2). We also developed a method of reproducing 22.2 multichannel audio with only three forward-facing loudspeakers through use of a head-related transfer function that incorporates the propagation characteristics of sounds reaching both ears from various directions. These advances were exhibited at the STRL Open House 2010. We conducted subjective evaluations of the size of the listening area for the 8-channel reproduction method and found that up to three viewers can experience the full sense of presence at the same time. For the three-loudspeaker reproduction method, we studied methods to suppress the high-frequency noise produced by the signal processing.

We also prototyped a system to play back the forward channels of 22.2 multichannel audio using a loudspeaker array with several small loudspeaker units arranged on the periphery of a flat-panel display.
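As a crude stand-in for the published conversion method(2), the sketch below computes, for each source channel direction, gains over the output speakers that approximately preserve the summed gain (a pressure-like quantity) and the gain-weighted direction at the listening point via least squares. The real method matches sound pressure and particle-velocity direction and is considerably more careful; the matrix setup here is our illustrative assumption.

```python
import numpy as np

def downmix_matrix(source_dirs, speaker_dirs):
    """source_dirs: (22, 3) unit vectors toward the 22.2 full-band channels.
    speaker_dirs: (8, 3) unit vectors toward the reproduction loudspeakers.
    Returns an (8, 22) gain matrix."""
    U = np.asarray(speaker_dirs, dtype=float).T            # 3 x 8 direction matrix
    A = np.vstack([U, np.ones((1, U.shape[1]))])           # direction rows + "pressure" row
    gains = []
    for d in np.asarray(source_dirs, dtype=float):
        b = np.concatenate([d, [1.0]])                     # target direction and unit gain
        g, *_ = np.linalg.lstsq(A, b, rcond=None)          # least-squares speaker gains
        gains.append(g)
    return np.array(gains).T
```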



We have begun research on a format for converting 22.2 multichannel audio into a down-mixed 8-channel signal and a signal for expanding the listening area, encoding and transmitting those signals, and decoding the signals and reproducing 22.2 multichannel audio at the receiver side. In FY2010, we reduced the energy of the listening-area-expansion signal by 10 dB or more, which means that the information can be compressed by reducing the bits allocated to the listening-area-expansion signal.

■ Standardization

We continued to promote 22.2 multichannel audio at various standards bodies. This year, the IEC 62574 standard for general channel labels and channel assignment, including 22.2 multichannel audio, was approved. This should facilitate the development of digital audio interfaces and other devices, including I/O devices. The specification is compatible with that of 22.2 multichannel audio in the SMPTE ST2036-2-2008 standard. At SMPTE, we are interested in standardizing metadata for describing multichannel audio in the Material eXchange Format (MXF). At ITU-R, to support the 3D multichannel audio studio specifications that we proposed in FY2009, we added proposals for 3D multichannel audio format requirements and contributed to the rapporteur documents regarding the latest trends in 22.2 multichannel audio.

■ Acoustical cognitive science

In research contracted by the National Institute of Information and Communications Technology, titled "R&D on Ultra-Realistic Communication Technology with Innovative 3D Video Technology: Recognition and transmission of sensitivity information", we studied acoustical factors affecting the sense of presence and the relationship between the sense of presence and depth of emotion ("Kandoh"). We found that there is a strong correlation between the sense of presence and the impressions of "substance and motion" and that this correlation is strengthened when sound sources are moved closer. We also found that a large change in the sound image width results in a change in the Kandoh evaluation value.

[References]
(1) K. Hamasaki, T. Nishiguchi, R. Okumura, Y. Nakayama and A. Ando: "A 22.2 Multichannel Sound System for Ultrahigh-Definition TV (UHDTV)," SMPTE Motion Imaging Journal, Vol. 117, No. 3, pp. 40-49 (2008)
(2) A. Ando: "Conversion of reproduced sound field based on the coincidence of sound pressure and direction of particle velocity," Proc. ICA 2010 (2010)

1.2 Three-dimensional television

1.2.1 Integral 3D television

We are researching a form of spatial imaging technology called integral 3D television. This form of television promises a heightened sense of presence without eye strain. Integral photography (IP) is a system able to reproduce three-dimensional images by using an array of small lenses for capture and display. Integral photography does not require the viewer to wear special glasses, and it can display natural 3D images that change with the viewing position. The drawback of IP is that it requires a huge amount of data to produce good-quality images.

We developed integral television equipment using full-resolution Super Hi-Vision (SHV) images in FY2009(1), and in FY2010 we improved the quality of the 3D images by using pixel-offset methods with the full-resolution SHV green element (Figure 1). To capture images, we offset the positions of the two green imaging elements (G1, G2) by half a pixel width in the vertical and horizontal directions. For display, we use a single green display element together with an optical element able to shift the optical path by half a pixel width in the horizontal and vertical directions, and we compose the images by controlling the shift in time. This increases the resolution of the elemental images (images captured using the lens array), which in turn improves the resolution of the reproduced 3D images in the depth direction. In the future, we will investigate the resolution characteristics of this equipment.

Figure 1. Integral 3D television incorporating the Super Hi-Vision pixel-offset method (SHV camera with a gradient-index-lens capture array; R, G1, G2 and B signals carried over HD-SDI; time-division pixel-offset SHV projector with diffusion screen and display lens array)
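The sketch below illustrates the capture side of this pixel-offset idea: two green images sampled on grids offset diagonally by half a pixel are merged onto a grid of twice the linear density, and the unsampled positions are filled from their horizontal and vertical neighbours. The actual signal processing in the equipment is more sophisticated than this simple interleave-and-fill.

```python
import numpy as np

def merge_pixel_offset(g1, g2):
    """Merge two green images whose sampling grids are offset diagonally by
    half a pixel into one image on a grid of twice the linear density.
    The two captures fill a quincunx pattern; remaining positions are filled
    by averaging the available horizontal/vertical neighbours."""
    h, w = g1.shape
    hi = np.zeros((2 * h, 2 * w))
    known = np.zeros_like(hi, dtype=bool)
    hi[0::2, 0::2], known[0::2, 0::2] = g1, True
    hi[1::2, 1::2], known[1::2, 1::2] = g2, True
    p, k = np.pad(hi, 1), np.pad(known, 1)
    neigh_sum = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    neigh_cnt = k[:-2, 1:-1] + k[2:, 1:-1] + k[1:-1, :-2] + k[1:-1, 2:]
    return np.where(known, hi, neigh_sum / np.maximum(neigh_cnt, 1))
```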

We have also developed methods for detecting and adjusting the amount of pixel offset by using aliasing in the image produced by the lens array(2)(3). With these methods, pixel offsets can be adjusted with high precision, either visually or using simple measuring equipment, with the lens array in place.

Part of this research was conducted under contract with the National Institute of Information and Communications Technology (NICT) for the project titled "R&D on Multi-Parallel & Spatial Imaging 3-Dimensional Television System".

We also participated in R&D on proving test systems for the expansion of digital museums, with Tokyo University, under contract with the Ministry of Education, Culture, Sports, Science and Technology. We investigated the potential for using this method for exhibiting museum artifacts on 3D displays.



Finally, we continued our collaborative research with NICT in the area of electronic holography. We were able to reproduce full-color holographic images of real objects by converting images captured with SHV and integral photography into holograms.

[References]
(1) J. Arai, F. Okano, M. Kawakita, M. Okui, Y. Haino, M. Yoshimura, M. Furuya and M. Sato: "Integral Three-Dimensional Television Using a 33-Megapixel Imaging System," IEEE Journal of Display Technology, Vol. 6, No. 10, pp. 422-430 (2010)
(2) M. Kawakita, S. Sasaki, J. Arai, M. Okui, F. Okano, Y. Haino, M. Yoshimura and M. Sato: "Projection-type integral 3-D display with distortion compensation," J. SID, 18/9, pp. 668-677 (2010)
(3) H. Sasaki, M. Kawakita, K. Masaoka, J. Arai, M. Okui, F. Okano, Y. Haino, M. Yoshimura and M. Sato: "Pixel-offset position detection using lens array for integral three-dimensional display," Proc. of SPIE, 7863, 71 (2011)

1.2.2 Generating 3D content from multi-viewpoint images

We are developing technology to capture subjects with multiple cameras and generate integral 3D images from the resulting multi-viewpoint images. This method of generating 3D images works on subjects that are difficult to capture with the normal integral 3D method. In FY2010, we studied methods of setting up multi-viewpoint robotic cameras, generating shape models from multi-viewpoint images, and converting these shape models into integral 3D images.

We built multi-viewpoint robotic cameras to take 3D images of distant objects with zoom lenses(1). Figure 1 shows a group of cameras mounted on a motion control platform. With this system, the orientation of the cameras can be controlled in coordination with the movement of a single master camera operated by a person.

Figure 1. Multi-viewpoint robotic camera

We studied the phase-only correlation method and the belief propagation method as ways of generating shape models from multi-viewpoint images. The precision of a shape model built with the phase-only correlation method was improved by using region partitioning. For the belief propagation method, we devised a hierarchical method for reducing the effect of video level differences among the cameras. To convert the shape models into integral 3D images, we developed a procedure using oblique projections (parallel light rays are projected obliquely onto the projection plane)(2). Oblique projections do not cause the geometric distortions often accompanying projections, and hence they enable conversions to be done more efficiently. This processing can be sped up by using a Graphics Processing Unit (GPU).
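As an illustration of the phase-only correlation principle used above for matching between viewpoints, the sketch below computes the correlation surface of two image blocks and reads off the integer shift at its peak. The subpixel refinement and region partitioning used in the actual shape-model generation are not shown.

```python
import numpy as np

def phase_only_correlation(block_a, block_b):
    """Phase-only correlation of two equal-sized blocks. The peak location of the
    returned surface gives the (row, col) shift of block_b relative to block_a."""
    fa, fb = np.fft.fft2(block_a), np.fft.fft2(block_b)
    cross = fa * np.conj(fb)
    poc = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    shift = tuple(p if p <= s // 2 else p - s for p, s in zip(peak, poc.shape))
    return poc, shift
```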

Part of this research was conducted under contract with the National Institute of Information and Communications Technology (NICT) for the project titled "R&D on Ultra-Realistic Communication Technology with Innovative 3D Video Technology".

[References]
(1) K. Ikeya, K. Hisatomi, M. Katayama and Y. Iwadate: "Control Method for Multiple Robot Cameras," ITE Winter Annual Convention, 8-5 (2010) (in Japanese)
(2) Y. Iwadate and M. Katayama: "A generation method of integral 3D Image by oblique projection," ITE Technical Report, Vol. 34, No. 43, 3DIT2010-66, pp. 17-20 (2010) (in Japanese)

