Three-dimensional imaging technology - NHK | NHK STRL ANNUAL REPORT 2014

18 | NHK STRL ANNUAL REPORT 2014

2 Three-dimensional imaging technology

With the goal of developing a new form of broadcasting delivering a strong sense of presence, we aim to develop a more natural and viewable three-dimensional television that does not require special glasses. To this end, we are researching the integral method, electronic holography devices, and the generation of 3D images from multi-viewpoint images.

In our research on the capture and display technologies of the integral method, we are studying ways to increase the number of pixels and expand the viewing zone. In FY 2014, we prototyped capture equipment that uses an array of cameras to obtain a wider viewing zone. We also studied the effect of lens arrays placed next to the image sensor. For the display, we prototyped equipment consisting of multiple liquid crystal panels.

In our study on the reproduction quality of integral 3D images, we conducted experiments to measure the ocular accommodation response and binocular vergence with respect to 3D images concurrently and confirmed that both measures conform to the depth of the 3D images. We also prototyped stereoscopic 3D display equipment that reproduces motion parallax in order to simulate the relation between the display parameters of the integral method and the resolution of the image. We conducted subjective evaluation experiments on motion parallax and depth discrimination using this equipment and showed the validity of the simulation.

The MPEG Free-viewpoint Television (FTV) ad hoc group started its activities in 2013, and we have been participating in its activities toward standardization. In FY 2014, we attended MPEG meetings and conducted coding experiments on applying the Multi-View High Efficiency Video Coding (MV-HEVC) scheme to integral 3D images.

In our research on electronic holography, we studied spatial light modulators driven by spin-transfer switching. In FY 2014, we fabricated a light modulation element using tunnel magneto-resistance on an active-matrix driving element we had prototyped and verified its operation. We also began research on a fast beam steering device that controls the direction of light beams by using electro-optic materials. In particular, we made progress in computer simulations and on the means of fabricating the device.

We have been studying technologies to generate 3D images from multi-viewpoint images as a way of generating integral 3D images from images taken by multiple cameras and researching their application to video production. In FY 2014, we prototyped an algorithm to generate integral 3D images from images captured by multiple cameras that can acquire color and infrared images simultaneously. We also improved the functionality and operability of the multi-viewpoint robotic camera and used it in actual broadcasting productions.

2.1 Integral 3D television

We are researching a form of spatial imaging technology, called the integral method, as a way of creating a more natural and viewable 3D television that does not require special glasses. The integral method reproduces 3D images in the air by using arrays of tiny lenses for capture and display. In FY 2014, we studied ways to improve the quality of 3D images and their reproduction.

■ Improvement of 3D image quality

To generate high-quality integral 3D images, it is necessary to collect a large amount of light ray information from various directions. That requires capture and display equipment with a huge number of pixels and a lens array with many micro lenses. We had previously prototyped capture and display equipment for 3D images using a video device for 8K Super Hi-Vision, but the quality of the 3D images that can be reproduced by a single video device has reached its limit. That led to our study on ways to combine multiple cameras and display devices so that the number of pixels for the entire video system can be increased.

In FY 2014, we prototyped capture equipment using seven Hi-Vision cameras (Figure 1). The equipment succeeded in capturing video of a 3D image with a viewing zone (the area in which the 3D image can be viewed) larger by a factor of 2.5 in both the horizontal and vertical directions compared with conventional single-camera equipment(1). We also prototyped capture equipment with two small lens arrays adhering to the image sensor. The equipment can capture high-quality images by synthesizing the information obtained from each lens array.

Figure 1. Integral 3D capture equipment using seven Hi-Vision cameras (lens array, condenser lens, camera array)

For the display, we developed integral 3D display equipment with more pixels by using four liquid crystal panels. We found that we could expand the area in which the 3D image can be displayed by approximately four times that of conventional equipment by magnifying the image of each liquid crystal panel using a lens array and a concave lens and optically connecting the images seamlessly(2) (Figure 2).
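The viewing-zone figures above follow from the basic lens-array geometry: each elemental lens of pitch p and focal length f accepts rays within roughly 2·arctan(p/2f), so combining devices enlarges the usable angular range. A minimal sketch of this relation, using hypothetical lens parameters rather than the actual equipment values:

```python
import math

def viewing_zone_angle_deg(lens_pitch_mm: float, focal_length_mm: float) -> float:
    """Full viewing-zone angle of an integral display, using the standard
    approximation: omega = 2 * arctan(pitch / (2 * focal length))."""
    return math.degrees(2.0 * math.atan(lens_pitch_mm / (2.0 * focal_length_mm)))

# Hypothetical lens-array parameters (not the actual NHK equipment values).
single = viewing_zone_angle_deg(lens_pitch_mm=1.0, focal_length_mm=3.0)
print(f"single-device viewing zone: {single:.1f} deg")

# Combining devices enlarges the synthesized aperture; a 2.5x wider zone
# corresponds to a 2.5x larger effective tan(half-angle).
combined = math.degrees(2.0 * math.atan(2.5 * math.tan(math.radians(single / 2.0))))
print(f"2.5x-expanded viewing zone: {combined:.1f} deg")
```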

Part of this research was conducted under contract by the Ministry of Internal Affairs and Communications for its project titled, “R&D on systems for capturing spatial information using multiple image sensors.”

■ Reproduction quality of 3D images

We have been studying ways of measuring and improving the quality of 3D images reproduced using the integral method since FY 2012.

In the stereoscopic 3D method, the vergence position (where both eyes' lines of sight intersect when observing an object) conforms to the 3D image, but the ocular accommodation (or just "accommodation") position does not because it is fixed on the display. The method is considered to strain viewers' eyes. Meanwhile, the integral method forms 3D images in the air. Theoretically, both the vergence and accommodation positions conform to the 3D image. The method is therefore said to be able to display 3D images that are as natural as viewing the actual object. To verify this feature, we previously measured the accommodation response when viewing integral 3D images and found that the accommodation position conforms to the depth of the 3D image. In FY 2014, we measured the accommodation response and the vergence response simultaneously and compared them with those for observing the actual object. The results showed that both the accommodation and vergence positions in viewing an integral 3D image conform to the depth of the 3D image and that the response characteristics of accommodation and vergence were closer to those for observing the actual object than could be produced by the stereoscopic 3D method.
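The vergence-accommodation conflict described here can be put in numbers. A minimal sketch, assuming a 64 mm interpupillary distance and hypothetical viewing distances (none of these values come from the experiments above):

```python
import math

IPD_M = 0.064  # interpupillary distance, assumed 64 mm

def vergence_angle_deg(depth_m: float) -> float:
    """Vergence angle when both eyes fixate a point at the given depth."""
    return math.degrees(2.0 * math.atan(IPD_M / (2.0 * depth_m)))

def accommodation_diopters(focus_m: float) -> float:
    """Accommodation demand is the reciprocal of the focal distance."""
    return 1.0 / focus_m

screen_m, object_m = 1.0, 0.5  # screen at 1 m, 3D image popping out to 0.5 m

# Stereoscopic 3D: vergence follows the 3D image, accommodation stays on the screen.
stereo_conflict = accommodation_diopters(screen_m) - accommodation_diopters(object_m)
# Integral 3D (ideally): both follow the 3D image formed in the air, so no conflict.
integral_conflict = accommodation_diopters(object_m) - accommodation_diopters(object_m)

print(f"vergence at 0.5 m: {vergence_angle_deg(object_m):.2f} deg")
print(f"stereoscopic vergence-accommodation conflict: {stereo_conflict:+.1f} D")
print(f"integral method conflict (ideal): {integral_conflict:+.1f} D")
```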

A notable feature of the integral method is the ability to reproduce 3D images according to the viewer's viewpoint position, that is, to reproduce motion parallax. In FY 2014, we prototyped stereoscopic 3D display equipment that can reproduce motion parallax while tracking the viewer's position in order to verify the effect of motion parallax on the sensation of depth(3). Using the equipment, we experimentally evaluated how motion parallax affects the accuracy of discriminating the depth of an object (the depth discrimination accuracy). The results indicated that the depth discrimination accuracy could be improved by using motion parallax. We plan to continue with various experiments using this equipment to look into the relation between system parameters, such as the pitch and focal length of lenses in the array, and reproduction quality.
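The depth cue that motion parallax provides can be quantified as the relative angular motion between points at different depths when the viewpoint translates. A small-angle sketch with hypothetical viewing values (not the experimental parameters used above):

```python
import math

def relative_motion_parallax_arcmin(head_shift_m: float, z1_m: float, z2_m: float) -> float:
    """Relative angular motion (arcmin) between two points at depths z1 and z2
    when the viewpoint translates laterally by head_shift_m (small-angle approx.):
    delta = t * (1/z1 - 1/z2)."""
    radians = head_shift_m * (1.0 / z1_m - 1.0 / z2_m)
    return math.degrees(radians) * 60.0

# Hypothetical viewing setup: 5 cm head movement, objects at 1.00 m and 1.05 m.
p = relative_motion_parallax_arcmin(0.05, 1.00, 1.05)
print(f"relative parallax: {p:.1f} arcmin")
```

When this relative parallax exceeds the observer's detection threshold, the depth order of the two points becomes discriminable, which is the mechanism behind the improved depth discrimination accuracy reported above.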

We began a study on coding technologies for the integral method in FY 2013. In FY 2014, we conducted a basic study on compression efficiency by applying Multi-View High Efficiency Video Coding (MV-HEVC) to the integral method(4).
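One common way to prepare an integral image for a multiview codec such as MV-HEVC is to regroup the pixels behind each elemental lens into sub-images, one per ray direction, and treat those as the codec's views. Whether this matches the coding experiments above is not stated, so take it as an illustrative preprocessing sketch:

```python
import numpy as np

def integral_to_views(img: np.ndarray, ph: int, pw: int) -> np.ndarray:
    """Regroup an integral image of shape (H*ph, W*pw) into ph*pw sub-images of
    size (H, W). Pixel (i, j) under every lens shares a ray direction, so
    gathering the (i, j) pixel of all elemental images yields one 'view'."""
    H, W = img.shape[0] // ph, img.shape[1] // pw
    # views[i, j] is the sub-image for intra-lens pixel position (i, j)
    return img.reshape(H, ph, W, pw).transpose(1, 3, 0, 2)

# Synthetic integral image: 4x5 lenses, 3x3 pixels behind each lens.
rng = np.random.default_rng(0)
integral = rng.integers(0, 256, size=(4 * 3, 5 * 3), dtype=np.uint8)
views = integral_to_views(integral, ph=3, pw=3)
print(views.shape)  # 9 views of 4x5 pixels each
assert np.array_equal(views[1, 2], integral[1::3, 2::3])
```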

[References]
(1) M. Miura, N. Okaichi, J. Arai and T. Mishina: "Integral three-dimensional capture system with enhanced viewing angle by using camera array," Proc. SPIE, Vol. 9391, pp. 939106.1-939106.7 (2015)
(2) N. Okaichi, M. Miura, J. Arai and T. Mishina: "Integral 3D display using multiple LCDs," Proc. SPIE, Vol. 9391, pp. 939114.1-939114.6 (2015)
(3) M. Katayama, T. Mishina and Y. Iwadate: "Binocular simulation system for integral images with head tracking," ITE Winter Annual Conference, 5-4 (2014) (in Japanese)
(4) K. Hara, J. Arai, T. Mishina and Y. Iwadate: "A study on coded image quality of integral three-dimensional image using MV-HEVC," ITE Winter Annual Conference, 5-5 (2014) (in Japanese)

2.2 Display devices

■ Spatial light modulator driven by spin-transfer switching

We are researching electronic holography to realize a spatial imaging form of three-dimensional television that shows natural 3D images. Displaying 3D images with a wide viewing zone requires a spatial light modulator (SLM) having a very small pixel pitch, an extremely large number of pixels, and a high driving speed. We are developing a spin-transfer SLM (spin-SLM) with minute pixels less than 1 μm in size(1). The spin-SLM can modulate light by using the magneto-optical Kerr effect, in which the polarization plane of reflected light rotates according to the magnetization direction of the magnetic materials in the pixel. The magnetization direction is controlled by the direction of the current in the pixel; this is called spin-transfer switching.
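The white/black operation of such a pixel comes from passing the Kerr-rotated reflection through a polarizing analyzer (Malus' law): the two magnetization states rotate the polarization plane in opposite directions, giving two intensity levels. A toy numeric sketch with hypothetical rotation and analyzer angles, not measured device values:

```python
import math

def reflected_intensity(analyzer_deg: float, kerr_deg: float) -> float:
    """Relative intensity behind an analyzer offset analyzer_deg from extinction,
    after the Kerr effect rotates the polarization plane by +/-kerr_deg
    (Malus' law: I = sin^2(analyzer + kerr))."""
    return math.sin(math.radians(analyzer_deg + kerr_deg)) ** 2

# Hypothetical angles: ~0.3 deg Kerr rotation, analyzer 1.0 deg from extinction.
theta_k = 0.3
bright = reflected_intensity(1.0, +theta_k)  # magnetization "up"   -> brighter pixel
dark = reflected_intensity(1.0, -theta_k)    # magnetization "down" -> darker pixel
print(f"on/off intensity ratio: {bright / dark:.2f}")
```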

In FY 2014, we designed and fabricated a controller that manipulates the pixels of the SLM that we prototyped in FY 2013. The SLM has a metal oxide semiconductor (MOS) transistor unit for each pixel, which is composed of a source, a gate, and a drain. The drain is connected with a light modulation unit. The light modulation unit uses a tunnel magneto-resistance (TMR) light modulation element(2) to enable low-current operation. An electrode that is transparent to the input and output light is placed in the upper part of the light modulation element (Figure 1). The TMR light modulation element consists of three layers: a pinned layer made of a multi-layer film of terbium-iron-cobalt (Tb-Fe-Co) and cobalt-iron (Co-Fe) alloys; an insulating layer of magnesium oxide (MgO) film; and a light modulation layer of gadolinium-iron (Gd-Fe) alloy film. We found that the controller could switch the magnetization direction of an arbitrary pixel using external electric signals and display a white/black pattern on the pixel. This was the first successful demonstration of a spin-SLM with active-matrix (AM) driving. We also designed and prototyped an AM driving circuit for a spin-SLM with smaller pixels.
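The row-by-row write sequence of active-matrix driving can be sketched as a toy simulation; the class and its behavior below are purely illustrative, not a model of the fabricated circuit:

```python
import numpy as np

class SpinSLMModel:
    """Toy model of active-matrix driving: a gate line selects one row at a
    time, and the sign of the current on each source line writes that pixel's
    magnetization (+1/-1) by spin-transfer switching."""

    def __init__(self, rows: int, cols: int):
        self.magnetization = np.ones((rows, cols), dtype=int)

    def write_row(self, row: int, currents) -> None:
        # Only the gated row changes; unselected rows keep their state.
        self.magnetization[row] = np.sign(currents)

    def write_frame(self, pattern: np.ndarray) -> None:
        # Row-by-row scan, as an AM driving controller would perform it.
        for r in range(pattern.shape[0]):
            self.write_row(r, pattern[r])

# Write a white/black checkerboard to a 4x4 pixel array.
r, c = np.indices((4, 4))
checker = np.where((r + c) % 2 == 0, 1, -1)
slm = SpinSLMModel(4, 4)
slm.write_frame(checker)
print(slm.magnetization)
```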

We fabricated an ultra-high-density magnetic hologram (with a 1 μm pixel pitch and 10k×10k pixels) using patterned magneto-optical materials to verify the feasibility of displaying 3D images with this technology. We confirmed that the 3D image could be reproduced with a viewing-zone angle of 37 degrees and that the reproduced image could be turned on and off by applying an external magnetic field to the hologram.
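The 37-degree figure is consistent with the diffraction limit of a pixelated hologram, whose full viewing-zone angle is 2·arcsin(λ/2p) for pixel pitch p, if red illumination around 633 nm is assumed (the wavelength is our assumption, not stated above):

```python
import math

def hologram_viewing_zone_deg(wavelength_nm: float, pixel_pitch_um: float) -> float:
    """Full viewing-zone angle 2*arcsin(lambda / (2 * pitch)), set by the
    maximum diffraction angle of a pixelated hologram."""
    ratio = (wavelength_nm * 1e-9) / (2.0 * pixel_pitch_um * 1e-6)
    return math.degrees(2.0 * math.asin(ratio))

# 633 nm illumination is an assumption for this check, not a figure from the report.
angle = hologram_viewing_zone_deg(633.0, 1.0)
print(f"{angle:.1f} deg")  # close to the 37 degrees reported above
```

Halving the pixel pitch roughly doubles the achievable viewing zone, which is why the spin-SLM targets sub-micron pixels.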

Figure 2. Integral 3D image reproduced using four liquid crystal panels (the dotted line shows the area of an image displayed with the conventional equipment)

This research was supported by the National Institute of Information and Communications Technology (NICT) as part of the project titled "R&D on Ultra-Realistic Communication Technology with Innovative 3D Video technology."

■ Beam steering device

For a future integral 3D display without a lens array that would have much higher performance than one with a lens array, we are developing a new beam steering device that can flexibly control the direction and shape of light beams. We previously verified through simulations and experiments on a prototype device that sub-micron-sized dielectric structures can deflect a light beam in a designated direction(3). In FY 2014, we began a study on optical waveguides using electro-optic materials, whose refractive index changes when an external voltage is applied, in order to make an optical device capable of actively controlling the direction and shape of light beams. We examined two kinds of devices: a slab optical waveguide that confines light in a plane and a channel optical waveguide that confines light in a micro space. We developed a light wave propagation simulator to analyze the operation of the optical waveguides and quantitatively analyzed the phase shift of light within the waveguide when a voltage is applied. We simulated the characteristics of the deflection angle and beam spread angle of output light beams when multiple channel optical waveguides are configured in an array. We also evaluated the characteristics of the fabricated slab optical waveguide.
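The deflection produced by an array of phase-shifted waveguides follows the standard optical phased-array relation sin θ = Δφ·λ/(2πd), where Δφ is the phase step between adjacent channels and d the channel pitch. A sketch with hypothetical array parameters (the channel count, pitch, and wavelength below are illustrative, not the fabricated device's values):

```python
import math

def steering_angle_deg(delta_phi_rad: float, wavelength_um: float, pitch_um: float) -> float:
    """Beam deflection of a waveguide phased array:
    sin(theta) = delta_phi * lambda / (2 * pi * d)."""
    return math.degrees(math.asin(delta_phi_rad * wavelength_um / (2.0 * math.pi * pitch_um)))

def beam_divergence_deg(wavelength_um: float, pitch_um: float, num_channels: int) -> float:
    """Approximate full divergence of the emitted beam: lambda / (N * d)."""
    return math.degrees(wavelength_um / (num_channels * pitch_um))

# Hypothetical array: 64 channels, 2 um pitch, 0.633 um light, pi/4 phase step.
print(f"deflection: {steering_angle_deg(math.pi / 4, 0.633, 2.0):.2f} deg")
print(f"divergence: {beam_divergence_deg(0.633, 2.0, 64):.2f} deg")
```

Applying a voltage to the electro-optic material changes Δφ continuously, which is what allows the output direction to be steered without any mechanical parts.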

[References]
(1) K. Aoshima, K. Machida, D. Kato, T. Mishina, K. Wada, Y. Cai, H. Kinjo, K. Kuga, H. Kikuchi, T. Ishibashi and N. Shimidzu: "A Magneto-Optical Spatial Light Modulator Driven by Spin Transfer Switching for 3D Holography Applications," J. Display Technology, Vol. 11, No. 2, pp. 129-135 (2015)
(2) H. Kinjo, K. Machida, K. Matsui, K. Aoshima, D. Kato, K. Kuga, H. Kikuchi and N. Shimidzu: "Low-current-density Spin-transfer Switching in Gd22Fe78-MgO Magnetic Tunnel Junction," J. Appl. Phys., Vol. 115, pp. 203903.1-203903.3 (2014)
(3) Y. Motoyama, Y. Hirano, K. Tanaka, N. Saito, H. Kikuchi and N. Shimidzu: "Directional Control of Light-Emitting-Device Emission Via Sub-micron Dielectric Structures," IEEE Photonics Conf. 2014, pp. 160-161 (2014)

2.3 Generating 3D content from multi-viewpoint images

We are studying ways to acquire integral 3D images of objects that are difficult to capture with optical equipment, such as very distant or very large objects.

In FY 2013, we captured objects with infrared dot patterns projected on them by using a camera array consisting of two infrared cameras and two color cameras. In FY 2014, we made an infrared color camera that can capture the infrared image and color image simultaneously by separating incident light with a half mirror and guiding it to two image sensors, one for infrared images and one for color images. This led to a more compact camera array configuration with two infrared color cameras and an infrared dot projector (Figure 1). While widening the distance between the infrared color cameras improves the resolution in the depth direction, video distortion caused by the bigger difference in the viewpoints reduces the accuracy of depth estimation. We therefore studied ways to prevent such a reduction in estimation accuracy(1).
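The baseline trade-off described above follows from the standard stereo relations Z = fB/d and ΔZ ≈ Z²·Δd/(fB): widening the baseline B shrinks the depth step per unit of disparity, at the cost of larger viewpoint differences between the two images. A sketch with hypothetical camera parameters (not the actual camera-array values):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_resolution(focal_px: float, baseline_m: float, z_m: float,
                     disparity_step_px: float = 1.0) -> float:
    """Depth change for one disparity step: dZ ~ Z^2 * dd / (f * B).
    A wider baseline B shrinks this value, i.e., improves depth resolution."""
    return z_m ** 2 * disparity_step_px / (focal_px * baseline_m)

# Hypothetical parameters: 1500 px focal length, object at 3 m.
f_px = 1500.0
for baseline in (0.1, 0.3):
    step_cm = depth_resolution(f_px, baseline, 3.0) * 100.0
    print(f"B={baseline} m: depth step at 3 m = {step_cm:.1f} cm")
```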

We developed a multi-viewpoint robotic camera with enhanced practicality for broadcasting(2) (Figure 2). Operability was improved by reducing the size of the cameras and the number of cables, the processing time of captured images was shortened, and the quality of the multi-viewpoint images was improved. We also devised a new image presentation technique by which the operator can easily choose the camera that best captures the movements of subjects such as the players and the ball in sports scenes(3). We verified the operability and effectiveness of the multi-viewpoint robotic camera and the image presentation technique in a shooting test conducted at handball games of the Inter-High School Championships in August and in the production of the program ISU Grand Prix of Figure Skating 2014/2015, NHK Trophy, in November.

[References]
(1) K. Hisatomi, M. Kano, K. Ikeya, M. Katayama, T. Mishina and K. Aizawa: "Depth Estimation Method Using an Infrared Dot Image Projector and a Stereo Pair of Infrared Color Cameras," ITE Technical Report, Vol. 38, No. 51, pp. 17-20 (2014) (in Japanese)
(2) K. Ikeya, K. Hisatomi, M. Katayama, T. Mishina and Y. Iwadate: "Bullet Time Using Multi-Viewpoint Robotic Camera System," CVMP '14: Proceedings of the 11th European Conference on Visual Media Production, Article No. 1 (2014)
(3) K. Ikeya, K. Hisatomi, M. Katayama, T. Mishina and Y. Iwadate: "Bullet Time Corresponding to the View-Point Switching in Multi-Frame," ITE Winter Annual Conference, 9-1 (2014) (in Japanese)

Figure 1. Infrared color camera array

Figure 2. Multi-viewpoint robotic camera

Figure 1. Schematic illustration of the AM driving TMR spin-SLM structure (labeled parts: pixel, transparent electrode, insulators 1 and 2, silicon substrate, source, gate, drain, transistor unit, and light modulation unit consisting of pinned layer, insulating layer, and light modulation layer; arrows indicate the flow of electrons and the magnetization direction)
