Page 1: PAPER Special Issue on Knowledge, Information and Creativity …ist.mns.kyutech.ac.jp/miura/papers/ieice2008-awaretable.pdf · 2018-09-07 · IEICE TRANS. INF. & SYST., VOL.E86–D,

IEICE TRANS. INF. & SYST., VOL.E86–D, NO.5 MAY 2003

PAPER Special Issue on Knowledge, Information and Creativity Support System

AwareTable: A Tabletop System with Transparency-Controllable Glass for Augmenting Card-based Tasks

Motoki MIURA †a) and Susumu KUNIFUJI †b), Members

SUMMARY Most conventional tabletop systems control virtual data using tangible objects such as “phicons.” Although this approach is versatile and enables full use of the horizontal tabletop display, there is another approach: naturally extending conventional object-handling tasks commonly performed on the table. To support such conventional tasks in the real world, we developed AwareTable, a tabletop system that augments card handling. We employed a sheet of transparency-controllable glass for the tabletop to improve the performance of both recognizing and tracking the locations of tabletop objects and displaying projected images on the surface. We describe the architecture of the tabletop system and its design criteria. Because of the simplicity and flexibility of its configuration, this tabletop system is applicable to tasks with a large number of paper cards, such as the KJ method. We confirmed the efficacy of the tabletop configuration through preliminary operation tests.

1. Introduction

Many studies for developing tabletop interfaces have been conducted. In general, the term tabletop indicates a relatively large horizontal display with supporting legs that allows several users around the table to work collaboratively. The advantages of the tabletop interface are as follows: (1) multiple users can participate on equal terms, and (2) users can place small objects on the tabletop and interact with both the physical and virtual objects displayed on the surface. These characteristics of tabletop interfaces promise improved cooperation between users, and have encouraged us to continue research on the tabletop interface.

The concept of a tangible user interface (TUI) [1], along with controlling virtual objects using tangible objects such as “phicons,” is a promising approach. However, we believe that these concepts and technologies can be gradually and naturally applied to increase the number of conventional tasks the table can support, in terms of ubiquitous computing. Since some users may not be accustomed to handling digital objects, they may prefer to manipulate physical objects. With this in mind, we chose a knowledge-creation activity that uses a set of paper cards, as in the KJ method, and developed a tabletop system, AwareTable, that incorporates the manipulation of physical objects.

Our tabletop system is based on the concept of “gradually extending conventional tasks.” The “gradually extending” concept means that the system intermittently extends support when it is required. AwareTable has two main functions: one is tracking objects on the surface by scanning the tabletop, and the other is displaying additional data. Our aim is to support card-handling activities without any devices attached to the cards or the users’ hands.

Manuscript received January 22, 2007.
Manuscript revised April 27, 2007.
Final manuscript received 0, 0.

†School of Knowledge Science, Japan Advanced Institute of Science and Technology
a) E-mail: [email protected]
b) E-mail: [email protected]

2. AwareTable

In this section, we describe the functions and design criteria of the proposed tabletop interface.

2.1 Purpose and Design Criteria

The primary purpose of AwareTable is to record the displacement of paper cards during card-handling activities for future reference and retrospection. To perform this task, the system should be able to detect each paper card and record the location of the cards on the table.

DigitalDesk [2], [3] and EnhancedDesk [4] capture images of the tabletop using a camera located over the table to recognize objects and fingers. This approach is straightforward, but the objects on the table must have visual markers on the users’ (upper) side to distinguish them. The visual markers may hinder the primary tasks. Also, mounting a camera over the table would require supporting posts.

Considering both the mobility of the tabletop system and operability with conventional cards, we agreed upon the following criteria for our tabletop system.

1. No additional visual markers or tags are shown on the upper side of the cards.

2. For operability, no devices are attached to the paper cards.

3. All devices are contained within the table.

4. Multiple paper cards and their IDs should be detected.

5. The table should display additional data on its surface.

Based on these criteria, we used a sheet of transparency-controllable glass as the material for the tabletop screen. The transparency-controllable glass looks like frosted glass in its normal state, but its transparency can be changed instantly by applying an electric potential difference. Since the glass contains a liquid crystal layer, the electric potential aligns the liquid crystal molecules.

The system can detect both the locations and IDs of multiple paper cards by capturing images using a camera placed below the tabletop surface. Also, when the glass is in the frosted mode, it acts as a screen to display additional data from a projector. The system can recognize paper cards or objects by visual markers printed on their back sides. The ability to control the transparency of the glass contributes both to accurate tracking of the visual markers and to improving the visibility of projected data.

2.2 System Overview

Figure 1 shows a conceptual image of AwareTable as it would typically be used. AwareTable includes (1) a sheet of transparency-controllable glass as its tabletop, (2) a camera for capturing visual markers on the back sides of paper cards, and (3) a projector for displaying additional data. In addition, we use digital pens to store handwritten text or drawings on the front side of the cards. The visual markers on the back sides of the paper cards can be printed using conventional printers.

Fig. 1 Conceptual image of AwareTable as it would typically be used. A camera and a data projector are installed within the table frame.

2.3 Scenario

We describe a scenario using AwareTable in the context of collaborative convergent thinking, based on the KJ method†.

As part of the preparation, the users print visual markers on the back sides of the paper cards (see Figure 2) for identification. They also write their thoughts or ideas onto the paper cards using digital pens. The paper cards and handwritten text can be automatically related using Anoto digital pens. After writing on the cards, the users place the paper cards on the table using their hands.

With the KJ method, it is crucial to understand the written content on the cards. However, the cards are too small to indicate their context or background knowledge. In the conventional KJ method, background knowledge is provided by consulting a reference sheet that holds data gathered from previous investigations. AwareTable can display background knowledge on the tabletop screen near the card.

†The KJ method is a registered trademark of the Kawakita Research Institute.

Once the card positions and card placements are digitized, the data can be accessed and utilized in multi-purpose ways. The transition data can be used to retrieve previous trials and to investigate topics that need further discussion. Also, the records can be used to restore the original organization of the cards. If another table is placed at a distant site, the data can be used to collaborate with users at the remote site.
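The transition records described above amount to a time-stamped log of card positions; restoring an earlier organization is then a matter of replaying the log up to a chosen time. A minimal sketch in Python (the class and field names are illustrative, not from the paper):

```python
class CardLog:
    """Time-stamped log of card placements on the tabletop."""

    def __init__(self):
        self.events = []  # (timestamp, card_id, x_mm, y_mm), appended in time order

    def record(self, t, card_id, x, y):
        self.events.append((t, card_id, x, y))

    def layout_at(self, t):
        """Return {card_id: (x, y)} as the table looked at time t."""
        layout = {}
        for ts, card_id, x, y in self.events:
            if ts > t:
                break
            layout[card_id] = (x, y)  # later events overwrite earlier positions
        return layout

log = CardLog()
log.record(0, "card-7", 100, 200)
log.record(5, "card-7", 400, 250)   # the card was moved
log.record(8, "card-9", 120, 480)

print(log.layout_at(4))   # card-7 still at its first position
print(log.layout_at(10))  # final organization, both cards
```

The same replay also supports the remote-collaboration case: sending the event stream to a second table reproduces the layout there.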

Fig. 2 Visual marker printed on the back side of a paper card. The size of the visual marker is 32 × 32 mm, and the size of the card is 64 × 38 mm.

2.4 Advantages

The transparency control of the tabletop screen material is beneficial for capturing small visual markers. The configuration of AwareTable is relatively simple; thus, we can easily increase the size of the tabletop screen. Also, the resolutions of both the scanned images and the projected data can be increased by installing additional cameras and data projectors. The resolution of the scanned image determines the minimum usable size of the visual markers. With a large tabletop screen and high-resolution images, the system can handle more paper cards. Therefore, our framework is suitable for handling many cards simultaneously.
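The link between scan resolution and marker size can be made concrete with rough numbers. Assuming, for illustration, that each of the two 1600 × 1200-pixel cameras of the prototype (figures given in Sect. 4.1) covers one half of the 1219 × 914 mm tabletop, a 32 mm marker spans only a few dozen pixels; the camera layout split is our assumption, not stated in the paper:

```python
# Rough sanity check: how many pixels does one marker get?
# Assumed setup: two 1600x1200 cameras, each covering half the width of a
# 1219 x 914 mm tabletop (the coverage split is an assumption).
table_w_mm, table_h_mm = 1219.0, 914.0
cam_w_px, cam_h_px = 1600, 1200
marker_mm = 32.0

half_w_mm = table_w_mm / 2  # each camera sees one half of the table width
# The limiting axis determines the effective sampling density.
px_per_mm = min(cam_w_px / half_w_mm, cam_h_px / table_h_mm)
marker_px = marker_mm * px_per_mm

print(f"{px_per_mm:.2f} px/mm -> a {marker_mm:.0f} mm marker is ~{marker_px:.0f} px wide")
```

Under these assumptions the marker image is roughly 40 pixels across, which is why higher resolution (or more cameras) permits smaller markers and therefore more cards.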

As we mentioned in Section 2.1, the frosted mode of the screen clearly improves the visibility of the projected data. By increasing the screen resolution, the user can obtain precise and detailed images of background knowledge.

Since all necessary devices are contained within the table, the mobility of the tabletop is improved. The all-in-one design also contributes to operability. Both the rear-projection and rear-scanning features eliminate negative influences caused by occlusion by the fingers, hands, and heads of users.

3. Related Work

The concept of computer-based paperwork tracking emerged in the 1990s. DigitalDesk [2], [3] by Wellner et al. and EnhancedDesk [4] by Koike et al. are representative examples. DigitalDesk captures finger operations and paper documents, and integrates them. Wellner et al. explained the concept of the DigitalDesk Calculator, in which the user can enter numbers and operators by pointing to items printed on a paper sheet. EnhancedDesk links real-world objects and virtual ones by recognizing fingers and two-dimensional matrix codes printed on books.

The Designers’ Outpost [5] is a tangible wall-size display that recognizes the locations of physical Post-it notes through computer vision. The Outpost requires a camera to capture images of the foreground. Although the type of collaborative task is similar, we focused on improving the simplicity and mobility of the table configuration by installing all devices within it. Interactive Station, developed by Ricoh Company Ltd., is a tabletop system that stores text or drawings handwritten with whiteboard markers on a screen. The stored text or drawings can be overlapped with an electronic document displayed on the surface. Interactive Station and AwareTable have similar concepts and configurations, because both have a camera and a data projector installed inside the table; we, however, focus on paper-handling tasks. Doring and Beckhaus proposed using cards for research in the field of art history [6].

Some systems employ special devices to interact with digital objects in a virtual world. MetaDESK [1] and Sensetable [7] are significant examples. These technologies have been applied to network management [8] and a disaster-simulation environment [9]. SnapTable [10] and PaperButtons [11] manage papers using devices. SnapTable introduces electronic paper and a collaborative workspace to provide intuitive browsing operations for electronic documents.

There has also been research on screen devices with liquid crystal shutters. Shiwa developed a large-screen telecommunication device that enables eye contact between users during remote conferences [12]. A similar approach is taken in ClearBoard [13].

Kakehi et al. proposed the Lumisight Table [14], [15], which projects personalized images for four users. The Lumisight Table uses Lumisty films to control visibility for different users, and detects objects placed on the table using an inner camera. TouchLight [16] also uses a similar material for its screen. The Lumisight Table uses four Lumisty films and Fresnel lenses to improve image quality. Although the characteristics of the Lumisight Table are significant, its configuration is not suitable for larger tabletop screens. The simplicity of the AwareTable configuration makes it suitable for larger tabletops, which allows handling many paper cards.

4. Implementation

In this section, we describe the implementation of the AwareTable prototype.

4.1 Configuration

Figure 3 shows the appearance of the table. We used UMU Smart Screen† as the tabletop material. The size of the tabletop is 60 inches (1219×914 mm). The table size (1350×1050×1000 mm) was determined by considering the task, the users, and a restriction imposed by the mechanism. For card-based tasks, users may place over 100 paper cards at a time and work collaboratively with up to six other users; thus, we chose a large table. The technical restriction of displaying images on the surface determines the height of the table. The height could be reduced if we used a better projecting technique.

†UMU Smart Screen, developed by NSG UMU Products Co., Ltd., http://www.umupro.com/

Fig. 3 AwareTable appearance.

Fig. 4 AwareTable inside view.

Fig. 5 AwareTable configuration.

Fig. 6 Captured image by the inner camera in the frosted mode. Fig. 7 Captured image by the inner camera in the transparent mode.

Fig. 8 Capture test result of 210 visual markers in the frosted mode. Fig. 9 Capture test result of 210 visual markers in the transparent mode. Ninety percent of the cards are successfully tracked within a few seconds.

Figure 4 and Figure 5 show the configuration of AwareTable. For capturing images, we used two IEEE 1394 high-resolution cameras (PGR Scorpion SCOR-20SOC-KT, 1600×1200 pixels). To obtain the best image quality from the two cameras, two IEEE 1394 PCI interface cards were installed in the PC (Pentium 4, 3.8 GHz, 2 GB memory). For data projection, we selected a short-throw projector (SANYO LP-XL40(S)). Figure 10 shows a projected image in the frosted mode. To provide sufficient illumination for capturing visual markers, we used four electric light bulbs, which provide better lighting conditions for the system to capture images. The data projector could also serve as a light source, but the light bulbs enable the cameras to capture peripheral areas outside the projected region.

To identify and obtain the location of each paper card, we utilized ARToolKitPlus [17] and its visual markers. ARToolKitPlus has the advantage of recognizing a large number of visual markers with fewer computations. The recognition module, which processes captured images from the IEEE 1394 cameras, was written in Visual C++ with libraries provided by Point Grey Research Inc. The recognition module is connected to a graphical interface module written in Java through the Java Native Interface (JNI).
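The paper relies on ARToolKitPlus for this step; the core idea of ID markers can nevertheless be sketched independently of that library. A marker encodes its ID as a small bit grid, and the decoder must accept the grid in any of its four rotations, since a card may lie at any orientation on the table. The following toy illustration (not ARToolKitPlus's actual coding scheme) picks the rotation with the smallest row-major reading as the canonical ID:

```python
def rotate(grid):
    """Rotate a square bit grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def decode_marker(grid):
    """Try all four rotations of a 4x4 bit grid; return (id, rotation) for
    the rotation whose bits, read row-major, give the smallest integer.
    Picking the canonical (smallest) reading makes the ID orientation-free."""
    best = None
    g = grid
    for r in range(4):
        bits = [b for row in g for b in row]
        marker_id = int("".join(map(str, bits)), 2)
        if best is None or marker_id < best[0]:
            best = (marker_id, r)
        g = rotate(g)
    return best

grid = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 1, 0, 1]]
print(decode_marker(grid))  # same ID is reported however the card is rotated
```

Real marker systems additionally add a solid border for detection and parity bits so that a symmetric pattern cannot be confused with its own rotation.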

To control both the transparency of the tabletop glass and the light bulbs, we used solid-state relays (SSRs, Phototriac BTA24-600CWRG) for switching a 100-volt power supply. We prepared three independent SSR circuits to control the glass and two lines of electric light bulbs. The SSRs were controlled by the digital outputs of a Phidget Interface Kit†. We developed the power-supply control module in Visual C++, and it is also connected via the JNI. When the graphical interface module requests scanning, the recognition module calls back a method with coordinates (or a translation matrix for 3D applications). The graphical interface module can then overlay rectangles where the visual markers are placed, or project additional data on the tabletop screen.

†http://www.phidgets.com/
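To overlay a rectangle exactly where a marker sits, the marker's camera-image coordinates must be mapped into projector coordinates. The paper does not detail this step; one common approach, since the camera and projector both address the same flat tabletop, is a planar homography calibrated once from a few reference points. A sketch with a hypothetical, pre-computed matrix:

```python
def apply_homography(H, x, y):
    """Map a camera-image point (x, y) to projector coordinates using a
    3x3 homography H (row-major nested lists)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# Hypothetical calibration: pure scale plus offset. A real H would be
# fitted from markers observed at known projector coordinates.
H = [[0.8, 0.0, 12.0],
     [0.0, 0.8, -5.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, 100, 100))  # -> (92.0, 75.0)
```

With the homography fixed, each scan's marker coordinates can be converted and handed to the graphical module for rectangle overlay or nearby data projection.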


Fig. 10 Projection in the frosted mode. Fig. 11 Projection in the transparent mode.

5. Discussion

5.1 Preliminary Operation Test

Figure 6 and Figure 7 show images captured in the frosted and transparent states, respectively. In these pictures, we used small visual markers (32 mm × 32 mm). The distance from the camera to the tabletop screen was 57 cm. The latter image is clearer than the former; therefore, the system has the potential to recognize small visual tags, and our tabletop framework can effectively handle many paper cards.

To check the feasibility of handling many paper cards with our tabletop system, we conducted a test with 210 visual markers on the surface. We used the same visual markers (32 mm × 32 mm). The visual markers were printed on an A4-sized paper sheet. The resolution of the captured image was 1600 × 1200 pixels. Before the test, the data projector was turned off to reduce the influence of projected light. While capturing images, we made the glass transparent and turned on two light bulbs for one second. Then we turned those light bulbs off and turned on the other two light bulbs for one second. Finally, the lights were turned off and the glass was switched back to the frosted mode.
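The capture sequence described above is essentially a small state machine over the three relay circuits (the glass and the two lines of bulbs). A sketch with a stand-in relay class; the real system drives SSRs through a Phidget Interface Kit, whereas Relay here merely records the switching actions for inspection:

```python
import time

class Relay:
    """Stand-in for one SSR circuit; records switching for inspection."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def set(self, on):
        self.log.append((self.name, on))

def scan_cycle(glass, bulbs_a, bulbs_b, dwell=0.0):
    """One capture cycle as described in the preliminary test: glass
    transparent, first bulb line lit, then the second, then back to the
    frosted (projection) mode. Cameras grab frames while each line is lit."""
    glass.set(True)                        # transparent: camera sees marker backs
    bulbs_a.set(True); time.sleep(dwell)   # capture pass 1
    bulbs_a.set(False)
    bulbs_b.set(True); time.sleep(dwell)   # capture pass 2
    bulbs_b.set(False)
    glass.set(False)                       # frosted again: surface is a screen

log = []
scan_cycle(Relay("glass", log), Relay("bulbs_a", log), Relay("bulbs_b", log))
print(log)
```

In the reported test the dwell was one second per bulb line; the blinking this cycle causes is exactly the limitation discussed in Sect. 5.2.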

Figure 9 shows the recognition results of the test in the transparent mode. About 10% of the visual markers were not tracked; by scanning continuously, most of the missed markers could be detected. Thus, the AwareTable framework is applicable to handling more than a hundred paper cards. Fine-tuning the camera through calibration can further improve the scanning performance. Figure 8 shows the results of the test in the frosted mode. Other conditions, such as the camera parameters and lights, were the same. Due to the frosted state, about 40% of the markers were missed.
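The observation that continuous scanning recovers most missed markers follows from simple probability: if a single scan detects each marker with probability p ≈ 0.9 and scans are roughly independent (an assumption of ours, not the paper's), the chance that a marker is still missed after n scans is (1 − p)^n:

```python
# Expected number of still-missed markers after repeated scans,
# assuming independent per-scan detection at the observed rate.
p_detect = 0.9   # per-scan detection rate observed in the test
markers = 210

for n_scans in (1, 2, 3):
    p_missed = (1 - p_detect) ** n_scans
    expected_missed = markers * p_missed
    print(f"after {n_scans} scan(s): ~{expected_missed:.1f} markers still missed")
```

In practice misses are correlated (a marker at a badly lit spot tends to stay missed), so the true decay is slower, which is why camera calibration still matters.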

Figure 10 and Figure 11 show snapshots of projected images in the frosted and transparent modes, respectively. The frosted mode is suitable for users to recognize projected images, since the tabletop material (UMU Smart Screen) was originally developed for rear-projection screens. The visibility of the projected data is crucial for long-term tasks, but some tabletop systems do not consider this point.

5.2 Limitation

As we explained, AwareTable is effective for tasks that involve handling a large number of paper cards. However, there are some limitations to its operation. The main issues are the cards that are not scanned, and the discontinuity between the screen mode and the scanning mode. The users must pause their task during scanning because the screen blinks when switching between the frosted and transparent modes, and because of the illumination from the light bulbs. This drawback could be eliminated by using high-speed cameras with infrared lights.

6. Conclusion

In this paper, we presented AwareTable, which employs a transparency-controllable sheet of glass as the tabletop surface. We described how the configuration can effectively recognize physical objects and project data with high visibility. In addition, AwareTable provides high portability because all the components are included in the tabletop frame, and no additional components are required. Beyond these structural merits, users can perform card-handling tasks in the conventional manner, since the system recognizes physical cards by scanning visual markers printed on their back sides. Thus, our system is simple yet powerful, and is suitable for card-handling tasks with multiple users.

We mentioned the concept of “gradually extending conventional tasks.” AwareTable is a concrete example that realizes this concept. We believe this approach is promising because some people are not willing to stop using conventional methods, which remain natural and intuitive for almost all users.

In our preliminary test, we used two high-resolution cameras to make the system scalable. Owing to the high resolution, the system can identify small visual markers. With this configuration, we confirmed that the system can recognize almost 200 paper cards at a time. This number is sufficient for many card-based activities, such as those using the KJ method. Therefore, we conclude that AwareTable can be used as an augmented workbench to record card-based thinking in an advanced manner.

In the future, we will improve the recognition performance by fine-tuning and by applying other techniques. We will also extend the system’s functionality to recognize fingertips on the screen, providing seamless interaction with additional data and other virtual supplements.

Acknowledgment

Our research is partly supported by a Grant-in-Aid for Scientific Research (18700117) and a fund from the Ministry of Education, Culture, Sports, Science and Technology of Japan, under the name of “Cluster for Promotion of Science and Technology in Regional Areas.”

References

[1] B. Ullmer and H. Ishii, “The metaDESK: Models and Prototypes for Tangible User Interfaces,” Proceedings of UIST’97, pp.223–232, 1997.

[2] P. Wellner, “The DigitalDesk Calculator: Tangible Manipulation on a Desk Top Display,” Proceedings of UIST’91, pp.27–33, 1991.

[3] W. Newman and P. Wellner, “A Desk Supporting Computer-based Interaction with Paper Documents,” Proceedings of CHI’92, pp.587–592, 1992.

[4] H. Koike, Y. Sato, and Y. Kobayashi, “Integrating Paper and Digital Information on EnhancedDesk: A Method for Realtime Finger Tracking on an Augmented Desk System,” ACM Transactions on Computer-Human Interaction, vol.8, no.4, pp.307–322, 2001.

[5] S.R. Klemmer, M.W. Newman, R. Farrell, M. Bilezikjian, and J.A. Landay, “The Designers’ Outpost: A Tangible Interface for Collaborative Web Site Design,” Proceedings of UIST’01, pp.1–10, 2001.

[6] T. Doring and S. Beckhaus, “The Card Box at Hand: Exploring the Potentials of a Paper-Based Tangible Interface for Education and Research in Art History,” Proceedings of the 1st International Conference on Tangible and Embedded Interaction, pp.87–90, 2007.

[7] J. Patten, H. Ishii, J. Hines, and G. Pangaro, “Sensetable: A Wireless Object Tracking Platform for Tangible User Interfaces,” Proceedings of CHI’01, pp.253–260, 2001.

[8] K. Kobayashi, M. Hirano, A. Narita, and H. Ishii, “A Tangible Interface for IP Network Simulation,” CHI’03 Extended Abstracts, pp.800–801, 2003.

[9] K. Kobayashi, A. Narita, M. Hirano, I. Kase, S. Tsuchida, T. Omi, T. Kakizaki, and T. Hosokawa, “Collaborative Simulation Interface for Planning Disaster Measures,” CHI’06 Extended Abstracts (Work-in-Progress), pp.977–982, 2006.

[10] M. Koshimizu, N. Hayashi, and Y. Hirose, “SnapTable: Physical Handling for Digital Documents with Electronic Paper,” Proceedings of NordiCHI’04, pp.401–404, 2004.

[11] E.R. Pedersen, T. Sokoler, and L. Nelson, “PaperButtons: Expanding a Tangible User Interface,” Proceedings of the Conference on Designing Interactive Systems (DIS’00), pp.216–223, 2000.

[12] S. Shiwa, “A Large-Screen Visual Telecommunications Device Using a Liquid-Crystal Screen to Provide Eye Contact,” Journal of the SID, pp.37–41, 1993.

[13] H. Ishii and M. Kobayashi, “ClearBoard: A Seamless Medium for Shared Drawing and Conversation with Eye Contact,” Proceedings of CHI’92, pp.525–532, 1992.

[14] Y. Kakehi, M. Iida, T. Naemura, Y. Shirai, M. Matsushita, and T. Ohguro, “Lumisight Table: An Interactive View-Dependent Tabletop Display,” IEEE Computer Graphics and Applications, vol.25, no.1, pp.48–53, 2005.

[15] Y. Kakehi, T. Hosomi, M. Iida, T. Naemura, and M. Matsushita, “Transparent Tabletop Interface for Multiple Users on Lumisight Table,” Proceedings of the First IEEE International Workshop on Horizontal Interactive Human-Computer Systems (TABLETOP ’06), 2006.

[16] A.D. Wilson, “TouchLight: An Imaging Touch Screen and Display for Gesture-Based Interaction,” Proceedings of ICMI’04, pp.69–76, 2004.

[17] D. Wagner and D. Schmalstieg, “ARToolKitPlus for Pose Tracking on Mobile Devices,” Proceedings of the 12th Computer Vision Winter Workshop (CVWW’07), 2007.

Motoki Miura was born in 1974. He received B.S., M.E., and D.E. degrees in electronics engineering from the University of Tsukuba in 1997, 1999, and 2001, respectively. From August 2001 to March 2004, he worked as a research associate at the TARA Center, University of Tsukuba. He is currently working as an assistant professor in the School of Knowledge Science, Japan Advanced Institute of Science and Technology, Japan. He is a member of IEICE, JSAI, IPSJ, JSSST, ACM, JSET, and HIS.

Susumu Kunifuji was born in 1947. He received B.E., M.E., and D.E. degrees from Tokyo Institute of Technology in 1971, 1974, and 1994, respectively. He worked as a researcher at the International Institute for Advanced Study of Social Information Science, FUJITSU Ltd. (1974–1982), chief researcher at the Institute for New Generation Computer Technology (1982–1986), manager of the International Institute for Advanced Study of Social Information Science, FUJITSU Ltd. (1986–1992), professor of the School of Information Science at JAIST (1992–1998), and director of the Center for Information Science at JAIST (1992–1998). He is currently a professor at the School of Knowledge Science, Japan Advanced Institute of Science and Technology, Japan. He is a member of IEICE, JSAI, IPSJ, SICE, JCS, etc.

