A Reusable Library of 3D Interaction Techniques

Pablo Figueroa∗

Universidad de los Andes, Colombia

David Castro†

Universidad de los Andes, Colombia

ABSTRACT

We present a library of reusable, abstract, low-granularity components for the development of novel interaction techniques. Based on the InTml language and through an iterative process, we have designed 7 selection and 5 travel techniques from [5] as dataflows of reusable components. The result is a compact set of 30 components that represent interactive content and useful behavior for interaction. We added a library of 20 components for device handling, in order to create complete, portable applications. By design, we achieved 68% component reusability, measured as the number of components used in more than one technique over the total number of components used. As a reusability test, we used this library to describe some interaction techniques from [1], a task that required only 2% new components.

Index Terms: D.2.2 [Software Engineering]: Design Tools and Techniques—Software libraries; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual reality; D.3.2 [Programming Languages]: Language Classifications—Data-flow languages

1 INTRODUCTION

We have accumulated a wealth of information about 3D interactivity, in particular in the form of 3D interaction techniques. These can be classified in several ways, from hardware or domain dependency to generic categories such as travel, selection, or control. Numerous studies have been conducted to understand how interaction techniques affect users' performance and how they fit the particular requirements of a user in a particular domain.

Despite this progress, it is still difficult to develop applications with complex 3D interfaces. Novel techniques are created in novel scenarios, but it is difficult in any new development to benefit from previous work. We believe part of the problem is the set of tools we use to build new interaction techniques. Current languages and frameworks do not facilitate the development of reusable components for interaction. There are also several libraries and toolkits to handle devices, audio, or 3D graphics in certain programming languages, but a counterpart for interaction, and models for its integration into the rest of the application, are still missing.

We propose a library for the development of interaction techniques that is independent from traditional programming languages. It is based on InTml, the Interaction Techniques Markup Language [10], which allows the description of VR applications as a dataflow of components that represent devices, content, and behavior. Since reuse might happen at a lower level of granularity than that of a whole technique, we want interaction techniques to be composed of simple, independent components that can be reused in other techniques. In this way, novel techniques can benefit from previous developments and reuse interesting parts, while adding new components as required. We also want to be independent of particular

∗e-mail: [email protected]
†e-mail: [email protected]

implementation APIs and programming languages, so this descriptive library could be implemented on top of a wide variety of technologies. In this way, VR applications described in this language have more opportunities to be ported to new technologies during their active lifetime.

This paper is organized as follows: First we present previous work. We then describe the components of the library, its divisions, and some examples of interaction techniques we have developed, embedded in entire applications on a hardware environment based mostly on a tracker and a generic display. Later we present some reuse metrics for our current implementation. Finally we present some conclusions and future work.

2 RELATED WORK

This work is related to the topics of high-level programming languages and libraries, in the particular field of VR applications and in the generic space of Computer-Human Interfaces (CHI). We first describe some important results in high-level languages for CHI, and in particular VR, and later we describe some results related to libraries for interaction. An assumption in our work is that previous results in CHI, mostly related to 2D interfaces, are either too generic or too hard to accommodate to the specific representational needs of the wealth of 3D interaction techniques our community has developed.

Our work is inspired by early efforts around the concept of User Interface Management Systems, or UIMS [17, 12], which provided a semantically rich metaphor for the development of user interfaces and a set of tools that supported several stages in the software lifecycle. Although there are some examples related to 3D interaction representation (e.g., the daisy menu in the PMIW language [12]), these works mostly concentrated on other elements important to UIMS, such as graphic feedback, device management, IDE support, or debug support.

Some high-level languages for interface development in the fields of WIMP [2, 16] and post-WIMP [12] interfaces used formal specification mechanisms such as Petri Nets, simplified programming languages, or state machines, which allow automatic checking but are usually difficult for non-expert users to comprehend. Some abstract models and their implementations have been developed for the support of certain application types (e.g., multimodal applications in [4]). We are more interested in systems that use a dataflow abstraction, as ours does, and the libraries of interaction techniques they may have created. The Input Configurator and MaggLite in [9, 11] present a three-layered dataflow that connects physical input devices to responses in post-WIMP applications, an approach similar to ours, which we generalize to more VR devices and more complex, extensible, and non-layered dataflows that represent generic 3D interaction techniques. Squidy [14] presents a simplified dataflow with just one input and one output port per node, which allows the integration of several types of novel devices through standard protocols and a Java-based core. Our approach hides such particular protocols and is language independent, which yields a cleaner abstract model. In [15], a dataflow-based, UIMS-like system for the development of multimodal interfaces is presented, with very interesting features such as design by example and group development, but a more comprehensive library of interaction techniques is still missing, since it results from the analysis of a design space rather than from representing available technologies.

There have been several attempts to create high-level languages for the description of VR applications. For example, [20] presents a formalism based on Petri Nets, [22] proposes a dual control and data flow architecture, and [24] proposes modules for interaction design, just to mention some recent results. However, there are few proposals in these languages related to libraries of reusable and independent components, specifically for interaction techniques. Finally, methodologies for the development of novel interaction techniques, such as [7], do not provide clear proposals for libraries of interaction.

We should also mention the state of the art in game engines, which provide a good starting foundation for several types of VR applications. However, in general, their architectures do not consider changes in the interaction techniques, and they are fixed to the most popular techniques with standard devices, mostly keyboard and mouse.

There are several libraries that focus on the visualization part of an interface, both in WIMP and 3D applications, such as [3], where the interaction is implicit or limited to the standard metaphors and devices in WIMP. In terms of libraries of interaction techniques and novel devices, there are some attempts in the fields of WIMP [18] and post-WIMP interfaces ([11]). Several systems provide fixed and implicit sets, such as the standard X3D [23], so their usefulness for the creation of novel interaction techniques in novel hardware environments is so far limited. Some extensions such as Contigra [8] provide interesting implementations of common interaction techniques, but due to the programming style in X3D, it is difficult to identify components for reuse in novel interaction techniques. Several systems declare the existence of a library of devices or reusable modules (e.g., [9]), although there are not enough details about their support for generic interaction techniques or the design process. Squidy [14] introduces the concept of a searchable knowledge base of filters, although few details are available about the design rationale of such a collection of filters. Commercial systems such as 3D Via [22] or Vizard [25] provide a wealth of functionality, although their components do not particularly address interaction, their components' communication and encapsulation characteristics can be improved, their design did not take reusability and shared code into account from the beginning, and they are tied to a particular language and vendor.

Current libraries and toolkits include some support for interaction techniques. Device libraries such as VRPN and OpenTracker provide common, programming-language-level access to devices, although they offer only simple ways to connect to the rest of the application code. Complex scene graphs such as Java3D provide some support for interaction modules, but they have limited components and have not evolved despite their time in the market.

3 THE LIBRARY

The library was defined by means of an iterative process that added one interaction technique from [5] at a time, together with a sample application that uses it. Each time we added a new interaction technique, we took into account the components available in the library. If possible, we either reused or slightly modified an existing component. If that was not possible, new components were created and added. All these components and libraries were created with the aid of our IDE, a set of plug-ins for Eclipse that allows us to create libraries, components inside a library, and applications, and to generate code stubs for a particular runtime. Figure 1 shows our IDE with the object library and a ray casting application. Although details are too small to read, one can see the canvas for visual programming, the set of tools that each canvas provides, the outline view that shows the entire application, and the list of properties for the component selected in the application.

The following subsections describe key components in the library, divided into three main groups: devices, objects, and behaviors.

Figure 1: IDE for the Creation of Components, Libraries, and Applications.

First we describe the basics of the dataflow model we are using, and the basic types that all components are based upon.

3.1 Basic Model Elements

This library is defined on top of InTml [10], which describes VR applications as a dataflow of components, which can then be translated to a particular runtime environment, provided that such a target environment has been implemented. Each component has a type that describes a set of input and output ports, which are the only method for intercomponent communication. Not all ports have to be connected in an application, and zero or more events can be received by each port at any particular frame. Ports have types, which are an abstraction of basic types in low-level programming languages. A port type can also be a component, in order to allow objects to be sent to other behavior components. For example, components for collision detection may require the geometry of a pointer object, which could be sent to the module through an input port. Entire applications can be defined as dataflows of components, some of them generic, some of them application specific. It is also possible to encapsulate an entire application in a component, although this mechanism precludes reusability.
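As an illustration, the component-and-port model just described can be sketched in Python. This is only a minimal model of the idea: the class names, the per-frame event queues, and the Doubler example are ours, not part of InTml.

```python
class Port:
    """A typed port; events queued here are consumed once per frame."""
    def __init__(self, event_type):
        self.event_type = event_type
        self.events = []       # events received this frame
        self.targets = []      # ports this one forwards events to

    def send(self, event):
        assert isinstance(event, self.event_type)
        for t in self.targets:
            t.events.append(event)


class Component:
    """A node in the dataflow: communicates only through its ports."""
    def __init__(self):
        self.inputs = {}
        self.outputs = {}

    def add_input(self, name, event_type):
        self.inputs[name] = Port(event_type)

    def add_output(self, name, event_type):
        self.outputs[name] = Port(event_type)

    def frame(self):
        """Process queued events; called once per simulation frame."""
        raise NotImplementedError


class Doubler(Component):
    """A trivial behavior: doubles every float it receives."""
    def __init__(self):
        super().__init__()
        self.add_input("value", float)
        self.add_output("doubled", float)

    def frame(self):
        for e in self.inputs["value"].events:
            self.outputs["doubled"].send(2.0 * e)
        self.inputs["value"].events.clear()
```

Connecting two instances amounts to appending one component's output port targets with another component's input port, mirroring the dataflow edges drawn in the figures below.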

3.2 Basic Types

As part of our requirements we want to make this description as programming-language independent as possible, so that it may be feasible to implement these modules in several execution environments, especially in strongly typed, general-purpose programming languages. For this reason we define the types in Table 1. These types can be easily translated to the language of each runtime environment, as part of the basic implementation support. From this table, it is important to notice the Signal type, which informs of the occurrence of an event with no associated information, and the Id* types, which contain an identifier and information of a certain type. The latter allow ports to send events from several sources, as we will mention in the following sections.

Spatial types are also of particular interest in 3D applications. For example, Pos3D models a 3D position in space, both Orientation and Quaternion model an orientation in space, and Matrix4 can model both. While we have required this redundancy in certain components, in general we prefer to have abstract types that can be implemented in several ways. In that sense, we prefer Orientation over Quaternion, and pairs of Pos3D and Orientation events instead of one Matrix4 event.

Table 1: Basic Types in InTml.

Category | Type Names
Occurrence of an event | Signal
Basic types | Integer, Boolean, Float, Double, Long, Byte, String
Spatial types | Pos2D, Pos3D, Orientation, Orientation3D, Quaternion, Matrix4
Id plus type | IdFilter, IdBoolean, IdDouble, IdLong, IdByte, IdPos2D, IdPos3D, IdOrientation, IdOrientation2D, IdOrientation3D, IdQuaternion, IdString, IdFloat
A component's instance | Filter
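These basic types map naturally onto a general-purpose language. A minimal Python rendering (the spellings below are ours, not a normative binding) shows in particular how the Id* types pair a source identifier with a payload:

```python
from collections import namedtuple

# Sketch of some basic InTml types from Table 1 (Python spellings are ours).
Pos3D = namedtuple("Pos3D", ["x", "y", "z"])        # a 3D position
IdFloat = namedtuple("IdFloat", ["id", "value"])    # (source id, float value)
IdPos3D = namedtuple("IdPos3D", ["id", "pos"])      # (source id, Pos3D)


class Signal:
    """An event that carries no information beyond its occurrence."""
    pass
```

An IdFloat(3, 0.7) event, for instance, says that analog sensor 3 produced the value 0.7; a single port can therefore carry events from many sensors at once, as described for devices below.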

3.3 Devices

The most easily understood component from the designer's point of view is a device. A device in this library is an abstraction of a family of physical devices (e.g., gamepads, joysticks, and trackers) that share form factor and functionality, as independent as possible from particular APIs or low-level programming languages. Our purpose is that each device component can be directly related to a physical device, therefore facilitating its identification during design and maintenance. Although devices are, technically speaking, outside the scope of interaction techniques, it is important to us to be able to describe an entire application in the same formalism. Therefore, devices (as well as content objects, as mentioned in the following Section) were required. Devices get reused when several applications use the same hardware in their solution. We have also defined components for one-of-a-kind devices such as the SpaceMouse, the Wii Remote Control, the Phantom Omni by Sensable, the Falcon by Novint, and the P5 Glove, and more specific devices can be defined in the future by designers. These one-of-a-kind devices may be generalized (e.g., into a generic haptic device, or a generic glove) once more devices are included in the library and commonalities between them are found. In this way, a designer will be able to use either a generic component that describes functionality common to a device family, or a particular (but maybe not so reusable) component that gives access to all of a device's functionality. We also included a component for one of our custom-built devices in order to show the library's extension possibilities.

Generic devices provide a common interface for a family of physical devices that have the same form factor, expected outputs, and expected inputs. For example, Figure 2 shows a generic joystick. Since joysticks can vary in their number of elements (e.g., buttons or analog sensors), we define the following ports per element of interaction: one that outputs the number of elements, and ports that send streams of each type of event such elements can produce. For example, the joystick component has the numButtons port in order to output the number of buttons a particular device has, and a set of ports (i.e. btnsPressed, btnsReleased, btnsClicked) that output the events such devices can produce, each type of event on its own port. In this case, since these events do not have attached information, we model each event as an integer that gives the identification (a number) of the particular button that generated it. In a similar manner, joystick has the numAnalogs port, which informs how many analog sensors the device has, and analogValues, which outputs a stream of analog events. Each of these events is modeled with the IdFloat basic type, a tuple that indicates the id of the sensor that produced the event and its float value. The last output port is devName, which allows us to identify particular devices, when such information is provided by the underlying driver. Finally, we added a vrpnConnInfo input port, which allows us to define the configuration string that our underlying driver (in this case, VRPN) requires in order to receive events from the physical device. This port can be generalized in order to receive any particular configuration string that may be required by inner implementations.

Figure 2: Component for a Generic Joystick
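A hedged Python sketch of such a generic joystick component follows; only the port names come from the text, while the driver-callback API is our invention for illustration.

```python
from collections import namedtuple

IdFloat = namedtuple("IdFloat", ["id", "value"])  # (sensor id, analog value)


class GenericJoystick:
    """Sketch of the generic joystick of Figure 2: per-element count
    ports plus streams of identified events."""
    def __init__(self, num_buttons, num_analogs, dev_name=""):
        self.numButtons = num_buttons    # port: number of buttons
        self.numAnalogs = num_analogs    # port: number of analog sensors
        self.devName = dev_name          # port: device identification
        self.btnsPressed = []            # port: ids of buttons pressed
        self.analogValues = []           # port: stream of IdFloat events

    def on_button_down(self, button_id):
        """Driver callback (assumed): queue a button-pressed event."""
        self.btnsPressed.append(button_id)

    def on_analog(self, sensor_id, value):
        """Driver callback (assumed): queue an identified analog event."""
        self.analogValues.append(IdFloat(sensor_id, value))
```

Note how a device with any number of buttons or sensors fits the same interface, which is what lets one component stand for a whole family.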

Device families in our case are defined not only in terms of functionality, which can be seen in the available input and output ports, but also in their form factor and ergonomics. For example, Figure 3 shows a generic gamepad. Although we can notice it has the same input and output ports as a Joystick, it is used in a different way and has a different form factor, and therefore we believe it is important to describe it separately.

Figure 3: Component for a Generic GamePad

Figure 4 shows a generic tracker. Apart from the deviceName output port, it has a way to inform the position and orientation of a number of available trackers. In terms of position, it provides numPositions with the number of position values available, and positionValues with the particular values provided. Events of the IdPos3D type provide an id for the generating tracker and its current position. This representation can model anything from small systems with two or three trackers, some with just position information, to novel motion capture systems with hundreds of markers.

Figure 4: Component for a Generic Tracker

Special purpose or one-of-a-kind devices are represented in the same manner, although their ports are directly related to what a particular driver implementation provides. For example, Figure 5 shows a component for the Wii Remote Control. Its ports represent the input and output available through the wiiuse library1. In this case, it is up to the designer either to provide as many ports as possible, which may preclude implementation in several programming languages due to variations in support, or to provide minimum functionality that can be implemented in all targeted environments. Here, the WiiDevice is modeled for completeness. It provides events for all its buttons, analog values for sensors such as its joystick and accelerometer, the number of infrared dots detected at any time and their positions in a projected 2D space, and the current battery level. As inputs, it may receive a vibration value, a sound value, and a combination of a connection string and an integer that identifies it, if more than one Wii device is present.

1Details of the wiiuse functionality can be found at http://www.wiiuse.net/.

Figure 5: Component for a Wii Remote Control

3.3.1 Utility Components for Devices

In order to facilitate the use of device components, we designed a set of utility components with the following functionality:

• Selectors. Output ports of types such as IdFloat or IdPos3D provide a stream of values of the same type from several sources. Selectors filter information from a particular source, each one identified by an integer. A selector receives the stream of tuples (id, value) and a particular integer that identifies the source to filter, and it outputs just the information that such a source provides. One example of these components is shown in Figure 6, which filters position and orientation events from a particular tracker or marker. Note that the types of the output ports do not include the source id that is embedded in the input events.

• Echo. A component for showing any type of event in the console.

• Type translators. Several input events can be combined in order to produce an event of another type. For example, two Floats can be combined into a Pos2D, or a Pos3D that represents a vector can be converted into an Orientation. Utility components can do this job.

• ComposedEventsButtons produces events for buttons clicked, double clicked, held, just pressed, or just released.

• SemanticButtonEvent produces events that require knowledge about devices. For example, shootEvent is produced once the shoot button in a particular device is pressed, which depends on the particular ergonomics of a device.

Figure 6: The TrackerSelector Component.
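The selector behavior described above is simple to state precisely. A hedged Python sketch (the function name and tuple layout are ours):

```python
from collections import namedtuple

IdPos3D = namedtuple("IdPos3D", ["id", "pos"])  # (tracker id, position)


def tracker_selector(events, source_id):
    """Sketch of the TrackerSelector of Figure 6: keep only events from
    one tracker and strip the embedded id, so the output port carries
    bare positions."""
    return [e.pos for e in events if e.id == source_id]
```

Applied to a mixed stream, tracker_selector(stream, 1) yields the positions reported by tracker 1 only, matching the note above that output ports drop the source id.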

3.4 Objects

The object library solves common requirements that interaction techniques impose on objects in the scene, and provides common objects that are required in such techniques. The basic interactive object is shown in Figure 7. It allows loading its geometry from a file, basic transformations (e.g., translation, rotation, scaling), appearance changes (e.g., showing a bounding box, making it visible, changing its material), and parenting changes, so it can be made dependent on another object. Since several changes can be applied at each frame, an object can report its final state at the end of the frame.

Figure 7: The SimpleObject Component.

Other objects in this library represent basic shapes (Ray, Cone, Cylinder, Plane, Sphere, Cube, Disk), a loader from a file that separates the several objects in it (Scene), and a component for a Material. In the case of basic shapes, although they can be created as a SimpleObject, their particular types allow specific modifications and functionality for interaction. For example, Figure 8 shows a ray as we found it useful for interaction. We can load its geometry and transform it as a generic object, but we can also define it in terms of a position, orientation, length, and radius, which allows us to provide feedback about the direction of selection or gaze in a particular interaction technique. Special output ports, such as length and radius, are also related to these features.

Figure 8: The Ray Component.
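Defining a ray by position, orientation, and length makes its tip easy to derive, which is what feedback and casting components need. A small sketch (the vector representation is ours; we use a unit direction vector in place of an Orientation):

```python
def ray_endpoint(origin, direction, length):
    """Tip of a Ray (Figure 8), given its origin, a unit direction
    vector, and its length."""
    return tuple(o + length * d for o, d in zip(origin, direction))
```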

We additionally provide an ObjectSet component, which models an arbitrary (and usually temporary) collection of objects that are treated as one for interaction purposes, and an ObjectSelector that allows filtering a particular object from a stream, given its id. We also include here a View component, which represents the current application point of view and which may be mounted on an avatar's geometry.

3.5 Behaviors

These components provide the foundational algorithms for interaction techniques. Currently, we have divided them into selection behaviors and travel behaviors, but they could be rearranged in the future, once more interaction techniques are represented and more commonalities are found (e.g., components for travel techniques that can also be used for selection).

The selection components consist of the basic algorithms for object selection (i.e. RayCaster, ConeCaster, ObjectCollisioner), and utility components that allow the implementation of interaction techniques, some of which are presented in Section 4. Figure 9 presents the ConeCaster. It receives the selection cone and the selectable objects. As output, it computes the new set of selected objects (i.e. the ones inside the cone). Figures 10 and 11 present the RayCaster and ObjectCollisioner, respectively. Notice how similar their structures are, in particular the expected outcome (one or more selected objects) from an input set. Other selection techniques can be added in the future, such as fast solutions based on GPU computations or OpenGL state. In these cases we foresee more basic types in order to represent the input information these techniques will require, and explicit output ports in display devices in order to produce such information.
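To make the shared structure of these casters concrete, here is a hedged Python sketch of a RayCaster-style behavior. Approximating each selectable object by a bounding sphere is our simplification for illustration, not part of the library.

```python
import math


def ray_caster(origin, direction, objects):
    """Sketch of a RayCaster-style selection (Figure 10).

    origin, direction: the ray (direction must be a unit vector).
    objects: iterable of (id, center, radius) bounding spheres.
    Returns the ids of the objects the ray intersects."""
    selected = []
    for obj_id, center, radius in objects:
        oc = [c - o for c, o in zip(center, origin)]
        # parameter of the ray point closest to the sphere center
        t = sum(a * b for a, b in zip(oc, direction))
        closest = [o + t * d for o, d in zip(origin, direction)]
        if t >= 0 and math.dist(closest, center) <= radius:
            selected.append(obj_id)
    return selected
```

A ConeCaster would differ only in the membership test (angle to the cone axis instead of distance to the ray), which is exactly the structural similarity noted above.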

Figure 9: The ConeCaster Component.

Figure 10: The RayCaster Component.

Figure 11: The ObjectCollisioner Component.

The travel library contains 6 technique-dependent components, among them a way to draw a path on the floor (PlaneDrawer), an interpolator between positions (PositionInterpolator), and a way to identify the exact collision point between a ray and a floor plane (FloorRayCaster). Most of them are used in the Drawing a Path technique [5, p. 207]. We show here the IncrementalSpeedPositioner (Figure 12), which receives a current position, current speed, and orientation of movement, and outputs the expected new position. Notice that the component does not actually change the view position. Instead, this or another component can affect the View instance of an application and therefore move the user, as we will see in the examples below.
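The behavior of such a positioner reduces to one line of vector arithmetic. A hedged sketch (the explicit time step dt and the unit direction vector in place of an orientation are our assumptions):

```python
def incremental_speed_positioner(position, speed, direction, dt):
    """Sketch of an IncrementalSpeedPositioner-style component: propose
    a new position from the current position, speed, and movement
    direction. It only outputs the candidate; a separate component must
    apply it to the View."""
    return tuple(p + speed * dt * d for p, d in zip(position, direction))
```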

4 EXAMPLES

As we have said, we have implemented several interaction techniques from [5]. These interaction techniques are represented as dataflows of instanced components, and we show some of them here in the context of applications that use a generic display (not visible in the diagrams and therefore implicit), trackers, and Wii controls as devices. Devices can be changed at will if necessary, although it is future work to identify which interaction techniques make more sense in particular hardware scenarios.

Figure 13 shows an application that uses one of the most common selection techniques, ray casting [6]. The core of this technique is the rayCaster component, which receives a Ray that

Figure 12: The IncrementalSpeedPositioner Component.

is moved and oriented by means of a hand tracker, and the selectable set of objects in the scene. Objects are loaded from the vrmlScene.wrl2 file by means of the Scene, which outputs a stream of identified objects. Such objects are filtered by an ObjectSelector, given their identifiers (in this example, the strings "1", "2", "3", and "4"). The Ray is moved by a hand tracker, whose coordinates can be transformed by means of the handOffset component. Selected objects are passed to the feedback component, in order to change their appearance and thereby show users their new state. As final elements in this application, the position and orientation of a View component are defined by a head tracker whose device coordinates are transformed by an OffSetter component.
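The nearest-hit logic at the heart of a rayCaster can be sketched as follows. This is an illustrative stand-in, not the component's code: we test against bounding spheres as proxy geometry, and all names are our assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };   // stand-in for a selectable object

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Solve |origin + t*dir - center|^2 = r^2 for the nearest t >= 0 (dir unit length).
bool rayHitsSphere(const Vec3& o, const Vec3& dir, const Sphere& s, float& tHit) {
    Vec3 oc{o.x - s.center.x, o.y - s.center.y, o.z - s.center.z};
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;               // ray misses the sphere
    float t = -b - std::sqrt(disc);
    if (t < 0.0f) t = -b + std::sqrt(disc);      // origin inside the sphere
    if (t < 0.0f) return false;                  // sphere entirely behind the ray
    tHit = t;
    return true;
}

// rayCaster-like pass: index of the closest object hit by the ray, or -1.
int rayCast(const Vec3& origin, const Vec3& dir, const std::vector<Sphere>& objects) {
    int best = -1;
    float bestT = 1e30f;
    for (std::size_t i = 0; i < objects.size(); ++i) {
        float t;
        if (rayHitsSphere(origin, dir, objects[i], t) && t < bestT) {
            bestT = t;
            best = static_cast<int>(i);
        }
    }
    return best;
}
```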

Figure 14 depicts the Go-Go selection technique [19]. In this case, a collisioner component receives the selectable objects and the virtualHand. The orientation of this virtual hand is directly mapped from a hand tracker in world coordinates (by means of the handOffset transformation). On the other hand, the virtual hand position is computed by means of the gogo component, which takes into account the hand and trunk positions and two parameters3. Again in this example, a feedback component changes the appearance of selected objects, and there are components in charge of object loading and view manipulation.
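The mapping the gogo component applies can be sketched with the non-linear arm extension of Poupyrev et al. [19]: within a threshold distance D from the trunk the virtual hand tracks the real hand 1:1, and beyond it the virtual distance grows quadratically with gain k. The two parameters correspond to D and k; function and parameter names here are our assumptions.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Go-Go arm extension: identity below threshold D, quadratic growth beyond it.
// realDist is the real hand's distance from the trunk; k is the gain parameter.
float gogoDistance(float realDist, float D, float k) {
    if (realDist < D) return realDist;
    float over = realDist - D;
    return realDist + k * over * over;
}

// Place the virtual hand along the trunk-to-hand direction at the mapped distance.
Vec3 virtualHand(const Vec3& trunk, const Vec3& hand, float D, float k) {
    Vec3 d{hand.x - trunk.x, hand.y - trunk.y, hand.z - trunk.z};
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len == 0.0f) return trunk;
    float s = gogoDistance(len, D, k) / len;
    return Vec3{trunk.x + d.x * s, trunk.y + d.y * s, trunk.z + d.z * s};
}
```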

One of the more complex techniques we have described is the World in Miniature selection technique (Figure 15) [21]. Objects are loaded by the Scene and copied by an ObjectCopier. From those copies, objectsForSelection determines which copies will be selectable and which will not. Both sets of copies are handled as ObjectSets and scaled. Selectable copies are given as the input set of objects for a rayCaster, which will report which object is selected by a ray4. Finally, objectSelById will output the objects whose copies were selected.
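The key idea in the miniature pipeline is an identifier round-trip: scaled copies keep the id of their originals, so a selection made on a copy can be mapped back to the full-size object. A minimal sketch, with the object model and function names as our assumptions:

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Obj { std::string id; float scale; };   // minimal stand-in for a scene object

// ObjectCopier plus scaling: miniature copies keep the id of their originals.
std::vector<Obj> copyAndScale(const std::vector<Obj>& scene, float factor) {
    std::vector<Obj> copies;
    for (const Obj& o : scene)
        copies.push_back(Obj{o.id, o.scale * factor});
    return copies;
}

// objectSelById-like lookup: map a selected copy back to its original.
const Obj* selectById(const std::vector<Obj>& scene, const std::string& id) {
    for (const Obj& o : scene)
        if (o.id == id) return &o;
    return nullptr;
}
```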

As an example of traveling, Figure 16 shows a gaze-directed steering technique [13]. A tracker's orientation is selected (by headSelector) and transformed into world coordinates (by means of headOffset). The positioner component computes a new position by means of such an orientation plus the current view position and a speed factor. In this example, the speed is defined by means of two buttons of a WiiDevice, which increment or reset to zero the current user's speed.
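The two-button speed control in this example can be sketched as a small state holder: one button adds a fixed increment, the other resets the speed to zero. The class name and increment value are our assumptions.

```cpp
#include <cassert>

// Two-button travel speed control, as wired from the WiiDevice buttons:
// one button adds a fixed increment, the other resets the speed to zero.
struct SpeedControl {
    float speed;
    float step;
    explicit SpeedControl(float increment) : speed(0.0f), step(increment) {}
    void onFaster() { speed += step; }
    void onStop()   { speed = 0.0f; }
};
```

The resulting speed would feed the positioner together with the current view position and the head orientation.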

4.1 Implementation

We have implemented most components on top of our C++ runtime environment, which is built on top of VRPN, OpenSG, Xerces, and VR Juggler. We have measured an overhead of 10% in our current implementation, computed as the average time of each cycle vs. the sum of averages for each component in a simple application. More work has to be done to compare the performance of our dataflow vs. the performance of a similar application in other formalisms.
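The overhead figure compares the measured average cycle time against the sum of per-component average times. A sketch of that arithmetic (the function name is our assumption):

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Dataflow overhead: how much longer the measured cycle is, in percent,
// than the sum of the per-component average times it contains.
double overheadPercent(double avgCycleMs, const std::vector<double>& componentAvgMs) {
    double sum = std::accumulate(componentAvgMs.begin(), componentAvgMs.end(), 0.0);
    return 100.0 * (avgCycleMs - sum) / sum;
}
```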

5 METRICS

We have performed two analyses of the components and applications we have designed so far. First, we want to show that the designed components are indeed reusable, from the point of view of the designed applications. Second, we want to test the reusability of these components with new applications and interaction techniques.

2 In the figure, vrmlScene.wrl and the other elements with a similar look are constants.

3 We omit some constants in these figures to avoid clutter.

4 Any other selection technique may be used; in this example we show just the basic components of a Ray Caster.

Figure 13: The Ray Casting Application.

Figure 14: The Go-Go Application.

5.1 Level of Reusability in Examples

Table 2 shows a summary of components used in the 12 applications we have designed, which implement the following interaction techniques: Ray Casting (RC), Two-Handed Pointing (TP), Flashlight (FL), Aperture (AP), Virtual Hand (VH), Go-Go (GG), World in Miniature (WM), Walking Camera in Hand (WH), Gaze-Directed Steering (GS), Pointing Torso (PT), Drawing a Path (DP), and Manipulating User Representation (MR).

We use 28 components in these applications, out of a total of roughly 50. Unused components are either devices or utility components we designed for completeness. Of these 28, only 9 are used in just one interaction technique. So far, we have reused more than 35% of the components we designed, which is a good percentage according to common practice. If we take into account just the used components, we have achieved 68% reusability.
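These figures can be checked directly: 28 − 9 = 19 components are reused, 19/28 ≈ 67.9% (the quoted 68%) over the used components, and 19/50 = 38% (the quoted "more than 35%") over all designed components. A tiny sketch of that arithmetic:

```cpp
#include <cassert>

// Reuse ratio: components used in more than one technique, over a base count.
double reusePercent(int used, int singleUse, int base) {
    return 100.0 * (used - singleUse) / base;
}
```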

Most of the applications use between 10 and 15 components, which we believe shows a good granularity level: fewer components might be difficult to reuse, while more components might be too many for programs of this size. Nevertheless, we could also notice some repetitive patterns due to common functionality for handling trackers, their translation to world coordinates, view handling, and feedback. If we omit these common components, we find that interaction techniques are implemented in terms of 3 to 8 components. Of particular interest is the low percentage of components that are unique to an application: on average, only 6% of the components are unique in these applications, which seems to indicate that the design space for interaction techniques is rather similar, with very few variations between techniques.

5.2 Extended Reusability

As a test of component reusability, we tried to design other applications with novel interaction techniques, apart from the ones we covered here. For that purpose, we took a small sample of 4 techniques from [1] (Object Picking, Identifiers, First Person, and Scripted Navigation). They are similar to some of the techniques we use, and we only needed minor changes (modifications in some input ports or changes in devices) in order to design them. In total, we made changes in 2% of the library.

6 CONCLUSIONS AND FUTURE WORK

We have presented a library of reusable components for the development of interaction techniques. The interaction components are combined in a dataflow with components that represent content and behavior, in order to create complex applications. These

Figure 15: The World In Miniature Application.

Figure 16: The Gaze Directed Application.

components have proven to be highly reusable among the selected interaction techniques from [5] and [1]. This library, the Eclipse-based IDE for InTml, and the InTml runtimes are available at http://intml.sourceforge.net.

There are several issues we plan to address in future work. We plan to continue our description of more interaction techniques, newer than the ones included in the books we used as reference. We want to include components that can address more complex applications, in order to show reusability when application-specific components are included. We also plan to complement previous usability studies we have performed. We have shown that non-programmers can understand InTml and build simple applications after a few hours of introduction [10]. We now want to do more studies on library reusability, and on how productivity with InTml compares to other formalisms. Finally, we plan to include in our IDE implementation other abstraction mechanisms of our formal description, such as composite components, which could help in the understanding of applications with reused parts.

ACKNOWLEDGEMENTS

Thanks to the anonymous reviewers for their very valuable comments.

REFERENCES

[1] J. Barrilleaux. 3D User Interfaces With Java 3D. Manning Publications, August 2000.

[2] R. Bastide and P. Palanque. A Petri net based environment for the design of event-driven interfaces. In G. De Michelis and M. Diaz, editors, Application and Theory of Petri Nets 1995, volume 935 of Lecture Notes in Computer Science, pages 66–83. Springer Berlin / Heidelberg, 1995. doi:10.1007/3-540-60029-9_34.

[3] B. Bederson, J. Grosjean, and J. Meyer. Toolkit design for interactive structured graphics. Software Engineering, IEEE Transactions on, 30(8):535–546, August 2004.

[4] J. Bouchet, L. Nigay, and T. Ganille. ICARE software components for rapidly developing multimodal interfaces. In Proceedings of the 6th international conference on Multimodal interfaces, ICMI '04, pages 251–258, New York, NY, USA, 2004. ACM.

[5] D. Bowman, E. Kruijff, J. J. LaViola Jr., and I. Poupyrev. 3D User Interfaces: Theory and Practice. Addison Wesley, July 2004.

[6] D. A. Bowman and L. F. Hodges. An evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments. In Proceedings of the 1997 symposium on Interactive 3D graphics, pages 35–ff. ACM Press, 1997.

Technique RC TP FL AP VH GG WM WH GS PT DP MR
device.Tracker 1 1 1 1 1 1 1 1 1 1 1 1
device.TrackerSelector 2 3 2 3 2 2 2 1 1 2 2 2
object.Cone 1 1
object.ObjectSelector 1 1 1 1 1 1
object.ObjectSet 2 1
object.Ray 1 1 1 1
object.Scene 1 1 1 1 1 1 1 1 1 1 1 1
object.SimpleObject 1 1
object.View 1 1 1 1 1 1 1 1 1 1 1 1
selection.ConeCaster 1 1
selection.Distance 1
selection.FeedbackToggle 1 1 1 1 1 1 1
selection.GoGoMapping 1
selection.LinearMapping 1
selection.ObjectCollisioner 1 1
selection.ObjectCopier 1 2
selection.ObjectSelectionByCopy 1
selection.ObjectSelector 1 1
selection.OffSetter 2 3 2 3 2 3 2 1 1 2 2 2
selection.PosToPosOrient 1
selection.RayCaster 1 1 1
device.wiiDevice 1 1
device.ButtonSelector 2 2
travel.SignalFloatInterpolator 1 1
travel.incrementedSpeed 1 1
travel.FloorRayCaster 1
travel.PositionInterpolator 1
travel.PlaneDrawer 1
travel.objectProxy 2

Table 2: Used Components in Sample Applications. Numbers indicate how many instances of a particular component an interaction technique has. Abbreviations are defined in Section 5.1.

[7] J. Chen and D. A. Bowman. Domain-specific design of 3d interaction techniques: An approach for designing useful virtual environment applications. Presence: Teleoper. Virtual Environ., 18:370–386, October 2009.

[8] R. Dachselt, M. Hinz, and K. Meißner. Contigra: an XML-based architecture for component-oriented 3D applications. In Proceedings of the seventh international conference on 3D Web technology, pages 155–163. ACM Press, 2002.

[9] P. Dragicevic and J.-D. Fekete. Support for input adaptability in the ICON toolkit. In Proceedings of the 6th international conference on Multimodal interfaces, ICMI '04, pages 212–219, New York, NY, USA, 2004. ACM.

[10] P. Figueroa, W. F. Bischof, P. Boulanger, H. J. Hoover, and R. Taylor. InTml: A dataflow oriented development system for virtual reality applications. Presence: Teleoper. Virtual Environ., 17(5):492–511, 2008.

[11] S. Huot, C. Dumas, P. Dragicevic, J.-D. Fekete, and G. Hegron. The MaggLite post-WIMP toolkit: draw it, connect it and run it. In Proceedings of the 17th annual ACM symposium on User interface software and technology, UIST '04, pages 257–266, New York, NY, USA, 2004. ACM.

[12] R. J. K. Jacob, L. Deligiannidis, and S. Morrison. A software model and specification language for non-WIMP user interfaces. ACM Trans. Comput.-Hum. Interact., 6:1–46, March 1999.

[13] G. D. Kessler, D. A. Bowman, and L. F. Hodges. The simple virtual environment library: An extensible framework for building VE applications. Presence: Teleoperators and Virtual Environments, 9(2):187–208, 2000.

[14] W. König, R. Rädle, and H. Reiterer. Interactive design of multimodal user interfaces. Journal on Multimodal User Interfaces, 3:197–213, 2010. doi:10.1007/s12193-010-0044-2.

[15] J.-Y. L. Lawson, A.-A. Al-Akkad, J. Vanderdonckt, and B. Macq. An open source workbench for prototyping multimodal interactions based on off-the-shelf heterogeneous components. In Proceedings of the 1st ACM SIGCHI symposium on Engineering interactive computing systems, EICS '09, pages 245–254, New York, NY, USA, 2009. ACM.

[16] E. Lecolinet. A molecular architecture for creating advanced GUIs. In Proceedings of the 16th annual ACM symposium on User interface software and technology, UIST '03, pages 135–144, New York, NY, USA, 2003. ACM.

[17] B. Myers, R. McDaniel, R. Miller, A. Ferrency, A. Faulring, B. Kyle, A. Mickish, A. Klimovitski, and P. Doane. The Amulet environment: new models for effective user interface software development. Software Engineering, IEEE Transactions on, 23(6):347–365, June 1997.

[18] B. A. Myers. A new model for handling input. ACM Trans. Inf. Syst., 8:289–320, July 1990.

[19] I. Poupyrev, M. Billinghurst, S. Weghorst, and T. Ichikawa. The go-go interaction technique: non-linear mapping for direct manipulation in VR. In Proceedings of the 9th annual ACM symposium on User interface software and technology, pages 79–80. ACM Press, 1996.

[20] R. Rieder, A. B. Raposo, and M. S. Pinho. A methodology to specify three-dimensional interaction using Petri nets. J. Vis. Lang. Comput., 21:136–156, June 2010.

[21] R. Stoakley, M. J. Conway, and R. Pausch. Virtual reality on a WIM: interactive worlds in miniature. In Conference proceedings on Human factors in computing systems, pages 265–272. ACM, 1995.

[22] Virtools. Virtools. http://www.virtools.com/index.asp, 2007.

[23] Web3D Consortium. Extensible 3D (X3D™) Graphics. Home Page. http://www.web3d.org/x3d.html, 2003.

[24] C. A. Wingrave, J. J. LaViola, Jr., and D. A. Bowman. A natural, tiered and executable UIDL for 3D user interfaces based on concept-oriented design. ACM Trans. Comput.-Hum. Interact., 16:21:1–21:36, November 2009.

[25] WorldViz. Vizard. http://www.worldviz.com/products/vizard/, 2010.

