
A stereoscopic viewer of the results of vessel segmentation in 3D magnetic resonance angiography images

Krzysztof Karolczak∗† and Artur Klepaczko∗

∗Lodz University of Technology, Institute of Electronics, 90-924 Lodz, ul. Wolczanska 211/215
†Inwedo Sp. z o.o., 90-441 Lodz, ul. T. Kosciuszki 101
Email: {krzysztofkarolczak, arthur.klepaczko}@gmail.com

Abstract—The next generation of HCI (human-computer interaction) devices currently entering the market creates possibilities for new ways of software control and 3D display. This study explores the benefits of using touchless navigation and stereoscopic 3D vision in professional medical applications. The designed system aids in MRA-driven medical diagnosis by providing a true 3D insight into volumetric models of the segmented vessels. Magnetic resonance angiograms were chosen as the primary imaging modality because a proper representation of vascular structures strongly influences the accuracy of diagnostic procedures. The real-3D vision is obtained with the aid of the Oculus Rift head-mounted device, providing an immersive virtual reality experience. The designed custom controller additionally incorporates low-latency head tracking in order to control the view and provide intuitive means to explore the 3D environment. Touchless navigation is introduced using the Leap Motion controller, a device specially designed for hand tracking with a resolution of up to 0.01 mm. Using a set of simple one-hand gestures, the user can precisely rotate and move the 3D model, naturally navigating through complex vascular structures. Furthermore, the viewer allows for quantification of segmented vessels, measuring the length, cross-sectional area and level of stenosis of the structures under investigation.

I. INTRODUCTION

The objective of the study presented in this paper was to design an innovative 3D viewer for magnetic resonance angiography (MRA) images that would allow medical staff to analyse diagnostic data, benefiting from a fully three-dimensional immersive experience and a touchless, intuitive interface. Furthermore, by providing a set of gesture-control tools, the proposed software should allow easy quantification of parameters such as vessel length, diameter and degree of stenosis for the provided vascular tree models.

A. Design concept

The notion of a new type of 3D medical viewer originates from two basic observations: 1) using two-dimensional displays significantly limits the way a 3D model can be presented, and 2) the current input devices are not well suited for navigation in three-dimensional space with 6 degrees of freedom (6 DoF).

Formerly, all diagnostic images were analysed using 2D slices, based solely on pixel information encoded in DICOM files. However, in the case of the vascular system and other complex structures it is extremely difficult to detect subtle abnormalities without changing the projection plane. This is especially apparent in MRA images, as most of the diagnostically relevant slices are perpendicular to the direction of the main arteries, which impedes a holistic overview.

With the development of vessel segmentation methods (e.g. [1], [2], [3]), 3D models became detailed enough to be considered anatomically accurate representations of blood vessels. Modern medical MRI and MRA viewers (e.g. [4]) can import 3D models produced by post-processing of DICOM data. The quality of such a representation is clearly visible in Fig. 1.

However, despite these advances the images are still being displayed on ordinary computer screens, which effectively means that diagnosticians are working with projections or cross-sections of 3D models. One of the basic assumptions of our project was to provide medical staff with a seamless 3D-viewing experience that allows the human body to be explored from the perspective of an inside observer. This led to the incorporation of an HMD-class device, the Oculus Rift, into the viewer, providing a real-3D display [5].

Furthermore, the input devices presently used, the mouse and keyboard, were designed with 2D navigation in mind and do not provide an intuitive way to explore a fully three-dimensional space. The new viewer had to allow intuitive interaction with the 3D environment. The widely popular Microsoft Kinect and PlayStation Move controllers prove suboptimal for precise navigation, as their gesture recognition and hand-tracking capabilities are limited. On the other hand, the new devices developed together with the open-source community, such as the mechanomyographic MYO [6], are still in early beta stage.

In late July 2013 a device known as the Leap Motion was introduced to the market. It is a touchless motion and gesture controller with extensive hand-tracking capabilities. The Leap Motion controller has received a lot of attention from the medical community due to its touchless control possibilities, allowing doctors to interact with computers during procedures without breaking sterility [7]. Therefore, this device was chosen as the primary input device for the designed viewer. Furthermore, the SDK supplied with the controller provides support for gesture recognition, which proved useful when creating the toolset for 3D model quantification.

Fig. 1. Reconstructed surface of a synthetic vessel system in the OsiriX software.

B. Project specification

Based on the state-of-the-art analysis presented above and taking into consideration new possibilities in HCI, the designed viewer should fulfil the set of requirements enumerated below:

• Ability to import 3D medical models in popular formats such as STL, FBX, DAE, 3DS, DXF and OBJ.
• Capability to create a real-3D display with an immersive virtual reality experience, provided by support for the Oculus Rift HMD. The goal is to form a 3D environment which gives users an accurate, diagnostically relevant representation of the models under investigation.
• A "look around" capability creating an intuitive way to explore 3D models. Owing to low-latency 3 DoF head tracking, the displayed image should precisely follow any head movement, enhancing the user's sense of presence in a surrounding 3D environment.
• Natural, touchless, gesture-based navigation allowing rotation, scaling and translation of the viewed model.
• Ability to quantify selected parts of the model. With the use of a gesture-based toolset, users should be able to seamlessly measure length and area and evaluate the degree of stenosis in the viewed vascular systems.
• Easy-to-meet hardware requirements, allowing use of the software on typical personal computers.

A 3D viewer providing all of the above-mentioned features will bring significant value to users of medical imaging software, especially in the case of diagnostic procedures based on analysis of MRA images.

II. IMPLEMENTATION

A. System architecture

The general architecture of the created system, together with the dependencies between modules, is presented in Fig. 2. The created viewer can import 3D models resulting from MRA post-processing. Immediately after a model is loaded into program memory, polygon reduction is performed in order to eliminate unnecessarily complex representations. This guarantees quick rendering without altering the shape of the model.
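As an illustration of such a polygon-reduction step, a simple vertex-clustering decimation can be sketched as follows (a hypothetical stand-in; the paper does not specify which reduction algorithm the viewer uses, and the grid cell size is an assumed parameter):

```python
import numpy as np

def decimate_vertex_clustering(vertices, faces, cell=1.0):
    """Vertex-clustering mesh decimation: snap vertices falling into the
    same grid cell to one representative, then drop degenerate faces."""
    # Assign each vertex to a grid cell; map distinct cells to new indices.
    cells = np.floor(vertices / cell).astype(np.int64)
    _, first_idx, inverse = np.unique(cells, axis=0,
                                      return_index=True, return_inverse=True)
    inverse = np.asarray(inverse).reshape(-1)
    new_vertices = vertices[first_idx]        # one representative per cell
    remapped = inverse[faces]                 # faces in the new index space
    # Keep only faces whose three corners remain distinct.
    keep = ((remapped[:, 0] != remapped[:, 1]) &
            (remapped[:, 1] != remapped[:, 2]) &
            (remapped[:, 0] != remapped[:, 2]))
    return new_vertices, remapped[keep]

# Toy example: near-coincident vertices collapse, degenerate faces vanish.
verts = np.array([[0, 0, 0], [0.1, 0, 0], [2, 0, 0], [2.1, 0, 0],
                  [0, 2, 0], [2, 2, 0]], dtype=float)
tris = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 5], [2, 5, 4]])
v2, f2 = decimate_vertex_clustering(verts, tris, cell=1.0)
```

A coarser grid removes more detail; for vascular meshes the cell size would be chosen well below the smallest vessel diameter so that the shape is preserved, as the text requires.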

The vascular structures are then imported into a 3D scene and properly illuminated. Custom controllers for the Oculus Rift and Leap Motion (described later) are used as interfaces allowing for interactions within the created space. The Oculus Rift controller collects information from the gyroscope, magnetometer and accelerometer in the HMD and translates the data into the relative position of the camera controller. This controller, composed in fact of two cameras, allows for stereoscopic vision. The output image is transformed using an Oculus warp shader and projected to the user via the HMD. A custom gesture controller is used to read information from the Leap Motion about hand positions, in particular palm coordinates and rotation, the number of visible fingers and fingertip coordinates. With simple movements the user can rotate, scale and translate the model, which is done by the described controller performing real-time analysis of data from the Leap Motion. The same dataset is used to detect the more complex gestures connected with the quantification toolset.

B. Hardware components

The hardware configuration used during development of the project consists of a Leap Motion controller (model LM-010, final consumer design) and an Oculus Rift (2013 Development Kit edition), connected to a standard PC with a DirectX 9 compatible graphics card (see Fig. 3).

1) Oculus Rift: The Oculus Rift is a relatively new virtual reality head-mounted display (HMD) offering stereoscopic 3D vision with a very wide field of view and low-latency head tracking. These properties make the Oculus Rift one of the most innovative VR devices capable of creating a fully immersive 3D experience. The version of the Rift used during project development was equipped with a 7-inch LCD screen (a mass-production phone display panel) with a total 1280×800 resolution, 16:10 aspect ratio, 24-bit colour depth and 60 Hz refresh rate. The screen is placed perpendicularly to the observer at a mechanically adjustable distance.

The image from the panel is projected to the user through a set of lenses (with a fixed separation distance, although interchangeable to cater for 3 basic dioptric corrections). The display for each eye is first transformed with a barrel distortion (as seen in Fig. 4) in order to compensate for the pincushion effect of the lenses in the headset.
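This pre-distortion is typically expressed as a radial polynomial warp around the lens centre; the sketch below uses illustrative coefficients close to the DK1 defaults (an assumption, not values taken from the paper):

```python
import numpy as np

# Radial polynomial barrel warp used to pre-distort each eye's image:
# r' = r * (k0 + k1*r^2 + k2*r^4 + k3*r^6), measured from the lens centre.
K = (1.0, 0.22, 0.24, 0.0)  # illustrative coefficients, close to DK1 defaults

def barrel_warp(xy, k=K):
    """Map normalised view coordinates (lens centre at the origin) to
    pre-distorted coordinates that the lens will straighten out."""
    xy = np.asarray(xy, dtype=float)
    r2 = np.sum(xy * xy, axis=-1, keepdims=True)
    scale = k[0] + k[1] * r2 + k[2] * r2**2 + k[3] * r2**3
    return xy * scale

# Points further from the lens centre are pushed outwards more strongly,
# compensating the pincushion effect of the optics.
centre = barrel_warp([0.0, 0.0])
edge = barrel_warp([0.7, 0.0])
```

In the real renderer this warp runs as a fragment shader over the framebuffer (the "Oculus warp shader" mentioned above), not per-vertex on the CPU.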

The optics allow the creation of a spherically-mapped image with a field of view (FoV) exceeding 110 degrees diagonally and 90 degrees horizontally. Such a FoV is sufficient to completely fill the typical instantaneous field of view of the human eye and is crucial for creating an immersive 3D environment.


Fig. 2. Overview of the designed system architecture.

Fig. 3. Peripheral devices used for the system development.

The Rift allows accurate 3 DoF head tracking. The device is equipped with the Adjacent Reality Tracker, open-source hardware further improved by Oculus VR, allowing 1000 Hz sampling and orientation calculation with less than 1 ms latency. It uses a three-axis gyroscope, which senses angular velocity, a three-axis magnetometer, which senses magnetic fields, and a three-axis accelerometer, which senses accelerations, including gravitational. This allows calculation of the absolute (relative to Earth) head position without drift. An instantaneous response in adjusting the virtual perspective is very important for the user experience, since high latency can cause motion sickness and headaches. Therefore, latency issues were additionally addressed in the designed software, as described in Sect. II-C3.

Fig. 4. Preview of the display panel in the Oculus Rift showing simple tube models with the pincushion effect compensation.

2) Leap Motion: The Leap Motion is a USB peripheral device that provides information about the position and movement of hands with 0.01 mm resolution in an approximately 600 × 600 × 600 mm cubic space above the controller. It is capable of simultaneously tracking the location of one or two hands and up to ten fingers.

The Leap Motion is equipped with 2 monochromatic CMOS IR cameras and 3 infrared LEDs. The cameras register 300 frames per second of 3D dot patterns generated by the diodes. There are no hardware components suggesting that any complex image processing is done inside the device. The two captured 2D images are sent via the USB cable, and all of the computations are done by dedicated software on the host machine.

C. Software design

The created software consists of a 3D model importer and renderer, a user interface used for navigation and measurements in 3D space, and custom controllers integrating the Leap Motion and Oculus Rift devices.

1) Oculus Rift controller: The custom Oculus Rift con-troller serves two main purposes: creating stereoscopic imagewith high frame rate (i.e. above 60 FPS) and reading sensordata to adjust camera position based on the user head move-ment (see Fig. 5a). While evaluating the proper operation ofthis module it was necessary to introduce additional adjust-ments related to latency reduction and distortion correctiondue to pincushion and chromatic aberration effects. In thefollowing, the procedures of image formation and latencyreduction are explained in detail.

2) Creation of stereoscopic image: The Oculus Rift requires the 3D scene to be rendered in split-screen stereo, as shown in Fig. 4. In this configuration each eye sees the image from one half of the screen, which implies that the scene has to be rendered twice. It is important to note that fast stereo volume rendering [8] or other reprojection rendering techniques, which create a stereoscopic view from a single fully rendered image, cannot be used with the Oculus, as they were shown to produce artefacts on the edges of objects. Unlike 3D TVs, rendering inside the Rift does not involve off-axis or asymmetric projections. Instead, the projection axes are parallel to each other. This simplifies the view configuration, allowing the creation of a set of two cameras shifted by a distance proportional to the typical human interpupillary distance (approximately 64 mm).

The proper projection is set up following a sequence of steps:
1) Forming the viewport to cover the screen area for each eye.
2) Determining the field of view φfov and aspect ratio a.
3) Calculating the centre projection matrix P.
4) Adjusting P according to the lens separation distance.
5) Matching the eye location by adjusting the view matrix V.

Setting the viewport is straightforward; for example, for the left eye it is simply the area bounded by (0, 0, HResolution/2, VResolution), where HResolution and VResolution are the horizontal and vertical resolution of the HMD screen (1280 × 800 in the Development Kit).
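For instance, both per-eye viewports follow directly from the screen resolution (a trivial sketch; the (x, y, width, height) tuple layout is an assumed convention):

```python
def eye_viewports(h_res=1280, v_res=800):
    """Split the HMD screen into left/right viewports as (x, y, w, h)."""
    left = (0, 0, h_res // 2, v_res)
    right = (h_res // 2, 0, h_res // 2, v_res)
    return left, right

left, right = eye_viewports()  # left is (0, 0, 640, 800)
```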

Neglecting distortion for now, the aspect ratio and field of view can be calculated as:

a = \frac{HResolution/2}{VResolution}, (1)

\varphi_{fov} = 2 \arctan\left(\frac{VScreenSize}{2 \cdot EyeToScreenDistance}\right), (2)

where VScreenSize relates to the physical height of the HMD screen (the panel measures 0.14976 × 0.0936 m) and EyeToScreenDistance equals 0.041 m. The projection matrix used to create the 2D image can be formulated as follows:

P = \begin{bmatrix} \frac{1}{a \tan(\varphi_{fov}/2)} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan(\varphi_{fov}/2)} & 0 & 0 \\ 0 & 0 & \frac{Z_f}{Z_n - Z_f} & \frac{Z_f Z_n}{Z_n - Z_f} \\ 0 & 0 & -1 & 0 \end{bmatrix}, (3)

where Z_f and Z_n are the clipping-plane depth coordinates. Assuming that no clipping is desired in our application, the matrix can be further simplified by substituting Z_n = 0 and Z_f = 1. The evaluation of the projection matrix is a common task, and both OpenGL and Direct3D offer dedicated GPU-optimised functions to handle the calculations.

The problem at this stage is that the projection centre obtained from P coincides with the centre of each eye's half of the screen, not the centre of the lens. This can be corrected with a transformation:

P' = \begin{bmatrix} 1 & 0 & 0 & \pm h \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} P, (4)

where h and -h are the absolute horizontal offsets that account for the lens separation for the left and the right eye, respectively:

h = 1 - \frac{2 d_{sep}}{HScreenSize}, (5)

with d_{sep} = 64 mm being the fixed lens separation distance.

The last step is to transform the non-stereoscopic view matrix V. As the centre of the transform falls between the eyes, it has to be shifted (separated) horizontally for each camera to match the location of the eye:

V' = \begin{bmatrix} 1 & 0 & 0 & \pm d_{sep}/2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} V. (6)
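Eqs. (1)-(6) can be combined into a short sketch (a minimal numpy illustration using the Development Kit constants quoted in the text; a production renderer would rely on the GPU-optimised library routines instead):

```python
import numpy as np

# Development Kit constants quoted in the text
H_RES, V_RES = 1280, 800
V_SCREEN = 0.0936          # physical screen height [m]
H_SCREEN = 0.14976         # physical screen width [m]
EYE_TO_SCREEN = 0.041      # eye-to-screen distance [m]
D_SEP = 0.064              # lens/eye separation [m]

def x_offset(dx):
    """4x4 matrix adding a horizontal offset dx, as in Eqs. (4) and (6)."""
    m = np.eye(4)
    m[0, 3] = dx
    return m

a = (H_RES / 2) / V_RES                                  # Eq. (1)
fov = 2 * np.arctan(V_SCREEN / (2 * EYE_TO_SCREEN))      # Eq. (2)

Zn, Zf = 0.0, 1.0        # simplification adopted in the text
t = np.tan(fov / 2)
P = np.array([[1 / (a * t), 0,     0,              0],
              [0,           1 / t, 0,              0],
              [0,           0,     Zf / (Zn - Zf), Zf * Zn / (Zn - Zf)],
              [0,           0,     -1,             0]])  # Eq. (3)

h = 1 - 2 * D_SEP / H_SCREEN                             # Eq. (5)
P_left, P_right = x_offset(+h) @ P, x_offset(-h) @ P     # Eq. (4)

V = np.eye(4)                                            # non-stereoscopic view
V_left, V_right = x_offset(+D_SEP / 2) @ V, x_offset(-D_SEP / 2) @ V  # Eq. (6)
```

The two projection matrices differ only in the horizontal shift that re-centres each half-image on its lens, and the two view matrices place the cameras one lens-separation distance apart.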

3) Latency minimisation: The Oculus Rift integrates a three-axis gyroscope, accelerometer and magnetometer, allowing for 3 DoF absolute head tracking with drift correction. The position estimation is done internally in the device during a process named sensor fusion: gyroscope data is obtained, and information from the other sensors is then used to increase accuracy and eliminate drift. All of this is done by dedicated hardware because the speed of calculation is crucial for a proper immersive experience. The data is sampled at 1 kHz and the total calculation time is below 1 ms.
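The fusion principle can be illustrated for a single axis with a complementary filter (a simplified software stand-in; the tracker's actual in-hardware algorithm is not published in this detail):

```python
import math

def complementary_tilt(pitch, gyro_rate, accel_z, accel_x, dt, alpha=0.98):
    """One complementary-filter step for a single (pitch) axis: integrate
    the gyro for responsiveness, then blend in the accelerometer's
    gravity-based tilt estimate to cancel long-term drift."""
    gyro_est = pitch + gyro_rate * dt          # fast but drifting
    accel_est = math.atan2(accel_x, accel_z)   # slow but drift-free
    return alpha * gyro_est + (1 - alpha) * accel_est

# With the head at rest (zero rate, gravity straight down), any initial
# estimation error decays toward zero over repeated 1 kHz updates.
pitch = 0.2
for _ in range(5000):
    pitch = complementary_tilt(pitch, gyro_rate=0.0,
                               accel_z=9.81, accel_x=0.0, dt=0.001)
```

The magnetometer plays the analogous drift-correcting role for yaw, where gravity gives no reference.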

One of the biggest challenges in providing a realistic 3D experience is minimising the time between a head movement and the updated image showing up on the screen (motion-to-photon latency). An interval of 60 ms is a popularly accepted threshold for VR latency [9], and most commercially available devices operate just below this value, e.g. the Sony HMZ-T3W averaging 52 ms. However, for most people any delay above 40 ms is clearly noticeable, while for especially sensitive users longer exposure to VR with latency above 20-30 ms causes disorientation or nausea within minutes.

Table I presents the main latency factors in the end-to-end communication chain. The cumulative sum assumes the worst-case scenarios; the true latency is usually significantly lower, as the LCD write and pixel switching happen simultaneously. The average latency of the Oculus development kit at 60 FPS is approximately 30-50 ms, depending mostly on the colour change. Switching a single LCD pixel from black to dark brown was reported to take less than 5 ms, but a full change from black to white may take up to 30 ms.

In order to reduce the motion-sickness effect, the software was optimised with latency in mind. The scenes were rendered at a higher FPS value, 60-120 frames per second depending on model complexity. A polygon reduction algorithm was implemented during model import in order to decrease the complexity without altering the shape of the vascular structures. The graphical swap-chain buffers were limited to two, the on-screen and the off-screen buffer, and eventually the command buffer size was reduced to a single frame. The latter had to be done because render commands are typically batched to minimise GPU transfers and smooth out variability in rendering times, which conversely causes a significant delay of about 1 in 20 frames. All of the above optimisations brought the rendering delay down to a value typical for the Rift DK (in the range of 30 ms) and acceptable for most users.

TABLE I
OCULUS RIFT LATENCY FACTORS

Stage       Event                              Duration [ms]   Worst case [ms]
Start       Position data is calculated        0-1             1
Transit     Data is sent via USB to the host   1-2             3
Processing  Latest frame is rendered           0-17            20
Processing  Controller writes a frame to LCD   15-20           40
Processing  LCD pixels switch colours          0-30            70
End         Frame presented to the user        -               70

Fig. 5. 3D scene control using the Oculus Rift and Leap Motion devices. Head tracking rotates the system camera (a), a closed fist moves the model (b), while an open hand either rotates it (c) or moves the camera along the optical axis (d).
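The cumulative worst-case column of Table I can be checked by a running sum of the per-stage worst durations:

```python
# Worst-case per-stage durations from Table I [ms]; the table's last
# column is the running (cumulative) total of these values.
stages = [
    ("Position data is calculated", 1),
    ("Data is sent via USB to the host", 2),
    ("Latest frame is rendered", 17),
    ("Controller writes a frame to LCD", 20),
    ("LCD pixels switch colours", 30),
]

cumulative = []
total = 0
for name, worst in stages:
    total += worst
    cumulative.append(total)
# The frame reaches the user after at most 70 ms in the worst case.
```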

After reaching the limit of practical latency optimisation, the designed 3D viewer already provided a strongly immersive experience. However, longer work with the HMD (20 minutes or more) still caused symptoms of fatigue or disorientation. While the actual latency would be very hard to decrease further due to hardware and technological limitations, it was possible to compensate for some of it by using predictive tracking.

The applied approach uses a Kalman filter to estimate what the head position will be at the time the corresponding frame is rendered, and relies on the simplification that head movement is a piecewise-constant-acceleration process. The algorithm is based on the results of a widely cited PhD thesis [10]. The improvement in effective latency is clearly perceptible and increases the comfortable working time in the 3D environment to over 1 hour for most people.
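The prediction step can be illustrated without the full Kalman machinery: under the piecewise-constant-acceleration assumption the head angle is simply extrapolated one motion-to-photon interval ahead (the rate and acceleration would in practice come from the filtered sensor data; the numbers below are made up):

```python
def predict_angle(theta, omega, alpha, latency):
    """Extrapolate the head angle `latency` seconds ahead under the
    piecewise-constant-acceleration assumption:
    theta(t + L) = theta + omega * L + 0.5 * alpha * L**2."""
    return theta + omega * latency + 0.5 * alpha * latency ** 2

# Head turning at 2 rad/s and accelerating at 5 rad/s^2; predicting
# 30 ms ahead hides most of the motion-to-photon latency.
predicted = predict_angle(0.10, omega=2.0, alpha=5.0, latency=0.030)
```

The Kalman filter's role in the full scheme is to supply smoothed, noise-robust estimates of omega and alpha; the extrapolation itself is this one line.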

4) Leap Motion controller: This software module is responsible for the translation of tracking data into hand position, 3D model control, gesture recognition and measurement. The software supplied with the Leap Motion runs as a background service (on Windows) or daemon (on Mac and Linux) and is responsible for gathering data from the controller device over USB. The created 3D viewer application obtains motion tracking data (such as the x, y, z coordinates of the centre of the palm and of the fingers, velocity, the number of visible fingers, and the plane of the palm and its direction) by accessing the service through a native C# interface. The average Leap Motion frame rate is around 80 FPS in the recommended Balanced setting; it can be increased to 160 FPS in High Speed Mode at the expense of overall accuracy. The 3D viewer scene is rendered at 60-120 FPS and the tracking data is polled each time a frame is prepared. This approach results in a steady rendering rate and provides more than sufficient time resolution and precision of hand tracking in both Leap Motion modes.

After a series of experiments it was determined that it is more natural to use the hands to control the model rather than the camera. Therefore, moving the hand right or left results in a yaw rotation of the model, while moving it forward or back initiates a pitch rotation (Fig. 5b). Roll was implemented by rotating the plane of the hand in one of the directions. Lifting or lowering the hand allows the user to zoom in and out (Fig. 5c). Initially the magnification was done by scaling the model; however, this appeared to be counterintuitive, as during that process users often lost sight of the structures they were interested in examining in detail. As suggested by the users, sometimes a slight translation of the model is needed, and it is not easily achievable with a combination of rotation and head movements. A function was therefore introduced where the user can "grab" the model by closing his/her hand into a fist and precisely move the vascular structure (Fig. 5d).
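The mapping described above can be sketched as a per-frame dispatch on the polled tracking data (an illustrative Python sketch only; the viewer itself uses the native C# interface, and the `HandFrame` fields are hypothetical stand-ins for what the Leap service reports):

```python
from dataclasses import dataclass

@dataclass
class HandFrame:
    """Hypothetical stand-in for one polled Leap Motion frame."""
    palm_dx: float    # left/right hand displacement since the last frame
    palm_dy: float    # lifting/lowering displacement
    palm_dz: float    # front/back displacement
    palm_roll: float  # rotation of the palm plane
    fingers: int      # number of extended fingers seen

class Model:
    def __init__(self):
        self.yaw = self.pitch = self.roll = 0.0
        self.pos = [0.0, 0.0, 0.0]
    def translate(self, dx, dy, dz):
        self.pos = [self.pos[0] + dx, self.pos[1] + dy, self.pos[2] + dz]

class Camera:
    def __init__(self):
        self.zoom = 0.0

def apply_hand_frame(model, camera, f, gain=0.5):
    """Map one tracking frame to scene updates following the scheme in
    the text: a closed fist grabs and moves the model; an open hand
    rotates it or zooms the camera."""
    if f.fingers == 0:                    # fist: grab and translate
        model.translate(f.palm_dx, f.palm_dy, f.palm_dz)
        return
    model.yaw += gain * f.palm_dx         # left/right -> yaw
    model.pitch += gain * f.palm_dz       # front/back -> pitch
    model.roll += gain * f.palm_roll      # tilted palm -> roll
    camera.zoom += gain * f.palm_dy       # lift/lower -> zoom

model, camera = Model(), Camera()
apply_hand_frame(model, camera, HandFrame(0.2, 0.0, 0.0, 0.0, fingers=5))
apply_hand_frame(model, camera, HandFrame(0.1, 0.0, 0.0, 0.0, fingers=0))
```

Polling one such frame per rendered frame, as the text describes, keeps model motion locked to the 60-120 FPS render loop regardless of the Leap Motion's own frame rate.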

It also quickly became apparent that a method to stop the movement of the camera was required, as well as a way to reset the cameras and model to their initial positions. The former was achieved by monitoring the number of fingers: whenever the user forms a fist, all of the rotation stops.

The Leap Motion SDK provides high-level recognition for four gestures: circle, swipe, key tap and screen tap. From this set, the circle gesture was chosen as a shortcut to reset the scene to its initial position. When a continuous circular movement of one of the fingers is recognised, the controller monitors its progress, and whenever two full circles are drawn it restores the initial layout. A custom gesture is used to activate the measurement toolkit. It is initiated when only the thumb and index finger are visible (as shown in Fig. 6). The tip positions of both fingers are then obtained from the frame data and translated into points in the 3D viewer space. A line is drawn between the points, forming a virtual ruler. Whenever the line passes through a vascular structure, the length of the segment inside the model is computed and displayed on the screen.

Furthermore, the software calculates the minimal cross-section through the object in the plane containing the drawn line and displays its area along with the minimal, maximal and average diameter. A simplified stenosis coefficient, calculated as the ratio between the current and the maximum diameter, is also shown. Because of the properties of 3D vision, the notification area cannot be located in any of the corners of the screen, as that makes it harder to perceive due to its presence in the peripheral vision range and due to the distortions influencing text readability. All of the dimensions presented to the user are in millimetres, scaled with respect to the true anatomical size of the model.
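The reported quantities can be sketched from a list of sampled cross-section diameters (a hypothetical example; the sampling of cross-sections from the mesh is omitted, and the "current" diameter is taken as the minimal sampled one):

```python
def cross_section_stats(diameters_mm):
    """Summarise sampled cross-section diameters (in mm) of a vessel
    segment: min/max/mean, plus the simplified stenosis coefficient
    defined in the text as current over maximal diameter."""
    d_min, d_max = min(diameters_mm), max(diameters_mm)
    mean = sum(diameters_mm) / len(diameters_mm)
    return {"min": d_min, "max": d_max, "mean": mean,
            "stenosis": d_min / d_max}

# Hypothetical diameters sampled along a narrowed artery segment:
# the 2.0 mm reading against a 4.0 mm reference gives a coefficient of 0.5.
stats = cross_section_stats([4.0, 3.6, 2.0, 3.8, 4.0])
```

A coefficient near 1 indicates a healthy segment, while values well below 1 flag a narrowing worth closer inspection.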

III. SUMMARY

The created 3D viewer is primarily intended for the analysis of magnetic resonance angiograms by medical personnel. However, the designed program can import a wide variety of 3D models and can therefore readily be used with segmentation results from other medical imaging modalities such as PET, SPECT, CT, ECG, USG and others. The viewer allows for precise navigation in 3D space, seamless transition from one part of the model to another, and easy adjustment of the view in order to reveal fragments obscured in other perspectives.

Fig. 6. Stenosis measurement with 3D Viewer.

This makes the software suitable for identifying vascular diseases and pathologies such as atherosclerosis. Together with the created quantification toolkit, the viewer allows the evaluation of stenoses, occlusions, aneurysms (dilatations of the artery wall posing a risk of rupture) and other vascular abnormalities.

It is also worth noting that, due to the low overall price of the peripheral devices (below $400), the 3D viewer can be an affordable choice for both public hospitals and private medical clinics. Most of the currently available head-mounted displays adapted for diagnostic purposes cost around $15,000 while having similar or inferior technical properties. The price of these devices currently limits their number to just a few in big medical centres, while the 3D viewer solution can be introduced into every consultation and operating room at a fraction of the cost.

ACKNOWLEDGMENT

This work was supported by the Polish National Science Centre grant no. 2013/08/ST7/00943.

REFERENCES

[1] A. F. Frangi, W. J. Niessen, K. L. Vincken, and M. A. Viergever, "Multiscale vessel enhancement filtering," in Medical Image Computing and Computer-Assisted Intervention - MICCAI'98, ser. Lecture Notes in Computer Science, W. M. Wells, A. Colchester, and S. Delp, Eds. Springer Berlin Heidelberg, 1998, vol. 1496, pp. 130-137.

[2] M. Strzelecki, P. Szczypinski, A. Materka, and A. Klepaczko, "A software tool for automatic classification and segmentation of 2D/3D medical images," Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 702, pp. 137-140, 2013.

[3] A. Skoura, T. Nuzhnaya, and V. Megalooikonomou, "Integrating edge detection and fuzzy connectedness for automated segmentation of anatomical branching structures," International Journal of Bioinformatics Research and Applications, vol. 10, no. 1, pp. 93-109, 2014.

[4] A. Rosset et al., "Navigating the fifth dimension: Innovative interface for multidimensional multimodality image navigation," Radiographics, vol. 26, no. 1, pp. 299-308, 2006.

[5] Oculus VR Inc., "Oculus Rift: Step into the game," Kickstarter. Available: https://www.kickstarter.com/projects/1523379957/oculus-rift-step-into-the-game, September 2012.

[6] D. P. Saha, "Design of a wearable two-dimensional joystick as a muscle-machine interface using mechanomyographic signals," Ph.D. dissertation, Virginia Polytechnic Institute and State University, 2013.

[7] TEDCAS, "Natural user interfaces for healthcare." Available: http://www.tedcas.com/.

[8] T. He and A. Kaufman, "Fast stereo volume rendering," in Proceedings, IEEE Conf. Visualization '96, 1996, pp. 49-56.

[9] R. S. Allison et al., "Tolerance of temporal delay in virtual environments," in Proceedings, IEEE Conf. Virtual Reality, 2001, pp. 247-254.

[10] R. T. Azuma, "Predictive tracking for augmented reality," Ph.D. dissertation, University of North Carolina at Chapel Hill, 1995.

