
Interference measurement of Kinect for Xbox One

Andreas Kunz ∗, Luzius Brogli †, Ali Alavi ‡

ETH Zurich

Abstract

Microsoft Kinect is widely used for tracking the human body in a range of applications. Although Kinect for Xbox One allows for multi-user tracking, its limited range prevents its use in large spaces. Hence, using multiple Kinect sensors for large environments seems to be an appropriate solution, and it is therefore important to know whether multiple sensors can be used simultaneously in such applications without interfering with each other. In this paper, we investigate the effect of multiple Kinects on each other by performing measurements in different settings. Our results show that occasional interference can occur in specific constellations when the sensors face the same target. We recommend avoiding such constellations, or performing a simple interference measurement before using multiple sensors in a given setting.

Keywords: virtual reality, tracking, structured light, time of flight

Concepts: • Human-centered computing → Graphics input devices; • Ubiquitous and mobile computing systems and tools;

1 Introduction and Background

Multi-camera setups are widely used either to cover a large region of interest (ROI), or to get more data about a ROI for tasks such as 3D reconstruction by providing different angles of view. While using multiple video cameras is common practice in applications like surveillance, finding a proper setup for installing multiple active depth cameras can be much more complicated, because active cameras generate signals that might interfere with other cameras. Such interference is specifically undesirable for applications using multiple sensors to extend the ROI.

Even though Kinect for Xbox One (Kinect V2) uses a time-of-flight camera, as an active depth sensor it is still susceptible to interference errors in a multi-camera setting. Multiple Kinect V2 setups have already been used in research (e.g. [Geerse et al. 2015]), but the possibility of interference between multiple sensors seems to have been ignored there, although a detailed study of the working principle of Kinect V2 suggests it should not be. As an Amplitude Modulated Continuous Wave Time of Flight (AMCW ToF) camera, Kinect

∗ e-mail: [email protected]   † e-mail: [email protected]   ‡ e-mail: [email protected]

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). © 2016 Copyright held by the owner/author(s). VRST ’16, November 02–04, 2016, Garching bei München, Germany. ISBN: 978-1-4503-4491-3/16/11. DOI: http://dx.doi.org/10.1145/2993369.2996329

V2 emits a periodic infrared signal, p(t) = 1 + cos(ω₀t), and calculates depth based on the cross-correlation of the reflected signal, r(t) = ∫ p(z) h(t − z) dz, where h is the scene response function. This is all under the assumption that every pixel observes only one optical path [Bhandari et al. 2014]. This assumption may be violated, and interference problems may arise, in the presence of reflective surfaces or of other modulated infrared sources such as another Kinect V2. Even though [Bhandari et al. 2014] propose a solution to overcome this issue, their solution is only applicable to a preset number of interfering paths. Furthermore, we do not know whether their solution is implemented in the current Kinect V2 firmware. Hence, it seems that Kinect V2 sensors are susceptible to interference, and care should be taken when using multiple sensors. This paper thus describes an in-depth measurement using two Kinect V2 sensors, to investigate whether multiple Kinect V2 setups suffer from interference in practice, and how such problems can be minimized or avoided.
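To make the correlation-based depth recovery concrete, the following sketch simulates the standard four-bucket phase estimation for an AMCW ToF pixel and shows how a second modulated source on a different path biases the recovered depth. The 80 MHz modulation frequency and the ideal-correlation model are assumptions for illustration; the Kinect V2's actual frequencies and firmware processing are not published in detail.

```python
import math

C = 3e8                      # speed of light, m/s
F_MOD = 80e6                 # assumed modulation frequency (illustrative only)
OMEGA = 2 * math.pi * F_MOD

def correlation(tau, psi, amp=1.0, i_tau=None, i_amp=0.0):
    """Ideal correlation of the reflected signal amp*(1 + cos(OMEGA*(t - tau)))
    with the phase-shifted reference cos(OMEGA*t + psi), averaged over whole
    periods: (amp/2)*cos(OMEGA*tau + psi).  A second modulated source
    (e.g. another sensor) simply adds its own correlation term."""
    c = 0.5 * amp * math.cos(OMEGA * tau + psi)
    if i_tau is not None:
        c += 0.5 * i_amp * math.cos(OMEGA * i_tau + psi)
    return c

def estimate_depth(d_true, i_dist=None, i_amp=0.0):
    """Recover depth from four correlation samples (4-bucket phase estimation)."""
    tau = 2 * d_true / C                                   # round-trip delay
    i_tau = 2 * i_dist / C if i_dist is not None else None
    c0, c1, c2, c3 = (correlation(tau, psi, 1.0, i_tau, i_amp)
                      for psi in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2))
    phase = math.atan2(c3 - c1, c0 - c2) % (2 * math.pi)   # = OMEGA*tau for one path
    return C * phase / (2 * OMEGA)                         # d = c*phase/(2*OMEGA)

print(estimate_depth(1.0))                          # recovers ~1.0 m
print(estimate_depth(1.0, i_dist=1.2, i_amp=0.5))   # biased away from 1.0 m
```

With a single optical path the phase, and hence the depth, is recovered exactly; adding a half-amplitude interferer at 1.2 m pulls the estimate several centimeters off, which mirrors the single-path assumption stated above.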

2 Methodology

To measure the effect of multiple simultaneously active sensors on the accuracy and precision of the Kinect V2, a setup with two devices was used. With this setup, two series of measurements were performed. First, a measurement with only one active sensor was performed to measure the linear distance accuracy. The resulting data was taken as ground truth, to which the results from measurements with a second active sensor were compared.

In the second measurement series, we studied the influence of the angle between the two sensors, of the distance of the two sensors to each other and to the observed target, and of whether the sensors face each other or the target. In order to get the most accurate measurements without the influence of changing conditions between different setups, the data was collected from a single sensor, which was kept stationary throughout all setups, while the second sensor was moved to generate the desired situations. Details of the setup are further explained in section 2.1. In both cases, the measurement ROI was a flat 100 mm by 100 mm surface, covered with paper in order to reduce specular reflection (see Figure 1).
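Per setup, such a measurement reduces to aggregating depth readings over the ROI across several frames. A minimal sketch of that aggregation step follows; frame acquisition itself (e.g. via a Kinect SDK) is outside its scope, and the nested-list frame layout and ROI convention are assumptions.

```python
import statistics

def roi_mean_depth(frames, roi):
    """Mean and spread of depth readings over a rectangular ROI across frames.

    `frames` is an iterable of 2-D depth images (row-major nested lists, mm);
    `roi` is (top, left, height, width) in pixels.
    """
    top, left, h, w = roi
    samples = [frame[r][c]
               for frame in frames
               for r in range(top, top + h)
               for c in range(left, left + w)]
    return statistics.mean(samples), statistics.pstdev(samples)

# Two synthetic 4x4 "frames" with a tilted-plane pattern; 2x2 ROI at (1, 1).
frames = [[[1000 + r + c for c in range(4)] for r in range(4)] for _ in range(2)]
mean, spread = roi_mean_depth(frames, (1, 1, 2, 2))
print(mean, spread)
```

The mean over the ROI gives the per-setup depth value, while the spread indicates the measurement precision.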

2.1 Dual Sensor Measurements

For these measurements, the setup from Figure 1 was used, and two conditions were distinguished:

• Both sensors facing the target: here, three different measurements were performed, with sensor 2 being 0.5 meter, 1 meter, and 1.35 meter away from the target. For the first two measurements, angle α was varied from 0◦ to 80◦ in 10◦ steps. For the 1.35 m case, sensor 2 would be obstructed by sensor 1 at angles smaller than 10◦, so the measurements were done with α starting at 10◦ (see Figure 2(a)).

• Sensors facing each other: again, three different measurements were performed, with sensor 2 being 0.5 meter, 1 meter, and 1.5 meter away from sensor 1. For the first two cases (0.5 m and 1 m distance), angle α was varied from 0◦ to 90◦ in 10◦ steps. For the 1.5 m case, the sensor was obstructed by the target at angles larger than 50◦, hence the measurement was done with angles between 0◦ and 50◦ (see Figure 2(b)).

Figure 1: Interference measurement setup. (a) Sensors facing the same target; (b) sensors facing each other.

Figure 2: Dual sensor measurement setup. (a) Sensors facing the same target; (b) sensors facing each other.

3 Discussion and Results

In order to evaluate the effect of measurement distortions caused by the second sensor, the measured values were subtracted from the values acquired by the single-sensor measurement. The results are depicted in Figure 3.
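This evaluation is a per-angle subtraction of the single-sensor baseline. The sketch below uses purely illustrative numbers on the scale of the errors discussed in this section, not the paper's actual measured data:

```python
# Hypothetical per-angle mean depth readings in mm (illustrative values only).
baseline = {0: 1000.2, 10: 1000.1, 20: 1000.3, 30: 1000.2, 40: 1000.1}
dual     = {0: 1000.4, 10: 1004.9, 20: 1008.1, 30: 1006.2, 40: 1001.0}

# Depth error per angle: dual-sensor reading minus single-sensor baseline.
errors = {a: dual[a] - baseline[a] for a in baseline}
angle, err = max(errors.items(), key=lambda kv: abs(kv[1]))
print(f"largest depth error: {err:.1f} mm at {angle} degrees")
# -> largest depth error: 7.8 mm at 20 degrees
```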

Figure 3: Depth measurement error, measured at different angles and distances. (a) Sensors facing the same target; (b) sensors facing each other.

As shown in Figure 3(b), the absolute value of the error is below 2 mm when the sensors are facing each other; it can thus be neglected for most applications. When both sensors are facing the same target, a significant depth error can be observed when the angle between the two sensors is 10◦ to 40◦, as shown in Figure 3(a). Such errors can be as high as 8 mm when both cameras have the same distance to the target. It is important to note that the figures presented here are the worst-case results we observed; there were many instances of the same experiment where such dramatic errors were not observed.

Due to the closed-source software and unpublished details of the Kinect V2, it is not possible to interpret these results conclusively. A possible explanation for such interference errors is that, in some instances, the timing and intensity of the infrared light generated by the second sensor match what the first camera expects, which leads to an erroneous correlation of phase shift to light intensity, and hence to a depth error. This would explain why the error is significant only when the cameras' distances and angles are similar: when the cameras' distances or angles to the target differ significantly, the infrared light generated by the second sensor is discarded as noise. As such, it is recommended that, in multiple Kinect V2 installations, the cameras' constellations with respect to the target be significantly different. If such settings cannot be avoided, we recommend a simple interference measurement prior to running the main experiments.

4 Conclusion

Although for most applications the measurement errors using multiple Kinects are negligible, in some cases these errors can be disturbing. In order to avoid this randomly occurring error, one should make sure that the distances and/or angles of the sensors with respect to the target are significantly different. In particular, angles between 10◦ and 40◦ between two Kinects whose distances to the target are equal should be avoided. However, in many setups such a constellation is very unlikely, since placing a second Kinect V2 like this would neither increase the tracking volume nor significantly reduce shadowing effects.
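This placement guideline can be captured as a small check. The 10◦-40◦ band comes from the observations reported above; the 0.1 m distance tolerance is an assumption of ours, not a value from the measurements:

```python
def risky_constellation(angle_deg, dist1_m, dist2_m, dist_tol_m=0.1):
    """Flag two-sensor placements matching the observed interference cases:
    an inter-sensor angle of 10-40 degrees combined with roughly equal
    distances to the target.  The 0.1 m tolerance is an assumed threshold."""
    return 10 <= angle_deg <= 40 and abs(dist1_m - dist2_m) <= dist_tol_m

print(risky_constellation(20, 1.0, 1.05))   # True: similar distances, 20 degrees
print(risky_constellation(60, 1.0, 1.0))    # False: angle outside the 10-40 band
print(risky_constellation(20, 0.5, 1.35))   # False: clearly different distances
```

A placement that trips this check is exactly the kind of constellation for which we recommend a prior interference measurement.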

References

BHANDARI, A., FEIGIN, M., IZADI, S., RHEMANN, C., SCHMIDT, M., AND RASKAR, R. 2014. Resolving multipath interference in Kinect: An inverse problem approach. In IEEE SENSORS 2014 Proceedings, IEEE, 614–617.

GEERSE, D. J., COOLEN, B. H., AND ROERDINK, M. 2015. Kinematic validation of a multi-Kinect v2 instrumented 10-meter walkway for quantitative gait assessments. PLoS ONE 10, 10, e0139913.

MUTTO, C. D., ZANUTTIGH, P., AND CORTELAZZO, G. M. 2012. CW Matricial Time-of-Flight Range Cameras. Springer US, Boston, MA, 17–32.

SARBOLANDI, H., LEFLOCH, D., AND KOLB, A. 2015. Kinect range sensing: Structured-light versus time-of-flight Kinect. Journal of Computer Vision and Image Understanding 139, 1–20.

XIANG, S., YU, L., YANG, Y., LIU, Q., AND ZHOU, J. 2015. Interfered depth map recovery with texture guidance for multiple structured light depth cameras. Signal Processing: Image Communication 31, 34–46.

YANG, L., ZHANG, L., DONG, H., ALELAIWI, A., AND EL SADDIK, A. 2015. Evaluating and improving the depth accuracy of Kinect for Windows v2. IEEE Sensors Journal 15, 8, 4275–4285.

ZENNARO, S., MUNARO, M., MILANI, S., ZANUTTIGH, P., BERNARDI, A., GHIDONI, S., AND MENEGATTI, E. 2015. Performance evaluation of the 1st and 2nd generation Kinect for multimedia applications. In Proceedings of the 2015 IEEE International Conference on Multimedia and Expo (ICME 2015), IEEE, 1–6.
