This paper is included in the Proceedings of the 12th USENIX Symposium on Networked Systems

Design and Implementation (NSDI ’15). May 4–6, 2015 • Oakland, CA, USA

ISBN 978-1-931971-218

Open Access to the Proceedings of the 12th USENIX Symposium on

Networked Systems Design and Implementation (NSDI ’15)

is sponsored by USENIX

WiDeo: Fine-grained Device-free Motion Tracing using RF Backscatter

Kiran Joshi, Dinesh Bharadia, Manikanta Kotaru, and Sachin Katti, Stanford University

https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/joshi


WiDeo: Fine-grained Device-free Motion Tracing using RF Backscatter

Kiran Joshi, Dinesh Bharadia, Manikanta Kotaru, Sachin Katti

krjoshi, dineshb, mkotaru, [email protected]

Abstract

Could we build a motion tracing camera using wireless communication signals as the light source? This paper shows we can: we present the design and implementation of WiDeo, a novel system that enables accurate, high-resolution, device-free human motion tracing in indoor environments using WiFi signals and compact WiFi radios. The insight behind WiDeo is to mine the backscatter reflections from the environment that WiFi transmissions naturally produce, to trace where reflecting objects are located and how they are moving. We invent novel backscatter measurement techniques that work in spite of the low bandwidth and dynamic range of WiFi radios, and new algorithms that separate the moving backscatter from the clutter that static reflectors produce and then trace the original motion that produced the backscatter, in spite of the fact that it could have undergone multiple reflections. We prototype WiDeo using off-the-shelf software radios and show that it accurately traces motion even when there are multiple independent human motions occurring concurrently (up to 5), with a median error in the traced path of less than 7 cm.

1 Introduction

Fine-grained human motion tracing, i.e., the ability to trace the trajectory of a moving human hand or leg or even the whole body, is a general capability that is useful in a wide variety of applications. For example, it can be used for gesture recognition and virtual touchscreens (e.g., Kinect-style natural user interfaces), activity recognition (e.g., controlling the Nest thermostat depending on the intensity of human activity), monitoring of young infants and the elderly, or security applications such as intruder detection. Motivated by these applications, the computer vision community has developed a number of depth-sensing-based systems (e.g., Kinect) to implement motion tracing capabilities in cameras. However, these devices are limited because they have a constrained field of view (around 2–4 m range with a 60 degree aperture) and do not work in non-line-of-sight scenarios, preventing their use in many applications such as whole-home activity recognition, security, and elderly care.

To tackle these limitations, recent work, namely RF-IDraw [43], has built a motion tracing system using wireless signals. The idea is that users would wear RFID

Figure 1: WiDeo in operation: The compact WiFi AP in the study integrates WiDeo's motion tracing functionality, and can reconstruct the hand movement made by humans in the living room. WiDeo traces motion even though the AP is separated by a wall and does not have a LOS path to the humans, and doesn't require that the humans have any RF devices on them.

tags, and the motion tracing system would generate transmissions and then listen to reflections of wireless signals from these tags. RF-IDraw then infers the underlying hand motion from changes in reflection signal parameters such as angle of arrival over time. RF-IDraw demonstrates good accuracy, and since it uses lower frequencies than light (the 900 MHz RFID band, whereas visible light is at 600 THz), it works in non-line-of-sight (NLOS) scenarios and in the dark. However, RF-IDraw has two limitations that restrict its deployability. First, RF-IDraw requires the user whose motion is being traced to wear a special RFID tag on her hands. However, users are accustomed to motion tracing using systems such as Kinect that do not require the user to have any special hardware on them, and changing user habits can be hard. Second, the tracing system requires large antenna arrays of eight antennas with a separation of 8λ, which in their current implementations translates to an antenna array distance of nearly 2.62 m. Expecting users in homes to deploy antenna arrays that might span almost an entire room is a big hurdle.

Fig. 1 depicts our goal, which is to design a device-free, compact motion tracing system. By device-free we mean that the humans whose motion is being traced do not need to have any devices on them, whether RFID tags or phones. By compact we mean that the motion tracing is implemented on standard WiFi or LTE APs (albeit with minor modifications in hardware and software), and the APs have antenna arrays that they would have had as standard APs anyway. Thus the system is as compact as an AP that is already being deployed. Finally, we would like the system to be non-intrusive: it should be integrated into WiFi and LTE APs that people anyway deploy in their homes, and reuse existing packet transmissions for fine-grained motion tracing.

The above requirements pose unique challenges. First, since the system needs to be device-free, it can only rely on the natural reflections of the transmitted signals that human limbs produce. These are relatively weak compared to the ones from the RFID tags that RF-IDraw uses, and reflections from different objects in the environment cannot be easily distinguished, since they are all slightly distorted copies of the same transmitted signal (each RFID tag has its own unique ID, which allows RF-IDraw to distinguish different moving hands because the tags are different). Second, the fact that the system uses a compact antenna array with at most four antennas and a regular spacing of λ/2 makes achieving high spatial accuracy difficult. As the RF-IDraw paper notes, regularly spaced, compact antenna arrays struggle to resolve the spatial angles of incoming signal reflections.

We present WiDeo, a device-free, compact motion tracing system with standard AP antenna arrays. WiDeo only needs 4 antennas per AP, with a spacing of λ/2, which translates to an antenna array length of 18 cm for WiDeo-integrated WiFi APs. At a high level, WiDeo uses the AP's own transmitted signals as a flash to light up the scene, and then analyzes the natural reflections of these transmitted WiFi communication signals from the environment that arrive back at the AP over time to trace any motion that's occurring. WiDeo accomplishes motion tracing through three main components which operate in sequence:

Backscatter Sensor: The sensor analyzes the composite reflected signal received at the WiDeo AP (referred to as backscatter) to tease apart the individual reflections coming from each significant reflector in the environment, and calculates each reflection's amplitude, time of flight (ToF) and angle of arrival (AoA). Our key contribution here is a novel algorithm that accurately estimates these backscatter components in spite of the constraint that the humans are device-free and the limited spatial resolution of compact antenna arrays. Our key insight is to exploit the natural sparsity that exists in indoor environments; as several empirical studies on indoor MIMO [16, 19] have shown, the number of significant reflectors in an environment is fairly small. WiDeo exploits this insight to accurately measure the backscatter parameters using a sparsity-aware optimization algorithm.

Second, WiDeo must tolerate limited dynamic range, which causes strong reflections to swamp weak ones, and limited sampling bandwidth, which hides reflections spaced closely in time. Typical WiFi sampling of 80 Msps implies a resolution of 12.5 ns, or about 6 feet. Our novel algorithms separate weak and closely spaced reflections despite the limitations of commodity radios.

Declutterer: Reflectors abound in indoor environments, and most of them will be static. The declutterer analyzes the raw set of reflection parameters estimated by the backscatter sensor and clusters them into groups that correspond to reflections from static and moving reflectors. Further, it eliminates the static reflectors, since they are not useful for motion tracing, and enables WiDeo to focus specifically on reflections arising from moving objects.

Motion Tracing: This component of WiDeo analyzes the reflections arising from moving objects to predict the underlying motion that could have produced that sequence of reflections and their parameters. We design a novel statistical and sequential estimation framework that predicts the motion that might have taken place, then estimates the changes in reflection parameters the predicted motion would have produced, and compares them with the actual estimated reflection parameters from the backscatter sensor to continuously refine WiDeo's estimate of the motion that occurred.

We design and implement a prototype of WiDeo using WARP radios and a simulation environment. The radios run a standard WiFi OFDM PHY using up to 40 MHz, and use 4 antennas with a spacing of 6 cm for an overall length of 18 cm. We conduct experiments in indoor environments to demonstrate the accuracy of WiDeo's motion tracing. We show that WiDeo can accurately trace multiple sets of fine-grained motion with a median tracing error of less than 7 cm, which is comparable to RF-IDraw's tracing error of 5.5 cm. Further, the motion tracing has very high resolution: WiDeo achieves the same accuracy even when the multiple humans performing the motion are as close as 2 feet from each other, which, to the best of our knowledge, no prior RF-based motion tracing system has demonstrated.

2 Related Work

Fine-grained motion tracing: Vision-based systems such as [51, 4] make use of depth sensors (e.g., Kinect) and infrared cameras (e.g., Wii) to trace the fine-grained motion of a user and enable applications such as gesture recognition, virtual touch screens, etc. WiDeo, unlike solutions based on depth imaging or infrared, does not require line of sight to work.

RF-based systems like [43] and sensor-based systems like [23, 26] perform accurate motion tracing but require instrumentation of users. WiDeo, in contrast, achieves accurate fine-grained motion tracing in a device-free manner.


RF-based coarse motion tracking and gesture recognition: Recent work such as WiTrack and others [34, 7, 6, 5] has shown the ability to coarsely track full-body motion (not fine-grained motion of human limbs) using radio waves. Other approaches like [35, 24, 33, 30, 49, 45] track human motion using ultra-wideband (UWB) signals. All of these approaches are also device-free, but unlike these systems, WiDeo is the first device-free fine-grained motion tracing system that can accurately reconstruct the detailed trajectory of a user's free-form writing or gesturing in the air, where the motion may span only a few tens of centimeters. Such free-form tracing capability is not supported by prior work in RF-based gesture recognition or motion tracking. For example, [34] presents a state-of-the-art WiFi-based interface, yet it only supports the detection and classification of a predefined set of nine gestures. Moreover, many of these systems [6, 5, 35, 24, 33, 30, 49, 45] require GHz of bandwidth, unlike WiDeo, which works with regular WiFi bandwidths.

There have been approaches like [48, 50, 27, 36] which use existing WiFi infrastructure, with no hardware modifications, to achieve device-free human localization and coarse motion tracking. They use coarse information about the environment in the form of Received Signal Strength Indicators reported by WiFi NICs, and require extensive war-driving. In contrast, WiDeo requires only minor changes to existing WiFi/LTE APs and re-uses the spectrum allocated for communication, performing fine-grained motion tracing using reflections of signals that would have been sent for data communication anyway.

Motion clustering techniques: WiDeo also builds on theoretical work on motion segmentation, clustering and classification [41]. These works target vision applications that use visible light, and deal with taking a collection of pixels that represent the motion and understanding the underlying motion that occurred. WiDeo, on the other hand, has to deal with RF signal reflections, which pose unique challenges such as multiple reflections, noisier measurements, and compact, limited sensors (antenna arrays).

Indoor Localization: A large body of work, ranging from classic RSSI-based techniques [15, 9, 47, 37] to recent antenna-array-based techniques [46, 25, 20], exploits already available WiFi infrastructure to provide indoor localization services for radios, achieving impressive localization accuracy of a few decimeters. Another line of approaches uses a single moving antenna to simulate an antenna array [29]. However, WiDeo differs from all of them in two fundamental respects. First, it precisely traces fine-grained motion, rather than a static location. Second, it is device-free: the traced object does not need to have any RF transmitters on it.

3 Design

WiDeo's goal is to achieve accurate device-free motion tracing of moving objects. To realize this, WiDeo, like a standard ToF camera, incorporates four main components:

Flash: This is the light source used to light up the scene; in WiDeo, this is simply the transmission that the AP in which WiDeo is housed is sending for standard communication. In other words, the wireless transmissions used for communicating packets act as the flash for WiDeo.

Backscatter Sensor: This component looks at the backscatter arising from the environment when the AP's transmission gets reflected and arrives back at the AP. The sensor teases out the individual signals emanating from each reflector in the environment and estimates each reflection's intensity, angle of arrival and relative time of arrival. The corresponding components in a standard camera are the image sensors, which capture the light (i.e., the backscatter) from objects in the scene and form a picture of it.

Declutterer: The captured backscatter contains many reflections from static objects, which act as clutter to the reflections originating from the moving object WiDeo wishes to trace. The declutterer component figures out which of the reflections are from objects WiDeo doesn't care about and eliminates them, so that motion tracing can focus only on reflections from moving objects.

Motion Tracer: This component looks at the reflections coming from moving objects over time to predict the actual physical motion that could have produced that sequence of reflections.

We omit the description of the flash component since that is a standard AP transmitter. We describe each of the other three components in detail next. For now, assume that the AP's receiver can listen to all the reflections from the environment even though it is transmitting at the same time; we describe how we leverage recent work on full duplex to tackle that challenge in § 3.2.2.

3.1 Backscatter Sensor

The sensor's main challenge is to estimate the parameters of each reflection that makes up the received signal. The reflected signals arrive at the AP with different times of flight (ToF), amplitudes and angles of arrival (AoA), but the receiver only obtains the sum of the signals. Let's assume L reflectors are present and that each reflector k applies a unique unknown distortion function f_k(x(t)) to the transmitted signal x(t). The overall backscatter signal y(t) arriving back at the AP can then be written as:

y(t) = \sum_{k=1}^{L} f_k(x(t))    (1)


The backscatter sensor's goal is to estimate these functions f_k and then calculate the ToF, amplitude and AoA of the signals reflected from each of these reflectors. As written, Eq. 1 appears intractable: all we know is the transmitted signal x(t) and the overall received signal y(t). How might we tease out the individual reflections? WiDeo makes two novel observations to solve this under-constrained problem:

Reflector Sparsity: First, WiDeo posits, based on recent empirical evidence [16, 19], that the number of significant reflectors in an indoor environment is limited. While there may be many objects, those that actually produce reflections strong enough to be visible in the 40 dB of effective dynamic range typical of WiFi radios are not numerous. This phenomenon has been extensively documented in empirical wireless communication studies of MIMO performance, which critically depends on the number of independent reflectors in an environment [16, 19]. In WiDeo's case, this means that the number of reflectors that could have contributed significantly to the overall signal is limited.

Narrowband Transformations: The second key observation is that WiDeo uses narrowband communication signals and radios as the flash/light source. By narrowband we mean that the signals generated or received by the WiDeo device (the AP) are filtered to pass only the bandwidth used for communication, in conformance with FCC regulation. For example, if we are using a WiFi channel of width 40 MHz at center frequency 2.42 GHz, then a passband filter of width 40 MHz centered at 2.42 GHz is applied at the transmitter and the receiver. Filtering by a passband filter can be modeled as convolution with a sinc pulse of the same bandwidth in the time domain [1]. So the reflected signal (after including the attenuation and delay) is convolved with a sinc pulse, and the signal that arrives back from a single reflector is given by:

f_k(x(t)) = \left( \alpha_k x(t - \tau_k) \right) \otimes \mathrm{sinc}(Bt)    (2)

where B is the communication bandwidth of the signal, \alpha_k is the complex amplitude, \tau_k is the overall delay of the reflection, i.e., the time of flight (ToF) of the k-th reflector, and \otimes represents the convolution operator [39]. Eq. 2 can equivalently be written as:

f_k(x(t)) = \left( \alpha_k \mathrm{sinc}(B(t - \tau_k)) \right) \otimes x(t)    (3)

If there are L reflectors, then all L reflections undergo different attenuations and ToFs, add up over the air, and then get convolved with a sinc pulse. Therefore the overall signal is given by:

y(t) = \left( \sum_{k=1}^{L} \alpha_k \mathrm{sinc}(B(t - \tau_k)) \right) \otimes x(t).    (4)
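To make the model concrete, here is a minimal NumPy sketch of the forward model in Eq. 4 (and its sampled form in Eq. 6). The bandwidth, sampling rate, and reflector parameters are arbitrary illustrative values, not taken from the paper, and NumPy's normalized `np.sinc` stands in for the sinc pulse:

```python
import numpy as np

def sinc_channel(alphas, taus, B, Ts, N):
    """Sample h[n] = sum_k alpha_k * sinc(B*(n*Ts - tau_k)) for n = -N..N."""
    t = np.arange(-N, N + 1) * Ts
    return sum(a * np.sinc(B * (t - tau)) for a, tau in zip(alphas, taus))

# Arbitrary example: 3 reflectors in a 40 MHz channel sampled at 80 Msps.
B, Ts = 40e6, 1 / 80e6
alphas = [0.8, 0.3 + 0.2j, 0.1]        # complex reflection amplitudes
taus = [20e-9, 35e-9, 90e-9]           # times of flight (seconds)
h = sinc_channel(alphas, taus, B, Ts, N=64)

# The received backscatter is the transmitted baseband signal convolved
# with this channel (Eq. 4); x here is a stand-in for OFDM samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
y = np.convolve(x, h)
```

The peak of |h| lands near the sample closest to each reflector's delay, which is exactly why closely spaced reflectors blur together at WiFi sampling rates.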

The sensor now first calculates the overall transformation h(t) that the transmitted signal x(t) has undergone, i.e., y(t) = h(t) \otimes x(t), where h(t) is the sum of the transformations applied by all the reflectors. This is classic channel estimation, used in standard communications (after all, every receiver estimates the channel that has transformed the transmitted signal in order to decode it). We refer the reader to [8] for a review of the different techniques that can be used.
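As a toy illustration of this channel estimation step (not the estimator the paper uses; it defers to [8]), an L-tap channel can be recovered from a known transmitted sequence by least squares over a convolution (Toeplitz) matrix:

```python
import numpy as np
from scipy.linalg import toeplitz

def estimate_channel(x, y, L):
    """Least-squares estimate of an L-tap channel h from y = conv(x, h) + noise."""
    # Full convolution matrix: column k of X is x delayed by k samples.
    col = np.concatenate([x, np.zeros(L - 1, dtype=x.dtype)])
    row = np.zeros(L, dtype=x.dtype)
    row[0] = x[0]
    X = toeplitz(col, row)
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

rng = np.random.default_rng(1)
x = rng.standard_normal(128)                      # known transmitted samples
h_true = np.array([1.0, 0.0, 0.5, 0.0, -0.2])     # illustrative 5-tap channel
y = np.convolve(x, h_true) + 0.01 * rng.standard_normal(132)
h_est = estimate_channel(x, y, L=5)
```

With a long enough known sequence, the estimate converges to the true taps up to the noise floor.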

However, WiDeo's problem is considerably harder than standard channel estimation, which only cares about the overall transformation. Although WiDeo knows the overall channel h(t), it needs to figure out the amplitudes and time shifts of the sinc pulses that sum up to produce it. The equation that WiDeo has to solve is therefore:

h(t) = \sum_{k=1}^{L} \alpha_k \mathrm{sinc}(B(t - \tau_k))    (5)

We can rewrite the equivalent equation in the digital domain (after all, WiDeo works in the baseband domain after ADC sampling) as:

h[n] = \sum_{k=1}^{L} \alpha_k \mathrm{sinc}(B(nT_s - \tau_k)),    (6)

where T_s is the sampling period of the ADC. WiDeo's goal is to solve the above equation to determine \alpha_k and \tau_k for all reflections.

To tackle this, we now exploit the sparsity observation that the number of significant reflectors in an environment is limited to a handful (typically on the order of 10–15). Specifically, we attempt to find the smallest number (less than 20 in our implementation) of scaled and shifted sinc pulses that could have summed up to produce the overall channel response. Mathematically, we are attempting to solve the following problem:

\min \sum_n \left( h[n] - \sum_k \alpha_k \mathrm{sinc}(B(nT_s - \tau_k)) \right)^2 + \lambda_r |\alpha|_0
\text{s.t. } \tau_k \geq 0, \ |\alpha_k| \leq 1, \ k = 1:L, \ n = -N:N.    (7)

Note that the above problem is similar to classic problems in compressive sensing [17, 42, 14]. As in compressive sensing, we are trying to find the minimum number of non-zero components (each component corresponds to a reflector) and the corresponding scaling and shifting factors that best explain the observed channel h[n]. Sparsity in the number of reflectors is coerced by the term \lambda_r |\alpha|_0, where \lambda_r is a positive regularization weight and |\cdot|_0 is the number of non-zero terms in the amplitude vector. However, there is one major difference: WiDeo is trying to find the best sparsest combination of parameterized continuous basis functions (the sinc pulses parameterized by continuous shift factors), whereas classic compressive sensing finds the sparsest combination of discrete, finite-sized vectors that produces some overall vector. We omit the mathematical details here for brevity, but refer the reader to a large body of literature on solving such sparse estimation problems [18, 40, 31]. WiDeo's contribution is to show that the backscatter sensing problem can be formulated using sparsity and compressive sensing intuition.

3.1.1 What if the reflectors are closely spaced?

The above description made no mention of how closely spaced the reflectors are. For example, if two reflectors are a foot apart in range, their reflections will arrive at the AP within two nanoseconds of each other (wireless signals travel about a foot per nanosecond, and the extra foot is traversed twice on the round trip). But sampling rates of wireless communication radios are at best around 100 Msps (mega-samples per second), which means that two samples are spaced 10 ns apart. How then could WiDeo estimate the parameters of two reflectors whose reflections are an order of magnitude closer in time than the sampling period? Even if two reflections are closely spaced in time because their reflectors are almost at the same distance from the AP, they are likely to be at different spatial angles (otherwise they would be the same reflector!). So the spatial dimension provides the ability to separate reflections in space when they are close in time. The heuristic works in the other direction too: if two reflectors are at the same AoA (because they are on the same radial line), they are likely at different delays and can be separated that way.

How do we use this insight to separate out reflections? The intuition is that if the WiDeo AP has an antenna array (typical APs have 4 antennas), then the specific AoA of each reflection imposes a constraint on how the phase of that reflection changes across space. Specifically, if the antennas are laid out equidistant at spacing d in a straight line, the so-called uniform linear array (ULA), and the AoA is \theta, then the relative phase between the signal at any two consecutive antennas is given by \phi(\theta) = 2\pi d \sin(\theta)/\lambda, where \lambda is the wavelength of the RF carrier. Assuming there are four antennas in WiDeo's AP, we call the vector [0, \phi(\theta), 2\phi(\theta), 3\phi(\theta)] of phase differences of all the antennas with respect to the first antenna the relative phase constraint vector.
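A small sketch of the relative phase constraint vector, using a 4-antenna λ/2 array at a 2.42 GHz carrier as in WiDeo's setting (the specific AoA is an arbitrary illustrative value):

```python
import numpy as np

def relative_phase_vector(theta_deg, d, lam, num_ant=4):
    """[0, phi, 2*phi, 3*phi] for a ULA, with phi = 2*pi*d*sin(theta)/lambda."""
    phi = 2 * np.pi * d * np.sin(np.radians(theta_deg)) / lam
    return phi * np.arange(num_ant)

lam = 3e8 / 2.42e9        # wavelength at 2.42 GHz, about 12.4 cm
d = lam / 2               # half-wavelength antenna spacing
v = relative_phase_vector(30.0, d, lam)   # AoA of 30 degrees
```

At θ = 30° with d = λ/2, the per-antenna phase step is 2π · (1/2) · sin 30° = π/2, so the vector is [0, π/2, π, 3π/2].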

In general, when more than two backscatter signals are present, each of them arrives at all four antennas, but depending on their AoAs, their relative phase constraint vectors differ. WiDeo uses this insight in the following way: in addition to finding the best sparse solution as described by Eq. 7, WiDeo imposes the additional constraint that the estimated sparse components strictly follow the phase vector constraint imposed by the ULA structure, leading to the following problem for \Psi antennas:

\min \sum_m \sum_n \left( h_m[n] - \sum_k \alpha_k e^{-i(m-1)\phi(\theta_k)} \mathrm{sinc}(B(nT_s - \tau_k)) \right)^2 + \lambda_r |\alpha|_0
\text{s.t. } \tau_k \geq 0, \ |\alpha_k| \leq 1, \ k = 1:L, \ n = -N:N, \ m = 1:\Psi.

The e^{-i(m-1)\phi(\theta_k)} term in the objective function encodes the phase constraint that arises from a specific AoA. In essence, while many signals can fit the time-domain constraint given by Eq. 7, only a few of them can also satisfy the relative phase constraint vector, which further limits the solution space and hence increases the accuracy of our estimates despite the closeness of these signals in time. The relative phase constraint vector of a ULA has a one-to-one relationship with AoA, so through this process we simultaneously estimate the AoA of the backscatter signals in addition to their amplitude and ToF.
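A single-component special case of this idea: once the per-antenna complex amplitudes of one backscatter component are known, its AoA can be recovered by scanning candidate steering vectors. This is a much-simplified stand-in for the joint delay-and-angle optimization above, with all numbers illustrative:

```python
import numpy as np

def estimate_aoa(h_per_ant, d_over_lam=0.5):
    """Pick the AoA whose ULA steering vector best matches the per-antenna
    phases of a single backscatter component."""
    m = np.arange(len(h_per_ant))
    best, best_score = None, -np.inf
    for theta in np.arange(-90.0, 90.5, 0.5):       # candidate angles (deg)
        phi = 2 * np.pi * d_over_lam * np.sin(np.radians(theta))
        steer = np.exp(-1j * m * phi)               # expected phase progression
        score = np.abs(np.vdot(steer, h_per_ant))   # correlation magnitude
        if score > best_score:
            best, best_score = theta, score
    return best

# One reflection at 25 degrees observed on a 4-antenna lambda/2 array.
true_phi = 2 * np.pi * 0.5 * np.sin(np.radians(25.0))
h = 0.7 * np.exp(-1j * np.arange(4) * true_phi)
theta_hat = estimate_aoa(h)
```

With λ/2 spacing the mapping from phase step to sin θ is unambiguous over (−90°, 90°], which is why the scan has a single peak.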

To summarize: using the above technique, the sensor outputs a set of reflections, each with an associated three-tuple of parameters: amplitude, ToF and AoA. The next step is eliminating the numerous reflections from static objects that act as clutter for the motion tracing problem, which we describe below.

3.2 Declutterer

Reflectors abound in the environment, and their reflections end up cluttering the backscatter, making it hard for WiDeo to focus on the reflections arriving from the moving object being traced. Tracing accuracy can be greatly improved if this clutter is eliminated. There are two kinds of clutter, in decreasing order of harmfulness. The first is reflections from nearby objects whose relative strength w.r.t. the moving object's reflection is greater than the dynamic range of the WiDeo receiver. In this case, the reflection from the moving object is completely lost in the quantization noise and motion cannot be traced. The second is clutter whose strength is within the dynamic range relative to the moving object's reflection. Here information is not lost, but it becomes harder for the tracing algorithm to recover the original motion. WiDeo's declutterer handles and eliminates both kinds of clutter. We start by describing how to handle the second kind by identifying which reflections are from moving objects, and then describe how to eliminate the rest of the clutter, including the nearby reflectors.

3.2.1 Eliminate Reflections of Static Objects

WiDeo uses a heuristic to loosely identify reflections that are likely to have come from moving objects. The basic idea is to look across sequences of backscatter sensor measurements, as shown in Fig. 2, and then determine which reflections have changed in value and which have not. The reflections that have continuously changed their parameters (amplitude, AoA and ToF) will include reflections from moving objects. Everything else is classified as static clutter that has to be eliminated.
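The heuristic can be sketched as a simple thresholding rule over a window of already-associated snapshots. The tolerances, units, and toy values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def split_static_moving(snapshots, tol=(0.5, 1.0, 0.5)):
    """Label components whose (amplitude_dB, AoA_deg, ToF_ns) stay within
    `tol` across every consecutive snapshot pair as static clutter.
    Assumes components are already associated across snapshots (same index)."""
    snaps = np.asarray(snapshots, dtype=float)   # (frames, components, 3)
    deltas = np.abs(np.diff(snaps, axis=0))      # per-pair parameter changes
    return np.all(deltas <= np.array(tol), axis=(0, 2))

# Toy window: component 0 is static, component 1 drifts in AoA and ToF.
window = [
    [[-40.0, 10.0, 20.0], [-55.0, 30.0, 45.0]],
    [[-40.1, 10.2, 20.1], [-55.2, 33.0, 47.0]],
    [[-40.0, 10.1, 20.0], [-55.1, 36.5, 49.5]],
]
mask = split_static_moving(window)    # -> [True, False]
```

The static mask marks what the declutterer discards; everything else is handed to the motion tracer.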

Figure 2: Backscatter components obtained from a simulated hand movement in a typical indoor scenario using ray tracing software [2]: (a) all backscatter components, (b) static backscatter, (c) moving backscatter. The backscatter components collected in each time interval are presented as an image snapshot whose horizontal and vertical axes correspond to ToF (delay) and AoA respectively; each colored pixel corresponds to a backscatter component. Snapshots stacked one over the other correspond to sets of backscatter components obtained in consecutive time intervals (frames). The majority of backscatter components are contributed by the static environment; these are shown in the same color to provide contrast with the moving backscatter.

The key task, then, is to look at snapshots of backscatter over time, associate the backscatter parameters that we believe come from the same reflector, and then apply the above heuristic. Each snapshot is made up of as many backscatter points as there are reflections, and each point is associated with a three-tuple of amplitude, ToF and AoA. WiDeo keeps track of a moving window of such backscatter snapshots (in our current implementation the last 10 snapshots are maintained). The first step is to associate points generated by the same reflector between every two successive snapshots, even if the reflector moved between those snapshots. To do so, we invent a novel point association algorithm across snapshots based on minimizing the amount of change between consecutive snapshots.

Identification of Static Reflections: The algorithm starts by calculating the pairwise distance between every pair of backscatter points in successive snapshots. Distance is defined as the absolute differences in the three parameters (amplitude, ToF and AoA), squared and summed after appropriate normalization. Note that this metric is calculated for all pairs of points, so there are n² distances, where n is the number of backscatter points in a snapshot. The goal is to figure out the specific pairings where the points in each pair of snapshots are generated by the same backscatter reflector.

Our key insight is that for static objects, the points corresponding to backscatter reflections from that object in successive snapshots should be at zero distance from each other, because by definition the object did not move and the associated parameters did not change. Further, even for points that correspond to moving reflectors, given how slow human motion is relative to the length of a backscatter snapshot (a millisecond), the distance between points in successive snapshots that correspond to the moving object is small. So if we can pair the points up such that the overall sum of the distance metrics for the paired points is the minimum among all possible pairings, then very likely we will have associated the right sets of points together.
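This pairing step can be sketched with SciPy's `linear_sum_assignment`, a Hungarian-style minimum-weight matching solver; the distance weights and the toy components below are illustrative, not the paper's normalization:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment   # Hungarian-style solver

def associate(prev, curr, weights=(1.0, 1.0, 1.0)):
    """Pair components of two snapshots by minimizing the total squared,
    weighted parameter distance over (amplitude, ToF, AoA)."""
    p = np.asarray(prev, float)[:, None, :]   # shape (n, 1, 3)
    c = np.asarray(curr, float)[None, :, :]   # shape (1, n, 3)
    cost = (((p - c) * np.asarray(weights)) ** 2).sum(axis=2)  # n x n distances
    rows, cols = linear_sum_assignment(cost)  # minimum-weight perfect matching
    return dict(zip(rows.tolist(), cols.tolist()))

# Two static components plus one that moved slightly; curr is shuffled.
prev = [(-40, 20.0, 10.0), (-55, 45.0, 30.0), (-60, 70.0, -20.0)]
curr = [(-60, 70.1, -20.0), (-40, 20.0, 10.0), (-55, 47.0, 33.0)]
match = associate(prev, curr)   # -> {0: 1, 1: 2, 2: 0}
```

Static components pair up at near-zero cost, and the slightly moved component still pairs with its counterpart because its distance remains far smaller than any cross-pairing.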

How do we determine the right point association between successive snapshots? This is a combinatorial assignment problem: we take the distances between all pairs of points as input and pick the set of pairs that minimizes the overall sum of the distance metric. A naive algorithm would enumerate all possible assignments of point pairs, which requires evaluating n! assignments for snapshots with n points. To reduce the complexity, we turn to a classic algorithm in combinatorial optimization known as the Hungarian algorithm [28], which runs in polynomial time. We omit a full description of the algorithm for brevity; however, the algorithm is best visualized in terms of a bipartite graph G = (F1, F2, E), where points from the first snapshot form the vertex set F1, points from the second snapshot form the vertex set F2, and the edge set E consists of all possible edges between vertices in the two sets. The weight of each edge is the distance metric between the backscatter parameters corresponding to the points the edge connects. The goal of our algorithm is to find a matching with minimum cost, as shown in Fig. 3.

Figure 3: This figure illustrates the application of the Hungarian algorithm to a subset of backscatter components obtained in the experiment narrated in Fig. 2. The left side represents the backscatter components in two successive snapshots. The color of each pixel represents the value of α (power in dBm), θ (in degrees), or τ (in ns) according to the appropriate row. We define a distance metric between each component in the first (top) snapshot and each component in the second (bottom). The distances thus obtained are represented as edges with appropriate weights (not shown in the figure for clarity). We want to find the matching with minimum weight in this bipartite graph. Applying the Hungarian algorithm yields the least-weight matching shown on the right, thus providing a way to associate backscatter components in the two snapshots.

Eliminating Static Clutter: The next step is to eliminate the clutter caused by static backscatter reflectors. This step follows immediately from the above computation: we identify the pairs of points whose distance metric is close to zero and stays so for at least a fixed number of snapshots (typically around 10 snapshots, corresponding to those reflectors being static for 10ms). When we find such points, we declare them part of the static clutter. These points are then eliminated from the snapshots, and the only points left are those the algorithm believes come from moving reflectors.
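For concreteness, the association step can be sketched as follows. The (amplitude, ToF, AoA) tuples and the unit normalization below are made up for illustration; this sketch enumerates all n! pairings (the naive baseline), whereas the actual system uses the polynomial-time Hungarian algorithm for the same minimum-cost matching.

```python
# Illustrative sketch of snapshot point association. Each backscatter point
# is an (amplitude, ToF, AoA) tuple; we pair points between two successive
# snapshots so that the total distance metric is minimized.
from itertools import permutations

def distance(p, q, scale=(1.0, 1.0, 1.0)):
    """Sum of squared per-parameter differences, normalized per axis."""
    return sum(((a - b) / s) ** 2 for a, b, s in zip(p, q, scale))

def associate(snap1, snap2, scale=(1.0, 1.0, 1.0)):
    """Return the pairing (tuple of snap2 indices) with minimum total cost."""
    n = len(snap1)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(distance(snap1[i], snap2[perm[i]], scale) for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

# Three reflectors: two static (identical parameters), one moved slightly.
snap1 = [(-40.0, 12.0, 30.0), (-55.0, 25.0, -10.0), (-60.0, 40.0, 75.0)]
snap2 = [(-55.0, 25.0, -10.0), (-41.0, 12.5, 31.0), (-60.0, 40.0, 75.0)]
perm, cost = associate(snap1, snap2)
print(perm)  # (1, 0, 2): point 0 of snap1 matches point 1 of snap2, etc.
```

In practice one would replace the permutation loop with a Hungarian-algorithm routine (e.g., SciPy's `linear_sum_assignment`), which finds the same matching in polynomial time.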

The static clutter elimination step also naturally provides detection of a new motion starting. For example, suppose we start with a completely static environment; in the steady state the declutterer block won't report any parameters because eventually all of the components will be declared static and eliminated as clutter. When a new motion starts and generates new backscatter components, the sensor reports these parameters to the declutterer, which classifies them as moving points. Such points are grouped together and passed to the motion tracing block, described in Section 3.3, as a new moving object that needs to be traced.
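The static-clutter test just described can be sketched as follows; the distance threshold, window length, and helper names are ours and purely illustrative.

```python
# A component is declared static clutter once its association distance
# stays near zero for a fixed number of consecutive snapshots.
STATIC_EPS = 1e-3   # "close to zero" distance threshold (assumed)
STATIC_RUN = 10     # snapshots (~10 ms at one snapshot per ms) to qualify

def update_static_counts(counts, pair_distances):
    """counts[k] tracks how long component k has looked static; returns
    the set of component ids to eliminate as static clutter."""
    clutter = set()
    for k, d in pair_distances.items():
        counts[k] = counts[k] + 1 if d < STATIC_EPS else 0
        if counts[k] >= STATIC_RUN:
            clutter.add(k)
    return clutter

# Component 0 never moves (distance 0 every snapshot); component 1 keeps
# moving, so its static run length is reset each snapshot.
counts = {0: 0, 1: 0}
clutter = set()
for snap in range(12):
    clutter = update_static_counts(counts, {0: 0.0, 1: 0.5})
print(clutter)  # {0}: only the static component is eliminated
```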

3.2.2 Eliminating Clutter from Nearby Reflectors

In many scenarios, we may have a nearby reflector that produces strong reflections. If these reflections are stronger than the reflections from the moving object that WiDeo wishes to trace by more than the dynamic range of the radio, all information about the moving object will be lost in the quantization error of the ADC at the receiver. Further, remember that WiDeo aims to listen to reflections from the environment while the WiDeo AP is transmitting signals for communication. The transmitted signal also leaks directly through to the receiver and causes interference.

WiDeo's observation is that such clutter is essentially a form of self-interference, and recent work on full duplex radios can be used to eliminate it [12]. Full duplex radios have to solve a similar problem: they have to cancel their own transmitted signal's leakage and the reflections that arrive back at the receiver. This self-interference also incorporates reflections from the environment, and recent work has developed sophisticated interference cancellation techniques that can eliminate the self-interference down to the noise floor [12]. WiDeo leverages this work. We provide a brief description below, but refer the readers to [10] for a detailed description. WiDeo's contribution is showing how full duplex can be used to build imaging applications rather than the communication applications that full duplex research has focused on.

Conceptually, full duplex radios consist of a programmable canceler component comprising both analog and digital cancelers. The canceler's main component is a programmable filter that attempts to model the distortions the transmitted signal goes through before arriving back at the same radio's receiver as self-interference. The canceler takes the transmitted signal as input, passes it through the programmable filter, and then subtracts the filtered signal from the received signal to completely eliminate the self-interference.

Note that in traditional full duplex radios, the goal is to completely eliminate the self-interference. WiDeo, however, is different: some of the self-interference may be coming from moving objects that we do not want to cancel, since we want to infer the motion from them. So WiDeo implements a novel modification to traditional full duplex self-interference cancellation. It uses the backscatter sensor measurements to program the filter to model only the static and strong reflectors that act as clutter, and intentionally leaves out the components that would have modeled the moving reflectors. WiDeo figures out which backscatter components correspond to moving reflectors using the static clutter detection algorithm described in the previous section. Thus cancellation is selectively applied only to the static and strong clutter components. Specifically, the programmable canceler filter is tuned to implement the following response:

h_{cm} = \sum_{k=1}^{L'} \alpha_k \, e^{-i(m-1)\phi(\theta_k)} \, \mathrm{sinc}(B \times (t - \tau_k))    (8)

where α_k, τ_k, and θ_k are the amplitude, ToF, and AoA parameters of the L′ unwanted reflectors, and h_{cm} is the response of the cancellation filter attached to the m-th antenna.

This completes the design of the declutterer component. At this point we have a set of snapshots with points that correspond to moving objects, and points in successive snapshots are associated with each other if they belong to the same moving reflector. Note, however, that this does not mean we have traced the original moving object itself; all we have isolated are the multiple backscatter reflections from it. The next step is to trace the original object and the motion that produced the snapshots with the moving points.

3.3 Tracing the Actual Motion

Each WiDeo AP sends the isolated backscatter measurements arising from moving objects, computed in the previous step, to the central server. Whenever a new motion starts, it is quite likely that many of the WiDeo APs will detect backscatter measurements from this new motion. The server collects backscatter snapshots over a period of 10ms from all participating radios, and assumes that any moving backscatter detected by any of the radios comes from the same object. This heuristic implicitly assumes that two new and independent human motions will not start within an interval of 10ms. Given the timescales at which human motion happens, 10ms is a negligible amount of time and we believe such asynchrony is very likely in practice. Note that this does not mean that two independent motions cannot occur simultaneously; we only assume that they do not start within 10ms of each other.

3.3.1 Localizing the origin of the motion

The first step the server implements is to localize the origin of the motion that just started. The server has measurements from multiple radios across multiple snapshots, and very likely the new motion will be detected at many of these radios. So, how might we estimate the location of the new motion? The idea is that the measurements collected at the WiDeo APs impose constraints on where the moving reflector can be located. We demonstrate the idea using the AoA measurement. Let's say the locations of the M WiDeo APs involved in motion tracing are given by (x_i, y_i), i = 1, ..., M. As in many other state-of-the-art WiFi localization systems [46, 38], the locations of the APs (the anchors) are assumed to be known in advance. Let the AoA measurements of the reflector at the APs be denoted by θ_i, i = 1, ..., M, and let the current estimate of the object's location be (x, y). Then the most likely location of the object is the one that minimizes the following metric:

\min_{(x,y)} \sum_{i=1}^{M} (\hat{\theta}_i + b_{\theta_i} - \theta_i)^2 \quad \text{s.t.} \quad \hat{\theta}_i = \mathrm{AoA}_{\mathrm{ULA}}((x,y),(x_i,y_i))    (9)

The above equation states that the predicted angle of arrival at each of the WiDeo APs, given the estimated location of the reflector and the locations of the APs, must closely match the actual AoA measured by each AP. The function AoA_ULA((x, y), (x_i, y_i)) computes the AoA seen by the tracing radio located at (x_i, y_i) from a reflector located at (x, y). There is, however, a new factor b_{θi} that represents a bias modeling multipath reflections. This is because the moving backscatter corresponds not only to the direct backscatter from the object but also to backscatter from reflections of the backscatter. For example, if a backscatter reflection from a moving object is further reflected by a wall before arriving at the AP, the ToF parameter will have a constant bias corresponding to the extra time it takes to traverse the extra distance to the wall and reflect off it. A similar bias exists for both the amplitude and AoA measurements. Further, the bias values are unknown and hence are variables in the optimization. The value of (x, y) that minimizes the above metric is likely the best estimate of the location of the reflector.

We can also use other parameters like ToF and power to estimate the location of the target. In our actual implementation, we solve a more sophisticated optimization problem than the simple one in Eq. 9. Specifically, WiDeo uses AoA, ToF, and backscatter signal strength measurements for the particular backscatter over multiple frames, say J frames, and declares the origin of the motion to be the location that minimizes the following objective function, described by Eq. 10:

\sum_{j=1}^{J} \sum_{i=1}^{M} \left[ (\alpha_i - \alpha_{ij})^2 + (\tau_i + b_{\tau_i} - \tau_{ij})^2 + (\theta_i + b_{\theta_i} - \theta_{ij})^2 \right]    (10)

where α_{ij}, τ_{ij}, and θ_{ij} are the power, ToF, and AoA, respectively, of the backscatter observed by the i-th AP in the j-th frame, and the variables α_i, τ_i, and θ_i are the values of the respective backscatter parameters that would have been observed at the APs if the object were actually located at the candidate location. We follow a simple path loss model [21] to relate the location of the object to the backscatter signal strength α_i. The variables b_{τi} and b_{θi} represent the bias in ToF and AoA, respectively, due to reflections of the backscatter from the object. Minimizing Eq. 10 is a non-convex problem, so we apply a widely used heuristic known as sequential convex optimization [13].

We note that Eq. 9 as such is an ill-posed problem without a unique solution, because each AP introduces its own bias terms for the backscatter parameters. However, in Eq. 10, by collecting measurements over a sufficient number of backscatter frames, the number of measurements becomes greater than the number of variables and the optimization problem becomes well-posed. Further, the parameters of the simple path loss model used to model α_i are also estimated as part of minimizing Eq. 10 and need not be known ahead of time.
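To illustrate the AoA constraint of Eq. 9, the sketch below does a coarse grid search over candidate locations, with the bias terms set to zero for simplicity; the AP positions, array orientation, and measurements are made up, and the real system solves the richer Eq. 10 with sequential convex optimization rather than a grid search.

```python
# Grid-search localization from AoA measurements at three APs.
import math

def aoa_ula(p, ap):
    # Angle (degrees) of reflector p as seen from the AP at position ap,
    # measured from the x-axis (an assumed array orientation).
    return math.degrees(math.atan2(p[1] - ap[1], p[0] - ap[0]))

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # known anchor locations
truth = (4.0, 3.0)                              # unknown reflector location
measured = [aoa_ula(truth, ap) for ap in aps]   # noiseless AoA measurements

# Pick the grid point minimizing the sum of squared AoA prediction errors.
best = min(
    ((x / 10.0, y / 10.0) for x in range(101) for y in range(101)),
    key=lambda p: sum((aoa_ula(p, ap) - m) ** 2 for ap, m in zip(aps, measured)),
)
print(best)  # recovers (4.0, 3.0)
```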

3.3.2 Tracing Motion

Once the newly detected moving object is localized, the next step is to trace the object's motion as it moves and produces new measurements via our backscatter sensor. Remember that the new measurements are naturally associated with the measurements from the previous snapshots via the declutterer described in § 3.2. The algorithm has therefore already clustered backscatter measurements coming from the same moving reflector together, and we can run the motion tracing algorithm on each cluster of measurements separately. Hence we describe the tracing algorithm as if a single motion were occurring and a single set of backscatter measurements were being produced from it across successive snapshots.

Our approach to this problem is to build a dynamical model of the motion that is occurring and progressively refine its parameters. The motion model has several parameters: the current position of the object, velocity, direction of motion, acceleration, and the bias in each backscatter parameter due to indirect reflections. Both the bias and initial position variables are initialized using the output of the localization algorithm in the previous step. Velocity, acceleration, and direction of motion are initially set to zero and then updated over time as new measurements arrive. Note that we also allow the bias parameters to change over time; after all, as the object moves, the bias for each parameter changes.

The key insight is as follows: at every point in the traced motion, given the estimate of the motion model at that instant, WiDeo can predict what the backscatter sensor measurements for that moving reflector should be (given the estimates of the locations of the reflector and the WiDeo AP and the biases in the parameters, we can calculate the expected amplitude, ToF, and AoA of the reflections). WiDeo of course also has access to the actual backscatter sensor measurements at that instant for the same moving reflector, so we can calculate the error between the predicted and the actual backscatter measurement. The goal of the motion tracing component is to minimize the sum of these backscatter prediction errors over the entire motion trajectory in a sequential fashion. The algorithm proceeds in three steps at each time instant:
Model based prediction: In this step, WiDeo calculates the new position of the reflector given the previous position and the motion model parameters, namely velocity and acceleration. It then uses this extrapolated position, along with the estimates of the bias for the backscatter parameters, to calculate what the new values of the amplitude, ToF, and AoA of the reflection should be.
Backscatter prediction error computation: Compute the difference between the above predicted and measured backscatter parameters.
Model update from error: Update the motion model parameters such that the overall backscatter prediction error is minimized across the entire trajectory. The update step uses a classic technique in dynamic estimation: the Kalman filter [44]. Kalman filter theory shows that, assuming the measurement noise and motion modeling error are Gaussian, the update depends on two factors. The first factor is the size of the prediction error itself, i.e., if the error is large then a larger update to the model is required, and vice versa. The second factor is a gain term that modulates this error term. The gain factor is chosen such that the accumulated error between all the observations of the measured backscatter parameters so far and the best prediction the motion model can make is minimized. In essence, the gain reflects the effect of accumulated errors in the motion model; for example, if the measurements are noisy, the gain should be chosen small to account for their unreliability, and vice versa. We omit the proof and refer the readers to [44] for a more detailed mathematical treatment of the Kalman filter and how to compute the gain factor given the motion model and the history of backscatter measurements and prediction errors.

Figure 4: Accuracy of WiDeo's algorithms in estimating the delays and AoAs of backscatter. WiDeo achieves an accuracy of 300ps and 1.2 degrees (with error bars representing standard deviation) at the 40MHz bandwidth used in WiFi signals.

The motion tracer takes a few snapshots to converge, after which it constantly updates its motion model parameters. Reconstructing the motion is then akin to starting at the initial point and performing a directional piece-wise integration using the speed and direction-of-motion parameters at each time step. An instance of the above algorithm is executed for each detected motion.
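The predict / error / update loop above can be sketched with a textbook one-dimensional constant-velocity Kalman filter. Scalar position measurements stand in for the (amplitude, ToF, AoA) backscatter tuple, and the noise values q (process) and r (measurement) are illustrative choices of ours, not values from the system.

```python
# Minimal 1-D Kalman filter: state is [position, velocity], only position
# is measured; each step runs predict, innovation, and gain-weighted update.
class Kalman1D:
    def __init__(self, dt, q=1e-3, r=0.25):
        self.x = [0.0, 0.0]                  # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]    # estimate covariance
        self.dt, self.q, self.r = dt, q, r

    def step(self, z):
        dt, q, r, P = self.dt, self.q, self.r, self.P
        # 1) Model-based prediction: x <- F x with F = [[1, dt], [0, 1]],
        #    and P <- F P F^T + Q with Q = diag(q, q).
        xp = [self.x[0] + dt * self.x[1], self.x[1]]
        Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
               P[0][1] + dt * P[1][1]],
              [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # 2) Prediction error (innovation): measured minus predicted position.
        err = z - xp[0]
        # 3) Update, modulated by the Kalman gain k.
        s = Pp[0][0] + r
        k = [Pp[0][0] / s, Pp[1][0] / s]
        self.x = [xp[0] + k[0] * err, xp[1] + k[1] * err]
        self.P = [[(1 - k[0]) * Pp[0][0], (1 - k[0]) * Pp[0][1]],
                  [Pp[1][0] - k[1] * Pp[0][0], Pp[1][1] - k[1] * Pp[0][1]]]
        return self.x[0]

# Track a target moving one unit of distance per snapshot; the estimate
# converges to the true trajectory after a few steps.
kf = Kalman1D(dt=1.0)
est = [kf.step(float(t)) for t in range(1, 51)]
```

With noiseless ramp measurements the innovation decays toward zero, so both the position and velocity estimates settle onto the true motion, mirroring how the motion tracer converges over a few snapshots.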

4 Evaluation

We implement a prototype of WiDeo using WARP software radios with a WiFi-compatible OFDM PHY, using a bandwidth of 20MHz at 2.4GHz. The radio is set up to use 4 antennas, and all RX chains are phase-synchronized as in a MIMO radio. The antenna spacing is λ/2 and the overall width of the ULA is 18cm. The declutterer is built from analog cancellation circuit boards based on the design described in [11]. From the time it receives information about the clutter to be canceled, the declutterer takes a few microseconds to remove its effect and improve the dynamic range. The optimization algorithms that measure the backscatter parameters and the rest of the tracing algorithms are implemented on a host PC in C using the cvxgen toolbox [32] and Matlab. Although the current implementation of WiDeo is not realtime, we believe realtime operation is possible with a few architectural changes and speed optimizations in the future.

4.1 Backscatter sensor benchmarks

We start with micro-benchmarks of the backscatter sensor that underpins motion tracing. The goal here is to demonstrate that WiDeo's backscatter measurement algorithms provide high accuracy and fine resolution.
Accuracy: We first measure WiDeo's accuracy in measuring backscatter parameters. Given the complex geometry of indoor environments, a natural question is how do we know ground truth for all the multipaths to evaluate the accuracy of WiDeo? We perform controlled experiments by connecting the RX chains with wires from the transmit chain. The lengths of the wires are varied to provide different delays, attenuators on each wire provide tunable amplitude, and phase shifters are introduced to simulate AoA. This wired setup can create 10 different backscatter components. To mimic realistic indoor reflections, we vary the lengths and attenuations by sampling from an indoor power delay profile [19], and AoA is picked uniformly at random. We vary the receiver bandwidth from 20MHz to 160MHz.

Since WARP radios can only support up to 40MHz bandwidth, we use signal analyzers for the higher-bandwidth experiments. Higher receiver bandwidth is expected to improve accuracy because higher sampling rates give finer-grained observations in time. However, the default configuration for WiDeo, unless stated otherwise, is WARP radios with 20MHz bandwidth.

Fig. 4 plots the overall estimation accuracy for the delay and AoA of the backscatter components as a function of bandwidth; we omit amplitude results for brevity (their accuracy was within 1dB). As we can see, WiDeo provides extremely high accuracy, measuring delay to within 0.3ns for a bandwidth of 40MHz, the most commonly used WiFi bandwidth. Further, AoA accuracy is 1.2 degrees at 40MHz bandwidth. Delay estimation accuracy improves slightly with bandwidth, which is expected since more closely spaced samples help discern delay better. AoA accuracy is not affected much by bandwidth, since it is primarily determined by the number of antennas.
Resolution: Next we conduct an experiment to measure WiDeo's resolution, i.e., how close can two backscatter reflectors be before WiDeo's algorithms fail to disambiguate their respective parameters? First, we create two backscatter components whose delays are far apart by using wires of different lengths. We then slowly decrease the relative delay and measure at what relative delay the accuracy becomes a factor of two worse than in Fig. 4. Next we repeat the same experiment, but instead of delay, we bring the AoAs of the two backscatter components very close to each other and check at what relative AoA the accuracy becomes a factor of two worse than in Fig. 4. The results are presented in Fig. 5.

WiDeo can resolve delay down to 2ns, distinguishing two gesturing humans separated by only one foot. WiDeo can resolve angle down to 5 degrees, distinguishing humans 1 foot apart at 12 feet away.
Range and Dynamic Range: A third benchmark is how weak a backscatter signal can be before it cannot be estimated by WiDeo. Clearly, if a backscatter component is weaker than the noise floor of the receiver radio (-90dBm), then WiDeo cannot detect it. But how much above the noise floor does the backscatter have to be for accurate measurement? We repeat the accuracy experiment shown in Fig. 4, picking the parameters for 9 components from the power delay distribution while progressively decreasing the strength of the 10th backscatter component.

Figure 5: WiDeo can accurately measure parameters even when backscatter components are spaced only 2ns apart in time or 5 degrees apart in spatial orientation.

Fig. 6 (left) plots the estimation accuracy of different backscatter parameters as a function of the received strength at the radio. When the backscatter component is weaker than -70dBm (i.e., less than 20dB above the noise floor of the receiver), WiDeo's delay accuracy degrades to around 6ns. In practice this means that the motion being traced needs to happen within a 16-foot radius of the radios for high backscatter sensor accuracy. Note that the range of motion tracing can be more than 16 feet, as motion tracing may not need highly accurate parameters.

Another related benchmark is WiDeo's resilience in scenarios where there is backscatter from a nearby reflector while the motion we actually want to trace is farther away and producing weak backscatter. To test this we conduct a controlled experiment with two backscatter reflectors: one nearby, whose strength is kept constant at 10dBm, while the other is made progressively weaker. We plot the accuracy of backscatter measurement for the weaker component as a function of the difference in strength w.r.t. the strong backscatter component in Fig. 6 (right). WiDeo accurately measures components as weak as 80dB below the strong reflector, well beyond the radio's 40dB dynamic range. This works because the declutterer estimates the strong component, then cancels it completely, all the way down to the noise floor.

Note that both the maximum range and the dynamic range of WiDeo are limited by the noise floor of the radio being used and the power transmitted by the sensor, not by limitations of WiDeo's algorithms. This is because WiDeo's cancellation can cancel specified reflections all the way down to the noise floor. If the cancellation were imperfect and did not reach the noise floor, for example leaving a 20dB residue, WiDeo would be limited to sensing signals above -50dBm rather than the -70dBm shown in Fig. 6, which would correspondingly reduce the range to 2 feet.

Figure 6: (Left) WiDeo can accurately estimate backscatter parameters for reflections that are as weak as −70dBm. (Right) It can also accurately estimate parameters for very weak backscatter components even when a strong backscatter component is present that is 80dB stronger.

4.2 Motion tracing benchmarks

We now evaluate WiDeo's ability to accurately trace motion in indoor environments. We calculate two metrics.
Location accuracy: This is the accuracy of the localization of a motion detected by WiDeo. We use the Euclidean distance between the centroid of the ground truth motion and that of the estimated motion as the metric.
Motion tracing accuracy: This is the accuracy of the traced motion. The metric we use is the root mean square error of the traced motion, which we calculate by computing the distance of each point in the traced motion from the ground truth motion trace at that point. The distances are squared, added up, and normalized by the number of points before taking the square root. Hence, the metric represents the motion tracing error in meters. Similar to [43], we remove any offset between the ground truth motion and the traced motion.
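The motion tracing accuracy metric can be sketched as follows; the trajectories are made up, and the offset-removal step borrowed from [43] is omitted for brevity.

```python
# Root-mean-square error between a traced motion and its ground truth:
# square the point-wise Euclidean distances, average, and take the root.
import math

def tracing_rmse(traced, truth):
    """RMS of Euclidean distances between corresponding trace points."""
    d2 = [(x1 - x2) ** 2 + (y1 - y2) ** 2
          for (x1, y1), (x2, y2) in zip(traced, truth)]
    return math.sqrt(sum(d2) / len(d2))

truth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
traced = [(0.0, 0.1), (1.0, -0.1), (2.0, 0.1)]
err = tracing_rmse(traced, truth)  # 0.1 m for this example
```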

The locations tested for motion tracing accuracy span all scenarios: non-line-of-sight (NLOS) to all of the tracing radios, LOS to a subset of the tracing radios, and through walls, in an indoor environment spanning 600 sq. ft. By default, unless stated otherwise, the number of tracing radios is fixed at three, deployed at three fixed but arbitrarily picked locations in the testbed. The motions we trace are humans sketching various shapes with their hands. By default, unless stated otherwise, two humans perform motions concurrently in our experiments.

We could not find any recent system that implements fine-grained motion tracing within the design requirements of WiDeo: namely, being device-free, compact, and using existing communication signals and spectrum. RF-IDraw [43], as discussed before, is neither device-free nor compact. Other recent work such as WiTrack [6] is device-free but implements coarse tracking of the entire human body as it moves, and cannot track fine-grained motion of human limbs. Hence we refer the reader to § 1 and § 2 for a qualitative comparison to these related systems.

Our experimental results show the following:

• WiDeo accurately traces motion: it achieves a median localization accuracy of 0.8m and a motion tracing accuracy of 7cm.

• WiDeo can accurately trace multiple independentmotions, tracing as many as five independent andconcurrent motions with an error less than 12cm.

• WiDeo's resolution is 0.5m, i.e., if two independent motions occur at least half a meter apart, WiDeo can trace them accurately.

• Accuracy improves modestly with the number of radios involved in the tracing. When we increase the number of radios to five, localization accuracy improves to 0.7m and motion tracing accuracy improves to 6cm.

4.2.1 Motion tracing experiments

We use a SPEAG hand [3] to perform the motion tracing experiments. This model hand is designed to have the same dimensions and absorption/reflection characteristics as those of a typical human hand in the 2.4GHz frequency range.

This hand is placed over a chart with figures of different shapes, like the one shown in Fig. 7. Several markers are drawn on the shape, and the backscatter is captured by WiDeo's APs when the SPEAG hand is placed on each of these markers. The markers are spaced approximately 5cm apart, so as to emulate a scenario where WiDeo collects measurements every 10ms while a human hand moves at a speed of 5m/s [22]. The ground truth location for each marker on the chart is obtained using laser range measurements and architectural drawings. In Fig. 7, the blue shape shown on the right was obtained using such laser measurements. By placing the model hand at all the locations of the chart sequentially, we emulate a human hand tracing a particular trajectory whose ground truth is accurately determined.

We conducted experiments in scenarios with one, two, and all three APs in LOS. WiDeo's accuracy is tabulated in Fig. 8. WiDeo achieves an accuracy of 5.1cm when all the APs are in LOS, and is still quite accurate at 12.8cm when two of the APs are in NLOS.

4.2.2 Understanding WiDeo’s motion tracing

Figure 7: (Left) A chart with a figure of 8 with multiple markings where the SPEAG hand (in inset) was placed while the data was captured by WiDeo's APs. (Right) Ground truth data obtained using a laser range finder (in blue), along with the motion trace reconstructed by WiDeo (in red) using 3 APs.

                 Localization Accuracy (m)     Motion Tracing Accuracy (cm)
Motion           Testbed   Wireless InSite     Testbed   Wireless InSite
All AP LOS       0.54      0.54                5.1       5.3
1 AP NLOS        1.1       1.1                 8.5       8.4
2 AP NLOS        1.61      1.63                12.8      12.5

Figure 8: Median accuracy for different motion shapes obtained using the SPEAG hand and the Wireless InSite tool.

Because of the time-consuming nature of the data collection procedure for the above testbed experiments, we can only perform a limited number of experiments with it. To test the motion tracing accuracy of WiDeo extensively under more diverse conditions, we simulated the entire system in an electromagnetic emulation environment called Wireless InSite [2]. Wireless InSite is a ray-tracing based tool that accurately models RF propagation in any indoor environment with walls and other objects. This tool enables us to emulate the complex indoor environments in which WiDeo will be used, as well as to know the ground truth for every experiment. To verify that Wireless InSite produces similar results, we modeled the testbed described above and then collected data for the same scenarios in Wireless InSite. We emulated the dynamic range and progressive interference cancellation on the data obtained from the Wireless InSite simulation. Fig. 8 compares the accuracy of motion tracing achieved with Wireless InSite data to that obtained with the physical experiments. The two results match very closely, owing to Wireless InSite's ability to accurately model indoor RF environments. Hence, in the remaining sections, we use Wireless InSite to analyze WiDeo's performance in more detail.

4.2.3 WiDeo’s motion tracing performance

We now evaluate WiDeo's motion tracing accuracy by conducting extensive experiments using Wireless InSite. Specifically, we vary the placement of the two moving humans arbitrarily in the testbed across 100 different locations. We calculate the median localization error and the root mean square error of the traced motion. We plot the CDFs in Fig. 9.
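For concreteness, the error metrics used throughout this evaluation can be computed as sketched below. This is an illustrative Python sketch; the function names and the (x, y) trace format are our own assumptions, not WiDeo's actual code:

```python
import numpy as np

def localization_error(est_start, true_start):
    """Euclidean distance between the estimated and true starting
    location of a motion (both given as (x, y) coordinates)."""
    return np.linalg.norm(np.asarray(est_start, dtype=float)
                          - np.asarray(true_start, dtype=float))

def tracing_rmse(est_trace, true_trace):
    """Root mean square error between an estimated motion trace and
    the ground-truth trace, sampled at the same instants.
    Each trace is an (N, 2) array of (x, y) points."""
    est = np.asarray(est_trace, dtype=float)
    true = np.asarray(true_trace, dtype=float)
    return np.sqrt(np.mean(np.sum((est - true) ** 2, axis=1)))

def empirical_cdf(errors):
    """Sorted error values and their cumulative probabilities,
    suitable for plotting a CDF as in Fig. 9."""
    x = np.sort(np.asarray(errors, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# Example with two hypothetical trials and known ground truth
loc_errors = [localization_error((1.2, 0.5), (1.0, 0.4)),
              localization_error((3.1, 2.2), (2.9, 2.0))]
x, y = empirical_cdf(loc_errors)
```

The median of such per-trial errors over the 100 placements gives the single-number summaries quoted below.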

Figure 9: WiDeo's motion tracing is extremely accurate; it traces fine-grained motion with a median localization error of 0.8m and a motion tracing error of 7cm.

WiDeo achieves a median localization error of 0.8m

Figure 10: WiDeo provides high-resolution motion tracing; it can accurately trace two independent motions occurring even if they are only spaced 0.5m apart (with error bars representing standard deviation).

and a median tracing error of 7cm. The tail errors are often cases where the human motion is happening in a dead zone where the backscatter to any of the tracing radios is weaker than -80dBm. In these cases the backscatter measurement itself has worse accuracy, which translates to poor accuracy for motion tracing. However, WiDeo still achieves a motion tracing accuracy better than 15cm in 90% of the scenarios.

4.2.4 Resolution

Many applications that might build upon WiDeo's motion tracing capability care about resolution, i.e., how close two independent human motions can be while WiDeo still traces them accurately (e.g., multi-player video games). To conduct this experiment we progressively move the two moving humans closer to each other and plot the worse of the two motion tracing accuracies as reported by WiDeo in Fig. 10.

WiDeo achieves a motion tracing resolution of 0.5 meters while still achieving an extremely good tracing accuracy of 12cm. So two humans could be standing a little more than a foot away from each other (e.g., in a video game), moving their hands closest to each other simultaneously, and WiDeo would still be able to accurately trace their motion.

We also observed that localization error is unaffected; the error is the same as in Fig. 9. This is expected since


the localization technique works by combining measurements from multiple tracing radios when a new moving backscatter component is detected. Since we assume that two human motions do not start at exactly the same time and are usually spaced at least 10ms apart, WiDeo's localization algorithms have a sufficiently long window of time (10ms) in which they can perform localization on a single new object without the presence of a nearby moving object. The same argument applies when the second new motion is detected: by then the first motion has been localized and can be accounted for, so localization can focus only on the new backscatter components that arise from the new moving object.
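The paper does not give the triangulation code itself, but the core step, combining per-radio range (reflected path-length) measurements for a single newly detected object, can be sketched as standard linear least-squares trilateration. Everything below (the function name, 2D coordinates, and noiseless ranges) is an illustrative assumption, not WiDeo's implementation:

```python
import numpy as np

def trilaterate(ap_positions, distances):
    """Least-squares position estimate from range measurements to
    several anchors (APs). Subtracting the first anchor's range
    equation from the others linearizes |p - a_i| = d_i into a
    linear system A p = b, solved by least squares."""
    a = np.asarray(ap_positions, dtype=float)   # (k, 2) anchor coords
    d = np.asarray(distances, dtype=float)      # (k,) measured ranges
    A = 2.0 * (a[1:] - a[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Three APs at known positions, hypothetical target at (2, 1)
aps = [(0, 0), (5, 0), (0, 5)]
target = np.array([2.0, 1.0])
d = [np.linalg.norm(target - np.array(ap)) for ap in aps]
est = trilaterate(aps, d)   # approximately [2.0, 1.0]
```

With noisy ranges the same least-squares formulation simply returns the best-fit point, which is why adding tracing radios improves localization (see § 4.2.5).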

4.2.5 Impact of number of tracing radios

In this experiment, we examine the impact on accuracy as we vary the number of radios performing tracing in WiDeo. We conduct the same experiment as in § 4.2.3, but vary the number of tracing radios from one to five. We plot five different CDFs of localization and motion tracing error in Fig. 11.

As we can see, WiDeo's localization error is poor (4m) with a single tracing radio. This is expected, since WiDeo relies on triangulation to localize well. However, motion tracing error is less affected; WiDeo still traces with less than 12cm error. Consequently, while we cannot localize with a single radio, we can still trace. The reason is that with a single tracing radio, we cannot get an accurate estimate of the depth (location), but the relative motion from that initial location can still be accurately traced, since it only depends on relative shifts in the backscatter measurements, which are quite accurate.
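The single-radio relative tracing idea can be sketched as follows: each backscatter delay measurement gives a round-trip path length c·τ, so successive delay shifts translate directly into range changes without knowing the absolute starting location. The function name and the numbers are illustrative assumptions, not WiDeo's implementation:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def relative_range_trace(delays_ns, start_range=0.0):
    """Relative range trace from successive backscatter delay
    measurements at a single radio. The round-trip path length is
    c * tau, so the one-way range change between measurements is
    c * d_tau / 2; accumulating these changes traces the motion
    relative to the (possibly unknown) starting range."""
    tau = np.asarray(delays_ns, dtype=float) * 1e-9   # ns -> seconds
    return start_range + C * (tau - tau[0]) / 2.0

# Delays drifting by 0.2 ns per measurement -> 3 cm of range change each
trace = relative_range_trace([40.0, 40.2, 40.4, 40.6])
# trace - trace[0] is approximately [0.0, 0.03, 0.06, 0.09] metres
```

This also illustrates why depth is ambiguous with one radio: `start_range` cancels out of the relative trace, so only triangulation across radios can pin down the absolute location.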

Increasing the number of tracing radios helps with localization error; it goes down to 0.7m with five tracing radios. Motion tracing, which is already quite accurate even with a single radio, improves slightly to 7cm. This is expected, since triangulation improves with more radios and hence localization improves. However, the backscatter measurement does not depend on having multiple observations; it is done independently by each radio. Hence tracing accuracy only improves by a small amount.

4.2.6 Distinct motions that WiDeo can trace

In this experiment, we check how many independent concurrent human motions WiDeo can trace. We vary the number of human motions occurring concurrently from one to six and plot the median tracing accuracy in Fig. 12.

WiDeo can trace up to five concurrent motions with an accuracy of 12cm. To the best of our knowledge, no prior WiFi-based system has demonstrated being able to trace five moving humans concurrently. Beyond that, accuracy worsens. The reason is that there aren't enough radios to

Figure 11: WiDeo's localization accuracy improves with the number of tracing radios, to 0.7m, because of better triangulation. However, tracing accuracy is unaffected because WiDeo's algorithms can trace accurately even with information from a single tracing radio.

Figure 12: WiDeo can accurately trace as many as five independent motions occurring simultaneously (with error bars representing standard deviation).

provide a sufficient number of backscatter measurements to disentangle these motions. Being able to trace five concurrent motions is sufficient for a home environment, but not for work environments where a far greater amount of motion is expected.

5 Conclusion

This paper demonstrated the surprising capability to build a motion tracing camera using WiFi signals as the light source. The fundamental contributions are algorithms that can measure WiFi backscatter and mine it to trace motion. We plan to prototype many interesting applications that build on top of WiDeo, including gesture recognition, indoor navigation, elderly care, and security applications.

References

[1] TSE, D. Fundamentals of Wireless Communication. http://www.eecs.berkeley.edu/~dtse/Chapters_PDF/Fundamentals_Wireless_Communication_chapter2.pdf.

[2] Modeling Indoor Propagation. http://www.remcom.com/examples/modeling-indoor-propagation.html.


[3] SPEAG Hand. http://www.speag.com/products/em-phantom/hand/sho-v2-3rb-lb/.

[4] Wii. http://en.wikipedia.org/wiki/Wii.

[5] ADIB, F., KABELAC, Z., AND KATABI, D. Multi-Person Motion Tracking via RF Body Reflections.

[6] ADIB, F., KABELAC, Z., KATABI, D., AND MILLER, R. C. 3D Tracking via Body Radio Reflections. In 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14) (Seattle, WA, Apr. 2014), USENIX Association, pp. 317–329.

[7] ADIB, F., AND KATABI, D. See Through Walls with WiFi! In Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM (New York, NY, USA, 2013), SIGCOMM ’13, ACM, pp. 75–86.

[8] ARSLAN, H., ET AL. Channel Estimation for Wireless OFDM Systems. IEEE Surveys and Tutorials 9, 2 (2007), 18–48.

[9] BAHL, P., AND PADMANABHAN, V. RADAR: An in-building RF-based user location and tracking system. Proceedings IEEE INFOCOM 2000, Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies 2 (2000), 775–784.

[10] BHARADIA, D., JOSHI, K. R., AND KATTI, S. Full Duplex Backscatter. In Proceedings of the Twelfth ACM Workshop on Hot Topics in Networks (2013), ACM, p. 4.

[11] BHARADIA, D., AND KATTI, S. Full Duplex MIMO Radios. In 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14) (Seattle, WA, Apr. 2014), USENIX Association, pp. 359–372.

[12] BHARADIA, D., MCMILIN, E., AND KATTI, S. Full Duplex Radios. In Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM (New York, NY, USA, 2013), SIGCOMM ’13, ACM, pp. 375–386.

[13] BOYD, S., AND VANDENBERGHE, L. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.

[14] CANDES, E., AND ROMBERG, J. Sparsity and Incoherence in Compressive Sampling, 2006.

[15] CHINTALAPUDI, K., PADMANABHA IYER, A., AND PADMANABHAN, V. N. Indoor Localization Without the Pain. In Proceedings of the Sixteenth Annual International Conference on Mobile Computing and Networking (2010), ACM, pp. 173–184.

[16] CZINK, N., HERDIN, M., ÖZCELIK, H., AND BONEK, E. Number of Multipath Clusters in Indoor MIMO Propagation Environments.

[17] DONOHO, D. L. Compressed sensing. IEEE Trans. Inform. Theory 52 (2006), 1289–1306.

[18] EKANADHAM, C., TRANCHINA, D., AND SIMONCELLI, E. P. Recovery of sparse translation-invariant signals with continuous basis pursuit. Signal Processing, IEEE Transactions on 59, 10 (2011), 4735–4744.

[19] ERCEG, V., SCHUMACHER, L., KYRITSI, P., ET AL. TGn channel models. Tech. Rep. IEEE P802.11, Wireless LANs, Garden Grove, Calif, USA (2004).

[20] GJENGSET, J., XIONG, J., MCPHILLIPS, G., AND JAMIESON, K. Phaser: Enabling Phased Array Signal Processing on Commodity WiFi Access Points. In Proceedings of the 20th Annual International Conference on Mobile Computing and Networking (New York, NY, USA, 2014), MobiCom ’14, ACM, pp. 153–164.

[21] GOLDSMITH, A. Wireless Communications. Cambridge University Press, 2005.

[22] GUPTA, S., MORRIS, D., PATEL, S., AND TAN, D. SoundWave: Using the Doppler Effect to Sense Gestures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2012), ACM, pp. 1911–1914.

[23] HARRISON, C., TAN, D., AND MORRIS, D. Skinput: Appropriating the Body as an Input Surface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2010), ACM, pp. 453–462.

[24] JIA, Y., KONG, L., YANG, X., AND WANG, K. Through-wall-radar localization for stationary human based on life-sign detection. 2013 IEEE Radar Conference (RadarCon13), 3 (Apr. 2013), 1–4.

[25] JOSHI, K., HONG, S., AND KATTI, S. PinPoint: Localizing Interfering Radios. In Proceedings of the 10th USENIX Conference on Networked Systems Design and Implementation (Berkeley, CA, USA, 2013), NSDI ’13, USENIX Association, pp. 241–254.


[26] KIM, D., HILLIGES, O., IZADI, S., BUTLER, A. D., CHEN, J., OIKONOMIDIS, I., AND OLIVIER, P. Digits: Freehand 3D Interactions Anywhere Using a Wrist-Worn Gloveless Sensor. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (2012), ACM, pp. 167–176.

[27] KOSBA, A., SAEED, A., AND YOUSSEF, M. RASID: A robust WLAN device-free passive motion detection system. In Pervasive Computing and Communications (PerCom), 2012 IEEE International Conference on (March 2012), pp. 180–189.

[28] KUHN, H. W. The Hungarian Method for the Assignment Problem. Naval Research Logistics Quarterly 2 (1955), 83–97.

[29] KUMAR, S., HAMED, E., KATABI, D., AND ERRAN LI, L. LTE radio analytics made easy and accessible. Proceedings of the 2014 ACM Conference on SIGCOMM - SIGCOMM ’14 (2014), 211–222.

[30] MAAREF, N., MILLOT, P., PICHOT, C., AND PICON, O. A Study of UWB FM-CW Radar for the Detection of Human Beings in Motion Inside a Building. Geoscience and Remote Sensing, IEEE Transactions on 47, 5 (May 2009), 1297–1300.

[31] MALLAT, S., AND ZHANG, Z. Matching Pursuit With Time-Frequency Dictionaries. IEEE Transactions on Signal Processing 41 (1993), 3397–3415.

[32] MATTINGLEY, J., AND BOYD, S. CVXGEN: a code generator for embedded convex optimization. Optimization and Engineering 13, 1 (2012), 1–27.

[33] NARAYANAN, R. M. Through-wall radar imaging using UWB noise waveforms. Journal of the Franklin Institute 345, 6 (Sept. 2008), 659–678.

[34] PU, Q., GUPTA, S., GOLLAKOTA, S., AND PATEL, S. Whole-Home Gesture Recognition using Wireless Signals. In Proceedings of the 19th Annual International Conference on Mobile Computing & Networking (2013), ACM, pp. 27–38.

[35] RALSTON, T., CHARVAT, G., AND PEABODY, J. Real-time through-wall imaging using an ultra-wideband multiple-input multiple-output (MIMO) phased array radar system. In Phased Array Systems and Technology (ARRAY), 2010 IEEE International Symposium on (Oct 2010), pp. 551–558.

[36] SEIFELDIN, M., SAEED, A., KOSBA, A. E., EL-KEYI, A., AND YOUSSEF, M. Nuzzer: A Large-Scale Device-Free Passive Localization System for Wireless Environments. IEEE Transactions on Mobile Computing 12, 7 (July 2013), 1321–1334.

[37] SEN, S., CHOUDHURY, R. R., AND NELAKUDITI, S. SpinLoc: Spin Once to Know Your Location. In Proceedings of the Twelfth Workshop on Mobile Computing Systems & Applications (New York, NY, USA, 2012), HotMobile ’12, ACM, pp. 12:1–12:6.

[38] SEN, S., LEE, J., KIM, K.-H., AND CONGDON, P. Avoiding Multipath to Revive Inbuilding WiFi Localization. In Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services (New York, NY, USA, 2013), MobiSys ’13, ACM, pp. 249–262.

[39] SHEN, Y., AND MARTINEZ, E. Channel Estimation in OFDM Systems. Application Note, Freescale Semiconductor (2006).

[40] TIBSHIRANI, R. Regression Shrinkage and Selection Via the Lasso. Journal of the Royal Statistical Society, Series B 58 (1994), 267–288.

[41] TIPALDI, G. D., AND RAMOS, F. Motion clustering and estimation with conditional random fields. In Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on (2009), IEEE, pp. 872–877.

[42] TROPP, J. A., AND GILBERT, A. C. Signal recovery from partial information via Orthogonal Matching Pursuit. IEEE Trans. Inform. Theory (2005).

[43] WANG, J., VASISHT, D., AND KATABI, D. RF-IDraw: Virtual Touch Screen in the Air Using RF Signals. In Proceedings of the 2014 ACM Conference on SIGCOMM (New York, NY, USA, 2014), SIGCOMM ’14, ACM, pp. 235–246.

[44] WELCH, G., AND BISHOP, G. An Introduction to the Kalman Filter. Tech. rep., Chapel Hill, NC, USA, 1995.

[45] WILSON, J., AND PATWARI, N. See-Through Walls: Motion Tracking Using Variance-Based Radio Tomography Networks. IEEE Transactions on Mobile Computing 10, 5 (May 2011), 612–621.

[46] XIONG, J., AND JAMIESON, K. ArrayTrack: A Fine-Grained Indoor Location System. In NSDI (2013), pp. 71–84.

[47] YOUSSEF, M., AND AGRAWALA, A. The Horus WLAN Location Determination System. In Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services (New York, NY, USA, 2005), MobiSys ’05, ACM, pp. 205–218.


[48] YOUSSEF, M., MAH, M., AND AGRAWALA, A. Challenges: Device-free Passive Localization for Wireless Environments. In Proceedings of the 13th Annual ACM International Conference on Mobile Computing and Networking (New York, NY, USA, 2007), MobiCom ’07, ACM, pp. 222–229.

[49] ZETIK, R., CRABBE, S., KRAJNAK, J., PEYERL, P., SACHS, J., AND THOMA, R. Detection and localization of persons behind obstacles using M-sequence through-the-wall radar, 2006.

[50] ZHANG, D., MA, J., CHEN, Q., AND NI, L. M. An RF-Based System for Tracking Transceiver-Free Objects. In Proceedings of the Fifth IEEE International Conference on Pervasive Computing and Communications (Washington, DC, USA, 2007), PERCOM ’07, IEEE Computer Society, pp. 135–144.

[51] ZHANG, Z. Microsoft Kinect Sensor and Its Effect. MultiMedia, IEEE 19, 2 (Feb 2012), 4–10.
