Multi-user Data Sharing in Radar Sensor Networks

Ming Li, Tingxin Yan, Deepak Ganesan, Eric Lyons, Prashant Shenoy, Arun Venkataramani, and Michael Zink

Department of Computer Science, University of Massachusetts, Amherst MA 01003.
{mingli,yan,dganesan,elyons,shenoy,arun,zink}@cs.umass.edu

Abstract

In this paper, we focus on a network of rich sensors that are geographically distributed and argue that the design of such networks poses very different challenges from traditional “mote-class” sensor network design. We identify the need to handle the diverse requirements of multiple users as a major design challenge, and propose a utility-driven approach to maximize data sharing across users while judiciously using limited network and computational resources. Our utility-driven architecture addresses three key challenges for such rich multi-user sensor networks: how to define utility functions for networks with data sharing among end-users, how to compress and prioritize data transmissions according to their importance to end-users, and how to gracefully degrade end-user utility in the presence of bandwidth fluctuations. We instantiate this architecture in the context of geographically distributed wireless radar sensor networks for weather, and present results from an implementation of our system on a multi-hop wireless mesh network that uses real radar data with real end-user applications. Our results demonstrate that our progressive compression and transmission approach achieves an order of magnitude improvement in application utility over existing utility-agnostic non-progressive approaches, while also scaling better with the number of nodes in the network.

Categories and Subject Descriptors

C.2.1 [Network Architecture and Design]: Wireless communication; C.3 [SPECIAL-PURPOSE AND APPLICATION-BASED SYSTEMS]: Real-time and embedded systems

General Terms

Design, Experimentation, Measurement, Performance

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SenSys’07, November 6–9, 2007, Sydney, Australia.
Copyright 2007 ACM 1-59593-763-6/07/0011 ...$5.00

Keywords

Multi-user Sensor Networks, Radar Sensor Networks, Wireless Mesh Networks, Utility Control, Progressive Encoding

1 Introduction

While much of the focus of the sensor network community has been on the design of miniature low-power “mote-class” wireless sensor networks, there is an equally important ongoing networking revolution for “rich,” powerful, higher-power sensors. This revolution has been driven by two technology trends. The first trend is the emergence of cheaper, more efficient, and more compact designs of traditionally large and unwieldy sensors such as radars and cameras [7], enabling more mobile, solar-powered deployments in remote locations that lack sensing coverage. The second trend is the recent success in designing WiFi-based long-range, multi-hop mesh networks [1, 5], which facilitate ad-hoc remote deployments of these sensors in areas where a wired network infrastructure is unavailable. These technological developments have led to several efforts to deploy large-scale, dense, wirelessly connected networks of powerful sensors, including earthquake sensing [8], weather monitoring using wireless radars [18], and road traffic monitoring [3].

These emerging large-scale sensor systems (shown in Figure 1) have important differences from their existing resource-poor counterparts and raise a number of new research challenges. The first major difference between the two types of sensor networks is their design objective. Due to limited energy resources and the need for long lifetime, the design performance goal in mote-class sensor networks is to minimize energy consumption. Other resources such as bandwidth and computation are typically less of a concern since simple, low data-rate sensors such as those for temperature, humidity, or pressure are used. In contrast, rich sensors such as radars and cameras generate raw data at hundreds of kilobits or tens of megabits per second. However, the per-node bandwidth on a shared wireless mesh is limited. Consequently, the need to optimize network bandwidth usage is as important as minimizing energy consumption in such networks.

A second key difference is the diversity of end-users that the two types of sensor networks are designed to support. Mote-class sensor networks typically have many tens of nodes deployed in a small geographic area, and are designed to perform one or a few tasks efficiently, primarily periodic data collection. This is due both to the lack of available resources on the sensors to perform computationally intensive in-network data processing, and to the limited geographic area that the sensors span. In contrast, many rich sensor systems span vast geographies, and are intended to serve a spectrum of users with different needs. To illustrate, the users of a large-scale transportation sensor network using cameras will include traffic police, first responders who need notification about accidents, commuters who are interested in traffic congestion on their routes, and even insurance companies who might desire information about accidents to settle claims. These different types of users often impose different, and sometimes conflicting, demands on the network. In a radar sensor network, scientists may desire access to raw data to conduct research, while meteorological applications may require data that has undergone intermediate processing, and end-users may only need the “final processed result”. Further, a tornado detection application will require timely notification of important events, while other users are less sensitive to delay in sensor updates (e.g., end-users are tolerant of slight delays in weather updates).

Figure 1. Multi-user Sensor Networks: wireless sensors connect through a proxy to the Internet, serving disaster advisory services, education, first responders, scientists, and casual users.

Thus, a key challenge in rich sensor networks is to optimize diverse user needs in the presence of limited resources. One option is to handle the different user needs separately, but this model ignores one of the most important characteristics of multi-user sensor networks — all the users of a sensor network operate on the same data streams, and the data relevant to one user can potentially be used to handle the needs of other users. Thus, rather than separately handling user needs, an approach that jointly considers user needs to maximize data sharing among users is better suited to make judicious use of the limited computational and network resources. Since the workload seen by such networks can dynamically vary over time as user needs and interests change—for instance, the workload imposed by users can increase significantly during an intense storm or a major traffic problem—such data sharing techniques must also adapt to dynamic load conditions.

1.1 Research Contributions

In this paper, we describe a novel utility-driven architecture that maximizes data sharing among diverse users in a sensor network. We believe that maximizing utility across diverse end-user queries using multi-user data sharing techniques (henceforth referred to as MUDS) is a key challenge for designing more scalable sensor networks. Our architecture is designed for hierarchical sensor networks where sensors stream data over a multi-hop wireless network to a sensor proxy. These incoming data streams at the proxy are used to answer queries from different users. The proxy and the sensors interact continually to maximize data sharing across queries while simultaneously adapting to bandwidth variations and the changing query needs of users. We instantiate this architecture in the context of ad-hoc networks of wireless radar sensors for severe weather prediction and monitoring. Our work has three main contributions:

• Multi-query Aggregation: A key contribution of our work is multi-query aggregation, where radar data streams are shared between multiple and diverse end-user queries, thereby maximizing total end-user utility. We demonstrate that different end-user application needs, spatial areas of interest, deadlines, and priorities can be combined into a single aggregated query, thereby enabling more optimized use of bandwidth resources.

• Utility-driven Compression and Scheduling: At the core of our system is a utility-driven progressive data compression and packet scheduling engine at each radar. The progressive compression engine enables radar data to be compressed and ordered such that the information of most interest to queries is transmitted first. Such an encoding enables our system to adapt gracefully to bandwidth fluctuations. The utility-driven scheduler compares the utility of different progressively compressed streams that are intended for different sets of queries, and transmits packets such that utility across all concurrent queries at a radar is maximized.

• Global Transmission Control: In addition to local utility-driven techniques, our system supports global utility optimization mechanisms driven by the proxy. The proxy continually monitors the utility of incoming data from different radars and decides how to control streams to maximize total utility across the entire network. Such a global control mechanism enables the system to adapt to uneven query distribution across the network, and to deal with disparities in available bandwidth among different radars due to wireless contention. This is especially important when some nodes in the network are observing important events such as tornadoes, and need to obtain more bandwidth than other nodes that are transmitting data for less critical queries.

In our experiments, we measure, evaluate, and demonstrate the performance of our architecture and algorithms for radar sensor networks for severe weather monitoring. We have implemented the system on a testbed of Linux machines that form an 802.11-based wireless mesh network. Using a combination of simulations and experiments with real and emulated radar traces, we show that our system provides more than an order of magnitude (11x) improvement in query accuracy and utility for a 12-node network, when compared to an existing utility-agnostic non-progressive approach. Our system also degrades gracefully with network size — when the network size increases from one node to twelve nodes, the average utility achieved by each radar in our system only decreases by 25%, whereas the average utility of the existing NetRad [24] approach decreases by 80%. Further, our system adapts better to bandwidth variations, with only a 15% reduction in utility when the bandwidth drops from 150 kbps to 10 kbps.

The rest of this paper is structured as follows. Section 2 provides an overview of radar sensor networks and the challenges in these networks. Section 3 provides an overview of our architecture, while Section 4 describes the design of the key components of our architecture. Section 5 describes our implementation and evaluation. Finally, Sections 6 and 7 discuss related work and our conclusions.

2 Radar Sensor Networks

In this section, we provide an overview of the diverse end-user applications that use a radar sensor network, followed by the formulation of the problem addressed in this paper.

2.1 End User Applications

A network of weather sensing radars can be used by diverse users such as automated weather monitoring applications, meteorologists, scientists, teachers, and emergency personnel. Several different weather monitoring applications may be in use, each of which continuously requests and processes data sensed by various radars:

• Hazardous weather detection: Applications in this class are responsible for detecting hazardous weather such as storm cells, tornadoes, hail, and severe winds in real time (e.g., [10]). This class of applications focuses on sharp changes in weather patterns; a tornado detection application, for instance, looks for sharp changes in wind speed and direction that are indicative of a tornado.

• 3D wind direction estimation: This application constructs a 3D map by computing the direction of the wind at each point in 3D space. Since a single radar can only determine wind direction in a single dimension (the radial axis), the application needs to merge data from two or more overlapping radars in order to estimate the 3D wind direction. Due to the need to merge data, only regions of overlap between adjacent radars are useful, and data from other areas need not be transmitted.

• 3D assimilation: This application integrates data from multiple radars into a single 3D view to depict areas of high reflectivity (intense rain) that occur in the region.

We note that the first application is of interest to meteorologists for real-time weather forecasting, the second is useful to researchers, while the third is useful to emergency managers to visualize weather in their jurisdiction. In addition to these applications, end-users may pose other ad-hoc queries for data or instantiate continual queries that continuously request and process data to detect certain events or conditions.

Figure 2. Multi-hop Radar Sensor Networks: radar sensors stream data over a wireless mesh to a proxy, which merges the streams and feeds the hazardous weather detection, 3D wind direction estimation, and 3D assimilation applications.

2.2 System Model and Problem Formulation

Our MUDS radar sensing network comprises three tiers, as shown in Figure 2: (i) applications and end-users who pose queries and request field data, (ii) sensor proxies that act as the gateway between the Internet and the radar sensor field, execute user queries, and manage the radar sensor network, and (iii) a wireless network of remote radar sensors that implement utility-driven services and stream their data to the proxy.

Each radar node comprises a mechanically steerable radar attached to an embedded PC controller; the embedded PC has a dual-core Intel processor, runs Linux, and is equipped with 1 GB of RAM and an 802.11 wireless interface. A typical deployment will comprise many tens of radars distributed over a wide geographic area. The radars are “small” and are designed to be deployed in areas with no infrastructure using solar-powered rechargeable batteries; they can also be deployed on cellphone towers or on building rooftops where infrastructure such as A/C power is readily available. In either case, we assume that the radars connect to the proxy node using a multi-hop 802.11 wireless mesh network.

Each mechanically steerable radar has two degrees of freedom (θ, φ), which enable control over the orientation and the altitude at which the radar points and senses data. The radar scans the atmosphere by first positioning itself to point at an altitude φ and then conducting a scan, rotating θ degrees and scanning while rotating. The MUDS system operates in rounds, where each round is referred to as an epoch. For this paper, we assume an epoch of 30 seconds. Before each epoch begins, the proxy collects all queries for a particular radar. Each query represents a request for data from a weather monitoring application (e.g., tornado detection) or from end-users (who may issue ad-hoc queries). A query can request any subset of the region covered by a radar scan—for instance, a tornado detection algorithm may only request data from regions where intense weather has been detected. Each query has a priority and a deadline associated with it, which are then used to assign a weight to each region that the radar can scan. The weight represents the relative importance of scanning and transmitting data from the region in the next epoch (for instance, a region that is not requested by any query will receive a weight of zero and need not be scanned by the radar). Thus, region-specific weights represent the collective needs of all queries that have requested data in that epoch. Although the radar follows the 30-second sensing epoch, the deadline of a query is determined by the end-user and can be of arbitrary length. A recent study [25] of radar sensor networks examines the requirements of end-users such as meteorologists and first responders, and reports the deadlines for different queries. We use this study as a baseline for setting deadlines for different queries in this paper.

Figure 3. Multiple incoming queries in an epoch are first aggregated by the multi-query aggregator at the radar. The merged query and the radar scan for the epoch are input to the progressive encoder, which generates different compressed streams for different regions in the query. The streams are input to the utility-driven scheduler, which schedules packets across all streams whose deadlines have not yet expired.

Assuming the weights are computed before each epoch begins, the radar then scans all regions with non-zero weights during the epoch. Each scan is assumed to produce tens of megabytes of raw data, which is typically much more than the bandwidth available to each radar in a multi-hop 802.11 mesh network. Thus, the primary constraint at a radar node is bandwidth, and the radar node must determine how to intelligently transmit the results of the scan back to the proxy.
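To make the bandwidth gap concrete, a rough back-of-the-envelope calculation, using the 107 MB-per-scan figure reported for the Oklahoma testbed in Section 5 and a nominal 150 kbps per-radar share (the upper end of the bandwidths used in our adaptation experiments), gives:

$$\frac{107\ \text{MB} \times 8\ \text{bits/byte}}{30\ \text{s}} \approx 28.5\ \text{Mbps}, \qquad \frac{28.5\ \text{Mbps}}{150\ \text{kbps}} \approx 190$$

That is, the raw scan rate exceeds a radar's deliverable rate by roughly two orders of magnitude, which is why aggressive compression and prioritization are unavoidable.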

Each proxy is assumed to be a server (or a server cluster) with significant processing and memory resources. The weather monitoring applications described in Section 2.1 are assumed to execute at the proxy, processing data streams from various radars in real time. Each application is assumed to process data from an epoch and issue per-radar queries for the data that it needs in the next epoch.

Assuming such a system, this paper addresses the following questions:

• How can the radar sensor system merge and jointly handle queries with diverse high-level needs such as tornado detection, 3D wind direction estimation, and 3D assimilation?

• Since the raw data from a scan exceeds the available network bandwidth, and this bandwidth can vary significantly over time, how should a radar node intelligently compress the raw data prior to transmission?

• How should the radar prioritize the transmission of this compressed scan result back to the proxy node so that overall application utility is maximized?


• Since the query load on different radars can be uneven, and data from some radars may be more critical than others during intense storms, how should the proxy globally control transmissions across radars to ensure that important data gets priority?

The following section discusses the techniques employed by the MUDS system to address these questions. For simplicity of exposition, and because optimizing the radar scan strategy is not the goal of our work, we assume each radar points at a fixed altitude φ and performs a 360° scan of the atmosphere, resulting in a full 2D scan. It is straightforward to extend the discussion to three-dimensional partial scans where both the altitude φ and the rotation θ are varied in a scan. Also, since our focus is on multi-user data sharing in a wireless environment, we do not focus on the design issues of long-range wireless mesh networks, and assume that existing techniques such as [5, 17] can be used.

3 MUDS System Architecture

The proxy and sensors in the MUDS system interact continually to maximize utility under query and bandwidth dynamics. This interaction has four major parts: (a) a multi-query aggregation phase at the proxy and radar to compute a single unified query per epoch, (b) progressive compression of the radar scan at each radar using the unified query as input, (c) a utility-driven scheduling phase at each radar where packets are prioritized by overall utility gain, and (d) a global transmission control phase driven by the proxy to optimize transmissions from different radars.

Multi-query aggregation: The first phase of our system operation is the multi-query aggregation phase, where multiple user queries in an epoch are combined to generate a single unified query. This is done both by the proxy and by the radars — the proxy uses the unified query for global transmission control, and the radar uses it for progressive compression and scheduling. Each user query is associated with a weight, a spatial region of interest, and a deadline. The weight of a query depends on the priority of the user (e.g., the National Weather Service is a high-priority user) and the priority of the query to the user (e.g., a tornado detection query has higher priority during times of severe weather). Each query is also associated with a spatial area of interest; for instance, the wind direction estimation query is only meaningful for overlapping regions between radars.


Queries are executed in batches — queries that arrive within a single epoch are merged to generate a joint spatial query map that captures the needs of all concurrent queries. An example of the spatial map that merges a tornado detection query and a 3D assimilation query is shown in Figure 3. The merging of queries results in their weights being accumulated for shared regions of interest. The set of queries in an epoch is communicated by the proxy to the individual radar sensors whenever there is a change due to the arrival of new queries.

Progressive compression: Each radar scan produces tens of megabytes of raw data that must then be transmitted back to the proxy node. Since the raw data rate is significantly higher than the bandwidth available per radar on the mesh, the data rate must somehow be reduced prior to transmission. The existing NetRad [24] system employs a simple averaging technique to down-sample data—neighboring readings are averaged and replaced by this mean; the larger the number of neighboring readings over which the mean is computed, the greater the reduction in data rate. Rather than using a naive averaging technique, our system relies on the query map to intelligently reduce the data rate using a progressive compression technique. The progressive compression engine uses the unified query map and compresses data in two steps. First, the weights of different regions in the map are used to split the radar scan into multiple smaller regions, such that each region has a fixed weight and a fixed set of associated queries. Thus, the radar scan in Figure 3 is split into three regions with weights 1, 2, and 3, respectively. Each of these regions is then progressively encoded using a wavelet-based progressive encoder. The encoder compresses and orders data in each region such that the most important features in the data are transmitted first, and less important features are transmitted later. Finally, the progressively encoded streams corresponding to different regions are input to a utility-based scheduler at the radar.

Utility-driven packet scheduling: The utility-based scheduler schedules packets between different streams from different epochs, and decides which packet to send from among the streams. This decision is based on the weight associated with the stream and the utility of the packet to the queries that are interested in the stream. For example, stream 3 in Figure 3 is of interest to both queries; therefore, transmitting a packet improves the utility for both queries. In order to compute the utility of a packet, the radar uses a priori knowledge of how application utility relates to the mean square error (MSE) of the data. This provides a mechanism for the scheduler to observe the error in the compressed raw data and determine how this error would translate to application error. As we describe later, the mean square error of the data influences utility in different ways for different applications. The scheduler computes the total benefit (the product of the marginal utility of the packet and the assigned weight) that would result from transmitting the first packet from each stream, and picks the stream with the greatest increase in benefit. Figure 4 provides an illustration of the scheduling decision. In the example, 66% of the first stream has been transmitted but only 33% of the second stream has been transmitted. Therefore, the difference in mean square error is likely to be higher by transmitting a packet from the second stream. However, there are two additional factors to consider. The first stream corresponds to a tornado detection query, which requires high-resolution data in order to precisely pinpoint the location of the tornado, whereas the second stream corresponds to a 3D assimilation query and a 3D wind direction estimation query, each of which needs only less precise data. On the other hand, a packet from the second stream is useful to two concurrent queries, whereas a packet from the first stream is only useful for tornado detection. Thus, the decision of which packet to choose depends on the mean square error of the data, the number of queries interested in the data, the weights of the queries, and, importantly, the utility functions of the queries.

Figure 4. In this scenario, 66% of stream 1 and 33% of stream 2 have been transmitted. The scheduler determines the marginal utility of transmitting a packet from each of the streams for the applications interested in the streams and decides which packet to transmit next.

Global Transmission Control: While progressive encoding and utility-driven scheduling at each sensor optimize for multiple queries at a single radar, there is a need for global control of transmissions to maximize overall utility across the network, because radars within the same wireless contention domain share the wireless medium and contend with each other while transmitting. In particular, this is useful when queries are not evenly distributed across the network, and some nodes that are handling higher-priority queries need more bandwidth than others. In this case, more bandwidth should be allocated to the radars that are achieving higher marginal utility. The proxy uses a simple global transmission control policy where it monitors the marginal utility of incoming packets from different radars. If there is a great imbalance in the marginal utility of streams from different radars, it notifies the radar with lower marginal utility to stop its stream temporarily. This has the effect of reducing contention in the network, especially at nodes close to the proxy, thereby potentially enabling a radar with more important data to obtain more bandwidth to the proxy.

4 MUDS System Design

We describe each component of the MUDS architecture in greater detail in this section.

4.1 Multi-Query Aggregator

The multi-query aggregator is central to the data sharing goals of our system. Aggregating multiple user queries into a single aggregated query has two benefits. First, it minimizes the number of scans performed by the radar (which are time- and energy-intensive), since each radar scan is used to answer a batch of queries. Second, it allows the data in a single scan to be transmitted once but shared to answer multiple queries, thereby maximizing query utility in limited-bandwidth settings. In contrast, a system that scans and transmits data separately for each query would be extremely inefficient, both due to the increased scanning overhead and due to the duplication of transmitted data.

The proxy batches all queries that are posed in each epoch, and at the beginning of the next epoch, it sends to each radar a list of queries that require data from that radar. An alternative model could have been for the proxy to merge the queries and transmit only the merged query to the radar sensor, but we eschewed this option since it would consume more bandwidth than just sending the queries to the radar. Each query is specified by a 4-tuple <QueryType, ROI, Priority, Deadline> that gives the type, region of interest, priority, and deadline of the query. In our system, the region of interest is represented by a sector or a rectangle for simplicity, although our approach can be easily extended to handle more arbitrary regions of interest. The priority can be specified by the query, implicitly specified by the proxy — for instance, if the user is a high-priority user like the National Weather Service — or determined as a combination of the two.

The multi-query aggregator then combines multiple user queries into a single aggregated query plan. The query plan that is generated is a spatial map in which the spatial area corresponding to the region covered by the radar is pixelated. For each pixel in the scan data, the corresponding pixel in the query plan holds a list of 3-tuples <QueryType, Weight, Deadline> that give the type, weight, and deadline of the queries interested in data sensed at that pixel.

The weight value of a pixel for each query represents the “importance” of transmitting data sensed from that pixel to that query. We use a heuristic for determining pixel weights in order to maximize application utility. Let $p_i$, $I_i$, and $d_i$ represent the priority, the region of interest, and the deadline of query $i$. Priority $p_i$ is represented as a scalar value; the region of interest $I_i$ is represented as a 2D map where $I_i(u,v)$ is 1 if the pixel $(u,v)$ is within the region of interest of query $i$, and 0 otherwise; and the deadline $d_i$ is in seconds.

Let $w_i(u,v)$ represent the weight of pixel $(u,v)$ for query $i$. We would like the following three criteria to be satisfied: i) the weight for the pixel should be greater if the query has higher priority than other queries, ii) the weight for the pixel should be greater if the query's deadline is shorter than other queries, since higher weight will result in the data being transmitted first, and iii) the weight for the pixel should be zero if the pixel is not in the region of interest of query $i$. Thus, the weight $w_i(u,v)$ is defined as:

$$w_i(u,v) = p_i \cdot I_i(u,v) \cdot \frac{1}{d_i} \qquad (1)$$
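To make this concrete, here is a minimal sketch of building the per-pixel weight map of Equation 1 and accumulating weights across queries for shared regions of interest, as described in Section 3. The Query structure and grid size are illustrative, not the paper's actual data structures.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Query:
    qtype: str        # e.g. "tornado", "3d_wind", "3d_assim"
    roi: np.ndarray   # 2D 0/1 map I_i(u,v) over the radar's pixel grid
    priority: float   # scalar p_i
    deadline: float   # d_i in seconds

def aggregate_queries(queries, shape):
    """Merge one epoch's queries into a single weight map (Equation 1),
    accumulating weights where regions of interest overlap."""
    weight_map = np.zeros(shape)
    for q in queries:
        weight_map += q.priority * q.roi * (1.0 / q.deadline)
    return weight_map

# Example: two queries on a 4x4 grid; overlapping pixels sum their weights.
roi_a = np.zeros((4, 4)); roi_a[:2, :] = 1
roi_b = np.zeros((4, 4)); roi_b[1:, :] = 1
wmap = aggregate_queries(
    [Query("tornado", roi_a, priority=2, deadline=30),
     Query("3d_assim", roi_b, priority=1, deadline=60)],
    shape=(4, 4))
```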

4.2 Progressive Compression Engine

Data compression is an integral component of rich sensor networks, where data rates can be considerably higher than the available bandwidth. In our system, we use progressive encoding to compress raw data. Progressive compression of data yields two benefits: (a) it enables the system to use all available wireless bandwidth to transmit data, thereby adapting to bandwidth fluctuations, and (b) it enables us to order data packets based on the utility of the data to queries, thereby maximizing overall utility.

Progressive encoding (also known as embedded encoding) compresses data into a bit stream with increasing accuracy. This means that as more bits are added to the stream, the decoded data will contain more detail. In our system, we use a wavelet-based progressive encoding algorithm called set partitioning in hierarchical trees (SPIHT) [20]. A wavelet encoder is well-suited to radar data processing applications, since meteorological tornado detection algorithms use wavelet-based processing to detect discontinuities in reflectivity and velocity signals [6, 14]. Moreover, SPIHT orders the bits in the stream such that the most important data is transmitted first. Thus, the decoded data can achieve high fidelity even with few packets transmitted.

We provide a brief overview of the SPIHT algorithm next (refer to [20] for a detailed discussion). The input data for the algorithm is assumed to be a two-dimensional matrix. Before SPIHT encoding, the matrix is first transformed into subbands of different frequencies using the wavelet transform. The subbands are then arranged into a pyramid in ascending order of frequency from top to bottom, with the lowest-frequency subband at the top of the pyramid. A hierarchical tree built on the pyramid naturally defines the spatial relationships within it. Each node of the tree corresponds to a pixel in the current subband, and its direct descendants correspond to the pixels of the same spatial orientation in the next-higher-frequency subband of the pyramid. The SPIHT encoder iterates through the hierarchical tree starting from the root node. In each iteration, the most significant bit of each node is output into a stream and removed from that node. In the generated stream, the most important data is at the head of the stream, because most natural images, like photos or radar scans, have energy concentrated in the low-frequency components, so the significance of a point decreases as we move from the highest to the lowest levels of the tree.
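The following is a deliberately simplified sketch of the progressive-ordering idea (not the actual SPIHT coder, which our implementation adapts from the QccPack library): wavelet coefficients are sent in decreasing order of magnitude, so early prefixes already capture the most significant features. It assumes the PyWavelets (pywt) package.

```python
import numpy as np
import pywt

def progressive_order(scan):
    """Wavelet-transform a 2D scan and order coefficients by magnitude,
    so a prefix of the stream carries the most significant detail first
    (a toy stand-in for SPIHT's significance ordering)."""
    coeffs = pywt.wavedec2(scan, "haar", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)    # flatten the coeff pyramid
    order = np.argsort(-np.abs(arr).ravel())      # largest magnitude first
    return arr, slices, order

def decode_prefix(arr, slices, order, n_coeffs):
    """Reconstruct the scan from only the first n_coeffs coefficients."""
    partial = np.zeros_like(arr)
    idx = order[:n_coeffs]
    partial.ravel()[idx] = arr.ravel()[idx]
    coeffs = pywt.array_to_coeffs(partial, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, "haar")

# The MSE of decode_prefix() falls quickly as n_coeffs grows -- the property
# the scheduler exploits when it transmits stream prefixes.
scan = np.random.rand(64, 64)
arr, slices, order = progressive_order(scan)
approx = decode_prefix(arr, slices, order, n_coeffs=256)
mse = float(np.mean((scan - approx) ** 2))
```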

Besides generating the progressive stream, the SPIHT encoder also generates an incremental trace of the encoded stream that shows what the mean square error of the decoded data would be after sending each byte of the stream. As described in the next section, this feature is essential for performing utility-driven scheduling of packets.

We made a few modifications to the standard SPIHT encoder to adapt it to our needs. The progressive encoding engine in our system first splits each scan into multiple regions such that all pixels in a region share the same list of 3-tuples <QueryType, Weight, Deadline> in the aggregated query map. Although this may result in an exponential number of regions with respect to the number of queries in the worst case, in practice we find the number of regions to be small for radar queries. Each of these regions is encoded to generate a progressively compressed stream per region. One practical problem is that the standard wavelet transform expects a square matrix, but each region can be of arbitrary shape. To deal with this, we use a shape-adaptive wavelet coding scheme [21] to encode each region. Shape-adaptive wavelet coding encodes an arbitrarily shaped object without additional overhead, i.e., the number of coefficients after the transform is identical to the number of pixels in the original arbitrarily shaped object. After the encoding, the generated streams are buffered and fed into the local transmission scheduler.

4.3 Local Transmission Scheduler

At any given time, a radar may have multiple streams that are buffered and being transmitted by the local transmission scheduler. The goal of this scheduler is to optimize the transmission order of the data in the streams in order to maximize overall application utility despite fluctuating bandwidth conditions. We describe this in detail next.

Each stream buffered by the scheduler comprises packets of the same length (1 KB in our implementation). The local transmission scheduler optimizes the transmission order of the packets based on their marginal utility to the set of queries corresponding to the stream. The marginal utility of a packet is the increase in utility resulting from the transmission of that packet. Informally, the utility of a prefix of a stream is determined by the application error that results from decoding and processing that prefix.

Formally, let $p$ denote some prefix of a stream and let $i$ denote a query corresponding to that stream. The utility $U_i(p)$ of $p$ to query $i$ is given by

$$U_i(p) = \begin{cases} w_i & \text{if } err_i(p) < req\_err_i \\[4pt] w_i \cdot \dfrac{max\_err_i - err_i(p)}{max\_err_i - req\_err_i} & \text{if } err_i(p) \geq req\_err_i \end{cases} \qquad (2)$$

where $w_i$ is the weight of query $i$; $err_i(p)$ is the application error that results from decoding and processing $p$; $max\_err_i$ is the maximum value of the application error (computed as the error corresponding to a 1 KB prefix of the stream); and $req\_err_i$ is the error value below which the user is satisfied with the result. Thus, the utility decreases linearly with the application error and stops decreasing when the user-specified limit is reached. The marginal utility of a packet to a query is the difference in utility to the query just before and after sending the packet.
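For illustration, Equation 2 and the per-packet marginal utility can be rendered in a few lines of Python (function and argument names are ours, not from the paper's implementation):

```python
def utility(w, err, max_err, req_err):
    """Equation 2: utility of a stream prefix with application error `err`."""
    if err < req_err:
        return w                                   # user already satisfied
    return w * (max_err - err) / (max_err - req_err)

def marginal_utility(w, err_before, err_after, max_err, req_err):
    """Utility gained by the packet that moves error from before to after."""
    return (utility(w, err_after, max_err, req_err)
            - utility(w, err_before, max_err, req_err))
```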

How does the scheduler compute the application error $err_i(p)$? It is impractical for the scheduler to measure $err_i(p)$ by running the application on each prefix of the stream, because of the huge computational overhead of decompressing the data and executing the application. Thus, we need a simple and accurate method to determine $err_i(p)$ given just the compressed stream. One possibility is to use a data-agnostic metric such as the compression ratio as an indicator of application error. However, since the progressive encoder could be encoding different scans with very different features, this metric is only weakly correlated with application error.

Fortunately, our empirical evaluation confirms that a data-centric metric, the mean square error of the data stream, is highly correlated with the application error. We leverage this observation to estimate application error as follows. We seed the scheduler with a function $seed\_err_i(mse)$ that maps the mean square error of the decoded data to application error. Such a function is generated a priori for each application using training data from past radar scans. In the training procedure, scans are compressed into a progressive stream using the SPIHT compression algorithm. The stream is cut off at different prefix lengths, giving us decoded data of varying fidelity. For each such prefix, the application is run on the decoded data, and the error of the decoded data as well as the application error are measured. Based on this measured data, we build a function $seed\_err_i(mse)$ for each application and seed each radar with this function.
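A plausible rendering of this offline seeding step, assuming the mapping is stored as a sorted lookup table and linearly interpolated at runtime (the table format is our choice, not the paper's):

```python
import numpy as np

def build_seed_err(prefix_mses, app_errors):
    """Fit the seed_err(mse) mapping from (data MSE, application error)
    pairs measured on training scans at different prefix lengths."""
    order = np.argsort(prefix_mses)
    return np.asarray(prefix_mses)[order], np.asarray(app_errors)[order]

def seed_err(mse, table):
    """Runtime lookup: interpolate application error from data MSE."""
    xs, ys = table
    return float(np.interp(mse, xs, ys))
```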

Finally, during regular operation, the scheduler needs to compute the mse corresponding to the decoded prefix just after sending the packet. The mse can be obtained from the error trace generated by the progressive compressor, as described in Section 4.2. The scheduler then estimates $err_i(p)$ as $seed\_err_i(mse)$ by simply performing a table lookup. The weight $w_i$ of the query is incorporated in Equation 2 so that more urgent queries have higher utility. Note that, by construction, all pixels in a region have the same weight.

The total marginal utility of a packet $x$ is its marginal utility across all queries corresponding to the stream. To understand this, suppose there are $m$ queries corresponding to a stream. Let $U_i(p)$ be the utility of prefix $p$ to query $i$ just before sending packet $x$, and $U_i(p+x)$ just after. Then, the overall marginal utility of the packet is given by

$$\Delta U(p) = \sum_{i=1}^{m} \left( U_i(p+x) - U_i(p) \right), \qquad (3)$$

where the operator ‘+’ denotes extending the prefix to include the next packet. Based on Equation 3, the scheduler can calculate the marginal utility of the packet at the head of each stream. Given these utilities, the scheduler picks in each round the packet with the maximum marginal utility across all packets at the heads of existing streams, and transmits that packet. Such a scheduling algorithm can be implemented efficiently in practice. First, we note that packets within a stream are already ordered by decreasing marginal utility, so only the packet at the head of each stream needs to be examined for a scheduling decision. The marginal utility of the packet at the head of each stream can be computed efficiently with a small number of table lookups — one lookup to identify the MSE difference resulting from transmitting the packet, and one lookup per query to identify the marginal utility for the query from decoding the packet. Finally, the packet with the highest marginal utility across all streams needs to be chosen. Since the number of streams is small, our implementation simply uses a linear insert-and-search procedure; it is straightforward to use a heap instead.

The above packet scheduling algorithm achieves the maximum total utility across all concurrent streams at each point in time if $U(p)$ is concave, i.e., if the marginal utility is strictly decreasing. This can be proved by a reduction to the knapsack problem [26]. Our empirical evaluation confirms that the marginal utility decreases with the length of the progressively encoded stream.

Example: We illustrate our methodology for computing the $seed\_err()$ function for the three applications. We first consider tornado detection. This application uses a clustering-based technique to detect tornadoes, and generates the centroid and intensity of each tornado. In order to determine the error in tornado detection, we run the application on scans that were decoded after compressing them to different compression ratios. Let the data MSE for a decoded scan be $mse_1$. There are three cases to consider in determining $seed\_err(mse_1)$. First, if the result on the decompressed scan detects a tornado $t$ within 300 m of the result on the raw scan, then this is a positive result. The choice of 300 m as the threshold for positive detection was made based on discussions with meteorologists. In this case, the tornado detection error $tornado\_err(t)$ is computed as follows:

$$tornado\_err(t) = |RI(t) - DI(t)| \cdot \frac{d(t)}{300}$$

where $RI(t)$ is the intensity of the tornado as determined from processing the raw data, $DI(t)$ is the intensity from processing the decoded data, and $d(t)$ is the distance between the actual centroid from the raw data and the computed centroid from the decoded data. Second, if a tornado $t$ is detected in the decoded scan but no tornado is detected in the raw scan within 300 m, then it is considered a false positive. In this case, $tornado\_err(t) = DI(t)$. Finally, if a tornado $t$ is detected in the raw scan but no tornado is detected within 300 m of its centroid in the decoded scan, then this is considered a false negative, and $tornado\_err(t) = RI(t)$.

The total error $seed\_err(mse_1)$ is the sum of the above errors over all tornadoes detected in the raw scan and the decoded scan. Determining the error function for the 3D wind direction estimation and 3D assimilation applications is more straightforward: here, the applications are run on the raw radar scan and the decompressed scan, and the mean square error of the difference between these results is used as the error for the application.
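A compact sketch of the three-case error computation for tornado detection; pairing raw and decoded detections by nearest centroid within the 300 m threshold is our simplification of the matching step, which the paper does not spell out.

```python
import math

def tornado_err(raw_dets, dec_dets, radius=300.0):
    """Sum detection error over matched pairs, false positives (detected
    only in decoded data), and false negatives (missed in decoded data).
    Each detection is ((x, y), intensity) with coordinates in meters."""
    err, matched = 0.0, set()
    for (rc, ri) in raw_dets:
        # nearest still-unmatched decoded detection to the raw centroid
        best = min(((math.dist(rc, dc), j) for j, (dc, _) in enumerate(dec_dets)
                    if j not in matched), default=(None, None))
        d, j = best
        if d is not None and d <= radius:
            di = dec_dets[j][1]
            err += abs(ri - di) * d / radius   # matched: intensity x distance term
            matched.add(j)
        else:
            err += ri                          # false negative
    for j, (_, di) in enumerate(dec_dets):
        if j not in matched:
            err += di                          # false positive
    return err
```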

4.4 Global Transmission Control

While the local transmission scheduler uses the weight map to optimize the order in which packets are transmitted from each radar, the global transmission controller makes decisions across all concurrent streams on all radars. Radars compete with each other for wireless bandwidth in a number of ways: (i) radars within the same wireless contention domain contend with each other when transmitting, (ii) in multi-hop communication, all nodes in the same routing branch share the bandwidth of a forwarding node, and (iii) the proxy's incoming bandwidth is shared among all radars in the network. As a result, maximizing local utility at each radar may not optimize global utility across all radars in the network. A radar with higher-utility data might have much lower available bandwidth than a radar with lower-utility data due to any of these factors.

This necessitates global control of transmissions from radars, in addition to local utility optimization. Global transmission control in wireless networks has been the subject of significant work (e.g., [12]). Most of these approaches use the idea of a conflict graph that captures the interference patterns between nodes in the network. Such a conflict graph can be used as the foundation for scheduling transmissions from nodes such that spatial reuse is maximized, in addition to throughput.

While the use of conflict graphs is the subject of our future research in this area, we use a simple but effective heuristic in this work. In our approach, the proxy monitors the incoming streams from the radars and stops the transmission of streams that will not improve overall utility much. Specifically, the proxy stops a stream when its utility reaches 95% of its maximal utility. The proxy knows the maximum utility since it has a locally generated version of the aggregated query plan. Since utility is a concave function of the length of the transmitted data stream, the utility of a stream grows very slowly after having achieved 95% of its maximal value. Therefore, stopping the stream does not affect overall utility significantly. However, stopping a stream can benefit other streams, since there will be less channel contention and less forwarded data to the proxy. We experimentally demonstrate the effectiveness of such threshold-based global transmission control in Section 5.
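The threshold rule itself is only a few lines. The proxy-side sketch below assumes the proxy tracks each stream's accumulated utility and knows its maximum utility from the aggregated query plan; stop() stands in for the unspecified proxy-to-radar control message.

```python
STOP_THRESHOLD = 0.95  # stop a stream at 95% of its maximal utility

def global_control(streams):
    """Proxy-side heuristic: halt streams whose remaining utility gain is
    marginal, freeing contention-domain bandwidth for other radars."""
    for s in streams:
        if s.accumulated_utility >= STOP_THRESHOLD * s.max_utility:
            s.stop()  # hypothetical control message on the proxy->radar link
```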

5 Experimental Evaluation

In this section, we evaluate the performance of our system using a radar trace-driven prototype implementation as well as trace-driven simulations. We use two data traces in our experiments. The first is the Oklahoma dataset, collected from a 4-radar testbed deployed in Oklahoma (obtained from meteorologists [4]). Each radar in the testbed generates 107 MB of Doppler readings per 360-degree scan every 30 seconds. We collected 30 minutes of trace data from each radar. To obtain a larger-scale dataset for scalability experiments, we also obtained an emulated radar dataset generated by the Advanced Regional Prediction System (ARPS) emulator. The ARPS emulator is a comprehensive regional-to-storm-scale atmospheric modeling system designed by the Center for Analysis and Prediction of Storms, which can simulate weather phenomena like storms and tornadoes, and generate data at the same rate as the real radars in the Oklahoma testbed. We emulated 12 radars in the emulator and collected 30 minutes of trace data from each of them. We refer to this trace as the ARPS dataset. The ARPS emulator takes days to generate a 30-minute trace, hence larger traces were prohibitively time consuming. Note that the actual raw data from radars can be up to an order of magnitude larger than the two datasets that we used. We were limited to collecting smaller datasets by the bandwidth and storage capacity in the Oklahoma network, and by the speed of the ARPS emulator.

Our radar network prototype comprises 13 nodes, each emulated by an Apple Mac Mini computer with an 802.11b/g wireless card. We manually configure the nodes into a 3-hop wireless topology (shown in Figure 5) by setting their routing tables appropriately. The proxy is a server running a proxy process that collects data from radars and processes user queries. The other twelve nodes run radar processes that encode and transmit radar data. To simplify our protocol design, we use TCP as our transmission protocol, since the progressive stream needs to be received reliably and in order for decoding. Two TCP connections are established between each radar and the proxy—one for transmitting data from the radar to the proxy, the other for sending control information from the proxy to the radar. The progressive compression engine was adapted from the open-source QccPack library [9], which provides an implementation of SPIHT for images.

Figure 5. The routing topology of a 13-node wireless testbed with one proxy and twelve emulated radars.

To evaluate the performance of individual components of our system under controlled conditions, we augment prototype experiments with simulations using real traces. In order to evaluate the query processing performance of our system, we implement a query generator. Each generated query is a 4-tuple <Type, ROI, Deadline, Priority>. The Type field is the application type, which can be tornado detection, wind direction estimation, or 3D assimilation. The ROI field gives the query's region of interest, which is represented by a sector of the radar's circular sensing range. The Deadline field represents the query's reply deadline in seconds. The Priority field represents the query's priority, which is determined by the user's preference for this query. We implemented two query arrival models: (i) a Poisson arrival model, in which queries arrive at each radar as a Poisson process with a configurable average arrival rate, and (ii) a deterministic model, in which queries arrive at each radar in a fixed order at a fixed rate. For the tornado detection query, we designed an additional model, in collaboration with meteorologists, that models query patterns during a tornado. In this model, the priority of the tornado query, and the nodes on which it is posed, depend on where the tornado is predicted to be localized.
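A minimal sketch of the Poisson arrival model for such a generator (field values, rates, and ranges are illustrative):

```python
import random

APP_TYPES = ["tornado_detection", "wind_dir_estimation", "3d_assimilation"]

def poisson_query_stream(rate_per_sec, duration_sec, seed=0):
    """Yield (arrival_time, query) pairs: exponential inter-arrival times
    give a Poisson process with the configured average rate."""
    rng = random.Random(seed)
    t = rng.expovariate(rate_per_sec)
    while t < duration_sec:
        query = {
            "type": rng.choice(APP_TYPES),
            "roi": (rng.uniform(0, 360), rng.uniform(10, 90)),  # sector start, width
            "deadline": rng.choice([30, 60, 120]),               # seconds
            "priority": rng.randint(1, 4),
        }
        yield t, query
        t += rng.expovariate(rate_per_sec)
```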

5.1 Determining the Utility Function

At the core of our system is a utility function that captures application-perceived utility as a function of the mean square error of the data being transmitted by the radars. To evaluate the utility functions for the three applications, we ran the applications on lossily compressed versions of the Oklahoma dataset. We lossily compress the data traces to $\frac{1}{2^i}$ of the original size, with $i$ ranging from 1 to 13. For each of these compression ratios, we measure the mean square error of the resulting data after decompression, as well as the application error after executing the application on the decompressed data. Given the application error, the utilities for the applications are generated using Equation 2. Here we use a fixed user requirement $E_{user}$ in the experiments so that the utility functions only need to be computed once. We fit piecewise-linear functions to the measured utilities, and use these as the utility functions in the rest of our experiments. The top graph of Figure 6 shows the piecewise utility functions of the three applications obtained from our empirical evaluation. The bottom graph shows an example of how this utility function translates to the actual number of packets when a scan is compressed.

Figure 6. Utility functions for the three applications (3D assimilation, wind direction estimation, tornado detection) are derived by compressing traces from the Oklahoma dataset and evaluating application performance; the top graph plots utility against data MSE, and the bottom graph plots utility against transmitted data size (KB).

5.2 Impact of Weighting Policy

The weight value of a pixel quantifies the importance of data sensed from that pixel to the queries. In the MUDS system, we use Equation 1 to determine the weight of a pixel, taking into account the query's area of interest, its priority, and its deadline. We compare this policy against three other weighting policies. We use a policy that only takes the area of interest into account, i.e., $w_{AOI} = I_i(u,v)$, as the baseline of our comparison. We then consider two variants of our policy: (i) $w_{deadline} = I_i(u,v)/d_i$, in which the area of interest and the deadline are taken into account, and (ii) $w_{priority} = I_i(u,v) \cdot p_i$, in which the area of interest and the priority are taken into account.

We evaluate the four policies using trace-driven simulations with the Oklahoma dataset. Table 1 shows the average utility per epoch achieved using the different weighting policies. The weighting policy used in MUDS performs the best, achieving 1.6 times more utility than the baseline, while the priority-based policy achieves 30 percent more utility than the deadline-based policy, which shows that priority has a higher impact than deadline. This comparison demonstrates that our weighting policy effectively quantifies the importance of data.

Policy  | $w_{AOI}$ | $w_{deadline}$ | $w_{priority}$ | $w_{MUDS}$
Utility | 0.612     | 0.783          | 0.976          | 1.633

Table 1. Comparison of different weighting policies.

5.3 Performance of Progressive Compression

In this section, we evaluate two main benefits of the SPIHT progressive compression algorithm: (i) higher compression efficiency, and (ii) adaptation to bandwidth fluctuations.

5.3.1 Compression Efficiency

The extreme data generation rates of radar sensors make compression an essential component of radar sensor system design. In this section, we compare the compression efficiency of the SPIHT algorithm that we employ against an averaging compression algorithm that is currently used in the NetRad radar system. Each radar scan is represented as a matrix of gates × azimuths, where the radial axis is divided into gates and the angular dimension is divided into azimuths.


Figure 7. Comparison of SPIHT progressive compression against averaging compression (MSE vs. bandwidth in kbps). Each algorithm compresses data to the size that can be transmitted in one epoch for a given bandwidth.

The averaging compression algorithm compresses data simply by averaging along the azimuth dimension. In order to compress data n times, the averaging compression algorithm averages the values from n adjacent azimuths at the same gate position. The compressed data has n times fewer azimuths than the original data.
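A minimal sketch of this averaging scheme, assuming a scan stored as a gates × azimuths array whose azimuth count is divisible by n:

```python
# Sketch of the averaging compression described above: to compress n-fold,
# average n adjacent azimuths at each gate position.
import numpy as np

def average_compress(scan, n):
    gates, azimuths = scan.shape
    # Group consecutive azimuths into blocks of n and average each block.
    return scan.reshape(gates, azimuths // n, n).mean(axis=2)

scan = np.random.rand(100, 360)    # toy scan: 100 gates x 360 azimuths
small = average_compress(scan, 4)  # 4x compression -> 90 azimuths
print(small.shape)                 # (100, 90)
```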

We compare the two compression algorithms using trace-driven simulations with the Oklahoma dataset. Each scan in the trace is compressed to the size that can be sent in one epoch (30 seconds) under a fixed bandwidth B, i.e., s = 30·B. The MSE of the compressed data is measured for different bandwidth settings ranging from 10kbps to 500kbps. Figure 7 shows MSE as a function of bandwidth. With increasing bandwidth, the MSE of the SPIHT algorithm decreases much more quickly than that of the averaging algorithm, since SPIHT captures the key features of the radar scan using very few packets. Even at extremely low bandwidths such as 20kbps, the MSE of the SPIHT compressed stream is 20, whereas the MSE of the same stream with averaging compression is an order of magnitude higher at 200. This shows that SPIHT is an extremely efficient compression scheme for radar data.

5.3.2 Bandwidth Adaptation

Next, we evaluate the ability of SPIHT to adapt to bandwidth fluctuations. SPIHT adapts to fluctuations naturally because of its progressive property: data can be decoded progressively without receiving the entire compressed stream. We compare it against a non-progressive compression algorithm under different levels of bandwidth fluctuation. The non-progressive algorithm is implemented by simply removing the progressive feature from SPIHT. In other words, the non-progressive SPIHT encoder first estimates how much bandwidth is highly likely to be available until the deadline of the stream, and compresses the data to that size before transmission. The data can be decoded by the proxy only after the entire compressed stream is received, since no partial decoding is possible.

Unlike progressive compression, where the receiver can decode even a partially transmitted stream, a non-progressive compression-based scheme has to rely on a conservative estimate of the available bandwidth to ensure the compressed data can be fully transmitted and received before the query deadline.

Figure 8. Comparison of progressive compression against non-progressive compression for different levels of bandwidth fluctuation (MSE vs. standard deviation in kbps). Bandwidth fluctuation follows a normal distribution with mean 40kbps; the standard deviation is varied from 0kbps to 25kbps.

We use a moving window estimation algorithm in our implementation. The non-progressive encoder considers a window of bandwidth values from the last w epochs. The values are sorted in descending order and the value at the 95th-percentile position is taken as the estimated bandwidth. We use a window size of 20 in the experiments.
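A sketch of this estimator, assuming per-epoch bandwidth samples in kbps; the class and method names are illustrative:

```python
# Sketch of the moving-window conservative bandwidth estimator: keep the last
# w per-epoch samples, sort descending, and take the 95th-percentile position
# as a conservative (low) estimate.
from collections import deque

class ConservativeEstimator:
    def __init__(self, w=20):
        self.window = deque(maxlen=w)

    def update(self, bw_kbps):
        self.window.append(bw_kbps)

    def estimate(self):
        ordered = sorted(self.window, reverse=True)
        idx = min(int(0.95 * len(ordered)), len(ordered) - 1)
        return ordered[idx]  # 95% of past samples were at least this high

est = ConservativeEstimator()
for bw in [40, 35, 60, 20, 45, 50, 30]:
    est.update(bw)
print(est.estimate())  # near the low end of observed bandwidths
```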

We perform a trace-driven simulation using the Oklahoma dataset where the available bandwidth in each epoch is chosen from a normal distribution with mean 40kbps. The standard deviation of the distribution is varied from 0kbps to 25kbps in steps of 5kbps, and the resulting MSE of the two schemes is measured. Figure 8 shows the MSE of the decoded data as a function of the standard deviation of the distribution. At a standard deviation of zero, the two compression algorithms achieve the same accuracy since they utilize the same amount of bandwidth. As the standard deviation increases, the bandwidth utilized by the non-progressive algorithm drops quickly, because it estimates the available bandwidth conservatively. Therefore, the accuracy of the non-progressive algorithm degrades much more quickly than that of the progressive algorithm. At the highest standard deviation, the MSE of the non-progressive algorithm is four times that of the progressive algorithm.

Figure 9 gives a time-series view of how bandwidth fluctuation impacts the two schemes. While the non-progressive scheme has high MSE due to its conservative estimate, the MSE of the progressive compression scheme follows the fluctuations in bandwidth, since it is able to exploit the entire available bandwidth. The R-value between the bandwidth and MSE time series for the progressive algorithm is −0.79, indicating strong anti-correlation: bandwidth is inversely correlated with MSE.
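The R-value here is the Pearson correlation coefficient between the two per-epoch series; the sketch below shows the computation on synthetic values, not the measured traces:

```python
# Sketch of the reported R-value: Pearson correlation between the per-epoch
# bandwidth and MSE time series (values below are synthetic).
import numpy as np

bandwidth = np.array([42, 55, 31, 60, 25, 48, 38])  # kbps per epoch
mse       = np.array([18, 12, 30, 10, 36, 15, 22])  # decoded-data MSE

r = np.corrcoef(bandwidth, mse)[0, 1]
print(f"R = {r:.2f}")  # strongly negative, as for the progressive scheme
```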

5.4 Performance of Data Sharing

In multi-user sensor networks with diverse end-user needs, sharing data among queries greatly improves the utility of the system. We evaluate the ability of our system to handle two types of data sharing: i) queries with identical regions of interest but different deadlines, and ii) queries with identical deadlines but overlapping regions of interest.

5.4.1 Temporally Overlapping Queries

We first consider the case where queries have the same region of interest but different deadlines.


Figure 9. Time series of bandwidth (kbps) and MSE of decoded data per epoch. Bandwidth fluctuation follows a normal distribution with mean 40kbps and standard deviation 25kbps.

In this case, the progressive compression engine generates a single progressively compressed stream for both queries. The query processor decodes the compressed stream as it is received, and processes the two queries when their deadlines arrive. In contrast, a system using non-progressive compression cannot easily share data between queries. We compare our approach against a non-progressive compression scheme in which data is compressed and transmitted separately for each query.
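The sketch below illustrates this sharing pattern with assumed helper names: both queries consume the same progressive stream, and each is answered from whatever prefix has arrived by its own deadline.

```python
# Sketch of temporal sharing over one progressive stream. `utility_of_prefix`
# stands in for decoding the received prefix and scoring the result.
def serve_shared_stream(bytes_per_epoch, deadlines, utility_of_prefix):
    received = 0
    answers = {}
    for epoch, got in enumerate(bytes_per_epoch, start=1):
        received += got                 # the shared prefix grows each epoch
        for q, d in deadlines.items():
            if d == epoch:              # decode the prefix at each deadline
                answers[q] = utility_of_prefix(received)
    return answers

# Tornado query due after epoch 1, 3D assimilation after epoch 2.
print(serve_shared_stream(
    bytes_per_epoch=[150_000, 150_000],
    deadlines={"tornado": 1, "assim_3d": 2},
    utility_of_prefix=lambda b: min(1.0, b / 300_000)))
```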

We evaluate the two schemes using trace-driven simulations with the Oklahoma dataset. Two queries—tornado detection and 3D assimilation—arrive at a radar every two epochs. They have different deadlines, but both ask for all the data from a 360-degree scan. The tornado detection query has a deadline of one epoch, and the 3D assimilation query has a deadline of two epochs. Figure 10 shows the utility of the two schemes as the bandwidth is varied from 10kbps to 100kbps. At a bandwidth of 10kbps, our system achieves five times the utility of the non-progressive scheme. As the bandwidth increases, both schemes can get significant data through to the proxy, so the relative utility gain from our system decreases.

5.4.2 Spatially Overlapping Queries

We next evaluate our system's ability to handle queries with the same deadline and overlapping regions of interest. Regions that are of interest to multiple queries are weighted higher than regions of interest to just one query, and are therefore transmitted earlier and with higher fidelity. We compare our scheme to a scheme without data sharing, in which data for different queries is sent separately even when the queries overlap.
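The sketch below illustrates this weighting on a per-azimuth grid for two 180-degree sectors; the unit weights and sector bounds are illustrative only.

```python
# Sketch of spatial sharing: the per-azimuth weight is the sum of the weights
# of all queries whose sector covers that azimuth, so the overlapping region
# is sent earlier and at higher fidelity.
import numpy as np

def sector_mask(start_deg, end_deg, n=360):
    az = np.arange(n)
    return ((az >= start_deg) & (az < end_deg)).astype(float)

# Two 180-degree sectors with a 60-degree overlap (azimuths 120..180).
w_tornado = 1.0 * sector_mask(0, 180)
w_assim   = 1.0 * sector_mask(120, 300)
combined  = w_tornado + w_assim  # overlap azimuths carry weight 2

print(combined[60], combined[150], combined[240])  # 1.0 2.0 1.0
```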

Figure 10. Performance for temporally overlapping queries (average utility per epoch vs. bandwidth in kbps). Two queries with different deadlines but the same region of interest arrive at the radar every two epochs.

Figure 11. Evaluation of the impact of data sharing on utility (utility vs. overlap angle in degrees). Two applications, tornado detection and 3D assimilation, with overlapping sectors are considered.

We consider two queries, tornado detection and 3D assimilation, each of which requires data from a 180-degree sector. The degree of overlap between the regions of interest of the two queries is varied from 0 degrees to 180 degrees in steps of 30 degrees.

Figure 11 shows the end-user utility achieved by our scheme and the non-data-sharing scheme as the angle of overlap between the two queries is varied. As the angle of overlap increases, the utility gain from our scheme increases. For an overlap of 180 degrees (maximum overlap), our scheme achieves 21% higher utility than the non-data-sharing scheme.

5.5 Performance of Local Scheduler

We now evaluate the benefit of the local transmission scheduler, which always transmits the packet with the highest utility gain first. We compare this approach against one that uses a random transmission scheduler, which picks packets randomly from the heads of the data streams. In the experiments, we simulate one radar and one server and control the available bandwidth. The tornado detection, wind direction estimation, and 3D assimilation queries arrive at the radar in round-robin order at the beginning of each epoch. All queries have the same priority and the same deadline of three epochs. We run the two systems at bandwidths ranging from 10kbps to 150kbps, seeded with the Oklahoma dataset.
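A sketch of this greedy policy, assuming each stream's packets carry precomputed marginal utility gains (the values below are illustrative). Because the streams are progressively encoded, packets within a stream must be sent in order, so only the stream heads compete for the channel.

```python
# Sketch of the utility-driven transmission scheduler: always send the
# head-of-stream packet with the highest marginal utility gain. A max-heap
# over stream heads keeps the selection O(log n) per packet.
import heapq

def schedule(streams):
    """streams: {name: [gain of packet 1, gain of packet 2, ...]}"""
    heap = [(-gains[0], name, 0) for name, gains in streams.items() if gains]
    heapq.heapify(heap)
    order = []
    while heap:
        neg_gain, name, i = heapq.heappop(heap)
        order.append((name, i, -neg_gain))
        gains = streams[name]
        if i + 1 < len(gains):  # next packet of this stream becomes its head
            heapq.heappush(heap, (-gains[i + 1], name, i + 1))
    return order

# Progressive streams: early packets carry the largest utility gains.
print(schedule({"tornado": [0.5, 0.2, 0.1], "assim_3d": [0.3, 0.15]}))
```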

Figure 12 shows the average utility per epoch as a function of bandwidth.


Figure 12. Comparison of utility-driven scheduling against random scheduling (average utility per epoch vs. bandwidth in kbps).

For bandwidths lower than 150kbps, the utility-driven scheduler always achieves higher utility than the random scheduler, with as much as a 100% increase in utility at low bandwidth. As bandwidth increases, the utilities of the two systems converge. Our system performs better under low-bandwidth conditions because the most important data is always sent in the first packets. When the bandwidth is high enough to send all data at high fidelity, e.g., at 150kbps, there is negligible benefit from utility-driven scheduling.

5.6 Performance of Global Control

Having evaluated the performance of local transmission control, we next consider global transmission control by the proxy. Such an optimization is beneficial when there is an imbalance in query load across different regions of the network. We designed an uneven query pattern as follows: in each epoch, a tornado detection query with priority 3 arrives at a radar that is i hops (i varies from 1 to 3) away from the server, while each of the other radars receives a wind direction estimation or 3D assimilation query with priority 1. We use the testbed consisting of one server and twelve radars shown in Figure 5. Each radar in the testbed is seeded with a radar trace from the ARPS dataset.

Figure 13 shows the average utility per epoch as the number of hops from the proxy increases. The utility decreases for both approaches as the high-priority queries arrive at nodes farther from the proxy. This is because nodes on the edge of the routing topology usually have less available bandwidth than nodes closer to the proxy, as packet loss probability increases with the number of hops a packet travels. Thus, a query arriving at an edge node cannot achieve high utility because of the limited bandwidth, and the contribution of the tornado detection query to the overall utility is reduced. However, the global control-based approach degrades much more slowly than the approach without global control. For instance, when the tornado query is posed three hops from the proxy, the global control-based approach achieves twice the utility of the approach without such control. This shows that global transmission control provides a simple but effective way to deal with imbalanced query loads.

5.7 System Scalability

Until now, we have characterized the performance of individual components of our system. We now turn to full-system measurement and evaluation on our testbed. Our goals are two-fold: i) to demonstrate that our system as a whole scales well with network size and the number of queries per epoch, and

Figure 13. Performance of global transmission control (average utility per radar per epoch vs. number of hops). Utility is shown for differing numbers of hops from the proxy to the nodes with high-priority queries.

ii) to provide a breakdown of the utility gains provided by the different components of our system.

5.7.1 Impact of Network Size

Our first set of scalability experiments tests our system at different network scales. In the experiments we use different numbers of nodes in the testbed shown in Figure 5: the one- and four-node experiments use a one-hop topology, the eight-node experiments use a two-hop topology, and the twelve-node experiments use a three-hop topology. Each radar is seeded with data traces from the ARPS dataset.

The query distribution for our experiments was designed, in collaboration with meteorologists, to realistically model query patterns during a tornado. The three queries—tornado detection, wind direction estimation, and 3D assimilation—arrive at each radar as a Poisson process with an average arrival rate of one query per three epochs and a standard deviation of one query per epoch. The wind direction estimation and 3D assimilation queries are randomly assigned priorities of one or two.

The priority of the tornado query, and the nodes on which it is posed, depend on where the tornado is predicted to be. Meteorologists use tracking algorithms such as Extended Kalman Filters to track tornado trajectories, thereby predicting the likely location. Therefore, in our query model, we assume that the priority of tornado detection queries is three on radars where the tornado is predicted to be observed by the tracker, and one otherwise. To generate this query pattern, we use a visual estimate from the ARPS emulator data to determine the likely centroid of the tornado.

We compare four schemes in this experiment. The existing NetRad system, with averaging compression and conservative bandwidth estimation (described in Section 5.3.2), provides the baseline for comparison. We then consider three variants of our system: first, we turn on progressive compression only; then we turn on progressive compression as well as local transmission scheduling; and finally, we include global control as well. Figure 14 shows the average utility per epoch of NetRad and the three variants of our system.

For small networks (1 or 4 nodes), our gains over the NetRad system are primarily due to progressive compression.


Figure 14. Scalability to network size (average utility per radar per epoch vs. number of radars). Breakdown of the contribution of each component of our system to the overall utility.

For instance, when there is only one radar in the network, just the addition of progressive compression gives us 3× as much utility as the NetRad scheme. Both local scheduling and global control have limited impact in the one- and four-node settings, because there is limited contention and considerable available bandwidth from each node to the proxy. Thus, at a network size of one, the addition of local scheduling achieves only 4% more utility than progressive compression alone. Global control has no impact at network size 1, and limited impact at network size 4.

As system size increases, contention between nodes also increases. There is less available bandwidth per radar and more bandwidth fluctuation due to increased contention and collisions, and consequent variations in TCP window size. As a result, both local scheduling and global control yield larger gains, and the benefit grows with network size. For instance, the addition of local scheduling to progressive compression increases utility by 15% at network size four and by 38% at network size 12. The inclusion of global control improves utility by only 4% at network size 4, but provides a 30% improvement at network size 12.

Another point to note is the increasing difference in performance between the NetRad scheme and our full system. With all three techniques enabled, our system achieves more than an order of magnitude improvement in utility over the NetRad system at network size 12. As the network size increases from one to twelve, the utility of our system decreases by only 25%, whereas the utility of NetRad decreases by 80%; this comparison demonstrates the scalability of our system.

5.7.2 Impact of Query Load

Our second scalability experiment stresses the query handling ability of our system. We compare our system against the NetRad system under different query loads. Since the query processor aggregates queries of the same type into a single query in each epoch, at most three queries are posed on each radar per epoch. We run the experiments on the wireless testbed at network size 12. Each radar node is seeded with a data trace from the ARPS emulator. We use a constant query arrival rate in the query distribution for these experiments. In each epoch, at most three queries of different types arrive at each radar. The priorities of the wind direction estimation and 3D assimilation queries are assigned one or two randomly. The priority of tornado detection queries is three on radars where the tornado is predicted to be observed by the tracker, and one otherwise.

Figure 15. System scalability to the query load (average utility per query vs. query rate in queries/epoch).

We evaluate the two systems under query rates ranging from one to three queries per epoch.

Figure 15 shows the average utility per query as a function of the query rate. In our system, as the query rate increases, each query still receives data with sufficient accuracy to achieve high utility: the utility of the NetRad system decreases by 25% when the query rate increases from one to three, whereas the utility of our system decreases by only 15%. This demonstrates the scalability of our system to high query loads.

6 Related Work

We discuss related work not covered in previous sections.

Radar Sensor Networks: The work most related to MUDS is the Meteorological Command and Control (MC&C) [23] system deployed in the NetRad radar network, which schedules the sensing tasks of the radars. MC&C allocates resources such as beam positions to satisfy end-users' needs. Based on the sensing schedules from MC&C, our MUDS system optimizes data transmission to maximize the total utility gain.

Multi-query Optimization: A few approaches have addressed multi-query optimization in sensor networks [16, 22]. For instance, [16] considers a limited form of multi-user sharing where different users request data at different rates from different sensors, and [22] considers multi-query optimization for arbitrary SQL queries with simple aggregations such as min, max, sum, count, and average. In contrast, we consider data sharing for considerably more complex applications involving spatial and temporal data sharing, but focus on the specific set of queries used in radar sensor networks.

Utility-based Design: There is a growing body of research on utility-based approaches to different problems in sensor networks, including resource allocation in SORA [15] and sensor placement [2]. Much of this work is only peripherally related to ours. For instance, SORA employs reinforcement learning and an economic approach for energy optimization in sensor networks [15], but is not designed for multi-user scenarios.

Data Compression: Many techniques have used data compression to reduce communication energy overhead in sensor networks. For instance, Sadler et al. [19] consider data compression algorithms such as LZW for networks of energy-constrained devices. However, the use of progressive compression together with multi-query optimization on resource-rich platforms is a novel approach that has not been studied in the past.


Utility in Internet-based Systems: For Internet-like networks, Kelly [13] pioneered a utility-theoretic framework for rate control and, in particular, for deconstructing TCP-like protocols. Such approaches have also been used to jointly optimize routing and rate control [13, 11]. These schemes attempt to allocate resources such as bandwidth across users without consideration of data sharing between the users. Multicast rate control schemes do exploit data sharing across users, but they apply to a one-to-many environment, unlike MUDS, which is designed for many-to-one or many-to-many environments.

7 Concluding Remarks

In this paper, we focused on a network of rich sensors that are geographically distributed and argued that the design of such networks poses very different challenges from traditional "mote-class" sensor network design. We identified the need to handle the diverse requirements of multiple users as a major design challenge, and proposed a utility-driven approach to maximize data sharing across users while judiciously using limited network and computational resources. Our utility-driven architecture addresses three key challenges: how to define utility functions for networks with data sharing among end-users, how to compress and prioritize data transmissions according to their importance to end-users, and how to gracefully degrade end-user utility in the presence of bandwidth fluctuations. We instantiated this architecture in the context of geographically distributed wireless radar sensor networks for weather, and presented results from an implementation of our system on a multi-hop wireless mesh network using real radar data and real end-user applications. Our results demonstrated that our progressive compression and transmission approach achieves an order of magnitude improvement in application utility over existing utility-agnostic non-progressive approaches, while also scaling better with the number of nodes in the network.

Overall, these results demonstrate the significant benefits of multi-user data sharing in rich sensor networks. While we have considered only bandwidth optimization in this work, we are exploring joint radar sensing and bandwidth optimization in future research. We also believe that the benefits of data sharing apply to a wider range of applications and end-users than we have explored here, and we plan to extend our work to camera sensor networks as well as resource-poor mote-class sensor networks. In our experiments we found that packet losses, retransmissions, and TCP behavior have a great impact on the overall performance of the system. To address this, we will design a hop-by-hop bulk transfer protocol that optimizes radar data transfers in future work.

8 Acknowledgments

This research was supported, in part, by NSF grants EEC-0313747, CNS-0626873, CNS-0546177, CNS-0520729, and CNS-0325868. We wish to thank our shepherd, Andrew Campbell, as well as the anonymous reviewers for their helpful comments on this paper.

9 References

[1] D. Aguayo, J. Bicket, S. Biswas, G. Judd, and R. Morris. Link-level measurements from an 802.11b mesh network. In Proc. SIGCOMM, 2004.
[2] F. Bian, D. Kempe, et al. Utility based sensor selection. In Proc. IPSN, 2006.
[3] V. Bychkovsky, K. Chen, M. Goraczko, A. Miu, E. Shih, Y. Zhang, et al. CarTel: A distributed mobile sensor computing system. In Proc. SenSys, 2006.
[4] http://www.caps.ou.edu/. CAPS: Center for Analysis and Prediction of Storms.
[5] K. Chebrolu, B. Raman, and S. Sen. Long-distance 802.11b links: performance measurements and experience. In Proc. MOBICOM, 2006.
[6] P. R. Desrochers and S. Y. Yee. Wavelet-based algorithm for mesocyclone detection. In Proc. SPIE, 1997.
[7] B. Donovan, D. J. McLaughlin, J. Kurose, et al. Principles and design considerations for short-range energy balanced radar networks. In Proc. IGARSS, 2005.
[8] http://www.earthscope.org.
[9] J. E. Fowler. QccPack: an open-source software library for quantization, compression and coding. In Proc. SPIE, 2000.
[10] R. Fritchie, K. K. Droegemeier, et al. Detection of hazardous weather phenomena using data assimilation techniques. In 32nd Conference on Radar Meteorology, 2005.
[11] H. Han, S. Shakkottai, C. V. Hollot, R. Srikant, and D. Towsley. Overlay TCP for multi-path routing and congestion control. In Proc. of IMA Workshop on Measurements and Modeling of the Internet, 2004.
[12] K. Jain, J. Padhye, V. N. Padmanabhan, and L. Qiu. Impact of interference on multi-hop wireless network performance. Wireless Networks, 2005.
[13] F. Kelly, A. Maulloo, and D. Tan. Rate control in communication networks: shadow prices, proportional fairness and stability. In Journal of the Operational Research Society, volume 49, 1998.
[14] S. Liu, M. Xue, and Q. Xu. Using wavelet analysis to detect tornadoes from Doppler radar radial-velocity observations. In Journal of Atmospheric and Oceanic Technology, 2006.
[15] G. Mainland, D. Parkes, and M. Welsh. Decentralized, adaptive resource allocation for sensor networks. In Proc. NSDI, May 2005.
[16] R. Muller, G. Alonso, and D. Kossman. Efficient sharing of sensor networks. In Proc. MASS, 2006.
[17] R. Patra, S. Nedevschi, et al. WiLDNet: design and implementation of high performance WiFi-based long distance networks. In Proc. NSDI, 2007.
[18] B. Philips, D. Pepyne, et al. Integrating end user needs into system design and operation: the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA). In Proceedings of the 87th AMS Annual Meeting, San Antonio, TX, USA, Jan. 2007.
[19] C. Sadler and M. Martonosi. Data compression algorithms for energy-constrained devices in delay tolerant networks. In Proc. SenSys, 2006.
[20] A. Said and W. A. Pearlman. A new fast and efficient image codec based on set partitioning in hierarchical trees. IEEE Transactions on Circuits and Systems for Video Technology, 6:243–250, 1996.
[21] S. Li and W. Li. Shape-adaptive discrete wavelet transforms for arbitrarily shaped visual object coding. IEEE Transactions on Circuits and Systems for Video Technology, 10:725–743, 2000.
[22] N. Trigoni, Y. Yao, A. Demers, J. Gehrke, and R. Rajaraman. Multi-query optimization for sensor networks. In Proc. DCOSS, 2005.
[23] M. Zink, D. Westbrook, S. Abdallah, B. Horling, V. Lakamraju, E. Lyons, V. Manfredi, J. Kurose, and K. Hondl. Meteorological Command and Control: an end-to-end architecture for a hazardous weather detection sensor network. In Proc. EESR, 2005.
[24] M. Zink, D. Westbrook, et al. NetRad: distributed, collaborative and adaptive sensing of the atmosphere: calibration and initial benchmarks. In Proc. DCOSS, 2005.
[25] J. Kurose, E. Lyons, D. McLaughlin, D. Pepyne, B. Philips, D. Westbrook, and M. Zink. An end-user-responsive sensor network architecture for hazardous weather detection, prediction and response. In Proc. AINTEC, 2006.
[26] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. The MIT Press, 2001.

