
What is a Hole? Discovering Access Holes in Disaster Rubble with Functional and Photometric Attributes


Christopher Kong, Department of Computer Science, Ryerson University, Toronto, Ontario, Canada; e-mail: [email protected]
Alex Ferworn, Department of Computer Science, Ryerson University, Toronto, Ontario, Canada; e-mail: [email protected]
Elliott Coleshill, School of Information and Communications Technology, Seneca College, Toronto, Ontario, Canada; e-mail: [email protected]
Jimmy Tran, Department of Computer Science, Ryerson University, Toronto, Ontario, Canada; e-mail: [email protected]
Konstantinos G. Derpanis, Department of Computer Science, Ryerson University, Toronto, Ontario, Canada; e-mail: [email protected]

Received 14 June 2014; accepted 15 January 2015

The collapse of buildings and other structures in heavily populated areas often results in human victims becoming trapped within the resulting rubble. This rubble is often unstable, difficult to traverse, and dangerous for emergency first responders tasked with finding, stabilizing, and extricating entombed or hidden victims through access holes in the rubble. Recent work in scene mapping and reconstruction using photometric color and metric depth (RGB-D) data collected by unmanned aerial vehicles (UAVs) suggests the possibility of automatically identifying potential access holes into the interior of rubble. This capability would greatly improve search operations by directing the limited human search capacity to areas where access holes might exist. This paper presents a novel approach to automatically identifying access holes in rubble. The investigation begins by defining an access hole in terms that allow for their algorithmic identification as a potential means of accessing the interior of rubble. This definition captures the functional and photometric attributes of holes. From this definition, a set of hole-related features for detection is presented. Experiments were conducted using RGB-D data collected over a real-world disaster training facility using a UAV. Empirical evaluation suggests the efficacy of the proposed approach for successfully identifying potential access holes in disaster rubble. © 2015 Wiley Periodicals, Inc.

1. INTRODUCTION

1.1. Motivation

Disasters involving collapsed buildings in urban areas occur for a variety of reasons. Due to the increased population density in these areas, the likelihood of humans becoming trapped in the resultant building rubble is quite high. In response to these events, organized teams or Task Forces of Urban Search and Rescue (USAR) personnel are deployed to locate and extract victims, build support for unstable structures (i.e., shoring), and provide medical care (FEMA, 2009).

When rescue personnel perform triage on a collapsed structure, they first determine areas that are likely to contain trapped victims, and then they formulate a plan to access the structure’s interior. If access holes already exist, these will be evaluated before rubble removal is considered to save time and reduce the chances of creating secondary collapses (FEMA, 2009). Figure 1 shows an access hole that potentially leads into the rubble’s interior.

The terms “hole” and “access hole” are not clearly defined within the USAR nomenclature. The challenge lies in the amorphous nature of holes (e.g., the lack of a prototypical shape, depth, and orientation); thus, a definition is left open to the interpretation of each search team. To compound the difficulty of this problem, disaster rubble typically contains many irregularities within the rubble pile. For instance, inconsistency in the size, shape, and types of material constituting the rubble affects what is and is not considered to be a candidate entry hole.

Journal of Field Robotics 00(0), 1–12 (2015) © 2015 Wiley Periodicals, Inc. View this article online at wileyonlinelibrary.com • DOI: 10.1002/rob.21590


Figure 1. An image of a rubble field from the dataset introduced in this paper. An “access hole” is highlighted by the green bounding box.

Figure 2. Left: A raw RGB image, and right: a per-pixel registered depth image obtained by a (top) UAV outfitted with an Asus Xtion Pro sensor. The colorized depth map visualizes the depth, where red and green represent smaller and larger depths from the capture sensor, respectively. This figure is best viewed in color.

An access hole can be defined through its intended use. In the context of the current work, the goal is to locate holes that are sufficiently large to permit human entry. In this way, an access hole is defined in the context of its functional utility for search and rescue. This is analogous to the functional object recognition paradigm pursued in computer vision (Dickinson, 2009) that models objects, such as chairs, in terms of their function, i.e., the ability to support a human, rather than the particulars of their appearance.

The instability of rubble can prevent USAR teams from safely traversing it to find access holes. This situation has led researchers to investigate ways to minimize risk to human searchers through the use of unmanned vehicles (Birk, Wiggerich, Bulow, Pfingsthorn, & Schwertfeger, 2011; Ferworn, Tran, Ufkes, & D’Souza, 2011; Finn & Wright, 2012; Murphy, 2004; Onosato et al., 2006). Previous work (Ferworn et al., 2011) demonstrated the ability to equip a UAV with a low-cost, off-the-shelf color camera with per-pixel metric depth information (i.e., an RGB-D sensor) and to capture critical disaster scene information from a safe distance and an alternative perspective. This information can then be used to generate a scene-level model that can quickly provide first responders with important details about the structure of the rubble (Ferworn, Herman, Tran, Ufkes, & McDonald, 2013) and provide input to an automated hole detection system, as pursued in the current paper.


In addition to the challenges posed by dealing with large amounts of visual data, traumatic events, such as a building collapse disaster, can overwhelm an individual. This can lead to critical incident stress that can impair the ability of personnel to function and perform tasks involving the detailed observation required for visual search (FEMA, 2009). This paper argues that a system that automates the identification of access holes potentially reduces the cognitive load faced by response personnel.

Current approaches for identifying access holes rely on visual inspection by first responders. If responders are precluded from entering the scene or are not yet present, they must rely on imagery as their main source of information. The ability to collect data far outpaces a human’s ability to deal with those data.

This paper presents a novel vision-based approach for automatically discovering access holes in disaster rubble imagery. The case of holes leading into subsurface voids is examined. An underlying assumption is that an access hole possesses clearly marked boundaries and a salient depth variation from the surrounding area. In addition, the potential access hole possesses a minimum width and aspect ratio to accommodate the entry of an adult human searcher. While the focus in the current paper is on human searchers, other types of search entities, such as search dogs or robots, are readily accommodated by adapting the thresholds used by the approach.

A preliminary version of this work has appeared previously (Kong et al., 2013).

1.2. Related Work

No prior work has directly addressed the concept of automatic hole discovery for the purpose of search and rescue, as pursued here. The related research is relatively limited and can be organized in terms of two domains: (i) the data acquisition platform and (ii) visual object detection.

Search and rescue operations are often time-critical. Response robots benefit USAR operations by providing robust platforms to carry sensors, collect data, and deliver supplies to trapped victims (Murphy, 2000). Sensory data are useful for determining the quality of the environment and potentially assist with locating victims. Prior work has used ground-based robots for autonomous navigation and mapping of rubble interior spaces (Mobedi & Nejat, 2012); however, this approach does not attempt to locate access holes for insertion into rubble. Using ground vehicles as platforms for automated, top-down road inspection has been somewhat successful (Sy, Avila, Begot, & Bardet, 2008). In this work, a sensor collects baseline information about level road surfaces and detects variances that translate to detected surface cracks. Since disaster rubble is often comprised of irregular shapes and materials, as opposed to level terrain, this approach is not directly appropriate for USAR. Work has been carried out using autonomous vehicles for detecting subsurface voids in mining operations (Wilson, Gurung, Paaso, & Wallace, 2009); however, this approach requires heavy equipment, level terrain, and a mobile platform traversing the area of inspection. USAR terrain is inevitably cluttered and chaotic, making effective ground locomotion problematic.

Research in terrain traversability has yielded the concept of “negative obstacles.” Negative obstacles are defined as obstacles below the ground surface that return no sensor data and thus should be treated as holes to be avoided (Heckman, Lalonde, Vandapel, & Hebert, 2007). Early investigations into detecting negative obstacles analyzed ray traces of every pixel, comparing actual range values to expected ones (determined via the position of the ground plane) to determine the difference (Matthies, Kelly, Litwin, & Tharp, 1995). This method assumes a homogeneous terrain is being traversed, making it unsuitable for USAR. Further work in negative obstacle detection (Sinha & Papadakis, 2013) projects three-dimensional (3D) point cloud data collected directly in front of the sensor to a 2D ground plane to detect gap contours. Detections are then further analyzed for traversability by ground robots in the USAR domain. In contrast to these previous works, which dealt with the avoidance of negative obstacles for terrain traversability, this paper focuses on the suitability of these negative obstacles for insertion of trained search personnel into subsurface voids.

Ground robots are limited in the areas they are able to successfully traverse, since the terrain composition can adversely impact locomotion (Ollero, 2004). This has motivated, in part, the use of UAVs to conduct surveying and reconnaissance tasks (Finn & Wright, 2012). Using a UAV for USAR operations allows rescue personnel to survey areas that would not ordinarily be accessible, and to view the terrain from perspectives unattainable by terrestrial robots (Onosato et al., 2006). This rich information allows responders to carefully plan missions (Birk et al., 2011; Goodrich et al., 2008), and it has proven extremely useful in finding victims in search and rescue missions (RCMP, 2013). Recent work has considered UAVs equipped with an RGB-D sensor to collect data for both terrain mapping and 3D scene reconstruction (Ferworn et al., 2011). To avoid the limitations of ground robots and to investigate areas inaccessible by a human searcher, the current work employs a UAV to explore remote regions and to collect data.

Rubble characterization is a difficult problem. There have been preliminary investigations attempting to contribute solutions to this problem (Binda, Saisi, & Tiraboschi, 2001; Lombillo et al., 2013; Molino et al., 2007; Onosato, Yamamoto, Kawajiri, & Tanaka, 2012); however, there is no universally accepted categorization method for rubble. Motivated by this prior work, the current paper addresses a specific subproblem of rubble characterization: identifying its absence.


An extensive body of work has accumulated on appearance- and geometry-based object recognition approaches; see the surveys by Grimson, Lozano Perez, & Huttenlocher (1990), Mundy (2006), Dickinson (2009), and Andreopoulos and Tsotsos (2013). Appearance-based approaches map a photometric input pattern to a label of a specific object or class [see, e.g., Dalal & Triggs (2005), Felzenszwalb, Girshick, McAllester, & Ramanan (2010), Lampert, Blaschko, & Hofmann (2008), Krizhevsky, Sutskever, & Hinton (2012), and Girshick, Donahue, Darrell, & Malik (2014)], whereas geometry-based approaches utilize three-dimensional surface descriptions of the input scene to perform object recognition [see, e.g., Koppula, Anand, Joachims, & Saxena (2011), Lai, Bo, Ren, & Fox (2012), and Rusu, Bradski, Thibaux, & Hsu (2010)]. A major advantage of geometry-based approaches over appearance-based ones is their invariance to material properties, viewpoint, and illumination. Further, these approaches simplify the figure-background segmentation problem compared to appearance-based approaches. Three-dimensional recognition has experienced a revived interest in both the robotics and vision communities due to the introduction of commodity-priced RGB-D sensors (Newcombe et al., 2011) and the abundant availability of three-dimensional models, e.g., Song & Xiao (2014).

Most closely related to the current work are functional descriptions for object recognition (Grabner, Gall, & Gool, 2011; Stark & Bowyer, 1991; Stark, Lies, Zillich, Wyatt, & Schiele, 2008; Winston, Binford, Katz, & Lowry, 1983), i.e., centering the object model on what one can do with the object rather than its appearance or shape. Many object classes exhibit a large degree of appearance and physical variation; for instance, the number of legs of a chair, while usually four, may vary. Access holes lack a canonical definition of size, shape, or orientation, making detection by appearance or shape a challenging task. For some of these objects, their description could be more easily provided by their function. This idea is adapted in the current work to develop a working definition of an access hole and to use the proposed function of an access hole to detect it. In other words, rather than describing what a hole looks like, it is more productive to define its function.

1.3. Contributions

In light of previous work, this paper makes three contributions. First, a novel definition of an access hole is presented based on a set of features derived from the functional form and photometric characteristics of a collapsed structure. Second, a novel approach is developed to automatically identify access holes in collapsed structures to be used by USAR personnel in accessing the collapsed structure. Analysis is performed on aerial imagery obtained by a UAV outfitted with an RGB-D sensor to identify candidate access holes. Third, a publicly available dataset obtained from a real-world USAR training rubble pile is introduced, where access holes are manually provided as ground truth. A quantitative empirical evaluation on the introduced dataset indicates the potential of the proposed approach for successfully identifying access holes in disaster rubble.

2. TECHNICAL APPROACH

2.1. Access Hole Definition

Before developing an approach to detect access holes, an operational definition is required. In this paper, an “access hole” (or “hole” for short) is defined by its potential for access into a collapsed structure, i.e., its function. In particular, a hole must be deeper in its interior than the surrounding terrain. Furthermore, to be useful for USAR, a hole must be large enough to support entry by a searcher, such as a human, dog, or robot. In the remainder of this paper, a searcher is assumed to be an adult human. This paper identifies three attributes that characterize a hole and allow access hole detection to be performed: (i) depth disparity, (ii) hole size, and (iii) photometric brightness.

2.2. Hole Attributes

The input to the proposed approach is an image pair extracted from an RGB-D sensor consisting of photometric color (RGB) and metric depth. The two images are registered such that they have a one-to-one mapping. To perform detection, candidate regions that potentially contain access holes must be isolated from the surrounding terrain. The proposed approach first oversegments the depth input into regions, i.e., superpixels (Ren & Malik, 2003), with the purpose of isolating regions (i.e., potential holes) exhibiting depth measurement discontinuities along their boundaries. A superpixel is a perceptually meaningful atomic image unit that contains pixels that are similar in some image property, such as depth, color, and texture. It is implicitly assumed that the constituent pixels of a superpixel belong to the same physical entity in the world. An adjacency graph is next created by identifying the neighbors of each superpixel. For each superpixel, a set of geometric and photometric feature scores is assigned, where each score represents the likelihood of a hole. Feature scores for each superpixel are aggregated to realize a final hole detection score. Figure 3 summarizes the data processing flow for the proposed approach to access hole detection.

Depth disparity. Typically, rubble scene imagery is extremely cluttered and unstructured. A hole, the region of interest, must be isolated from the area around it. Due to the heterogeneous nature of rubble, figure-ground separation (i.e., target entity versus background) of holes and rubble from photometric appearance alone is rendered difficult. Fortunately, RGB-D sensors provide an estimate of metric depth information, i.e., the underlying geometry. The depth information is exploited to partition the image into a set of superpixels along boundaries that exhibit a strong depth gradient.


Figure 3. Data flow for the proposed access hole detection approach. Using the input images, the depth image is oversegmented and treated as an undirected graph. For each superpixel, a set of geometric- and photometric-based feature scores is determined and used to calculate a final detection score. The final output is a set of localized access holes, each with a tightly fitted bounding box representing a candidate detection.

A publicly available superpixel algorithm is used to partition the image. An inappropriate number of partitions results in a contiguous entity (e.g., a hole) being either undersegmented or oversegmented. Undersegmenting an entity produces superpixels that do not respect the boundaries of noncontiguous regions, while oversegmentation subdivides a contiguous entity. The assumption is made that every superpixel overlaps with at most one hole, and that the set of superpixel boundaries is a superset of real-world access hole boundaries; these are standard assumptions in the use of superpixels in vision applications [see, e.g., Fulkerson, Vedaldi, & Soatto (2009) and Liu, Tuzel, Ramalingam, & Chellappa (2011)].

The absolute depth value of a region does not alone determine whether the region is a hole. A hole by definition must be deeper than its surrounding terrain; as such, it is the depth discontinuity between adjacent regions that is important. For each superpixel, an adjacency graph is built to obtain a list of its neighboring regions. A natural way to express the superpixel image is by an undirected graph G = (V, E), where each vertex, vi ∈ V, corresponds to a superpixel, and the edges, (vi, vj) ∈ E, denote the set of neighboring superpixels. Figure 4 shows an example of the superpixel extraction and neighborhood discovery steps.
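As an illustration of this neighborhood discovery step, the following minimal sketch (Python/NumPy; the paper's own implementation was in MATLAB, and the function name and input format here are assumptions for illustration) builds the edge set E from a superpixel label map by scanning for horizontally and vertically adjacent pixels with differing labels:

```python
import numpy as np

def superpixel_adjacency(labels):
    """Build the undirected graph G = (V, E) over superpixels.

    `labels` is an H x W integer map assigning each pixel to a superpixel
    (e.g., the output of an oversegmentation such as ERS). Two superpixels
    are neighbors if any of their pixels touch horizontally or vertically.
    Returns the edge set E as pairs (i, j) with i < j.
    """
    edges = set()
    # Horizontally adjacent pixel pairs with differing labels lie on a
    # shared superpixel boundary.
    l, r = labels[:, :-1], labels[:, 1:]
    mask = l != r
    for a, b in zip(l[mask], r[mask]):
        edges.add((int(min(a, b)), int(max(a, b))))
    # Likewise for vertically adjacent pixel pairs.
    u, d = labels[:-1, :], labels[1:, :]
    mask = u != d
    for a, b in zip(u[mask], d[mask]):
        edges.add((int(min(a, b)), int(max(a, b))))
    return edges
```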

For each superpixel, vi, its average depth is compared against all other superpixels that share a boundary with it. Superpixels that correspond to a local depth maximum compared to their neighbors serve as access hole candidates for scoring. The higher the mean depth of a candidate region relative to its surroundings, the more likely it is indeed an access hole. For each superpixel, a relative depth score, Sd, is calculated. The depth threshold used for scoring is based on data collected from anatomical models (Panero & Zelnik, 1979). This threshold establishes the minimum depth a region must be below its surroundings to be a valid candidate. A linear score between 0 and 1 is assigned for any relative depth between the minimum and maximum thresholds derived from the anatomical model. Any depth greater than the maximum threshold is assigned a score of 1, and any depth less than the minimum threshold is assigned 0.
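A minimal sketch of this relative depth score (Python; the function name and metric unit convention are illustrative assumptions, with the 200 and 1,951 mm thresholds taken from Section 3.1):

```python
def depth_score(sp_depth, neighbor_depths, d_min=0.200, d_max=1.951):
    """Relative depth score S_d for one superpixel.

    `sp_depth` is the superpixel's mean metric depth (m) and
    `neighbor_depths` are the mean depths of its graph neighbors.
    Only local depth maxima are candidates; the relative depth is then
    ramped linearly onto [0, 1] between d_min and d_max.
    """
    rel = sp_depth - max(neighbor_depths)  # positive if locally deepest
    if rel <= 0.0:
        return 0.0  # not deeper than its surroundings: not a candidate
    return min(max((rel - d_min) / (d_max - d_min), 0.0), 1.0)
```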

Hole size. An access hole must have an appropriate size for the insertion of rescue personnel or similarly sized entities, e.g., a search dog or a response robot. Two size-based region attributes are computed: (i) width and (ii) aspect ratio.

The width of the region is determined by fitting an ellipsoid around the superpixel from the metric values provided by the depth sensor and projecting the points to a plane. To exclude outliers, points that lie beyond three standard deviations from the mean are filtered out before projection. An ellipse is fitted to the point cluster, and the major and minor axes are computed. This yields a measure of the width and girth of a region in metric units.


Figure 4. Outline of the processing flow for segmenting a depth image and determining the neighbors of a particular superpixel. Left: Raw depth image, middle: superpixel segmentation, and right: a superpixel highlighted in light gray with its surrounding neighbors highlighted in dark gray.

For a hole to be considered appropriate for insertion of a searcher, the width of the major axis and girth of the minor axis were adopted based on anatomical data of the average adult human (Panero & Zelnik, 1979). A region width score, Sw, between 0 and 1 is assigned, where a higher score indicates a higher likelihood of accommodating a searcher. A score of 1 is assigned to Sw if the measurements of the major and minor axes are both equal to or greater than the anatomical model. If the axis measurements are 50% of the anatomical measurements or below, a score of 0 is assigned. To minimize missed detections of holes due to partial occlusion or superpixel oversegmentation, the score is applied linearly between 0 and 1 for measurements greater than 50% of the anatomical model measurements.

To limit the candidacy of holes that may be thin and curvilinear, a score for the aspect ratio of the region is introduced. The aspect ratio score is assigned by calculating the ratio of the area of a given superpixel to the area of the bounding box tightly outlining the major and minor axes. The higher the percentage of the bounding box occupied, the better the candidacy of the detected region. A score, Sr, is assigned linearly between 0 and 1 based on the percentage of the bounding box occupied by the superpixel.
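The two size scores might be sketched as follows (Python/NumPy). Here the principal axes of the projected point cluster stand in for the fitted ellipse, and combining the two axis checks by taking the minimum is an assumption the text leaves open; the 655 and 368 mm anatomical minima come from Section 3.1:

```python
import numpy as np

def width_score(points_xy, w_min=0.655, g_min=0.368):
    """Width score S_w for one superpixel from its plane-projected
    metric points (N x 2, meters)."""
    centered = points_xy - points_xy.mean(axis=0)
    # Discard outliers beyond three standard deviations from the mean.
    keep = (np.abs(centered) <= 3.0 * centered.std(axis=0)).all(axis=1)
    pts = centered[keep]
    pts = pts - pts.mean(axis=0)
    # Principal axes of the cluster approximate the fitted ellipse axes.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    proj = pts @ vt.T
    major = proj[:, 0].max() - proj[:, 0].min()
    minor = proj[:, 1].max() - proj[:, 1].min()

    def ramp(measured, required):
        # 0 at or below 50% of the anatomical minimum, 1 at or above
        # 100%, linear in between.
        return min(max(measured / required - 0.5, 0.0) / 0.5, 1.0)

    return min(ramp(major, w_min), ramp(minor, g_min))

def aspect_ratio_score(region_area, box_area):
    """Aspect-ratio score S_r: the fraction of the tight bounding box
    occupied by the superpixel; thin, curvilinear regions score low."""
    return region_area / box_area if box_area > 0 else 0.0
```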

Photometric appearance. Examining the depth information alone does not provide sufficient discriminatory information about a hole. To account for this uncertainty, photometric brightness derived from the RGB image is incorporated. It is assumed that access holes are poorly illuminated and thus appear darker in the RGB image. To capture this attribute, two feature scores are introduced: (i) absolute brightness and (ii) relative brightness.

To compute the absolute brightness intensity of a superpixel, the RGB image is converted to the YUV color space (Black, 2009), and the average brightness from the Y-channel (i.e., the luminance) of each superpixel is calculated, where Y ∈ [0, 1]. To determine the threshold for a valid brightness intensity value, a dataset was compiled from images collected via Google Images (Google, 2014). The dataset contains 118 images depicting collapsed buildings and rubble from disaster scenes. Holes were hand-labeled, and the mean brightness intensity was collected. A photometric brightness score, Sb, ranging between 0 and 1 is assigned, where a higher score is assigned to regions darker than a pixel intensity threshold that was empirically determined from the training data.

Since holes are typically darker than the region surrounding them, each region is also scored based on its relative brightness intensity. Using the Y-channel, the difference between the average brightness of a superpixel and the average brightness of all pixels within (directly) neighboring superpixels is calculated. A minimum threshold was determined empirically using the image training set containing the hand-labeled ground truth. A photometric contrast score, Sc, between 0 and 1 is assigned to the given superpixel.
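A sketch of the two photometric scores (Python/NumPy). The exact shape of each score around its threshold is not pinned down in the text, so linear ramps are assumed here; the 0.274 and 0.267 luminance thresholds are taken from Section 3.1:

```python
def luminance(rgb):
    """Per-pixel luminance Y in [0, 1] from an RGB image scaled to
    [0, 1] (the Y channel of the YUV conversion)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def brightness_scores(y_mean, neighbor_y, y_thresh=0.274, c_thresh=0.267):
    """Absolute brightness score S_b and relative contrast score S_c for
    one superpixel. `y_mean` is the superpixel's mean luminance and
    `neighbor_y` the mean luminance over all pixels of its neighbors."""
    # S_b: the darker the region is below the threshold, the higher
    # the score.
    s_b = min(max((y_thresh - y_mean) / y_thresh, 0.0), 1.0)
    # S_c: regions darker than their surroundings score high.
    s_c = min(max((neighbor_y - y_mean) / c_thresh, 0.0), 1.0)
    return s_b, s_c
```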

2.3. Detection

Each superpixel is assigned a final detection score, S. Higher scores indicate a stronger likelihood of the superpixel being a hole. The detection score, S, is calculated as follows:

S = Σ_{Si ∈ F} wi Si + b,        (1)

where F = {Sd, Sw, Sr, Sb, Sc} is the set of feature scores, wi denotes the weighting given to the corresponding feature, and b is a bias term. Each detected access hole is represented by a bounding box that tightly outlines the image region. For each image, the final output of the approach consists of the coordinates of the bounding boxes and their corresponding detection scores.
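Equation (1) amounts to one weighted sum per superpixel; a minimal sketch (Python, with the equal weights wi = 1/5 and zero bias used in the experiments of Section 3.1 as defaults):

```python
def detection_score(features, weights=None, bias=0.0):
    """Final detection score S = sum_i w_i * S_i + b of Eq. (1).

    `features` is the list [S_d, S_w, S_r, S_b, S_c]. With equal weights
    w_i = 1/5 and b = 0, as in the experiments, S lies in [0, 1].
    """
    if weights is None:
        weights = [1.0 / len(features)] * len(features)
    return sum(w * s for w, s in zip(weights, features)) + bias
```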

3. EXPERIMENTAL RESULTS

3.1. System Setup

Evaluation of the proposed approach was performed on a novel rubble scene dataset (Section 3.2). Throughout the evaluation, the various thresholds of the approach were fixed to the same values for all images.


The minimum and maximum depths used for computing the depth score, Sd, were based on an anatomical human model (Panero & Zelnik, 1979) and were set to 200 and 1,951 mm, respectively. The same anatomical model was used to set the minimum width and girth thresholds used for computing the size score, Sw, and aspect ratio score, Sr, set at 655 and 368 mm, respectively. The threshold used to compute the photometric brightness score, Sb, was empirically set to a luminance value of 0.274. Similarly, the brightness difference between a superpixel and its neighboring regions used to compute the photometric contrast score, Sc, was empirically set to a luminance value of 0.267. To avoid overfitting to the introduced dataset, an equal weighting of wi = 1/5 was given to each feature, and the bias term, b, was set to zero. This is done to remain unbiased with regard to features that may be stronger in the limited amount of data available for learning the parameters. Consequently, detection scores range between 0 and 1.
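For reference, the fixed parameter values reported in this section, gathered in one place (the dictionary keys are illustrative names, not identifiers from the authors' code):

```python
# Fixed settings used throughout the evaluation (Section 3.1).
PARAMS = {
    "depth_min_mm": 200,         # minimum relative depth for S_d
    "depth_max_mm": 1951,        # relative depth mapped to a score of 1
    "width_min_mm": 655,         # anatomical major-axis minimum (S_w)
    "girth_min_mm": 368,         # anatomical minor-axis minimum (S_w)
    "brightness_thresh": 0.274,  # luminance threshold for S_b
    "contrast_thresh": 0.267,    # neighbor luminance difference for S_c
    "feature_weight": 1.0 / 5,   # equal w_i over the five feature scores
    "bias": 0.0,                 # b in Eq. (1)
}
```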

3.2. Dataset

The proposed access hole detection approach was evaluated on a challenging dataset containing images of a real rubble scene. Data were collected at the Reference Rubble Pile of the Ontario Provincial Police (OPP), located in Bolton, Ontario, Canada (U.C.R.T., 2013). The rubble pile is used for training purposes, and it consists of heterogeneous terrain comprised of concrete, metal, and wood debris fields, purpose-built simulation buildings, shipping containers, and partially crushed and buried vehicles. Commodity RGB-D sensors, such as the Microsoft Kinect and Asus Xtion, are notoriously sensitive to external sources of infrared light (Ferworn et al., 2011). To minimize the corruption of depth estimates for the experiments, data were captured during sunrise or dusk, when the influence of the Sun’s infrared emissions was minimal. The dataset is comprised of 254 image pairs consisting of an RGB image and a corresponding registered depth map, with an image resolution of 640 × 480. Out of this set, there are 166 RGB-D images that contain 18 unique holes that meet the definition of an access hole. Ground truth was marked by hand-labeling the location of each access hole with a tight bounding box. Figure 7 shows a sample of the data used for evaluation. The image dataset and ground truth are publicly available at http://ncart.scs.ryerson.ca/research/access-hole-detection.

3.3. Evaluation

To quantitatively evaluate the detection accuracy of the approach on the introduced dataset, Precision-Recall (P-R), a standard evaluation tool in information retrieval (Rijsbergen, 1979), is used. The curve captures the tradeoff between accuracy and noise as the detection threshold is varied.

Table I. Comparison of average precision (AP) for a range of target superpixel values used to partition each image.

Number of superpixels:  9     11    13    15    18    20
AP:                     0.43  0.37  0.37  0.36  0.38  0.36

“Precision” denotes the number of correctly detected holes over the total number of detections, and it is defined as follows:

Precision = TP/(TP + FP), (2)

where TP denotes the number of true positives (i.e., correctly detected holes) and FP denotes the number of false positives, i.e., the number of detections where no hole is present. “Recall” is the fraction of true positives that are detected rather than missed, and it is defined as follows:

Recall = TP/nP, (3)

where nP is the total number of positives present in the dataset. A detection is considered a true positive if there is a spatial overlap greater than 50% with the hand-labeled ground truth. A detection is represented as a (rectilinear) bounding box that spatially outlines the candidate access hole, along with the associated detection score, S, and the unique identifier of the image pair.
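A sketch of this evaluation protocol (Python). The text leaves the overlap measure at "greater than 50%"; intersection-over-union and PASCAL-style matching, where duplicate detections of one hole count as false positives, are assumptions here:

```python
def overlap(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(detections, gt_boxes, score_thresh):
    """Precision = TP/(TP + FP), Recall = TP/nP at one threshold.

    `detections` is a list of (score, box) pairs and `gt_boxes` the
    hand-labeled ground-truth boxes. Sweeping `score_thresh` traces out
    the P-R curve.
    """
    kept = [box for score, box in detections if score >= score_thresh]
    matched, tp = set(), 0
    for box in kept:
        # Greedy one-to-one matching of detections to ground truth.
        for i, gt in enumerate(gt_boxes):
            if i not in matched and overlap(box, gt) > 0.5:
                matched.add(i)
                tp += 1
                break
    fp = len(kept) - tp
    precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
    recall = tp / len(gt_boxes) if gt_boxes else 1.0
    return precision, recall
```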

The detection approach was run on the introduced dataset with ground truth. To oversegment the depth image, a publicly available superpixel segmentation algorithm was used. In particular, the entropy rate superpixel (ERS) algorithm (Liu et al., 2011) was used to produce a user-specified number of superpixels with roughly similar sizes and compact shapes. To evaluate the sensitivity of the proposed approach to the number of selected superpixel segments, the detection approach was run using a range of segmentation targets. To summarize the results for each P-R curve, the average precision was computed over the recall interval 0–1. Table I shows the average precision for the approach using 9, 11, 13, 15, 18, and 20 superpixels. The table shows that the average precision is stable around 0.37, with nine segments achieving the best result at 0.43 average precision. Figure 5 shows a Precision-Recall plot for the proposed approach using nine superpixels. For extended experiments, please refer to Kong (2015).

The motivation behind the proposed approach is to automatically identify and localize access holes for disaster scenarios; thus, this paper focuses on high recall for detections with moderate to high precision. The system performs well in this regard, as it is able to detect all labeled ground-truth holes with ∼0.16 precision when recall is 1, i.e., when all holes in the ground truth are detected. Precision is lowered by the number of false-positive detections.


Figure 5. Evaluation of the overall approach for detecting holes. The Precision-Recall curve is computed across the entire introduced dataset; the number of superpixels is set to nine.

The ultimate goal is to provide detections to response personnel that correctly identify all access holes with minimal false detections. Since a missed detection can result in the potential loss of life, a high false-positive rate is accepted so that no potential access holes are excluded. Figure 6 shows sample detection outputs. Upon examining the detections, it is found that a number of false-positive detections occur in areas where the geometric features score high but are not excluded through the scoring of photometric properties. Nonuniform weighting of the various feature scores via learning may ameliorate some of these issues; however, a lack of sufficient training data currently limits the ability to tune the system without overfitting to the current dataset. Ultimately, these false positives can be rejected by further visual inspection with minimal effort, as compared to evaluating all inputs manually.

Experiments were performed with unoptimized MATLAB code running on a 64-bit Intel Core i5 2.50 GHz machine with 6 GB of RAM. To detect holes in a single RGB-D image with a resolution of 640 × 480 segmented into nine superpixels, the system requires ∼9 s. Increasing the number of superpixels to 20 yields a runtime of ∼14 s per input image pair. Significant runtime improvements are anticipated via optimizing the code and leveraging parallel computation, e.g., a graphics processing unit (GPU).1

1The intended use for the proposed approach is to create access hole information for first responders in transit to a disaster scene. The “realistic time frame” should be considered to be on the order of many hours; e.g., the main body of Canada Task Force 3 (Toronto HUSAR) arrived at the Algo Centre Mall collapse roughly 14 h after their activation (Belanger, 2014).

Figure 6. Sample output of the proposed access hole detection approach. Left: Input RGB image, middle: superpixel segmentation with the ground truth label given in red, and right: detected regions given in green. The first two rows show successful detections, and the last row shows a successful detection together with a false detection. This figure is best viewed in color.


Figure 7. A sample of RGB and depth image pairs from the introduced dataset used to evaluate the detection approach. This figure is best viewed in color.


4. DISCUSSION AND SUMMARY

This paper is the first to present an automated system for the detection of access holes in rubble. Using both functional and photometric attributes, the approach has shown promising results for detecting access holes suitable for the insertion of search personnel into rubble.

A current limitation of the approach is the need for accurate depth maps. Current commodity RGB-D cameras do not work well outdoors in full daylight conditions. This is a well-known challenge within the field robotics community. To date, passive stereo-based algorithms have not achieved the same level of depth accuracy as RGB-D sensors. The consideration of other, more sophisticated sensors is possible in the future. An alternative approach is to investigate ways of improving the depth estimates, such as integrating the data over time rather than sampling a single frame. This approach could reduce the number of areas with missing or corrupted depth estimates.

The lack of a large real-world dataset is a current limiting factor. While the dataset introduced in this paper takes a first step, it is insufficient for providing examples of the multitude of debris configurations that rubble fields can present. Furthermore, the limited amount of data restricts tuning the weights of features when calculating the detection score. This paper purposely remains neutral regarding these weights to avoid overfitting performance to the current dataset. The availability of other disaster scene datasets would allow for learning the weight parameters and thus improve performance. In addition, a more diverse dataset would provide a more thorough evaluation of the approach. Overall, as more data become available, improvements in the algorithm may be realized.

The approach presented in this paper can be further developed by improving the identified feature attributes and augmenting the set with additional ones. For instance, holes in rubble tend to have different thermal properties from the terrain surrounding them (Matthies & Rankin, 2003). The use of forward-looking infrared (FLIR) sensors to detect secondary thermal effects present around potential holes with humans inside may help reduce errors.

Improving the approach to run in real time as data are being captured onboard a UAV can provide numerous benefits to search teams. GPS coordinates obtained from the UAV can be transmitted wirelessly to ground crews, allowing USAR teams to mark areas that require further investigation quickly and accurately as they are detected.

There are numerous positive implications of the current contribution. First, the introduced approach may reduce the need for the dangerous task of humans performing initial visual inspection of an urban disaster incident in order to find potential areas of access. Second, the approach may be able to reduce the cognitive load of response workers tasked with identifying access points through visual inspection. Third, a UAV can investigate regions beyond line of sight, i.e., it can search and analyze areas that might not have been accessible before. Finally, significant reductions can be made in the search space of a large collapse, down to a manageable number of locations, thus saving time. Search and rescue operations are extremely time-critical, as the life expectancy of victims buried under rubble is limited. Identifying and localizing access holes in this way makes better use of limited time.

Since the detection approach is intended for planning purposes, it provides a search team with advance warning of “potential” access paths that can then be prioritized by human search specialists. The intent is to provide a means of indicating holes that can then be explored or eliminated from further consideration by expert human practitioners (the search team). Furthermore, the intention is to include this information in a physics-aware disaster scene model (Ferworn et al., 2013), with the hole information represented and clearly marked for searchers inside the simulation. It should also be noted that this technique of collecting hole data and rendering a scene model would ideally be used by the advance parties of the task force or by the local first responders at the scene. These on-scene teams would then transmit the simulation to inbound task forces, whose search teams and structural specialists would use the data as input to form their plans prior to arriving at the scene.

In summary, this paper presented a novel approach for the automated detection of access holes in scenes of rubble. Access holes represent areas of particular interest for first responders: they represent the possibility of accessing subsurface voids where live humans may be hidden. This paper is the first to define the characteristics of an access hole through both functional and photometric attributes inherent to a valid entry point. A novel approach for identifying candidate access holes in RGB-D data was proposed, a real rubble pile dataset was introduced, and an evaluation protocol to validate the approach was provided. Empirical evaluation has shown promising results for detecting access holes.

REFERENCES

Andreopoulos, A., & Tsotsos, J. (2013). 50 years of object recognition: Directions forward. Computer Vision and Image Understanding, 117(8), 827–891.

Belanger, P. R. (2014). Report of the Elliot Lake Commission of Inquiry, Executive Summary. Queen’s Printer for Ontario.

Binda, L., Saisi, A., & Tiraboschi, C. (2001). Application of sonic tests to the diagnosis of damaged and repaired structures. NDT & E International, 34(2), 123–138.

Birk, A., Wiggerich, B., Bulow, H., Pfingsthorn, M., & Schwertfeger, S. (2011). Safety, security, and rescue missions with an unmanned aerial vehicle (UAV). Journal of Intelligent & Robotic Systems, 64(1), 57–76.

Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In IEEE Conference on Computer Vision and Pattern Recognition (Vol. 1, pp. 886–893).


Dahlman, E., Oestges, C., Bovik, A. C., Fette, B. A., Jack, K., Dowla, F., Parkvall, S., Skold, J., DeCusatis, C., da Silva, Ed., & others (2009). Communications engineering desk reference. Academic Press. p. 470.

Dickinson, S. J. (2009). Challenge of image abstraction. In Dickinson, S. J., Leonardis, A., Schiele, B., & Tarr, M. J. (Eds.), Object categorization: Computer and human vision perspectives. Cambridge University Press.

Felzenszwalb, P. F., Girshick, R. B., McAllester, D., & Ramanan, D. (2010). Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9), 1627–1645.

FEMA (2009). Urban search & rescue structures specialist: Field operations guide. U.S. Army Corps of Engineers.

Ferworn, A., Herman, S., Tran, J., Ufkes, A., & McDonald, R. (2013). Disaster scene reconstruction: Modeling and simulating urban building collapse rubble within a game engine. In Summer Simulation Multi-Conference (Vol. 45, p. 11).

Ferworn, A., Tran, J., Ufkes, A., & D’Souza, A. (2011). Initial experiments on 3D modeling of complex disaster environments using unmanned aerial vehicles. In IEEE International Symposium on Safety, Security, and Rescue Robotics (pp. 167–171).

Finn, R. L., & Wright, D. (2012). Unmanned aircraft systems: Surveillance, ethics and privacy in civil applications. Computer Law & Security Review, 28(2), 184–194.

Fulkerson, B., Vedaldi, A., & Soatto, S. (2009). Class segmentation and object localization with superpixel neighborhoods. In IEEE International Conference on Computer Vision (pp. 670–677).

Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 580–587).

Goodrich, M. A., Morse, B. S., Gerhardt, D., Cooper, J. L., Quigley, M., Adams, J. A., & Humphrey, C. (2008). Supporting wilderness search and rescue using a camera-equipped mini UAV. Journal of Field Robotics, 25(1-2), 89–110.

Google (2014). Google Images. https://images.google.com/.

Grabner, H., Gall, J., & Gool, L. V. (2011). What makes a chair a chair? In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1529–1536).

Grimson, W., Lozano Perez, T., & Huttenlocher, D. (1990). Object recognition by computer: The role of geometric constraints. Cambridge, MA: MIT Press.

Heckman, N., Lalonde, J.-F., Vandapel, N., & Hebert, M. (2007). Potential negative obstacle detection by occlusion labeling. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 2168–2173).

Kong, C. (2015). Discovering access holes in disaster rubble with functional and photometric attributes. Master’s thesis, Department of Computer Science, Ryerson University, Toronto.

Kong, C., Ferworn, A., Tran, J., Herman, S., Coleshill, E., & Derpanis, K. G. (2013). Toward the automatic detection of access holes in disaster rubble. In IEEE International Symposium on Safety, Security, and Rescue Robotics (pp. 1–6).

Koppula, H., Anand, A., Joachims, T., & Saxena, A. (2011). Semantic labeling of 3D point clouds for indoor scenes. In Advances in Neural Information Processing Systems (pp. 244–252).

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097–1105).

Lai, K., Bo, L., Ren, X., & Fox, D. (2012). A scalable tree-based approach for joint object and pose recognition. In AAAI Conference on Artificial Intelligence.

Lampert, C. H., Blaschko, M. B., & Hofmann, T. (2008). Beyond sliding windows: Object localization by efficient subwindow search. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 1–8).

Liu, M.-Y., Tuzel, O., Ramalingam, S., & Chellappa, R. (2011). Entropy rate superpixel segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 2097–2104).

Lombillo, I., Thomas, C., Villegas, L., Fernandez-Alvarez, J. P., & Norambuena-Contreras, J. (2013). Mechanical characterization of rubble stone masonry walls using non and minor destructive tests. Construction and Building Materials, 43, 266–277.

Matthies, L., Kelly, A., Litwin, T., & Tharp, G. (1995). Obstacle detection for unmanned ground vehicles: A progress report. In International Symposium of Robotics Research (pp. 475–486).

Matthies, L., & Rankin, A. (2003). Negative obstacle detection by thermal signature. In IEEE/RSJ International Conference on Intelligent Robots and Systems (Vol. 1, pp. 906–913).

Mobedi, B., & Nejat, G. (2012). 3-D active sensing in time-critical urban search and rescue missions. IEEE/ASME Transactions on Mechatronics, 17(6), 1111–1119.

Molino, V., Madhavan, R., Messina, E., Downs, A., Balakirsky, S., & Jacoff, A. (2007). Traversability metrics for rough terrain applied to repeatable test methods. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1787–1794).

Mundy, J. (2006). Object recognition in the geometric era: A retrospective. In Ponce, J., Hebert, M., Schmid, C., & Zisserman, A. (Eds.), Toward category-level object recognition, Vol. 4170 of Lecture Notes in Computer Science. Berlin: Springer.

Murphy, R. (2000). Marsupial and shape-shifting robots for urban search and rescue. IEEE Intelligent Systems and Their Applications, 15(2), 14–19.

Murphy, R. R. (2004). Trial by fire [rescue robots]. Robotics & Automation Magazine, 11(3), 50–61.

Newcombe, R. A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A. J., Kohli, P., Shotton, J., Hodges, S., & Fitzgibbon, A. (2011). KinectFusion: Real-time dense surface mapping and tracking. In IEEE International Symposium on Mixed and Augmented Reality.

Ollero, A. (2004). Control and perception techniques for aerial robotics. Annual Reviews in Control, 28(2), 167–178.

Onosato, M., Takemura, F., Nonami, K., Kawabata, K., Miura, K., & Nakanishi, H. (2006). Aerial robots for quick information gathering in USAR. In SICE-ICASE International Joint Conference (pp. 3435–3438).

Onosato, M., Yamamoto, S., Kawajiri, M., & Tanaka, F. (2012). Digital Garecki archives: An approach to know more about collapsed houses for supporting search and rescue activities. In IEEE International Symposium on Safety, Security, and Rescue Robotics.

Panero, J., & Zelnik, M. (1979). Human dimension and interior space: A source book of design reference standards. New York: Watson-Guptill.

RCMP (2013). Saskatoon RCMP search for injured driver with unmanned aerial vehicle. http://www.rcmp-grc.gc.ca/sk/news-nouvelle/video-gallery/video-pages/search-rescue-eng.htm.

Ren, X., & Malik, J. (2003). Learning a classification model for segmentation. In IEEE International Conference on Computer Vision (pp. 10–17).

Rijsbergen, C. V. (1979). Information retrieval (2nd ed.). Butterworth-Heinemann.

Rusu, R. B., Bradski, G., Thibaux, R., & Hsu, J. (2010). Fast 3D recognition and pose using the viewpoint feature histogram. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 2155–2162).

Sinha, A., & Papadakis, P. (2013). Mind the gap: Detection and traversability analysis of terrain gaps using LIDAR for safe robot navigation. Robotica (pp. 1–17).

Song, S., & Xiao, J. (2014). Sliding shapes for 3D object detection in depth images. In European Conference on Computer Vision (pp. 634–651).

Stark, L., & Bowyer, K. (1991). Achieving generalized object recognition through reasoning about association of function to structure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(10), 1097–1104.

Stark, M., Lies, P., Zillich, M., Wyatt, J. L., & Schiele, B. (2008). Functional object class detection based on learned affordance cues. In International Conference on Computer Vision Systems.

Sy, N., Avila, M., Begot, S., & Bardet, J.-C. (2008). Detection of defects in road surface by a vision system. In IEEE Mediterranean Electrotechnical Conference (pp. 847–851).

U.C.R.T. (2013). USAR (Urban Search and Rescue) CBRNE (Chemical, Biological, Radiological and Nuclear) Response Team (U.C.R.T.). http://www.opp.ca/ecms/index.php?id=69.

Wilson, S. S., Gurung, L., Paaso, E. A., & Wallace, J. (2009). Creation of robot for subsurface void detection. In IEEE Conference on Technologies for Homeland Security (pp. 669–676).

Winston, P. H., Binford, T. O., Katz, B., & Lowry, M. (1983). Learning physical descriptions from functional definitions, examples, and precedents. Technical Report AIM-679, Department of Computer Science, Stanford University.
