
Max-Planck-Institut für Informatik
Computer Graphics Group
Saarbrücken, Germany

A flexible framework for learning-based Surface Reconstruction

Master Thesis in Computer Science

Computer Science Department
University of Saarland

Waqar Saleem

10th December 2004

Supervisors: Prof. Dr. Hans-Peter Seidel, Prof. Dr.-Ing. Philipp Slusallek



Statutory Declaration

I hereby declare in lieu of oath that I have written this Master's thesis independently and have used no sources or aids other than those indicated.

Saarbrücken, 10 December 2004

Waqar Saleem


To my family


Abstract

The problem of Surface Reconstruction arises in many real world situations. We introduce the problem itself in detail and then take a brief look into its applications and existing techniques, particularly learning based techniques, developed for its solution. Having presented the context, we closely examine one such learning based technique – the Neural Mesh algorithm for Surface Reconstruction.

Despite being relatively recent, the Neural Mesh algorithm has already undergone several revisions, thus giving rise to several variants of the original algorithm. We study the algorithm and each of its variants in detail. All variants rely in varying degrees on a specific aspect of the algorithm – a signal counter. We observe that algorithmic reliance on the signal counter impedes performance and propose an alternate way of performing the same functionalities – using a list. Additionally, on the practical side, we identify areas where in-house implementations of the algorithms were wanting in efficiency and revise those areas.

Changing over from the signal counter to the list represents a change in approach from the exact learning of the original algorithms to a comparative learning framework. We show empirically that this change in approach does not produce any significant difference in the quality of the algorithms' output, while performance, in terms of running time, improves dramatically.


Acknowledgements

This thesis is the result of the contribution of many people. Foremost, I would like

to thank my advisor, Ioannis, for his patience and for being a constant source of

help and guidance. Thanks also to my supervisor, Hans-Peter Seidel, for allowing

me time and flexibility in the project.

Of course, none of this would be possible without my family who worked hard

to sharpen my skills and have always encouraged and supported me to go further

in life.

Special mention goes to my friends and colleagues, especially Akiko, Christian

Rössl and Hitoshi, who were always there to help me with the countless technical

problems I have had.

Finally, my gratitude goes to Elena who kept her faith in me and egged me on

in times when I was down or simply lazy!


Contents

I Background

1 Introduction
1.1 Our aim
1.2 Outline

2 Applications
2.1 Manufacturing
2.2 Reverse Engineering
2.3 Shopping
2.4 Virtual Museums
2.5 Entertainment
2.6 Other applications

3 Shape Acquisition
3.1 Popular Acquisition techniques
3.1.1 Images
3.1.2 Slices
3.1.3 Coordinate Measuring Machines (CMMs)
3.1.4 Reflection/Transmission of waves
3.2 Issues with optical triangulation

4 Surface Reconstruction from point data – previous work
4.1 Related terms
4.2 Categorization
4.2.1 Implicit techniques
4.2.2 Physics-based and deformable-model techniques
4.2.3 Computational Geometry approaches
4.2.4 Parametric and projection-based methods
4.2.5 Structured techniques
4.2.6 Robust methods
4.2.7 Learning techniques

II Core: Neural Meshes

5 Preliminaries
5.1 Terminology and notation
5.2 Overview
5.2.1 Differences from GCSs
5.3 Initialization

6 The Basic Algorithm
6.1 The Basic Step
6.1.1 Choosing αL
6.1.2 The significance of CL
6.2 Growth
6.2.1 Node Addition
6.2.2 Node Removal
6.2.3 Growth rate
6.2.4 Signal Counter computations
6.3 Total cost
6.3.1 Observations

7 Topology learning add-on
7.1 Topology learning steps
7.2 Total Cost

8 A normal-based variant
8.1 Total cost

9 A noise-filtering variant
9.1 Removing nodes more frequently
9.2 Total Cost

10 Ensembles
10.1 Quality of ensemble members
10.2 Total Cost

III Our Work

11 Experimentation
11.1 Motivation
11.2 Experiment
11.2.1 The jump distance
11.3 The algorithm
11.3.1 Modifications to previous algorithms
11.4 Total Cost

12 Implementation issues
12.1 Operations on L
12.2 Implementing L
12.2.1 Vector
12.2.2 Linked-list
12.2.3 Tree-based structures
12.3 Implementing A
12.4 Jumps in T
12.4.1 Problems
12.5 Tweaks

IV Conclusion

13 Results

14 Conclusion
14.1 Problems
14.2 Future work


List of Tables

6.1 Extra smoothing
13.1 Cost comparison
13.2 Mesh distances
13.3 Running times for large models

List of Figures

1.1 The Geometric Modeling pipeline
3.1 Range imaging using optical triangulation
3.2 Rangefinding issues
6.1 Basic Step - node movement and pitfalls
6.2 Effect of αw
6.3 Mesh quality vs. αL
6.4 Varying αL
6.5 Extra smoothing
6.6 Edge Split vs. Vertex Split
6.7 Node Removal operations
6.8 Early growth of M
6.9 Removing nodes
7.1 Removing boundary nodes
7.2 Invalid edge collapses
7.3 Problematic triangle removal
7.4 Topology learning
8.1 Normal based reconstruction
9.1 The noise filtering function
10.1 Ensemble members
11.1 The activity list, L
12.1 Finding nd in T
12.2 Overjumps
13.1 Running time of individual steps
13.2 Modifications to boundary handling
13.3 Time and valence comparison
13.4 Awakening
13.5 More large models


List of Algorithms

1 The basic Neural Mesh algorithm
2 The topology-learning Neural Mesh algorithm
3 The normal-based Neural Mesh algorithm
4 The noise-filtering Neural Mesh algorithm
5 Neural Mesh ensembles
6 The list modification of the Neural Mesh algorithms
7 Finding nd in T


Part I

Background



We introduce the Geometric Modeling pipeline and highlight our interest in it for the purposes of this thesis. To put things in perspective, we outline real world situations where this pipeline is utilized. We then talk about some popular methods for accomplishing a few of its steps, including some previous work on the Surface Reconstruction step.


Chapter 1

Introduction

In a large variety of applications, the need arises to have a digital copy of a real

world object where the object may be anything from a machine part to a commer-

cial product to a sculpture. Making this copy falls into the general framework of

theGeometric Modelingproblem and is performed in a stepwise process. As these

steps are performed in a fixed order, they are commonly represented as a pipeline.

Figure 1.1 illustrates the pipeline at its most general level. Depending on one’s

viewpoint, different steps of the pipeline may appear more important than the oth-

ers. Thus, it is not uncommon in Model Generation contexts to refer to the entire

pipeline as the Surface Reconstruction pipeline.

Figure 1.1: The Geometric Modeling pipeline

The input to the pipeline is a real world object which is processed to output

some digital representation of it. In cases where the generated representation

is a triangle mesh, an optional post-processing step is called to simplify the mesh


according to certain criteria. Each of the steps – acquisition, model generation and

mesh simplification – can be further divided into substeps, but we do not go into

detail as our goal here is to give a general idea of the pipeline.

Acquisition

Acquiring the shape of an object for subsequent digital manipulation is possible

in several ways - a probe can physically brush over the object taking samples at

regular intervals, the object can be sliced and the shape reconstructed from outlines

of the slices, patterns of light can be projected onto the surface and the shape

inferred from changes between the projected and reflected patterns. Most of these

methods yield depth or range values for points on the surface. This range data is

then used to reconstruct the surface. We talk about common acquisition methods

in Chapter 3.

Generation

Barring the shape representation obtained from a few acquisition methods, model generation is basically a matter of fitting a surface (a smooth polynomial surface, or a piecewise-linear polygonal mesh) to the acquired range data.

There are different approaches to this problem of Surface Reconstruction. Some

approaches try to fit a surface directly onto the range data. These surfaces are

usually splines, quadric or conic surfaces. The range data itself may also be tri-

angulated. Other schemes only approximate a surface to the range data. In such

cases, an initial mesh is deformed until it matches the range data. The deformation

is carried out under certain restrictions governing the distance of the mesh from

the range data and/or smoothness of the mesh. A volumetric approach can also be

taken where the range data is used to obtain a coarse approximation to the object’s

surface. This approximate surface is then discretized to yield a triangle mesh. The

type of representation wanted - functional or simplicial complex (mesh) - depends

on the intended application. A more detailed look at these methods is presented in

Chapter 4.

Simplification

Meshes generated by volumetric methods or by direct triangulation of the range data can have a bad vertex distribution. Moreover, range data is too dense for user


applications, and a triangulation of it contains extraneous information. Mesh Simplification or decimation techniques can be applied here to minimize the size of the

mesh while retaining important features of the original mesh. The smaller mesh

offers greater ease for storage, transmission and processing. An important issue in

mesh simplification is feature detection. Once the important features have been de-

tected, the remaining mesh can be simplified by collapsing short edges into a single

vertex, iteratively removing vertices and triangulating the resulting hole, finding a

suitable base mesh and repeatedly subdividing it to match the original mesh and/or

sampling the original mesh. There is also interest in mesh simplification for visu-

alization purposes where, to ensure fast rendering, a mesh may be stored at several

levels of detail (LOD). A distant object would then be rendered at a low LOD while

a much higher LOD would be used for an object close to the viewer.

In the rest of this thesis, we do not talk about Mesh Simplification as the Neural

Mesh algorithm that we concentrate on implicitly outputs a mesh simplified to any

desired level of detail.

1.1 Our aim

As rangefinding is the technique of choice for shape acquisition, Surface Recon-

struction algorithms have started to exploit the additional information that a range

scan provides [CL96, Cur97, TL94], i.e. reliability estimates, viewpoint and nor-

mal information for each range value. Such algorithms may give bad results in

regions of high curvature.

At the same time, research still continues in the traditional approach of Surface Reconstruction from unorganized points [HDD+92, Hop94]. Here, no additional information about the points is assumed other than their positions. These algorithms are thus useful in a wider range of applications, but for lack of a priori normal information they also run into trouble in areas of high curvature. Furthermore, as they have no external measure of reliability for the input points, they cannot handle noisy data very well.

In [IJS03b], the authors propose to solve the latter problem using a new approach – Neural Meshes. They begin with a deformable base mesh that grows to match the input point cloud. While other techniques using deformable models try to satisfy some energy conditions (an energy term, derived from factors such as the distance of the mesh to the point cloud, the spring energy along the edges and the number of vertices, has to be minimized at all stages of growth) and grow the mesh by adaptive subdivision, this


technique takes a Neural Network approach to the problem (hence the name Neural Meshes). The Neural Mesh is trained to approach the point cloud by using samples from the point cloud; nodes taking an active part in the learning are rewarded by being split into two, and lazy nodes are penalized by being removed from the mesh. Topology is learnt [IJS03a] by removing overly large triangles and by merging boundaries that are too close to each other.
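To make the procedure concrete, the following Python sketch shows the competitive-learning idea in a heavily simplified form. It is our own illustration, not the implementation of [IJS03b]: the "mesh" is just a set of nodes with an adjacency dictionary rather than a true triangle mesh, and all names, learning rates and the split/removal schedule are arbitrary choices made for the example.

    import numpy as np

    def neural_mesh_sketch(cloud, n_target=200, alpha_w=0.1, alpha_n=0.01,
                           split_every=50, remove_every=500, seed=0):
        # Grossly simplified, illustrative Neural-Mesh-style learning: nodes are
        # free points with an adjacency dictionary, not a real triangle mesh.
        rng = np.random.default_rng(seed)
        nodes = {i: cloud[rng.integers(len(cloud))].astype(float) for i in range(4)}
        nbrs = {i: {j for j in range(4) if j != i} for i in range(4)}
        counter = {i: 0.0 for i in range(4)}           # the signal counters
        next_id, step = 4, 0
        while len(nodes) < n_target:
            step += 1
            s = cloud[rng.integers(len(cloud))]        # random training sample
            w = min(nodes, key=lambda i: np.linalg.norm(nodes[i] - s))   # winner
            nodes[w] += alpha_w * (s - nodes[w])       # move the winner...
            for n in nbrs[w]:
                nodes[n] += alpha_n * (s - nodes[n])   # ...and drag its neighbours
            counter[w] += 1.0
            if step % split_every == 0:                # reward: split the busiest node
                b = max(counter, key=counter.get)
                if not nbrs[b]:                        # re-attach an isolated node
                    near = min((i for i in nodes if i != b),
                               key=lambda i: np.linalg.norm(nodes[i] - nodes[b]))
                    nbrs[b].add(near); nbrs[near].add(b)
                f = max(nbrs[b], key=lambda n: np.linalg.norm(nodes[n] - nodes[b]))
                nodes[next_id] = 0.5 * (nodes[b] + nodes[f])
                nbrs[next_id] = {b, f}
                nbrs[b].add(next_id); nbrs[f].add(next_id)
                counter[b] *= 0.5
                counter[next_id] = counter[b]
                next_id += 1
            if step % remove_every == 0 and len(nodes) > 4:   # penalty: drop the laziest node
                lazy = min(counter, key=counter.get)
                ring = nbrs.pop(lazy)
                for n in ring:
                    nbrs[n].discard(lazy)
                    nbrs[n] |= (ring - {n})            # keep the old ring connected
                del nodes[lazy], counter[lazy]
        return nodes, nbrs

The essential mechanism described above is visible here: the signal counter rewards frequently winning nodes with a split and penalizes rarely winning nodes with removal; the actual algorithm performs these operations as proper edge splits and vertex removals on a triangle mesh.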

Neural Meshes have proven to be robust [IJL+04] with respect to noisy data as well. Other robust methods [MMKR91] are cumbersome in that they employ heavy machinery. Some methods use linear regression [MMKR91] to find a best fit, while others [LMP98, RL87, Ste95] repeatedly fit surfaces to a random subset of the points and measure the residual error for each surface (computed from the distances between the input points and the surface). The surface with the least value of the median residual, or of some other criterion function, is taken to be the best fit. Neural Meshes, on the other hand, are lightweight and intuitive and, owing to their simplicity, lend themselves easily to further extensions. We talk more about robust methods in Section 4.2.6.

In this thesis we take a closer look at Neural Meshes and present ways in which

their performance can be improved. We show that our modifications improve running

times while retaining mesh quality.

1.2 Outline

This thesis is divided into 4 parts. Part 1, Background, sets the stage for Neural Meshes, which we discuss in Part 2, Core, and for our work on them, which we present in Part 3. Part 4, Conclusion, closes the thesis with some results of the changes introduced in Part 3 and some concluding remarks.

Part 1 continues with Chapter 2 in which we put the Surface Reconstruction

pipeline (fig 1.1) into perspective by presenting situations where Surface Recon-

struction is typically applied. Chapter 3 then takes a look at popular acquisition

techniques, concentrating on range imaging using optical triangulation as that is

the acquisition technique assumed by Neural Meshes. And finally, in Chapter 4,

we present some previous work on Surface Reconstruction. We do not talk about

Neural Meshes here as they are handled comprehensively in the next part.

Part 2, Core, deals with Neural Meshes in detail. In Chapter 6, we take an in-depth look at Neural Meshes as originally proposed. The variants of the algorithm are presented in Chapters 7 to 10.

Having presented the current state of Neural Meshes, in Part 3, we talk about

our extensions to them. Chapter 11 presents our idea on how to improve the per-

formance of the variants and Chapter 12 talks about the accompanying implemen-

tation issues.

The fourth and final part, Conclusion, has a chapter each on results and on

further work and conclusions. For the sake of completeness, we might also have in-

cluded some further discussion on the third (optional) step of the Geometric Mod-

eling pipeline, namely Mesh Simplification, but we omit that as Neural Meshes

already produce a good vertex distribution and their size is user controlled. There

is thus no further need for simplification.


Chapter 2

Applications

A digital copy of a real world object can be easily replicated, disseminated, modi-

fied and studied, more so than the object itself. Also, real world objects are subject

to decay and may require maintenance. A digital counterpart has no such maladies

and can be stored in the original form indefinitely. It is therefore convenient and in

some cases necessary to have such a copy. This chapter presents a brief survey of

such situations.

2.1 Manufacturing

Though most designing and modeling is done with CAD systems these days, there

are instances where a prototype might be handmade or sculpted, e.g. artists design-

ing the body of a car. In such a case, the finished prototype can be digitally copied

and the computer model can then be further modified or sent for manufacture. Its

properties may also be studied using Finite Element methods.

The manufacturing process itself may not be perfect. To test the process, dig-

ital copies of the first few manufactured parts could be made and compared with

the original prototype to ensure that any manufacturing errors are within tolerable

limits. If not, necessary adjustments could be made to the manufacturing system.

2.2 Reverse Engineering

Not all products are manufactured using CAD systems, especially those built be-

fore the advent of such tools. These products however may still be of interest to

manufacturers. For that purpose it is desirable to obtain a digital copy of the old


product, which could then be further improved, studied or simply archived.

2.3 Shopping

As the use of the Internet grows, more and more people are shopping online. For

shoppers’ perusal, online retailers may want to make available 3D models of the

products they are offering.

2.4 Virtual Museums

There are several problems pertaining to cultural artifacts. They often date back

several hundred years, which makes them subject to aging and decay.

Special care has to be taken for their upkeep and maintenance, which serves only

to retard the aging process. Furthermore, geographic constraints also limit access

for interested viewers. An accurate 3D model of the artifact however can be made

available over the Internet for scientists and other enthusiasts worldwide, and since

the model does not age, one can view the artifact exactly the way it was at the time

of digitizing.

2.5 Entertainment

Modern cinematic effects are tending more and more towards reality. Virtual reality

as a means of entertainment or presentation is also gaining popularity. In these

environments, models of everyday objects add to the realism of the experience and

are thus of high import.

2.6 Other applications

Surface Reconstruction is also of importance in terrain reconstruction and cartogra-

phy. It also finds use in computer vision and robotics where a robot has to identify

obstacles either on the fly or when planning a route from one point to the other.

In medicine, scientists want to have models of body organs, tumors and other

structures. A surgeon could have a digital copy of a patient’s body and see be-

forehand the visual outcome of their intended steps. Making a digital copy of

one’s body could allow one to digitally try on new hair styles and spectacles, even

clothes, before making the actual purchase.


Chapter 3

Shape Acquisition

There exist a variety of ways to acquire the shape of an object. Shape is acquired

typically as a set of coordinates corresponding to points on the object’s surface.

These coordinates measure the distance or depth of the point from a measuring device, and are called range values. The measuring device is accordingly called a rangefinder, and the data acquired for the entire object is called range data. A good

example is that of a camera which, instead of measuring color information of the

scene, measures depth information.

A large class of rangefinders rely on acquiring range data through projecting

energy onto the surface and then making measurements on the reflected energy.

They differ with regard to the kind of energy they use – light, sound etc. Of special

interest in this class of rangefinders are optical rangefinders, i.e. rangefinders that

use light to acquire range data. These are relatively cheap, easy to use and produce

fairly accurate results. Levoy and others [LPR+00] have recently used them to

digitize large statues. In Section 3.1, we look at optical rangefinding and other

popular rangefinding techniques. Section 3.2 discusses some of the issues involved

with optical triangulation. The end product of this entire machinery is the input to

the Surface Reconstruction stage, which we talk about in the next chapter.

3.1 Popular Acquisition techniques

Despite the abundance of rangefinding techniques, it is important to keep in mind

that they are not the only shape acquisition methods. In applications such as ro-

botics where it is preferable to process data in real time, shape is acquired quite

differently. However, these methods internally construct some kind of depth representation. Below, we take a look at some acquisition techniques. An excellent

summary of the methods introduced here can be found in [Cur97, Sec 1.3]. The

following list is not at all comprehensive and is meant merely as a rough overview

of the ideas involved in rangefinding. We will skim over most of the methods until

we reach the one of our interest, optical triangulation, which we will examine in

more depth.

3.1.1 Images

The acquisition technique of choice in robotics and terrain reconstruction is stereo imaging. The idea is to capture 2D images of the object from different viewpoints

and to use the known camera coordinates in each case to reconstruct the shape of

the object from the photographs. Because of its reliance on photographic equip-

ment, this technique is subject to a host of photogrammetric issues. This technique

is used typically for scenes instead of a single object, e.g. a room. Automatic recon-

struction then involves further issues like finding corresponding objects in differ-

ent images (correspondence), telling objects apart from each other (segmentation),

identification of similar areas (region detection) and identification of boundaries

(edge detection).

Information gathered from a single 2D image can also be exploited to give

range data. The idea here is that the blurring of a point in the image is proportional

to its distance from the focus plane. This method thus depends on the reflective

properties of the object. This dependence is partly overcome by projecting a known

pattern of light on the object before capturing the image. The results obtained

however have only moderate accuracy.

3.1.2 Slices

For the surfaces from contours technique, common in medicine, the object of in-

terest is physically sliced into many thin layers. The outlines of the slices are then

digitized to create a stack of contours. Similar outlines can also be obtained through

MRI and CT scans. The problem then is to reconstruct the three-dimensional struc-

tures from the stacks of two-dimensional contours. Common techniques based on this method make use of information specific to the data, e.g. the fact that the data are or-

ganized into contours (i.e., closed polygons), and that the contours lie in parallel

planes.


3.1.3 Coordinate Measuring Machines (CMMs)

Coordinate Measuring Machines (CMMs) take a brute force approach by having

mounted, movable touch probes go over the entire surface of the object. The move-

ment of the probe with respect to a reference position is tracked so that the loca-

tion of a contact point can be calculated. CMMs are precise and accurate. This

has, in fact, led them to be the industrial standard for manufacturing applications.

However, the process is slow, the machinery needs a human operator and

the handling is clumsy. Also, a CMM may not be the method of choice when the

object to be digitized is fragile.

3.1.4 Reflection/Transmission of waves

The remaining methods chiefly use waves – x-rays, microwaves, sound, light – which are projected onto the object; measurements are then made on the reflected energy. In the case of x-rays, the waves do not reflect but pass through, or are transmitted through, the object (we do not consider transparent or translucent objects; shape acquisition for such surfaces remains a research issue [GLL+04, LGB+02]). In this case, the transmitted waves are measured. Depending on the wave sent, range values are estimated by measuring either the time taken to reflect back to the source, the amount of radiation transmitted, or the direction of the reflected rays.

Radar/Sonar

Rangefinders using radar and sonar obtain range values by recording the time taken

for a projected wave to reflect back to the source. They are used mainly for long-range

remote sensing, e.g. airborne laser radar is used to gather data for terrain recon-

struction. Sonar rangefinders do not give very accurate results.

Range scans obtained using microwave radar at optical frequencies are very

accurate for large objects. However, for objects roughly a meter in size and smaller,

the time differences to be detected reduce to the order of picoseconds (10^{-12} s).

The detection of these times requires very high speed circuitry.
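To give a sense of the scale (our own illustrative arithmetic, not a figure from the cited work): resolving a depth difference of Δd = 1 mm from a round-trip measurement at the speed of light c requires detecting a time difference of

    Δt = 2Δd / c = (2 × 10^{-3} m) / (3 × 10^{8} m/s) ≈ 6.7 × 10^{-12} s,

i.e. a few picoseconds.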

X-rays

Industrial Computer Tomography (CT) scanners project high-energy x-rays at the

object and then measure the transmitted radiation along different lines of sight.



Operations on this data yield a high resolution volumetric description of the ob-

ject. The problem with using these scanners is that they are very expensive and

potentially hazardous because of the use of radioactive materials.

Light

Optical rangefinders are relatively inexpensive and yield good results. For this

reason they are quite popular and come in several flavors. We have already seen

how 2D images and optical frequency radar can be used for rangefinding. Other

optical rangefinders use interferometric methods – they project varying patterns of

light onto the surface and use the reflected patterns to infer shape geometry. In this

way, they gather data about surface geometry instead of directly recording range

data. These methods run into problems when the shape does not exhibit smooth

variations. This sets a limit on the maximum slope of the surface in order to gather

reliable results.

This brings us to the method of optical triangulation. Rangefinding using opti-

cal triangulation is illustrated in Figure 3.1. A light source, typically laser, projects

light onto the surface. A sensor then catches the reflected light rays and determines

their direction. As the positions of the light source, the sensor, and the direction

of projection are known, the point of intersection of the projected and reflected

rays can be found. The intersection point gives the range value for the surface

point that the projected ray hit. Range data for the entire object is then acquired

by translating or rotating the surface through the beam, or by sweeping the beam

across the surface. This framework extends easily to 3D by having not a beam but

a sheet of light scan the surface. The sheet is obtained by passing a beam through

a cylindrical lens.
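As an illustration of the triangulation step itself (a minimal 2D sketch of our own; the positions and directions are assumed to come from calibration, and no particular scanner is implied), the surface point is recovered as the intersection of the projected ray and the observed reflected ray:

    import numpy as np

    def triangulate_2d(laser_pos, laser_dir, sensor_pos, observed_dir):
        # Solve laser_pos + t * laser_dir == sensor_pos + u * observed_dir for (t, u);
        # the common point is the range sample on the surface.
        A = np.column_stack([laser_dir, -observed_dir])
        b = sensor_pos - laser_pos
        t, _ = np.linalg.solve(A, b)       # fails if the two rays are parallel
        return laser_pos + t * laser_dir

    # Example: laser at the origin firing along +y, sensor 0.5 m to the right
    # seeing the reflection coming in 45 degrees up and to the left.
    p = triangulate_2d(np.array([0.0, 0.0]), np.array([0.0, 1.0]),
                       np.array([0.5, 0.0]), np.array([-1.0, 1.0]))
    print(p)   # -> [0.  0.5], the illuminated surface point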

3.2 Issues with optical triangulation

Rangefinding in general is subject to several problems which lead to faulty or in-

complete range data. Firstly, imperfections in the hardware contribute to noise in

the range data. And secondly, as the line of sight sweeps over the surface, scans

obtained from angles just grazing the surface are sparse and not fully representative

of the surface area they correspond to.

Optical techniques, because of their inherent dependence on the surface char-

acteristics and shape of the object, have further problems. Irregularities in slope

and reflectance cause the projected light to reflect in unexpected directions. Range


Figure 3.1: Range imaging using optical triangulation

(a) 2D illustration – the object is scanned from A to B. (b) Range scan from (a). (c) For 3D, the beam is passed through a cylindrical lens to obtain a sheet. The direction of travel of the sheet is indicated separately. (d) Range scan from (c).

data for such regions cannot be recorded and these regions are thus represented as

gaps in the range scan. Other regions that are occluded for the current line of sight

by part of the object are also not registered in the range scan. Finally, one scan

captures a single face of the object at a time.

All the above problems are addressed by taking multiple scans of the object from different viewpoints - scans from different sides of the object to capture the total shape (the scanned areas from each side overlap partly, ensuring that regions scanned at grazing angles in one scan are sampled sufficiently in another), and scans of single faces from several viewpoints to handle self-occlusion and irregularities in properties. Combining information from different scans of an area gives a representation that is dense, more accurate and less noisy than any of the individual scans.

Having separate scans means that there should be a strategy to combine them

as well [TL94]. Combining the scans can be seen as a stepwise process. First,

the scans are registered, i.e. they are brought into a single coordinate system. The registered scans are then integrated to obtain the entire shape. Care should be taken

here to account for overlapping scans.
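A minimal sketch of these two steps is given below, assuming the rigid transform of each scan into the common frame is already known (in practice it has to be estimated, e.g. with an ICP-style procedure); the "integration" shown is only a naive voxel-grid merge that averages overlapping samples, and the function names and grid size are our own.

    import numpy as np

    def register(scan, R, t):
        # Bring one scan (N, 3) into the common coordinate system: p' = R p + t.
        return scan @ R.T + t

    def integrate(scans, voxel=0.005):
        # Naive integration: quantize all registered points to a voxel grid and
        # average the points that fall into the same cell, which reduces noise
        # and collapses overlapping regions.
        pts = np.vstack(scans)
        keys = np.floor(pts / voxel).astype(np.int64)
        cells = {}
        for k, p in zip(map(tuple, keys), pts):
            acc = cells.setdefault(k, [np.zeros(3), 0])
            acc[0] += p
            acc[1] += 1
        return np.array([s / n for s, n in cells.values()])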

The noise from individual scans carries over to the integrated data. Addition-

ally, integration is subject to its own problem of misalignment, where registered

scans may be imperfectly stitched together. This is illustrated in Fig 3.2.

Figure 3.2: Rangefinding issues

(a) The surface is scanned from A to B, and after moving the object, from C to D. (b) The integrated data is noisy and imperfectly aligned - the region between B and C overlaps.


Chapter 4

Surface Reconstruction from point data – previous work

The problem of Surface Reconstruction from unorganized points (point sets that contain no information other than the points' coordinates; point sets that are not unorganized can be treated as such by ignoring their additional information) in a generalized setting was first handled efficiently by Hoppe and others in [HDD+92, Hop94]; earlier work had relied on domain-specific information.

Since then, there has been a flood of effort in this direction. To date, numer-

ous methods have been proposed, often inspired by and/or exploiting ideas from

physics and mathematics. Still, Surface Reconstruction continues to be a widely

researched topic, with researchers and practitioners refining existing methods and

developing new techniques constantly.

One reason why there is so much effort put into Surface Reconstruction is that,

as mentioned earlier, the problem arises in multiple disciplines. The literature on

the topic is voluminous and comes from communities such as computer vision,

medical imaging, computational geometry, scientific visualization, shape model-

ing, and, of course, computer graphics. Funke and others make an interesting

observation in [FR02] where they point out that the output of the surface recon-

struction problem is typically a structure of size O(n), yet the effort most methods invest in producing the output is far more. They theorize informally that there must be an algorithm of proportionate cost for the solution. Perhaps it is the search for such optimal solutions that drives researchers on. Recently, interest in surface recon-

struction has also been boosted by the availability of huge data sets – in the order

of millions and even billions of points, made possible by the Digital Michelangelo



project [LPR+00].

Techniques for Surface Reconstruction from a point cloud, P, are numerous and can be classified in many different ways. At a glance, polygonal techniques reconstruct the surface as a polygonal mesh, while implicit techniques give an implicit, functional representation of the object. Unstructured techniques reconstruct a surface from points given only their coordinates. Such point sets are termed unorganized or scattered. Structured techniques exploit additional information present in data sets. This additional information is typically in the form of normal information or reliability estimates in range scans. Interpolating schemes output a surface which passes through the input points. Closely related to interpolating schemes are direct triangulation schemes that triangulate the input points. Approximating methods use P to generate a surface which may not pass through some or any of the points. Volumetric techniques first build some kind of representation of the volume represented by P and then extract a surface from it, typically using some variant of the Marching Cubes algorithm [LC87]. Parametric methods reconstruct the surface as a 3-dimensional function on a 2-dimensional parameter domain. Physics-based schemes start with an initial deformable model which is deformed under some energy minimization process until it satisfactorily meets P. Nodes are typically added to the model by subdivision steps. Most of the above methods assume little or no noise in P. While some of them can naturally cope with some noise, robust methods are designed specifically for noisy inputs, and are a topic in their own right.

As the arena of Surface Reconstruction methods becomes more crowded, it

becomes increasingly difficult to succinctly categorize methods. Recent methods

often borrow ideas from several classes of techniques. In this chapter, after looking

in Section 4.1 at some terms related to surface reconstruction, we try to briefly

outline some of the techniques in the major categories. These categories, as we

shall see, are not watertight and a single algorithm can simultaneously belong to

several of them.

4.1 Related terms

There are many terms related to Surface Reconstruction and it is worthwhile to

get to know and understand these terms. The input to any surface reconstruction

method is a point set, and the output is a representation of the surface. This is

in contrast to methods which output a representation of not just the shape but the


interior as well. Thus, while our output models are ‘hollow’, the output models

from such methods are ‘filled'. Such methods are called solid modeling methods

and are useful in applications like manufacturing and medicine, where the interior

of the object is of as much interest as its external shape. In this context, our Surface

Reconstruction methods are referred to as shape modeling, shape reconstruction, surface fitting or surface meshing schemes.

Solid modeling schemes are also referred to as 3D reconstruction schemes. In

this context, shape modeling methods are seen to be straddled between 2D and

3D, and are thus referred to as 2.5D reconstruction methods. 2.5D refers to 2D

surfaces embedded in 3D, or, in a restricted sense, to height data. This terminology

is typically used in terrain reconstruction, where a terrain is reconstructed from

data obtained from airborne radar.

The equivalent problem of shape reconstruction in two dimensions is the Curve Reconstruction problem. Here, one wants to reconstruct a curve given its sample

points. This problem generates considerable interest in the Computational Geom-

etry community, and many Computational Geometric methods for Surface Recon-

struction are direct extensions of their Curve Reconstruction cousins.

A closely related problem to Surface Reconstruction is that of Function Reconstruction, or surfaces on surfaces, posed by Barnhill as an open problem in [Bar85]

and dealt with extensively in [Fol90]. The aim here is to determine, given a surface,

a real valued function on the surface. [BF91, Fra87, NF90, Nie93b] survey meth-

ods commonly used to solve this problem. Typically, solutions pose conditions on

the domain surface, which is usually taken to be a deformed sphere [FLN+90], a

convex body [BPR87] or a body obeying some continuity constraints [BX94, Pot92,

Res87]. Nielson and others [NFHL91] present a method that works for airplane

wings. Polynomial [BX94] or non-polynomial [BOP92, Nie93a] interpolation can

then be used. This problem arises, for example, in modeling and visualizing the

rainfall on the earth, the pressure on the wing of an airplane or the temperature on

the surface of a human body [BBX95].

For some simple cases, Function Reconstruction may be used for Surface Re-

construction. These are special cases where the surface to be reconstructed is

known to be the graph over a known surface, and the domain surface is accord-

ingly simple. As this is very rarely the case, Function Reconstruction and Surface Reconstruction remain separate problems and should not be confused with

each other.


4.2 Categorization

The reason there exist many different methods for Surface Reconstruction is that

no one method or technique is better than the other. While each approach has

its advantage, the choice of method is very much application dependent. Polyg-

onal methods output a polygonal model, typically a triangle mesh as a triangle is

the simplest of polygons. As modern graphics hardware can handle increasingly

complex triangular meshes, if visualization is the only goal of the reconstruction,

then a polygonal model suffices. On the other hand, representing a surface implic-

itly [BW97] allows a complex shape to be described by one formula. Implicit rep-

resentations unify surface and volume modeling and facilitate several shape editing

operations. Consequently, they are better suited to CAD and manufacturing appli-

cations. Because they yield a compact representation of the surface, they can also

be viewed as compression techniques.

4.2.1 Implicit techniques

Implicit methods are extensions of Blinn’s [Bli82] idea of blending local implicit

primitives. As opposed to techniques that reconstruct piecewise linear representa-

tions of P [FHMB84, O'R81, Vel93], they typically fit one function to all the data

points. As such, they are inherently interpolating. If desired, the function may be

approximated to yield a piecewise linear (polygonal) approximation [AS85, LC87].

Often, the difference between different implicit methods is simply in the choice of

function used to represent the surface. Krishnamurthy and Levoy [KL96] calculate

detailed displacement vectors and fit B-spline surfaces to the data. Moore and oth-

ers [MW90] fit piecewise polynomials recursively and then enforce continuity us-

ing ‘freeform blending’. In [LTGS95], a union of spheres is blended to fit the data.

The spheres are initially configured using a Delaunay tetrahedralization [Del34] of

P. Ohtake and others [OBA+03] use weighted piecewise quadratic functions for

multilevel local fitting. Quadratic surfaces are also used in [Dah89]. Bajaj and

others [BCX95, BBX95] use a modified form of Bernstein-Bezier patches. P is

modeled using splines in [BI92, DTS93, Guo91, Guo93].

Some implicit techniques [Alf89, BI92, BCX95, Dah89, DTS93, FLN+90,

Guo91, Guo93, Nie93a] have the drawback that they require an input triangulation

of P in order to fit it with smooth implicit surface patches, while others [BBX95,

MW91] don’t. However, such methods build an intermediate polyhedral represen-

tation of their own. Hoppe and others, in their three-stage process of [HDD+92,


HDD+93a, HDD+94] also build a polyhedral model in order to get a smooth sur-

face representation of P.

Radial Basis Functions (RBFs)

A lot of work in implicit reconstruction has gone into Radial Basis Functions

(RBFs). Earlier techniques [CFB97, SPOK95, TO98, TO99] could not exploit

the power of RBFs because of large computational requirements [DLR86, Flu92,

SS91]. Yngve and Turk [YT99] propose to work with a reduced point set, but

the simplification compromises the ability to represent complex objects with arbi-

trary topology and also loses detail. In [CMB+01], Carr and others make mod-

ifications to the required calculations to achieve a very fast algorithm that inter-

polates the given points using polyharmonic RBFs. Recent RBF methods include

[DTS01, KHS03, MYR+01, OBS03].

The state of the art in implicit reconstruction [OBA+03] is represented by [CMB+01, DTS02, TO02].

Zero set (Z(f)) methods

A separate and distinct class of implicit techniques tries to approximate the data

with a ‘best-matching’ function. A function is more likely to be chosen than an-

other if its zero set, Z(f), is closer to the data points. [Pra87, Tau91] minimize the sum of squared Hausdorff distances from data points to Z(f), where f is a polynomial in three variables. Muraki [Mur91] takes f to be a linear combination of 3 Gaussian kernels. Apart from the closeness of f to 0 at the data points, his goodness-of-fit function also measures how well the unit normals to Z(f) match the normals estimated from the data.
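The flavour of such algebraic fits can be seen in the following sketch, which fits a general quadric f = c^T m(x, y, z) to the data by minimizing the sum of squared values of f at the points under the constraint ||c|| = 1 (solved with an SVD). This is only the generic least-squares idea; the particular normalizations used in [Pra87, Tau91] differ.

    import numpy as np

    def fit_quadric(P):
        # Monomial basis of a general quadric: x2, y2, z2, xy, xz, yz, x, y, z, 1.
        x, y, z = P[:, 0], P[:, 1], P[:, 2]
        M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones(len(P))])
        # Minimize ||M c||^2 subject to ||c|| = 1: c is the right singular vector
        # belonging to the smallest singular value of M.
        _, _, Vt = np.linalg.svd(M, full_matrices=False)
        return Vt[-1]

    def quadric_value(c, q):
        x, y, z = q
        m = np.array([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, 1.0])
        return float(c @ m)   # approximately 0 near the fitted zero set Z(f)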

In [HDD+92, HDD+93a, HDD+94], Hoppe and others describe a three-stage procedure to extract a smooth functional representation of P. First they generate a triangular mesh from P in a stepwise process. The first step is to use neighbour information in P to assign tangent planes to each point. These planes help define a signed distance function, f, which is 0 for points on the surface, positive for points outside it, and negative for points inside it. A Marching Cubes algorithm [AS85] is then used to extract a polygonal representation of Z(f). The obtained mesh is optimized with respect to the number of triangles and the distance from P. A smooth surface is then constructed from the mesh.
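The first stage of this procedure can be sketched as follows (our own condensed version; the consistent orientation of the normals, which the authors obtain by propagating orientation over a neighbourhood graph, is omitted here):

    import numpy as np
    from scipy.spatial import cKDTree

    def tangent_planes(P, k=10):
        # For each point, fit a plane to its k nearest neighbours: the centroid is
        # the plane origin, and the direction of least variance (smallest singular
        # value) is the (unoriented) normal.
        tree = cKDTree(P)
        origins, normals = [], []
        for p in P:
            _, idx = tree.query(p, k)
            nb = P[idx]
            o = nb.mean(axis=0)
            _, _, Vt = np.linalg.svd(nb - o)
            origins.append(o)
            normals.append(Vt[-1])
        return np.array(origins), np.array(normals)

    def signed_distance(q, origins, normals, tree=None):
        # f(q): distance from q to the tangent plane of the nearest plane origin.
        if tree is None:
            tree = cKDTree(origins)
        _, i = tree.query(q)
        return float(np.dot(normals[i], q - origins[i]))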

Curless and Levoy [CL96, Cur97] take a similar approach in that they also use a


signed distance function from estimated tangent planes. Bajaj and others [BBX95]

use α-shapes [EKS83, EM94] to construct f, while Boissonnat and Cazals [BC00]

use interpolation based on natural neighbours which are easily obtained from the

Voronoi diagram of P [Sib80, Sib81, Wat92]. Bernardini and others [BBCS97] and Fomenko and Kunii [FK88] present more Z(f) methods.

4.2.2 Physics-based and deformable-model techniques

Deformable model techniques grow an initial mesh subject to some constraints

until it is satisfactorily close to P. Sclaroff and Pentland [SP91] describe a method

for fitting a deformed sphere to a set of points using deformations of a superquadric.

Kobbelt and others [KVLS99] simulate the wrapping of a plastic membrane around

the object. Liao and Medioni [LM95] use a modified version of two dimensional

splines which they call ‘B-snakes'. They repeatedly fit B-snakes to P and make

corresponding changes to the initial model so as to modify its medial axis.

Physics-based methods describe an energy term whose minimization controls

the initial model’s growth. Various energy terms measure the closeness of the

model to P, the smoothness of the model and other attributes. Hoppe and oth-

ers [HDD+93b] deform an initial mesh under such conditions. [PS91, Ter86,

TPBF87] introduce the concepts of snakes and active surfaces in this regard. Chen

and Medioni [CM95] initialize their mesh with a simple balloon model that is

totally contained in P. They then ‘inflate' the balloon until it meets P. [Set96,

ZMOK98] use variational level set methods to deform under energy conditions a

membrane enclosing P. Zhao and Osher [ZO02] present another level set approach.
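The general idea behind such energy formulations can be illustrated with a toy 2D "snake" (our own sketch, not any of the cited methods): a closed polygon is pulled towards the nearest data points while a smoothness term penalizes bending, and the total energy is decreased by simple gradient descent.

    import numpy as np
    from scipy.spatial import cKDTree

    def fit_snake(P, n=40, lam=0.25, step=0.3, iters=200):
        # P: (N, 2) sample points. Initialize a circle around the data and let a
        # data term (pull toward the nearest sample) compete with a smoothness
        # term (discrete Laplacian of the polygon).
        c, r = P.mean(axis=0), 1.5 * P.std()
        ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
        V = c + r * np.column_stack([np.cos(ang), np.sin(ang)])
        tree = cKDTree(P)
        for _ in range(iters):
            _, idx = tree.query(V)
            data_force = P[idx] - V                         # toward nearest points
            smooth_force = np.roll(V, 1, 0) + np.roll(V, -1, 0) - 2 * V
            V += step * (data_force + lam * smooth_force)   # one descent step
        return V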

Chaine [Cha03] uses the notion of ‘flow’, brought into Computer Graphics

by [Ede02, GJ02], and translates a physical convection scheme presented by Zhao

and others [ZOF01] into a geometric algorithm based on a three dimensional De-

launay triangulation of P. He selects a closed and oriented surface in the 3-dimensional triangulation of P and transforms it using physical convection schemes.

Taubin [Tau95] proposes a signal theoretic approach to the surface reconstruction

problem.

4.2.3 Computational Geometry approaches

Most of the techniques presented so far can be classified as Computer Graphics

approaches to Surface Reconstruction. They content themselves with producing a

model that looks reasonably close to the input point set. Algorithms are tested


empirically – they are run on point clouds from known surfaces and the output is

compared with the actual surface. Computational Geometry approaches tend to be

more theoretical and interest themselves not just in techniques but also in proofs

and guarantees. [MM98] gives a good classification of Computer Graphics and

Computational Geometry approaches to Surface Reconstruction.

Computational Geometry techniques are combinatorial in nature – they gener-

ate a triangulation of the points, typically the Delaunay triangulation, and output

some of the generated faces. One of the earliest works on surface reconstruction

in this field is that on α-shapes by Edelsbrunner and others [EKS83]. α-shapes are parameterized constructions of Voronoi diagrams and Delaunay triangulations that associate a polyhedral shape with an unorganized set of points. A simplex (edge, triangle, tetrahedron) is included in an α-shape if its circumsphere of radius at most α is empty of sample points. The spectrum of α-shapes, i.e. the set of α-shapes for all values of α, gives an idea of the overall shape of the point set. α-shapes are used widely for surface reconstruction [BBCS99, BMR+99, EM94, TC98]. Bajaj and others [BBX95] use α-shapes to obtain an intermediate volumetric representation of P. Sakkalis and Charitos [SC99] and Melkemi [Mel97] use α-shapes

for Curve Reconstruction.

Another early work is that of Boissonnat [Boi84]. His Delaunay Sculpting

algorithm progressively eliminates tetrahedra from the Delaunay tetrahedraliza-

tion of P based on their circumspheres. The use of Delaunay triangulation and

its dual, the Voronoi diagram, for Surface Reconstruction is widespread. For

the two-dimensional case, [ABE98, Att97, BB97, dG95, Gol99] provide theoret-

ical results on two dimensional Delaunay based smooth curve reconstruction and

[AM00, DK99, DMR99, Gie99] do so for curve reconstruction techniques based

on the traveling salesman problem and other ideas from Computational Geometry.

Amenta and others [AB98, ABK98] were the first to give theoretical guarantees

for their Crust algorithm in the three dimensional case. Their crust algorithm is

similar to Boissonnat’s Delaunay sculpting and to the method of Melkemi [Mel97]

in two dimensions. They use the Voronoi diagram of P to extract its medial axis, M. The edges in the Delaunay triangulation of M ∪ P that connect points from P are included in the crust. In three dimensions, they choose two Voronoi vertices for each point in P to approximate M. They prove that, under sufficient sampling con-

ditions, their reconstruction is close to the original surface. In [ACDL00], Amenta

and others introduce the Cocone algorithm which simplifies both the proof and the

algorithm for Crust. They prove that, given the sampling conditions, their recon-


struction is homeomorphic to the original surface. Dey and others [DFR01] give

a fast implementation of Cocone under more relaxed conditions and Funke and

others [FR02] relax the conditions even further in their implementation.

In contrast to the method of Amenta and others, Boissonnat and Cazals [BC00]

present an algorithm with no sampling preconditions. They make up for undersam-

pling by using the Voronoi diagram of P to generate natural neighbours [Sib80,

Sib81, Wat92] through which they interpolate to directly produce a smooth sur-

face. They however require normal information. When absent, normals are esti-

mated from the data.

Recently, Amenta and others [ACK01] use ideas from [Ede93] to present the

Powercrust algorithm where they use the Voronoi diagram of P to reconstruct a

surface as the union of a finite set of balls. The algorithm is similar to the technique

used by Sakkalis and Charitos [SC99] to reconstruct curves. Both these techniques

provide topological guarantees under sampling conditions. With the Powercrust,

Amenta and others do away with problems of their earlier algorithms which may

output non-manifolds if P does not meet the required sampling conditions.

Another interesting approach is taken by Mencl in [Men95] where he fills con-

tours in an extension of the Euclidean minimum spanning tree of P. Attene and Spagnuolo [AS00] use the minimum spanning tree and Gabriel graph of P to define

new tetrahedra removing operations for sculpting algorithms like [Boi84]. In the

tradition of Delaunay-based methods, Gopi and others also present an algorithm

in [GKS00]. Edelsbrunner [Ede98] reports success with a proprietary software.

Bernardini and others [BBCS97] present another method in this spirit.

The cost of most of these algorithms depends on the cost of calculating the

Delaunay triangulation of P. With the advent of fast implementations for the prob-

lem [Dev98], such methods can be quite fast.

4.2.4 Parametric and projection-based methods

Parametric techniques represent the surface as a function of a two dimensional

parameter. The domain space is usually a plane [HS89, VMA86, Vem87] or a

sphere [Bri85, SB78, SB79].

Projection-based methods take the surface to be reconstructed to locally be the

graph of a function, or a plane. They assign a plane to each sample and triangulate

P by projecting neighbouring points to the plane. [ABCO+01, Boi84, FCOAS03,

OG98] present a few such techniques. Levin [Lev03] triangulates the estimated

function by using least-square function approximation techniques.


4.2.5 Structured techniques

The proliferation of relatively cheap and fairly accurate optical scanners and their

subsequent widespread use has ushered in the development of special purpose algo-

rithms tuned to range data. Foremost among these are the methods of Curless

and Levoy [CL96, Cur97] and Turk and Levoy [TL94]. Curless and Levoy use

range data to derive error and tangent plane information, similar to the method of

Hoppe and others [HDD+92, Hop94]. They use this information to represent the

entire surface as one continuous function which they evaluate locally on a voxel

grid. Because they make only local computations, their method is especially fast

and can handle very large data sets. They make subsequent hole-filling steps which

also use problem specific information. As the technique uses one function to repre-

sent the entire volume of the surface, it can be classified as implicit and volumetric.

The method of Hilton and others [HSIW96] is similar to this method in that they

use weighted signed distance functions for merging range images.

In [TL94], Turk and Levoy propose an incremental algorithm which merges

range scans by first eroding away redundant geometry and then ‘zippering’ along

the remaining boundaries. Finally they perform a ‘consensus’ step that reintro-

duces the original geometry to establish final vertex positions.

Earlier methods include those of Soucy and Laurendeau [SL92] who use Venn

diagrams to identify overlapping data regions, followed by reparameterization and

merging of regions. Rutishauser and others [RST94] use errors along lines of sight

from the sensor to establish a consensus surface position, followed by a retessella-

tion that incorporates redundant data. Pulli and others [PDH+97] build a hierarchi-

cal volume representation of P using an octree [SF92]. Grosso and others [GSF88]

generate depth maps from stereo and average them into a volume with occupancy

ramps of varying slopes corresponding to uncertainty measures. Succi and oth-

ers [SSGT91] also create depth maps from stereo and optical flow and merge them

volumetrically using a straight average of estimated voxel occupancies. The recon-

struction is an isosurface extracted at an arbitrary threshold.

Other structured techniques rely on assigning values to a voxel grid. The values

may be binary, ternary or may follow a probability distribution. Connolly [Con84]

casts rays from a range image accessed as a quadtree into a voxel grid stored

as an octree. In [CSA88], Chien and others generate octree models under the

condition that the lines of sight are in the direction of the six faces of a cube.

[LC94, TG94] describe methods for generating binary voxel grids from range data.

Elsner and others [EWA97] cast rays through a range image into a volume grid and


assign probability values to the grids.

4.2.6 Robust methods

Often, in processing range data, the methods described above cope well when the data

contains only additive zero mean Gaussian noise. Least squares methods [Pra87],

for example, give optimal results with this type of data. However any outliers in

the data, that is, data that does not conform to the Gaussian distribution, will cause

the estimators to give erroneous results. Techniques that give good results in the

presence of outliers are termed robust.

There are two main types of outlier data to consider. Firstly, unstructured outliers occur as a result of errors in measurement in the device. Secondly, structured outliers are points on surfaces that do not belong to the surface being approxi-

mated. Meer and others [MMKR91] give an overview of various robust regression

methods. Their ‘estimators’ apply a function to the residuals, the sum of which is

minimized, min f(r). In the least squares case f(r) = r². Other functions seek to minimize the effect of larger residuals. Details of other f(r) functions are given

by Zhang [Zha95].

The proportion of outliers to the total number of data points that an estimator

can tolerate before failing to fit a surface is termed the breakdown point. Least squares estimators have a breakdown point of 0% and the LMedS [RL87] has a breakdown point of 50%. Several authors describe methods for achieving higher

than 50% with additional computational expense.

The Least median of squares (LMedS) technique repeatedly chooses random

samples and fits a surface to them. The one with the least median residual is chosen

as the best fit and is output. Stewart [Ste95] describes the MINPRAN algorithm

which is similar to LMedS, but uses a different criterion function. [LMP98] uses

a variation of LMedS estimator to fit planar surfaces to range data. Instead of the

median residual they use the K-th residual. They also achieve a greater than 50%

breakdown point for their method. Fits are chosen over a number of values of K.

The fit with the lowest error is taken as the best fit.

4.2.7 Learning techniques

A novel perspective on Surface Reconstruction is to see it as a learning problem.

Methods based on this approach assume that all necessary information about the


shape of the object to be modeled³ is contained in the range data. Reconstruction

then reduces to a matter of simply learning that information. The topic of statistical

learning is detailed in [HTF01].

Taking a learning approach opens the door to automated learning methods,

more specifically artificial Neural Networks. A detailed handling of the use of

Neural Networks in shape learning can be found in [Bis95]. In [Koh82], Kohonen

introduces Self Organizing Maps (SOMs), a type of Neural Network. Based on

SOMs, Fritzke introduced Growing Cell Structures (GCSs) in [Fri93]. These are

presented in more detail in [Boh00]. Whereas SOMs are typically static, GCSs

are incremental [Fri96]. SOMs have found use in Computer Graphics applica-

tions in visualizing multidimensional data [GS93], in free-form surface reconstruc-

tion [HV98], in grid fitting [BF01], and in mesh generation [Yu99]. The technique

presented in [VHK99] is closest to the Neural Mesh technique in that it uses GCSs

for Surface Reconstruction.

³ plus, unfortunately, some noise


Part II

Core: Neural Meshes



We study in detail the development of Neural Meshes from the originally proposed basic algorithm to the most recent ensembles approach.


Chapter 5

Preliminaries

The Neural Mesh algorithm has undergone several refinements. Because of that,

there is more than one variant of it. We present each of the variants in detail in

subsequent chapters. This chapter gives a general introduction to the variants and

serves as a preparation before we go into the technicalities later.

In talking about Neural Meshes, we shall frequently use terms that are native to

Neural Network contexts. Section 5.1 explains the usage of such terms for Neural

Meshes and also lists the formal terminology that will be used in the technical

discussions later. Section 5.2 brings together the different variants by giving an

overview of the underlying algorithms. It also outlines the differences between

Neural Meshes and the Growing Cell Structures [Fri93] they are inspired from.

Section 5.3 describes the simple initialization of the Neural Mesh which is the

same in almost all variants.

5.1 Terminology and notation

A Neural Mesh simulates a Neural Network. Vertices in the mesh correspond to

nodes of the network and edges to the network’s connections. The configuration

of the Neural Mesh at any time is analogous to the state of the underlying Neural

Network at that time. For this reason, we shall use terminologies from both con-

texts interchangeably, e.g. ‘state of the mesh’. This does not lead to ambiguity as

long as one understands that the Neural Mesh and the Neural Network are one and

the same.

During the course of a run of the algorithm, the mesh will regularly be searched



for the node closest¹ to a sampled point, or sample². The closest node is called the winner. When a later search for the closest node returns another node, then that node becomes the winner. Once a winner is selected, it is typically repositioned and its neighbourhood is smoothed. The winner and its neighbourhood are now said to have ‘learned’ from the sample. Nodes that are often winners and do a lot of learning are active nodes whereas those that are seldom or never winners are lazy. The activity of any node is reflected in a real-valued function that is assigned to it. This function is called the node’s signal counter. If a region in the mesh has many active nodes, then the region is also active.

Following is a list of the technical terminology that will be used in the following discussions.

• The input point cloud is denoted by P.

• The Neural Mesh is denoted by M.

• Nodes of M are denoted by v. When a particular node is meant, a corresponding subscript is added, e.g. a winner is denoted as vw.

• Where applicable, the label of a mesh element denotes not just the element but its state as well, e.g. vw refers not only to the winner but also to its position.

• Signal counters are denoted as τ. For example, τx denotes the signal counter of the node vx.

• m′ represents the new state of mesh element m.³ This reflects the continuously changing state of the network. As an example, τ′w = 0.5 ∗ τw means that the signal counter of node vw will from now on be half of its current value.

We remain consistent with this notation throughout this thesis, even if it means

being inconsistent with the notation of any of the papers that introduce the variants

we discuss in the following chapters.

¹ in terms of Euclidean distance.
² Points sampled from the input point cloud serve as inputs to the algorithm.
³ Note that m′ does not denote a new element – it denotes only the new state of the existing element m, which shall continue to be denoted as m.
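To make the notation concrete, the following minimal Python sketch (illustrative only; the names Node, position, tau and ring are placeholders, not the thesis implementation) shows the per-node state that the following chapters manipulate – a position, a signal counter τ and the 1-ring connectivity:

from dataclasses import dataclass, field
import numpy as np

@dataclass
class Node:
    """One node v of the Neural Mesh M."""
    position: np.ndarray                          # current position of v
    tau: float = 0.0                              # signal counter of v
    ring: set = field(default_factory=set)        # indices of the 1-ring neighbours of v

# Example of the "new state" notation: tau'_w = 0.5 * tau_w halves the winner's counter.
v_w = Node(position=np.zeros(3))
v_w.tau = 0.5 * v_w.tau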


5.2 Overview

There are several variants of the Neural Mesh algorithm. In each variant though,

the Neural Mesh, M, starts out as a tetrahedron and grows by conditioning itself to samples taken repeatedly from the input point cloud, P. These samples are used as training data for M to learn the shape represented by P. Each vertex in M has an associated real-valued function called a signal counter which is initialized with zero. For each sample, the closest vertex in M is found⁴. This vertex is called the winner. It

is moved some distance toward the sample, its 1-ring neighbourhood is smoothed

and its signal counter is incremented. A vertex’s signal counter is thus a measure

of its activity.

Vertices exhibiting high activity are periodically ‘rewarded’ by adding new

vertices in their neighbourhoods. The idea is that active vertices represent regions

in M that are currently under-representing the corresponding regions in P. More

vertices are added to these regions for a fairer representation. Active nodes are

identified by their high signal counter values. When a new node is added, it claims

some fraction of the signal counter of the most active node.

As a counterpart to the above, vertices which have not been active for some

time – lazy vertices – are periodically removed from the mesh. Just as active ver-

tices are found in under-represented regions of the point cloud, lazy vertices occur

in over-represented regions. These regions need to be rid of misplaced nodes for a

fairer representation of P. Identification of these nodes is not straightforward and

is handled varyingly in different variants.

Later variants of the algorithm include two further topology-learning steps which can be switched on or off. These steps create boundaries by periodically removing very large triangles and create handles by merging boundaries that lie very near each other. The criterion to remove a triangle is that its area must be more than a certain multiple of the average area of the triangles in the mesh. The justification behind removing large triangles is that triangle area in M is inversely proportional to the density of the corresponding region in P. A very large triangle thus represents an especially sparse region in P – a region of P that should not be represented at all. The bound-

ary merging criterion uses a threshold on the Hausdorff distance [Ata83, Rot91]

between the boundaries.

Another variant of the algorithm deals with the possibility of noise in P. Noise can be in two forms – noisy data and outliers. Noisy data is usually caused by im-

⁴ in case of more than one closest vertex, any one is chosen


perfections in rangefinding hardware and results in slightly displaced range values.

On the other hand, outliers are points in P that do not correspond to the represented surface. These are caused by errors in the measuring device, object slope and/or measurement procedure. Notice that Neural Meshes, being learning algorithms, can naturally cope with some amount of noise. The variant enhances this natural ability by filtering the samples obtained from P and controlling the movement of

winners towards suspicious samples.

A more powerful method builds upon this by running the algorithm several

times and ‘averaging’ the models obtained from each run.

5.2.1 Differences from GCSs

Neural Meshes are a direct extension of Fritzke’s Growing Cell Structures [Fri93].

However, they do introduce some new features of their own. Firstly, GCSs are

designed for n-dimensional simplicial complexes. This generality affords them operations which would be unsafe when applied to triangle meshes, i.e. edge splits

and node removals, which can cause topological changes in a triangle mesh. The

Neural Mesh algorithm replaces them with vertex split and edge-collapse, whose

topological impact can be easily checked.

In GCSs, not just the winner but its neighbours also learn from the sample s, i.e. the 1-ring neighbours of vw also move towards s but at a smaller rate. This leads to faster overall convergence. Neural Meshes, to avoid unwanted artefacts, do not move the neighbours of a winner, but smooth its neighbourhood instead. In the topology learning variants, the algorithm differs from other topology learning algorithms [Fri95, MS94] in that its main primitives are the boundaries instead of the vertices and edges of the mesh. This helps to deal with topological noise in P.

The quality of a reconstruction is typically quantified by the ratio of regular (valence 6) vertices in M.

5.3 Initialization

The initialization of M, which is the same in almost all variants, is as follows:

• M is a tetrahedron.

• For each v in M, τv = 0

• iterations = 0
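As a minimal illustration of this initialization (a Python sketch with hypothetical helper and variable names, not the actual implementation):

import numpy as np

def init_neural_mesh():
    """Initialize M as a tetrahedron; every signal counter starts at zero."""
    # Any non-degenerate tetrahedron will do as the starting mesh.
    positions = np.array([[ 1.0,  1.0,  1.0],
                          [ 1.0, -1.0, -1.0],
                          [-1.0,  1.0, -1.0],
                          [-1.0, -1.0,  1.0]])
    faces = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]   # the four triangular faces
    tau = np.zeros(len(positions))                          # for each v in M, tau_v = 0
    iterations = 0
    return positions, faces, tau, iterations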


Chapter 6

The Basic Algorithm

Neural Meshes were first presented in [IJS03b]. The algorithm, presented as Al-

gorithm 1, is the base for all future variants which simply add to and/or modify its

steps. A thorough understanding of this algorithm facilitates understanding later

variants.

The algorithm iterates over three simple steps — one Geometry Learning step

and two Growth steps — until some user criterion is met. This termination criterion

is typically the number of nodes in M but can just as well be the mean distance between M and P, the mean distance between vw and s over the past few iterations, etc. Though the steps themselves are preserved in all later variants, there are differences in how they are carried out, e.g. some variants update signal counters differently than others. In the following sections, we see how the basic algorithm goes about these steps, followed by an analysis of the cost¹ of these steps. This format shall

roughly serve as a template for the following few chapters that discuss variants of

this algorithm.

6.1 The Basic Step

The Basic Step is a collection of several simpler substeps. It learns the geometry

of the target shape with as many nodes as M currently has. Notice that the sampling substep of this step is the only place in the algorithm where the input point cloud, P, is involved. This makes the algorithm’s running time independent of the size of P. This is a distinct advantage over other techniques that process all input

¹ Cost refers to the complexity term of the algorithm which determines running time.



Algorithm 1 The basic Neural Mesh algorithm

Repeat until some user criterion is met:

1. GEOMETRY LEARNING - BASIC STEP

• Sample P uniformly at random and return a sample, s.

• Find the vertex, vw, in the mesh that is closest to s.

• Update the position of vw.

• Update the positions of 1-ring neighbours of vw.

• Update the winner’s signal counter.

• Update all signal counters.

• iterations = iterations + 1

2. GROWTH

(a) NODE ADDITION

every Cvs iterations, where Cvs is an input parameter:

• find the vertex, vs, with the highest signal counter.

• add a new node, vn, in the neighbourhood of vs.

• assign a ratio of the signal counter of vs to vn.

(b) NODE REMOVAL

every nCec iterations, where Cec is an input parameter and n is the current number of vertices in M:

• find lazy vertices.

• remove them.

points. Also, the fact that only the winner moves towards the sample² classifies the learning process as one of competitive learning³.

The winner, vw, is found initially by comparing the distances between the sample, s, and each node in M. After M crosses an initial threshold, e.g. 1000 nodes⁴, the nodes are copied to an octree [SF92]. This makes the search for the winner more efficient. An added cost of the octree is that it needs to be constantly updated to keep track of the movement of the nodes in M.

² In GCSs, the winner’s topological neighbours also move towards the sample, though by a lesser distance than the winner.
³ In Neural Network literature, the term ‘winner’ is generally used in competitive learning contexts.
⁴ Recall that M is almost always initialized with a tetrahedron, i.e. 4 nodes. The number of nodes in M is changed by the Growth steps, which we discuss shortly.
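The winner search itself is conceptually simple. The sketch below (Python, illustrative; positions is assumed to hold all node positions) shows the brute-force version; the octree acceleration used once the mesh is large enough is omitted:

import numpy as np

def find_winner(positions, s):
    """Return the index of the node of M closest (in Euclidean distance) to the sample s.

    Brute force over all n nodes, O(n) per query; the thesis implementation switches
    to an octree past an initial mesh size so that this query becomes cheaper.
    """
    d2 = np.sum((positions - s) ** 2, axis=1)    # squared distances to every node
    return int(np.argmin(d2))                    # ties are broken arbitrarily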


Figure 6.1: Basic Step - node movement and pitfalls

(a) P is sampled at s. The winner moves towards s and its topological neighbours are smoothed. (b) M has converged to a local minimum. The two non-shaded nodes will never be winners, nor their topological neighbours. They will thus never move from their positions. (c) A foldover has occurred

in M. This results from inappropriate values of user constants.

Once vw is found, its position is updated as

v′w = (1 − αw) vw + αw s    (6.1)

where αw is a user-defined constant controlling the winner’s movement and thus M’s rate of learning. A relatively large value of αw implies quick learning and fast convergence. But it can also lead to unwanted properties in M like convergence to local minima and foldovers (Figure 6.1). On the other hand, a conservative value manages to better avoid these artefacts at the cost of learning rate. From our experience, the value of 0.1 is a good compromise between the two. Figure 6.2 illustrates the effect of αw on M.

After vw has been repositioned, the positions of the 1-ring neighbours of vw⁵ are also updated. This is the smoothing substep and it ensures smoothness of the mesh and a good vertex distribution for the given connectivity. For every vi ∈ 1-

ring(vw),

⁵ i.e. all nodes connected by an edge to the winner.


Figure 6.2: Effect of αw

Reconstructions of the Max Planck model at 5k vertices with αw set, from left to right, to 0.03, 0.1 and 0.4 respectively. The general shape is captured well in all models. As αw increases, feature learning gets better as illustrated in the crease between the lips, the tip of the nose, the eyes and the ears. A large αw causes exaggerated learning leading to spiky triangles and loss of smoothness. Degradation of mesh quality in the form of the ratio of regular vertices is also shown by the graph at the bottom, which shows the ratio of regular vertices in M during its growth, for the different values of αw. Although the value of αw = 0.03 gives better results than αw = 0.1, the latter value is chosen as the default as it provides better learning with minimal loss of mesh quality.

• the discrete Laplacian [Tau95] is computed,

L(vi) = (1 / valence(vi)) Σ_{vk ∈ 1-ring(vi)} (vk − vi)    (6.2)


• the displacement is calculated,

Lt(vi) = L(vi) − (L(vi) · n) n    (6.3)

where n is the approximated vertex normal of vi.

• and the position of vi is updated

v′i = vi + αL Lt(vi)    (6.4)

where αL is an input parameter, typically 0.05⁶.

The above process is repeated CL times, where CL is an input parameter,

e.g. 5. This smoothing substep also helps prevent and resolve unwanted effects

like foldovers and convergence to local minima.
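The repositioning and smoothing substeps (eqs. 6.1–6.4) can be sketched as follows (Python, illustrative; the names positions, normals and ring_of are placeholders, and the vertex normals are assumed to be estimated and maintained elsewhere):

import numpy as np

def move_winner(positions, w, s, alpha_w=0.1):
    """Eq. 6.1: move the winner v_w a fraction alpha_w of the way towards the sample s."""
    positions[w] = (1.0 - alpha_w) * positions[w] + alpha_w * s

def smooth_winner_ring(positions, normals, ring_of, w, alpha_L=0.05, C_L=1):
    """Eqs. 6.2-6.4, repeated C_L times, applied to the 1-ring neighbours of the winner.

    ring_of[i] is the list of 1-ring neighbour indices of node i;
    normals[i] is the (unit) approximated vertex normal of node i.
    """
    for _ in range(C_L):
        for i in ring_of[w]:
            nbrs = ring_of[i]
            L = np.mean(positions[nbrs] - positions[i], axis=0)   # eq. 6.2
            n = normals[i]
            Lt = L - np.dot(L, n) * n                             # eq. 6.3 (tangential part)
            positions[i] = positions[i] + alpha_L * Lt            # eq. 6.4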

Two signal counter computations follow. Firstly the winner’s signal counter is

updated as

τ′w = τw + 1    (6.5)

and then all signal counters, including the winner’s, are updated, i.e. for all v,

∀ vi ∈ M : τ′i = αsc τi    (6.6)

where αsc is an input parameter between 0 and 1, typically 0.95. The job of the

signal counter is many-fold. It indicates the active nodes in that they have higher

signal counters than other nodes (eq. 6.5). At the same time, it favours recent activ-

ity in that recent winners have a higher signal counter than the older ones (eq. 6.6).

This favoritism ensures that nodes that earlier took active part in learning but have

now been somehow misplaced are not confused with those that are currently active.

These ‘misplaced’ nodes are typically found in clusters that are over-representing

P and are discussed in more detail in Section 6.2.2.
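In code, the two counter updates are a single increment followed by a global decay; the short sketch below (Python, illustrative; tau is assumed to be an array of all signal counters) makes explicit that eq. 6.6 touches every node in M:

def update_signal_counters(tau, w, alpha_sc=0.95):
    """Eq. 6.5: reward the winner; eq. 6.6: decay all signal counters, the winner's included."""
    tau[w] += 1.0      # tau'_w = tau_w + 1
    tau *= alpha_sc    # tau'_i = alpha_sc * tau_i for every v_i in M -- O(n) work per sample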

6.1.1 Choosing αL

For M to learn the shape of P, as desired, the changes in the shape of M should be such that M converges to the shape of P. Now, considering the steps that change the shape of M, αw controls the winners’ movements (eq. 6.1) and αL controls the smoothing of the winners’ neighbourhoods (eq. 6.4). This suggests a relationship

⁶ We talk more on selecting a value for αL in Section 6.1.1.


between the two parameters, αw and αL. However, the discussion following eq. 6.1 and Figure 6.2 has already brought us to a somewhat ‘optimal’ value of αw, i.e. αw = 0.1. We thus experiment with different values of αL for this fixed value of αw.

As such, M converges towards P even without the smoothing substep, i.e. when αL = 0. However, because of the possible convergence to local minima in M, the shape that M converges to is not necessarily the one represented by P. As mentioned earlier, it is exactly to avoid such problems that the smoothing substep is introduced. A natural question that arises out of this is the range of acceptable values of αL. Too high a value might drive M away from convergence and a value that is too low might retain local minima. Figures 6.3 and 6.4 illustrate the effect of varying αL on M.

Figure 6.3: Mesh quality vs. αL

Mesh quality, measured by the ratio of regular vertices in M, is measured for 5k reconstructions of 5 models with varying values of αL. For all models, mesh quality improves initially as αL grows. After αL = 0.2, the quality of reconstructions for 3 of the models begins to decline while it continues to improve for the other 2. It is however clear that the quality is strongly dependent on the chosen value of αL.

From the graph, the value of αL = 0.2 seems to be viable for a general setting. This is different from the authors’ default value of αL = 0.05. This is because in our tests, we use only 1 iteration of smoothing (CL = 1) whereas the authors used 5 smoothing iterations (CL = 5) for their tests. The results achieved in both cases are similar, though the computational cost for additional smoothing is much greater. We talk more about additional smoothing iterations in Section 6.1.2.

6.1.2 The significance of CL

The winner’s 1-ring neighbourhood is smoothed not once but CL times, where CL

is usually 5. The extra smoothing better resolves unwanted artefacts like foldovers

and convergence to local minima (Figure 6.1). This leads to a better quality mesh

as shown in Figure 6.5. The improved mesh quality however comes at the cost of

running time and surprisingly, does not lead to much improvement in the visual


Figure 6.4: Varying αL

5k reconstructions of some models with αL set, from left to right, to 0, 0.05, 0.2, 0.4 and 0.5 respectively. In the top 3 rows, mesh quality is compromised for higher values of αL. The value of αL = 0 does not give good results, as we had expected.

Dark parts of the meshes indicate ‘twists’, i.e. M has turned inside out at these places during growth. Twists are not uncommon in M and do not necessarily indicate erroneous learning.

quality of the output. Smoother meshes are also less faithful to P, as shown in

Table 6.1.


Figure 6.5: Extra smoothing

(a) smooth once (b) smooth 10 times

(c) Node valences (d) Time for reconstruction

We exaggerate the effect of CL by setting it to 10. Reconstructions at 5k of the brain model with CL = 1 (a) and CL = 10 (b) are shown. Despite exaggerated smoothing, there is marginal difference in visual quality but the quality of the smoother mesh, as indicated by the occurrence of regular vertices (c), is vastly superior. Running time (d) for the smoother model is correspondingly high.

6.2 Growth

The Growth steps are instrumental in M’s learning. While the Basic Step serves only to learn the geometry of P, the Growth steps help M learn sharp features and concavities in P and increase the size of M. These steps insert new nodes into M

and remove existing ones. The relative frequency of their invocation determines


Table 6.1: Extra smoothing

          Exp. 1   Exp. 2   Exp. 3   Avg
CL = 1     4.56     5.52     4.53    4.87
CL = 10    4.96     5.36     5.45    5.26

A smoother model does not represent P as faithfully as a less smooth one. The table shows Hausdorff

distances of the original brain model with reconstructions obtained with 1 and with 10 smoothing

iterations.

whether M grows larger or smaller – if more nodes are removed than added, the

mesh disappears speedily! On the other hand, if nodes are added more frequently

than they are removed, the mesh grows in size. In Sections 6.2.1 and 6.2.2, we

introduce the individual steps and then discuss the relation governing the growth

of M in Section 6.2.3. In Section 6.2.4, we look at the problem of deciding signal

counter values for newly added nodes.

6.2.1 Node Addition

Every Cvs iterations, the Node Addition step is invoked. The idea is to locate high-activity regions in M, indicated by active vertices, and increase the node population there. All nodes in M are searched to find the one, vs, with the highest signal counter. If there are several such nodes, any one is chosen.

For the actual addition of a node to M, two operations present themselves as candidates — the edge split and the vertex split operations. As shown in Figure 6.6, the edge split operation retains the valence of the old node while the new node always has a valence of four. In comparison, the vertex split operation distributes the valences more evenly. Therefore the vertex split operation is selected.

The longest edge, es, incident on vs is found. Starting from es, the edge star⁷ of vs is traversed in both directions so as to find two other edges, e1 and e2, which divide the star in half. The vertex split operation is then performed on vs along e1 and e2 with the new node, vn, at the midpoint of es.

The Node Addition step is useful in that it grows M selectively. New nodes are added only where they are needed, i.e. in regions where the representation of P by M is still below par.

⁷ An edge star of a vertex is the set of edges incident on it.
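The selection part of the Node Addition step might be sketched as follows (Python, illustrative; positions, tau and ring_of are placeholder names). The vertex split itself, i.e. choosing e1 and e2 and rewiring the connectivity, depends on the mesh data structure and is left abstract here:

import numpy as np

def node_addition_candidates(positions, tau, ring_of):
    """Find the most active node v_s, the far endpoint of its longest incident edge e_s,
    and the position of the new node v_n (the midpoint of e_s)."""
    v_s = int(np.argmax(tau))                              # node with the highest signal counter
    ring = ring_of[v_s]                                    # 1-ring (edge star endpoints) of v_s
    lengths = [np.linalg.norm(positions[u] - positions[v_s]) for u in ring]
    e_s_end = ring[int(np.argmax(lengths))]                # far endpoint of e_s
    v_n_pos = 0.5 * (positions[v_s] + positions[e_s_end])  # v_n lies at the midpoint of e_s
    return v_s, e_s_end, v_n_pos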


Figure 6.6: Edge Split vs. Vertex Split

(a) A is to be split along edge AB. (b) C is added by an edge split operation. (c) C is added by a

vertex split operation along edges AX and AY.

6.2.2 Node Removal

The Node Removal step is invoked every nCec iterations where n is the number of nodes in M. A typical value of Cec is 10. The idea here is to locate low-activity regions, indicated by lazy nodes, and reduce their vertex population. In theory, these nodes can be identified with the help of the signal counter. But as shown in Section 6.3.1, limits to machine accuracy preclude this approach. Alternatively, all nodes that have not been the winners since the last Node Removal invocation are removed. Cec thus acts like a threshold; in nCec iterations, each of the n nodes is expected to be the winner Cec times. Nodes that avail none of these

chances are removed.

For the removal itself, three operations present themselves – the vertex delete, the half-edge collapse and the full-edge collapse operations. For non-trivial cases,

the vertex delete operation creates a polygonal hole. Computing how to triangu-

late this hole in an optimal way is then an added burden. The half-edge collapse

operation, which is not very much different, evades this problem in that it has a

predetermined triangulation of the resulting hole. Also, it is a natural counterpart

to the vertex split operation chosen for the Node Addition step. The full-edge col-

lapse is quite similar to the half-edge collapse but it disturbs many more triangles

around the collapsed edge. Therefore, the half-edge collapse operation is preferred.

Figure 6.7 illustrates the discussed operations.

The half-edge collapse operation, in addition to removing a node, changes the

valences of three other nodes, as shown in Figure 6.7. For that reason, when an

edge around a vertex is to be collapsed, the algorithm selects from the node’s edge


Figure 6.7: Node Removal operations

(a) B is to be removed. A, B, C and D have valences a, b, c and d respectively. (b) B is removed with the vertex delete operation. A polygonal hole results. (c) B is removed with the half-edge collapse operation along edge AB. A, C and D now have valences a + b − 4, c − 1 and d − 1 respectively. (d) B is removed using the full-edge collapse operation along edge AB. Triangles

incident on both A and B are affected.

star that edge whose collapse will cause the affected nodes to become as close to

regular as possible. Such an edge has the least regularity error, E, of all the edges in the edge star, where E is given by

E = (1/3) √((a + b − 10)² + (c − 7)² + (d − 7)²)    (6.7)

and a, b, c and d are as shown in Figure 6.7. The root and the scaling in eq. 6.7 are

superfluous for the purposes of this algorithm but prove useful in a later variant.
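As an illustration of this selection criterion, the following Python sketch (illustrative; ring_of is a placeholder for the mesh connectivity) evaluates eq. 6.7 for every edge in the star of a node v_b that is about to be removed; the validity check for the chosen collapse is omitted:

import math

def best_collapse_edge(v_b, ring_of):
    """Return the neighbour v_a of v_b whose half-edge collapse minimizes E (eq. 6.7)."""
    b = len(ring_of[v_b])                      # current valence of the node to be removed
    best, best_E = None, float("inf")
    for v_a in ring_of[v_b]:
        a = len(ring_of[v_a])
        # C and D are the two nodes adjacent to both v_a and v_b (cf. Figure 6.7).
        shared = [u for u in ring_of[v_b] if u in ring_of[v_a]]
        if len(shared) != 2:                   # unusual configuration; skip this edge
            continue
        c, d = len(ring_of[shared[0]]), len(ring_of[shared[1]])
        E = math.sqrt((a + b - 10) ** 2 + (c - 7) ** 2 + (d - 7) ** 2) / 3.0
        if E < best_E:
            best, best_E = v_a, E
    return best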

Performing half-edge collapse on certain edges produces topological anomalies

in M. Luckily, such collapses are easy to check! If the Node Removal step selects

such an edge for collapse, the collapse is not carried out. We talk more on these

‘illegal’ half-edge collapses in Chapter 7.

Removing lazy nodes from M resolves several issues. As M grows, some nodes can get misplaced, i.e. they no longer correspond to any region in P. Such a misplacement could occur due to smoothing in the Basic Step, or the initial spurt of growth where M learns the general shape of P very rapidly in the first few iterations. In this brisk period of growth, it is possible for a node to end up being misplaced. A node can also get misplaced when it ends up in a region of M that


Figure 6.8: Early growth of M

M learns the general shape of the target model very quickly. From left to right are the base mesh and

reconstructions of the Stanford bunny at 100, 250 and 500 vertices respectively.

already represents P sufficiently. Another malady of the Neural Mesh is the convergence to local minima (Figure 6.1). Nodes misplaced in this way degrade the quality of M. The Node Removal step moderates the growth of M by cleanly doing away with all such misplaced nodes in one go, thus improving the quality of M.

6.2.3 Growth rate

As the Growth steps both add and remove nodes, their relative invocation governs

whether M’s size decreases or increases. Let us first estimate the number of nodes removed every time the Node Removal step is invoked. Recall that these are the nodes that have not been the winners since the last nCec iterations. Since the number of these nodes depends on the selection of the winners which in turn depends on the random samples chosen from P, the exact number cannot be determined. We

therefore make probabilistic arguments and content ourselves with the expected

value of the number.

Let us consider a node, v, in M. Assuming that all unwanted artefacts like convergence to local minima and foldovers have been resolved, v has as good a chance as any other node to be the winner, vw, in a given iteration. v is lazy if it is not the winner, i.e. vw ≠ v for this iteration. So, for this iteration

pr(vw = v) = 1/n    (6.8)

⇒ pr(vw ≠ v) = 1 − 1/n    (6.9)

Let L denote the set of nodes that have been lazy for the last nCec iterations. These


Figure 6.9: Removing nodes

5k reconstructions of the hand model with (right) and without (left) Node Removal. M has not completely learnt the fingers in the model yet. Features close to each other compete among themselves, causing false bridges between the fingers. The Node Removal step (right) cleans up unnecessary nodes leading to smaller false bridges and a cleaner area between the thumb and the forefinger. The resulting model is smoother and of a better quality as also verified by the higher ratio of regular vertices shown in the graph.

Because of incomplete learning of the fingers, bridges in M are not removed as doing so would change the topology of M. The algorithm is unable to make such a topology-changing step at the moment.

are the nodes that will be removed. So from the above equations, it follows that

pr(v ∈ L) = (1 − 1/n)^(nCec)    (6.10)


The expected size of L, i.e. the expected number of nodes that will be removed, is then given by

E(|L|) = Σ_{v ∈ M} pr(v ∈ L)    (6.11)

Substituting from eq. 6.10, this can be rewritten as

E(|L|) = Σ_{v ∈ M} (1 − 1/n)^(nCec)    (6.12)

As there are n vertices in M, this becomes

E(|L|) = n (1 − 1/n)^(nCec)    (6.13)

As n grows, this value converges to

E(|L|) ≈ n e^(−Cec)    (6.14)

Thus the Node Removal step removes an average of n e^(−Cec) nodes each time. Between two calls of this step, the Node Addition step has been adding nodes as well. As only one node is added each time, a total of nCec/Cvs nodes have been added. The total number, nn, of new nodes can then be given as

nn = nCec/Cvs − n e^(−Cec)    (6.15)

For the mesh to grow, we then have

nn > 0    (6.16)

⇒ nCec/Cvs > n e^(−Cec)    (6.17)

⇒ Cec > e^(−Cec) Cvs    (6.18)

The inequality 6.18 gives a lower bound for Cec necessary to ensure expansion of the mesh. This lower bound is not completely accurate as it assumes M to be in a convergent state (eq. 6.14). Though this state of convergence may hold when M is reasonably big, it is not entirely accurate during initial stages of M’s growth. However, relying on the fact that M learns the general shape of P quickly (Figure 6.8), we will assume this convergence condition to hold from early on. In


practice too, a value of Cec that is greater than the above bound by 1 or 2 results in expansion of M. We shall assume for the rest of this thesis that the values of Cec and Cvs have been chosen such that the mesh grows rather than shrinks.
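A quick numerical check of this behaviour (Python; the parameter values below are purely illustrative, only Cec = 10 matches the typical value mentioned above):

import math

def expected_net_growth(n, C_vs, C_ec):
    """Expected net change in the size of M over one Node Removal period of n*C_ec iterations:
    nodes added (one every C_vs iterations) minus the expected removals of eq. 6.14."""
    return n * C_ec / C_vs - n * math.exp(-C_ec)

# With C_ec = 10 and C_vs = 50, e^(-C_ec) * C_vs is about 0.0023 < 10, so the bound 6.18
# holds comfortably and the mesh grows: roughly 200 nodes gained per period at n = 1000.
print(expected_net_growth(n=1000, C_vs=50, C_ec=10))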

6.2.4 Signal Counter computations

For the purposes of the Node Addition step, a node’s signal counter acts like a

score – the higher the score the more likely the node is to be rewarded⁸ by having a new node, vn, placed in its neighbourhood. Assigning a score to vn poses an interesting problem. If vn were to be assigned a default score of zero, and vs were to retain its score, vn’s low score would not correctly reflect vn’s presence in an area that is, supposedly, highly active. At the same time, vs’s high score would give it a competitive edge over other nodes in M and it would be quite likely that

vs is selected again the next time there is a call to the Node Addition step.

Both these issues are resolved by distributing vs’s score at the time of the split between itself and vn. This calls for a strategy to determine how the score should

be split. A naive approach is to simply allot half the score to each vertex. A close

examination of the algorithm however leads to a more interesting strategy.

The Growth steps cause M to mimic the density of P, i.e. regions of M close to denser regions of P have a higher number of nodes than regions of M close to sparser regions of P. This is a consequence of the way P is sampled coupled with the way the Growth steps choose nodes. P is sampled uniformly at random. That means that the denser regions of P contribute more samples. Consequently, there is more activity in the corresponding regions in M. This high activity is then further rewarded by the Node Addition step which adds more nodes to these regions. Simultaneously, the Node Removal step cleans up regions in M that correspond to the sparser, infrequently sampled regions of P.

Getting back to the original problem of determining a strategy to distribute

vs’s signal counter, the authors follow [Fri93] by using the restricted Voronoi cells (RVCs) of vs and vn. A node’s RVC is the intersection of the node’s Voronoi cell

with the surface to be learnt. There are two observations to be made about RVCs

in M.

• The fact that the node closest to a sample is chosen to be the winner means

that the likelihood of a node to be the winner is directly proportional to the

⁸ This is in agreement with the competitive learning process from the Basic Step.


number of input points in the node’s RVC⁹.

• Assuming the density of P to be locally uniform, the relative number of input points in a node’s RVC can be estimated by the area of its RVC.

The signal counter of vs is distributed among itself and vn in the ratio of the areas of their RVCs. This area, as we just observed, is a measure of the likelihood of the node for further activity. The ratio ensures that the node that is more likely to learn, i.e. has a higher RVC area, has an edge over the other by receiving the bigger share of the signal counter. In keeping with [Fri93], the RVC area of a vertex, v, is approximated by the area, Fv, of a square as

Fv = (lv)²    (6.19)

where

lv = (1 / valence(v)) Σ_{vi ∈ 1-ring(v)} ‖vi − v‖    (6.20)
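A sketch of this distribution rule (Python, illustrative; positions, tau and ring_of are placeholder names, and the connectivity after the vertex split is assumed to be available for both nodes):

import numpy as np

def rvc_area(v, positions, ring_of):
    """Eqs. 6.19-6.20: approximate the RVC area of v by the square of its mean edge length."""
    l_v = np.mean([np.linalg.norm(positions[u] - positions[v]) for u in ring_of[v]])
    return l_v ** 2

def split_signal_counter(tau, v_s, v_n, positions, ring_of):
    """Distribute tau_s between v_s and the new node v_n in the ratio of their RVC areas."""
    F_s = rvc_area(v_s, positions, ring_of)
    F_n = rvc_area(v_n, positions, ring_of)
    total = tau[v_s]
    tau[v_s] = total * F_s / (F_s + F_n)
    tau[v_n] = total * F_n / (F_s + F_n)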

Ironically, as the mesh grows, the ratio of RVC areas tends to one, which is

not much different from the naive strategy we had started out with! This is be-

cause nodes with high RVC area become winners more often and so get split later

by the Node Addition step. Nodes with small RVC area are seldom winners and

are removed by the Node Removal step. Thus, all nodes end up having more or

less equal RVC areas. This is in agreement with the discussion earlier about M

mimicking P’s density.

There are no signal counter computations for the Node Removal step.

6.3 Total cost

The most expensive substep of the Basic Step in terms of cost is the global multiplication (eq. 6.6). It takes O(n) time. The cost of the Node Addition step is determined by the search for the node with the highest signal counter. This search takes O(n) time. The Node Removal step involves identifying the nodes that have not been the winners since the last time this step was called and then removing them. From eq. 6.14, these operations take O(n) time.

⁹ Another way to look at it is that a node is the winner when the current sample is closer to it than to any other node. Such a sample, by definition, lies in the node’s (restricted) Voronoi cell. The more points there are in the node’s restricted Voronoi cell, the more likely it is that one of them will be picked as a sample and that the node will be the winner.


The Basic Step is repeated O(n) times. The total cost of the Basic Step is thus O(n²). Node Addition is also done O(n) times. Its total complexity is thus also O(n²). The Node Removal step is called O(log n) times and costs O(n). Its total complexity is then O(n log n). Therefore, the total complexity of the algorithm is O(n²).

6.3.1 Observations

Because of machine limitations, it is safe to say that many of the calculations in

the global signal counter update (eq. 6.6) are wasteful. To understand this, let us

consider c, the smallest integer such that

(αsc)^c = 0    (6.21)

in the machine’s accuracy. Then from eq. 6.6 it follows that nodes that have not been the winners for the last c iterations will have signal counter equal to zero. This also implies that at any time, a maximum of c nodes will have non-zero signal counters and only these nodes are actually affected by the global multiplication. It is because of this that the signal counter is not used to select nodes for the Node

Removal step.

A possible optimization to the algorithm could thus be to maintain a list of

these c nodes. Every time a winner is found, it is appended to the front of the list and if doing this pushes the size of the list above c, the node at the back of the list

is removed. Then, for the global multiplication step, only the signal counters of the

nodes in the list would have to be updated.
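Such a list could be maintained as sketched below (Python, illustrative; the constant c would be determined from αsc and the machine precision, and the front/back orientation of the text above is mirrored here with the same effect):

from collections import deque

def make_winner_list(c):
    """A bounded list of the last c winners; only these nodes can still have a
    non-zero signal counter, so only they need to be touched by eq. 6.6."""
    return deque(maxlen=c)          # appending beyond c silently drops the oldest winner

def record_winner(winners, tau, w, alpha_sc=0.95):
    winners.append(w)               # newest winner enters the list
    tau[w] += 1.0                   # eq. 6.5
    for v in set(winners):          # eq. 6.6 restricted to the at most c listed nodes
        tau[v] *= alpha_sc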


Chapter 7

Topology learning add-on

The basic algorithm has one major drawback – it does not learn the topology of

the target shape. M remains homeomorphic to the initial mesh¹ no matter what the shape represented by P. A solution to this problem was proposed by the same authors in [IJS03a] soon after the introduction of the basic algorithm. The algorithm is presented as Algorithm 2. Notice that the Topology Learning steps create boundaries in M. Nodes and edges belonging to boundaries are termed boundary

nodes and boundary edges respectively.

Barring the relocation of the signal counter computations to the Node Addition

step, the Basic and Growth steps are the same as in the basic algorithm. Boundary

nodes are of course given special treatment as handling them carelessly can easily

cause M to no longer be a manifold.

Moving the signal counter computations to the Node Addition step is not sur-

prising as that is the only place where the counter’s values are needed. Also, as

the Node Addition step is called less often than the Basic Step, this also speeds up

the algorithm. Between two calls of the Node Addition step, indices of nodes that

have been the winners are stored along with the iteration numbers in which they were selected. When the Node Addition step is called, for each of these winners, w, the signal counter, τw, is updated as follows

τ′w = αsc^Cvs (τw + αsc^(−x1) + αsc^(−x2) + · · · + αsc^(−xn))    (7.1)

where x1, . . . , xn ∈ {0, 1, . . . , Cvs − 1} are the iterations between two calls of the

¹ which, in most cases, is a tetrahedron. So M remains homeomorphic to a sphere.



Algorithm 2 The topology-learning Neural Mesh algorithm

Steps of the algorithm are explained in bold only where they are different from the basic algorithm, or are new to this one.

Repeat until some user criterion is met:

1. GEOMETRY LEARNING - BASIC STEP

• perform all substeps from the basic algorithm except the signal counter computations.

2. GROWTH

(a) NODE ADDITION

every Cvs iterations, where Cvs is an input parameter:

• update all signal counters.

• perform the Node Addition substeps from the basic algorithm, paying special attention to boundary vertices.

(b) NODE REMOVAL

as in the basic algorithm, paying special attention to boundary vertices.

3. TOPOLOGY LEARNING

every nCec iterations:

(a) TRIANGLE REMOVAL – remove large triangles. (This creates boundaries.)

(b) BOUNDARY MERGING – merge boundaries that are close to each other. (This creates handles.)

step for which w was the winner. For all other nodes, v, the update is performed as

τ′v = αsc^Cvs τv    (7.2)²

Notice that this does not introduce any inaccuracy to the algorithm. The signal counter values, when needed, are the same as they would otherwise have been.
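A sketch of this deferred update (Python, illustrative; tau and wins are placeholder names), with the decay powers precomputed as the footnote below suggests:

import numpy as np

def deferred_counter_update(tau, wins, C_vs, alpha_sc=0.95):
    """Apply eqs. 7.1 and 7.2 once per Node Addition call.

    wins maps each winner w to the list of iteration offsets x in {0, ..., C_vs - 1}
    (within the window since the last call) at which it was the winner.
    """
    decay = alpha_sc ** C_vs                       # alpha_sc^{C_vs}, computed once
    inv = alpha_sc ** -np.arange(C_vs)             # alpha_sc^{-x} for x = 0 .. C_vs - 1
    tau *= decay                                   # eq. 7.2 for every node
    for w, xs in wins.items():                     # eq. 7.1 adds each winner's contributions
        tau[w] += decay * inv[np.asarray(xs)].sum()
    wins.clear()                                   # start a fresh window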

The Topology Learning steps give rise to boundaries in M. Boundaries are

given special treatment by the Growth steps – boundary nodes with valence less

² To save time, αsc^Cvs and the αsc^(−xi) terms from eq. 7.1 can be precomputed and used as constants when required.


than four are not split. Additionally, if a boundary node is to be split, then the

split is always made along the boundary, i.e. es is chosen to be the longest bound-

ary edge incident on the original node, even if it is shorter than a non-boundary

incident edge.

Figure 7.1: Removing boundary nodes

(a) Boundary node B is to be removed by half-edge collapse along boundary edge AB. A, B and C

have valences a, b and c respectively. (b) After the collapse, A and C have valences a + b − 3 and c − 1 respectively. B has been removed.

A boundary node is taken to be regular if its valence is four. Figure 7.1 shows

the valence changes brought about by removing a boundary node. Therefore, when

a boundary edge is being considered for collapse in the Node Removal step, its

regularity error, E, is measured as

E = (1/2) √((a + b − 7)² + (c − 7)²)    (7.3)

The scaling and the root in the above equation keep error values for boundary edges comparable with error values for non-boundary edges as calculated in eq. 6.7. Once selected, an edge is first checked for a valid collapse³. Invalid collapses are not carried out.

³ Some edge collapses result in ‘unusual’ mesh configurations, like two nodes connected to each other by two edges. Such configurations lead to problems in most mesh implementations. Some other collapses cause M to no longer be manifold [HDD+93b, HDD+93a]. All collapses of this sort are ‘invalid’ in our setting.


Figure 7.2: Invalid edge collapses

Collapsing some edges can result in unusual mesh configurations (a) or cause the mesh to no longer

be manifold (b). Such edges are typically found near or at boundaries. In the figures, boundaries

are shown in bold. (a) Collapsing the boundary edge AB creates a boundary with only two edges.

That means two vertices are doubly connected with edges. This is incompatible with most mesh

implementations, including ours. (b) A and B lie on different boundaries. Collapsing the edge AB

results in non-manifoldness.

7.1 Topology learning steps

The Triangle Removal step is motivated directly by the discussion in Section 6.2.4,

i.e. the distribution of nodes in M follows the distribution of P. A sparse region in M, characterized by relatively large triangles, corresponds to a sparse region in P, possibly a hole. When the sparseness in M crosses a threshold, the likelihood that the underlying region in P is a hole is taken to be significant. To reflect this, the corresponding triangles are removed from M. This introduces

boundaries to M. The threshold, $T_r$, is calculated as a multiple of the mean face area, $A$, as

$$T_r = \alpha_r A \qquad (7.4)$$

where $\alpha_r$ is an input parameter, e.g. 10. The justification for this is provided in

a later paper [IJL+04]. Like edge collapses, removing some triangles also causes

M to no longer be manifold. Such removals are rectified by iteratively removing

neighbouring triangles until M becomes manifold again.
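A minimal sketch of this criterion, using a stand-in Triangle record instead of the real GMU face type; the manifoldness repair is only indicated in a comment:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for a GMU face (illustrative only).
struct Triangle { double area; bool removed = false; };

// Mark all triangles whose area exceeds T_r = alpha_r * mean area (eq. 7.4).
void removeLargeTriangles(std::vector<Triangle>& faces, double alpha_r) {
    if (faces.empty()) return;

    double sum = 0.0;
    for (const Triangle& f : faces) sum += f.area;            // first pass: mean area
    const double threshold = alpha_r * (sum / faces.size());  // T_r

    for (Triangle& f : faces)                                 // second pass: removals
        if (f.area > threshold) f.removed = true;
    // A real implementation would now also remove neighbouring triangles
    // wherever a removal left the mesh non-manifold (cf. Figure 7.3).
}
```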

In the Boundary Merging step, boundaries that are too close to each other are

merged to create handles. This step is applied because the algorithm has no direct

way of learning handles. M learns a handle as two boundaries on opposite sides.


Figure 7.3: Problematic triangle removal


Some triangle removals cause topological anomalies. Such triangles are typically at or near boundaries, which are shown in bold in the figure. (a) Triangle ABC is chosen to be removed. (b) Removing ABC causes a topological anomaly at B. (c) The problem is fixed by removing neighbouring triangles AXB and ACY.

Figure 7.4: Topology learning


Reconstructions of the torus with 5k nodes with (right) and without (left) topology learning. Notice

that the hole is represented in the figure on the left by large triangles. The inability of the basic

algorithm to learn topology leads to self-intersections in the region.

As M grows, the distance between the two boundaries decreases and after a certain

threshold, they are merged. The threshold, $T_m$, is again calculated in terms of $A$ as

$$T_m = \alpha_m \sqrt{A} \qquad (7.5)$$

where $\alpha_m$ is an input parameter. Boundaries are merged when the Hausdorff dis-


tance between them falls below $T_m$. The Hausdorff distance between the bound-

aries is estimated as the Hausdorff distance between the sets of vertices represent-

ing them. The merging procedure is simple – starting from the two closest nodes,

one from each boundary, both boundaries are traversed in the same orientation

checking for possible triangles with the next vertex. The candidate which is closest

to equilateral is selected and added to the mesh. Traversal followed by triangle

addition continues until the two boundaries are completely connected with a set of

triangles.
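The Hausdorff estimate between two boundaries, taken over their vertex sets, might be sketched as below. Point3 and the brute-force double loop are our own illustration; the actual implementation works on GMU boundary loops.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point3 { double x, y, z; };   // illustrative stand-in for a vertex position

static double dist(const Point3& a, const Point3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// One-sided Hausdorff distance: max over a in A of the min distance to B.
static double oneSided(const std::vector<Point3>& A, const std::vector<Point3>& B) {
    double h = 0.0;
    for (const Point3& a : A) {
        double dmin = std::numeric_limits<double>::max();
        for (const Point3& b : B) dmin = std::min(dmin, dist(a, b));
        h = std::max(h, dmin);
    }
    return h;
}

// Symmetric Hausdorff distance between the vertex sets of two boundaries;
// the boundaries are merged when this falls below T_m (eq. 7.5).
double boundaryHausdorff(const std::vector<Point3>& A, const std::vector<Point3>& B) {
    return std::max(oneSided(A, B), oneSided(B, A));
}
```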

As M already learns the general shape and topology of the target space in the

initial phases, regular invocation of these steps in later phases of the algorithm is

redundant. Therefore, they are called with decreasing frequency, i.e. every time

the Node Removal step is called.

A problem with the above approach is that two boundaries that are separated

by a hole will not merge with each other. Such holes occur frequently in practice

because of occlusion during the acquisition process.

7.2 Total Cost

The Triangle Removal step needs to scan all triangles, which are $O(n)$ in number, once to calculate $A$, and a second time to identify triangles to be removed. Both these operations require $O(n)$ time. Boundary Merging requires each of the $b$ boundaries to be compared with every other boundary. This takes $O(b^2)$ time. Assuming a boundary, being 1-dimensional, to have $O(\sqrt{n})$ vertices, the complexity of this step is also $O(n)$. As both these steps are called with the Node Removal step, which is called $O(\log n)$ times, the total complexity of the Topology Learning steps is $O(n \log n)$.

The global counter update has been eliminated from the Basic Step. Its cost is now dictated by the octree computations, which are $O(\log n)$. The Growth steps however are still $O(n)$. Thus the algorithm's total complexity is $O(n^2)$.


Chapter 8

A normal-based variant

The normal-based variant from [JIS03] is based on the observation that normals

of nodes in M that correspond to features in P vary much more during the nodes' learning phase than the normals of nodes close to flatter areas of P. This information is not recoverable from the final mesh. Thus, information on normal variation, or normal activity as we shall call it, can be used to detect features and areas of high curvature in P. M can then be populated accordingly, i.e. higher density of nodes at regions of high curvature and fewer nodes in flat regions. The algorithm is presented as Algorithm 3. It is almost identical to the basic algorithm, the only exception being that the signal counter, $\tau$, is replaced by a normal counter, $\eta$, which

entails its own calculations.

While $\tau$ captures spatial information, $\eta$ measures normal activity. Nodes far away from P or in a dense cluster corresponding to a sparser area of P have a low value for $\tau$. Nodes close to P have high values. A high value of $\eta_i$, on the other hand, means that $v_i$'s normal has recently been quite active, irrespective of $v_i$'s position. It should however be noted that nodes far away from P will seldom be winners and their normals will thus not be active. Such nodes will also have low values for $\eta$. As normal activity depends on the curvature of the shape, growth based on $\eta$ gives a curvature-adaptive reconstruction of P. Models with such node

distributions are preferable in Computer Graphics applications where feature and

normal information is more important than spatial information.

The changes in values of $\eta$ as M grows give valuable insight into the underlying shape. Firstly, one should observe that M grows rapidly in the first few iterations where it learns the general shape of P. During this growth, nodes in high curvature

areas exhibit large normal activity, whereas normal activity for nodes in flatter areas


Algorithm 3 The normal-based Neural Mesh algorithm

• Steps of the algorithm are explained in bold only where they are different from the basic algorithm, or are new to this one.

• Instead of a signal counter, $\tau$, every node now has a normal counter, $\eta$.

• $\eta$ is initialized with a value of 0 for every node in M.

Repeat until some user criterion is met:

1. GEOMETRY LEARNING - BASIC STEP

• Find $s$ and $v_w$.

• Gather normal information at $v_w$.

• Update the position of $v_w$.

• Update the position of 1-ring neighbours of $v_w$.

• Update the normal counter, $\eta_w$, of the winner.

• Update all normal counters.

• iterations = iterations + 1

2. GROWTH

(a) NODE ADDITION

as in the basic algorithm, using $\eta$ instead of $\tau$.

(b) NODE REMOVAL

as in the basic algorithm.

remains relatively tame throughout the growth of M. As the high curvature regions are learned, normal activity in them gradually falls off to zero. This is because the shape, being smooth, is locally similar to a plane and thus flat. So, as M grows, normal counters converge to zero. Notable exceptions are nodes at creases. These nodes exhibit high values for $\eta$ throughout M's growth. The reason for this is the

nonexistence of a tangent plane at a crease. Having no definable tangent plane of

its own, a crease is associated with two tangent planes – one for each side of it.

The normals at the crease then take on values in the range of the normals of these

two tangent planes.

The calculation of the normal counter is simple. Firstly, a change in the win-


ner's normal is measured as

$$\delta_w = 1 - \vec{n}_w \cdot \vec{n}_w' \qquad (8.1)$$

where $\vec{n}_w$ and $\vec{n}_w'$ are the (normalized) normals estimated at $v_w$ before and after its movement respectively. $\delta_w$ is clearly 0 for nodes other than the winner. $\delta_w$ is then normalized as

$$n_\delta = \frac{\delta_w}{M_\delta} \qquad (8.2)$$

where $M_\delta$ is the mean value of the $\delta_w$'s over the last $C_\delta$ iterations and $C_\delta$ is a user defined constant, e.g. 1000. Update of the winner's normal counter, $\eta_w$, is then carried out as

$$\eta_w' = \eta_w + n_\delta \qquad (8.3)$$

This is followed by the global update which is performed as

$$\forall v_i \in M \quad \eta_i' = \alpha_{nc}\,\eta_i \qquad (8.4)$$

where $\alpha_{nc}$ is an input parameter.

The global update, like the one of eq. 6.6, phases out old information. This in-

formation, however, is useful for feature detection purposes where the total activity

of a node is of interest. Thus, when the algorithm is to be used to detect features in

P, the authors suggest dropping the global update and carrying out the normal counter update alternatively as

$$\eta_w' = \alpha_{fd}\,\eta_w + (1 - \alpha_{fd})\, n_\delta \qquad (8.5)$$

where $\alpha_{fd}$ is a user defined constant. Features are identified as regions correspond-

ing to vertices with high counter values.
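A compact sketch of the normal counter updates (eqs. 8.1–8.3, with eq. 8.5 as the feature-detection alternative). The sliding window for $M_\delta$, the default values and all names are our own illustration:

```cpp
#include <cmath>
#include <deque>
#include <numeric>
#include <vector>

struct Vec3 { double x, y, z; };
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct NormalCounters {
    std::vector<double> eta;      // one normal counter per node, initialized to 0
    std::deque<double>  history;  // last C_delta values of delta_w
    std::size_t C_delta = 1000;   // window size for the mean M_delta
    double alpha_fd = 0.95;       // placeholder value, used only in feature mode (eq. 8.5)

    // Called in the Basic Step for the winner w, with its (normalized)
    // normals before and after the move.
    void updateWinner(int w, const Vec3& nBefore, const Vec3& nAfter, bool featureMode) {
        double delta = 1.0 - dot(nBefore, nAfter);                   // eq. 8.1
        history.push_back(delta);
        if (history.size() > C_delta) history.pop_front();
        double M_delta = std::accumulate(history.begin(), history.end(), 0.0)
                         / history.size();
        double n_delta = (M_delta > 0.0) ? delta / M_delta : 0.0;    // eq. 8.2
        if (!featureMode)
            eta[w] += n_delta;                                       // eq. 8.3
        else
            eta[w] = alpha_fd * eta[w] + (1.0 - alpha_fd) * n_delta; // eq. 8.5
    }

    // Global decay of eq. 8.4 (skipped in feature-detection mode).
    void decayAll(double alpha_nc) { for (double& e : eta) e *= alpha_nc; }
};
```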

The normalization in eq. 8.2 provides numerical stability in the later stages of

M's growth where $\delta_w$ tends to zero. $M_\delta$ also provides a means of judging how far M has grown. As was discussed earlier, most normal counters in M tend to zero as M grows. A small value of $M_\delta$ indicates that most of the significant learning has

already been done.

The remaining steps are nothing new. As in the basic algorithm, visual quality

of the output is refined by smoothing the 1-ring neighbourhood of the winner $C_L$

times. Counter values are slowly phased out by multiplying globally with a con-

stant (eq. 8.4). The Node Addition step splits the node with the highest counter

and the split node’s normal counter is distributed in ratio of RVC areas. The Node


Removal step clears away the nodes that have not been the winners since the last

time nodes were removed.

Notice that although the topology learning steps were available by this time, the

algorithm does not use them and retains the topology of the initial mesh throughout

the learning process. This is because the topology learning steps assume, firstly,

that sparse regions in P correspond to boundary regions, and, secondly, that M reflects the density of P. Based on these assumptions, they remove large triangles in M. The latter assumption does not hold in this case as now flat regions of M are purposely cleaned up. Sparsity of a region in M then reflects the flatness of the corresponding region in P, not the absence of corresponding data.

Figure 8.1: Normal based reconstruction


Wireframes of reconstructions of a uniformly sampled cube using the basic (left) and normal-based

(right) algorithms. The reconstruction on the left has roughly equal sized triangles everywhere,

reflecting the density of P, while the second one has smaller triangles and thus more nodes at the edges where normal activity is high, and larger triangles, and thus fewer nodes, on the flat regions.

8.1 Total cost

Apart from the normal counter and added smoothing, there is not much difference

between this algorithm and the basic one. Computationally, the normal counter

calculations are the same as those for the signal counter. Thus the complexity of

this algorithm is the same as that of the basic algorithm, i.e. $O(n^2)$.


Chapter 9

A noise-filtering variant

The variant proposed in [IJL+04] revises the Basic Step of the algorithm to better

handle noise and outlier data in P, and fine-tunes existing operations to improve

the algorithm in terms of running time and adaptivity. The most striking difference

is that the Node Removal step is called every constant number of iterations and

nodes to be removed are identified with the help of signal counters. At the heart

of this variant still lies the basic algorithm. The changes introduced serve only

to optimize the original operations. Notice also that the Neural Mesh algorithm,

being a learning algorithm, naturally copes with some amount of noise.

The algorithm, presented as Algorithm 4, better addresses the possibility of

P being noisy and containing outlier data. For every $\vec{d} = \overrightarrow{v_w s}$, a moving average, $M_d$, and standard deviation, $\sigma_d$, are updated. A window of 1000 iterations is chosen. The position of $v_w$ is updated as

$$v_w' = v_w + \alpha_w \cdot F(|\vec{d}|) \cdot \vec{d} \qquad (9.1)$$

where $\alpha_w$ is an input parameter as in the previous algorithms¹. $F(|\vec{d}|)$ is a filtering function that estimates whether the current $s$ is an outlier and how much it should contribute to the current winner's learning. It does so with the help of a threshold, $\varepsilon_t$, calculated as

$$\varepsilon_t = M_d + \alpha_f \sigma_d$$

where $\alpha_f$ is a user parameter that controls the tolerance level for outliers. A variant

1 Notice that the 'usual' update of $v_w$, as given in eq. 6.1, can be rewritten as $v_w' = v_w + \alpha_w \vec{d}$.


Algorithm 4 The noise-filtering Neural Mesh algorithm
Steps of the algorithm are explained in bold only where they are different from the basic algorithm, or are new to this one.

Repeat until some user criterion is met:

1. GEOMETRY LEARNING - BASIC STEP

• Find $s$ and $v_w$.

• Update the position of $v_w$.

• Update positions of 1-ring neighbours of $v_w$.

• Update $\tau_w$.

• iterations = iterations + 1

2. GROWTH

(a) NODE ADDITION

as usual, paying attention to boundary nodes,

(b) NODE REMOVAL

every $C_{nr}$ iterations:

• perform the Node Removal substeps as usual, paying attention to boundary nodes.

3. TOPOLOGY LEARNING

every $nC_{top}$ iterations:

• perform the topology learning steps as usual.

of Huber's filter² is then used to define $F(|\vec{d}|)$ as

$$F(|\vec{d}|) = \begin{cases} 1 & \text{if } |\vec{d}| \le \varepsilon_t \\[4pt] \dfrac{\varepsilon_t}{|\vec{d}|} & \text{if } |\vec{d}| > \varepsilon_t \end{cases} \qquad (9.2)$$
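A sketch of the filtering step, combining the moving statistics of $|\vec{d}|$ with the clamp of eq. 9.2. The window size, names and incremental-statistics bookkeeping are our own illustration:

```cpp
#include <cmath>
#include <deque>
#include <numeric>

// Keeps a moving average M_d and standard deviation sigma_d of |d| over a
// fixed window, and evaluates the filter F(|d|) of eq. 9.2.
class OutlierFilter {
public:
    OutlierFilter(std::size_t window, double alpha_f)
        : window_(window), alpha_f_(alpha_f) {}

    // Record the current |d| and return F(|d|).
    double operator()(double d) {
        samples_.push_back(d);
        if (samples_.size() > window_) samples_.pop_front();

        double mean = std::accumulate(samples_.begin(), samples_.end(), 0.0)
                      / samples_.size();
        double var = 0.0;
        for (double s : samples_) var += (s - mean) * (s - mean);
        double sigma = std::sqrt(var / samples_.size());

        double eps_t = mean + alpha_f_ * sigma;       // threshold epsilon_t
        return (d <= eps_t) ? 1.0 : eps_t / d;        // eq. 9.2
    }

private:
    std::size_t window_;
    double alpha_f_;
    std::deque<double> samples_;
};

// Illustrative use inside the Basic Step:
//   double f = filter(length(d));
//   v_w = v_w + alpha_w * f * d;   // eq. 9.1
```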

Another feature that is new to this revision is that $\alpha_{sc}$ is calculated as a function

2 This particular filter is chosen as it is suitable for filtering out outliers [BSMH98].


Figure 9.1: The noise filtering function

P contains noise and outlier data (the underlying surface is also shown). The current sample, s, is an

outlier. $\vec{d}$, the vector between s and the winner, w, is shown. A marks the average length, $M_d$, of $\vec{d}$ from previous iterations. B is the tolerance limit up to which $F(|\vec{d}|)$ is 1.

of the number of vertices in M. It is now termed as $\alpha_n$ and is calculated as

$$(\alpha_n)^{\lambda n} = \frac{1}{2} \qquad (9.3)$$

$$\Rightarrow \alpha_n = \left(\frac{1}{2}\right)^{\frac{1}{\lambda n}} \qquad (9.4)$$

where $\lambda$ is an input parameter, e.g. 6, and $n$ is the current number of vertices in M. $\alpha_n$ is calculated such that an inactive vertex loses half its signal counter value in $\lambda n$ iterations because of the global signal counter update. Furthermore, eq. 9.4 implies that the rate at which information is erased decreases as M grows.

Every node, $v$, also remembers the number, $I_v$, of the last iteration in which it was the winner. When a node becomes the winner in the current iteration, i.e. iteration number $I_c$, its signal counter is updated as

$$\tau_v' = \tau_v\, \alpha_n^{(I_c - I_v)} + 1 \qquad (9.5)$$

The extra smoothing from the basic algorithm is dropped and the 1-ring neighbour-

hood of $v_w$ is smoothed just once, i.e. $C_L = 1$³.

3 For our implementation, we also use $C_L = 1$.
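The per-node bookkeeping of eqs. 9.4 and 9.5 is easy to sketch: the decay factor depends only on the current mesh size and the number of iterations since the node last won. Names are illustrative, not from the thesis implementation:

```cpp
#include <cmath>
#include <cstddef>

struct NodeState {
    double tau = 0.0;   // signal counter
    long   I_v = 0;     // iteration of the node's last win
};

// alpha_n as a function of the current number of vertices n (eq. 9.4).
double alphaN(std::size_t n, double lambda) {
    return std::pow(0.5, 1.0 / (lambda * static_cast<double>(n)));
}

// Winner update of eq. 9.5: decay over the missed iterations, then add 1.
void updateWinner(NodeState& v, long I_c, std::size_t n, double lambda) {
    const double a = alphaN(n, lambda);
    v.tau = v.tau * std::pow(a, static_cast<double>(I_c - v.I_v)) + 1.0;
    v.I_v = I_c;
}
```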


The Node Addition step updates all signal counters and winner iteration num-

bers – for all nodes, $v$,

$$\forall v_i \in M \quad \tau_i' = \tau_i\, \alpha_n^{(I_c - I_v)} \qquad (9.6)$$

$$\forall v_i \in M \quad I_v = I_c \qquad (9.7)$$

The node with the highest signal counter is chosen and the split is performed as

usual, paying special attention to boundary nodes.

The effects of eq. 9.7 might seem counterproductive for the algorithm consid-

ering that $I_v$ would be needed to identify nodes to be removed. However, that is

not the case as this algorithm goes about Node Removal in a novel way.

9.1 Removing nodes more frequently

The Node Removal step is called after a constant number, $C_{nr}$, of Node Additions, where $C_{nr}$ is an input parameter, e.g. 125. The vertex with the lowest signal

counter, along with all other vertices with signal counter less than a certain thresh-

old, is removed as usual, with special attention given to boundary nodes. The

threshold is calculated as $\alpha_n^{nC_{ec}}$ where $C_{ec}$ is an input parameter. Notice that when a node becomes a winner, its signal counter goes above one (eq. 9.5). To fall below the mentioned threshold, the node must have been inactive for at least $nC_{ec}$ iterations. In this way, $C_{ec}$ plays its traditional role and despite the more frequent

invocation of the Node Removal step, the same nodes are selected for removal as

would have been selected by the original algorithm.

In the previous variants, M loses a large number of nodes every time the Node

Removal step is called. Even though this happens less and less often, the effect on

the size ofM each time is significant. Removing fewer nodes more often makes

the growth ofM smoother and more natural.

The double criterion for vertex removal simultaneously removes nodes that are far away from P, and nodes that are in unnecessarily large clusters, without

having to wait a large number of iterations for them to be cleared up. Nodes distant

from P are the least performing and thus have the smallest signal counters, while

nodes in overly large clusters become lazy after some initial activity and their signal

counters fall below the above mentioned threshold. All these unwanted nodes are

collectively thrown out in one sweep of the Node Removal step.
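A sketch of this double removal criterion, reusing the NodeState record from the previous sketch (redefined here for self-containment); the actual half-edge collapses on M are not shown:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// NodeState as in the previous sketch; only the signal counter is needed here.
struct NodeState { double tau = 0.0; long I_v = 0; };

// Every C_nr iterations: the single lowest-counter node is removed, plus every
// node whose counter has fallen below alpha_n^(n * C_ec).
std::vector<std::size_t> selectForRemoval(const std::vector<NodeState>& nodes,
                                          double lambda, double C_ec) {
    if (nodes.empty()) return {};
    const std::size_t n = nodes.size();
    const double a = std::pow(0.5, 1.0 / (lambda * static_cast<double>(n)));  // eq. 9.4
    const double threshold = std::pow(a, static_cast<double>(n) * C_ec);      // inactivity

    std::vector<std::size_t> victims;
    std::size_t worst = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (nodes[i].tau < nodes[worst].tau) worst = i;
        if (nodes[i].tau < threshold) victims.push_back(i);     // lazy nodes
    }
    if (nodes[worst].tau >= threshold) victims.push_back(worst); // always remove the worst
    return victims;
}
```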

Topology Learning is unchanged – the steps and frequency of invocation are


the same as before.

9.2 Total Cost

Except for computationally insignificant variations in the way signal counters are

updated, the Basic and Node Addition steps are the same as in the topology learning

algorithm. Their costs are $O(n)$ each. The Node Removal step now entails finding the node with the least signal counter and other nodes whose signal counters are below a threshold. Its cost thus remains $O(n)$. The Topology Learning steps are unchanged and so their combined cost also remains $O(n)$.

What is different in this algorithm is that the Node Removal step is called $O(n)$ times. This however does not change the complexity of the algorithm, which stays at $O(n^2)$.


Chapter 10

Ensembles

As with any algorithm, there are two major considerations to the Neural Mesh algo-

rithm – speed and accuracy. While the variants presented in the previous chapters

have attempted to some extent to better the speed of the reconstruction, the vari-

ant from [ILL+04] concentrates solely on the latter issue, i.e. correctly capturing

the global shape of the data. As a result, this algorithm is much slower than its

predecessors, as we shall see shortly.

The authors borrow a commonly used idea from machine learning techniques

– ensembles. P is used to generate several models. These form the ensemble.

The ensemble is then used, usually by calculating an average, to obtain a better

model. In supervised learning¹ literature, this technique is referred to as boosting

an algorithm's performance. The power of this method lies in allowing individual models to be inaccurate. As long as the inaccuracies occur in different

areas of the models, they will be ironed out in the final average. This necessitates

non-determinism of the model generating algorithm being used. A deterministic

algorithm outputs identical models every time it is run. Averaging these models

yields yet another copy and thus preserves inaccuracies.

Neural Meshes, being probabilistic, are ideally suited to this approach. In a

Neural Mesh, errors occur in the form of local minima (Figure 6.1). Once a node

is stuck in a local minimum, it remains inactive for the rest of its life in M. Conver-

gence to local minima during the mesh’s growth is thus undesirable and is resolved

by the smoothing substep, the Node Removal step and by taking a conservative ap-

proach to the learning process. The relaxation afforded by the ensembles approach

1 In a supervised learning context, a Neural Network is trained to learn a function that maps inputs to desired outputs. This is in contrast to unsupervised learning where the algorithm generates a model for a set of inputs. The Neural Mesh is an example of unsupervised learning.


allows a more daring approach. Section 6.2.3, particularly in eq. 6.18, states that the rate of expansion of the mesh is dependent on the ratio $C_{vs}/C_{ec}$. Turning up this ra-

tio leads to an increase in the global flexibility of the mesh. This means that while

the growth of the mesh is more uncontrolled now, the places where local minima

appear in each reconstruction are also more random. Reconstructions obtained in

this manner are ideal for the ensembles approach. The algorithm itself is presented

as Algorithm 5.

Algorithm 5 Neural Mesh ensembles

1. POPULATION

Run the Neural Mesh algorithm without topology learning on the given data several times and generate many coarse models.

2. AVERAGING

• Voxelize all coarse models on the same grid and extract an average model by majority vote on each voxel.

• Triangulate the voxelized model to obtain an average coarse mesh.

3. REFINEMENT

Run the Neural Mesh algorithm with topology learning on the original input data starting with the average coarse mesh.

Individual models are obtained using the Noise Filtering variant (Chapter 9)

and a high expansion rate as discussed above. Notice that topology learning is not

used until the Refinement step. This ensures that the models are closed surfaces

and that voxelization in the Averaging step is easier and more robust.

Once the ensemble has been populated, each model is voxelized on the same

grid. A Depth Buffer Based voxelization algorithm [KPT99] is used. This par-

ticular algorithm is chosen as it is insensitive to the orientation of the surface2.

This obliterates the effect of twists that appear in the models because of aggressive

learning. Every voxel in the grid is given a value of 0 or 1 depending on whether it

is part of a voxelized model or not. Voxels that have a value of 1 for more than half

the models are included in the average model. This model is triangulated using the

Marching Cubes algorithm [LC87], which is a standard for such applications.
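The Averaging step reduces to a per-voxel majority vote over the voxelized ensemble members; a minimal sketch over flat occupancy grids (the voxelization itself and the Marching Cubes triangulation are outside its scope):

```cpp
#include <cstddef>
#include <vector>

// Each ensemble member is a flat occupancy grid of the same resolution:
// 1 if the voxel is covered by the voxelized model, 0 otherwise.
using VoxelGrid = std::vector<unsigned char>;

// Majority vote: a voxel belongs to the average model if it is set in more
// than half of the ensemble members.
VoxelGrid averageModel(const std::vector<VoxelGrid>& members) {
    if (members.empty()) return {};
    const std::size_t cells = members[0].size();
    VoxelGrid result(cells, 0);
    for (std::size_t i = 0; i < cells; ++i) {
        std::size_t votes = 0;
        for (const VoxelGrid& g : members) votes += g[i];
        result[i] = (2 * votes > members.size()) ? 1 : 0;
    }
    // The resulting grid is then triangulated with Marching Cubes and used
    // as the initial mesh for the Refinement step.
    return result;
}
```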

The average mesh is already a fair representation of the shape. However, it

2 Because of relaxed learning conditions, individual ensemble members often turn 'inside out' during growth, thus changing orientation. The authors refer to this as a 'twist'. We encountered some meshes with twists in Figure 6.4.


Figure 10.1: Ensemble members


Aggressive learning in the ensemble members leads to twists and other unwanted artefacts. However,

because of the high flexibility of M, they appear at different places in different reconstructions.

lacks topological information and geometric detail. As a final step, it is used as

the initial approximation for the Neural Mesh algorithm which is now run with

topology learning switched on. In this step, the average mesh can be refined to any

desired level of detail.


10.1 Quality of ensemble members

It is worth pointing out, at the risk of repetition, that there is a fundamental dif-

ference of approach in this algorithm from the preceding ones. The ensembles

approach relies on a technique that is known as weak learning, i.e. producing a

good result from relatively inaccurate ones. In the previous variants, the algorithm

is to be run just once and the obtained model is to be the final one. Therefore, much

care is taken to ensure the accuracy of the output model. On the other hand, the

quality of the models that form the ensemble is intentionally poor and they contain

otherwise unwanted artefacts like twists and false bridges (Figure 10.1). As the

contributions of these artefacts are annulled in the final output, there is no need to

take special care to avoid them, especially if it impedes growth of the mesh.

The size of the ensemble members is also decidedly smaller than the desired

size of the final reconstruction. The reason for this is two-fold. Firstly, at the last

stage of the algorithm, i.e. the Refinement step, the aim is mainly to learn the

topology and to further refine the learnt shape. Geometry is already assumed to be

correctly learnt. For this reason, the ensemble members should capture whatever learning errors could occur. Such errors typically occur early on in the recon-

struction process. A relatively small reconstruction thus suffices for this purpose.

The second reason to keep ensemble members small is to save running time. Even

at a few thousand vertices, Neural Meshes already represent P quite well. Growing

them beyond that point serves merely to learn details. As the Refinement step is

already solely dedicated to this purpose, there is no need to waste computation by

growing ensemble members unnecessarily.

10.2 Total Cost

The algorithm used to procure the ensemble models and to refine the resulting

average model is the Noise Filtering variant which costs $O(n^2)$, as discussed in

Section 9.2. The other two steps involved, namely the Depth Buffer Based vox-

elization and meshing using the Marching Cubes algorithm, do not affect the com-

plexity term. The entire procedure thus takes $O(n^2)$ time. Note however that the

multiplicative constants are much higher for this algorithm than for previous vari-

ants.


Part III

Our Work


Based on the observation that signal counter computations are the performance

bottleneck for all variants of the Neural Mesh algorithm, we propose a new mecha-

nism for ordering nodes in the mesh. The proposed mechanism is more flexible

than signal counters and allows us to perform the desired functionalities much

more efficiently. We present the details of the new mechanism and look into its

implementation.


Chapter 11

Experimentation

All variants of the Neural Mesh algorithm, presented in the previous chapters, rely

on the signal counter to order nodes according to their activity. Because of the

dynamic nature of the algorithm, this ordering also changes constantly. The nature

of the ordering is twofold. To understand this, let us say that $v_1$ is 'ahead' of $v_2$ if $\tau_1$ is greater than $\tau_2$. The difference in signal counter values is then the 'lead' that the higher valued node has. In this context we can also define successors, predecessors and neighbours. $v_1$'s successor is the node with the smallest lead on $v_1$. $v_1$ is the predecessor of its successor. Thus the highest valued node has no successor and the least valued node has no predecessor. We can now see that the signal counter ordering is relative – it tells us if one node is ahead of another – and quantitative – it tells us how much the difference is. For example, consider $v_1$ and $v_2$ such that $v_2$ is the successor of $v_1$ in the current iteration. It is possible for $v_1$ to be the winner in this iteration and, after the signal counter updates, still be behind $v_2$ according

to the signal counter ordering.

11.1 Motivation

We observe that a lot of computation is wasted in all variants simply searching

through the nodes. The Node Addition step, for example, searches the entire mesh

for the single node which has the highest signal counter. Similarly, the Node Re-

moval step goes through the mesh just to identify a few, low scoring nodes. Con-

sequently, the cost of global counter updates in the Basic step, and node searches

in the Growth steps are the dominant factors in the $O(n^2)$ complexity terms of the

algorithms. Wasteful as it may be, it is the only way employable within the current


framework.

It makes sense to introduce some kind of sorting to the nodes, such that nodes

with extreme values of signal counters can be accessed more simply and once their

signal counter values change, i.e. are no longer at

the extrema, the new ‘extreme nodes’ can be easily identified.

11.2 Experiment

We experiment with a comparative learning approach to the Neural Mesh algo-

rithm, as opposed to the previous ‘exact learning’ approach. We introduce a new

ordering to the nodes that quantifies the lead of nodes over each other in a differ-

ent way, namely the lead of a node over another is determined as the number of

nodes between them. Such an ordering lends itself easily to a sorting. The nodes

can be thought of as being in a sorted list, L,¹ – the node with the highest value

at the front or top, and the one with the least value at the back or bottom of the

list. As M grows, the state of the list changes with new nodes being added, existing

ones removed and some others shuffled to reflect their participation in the learning

process. Every time a node is the winner, it moves ahead of some of its successors.

We refer to this as a ‘jump’, and the number of nodes skipped in the list is referred

to as the ‘jump distance’. A node jumping past the top of the list becomes the new

top. Thus jumping the top node has no effect on the state of the list. Figure 11.1

illustrates the idea of thisactivity list, L.

11.2.1 The jump distance

Selection of jump distance is key the to the power and versatility of this approach.

Jumping a node brings it closer to the top ofL and increases the chances of in-

creasing the population in its neighbourhood. Deciding which nodes we take to

the top and which ones we leave behind at the bottom determines the type of re-

construction we get. Jump distances calculated as functions of normal activity will

yield curvature sensitive reconstructions. Constant jump distances will yield re-

constructions whose density is similar to the input point density. In addition, the

jump distance can be calculated as a function of area or volume changes brought

about when a winner is repositioned, or a linear combination of these quantities. A

variable jump distance really opens up a world of possibilities to experiment with.

1 Actually, the nodes themselves are in M. The list only contains references to them.


Figure 11.1: The activity list, L

L contains node handles sorted in order of activity of the nodes. (a) The node with handle 729 is the current winner and is jumped in the list by a distance of 6. (b) The jump distance for node handle 57 oversteps the list bounds. The jump is cut short at the top of L.

The simplest method, and the one we use for our implementation, is to have

a constant jump distance, i.e. all winners jump by the same amount. To have any

reasonable impact, this amount is set to a third of the list size. In this way, active

nodes occupy the top portion of the list, and lazy and inactive ones are quickly

siphoned to the bottom.

11.3 The algorithm

We present the necessary modifications in Algorithm 6. Notice that we are not

proposing a new algorithm, nor a variant of the basic one. We simply propose a


Algorithm 6 The list modification of the Neural Mesh algorithms
The list-based steps are presented in bold. They can be implemented in any of the Neural Mesh algorithms presented earlier.

Repeat until some user criterion is met:

1. GEOMETRY LEARNING - BASIC STEP

• Find $s$ and $v_w$.

• Update the position of $v_w$.

• Update the position of 1-ring neighbours of $v_w$.

• Update $v_w$'s position in the list.

• iterations = iterations + 1

2. GROWTH

(a) NODE ADDITION

if iterations = Cvs,

• find the vertex, $v_s$, at the top of the list.
• split $v_s$ as per the algorithm.

• calculate and insert $v_s$ and the new node, $v_n$, at new positions in the list.

(b) NODE REMOVAL

if time for Node Removal as per algorithm:

• find the vertices at the bottom of the list.
• remove them from M as per the algorithm.

• remove them from the list.

3. TOPOLOGY LEARNING

if iterations = nCec,

(a) TRIANGLE REMOVAL

• Remove triangles from M as per the algorithm.

• Remove nodes from the list that have been removed from M.

(b) BOUNDARY MERGING

as per algorithm.

new way to go about the existing algorithms. The outlined method does away with

the messy signal counter computations from the previous algorithms. In practice


though, we retain the signal counter for the initial stages of M's growth, as the exact learning approach is preferable in the beginning when M is learning the general shape of P. At an early point, typically when M has reached 1000 nodes, the nodes

are copied to an octree to facilitate future searches for winners. At the same time,

we copy the nodes toL. As we do not want to lose information on the growth so

far, we add the nodes to the list in order of their signal counter values, with the

least valued node at the bottom of the list and the highest valued one at the top. At

this point, the signal counter is discarded.

11.3.1 Modifications to previous algorithms

The changes made to the steps of the algorithms are minimal and intuitive. In the

Basic Step, the winner’s position in the list is updated with a jump to reflect its

recent activity. There is no global signal counter update and only the winner is

affected. Nodes to be split are now identified trivially – at the top of the list. When

the top node is split, its position is divided between itself and the new node in the

same way as the signal counter used to be distributed among the two – in ratio of

their RVC areas (Section 6.2.4). The old node is removed from the top of the list

and, along with the new node, is inserted in the list at the calculated positions.

Nodes to be removed are picked off the bottom of the list. In keeping with the

noise-filtering variant, we do a two-fold removal every $C_{nr}$ iterations. Firstly, we remove the node at the bottom of L. Secondly, we clean up lazy nodes. We know from the discussion in Section 9.1 that these are the nodes that have not been the winner for at least $nC_{ec}$ iterations. From eq. 6.14, we also know that there are an expected $n e^{-C_{ec}}$ such nodes. We can then calculate the expected number, $n_l$, of lazy nodes in $C_{nr}$ iterations as

$$n_l = \frac{C_{nr}\, e^{-C_{ec}}}{C_{ec}}$$

For typical values of $C_{nr}$ and $C_{ec}$, this value is much smaller than one. Thus, when the node at the bottom of the list is removed, we also remove its successor with probability $n_l$. For the other variants, the nodes to be removed are identified much

more easily. They are simply the nodes that have not been the winners since the

last call of the Node Removal step.
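A small sketch of this list-based removal step; the random number generator and the list interface (bottom, successorOf, remove) are illustrative stand-ins for the activity list described above:

```cpp
#include <cmath>
#include <random>

// Expected number of lazy nodes per C_nr iterations (derivation above).
double expectedLazyNodes(double C_nr, double C_ec) {
    return C_nr * std::exp(-C_ec) / C_ec;
}

// Called every C_nr iterations. ActivityList is assumed to expose the handle
// at the bottom of the list and its successor; removing a handle also deletes
// the corresponding node from the mesh M (not shown here).
template <typename ActivityList>
void removeNodes(ActivityList& list, double C_nr, double C_ec, std::mt19937& rng) {
    auto bottom = list.bottom();
    auto next   = list.successorOf(bottom);   // node just above the bottom
    list.remove(bottom);                      // always remove the bottom node

    std::uniform_real_distribution<double> coin(0.0, 1.0);
    if (coin(rng) < expectedLazyNodes(C_nr, C_ec))
        list.remove(next);                    // remove its successor with probability n_l
}
```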

As was mentioned in Chapter 7, the Triangle Removal step, in keeping M man-

ifold after a triangle removal, may additionally remove triangles other than the one

originally selected. These triangles are neighbours of the selected triangle. Re-


moving these triangles sometimes leaves nodes disconnected from the rest of M. These nodes are removed from M. When this happens, we also remove them from

L.

11.4 Total Cost

As mentioned, the list is merely a modification to the existing algorithms. The total

cost of any algorithm using this modification depends firstly on the algorithm itself

and secondly on the implementation of the list. In the next chapter, we decide upon

an efficient implementation and analyze the costs of the previous algorithms using

the list instead of the signal counter.


Chapter 12

Implementation issues

The Neural Mesh algorithms were developed prior to this thesis here at the MPI In-

formatik. For that reason, implementations already existed and were easily acces-

sible. All implementations use the inhouse Geometric Modeling Utilities (GMU)

library. GMU implements a triangle mesh using the half-edge data structure and

supports common mesh operations like half edge collapse, vertex split, adding,

deleting and repositioning vertices etc. All these operations can be assumed to

take constant time. Each vertex, edge and triangle has a unique handle which is

used as a reference to it.

Signal counters1 were originally stored in a separate data structure which also

remembered the value of $\alpha_{sc}$² and, when needed, the iteration numbers for each

node when it was the winner. Later implementations store the counters and itera-

tion numbers as attributes of the vertices themselves, thus allowing faster access.

An iteration over all vertices is needed to identify candidates to be split or removed.

The octree is implemented as a hierarchy of nodes where each node contains

information on its bounding box and the vertices contained in itself or its children.

Vertices can be added, removed or updated in the hierarchy. Updating the 3D co-

ordinates of a vertex can cause the vertex to be repositioned to another node. For a

given point, the octree can be queried for the vertex closest to the point.

Boundaries are stored in an additional data structure referred to as the bound-

ary manager. It stores all boundaries in M as loops of boundary edges and supports

operations such as looking for and adding new boundaries, finding pairwise dis-

tances between them, merging two boundaries to form a handle, etc. The boundary

1 or normal counters, depending on the variant being implemented.
2 or $\alpha_{nc}$.


manager is updated by Growth and Topology Learning steps.

Our main contribution is the list, L, introduced in the previous chapter. As it

will be a separate data structure, it must be implemented well to justify the replace-

ment of counters which were stored as vertex attributes3. Before choosing a good

structure, it is good to understand what operations will be performed on L. We look into these operations in Section 12.1. Based on these operations, we consider possible data structures and decide upon one of them in Section 12.2, where we also see the need for an additional structure – an 'access mechanism', A. We decide on a suitable implementation for A in Section 12.3. In Section 12.4 we look in detail

into how some operations can be performed on the chosen data structure. Finally,

in Section 12.5, we discuss some performance improving optimizations.

12.1 Operations on L

To implement L, we need to understand what its contents will be, and how they will be operated on. From the previous chapter, we know that L will store nodes of M. Actually, these nodes exist in M and making a copy of them for L is not entirely necessary. We thus store references to the nodes in L. Such references are readily provided in the form of unique, numeric vertex handles for each vertex by GMU. Vertex handles are the standard way in GMU of referring to nodes in a mesh.

As M grows, different operations will be performed on L. In outlining these operations below, we assume that illegal vertex handles are not passed to L, e.g. a delete operation is not called for a handle that is not present in L.

• Initialization

– Sort: sort vertex handles according to their corresponding signal coun-

ter values.

– Add: given vertex handles, add them to L in some order.

• Basic Step

– Access: given a vertex handle, locate it in L.

– Jump: move a list element forward by a number of elements equal to

the jump distance. Jumps overstepping the top are cut short there.

3 and were, thus, trivial to access.


• Node Addition

– Top: return the vertex handle at the top of L. This element is referred

to as TOP.

– Insert: given a vertex handle and a number less than the list size, insert

the handle at a position in L equal to the given number.

– Delete Top: delete TOP.

• Node Removal

– Bottom: return the vertex handle at the bottom of L. This element is

referred to as BOTTOM.

– Delete Bottom: delete BOTTOM.

• Triangle Removal

– Access: given a vertex handle, locate it in L.

– Delete Any: delete a selected element from L.

In the discussion below, we shall refer back to these operations to decide on a good

implementation for L. We assume TOP to be at position $n$ in L and BOTTOM at position 1, where $n$ is the current number of vertices in M. Also, as each node has a unique vertex handle, there are no copies in L. The Initialization operations are a

simple matter. Sorting can be done using any of a wide range of available sorting

algorithms. Once sorted, the vertex handles can be added toL in the sorted order.

As these operations are called just once in one run of the algorithm and the number

of nodes at this point is small, the time spent for these operations is negligible

compared to the time taken for other repetitive steps. For this reason, we do not

consider the Initialization steps when we decide on a data structure for L.
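The required operations can be condensed into a small interface; the signatures below are our own summary of the list operations above, using GMU-style integer vertex handles:

```cpp
#include <cstddef>

// Condensed interface of the activity list L (illustrative; Section 12.2.3
// realizes it with a modified AVL tree).
using VertexHandle = int;

class ActivityList {
public:
    virtual ~ActivityList() = default;

    // Basic Step
    virtual void jump(VertexHandle v, std::size_t distance) = 0; // move v ahead
    // Node Addition
    virtual VertexHandle top() const = 0;                        // TOP
    virtual void insertAt(VertexHandle v, std::size_t pos) = 0;  // insert at position
    virtual void deleteTop() = 0;
    // Node Removal
    virtual VertexHandle bottom() const = 0;                     // BOTTOM
    virtual void deleteBottom() = 0;
    // Triangle Removal
    virtual void deleteAny(VertexHandle v) = 0;
    virtual std::size_t size() const = 0;
};
```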

12.2 Implementing L

Having outlined the contents and desired operations for our list, we can now look

into an appropriate data structure for the implementation. Throughout the discus-

sion, we keep in mind that searching for nodes using the signal counter ordering

is inherently linear, i.e. $O(n)$. As the new ordering contains information only on relative positions⁴, we expect to be able to do things faster.

4 and does away with quantitative information


Before we embark on a discussion of possible data structures, we point out that

there are, in fact, two mechanisms related with L. One is the list itself and the other is the access mechanism, A, by means of which we locate a given element in the

list. A is necessary because of the dynamic nature of the list where elements change

position in almost every5 iteration of the algorithm. Thus, when considering a data

structure for the list, we will also have to look at the supporting access mechanism,

that points us at any given time to the position inL of any given vertex handle.

A good implementation would have to be efficient in terms of both the list and

accesses to it.

In the following sections, we consider several data structures, weighing their

pros and cons in terms of speed, and eventually settle on one. As mentioned earlier,

we do not take into account efficiency in carrying out the Initialization operations,

as they can be carried out quite efficiently independently of the list.

12.2.1 Vector

A seemingly obvious candidate for the list is the vector data structure6. Vertex

handles can be added to the vector in sorted order. Vector indices can then represent

position in the list, e.g. a handle at index 3 has 4th position7 in the list. As the vector

size is known, the Top and Bottom operations are simply a matter of accessing the

corresponding indices in the vector. However, this scheme fails when we consider

Insert, Jump and the delete operations. Inserting a node, $v$, at some position, $p$,

would mean copying the current vector, $V$, with size $n$ to a new one, $V'$, with size $(n + 1)$ as follows

$$V'[0 \ldots p-2] = V[0 \ldots p-2]$$
$$V'[p-1] = v$$
$$V'[p \ldots n] = V[p-1 \ldots n-1]$$

This copying would be done very frequently during the run of the algorithm and,

being an $O(n)$ operation, is prohibitively expensive.

Moving nodes around in the vector is likewise impractical. In keeping with

the inherently static nature of the vector, we could simply keep adding new nodes

5 The list does not change state if the top node is the current winner, as the top node cannot jump any further ahead. For all other winners, the list changes state.

6 The array is not considered because it has a prespecified size. This is not suitable for our purposes as M is constantly growing.

7 Indices start from 0.


to the end of the vector and use the access mechanism, A, to reflect the changes brought about by learning. As more and more nodes are added, vector indices will no longer represent position in L. A will then have to keep track of the vector index and corresponding position in L for each vertex handle. When a node $v$ is added to L at position $p$, the nodes previously occupying positions $p$ and greater are pushed down. Correspondingly, A will have to update the position numbers for these nodes in its records. Insertions typically occur in the middle of L. This means that the position numbers for half the nodes in L will have to be updated in A. This operation is again linear and thus more expensive than we would

like it to be. Moreover, as nodes change position in the list because of jumps,

this system soon degenerates to one similar to the one with signal counters, where

nodes are stored in no specific order and their relative positions are specified by

corresponding numbers. This is exactly what we want to improve on.

12.2.2 Linked-list

Another natural candidate is the linked-list data structure. It is more dynamic than

the vector and can thus better deal with the changing state of L. Once sorted, vertex

handles can be added simply to the linked-list in the sorted order. The order of

vertex handles in the linked-list will represent their order inL. Pointers can be

maintained to the top and bottom of the linked-list for easy retrieval. Deletion of

any element is straightforward; the links of its neighbours are readjusted and the

element is removed. If the top or bottom element is deleted, the corresponding

pointer finds its new position at the immediate neighbour of the deleted node.

Insertions, which occur roughly in the middle of the list, would take linear time

as approximately half the list would need to be traversed to arrive at the insertion

point. This can be optimized by keeping a ‘mid’ pointer to the middle element in

the list and updating it every time an Insert, Jump or delete operation is performed.

Insertions can then be made relative to the mid pointer as that would mean a smaller

traversal. However, this optimization is not reliable and for large list sizes, the

traversal may still be $O(n)$.

The biggest problem with this approach is the Jump operation. Typical jump distances are on the order of $n$. Jumping a node in the linked-list by a distance $d$ translates to traversing $d$ nodes in the list and reinserting the element into the

list. As d is O(n) and Jump is the most frequently called operation, the linked-list,

despite performing well for the other operations, is not acceptable. Jumps may be

optimized by maintaining several pointers along the list. These pointers will then


have to be updated at every Insert, Jump and delete operation. For the extra pointers

to be significantly useful, there will have to be quite a few of them. This would act

adversely by complicating the other operations. Managing these pointers would

also add extra overhead. Like with the vector, we could try to save the position

of each vertex handle in the access mechanism. Jumping an element would then

mean updating inA, the position of the jumped element and the elements that were

jumped over. Position numbers will also need to be updated for Insert and delete

operations. This will degrade the performance of almost all steps toO(n) . We

thus have to abandon the linked-list as well.

12.2.3 Tree-based structures

The vector and linked-list data structures are both inherently linear and thus the

Jump operation in them takesO(n) time. We thus need to look into non-linear

data structures, namely trees. Like the linked-list, trees are well-suited to dynamic

scenarios. Operations on trees typically take logarithmic time. To ensure that these

times do not degenerate, the tree needs to be kept balanced at all times. For this

purpose we use ‘self-balancing trees’. These are trees that make self-adjustments

to maintain their balanced property whenever an insert or delete operation upsets

it. There are several self-balancing trees to choose from – splay trees [ST85], red-

black trees [Bay72, GS78], B-trees [BM72] and AVL trees [AVL62]. Splay trees

focus on reducing access times by moving frequently accessed elements closer to

the root. This is not particularly suited to Neural Mesh algorithms where, as M

matures, most of the nodes are accessed with roughly equal frequency. Red-black

trees are efficient but complicated to implement. B-trees are also not straightfor-

ward with individual nodes in the tree containing several elements. We choose the

AVL tree⁸ with some modifications for our purposes and call the resulting tree T.

Nodes in T contain a copy of the vertex handle they represent. Their relative positions in T represent the relative positions of their corresponding vertex handles in L. The top and bottom elements in L are at the rightmost and leftmost leaves of

T respectively. Pointers to these can be maintained for easy access. When any of

these elements is deleted, the corresponding pointer finds its new position in the

deleted node’s subtree, or else at its parent.

8 In short, an AVL tree is a binary search tree which satisfies the AVL condition – the difference in height of the left and right subtrees at all nodes should be no more than 1. If an insert or delete operation violates this condition at some node, then the offending node and its neighbours rearrange themselves such that the condition holds once again. The sorted order of the elements is preserved during this rearrangement.


To facilitate jumps, in addition to its AVL balance factor, each node in T also stores child information, i.e. the number of children in its left subtree and the number of children in its right subtree. A jump traverses the height of the tree in order to find its destination and can be carried out in $O(\log n)$ time. We talk

more about it shortly. Insert is carried out as adding a node at the bottom and

then jumping it by a number corresponding to the intended position. Maintaining

child information means that Insert and delete are logarithmic operations as child

information has to be updated to the root. Thus, we can perform the Jump, Insert

and delete operations in logarithmic time, while the top and bottom elements can

be retrieved in constant time.

As a bonus, since AVL trees inherently support a sorting of their elements,

vertex handles can be added directly to T with their signal counter values. T auto-

matically takes care of storing the nodes in a sorted order.
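The essential addition to a textbook AVL node is this subtree-size (child count) information, which turns position queries and jumps into root-to-leaf walks. A minimal sketch of the node layout and of selecting the element at a given position follows; rotations and rebalancing are omitted and all names are ours:

```cpp
#include <cstddef>

// AVL node augmented with subtree sizes, so that the k-th element in the
// activity order can be found in O(log n). Rebalancing is as in a standard
// AVL tree and is not shown here.
struct TreeNode {
    int         handle;                            // GMU vertex handle stored here
    int         balance = 0;                       // usual AVL balance factor
    std::size_t leftCount = 0, rightCount = 0;     // children in each subtree
    TreeNode   *left = nullptr, *right = nullptr, *parent = nullptr;
};

// Return the node at position pos (0 = BOTTOM / leftmost, size - 1 = TOP).
TreeNode* select(TreeNode* root, std::size_t pos) {
    TreeNode* cur = root;
    while (cur) {
        if (pos < cur->leftCount) {                // target lies in the left subtree
            cur = cur->left;
        } else if (pos == cur->leftCount) {        // this node is the target
            return cur;
        } else {                                   // skip left subtree and this node
            pos -= cur->leftCount + 1;
            cur = cur->right;
        }
    }
    return nullptr;
}

// A jump by distance d then amounts to finding the node's current position
// (by walking towards the root and summing counts), deleting it, and
// re-inserting it at min(position + d, size - 1), updating the counts along
// the affected root-to-leaf paths.
```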

12.3 Implementing A

Having satisfied ourselves with the implementation of the intended operations, we

now look into a good access mechanism, A, for T. As mentioned before, A is required to point us to elements in the list, e.g. A should help us find out where

exactly vertex handle 25 is in the list. This is important because the position of

vertex handle 25, as of all other vertex handles, is constantly subject to change.

It can of course be located, when needed, by searching the entire list. As the list

is sorted not by vertex handles, but by their activities, searching it for a specific

handle is an $O(n)$ operation. We hope to improve this by introducing A.

Firstly, we list the operations that A will be needed for. They are presented below in the context of the operations in M and T that make them necessary.

• Access: when a vertex in M is the winner, its vertex handle jumps in L and the corresponding node moves in T. A should point us at all times, for a given vertex handle, to its corresponding node in T.

• Removal: when a vertex is deleted from M, its handle should be erased from A and the corresponding node removed from T, which would then make the necessary adjustments to rebalance itself.

• Insertion: when a new vertex is added to M, a new node containing a copy of the vertex's handle should be added to T at the appropriate position and an entry should be made in A mapping the new vertex handle to the new node.


Of the above, the Access operation, which corresponds to the Basic Step of the Neural Mesh algorithms, is invoked the most frequently. The natural structure for this operation is a 'map', which maps a given vertex handle to its corresponding node. As the node related to a vertex handle does not change, the map entry itself need not be updated. We therefore choose a C++ STL map, which is internally implemented as a red-black tree^9 and thus performs the above operations in logarithmic time. The C++ STL map stores (key, value) pairs with the single condition that the keys have an order. As our keys are vertex handles, which are numeric and unique, this condition is fulfilled. The value for each vertex handle in A is a pointer to the corresponding node in T. From the time of addition to A to the time of removal, pairs remain unchanged.
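As an illustration – and again with placeholder names rather than those of the actual implementation – A and its three operations could be sketched in C++ as follows, reusing the TNode type sketched earlier.

    #include <map>

    // A maps a vertex handle to the node in T that currently represents it.
    // The mapped pointer stays valid from Insertion until Removal.
    typedef std::map<VertexHandle, TNode*> AccessMap;

    AccessMap A;

    // Insertion: a new vertex was added to M and a node was created in T.
    void onVertexAdded(VertexHandle v, TNode* node)
    {
        A[v] = node;
    }

    // Access: the Basic Step produced a winner; find its node in T in O(log n).
    TNode* locate(VertexHandle winner)
    {
        AccessMap::iterator it = A.find(winner);
        return (it != A.end()) ? it->second : 0;
    }

    // Removal: the vertex was deleted from M, so drop its entry from A.
    void onVertexRemoved(VertexHandle v)
    {
        A.erase(v);
    }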

The Removal operation may not seem entirely necessary. Having extra entries in A does not hamper its capability to perform the Access and Insertion operations. However, we note that an Access, which is done most often, requires a search for the desired handle in the underlying red-black tree. Search time then depends on the size of A. Having unnecessary entries in A serves only to retard the search. As Removal is called much less often, it is better to remove unwanted entries from A.

Having two mechanisms, L and A, may seem excessive, but on closer observation one sees that we have to be able to support searches based on two criteria. Firstly, we need to keep vertex handles sorted according to their activities. This facilitates finding TOP and BOTTOM and, when they are removed, their immediate neighbours. It also makes possible an efficient Jump procedure. These requirements are fulfilled by L. Secondly, we need to be able to search vertex handles based on their numeric values in order to efficiently find their positions in L. This necessitates A.

12.4 Jumps in T

Except for the Jump operation, all operations on L are trivial for most data structures, especially for the modified AVL tree, T, that we choose for our implementation. Apart from the added overhead of maintaining child information at each node, these operations work in T just as they do in a regular AVL tree. The only operation unique to T is the Jump operation, which involves accessing the node, nj, to be jumped, finding its destination, nd, as per the given jump distance, deleting the original node and then re-inserting it as an immediate neighbour of the found destination. As insert and delete are already implemented and accessing the node is trivially handled by A, the crux of this operation is in finding nd. We outline this process shortly.

^9 A self-balancing tree.

Before we present the process, it will be useful to mention a few things. T follows the convention where nodes with larger value are to the Right. Thus, the top node, TOP, in L is the Rightmost leaf in T and the bottom node, BOTTOM, is the Leftmost leaf. All jumps in L are forward, i.e. if a vertex handle is jumped from position number p1 to position number p2 in L, it must be that p2 ≥ p1^10. Correspondingly, when a call for a jump is made to T, it is in the Right direction. The node, nj, to be jumped is located and marked using a pointer. We then move the pointer systematically until it reaches the intended destination. At each step, we choose to advance the pointer to one of the nodes adjacent^11 to the node that is currently marked. It should be noted that because of the tree structure of T, node adjacency in T does not mean adjacency in L of the corresponding vertex handles. In fact, this is exactly the advantage in using trees – moving the pointer by one node in T may be equivalent to jumping several vertex handles in L. For this reason, it is possible that a movement of the pointer may cause us to go beyond the intended destination, nd. In this case, we need to jump back. This gives rise to the Left jump in T, which initially seems like a bug in the design as it does not correspond to any of the listed operations on L. One should note however that T, being a tree, has both Left and Right children at each node. Any reasonable traversal of T should thus include moves to both Left and Right. Moreover, the Left jump does not translate to a backtrack in T; it simply implies traversing the left subtree of a node. The Left jump is not a bug in our design, but a natural consequence of the underlying tree data structure.

In finding the destination, nd, for jump(nj, dist), one of three scenarios is possible:

1. nd is in the right subtree of nj

2. nd is one of the ancestors of nj

3. nd is in the right subtree of one of the ancestors of nj

^10 TOP is at position number n in L, where n is the total number of elements in L.

^11 Each node is said to be adjacent to, where present, its parent, its left child and its right child. From a given node, our pointer can move only to nodes that are adjacent to it.


At this point, it is important to understand the significance of the child information stored at each node. Let us first establish some notation. L stands for Left and R for Right; the bar denotes the opposite direction, so L̄ = R and R̄ = L. T stands for Top. A node's Top link points to its parent. dir is a variable such that dir ∈ {L, R, T}. For a node, nx, in T, the corresponding vertex handle is denoted as vx, and the handle's position in L as px. nx.child[dir] gives the number of nodes in the dir subtree^12 of nx. Where no child exists, the corresponding number is 0. nx.dir points to the dir link^13 of nx. When some of the nodes do not exist, the corresponding links are NULL. nx.fromTop ∈ {L, R, NULL} tells us if nx is a Left or Right child, or if it is ROOT.

Let us now say that nx.child[L] = il and that ni1, . . . , nil are the il nodes in the Left subtree of nx in T. Because of the inherent sorting of nodes in T, we can say that pi1, . . . , pil < px. In fact, vi1, . . . , vil (not necessarily in that order) form a contiguous block immediately to the left of vx in L^14. Similarly, nx.child[R] = ir means that the nodes corresponding to the ir right neighbours of vx in L can be found in the right subtree of nx in T.

We can now see that Scenario 1 above corresponds to a jump in which dist ≤ nj.child[R]. A search for nd in this case means looking into the right subtree of nj. We call this a 'descent' with direction dir = R. Descending from a node nx to its right child, nR = nx.R, in T corresponds to skipping forward several positions in L. Knowing exactly how many positions have been skipped lies at the heart of solving the problem at hand. The number of positions skipped is equal to the number of vertex handles between vx and vR in L. From the sorting of T, we know that the nodes corresponding to these vertex handles will be in the Left subtree of nR in T. The number of these nodes is thus nR.child[L]. Thus, in going from nx to its right child, nR = nx.R, in T, we have skipped nR.child[L] positions in L. The remaining jump distance is then given by

dist = dist − nR.child[L] − 1

In general, when descending from nx in the dir direction, dist is updated as

dist = dist − ndir.child[dir̄] − 1        (12.1)

^12 Note that dir ≠ T in this case.

^13 L link = Left child, R link = Right child, T link = parent.

^14 In a horizontal scenario, we assume TOP to be at the rightmost end of L and BOTTOM to be at the leftmost. Moving right in L corresponds to moving forward and moving left corresponds to moving backwards.


where ndir = nx.dir. If this distance is non-zero, we descend further to nR's child in direction dir. If at some point dist becomes negative, we flip the direction of descent, i.e. dir = dir̄, make dist positive again by the update dist = −dist, and continue our descent. Our descent stops when we reach the node at which dist = 0. This is nd. The above scenario and the next two are illustrated in Figure 12.1.
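As a small, made-up numerical example of update (12.1): suppose dist = 7 and we descend with dir = R from a node to its Right child nR, where nR.child[L] = 4. The move skips the four handles in nR's Left subtree and lands on vR itself, so dist becomes 7 − 4 − 1 = 2, i.e. two more positions remain to be covered to the right of vR.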

Figure 12.1: Finding nd in T

(a) A portion of L corresponding to a subtree in T is shown. In it, vj and vd for three different jump operations are indicated. (b) The jump command is passed to the access mechanism, A, which points us to the corresponding node in T. This node is nj and we mark it with a pointer. The child information for each node is shown below it. Step 1 is to decide, based on this child information, whether to ascend or descend and then to move the pointer accordingly. At the next step, the remaining distance is calculated as per equations 12.1 and 12.3, and the next movement of the pointer is determined. Notice that in descent, the child information of the node the pointer is currently at is used in the distance update, whereas in ascent, the child information of the node the pointer was just at is used. We continue moving the pointer until it reaches a node where the remaining distance is 0. This is nd.

The blue jump corresponds to Scenario 1, the red jump to Scenario 2 and the green one to Scenario 3.

Scenarios 2 and 3 involve an 'ascent' and occur when dist > nj.child[R]. Here the pointer marking nj moves up to the parent of nj. This cannot be done when nj is the root, ROOT, of T, as there is no parent. In fact, a call of jump(ROOT, dist) where dist > ROOT.child[R] represents an 'overjump', where an attempt is being made to jump a vertex handle past TOP. Such a jump is internally cut short to jump(ROOT, ROOT.child[R]), thus making it a Scenario 1 jump. Like Scenario 1, we are interested in the number of positions in L that are skipped when we ascend from nx to its parent, nT = nx.T, in T. Based on the arguments for Scenario 1, we see that the update of dist in case of ascent is as follows

if nx.fromTop = dir,   dist = dist + nx.child[dir̄] + 1        (12.2)

else,                  dist = dist − nx.child[dir] − 1        (12.3)

with dir = R. Ascent continues until we reach a node at which dist = 0, we reach ROOT, or we reach a node ny such that dist ≤ ny.child[dir]. The first case corresponds to Scenario 2. We refer to the other two cases, which correspond to Scenario 3, as forks, and the node at which we stopped the ascent as the fork node. If a fork occurs, we continue by descending in the dir direction from the node at which the fork occurred, i.e. the fork node. To check overjumps, if the fork occurs at ROOT and dist > ROOT.child[dir], we update dist as dist = ROOT.child[dir]. The three scenarios can thus be rewritten as

1. descent (dir = R)

2. ascent (dir = R)

3. ascent followed by descent (dir = R)

As all jumps in L are in the forward direction, dir is initialized to R for all jumps in T. During the jump, however, dir may change. Specifying dir does not make sense for ascent, as each node has only one parent. We still observe the dir value, as the ascent may need to be followed by a descent, for which dir is required. Moreover, when the Node Addition step repositions the most active vertex in M, instead of deleting and re-inserting the top node in T, it might be more efficient to jump the node back by the corresponding amount^15. Maintaining the dir value for ascents keeps the door open for such implementations. Also note that no valid jump necessitates a descent followed by an ascent, and that dist can never become negative during an ascent. An illustration of different jumps is given in Figure 12.1.

^15 A backward jump corresponds to a descent or ascent with dir = L.


Having discussed the details, we now give the algorithm for finding nd for a jump in Algorithm 7. Once nd has been found, nj is deleted from its current position. This might cause T to readjust itself so as to maintain the AVL property. nj is then reinserted in T as the dir child of nd. This might not always be possible, as nd might already have a dir child. In that case, nj is inserted as the dir̄-most leaf of nd's dir subtree. For example, if dir = L and nd already has a Left subtree, then nj would be added as the Right-most leaf in nd's Left subtree in T. This ensures that vj is the successor (dir = R) or predecessor (dir = L) of vd in L. Again, if reinsertion of nj causes the AVL property in T to be violated, T will readjust itself.

12.4.1 Problems

There are some problems with the process of finding nd. To explain these problems, we need to establish some notation. We assign the label Ni to the node in T corresponding to the vertex handle number i. Coming back to the problems, looking at Figure 12.1, we see that very small jumps may sometimes cause large traversals, e.g. a call to jump(53, 2) would cause a traversal all the way up to N176 and then down its right subtree. This is a particularly long traversal for a jump distance of just 2 and is as such a problem. However, such cases occur only for small jump distances. For our application, the jump distances are typically on the order of the size of L, which is sufficiently large to avoid such cases.

Overjumps also pose a problem. An overjump is a jump that tries to go beyond the top of L. The overjump problem has already been dealt with for the case where the overjumping node comes from the Left part of T, like the blue jump in Figure 12.2. In this case, the search for the destination definitely goes through ROOT, where the extra distance is cut short as discussed above. The problem appears when the overjumping node is from the Right half of T. Steps 4, 5, 6 and 7 of the red jump in Figure 12.2 are a total waste. Ideally, the overjump should have been detected at N28 and the pointer put into descent mode in Step 3. The attribute of N28 that would help us decide whether to continue the ascent or to treat the jump as an overjump is that N28 lies on the ROOT→TOP branch. This property holds for N73 and N84 as well. Indeed, an ascent with dir = R from these nodes will only be turned back as a descent at ROOT. It is thus advisable to treat ascents with dir = R from such nodes as overjumps. Doing this would require either a flag at such nodes, or a search either up to the root or down to find TOP, or both. The search option is too costly considering that once implemented, it would be invoked for all ascents. Flagging the nodes seems feasible but, seeing how nodes are constantly being either removed or repositioned to maintain balance in T, flagging would entail extraneous bookkeeping and updating.

Figure 12.2: Overjumps

The intended jumps in L are both overjumps. The blue overjump, like all overjumps from the left half of T, goes through the root, where it is checked and adjusted such that no unnecessary steps are made. On the other hand, the red overjump, like all overjumps from T's right half, ends up wrongly at the root, where it is checked and sent back down.

We content ourselves with the argument that, because of the random nature of the Neural Mesh algorithm, most nodes have roughly equal chances of being the winner in an iteration. Typical jump distances are such that only those nodes in T that are close to TOP will be involved in overjumps. As these nodes are few compared to the entire population of T, we can safely conclude that overjumps occur rarely, though regularly, and thus do not degrade the overall performance.


12.5 Tweaks

Algorithm 7 finds the destination, nd, in T for a jump(nj, dist) operation. However, there is more to a jump than finding the destination. nj has to be deleted from its current position and re-inserted as an inorder neighbour^16 of nd. All in all there are 4 logarithmic operations – searching for nj in A, finding nd, deleting nj from T and re-inserting nj in T. While the operation on A cannot be optimized further, the other three can. We bunch them together. Thus, in finding the destination, we start by unlinking nj and adjusting the affected links. During the search, we update child and AVL information at each node visited. When nd is finally found, nj is immediately added as the inorder neighbour. Had it not been for rebalancing, we could have said that only the nodes visited during the search are affected and need to be updated. However, because T readjusts to maintain the AVL property, other nodes in T may also be affected by the repositioning of nj. When such a case occurs, non-trivial updates have to be made in T, namely updates to nodes which otherwise have nothing to do with the jump itself, but are affected because of readjustments in T. Detecting the occurrence of such cases involves some nasty bookkeeping. However, it pays off in terms of performance benefit.

Boundaries in M are handled by a separate data structure called the boundary manager, B, which stores boundaries as loops of boundary edges. B needs to be updated at Growth and Topology Learning steps. Updating B was an expensive undertaking in the existing implementations. They categorically searched the entire mesh and rebuilt B at every update. It should be obvious that, as M gets larger, this procedure becomes more and more expensive. Some observations on how boundaries are treated in M lead to dramatic changes in the algorithms' running time.

We notice that every time a Growth step affects boundaries in M, it does so to only one boundary at a time, either by adding a node to it, or removing one from it. This means that only the corresponding loop in B needs to be updated. As B stores handles only of boundary nodes and edges, searching in B is trivial compared to searching in M. Once found, the loop is updated by iterating over edges adjacent to the ones already in the loop, and selecting the ones that are on a boundary. Whereas the existing implementations categorically called for a search in all of M at every invocation of the Growth steps, we call our more efficient update procedure only when the node just split is a boundary node, or the node just removed belonged to a boundary.

^16 A node's inorder neighbours are the nodes that are visited just before and just after it during an inorder traversal of the tree.

Topology Learning steps create and remove boundaries in M. When one or more boundaries are created in M by the Triangle Removal step, we store the handles of the removed triangles' vertices. If they are still connected to M, we check for boundary edges incident on them. If any such edges are found, we trace their boundary loops and add the loops to B. This again is much more efficient than iterating over all of M to rebuild B. Similarly, when two boundaries are merged, we simply look at the neighbourhood of the affected nodes, and not at all of M.
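As an illustration of the loop tracing step, the following C++ sketch walks along one boundary starting from a seed halfedge. It is written against a generic mesh type; is_boundary and next_boundary_halfedge are placeholder names for whatever boundary queries the underlying mesh library provides, not its actual interface.

    #include <vector>

    // Trace the boundary loop containing 'seed' and return its halfedges in
    // order. Only the loop of B that contains these edges needs to be rebuilt;
    // all other loops stay untouched.
    template <class MeshT, class HalfedgeHandle>
    std::vector<HalfedgeHandle> traceBoundaryLoop(const MeshT& mesh,
                                                  HalfedgeHandle seed)
    {
        std::vector<HalfedgeHandle> loop;
        if (!mesh.is_boundary(seed))
            return loop;                          // seed is not on a boundary

        HalfedgeHandle h = seed;
        do
        {
            loop.push_back(h);                    // record this boundary edge
            h = mesh.next_boundary_halfedge(h);   // step along the boundary
        } while (!(h == seed));                   // stop when the loop closes

        return loop;
    }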


Algorithm 7 Finding nd in T

• 'TOP' is the Rightmost leaf in T. 'ROOT' is the root.

• dir ∈ {L, R, T}; L̄ = R and R̄ = L

• N.fromTop = dir ≡ node N is a dir child of its parent; ROOT.fromTop = NULL

• N.child[dir] = i ≡ node N has i children in its dir subtree; for any leaf node in T, leaf.child[L] = leaf.child[R] = 0

jump(nj, dist):
    dest = nj
    if (nj ≠ TOP and dist ≠ 0)
        ascent = True, descent = False, dir = R
        while (dist ≠ 0)
            if (dist < 0)
                dist = −dist
                dir = dir̄
            if (ascent)
                if (dest = ROOT)
                    ascent = False
                    descent = True
                    if (dist > dest.child[dir])
                        dist = dest.child[dir]
                else if (dist > dest.child[dir])
                    if (dest.fromTop = dir)
                        dist = dist + dest.child[dir̄] + 1
                    else
                        dist = dist − dest.child[dir] − 1
                    dest = dest.T
                else
                    ascent = False
                    descent = True
            if (descent and dist ≠ 0)
                sub = dest.dir
                dist = dist − sub.child[dir̄] − 1
                dest = sub
    return dest
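To complement the pseudocode, the following is a minimal C++ sketch of the destination search, written against the TNode fields sketched earlier in this chapter. The function and helper names are ours; the unlinking, re-insertion and rebalancing of nj are deliberately left out.

    enum Dir { LEFT = 0, RIGHT = 1 };

    Dir flip(Dir d) { return d == LEFT ? RIGHT : LEFT; }

    TNode* child(TNode* n, Dir d) { return d == LEFT ? n->left : n->right; }

    // n.fromTop = d: is n the d-child of its parent?
    bool fromTopIs(TNode* n, Dir d)
    {
        return n->parent != 0 && child(n->parent, d) == n;
    }

    // Find the node 'dist' positions after nj in L; dir is initialised to RIGHT.
    TNode* findDestination(TNode* nj, TNode* root, TNode* top, int dist)
    {
        TNode* dest = nj;
        if (nj == top || dist == 0)
            return dest;

        bool ascending = true;
        Dir  dir       = RIGHT;

        while (dist != 0)
        {
            if (dist < 0)                       // overshot: reverse direction
            {
                dist = -dist;
                dir  = flip(dir);
            }

            if (ascending)
            {
                if (dest == root)               // overjump: clamp, then descend
                {
                    ascending = false;
                    if (dist > dest->childCount[dir])
                        dist = dest->childCount[dir];
                }
                else if (dist > dest->childCount[dir])
                {
                    // Ascend one level, equations (12.2) / (12.3).
                    if (fromTopIs(dest, dir))
                        dist += dest->childCount[flip(dir)] + 1;
                    else
                        dist -= dest->childCount[dir] + 1;
                    dest = dest->parent;
                    continue;                   // re-check before descending
                }
                else
                {
                    ascending = false;          // fork: nd is in this subtree
                }
            }

            if (dist == 0)                      // destination reached
                break;

            // Descend one level, equation (12.1).
            TNode* sub = child(dest, dir);
            dist -= sub->childCount[flip(dir)] + 1;
            dest  = sub;
        }
        return dest;
    }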


Part IV

Conclusion



We show the results of our modifications to the Neural Mesh algorithms by

comparing mesh quality and running times for reconstructions. We then test our

implementation on large data sets. The thesis closes with some concluding remarks

on problems with and possible future work on Neural Meshes.


Chapter 13

Results

In order to compare the reconstructions obtained using the list, L, with the ones using the signal counter, we use several parameters. As before, we quantify the quality of a mesh with the ratio of its regular vertices. To compare two reconstructions, we use the Metro tool from [CRS98] to measure the distances from the original mesh to each of the reconstructions. As expected, we also compare algorithms based on their running times. Unless otherwise stated, we use the noise-filtering variant [IJL+04] from Chapter 9 with its default parameter set, which is as follows:

(αw, αf, λ, αL, CL, Cvs, Cnr, Cec, Ctop, αr, αm) = (0.1, 1, 6, 0.05, 1, 50, 125, 10, 10, 10, 2.576)

For our implementation of the variant with the list, we change one of the above values to αL = 0.2. The justification for this was seen in Figure 6.3. Also, we run the algorithm initially with the signal counters but, at an early stage in the growth of the mesh – when it reaches 1000 vertices – we switch over to the list. For jumps within the list, we use a constant jump distance of n/3, where n is the total number of vertices in the mesh.

Table 13.1: Cost comparison

           Basic             Add               Rem               Top               Total
Basic      O(n) / O(n)       O(n) / O(n)       O(n) / O(log n)   –                 O(n²)
Top.       O(log n) / O(n)   O(n) / O(n)       O(n) / O(log n)   O(n) / O(log n)   O(n²)
Norm.      O(n) / O(n)       O(n) / O(n)       O(n) / O(log n)   –                 O(n²)
Noise      O(log n) / O(n)   O(n) / O(n)       O(n) / O(n)       O(n) / O(log n)   O(n²)
Ens.       O(log n) / O(n)   O(n) / O(n)       O(n) / O(n)       O(n) / O(log n)   O(n²)
List       O(log n) / O(n)   O(log n) / O(n)   O(log n) / O(n)   O(n) / O(log n)   O(n log n)

(Each cell gives cost / number of invocations.)

The cost and frequency of invocation of each step is given for the basic (Chapter 6), topology learning (Chapter 7), normal based (Chapter 8), noise filtering (Chapter 9) and ensembles (Chapter 10) algorithms, as well as for our list modification (Chapters 11 and 12) of these algorithms. For example, the Basic step in the Topology Learning algorithm costs O(log n) and is invoked O(n) times.

Table 13.1 shows a cost comparison of the steps of the algorithm in the variants presented in the previous chapters. Notice that the list modification can be applied to any of the variants. Figure 13.1 shows the difference in running times for individual steps of the algorithm. As the Topology Learning steps are not affected by the list^1, their running times are unchanged in both versions. The steps that are directly affected by the list are the Basic and Growth steps. There is almost no change in the running time of the Basic Step. The Node Addition and Node Removal steps now take much less time. The total effect is more pronounced in larger models, as shown in Figures 13.2 and 13.3.

Table 13.2 shows a summary of mesh distances obtained for reconstructions of various models. It is apparent that, generally, reconstructions obtained with our modifications are 'farther' from the original mesh than the ones obtained using the original method. This is not necessarily a drawback, as strict adherence to input data is counterproductive in case of noise. Increased distance can also be attributed to the extra smoothing that we do – we use a value of αL = 0.2 as compared to the authors' setting of αL = 0.05.

^1 Except when a Triangle Removal step removes a vertex from the mesh. In that case, the list needs to be updated. This happens very rarely and does not affect the running time significantly.


Figure 13.1: Running time of individual steps

Data shown is averaged over four reconstructions of the Stanford bunny model of up to 20k vertices – two each with and without the list modification. There is almost no change in the time taken for the Basic step and topology learning. There is some change in the running time of the Node Removal step and a drastic change for the Node Addition step.


Figure 13.2 shows the effects of the modifications we make to the boundary handling mechanism. The payoff for our choice of parameters is also reflected in the higher quality reconstructions obtained with our method. Figure 13.3 gives some more comparisons, but with smaller meshes. Notice that because of the smaller number of vertices in the reconstructions used^2, the final difference between corresponding running times in Figure 13.3 is not as pronounced as in Figure 13.2. However, the pattern is still obvious.

As our method is now fast enough, we try it out on large models obtained courtesy of the Digital Michelangelo project [LPR+00]. Figures 13.4 and 13.5 show the results. Our default settings do not work for models which have especially sparse data at some places.

^2 Data for reconstructions up to 100k is shown in Figure 13.2, whereas the reconstructions used to collect data for Figure 13.3 were up to 20k.


Table 13.2: Mesh distances

                      Exp. 1   Exp. 2   Exp. 3   Avg
Max      with L        3.96     3.67     3.96     3.86
         without L     3.91     6.22     5.57     5.23
brain    with L        3.42     3.19     3.76     3.46
         without L     2.57     2.66     2.73     2.65
bunny    with L        0.006    0.008    0.008    0.007
         without L     0.005    0.007    0.007    0.006
hand     with L        7.91     8.71     8.27     8.30
         without L     3.75     7.35     8.71     6.60

An average of three experiments is taken for 20k reconstructions of the Max Planck, brain, Stanford bunny and hand models. Reconstructions are obtained using our list modification and with the original signal counters. Hausdorff distances between the reconstructions and original models are obtained using the Metro tool. The effect of our modifications, in the general case, is to increase the distance.

We alter the smoothing parameters to obtain satisfactory results. We also keep topology learning off, as the models have sparse data which would lead to the creation of unnecessary boundaries in the reconstructions. We could alternatively have set Tr to a high value to circumvent the problem, but we take the safe option by avoiding boundaries altogether! The reconstructions were carried out on a 1.7 GHz Pentium 4 machine with 512 MB memory running Debian Linux 3.0. The models were allowed to grow until the memory demands became too great for the system to handle. Table 13.3 shows the corresponding running times.

Table 13.3: Running times for large models

             Size of model   Time taken
Atlas        270k            3h 46m
Awakening    160k            7h 43m*
Youthful     330k            31h 2m*

* – with extra smoothing iterations

Running times for large reconstructions are shown. Because of extra smoothing iterations, the Awakening model, at roughly two-thirds the size of the Atlas, took almost twice the time. The Youthful model is grown much larger and so takes longer.


Figure 13.2: Modifications to boundary handling

(a) with L, no topology learning  (b) without L, no topology learning  (c) without L, topology learning  (d) with L, topology learning

50k reconstructions of the Max Planck model are shown with and without the use of L, and with and without topology learning. The logarithmic nature of the modified algorithms versus the quadratic time taken by the original algorithms is clearly illustrated in the left graph. Also note that, because of our smarter handling of boundaries, the difference in running time with and without topology learning is virtually eliminated in the modified algorithm – the green plots in the left graph overlap, whereas the difference between the blue plots increases with the number of nodes. Our choice of αL = 0.2 also gives us superior mesh quality, as shown in the graph on the right.


Figure 13.3: Time and valence comparison

20k reconstructions are obtained with and without our list modifications for the Max Planck, brain,

Stanford bunny and hand models. Each experiment is conducted thrice. The running times and

fraction of regular vertices shown for intermediate stages of the reconstructions are averages of the

corresponding experiments. For each model, data for reconstructions obtained without our modifica-

tions is denoted by solid lines, and that for the reconstructions obtained with the list modifications,

in dotted lines of the corresponding color. In all cases, our modifications yield better quality recon-

structions at a higher rate.


Figure 13.4: Awakening

Reconstructions of the Awakening model [LPR+00] with the list modification. We try (left) with our default setting of αL = 0.2 and CL = 1. At 5k vertices, as shown, learning is clearly going in the wrong direction. We change the smoothing parameters (right) to αL = 0.05 and CL = 5 and reconstruct up to 160k vertices. Notice that the algorithm still has problems learning the thin base at the bottom of the model.


Figure 13.5: More large models

More large models from the Digital Michelangelo project [LPR+00]. We reconstruct the Atlas model (left) up to 270k vertices with our default settings. The Youthful model (right), like the Awakening model in the previous figure, has a thin base at the bottom, which causes problems in the growth of the model for the default settings. With the modified settings used earlier, i.e. αL = 0.05 and CL = 5, we grow the model up to 330k vertices.

Because of the model's turning inside out during growth, we use an alternate viewer for the Atlas to better show the learnt details.


Chapter 14

Conclusion

In this thesis, after setting the context of surface reconstruction, we presented the Neural Mesh algorithm and its variants in detail. We proposed a novel way to go about any of these variants, which also represents a shift from the 'exact learning' paradigm to a 'comparative learning' one. Our method is highly flexible, produces comparable results and reduces running times drastically. This enables us to run the Neural Mesh algorithms for much larger reconstructions and allows us to study the algorithms' behaviour for such data.

14.1 Problems

As we do not modify the algorithms themselves, the problems that plague the original algorithms persist, namely the reliance on user parameters, each of which can potentially have a significant impact on the output model^1. While we understand well the meaning and importance of these parameters, a normal user cannot be expected to fully grasp the significance of, say, Cec, nor the effects of their choice on the ensuing reconstruction process. This problem is compounded by the abundance of such parameters, e.g. αw, αsc, αL, Cvs, Cec, αr, and αm in the topology learning algorithm.

Assigning default values to the parameters, and leaving it to more experienced users to modify them, is one viable solution. However, this serves only to limit the potential of Neural Meshes. Different configurations of these parameters result in different growing conditions for the Neural Mesh, ranging from slow and conservative learning with controlled growth to aggressive learning with highly flexible growth.

^1 The effect of αw, αL and the relationship between Cvs and Cec was discussed in Chapter 6.


Additionally, each model has a corresponding, unique 'optimal' parameter set – less smoothing should be done for a model known to have sharp features, parameters can be set to allow faster learning for simpler models, etc. Figure 6.3 also implies that one set of parameters might lead to good reconstructions for some models and poor ones for others. Though an experienced user would have some idea of the optimal set for a given model, the best way to determine it is to run the algorithm several times with different configurations and to take the user's response to tune the parameters accordingly. This is a time consuming prospect that again requires user intervention.

14.2 Future work

Automating parameter selection to efficiently choose the optimal parameter set for a given model would go a long way towards making the Neural Mesh algorithm more accessible. Apparently, this issue arises commonly in the Neural Networks community, and is known as Neural Network benchmarking. [Fle95, Pre95, Pre96] offer hope in this direction.

While we refer in this thesis to the possibility of doing so, we have not yet ex-

perimented with variable jump distances for the list. This approach seems to us to

be especially promising in that the jump distance singlehandedly steers the growth

of the Neural Mesh. It will be a fruitful endeavour to study how different jump dis-

tances affect the reconstruction process and the shape of the final reconstruction.

The framework used to implement the comparative learning approach can very

well be extended to accommodate the original, exact learning approach. While it is

of passing academic interest to experiment with variations of a technique, in most

cases, the original, tried and tested technique remains the best. Shifting the original

algorithms to our current framework will be our immediate priority.

Lastly, as Funke and others succinctly put forward in [FR02], the fact that we spend supra-linear time and effort on a problem that seems to be inherently linear holds in itself the promise of a better solution or, at least, of a better understanding of the problem. Until these are found, we shall, like so many others, continue our search for them.


Bibliography

[AB98] Nina Amenta and Marshall Bern. Surface reconstruction by voronoi

filtering. In Proceedings of the Fourteenth Annual Symposium on

Computational Geometry (SCG’98), pages 39–48, New York, June

1998. Association for Computing Machinery.

[ABCO+01] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. T.

Silva. Point set surfaces. In Thomas Ertl, Ken Joy, and Amitabh

Varshney, editors,Visualization 2001: proceedings: October 21–26,

2001, San Diego, California, pages 21–28, 1109 Spring Street, Suite

300, Silver Spring, MD 20910, USA, October 2001. IEEE Computer

Society Press.

[ABE98] N. Amenta, M. Bern, and D. Eppstein. The crust and theβ-skeleton:

Combinatorial curve reconstruction.Graphical models and image

processing: GMIP, 60(2):125–??, ???? 1998.

[ABK98] Nina Amenta, Marshall Bern, and M. Kamvysselis. A new voronoi-

based surface reconstruction algorithm. In Michael Cohen, editor,

Proceedings of SIGGRAPH 98, Annual Conference Series, Addison

Wesley, pages 415–422. Addison Wesley, 1998.

[ACDL00] Nina Amenta, Sunghee Choi, Tamal K. Dey, and Naveen Leekha.

Simple algorithm for homeomorphic surface reconstruction. InPro-

ceedings of the 16th Annual Symposium on Computational Geometry

(SCG-00), pages 213–222, N. Y., June 12–14 2000. ACM Press.

[ACK01] Nina Amenta, Sunghee Choi, and Ravi Krishna Kolluri. The power

crust, unions of balls, and the medial axis transform.Computational

Geometry, 19(2-3):127–153, 2001.



[Alf89] P. Alfeld. Scattered data interpolation in three or more variables.

In T. Lyche and L. Schumaker, editors,Mathematical Methods in

Computer Aided Geometric Design, pages 1–34. Academic Press,

1989.

[AM00] Ernst Althaus and Kurt Mehlhorn. TSP-based curve reconstruction

in polynomial time. InProceedings of the Eleventh Annual ACM-

SIAM Symposium on Discrete Algorithms, pages 686–695, N.Y., Jan-

uary 9–11 2000. ACM Press.

[AS85] Eugene L. Allgower and Phillip H. Schmidt. An algorithm for

piecewise-linear approximation of an implicitly defined manifold.

SIAM Journal on Numerical Analysis, 22(2):322–346, April 1985.

[AS00] M. Attene and M. Spagnuolo. Automatic surface reconstruction from

point sets in space. In M. Gross and F. R. A. Hopgood, editors,Com-

puter Graphics Forum (Eurographics 2000), volume 19(3), 2000.

[Ata83] Mikhail J. Atallah. A linear time algorithm for the Hausdorff dis-

tance between convex polygons.Information Processing Letters,

17(4):207–209, November 1983.

[Att97] D. Attali. r-Regular shape reconstruction from unorganized points.

In Proceedings of the 13th International Annual Symposium on Com-

putational Geometry (SCG-97), pages 248–253, New York, June 4–6

1997. ACM Press.

[AVL62] G. M. Adel’son-Vel’skii and E. M. Landis. An algorithm for the

organization of information.Soviet Mathematics Doklady, 3:1259–

1263, 1962.

[Bar85] R. E. Barnhill. Surfaces in computer-aided geometric design: A sur-

vey with new results. InComputer Aided Geometric Design, vol-

ume 2, pages 1–17, September 1985.

[Bay72] R. Bayer. Symmetric binary B-trees: Data structure and maintenance algorithms. Acta Informatica, Springer Verlag (Heidelberg, FRG and New York, NY, USA), 1(4), November 1972.


[BB97] F. Bernardini and C. Bajaj. Sampling and reconstructing manifolds

using alpha-shapes. Technical Report CSD-TR-97-013, Department

of Computer Science, Purdue University, West Lafayette, IN, 1997.

[BBCS97] Fausto Bernardini, Chandrajit L. Bajaj, J. Chen, and Daniel R.

Schikore. A triangulation-based object reconstruction method. In

6th Annual Video Review of Computational Geometry, Proc. 13th

ACM Symp. Computational Geometry, pages 481–484. ACM Press,

4–6 June 1997.

[BBCS99] F. Bernardini, C. L. Bajaj, J. Chen, and D. R. Schikore. Automatic

reconstruction of3D CAD models from digital scans.International

Journal of Computational Geometry and Applications (IJCGA), 9(4–

5):327–??, 1999.

[BBX95] Chandrajit L. Bajaj, Fausto Bernardini, and Guoliang Xu. Automatic

reconstruction of surfaces and scalar fields from3D scans.Computer

Graphics, 29(Annual Conference Series):109–118, November 1995.

[BC00] Jean-Daniel Boissonnat and Frederic Cazals. Smooth shape recon-

struction via natural neighbor interpolation of distance functions.

In Proceedings of the 16th Annual Symposium on Computational

Geometry (SCG-00), pages 223–232, N. Y., June 12–14 2000. ACM

Press.

[BCX95] Chandrajit L. Bajaj, Jondon Chen, and Guoliang Xu. Modeling with

cubic A-Patches.ACM Transactions on Graphics, 14(2):103–133,

April 1995.

[BF91] R. E. Barnhill and T. A. Foley. Methods for constructing surfaces

on surfaces. In G. Farin, editor,Geometric Modeling: Methods and

their Applications, pages 1–15. Springer, Berlin, 1991.

[BF01] J. Barhak and A. Fischer. Adaptive reconstruction of freeform ob-

jects with 3D SOM neural network grids. InProceedings Ninth

Pacific Conference on Computer Graphics and Applications. Pacific

Graphics 2001. IEEE Comput. Soc, Los Alamitos, CA, USA, pages

97–105, 2001.


[BI92] Chandrajit L. Bajaj and Insung Ihm. Smoothing polyhedra using

implicit algebraic splines.Computer Graphics, 26(2):79–88, July

1992.

[Bis95] Christopher M. Bishop.Neural Networks for Pattern Recognition.

Oxford University Press, Oxford, UK, 1995.

[Bli82] James F. Blinn. A generalization of algebraic surface drawing.ACM

Transactions on Graphics, 1(3):235–256, July 1982.

[BM72] Rudolf Bayer and Edward M. McCreight. Organization and mainte-

nance of large ordered indices.Acta Informatica, 1:173–189, 1972.

[BMR+99] F. Bernardini, J. Mittleman, H. Rushmeier, C. T. Silva, and

G. Taubin. The ball-pivoting algorithm for surface reconstruc-

tion. IEEE Transactions on Visualization and Computer Graphics,

5(4):349–359, October/December 1999.

[Boh00] Christian-Arved Bohn. Radiosity on Evolving Networks. PhD thesis, Fachbereich Informatik, Universität Dortmund, Dortmund, Germany, 2000. Available from http://imk.gmd.de/reta/.

[Boi84] Jean-Daniel Boissonnat. Geometric structures for three-dimensional

shape representation.ACM Transactions on Graphics, 3(4):266–286,

October 1984.

[BOP92] R. E. Barnhill, K. Opitz, and H. Pottmann. Fat surfaces: A trivari-

ate approach to triangle-based interpolation on surfaces.Computer

Aided Geometric Design, 9(5):365–378, November 1992.

[BPR87] R. Barnhill, B. Piper, and K. Rescorla. Interpolation to arbitrary data

on a surface. In G. Farin, editor,Geometric Modeling: Algorithms

and New Trends, pages 281–289. SIAM, Philadelphia, 1987.

[Bri85] J. F. Brinkley. Knowledge-driven ultrasonic three-dimensional organ

modeling.IEEE-Transactions, PAMI, 7(4):431–441, 1985.

[BSMH98] Michael J. Black, Guillermo Sapiro, D. Marimont, and David

Heeger. Robust anisotropic diffusion.IEEE Transactions on Image

Processing, 7(3):421–432, March 1998.


[BW97] Jules Bloomenthal and Brian Wyvill.Introduction to Implicit Sur-

faces. Morgan Kaufmann Publishers Inc., 1997.

[BX94] Chandrajit L. Bajaj and Guoliang Xu. Modeling scattered function

data on curved surfaces. InPacific Graphics ’94: Proceeding of the

second Pacific conference on Fundamentals of computer graphics,

pages 19–32. World Scientific Publishing Co., Inc., 1994.

[CFB97] J. Carr, W. Fright, and R. Beatson. Surface interpolation with radial

basis functions for medical imaging. InIEEE Transactions Med.

Imag., volume 16, February 1997.

[Cha03] Raphaelle Chaine. A geometric-based convection approach of 3-

D reconstruction. InProceedings of the Eurographics/ACM SIG-

GRAPH symposium on Geometry processing, pages 218–229. Euro-

graphics Association, 2003.

[CL96] Brian Curless and Marc Levoy. A volumetric method for building

complex models from range images. InProceedings of the ACM

Conference on Computer Graphics, pages 303–312, New York, Au-

gust 4–9 1996. ACM.

[CM95] Yang Chen and Gerard Medioni. Description of complex objects

from multiple range images using an inflating balloon model.Com-

puter Vision and Image Understanding: CVIU, 61(3):325–334, May

1995.

[CMB+01] Jonathan C. Carr, Tim J. Mitchell, R. Beatson, Jon B. Cherrie,

W. Richard Fright, Bruce C. McCallum, and Tim R. Evans. Recon-

struction and representation of 3D objects with radial basis functions.

In Stephen Spencer, editor,Proceedings of the Annual Computer

Graphics Conference (SIGGRAPH-01), pages 67–76, New York,

August 12–17 2001. ACM Press.

[Con84] C. I. Connolly. Cumulative generation of octree models from range

data. InIEEE International Conference on Robotics and Automation,

March, 1984, pages 25–32, March 1984.

[CRS98] P. Cignoni, C. Rocchini, and R. Scopigno. Measuring error on

simplified surfaces. In David Duke, Sabine Coquillart, and Toby


Howard, editors,Computer Graphics Forum, volume 17(2), pages

167–174. Eurographics Association, 1998.

[CSA88] C. H. Chien, Y. B. Sim, and J. K. Aggarwal. Generation of vol-

ume/surface octree from range data. InCVPR’88 (IEEE Computer

Society Conference on Computer Vision and Pattern Recognition,

Ann Arbor, MI, June 5–9, 1988), pages 254–260, Washington, DC.,

June 1988. Computer Society Press.

[Cur97] Brian Lee Curless. New methods for surface reconstruction from

range images. PhD Thesis CSL-TR-97-733, Stanford University,

Computer Systems Laboratory, June 1997.

[Dah89] W. Dahmen. Smooth piecewise quadric surfaces. In T. Lyche and

L. Schumaker, editors,Mathematical Methods in Computer Aided

Geometric Design, pages 181–194. Academic Press, 1989.

[Del34] B. Delaunay. Sur la sphère vide. Izvestia Akademia Nauk SSSR, VII Seria, Otdelenie Matematicheskii i Estestvennyka Nauk, 7:793–800, 1934.

[Dev98] Olivier Devillers. Improved incremental randomized delaunay tri-

angulation. InProceedings of the Fourteenth Annual Symposium

on Computational Geometry (SCG’98), pages 106–115, New York,

June 1998. Association for Computing Machinery.

[DFR01] T. Dey, S. Funke, and E. Ramos. Surface reconstruction in almost

linear time under locally uniform sampling. InProceedings of the

17th European Workshop on Computational Geometry (EUROCG-

01), pages 129–132, Berlin, Germany, March 26–28 2001. Institute

of Computer Science, Freie Universitat Berlin.

[dG95] Luiz Henrique de Figueiredo and Jonas Gomes. Computational mor-

phology of curves.The Visual Computer, 11(2):105–112, 1995.

[DK99] Tamal K. Dey and Piyush Kumar. A simple provable algorithm for

curve reconstruction. InProceedings of the Tenth Annual ACM-

SIAM Symposium on Discrete Algorithms, pages 893–894, N.Y., Jan-

uary 17–19 1999. ACM-SIAM.


[DLR86] N. Dyn, D. Levin, and S. Rippa. Numerical procedures for surface

fitting of scattered data by radial basis functions.SIAM J Sci. Stat.

Comput., 7:639–659, 1986.

[DMR99] Tamal K. Dey, Kurt Mehlhorn, and Edgar A. Ramos. Curve recon-

struction: Connecting dots with good reason. InProceedings of the

Conference on Computational Geometry (SCG ’99), pages 197–206,

New York, N.Y., June 13–16 1999. ACM Press.

[DTS93] W. Dahmen and T.-M. Thamm-Schaar. Cubicoids: modeling and vi-

sualization.Computer Aided Geometric Design, 10(2):89–108, April

1993.

[DTS01] Huong Quynh Dinh, Greg Turk, and Greg Slabaugh. Reconstruct-

ing surfaces using anisotropic basis functions. InProceedings of the

Eighth International Conference On Computer Vision (ICCV-01),

pages 606–613, Los Alamitos, CA, July 9–12 2001. IEEE Computer

Society.

[DTS02] Huong Quynh Dinh, Greg Turk, and Greg Slabaugh. Reconstructing

surfaces by volumetric regularization using radial basis functions.

IEEE Trans. Pattern Anal. Mach. Intell., 24(10):1358–1371, 2002.

[Ede93] H. Edelsbrunner. The union of balls and its dual shape. In ACM-

SIGACT ACM-SIGGRAPH, editor,Proceedings of the 9th Annual

Symposium on Computational Geometry (SCG ’93), pages 218–231,

San Diego, CA, USA, May 1993. ACM Press.

[Ede98] Herbert Edelsbrunner. Shape reconstruction with delaunay complex.

In LATIN: Latin American Symposium on Theoretical Informatics,

1998.

[Ede02] Ricky Pollack and Eli Goodman Festschrift, chapter Surface recon-

struction by wrapping finite point sets in space. Springer-Verlag,

2002.

[EKS83] Herbert Edelsbrunner, D. G. Kirkpatrick, and R. Seidel. On the shape

of a set of points in the plane.IEEE Trans. Information Theory, IT-

29:551–559, 1983.


[EM94] Herbert Edelsbrunner and Ernst P. Mücke. Three-dimensional alpha

shapes.ACM Transactions on Graphics, 13(1):43–72, January 1994.

[EWA97] D. L. Elsner, R. T. Whitaker, and M. A. Abidi. A volumetric tech-

nique for 3d modeling through fusing multiple noisy range images.

In International Workshop on Image Analysis and Information Fu-

sion, Adelaide, Austalia, pages 405–416, November 1997.

[FCOAS03] Shachar Fleishman, Daniel Cohen-Or, Marc Alexa, and Claudio T.

Silva. Progressive point set surfaces.ACM Transactions on Graph-

ics, 22(4):997–1011, October 2003.

[FHMB84] Olivier Faugeras, Martial Hebert, P. Mussi, and Jean-Daniel Bois-

sonnat. Polyhedral approximation of 3-D objects without holes.

Computer Vision, Graphics, and Image Processing, 25:169–183,

1984.

[FK88] A. T. Fomenko and T. L. Kunii.Topological Modeling for Visualiza-

tion. Springer Verlag, April 1988.

[Fle95] Arthur Flexer. Statistical evaluation of neural network experiments:

Minimum requirements and current practice. Technical Report

OEFAI-TR-95-16, The Austrian Research Institute for Artificial In-

telligence, Schottengasse 3, A-1010 Vienna, Austria, 1995.

[FLN+90] Thomas A. Foley, David A. Lane, G. M. Nielson, Richard Franke,

and Hans Hagen. Interpolation of scattered data on closed surfaces.

Computer Aided Geometric Design, 7(1-4):303–312, June 1990.

[Flu92] Jan Flusser. An adaptive method for image registration.Pattern

Recognition, 25(1):45–54, 1992.

[Fol90] T. A. Foley. Interpolation of Scattered Data on a Spherical Domain,

pages 303–310. Chapman and Hall, 1990.

[FR02] Stefan Funke and Edgar A. Ramos. Smooth-surface reconstruction

in near-linear time. InProceedings of the 13th Annual ACM-SIAM

Symposium On Discrete Mathematics (SODA-02), pages 781–790,

New York, January 6–8 2002. ACM Press.


[Fra87] R. Franke. Recent advances in the approximation of surfaces from

scattered data.Topics in Multivariate Approx., pages 79–98, 1987.

[Fri95] Bernd Fritzke. A growing neural gas network learns topologies. In

G. Tesauro, D. Touretzky, and T. Leen, editors,Advances in Neural

Information Processing Systems, volume 7, pages 625–632. The MIT

Press, 1995.

[Fri96] B. Fritzke. Growing self-organizing networks – why? InESANN’96:

European Symposium on Artificial Neural Networks, pages 61–72,

1996.

[Fri93] Bernd Fritzke. Growing cell structures - a self-organizing network

for unsupervised and supervised learning. Technical Report ICSTR-

93-026, International Computer Science Institute, Berkeley, May 93.

[Gie99] Joachim Giesen. Curve reconstruction, the traveling salesman prob-

lem, and menger’s theorem on length. InProceedings of the Confer-

ence on Computational Geometry (SCG ’99), pages 207–216, New

York, N.Y., June 13–16 1999. ACM Press.

[GJ02] Joachim Giesen and Matthias John. Surface reconstruction based

on a dynamical system.Computer Graphics Forum, 21(3):363–363,

2002.

[GKS00] M. Gopi, S. Krishnan, and C. T. Silva. Surface reconstruction based

on lower dimensional localized delaunay triangulation. In M. Gross

and F. R. A. Hopgood, editors,Computer Graphics Forum (Euro-

graphics 2000), volume 19(3), 2000.

[GLL+04] Michael Gosele, Hendrik P. A. Lensch, Jochen Lang, Christian

Fuchs, and Hans-Peter Seidel. DISCO: acquisition of translucent ob-

jects.ACM Transactions on Graphics, 23(3):835–844, August 2004.

[Gol99] Christopher Gold. Crust and anti-crust: A one-step boundary and

skeleton extraction algorithm. InProceedings of the Conference

on Computational Geometry (SCG ’99), pages 189–196, New York,

N.Y., June 13–16 1999. ACM Press.

[GS78] L. Guibas and R. Sedgewick. A dichromatic framework for balanced

trees.IEEE-FOCS, Proc. FOCS Conf, 1978.


[GS93] M. Gross and F. Seibert. Visualization of multidimensional data sets using a neural network. The Visual Computer, 10(3):145–159, 1993.

[GSF88] E. Grosso, G. Sandini, and C. Frigato. Extraction of 3D information and volumetric uncertainty from multiple stereo images. In Yves Kodratoff, editor, Proceedings of the 8th European Conference on Artificial Intelligence, pages 683–689, Munich, FRG, August 1988. Pitman Publishers.

[Guo91] Baining Guo. Surface generation using implicit cubics. In N. M. Patrikalakis, editor, Scientific Visualization of Physical Phenomena (Proceedings of CG International '91), pages 485–503. Springer-Verlag, 1991.

[Guo93] B. Guo. Nonsplitting macro patches for implicit cubic spline surfaces. In R. J. Hubbold and R. Juan, editors, Eurographics '93, pages 433–445, Oxford, UK, 1993. Eurographics, Blackwell Publishers.

[HDD+92] Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle. Surface reconstruction from unorganized points. In Edwin E. Catmull, editor, Computer Graphics (SIGGRAPH '92 Proceedings), volume 26, pages 71–78, July 1992.

[HDD+93a] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Mesh optimization. Proceedings of SIGGRAPH '93, pages 19–26, 1993.

[HDD+93b] H. Hoppe, T. D. DeRose, T. Duchamp, John McDonald, and W. Stuetzle. Mesh optimization. Technical Report TR-93-01-01, University of Washington, Department of Computer Science and Engineering, January 1993.

[HDD+94] Hugues Hoppe, Tony DeRose, Tom Duchamp, Mark Halstead, Hubert Jin, John McDonald, Jean Schweitzer, and Werner Stuetzle. Piecewise smooth surface reconstruction. In Andrew Glassner, editor, Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24–29, 1994), Computer Graphics Proceedings, Annual Conference Series, pages 295–302. ACM SIGGRAPH, ACM Press, July 1994. ISBN 0-89791-667-0.

[Hop94] Hugues Hoppe. Surface Reconstruction from Unorganized Points. PhD thesis, Dept. of Computer Science and Engineering, U. of Washington, 1994.

[HS89] T. Hastie and W. Stuetzle. Principal curves. Journal of the American Statistical Association, 84:502–516, 1989.

[HSIW96] A. Hilton, A. J. Stoddart, J. Illingworth, and T. Windeatt. Reliable surface reconstruction from multiple range images. Lecture Notes in Computer Science, 1064:117–??, 1996.

[HTF01] T. Hastie, R. Tibshirani, and J. Friedman. Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Verlag, New York, 2001.

[HV98] M. Hoffman and L. Varady. Free-form modeling for scattered data by neural networks. Journal for Geometry and Graphics, 1998.

[IJL+04] I. Ivrissimtzis, W. K. Jeong, S. Lee, Y. Lee, and H. P. Seidel. Neural meshes: Surface reconstruction with a learning algorithm. 2004.

[IJS03a] I. Ivrissimtzis, W. K. Jeong, and H. P. Seidel. Neural meshes: Statistical learning methods in surface reconstruction. Technical Report MPI-I-2003-4-007, Max-Planck-Institut für Informatik, Germany, 2003.

[IJS03b] I. Ivrissimtzis, W. K. Jeong, and H. P. Seidel. Using growing cell structures for surface reconstruction. In Shape Modeling International, pages 78–86, 2003.

[ILL+04] I. Ivrissimtzis, Y. Lee, S. Lee, W. K. Jeong, and H. P. Seidel. Neural mesh ensembles. In 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004.

[JIS03] W. K. Jeong, I. Ivrissimtzis, and H. P. Seidel. Neural meshes: Statistical learning based on normals. In Pacific Conference on Computer Graphics and Applications, pages 404–408, 2003.

[KHS03] Nikita Kojekine, Ichiro Hagiwara, and V. Savchenko. Software tools using CSRBFs for processing scattered data. Computers and Graphics, 27(2):311–319, April 2003.

[KL96] Venkat Krishnamurthy and Marc Levoy. Fitting smooth surfaces to dense polygon meshes. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 313–324. ACM SIGGRAPH, Addison Wesley, August 1996. Held in New Orleans, Louisiana, 04–09 August 1996.

[Koh82] Teuvo Kohonen. Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43:59–69, 1982.

[KPT99] Evaggelia-Aggeliki Karabassi, Georgios Papaioannou, and Theoharis Theoharis. A fast depth-buffer-based voxelization algorithm. Journal of Graphics Tools: JGT, 4(4):5–10, 1999.

[KVLS99] Leif P. Kobbelt, Jens Vorsatz, Ulf Labsik, and Hans-Peter Seidel. A shrink wrapping approach to remeshing polygonal surfaces. Computer Graphics Forum, 18(3):119–130, September 1999. ISSN 1067-7055.

[LC87] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In Maureen C. Stone, editor, Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 163–169, July 1987.

[LC94] A. Li and G. Crebbin. Octree encoding of objects from range images. Pattern Recognition, 27:727–739, 1994.

[Lev03] D. Levin. Mesh-independent surface interpolation. pages 37–49, 2003.

[LGB+02] H. Lensch, M. Gosele, P. Bekaert, J. Kautz, M. Magnor, J. Lang, and H.-P. Seidel. Interactive rendering of translucent objects. Proc. IEEE Pacific Graphics 2002, Beijing, China, pages 214–224, October 2002.

[LM95] Chia-Wei W. Liao and Gerard Medioni. Surface approximation of a cloud of 3D points. Graphical models and image processing: GMIP, 57(1):67–74, January 1995.

[LMP98] Kil-Moo Lee, P. Meer, and Rae-Hong Park. Robust adaptive segmentation of range images. IEEE Trans. Pattern Analysis and Machine Intelligence, 20(2):200–205, February 1998.

[LPR+00] Marc Levoy, Kari Pulli, Szymon Rusinkiewicz, David Koller, Lucas Pereira, Matt Ginzton, Sean Anderson, James Davis, Jeremy Ginsberg, Brian Curless, Jonathan Shade, and Duane Fulk. The digital Michelangelo project: 3D scanning of large statues. In Sheila Hoffmeyer, editor, Proceedings of the Computer Graphics Conference 2000 (SIGGRAPH-00), pages 131–144, New York, July 23–28, 2000. ACM Press.

[LTGS95] Chek T. Lim, George M. Turkiyyah, Mark A. Ganter, and Duane W. Storti. Implicit reconstruction of solids from cloud point sets. In SMA '95: Proceedings of the Third Symposium on Solid Modeling and Applications, pages 393–402. ACM, May 1995. Held May 17–19, 1995 in Salt Lake City, Utah.

[Mel97] M. Melkemi. α-shapes and their derivatives. pages 367–369, June 1997.

[Men95] R. Mencl. Surface reconstruction from unorganized points in space. pages 67–70, 1995.

[MM98] R. Mencl and H. Müller. Interpolation and approximation of surfaces from three-dimensional scattered data points. pages 51–67, 1998.

[MMKR91] Peter Meer, Doron Mintz, Dong Yoon Kim, and Azriel Rosenfeld. Robust regression methods for computer vision: A review. International Journal of Computer Vision, 6(1):59–70, April 1991.

[MS94] Thomas Martinetz and Klaus Schulten. Topology representing networks. Neural Networks, 7(2), 1994.

[Mur91] Shigeru Muraki. Volumetric shape description of range data using “blobby model”. In Thomas W. Sederberg, editor, Computer Graphics (SIGGRAPH '91 Proceedings), volume 25, pages 227–235, July 1991.

[MW90] D. Moore and J. Warren. Adaptive mesh generation II: Packing solids. Technical Report TR90-139, Rice University, 1990.

[MW91] D. Moore and J. Warren. Approximation of dense scattered data using algebraic surfaces. In 24th Annual Hawaii International Conference on System Sciences, pages 681–690, 1991.

[MYR+01] B. S. Morse, T. S. Yoo, P. Rheingans, D. T. Chen, and K. R. Subramanian. Interpolating implicit surfaces from scattered surface data using compactly supported radial basis functions. In Bob Werner, editor, Proceedings of the International Conference on Shape Modeling and Applications (SMI-01), pages 89–98, Los Alamitos, CA, May 7–11, 2001. IEEE Computer Society.

[NF90] G. M. Nielson and R. Franke. Scattered data interpolation and applications: A tutorial and survey. In H. Hagen and D. Roller, editors, Geometric Modeling: Methods and their Applications, pages 131–160. Springer, Berlin, 1990.

[NFHL91] Gregory M. Nielson, Thomas A. Foley, B. Hamann, and David Lane. Visualizing and modeling scattered multivariate data. IEEE Computer Graphics and Applications, 11(3):47–55, May 1991.

[Nie93a] G. M. Nielson. Modeling and visualizing volumetric and surface-on-surface data. In H. Hagen et al., editors, Focus on Scientific Visualization, pages 191–242. Springer-Verlag, 1993.

[Nie93b] Gregory M. Nielson. Scattered data modeling. IEEE Computer Graphics and Applications, 13(1):60–70, January 1993.

[OBA+03] Yutaka Ohtake, Alexander Belyaev, Marc Alexa, Greg Turk, and Hans-Peter Seidel. Multi-level partition of unity implicits. In Jessica Hodgins and John C. Hart, editors, Proceedings of ACM SIGGRAPH 2003, volume 22(3) of ACM Transactions on Graphics, pages 463–470. ACM Press, 2003.

[OBS03] Yutaka Ohtake, Alexander Belyaev, and Hans-Peter Seidel. A multi-scale approach to 3D scattered data interpolation with compactly supported basis functions. In International Conference on Shape Modeling and Applications 2003, Seoul, Korea, May 12–15, 2003.

[OG98] C. Oblonsek and N. Guid. A fast surface-based procedure for object reconstruction from 3D scattered points. Computer Vision and Image Understanding: CVIU, 69(2):185–195, February 1998.

[O'R81] J. O'Rourke. Polyhedra of minimal area as 3D object models. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 664–666, 1981.

[PDH+97] K. Pulli, T. Duchamp, H. Hoppe, J. McDonald, L. Shapiro, and W. Stuetzle. Robust meshes from multiple range maps. In Proceedings of the IEEE Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling, May 1997.

[Pot92] H. Pottmann. Interpolation on surfaces using minimum norm networks. Computer Aided Geometric Design, 9(1):51–68, May 1992.

[Pra87] Vaughan Pratt. Direct least-squares fitting of algebraic surfaces. In Maureen C. Stone, editor, Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 145–152, July 1987.

[Pre95] Lutz Prechelt. Some notes on neural learning algorithm benchmarking. Neurocomputing, 9(3):343–347, 1995.

[Pre96] L. Prechelt. A quantitative study of experimental evaluations of neural network learning algorithms: Current research practice. Neural Networks, 9(3):457–462, 1996.

[PS91] Alex Pentland and Stan Sclaroff. Closed-form solutions for physically based shape modeling and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-13(7):715–729, July 1991.

[Res87] K. L. Rescorla. C1 trivariate polynomial interpolation. Computer Aided Geometric Design, 4(3):237–244, November 1987.

[RL87] Peter J. Rousseeuw and Annick M. Leroy. Robust Regression and Outlier Detection. John Wiley & Sons, December 1987.

[Rot91] Günter Rote. Computing the minimum Hausdorff distance between two point sets on a line under translation. Information Processing Letters, 38(3):123–127, 17 May 1991.

[RST94] M. Rutishauser, M. Stricker, and M. Trobina. Merging range images of arbitrarily shaped objects. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 573–580, Los Alamitos, CA, USA, June 1994. IEEE Computer Society Press.

[SB78] R. B. Schudy and D. H. Ballard. Model detection of cardiac chambers in ultrasound images. Technical Report 12, Computer Science Department, University of Rochester, 1978.

[SB79] R. B. Schudy and D. H. Ballard. Towards an anatomical model of heart motion as seen in 4-D cardiac ultrasound data. In Proceedings of the 6th Conference on Computer Applications in Radiology and Computer-Aided Analysis of Radiological Images, 1979.

[SC99] Takis Sakkalis and Ch. Charitos. Approximating curves via alpha shapes. Graphical models and image processing: GMIP, 61(3):165–176, May 1999.

[Set96] J. A. Sethian. Level Set Methods. Cambridge University Press, 1996.

[SF92] K. Subramanian and D. Fussel. A search structure based on k-d trees for efficient ray tracing. Technical Report Tx 78712-1188, The University of Texas at Austin, 1992.

[Sib80] R. Sibson. A vector identity for the Dirichlet tessellation. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 87, pages 151–155, 1980.

[Sib81] R. Sibson. A brief description of natural neighbour interpolation. pages 21–36, 1981.

[SL92] Marc Soucy and Denis Laurendeau. Multi-resolution surface modeling from multiple range views. In Conf. on Computer Vision and Pattern Recognition (CVPR '92), pages 348–353, June 1992.

[SP91] Stan Sclaroff and Alex Pentland. Generalized implicit functions for computer graphics. In Thomas W. Sederberg, editor, Computer Graphics (SIGGRAPH '91 Proceedings), volume 25, pages 247–250, July 1991.

[SPOK95] V. V. Savchenko, A. A. Pasko, O. G. Okunev, and T. L. Kunii. Function representation of solids reconstructed from scattered surface points and contours. Computer Graphics Forum, 14(4):181–188, October 1995.

[SS91] Robin Sibson and G. Stone. Computation of thin-plate splines. SIAM Journal on Scientific and Statistical Computing, 12(6):1304–1313, November 1991.

[SSGT91] G. Succi, G. Sandini, E. Grosso, and M. Tistarelli. 3D feature extraction from sequences of range data. In Proceedings of the 5th International Symposium on Robotics Research, pages 116–127, 1991.

[ST85] D. D. Sleator and R. E. Tarjan. Self-adjusting binary search trees. Journal of the ACM, 32(3):660–??, July 1985.

[Ste95] C. V. Stewart. MINPRAN: A new robust estimator for computer vision. In IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 17, pages 925–938, October 1995.

[Tau91] G. Taubin. Estimation of planar curves, surfaces and non-planar space curves defined by implicit equations, with applications to edge and range image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(11):1115–1138, November 1991.

[Tau95] Gabriel Taubin. A signal processing approach to fair surface design. In Robert Cook, editor, SIGGRAPH 95 Conference Proceedings, Annual Conference Series, pages 351–358. ACM SIGGRAPH, Addison Wesley, August 1995. Held in Los Angeles, California, 06–11 August 1995.

[TC98] Marek Teichmann and Michael Capps. Surface reconstruction with anisotropic density-scaled alpha shapes. In David Ebert, Hans Hagen, and Holly Rushmeier, editors, IEEE Visualization '98, pages 67–72. IEEE, 1998.

[Ter86] D. Terzopoulos. Regularization of inverse visual problems involving discontinuities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(4):413–424, 1986.

[TG94] G. Tarbox and S. Gottschlich. IVIS: An integrated volumetric inspection system. In Proceedings of the 2nd CAD-Based Vision Workshop, pages 220–227, 1994.

[TL94] Greg Turk and Marc Levoy. Zippered polygon meshes from range images. In Andrew Glassner, editor, Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24–29, 1994), Computer Graphics Proceedings, Annual Conference Series, pages 311–318. ACM SIGGRAPH, ACM Press, July 1994. ISBN 0-89791-667-0.

[TO98] G. Turk and J. F. O'Brien. Variational implicit surfaces. Technical Report GIT-GVU-99-15, Georgia Institute of Technology, 1998.

[TO99] Greg Turk and James F. O'Brien. Shape transformation using variational implicit functions. Computer Graphics, 33(Annual Conference Series):335–342, 1999.

[TO02] Greg Turk and James F. O'Brien. Modelling with implicit surfaces that interpolate. ACM Transactions on Graphics, 21(4):855–873, October 2002.

[TPBF87] Demetri Terzopoulos, John Platt, Alan Barr, and Kurt Fleischer. Elastically deformable models. In Maureen C. Stone, editor, Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 205–214, July 1987.

[Vel93] Remco C. Veltkamp. 3D computational morphology. In R. J. Hubbold and R. Juan, editors, Eurographics '93, pages 115–127, Oxford, UK, 1993. Eurographics, Blackwell Publishers.

[Vem87] Baba Chalapati Vemuri. Representation and recognition of objects from dense range maps. PhD thesis, University of Texas at Austin, 1987.

[VHK99] L. Varady, M. Hoffman, and E. Kovacs. Improved free-form modeling of scattered data by dynamic neural networks. Journal for Geometry and Graphics, 3:177–181, 1999.

[VMA86] B. C. Vemuri, A. Mitiche, and J. K. Aggarwal. Curvature-based representation of objects from range data. Image and Vision Computing, 4(2):107–114, May 1986.

[Wat92] D. F. Watson. Contouring: A Guide to the Analysis and Display of Spatial Data. Pergamon Press, 1992.

[YT99] G. Yngve and G. Turk. Creating smooth implicit surfaces from polygonal meshes, 1999.

[Yu99] Y. Yu. Surface reconstruction from unorganized points using self-organizing neural networks. In IEEE Visualization 99, Conference Proceedings, pages 61–64, 1999.

[Zha95] Zhengyou Zhang. Parameter estimation techniques: A tutorial with application to conic fitting. Technical Report RR-2676, Inria, Institut National de Recherche en Informatique et en Automatique, October 1995.

[ZMOK98] H. Zhao, B. Merriman, S. Osher, and M. Kang. Implicit nonparametric shape reconstruction from unorganized points using a variational level set method. Technical Report UCLA CAM Report 98-7, UCLA, 1998.

[ZO02] H. Zhao and S. Osher. Visualization, analysis and shape reconstruction of unorganized data sets. In S. Osher and N. Paragios, editors, Geometric Level Set Methods in Imaging, Vision and Graphics. Springer-Verlag, 2002.

[ZOF01] Hong-Kai Zhao, Stanley Osher, and Ronald Fedkiw. Fast surface reconstruction using the level set method. In Proceedings of the IEEE Workshop on Variational and Level Set Methods in Computer Vision, pages 194–202, Vancouver, BC, Canada, July 2001.

