
3D Recording for Archaeological Fieldwork

Marc Pollefeys, University of North Carolina, Chapel Hill
Luc Van Gool, Katholieke Universiteit Leuven and ETH Zürich
Maarten Vergauwen, Kurt Cornelis, Frank Verbiest, and Jan Tops, Katholieke Universiteit Leuven

Until recently, archaeologists have had limited 3D recording options because of the complexity and expense of the necessary recording equipment. We outline a system that helps archaeologists acquire 3D models without using equipment more complex or delicate than a standard digital camera.
In archaeology, measurement and documentation are both important, not only to record endangered archaeological sites, but also to record the excavation process itself. Annotation and precise documentation are important because evidence is actually destroyed during archaeological work. On most sites, archaeologists spend a large amount of time drawing plans, making notes, and taking photographs. Because of the publicity that accompanied some recent archaeological research projects, such as Stanford's Digital Michelangelo project1 or IBM's Pieta project,2 archaeologists are becoming aware of the advantages of using 3D visualization tools.

Archaeologists can now use the data recorded during excavations to generate virtual 3D models suited for project report presentation, restoration planning, or even digital archiving, although many issues remain unresolved. Until recently, the cost in time and money to generate virtual reconstructions remained prohibitive for most archaeological projects. At a more modest level, some archaeologists use commercially available software, such as PhotoModeler (http://www.photomodeler.com), to build simple virtual models. These models can suffice for some types of presentations, but typically lack the detail and accuracy needed for most scientific applications.

Clearly, archaeologists need more flexible measurement techniques, especially for fieldwork. Archaeologists should be able to acquire their own measurements simply and easily. Our image-based 3D recording approach offers several possibilities.3-8 To acquire a 3D reconstruction, our system lets archaeologists take several pictures from different viewpoints using a standard photo or video camera. In principle, using our system means that archaeologists need not take additional measurements of the scene to obtain a 3D model. However, a reference length can help in obtaining the reconstruction's global scale. Archaeologists can use the resulting 3D model for measurement and visualization purposes. Figure 1 shows an example of the types of pictures possible with a standard camera.

In developing our system, we regularly visited Sagalassos, a site that is one of the largest archaeological projects in the Mediterranean. The site consists of elements from a Greco-Roman period spanning more than a thousand years, from the 4th century BC to the 7th century AD. Sagalassos, one of the three great cities of ancient Pisidia, lies a few miles north of the village Aglassun in the province of Burdur, Turkey. The ruins of the city lie on the southern flank of the Aglassun mountain ridge (a part of the Taurus mountains) at an elevation of several thousand feet. Figure 2 shows Sagalassos against the mountains. A team from the University of Leuven, under the supervision of Marc Waelkens, has been excavating the area since 1990. Because of the different measurement problems, Sagalassos has been an ideal test field for our algorithms.

Image-based 3D recording

The first step in our 3D recording system recovers the relative motion between images taken consecutively. This process involves finding corresponding features between these images, that is, image points that originate from the same 3D features. The process happens in two phases. First, the reconstruction algorithm generates a reconstruction containing a projective skew, so that initially parallel lines are not parallel, angles are not correct, and distances are too long or too short. Next, using a self-calibration algorithm,3,9 our system removes these distortions, yielding a reconstruction equivalent to the original.

The reconstruction only contains a sparse set of 3D points. Although interpolation might be one solution, it yields models with poor visual quality. Therefore, the next step attempts to match all of an image's pixels with those from neighboring images so that the system can reconstruct these points.




A pixel in the image corresponds to a ray in space. Because we can predict the projection of this ray in other images from the camera's recovered pose and calibration, we can restrict the search for a corresponding pixel in other images to a single line. We also employ additional constraints, such as the assumption of a piecewise continuous 3D surface, to constrain the search even further.

It's possible to warp the images so that the search range coincides with the horizontal scan-lines, letting us use an efficient stereo algorithm to compute an optimal match for the whole scan-line at once.6 Using this algorithm, we can obtain a depth estimate (the distance from the camera to the object surface) for almost every pixel of an image. Fusing all the images' results gives us a complete surface model. To achieve a photorealistic result, we can apply the images used in the reconstruction as texture maps. Figure 3 illustrates the four steps of the process. The following sections describe the steps in more detail.

Relating images

Starting from a collection of images or a video sequence, our system's first step relates the different images to each other. A restricted number of corresponding points helps determine the images' geometric relationships. Our system selects the feature points suited for matching or tracking. Depending on the type of image data (such as video or still pictures), our system tracks the feature points to obtain several potential correspondences. From these correspondences, we compute the multiview constraints.


Figure 1. Reconstruction of a corner of the Roman baths at the Sagalassos archaeology site: our system used the six images (a) and automatically created the model (b).

Figure 2. Overview of the Sagalassos site.

However, the set of corresponding points can be, and almost certainly will be, contaminated with several wrong matches. In light of this potential trouble, a traditional least-squares approach will fail; we therefore use a more robust method. The system uses the multiview constraints to guide the search for additional correspondences, which it can in turn employ to refine the results for the multiview constraints.
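As an illustration of this matching step (not our exact implementation), the following Python sketch uses OpenCV's ORB detector, brute-force matching, and a RANSAC fit of the fundamental matrix to reject the wrong matches that would break a plain least-squares estimate. The file names, feature counts, and thresholds are placeholder values.

```python
import cv2
import numpy as np

# Detect and match features between two views, then keep only the matches
# consistent with a single epipolar geometry (robust RANSAC estimation).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Cross-checked brute-force matching keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Robustly estimate the two-view constraint; the mask flags inlier matches.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(f"{int(inlier_mask.sum())} of {len(matches)} matches kept as inliers")
```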

Structure and motion recovery

Our system uses the relation between the views and the correspondences between the features to retrieve the scene's structure and the camera's motion. Our approach doesn't depend on the initialization because we carry out all measurements in the images, using reprojection errors instead of 3D errors. The system selects two images to set up an initial projective reconstruction frame and then reconstructs matching feature points through triangulation. The system then refines the initial reconstruction and extends it. By sequentially applying the same procedure, the system can compute the structure and motion of the whole sequence. Figure 4 illustrates the pose-estimation procedure.

The system can refine these results through a global least-squares minimization of all reprojection errors. Efficient bundle-adjustment techniques work well for this process. The ambiguity is then restricted further through self-calibration. Finally, the system carries out a second bundle adjustment, taking the self-calibration into account, to obtain an optimal estimation of the images' structure and motion.
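The sequential step can be sketched with standard building blocks: triangulate matched points from two posed views, then estimate each new view's pose from structure-to-image matches. The snippet below is only a hedged illustration; the calibration matrix K, the poses, and the point arrays are assumed inputs, and OpenCV routines stand in for our own.

```python
import cv2
import numpy as np

def triangulate(K, R1, t1, R2, t2, pts1, pts2):
    """Triangulate Nx2 point matches from two views with known pose.
    R1, R2 are 3x3 rotations; t1, t2 are 3x1 translation columns."""
    P1 = K @ np.hstack([R1, t1])                # 3x4 projection, view 1
    P2 = K @ np.hstack([R2, t2])                # 3x4 projection, view 2
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # homogeneous 4xN
    return (X_h[:3] / X_h[3]).T                 # Nx3 Euclidean points

def estimate_new_view(K, X_3d, pts_new):
    """Robust pose of a new image from 3D-to-2D (structure-to-image) matches."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        X_3d.astype(np.float32), pts_new.astype(np.float32), K, None,
        reprojectionError=2.0)
    R, _ = cv2.Rodrigues(rvec)                  # rotation vector -> 3x3 matrix
    return R, tvec, inliers
```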

Dense surface estimation

To obtain a more detailed model of the observed surface, we use a dense-matching technique. The system can use the structure and motion obtained previously to constrain the correspondence search. Because we compute the calibration between successive image pairs, we can exploit the epipolar constraint that restricts the correspondence search to a one-dimensional search range. The system warps image pairs so that epipolar lines coincide with the image scan-lines. For this purpose, we use a rectification scheme5 that deals with arbitrary relative camera motion. We then reduce the correspondence search to a matching of the image points along each image scan-line, which increases the algorithm's computational efficiency.

Figure 5 shows an example of a rectified stereo pair. The system has located all corresponding points on the same horizontal scan-line in both images. In addition to the epipolar geometry, we can exploit other constraints, such as the ordering of neighboring pixels and the match's bidirectional uniqueness. We use these constraints to guide the correspondence search toward the most probable scan-line match using dynamic programming.6 The matcher searches, at each pixel in one image, for the maximum normalized cross-correlation in the other image by shifting a small measurement window along the corresponding scan-line. The algorithm's pyramidal estimation scheme deals with large disparity ranges, but the system limits the disparity search range according to observed feature disparities from the previous reconstruction stage.
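To make the rectification-plus-scan-line-matching idea concrete, here is a rough sketch that uses OpenCV stand-ins rather than our own polar rectification and dynamic-programming matcher: cv2.stereoRectify computes transforms that make epipolar lines horizontal, remap applies them, and a semiglobal block matcher produces a disparity map that converts to per-pixel depth. K, dist, R, t, img_l, and img_r are assumed to come from the previous stages; the disparity range and block size are illustrative.

```python
import cv2
import numpy as np

# Warp a calibrated image pair so epipolar lines coincide with scan-lines,
# then search for correspondences along those scan-lines.
h, w = img_l.shape[:2]
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist, (w, h), R, t)

maps_l = cv2.initUndistortRectifyMap(K, dist, R1, P1, (w, h), cv2.CV_32FC1)
maps_r = cv2.initUndistortRectifyMap(K, dist, R2, P2, (w, h), cv2.CV_32FC1)
rect_l = cv2.remap(img_l, maps_l[0], maps_l[1], cv2.INTER_LINEAR)
rect_r = cv2.remap(img_r, maps_r[0], maps_r[1], cv2.INTER_LINEAR)

# The disparity search range can be bounded by the feature disparities
# observed during the sparse reconstruction, as described above.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                blockSize=7)
disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0

# Reproject disparities to 3D points; the z component is the depth map.
points_3d = cv2.reprojectImageTo3D(disparity, Q)
depth = points_3d[:, :, 2]
```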



Figure 3. Overview of our image-based 3D recording approach: input sequence -> relating images -> feature matches -> structure and motion recovery -> 3D features and cameras -> dense matching -> dense depth maps -> 3D model building -> 3D surface model.

Figure 4. Estimation of a new view using inferred structure-to-image matches.

Figure 5. Example of a rectified stereo pair.

The pairwise disparity estimation lets us compute image-to-image correspondence between adjacent rectified image pairs and independent depth estimates for each camera viewpoint. We obtain an optimal joint estimate by fusing all independent estimates into a common 3D model using a Kalman filter. The system can perform the fusion economically through controlled correspondence linking,4 which combines the advantages of small- and wide-baseline stereo and provides a dense depth map by avoiding most occlusions. Multiple viewpoints increase the depth resolution, while small local baselines simplify the matching.
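The fusion idea can be illustrated with a toy per-pixel Kalman update (our controlled correspondence linking is considerably more involved): each viewpoint contributes a depth estimate with a variance, and the update combines them into a single, lower-variance depth map. The array layout and the NaN convention below are assumptions made for this sketch.

```python
import numpy as np

def fuse_depth_maps(depths, variances):
    """depths, variances: lists of HxW float arrays; NaN marks missing pixels."""
    fused = np.full_like(depths[0], np.nan)
    fused_var = np.full_like(depths[0], np.inf)
    for z, var in zip(depths, variances):
        valid = ~np.isnan(z)
        # Initialize pixels that have no estimate yet.
        first = valid & np.isnan(fused)
        fused[first], fused_var[first] = z[first], var[first]
        # Scalar Kalman measurement update for pixels seen before.
        both = valid & ~np.isnan(fused) & ~first
        gain = fused_var[both] / (fused_var[both] + var[both])
        fused[both] += gain * (z[both] - fused[both])
        fused_var[both] *= 1.0 - gain
    return fused, fused_var
```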

Building virtual models

Our dense structure and motion recovery approach yields all the necessary information to build textured 3D models. We approximate the 3D surface with a triangular mesh to reduce geometric complexity and tailor the model to the computer graphics visualization system's requirements. A simple approach consists of overlaying a 2D triangular mesh on one of the images, then building a corresponding 3D mesh by placing the triangle vertices in 3D space according to the values found in the corresponding depth map. We use the image itself as the texture map. If no depth value is available or the confidence is too low, our system doesn't reconstruct the corresponding triangles. This approach works well on dense depth maps obtained from multiple stereo pairs.
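A bare-bones version of this meshing step might look as follows; the grid step, the confidence threshold, and the intrinsic matrix K are illustrative assumptions, not values from our system.

```python
import numpy as np

def depth_map_to_mesh(depth, confidence, K, step=4, min_conf=0.5):
    """Overlay a regular 2D triangle grid on the image and lift it to 3D."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(0, w, step), np.arange(0, h, step))
    z = depth[vs, us]
    # Back-project each grid vertex along its viewing ray to its depth value.
    verts = np.stack([(us - cx) * z / fx, (vs - cy) * z / fy, z], axis=-1)
    ok = confidence[vs, us] >= min_conf

    faces = []
    rows, cols = us.shape
    idx = np.arange(rows * cols).reshape(rows, cols)
    for r in range(rows - 1):
        for c in range(cols - 1):
            # Keep the two triangles of a grid cell only if every corner has
            # a reliable depth value, as described in the text.
            if ok[r, c] and ok[r, c + 1] and ok[r + 1, c] and ok[r + 1, c + 1]:
                a, b = idx[r, c], idx[r, c + 1]
                d, e = idx[r + 1, c], idx[r + 1, c + 1]
                faces.append((a, b, d))
                faces.append((b, e, d))
    # The image itself serves as the texture map; texture coordinates are
    # simply the vertices' pixel positions.
    uv = np.stack([us, vs], axis=-1).reshape(-1, 2)
    return verts.reshape(-1, 3), np.array(faces), uv
```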

A multiview linking scheme can enhance the texture itself. The system computes a median or robust mean of the corresponding texture values to discard imaging artifacts such as sensor noise, specular reflections, and highlights. To reconstruct more complex shapes, the system must combine multiple depth maps. Because all depth maps reside in a single metric frame, registration is not an issue. To integrate the multiple depth maps into a single surface representation, we use a volumetric technique.10 Alternatively, to render new views from similar viewpoints, we use image-based approaches11 that avoid the difficult problem of obtaining a consistent 3D model by using view-dependent texture and geometry. Doing so also helps us take into account more complex visual effects, such as reflections and highlights.
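In its simplest form, the texture enhancement reduces to a per-pixel median over the views that observe a surface point; the stacked-array layout below is an assumption made for brevity.

```python
import numpy as np

def robust_texture(samples):
    """samples: (n_views, H, W, 3) float array, NaN where a view has no data.
    A per-pixel median discards outliers such as highlights or sensor noise."""
    return np.nanmedian(samples, axis=0)
```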

Applications to archaeological fieldwork

The techniques described here have many applications in the field of archaeology. The on-site acquisition procedure consists of recording an image sequence of the scene. So that the algorithms can yield good results, the viewpoint changes between consecutive images should not exceed 5 to 10 degrees. An example is the Dionysus statue found in Sagalassos on the upper market square. The statue is now located in the garden of the museum in Burdur.

It was simple to record a one-minute video of the statue. Bringing in more advanced equipment, such as a laser range scanner, would not only be logistically more complicated but would also require special authorization. Figure 6 illustrates different steps of the reconstruction process. We obtained the 3D model from a single depth map. We could have obtained a more complete and accurate model by combining multiple depth maps, and we could have obtained a smoother look for the shaded model by filtering the 3D mesh in accordance with the standard deviations obtained as a byproduct of the depth computation. This kind of refinement matters less when the model is texture mapped.

Figure 7 shows a second example, a Medusa head located on the entablature of a fountain. We obtained the 3D model from a short video sequence and used a single depth map to reconstruct it. Errors in the camera motion and calibration computations can result in a global bias on the reconstruction. From the results of the bundle adjustment, we estimate this error to be of the order of just a few millimeters for points on the reconstruction. The depth computations indicate that 90 percent of the reconstructed points have a relative error of less than 1 mm. The stereo correlation uses a window that covers a small patch of the object, so the measured depth will typically correspond to the dominant visual feature within that patch.

An important advantage of our approach compared to more interactive techniques12 is that it can deal with more complex objects. Compared to non-image-based techniques, we can extract surface texture directly from the images, resulting in a much higher degree of realism and contributing to the authenticity of the reconstruction. Archaeologists can use reconstructions obtained with this system as scale models on which they can carry out measurements or plan restorations.


Figure 6. 3D reconstruction of Dionysus showing (a) one of the original video frames, (b) the corresponding depth map, (c) a shaded view of the 3D reconstruction, and (d) a view of the 3D model textured with the original images.


A disadvantage of our technique is that it can't directly capture the photometric properties of an object. It's therefore not immediately possible to rerender the 3D model under different lighting. We could possibly combine recent work13 on recovering the radiometry of multiple images with our approach so that we could decouple reflectance and illumination. However, doing so would require us to record the scene under several different lighting conditions.

Recording 3D stratigraphy

An important aspect of archaeological annotation and documentation is stratigraphy, a record of the different layers of soil that correspond to different time periods in an excavated sector. Because of practical limitations, stratigraphy is often only recorded for certain soil slices, not for the whole sector. Our technique allows a better approach to this documentation problem: we can generate a complete 3D model of the excavated sector for every layer. Because the technique only involves taking a series of pictures, it does not slow down the progress of the archaeological work.

In addition, our system enables modeling all found artifacts separately and including the models in the final 3D stratigraphy, which makes it possible to use the 3D record as a visual database. For example, we recorded the excavations of an ancient Roman villa at Sagalassos using our technique. Figure 8 shows several layers of the excavation's 3D model. It took about one minute per layer to acquire the images at the site. From the results of the bundle adjustment, we estimate the global error to be of the order of 1 cm for points on the reconstruction. Similarly, the depth computations indicate that the depth error of most of the reconstructed points should be within 1 cm. The correlation window corresponds to an area of approximately five square centimeters in the scene, which means that some small details might not appear in the reconstruction, but this accuracy level is more than sufficient to satisfy the requirements of the archaeologists. To obtain a single 3D representation for each stratigraphic layer, we used a volumetric integration approach.


Figure 7. 3D reconstruction of a Medusa head showing (a) one of the original video frames, (b) the corresponding depth map, (c) a shaded view of the 3D model, and (d) a textured view of the 3D model.

Figure 8. 3D stratigraphy showing the excavation of a Roman villa at three different moments. The left image shows a front view of three stratigraphic layers; the right image shows a top view of the first two layers.

Construction and reconstruction

Our technique also offers many advantages in terms of generating and testing construction hypotheses. Ease of acquisition and the level of detail we can obtain make it possible to reconstruct every building block separately. Archaeologists could then interactively verify different construction hypotheses on a virtual reconstruction site. We could even use registration algorithms14,15 to automate this process. Figure 9 shows two segments of a broken column. The whole monument contains 16 columns that were all broken into several pieces by an earthquake. Because each piece can weigh several hundred kilograms, trying to fit the pieces together is very difficult. Traditional drawings also do not offer a proper solution.
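As a rough illustration of how such registration could work (the cited methods14,15 are more sophisticated than this), a generic point-to-point ICP loop alternates nearest-neighbor matching of the fracture-surface point clouds with a closed-form rigid alignment. The point-cloud inputs and the iteration count below are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50):
    """Rigidly align Nx3 point cloud src to Mx3 point cloud dst."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)               # nearest neighbors in dst
        matched = dst[idx]
        # Closed-form rigid alignment of the matched pairs (Kabsch/SVD).
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (matched - mu_d))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step   # compose with running pose
    return R, t   # maps src into the dst frame: x -> R @ x + t
```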

Our approach's flexibility lets us use existing photo or video archives to reconstruct a virtual site. This application is suited for monuments or sites destroyed through war or natural disaster. We illustrated the feasibility of this type of approach with a reconstruction of the ancient theater of Sagalassos based on a video sequence filmed by Belgian TV as part of a documentary on Sagalassos. From the 30-second helicopter shot, we extracted about one hundred images. Because of the motion in the images, we could only use fields, not frames, restricting the vertical resolution to 288 pixels. Figure 10 shows three images from the sequence. Figure 11 shows the reconstruction of the feature points together with the recovered camera poses.

Obtaining a virtual reality model for a whole site consists of taking a few overview photographs from a distance. Because our technique is independent of scale, it can yield an overview model of the whole site; the only difference is the distance needed between two camera poses. Figure 12 shows an example of the results obtained for Sagalassos. We created the model from nine images taken from a hillside near the excavation site. It's a relatively straightforward process to extract a digital terrain map from the global site reconstruction. We could achieve absolute localization by localizing as few as three reference points in the 3D reconstruction.
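Because the reconstruction lives in an arbitrary metric frame, absolute localization amounts to estimating a similarity transform (scale, rotation, translation) from the three or more surveyed reference points. The sketch below uses the standard Umeyama/Procrustes solution as an illustration; it is not our field procedure.

```python
import numpy as np

def absolute_orientation(model_pts, world_pts):
    """Fit world ~= s * R @ model + t from Nx3 matched points (N >= 3)."""
    mu_m, mu_w = model_pts.mean(0), world_pts.mean(0)
    Pm, Pw = model_pts - mu_m, world_pts - mu_w
    # Cross-covariance between the two centered point sets.
    U, S, Vt = np.linalg.svd(Pw.T @ Pm / len(model_pts))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                                            # rotation
    s = np.trace(np.diag(S) @ D) * len(model_pts) / (Pm ** 2).sum()  # scale
    t = mu_w - s * R @ mu_m                                   # translation
    return s, R, t
```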

The problem is that this kind of overview model is too coarse for use in realistic walkthroughs or for close-up views of monuments. For these purposes, archaeologists would need to integrate more detailed models into the overview model by taking additional image sequences for all the interesting areas on the site. The system would use these additional images to generate reconstructions of the site at different scales, going from a global reconstruction of the whole site to a detailed reconstruction for every monument. These reconstructions thus naturally fill in the different detail levels. Seamlessly merging reconstructions obtained at different scales remains an issue for further research.


Figure 9. (a) Two images of a broken pillar. (b) Orthographic views of the matching surfaces generated from the obtained 3D models; the surface on the right is observed from the inside of the column.

Figure 10. Three images from the helicopter shot of the ancient theater of Sagalassos.

Figure 11. The reconstructed feature points and camera poses recovered from the helicopter shot.

Figure 12. Overview model of Sagalassos.


Fusing real and virtual

Another potentially interesting application is combining recorded 3D models with other model types. In the case of Sagalassos, we translated some reconstruction drawings to CAD models16 and integrated them with our Sagalassos models. This reconstruction is available at http://www.esat.kuleuven.ac.be/sagalassos/ as an interactive virtual reality application that lets users take a virtual tour of Sagalassos.17

Another challenging application consists of seamlessly integrating virtual objects into video. In this case, the ultimate goal is to make it impossible to differentiate between real and virtual objects. But to do this, we need to overcome several problems first. Among them are the rigid registration of virtual objects into the real environment, the mutual occlusion of real and virtual objects, and the extraction of the real environment's illumination distribution so that virtual objects can be rendered with the same illumination. Accurate registration of virtual objects into a real environment, as shown in Figure 13, is a challenging problem; systems that fail to achieve it will also fail to give the user a real-life impression of the augmented outcome.

Figure 13. Architect contemplating a virtual reconstruction at Sagalassos.

Because our approach does not use markers or a priori knowledge of the scene or the camera, it lets us deal with video footage of unprepared environments or archived video footage. More details on our approach can be found elsewhere.18 To successfully insert a large virtual object in an image sequence, the 3D structure should not be distorted. For this purpose, it's important to use a camera model that takes nonperspective effects into account and to perform a global least-squares minimization of the reprojection error through a bundle adjustment.
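A schematic reprojection residual for such a bundle adjustment might look as follows; the parameter layout, the single radial distortion coefficient standing in for the nonperspective effects, and the use of SciPy's least_squares are illustrative assumptions rather than our implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def reproj_residuals(params, points_3d, observed_2d):
    """Residuals for one camera: rvec (3), tvec (3), f, cx, cy, k1."""
    rvec, tvec = params[0:3], params[3:6]
    f, cx, cy, k1 = params[6:10]
    # Rodrigues rotation of the Nx3 points into the camera frame.
    theta = np.linalg.norm(rvec)
    k = rvec / theta if theta > 0 else np.zeros(3)
    pts = (points_3d * np.cos(theta)
           + np.cross(k, points_3d) * np.sin(theta)
           + np.outer(points_3d @ k, k) * (1 - np.cos(theta))) + tvec
    x, y = pts[:, 0] / pts[:, 2], pts[:, 1] / pts[:, 2]
    r2 = x * x + y * y
    d = 1 + k1 * r2                      # simple radial distortion factor
    proj = np.stack([f * d * x + cx, f * d * y + cy], axis=1)
    return (proj - observed_2d).ravel()

# Example: refine an initial estimate x0 (length 10) for one view.
# result = least_squares(reproj_residuals, x0, args=(points_3d, observed_2d))
```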

Conclusions

Our approach uses several different components that gradually retrieve all the information necessary to construct virtual models from images. There are multiple advantages to using our 3D modeling technique: the on-site acquisition time is brief, the construction of the models is automatic, and the generated models are realistic. Our technique supports some promising applications, such as recording 3D stratigraphy, generating and verifying construction hypotheses, reconstructing 3D scenes based on archive photographs or video footage, and integrating virtual reconstructions with archaeological remains in video footage.

Our future research plans consist of increasing the reliability and flexibility of our approach. One important topic is the development of wide-baseline matching techniques so that pictures can be taken further apart. Another aspect consists of being able to take advantage of auto-exposure modes without degrading the visual quality of the models. In terms of applications, we are exploring possibilities in different fields, including architectural conservation, geology, forensics, movie special effects, and planetary exploration.

Acknowledgments

We thank Marc Waelkens and his team for making it possible for us to do experiments at the Sagalassos (Turkey) site. We also thank the FWO project G.0223.01, the IST-1999-20273 project 3DMURALE, and the NSF project IIS 0237533 for their financial support.

References

1. M. Levoy et al., "The Digital Michelangelo Project: 3D Scanning of Large Statues," Proc. Siggraph, ACM, 2000, pp. 131-144.
2. H. Rushmeier et al., "Acquiring Input for Rendering at Appropriate Levels of Detail: Digitizing a Pieta," Proc. 9th Eurographics Rendering Workshop, Springer-Verlag, 1998, pp. 81-92.
3. M. Pollefeys, R. Koch, and L. Van Gool, "Self-Calibration and Metric Reconstruction in Spite of Varying and Unknown Internal Camera Parameters," Int'l J. Computer Vision, vol. 32, no. 1, 1999, pp. 7-25.
4. R. Koch, M. Pollefeys, and L. Van Gool, "Multi-Viewpoint Stereo from Uncalibrated Video Sequences," Proc. European Conf. Computer Vision, Springer-Verlag, 1998, pp. 55-71.
5. M. Pollefeys, R. Koch, and L. Van Gool, "A Simple and Efficient Rectification Method for General Motion," Proc. ICCV, IEEE Computer Society Press, 1999, pp. 496-501.
6. G. Van Meerbergen et al., "A Hierarchical Symmetric Stereo Algorithm Using Dynamic Programming," Int'l J. Computer Vision, vol. 47, nos. 1-3, 2002, pp. 275-285.
7. M. Pollefeys et al., "Virtual Models from Video and Vice-Versa," Proc. Int'l Symp. Virtual and Augmented Architecture, Springer-Verlag, 2001, pp. 11-22.
8. M. Pollefeys, "Obtaining 3D Models with a Handheld Camera," Siggraph Course, ACM, 2001, course notes CD-ROM; http://www.cs.unc.edu/~marc/tutorial/.
9. M. Pollefeys, F. Verbiest, and L. Van Gool, "Surviving Dominant Planes in Uncalibrated Structure and Motion Recovery," Proc. European Conf. Computer Vision, LNCS 2351, Springer-Verlag, 2002, pp. 837-851.




10. B. Curless and M. Levoy, "A Volumetric Method for Building Complex Models from Range Images," Proc. Siggraph 96, ACM, 1996, pp. 303-312.
11. R. Koch, B. Heigl, and M. Pollefeys, "Image-Based Rendering from Uncalibrated Lightfields with Scalable Geometry," LNCS 2032, Springer-Verlag, 2001, pp. 51-66.
12. P. Debevec, C. Taylor, and J. Malik, "Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach," Proc. Siggraph 96, ACM, 1996, pp. 11-20.
13. Q.-T. Luong, P. Fua, and Y. Leclerc, "The Radiometry of Multiple Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 1, Jan. 2002, pp. 19-33.
14. Y. Chen and G. Medioni, "Object Modeling by Registration of Multiple Range Images," Proc. IEEE Int'l Conf. Robotics and Automation, IEEE CS Press, 1991, pp. 2724-2729.
15. J. Wyngaerd et al., "Invariant-Based Registration of Surface Patches," Proc. Int'l Conf. Computer Vision, IEEE CS Press, 1999, pp. 301-306.
16. F. Martens et al., "Computer-Aided Design and Archaeology at Sagalassos: Methodology and Possibilities of Reconstructions of Archaeological Sites," Virtual Reality in Archaeology, J. Barcelo, M. Forte, and D. Sanders, eds., ArcheoPress, 2000, pp. 205-212.
17. M. Pollefeys et al., "A Guided Tour to Virtual Sagalassos," Proc. 2nd Int'l Symp. Virtual Reality, Archaeology, and Cultural Heritage, ACM Siggraph, 2001, pp. 213-218.
18. K. Cornelis et al., "Augmented Reality from Uncalibrated Video Sequences," LNCS 2018, Springer-Verlag, 2001, pp. 150-167.

Marc Pollefeys is an assistant professor of computer vision in the department of computer science at the University of North Carolina, Chapel Hill. His research interests include developing flexible approaches to capture visual representations of real-world objects, scenes, and events. He received a PhD in electrical engineering from the Katholieke Universiteit Leuven.

Luc Van Gool is professor of computer vision at the Katholieke Universiteit Leuven and the Swiss Federal Institute of Technology. His research interests include 3D reconstruction, animation, texture synthesis, object recognition, robot vision, and motion capture. He received a PhD in electrical engineering from the Katholieke Universiteit Leuven.

Maarten Vergauwen is a PhD student in the electrical engineering department at the Katholieke Universiteit Leuven. His research interests include 3D reconstruction from images, camera calibration, and vision in space applications. He received an MS in electrical engineering from the Katholieke Universiteit Leuven.

Kurt Cornelis is a PhD student in the electrical engineering department at the Katholieke Universiteit Leuven. His research interests include 3D reconstruction, recognition, and the use of computer vision in augmented reality. He received an MS in electrical engineering from the Katholieke Universiteit Leuven.

Frank Verbiest is a PhD student in the department of electrical engineering at the Katholieke Universiteit Leuven. His research interests include computer vision, especially 3D reconstruction from images. He received an MS in electrical engineering and an MS in artificial intelligence from the Katholieke Universiteit Leuven.

Jan Tops is a research assistant in the department of electrical engineering at the Katholieke Universiteit Leuven. His research interests include computer vision, 3D modeling, and visualization. He received an MS in computer science from the Katholieke Universiteit Leuven.

Readers may contact Marc Pollefeys at the Univ. of North Carolina at Chapel Hill, Dept. of Computer Science, CB #3175, 205 Sitterson Hall, Chapel Hill, NC 27599-3175; [email protected].


