This lecture was presented at the 3D Digital Documentation Summit held July 10-12, 2012 at the Presidio, San Francisco, CA

Close Range Photogrammetry vs. 3D Scanning for Archaeological Documentation

The proliferation of terrestrial laser scanners on the market over the past
few years has been accompanied by a rapid adoption of the technology by
archaeologists. This increased archaeological use has been accompanied by a growing number of arguments against the use of 3D scanning as a practical means of documentation by archaeologists, preservationists, conservators and
architects. More recently the introduction of several affordable and/or
free close-range photogrammetric software packages that require minimal
processing labor has generated much discussion regarding how useful such a
cheap and easy 3D capture solution is for archaeologists.  When confronted
with multiple options for 3D documentation, several questions arise: How
much can be gained from using a $150,000 laser scanner over photogrammetry
with a digital camera and free processing software when documenting an
excavation or ceramic vessel? Can a mid-range scanner capture sufficient
detail on rock art for general documentation? How does the accuracy and
repeatability of these newer close-range photogrammetry options compare
with 3D scanning? Many factors can influence which technology is most
appropriate for a given application and when a combined approach may be
more productive. This presentation addresses these questions and compares
and contrasts data collection and processing for photogrammetry and 3D
scanning documentation in archaeology for both site and object scale case
examples.

A variety of non-metric, close-range photogrammetric data capture methods
(e.g. calibrated vs. non-calibrated, wide-angle vs. normal lens, etc.) will
be reviewed through a comparison of at least three photogrammetric software
packages including Eos Systems’ PhotoModeler Scanner, Autodesk’s 123D Catch and Agisoft’s PhotoScan. The resulting data sets will be compared to scan
data of the same objects as captured by a Leica C10 mid-range laser scanner
and the Breuckmann SmartScan HE close-range scanner. Test data will include
rock art and architecture from Knowth, Ireland; Defiance House Ruin, United
States; architectural sculptures from El Zotz, Guatemala; as well as
controlled lab tests.

In addition, these 3D documentation methods will be compared to traditional documentation in terms of cost and potential products/deliverables, and the advantages and drawbacks of the data produced by the two methods will also be considered. While 3D data sets are of course vastly richer than line drawings
or photographs, the sheer immensity of a full-resolution point cloud is
burdensome to process and manipulate, and includes extraneous information which can obscure, rather than clarify, the features that a line drawing would emphasize. Thus, vector extraction techniques for the rapid creation of
digital line drawings from large point clouds will be discussed.  We will
close with a summary of 3D scanning and photogrammetry metadata standards
as developed by the Center for Advanced Spatial Technologies for the
Digital Archaeological Record (tDAR) and the Archaeology Data Service (ADS).

Transcript

Good afternoon. I’m on East Coast time, or actually Central time. We’ll be talking about close range photogrammetry versus 3-D scanning for archaeological documentation. The paper is by Katie Simon, but she could not be here today, and Fred Limp will be presenting the paper for her.

Fred is the Leica Geosystems Chair and University Professor at the University of Arkansas. He is the Director Emeritus of the Center for Advanced Spatial Technologies and is currently serving as the President of the Society for American Archaeology.

Limp:    Well, good afternoon. Katie is in Cyprus on a project and Rachel is in Italy on a project, and so I’m here with you today looking forward to reviewing with you some of the really interesting… I have no idea what’s happened.

Okay, we are talking today about a variety of different methodologies, and as was pointed out yesterday quite appropriately, it’s just a tool. But remember, if you have a hammer, everything looks like a nail, and so one of the questions we need to ask is which tool, when do we use it, and how do I, perhaps as a person not intimately involved with the technology, understand which selections to make. What we need, I would suggest, is the Whole 3-D Catalog. Since I happen to be here in San Francisco, that seems a good analogy, with my apologies to Stewart Brand. Some of you may know what that means and some of you may not. In any event, what I’m about to talk about next is a little bit of a start towards that.

Before we get into it, just to give you a little bit of our bona fides so that you have a sense of whether or not we actually know what we’re talking about: the group that I’m representing, that Katie and Rachel are part of, is a research unit at the University of Arkansas. We primarily focus on geoinformatics, geomatics and computational sciences; heritage applications are a portion of our interest, probably about a fifth, something like that. We also look at issues of interoperability, archiving and what have you, and we have a reasonably sizable staff working on that.

Going back to a little bit of our background in photogrammetry, I realized that in 1996 we received an NCPTT grant to do close range photogrammetry, which back then was not quite the easy task that it is today, and we were actually modifying a Leica (then Zeiss) image station, which was used for aerial photogrammetry, for close range work. By the way, the report is still there.

In terms of our scanning background, our first instrument, purchased in 2003, not terribly long ago, was an Optech ILRIS, which, by the way, if you are interested in large things, remains today one of the best long range terrestrial systems; it easily goes out to 800 meters and can do a kilometer to a kilometer and a half in good conditions. We’ve added a number of other systems since then. We work with a broad range of software, obviously Cyclone, but also Rapidform, Optocat, PolyWorks and a variety of others, plus a lot of open source, MeshLab and others as well. And there’s one little thing I put at the bottom: a key part, in my opinion, when we are looking at laser scanning and photogrammetry, is control.

We have had a lot of discussions about projects. Most of those appear to be in site coordinates, but things are actually out there in the real world, in geodetic space, and I would suggest that one of the things we need to think about is putting actual geodetic control into our various activities. Now that raises a little bit of a problem. Some of the software doesn’t like big numbers. Those of you involved in GIS for a long time will remember when we always had 8-bit data and had to figure out how to deal with that. It still seems that a lot of the point cloud software doesn’t like big, long numbers, so geodetic coordinates can be a problem. Nevertheless, I would submit that survey control is a key aspect, whether it’s photogrammetry or laser scanning.
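One common workaround for the big-number problem is to shift the data to a local origin before processing and record that origin so the result can be restored to geodetic space later. Below is a minimal sketch of that idea in Python with NumPy; the coordinate values, file-free workflow and rounding choice are illustrative assumptions, not part of the original presentation.

    import numpy as np

    # Shift geodetic (e.g., UTM) coordinates to a local origin so that software
    # which stores point clouds in single precision does not lose millimetres
    # to floating-point truncation. Keep the origin with the project metadata.
    def to_local(points, origin=None):
        pts = np.asarray(points, dtype=np.float64)
        if origin is None:
            # Round the per-axis minimum down so local values stay small and positive.
            origin = np.floor(pts.min(axis=0))
        return pts - origin, origin

    # Example: UTM-scale eastings/northings carrying millimetre detail.
    pts = np.array([[634512.437, 4192867.912, 1853.204],
                    [634513.881, 4192869.105, 1853.611]])
    local, origin = to_local(pts)
    print(local)   # small numbers, safe in 32-bit floats
    print(origin)  # record this offset so the data can be restored later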

This is sort of how we look at the whole area of 3-D development. We talk about a 3-D ecosystem with a variety of different pieces and parts, and I want to mention two in particular. We haven’t been talking much about semantic decomposition, that is, the extraction of actually useful information out of these unorganized data sets, but it is a very important task. It’s really hard. It’s a computationally interesting problem and we really need to deal with it. The other one is the archive. We’ve talked about that, and I want to come back to it a little later on.

If we look at a very simplified 3-D data pipeline, we see that laser scanning (I use that term with air quotes, always) and photogrammetry are actually very similar in many respects. And it’s the measure, draw, vector, analyze product that we’re looking at when we’re talking about this entire workflow; we’re trying to get somewhere. Now if we break that down, and this is just an example of a close range photogrammetry workflow, there are an awful lot of decision points, I don’t know if you can see them all, and a lot of parameters. I want to make this point very clear: we’ve talked about acquisition, and obviously acquisition is very important and there are parameters that you set during acquisition, but there are also an enormous number of parameters in these software packages that can alter the results during processing. However, we’re lucky. Generally speaking, the defaults are adequate in many instances. On the other hand, in some instances they aren’t, so we would encourage really thinking through the various parameter selections.

Now, with that very superficial background, how do you decide? How do you choose between the two alternatives? Well, here are a few parameters you might look at. How much does it cost? How far away are you working? What’s the depth of the information you’re trying to capture? Is it lit? Obviously photogrammetry doesn’t work in the dark; laser scanning in many instances actually does. But what are the goals? The one at the bottom is really, I think, the key essential point, and we want to look at that as we go forward: how do we measure whether or not these particular methods achieve our goal?

So before I actually get into that, I want to make one point here which I find kind of interesting. What you’re seeing here is basically a semi-real-time vector acquisition of a 3-D data set. It turns out it’s a structure at Defiance House, which is actually a Park Service data set. We did a pre- and post-stabilization metric analysis, and so essentially you can bring the data back, you can vectorize it, and then you have 3-D vectors, which are different than 2-D vectors. Now obviously it’s possible to flatten these into some sort of planimetric data, but I just want to point out that two-dimensional data loses its three-dimensional character, and particularly when you’re looking at things like stabilization, it may actually be movement in that surface that is the difference between the two condition states. Something to be thinking about as we go forward.
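A tiny numeric illustration of that point, using made-up coordinates rather than the Defiance House data: a wall face that shifts 8 mm outward between two condition states shows a clear 3-D displacement, but the difference vanishes once the vectors are flattened to plan.

    import numpy as np

    # Hypothetical pre- and post-stabilization vertices (metres); the only
    # change is an 8 mm movement perpendicular to the drawing plane (Z).
    pre  = np.array([[0.0, 0.0, 0.000], [1.0, 0.0, 0.000], [1.0, 2.0, 0.000]])
    post = np.array([[0.0, 0.0, 0.008], [1.0, 0.0, 0.008], [1.0, 2.0, 0.008]])

    print(np.linalg.norm(post - pre, axis=1))          # 3-D displacement: 0.008 m
    print(np.linalg.norm((post - pre)[:, :2], axis=1)) # planimetric displacement: 0.0 m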

So, here’s the question. I have two data sets, two ways of accomplishing a particular goal. How do I compare them? We’ve seen one excellent example: Tom’s demonstration of the way in which the metrics of the two data sets were compared. That’s the traditional geomatics or surveying way to look at three-dimensional data; we look at how far apart they are. That’s a perfectly valid and perfectly useful way to go about it. But in fact it may obscure or overlook information of significance to a particular disciplinary focus. For example, stylistic components of a particular object may look metrically the same in two different data sets but may be quite different. And so we’re finding that computer vision and gaming metrics can be very effective measures of whether the data is good enough, because that’s the fundamental question we’re asking: is the data that I’m acquiring good enough to do “x”? Well, what is “x”? And if “x” is, for example, a stylistic analysis of particular motifs or features, have I acquired the data at the level needed to accomplish my goal?

By the way, have you noticed that we’re all up here talking about how computers are going to change our life for the better and the projection systems hardly ever work, so just a question.

So this is a particular case study. We cheated here. We asked what we should really use to record a set of rock art features at a megalithic site in Ireland: a C10, or photogrammetry? Now, we’re pushing the C10 to its absolute maximum, so we knew the answer to this question, but nevertheless. So here we have a data set. This is the traditional metric comparison: the data as acquired through photogrammetry and the data as acquired from the C10, and basically, long story short, are they the same? There’s very little difference except at the margins, and that’s not surprising. But we’re not really interested in the overall physical dimensions of the rock art; we’re actually interested in the motifs and the particular stylistic components. So here are two representations. How do you measure whether the stylistic component appears in the data? How do you do that? That’s not an easy question. You can look at it and say perhaps it does. But one way to do it is to use some saliency metrics coming out of the computer vision community. Essentially what you do is build a kernel, in this case a seven millimeter kernel, pass the kernel across the image and look at the two images. You can see the photogrammetrically derived point cloud, you can see the scanning, and they’re not too bad. If we pass a five millimeter kernel, you can see the differences in the data, and if we pass a three millimeter kernel, and again we’re cheating, there’s no way on earth that a C10 goes down to three millimeters, you’re into the noise, the point is that we now have a metric that tells us that if the objective of the analysis was to recover data at this particular level of quality, we are able to do it with method A and we cannot do it with method B, and we can apply the same logic to any sort of comparative process. The point we’re trying to make is that published numbers, you know, the unit works at six millimeters or the pixel size is .02, are fine, there’s nothing wrong with them, but ultimately the question is whether the information content of the data meets the objectives and goals we’re looking at, and there are ways to measure that.
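The following is a minimal sketch, in Python, of the kind of multi-scale kernel comparison described above: rasterize each point cloud to a depth map and measure how much fine-scale relief survives at 7, 5 and 3 millimeter kernels. The file names, cell size and the high-pass detail measure are assumptions for illustration; they are not the saliency metric used in the original study.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    CELL_SIZE_MM = 1.0  # assumed depth-map resolution: 1 mm per pixel

    def detail_energy(depth_map, kernel_mm):
        # Smooth with a Gaussian matched to the kernel size, subtract, and take
        # the RMS of the residual: higher values mean more relief (for example,
        # pecked motifs) is still present at that spatial scale.
        sigma_px = kernel_mm / CELL_SIZE_MM
        residual = depth_map - gaussian_filter(depth_map, sigma=sigma_px)
        return float(np.sqrt(np.mean(residual ** 2)))

    # Assumed pre-rasterized depth maps of the same panel from the two methods.
    photo = np.load("panel_photogrammetry_depth.npy")
    scan  = np.load("panel_c10_depth.npy")
    for kernel_mm in (7.0, 5.0, 3.0):
        print(kernel_mm, detail_energy(photo, kernel_mm), detail_energy(scan, kernel_mm))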

Now here’s another way. Here we have a case of photogrammetry versus scanning, where photogrammetry is the desired approach. So now we’re going to compare structured light scanning to a variety of photogrammetric methods, and as everybody’s been saying, there are a bunch of them out there: 123D Catch, PhotoScan, PhotoModeler and so on. What we’re doing next is looking at how these two compare. Again, we’re back to a traditional surveying or geomatics measurement. We’re looking at Hausdorff distance, which is not just exactly how far things are apart, it’s a bit more complicated than that, but essentially it’s a well understood metric that will tell you whether two data sets are alike or different. Long story short, they’re pretty much similar. There’s some variation here, but it probably has to do with just the way the systems work. One of the data sets is from the Breuckmann, a very high end structured light scanner, and the other is from PhotoScan. By the way, the camera was a Canon 5D Mark II with a 21 megapixel full frame sensor, all processed through a variety of different software packages.
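As a rough sketch of that kind of Hausdorff comparison, assuming the two models are already registered in the same coordinate system and exported as plain XYZ vertex lists (the file names below are placeholders):

    import numpy as np
    from scipy.spatial import cKDTree

    def hausdorff(a, b):
        # Symmetric Hausdorff distance between point sets a (Nx3) and b (Mx3):
        # the worst-case nearest-neighbour distance, taken in both directions.
        d_ab, _ = cKDTree(b).query(a)
        d_ba, _ = cKDTree(a).query(b)
        return max(d_ab.max(), d_ba.max())

    scan_pts  = np.loadtxt("structured_light_vertices.xyz")  # placeholder exports
    photo_pts = np.loadtxt("photoscan_vertices.xyz")
    print("Hausdorff distance (model units):", hausdorff(scan_pts, photo_pts))

In practice a percentile of the nearest-neighbour distances (for example the 95th) is often reported alongside the maximum, since the maximum is dominated by stray points at the edges.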

So, here’s another way to look at the same thing. In other words, are these two the same? Have they measured the same information? We looked at the traditional metric analysis and said, well, yeah, they’re pretty similar. But let’s look at it again using some of these computer vision methods. What we’re looking at here is PhotoScan, and there are some parameters you can set in PhotoScan, as we’ve talked a little bit about, so there’s what’s called a low-power smooth, a high-power smooth, and what have you, and then on, I guess it’s your right, is the actual Breuckmann data. Again, the image tells you the story, and that is that the Breuckmann is obviously capturing key information content that we need for the particular analysis we’re involved with. We can quantify that, but the pictures essentially tell the story. So again, we have a particular computational method that allows us to say A, not B.

Here’s another example, and this is particularly significant. With a lot of photogrammetric methods, when you’re looking at uniform materials, metal, or other things with specular characteristics, the automated point matching systems sometimes have a lot of trouble finding key points, whereas a structured light system, in this case, is obviously applying a pattern to the surface. So for particular materials, system A may be better than system B. Again, I won’t bore you with the math, but there are actual metrics that let us say, in fact this is better than that, for the following metric reason.

Another example here. These are some amphora stamps. We’ve actually been looking at amphora stamp erosion; as amphorae were stamped, the stamps gradually wore out, so you need a lot of detail. The photographic data here is the same as before. Here’s the data in 123D Catch with texture; it doesn’t look too bad. Without texture, there’s really no 3-D data there, just texture data. With the PhotoScan products, however, we do have quite a bit.

Here’s another little example. Long story short, applying the same sorts of metrics that we looked at before, the Breuckmann and PhotoScan with a standard camera yield essentially similar information content. Why get a $150,000 scanner if the camera will work? So we could look at these; I’m not going to go through the details. There are pluses and minuses to these various strategies and to photogrammetry, for the sorts of reasons that you’ve seen.

I want to jump ahead to another point. I think this is very important, and that is that even though we have a lot of interesting digital recording methods, we still have silos: data type A in silo A, data type B in silo B. They are not in common coordinate systems, they are not organized, they’re not brought together. They’re typically looked at by individual specialists. It’s not always that bad, but I would strongly urge a consideration of some integrated management. What you’re seeing here, by the way, is ArcGIS 10.1. Yes, it doesn’t handle point clouds that are enormous, but what you can do is put reduced versions of those point clouds, in a common coordinate system, into [ ? ], which then links back to the full resolution products and so on. Those of you in the built environment know Autodesk and Bentley are doing similar sorts of things, point clouds in Revit and such. I think it is essential that all of these data are brought together in a common coordinate system and a common software environment.
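A hedged sketch of that reduced-cloud-plus-link idea follows; the voxel size, file names and URI are illustrative assumptions, not part of the original workflow.

    import numpy as np

    def voxel_thin(points, voxel=0.05):
        # Keep one point per voxel of the given size (coordinates in metres),
        # which is usually light enough to display as a GIS layer.
        keys = np.floor(points[:, :3] / voxel).astype(np.int64)
        _, keep = np.unique(keys, axis=0, return_index=True)
        return points[np.sort(keep)]

    full = np.loadtxt("structure_full_resolution.xyz")      # assumed export
    reduced = voxel_thin(full, voxel=0.05)
    np.savetxt("structure_reduced.xyz", reduced, fmt="%.3f")

    # Sidecar record so the GIS layer can point back at the archived full product.
    with open("structure_reduced.xyz.link", "w") as f:
        f.write("full_resolution_uri=https://archive.example.org/structure/full.laz\n")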

The other thing I think is absolutely essential, and we’ve talked a little bit about this, is what we are going to do about archives. Actually, people have already done a great deal about archives around the world, and many institutions have them. I would argue, and I mean no disrespect to the private sector, I actually work there a lot myself, that the private sector is not a particularly good environment for long term archival purposes. We need institutions, whether universities, government agencies or what have you, that create a trusted repository (“trusted repository” is a term of art, by the way, with certain required characteristics). There are already a number of these out there: in Europe, the ADS; here in the United States, the Digital Archaeological Record; and there are others that meet these requirements. The Mellon Foundation is putting a lot of resources into trusted digital archives for a variety of different domain areas.

A couple of other things about archives. It is essential, in my opinion, that the raw, unprocessed data is placed in the archive, and that the process stack, what you did to the data, is placed in the archive as well, not just the final digital objects. We’ve talked about ownership and such; there’s a solution to that, and it’s called Creative Commons licensing. There’s Creative Commons 1.0, 2.0, 3.0 and so on. All of our data is Creative Commons 3.0 unless otherwise specified. I also mentioned persistent URIs. You need to be able to get back to digital data. You can’t be moving it around; you have to be able to find it.

The other thing that’s important about archives is metadata, and we’ve talked about how to record the acquisition, but how do we record the processing? This is a little piece of our laser scanning workflow process metadata structure. In other words, what do you record when you do this? What meshing parameters did you use? What hole filling algorithms did you use? And so on, as you move from one step to the next.
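A minimal illustration of such a process-metadata record, written as JSON from Python; the field names and values below are assumptions for the sake of the example, not the CAST, tDAR or ADS schema itself.

    import json

    # One entry per processing step, recording what was done, with what
    # software, and which parameters, so the processing can be revisited later.
    process_record = {
        "dataset": "rock_art_panel_03",
        "steps": [
            {"step": "registration", "software": "Leica Cyclone", "version": "7.3",
             "method": "target-based", "rms_error_m": 0.003},
            {"step": "meshing", "software": "Rapidform",
             "parameters": {"target_triangle_count": 2000000, "noise_filter": "medium"}},
            {"step": "hole_filling", "algorithm": "curvature-based",
             "max_hole_diameter_m": 0.02},
        ],
    }

    with open("rock_art_panel_03_process_metadata.json", "w") as f:
        json.dump(process_record, f, indent=2)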

From the Archaeology Data Service and Digital Antiquity there is a comprehensive document about laser scan metadata, covering all of the field acquisition characteristics and the processing characteristics, and there are similar specifications for close range photogrammetry. A number of folks have mentioned that we need to be thinking about these. I would encourage you to look at the ADS website; steal it, it’s public. There’s a great deal of information there, and this is just a little piece of it.

So, some conclusions. Obviously we’ve talked a lot about processing, but post-processing and the modifications that you make to the data are essential, and generally speaking those process stack operations are not captured. We have the results, and we understand that in the commercial sector it’s the result that’s the product to be turned over to the client; absolutely understood, not a problem. But when heritage recordation of public resources that will be around for a long time is involved, we have a different set of responsibilities and requirements, I would argue: we need to be able to return to original data from some time in the past, compare it, and look at condition changes, so we need not only the product but also the actual raw data and the processing history; perhaps then we can tweak parameters and change things. There’s no question that, in the appropriate circumstances, 3-D scanning and photogrammetry both have very powerful and significant roles. We need to determine what our metric is, and it should not be three millimeters or whatever; it has to be the information content we’re trying to achieve at the end, and, oh by the way, we need three millimeters to get there. Finally, it is essential, absolutely required, that the appropriate metadata and process metadata are recorded in the field and made part of the archived information.

So, “stay hungry, stay foolish” is an objective that we had back then, and I think it’s still entirely appropriate. One final comment I’ll call to your attention: we’ve been able to develop a lot of these process analyses and the other aspects I’ve talked about today as a result of a multiyear, fairly substantial NSF grant, and so there’s GMV.test.[?].edu, which is still under development. We’ve got about another six months before the money runs out, and of course we’re trying to get everything done in the next six months, but in any event, you can see workflows. You can go in and say I have this particular objective, click on it, and it will take you to a discussion of scanner selection criteria; click on that, and it will take you to the parameters you need to set for hole filling, and so on. Again, it’s not all there yet. Please bear with us, but I do encourage you to visit the site.
Thank you very much.

Speaker Bio

Katie Simon earned a Master of Arts in Anthropology from the University of Arkansas, focusing on computer and remote sensing applications in archaeology, including ground-based, aerial and satellite methods. This followed several years of cultural resource management work in the American Southwest in both federal agency and private consulting firm positions. She has worked for the Center for Advanced Spatial Technologies for four years and specializes in 3D scanning applications in heritage management, including the development of data collection and processing methodologies and standards. She has field experience throughout the Americas, Europe, the Middle East and Africa.

Rachel Opitz is a Research Associate at the Center for Advanced Spatial Technologies, 341 JBHT, University of Arkansas. Her work includes close-range photogrammetry vs. 3D scanning for archaeological documentation: comparing data capture, processing and model generation in the field and the lab.

 
