Documenting National and World Heritage Sites: the Need to Integrate Digital Documentation and 3D Scanning with Traditional Hand Measuring Techniques
This lecture was presented at the 3D Digital Documentation Summit held July 10-12, 2012 at the Presidio, San Francisco, CA
For the past fifteen years, digital documentation technology for historic sites and resources has developed into a powerful tool, and its use has become widespread in historic preservation. Data scanning has been touted as both precise and time saving, making three-dimensional modeling readily available to the designer, historian, and preservationist. But does technology completely replace traditional, more labor-intensive methods for recording historic resources? Or is there still a need to continue traditional hand measuring methods that remain irreplaceable? When it comes to the documentation of historic buildings, be it World Heritage sites like the Taj Mahal tomb in India or the Roman Forum in Rome, Italy, digital documentation can be of great help in understanding the context of the place from remote locations. Although digital methods have enabled preservation professionals to achieve greater accuracy in documentation, traditional methods such as hand measuring are still the only way a historic resource can be fully understood by the preservation professional and the overall condition of the resource verified. This paper will discuss ways in which digital documentation methods can be integrated with traditional hand measuring methods. Four different methods of digital three-dimensional building documentation will be presented: 3D scanning, photogrammetry, Gigapan, and high dynamic range photography. The paper also discusses techniques like Google mapping, combined with physical study of the site, that helped the team find the long-lost Taj Ganj (the original marketplace connected to the Taj). The author will present her ideas on how these techniques can be integrated with traditional hand measuring in accordance with the Historic American Buildings Survey (HABS).
In order to develop assessments aimed at acquiring accurate and pertinent information about historic resources, the author's proposed digital-manual documentation methodology attempts to produce accurate information about the geometry of the historic resource while simultaneously quantifying the state of repair of the material assemblies of the artifact. This approach combines essential information about the resource, documenting building geometry and assessing areas of material conservation, all of which can lead to more efficient and accurate historic resource documentation.
Cordell: Thank you. We’ll go ahead and get started and let others come on in as they finish up their snacks, but I wanted to take the opportunity to introduce our next speakers, who will be presenting their work on national and world heritage sites. I’m glad to welcome Dr. Krupali Krusche and Dr. Christopher Sweet.
Dr. Krusche is an assistant professor at the University of Notre Dame and teaches architectural design, historic preservation, and structural design. She started the historic preservation concentration there back in 2006 and offers a wide range of courses on world heritage documentation and research. I’m going to let her tell you about the work that she’s been doing with the digital historic architectural research and materials analysis team that she’s put together, which has been doing documentation work around the world.
She’s brought with her today a colleague that I don’t think is on your original schedule and that’s Dr. Christopher Sweet, who works at the Center for Research Computing. He’s part of the cyber infrastructure group there at Notre Dame and so we’re glad to welcome them this morning. Thank you.
Krusche: Good morning, everyone. It’s a privilege to be able to talk after Professor John Asmus has spoken about all the work he has done throughout his lifetime. I was furiously taking notes, just trying to keep up with writing as he was speaking. It’s a privilege to speak after him, but at the same time it makes you feel very humble, and yet he has not even scratched the surface of what he has done. So let me start by just saying thank you to the National Park Service for organizing this conference. It’s an excellent venue for discussing heritage through 3-D scanning, and not just 3-D scanning but all the other things that get spoken about here. There is specific importance given to this, and it’s very crucial to understand that 3-D scanning by itself is a good tool, but it’s a tool. How do we go forward? I’ll talk about this with a brief story.
Recently, well, not recently, around 2007, Mesa Verde was 3-D scanned. I had a colleague there. She was working on the site; she is an archeologist there. She was really excited about being able to get this 3-D data. She got the data. Once it was complete she was all excited about it, but she couldn’t do anything with it. This is a major problem that exists today, and this is where we started. My course and my work with our DHARMA team are basically based on the question of how you take things forward after you 3-D scan something.
So the project I’m going to show briefly now is the project at the Roman Forum site. We have been partnering with UNESCO and the superintendents in Rome. We are also presently working on the Yamuna riverfront in India. That project I won’t be able to talk about in this short period, but through the Roman Forum project I would like to explain how you bring together different processes that are instrumental in creating a good method of documentation of a site. Basically, we place special emphasis on understanding what historic documentation already exists on the site itself, so we dig into history and we look into pictures, we look into all kinds of archival data about the site. We go on site and we take field notes, and these are a crucial aspect that a 3-D scanner cannot provide. A 3-D scanner does not take field notes, and that was something we understood very evidently with the first few sites we worked on.
The goal is to be able to create HABS-quality drawings, and to be able to do that we have to integrate different technologies. One is Gigapan. How many of you are aware of Gigapan technology here? Many, oh, that’s great, excellent. And then along with Gigapan, photogrammetry, which is very typical in our field, and merging that with 3-D scanning.
What we will be doing in this presentation is, I’ll show you how it connects to the past where we are trying to do line drawings from the 3-D data, and Dr. Chris Sweet will talk about how we’re taking it to the future where we are finding ways of merging Gigapan and 3-D scanning together.
So this is the Roman Forum site. It’s around 250 meters wide, and we had to take twenty-seven scans. At that time we had the ScanStation with us. We didn’t have the C10, but it was very useful to be able to do such large-scale scans of the site itself. Here’s a map that we used before and after we went on site. Before we went on site, we created an outline of how we were going to move about the whole area. Now, we were looking at monuments that span a height of eighty feet or more, and so it was very difficult. We had to navigate the terrain of the Forum site, so we had to create a plan of movement where we could capture the maximum amount of data from a specific point. All these positions were pre-determined before we went on site and then post-corrected based on the actual scan positions that we took. We also took GPS coordinates of the site so that the scanned data could then be cross-verified in a more global system.
Here is the ScanStation. We did this project in 2010, and we used, of course, the Leica ScanStation and worked with the Cyclone software. Now, people who work with a Leica scan station will know that there are problems with Cyclone. There are issues, there are bugs, there are problems that you have to face, and so one of the things we had to do was find ways in which you can take this 3-D data and export it to be able to use it in different ways.
The results we got, and I’m going to go through this very briefly, is that we were able to scan the whole site with a minimum resolution of one centimeter by one centimeter. Now, one centimeter is not a good resolution rate. People who scan will say it’s not, but imagine the whole Forum site; you’re looking at a huge, huge area. One centimeter was really good. And this was the minimum; there are places where we captured the data at much higher resolutions. For example, I don’t know if the projector is doing it justice, but in front of you are actually the columns of the Temple of Faustina, and the presentation actually shows cracks in those columns. So it’s very, very highly detailed data that we produced.
What helped, in a key way that has never been done before, is that the accuracy of the 3-D scanned data allowed us to examine these temples in relation to each other. You know, people have studied these temples; they have studied them back and forth. The Roman Forum has been discussed in many books. What could we contribute to this? That was our biggest question. When these results came out, we were amazed to find that there are some very interesting relationships between these temples and how they were built, and I’ll talk about that briefly in a second. But the information we got was highly accurate and very well read, but only to us. So then we deployed another team of students that went and hand measured the whole site. Now, people will say, we are 3-D scanning, why do you need to hand measure again? It’s a repetition of work. But we wanted to study this. We wanted to understand what we get out of hand measuring, what we get out of 3-D scanning, and what we can do with the separate results we achieved.
So here you go: you are looking at the front portico of the Basilica Aemilia, and at this point you will see that these are notes taken by individual hand measuring teams. They are notes about when and what material was used, what time period, what the condition is, whether these are restorations done at a later date, like after the excavation of the site, or whether this is from a previous time period, and also different things like whether it’s brick or concrete, what we are looking at. These were then added in with hand-measured drawings, basically measuring the whole site in units to identify what exactly we are looking at. Finally, all of this was taken together to create the front portico part. What we did after that was merge the 3-D scanned data and the hand-measured drawings to get our results. Now, what does that mean? Basically, when you take the scan data from the site, all that scanned data is pixelated point cloud information. It did not give me anything that would substantially tell me this is brick, this is from this time period, that’s stone from that time period. So by putting the two together, we were able to inform our team what exactly we are looking at and in what condition.
We also meticulously went and measured and looked at all the pavement that exists at the Roman Forum. We were doing this in the heat of summer in Rome, and it was the highest recorded heat in Rome. Terrible. So for people who say 3-D scanning is fast and quick and easy to do: no, trust me, it wasn’t that way. But it’s very accurate, so we did get a lot of information in a short period. We were working for ten days on site, from six in the morning to ten at night, and it was very, very difficult. But in that time, we were able to access much, much larger data assets than we would have by just hand measuring. Here is an example of the Gigapan images that we took. This is, for example, one hundred fifty images put together to create the under-arch of the Arch of Septimius Severus, and in the insert you will see the detail that is visible from such an image. Just panning the site, we collected so much data, which allowed people like the archeologists working on site to study individual capitals, details of the entablature, everything that would not be visible to the naked eye, in a much easier form. We wanted to digitize this and put the digital information online so scholars would be able to access it and study the site in much better detail than they could without it.
Finally, the end result we wanted to put together took the scanned data, the hand-measured information, and the Gigapan information, because we needed the photographs to be able to cross-verify what we were looking at, and created plates using the traditional École des Beaux-Arts watercolor technique on three-hundred-pound paper, very, very traditional, going back to drafting everything. But first we took all the scan data and filtered it. Now, one of the biggest problems you will see with 3-D scanning is that when you are looking at sectional information, you are seeing layers of data at the same time. It’s very difficult to identify what is the front surface and what is the back, especially when you have monuments that cover a large span of space. So we had to filter this information to be able to understand what we are actually looking at, and then produce plates that would allow anybody, just anybody, to understand what we have done and work with it to create their own research or go further with their own research.
Finally, here is, for example, the result of the scanned data on top, looking at the sectional view towards the Tabularium, and then the watercolors that were produced from it. What it did was allow the accuracy of the 3-D data to be projected into a watercolor plate, giving us understanding and ease of understanding. These plates are very big. Right now you are looking at them in a presentation format, but these are huge plates, and they’re going to be in an exhibit that we have planned at the Roman Forum in 2014. Each of those pavements is actually physically marked on these plates. You can see the stenciling of the wording that is on the floor; all of that is part of the plates.
Finally, I told you that we did some comparisons. Here is a comparison of all of the different temples that exist. Just a quick note. I’m not sure how many people know the Roman Forum that well, but the Temple of Saturn and the Temple of Vespasian are aligned to within 0.09 centimeters, and both entablatures sit at the same height. Even though Saturn is an Ionic temple, which is supposed to be shorter and stouter, it was purposefully lifted and kept at the highest height of the Roman Forum site.
Right now we are working on creating pen-and-ink drawings that explain the exact condition of the site as it stands today. So that’s it from me, and I’m going to let Dr. Sweet proceed further. Thank you.
Sweet: Thank you. Okay, so, the work of my team is really to try and merge these datasets. Obviously, Krupali acquired this vast scanned dataset of so many gigabytes of data. The Gigapan was looked after by the department of Paul Turner, who runs Academic Technologies at Notre Dame, and here again one of his people has acquired vast amounts, many gigabytes of data. So what we were going to do was try and merge these in some automated way. Obviously you can take parts of point clouds and individual pictures, or even several pictures, and map them manually, but it’s very laborious, so we wanted some sort of automatic method where you could put your registered data in, give some information on the source of the photographs, and then produce something that could be viewed, preferably online so we could disseminate the information, but also export it in formats that could support things like edge detection, et cetera. One thing that’s not marked here, of course, is that the scanners themselves generate photographs, and as part of our proof of concept we’ve used some of these, so you’ll see examples of that later.
So our platform is something called Viscera. This is a project that we’ve had at Notre Dame for several years. It started out as a visualization platform for a project we did with the Pande Group at Stanford, the Folding@home project. Since then we’ve used it for many different projects, including digital humanities word analysis. We have C++ code and we have Java code, which allows us to write either very efficient code or things that can be deployed on the web. So it’s split in such a way that you can generate a specific application that can process the specific data you’re interested in, and then there’s a back end that allows you to project this onto different technologies: mobile devices, traditional displays, 3-D, or even large projection, planetarium-type things.
So as an initial test, we started with a statue in Bond Hall, which is the architecture building at Notre Dame. So we have a scan of that. We also have Gigapan photographs. I’m going to show you in a second the actual different sets of data that we had to merge. On this slide you can see, on the right-hand side, there’s actually a close-up of the scans. So these are the individual points of the head. The next picture is the actual scan including the colored points. With the Leica scanners, they can actually color the points individually to give you some sort of idea of what it looks like. And then on the right side are several photographs. You can see they all overlap and they’re all high resolution, and we want to actually walk through. Obviously Gigapan has a method of producing panoramas, but that really puts you at the center of something. You can look around, but you can’t walk through it or take different viewpoints.
So the technique really was to figure out how we map the picture to the point, and obviously you cannot put a texture onto a point. You need to create a surface, so there’s some technology that you have to develop there to generate the surface. You then have to do some geometry based on the source of the pictures so that you can generate the texture that will map to it. So this is a representation of the way we do it. We form a sort of virtual plane, which you can think of as the camera plane, and from there we can determine where each point in the cloud, no matter what its direction, intersects the plane, which allows us to generate a map that you can use things like [ ? ] to produce something from.
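The virtual-plane idea described above can be sketched as a standard pinhole-camera projection: each scanned point is expressed in the camera's coordinate frame and divided by its depth to get image-plane (texture) coordinates. This is a minimal illustrative sketch under that assumption, not the actual Viscera implementation; all function and parameter names are hypothetical.

```python
import math

def normalize(v):
    """Scale a 3-D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    """Cross product of two 3-D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_point(p, cam_pos, cam_dir, cam_up, focal=1.0):
    """Project a 3-D point onto a virtual camera plane (pinhole model).

    Returns (u, v) image-plane coordinates, or None if the point lies
    behind the camera and so never intersects the plane.
    """
    forward = normalize(cam_dir)
    right = normalize(cross(cam_up, forward))   # camera's x axis
    up = cross(forward, right)                  # camera's y axis
    d = tuple(pi - ci for pi, ci in zip(p, cam_pos))  # camera -> point
    depth = dot(d, forward)
    if depth <= 0:
        return None                             # behind the camera plane
    u = focal * dot(d, right) / depth           # perspective divide
    v = focal * dot(d, up) / depth
    return (u, v)
```

Every point in the cloud run through `project_point` yields a texture coordinate, which is the "map" that photo-fitting tools can then consume.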
This illustrates the type of surface. What we have to do is figure out adjacent points in some sense and join them together so we can draw triangles or quads between them. Fortunately, the Cyclone software outputs the data in an ordered manner, so we don’t have to run an N-squared search algorithm to try and figure it out. Obviously there are voids where there are windows or other architectural objects, but we can form this surface and then use traditional techniques to actually fit our photograph onto it.
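Because the scanner output is ordered, adjacent points are simply grid neighbors, so the surfacing step reduces to walking the rows and columns and emitting two triangles per complete quad. The sketch below illustrates that idea in Python (the actual Viscera code is C++/Java, and this is an assumed data layout: a rows-by-columns grid with `None` marking voids such as windows).

```python
def grid_to_triangles(grid):
    """Triangulate an ordered scan grid (rows x cols of points).

    Since the scanner stores points in row/column order, neighbors are
    found by index, with no O(N^2) nearest-neighbor search. Any quad
    touching a void (None) is skipped, leaving a hole in the mesh.
    Returns triangles as triples of (row, col) indices into the grid.
    """
    tris = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            quad = [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
            if any(grid[i][j] is None for i, j in quad):
                continue          # void: window or missing scan return
            a, b, d, e = quad
            tris.append((a, b, d))    # split the quad into two triangles
            tris.append((b, e, d))
    return tris
```

The resulting triangle list is what a conventional texture-mapping step can then drape the photographs over.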
So what I want to do now, and this may go badly wrong, is try to actually run a demonstration of the software. I would run it from the web, but the bandwidth here, as I think someone mentioned earlier, is a little bit difficult. So this actually represents a proof of concept on one of the arches of the Roman Forum. The pictures you’ll see are actually taken from the scanner itself, and what is interesting here is you’ll see how the pictures get mapped onto it. There are three viewpoints of pictures, and they’re all odd shapes, you know, so this just makes it more difficult when trying to do this manually.
A lot of the photographs from the right-hand side, actually when I say right-hand side, I mean from my viewpoint so your left-hand side, were taken from quite a low position at the bottom and so they would have been very stretched out at the top. You can see it’s reconstructed those in a sort of sensible manner. Okay, so that’s an example of how we can use that and this is showing you the statue again, the colors on the projector are horrible. I’ll have to show people individually if you want to see that they do actually look like serious photographs.
Our interest really was not just to have a mapping technology but to be able to put this in a place where people can access it, whether it’s people at Notre Dame or perhaps people elsewhere. So we came up with this sort of architecture for trying to store the data and make it accessible. In this case, we’re going to have a database of the Leica data, a database of the Gigapan images, and some XML files which give us information about the mapping between the two, and then we can serve this up to a web portal. I’ll actually show you an example of the web portal. This is in a beta stage, but it is actually up there on the web now. What this allows you to do, in addition to disseminating data, is actually upload data to it. So this allows you to select your data exported from Cyclone, in something like a ptx format. You can tell it how many locations you have photographs from, and then, if you’re going to use the camera internal to the scanner, you can actually upload the metadata, and by uploading that with the pictures, it will actually map it for you. Then, as I mentioned before with the Java technology, you can actually embed that in the website so you can see and rotate the model. You can also download it for viewing in other ways. We also have an iPad and iPhone app which allows you to download the data from the website and then view it locally.
To cover the advantages then: one thing I actually didn’t mention in the preceding slides is that not only do we have the mapping between the picture and the point cloud, but we still have all the point cloud data, so by clicking on this you can actually do measurements and find where parts are. That’s supported by the apps as well, so you can click on there and do post-processing of information. I think one of the things we found with this is that getting a work flow that allows you to capture data is very important. Some of the data we had actually had some missing parts, so I think this sort of processing work flow will allow you to capture data and make sure you actually have what you think you have. Obviously, if you come back in ten years and you find that there are bits missing, there’s probably little you can do about it unless the thing is still there in its original state. But obviously we’re going to continue to improve the process to handle real Gigapan pictures; we have some limited work on that so far, but we’ll be extending it.
So obviously we have some interest in continuing this research beyond just Gigapan and the internal Leica pictures. We’d like to produce a sort of immersive environment, so if we were to take scans and pictures at different times we could actually remove shadowing and do artificial lighting, so that we have something you can use at different times of the day to show what sort of things are going on. Obviously we’ve already done some work in terms of point reduction. As I said before, some of these datasets are many gigabytes. Obviously, to make them web accessible that wouldn’t be practical, and we have algorithms which will remove the less interesting data and give you the structural data. And of course importing this into CAD packages is of interest, so that’s something we certainly want to continue with.
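One common way to do the kind of point reduction mentioned above is voxel-grid decimation: bucket the points into a coarse 3-D grid and keep one averaged point per occupied cell. This is an illustrative sketch of that general technique, not necessarily the team's algorithm, and the names are hypothetical.

```python
def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by keeping one centroid per voxel.

    Buckets each (x, y, z) point into a cubic cell of side `voxel_size`
    and replaces every cell's contents with its centroid. Output size
    scales with occupied volume rather than scan density, which makes
    multi-gigabyte clouds far more practical to serve over the web.
    """
    buckets = {}
    for x, y, z in points:
        key = (int(x // voxel_size),
               int(y // voxel_size),
               int(z // voxel_size))
        buckets.setdefault(key, []).append((x, y, z))
    reduced = []
    for pts in buckets.values():
        n = len(pts)
        # Centroid of all points that fell into this voxel.
        reduced.append(tuple(sum(p[i] for p in pts) / n for i in range(3)))
    return reduced
```

A larger `voxel_size` trades detail for a smaller, web-friendly cloud; structure-aware methods (keeping edges and corners at higher density) refine the same idea.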
So I think that covers most things. I think Dr. Krupali mentioned earlier there’s some additional projects that we hope to be involved with. There’s the Indian tombs which we want to scan, and as Dr. Krupali mentioned as well there’s an exhibit at the Roman Forum and obviously we want to catalogue this data and make it available in other formats. So I think that covers most things.
Just some acknowledgements. Paul Turner from Academic Technologies obviously was responsible for Gigapan; James Sweet is actually my son as well, he’s a grad student in computer science and did all the coding on this project; Ben Keller, he worked on most of the sites you’ve seen, the Roman Forum and also the India project we didn’t see, he was looking after the Gigapan data and acquiring it; Ryan Hughes, who’s been involved in most of these things, one of the experts on the scanning technology. So, that’s it.
Krupali Krusche is presently an Assistant Professor at the University of Notre Dame, where she teaches architectural design, historic preservation, and structural design. In 2006, she, along with Prof. John Stamper, started the Historic Preservation Concentration, which offers a wide range of courses, including one on World Heritage Documentation and Research.
In 2007, she started the DHARMA (Digital Historic Architectural Research and Material Analysis) research team, which specializes in 3D documentation of World Heritage Sites. In summer 2008, the DHARMA team spent four weeks in India documenting some of the country’s historic monuments, including the Taj Mahal. In summer 2010, the team spent research time digitally documenting the Roman Forum in Rome, Italy. As of 2012, she is continuing both projects of documenting these sites to create international exhibits of their work.
She received a Merle D. Blue Excellence in Humanities Award from the Northern Indiana Center for History for her work in documenting historic sites in Indiana.