This lecture was presented at the 3D Digital Documentation Summit held July 10-12, 2012 at the Presidio, San Francisco, CA

3D Modeling of a Gravestone Exploiting Low Cost Range and Image Based Techniques

The Medieval Museum of Bologna houses a series of gravestones which originally adorned the tombs of the Doctors of the city's ancient Studium.
One of the most significant, in both importance and finesse of ductus, recalling contemporary genre painting, is certainly the one carved in Istrian stone by the stonemason Bettino da Bologna for the tomb of Bonifacio Galluzzi during the first half of the fourteenth century.

The iconography shows, on either side of the master, who is pictured at the center of the scene behind his desk, three students per side, sitting on benches, intent and focused on the daily lectio.


This artifact, among all those preserved in the museum, is the only one that retains clear traces of color, suggesting that it was originally completely painted.
The 3D model of the gravestone was made using the NextEngine Desktop 3D Scanner, a low cost triangulation laser scanner. For the digitization of the entire artifact, around 100 range maps were acquired, following a boustrophedon path from left to right and from bottom to top. The survey lasted one and a half days.

Each range map took 5 to 7 minutes with an average of 10 scans per hour.
The high number of range maps acquired is justified by two facts:
•    the NextEngine Desktop 3D Scanner, with the sensor in wide mode, has a field of view of 13.5 x 10.1 inches;
•    for an optimal registration, two contiguous range maps need a considerable overlapping region.
The same object was subsequently reconstructed with image-based techniques and software (computer vision).
For this study, all the software was installed on a quad-core workstation with 24 GB of RAM.
The 3D models obtained with Agisoft Photoscan and 123D Catch were mesh models. The final outputs of Visual SFM and Apero, instead, were point clouds, which needed to be post-processed and triangulated at a later stage.
Besides 123D Catch, which is a web service by Autodesk, all the other software runs on local machines.
Another web service (Arc3D) was also tested, but the models reconstructed by the remote server were sometimes incomplete over large areas of the surface, sometimes completely wrong in the overall shape, and therefore unusable.
As is well known, Computer Vision techniques are not geometrically accurate. During the post-processing step, all 3D objects were scaled according to a known distance measured on the laser model, used as the reference, and subsequently aligned to it in order to estimate the average deviation.
This parameter was calculated among portions of the meshes, and the associated section profiles, along the X and Y axes.
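The scaling and deviation estimate described above can be sketched as follows. This is only an illustration of the general idea, not the authors' actual pipeline (they used Rapidform XOV); the function names, the picked point indices, and the use of vertex-to-vertex nearest-neighbour distance as a stand-in for a proper mesh deviation map are all our own assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def scale_to_reference(points, idx_a, idx_b, known_dist):
    """Uniformly scale a model so that the distance between two picked
    vertices matches a distance measured on the reference laser model."""
    s = known_dist / np.linalg.norm(points[idx_a] - points[idx_b])
    return points * s

def average_deviation(model, reference):
    """Mean nearest-neighbour distance from each model vertex to the
    reference vertices -- a rough proxy for the average deviation."""
    d, _ = cKDTree(reference).query(model)
    return d.mean()
```

In practice the known distance would come from two well-identified features measured on the laser model, and the deviation would be computed after a rigid alignment of the scaled model to the reference.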
The results obtained through different methods were analyzed and compared in order to evaluate some parameters such as:
•    models accuracy,
•    processing time,
•    ease of use of hardware and software.
This study aims to present the results achieved, the operating methodologies applied, and the problems encountered during the different phases of the research.

Nextengine 3D Desktop Scanner;
Canon EOS550D Digital Camera.
Meshlab (laser scanner data post processing);
Rapidform XOV (deviation analysis);
Autodesk 123D Catch (free web service until December 2012);
Visual SFM (open-source software);
Apero (open-source software);
Agisoft Photoscan (commercial software).


Striegel:    Good morning. My name is Mary Striegel. All of you have probably met me at some point today because I was working on the registration desk. I’ll be chairing the session this morning through until lunch. As I tell people, I am very strict on time. We have a lot of papers to go through and I think that there’s a lot of information for you to enjoy and digest today.

Our first speaker this morning is Dante Abate. Dante is a research fellow at the ENEA Research Center of Bologna, Italy. He has a degree with honors in Humanities and a specialization in the Protection and Valorization of Historical and Artistic Heritage. His main research interests are related to 3-D surveying and modeling, virtual reality, and visualization in the field of cultural heritage. He is well acquainted with laser scanning instruments and software, as well as image-based modeling methodologies. And I’ll turn it over to Dante. Thank you.

Abate:    Good morning. I’ll try to speak my very best English but I’m still a little bit confused from the nine hour time shift from Italy to here. I hope you have mercy on me. So I work for the ENEA Research Center. ENEA is a public nonprofit institution in Italy and particularly I belong to the technical unit of informatics and ICT.

So the project I am presenting today is about the 3-D modeling of a gravestone exploiting low cost range and image based techniques. This is a brief summary of my presentation. First, I will show you the initial aim of the project and the results. Then I will go through the problems that emerged during the model analysis, the new project that came up parallel to the first one, the new results, and the final conclusions.

So we were asked to make the 3-D model of this gravestone. This gravestone is placed in the Medieval Museum in the city of Bologna, and among the collection of gravestones this is the most important, since it still preserves the original colors in many areas. The museum wanted to make an exhibition with a 3-D model of this gravestone while performing the restoration. It was made during the first half of the fourteenth century and it was supposed to adorn the tomb of Bonifacio Galluzzi, who is the main character of the gravestone. The final goals were the restoration of the gravestone and, of course, the documentation of this object. Since it was a non-funded project, we had to deal with a small amount of money. We just used what we had in the lab at that time. It means that we used laser scanning, and in particular this model, which is a low cost device; it cost something like $2,000. The NextEngine 3D Desktop Scanner is a box and has a field of view of 30 to 40 centimeters. This means that we had to acquire many, many range maps in order to model the object. I think it was born to model small objects rather than big ones.

We started the survey acquiring range maps from the bottom left of the gravestone and moving to the right side. We acquired something like one hundred range maps, actually ninety-five, and each range map took something like five minutes of scanning; then the computer has to compute the acquisition, we have to move the scanner, make a new scan, and so on. So for each scan, it takes something like five minutes. The museum was open to us only on Monday afternoons. The museum was closed to the public but the office was open, so we could stay there only from 2:00 pm to 6:00 pm, because if you stay in the museum with people around and you have to put wires on the ground, it could be dangerous. The object is maybe three meters long and one and a half high, but it took something like four days of surveying, which means four weeks, one Monday per week. This is a lot of time to model such an object, plus you have to add the post-processing step, which is something like four to five days of work in the lab.

For the post-processing, we used the Meshlab software. Meshlab is developed by ISTI, an institute of the National Research Council of Italy. First of all we filtered each range map, then we went through the alignment process, which in Meshlab is semi-automatic. It means that in the first step, you have to click on at least four analogous points on each range map, and then you can run the ICP algorithm to make the final alignment. At the end of the alignment process, we ran the surface reconstruction in order to have a closed model without any holes. The final model consisted of fifteen million polygons. So here we have some screenshots of the final model. This is the entire model and here we have some details. This is the main character of the gravestone. Here we have other details. This is one of the students on the left side, and these are the three students on the right side. So at the end we were sort of satisfied with the acquisition and modeling, but then we realized that we had some problems, so we went through the model analysis. I put these slides in after the keynote presentation yesterday, because they said that they made a lot of analysis on the Palazzo Vecchio in Florence and at the end they ended up with no results. We ended up with some results, but there were errors. If you look at the model, it turned out that we had a lot of distortion along the main axis. Here we have some slides that show you the distortion of the model. So we spent a lot of time modeling this object, four days over four weeks of surveys plus the post-processing in the office, and we ended up with a model that was wrong. So we tried to figure out how to save time and get a result that looks better than this, with almost the same accuracy.
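The coarse step of that semi-automatic alignment, computing a rigid transform from a handful of manually picked corresponding points before ICP refinement, can be sketched with the standard Kabsch (orthogonal Procrustes) solve. This is a generic illustration under our own assumptions, not Meshlab's implementation.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (R, t) mapping the picked src points
    onto the dst points -- the coarse alignment before ICP refinement."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With at least three non-collinear point pairs (Meshlab asks for four), the rotation R and translation t are uniquely determined, and ICP then refines the fit using the full range maps.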

So the distortion was due to the lack of overlapping area between single range maps. In order to get a correct alignment, we needed something like 50 to 60% of overlapping area between two range maps. But to speed up the process, as we hadn't much time, we decided to reduce the overlapping area between each range map. This missing overlap resulted in a distortion across the whole model which could not be corrected afterwards. In order not to make the same mistake the next time, we decided to check whether we could use other techniques to model such an object. We decided to test some Computer Vision software. As you may know, Computer Vision techniques make 3-D models from 2-D images. Compared with photogrammetry it is an automatic process: you feed the software the images, click a button, and wait for the result. In the end you have the 3-D model, but it doesn't have any metric scale. It means that ultimately you have to scale the model using ground control points or known measures.
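That 50 to 60% overlap requirement can be checked numerically before committing to an acquisition path. The nearest-neighbour criterion and the distance tolerance below are our own illustrative assumptions, not a rule from the scanner's software.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_fraction(map_a, map_b, tol):
    """Fraction of points in map_a that have a neighbour in map_b closer
    than tol -- a rough proxy for the overlapping region between two
    range maps already placed in a common coordinate frame."""
    d, _ = cKDTree(map_b).query(map_a)
    return float((d < tol).mean())
```

If the returned fraction for two contiguous range maps falls well below 0.5, the pairwise registration risks drifting, and the accumulated error shows up as exactly the kind of global distortion described above.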

We tested these software packages: Agisoft Photoscan, which is a commercial product by a Russian software house; 123D Catch, produced by Autodesk; Visual SFM; and Apero/MicMac, which is produced by a French research institute. Here you have some details. The output of 123D Catch, which is a client-server software, is a mesh, like Photoscan's, whereas with Visual SFM and Apero you get back a point cloud. The computing time for the first three was almost the same, whereas with Photoscan it was much longer. We ran this software on a workstation with sixteen cores and twenty-four gigabytes of RAM. As for the user interface, Apero/MicMac can be installed only on a Linux operating system, and we had to perform all the commands through the command line.

So here you have some details with the laser scanner and 123D Catch; the curve exaggeration means that the bigger the curve, the larger the error. Here you have the first section and the second one. This is the section with the Apero software. Basically the biggest error is at the top of the section, where you don't take photos from that angle: we just moved in front of the gravestone taking pictures, and we didn't go up to take pictures from above. Here you have the same thing, and Photoscan shows that there is a really, really big overlap between the two sections. The next two are the models obtained by triangulating the point clouds, and they show pretty big errors. This is the first one; this is the second one, with the error on the top part and the lower part. And this last one is with Apero.

So, in conclusion, Apero showed the largest divergence compared with the laser scanner, but we only performed some basic filtering on the point clouds before the triangulation, so maybe by improving the post-processing of the point clouds, and then of the triangulated mesh, you could get a better result. Photoscan showed the best results compared with the laser scanner model.

So the next time we make a project like this, we will better evaluate the techniques we are going to use according to the final [ ? ] of the project. It doesn't mean that if you have a scanner you have to use it no matter what. You can use other techniques which may be cheaper and faster, and you can get almost the same results. This project showed us that with image-based techniques, which as I said are much cheaper and quicker, you can get better results in terms of cost, model accuracy, and working time.

Thank you.

Speaker Bio

Dante Abate is a research fellow at the ENEA research centre of Bologna (Italy). He has a degree with honours in Humanities and a specialization in the Protection and Valorisation of Historical and Artistic Heritage. His main research interests are related to 3D surveying and modeling, virtual reality, and visualization in the field of Cultural Heritage. He is well acquainted with laser scanning instruments and software, as well as with image-based modeling methodologies. During his studies and research activities he attended various national and international courses and conferences in order to strengthen his knowledge and capabilities.
He thus has an outstanding record of education and training activities, as well as a substantial number of publications.
