This lecture was presented at the 3D Digital Documentation Summit held July 10-12, 2012 at the Presidio, San Francisco, CA

A Comparative Study Using LiDAR Digital Scanning and Photogrammetry

Nulty

Combined Point Clouds of Photogrammetry and LiDAR.

Through a National Park Service grant, the Center of Preservation Research (CoPR) at the
University of Colorado Denver and the Bureau of Land Management (BLM) have partnered to develop a
comparative study between LiDAR digital scanning and photogrammetry, using point cloud data
capture. CoPR employs LiDAR digital scanning and the BLM utilizes photogrammetry.

The rapid evolution of digital cameras and the increasing capabilities of computers and analytical
software have dramatically expanded the variety of situations to which photogrammetry may be
applied, while simultaneously decreasing the costs of acquisition, processing, and analysis. A
variety of resource specialists (such as hydrologists, soil scientists, archaeologists,
paleontologists, biologists, range conservationists, and engineers) can greatly benefit from 3D
products derived from modern photogrammetric techniques. This is especially true in the field of
ground-based or close-range photogrammetry.[1]

In comparison, LiDAR technology has developed to allow high-definition, high-accuracy, and
high-productivity digital documentation. Major innovations in digital image processing, 3D modeling
software, and computer hardware capabilities have allowed access to vast amounts of data and
information.

The comparative study focuses on the accuracy of the data collected, time factors, gathering
and processing the digital data, cost of labor, and hardware and software needs in the areas of:

1) Project Planning Assessment, including: project goals; deliverables and outcomes desired; data
collection methods; data management planning; and site assessment.

2) Data Gathering on site, including: acquisition techniques; accuracy, completeness, resolution,
and time; as well as resources required.

3) Data Management, including: processing of digital data; application of the data; representation;
and accessibility for short-term applications.

Both CoPR and the BLM scanned/photographed the same structure at the same time. This presentation
shows an analysis of man-hours, computer time, hardware costs, software costs, and deliverable
products, in addition to an analysis of the two data sets, describing accuracy reports as well as
noted differences and similarities.

Transcript

Striegel:    If we can take our seats, we'll begin again. If we can have everyone back in the room and take your seats, please. Okay, it's my pleasure to introduce Mike Nulty. Mike is with the University of Colorado Denver at the Center of Preservation Research. He is their Technical Coordinator. Mike manages the center's state-of-the-art digital technology, including interactive website construction, SketchUp 3-D site maps, virtual tours, and other similar tools. His private sector work has involved historic and adaptive reuse projects.
Mike’s research interests lie in examining how the applications of digital technology can enhance our understanding, appreciation, and investigation of historic cultural landscapes.
Nulty:    Thank you, Mary. Good morning. I will be presenting this comparative study using LiDAR digital scanning and photogrammetry.
This all started with a grant from the National Park Service that tackled several issues, one of which included Kat's presentation yesterday on best practices. The grant also included an opportunity to better understand digital documentation, focusing on photogrammetry and LiDAR. We were looking at this part of the grant as a way for us and the Park Service to better understand the strengths and weaknesses of different technologies, as well as how these technologies can best be applied in different situations.
As we got into these two technologies, the partnership expanded to include Tom Noble and Neffra Matthews at the Bureau of Land Management, as well as Kevin Akin at Caltrans. Kevin couldn't be here today, but I'd like to send him a big thank you: a lot of the analysis and third-party understanding of the datasets that were created was completed by Kevin. As a university, we were also very excited to work with Tom and Neffra. They have incredible expertise in photogrammetry, so it was a great honor to work with them, and we learned quite a bit from them throughout the process.
So the first piece was to identify a subject, and we picked a site at Four Mile Historic Park, which is located approximately four miles southeast of downtown Denver, hence the name. The site itself is approximately twelve acres, and there are a dozen or so structures on it. It's located in a somewhat urbanized environment, encroached upon by apartment buildings, homes, and commercial centers. The Four Mile barn itself is a historic hand-hewn log structure, approximately thirty feet long by twenty feet wide and about sixteen and a half feet tall, to give you a sense of scale.
So we also created a set of rules and parameters that would help inform how we were going to collect the data and what we were going to do with it. The first was to not assume that one dataset was definitive; both datasets were treated, and compared, as equals. We were to scan and photograph the same structure on the same day. We were really looking at the structure itself and not the context. The structure had a diversity of materials and faces that would help us understand the datasets on different materials.
We looked at acquiring a five-millimeter resolution minimum throughout, and obviously we achieved higher resolutions at some locations; the coordinate system would be determined in post-processing. We also collected the data in such a way that it would be analyzed purely as datasets; nothing was acquired to create additional deliverables.
So, a quick image from our field notes. This shows a sketch of the structure, some of the context, four scan locations throughout, and then four locations for targets. We used three twin-tip HDS targets and one single swivel HDS target for a total of seven HDS targets. Again, some more field notes: this shows our scan locations, what we named them, scan details, the resolutions at each location, target data, and how long each scan took.
Here are some details from our field capture, as well as some cost and time analysis. We used a Leica ScanStation 2 that was purchased in 2010 as part of a larger package. That $62,000 number also included the HDS targets, tripods, and cases. It also included the Leica tribrach and the Nodal Ninja, which is something we use to mount our digital SLR camera to take more accurate panoramic images, as well as the batteries and an adapter bar that helps us place the camera at the same location as the scan head. We also purchased and used a Nikon D90 with a 10.5-millimeter fisheye lens, and we used a Dell Latitude ATG laptop to control the scanner.
We looked very closely at our time allocations, and we spent quite a bit of time in the field setting up all the necessary targets and equipment. We spent a bit more time in the field setting everything up than we would have without targets, but using the targets makes the post-processing faster. So we made the decision to spend more time in the field so that our processing would be quicker.
The registration process: we achieved about one to three millimeters of accuracy, and we relied heavily on target registration for a highly automated registration process.
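Cyclone's target-based registration is proprietary, but what it solves is the standard least-squares rigid-body fit between corresponding targets seen from different setups. A minimal sketch of that fit (rotation plus translation, no scale), using placeholder coordinates rather than values from the project:

```python
import numpy as np

def rigid_fit(source, target):
    """Least-squares rotation R and translation t such that R @ source + t ~ target.

    This is the standard SVD (Kabsch) solution that target-based registration
    relies on; Cyclone's own implementation is not shown here."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = target.mean(axis=0) - r @ source.mean(axis=0)
    return r, t

# Placeholder target coordinates (metres) as seen from two scanner setups.
scan_a = np.array([[0.0, 0.0, 1.5], [4.2, 0.1, 1.6], [4.1, 6.3, 1.4], [0.2, 6.2, 1.5]])
theta = np.radians(30)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
scan_b = (rot @ scan_a.T).T + np.array([10.0, 2.0, 0.1])

r, t = rigid_fit(scan_a, scan_b)
residuals_mm = np.linalg.norm((scan_a @ r.T + t) - scan_b, axis=1) * 1000
print(residuals_mm)   # near zero on this synthetic data; 1-3 mm in practice, as reported above
```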
The post-processing: here's a breakdown of our time. Again, the processing was fairly quick. The bulk of the time went to photo texturing; the images we acquired required quite a bit of time to texture-map onto the point cloud data. We utilized Leica Geosystems Cyclone 7.2, along with PTGui Pro and Pano2QTVR, to register and manipulate the point cloud data, create our HDR panoramas, and break those up in a way that the Cyclone software could use for photo texturing within Cyclone. So with that, I'm going to turn it over to Tom.
Noble:    So, the photogrammetric process. Again, we were at the site on the same day, and we set up at basically the same time. While Michael was setting up the scanner, we laid out our scale bars, essentially. We had a few obstacles that didn't come into play for the LiDAR work; we had some trees and goats and horses and such, but for the most part it wasn't too bad. Just a note: the manger on the back of that building was an obstacle that we didn't plan on really worrying about getting inside of. It's just there.
So, jumping right into what we did in the field. The equipment we used was a Nikon D700. It's only a 12-megapixel camera, but it's a full-frame sensor, with a 28-millimeter lens. We used a remote trigger setup, at the present time using an iPhone. The camera is tethered to a netbook and triggered remotely; you get full control over the camera and can actually preview the images from your iPhone or iPad or whatever. And a monopod to put the camera on, so we could raise and lower the camera.
The field setup, laying out the sticks and talking about what we wanted to do, took about thirty minutes. The time stamp from the first photo to the last was about an hour and fifteen minutes. What we didn't use, and had no need to use, was any other external data capture device. We didn't use a total station, no EDM, nothing. All the data that was captured and all of the scale information acquired came only from our scale bars and our photos.
So this is a real quick oblique look at some of the preliminary results, and in the inset, it's very difficult to see so we'll zoom in a little bit, are the camera locations as I walked around the building and had photos triggered. Looking at the barn top-down, again you can see the camera locations in 3-D space as I was going around the barn. In general, the median or average distance I was away turns out to be a little over six meters; at that distance, the resolution is about two millimeters per pixel, ranging down to 1.4 millimeters.
So in all, I took 148 photos. I threw those into software from ADAM Technology called CalibCam, oriented all the photos to each other, did a block adjustment, and solved for the whole camera-lens system calibration. The thing I want to point out is that ADAM Tech, as well as all of the image-matching or photogrammetry software that we use, is capable of matching to the sub-pixel level, typically to 1/10th of one pixel if you have high-quality images. So if your resolution is two millimeters, you can accurately orient the images to each other to 1/10th of that. That's very typical, and CalibCam solves for the camera calibration very robustly and completely, with more parameters than most other software, so that you have a very robust aerotriangulated bundle adjustment that you can actually use to survey with, using nothing but the camera and scale.
Once I generated the orientation parameters, got the camera calibration, and applied scale, I picked the targets and assigned point IDs, and those points were then used as control points for other processes. We chose to use a secondary piece of photogrammetry software called PhotoScan to deliver the dense point cloud, because it solves the entire scene at once instead of one stereo pair at a time. That was my choice; there were other options. One of the things that I have to do for PhotoScan at the present time is provide high-quality control to it. I get my high-quality control from the previous photogrammetry process. A very conservative estimate of the results of all of this is about 3/10ths of a pixel. It's actually better than that.
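As a rough check on the figures quoted above (a sketch only: the sensor width and pixel count are the D700's published specifications, and the six-meter distance is the average stated earlier in the talk):

```python
# Rough ground-sample-distance (GSD) check for the figures quoted above.
# Assumed values: Nikon D700 full-frame sensor, 36 mm wide, 4256 pixels across,
# 28 mm lens, camera roughly 6 m from the barn (per the talk).
sensor_width_mm = 36.0
image_width_px = 4256
focal_length_mm = 28.0
distance_mm = 6000.0

pixel_pitch_mm = sensor_width_mm / image_width_px           # ~0.0085 mm per pixel on the sensor
gsd_mm = pixel_pitch_mm * distance_mm / focal_length_mm     # ~1.8 mm per pixel on the barn

# Matching images to one another at ~1/10 of a pixel, as described above,
# implies relative orientation on the order of gsd_mm / 10.
orientation_mm = gsd_mm / 10                                # ~0.18 mm
conservative_mm = gsd_mm * 0.3                              # ~0.5 mm (the "3/10 of a pixel" estimate)

print(f"GSD ~ {gsd_mm:.1f} mm/pixel, 1/10 px ~ {orientation_mm:.2f} mm, 3/10 px ~ {conservative_mm:.2f} mm")
```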
So what did we get? We get a dense point cloud from PhotoScan. It's a combination of processes. The original process generated 180 million faces and almost 100 million vertices. Then we decimated that down, for performance and space considerations, to 20 million faces and about 10 million points. All of those points and faces have RGB, because that's inherent in the process, and it's a surface as well as a point cloud. As it turns out, when you decimate to about 20 million faces, you end up with about the same number of points that the LiDAR collection captured.
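Those counts follow the usual rule of thumb for large triangle meshes, which have roughly twice as many faces as vertices, so decimating to 20 million faces leaves on the order of 10 million points, about what the scanner collected. A minimal sketch of that arithmetic:

```python
# For a large, mostly closed triangle mesh, faces are roughly 2x the vertices,
# so cutting the face count by ~9x cuts the vertex (point) count by about 9x too.
original_faces = 180_000_000
original_vertices = original_faces // 2      # ~90 million, close to the ~100 million quoted
decimated_faces = 20_000_000
decimated_vertices = decimated_faces // 2    # ~10 million points, comparable to the LiDAR capture
print(original_vertices, decimated_vertices)
```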
Another look at the decimated point cloud without any color, just for comparison; and another visualization: if you decimate it even more, you can see the aspect.
So what software did I use? All of the images were preprocessed in batch, generally using Camera Raw to apply some corrections; basically you take the raw file information and produce uncompressed JPEGs for further processing using CalibCam and then PhotoScan Pro. There are alternatives you could use that would give pretty much identical results, but that's the cost of all of that software. We did use open-source, free software for data conversion: LAStools, MeshLab, and some comparison using CloudCompare, which is actually remarkably good software that is also open source.
How much time did it take? All of the pieces of software can be run in batch; you have to set them up and let them run. CalibCam, doing the orientation and actually picking the targets and assigning point IDs, can take some time. I made a mistake that I got punished for severely, so it took a lot more time than it should have, but I report it because that can happen. So that's the amount of time. Now, the automated time to solve for the very dense point cloud can take a lot of time, but again, that is set up to run overnight; and if you want a really, really high-density point cloud, it can take a really long time. Whether that dense a point cloud is actually necessary is subject to debate. The processing time goes up exponentially with the density of point cloud you want.
There are two things that happened. Mike exported the LiDAR data into a package so that we could send it off to our aid in this project for analysis, Kevin Akin, who does LiDAR quality control as part of his job at Caltrans. To facilitate that, I exported the control values from the photogrammetry process so they could be used to translate the LiDAR data into the photogrammetry point cloud's coordinate space, so we could put them in the same 3-D space for comparison. I exported the decimated surface to a PLY file, which I then converted into a LAS file so it could be imported into Cyclone for comparison by Kevin.

Kevin Akin got the data and did some initial validation of what Mike had done, confirming that the scans were oriented together and registered correctly; he did another check of what Mike did and came up with the same results, confirming that all the LiDAR data were oriented to each other correctly. Then, because I had provided the coordinates for the LiDAR point cloud, Kevin went in and picked our scale bars, which were in the scene when it was scanned, and picked those points so they could be used to translate, not scale or do anything else, just move the LiDAR data into the same 3-D space. Picking the targets in the directions along the scale is easy, left, right, up, or down; but the depth, because of the scatter of the data, accurately locating "z" away from the scanner, is a little more difficult.
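The translation-only step Tom describes amounts to averaging the offsets between the target coordinates picked in each dataset and shifting the LiDAR cloud by that amount. A minimal sketch under that assumption, with placeholder coordinates rather than the study's values (Kevin's actual picks were made inside Cyclone):

```python
import numpy as np

# Hypothetical picked coordinates of the scale-bar targets in each dataset (metres).
# These numbers are placeholders, not values from the study.
photogrammetry_targets = np.array([
    [ 1.204, 0.512, 0.331],
    [ 4.877, 0.498, 0.327],
    [ 1.215, 5.140, 0.340],
])
lidar_targets = np.array([
    [11.198, 20.518, 3.334],
    [14.869, 20.505, 3.329],
    [11.210, 25.147, 3.345],
])

# Translation-only fit: average offset between corresponding target picks.
# No rotation or scale is solved for, matching the "translate only" step described above.
offset = (photogrammetry_targets - lidar_targets).mean(axis=0)
lidar_in_photo_frame = lidar_targets + offset

# Residuals indicate how well the picks agree (compare the 2-8 mm figure discussed below).
residuals_mm = np.linalg.norm(lidar_in_photo_frame - photogrammetry_targets, axis=1) * 1000
print(offset, residuals_mm)
```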
We imported the photogrammetry values and did some analyses on how well he picked those points, and the statistics came back with an error of two to eight millimeters, more or less, in his point picking. I talked to Kevin after the fact; most of that error was in pinpointing the depth rather than left, right, up, or down relative to the targets. He was able to pick left, right, up, or down within about a millimeter, but pinpointing depth in the point cloud is more difficult.
He brought those two datasets into Cyclone. The photogrammetry is shown in blue; the LiDAR, as it turns out, sits slightly inside it. So we did a cross-section. He zoomed in on the corner to do some comparisons and some distance measuring, and those distances are essentially within the tolerances, or the specifications, of the scanner itself, more or less. Trying to pick points between two point clouds is somewhat difficult, but he did a pretty good job and basically confirmed that the datasets are in the same 3-D space within the tolerances of the scan. Another look at one section of the wall.
So from here on out, I'm going to take the data into CloudCompare for comparison. Here's another quick look at the point clouds: the photogrammetry is in the upper left, the LiDAR with the RGB draped on it in the upper right, and then kind of a reversal, the LiDAR data inside of the blue photogrammetry point cloud, and here the photogrammetry data with the LiDAR point cloud superimposed. As just point clouds, that's their relationship, again shown inside CloudCompare.
So CloudCompare has the ability to subtract one dataset from another, if you will. We have more than just two point clouds, however. Integral to the photogrammetric process, there is a surface; all of the points of the photogrammetry process lie on a surface. So we can not only subtract or compare a point cloud to a point cloud, we can take a mesh and difference, or subtract, the point cloud from that mesh. That way we get a more accurate distance from a surface to a point cloud, rather than just random point-to-point distances between the datasets.
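CloudCompare performs these comparisons internally; purely to illustrate the distinction Tom draws, here is a sketch of the simpler cloud-to-cloud case using nearest-neighbour distances on synthetic data. A cloud-to-mesh comparison would instead measure to the nearest triangle of the photogrammetric surface, which avoids the bias of comparing scattered points to scattered points:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference_cloud, compared_cloud):
    """Nearest-neighbour distance from each point in compared_cloud to reference_cloud.

    A cloud-to-mesh comparison (as used in the talk) measures to the nearest point
    on the triangulated surface instead, which is more accurate where the
    reference points are sparse."""
    tree = cKDTree(reference_cloud)
    distances, _ = tree.query(compared_cloud)
    return distances

# Synthetic example: a "photogrammetry" cloud and a "LiDAR" cloud offset by about 3 mm.
rng = np.random.default_rng(0)
photo_cloud = rng.uniform(0, 1, size=(50_000, 3))
lidar_cloud = photo_cloud + np.array([0.003, 0.0, 0.0]) + rng.normal(0, 0.001, size=photo_cloud.shape)

d = cloud_to_cloud_distances(photo_cloud, lidar_cloud)
print(f"median distance ~ {np.median(d) * 1000:.1f} mm")
```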
So here is CloudCompare showing the distances, with only the data in the range of plus or minus ten millimeters shown. The grey in this area is data that is not included in one dataset or the other, so it's excluded from the processing. Take away the data that's not in one dataset or the other, or that is outside those limits, and that's what the cloud looks like. If you look at a histogram of the data, about 90% of the data is confined between plus one millimeter and minus ten millimeters, so that restricts the dataset to those values. Take the grey away again, and that's the scale of the data from plus one to minus ten millimeters. If we restrict the data even further, to plus one to minus five millimeters, we can take another look at how the data relates. The data is more in common, and you can see that a lot of the data that went away was on the roof; that was the weak link for both of our datasets.
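The banding described here, keeping only signed distances between +1 mm and -10 mm and then tightening to -5 mm, is easy to reproduce on any set of signed cloud-to-mesh distances. A minimal sketch with synthetic values (not the study's data):

```python
import numpy as np

# Synthetic signed distances in metres (placeholders, not the study's values):
# negative means the LiDAR point sits inside the photogrammetric surface.
rng = np.random.default_rng(1)
signed_d = rng.normal(-0.003, 0.004, size=1_000_000)

def fraction_in_band(distances, low_m, high_m):
    """Fraction of signed distances falling between low_m and high_m."""
    mask = (distances >= low_m) & (distances <= high_m)
    return mask.mean()

print(f"+1 to -10 mm: {fraction_in_band(signed_d, -0.010, 0.001):.1%}")
print(f"+1 to  -5 mm: {fraction_in_band(signed_d, -0.005, 0.001):.1%}")

# A histogram like the one shown in the talk:
counts, edges = np.histogram(signed_d, bins=50, range=(-0.010, 0.001))
```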
I'm doing the same thing for the north end of the building, the same sequence. Here's plus one to minus ten millimeters; take the non-data away, restrict it to plus one to minus five millimeters, and you can probably see a pattern. This is another side of the building. Again, the same sequence: plus one to minus ten, then plus one to minus five millimeters, and that's the dataset comparison. Another look at the north end and the east end of the building, just to see some potential patterns.
Some final observations. Now, we're not necessarily going to form any conclusions; we're just going to allow you to make some observations and form your own. But here are some of ours. There are differences between the two technologies, I won't even read them all, but there are differences, and we noted that. They're within specifications. The differences were predictable. We are within tolerance. Either methodology would be sufficient to quantify that building for structural documentation, according to what Kevin found and what we found, along with some of our other observations about how much time it took and the costs involved. We can't possibly dissect the data in a twenty-minute talk, so we will open it up to questions afterwards and welcome any insights, observations, questions, and conclusions. So, I think that's it.
Lessons learned? If we were to go back and do it over again, this is what we would do differently. We would do some high-resolution scans on the control stick so that we would be able to better quantify those differences in the point picking and control used, and we would probably have picked a coordinate system beforehand, although it wouldn't necessarily be necessary. The roof: we would both probably do additional scans to capture the roof, but again, the point wasn't actually to capture this building; it was more to capture datasets for comparison. I guess that's it.

 

Speaker Bio
Mike Nulty is CoPR's Technical Coordinator. Michael manages the center's state-of-the-art digital technology, including interactive website construction, SketchUp 3-D site maps, virtual tours, and other similar tools. His private sector work has involved historic and adaptive reuse projects. Michael's research interests lie in examining how the applications of digital technology can enhance our understanding, appreciation, and investigation of historic cultural landscapes.

Tom Noble is a member of the Branch of Resource Technology at the Bureau of Land Management. His expertise includes all aspects of photogrammetric projects; image manipulation and rectification; close-range photogrammetric techniques; geodesy; cadastral surveying; 3D modeling; AutoCAD; and programming LISP routines inside AutoCAD. He is a graduate of the Oregon Institute of Technology with a Bachelor of Science degree in Civil Engineering.
