This lecture was presented at the 3D Digital Documentation Summit held July 10-12, 2012 at the Presidio, San Francisco, CA

Four Light Total Appearance Imaging of Paintings


Rendering using studio lights at 45° from the left and right where the left side has three times more light intensity from a distance of two feet.

by Roy S. Berns, Tongbo Chen, and Jim Coddington

Imaging artwork for documentation and reproduction has a long and rich history. The vast majority of such imaging reduces an illuminated three-dimensional object onto a two-dimensional plane, rendering a specific observing experience, defined by the photographer, conservator, or curator. If we can separate capture and rendering, the object can be re-rendered as criteria change. This can be accomplished by imaging the object’s total appearance followed by computer-graphics rendering.

For paintings, a complete physical description includes spatially varying spectral reflectance factor, R, surface macrostructure (depth or surface normal, n), and surface microstructure (bi-directional reflectance distribution function, BRDF). Measurements of R, BRDF, and n can be accomplished with a single imaging system. However, if the object surface has appreciable impasto and is not matte, hundreds of images may be required to assure that both diffuse and specular reflections have been captured for every point on the object, which when combined with true spectral imaging, would require a complex research apparatus.

Beginning in 2006 a research program was initiated to develop a practical approach to measuring the total appearance of paintings, resulting in two lighting systems: one to measure n and R and one to measure BRDF. The first system is the subject of this submission, referred to as “4LI”: four light imaging.

Photometric stereo is a straightforward technique to measure n, requiring a minimum of three light directions using point sources. The images cannot have any specular highlights, achieved in this approach using cross-polarization. The imaging system comprised four polarized Broncolor strobes and a polarized Canon Mark II camera. Calibration included imaging a glossy black ball to define lighting geometry, setting cross polarization, and imaging a diffuse white board and color target (for spectral estimation using the Dual-RGB imaging system). An advantage of this approach is that the object size is not limited. Eight images are collected for a given object: one per light source for each of the two RGB filter positions (4 × 2 = 8). Automated software outputs diffuse color and surface normal floating-point images (PFM).

Software, “Artviewer,” was written to render images interactively for specific lighting conditions, either a point source or museum lighting. The Ward model was used to define the BRDF, set interactively or using an artist material database. “Isee,” similar to HDR Shop, was also written to view the floating-point images.

Transcript

Welcome back. Hopefully everyone has enjoyed the poster session and lunch. We’ll start back with our next set of talks. Today we are going to hear from Roy Berns on color and spectral archiving using dual RGB imaging. Roy is the Richard S. Hunter Professor in Color Science, Appearance, and Technology and Director of the Munsell Color Science Laboratory within the Center for Imaging Science at Rochester Institute of Technology. He has B.S. and M.S. degrees in Textiles from the University of California, Davis, and a Ph.D. in Chemistry from Rensselaer Polytechnic Institute. He is the author of the third edition of Billmeyer and Saltzman’s Principles of Color Technology. His main area of research focuses on using color and imaging science for the visual arts. I will hand it over to Roy, thank you.

Thanks Jason for the nice introduction, thanks everybody for coming back after the poster session, and thanks for introducing my poster session. That was great.

What I would like to talk about today is something that we’ve been doing, which is kind of an alternate approach to RTI. It’s a different way of gathering similar information, and we are calling it Four Light Total Appearance Imaging of Paintings.

So, this is a bunch of different pictures of Starry Night taken from different angles, and obviously we can see that there is quite rich texture in this painting, and we would like to capture that information. Yet normally, a photographer takes this dynamic experience and reduces it to a single image, right? The photographer is looking at it, and we can look at it from different orientations, but the photograph is a single view. So the idea here is: can we start to capture information so that we retain more of this dynamic experience?

And so, the research goals for this work are to record appearance properties of paintings and drawings so that computer graphics can then be used to render the artwork, making the angle of view and the lighting geometry a decision that’s made at the end and not at the beginning. We would also like to develop hardware and software appropriate for use in a museum imaging department, and finally to capture some data to provide new and rich content. So what can we do if we have such data?

So this is the idea of total appearance imaging, and to do this there are three main areas of information that have to be captured. The first is spectral data, which can then be used to define color. Then there is the surface microstructure, which we can think of as gloss; typically what’s measured here is the bidirectional reflectance distribution function, or BRDF. This gives you information about how glossy an object is as a function of view angle. And then there is the surface macrostructure, which can be defined two ways: one is a height map, which is the equivalent of what we have been hearing about up to this point through photogrammetry and laser scanning, and the other is the surface normal, which gives you shading information. The surface normal is the direction perpendicular to the surface of an object, so if you have a curve, the surface normal changes direction, and that surface normal can then be used to introduce shading in graphics rendering.

So, for colorimetry, we have been using a dual RGB approach, in this case with a Canon Mark II camera. So there’s the camera, and there is a filter wheel here that we designed and built in order to capture a pair of RGB images. We’re using Broncolor strobes, and the calibration is done with a ColorChecker Classic target measured with an i1 spectrophotometer.

This is kind of typical of the calibration accuracy of this system. In this case I am making a plot of a* versus b* in CIELAB, and the goal is that, for all the arrows, which represent errors, the arrowheads fall within the filled dot, which is the calibration color. So this is telling me that my system can give high accuracy as long as my paintings have properties similar, in this case, to a ColorChecker. This is the kind of spectral accuracy that is typical of the system. So again, it’s not perfect, but it is not bad considering that it is very undersampled spectrally.
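Calibration error plots like this are commonly summarized as CIELAB color differences between the measured and camera-estimated values. The talk does not name the metric, so the sketch below assumes the simple CIE 1976 ΔE*ab (function and variable names are illustrative):

```python
import numpy as np

def delta_e_ab(lab_measured, lab_estimated):
    """CIE 1976 color difference Delta E*ab between paired Lab values.

    Both arguments are (n, 3) arrays of (L*, a*, b*); the arrows in the
    a*-b* plot correspond to the per-sample difference vectors."""
    diff = np.asarray(lab_measured, float) - np.asarray(lab_estimated, float)
    return np.sqrt((diff ** 2).sum(axis=1))
```

For example, a measured patch of (50, 10, 10) estimated as (52, 12, 9) differs by √(4 + 4 + 1) = 3 ΔE*ab.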

The surface microstructure is coming from an artist material database. We had a project in the past in which we prepared, I think, around 100 different samples, painted using different painting techniques and materials: tempera, oil, acrylic, and drawing materials such as pencil. Each of these samples has been measured in order to characterize its BRDF, and using the Ward model we then have a BRDF library. So in our image rendering you can choose what surface properties you would like to impose on the object. This type of imaging does not capture spatially varying gloss; it is a diffuse imaging technique, and the gloss gets imposed during the rendering process.
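The Ward model mentioned here can be evaluated directly. A minimal sketch of the common isotropic form, assuming unit input vectors; the parameter names are illustrative, not taken from the authors' database:

```python
import numpy as np

def ward_brdf(n, l, v, rho_d, rho_s, alpha):
    """Isotropic Ward BRDF: a Lambertian diffuse term plus a Gaussian
    specular lobe around the half vector.

    n, l, v : unit surface normal, light, and view vectors (length-3)
    rho_d   : diffuse reflectance
    rho_s   : specular reflectance
    alpha   : surface roughness (width of the specular lobe)
    """
    n, l, v = (np.asarray(x, float) for x in (n, l, v))
    cos_i, cos_r = n @ l, n @ v
    if cos_i <= 0 or cos_r <= 0:
        return 0.0                                   # below the horizon
    h = (l + v) / np.linalg.norm(l + v)              # half vector
    cos_h = np.clip(n @ h, 1e-8, 1.0)
    tan2_delta = (1.0 - cos_h ** 2) / cos_h ** 2     # tan^2 of half angle
    spec = (rho_s / (4.0 * np.pi * alpha ** 2 * np.sqrt(cos_i * cos_r))
            * np.exp(-tan2_delta / alpha ** 2))
    return rho_d / np.pi + spec
```

A library of (rho_d, rho_s, alpha) triples per artist material is then all the renderer needs to impose gloss on the diffuse capture.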

For the surface macrostructure we’re using polarization-enhanced photometric stereo. Here you can see that there is a light source here, here, and here, and that we’re using cross-polarization. This is in the Museum of Modern Art, in the paintings conservation lab, and this is a Jackson Pollock painting.

Here you can see a four light version; this is my laboratory at RIT. The idea is that these four lights are arranged in opposing pairs, 180 degrees apart, each aimed at 45 degrees toward the painting. So in this case I can capture with conventional lighting and with this photometric stereo method, where we capture four successive images, firing one light at a time. With the cross polarization, I’m only measuring diffuse light, and one of the requirements for photometric stereo is that the scene is diffuse.
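The capture just described feeds classic Lambertian photometric stereo: with specular reflections removed by cross-polarization, each pixel's intensities under the known light directions can be solved for albedo and surface normal by least squares. A minimal sketch, not the authors' software (names are illustrative):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover unit surface normals and albedo from three or more
    cross-polarized (diffuse-only) images under known point lights.

    images:     (k, h, w) array of grayscale intensities
    light_dirs: (k, 3) array of unit light-direction vectors
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                    # (k, h*w) intensity matrix
    L = np.asarray(light_dirs, float)            # (k, 3)
    # Lambertian model: I = L @ (albedo * n); least-squares per pixel
    G, *_ = np.linalg.lstsq(L, I, rcond=None)    # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    n = G / np.maximum(albedo, 1e-8)             # normalize to unit length
    return n.reshape(3, h, w), albedo.reshape(h, w)
```

With four lights the system is overdetermined, which is what makes the least-squares solve robust to noise in any single exposure.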

So, calibration: we have some linear polarizers, and because the camera has a circular polarizer, I’ve got to set its state of polarization. We then have a diffuse white board attached to a metal plate, and we’re using that to do flat fielding, to allow for cosine falloff, and also because our light sources are not point lights. There is a black cue ball that is imaged, and the black cue ball is used both to determine cross polarization, by looking for the specular peak to be minimized in its energy, and to define the position of the four lights very precisely. And then we are imaging a ColorChecker, and there is a transformation of our dual RGB images to floating point images. For this application, we have been generating floating point images that are encoded in sRGB, so when you display them on screen they look reasonable. So here is this idea of the specular highlight. We have a cue ball that I drilled a 1/4-20 thread into, so that it attaches to what looks like a microphone stand.
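The white-board step amounts to a per-pixel flat-field division, which compensates in one step for nonuniform illumination, cosine falloff, and the non-point light sources. A minimal sketch of the ratio correction; the optional dark frame is a common refinement assumed here, not something the talk describes:

```python
import numpy as np

def flat_field(raw, white, dark=None):
    """Flat-field correction against a diffuse white-board image.

    raw:   captured image of the painting
    white: image of the uniform diffuse white board under the same light
    dark:  optional dark-frame image (sensor offset), subtracted first
    """
    raw = np.asarray(raw, float)
    white = np.asarray(white, float)
    if dark is not None:
        raw = raw - dark
        white = white - dark
    # Dividing by the white image cancels the spatial illumination pattern
    return raw / np.maximum(white, 1e-8)
```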

So we’ve written software that does the colorimetry, and calibration and processing software that takes the flat field images and the cue ball images, defines the lighting, and calculates the surface normals for a given image. Then we’ve written two pieces of software that can read our image format, which is PFM. One is Artviewer, and we drew heavily upon the software that CHI developed, so I have to thank them for providing the prototype for the software. So we had the kind of usual stuff where you can change your view angle. One of the things we added was the ability to render images that are typical of studio lighting. So you can define studio lights at 45 degrees, and you can even change their relative energy if you want to create the equivalent of a rendering of a painting photographed in a normal studio. We also wrote something similar to HDR Shop, called Isee, so that we can quantitatively analyze and view the floating point images.
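Re-rendering the diffuse component under virtual studio lights can be sketched as Lambertian shading from the recovered albedo and normal maps. This is an illustrative sketch, not the actual Artviewer code; the unequal intensities echo the 3:1 left/right studio setup in the caption at the top of this page:

```python
import numpy as np

def render_lambertian(albedo, normals, lights, intensities):
    """Re-render a diffuse image from recovered appearance data.

    albedo:      (h, w) diffuse reflectance map
    normals:     (3, h, w) unit surface normals
    lights:      (k, 3) unit light-direction vectors
    intensities: (k,) relative light energies (e.g. 3:1 left/right)
    """
    shading = np.zeros(albedo.shape)
    for l, e in zip(np.asarray(lights, float), intensities):
        # n . l per pixel, clamped so back-facing points stay dark
        shading += e * np.clip(np.einsum('ihw,i->hw', normals, l), 0, None)
    return albedo * shading
```

Because lighting is applied at render time, the view and lighting geometry really do become end-of-pipeline decisions, as the research goals state.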

So we have tested this at MoMA. These are the paintings that we’ve imaged at different time periods, and if you are familiar with these paintings, you’ll know that they are not to scale, especially this Pollock. I was trying to lay them out in a way that looked pleasing, and then I just started laughing because I thought, “Oh my gosh! This is so not correct.” But life’s too short, so I just wrote “not to scale.” That’s a good thing: when things are not going your way, you just write “Not to scale.”

So, this is a painting I made specifically for this type of 3D imaging. It has some amount of impasto, and I also added a gloss medium selectively, so it has spatially varying gloss. It turns out I’ve used the painting as a way to get funding. Some people have also made fun of me for it. The first time I showed it at the National Gallery, about a half hour into the meeting, they were like, “Roy, could you, like, turn that thing over?” So anyway, this is my nod to the Fauves. It also has dioxazine purple, which is kind of a challenging pigment to image. So now I can see if my two days fiddling with the projector have paid off.

So this is conventional imaging, where we are using two strobes at 45 degrees. This is kind of typical of the way we tend to photograph paintings in our laboratory. This is the diffuse image, the average of the four lights, without any gloss. And as you can see, it generates a very soft effect here; a lot of detail in the canvas can’t be seen. What is interesting is that this type of photography is actually typical of the Museum of Modern Art, because they tend to like fairly soft images with the lighting setup in their photo studio.

This is the four light approach in which we’ve rendered to try to match the conventional lighting, and again, you can see some amount of highlight detail is now coming back.

This is the RTI system; at MoMA there is an RTI dome system, and we had this painting imaged there. And you can see in this case, because the lighting from the [ ? ], we then chose an angle that was as similar as possible to the angles of our conventional lighting. One of the things you see here is that there are black lines, because the lighting is much more directional than conventional lighting, so we are getting stronger shadows here. But you can see you’re getting a good representation of the surface properties.

We also had the painting laser scanned with an Arius 3D scanner up in Canada, and what was pretty gratifying to me was that the surface normals coming out of our four light approach are quite close to the laser scan. So I was pretty happy that this very abridged method is doing a good job at pulling out surface properties.

We then recently did some visual psychophysical testing in which we compared different approaches: the strobes with soft boxes, laser scanning for the surface normal, dual RGB for the diffuse color, three light imaging, four light imaging, RTI, and then a very complicated system using polarization-enhanced linear light source reflectometry, which would be a separate presentation.

So what was done is a paired comparison experiment. So let’s look at this first: every image is compared pairwise to the actual painting. We set up the painting to be lit from one angle, and then all the images were similarly rendered; we tried to have the rendering match the lighting. We then matched white points and luminance. You can see an observer looking at the screen; it’s a split screen, so they could freely look back and forth and move their heads around, and then we asked different questions. First the paintings were evaluated with the question, “Which image looks most like the real painting?” So just a kind of naive question, “Which do you think looks closest?” And then we went on to more specific questions about gloss and shininess, texture, and finally color.
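Paired-comparison data like this are commonly converted to an interval scale with Thurstone's law of comparative judgment. The talk does not name its analysis method, so the Case V variant below is an illustrative assumption, not the authors' procedure:

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins):
    """Interval scale values from a paired-comparison win-count matrix.

    wins[i, j] = number of times stimulus i was preferred over j.
    Case V: convert preference proportions to z-scores, then take the
    row means as scale values (mean of the scale is zero).
    """
    wins = np.asarray(wins, float)
    n = wins + wins.T                       # trials per pair
    p = np.divide(wins, n, out=np.full_like(wins, 0.5), where=n > 0)
    p = np.clip(p, 0.01, 0.99)              # avoid infinite z at 0 or 1
    z = np.vectorize(NormalDist().inv_cdf)(p)
    np.fill_diagonal(z, 0.0)                # a stimulus vs. itself
    return z.mean(axis=1)
```

Non-overlapping error bars on these scale values are then what supports claims like "the four light image was statistically superior."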

So these are the images, in this case for what we are calling total appearance. I sort of see conventional lighting as my benchmark here, because this is the typical museum approach to photography, and I am trying to create a system that can both mimic conventional photography and make all this extra data available. So that’s what I am looking for in this approach. What’s being plotted here is the perceptual result versus each system; these are the averages, and this is the visual uncertainty based on the twenty-five observers. If the mean value does not intersect the error bars, then there is a statistically significant difference. So for this particular painting, the four light image was statistically superior to the other images. Here we can see that the laser, conventional, and linear were the same, and then you have all these different categories.

The conventional soft box, so this is the soft imaging, was the least preferred approach, and again this makes sense because of how the lighting system was set up, at 60 degrees.

This is a second image, which is a fake Van Gogh that I bought in Chinatown in L.A. and cut to fit as a 12×12, so it is a riff on, I guess, one of his Wheatfield images. Here we can see that the conventional was the best, but the basic trends were the same, with the soft box the poorest. This was an image of tulips that was painted to have a moderate level of impasto, and again you can see the same kind of trends, except in this case the three light approach was the poorest. So here we can see the average results, going from the conventional to the linear, the most complicated system of the three, to the RTI system, and then the conventional soft box.

We then wanted to look at more detail about image shininess and gloss. The RTI, four light, and laser do not generate gloss information directly. Well, that’s not true: RTI has it built in, because it is measuring gloss there. The linear system has the most information compared to conventional, and so you can kind of see these results. The conventional soft box, again, was the worst.

In the case of texture, we added in the surface normal map, so you can do a rendering from the surface normal. We also did a lightness and chroma histogram equalization of the RTI image in order to have it more closely match. One of the reasons I didn’t use this in the other experiments is that this sort of color enhancement, while making the image more similar, was reducing some of the texture apparent in the image; I didn’t like the idea of losing appearance attributes, so we didn’t do this in the other experiments. I was hoping that the surface normal maps would be best for texture, which is what you would expect, and we found that to be true: the four light surface normal was the best, and laser was also good. And here you can see the thing I was concerned with: for the RTI image, when I did the equalization, I’m losing information about texture, so that was why I was worried about this.

As we would expect, these surface normal maps are very useful. For color, the conventional, four light, and linear systems were all about the same. The diffuse and the soft boxes were least preferred, and I was surprised by this: observers do feel that having some amount of information about surface topography is important even when judging color.

So in conclusion, polarization-enhanced photometric stereo with dual RGB imaging seems like a viable approach for archiving paintings. The four light approach can be implemented pretty easily with strobes and a regular digital camera, so I think it would fit quite well in a normal studio. I found it interesting that naive observers didn’t require spatially varying information when judging total appearance. The soft box and RTI conveyed total appearance worst for this set of observers, whereas the four light surface normal best conveyed texture.

In the future, I would like to repeat these experiments with museum personnel, do more testing, and improve our software.

Thank You for your attention.

Speaker Bios

Roy S. Berns is the Richard S. Hunter Professor in Color Science, Appearance, and Technology and Director of the Munsell Color Science Laboratory within the Center for Imaging Science at Rochester Institute of Technology, USA. He has B.S. and M.S. degrees in Textiles from the University of California at Davis and a Ph.D. degree in Chemistry from Rensselaer Polytechnic Institute (RPI). He is the author of the third edition of Billmeyer and Saltzman’s Principles of Color Technology. Berns’ main research focus is using color and imaging sciences for the visual arts.

Tongbo Chen is a Postdoctoral Fellow in the Munsell Color Science Laboratory. He has B.S. (Harbin Institute of Technology), M.S. (Beijing University of Technology), and Ph.D. (Max-Planck-Institut Informatik and Saarland University) degrees in Computer Science. Before joining RIT, he was a Postdoctoral Research Associate at the USC ICT Graphics Laboratory.

Jim Coddington is Agnes Gund Chief Conservator at MoMA. He has a B.A. from Reed College and an M.S. from the University of Delaware/Winterthur Museum Conservation Program.
