Digital Photographic Methods for
Discerning Artistic Images of Hopewellian Copper Artifacts
Christopher Carr, Professor
Department of Anthropology
Arizona State University
Tempe, AZ 85287-2402
Andrew D. W. Lydecker, MS, MA
ASC Group, Inc.
4620 Indianola Avenue
Columbus, OH 43214
Edward Kopala, Manager
Sensor Systems Simulation, Integration, and Testing
Battelle Columbus Laboratories
505 King Avenue
Columbus, OH 43201-2693
Evan B. Preston, M.S., Principal Research Scientist
Sensor Systems Simulation, Integration, and Testing
Battelle Columbus Laboratories
505 King Avenue
Columbus, OH 43201-2693
Duane Simpson, MA
Center for Advanced Spatial Technologies
University of Arkansas
Fayetteville, Arkansas 72701
Jeff Barron, M.A.
Center for Advanced Spatial Technologies
University of Arkansas
Fayetteville, Arkansas 72701
This chapter describes the digital photographic aspects of the NCPTT-sponsored research. Sections discuss the hardware and software selected for specific purposes for color and infrared imaging; image capture for flat and curved objects; photomosaicing of images of curved objects; photographic enhancement routines; registration methods in preparation for combining color and infrared images into hybrid, composite images; and evaluation of which bands, band calculations, hybrid band combinations, and hybrid combination procedures are most effective for rendering visible the artistic compositions on Hopewell copper artifacts.
Hardware and Software
Breastplates, celts, and headplates within the collections of the Ohio Historical Society, Columbus, OH, were digitally photographed with state-of-the-art digital cameras (Battelle Laboratories; ASC, Inc.) of three kinds: (a) ultra-high resolution color, (b) near-infrared, and (c) midrange infrared. For color image capture, we used a combination of a high resolution digital camera and a computer system. The camera, a Leaf Lumina manufactured by Leaf Systems, uses a CCD array to produce images with a spatial resolution of 3380 x 2254 pixels and a 36-bit color depth, and accepts standard Nikon bayonet-mount lenses. During all phases of this project, we used a Nikon 60 mm 1:2.8 lens. Two different computers were used during the course of image capture, both running the Macintosh OS. Computer #1 was a Power Computing PowerCenter 132, with a 132 MHz PowerPC 604, 80 MB of RAM, and a 1 GB hard drive. Computer #2 was a Power Macintosh 7100/66, with a 66 MHz PowerPC 601, 56 MB of RAM, 6 GB of hard drive space, and a recordable CD drive. Software used for the capture (and subsequent analysis) included NIH Image and Adobe Photoshop versions 3.0 and 4.0, as well as the Leaf Lumina TWAIN scanning utility, a Photoshop plug-in. Previously, we had used Leaf's TWAIN-compatible scanner plug-in EasyScan 1.2, written for the Macintosh. The camera was mounted on a standard 35 mm copy stand and connected to the computer via standard SCSI cables. Two 500-watt incandescent flood lamps were used for illumination. The two lights were set up perpendicular to each other, in order to cast shadows multidirectionally and to bring out specimen relief optimally.
Photography of artifacts in the field was a relatively simple process. The camera and computer were set up in a designated area, either in the collections room or very close by, to minimize the distance the fragile objects had to travel. Objects to photograph were prioritized based on their importance relative to the overall corpus of imagery, and brought to the photography area in lots. They were lined up so that they could be photographed in an assembly-line style. Exposure times for the Lumina ranged from 30 seconds to 10 minutes, depending on the f-stop and lighting, with the average being 2-4 minutes. Higher f-stops were used when an object had high relief, such as the curved headplates or those objects covered with burned bone. Once an image was captured in Photoshop, it was reviewed for focus and evenness of lighting, cropped to eliminate any extra background, and saved as an LZW-compressed .TIF file to the hard drive. The cropping and compressing were done solely to reduce the unwieldy size of each file, which was more than 22 MB in raw form.
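The crop-and-save step described above was done in Photoshop; purely as an illustration, the same operation can be sketched in Python with the Pillow library (the crop box and file object here are hypothetical, not values from the project):

```python
import io
from PIL import Image

def save_cropped_lzw(img, box, fp):
    """Crop an image to a (left, upper, right, lower) box to remove extra
    background, then save it as an LZW-compressed TIFF to shrink the file."""
    img.crop(box).save(fp, format="TIFF", compression="tiff_lzw")
```

LZW is a lossless compression, so, as in the project workflow, only the cropping discards pixels; the retained image data are unchanged.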
For infrared image capture, we used a Cohu 4810 near-infrared camera and a Hamamatsu C1000-03 midrange infrared camera, which have sensing ranges of 0.7 – 1.0 microns and 1.0 – 1.8 microns, respectively. Diffuse quartz-halogen lighting operated at 3000 K provided specimen illumination. A Nikkor 105mm ƒ4.5 IR lens was used with the near-infrared camera; coupled with a spectral filter, it yields an imaged bandwidth of 0.7 – 1.0 microns. The camera has a 2/3-inch format frame-transfer charge-coupled device (CCD) with an active imaging area measuring 8.8mm (horizontal) x 6.6mm (vertical).
The active imaging area is an array of 754 horizontal by 488 vertical picture elements. This results in a field-of-view of 84 mrad (horizontal) by 63 mrad (vertical) and, at a viewing distance of 28 inches, a spatial resolution of 0.11mm (horizontal) by 0.26mm (vertical). A Nikon 24mm ƒ1.4 IR lens was used with the midrange infrared camera. The camera uses a special infrared vidicon that is responsive to energy out to 1.8 microns. With an active imaging area of 12.7mm (horizontal) x 9.5mm (vertical), the camera has a field-of-view of 529 mrad (horizontal) by 398 mrad (vertical). At a specimen viewing distance of 28 inches, this camera has a spatial resolution of 0.54mm (horizontal) by 0.63mm (vertical). Spectral filters used with this camera result in a wavelength sensing region of 1.0 to 1.8 microns.
Video signals from the near-infrared and midrange infrared cameras are processed with a 266 MHz Pentium II PC equipped with a Meteor frame grabber board (Matrox Incorporated). The board digitizes the analog video camera signal into 640 x 480 pixel output, with adjustable brightness and contrast.
The Matrox Meteor board key features include: (1) Captures NTSC/PAL/SECAM, RS-170/CCIR and standard RGB; (2) single slot PCI frame grabber; (3) real-time transfer to system or display RAM; (4) multiple video inputs (up to 4 channels); (5) high-quality video scaling unit; (6) live video-in-a-window; (7) stable synchronization; (8) support for Windows NT, Windows 95, and DOS4GW 32-bit DOS extender.
The Matrox Meteor is a high-quality color and monochrome PCI frame grabber that provides real-time image transfer to host, video-in-a-window, and support for the Matrox Imaging Library (MIL) and Matrox Inspector interactive imaging software. The use of this board and its associated image processing software allow users to develop powerful, yet cost-effective host-based machine vision, image analysis and medical imaging systems. The Matrox Meteor transfers image data in real-time to the CPU RAM for processing or the display buffer for real-time display. The Meteor is capable of up to 45 MB/sec transfers.
Other features of this frame grabber board include: (1) the incoming video stream can be tuned through software adjustable brightness, contrast, hue, and saturation; (2) excellent synchronization even when grabbing from still video cameras and VCRs in playback and pause modes; (3) high-quality live video-in-a-window display that can be scaled down to any size and positioned anywhere on the screen; and (4) the Digital Video to PCI Interface unit, which supports various data transfer formats (8-bit mono, 15-bit and 24-bit RGB).
Software used in support of the frame-grabbing board included two packages developed by Matrox, Incorporated: Inspector and the Matrox Imaging Library (MIL).
Matrox Inspector is Windows-based software that offers interactive access to an extensive set of imaging operations. Features of this software package include: (1) complete set of imaging functions; (2) easy-to-use interactive work environment; (3) interfaces to standard and non-standard cameras; (4) loading and saving in many file formats; (5) display of color and monochrome images; (6) scaling, zooming, panning and scrolling; (7) selection and processing of non-rectangular regions of interest; (8) returning of results with sub-pixel accuracy; (9) image annotation; (10) automation of routines with powerful scripting; and (11) “Collection” for visually tracking and managing images.
MIL is a high-level ‘C’ library with commands for image processing, pattern matching, blob analysis, gauging/measurement and OCR, as well as image acquisition, transfer, and display. MIL has been designed to fully exploit the power of Intel MMX™ technology. The MIL software allows more flexibility for image processing/analysis than does the Inspector software.
Object Shape and Image Capture Methods
Two basic types of objects were photographed: flat objects and three-dimensional objects. Flat objects could be photographed as is, but special attention had to be paid to the three-dimensional objects, which included all headplates and some celts. Headplates and celts required different imaging approaches. Because the celts were merely tall in relation to the focal distance of the camera, it was sufficient to increase the f-stop to between 16 and 32, thereby increasing the depth of field. The resulting digital image could then proceed immediately to the image processing stage.
Headplates required a somewhat more complicated technique. In order to best display the imagery on all objects, we needed a straight-on photograph. Since headplates were curved in at least one dimension, and often in two, accomplishing this with one photograph was not possible. To obtain a photograph of surface patterns minimally distorted by parallax error, multiple digital photographs of each headplate were taken at different points along its curvature, perpendicular to it. The multiple photographs were then fitted together in the computer, providing a flat layout of the plate. Operationally, this was achieved by keeping the digital camera in one position and rotating the headplate approximately about its center of curvature on a styrofoam support cut to the form of the item, so as to orient the desired section of the object parallel to the focal plane of the camera. The support ensured that the plate remained a constant distance from the camera as each photograph of the series was taken, which retained the scale of the image from photograph to photograph. Photographs of a series were taken so as to ensure a few centimeters of overlap of undistorted image between them. This required an average of four photographs for each side of a headplate; some plates required as few as two, while others required eight per side.
The multiple photographs of a series were spliced together to create a flat layout using the image scaling, rotating, skewing, and stretching capabilities of Adobe Photoshop. The overall name for this procedure of taking multiple photographs and fitting them together is photomosaicing.
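In this project the splicing was done by eye in Photoshop. Purely as an illustration of how the overlap alignment between two adjacent frames of a series could be estimated automatically, the sketch below uses phase correlation, assuming the overlap region differs mainly by a translation (approximately true here, since scale was held constant from frame to frame); this is not a method the project employed:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the (dy, dx) shift that, applied to frame b with np.roll,
    best aligns it with frame a.  Works via the cross-power spectrum:
    a pure translation shows up as a phase ramp in the Fourier domain."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    R = Fa * np.conj(Fb)
    R /= np.abs(R) + 1e-9          # keep only the phase information
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular offsets into a signed range.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

In practice one would run this on the grayscale overlap strips of two neighboring frames and then composite them at the recovered offset.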
Ultra-high resolution color digital photographs made with the above-described system and procedures were taken of 219 sides of Ohio Hopewellian copper items bearing artwork or thought to bear artwork. Of these, 122 sides were of breastplates, 22 sides were of headplates, and 75 were of celts. The items come from a diversity of sites (11) dispersed over south-central to northeast Ohio and represent a range of natural and archaeological preservation processes. In total, the items also bear the full range of kinds of inorganic and organic materials known to occur on Ohio Hopewellian copper items (see Chapters 4 and 5). The sample obtained is approximately 10% larger than that originally proposed.
Near-infrared and midrange infrared digital photographs made with the above- described system and procedures were each taken of 263 sides of the copper items, including all of the 219 sides photographed in color. The greater breadth of the infrared sample reflects the decision to photograph more than the proposed number of item-sides that do not seem to bear much or any indications of artwork, and to explore the power of IR in revealing artwork essentially not visible to the naked eye. Examples include copper surfaces that appear largely uniform in their corrosion, copper surfaces that are entirely hidden by a uniform textile wrapping or textile-pseudomorph wrapping, and intensely burned (cremated) copper surfaces.
Digital Photographic Enhancement
Preparation of Digital Photographs for Enhancement
Each of the 219 color digital images of the copper artifacts was prepared in Adobe Photoshop for image processing. Each artifact was outlined, its background was changed to a uniform 30% grey, and a rule of standard format was placed within the photograph. Excessive background had already been cropped from the images during the image capture and storage phase of the project.
Digital Photographic Enhancement
Commonly used enhancement methods include several major classes of display and mathematical routines, including color band selection, contrast stretch histogram modification, histogram equalization, spectral analysis, band-pass filtering in the spatial and Fourier domains, boundary enhancement, and interband calculations. Different methods are designed for different tasks, such as improving image contrast, sharpening image boundaries, determining the frequencies/scales at which image intensity varies more or less, partitioning overlaid images of different scales, and removing high-frequency noise or disjunctures or low-frequency trends. Relevant overviews of how each method works are provided by Castleman (1979), Gonzalez and Wintz (1977), and Carr (1987).
For this project, contrast stretch histogram modification performed in two different ways, as well as band selection, interband calculations, and hybridizing of color and infrared bands, were used to enhance color and infrared digital images.
Contrast Enhancement. Black and white digital images, or a single color band within a color image or a single infrared band, typically consist of an array of pixels, each with a value between a minimum of 0 and a maximum of 255. This number represents the brightness value assigned to a given pixel – the higher the number, the darker the brightness value. Brightness values of the color or infrared bands of digital photographs commonly are limited to some segment of the 0 – 255 range. Since the human eye is readily able to distinguish only 7 – 10 brightness values, this limitation makes the band appear more uniform and lacking in contrast. To overcome this problem, a black and white, color, or infrared band can be altered to take advantage of the full 0 – 255 range of brightness levels. A contrast stretch expands the limited range of brightness values of a band over the entire 0 – 255 range, making the differences between adjacent brightness values greater and more readily apparent to the naked eye. In other words, image contrast is improved.
Color Contrast Enhancement. For this project, the contrast of each of the 219 color RGB images was improved in two ways: a total histogram stretch, in which the combined histogram of all three bands of the RGB image was stretched at the same time, and an individual color band histogram stretch, in which the histogram of each color band was stretched separately. It was desirable to do it both ways for two reasons. First, the total histogram stretch enhanced contrast but maintained the color balance of the original photograph. This was found helpful in identifying regions and various substances in the image relative to the original artifact, by color. Second, the individual band histogram stretch served to more equally balance the three colors relative to each other. This produced a sometimes dramatic and unnatural shift in the color of the image – the reason being that many of the objects are primarily green in color. The individual stretch de-emphasizes this green dominance and allows the other bands to show through. This may reveal aspects of the object that differ in their red or blue channel signatures and that cannot be seen in an image with normal color balance.
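The two stretch procedures can be sketched in NumPy as follows (an illustration of the arithmetic, not the Photoshop routines actually used; the key difference is whether one shared minimum/maximum or three per-channel ones drive the rescaling):

```python
import numpy as np

def stretch_band(band):
    """Linearly expand an array's brightness values to span the full 0-255 range."""
    b = band.astype(float)
    lo, hi = b.min(), b.max()
    if hi == lo:
        return np.zeros_like(band, dtype=np.uint8)
    return np.round((b - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def total_stretch(rgb):
    """Total histogram stretch: one shared min/max for all three channels,
    so contrast improves but the original color balance is preserved."""
    return stretch_band(rgb)

def individual_stretch(rgb):
    """Individual band stretch: each channel rescaled separately, which
    rebalances the colors and de-emphasizes a dominant hue such as green."""
    return np.dstack([stretch_band(rgb[..., i]) for i in range(3)])
```

After an individual stretch, a channel with a narrow original range (e.g. red on a heavily green-corroded object) gains as much contrast as the dominant channel, which is exactly the rebalancing effect described above.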
A third contrast-changing routine – histogram equalization – proved, early in the grant research period, to be seldom effective in revealing artistic representations, so it was not applied to most of the digital photographs taken.
Infrared Contrast and Sharpness Enhancement. For both near-infrared and midrange-infrared digital images of all 263 item-sides photographed, contrast was improved by individual band stretches. The two infrared images of an object were not combined into two channels of a single image and stretched together as a whole. Image contrast was optimized for an infrared band when it was sharpened with the sharpening filter in Adobe Photoshop before stretching it. When stretching was done before sharpening, image contrast was not improved quite as much, although the differences in results between these two ordered procedures are small.
Color Band Calculations. Band calculations are essentially mathematical operations that use the numerical brightness value of each hue channel of each pixel in an image. Two selected bands from the same image (or the bands or total RGB response of two separate images) can be added, subtracted, multiplied, or otherwise operated on to create a new image. Matrix algebra applied to matrices of pixel brightness values of various bands of a photograph and/or its total RGB response is used to accomplish the operations. For example, in a Red x Blue calculation, a matrix comprised of each pixel’s Red brightness value is multiplied by a matrix comprised of each pixel’s Blue brightness value. For each pixel, the resulting calculated number can fall outside of the 0 – 255 brightness scale range, causing extreme values to be lumped with either 0 (solid white) or 255 (solid black). The resulting image is usually of low contrast, so it is then desirable to perform a contrast stretch on it, rescaling all pixel values between 0 and 255.
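A minimal sketch of a band product such as R x B or R x B-inverse follows (illustrative only; by doing the arithmetic in floating point before the restretch, it sidesteps the clipping of out-of-range values described above, which occurs when the calculation is confined to 8-bit storage):

```python
import numpy as np

def restretch(x):
    """Rescale arbitrary calculated values back onto the 0-255 scale."""
    x = x.astype(float)
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.zeros_like(x, dtype=np.uint8)
    return np.round((x - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def band_product(b1, b2, invert1=False, invert2=False):
    """Multiply two bands (optionally inverted, e.g. R x B-inverse) pixel by
    pixel in floating point, then contrast-stretch the result to 0-255."""
    a = (255.0 - b1) if invert1 else b1.astype(float)
    b = (255.0 - b2) if invert2 else b2.astype(float)
    return restretch(a * b)
```

A band inverse is simply 255 minus each brightness value, so `band_product(R, B, invert2=True)` corresponds to the R x B-inverse calculation discussed below.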
Because there are three bands in each image, and many different calculations, the potential existed in this research project for a very large number of images to be created. However, it was determined through trial-and-error observation that many potential combinations of bands and calculations were not of much value for improving the visibility of artistic compositions on the copper artifacts. This determination was broken into two parts: selection of which of the three bands to use and selection of which calculations to use.
Color Band Selection. The choice of which color bands to investigate most intensively in calculations was initially based on a priori reasoning about how coloration formed on the objects. In general, the green band was not often used, because it was thought that the green color was mostly the result of natural copper corrosion, and that natural corrosion would mask any pigment found on the object. Conversely, it was argued that the red and blue bands would carry less information on natural corrosion and more information on any pigments applied to the object, consequently improving the visibility of artistic paintings on the objects. This logic turned out, by the end of the project, to be only somewhat true, because the copper objects were commonly patinated rather than painted, and some green corrosion was intentionally produced as part of artistic compositions, along with corrosion minerals of other colors.
A second factor considered in determining which bands to use was which bands displayed the greatest differences from each other in the spatial layout of their pixel brightness values. For example, two bands that looked very similar in layout would yield very little new information when combined through band calculation, even after a contrast stretch. Conversely, combining two bands that looked very different in their spatial layout of pixel brightness values could yield a great deal of new information.
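In the project this "looks similar" judgment was made by eye; one way it could be made quantitative (an illustration, not a procedure the project used) is the Pearson correlation between two bands' pixel values:

```python
import numpy as np

def band_similarity(b1, b2):
    """Pearson correlation of two bands' pixel values.  A magnitude near 1
    (including -1, i.e. one band resembling the other's inverse) suggests a
    calculation combining the bands will reveal little that is new; a value
    near 0 suggests the bands carry complementary spatial information."""
    v1 = b1.ravel().astype(float)
    v2 = b2.ravel().astype(float)
    return float(np.corrcoef(v1, v2)[0, 1])
```

Band pairs with low-magnitude similarity would then be the more promising candidates for the calculations described above.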
A quantitative assessment of the relative effectiveness of the three color bands in revealing material differences and imagery on the copper artifacts is made below in the sections authored by Jeff Barron.
Calculation Selection. For each of the 438 digital images whose contrast was increased by the total histogram stretch or individual color band histogram stretch procedures described above, seven calculations using two or three color bands were selected for study: (1) R x B, (2) R x B-inverse, (3) R-inverse x B, (4) R-inverse x B-inverse, (5) B x G, (6) B-inverse x G-inverse, and (7) B – G. After calculation, the contrast of the resulting grey-scale image was then improved again with a histogram stretch. These seven calculations were selected because they usually resulted in distinct and diverse image representations that brought out different features of a given copper artifact. Band calculations that involved red and/or blue and/or their inverses were generally the most informative.
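The seven calculations can be enumerated and applied in one pass, sketched below in NumPy (an illustration of the arithmetic only; the project performed these operations with its own imaging software):

```python
import numpy as np

def restretch(x):
    """Contrast-stretch arbitrary calculated values back onto 0-255."""
    x = x.astype(float)
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.zeros_like(x, dtype=np.uint8)
    return np.round((x - lo) * 255.0 / (hi - lo)).astype(np.uint8)

inv = lambda b: 255.0 - b  # band inverse

# The seven calculations enumerated in the text.
CALCS = {
    "R x B":         lambda R, G, B: R * B,
    "R x B-inv":     lambda R, G, B: R * inv(B),
    "R-inv x B":     lambda R, G, B: inv(R) * B,
    "R-inv x B-inv": lambda R, G, B: inv(R) * inv(B),
    "B x G":         lambda R, G, B: B * G,
    "B-inv x G-inv": lambda R, G, B: inv(B) * inv(G),
    "B - G":         lambda R, G, B: B - G,
}

def run_calculations(rgb):
    """Apply all seven calculations to an RGB image, restretching each result
    into a grey-scale 0-255 image, as described in the text."""
    R, G, B = (rgb[..., i].astype(float) for i in range(3))
    return {name: f(R, G, B) for name, f in CALCS.items()} | {}  # raw values
```

Each raw result would then be passed through `restretch` to yield the final grey-scale image; doing the calculations in floating point keeps intermediate values from being clipped at 0 or 255.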
For some specimens, some calculations produced images that were largely black or largely white, even after a histogram stretch. This resulted from the aforementioned phenomenon of calculated pixel values lying outside the 0-255 range of values, and in these cases, large numbers of pixels being collapsed into the 0 or 255 category. This condition arose idiosyncratically, depending on the character of the image, and most frequently with the B – G band. This effect did not alter our impression of the general importance of all seven band calculations just enumerated.
Hybridizing Color and Infrared Digital Photographs
Hybrid color-infrared images were created by replacing the R, G, and/or B channels of a digital photograph with a near-infrared band and/or midrange-infrared band. Generally, one or two calculated color bands were also substituted for R, G, or B channels in these hybrid images, in order to bring in more color information. The various hybrid images were then explored for their relative effectiveness in making visible the artistic representations on the copper artifacts.
In order to produce color and infrared hybrid images, it was first necessary to rescale and register the color, near-infrared, and midrange infrared images, each of which was captured with a different camera system. The following two sections describe the rescaling and registration methods that were used.
For each color digital image, RGB channels were imported into Adobe Photoshop 5.5, and a standardized processing method was followed. Most excess background was cropped from the imagery, in order to reduce the file size as well as to facilitate the registration process. The cropped images were then reduced to approximately 2400 pixels along the x-axis, with the y-axis pixels constrained in proportion to the x-axis reduction. All reduction and expansion of the imagery was done within Photoshop using its bicubic interpolation algorithm. Images with fewer than 2400 pixels were not expanded to this size but left at their smaller original size. The target of 2400 pixels was determined by the necessity of reducing the much larger RGB images to a size similar to the IR images prior to registration with them, balanced against the project's need to produce large, high-resolution imagery. The cropped RGB images were, on average, approximately five times larger than the cropped IR images; this drastic size difference made for a difficult registration process and required a reduction in the size of the RGB images prior to their registration with the IRs. The cropped and interpolated RGB images were then split into their three respective channels using the split channels function in Photoshop, and exported to the GIS program IDRISI 32.
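The proportional reduction step can be sketched with the Pillow library (illustrative only; the project used Photoshop's bicubic resampling, which Pillow also provides):

```python
from PIL import Image

# Compatibility with both older and newer Pillow constant locations.
BICUBIC = getattr(Image, "Resampling", Image).BICUBIC

def reduce_to_width(img, target_w=2400):
    """Bicubically reduce an image to ~2400 px along the x-axis, with the
    y-axis constrained in proportion; images already at or below the target
    width are left at their original size, as in the text."""
    if img.width <= target_w:
        return img
    target_h = round(img.height * target_w / img.width)
    return img.resize((target_w, target_h), BICUBIC)
```

Bicubic interpolation weighs a 4 x 4 pixel neighborhood, which is why it was preferred here over nearest-neighbor or bilinear resampling for smooth results.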
The IR imagery was similarly cropped to reduce excess background and, in some cases, rotated to match the RGB orientation. The IR imagery was then doubled in size using the bicubic interpolation algorithm in Photoshop. This expansion was done to reduce the size disparity between the IR images and the much larger RGB images, in the hope that this early interpolation would make the registration process easier. Later analysis indicated that this early interpolation was not necessary and that registration would have been as good or better from the original cropped IR imagery. The original quality of the IR imagery was poor, and the expansion during interpolation degraded it further. In an effort to improve the imagery and the resulting registration, an unsharp mask filter was applied to all IR imagery, with a 200% increase in pixel contrast, a radius of 2 for filter width, and a threshold of 0. The unsharp mask filter improved both edge and internal boundaries. The IR imagery was then exported to the GIS program IDRISI 32.
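The same three unsharp mask parameters (amount, radius, threshold) appear in Pillow's filter, so the step can be sketched as follows (an illustration; the project applied the filter in Photoshop):

```python
from PIL import Image, ImageFilter

def sharpen_ir(img):
    """Unsharp mask with the settings given in the text: 200% amount,
    radius of 2 for filter width, and threshold of 0 (sharpen everywhere)."""
    return img.filter(ImageFilter.UnsharpMask(radius=2, percent=200, threshold=0))
```

A threshold of 0 means every pixel is eligible for sharpening, which suits low-contrast IR imagery where even faint boundaries should be reinforced.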
The preprocessed imagery was imported into IDRISI 32, a GIS developed by Clark University, using IDRISI’s standard bitmap import module. Since the imagery had been preprocessed in Photoshop, the registration process commenced immediately after importation. The IR imagery was registered into the planar coordinate system of the RGB images. The R, G, and B channels of the color imagery had simply been split from a single image and thus retained the same coordinates, requiring no registration.
A mapping function was required for the transformation of the IRs into the RGB’s coordinate system. IDRISI supports three orders of polynomial fit: (1) linear, (2) quadratic and (3) cubic. Linear, a first order polynomial, requires a minimum of 3 registration points to create the plane necessary for transformation. Quadratic, a second order polynomial, requires 6. Cubic, a third order, requires a minimum of 10 registration points. The lowest order polynomial that provides a reasonable fit should be utilized, since the higher the order of the polynomial the greater the distortion due to misaligned registration points. Because the majority of the images are flat or nearly flat, a simple linear transformation appeared to be the easiest as well as the best method for registering the IRs to the RGBs. In some cases a linear transformation would not suffice and a second order quadratic fit was required. This higher order transformation was necessary only in those images that possessed a substantial curvature (headplates and some celts).
The registration process in IDRISI requires the location of common points between both the base and the transformed image. These common points are recorded and entered as a correspondence file; the GIS then creates a polynomial equation to describe the spatial mapping of the transformed data. In order to evaluate a particular registration, it is necessary to locate at least double the minimum registration points required for its order of fit. Since a linear transformation was chosen for the majority of the plates, a minimum of 6 points was necessary to truly evaluate the quality of the registration. The transformation process is referred to as rubber sheeting because, like a piece of rubber, the transformed image is pulled and stretched to fit the new coordinate system. Because the transformed image can become distorted during this process by both a lack of good correspondence between registration points and a lack of registration points in certain portions of the image, a specific registration process was followed. In order to reduce these problems, an average of 15 registration points was used for the linear transformations, with the points separated into quadrants across the imagery. This placed at least three registration points in each of the four quadrants, with an additional three down the central axis of the piece. This process allowed for a reduction in the RMS (root mean square) error that would have been difficult to achieve using normal registration techniques. IDRISI calculates the allowable RMS error for each point and the total RMS error for the transformation equation, allowing for the exclusion of points that are deemed too erroneous. RMS error indicates, in ground units, how far the registration points may lie from their actual image locations.
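The first-order polynomial fit and the RMS error that IDRISI reports can be illustrated with a least-squares sketch (this mimics the computation from the control points; it is not IDRISI code):

```python
import numpy as np

def fit_first_order(src_pts, dst_pts):
    """Least-squares first-order (affine) polynomial fit mapping control
    points in the image to be transformed (src) onto the base image (dst).
    Returns the 3x2 coefficient matrix, the per-point residual distances,
    and the total RMS error of the fit."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])      # design rows: [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # x' = ax+by+c, y' = dx+ey+f
    residuals = np.linalg.norm(A @ coeffs - dst, axis=1)
    rms = float(np.sqrt(np.mean(residuals ** 2)))
    return coeffs, residuals, rms
```

Points whose residuals are unacceptably large can be excluded and the fit repeated, which parallels the point-exclusion step in the IDRISI workflow described above.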
A goal of 1.5 RMS or less was observed for the project – slightly higher than the normal acceptance range – because of the poor quality and the sizable expansion of the IR images compared to the RGB images. Registration point exclusion was limited, where possible, to one point per quadrant. This allowed for a better boundary fit between the IR and RGB images.
Evaluation of Bands, Band Calculations, and Hybrid Combination Procedures
for Their Effectiveness in Revealing
Artistic Compositions on Hopewell Copper Artifacts
It would be erroneous to state that any of these bands are always helpful or unhelpful, or more or less effective than others, in revealing the artistic compositions.
The compositions vary widely in the kinds and diversity of surface materials that they bear; thus, different bands are effective in different material circumstances. However, considering the corpus of 219 copper items studied, in general, the viewer’s ability to see artistic compositions in digital imagery was found to decrease from the Red to the Blue to the Green bands, and from NIR to MIR bands. Considering color band calculations, better to worse viewing conditions were found for: R x B-inverse and R-inverse x B-inverse, to B x G-inverse and B-inverse x G-inverse, to R x B, to B x R-inverse and B – G. R x B images generally looked very similar to the original RGB image and provided little new information, whereas other bands often brought out features less visible or not visible in the RGB image. R x B-inverse usually looked very similar to B x G-inverse, with the latter usually providing somewhat less contrast. Analogously, R-inverse x B-inverse usually looked very similar to B-inverse x G-inverse, with the latter usually providing somewhat less contrast. The B – G band generally was grainy, and sometimes could not be contrast-stretched to produce an image that was not overly dark and undiscriminating. In general, the Red and NIR bands, and these combined with other colors, yielded the greatest visibility of the artistic compositions.
With IDRISI, it is possible to combine any three bands or band calculations together to produce a hybrid, false-color digital image. It is also possible to classify or group pixels with various algorithms and to map “like” and “unlike” pixels of a group in varying color palettes. Exploration of these operations for a sample of 17 artistic compositions yielded the following conclusions. (1) An artistic composition was most effectively rendered when the bands used for making a hybridized, composite image were carefully selected by examining each band individually for its relative effectiveness. No one or few band combinations were always or usually effective across artistic compositions. If only one or two bands were individually effective in revealing an artistic composition, introducing a third, suboptimal band into the composite image usually degraded the image, as expected. In other words, diversity of band information in a composite image did not necessarily guarantee better renditions of an art work. (2) One band might be effective for discriminating one feature of an artistic composition, while other bands might be effective for discriminating other features. To effectively render an entire art work in such cases required selecting bands that differed in the artistic features that they brought out. (3) Hybridized, composite images were most effective for displaying artistic works when the most discriminating band was mapped to the R channel, the next-most discriminating band to the G channel, and the least discriminating band to the B channel. (4) The visibility of artistic compositions in composite hybrid images generally could be improved with the use of supervised or unsupervised clustering routines to classify pixels, or with manually selected palette redefinition of pixel classes defined by cluster analysis.
Manually selected palette redefinition of pixels within a single band, which involves dividing the 0 – 255 brightness continuum optimally in some way to bring out features of an art work, also worked well. (5) In general, unsupervised clustering provided better renderings of an artistic composition than supervised clustering when the “logical block areas” within the composition (e.g., a person’s head) were spiky or grainy. (6) In general, cluster analysis of a composite image was found more effective in revealing an artistic composition than palette redefinition of a single-band image, with manually selected boundaries between assigned colors, when “logical block areas” within the composition were spiky and grainy. When “logical block areas” were fairly homogeneous, palette redefinition of a single-band image was found more useful, and this tended to be the case for most of the art works that were examined. (7) Palette blending was found helpful in rendering heterogeneous areas of a “logical block” of an image, whereas palettes defined with crisp threshold levels were found helpful in rendering homogeneous areas of a “logical block” of an image. (8) Optimal color palettes for displaying artworks were found to be cool and neutralized blue and yellow colors. Primary colors produced too jolty an effect. (9) When the foreground image in an artistic composition was relatively well defined and not grainy, its visibility could be increased by making it into a silhouette using palette redefinition procedures with crisp threshold levels. (10) Positive, dark-on-light silhouettes, as one would normally see in the real world, were found to be more effective than negative, light-on-dark silhouettes. (11) Grey-scale images were found to be more effective than color images.
(12) When the foreground image in an artistic composition was relatively well defined and had multiple “logical blocks” of features within it, the foreground was best rendered with a blended palette and the background with one flat color. (13) A foreground figure could be further pulled out from its background by making a frame/matt around the art work in the same color as the mid tones to darkest tones of the figure, as in picture matting theory. (14) Principal components classification of pixels was not found useful when one or two individual bands provided good definition of an art work. As might be expected, it was found more helpful when only bands of poor to moderate definition were available.
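The unsupervised pixel classification referred to in points (4) through (6) can be sketched with a plain k-means clustering of pixels across the stacked bands (an illustration of the general technique, not the specific algorithms IDRISI provides):

```python
import numpy as np

def kmeans_classify(bands, k=2, iters=20):
    """Unsupervised classification of the pixels of a multi-band image into
    k classes with plain k-means.  `bands` is an (h, w, n) stack of n bands;
    returns an (h, w) array of class labels."""
    h, w, n = bands.shape
    X = bands.reshape(-1, n).astype(float)
    # Deterministic greedy farthest-point initialization of the k centers.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels.reshape(h, w)
```

Each resulting class could then be assigned a palette color – blended for heterogeneous "logical blocks," crisply thresholded for homogeneous ones – in the manner points (6) and (7) describe.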