Texture Classification via Area-Scale Analysis of Raking Light Images

Andrew G. Klein
Dept. of Engineering and Design
Western Washington University
Bellingham, WA 98225
Contact email: [email protected]

Anh H. Do and Christopher A. Brown
Dept. of Mechanical Engineering
Worcester Polytechnic Institute
Worcester, MA 01609

Philip Klausmeyer
Worcester Art Museum
55 Salisbury St.
Worcester, MA 01609

Abstract—An image processing algorithm for photographic and inkjet paper texture classification is developed based on area-scale fractal analysis. This analysis has been applied in surface metrology, and relies on the fact that the measured area of a surface depends on the scale of observation. By comparing relative areas at various scales, the technique computes a measure of the topological similarity of two surfaces. Results show the algorithm is successful in detecting affinities among similarity groupings within a dataset of silver gelatin photographic papers and a dataset of inkjet papers.

I. INTRODUCTION

Being able to identify the manufacturer and type of paper on which a given photograph or inkjet print is made is a question of considerable interest to art historians and paper conservators. Knowledge of the specific type of paper used in a given print can assist with attribution and verifying authenticity when prints have uncertain provenance. Current approaches to photographic and inkjet paper identification rely on experts trained in paper conservation who inspect a variety of paper features such as the surface texture, thickness, and gloss. Due to the large number of different photographic and inkjet papers in existence, however, visually identifying the paper used for a given photographic print is an immense challenge, and automated or semi-automated approaches to photographic paper classification are desirable [1].

Recently, a reference collection of known silver gelatin photographic papers, all identified by manufacturer, brand, and date, has been assembled [2]. In parallel, a similar reference collection of inkjet papers has been assembled [3]. Digital photomicrographs of the surfaces of papers in both of these reference collections were acquired using a “raking light”, a linear light source held at an oblique angle to the surface. Raking light is widely used by art conservators in the examination of art works since it enhances highlights and shadows so that surface features are more clearly visible during image capture. The presence of these two reference databases of raking light images brings about the possibility of automated image-processing-based approaches: by comparing raking light images in the known database with those taken from a print made on unknown paper, it is possible that photographic and inkjet papers can be identified in an automated fashion.

As part of the Historic Photographic Paper Classification (HPPC) challenge [4], we and several other independent teams of researchers whose papers also appear in these conference proceedings [5]–[7] have developed image-processing-based approaches to perform automated paper classification. Several datasets of raking light images were assembled, consisting of a mix of known matches and known non-matches taken from both the Paul Messier Historic Photographic Papers Collection [2] and The Wilhelm Analog and Digital Color Print Materials Reference Collection [3], and these datasets were distributed to all teams for classification.

In this paper, we describe our specific technical approach to paper texture classification using raking light images, and we report on its performance when used to classify actual photographic and inkjet prints.

II. TECHNICAL APPROACH

A. Feature Extraction via Area-Scale Fractal Analysis

Image-processing-based texture classification is a topic which has seen substantial research interest over the past several decades across a wide range of applications (see [8]–[10] and references therein for a survey of approaches). While filter banks, wavelets, and textons have been quite popular for feature extraction in modern texture classification approaches (and indeed, several of the teams in the HPPC challenge have adopted approaches based on wavelets [6] and textons [4]), we draw upon techniques from the field of surface metrology to leverage the unique features of raking light images. In particular, we rely on area-scale fractal analysis [11], a technique that has been applied to a diverse range of problems in surface metrology including, for example, analyzing the roughness of machined surfaces [12], characterizing food surfaces [13], and analyzing tooth wear in hominid fossils [14].

It is well known that the measured length of a coastline depends on the scale of observation [15] since, as we zoom in, more detail is captured in the length measurement. Similarly, the measured area of a surface depends on the scale of observation. The area-scale approach [11] uses fractal analysis to decompose a surface into a patchwork of triangles of varying sizes corresponding to varying scales of observation. Such a triangular patchwork decomposition of a surface is shown in Fig. 1, where the nominal area is the area of the flat region projected underneath the surface, and the measured area is the sum of the areas of the individual triangles.

Fig. 1. Triangular Patchwork Decomposition of a Surface

As the size of the triangles in the patchwork is decreased (i.e., as we zoom in to finer scales), more surface features are captured in the measurement of the surface area. We define the relative area of a surface at a given observation scale as the ratio of the measured area to the nominal area, i.e.

relative area = measured area / nominal area.    (1)

Note that the nominal area does not change with the scale of observation, and the relative area is always greater than or equal to 1. As the size of the triangles increases, smaller surface features become less resolvable and the relative area of the surface decreases, eventually approaching 1; an example of this effect is shown in Fig. 2.

Area-scale fractal analysis relies on the fact that the curve of relative area versus scale of observation can be used to characterize a surface texture [12]–[14].

Fig. 2. Relative Area Decreases with Increasing Scale

As such, we consider the use of area-scale fractal analysis for feature extraction in the classification of paper textures.

B. Pseudo Area-Scale Applied to Raking Light Images

Area-scale fractal analysis relies on the availability of direct surface measurements, typically taken with a confocal laser scanning microscope (CLSM) or a scanning tunneling microscope (STM). Here, however, the provided datasets consist of microphotographs acquired with an optical microscope illuminated by a raking light [1], and direct surface measurements are not available. As such, we adopt greyscale illumination intensity as a proxy for surface height, by assuming that surface heights are roughly proportional to surface illumination. That is, we make the assumption that brighter greyscale intensities correspond to higher surface heights, while darker greyscale intensities correspond to lower surface heights.

The “scale” in this context is a measure of the size of the right triangles used for the triangular patchwork decomposition, and represents the length in pixels of each of the two non-hypotenuse sides of the triangles when projected into the 2-D plane (i.e., ignoring the “pseudo-height” of each pixel as given by the illumination). As such, at a scale of s, the 2-D projected patchwork consists of right triangles having sides equal in length to s pixels, so the length of the hypotenuse is equal to √2·s pixels, and the nominal area of a single triangle is equal to s²/2 square pixels. Because larger values of s correspond to “zooming out” or analyzing the image at coarser scales, the image effectively gets downsampled by larger and larger factors of s in both the horizontal and vertical directions as the scale increases. At a given scale s, there are s possible downsampling phases, yielding s different relative areas. As such, we average over these s phases so as to use all of the information in the image.

Fig. 3. Example images and their relative area vs. scale curves: (a) constant intensity, (b) random i.i.d. intensities, (c) sum of two sinusoids (one in x, one in y)

Once the pseudo-height information is taken into account, the area of each triangle in the patchwork is larger due to “stretching” into the third dimension. If the three vertices of the first triangle in 3-D space are given by (0, 0, h1), (0, s, h2), and (s, 0, h3), for example, where these vertices have pseudo-heights (or greyscale intensities) h1, h2, and h3, respectively, then the surface area of this triangle in 3-space can be shown to be

(1/2)·s·√((h1 − h2)² + (h1 − h3)² + s²),

which, again, corresponds to a nominal area of s²/2 since the 2-D projected triangle has vertices (0, 0), (0, s), and (s, 0). Using the greyscale illumination as a proxy for the height of each pixel, then, we can sum the measured areas of all of the decomposed triangular elements of the patchwork to compute the measured area, and ultimately the relative area from equation (1).

In Fig. 3 we show several synthetic but illustrative image surfaces, as well as the relative area versus scale curve for each of these surfaces. As expected, an image that has a constant intensity (i.e., when the texture is perfectly flat) has a relative area equal to 1 at all scales, since the measured area always equals the nominal area. For a random i.i.d. surface, which is highly faceted, the relative area is very high at small scales because all of the minute surface features are resolvable, whereas the image looks “flatter” at larger scales and eventually approaches a relative area of 1. In Fig. 3c we show the relative area versus scale curve for the sum of two sinusoids having period 100; in this case, we see that the relative area is 1 at every multiple of the period since the image looks perfectly flat when downsampled at scales s which are a multiple of the period.

These examples also demonstrate that the relative area as a function of scale is very different for different types of textures, which motivates area-scale fractal analysis as a choice for texture feature extraction. Since all textures have the property that the relative area approaches 1 as the scale grows significantly large, there is very little “information” at scales that exceed the size of the surface features in the underlying texture. Consequently, the choice of which scales are relevant for feature extraction depends heavily on the types of textures under investigation.

C. Summary of Approach

We now summarize the technical approach, including the algorithm parameters that were used, as well as the choice of distance metric. These parameters were determined through experimentation with the provided training dataset.

1) Preprocessing. All images were provided as 2080×1536 color images, representing 1.35×1.00 cm² of surface area. As a first step, we extract a 1024×1024 portion from the center, where the lighting is generally most consistent. Then, we convert the color image to greyscale and normalize the intensity to equalize the brightness of all images.
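The preprocessing step can be sketched as follows. This is an illustration under stated assumptions, not the authors' code: Rec. 601 luma weights are assumed for the greyscale conversion, and zero-mean/unit-variance standardization is assumed as the brightness normalization, since the paper does not specify either.

```python
import numpy as np

def preprocess(rgb):
    """Center-crop to 1024x1024, convert to greyscale, and normalize.

    Assumptions (not specified in the paper): Rec. 601 luma weights for
    greyscale conversion, and zero-mean/unit-variance standardization
    for brightness normalization.
    """
    rows, cols, _ = rgb.shape
    r0, c0 = (rows - 1024) // 2, (cols - 1024) // 2
    crop = rgb[r0:r0 + 1024, c0:c0 + 1024, :].astype(float)

    # Greyscale via luma weighting of the R, G, B channels.
    grey = 0.299 * crop[..., 0] + 0.587 * crop[..., 1] + 0.114 * crop[..., 2]

    # Equalize brightness across images by standardizing intensities.
    return (grey - grey.mean()) / grey.std()
```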

2) Feature Extraction. We perform area-scale fractal analysis on the preprocessed photomicrographs by using light intensity as a proxy for height. Through experimentation, we found that scales larger than 34 pixels (corresponding to lengths of 0.22 mm) were not useful for classification since their inclusion did not change our results. We chose 8 logarithmically spaced scales between 1 and 34 pixels, which corresponded with the Fibonacci series, S = {1, 2, 3, 5, 8, 13, 21, 34}. The 1024×1024 grid of equally spaced points (representing pixel locations) is decomposed into a patchwork of 2((N−1)/s)² isosceles right triangles (with N = 1024) at each scale s ∈ S. The area of each triangle in 3-D space is then computed and the areas of all triangular regions are summed, resulting in the relative area A_s at the chosen scale s. To conduct feature extraction, then, the relative area for an image is computed over 8 scale values ranging from 1 pixel to 34 pixels, which correspond to lengths of 6.5 µm to 0.22 mm, respectively.
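The scale-to-length bookkeeping in the feature extraction step can be reproduced with a short sketch; the pixel pitch of 1.35 cm spanned by 2080 pixels (≈ 6.5 µm per pixel) comes from the preprocessing step, and the triangle count uses N = 1024.

```python
# Scales (in pixels) chosen above; they coincide with the Fibonacci series.
S = [1, 2, 3, 5, 8, 13, 21, 34]

# Pixel pitch: 2080 pixels span 1.35 cm of paper.
PIXEL_PITCH_MM = 13.5 / 2080            # ~0.0065 mm = 6.5 um per pixel

# Physical length of each scale: 6.5 um (s=1) up to ~0.22 mm (s=34).
lengths_mm = [s * PIXEL_PITCH_MM for s in S]

# Number of isosceles right triangles in the patchwork at each scale,
# 2*((N-1)//s)^2 with N = 1024 grid points per side.
N = 1024
tri_counts = [2 * ((N - 1) // s) ** 2 for s in S]
```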

3) Pairwise Distance Computation. The topological similarity of two surfaces is computed by comparing the relative areas of two images at each scale. To classify and compare the similarity of two images i and j, a χ²-distance measure d(i, j) is computed via

d(i, j) = Σ_{s ∈ S} (A_s^(i) − A_s^(j))² / (A_s^(i) + A_s^(j)),

where A_s^(i) is the relative area of image i at scale s and S is the set of chosen scale values. Small values of d(i, j) indicate high similarity between images i and j, while large values of d indicate low similarity.
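The distance above reduces to a few lines of code; this sketch (function name ours) takes two feature vectors of per-scale relative areas. Note that since relative areas are always at least 1, the denominator never vanishes.

```python
def chi2_distance(feat_i, feat_j):
    """Chi-squared-style distance between two relative-area feature
    vectors, one entry per scale s in S.

    Relative areas are >= 1, so each denominator (a + b) is >= 2 and
    the distance is always well defined.
    """
    return sum((a - b) ** 2 / (a + b) for a, b in zip(feat_i, feat_j))
```

The measure is symmetric and zero exactly when the two feature vectors coincide.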

The algorithm was written in Octave-compatible MATLAB code, and is freely available [16].

III. NUMERICAL RESULTS

A. Datasets

The first dataset consists of 120 silver gelatin photographic paper samples selected from [2]. The manufacturer, brand, texture, reflectance, and date have been catalogued by art conservators to serve as a reference benchmark. The dataset contains three levels of similarity: (1) samples from the same sheet (3 subsets of 10 samples); (2) samples from sheets taken from the same package (3 subsets of 10 samples); (3) samples from papers made to the same manufacturer specifications over a period of time (3 subsets of 10 samples). In addition, 30 sheets of interest to art conservators, representing the diversity of silver gelatin photographic papers, are included in the database. This dataset has been described in more detail in [1], [4] and is publicly available at papertextureid.org.

The second dataset consists of 120 inkjet paper samples selected from [3]. Again, the dataset contains the same three levels of similarity as described above (i.e., same sheet, same package, same manufacturing specifications), as well as 30 additional diversity papers. This dataset has been described in more detail in [17].

B. Performance of Approach

Using the approach outlined in Section II-C, we processed the 120 images in each of the two datasets and generated pairwise distance metrics d(i, j) for all possible pairs of images. For these results, the matrix of distance metrics was then converted to a greyscale image, with the darkest intensities indicating the greatest affinity and the lightest the least affinity.
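The conversion from distance matrix to greyscale image can be sketched as a linear rescaling; the paper does not specify the mapping, so the linear choice here (and the function name) is an assumption.

```python
import numpy as np

def distances_to_grey(D):
    """Rescale a pairwise-distance matrix to 8-bit greyscale.

    Assumption: a linear map, so the smallest distance becomes 0
    (black, greatest affinity) and the largest becomes 255 (white,
    least affinity).
    """
    D = np.asarray(D, dtype=float)
    lo, hi = D.min(), D.max()
    return np.round(255.0 * (D - lo) / (hi - lo)).astype(np.uint8)
```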

Fig. 4. Silver gelatin dataset results. Top: known pairwise similarities. Bottom: predicted pairwise similarities.

The top of Fig. 4 shows known similarities for the silver gelatin dataset within the sample group, as suggested by the metadata including manufacturer, texture, brand, and date. As expected, the nine dark blocks starting in the upper left and continuing down along the diagonal show a high degree of affinity (dark gray and black), as these blocks depict the nine groups of similar textures. Lesser degrees of similarity are scattered throughout the figure, with the 30 samples selected to show diversity (poorer levels of similarity) falling in the lower right quadrant and along the right side and bottom edge. The bottom of Fig. 4 shows the performance of our algorithm. The results largely coincide with the metadata and suggest that raking light photomicrographs have sufficient texture information to support the automated classification of historic silver gelatin photographic papers.

Page 5: Texture Classification via Area-Scale Analysis of Raking ... · Texture Classification via Area-Scale Analysis of Raking Light Images Andrew G. Klein ... conservation who inspect

Fig. 5. Inkjet dataset results. Top: known pairwise similarities. Bottom: predicted pairwise similarities.

Similarly, the top of Fig. 5 shows known similarities for the inkjet dataset within the sample group suggested by the metadata. Again, the nine dark blocks starting in the upper left and continuing down along the diagonal show a high degree of affinity, and again the results largely coincide with the metadata and suggest that raking light photomicrographs have sufficient texture information to support the automated classification of inkjet papers. In addition, these results suggest that an area-scale approach yields good performance in classifying textures of photographic paper.

Future work will extend this approach to wove papers, and will investigate the connection between this triangular patchwork approach and existing wavelet-based schemes. The authors would like to thank Paul Messier, Henry Wilhelm, and the Museum of Modern Art (MoMA) for providing data for this project.

REFERENCES

[1] P. Messier and C. R. Johnson, Jr., “Texture feature extraction for the classification of photographic papers,” in Proc. Asilomar Conf. on Signals, Systems, and Computers, Nov. 2014.

[2] P. Messier. (2014) The Paul Messier historic photographic papers collection. [Online]. Available: http://paulmessier.com/pm/collection.html

[3] H. Wilhelm, C. Brower, K. Armah, and B. Stahl. (2014) The Wilhelm analog and digital color print materials reference collection. [Online]. Available: http://www.wilhelm-research.com

[4] C. R. Johnson, Jr., P. Messier, W. Sethares, A. Klein et al., “Pursuing automated classification of historic photographic papers from raking light photomicrographs,” Journal of the American Institute for Conservation, no. 3, pp. 159–170, Aug. 2014.

[5] W. A. Sethares, A. Ingle, T. Krc, and S. Wood, “Eigentextures: An SVD approach to automated paper classification,” in Proc. Asilomar Conf. on Signals, Systems, and Computers, Nov. 2014.

[6] P. Abry, S. Roux, H. Wendt, and S. Jaffard, “Hyperbolic wavelet transform for photographic paper texture characterization,” in Proc. Asilomar Conf. on Signals, Systems, and Computers, Nov. 2014.

[7] D. Picard, S. Vu, and I. Fijalkow, “Second order model deviations of local Gabor features,” in Proc. Asilomar Conf. on Signals, Systems, and Computers, Nov. 2014.

[8] J. Zhang and T. Tan, “Brief review of invariant texture analysis methods,” Pattern Recognition, vol. 35, no. 3, pp. 735–747, 2002.

[9] M. Varma, “Statistical approaches to texture classification,” Ph.D. dissertation, University of Oxford, 2004.

[10] T. Ojala, M. Pietikainen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognition, vol. 29, no. 1, pp. 51–59, 1996.

[11] C. A. Brown, P. D. Charles, W. A. Johnsen, and S. Chesters, “Fractal analysis of topographic data by the patchwork method,” Wear, vol. 161, no. 1, pp. 61–67, 1993.

[12] C. A. Brown, W. A. Johnsen, R. M. Butland, and J. Bryan, “Scale-sensitive fractal analysis of turned surfaces,” CIRP Annals-Manufacturing Technology, vol. 45, no. 1, pp. 515–518, 1996.

[13] F. Pedreschi, J. M. Aguilera, and C. A. Brown, “Characterization of food surfaces using scale-sensitive fractal analysis,” Journal of Food Process Engineering, vol. 23, no. 2, pp. 127–143, 2000.

[14] R. S. Scott, P. S. Ungar, T. S. Bergstrom, C. A. Brown, F. E. Grine, M. F. Teaford, and A. Walker, “Dental microwear texture analysis shows within-species diet variability in fossil hominins,” Nature, vol. 436, no. 7051, pp. 693–695, 2005.

[15] B. B. Mandelbrot, “How long is the coast of Britain?” Science, vol. 156, no. 3775, pp. 636–638, 1967.

[16] A. G. Klein and A. Do. (2014) MATLAB source code for computing texture difference in papers. [Online]. Available: http://aspect.engr.wwu.edu/photopaper.m

[17] P. Messier, C. R. Johnson, Jr., H. Wilhelm, W. Sethares, A. Klein et al., “Automated surface texture classification of inkjet and photographic media,” in Proc. Intl. Conf. on Digital Printing Technologies (NIP 29), Sep. 2013.

