Fractal Analysis

A Wikipedia Collection compiled by Loren Cobb

Contents

Articles
• Fractal dimension
• Fractal analysis
• Box counting
• Multifractal system
• Entropy (information theory)
• Rényi entropy
• Hausdorff dimension

References
• Article Sources and Contributors
• Image Sources, Licenses and Contributors

Article Licenses
• License


Fractal dimension

Figure 1. As the length of the measuring stick is scaled smaller, the total measured length of a coastline grows: 11.5 × 200 km = 2300 km; 28 × 100 km = 2800 km; 70 × 50 km = 3500 km.

A fractal dimension is a ratio providing a statistical index of complexity comparing how detail in a pattern (strictly speaking, a fractal pattern) changes with the scale at which it is measured. It has also been characterized as a measure of the space-filling capacity of a pattern that tells how a fractal scales differently than the space it is embedded in; a fractal dimension can exceed the topological dimension of the pattern and does not have to be an integer.[1][2][3]

The essential idea of "fractured" dimensions has a long history in mathematics, but the term itself was brought to the fore by Benoît Mandelbrot based on his 1967 paper on self-similarity in which he discussed fractional dimensions.[4]

In that paper, Mandelbrot cited previous work by Lewis Fry Richardson describing the counter-intuitive notion that a coastline's measured length changes with the length of the measuring stick used (see Fig. 1). In terms of that notion, the fractal dimension of a coastline quantifies how the number of scaled measuring sticks required to measure the coastline changes with the scale applied to the stick.[5] There are several formal mathematical definitions of fractal dimension that build on this basic concept of change in detail with change in scale.

One non-trivial example is the fractal dimension of a Koch snowflake. It has a topological dimension of 1, but it is by no means a rectifiable curve: the length of the curve between any two points on the Koch snowflake is infinite. No small piece of it is line-like; rather, it is composed of an infinite number of segments joined at different angles. The fractal dimension of a curve can be explained intuitively by thinking of a fractal line as an object too detailed to be one-dimensional, but too simple to be two-dimensional.[6]:3-4 Therefore, its dimension might best be described not by its usual topological dimension of 1 but by its fractal dimension, which in this case is a number between one and two.


Introduction

Figure 2. A 32-segment quadric fractal scaled and viewed through boxes of different sizes. The pattern illustrates self similarity. The theoretical fractal dimension for this fractal is log 32/log 8 = 1.67; its empirical fractal dimension from box counting analysis falls within ±1% of this value[7]:86 using fractal analysis software.

A fractal dimension is an index for characterizing fractal patterns or sets by quantifying their complexity as a ratio of the change in detail to the change in scale.[5]:1 Several types of fractal dimension can be measured theoretically and empirically (see Fig. 2).[8][3] Fractal dimensions are used to characterize a broad spectrum of objects ranging from the abstract[9][3] to practical phenomena, including turbulence[5]:97-104, river networks[5]:246-247, urban growth[10][11], human physiology[12][13], medicine[8], and market trends[14]. The essential idea of fractional or fractal dimensions has a long history in mathematics that can be traced back to the 1600s,[5]:19[15] but the terms fractal and fractal dimension were coined by mathematician Benoît Mandelbrot in 1975.[16][8][5][9][2][14]

Fractal dimensions were first applied as an index characterizing complex geometric forms for which the details seemed more important than the gross picture.[16] For sets describing ordinary geometric shapes, the theoretical fractal dimension equals the set's familiar Euclidean or topological dimension. Thus, it is 0 for sets describing points (0-dimensional sets); 1 for sets describing lines (1-dimensional sets having length only); 2 for sets describing surfaces (2-dimensional sets having length and width); and 3 for sets describing volumes (3-dimensional sets having length, width, and height). But this changes for fractal sets. If the theoretical fractal dimension of a set exceeds its topological dimension, the set is considered to have fractal geometry.[17]

Unlike topological dimensions, the fractal index can take non-integer values, indicating that a set fills its space qualitatively and quantitatively differently than an ordinary geometrical set does.[3][9][2] For instance, a curve with fractal dimension very near to 1, say 1.10, behaves quite like an ordinary line, but a curve with fractal dimension 1.9 winds convolutedly through space very nearly like a surface. Similarly, a surface with fractal dimension of 2.1 fills space very much like an ordinary surface, but one with a fractal dimension of 2.9 folds and flows to fill space rather nearly like a volume.[17]:48[18] This general relationship can be seen in the two images of fractal curves in Fig. 2 and Fig. 3: the 32-segment contour in Fig. 2, convoluted and space filling, has a fractal dimension of 1.67, compared to the perceptibly less complex Koch curve in Fig. 3, which has a fractal dimension of 1.26.


Figure 3. The Koch curve is a classic iterated fractal curve. It is a theoretical construct that is made by iteratively scaling a starting segment. As shown, each new segment is scaled by 1/3 into 4 new pieces laid end to end with 2 middle pieces leaning toward each other between the other two pieces, so that if they were a triangle its base would be the length of the middle piece, so that the whole new segment fits across the traditionally measured length between the endpoints of the previous segment. Whereas the animation only shows a few iterations, the theoretical curve is scaled in this way infinitely. Beyond about 6 iterations on an image this small, the detail is lost.

The relationship of an increasing fractal dimension with space-filling might be taken to mean fractal dimensions measure density, but that is not so; the two are not strictly correlated.[7] Instead, a fractal dimension measures complexity, a concept related to certain key features of fractals: self-similarity and detail or irregularity.[19] These features are evident in the two examples of fractal curves. Both are curves with topological dimension of 1, so one might hope to be able to measure their length or slope, as with ordinary lines. But we cannot do either of these things, because fractal curves have complexity in the form of self-similarity and detail that ordinary lines lack.[5] The self-similarity lies in the infinite scaling, and the detail in the defining elements of each set. The length between any two points on these curves is undefined because the curves are theoretical constructs that never stop repeating themselves.[20] Every smaller piece is composed of an infinite number of scaled segments that look exactly like the first iteration. These are not rectifiable curves, meaning they cannot be measured by being broken down into many segments approximating their respective lengths. They cannot be characterized by finding their lengths or slopes. However, their fractal dimensions can be determined, which shows that both fill space more than ordinary lines but less than surfaces, and allows them to be compared in this regard.

Note that the two fractal curves described above show a type of self-similarity that is exact with a repeating unit of detail that is readily visualized. This sort of structure can be extended to other spaces (e.g., a fractal that extends the Koch curve into 3-d space has a theoretical D = 2.5849). However, such neatly countable complexity is only one example of the self-similarity and detail that are present in fractals.[3][14] The example of the coast line of Britain, for instance, exhibits self-similarity of an approximate pattern with approximate scaling.[5]:26 Overall, fractals show several types and degrees of self-similarity and detail that may not be easily visualized. These include, as examples, strange attractors for which the detail has been described as in essence, smooth portions piling up,[17]:49 the Julia set, which can be seen to be complex swirls upon swirls, and heart rates, which are patterns of rough spikes repeated and scaled in time.[21] Fractal complexity may not always be resolvable into easily grasped units of detail and scale without complex analytic methods but it is still quantifiable through fractal dimensions.[5]:197; 262

History

The terms fractal dimension and fractal were coined by Mandelbrot in 1975,[16] about a decade after he published his paper on self-similarity in the coastline of Britain. Various historical authorities credit him with also synthesizing centuries of complicated theoretical mathematics and engineering work and applying them in a new way to study complex geometries that defied description in usual linear terms.[22][15][23] The earliest roots of what Mandelbrot synthesized as the fractal dimension have been traced clearly back to writings about non-differentiable, infinitely self-similar functions, which are important in the mathematical definition of fractals, around the time that calculus was discovered in the mid 1600s.[5]:405 There was a lull in the published work on such functions for a time after that, then a renewal starting in the late 1800s with the publishing of mathematical functions and sets that are today called canonical fractals (such as the eponymous works of von Koch,[20] Sierpinski, and Julia), but at the time of their formulation were often considered antithetical mathematical "monsters".[23][15] These works were accompanied by perhaps the most pivotal point in the development of the concept of a fractal dimension through the work of Hausdorff in the early 1900s, who defined a "fractional" dimension that has come to be named after him and is frequently invoked in defining modern fractals.[22][4][5]:44[17]


See Fractal history for more information

Role of scaling

Figure 4. Traditional notions of geometry for defining scaling and dimension.

The concept of a fractal dimension rests in unconventional views of scaling and dimension.[24] As Fig. 4 illustrates, traditional notions of geometry dictate that shapes scale predictably according to intuitive and familiar ideas about the space they are contained within, such that, for instance, measuring a line using first one measuring stick then another 1/3 its size will give for the second stick a total length 3 times as many sticks long as with the first. This holds in 2 dimensions, as well. If one measures the area of a square then measures again with a box 1/3 the size of the original, one will find 9 times as many squares as with the first measure. Such familiar scaling relationships can be defined mathematically by the general scaling rule in Equation 1, where the variable N stands for the number of new sticks, ε for the scaling factor, and D for the fractal dimension:

N = \varepsilon^{-D}    (1)

This scaling rule typifies conventional rules about geometry and dimension: for lines, it quantifies that, because N = 3 when ε = 1/3 as in the example above, D = 1, and for squares, because N = 9 when ε = 1/3, D = 2.

Figure 5. The first four iterations of the Koch snowflake, which has an approximate Hausdorff dimension of 1.2619.

The same rule applies to fractal geometry but less intuitively. To elaborate, a fractal line measured at first to be one length, when remeasured using a new stick scaled by 1/3 of the old, may not be the expected 3 but instead 4 times as many scaled sticks long. In this case, N = 4 when ε = 1/3, and the value of D can be found by rearranging Equation 1:

D = \frac{\log N}{\log (1/\varepsilon)} = \frac{\log 4}{\log 3} \approx 1.2619    (2)

That is, for a fractal described by N = 4 when ε = 1/3, D = 1.2619, a non-integer dimension that suggests the fractal has a dimension not equal to the space it resides in.[3] The scaling used in this example is the same scaling of the Koch curve and snowflake. Of note, these images themselves are not true fractals because the scaling described by the value of D cannot continue infinitely for the simple reason that the images only exist to the point of their smallest component, a pixel. The theoretical pattern that the digital images represent, however, has no discrete pixel-like pieces, but rather is composed of an infinite number of infinitely scaled segments joined at different angles and does indeed have a fractal dimension of 1.2619.[24][5]
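To make the arithmetic of Equations 1 and 2 concrete, here is a minimal sketch in Python (added for illustration, not part of the original article; the helper name similarity_dimension is a placeholder) that evaluates D = log N / log(1/ε) for a few exactly self-similar constructions mentioned in this collection:

```python
import math

def similarity_dimension(n_pieces, scale_factor):
    """D = log(N) / log(1/epsilon) for a shape made of N copies each scaled by epsilon."""
    return math.log(n_pieces) / math.log(1.0 / scale_factor)

# Each entry: (name, number of self-similar pieces N, scaling factor epsilon)
examples = [
    ("line segment",       3, 1/3),   # 3 copies at 1/3 scale -> D = 1
    ("filled square",      9, 1/3),   # 9 copies at 1/3 scale -> D = 2
    ("Koch curve",         4, 1/3),   # 4 copies at 1/3 scale -> D ~ 1.2619
    ("32-segment quadric", 32, 1/8),  # 32 copies at 1/8 scale -> D ~ 1.67 (Fig. 2)
]

for name, n, eps in examples:
    print(f"{name}: D = {similarity_dimension(n, eps):.4f}")
```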

D is not a unique descriptor

Figure 6. Two L-systems branching fractals that are made by producing 4 new parts for every 1/3 scaling and so have the same theoretical D as the Koch curve, and for which the empirical box counting D has been demonstrated with 2% accuracy.[7]

As is the case with dimensions determined for lines, squares, and cubes, fractal dimensions are general descriptors that do not uniquely define patterns.[25][24] The value of D for the Koch fractal discussed above, for instance, quantifies the pattern's inherent scaling, but does not uniquely describe nor provide enough information to reconstruct it. Many fractal structures or patterns could be constructed that have the same scaling relationship but are dramatically different from the Koch curve, as is illustrated in Figure 6.

For examples of how fractal patterns can be constructed, see Fractal, Sierpinski triangle, Mandelbrot set, Diffusion limited aggregation.

Examples

The concept of fractal dimension described in this article is a basic view of a complicated construct. The examples discussed here were chosen for clarity, and the scaling unit and ratios were known ahead of time. In practice, however, fractal dimensions can be determined using techniques that approximate scaling and detail from limits estimated from regression lines over log vs log plots of size vs scale. Several formal mathematical definitions of different types of fractal dimension are listed below. Although for some classic fractals all these dimensions coincide, in general they are not equivalent:

• Box counting dimension: D is estimated as the exponent of a power law:

D_0 = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)}

• Information dimension: D considers how the average information needed to identify an occupied box scales with box size; p is a probability:

D_1 = \lim_{\varepsilon \to 0} \frac{-\langle \log p_\varepsilon \rangle}{\log (1/\varepsilon)}

• Correlation dimension: D is based on M as the number of points used to generate a representation of a fractal and g_ε, the number of pairs of points closer than ε to each other:

D_2 = \lim_{\varepsilon \to 0} \frac{\log (g_\varepsilon / M^2)}{\log \varepsilon}

• Generalized or Rényi dimensions: The box-counting, information, and correlation dimensions can be seen as special cases of a continuous spectrum of generalized dimensions of order α, defined by:

D_\alpha = \lim_{\varepsilon \to 0} \frac{\frac{1}{\alpha - 1} \log \sum_i p_i^\alpha}{\log \varepsilon}

• Multifractal dimensions: a special case of Rényi dimensions where scaling behaviour varies in different parts of the pattern.
• Uncertainty exponent
• Hausdorff dimension
• Packing dimension
• Local connected dimension[26]


Estimating from real-world data

The fractal dimension measures described in this article are for formally-defined fractals. However, many real-world phenomena also exhibit limited or statistical fractal properties, and fractal dimensions have been estimated for sampled data from many such phenomena using computer based fractal analysis techniques. Practical dimension estimates are affected by various methodological issues, and are sensitive to numerical or experimental noise and limitations in the amount of data. Nonetheless, the field is rapidly growing and, as evidenced by searching databases such as PubMed,[27] the past decade has seen methods develop from being largely theoretical to the point where estimated fractal dimensions for statistically self-similar phenomena have many practical applications in multifarious fields including diagnostic imaging[28][29], physiology[30], neuroscience[31], medicine[32][33][34], physics[35][36], image analysis[37][38], acoustics[39], Riemann zeta zeros[40] and electrochemical processes[41].

Notes

[1] Falconer, Kenneth (2003). Fractal Geometry. New York: Wiley. p. 308. ISBN 9780470848623.
[2] Sagan, Hans (1994). Space-Filling Curves. Berlin: Springer-Verlag. p. 156. ISBN 0387942653.
[3] Vicsek, Tamás (1992). Fractal growth phenomena. Singapore, New Jersey: World Scientific. p. 10. ISBN 9789810206680.
[4] Mandelbrot, B. (1967). "How Long is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension". Science 156 (3775): 636–638. doi:10.1126/science.156.3775.636. PMID 17837158.
[5] Benoît B. Mandelbrot (1983). The fractal geometry of nature (http://books.google.com/books?id=0R2LkE3N7-oC). Macmillan. ISBN 978-0716711865. Retrieved 1 February 2012.
[6] Harte, David (2001). Multifractals. London: Chapman & Hall. ISBN 9781584881544.
[7] Karperien (2004). Defining Microglial Morphology: Form, Function, and Fractal Dimension. Charles Sturt University. p. 95.
[8] Losa, Gabriele A.; Nonnenmacher, Theo F., eds. (2005). Fractals in biology and medicine (http://books.google.com/books?id=t9l9GdAt95gC). Springer. ISBN 978-3-7643-7172-2. Retrieved 1 February 2012.
[9] Falconer, Kenneth (2003). Fractal Geometry. New York: Wiley. p. 308. ISBN 9780470848623.
[10] Chen, Y. (2011). Hernandez Montoya, Alejandro Raul, ed. "Modeling Fractal Structure of City-Size Distributions Using Correlation Functions". PLoS ONE 6 (9): e24791. doi:10.1371/journal.pone.0024791. PMC 3176775. PMID 21949753.
[11] "Applications" (http://library.thinkquest.org/26242/full/ap/ap.html). Retrieved 2007-10-21.
[12] Popescu, D. P.; Flueraru, C.; Mao, Y.; Chang, S.; Sowa, M. G. (2010). "Signal attenuation and box-counting fractal analysis of optical coherence tomography images of arterial tissue". Biomedical Optics Express 1 (1): 268–277. doi:10.1364/boe.1.000268. PMC 3005165. PMID 21258464.
[13] King, R. D.; George, A. T.; Jeon, T.; Hynan, L. S.; Youn, T. S.; Kennedy, D. N.; Dickerson, B.; the Alzheimer's Disease Neuroimaging Initiative (2009). "Characterization of Atrophic Changes in the Cerebral Cortex Using Fractal Dimensional Analysis". Brain Imaging and Behavior 3 (2): 154–166. doi:10.1007/s11682-008-9057-9. PMC 2927230. PMID 20740072.
[14] Peters, Edgar (1996). Chaos and order in the capital markets: a new view of cycles, prices, and market volatility. New York: Wiley. ISBN 0471139386.
[15] Edgar, Gerald (2004). Classics on Fractals. Boulder: Westview Press. ISBN 9780813341538.
[16] Albers; Alexanderson (2008). "Benoît Mandelbrot: In his own words". Mathematical people: profiles and interviews. Wellesley, Mass: AK Peters. p. 214. ISBN 9781568813400.
[17] Mandelbrot, Benoît (2004). Fractals and Chaos. Berlin: Springer. ISBN 9780387201580. "A fractal set is one for which the fractal (Hausdorff-Besicovitch) dimension strictly exceeds the topological dimension"
[18] See a graphic representation of different fractal dimensions
[19] See Fractal characteristics
[20] Helge von Koch, "On a continuous curve without tangents constructible from elementary geometry". In Gerald Edgar, ed. (2004). Classics on Fractals. Boulder: Westview Press. pp. 25–46. ISBN 9780813341538.
[21] Tan, C. O.; Cohen, M. A.; Eckberg, D. L.; Taylor, J. A. (2009). "Fractal properties of human heart period variability: Physiological and methodological implications". The Journal of Physiology 587 (15): 3929. doi:10.1113/jphysiol.2009.169219.
[22] Gordon, Nigel (2000). Introducing fractal geometry. Duxford: Icon. p. 71. ISBN 9781840461237.
[23] Trochet, Holly (2009). "A History of Fractal Geometry" (http://www.webcitation.org/65DCT2znx). MacTutor History of Mathematics. Archived from the original on 4 February 2012. Retrieved 4 February 2012.
[24] Iannaccone, Khokha (1996). Fractal Geometry in Biological Systems. ISBN 978-0849376368.
[25] Vicsek, Tamás (2001). Fluctuations and scaling in biology. Oxford: Oxford University Press. ISBN 0-19-850790-9.
[26] Jelinek (2008). "Automated detection of proliferative retinopathy in clinical practice". Clinical Ophthalmology: 109–122. doi:10.2147/OPTH.S1579.
[27] "PubMed" (http://www.ncbi.nlm.nih.gov/pubmed?term=fractal dimension). Search terms fractal analysis, box counting, fractal dimension, multifractal. Retrieved January 31, 2012.


[28] Landini, G.; Murray, P. I.; Misson, G. P. (1995). "Local connected fractal dimensions and lacunarity analyses of 60 degrees fluorescein angiograms". Investigative Ophthalmology & Visual Science 36 (13): 2749–2755. PMID 7499097.
[29] Cheng, Q. (1997). Mathematical Geology 29 (7): 919–932. doi:10.1023/A:1022355723781.
[30] Popescu, D. P.; Flueraru, C.; Mao, Y.; Chang, S.; Sowa, M. G. (2010). "Signal attenuation and box-counting fractal analysis of optical coherence tomography images of arterial tissue". Biomedical Optics Express 1 (1): 268–277. doi:10.1364/boe.1.000268. PMC 3005165. PMID 21258464.
[31] King, R. D.; George, A. T.; Jeon, T.; Hynan, L. S.; Youn, T. S.; Kennedy, D. N.; Dickerson, B.; the Alzheimer's Disease Neuroimaging Initiative (2009). "Characterization of Atrophic Changes in the Cerebral Cortex Using Fractal Dimensional Analysis". Brain Imaging and Behavior 3 (2): 154–166. doi:10.1007/s11682-008-9057-9. PMC 2927230. PMID 20740072.
[32] Liu, J. Z.; Zhang, L. D.; Yue, G. H. (2003). "Fractal Dimension in Human Cerebellum Measured by Magnetic Resonance Imaging". Biophysical Journal 85 (6): 4041–4046. doi:10.1016/S0006-3495(03)74817-6. PMC 1303704. PMID 14645092.
[33] Smith, T. G.; Lange, G. D.; Marks, W. B. (1996). "Fractal methods and results in cellular morphology — dimensions, lacunarity and multifractals". Journal of Neuroscience Methods 69 (2): 123–136. doi:10.1016/S0165-0270(96)00080-5. PMID 8946315.
[34] Li, J.; Du, Q.; Sun, C. (2009). "An improved box-counting method for image fractal dimension estimation". Pattern Recognition 42 (11): 2460. doi:10.1016/j.patcog.2009.03.001.
[35] Dubuc, B.; Quiniou, J.; Roques-Carmes, C.; Tricot, C.; Zucker, S. (1989). "Evaluating the fractal dimension of profiles". Physical Review A 39 (3): 1500–1512. doi:10.1103/PhysRevA.39.1500. PMID 9901387.
[36] Roberts, A.; Cronin, A. (1996). "Unbiased estimation of multi-fractal dimensions of finite data sets". Physica A: Statistical Mechanics and its Applications 233 (3–4): 867. doi:10.1016/S0378-4371(96)00165-3.
[37] Pierre Soille and Jean-F. Rivest (1996). "On the Validity of Fractal Dimension Measurements in Image Analysis" (http://mdigest.jrc.ec.europa.eu/soille/soille-rivest96.pdf). Journal of Visual Communication and Image Representation 7 (3): 217–229. doi:10.1006/jvci.1996.0020. ISSN 1047-3203.
[38] Tolle, C. R.; McJunkin, T. R.; Gorsich, D. J. (2003). "Suboptimal minimum cluster volume cover-based method for measuring fractal dimension". IEEE Transactions on Pattern Analysis and Machine Intelligence 25: 32. doi:10.1109/TPAMI.2003.1159944.
[39] Maragos, P.; Potamianos, A. (1999). "Fractal dimensions of speech sounds: Computation and application to automatic speech recognition". The Journal of the Acoustical Society of America 105 (3): 1925–1932. doi:10.1121/1.426738. PMID 10089613.
[40] Shanker, O. (2006). "Random matrices, generalized zeta functions and self-similarity of zero distributions". Journal of Physics A: Mathematical and General 39 (45): 13983. doi:10.1088/0305-4470/39/45/008.
[41] Eftekhari, A. (2004). "Fractal Dimension of Electrochemical Reactions". Journal of the Electrochemical Society 151 (9): E291–E296. doi:10.1149/1.1773583.

References

Further Reading
• Mandelbrot, Benoît B., The (Mis)Behavior of Markets, A Fractal View of Risk, Ruin and Reward (Basic Books, 2004)

External links
• TruSoft's Benoît - Fractal Analysis Software product calculates fractal dimensions and Hurst exponents (http://www.trusoft-international.com)
• Fractal Dimension Estimator Java Applet (http://www.stevec.org/fracdim/)
• Fractal Analysis Software for Biologists; free from the NIH ImageJ website (http://rsb.info.nih.gov/ij/plugins/fraclac/FLHelp/Fractals.htm)


Fractal analysis

Fractal analysis is assessing fractal characteristics of data. It consists of several methods to assign a fractal dimension and other fractal characteristics to a dataset which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations[1], heart rates[2], digital images[3], molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science.[4] An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered.[5]

Types of fractal analysis

Several types of fractal analysis are done, including box counting, lacunarity analysis, mass methods, and multifractal analysis.[5][1] A common feature of all types of fractal analysis is the need for benchmark patterns against which to assess outputs.[6] These can be acquired with various types of fractal generating software capable of generating benchmark patterns suitable for this purpose, which generally differ from software designed to render fractal art.
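To illustrate the benchmarking idea, the sketch below (Python; added as an example and not taken from the original text) generates a Sierpinski-triangle point set by the chaos game. Its theoretical dimension, log 3 / log 2 ≈ 1.585, is known ahead of time, so the pattern can serve as a test input for a box counting implementation; the function name and parameters are placeholders.

```python
import random

def sierpinski_points(n_points=100_000, seed=0):
    """Generate points on the Sierpinski triangle via the chaos game.
    Theoretical fractal dimension: log(3)/log(2) ~= 1.585."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.25, 0.25                      # arbitrary starting point
    pts = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0   # jump halfway toward a random vertex
        pts.append((x, y))
    return pts

points = sierpinski_points()
print(len(points), "benchmark points generated")
```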

Applications

Applications of fractal analysis include:[7]

• Heart rate analysis[2]
• Urban growth[8]
• Computer and video game design, especially computer graphics for organic environments and as part of procedural generation
• Diagnostic imaging[9][13]
• Neuroscience[10][11]
• Fractography and fracture mechanics
• Cancer research[12]
• Fractal antennas — small size antennas using fractal shapes
• Classification of histopathology slides in medicine[14]
• Pathology[15][16]
• Small angle scattering theory of fractally rough systems
• Fractal landscape or coastline complexity[5][17]
• Geology[18]
• T-shirts and other fashion
• Enzyme/enzymology (Michaelis-Menten kinetics)
• Geography[19]
• Generation of patterns for camouflage, such as MARPAT
• An equation of state for the coexistence of stability and flexibility in proteins[20]
• Archaeology[21][22]
• Digital sundial
• Generation of new music
• Seismology[23][24]
• Technical analysis of price series (see Elliott wave principle)
• Generation of various art forms
• Wave propagation in self-similar (fractal) media[25]
• Fractal analysis in music[26][27]
• Search and rescue[28]
• Soil studies[29]
• Signal and image compression


References

[1] Peters, Edgar (1996). Chaos and order in the capital markets: a new view of cycles, prices, and market volatility. New York: Wiley. ISBN 0471139386.
[2] Tan, C. O.; Cohen, M. A.; Eckberg, D. L.; Taylor, J. A. (2009). "Fractal properties of human heart period variability: Physiological and methodological implications". The Journal of Physiology 587 (15): 3929. doi:10.1113/jphysiol.2009.169219.
[3] Fractal Analysis of Digital Images. http://rsbweb.nih.gov/ij/plugins/fraclac/FLHelp/Fractals.htm
[4] Fractals: Complex Geometry, Patterns, and Scaling in Nature and Society (http://www.worldscinet.com/fractals/fractals.shtml). ISSN 1793-6543.
[5] Benoît B. Mandelbrot (1983). The fractal geometry of nature (http://books.google.com/books?id=0R2LkE3N7-oC). Macmillan. ISBN 978-0716711865. Retrieved 1 February 2012.
[6] Digital Images in FracLac, ImageJ
[7] "Applications" (http://library.thinkquest.org/26242/full/ap/ap.html). Retrieved 2007-10-21.
[8] Chen, Y. (2011). Hernandez Montoya, Alejandro Raul, ed. "Modeling Fractal Structure of City-Size Distributions Using Correlation Functions". PLoS ONE 6 (9): e24791. doi:10.1371/journal.pone.0024791. PMC 3176775. PMID 21949753.
[9] Karperien, A.; Jelinek, H. F.; Leandro, J. J.; Soares, J. V.; Cesar Jr, R. M.; Luckie, A. (2008). "Automated detection of proliferative retinopathy in clinical practice". Clinical Ophthalmology (Auckland, N.Z.) 2 (1): 109–122. PMC 2698675. PMID 19668394.
[10] Karperien, A. L.; Jelinek, H. F.; Buchan, A. M. (2008). "Box-Counting Analysis of Microglia Form in Schizophrenia, Alzheimer's Disease and Affective Disorder". Fractals 16 (2): 103. doi:10.1142/S0218348X08003880.
[11] Liu, J. Z.; Zhang, L. D.; Yue, G. H. (2003). "Fractal Dimension in Human Cerebellum Measured by Magnetic Resonance Imaging". Biophysical Journal 85 (6): 4041–4046. doi:10.1016/S0006-3495(03)74817-6. PMC 1303704. PMID 14645092.
[12] Kam, Y.; Karperien, A.; Weidow, B.; Estrada, L.; Anderson, A. R.; Quaranta, V. (2009). "Nest expansion assay: A cancer systems biology approach to in vitro invasion measurements". BMC Research Notes 2: 130. doi:10.1186/1756-0500-2-130. PMC 2716356. PMID 19594934.
[13] Karperien, A.; Jelinek, H. F.; Leandro, J. J.; Soares, J. V.; Cesar Jr, R. M.; Luckie, A. (2008). "Automated detection of proliferative retinopathy in clinical practice". Clinical Ophthalmology (Auckland, N.Z.) 2 (1): 109–122. PMC 2698675. PMID 19668394.
[14] Losa, Gabriele A.; Nonnenmacher, Theo F., eds. (2005). Fractals in biology and medicine (http://books.google.com/books?id=t9l9GdAt95gC). Springer. ISBN 978-3-7643-7172-2. Retrieved 1 February 2012.
[15] Smith, R. F.; Mohr, D. N.; Torres, V. E.; Offord, K. P.; Melton Lj, 3. (1989). "Renal insufficiency in community patients with mild asymptomatic microhematuria". Mayo Clinic Proceedings 64 (4): 409–414. PMID 2716356.
[16] Landini, G. (2011). "Fractals in microscopy". Journal of Microscopy 241 (1): 1–8. doi:10.1111/j.1365-2818.2010.03454.x. PMID 21118245.
[17] Mandelbrot, B. (1967). "How Long is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension". Science 156 (3775): 636–638. doi:10.1126/science.156.3775.636. PMID 17837158.
[18] Cheng, Q. (1997). Mathematical Geology 29 (7): 919–932. doi:10.1023/A:1022355723781.
[19] Chen, Y. (2011). Hernandez Montoya, Alejandro Raul, ed. "Modeling Fractal Structure of City-Size Distributions Using Correlation Functions". PLoS ONE 6 (9): e24791. doi:10.1371/journal.pone.0024791. PMC 3176775. PMID 21949753.
[20] http://prl.aps.org/abstract/PRL/v100/i20/e208101
[21] Burkle-Elizondo, G.; Valdez-Cepeda, R. D. (2006). "Fractal analysis of Mesoamerican pyramids". Nonlinear Dynamics, Psychology, and Life Sciences 10 (1): 105–122. PMID 16393505.
[22] Brown, C. T.; Witschey, W. R. T.; Liebovitch, L. S. (2005). "The Broken Past: Fractals in Archaeology". Journal of Archaeological Method and Theory 12: 37. doi:10.1007/s10816-005-2396-6.
[23] Vannucchi, P.; Leoni, L. (2007). "Structural characterization of the Costa Rica décollement: Evidence for seismically-induced fluid pulsing". Earth and Planetary Science Letters 262 (3–4): 413. doi:10.1016/j.epsl.2007.07.056.
[24] Didier Sornette (2004). Critical phenomena in natural sciences: chaos, fractals, self organization, and disorder: concepts and tools. Springer. pp. 128–140. ISBN 9783540407546.
[25] http://fr.arxiv.org/abs/0904.0780
[26] http://www.brotherstechnology.com/math/fractal-music.html
[27] Brothers, H. J. (2007). "Structural Scaling in Bach's Cello Suite No. 3". Fractals 15: 89–95. doi:10.1142/S0218348X0700337X.
[28] Panteha Saeedi and Soren A. Sorensen. "An Algorithmic Approach to Generate After-disaster Test Fields for Search and Rescue Agents" (http://www.iaeng.org/publication/WCE2009/WCE2009_pp93-98.pdf). Proceedings of The World Congress on Engineering 2009: 93–98. ISBN 978-988-17012-5-1.
[29] Hu, S.; Cheng, Q.; Wang, L.; Xie, S. (2012). "Multifractal characterization of urban residential land price in space and time". Applied Geography 34: 161. doi:10.1016/j.apgeog.2011.10.016.


Further reading
• Fractals and Fractal Analysis (http://rsb.info.nih.gov/ij/plugins/fraclac/FLHelp/Fractals.htm)
• Fractal analysis (http://www.fch.vutbr.cz/lectures/imagesci/download_ejournal/01_O.Zmeskal.pdf)
• Benoit - Fractal Analysis Software (http://www.trusoft.netmegs.com/)
• Fractal Analysis Methods for Human Heartbeat and Gait Dynamics (http://www.physionet.org/tutorials/fmnc/index.shtml)

Box counting

Figure 1. A 32-segment quadric fractal viewed through "boxes" of different sizes. The pattern illustrates self similarity.

Box counting is a method of gathering data for analyzing complex patterns by breaking a dataset, object, image, etc. into smaller and smaller pieces, typically "box"-shaped, and analyzing the pieces at each smaller scale. The essence of the process has been compared to zooming in or out using optical or computer based methods to examine how observations of detail change with scale. In box counting, however, rather than changing the magnification or resolution of a lens, the investigator changes the size of the element used to inspect the object or pattern (see Figure 1). Computer based box counting algorithms have been applied to patterns in 1-, 2-, and 3-dimensional spaces.[1][2] The technique is usually implemented in software for use on patterns extracted from digital media, although the fundamental method can be used to investigate some patterns physically. The technique arose out of and is used in fractal analysis. It also has application in related fields such as lacunarity and multifractal analysis.[3][4]

The method

Theoretically, the intent of box counting is to quantify fractal scaling, but from a practical perspective this would require that the scaling be known ahead of time. This can be seen in Figure 1, where choosing boxes of the right relative sizes readily shows how the pattern repeats itself at smaller scales. In fractal analysis, however, the scaling factor is not always known ahead of time, so box counting algorithms attempt to find an optimized way of cutting a pattern up that will reveal the scaling factor. The fundamental method for doing this starts with a set of measuring elements—boxes—consisting of an arbitrary number of sizes or calibres, which we will call the set of εs. Then these ε-sized boxes are applied to the pattern and counted. To do this, for each ε in the set, a measuring element that is typically a 2-dimensional square or 3-dimensional box with side length corresponding to ε is used to scan a pattern or data set (e.g., an image or object) according to a predetermined scanning plan to cover the relevant part of the data set, recording, i.e., counting, for each step in the scan the relevant features captured within the measuring element.[3][4]
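The following is a minimal fixed-grid box counting sketch in Python (added for illustration; the function names and the choice of box sizes are placeholders, not part of the original article). It counts occupied boxes of side ε for a set of 2-D points and estimates the box counting dimension as the slope of log N(ε) versus log(1/ε):

```python
import math

def box_count(points, eps):
    """Count grid boxes of side `eps` that contain at least one (x, y) point."""
    occupied = {(int(x // eps), int(y // eps)) for x, y in points}
    return len(occupied)

def box_counting_dimension(points, box_sizes):
    """Least-squares slope of log N(eps) vs log(1/eps) over the chosen box sizes."""
    xs = [math.log(1.0 / eps) for eps in box_sizes]
    ys = [math.log(box_count(points, eps)) for eps in box_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Usage sketch, e.g. with the chaos-game benchmark points generated earlier:
# D = box_counting_dimension(points, [1/4, 1/8, 1/16, 1/32, 1/64])
```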

Figure 2. The sequence above shows basic steps in extracting a binary contour pattern from an original colour digital image of a neuron.

The data

The relevant features gathered during box counting depend on the subject being investigated and the type of analysis being done. Two well-studied subjects of box counting, for instance, are binary (meaning having only two colours, usually black and white)[2] and gray-scale[5] digital images (i.e., jpgs, tiffs, etc.). Box counting is generally done on patterns extracted from such still images, in which case the raw information recorded is typically based on features of pixels such as a predetermined colour value or range of colours or intensities. When box counting is done to determine a fractal dimension known as the box counting dimension, the information recorded is usually either yes or no as to whether or not the box contained any pixels of the predetermined colour or range (i.e., the number of boxes containing relevant pixels at each ε is counted). For other types of analysis, the data sought may be the number of pixels that fall within the measuring box,[4] the range or average values of colours or intensities, the spatial arrangement amongst pixels within each box, or properties such as average speed (e.g., from particle flow).[6][7][5][8]

Scan types

Every box counting algorithm has a scanning plan that describes how the data will be gathered, in essence, how the box will be moved over the space containing the pattern. A variety of scanning strategies has been used in box counting algorithms, where a few basic approaches have been modified in order to address issues such as sampling, analysis methods, etc.

Figure 2a. Boxes laid over an image as a fixed grid. Figure 2b. Boxes slid over an image in an overlapping pattern. Figure 2c. Boxes laid over an image concentrically focused on each pixel of interest.

Fixed grid scans

The traditional approach is to scan in a non-overlapping regular grid or lattice pattern.[3][4] To illustrate, Figure 2a shows the typical pattern used in software that calculates box counting dimensions from patterns extracted into binary digital images of contours, such as the fractal contour illustrated in Figure 1 or the classic example of the coastline of Britain often used to explain the method of finding a box counting dimension. The strategy simulates repeatedly laying a square box as though it were part of a grid overlaid on the image, such that the box for each ε never overlaps where it has previously been (see Figure 4). This is done until the entire area of interest has been scanned using each ε and the relevant information has been recorded.[9][10] When used to find a box counting dimension, the method is modified to find an optimal covering.


Figure 3. Retinal vasculature revealed through box counting analysis; colour coded local connected fractal dimension analysis done with FracLac freeware for biological image analysis.

Figure 4. It takes 12 green but 14 yellow boxes to completely cover the black pixels in these identical images. The difference is attributable to the position of the grid, illustrating the importance of grid placement in box counting.

Sliding box scans

Another approach that has been used is a sliding box algorithm, in which each box is slid over the image overlapping the previous placement. Figure 2b illustrates the basic pattern of scanning using a sliding box. The fixed grid approach can be seen as a sliding box algorithm with the increments horizontally and vertically equal to ε. Sliding box algorithms are often used for analyzing textures in lacunarity analysis and have also been applied to multifractal analysis.[8][2][11][12][13]

Subsampling and local dimensions

Box counting may also be used to determine local variation as opposed to global measures describing an entire pattern. Local variation can be assessed after the data have been gathered and analyzed (e.g., some software colour codes areas according to the fractal dimension for each subsample), but a third approach to box counting is to move the box according to some feature related to the pixels of interest. In local connected dimension box counting algorithms, for instance, the box for each ε is centred on each pixel of interest, as illustrated in Figure 2c.[7]

Methodological considerations

The implementation of any box counting algorithm has to specify certain details, such as how to determine the actual values in the set of εs, including the minimum and maximum sizes to use and the method of incrementing between sizes. Many such details reflect practical matters such as the size of a digital image but also technical issues related to the specific analysis that will be performed on the data. Another issue that has received considerable attention is how to approximate the so-called "optimal covering" for determining box counting dimensions and assessing multifractal scaling.[14][5][15][16]

Edge effects

One known issue in this respect is deciding what constitutes the edge of the useful information in a digital image, as the limits employed in the box counting strategy can affect the data gathered.

Scaling box size

The algorithm has to specify the type of increment to use between box sizes (e.g., linear vs exponential), which can have a profound effect on the results of a scan.

Grid orientation

As Figure 4 illustrates, the overall positioning of the boxes also influences the results of a box count. One approach in this respect is to scan from multiple orientations and use averaged or optimized data.[17][18]
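As a hedged illustration of this idea (assuming the box_count helper sketched earlier in this collection), one can repeat the count over several shifted grid origins and keep, for example, the smallest count at each ε, which approximates an optimal covering:

```python
def min_count_over_offsets(points, eps, n_offsets=4):
    """Box count repeated over shifted grid origins; keep the minimum count."""
    best = None
    for i in range(n_offsets):
        for j in range(n_offsets):
            dx, dy = i * eps / n_offsets, j * eps / n_offsets
            shifted = [(x + dx, y + dy) for x, y in points]
            count = box_count(shifted, eps)   # box_count as defined above
            best = count if best is None else min(best, count)
    return best
```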

To address various methodological considerations, some software is written so users can specify many such details, and some includes methods such as smoothing the data after the fact to be more amenable to the type of analysis being done.[19]

References

[1] Liu, J. Z.; Zhang, L. D.; Yue, G. H. (2003). "Fractal Dimension in Human Cerebellum Measured by Magnetic Resonance Imaging". Biophysical Journal 85 (6): 4041–4046. doi:10.1016/S0006-3495(03)74817-6. PMC 1303704. PMID 14645092.
[2] Smith, T. G.; Lange, G. D.; Marks, W. B. (1996). "Fractal methods and results in cellular morphology — dimensions, lacunarity and multifractals". Journal of Neuroscience Methods 69 (2): 123–136. doi:10.1016/S0165-0270(96)00080-5. PMID 8946315.
[3] Mandelbrot (1983). The Fractal Geometry of Nature. ISBN 978-0716711865.
[4] Iannaccone, Khokha (1996). Fractal Geometry in Biological Systems. pp. 143. ISBN 978-0849376368.
[5] Li, J.; Du, Q.; Sun, C. (2009). "An improved box-counting method for image fractal dimension estimation". Pattern Recognition 42 (11): 2460. doi:10.1016/j.patcog.2009.03.001.
[6] Karperien, A.; Jelinek, H. F.; Leandro, J. J.; Soares, J. V.; Cesar Jr, R. M.; Luckie, A. (2008). "Automated detection of proliferative retinopathy in clinical practice". Clinical Ophthalmology (Auckland, N.Z.) 2 (1): 109–122. PMC 2698675. PMID 19668394.
[7] Landini, G.; Murray, P. I.; Misson, G. P. (1995). "Local connected fractal dimensions and lacunarity analyses of 60 degrees fluorescein angiograms". Investigative Ophthalmology & Visual Science 36 (13): 2749–2755. PMID 7499097.
[8] Cheng, Q. (1997). Mathematical Geology 29 (7): 919–932. doi:10.1023/A:1022355723781.
[9] Popescu, D. P.; Flueraru, C.; Mao, Y.; Chang, S.; Sowa, M. G. (2010). "Signal attenuation and box-counting fractal analysis of optical coherence tomography images of arterial tissue". Biomedical Optics Express 1 (1): 268–277. doi:10.1364/boe.1.000268. PMC 3005165. PMID 21258464.
[10] King, R. D.; George, A. T.; Jeon, T.; Hynan, L. S.; Youn, T. S.; Kennedy, D. N.; Dickerson, B.; the Alzheimer's Disease Neuroimaging Initiative (2009). "Characterization of Atrophic Changes in the Cerebral Cortex Using Fractal Dimensional Analysis". Brain Imaging and Behavior 3 (2): 154–166. doi:10.1007/s11682-008-9057-9. PMC 2927230. PMID 20740072.
[11] Plotnick, R. E.; Gardner, R. H.; Hargrove, W. W.; Prestegaard, K.; Perlmutter, M. (1996). "Lacunarity analysis: A general technique for the analysis of spatial patterns". Physical Review E 53 (5): 5461–5468. PMID 9964879.
[12] Plotnick, R. E.; Gardner, R. H.; O'Neill, R. V. (1993). "Lacunarity indices as measures of landscape texture". Landscape Ecology 8 (3): 201. doi:10.1007/BF00125351.
[13] McIntyre, N. E.; Wiens, J. A. (2000). Landscape Ecology 15 (4): 313. doi:10.1023/A:1008148514268.
[14] Gorski, A. Z.; Skrzat, J. (2006). "Error estimation of the fractal dimension measurements of cranial sutures". Journal of Anatomy 208 (3): 353–359. doi:10.1111/j.1469-7580.2006.00529.x. PMC 2100241. PMID 16533317.
[15] Chhabra, A.; Jensen, R. V. (1989). "Direct determination of the f(α) singularity spectrum". Physical Review Letters 62 (12): 1327–1330. PMID 10039645.
[16] Fernández, E.; Bolea, J. A.; Ortega, G.; Louis, E. (1999). "Are neurons multifractals?". Journal of Neuroscience Methods 89 (2): 151–157. PMID 10491946.
[17] Karperien (2004). Defining Microglial Morphology: Form, Function, and Fractal Dimension. Charles Sturt University, Australia.
[18] Schulze, M. M.; Hutchings, N.; Simpson, T. L. (2008). "The Use of Fractal Analysis and Photometry to Estimate the Accuracy of Bulbar Redness Grading Scales". Investigative Ophthalmology & Visual Science 49 (4): 1398. doi:10.1167/iovs.07-1306.
[19] Karperien (2002), Box Counting, http://rsb.info.nih.gov/ij/plugins/fraclac/FLHelp/BoxCounting.htm#sampling


Multifractal system

A Strange Attractor that exhibits multifractal scaling

A multifractal system is a generalization of a fractal system in which a single exponent (the fractal dimension) is not enough to describe its dynamics; instead, a continuous spectrum of exponents (the so-called singularity spectrum) is needed.[1]

Multifractal systems are common in nature, especially geophysics. They include fully developed turbulence, stock market time series, real world scenes, the Sun's magnetic field time series, heartbeat dynamics, human gait, and natural luminosity time series. Models have been proposed in various contexts ranging from turbulence in fluid dynamics to internet traffic, finance, image modeling, texture synthesis, meteorology, geophysics and more.

From a practical perspective, multifractal analysis uses the mathematical basis of multifractal theory to investigate datasets, often in conjunction with other methods of fractal analysis and lacunarity analysis. The technique entails distorting datasets extracted from patterns to generate multifractal spectra that illustrate how scaling varies over the dataset. The techniques of multifractal analysis have been applied in a variety of practical situations, such as predicting earthquakes and interpreting medical images.[2][3][4]

Definition

In a multifractal system s, the behavior around any point is described by a local power law:

s(\vec{x} + \vec{a}) - s(\vec{x}) \sim a^{h(\vec{x})}

The exponent h(\vec{x}) is called the singularity exponent, as it describes the local degree of singularity or regularity around the point \vec{x}. The ensemble formed by all the points that share the same singularity exponent is called the singularity manifold of exponent h, and is a fractal set of fractal dimension D(h). The curve D(h) versus h is called the singularity spectrum and fully describes the (statistical) distribution of the variable h.

In practice, the multifractal behaviour of a physical system is not directly characterized by its singularity spectrum D(h). Data analysis rather gives access to the multiscaling exponents ζ(q). Indeed, multifractal signals generally obey a scale invariance property which yields power law behaviours for multiresolution quantities depending on their scale a. Depending on the object under study, these multiresolution quantities, denoted by T_X(a) in the following, can be local averages in boxes of size a, gradients over distance a, wavelet coefficients at scale a, etc. For multifractal objects, one usually observes a global power law scaling of the form:

\langle T_X(a)^q \rangle \sim a^{\zeta(q)}

at least in some range of scales and for some range of orders q. When such a behaviour is observed, one talks of scale invariance, self-similarity or multiscaling.[5]
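As an added illustration of this multiscaling relation (not from the original article), the sketch below estimates ζ(q) for a 1-D signal by using increments |s(x+a) − s(x)| as the multiresolution quantities T_X(a) and fitting the slope of log⟨T_X(a)^q⟩ against log a; the lag and order ranges shown are arbitrary placeholders.

```python
import math

def zeta_exponents(signal, lags, qs):
    """Estimate multiscaling exponents zeta(q) from structure functions
    S_q(a) = mean(|s[x+a] - s[x]|**q) ~ a**zeta(q)."""
    zetas = {}
    for q in qs:
        xs, ys = [], []
        for a in lags:
            increments = [abs(signal[i + a] - signal[i]) for i in range(len(signal) - a)]
            sq = sum(inc ** q for inc in increments) / len(increments)
            if sq > 0:
                xs.append(math.log(a))
                ys.append(math.log(sq))
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        zetas[q] = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                    / sum((x - mx) ** 2 for x in xs))
    return zetas

# usage sketch: zeta_exponents(samples, lags=[1, 2, 4, 8, 16], qs=[1, 2, 3])
```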


Estimation

Using the so-called multifractal formalism, it can be shown that, under some well-suited assumptions, there exists a correspondence between the singularity spectrum D(h) and the multiscaling exponents ζ(q) through a Legendre transform. While the determination of D(h) calls for some exhaustive local analysis of the data, which would result in difficult and numerically unstable calculations, the estimation of the ζ(q) relies on the use of statistical averages and linear regressions in log-log diagrams. Once the ζ(q) are known, one can deduce an estimate of D(h) thanks to a simple Legendre transform.

Multifractal systems are often modeled by stochastic processes such as multiplicative cascades. Interestingly, the ζ(q) receive some statistical interpretation as they characterize the evolution of the distributions of the T_X(a) as a goes from larger to smaller scales. This evolution is often called statistical intermittency and betrays a departure from Gaussian models.

Modelling as a multiplicative cascade also leads to estimation of multifractal properties for relatively small datasets (Roberts & Cronin 1996). A maximum likelihood fit of a multiplicative cascade to the dataset not only estimates the complete spectrum, but also gives reasonable estimates of the errors (see the web service [6]).
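For reference, the Legendre-transform relations can be written out explicitly in the box-counting notation τ(Q), α(Q), f(α) used later in this collection (this block restates standard multifractal formalism and is added for clarity rather than taken from the original article):

\alpha(Q) = \frac{d\tau(Q)}{dQ}, \qquad f\bigl(\alpha(Q)\bigr) = Q\,\alpha(Q) - \tau(Q), \qquad \tau(Q) = (Q - 1)\,D_Q

so that the pair (α(Q), f(α(Q))), traced out as Q varies, plays the role of the singularity spectrum corresponding to the multiscaling exponents.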

Practical application of multifractal spectra

Multifractal analysis is analogous to viewing a dataset through a series of distorting lenses to hone in on differences in scaling. The pattern shown is a Hénon map.

Multifractal analysis has been used in several fields in science to characterize various types of datasets.[7] In essence, multifractal analysis applies a distorting factor to datasets extracted from patterns, to compare how the data behave at each distortion. This is done using graphs known as multifractal spectra that illustrate how the distortions affect the data, analogous to viewing the dataset through a "distorting lens" as shown in the illustration.[8] Several types of multifractal spectra are used in practice.


DQ vs Q

DQ vs Q spectra for a non-fractal circle (empirical box counting dimension = 1.0), mono-fractal Quadric Cross (empirical box counting dimension = 1.49), and multifractal Hénon map (empirical box counting dimension = 1.29).

One practical multifractal spectrum is the graph of DQ vs Q, where DQ is the generalized dimension for a dataset and Q is an arbitrary set of exponents. The expression generalized dimension thus refers to a set of dimensions for a dataset (detailed calculations for determining the generalized dimension using box counting are described below).

Dimensional ordering

The general pattern of the graph of DQ vs Q can be used to assess the scaling in a pattern. The graph is generally decreasing, sigmoidal around Q=0, where D(Q=0) ≥ D(Q=1) ≥ D(Q=2). As illustrated in the figure, variation in this graphical spectrum can help distinguish patterns. The image shows D(Q) spectra from a multifractal analysis of binary images of non-, mono-, and multi-fractal sets. As is the case in the sample images, non- and mono-fractals tend to have flatter D(Q) spectra than multifractals.

The generalized dimension also offers some important specific information. D(Q=0) is equal to the Capacity Dimension, which in the analysis shown in the figures here is the box counting dimension. D(Q=1) is equal to the Information Dimension, and D(Q=2) to the Correlation Dimension. This relates to the "multi" in multifractal, whereby multifractals have multiple dimensions in the D(Q) vs Q spectra but monofractals stay rather flat in that area.[9][8]

f(α) vs α

Another useful multifractal spectrum is the graph of f(α) vs α (see calculations). These graphs generally rise to a maximum that approximates the fractal dimension at Q=0, and then fall. Like DQ vs Q spectra, they also show typical patterns useful for comparing non-, mono-, and multi-fractal patterns. In particular, for these spectra, non- and mono-fractals converge on certain values, whereas the spectra from multifractal patterns are typically humped over a broader extent.

Estimating multifractal scaling from box counting

Multifractal spectra can be determined from box counting on digital images. First, a box counting scan is done to determine how the pixels are distributed; then, this "mass distribution" becomes the basis for a series of calculations.[9][10][8] The chief idea is that for multifractals, the probability P of a number of pixels m appearing in a box i varies with box size ε raised to some exponent α, which changes over the image, as in Eq.0.0. NB: For monofractals, in contrast, the exponent does not change meaningfully over the set. P is calculated from the box counting pixel distribution as in Eq.2.0.

P_{i,\varepsilon} \propto \varepsilon^{\alpha_i}    (Eq.0.0)

where:
\varepsilon = an arbitrary scale (box size in box counting) at which the set is examined
i = the index for each box laid over the set for an \varepsilon
m_{i,\varepsilon} = the number of pixels or mass in any box i at size \varepsilon
N_\varepsilon = the total number of boxes that contained more than 0 pixels, for each \varepsilon

M_\varepsilon = \sum_{i=1}^{N_\varepsilon} m_{i,\varepsilon} = the total mass or sum of pixels in all boxes for this \varepsilon    (Eq.1.0)


P_{i,\varepsilon} = \frac{m_{i,\varepsilon}}{M_\varepsilon} = the probability of this mass at i relative to the total mass for a box size ε    (Eq.2.0)

P_{i,\varepsilon} is used to observe how the pixel distribution behaves when distorted in certain ways, as in Eq.3.0 and Eq.3.1:

Q = an arbitrary range of values to use as exponents for distorting the data set

I_{Q,\varepsilon} = \sum_{i=1}^{N_\varepsilon} P_{i,\varepsilon}^{Q} = the sum of all mass probabilities distorted by being raised to this Q, for this box size    (Eq.3.0)

• When Q = 1, Eq.3.0 equals 1, the usual sum of all probabilities, and when Q = 0, every term is equal to 1, so the sum is equal to the number of boxes counted, N_\varepsilon.

\mu_{i,Q,\varepsilon} = \frac{P_{i,\varepsilon}^{Q}}{\sum_{i=1}^{N_\varepsilon} P_{i,\varepsilon}^{Q}} = how the distorted mass probability at a box compares to the distorted sum over all boxes at this box size    (Eq.3.1)

These distorting equations are further used to address how the set behaves when scaled or resolved or cut up into a series of ε-sized pieces and distorted by Q, to find different values for the dimension of the set, as in the following:

• An important feature of Eq.3.0 is that it can also be seen to vary according to scale raised to the exponent τ_Q, as in Eq.4.0:

I_{Q,\varepsilon} \propto \varepsilon^{\tau_{Q}}    (Eq.4.0)

Thus, a series of values for τ_Q can be found from the slopes of the regression line for the log of Eq.3.0 vs the log of ε for each Q, based on Eq.4.1:

\tau_{Q} = \lim_{\varepsilon \to 0} \frac{\ln I_{Q,\varepsilon}}{\ln \varepsilon}    (Eq.4.1)

•• For the generalized dimension:

D_{Q} = \frac{\tau_{Q}}{Q - 1}    (Eq.5.0)

\tau_{Q} = (Q - 1)\, D_{Q}    (Eq.5.1)

\alpha_{Q} = \frac{d\tau_{Q}}{dQ}    (Eq.5.2)

f(\alpha_{Q}) = Q\,\alpha_{Q} - \tau_{Q}    (Eq.5.3)

• \alpha_{Q} is estimated as the slope of the regression line for A_{\varepsilon,Q} vs \ln \varepsilon, where:

A_{\varepsilon,Q} = \sum_{i=1}^{N_\varepsilon} \mu_{i,Q,\varepsilon}\,\ln P_{i,\varepsilon}    (Eq.6.0)

• Then f(\alpha_{Q}) is found from Eq.5.3. Alternatively, f(\alpha_{Q}) can be estimated directly as the slope of the regression line for F_{\varepsilon,Q} vs \ln \varepsilon, where:

F_{\varepsilon,Q} = \sum_{i=1}^{N_\varepsilon} \mu_{i,Q,\varepsilon}\,\ln \mu_{i,Q,\varepsilon}    (Eq.6.1)


In practice, the probability distribution depends on how the dataset is sampled, so optimizing algorithms have been developed to ensure adequate sampling.[8]
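To make these steps concrete, here is a compact Python sketch (added for illustration, not from the original article; function names are placeholders). It follows Eq.1.0 through Eq.5.0: it builds the box-mass probabilities for a set of 2-D points, forms the distorted sums I(Q, ε), estimates τ_Q as the slope of ln I versus ln ε, and reports D_Q = τ_Q/(Q − 1); Q = 1 is skipped because it requires the separate information-dimension limit.

```python
import math
from collections import Counter

def mass_distribution(points, eps):
    """Point counts per occupied box of side `eps` (the m_i of Eq.1.0)."""
    return Counter((int(x // eps), int(y // eps)) for x, y in points)

def generalized_dimensions(points, box_sizes, qs):
    """D_Q = tau_Q / (Q - 1), with tau_Q the slope of ln I(Q, eps) vs ln eps."""
    dq = {}
    for q in qs:
        if q == 1:
            continue                      # Q = 1 needs the information-dimension limit
        xs, ys = [], []
        for eps in box_sizes:
            masses = mass_distribution(points, eps)
            total = sum(masses.values())                           # M_eps (Eq.1.0)
            i_q = sum((m / total) ** q for m in masses.values())   # I(Q, eps) (Eq.3.0)
            xs.append(math.log(eps))
            ys.append(math.log(i_q))
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        tau = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
               / sum((x - mx) ** 2 for x in xs))                   # tau_Q (Eq.4.1)
        dq[q] = tau / (q - 1)                                      # D_Q (Eq.5.0)
    return dq

# usage sketch: generalized_dimensions(points, [1/8, 1/16, 1/32, 1/64], qs=[-2, 0, 2, 3])
```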

References

[1] Harte, David (2001). Multifractals. London: Chapman & Hall. ISBN 9781584881544.
[2] Lopes, R.; Betrouni, N. (2009). "Fractal and multifractal analysis: A review". Medical Image Analysis 13 (4): 634–649. doi:10.1016/j.media.2009.05.003. PMID 19535282.
[3] Moreno, P. A.; Vélez, P. E.; Martínez, E.; Garreta, L. E.; Díaz, N. S.; Amador, S.; Tischer, I.; Gutiérrez, J. M. et al (2011). "The human genome: A multifractal analysis". BMC Genomics 12: 506. doi:10.1186/1471-2164-12-506. PMID 21999602.
[4] Atupelage, C.; Nagahashi, H.; Yamaguchi, M.; Sakamoto, M.; Hashiguchi, A. (2012). "Multifractal feature descriptor for histopathology". Analytical Cellular Pathology (Amsterdam) 35 (2): 123–126. doi:10.3233/ACP-2011-0045. PMID 22101185.
[5] A. J. Roberts and A. Cronin (1996). "Unbiased estimation of multi-fractal dimensions of finite data sets". Physica A 233: 867–878. doi:10.1016/S0378-4371(96)00165-3.
[6] http://www.maths.adelaide.edu.au/anthony.roberts/multifractal.php
[7] Trevino, J.; Liew, S. F.; Noh, H.; Cao, H.; Dal Negro, L. (2012). "Geometrical structure, multifractal spectra and localized optical modes of aperiodic Vogel spirals". Optics Express 20 (3): 3015. doi:10.1364/OE.20.003015.
[8] Karperien, A (2002), What are Multifractals? (http://rsbweb.nih.gov/ij/plugins/fraclac/FLHelp/Multifractals.htm), ImageJ, retrieved 2012-02-10.
[9] Chhabra, A.; Jensen, R. (1989). "Direct determination of the f(α) singularity spectrum". Physical Review Letters 62 (12): 1327–1330. doi:10.1103/PhysRevLett.62.1327. PMID 10039645.
[10] Posadas, A. N. D.; Giménez, D.; Bittelli, M.; Vaz, C. M. P.; Flury, M. (2001). "Multifractal Characterization of Soil Particle-Size Distributions". Soil Science Society of America Journal 65 (5): 1361. doi:10.2136/sssaj2001.6551361x.

External links
• Stanley H.E., Meakin P. (1988). "Multifractal phenomena in physics and chemistry" (http://polymer.bu.edu/hes/articles/sm88.pdf) (Review). Nature 335 (6189): 405–9. doi:10.1038/335405a0.
• Alain Arneodo, et al. (2008). "Wavelet-based multifractal analysis" (http://www.scholarpedia.org/article/Wavelet-based_multifractal_analysis). Scholarpedia 3 (3): 4103. doi:10.4249/scholarpedia.4103.


Entropy (information theory)

In information theory, entropy is a measure of the uncertainty associated with a random variable. In this context, the term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message, usually in units such as bits. In this context, a 'message' means a specific realization of the random variable. Equivalently, the Shannon entropy is a measure of the average information content one is missing when one does not know the value of the random variable. The concept was introduced by Claude E. Shannon in his 1948 paper "A Mathematical Theory of Communication".

Shannon's entropy represents an absolute limit on the best possible lossless compression of any communication, under certain constraints: treating messages to be encoded as a sequence of independent and identically-distributed random variables, Shannon's source coding theorem shows that, in the limit, the average length of the shortest possible representation to encode the messages in a given alphabet is their entropy divided by the logarithm of the number of symbols in the target alphabet.

A single toss of a fair coin has an entropy of one bit. Two tosses have an entropy of two bits. The entropy rate for the coin is one bit per toss. However, if the coin is not fair, then the uncertainty is lower (if asked to bet on the next outcome, we would bet preferentially on the most frequent result), and thus the Shannon entropy is lower. Mathematically, a single coin flip (fair or not) is an example of a Bernoulli trial, and its entropy is given by the binary entropy function. A series of tosses of a two-headed coin will have zero entropy, since the outcomes are entirely predictable. The entropy rate of English text is between 1.0 and 1.5 bits per letter,[1] or as low as 0.6 to 1.3 bits per letter, according to estimates by Shannon based on human experiments.[2]

Introduction

Entropy is a measure of disorder, or more precisely unpredictability. For example, a series of coin tosses with a fair coin has maximum entropy, since there is no way to predict what will come next. A string of coin tosses with a coin with two heads and no tails has zero entropy, since the coin will always come up heads. Most collections of data in the real world lie somewhere in between. It is important to realize the difference between the entropy of a set of possible outcomes, and the entropy of a particular outcome. A single toss of a fair coin has an entropy of one bit, but a particular result (e.g. "heads") has zero entropy, since it is entirely "predictable".

English text has fairly low entropy. In other words, it is fairly predictable. Even if we don't know exactly what is going to come next, we can be fairly certain that, for example, there will be many more e's than z's, or that the combination 'qu' will be much more common than any other combination with a 'q' in it, and the combination 'th' will be more common than any of them. Uncompressed, English text has about one bit of entropy for each byte (eight bits) of message.

If a compression scheme is lossless—that is, you can always recover the entire original message by uncompressing—then a compressed message has the same total entropy as the original, but in fewer bits. That is, it has more entropy per bit. This means a compressed message is more unpredictable, which is why messages are often compressed before being encrypted. Roughly speaking, Shannon's source coding theorem says that a lossless compression scheme cannot compress messages, on average, to have more than one bit of entropy per bit of message. The entropy of a message is in a certain sense a measure of how much information it really contains.

Shannon's theorem also implies that no lossless compression scheme can compress all messages. If some messages come out smaller, at least one must come out larger. In the real world, this is not a problem, because we are generally only interested in compressing certain messages, for example English documents as opposed to random bytes, or digital photographs rather than noise, and don't care if our compressor makes random messages larger.

Definition

Named after Boltzmann's H-theorem, Shannon denoted the entropy H of a discrete random variable X with possible values {x1, ..., xn} as

  H(X) = E[I(X)].

Here E is the expected value, and I is the information content of X. I(X) is itself a random variable. If p denotes the probability mass function of X then the entropy can explicitly be written as

  H(X) = −∑_{i=1}^{n} p(x_i) log_b p(x_i),

where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the unit of entropy is bit for b = 2, nat for b = e, and dit (or digit) for b = 10.[3]

In the case of p_i = 0 for some i, the value of the corresponding summand 0 log_b 0 is taken to be 0, which is consistent with the limit

  lim_{p→0+} p log_b p = 0.

The proof of this limit can be quickly obtained by applying l'Hôpital's rule.
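As a rough illustration of this definition, the short Python sketch below computes the entropy of a discrete distribution; the language, the function name and the sample distributions are choices made for this illustration, not something fixed by the article.

import math

def shannon_entropy(probs, base=2):
    # Entropy of a discrete distribution given as a list of probabilities.
    # Terms with p == 0 are skipped, matching the convention 0*log(0) = 0.
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit for a fair coin
print(shannon_entropy([1.0, 0.0]))   # 0.0, a certain outcome carries no information
print(shannon_entropy([1/6] * 6))    # log2(6) ≈ 2.585 bits for a fair die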

Example

Figure: Entropy H(X) (i.e. the expected surprisal) of a coin flip, measured in bits, graphed versus the fairness of the coin Pr(X=1), where X=1 represents a result of heads. Note that the maximum of the graph depends on the distribution. Here, at most 1 bit is required to communicate the outcome of a fair coin flip (2 possible values), but the result of a fair die (6 possible values) would require at least log2 6 bits.

Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails. The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty, as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers a full 1 bit of information.

However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than a full 1 bit of information.

The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no information. Entropy can also be normalized by dividing it by information length; the resulting measure, called metric entropy, quantifies the randomness of the information.
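The binary entropy function mentioned above is easy to evaluate numerically; the following small Python sketch (the names and sample probabilities are illustrative, not from the article) shows how the entropy per toss falls as the coin becomes more biased.

import math

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), the entropy of a biased coin toss.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.5, 0.7, 0.9, 1.0):
    print(p, round(binary_entropy(p), 3))
# 0.5 -> 1.0 bit, 0.7 -> 0.881, 0.9 -> 0.469, 1.0 -> 0.0 (a two-headed coin)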

Rationale

For a random variable X with outcomes {x_1, ..., x_n}, the Shannon entropy, a measure of uncertainty (see further below) and denoted by H(X), is defined as

  (1)  H(X) = −∑_{i=1}^{n} p(x_i) log_b p(x_i),

where p(x_i) is the probability mass function of outcome x_i.

To understand the meaning of Eq. (1), first consider a set of n possible outcomes (events) {x_1, ..., x_n}, with equal probability p(x_i) = 1/n. An example would be a fair die with n values, from 1 to n. The uncertainty for such a set of n outcomes is defined by

  (2)  u = log_b(n).

The logarithm is used to provide the additivity characteristic for independent uncertainty. For example, consider appending to each value of the first die the value of a second die, which has m possible outcomes {y_1, ..., y_m}. There are thus mn possible outcomes {x_i y_j}. The uncertainty for such a set of mn outcomes is then

  (3)  u = log_b(mn) = log_b(m) + log_b(n).

Thus the uncertainty of playing with two dice is obtained by adding the uncertainty of the second die log_b(m) to the uncertainty of the first die log_b(n).

Now return to the case of playing with one die only (the first one). Since the probability of each event is 1/n, we can write

  u_i = log_b(n) = −log_b(1/n) = −log_b p(x_i).

In the case of a non-uniform probability mass function (or density in the case of continuous random variables), we let

  (4)  u_i = −log_b p(x_i),

which is also called a surprisal; the lower the probability p(x_i), i.e. p(x_i) → 0, the higher the uncertainty or the surprise, i.e. u_i → ∞, for the outcome x_i. The average uncertainty ⟨u⟩, with ⟨·⟩ being the average operator, is obtained by

  (5)  ⟨u⟩ = ∑_{i=1}^{n} p(x_i) u_i = −∑_{i=1}^{n} p(x_i) log_b p(x_i)

and is used as the definition of the entropy H(X) in Eq. (1). The above also explains why information entropy and information uncertainty can be used interchangeably.[4]

One may also define the conditional entropy of two events X and Y taking values x_i and y_j respectively, as

  H(X|Y) = −∑_{i,j} p(x_i, y_j) log_b ( p(x_i, y_j) / p(y_j) ),

where p(x_i, y_j) is the probability that X = x_i and Y = y_j. This quantity should be understood as the amount of randomness in the random variable X given that you know the value of Y. For example, the entropy associated with a six-sided die is H(die), but if you were told that it had in fact landed on 1, 2, or 3, then its entropy would be equal to H(die: the die landed on 1, 2, or 3).
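A small numerical check of this definition, assuming a fair six-sided die and the joint-probability form given above (the Python code and variable names are illustrative only):

import math

def conditional_entropy(joint):
    # H(X|Y) = -sum_{x,y} p(x,y) * log2( p(x,y) / p(y) ), joint given as {(x, y): p}.
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    return -sum(p * math.log2(p / p_y[y]) for (x, y), p in joint.items() if p > 0)

# Fair die X; Y records whether the die landed in {1, 2, 3} or in {4, 5, 6}.
joint = {(x, x <= 3): 1 / 6 for x in range(1, 7)}
print(conditional_entropy(joint))   # log2(3) ≈ 1.585: knowing Y removes one bit from log2(6)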

Aspects

Relationship to thermodynamic entropy

The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from thermodynamics.

In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy,

  S = −k_B ∑_i p_i ln p_i,

where k_B is the Boltzmann constant, and p_i is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Boltzmann (1872).[5]

The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy, introduced by John von Neumann in 1927,

  S = −k_B Tr(ρ ln ρ),

where ρ is the density matrix of the quantum mechanical system and Tr is the trace.

At an everyday practical level the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the minuteness of Boltzmann's constant k_B indicates, the changes in S / k_B for even tiny amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be off the scale compared to anything seen in data compression or signal processing. Furthermore, in classical thermodynamics the entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states for the system, thus making any complete state description longer. (See article: maximum entropy thermodynamics.) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox).

Entropy as information content

Entropy is defined in the context of a probabilistic model. Independent fair coin flips have an entropy of 1 bit per flip. A source that always generates a long string of B's has an entropy of 0, since the next character will always be a 'B'.

The entropy rate of a data source means the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate of between 0.6 and 1.3 bits per character,[6] depending on the experimental setup; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.

From the preceding example, note the following points:
1. The amount of entropy is not always an integer number of bits.
2. Many data bits may not convey information. For example, data structures often store information redundantly, or have identical sections regardless of the information in the data structure.

Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits (see caveat below in italics). The formula can be derived by calculating the mathematical expectation of the amount of information contained in a digit from the information source. See also Shannon-Hartley theorem.

Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. See Markov chain.

Data compression

Entropy effectively bounds the performance of the strongest lossless (or nearly lossless) compression possible, which can be realized in theory by using the typical set or in practice using Huffman, Lempel-Ziv or arithmetic coding. The performance of existing data compression algorithms is often used as a rough estimate of the entropy of a block of data.[7][8] See also Kolmogorov complexity.
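One rough way to see this in practice, sketched below in Python, is to compare a single-character frequency estimate with the output size of a general-purpose compressor such as zlib. Both are only crude estimates of the true entropy, and for short strings the compressor's fixed overhead inflates its figure; the sample text is an arbitrary choice for this illustration.

import math
import zlib
from collections import Counter

text = ("this is a small sample of english text used to illustrate a rough "
        "entropy estimate; longer and more varied samples give better estimates")
data = text.encode("ascii")
n = len(data)

# Order-0 estimate from single-character frequencies.
counts = Counter(data)
h0 = -sum(c / n * math.log2(c / n) for c in counts.values())

# A general-purpose compressor gives another rough estimate of bits per character.
compressed_bits_per_byte = 8 * len(zlib.compress(data, 9)) / n

print(round(h0, 2), "bits/char (frequency estimate)")
print(round(compressed_bits_per_byte, 2), "bits/char (zlib estimate, inflated by overhead on short inputs)")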

Limitations of entropy as information content

There are a number of entropy-related concepts that mathematically quantify information content in some way:
• the self-information of an individual message or symbol taken from a given probability distribution,
• the entropy of a given probability distribution of messages or symbols, and
• the entropy rate of a stochastic process.

(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.

It is important not to confuse the above concepts. Oftentimes it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1.5 bits per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate.

Although entropy is often used as a characterization of the information content of a data source, this information content is not absolute: it depends crucially on the probabilistic model. A source that always generates the same symbol has an entropy rate of 0, but the definition of what a symbol is depends on the alphabet. Consider a source that produces the string ABABABABAB... in which A is always followed by B and vice versa. If the probabilistic model considers individual letters as independent, the entropy rate of the sequence is 1 bit per character. But if the sequence is considered as "AB AB AB AB AB..." with symbols as two-character blocks, then the entropy rate is 0 bits per character.

However, if we use very large blocks, then the estimate of per-character entropy rate may become artificially low. This is because in reality, the probability distribution of the sequence is not knowable exactly; it is only an estimate. For example, suppose one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book. If there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.

For example, the Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, ... . Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). So the first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol. However, the sequence can be expressed using a formula [F(n) = F(n-1) + F(n-2) for n={3,4,5,...}, F(1)=1, F(2)=1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.

Limitations of entropy as a measure of unpredictability

In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key. For example, a 128-bit key that is randomly generated has 128 bits of entropy. It takes (on average) 2^127 guesses to break by brute force. If the key's first digit is 0, and the others random, then the entropy is 127 bits, and it takes (on average) 2^126 guesses.

However, this measure fails if the possible keys are not of equal probability. If the key is half the time "password" and half the time a true random 128-bit key, then the entropy is approximately 65 bits. Yet half the time the key may be guessed on the first try, if your first guess is "password", and on average, it takes around 2^126 guesses (not 2^65) to break this password.

Similarly, consider a 1000000-digit binary one-time pad. If the pad has 1000000 bits of entropy, it is perfect. If the pad has 999999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy) it may still be considered very good. But if the pad has 999999 bits of entropy, where the first digit is fixed and the remaining 999999 digits are perfectly random, then the first digit of the ciphertext will not be encrypted at all.
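The 65-bit figure for the half-"password", half-random-key example can be checked directly; the short Python sketch below (an illustration, not part of the original text) also shows that the expected number of guesses is near 2^126 rather than 2^65.

import math

# Key is "password" with probability 1/2, otherwise a uniformly random 128-bit key.
p_password = 0.5
n_random_keys = 2 ** 128
p_each_random = 0.5 / n_random_keys

entropy = (-p_password * math.log2(p_password)
           - n_random_keys * p_each_random * math.log2(p_each_random))
print(entropy)                       # 65.0 bits

# Attacker tries "password" first, then enumerates the 2**128 random keys.
expected_guesses = p_password * 1 + 0.5 * (n_random_keys / 2)
print(math.log2(expected_guesses))   # ≈ 126: far more work than 2**65 would suggest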

Data as a Markov process

A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independent of the last characters), the binary entropy is

  H(S) = −∑_i p_i log2 p_i,

where p_i is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is

  H(S) = −∑_i p_i ∑_j p_i(j) log2 p_i(j),

where i is a state (certain preceding characters) and p_i(j) is the probability of j given i as the previous character. For a second-order Markov source, the entropy rate is

  H(S) = −∑_i p_i ∑_j p_i(j) ∑_k p_{i,j}(k) log2 p_{i,j}(k).
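A minimal Python sketch of these estimates, computing the order-0 entropy and a first-order Markov entropy rate from a short sample string (the sample text and function names are arbitrary choices for this illustration):

import math
from collections import Counter

def order0_entropy(text):
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def order1_entropy_rate(text):
    # First-order Markov estimate: H = -sum over pairs p(i,j) * log2 p_i(j).
    pair_counts = Counter(zip(text, text[1:]))
    state_counts = Counter(text[:-1])
    n_pairs = len(text) - 1
    h = 0.0
    for (i, j), c in pair_counts.items():
        p_ij = c / n_pairs                   # joint probability of the pair (i, j)
        p_j_given_i = c / state_counts[i]    # transition probability p_i(j)
        h -= p_ij * math.log2(p_j_given_i)
    return h

sample = "the theory of information entropy was introduced by shannon in 1948"
print(order0_entropy(sample), order1_entropy_rate(sample))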

b-ary entropy

In general the b-ary entropy of a source (S, P) with source alphabet S = {a1, ..., an} and discrete probability distribution P = {p1, ..., pn}, where p_i is the probability of a_i (say p_i = p(a_i)), is defined by

  H_b(S) = −∑_{i=1}^{n} p_i log_b p_i.

Note: the b in "b-ary entropy" is the number of different symbols of the "ideal alphabet" which is being used as the standard yardstick to measure source alphabets. In information theory, two symbols are necessary and sufficient for an alphabet to be able to encode information, therefore the default is to let b = 2 ("binary entropy"). Thus, the entropy of the source alphabet, with its given empiric probability distribution, is a number equal to the number (possibly fractional) of symbols of the "ideal alphabet", with an optimal probability distribution, necessary to encode for each symbol of the source alphabet. Also note that "optimal probability distribution" here means a uniform distribution: a source alphabet with n symbols has the highest possible entropy (for an alphabet with n symbols) when the probability distribution of the alphabet is uniform. This optimal entropy turns out to be log_b(n).

Efficiency

A source alphabet with non-uniform distribution will have less entropy than if those symbols had uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio:

  efficiency = H_b(S) / log_b(n).

Efficiency has utility in quantifying the effective use of a communications channel.
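As a small illustrative Python sketch (the function name and sample distributions are assumptions made for this example), efficiency is just the entropy divided by the maximum possible entropy log_b(n) of the alphabet:

import math

def efficiency(probs, base=2):
    # Ratio of the actual entropy to the maximum log_b(n) for an n-symbol alphabet.
    h = -sum(p * math.log(p, base) for p in probs if p > 0)
    return h / math.log(len(probs), base)

print(efficiency([0.25] * 4))            # 1.0, a uniform alphabet is fully efficient
print(efficiency([0.7, 0.1, 0.1, 0.1]))  # ≈ 0.68 for a skewed distribution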

Characterization

Shannon entropy is characterized by a small number of criteria, listed below. Any definition of entropy satisfying these assumptions has the form

  H(p_1, ..., p_n) = −K ∑_{i=1}^{n} p_i log p_i,

where K is a constant corresponding to a choice of measurement units.

In the following, p_i = Pr(X = x_i) and H_n(p_1, ..., p_n) = H(X).

Continuity

The measure should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.

Symmetry

The measure should be unchanged if the outcomes x_i are re-ordered:

  H_n(p_1, p_2, ..., p_n) = H_n(p_2, p_1, ..., p_n), etc.

Maximum

The measure should be maximal if all the outcomes are equally likely (uncertainty is highest when all possible events are equiprobable):

  H_n(p_1, ..., p_n) ≤ H_n(1/n, ..., 1/n) = log_b(n).
For equiprobable events the entropy should increase with the number of outcomes:

  H_n(1/n, ..., 1/n) < H_{n+1}(1/(n+1), ..., 1/(n+1)).
Additivity

The amount of entropy should be independent of how the process is regarded as being divided into parts.

This last functional relationship characterizes the entropy of a system with sub-systems. It demands that the entropy of a system can be calculated from the entropies of its sub-systems if the interactions between the sub-systems are known.

Given an ensemble of n uniformly distributed elements that are divided into k boxes (sub-systems) with b_1, b_2, ..., b_k elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.

For positive integers b_i where b_1 + ... + b_k = n,

  H_n(1/n, ..., 1/n) = H_k(b_1/n, ..., b_k/n) + ∑_{i=1}^{k} (b_i/n) H_{b_i}(1/b_i, ..., 1/b_i).

Choosing k = n, b_1 = ... = b_n = 1, this implies that the entropy of a certain outcome is zero:

  H_1(1) = 0.

This implies that the efficiency of a source alphabet with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory).

Further properties

The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X:
• Adding or removing an event with probability zero does not contribute to the entropy:

  H_{n+1}(p_1, ..., p_n, 0) = H_n(p_1, ..., p_n).

• It can be confirmed using the Jensen inequality that

  H(X) = E[ log_b( 1/p(X) ) ] ≤ log_b( E[ 1/p(X) ] ) = log_b(n).

This maximal entropy of log_b(n) is effectively attained by a source alphabet having a uniform probability distribution: uncertainty is maximal when all possible events are equiprobable.
• The entropy or the amount of information revealed by evaluating (X,Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as

  H(X, Y) = H(Y) + H(X|Y) = H(X) + H(Y|X).

• If Y = f(X) where f is deterministic, then applying the previous formula to H(X, f(X)) yields H(X) + H(f(X)|X) = H(f(X)) + H(X|f(X)), and since H(f(X)|X) = 0 this gives H(f(X)) ≤ H(X); thus the entropy of a variable can only decrease when the latter is passed through a deterministic function.
• If X and Y are two independent experiments, then knowing the value of Y doesn't influence our knowledge of the value of X (since the two don't influence each other by independence):

  H(X|Y) = H(X).

• The entropy of two simultaneous events is no more than the sum of the entropies of each individual event, and they are equal if the two events are independent. More specifically, if X and Y are two random variables on the same probability space, and (X,Y) denotes their Cartesian product, then

  H(X, Y) ≤ H(X) + H(Y).

Proving this mathematically follows easily from the previous two properties of entropy.

Extending discrete entropy to the continuous case: differential entropy

The Shannon entropy is restricted to random variables taking discrete values. The formula

  h[f] = −∫ f(x) log f(x) dx,

where f denotes a probability density function on the real line, is analogous to the Shannon entropy and could thus be viewed as an extension of the Shannon entropy to the domain of real numbers. A precursor of this continuous entropy is the expression for the functional H in the H-theorem of Boltzmann.

This formula is usually referred to as the continuous entropy, or differential entropy. Although the analogy between both functions is suggestive, the following question must be set: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and thus corrections have been suggested, notably limiting density of discrete points.

To answer this question, we must establish a connection between the two functions: we wish to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by p_n. As we generalize to the continuous domain, we must make this width explicit.

To do this, start with a continuous function f discretized into bins of width Δ. By the mean-value theorem there exists a value x_i in each bin such that

  f(x_i) Δ = ∫_{iΔ}^{(i+1)Δ} f(x) dx,

and thus the integral of the function f can be approximated (in the Riemannian sense) by

  ∫_{−∞}^{∞} f(x) dx = lim_{Δ→0} ∑_{i=−∞}^{∞} f(x_i) Δ,

where this limit and "bin size goes to zero" are equivalent. We will denote

  H^Δ := −∑_{i=−∞}^{∞} f(x_i) Δ log ( f(x_i) Δ ),

and expanding the logarithm, we have

  H^Δ = −∑_{i=−∞}^{∞} f(x_i) Δ log f(x_i) − ∑_{i=−∞}^{∞} f(x_i) Δ log Δ.

As Δ → 0, we have

  ∑_{i=−∞}^{∞} f(x_i) Δ → ∫ f(x) dx = 1

and also

  ∑_{i=−∞}^{∞} f(x_i) Δ log f(x_i) → ∫ f(x) log f(x) dx.

But note that log Δ → −∞ as Δ → 0, therefore we need a special definition of the differential or continuous entropy:

  h[f] = lim_{Δ→0} ( H^Δ + log Δ ) = −∫ f(x) log f(x) dx,

which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for Δ → 0. Rather, it differs from the limit of the Shannon entropy by an infinite offset.

It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations.

Another useful measure of entropy for the continuous case is the relative entropy of a distribution, defined as the Kullback-Leibler divergence from the distribution to a reference measure m(x),

  D_KL(f ‖ m) = ∫ f(x) log ( f(x) / m(x) ) dx.

The relative entropy carries over directly from discrete to continuous distributions, is always positive or zero, and is invariant under co-ordinate reparameterizations.

Use in combinatorics

Entropy has become a useful quantity in combinatorics.

Loomis-Whitney inequality

A simple example of this is an alternate proof of the Loomis-Whitney inequality: for every subset A ⊆ Z^d, we have

  |A|^{d−1} ≤ ∏_{i=1}^{d} |P_i(A)|,

where P_i is the orthogonal projection in the ith coordinate, that is, P_i(x_1, ..., x_d) = (x_1, ..., x_{i−1}, x_{i+1}, ..., x_d).

The proof follows as a simple corollary of Shearer's inequality: if X_1, ..., X_d are random variables and S_1, ..., S_n are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then

  H[(X_1, ..., X_d)] ≤ (1/r) ∑_{i=1}^{n} H[(X_j)_{j∈S_i}],

where (X_j)_{j∈S_i} is the Cartesian product of the random variables X_j with indexes j in S_i (so the dimension of this vector is equal to the size of S_i).

We sketch how Loomis-Whitney follows from this: Indeed, let X be a uniformly distributed random variable with values in A and so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) H(X) = log |A|, where |A| denotes the cardinality of A. Let S_i = {1, 2, ..., i−1, i+1, ..., d}. The range of (X_j)_{j∈S_i} is contained in P_i(A) and hence H[(X_j)_{j∈S_i}] ≤ log |P_i(A)|. Now use this to bound the right side of Shearer's inequality and exponentiate the opposite sides of the resulting inequality you obtain.

Approximation to binomial coefficient

For integers 0 < k < n let q = k/n. Then

  2^{nH(q)} / (n+1) ≤ C(n,k) ≤ 2^{nH(q)},

where H(q) = −q log2(q) − (1−q) log2(1−q) and C(n,k) denotes the binomial coefficient "n choose k".[9]

Here is a sketch proof. Note that C(n,k) q^{qn} (1−q)^{n−qn} is one term of the expression

  ∑_{i=0}^{n} C(n,i) q^i (1−q)^{n−i} = (q + (1−q))^n = 1.

Rearranging gives the upper bound. For the lower bound one first shows, using some algebra, that it is the largest term in the summation. But then,

  C(n,k) q^{qn} (1−q)^{n−qn} ≥ 1/(n+1),

since there are n+1 terms in the summation. Rearranging gives the lower bound.

A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately 2^{nH(k/n)}.[10]
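A quick numerical check of these bounds, using Python's math.comb for the exact binomial coefficient (the particular n and k are arbitrary illustrative values):

import math

def H(q):
    return 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

n, k = 100, 30
exact = math.log2(math.comb(n, k))   # log2 of the binomial coefficient
upper = n * H(k / n)                 # upper bound 2**(n*H(q))
lower = upper - math.log2(n + 1)     # lower bound 2**(n*H(q)) / (n + 1)
print(lower, exact, upper)           # the exact value lies between the two bounds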

References
[1] Schneier, B: Applied Cryptography, Second edition, page 234. John Wiley and Sons.
[2] Shannon, Claude E.: Prediction and entropy of printed English, The Bell System Technical Journal, 30:50–64, January 1951.
[3] Schneider, T.D., Information theory primer with an appendix on logarithms (http://alum.mit.edu/www/toms/paper/primer/primer.pdf), National Cancer Institute, 14 April 2007.
[4] Jaynes, E.T. (May 1957). "Information Theory and Statistical Mechanics" (http://bayes.wustl.edu/etj/articles/theory.1.pdf). Physical Review 106 (4): 620–630. Bibcode 1957PhRv..106..620J. doi:10.1103/PhysRev.106.620.
[5] Compare: Boltzmann, Ludwig (1896, 1898). Vorlesungen über Gastheorie: 2 Volumes – Leipzig 1895/98 UB: O 5262-6. English version: Lectures on gas theory. Translated by Stephen G. Brush (1964) Berkeley: University of California Press; (1995) New York: Dover. ISBN 0-486-68455-5.
[6] Mark Nelson (2006-08-24). "The Hutter Prize" (http://marknelson.us/2006/08/24/the-hutter-prize/). Retrieved 2008-11-27.
[7] T. Schürmann and P. Grassberger, Entropy Estimation of Symbol Sequences (http://arxiv.org/abs/cond-mat/0203436), CHAOS, Vol. 6, No. 3 (1996) 414–427.
[8] T. Schürmann, Bias Analysis in Entropy Estimation (http://arxiv.org/abs/cond-mat/0403192). J. Phys. A: Math. Gen. 37 (2004) L295–L301.
[9] Aoki, New Approaches to Macroeconomic Modeling. page 43.
[10] Probability and Computing, M. Mitzenmacher and E. Upfal, Cambridge University Press.

This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.

External links
• Introduction to entropy and information (http://pespmc1.vub.ac.be/ENTRINFO.html) on Principia Cybernetica Web
• Entropy (http://www.mdpi.com/journal/entropy), an interdisciplinary journal on all aspects of the entropy concept. Open access.
• Information is not entropy, information is not uncertainty! (http://alum.mit.edu/www/toms/information.is.not.uncertainty.html) – a discussion of the use of the terms "information" and "entropy".
• I'm Confused: How Could Information Equal Entropy? (http://alum.mit.edu/www/toms/bionet.info-theory.faq.html#Information.Equal.Entropy) – a similar discussion on the bionet.info-theory FAQ.
• Description of information entropy from "Tools for Thought" by Howard Rheingold (http://www.rheingold.com/texts/tft/6.html)
• A java applet representing Shannon's Experiment to Calculate the Entropy of English (http://math.ucsd.edu/~crypto/java/ENTROPY/)
• Slides on information gain and entropy (http://www.autonlab.org/tutorials/infogain.html)

• An Intuitive Guide to the Concept of Entropy Arising in Various Sectors of Science (http://en.wikibooks.org/wiki/An_Intuitive_Guide_to_the_Concept_of_Entropy_Arising_in_Various_Sectors_of_Science) – a wikibook on the interpretation of the concept of entropy.
• Calculator for Shannon entropy estimation and interpretation (http://www.shannonentropy.netmark.pl)

Rényi entropy

In information theory, the Rényi entropy, a generalisation of Shannon entropy, is one of a family of functionals for quantifying the diversity, uncertainty or randomness of a system. It is named after Alfréd Rényi.

The Rényi entropy of order α is defined for α ≥ 0 and α ≠ 1 as

  H_α(X) = (1 / (1 − α)) log2 ( ∑_{i=1}^{n} p_i^α ),

where X is a discrete random variable, p_i is the probability of the event {X = x_i}, and the logarithm is base 2. If the probabilities are all the same then all the Rényi entropies of the distribution are equal, with H_α(X) = log n. Otherwise the entropies are weakly decreasing as a function of α.

Higher values of α, approaching infinity, give a Rényi entropy which is increasingly determined by consideration of only the highest probability events. Lower values of α, approaching zero, give a Rényi entropy which increasingly weights all possible events more equally, regardless of their probabilities. The intermediate case α=1 gives the Shannon entropy, which has special properties. When α=0, it is the logarithm of the size of the support of X.

The Rényi entropies are important in ecology and statistics as indices of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly by virtue of the fact that it is an automorphic function with respect to a particular subgroup of the modular group.[1][2]
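A small Python sketch of this definition (the function name, the handling of the α = 1 and α = ∞ limits, and the sample distribution are illustrative choices, not part of the article):

import math

def renyi_entropy(probs, alpha):
    # H_alpha(X) = (1/(1-alpha)) * log2( sum_i p_i**alpha ), with the usual limits.
    probs = [p for p in probs if p > 0]
    if alpha == 1:                      # Shannon entropy (limit alpha -> 1)
        return -sum(p * math.log2(p) for p in probs)
    if alpha == float("inf"):           # min-entropy (limit alpha -> infinity)
        return -math.log2(max(probs))
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

dist = [0.5, 0.25, 0.125, 0.125]
for a in (0, 0.5, 1, 2, float("inf")):
    print(a, renyi_entropy(dist, a))
# H_0 = 2 (log of the support size), H_1 = 1.75, H_2 ≈ 1.54, H_inf = 1: weakly decreasing in alpha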

Hα for some particular values of α

Some particular cases:

  H_0(X) = log n = log |X|,

which is the logarithm of the cardinality of X, sometimes called the Hartley entropy of X.

In the limit that α approaches 1, it can be shown using L'Hôpital's Rule that H_α converges to

  H_1(X) = −∑_{i=1}^{n} p_i log2 p_i,

which is the Shannon entropy.

Collision entropy, sometimes just called "Rényi entropy," refers to the case α = 2,

  H_2(X) = −log2 ∑_{i=1}^{n} p_i^2 = −log2 P(X = Y),

where Y is a random variable independent of X but identically distributed to X. As α → ∞, the limit exists as

  H_∞(X) = −log2 max_i p_i,

and this is called Min-entropy, because it is the smallest value of H_α.

Inequalities between different values of α

The two latter cases are related by H_∞ ≤ H_2 ≤ 2 H_∞. On the other hand the Shannon entropy can be arbitrarily high for a random variable X with fixed min-entropy.

H_∞ ≤ H_2 is because ∑_i p_i^2 ≤ (max_i p_i) ∑_i p_i = max_i p_i.

H_2 ≤ 2 H_∞ is because ∑_i p_i^2 ≥ (max_i p_i)^2.

H_2 ≤ H_1, since according to Jensen's inequality H_1 = E[−log2 p(X)] ≥ −log2 E[p(X)] = H_2.

Rényi divergence

As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.

The Rényi divergence of order α, where α > 0, from a distribution P to a distribution Q is defined to be:

  D_α(P ‖ Q) = (1 / (α − 1)) log ( ∑_i p_i^α q_i^{1−α} ).

Like the Kullback-Leibler divergence, the Rényi divergences are non-negative for α > 0. This divergence is also known as the alpha-divergence (α-divergence).

Some special cases:

  α = 0: minus the log probability under Q that p_i > 0;
  α = 1/2: minus twice the logarithm of the Bhattacharyya coefficient;
  α = 1: the Kullback-Leibler divergence;
  α = 2: the log of the expected ratio of the probabilities;
  α = ∞: the log of the maximum ratio of the probabilities.
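These special cases can be checked numerically; the following Python sketch evaluates the order-α divergence for a pair of small example distributions (all names and sample values here are illustrative choices):

import math

def renyi_divergence(p, q, alpha):
    # D_alpha(P || Q) = (1/(alpha-1)) * log2( sum_i p_i**alpha * q_i**(1-alpha) ).
    if alpha == 1:                                    # Kullback-Leibler divergence
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    if alpha == float("inf"):                         # log of the maximum ratio
        return math.log2(max(pi / qi for pi, qi in zip(p, q) if pi > 0))
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log2(s) / (alpha - 1)

P = [0.5, 0.3, 0.2]
Q = [0.4, 0.4, 0.2]
for a in (0, 0.5, 1, 2, float("inf")):
    print(a, renyi_divergence(P, Q, a))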

Why α = 1 is special

The value α = 1, which gives the Shannon entropy and the Kullback–Leibler divergence, is special because it is only when α = 1 that one can separate out variables A and X from a joint probability distribution, and write:

  H(A, X) = H(A) + H(X | A)

for the absolute entropies, and

  D_KL( p(x,a) ‖ m(x,a) ) = D_KL( p(a) ‖ m(a) ) + E_{p(a)}{ D_KL( p(x|a) ‖ m(x|a) ) }

for the relative entropies.

The latter in particular means that if we seek a distribution p(x,a) which minimizes the divergence of some underlying prior measure m(x,a), and we acquire new information which only affects the distribution of a, then the distribution of p(x|a) remains m(x|a), unchanged.

The other Rényi divergences satisfy the criteria of being positive and continuous; being invariant under 1-to-1 co-ordinate transformations; and of combining additively when A and X are independent, so that if p(A,X) = p(A)p(X), then

  H_α(A, X) = H_α(A) + H_α(X)

and

  D_α( p(A)p(X) ‖ m(A)m(X) ) = D_α( p(A) ‖ m(A) ) + D_α( p(X) ‖ m(X) ).

The stronger properties of the α = 1 quantities, which allow the definition of conditional information and mutual information from communication theory, may be very important in other applications, or entirely unimportant, depending on those applications' requirements.

Exponential families

The Rényi entropies and divergences for an exponential family admit simple closed-form expressions (Nielsen & Nock, 2011), involving a Jensen difference divergence.

Footnotes
[1] Its, A. R.; Korepin, V. E. (2010). "Generalized entropy of the Heisenberg spin chain" (http://www.springerlink.com/content/vn2qt54344320m2g/). Theoretical and Mathematical Physics (Springer) 164 (3): 1136–1139. doi:10.1007/s11232-010-0091-6. Retrieved 07-Mar-2012.
[2] Franchini, F.; Its, A. R.; Korepin, V. E. (2008). "Rényi entropy as a measure of entanglement in quantum spin chain" (http://arxiv.org/pdf/0707.2534). Journal of Physics A: Mathematical and Theoretical (IOPScience) 41 (025302). doi:10.1088/1751-8113/41/2/025302. Retrieved 07-Mar-2012.

References
• A. Rényi (1961). "On measures of information and entropy" (http://digitalassets.lib.berkeley.edu/math/ucb/text/math_s4_v1_article-27.pdf). Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability 1960. pp. 547–561.
• A. O. Hero, O. Michael and J. Gorman (2002). Alpha-divergences for Classification, Indexing and Retrieval (http://www.eecs.umich.edu/~hero/Preprints/cspl-328.pdf).
• F. Nielsen and S. Boltz (2010). "The Burbea-Rao and Bhattacharyya centroids". arXiv:1004.5049.
• Nielsen, Frank; Nock, Richard (2011). "On Rényi and Tsallis entropies and divergences for exponential families". arXiv:1105.3259.
• Rosso, O.A., "EEG analysis using wavelet-based information tools", Journal of Neuroscience Methods, 153 (2006) 163–182.
• T. van Erven (2010). When Data Compression and Statistics Disagree (Ph.D. thesis). hdl:1887/15879. Chapter 6.
• Frank Nielsen and Richard Nock (2012). "A closed-form expression for the Sharma-Mittal entropy of exponential families" (http://iopscience.iop.org/1751-8121/45/3/032003/). Journal of Physics A: Mathematical and Theoretical.

Hausdorff dimension

Estimating the Hausdorff dimension of the coast of Great Britain

In mathematics, the Hausdorff dimension (also known as the Hausdorff–Besicovitch dimension) is an extended non-negative real number associated with any metric space. The Hausdorff dimension generalizes the notion of the dimension of a real vector space. That is, the Hausdorff dimension of an n-dimensional inner product space equals n. This means, for example, the Hausdorff dimension of a point is zero, the Hausdorff dimension of a line is one, and the Hausdorff dimension of the plane is two. There are, however, many irregular sets that have noninteger Hausdorff dimension. The concept was introduced in 1918 by the mathematician Felix Hausdorff. Many of the technical developments used to compute the Hausdorff dimension for highly irregular sets were obtained by Abram Samoilovitch Besicovitch.

Sierpinski triangle. A space with fractal dimension log 3 / log 2, which is approximately 1.5849625

Intuition

The intuitive dimension of a geometric object is the number of independent parameters you need to pick out a unique point inside. But you can easily take a single real number, one parameter, and split its digits to make two real numbers. The example of a space-filling curve shows that you can even take one real number into two continuously, so that a one-dimensional object can completely fill up a higher dimensional object.

Every space filling curve hits every point many times, and does not have a continuous inverse. It is impossible to map two dimensions onto one in a way that is continuous and continuously invertible. The topological dimension explains why. The Lebesgue covering dimension is defined as the minimum number of overlaps that small open balls need to have in order to completely cover the object. When you try to cover a line by dropping intervals on it, you always end up covering some points twice. Covering a plane with disks, you end up covering some points three times, etc. The topological dimension tells you how many different little balls connect a given point to other points in the space, generically. It tells you how difficult it is to break a geometric object apart into pieces by removing slices.

But the topological dimension doesn't tell you anything about volumes. A curve which is almost space filling can still have topological dimension one, even if it fills up most of the area of a region. A fractal has an integer topological dimension, but in terms of the amount of space it takes up, it behaves as a higher dimensional space. The Hausdorff dimension defines the size notion of dimension, which requires a notion of radius, or metric.

Consider the number N(r) of balls of radius at most r required to cover X completely. When r is small, N(r) is large. If N(r) always grows as 1/r^d as r approaches zero, then X has Hausdorff dimension d. The precise definition requires that the dimension "d" so defined is a critical boundary between growth rates that are insufficient to cover the space, and growth rates that are overabundant.

For shapes that are smooth, or shapes with a small number of corners, the shapes of traditional geometry and science, the Hausdorff dimension is an integer. But Benoît Mandelbrot observed that fractals, sets with noninteger Hausdorff dimensions, are found everywhere in nature. He observed that the proper idealization of most rough shapes you see around you is not in terms of smooth idealized shapes, but in terms of fractal idealized shapes:

clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.[1]

The Hausdorff dimension is a successor to the less sophisticated but in practice very similar box-counting dimension or Minkowski–Bouligand dimension. This counts the squares of graph paper in which a point of X can be found as the size of the squares is made smaller and smaller. For fractals that occur in nature, the two notions coincide. The packing dimension is yet another similar notion. These notions (packing dimension, Hausdorff dimension, Minkowski–Bouligand dimension) all give the same value for many shapes, but there are well documented exceptions.
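As a rough numerical illustration of this box-counting idea (not part of the original article), the following Python sketch counts the boxes of side 1/3^k needed to cover a finite approximation of the middle-thirds Cantor set, whose dimension appears in the Examples below; the integer encoding and names are assumptions made for this sketch.

import math

# The middle-thirds Cantor set: numbers in [0, 1] whose base-3 digits are all 0 or 2.
# A depth-m approximation is encoded as integers n in [0, 3**m), with point = n / 3**m.
def cantor_codes(depth):
    codes = [0]
    for _ in range(depth):
        codes = [3 * c + d for c in codes for d in (0, 2)]
    return codes

depth = 10
codes = cantor_codes(depth)                      # 2**10 points of the approximation

# N(r): number of boxes of side r = 3**(-k) that contain at least one point.
for k in range(1, 8):
    boxes = {c // 3 ** (depth - k) for c in codes}
    n_r = len(boxes)
    print(k, n_r, math.log(n_r) / (k * math.log(3)))
# log N(r) / log(1/r) is log(2**k) / log(3**k) = log 2 / log 3 ≈ 0.6309 at every scale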

Formal definition

Let X be a metric space. If S ⊆ X and d ∈ [0, ∞), the d-dimensional Hausdorff content of S is defined by

  C_H^d(S) := inf { ∑_i r_i^d : there is a cover of S by balls with radii r_i > 0 }.

In other words, C_H^d(S) is the infimum of the set of numbers δ ≥ 0 such that there is some (indexed) collection of balls {B(x_i, r_i) : i ∈ I} covering S with r_i > 0 for each i ∈ I which satisfies ∑_{i∈I} r_i^d < δ. (Here, we use the standard convention that inf Ø = ∞.) The Hausdorff dimension of X is defined by

  dim_H(X) := inf { d ≥ 0 : C_H^d(X) = 0 }.

Equivalently, dim_H(X) may be defined as the infimum of the set of d ∈ [0, ∞) such that the d-dimensional Hausdorff measure of X is zero. This is the same as the supremum of the set of d ∈ [0, ∞) such that the d-dimensional Hausdorff measure of X is infinite (except that when this latter set of numbers is empty the Hausdorff dimension is zero).

Examples
• The Euclidean space R^n has Hausdorff dimension n.
• The circle S^1 has Hausdorff dimension 1.
• Countable sets have Hausdorff dimension 0.
• Fractals often are spaces whose Hausdorff dimension strictly exceeds the topological dimension. For example, the Cantor set (a zero-dimensional topological space) is a union of two copies of itself, each copy shrunk by a factor 1/3; this fact can be used to prove that its Hausdorff dimension is ln 2 / ln 3, which is approximately 0.63. The Sierpinski triangle is a union of three copies of itself, each copy shrunk by a factor of 1/2; this yields a Hausdorff dimension of ln 3 / ln 2, which is approximately 1.58.
• Space-filling curves like the Peano and the Sierpiński curve have the same Hausdorff dimension as the space they fill.

• The trajectory of Brownian motion in dimension 2 and above has Hausdorff dimension 2 almost surely.
• An early paper by Benoit Mandelbrot entitled How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension and subsequent work by other authors have claimed that the Hausdorff dimension of many coastlines can be estimated. Their results have varied from 1.02 for the coastline of South Africa to 1.25 for the west coast of Great Britain. However, 'fractal dimensions' of coastlines and many other natural phenomena are largely heuristic and cannot be regarded rigorously as a Hausdorff dimension. It is based on scaling properties of coastlines at a large range of scales; however, it does not include all arbitrarily small scales, where measurements would depend on atomic and sub-atomic structures, and are not well defined.
• The bond system of an amorphous solid changes its Hausdorff dimension from Euclidean 3 below glass transition temperature Tg (where the amorphous material is solid), to fractal 2.55±0.05 above Tg, where the amorphous material is liquid.[2]

Properties of Hausdorff dimension

Hausdorff dimension and inductive dimension

Let X be an arbitrary separable metric space. There is a topological notion of inductive dimension for X which is defined recursively. It is always an integer (or +∞) and is denoted dim_ind(X).

Theorem. Suppose X is non-empty. Then

  dim_Haus(X) ≥ dim_ind(X).

Moreover

  inf_Y dim_Haus(Y) = dim_ind(X),

where Y ranges over metric spaces homeomorphic to X. In other words, X and Y have the same underlying set of points and the metric d_Y of Y is topologically equivalent to d_X.

These results were originally established by Edward Szpilrajn (1907–1976). The treatment in Chapter VII of the Hurewicz and Wallman reference is particularly recommended.

Hausdorff dimension and Minkowski dimension

The Minkowski dimension is similar to the Hausdorff dimension, except that it is not associated with a measure. The Minkowski dimension of a set is at least as large as the Hausdorff dimension. In many situations, they are equal. However, the set of rational points in [0, 1] has Hausdorff dimension zero and Minkowski dimension one. There are also compact sets for which the Minkowski dimension is strictly larger than the Hausdorff dimension.

Hausdorff dimensions and Frostman measures

If there is a measure μ defined on Borel subsets of a metric space X such that μ(X) > 0 and μ(B(x, r)) ≤ r^s holds for some constant s > 0 and for every ball B(x, r) in X, then dim_Haus(X) ≥ s.

A partial converse is provided by Frostman's lemma. That article also discusses another useful characterization of the Hausdorff dimension.

Behaviour under unions and products

If X = ∪_{i∈I} X_i is a finite or countable union, then

  dim_Haus(X) = sup_{i∈I} dim_Haus(X_i).

This can be verified directly from the definition.

If X and Y are metric spaces, then the Hausdorff dimension of their product satisfies[3]

  dim_Haus(X × Y) ≥ dim_Haus(X) + dim_Haus(Y).

This inequality can be strict. It is possible to find two sets of dimension 0 whose product has dimension 1.[4] In the opposite direction, it is known that when X and Y are Borel subsets of R^n, the Hausdorff dimension of X × Y is bounded from above by the Hausdorff dimension of X plus the upper packing dimension of Y. These facts are discussed in Mattila (1995).

Self-similar sets

Many sets defined by a self-similarity condition have dimensions which can be determined explicitly. Roughly, a set E is self-similar if it is the fixed point of a set-valued transformation ψ, that is ψ(E) = E, although the exact definition is given below.

Theorem. Suppose

  ψ_i : R^n → R^n,  i = 1, ..., m,

are contractive mappings on R^n with contraction constant r_j < 1. Then there is a unique non-empty compact set A such that

  A = ∪_{i=1}^{m} ψ_i(A).

The theorem follows from Stefan Banach's contractive mapping fixed point theorem applied to the complete metric space of non-empty compact subsets of R^n with the Hausdorff distance.[5]

To determine the dimension of the self-similar set A (in certain cases), we need a technical condition called the open set condition on the sequence of contractions ψ_i, which is stated as follows: There is a relatively compact open set V such that

  ∪_{i=1}^{m} ψ_i(V) ⊆ V,

where the sets in the union on the left are pairwise disjoint.

Theorem. Suppose the open set condition holds and each ψ_i is a similitude, that is a composition of an isometry and a dilation around some point. Then the unique fixed point of ψ is a set whose Hausdorff dimension is s, where s is the unique solution of[6]

  ∑_{i=1}^{m} r_i^s = 1.

Note that the contraction coefficient of a similitude is the magnitude of the dilation.

We can use this theorem to compute the Hausdorff dimension of the Sierpinski triangle (sometimes called the Sierpinski gasket). Consider three non-collinear points a1, a2, a3 in the plane R² and let ψ_i be the dilation of ratio 1/2 around a_i. The unique non-empty fixed point of the corresponding mapping ψ is a Sierpinski gasket and the dimension s is the unique solution of

  (1/2)^s + (1/2)^s + (1/2)^s = 3 · (1/2)^s = 1.

Taking natural logarithms of both sides of the above equation, we can solve for s, that is:

  s = ln 3 / ln 2.
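More generally, the equation ∑_i r_i^s = 1 can be solved numerically when the contraction ratios are unequal; the following Python sketch (an illustrative bisection solver, not a method prescribed by the article) reproduces the Sierpinski and Cantor values.

import math

def similarity_dimension(ratios, tol=1e-12):
    # Solve sum_i r_i**s = 1 for s by bisection; assumes at least two maps with 0 < r_i < 1.
    def f(s):
        return sum(r ** s for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:            # enlarge the bracket until f changes sign
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(similarity_dimension([0.5, 0.5, 0.5]))   # Sierpinski gasket: ln 3 / ln 2 ≈ 1.585
print(math.log(3) / math.log(2))
print(similarity_dimension([1/3, 1/3]))        # Cantor set: ln 2 / ln 3 ≈ 0.631
print(similarity_dimension([0.5, 0.3]))        # unequal ratios, ≈ 0.75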

The Sierpinski gasket is self-similar. In general a set E which is a fixed point of a mapping

  ψ(A) = ∪_{i=1}^{m} ψ_i(A)

is self-similar if and only if the intersections satisfy

  H^s( ψ_i(E) ∩ ψ_j(E) ) = 0 for i ≠ j,

where s is the Hausdorff dimension of E and H^s denotes s-dimensional Hausdorff measure. This is clear in the case of the Sierpinski gasket (the intersections are just points), but is also true more generally:

Theorem. Under the same conditions as the previous theorem, the unique fixed point of ψ is self-similar.

The Hausdorff dimension theorem

The following theorem deals with the existence of fractals with given Hausdorff dimension in Euclidean spaces:[7]

Theorem. For any real r ≥ 0 and integer n ≥ r, there is a continuum of fractals with Hausdorff dimension r in n-dimensional Euclidean space.

Historical references
• A. S. Besicovitch (1929). "On Linear Sets of Points of Fractional Dimensions". Mathematische Annalen 101 (1): 161–193. doi:10.1007/BF01454831.
• A. S. Besicovitch; H. D. Ursell (1937). "Sets of Fractional Dimensions". Journal of the London Mathematical Society 12 (1): 18–25. doi:10.1112/jlms/s1-12.45.18.
Several selections from this volume are reprinted in Edgar, Gerald A. (1993). Classics on fractals. Boston: Addison-Wesley. ISBN 0-201-58701-7. See chapters 9, 10, 11.
• F. Hausdorff (March 1919). "Dimension und äußeres Maß". Mathematische Annalen 79 (1–2): 157–179. doi:10.1007/BF01457179.

Notes
[1] Mandelbrot, Benoît (1982). The Fractal Geometry of Nature. Lecture notes in mathematics 1358. W. H. Freeman. ISBN 0716711869.
[2] M. I. Ojovan, W. E. Lee (2006). "Topologically disordered systems at the glass transition" (http://eprints.whiterose.ac.uk/1958/). J. Phys.: Condensed Matter 18 (50): 11507–20. Bibcode 2006JPCM...1811507O. doi:10.1088/0953-8984/18/50/007.
[3] Marstrand, J. M. (1954). "The dimension of Cartesian product sets". Proc. Cambridge Philos. Soc. 50 (3): 198–202. doi:10.1017/S0305004100029236.
[4] Falconer, Kenneth J. (2003). Fractal geometry. Mathematical foundations and applications. John Wiley & Sons, Inc., Hoboken, New Jersey.
[5] Falconer, K. J. (1985). "Theorem 8.3". The Geometry of Fractal Sets. Cambridge, UK: Cambridge University Press. ISBN 0-521-25694-1.
[6] Tsang, K. Y. (1986). "Dimensionality of Strange Attractors Determined Analytically" (http://prl.aps.org/abstract/PRL/v57/i12/p1390_1). Phys. Rev. Lett. 57 (12): 1390–1393. doi:10.1103/PhysRevLett.57.1390. PMID 10033437.
[7] Soltanifar, M. (2006). On A Sequence of Cantor Fractals. Rose Hulman Undergraduate Mathematics Journal, Vol 7, No 1, paper 9.

References
• Dodson, M. Maurice; Kristensen, Simon (June 12, 2003). "Hausdorff Dimension and Diophantine Approximation". Fractal geometry and applications: a jubilee of Benoît Mandelbrot, Part 1, pp. 305–347. Proc. Sympos. Pure Math., 72, Amer. Math. Soc., Providence, RI. arXiv:math/0305399.
• Hurewicz, Witold; Wallman, Henry (1948). Dimension Theory. Princeton University Press.
• E. Szpilrajn (1937). "La dimension et la mesure". Fundamenta Mathematica 28: 81–9.
• Marstrand, J. M. (1954). "The dimension of cartesian product sets". Proc. Cambridge Philos. Soc. 50 (3): 198–202. doi:10.1017/S0305004100029236.
• Mattila, Pertti (1995). Geometry of sets and measures in Euclidean spaces. Cambridge University Press. ISBN 978-0-521-65595-8.


License
Creative Commons Attribution-Share Alike 3.0 Unported
//creativecommons.org/licenses/by-sa/3.0/

