  • THE INFLUENCES OF THE ACTIN CYTOSKELETON ON THE PROPERTIES OF

    THE NUCLEUS IN ADHERENT CELL: COMPARISON BETWEEN 2D AND 2.5D

    MASTER THESIS

    by

    Bayu Gautama Wundari

    Institute of Zoology

    Karlsruhe, 30.04.2014

    Examiner: Prof. Dr. Martin Bastmeyer
    Supervisor: Dr. Franco Weth

  • Declaration of Original Work

    I do hereby declare that the present thesis is original work by me alone and that I have indicated completely and precisely all aids used as well as all citations, whether changed or unchanged, of other theses and publications.

    Eigenständigkeitserklärung

    Ich versichere wahrheitsgemäß, die Arbeit selbstständig angefertigt, alle benutzten Hilfsmittel vollständig und genau angegeben und alles kenntlich gemacht zu haben, was aus Arbeiten anderer unverändert oder mit Abänderungen entnommen wurde.

    Karlsruhe, den 30.04.2014. Unterschrift / Signature:


  • Contents

    Abstract

    1 Introduction

    2 Theoretical Background
      2.1 Actin Stress Fibers
      2.2 Cell-Matrix Adhesion Contacts
      2.3 Microcontact Printing
      2.4 Image Formation
        2.4.1 Degradation Due to the Blurring Process
        2.4.2 Degradation Due to Noise
      2.5 Image Restoration
        2.5.1 Non-iterative Image Restoration
        2.5.2 Statistical Image Restoration
      2.6 Structure Tensor

    3 Materials and Methods
      3.1 Stamp Fabrication
      3.2 2D Environments
        3.2.1 Sample Preparations for 2D Substrates
        3.2.2 Cell Fixation and Staining
        3.2.3 Data Acquisition
        3.2.4 Image Processing for 2D Cases
      3.3 2.5D Environments
        3.3.1 Sample Preparations for 2.5D Substrates
        3.3.2 Cell Fixation and Staining
        3.3.3 Data Acquisition for 2.5D Cases
        3.3.4 Image Processing for 2.5D Cases

    4 Results and Data Analysis
      4.1 Cell Nucleus and Stress Fiber Orientation
      4.2 Cell Nucleus and Cell Body Area Measurements
      4.3 Cell Nucleus Elongation
      4.4 Cell Volume and Surface Area
      4.5 The Actin Cytoskeleton Supports the Cell Nucleus in 2.5D Environment
      4.6 Nucleus Indentation

    5 Discussion and Outlook

    6 Summary

    Bibliography

    Acknowledgment


  • Abstract

    The structure of the extracellular matrix (ECM) defines the cell shape, which in turn determines the life cycle of cells. Information from the cell's environment is transmitted to the nucleus by sensing the mechanical forces conveyed via the actin cytoskeleton. By defining the shape and the size of adhesive islands, the distribution of the actin cytoskeleton near the nucleus and the global distribution of the actin stress fibers appear to play a main role in defining the geometrical aspects of the nucleus. In particular, cross-sectional area, elongation, long-axis orientation, surface area and volume are investigated. Furthermore, the actin cytoskeleton turns out to be the mediator connecting all those geometrical aspects of the nucleus that affect the cell life cycle. The actin stress fiber orientation seems to affect the orientation of the long axis of the nucleus, which in turn guides the spindle axis orientation. It is also demonstrated that the actin stress fiber orientation gives a clue about the cell division axis on two-dimensional patterns. This thesis also compares 2-dimensional and 2.5-dimensional environments. In 2.5-dimensional environments, stress fibers orient along the adhesive structures and the actin cytoskeleton is distributed so as to support the nucleus floating in space. In addition, an indentation near the periphery of the nucleus is observed.



  • Chapter 1

    Introduction

    Cells care how big or small they are because the basic processes of cell physiology, such as flux across membranes, depend on cell size. Therefore, changes in volume and surface area directly affect the metabolic flux, biosynthetic capacity, and nutrient exchange [1]. It turns out that cell size and shape are defined by the geometry of the adhesive region. The cell can interpret the information of the adhesive pattern through the organization of its acto-myosin traction forces and pass this signal to the nucleus via the actin cytoskeleton [2]. Thus, the actin cytoskeleton is strongly coordinated with the cell nucleus. This is supported by many studies indicating that forces coming from the environment of the cell induce the regulation of the cell's gene expression [3]. The regulation of gene expression by mechanical transmission is likely to occur through a physical connection of proteins linking the nuclear lamina to chromatin (such as emerin [4]) and of other proteins that connect the nuclear lamina to the cytoskeleton (SUN-KASH bridges) [5]. Rather than osmotic pressure acting on the nucleus, it is the tension generated by the cytoskeleton that alters the shape of the nucleus and increases nucleocytoplasmic transport [6]; however, the nucleus-to-cell volume ratio (karyoplasmic ratio) has to remain constant for reasons that are unknown [7]. An altered physical conformation of the nucleus, or a disrupted volume ratio, is often associated with cancer cells. For example, the nuclei of epithelial cells enlarge dramatically and become hyperchromatic (they stain abnormally darkly with a contrast dye as a result of changes in their chromatin content) when they become cancerous [8]. Furthermore, nuclear volume changes might affect the concentration of nuclear proteins, DNA, and RNA, eventually disturbing the activities of RNA and DNA polymerases [9].

    In order to study the correlation between the actin cytoskeleton and the nucleus, it is important to design an experiment that can mimic the in situ environment of cells. In situ, within organs or tissues, cells behave totally differently than in classical culture conditions. The cell microenvironment, for instance the extracellular matrix (ECM) and the neighbouring cells, provides a variety of cues (represented in red in figure 1.1) for cell morphogenesis, ranging from geometrical constraints to biochemical signalling and mechanical resistance.

    However, when cells are cultured on flat, homogeneous and isotropic substrates, they completely lose this information and undergo a large number of random shape deformations, since they actively assemble and disassemble their cytoskeleton during migration. For this reason, micropatterning techniques are used to engineer environments that mimic specific



    Figure 1.1: In situ vs in vitro cell microenvironment [10].

    microenvironmental cues. One important aspect of the physiological ECM network is that it is fibrillar and heterogeneous, i.e. the ECM network does not completely surround the cells; consequently there will be some free contact surfaces on cells. These circumstances can be closely mimicked by using micropatterns of various shapes, like a V shape with a variety of angles [10]. Cells plated on micropatterned substrates precisely adapt their cytoskeletal architecture to the geometry of their microenvironment by remodelling their actin and microtubule networks, which further impacts cell migration, growth, and differentiation [2, 10]. It has been demonstrated that the geometry of the ECM plays many essential roles in the morphogenesis of cells: in determining cell polarity, actin cytoskeleton distribution, cell shape and spindle positioning [11, 2, 12].

    As outlined above, the cell shape is determined by the adhesive pattern to which the cell adheres, which the cell interprets through its actin cytoskeleton. One response cells give to changes in this pattern is to alter the spatial coordination of the cell-division axis with cellular polarity and/or with the position of neighbouring cells, which is crucial for embryonic development, organogenesis and tissue homeostasis [13]. The former understanding of the cell division axis is based on Hertwig's rule, stating that cells tend to divide along their long axis [14, 15]. However, the recent understanding of cell division is based on cortical cues which, together with the retraction fibers, have been shown to be able to guide spindle orientation [12]. These cortical cues are connected with the location of the adhesive sites.

    The discussion above shows how close the connection between the actin cytoskeleton and the nucleus is. However, it is also important to show that the physical connection between them alters the geometrical aspects of the nucleus in terms of cross-sectional area, elongation, long-axis orientation, surface area and volume. Therefore, the main goal of this thesis is to show the changes of the nucleus geometry by comparing cells cultured in two-dimensional (2D) and 2.5-dimensional (2.5D) environments and to explain the effects on the cell life cycle.

  • In chapter 2 of this thesis, the principles of the actin stress fibers and cell-matrix contacts are introduced, as they are the main cell components that are measured. Furthermore, the soft lithography and image processing methods that are used for the quantification are explained: the structure tensor and image restoration. Chapter 3 contains the materials and methods which were used for the experiments and the image processing tasks. The results and the data analysis are given in chapter 4. Chapter 5 discusses the results and gives an outlook. Finally, chapter 6 summarizes the thesis.


  • Chapter 2

    Theoretical Background

    The actin cytoskeleton plays a key role in supporting the life of animal cells. It maintains a large number of cellular processes within the cell, including polarity establishment, morphogenesis, and motility. The actin cytoskeleton also acts as a mediator that helps the cell communicate with its environment, changing its mechanical properties to carry the mechanical forces coming from the environment to the cell nucleus. To modulate their mechanical properties, actin filaments can organize into a variety of architectures, generating a diversity of cellular organizations including branched or crosslinked networks in the lamellipodium, parallel bundles in filopodia, and antiparallel structures in contractile fibers [16]. In this thesis, the actin distribution and the stress fiber orientation are considered to be the main actors that link the cell environment to the nucleus. This connection is described by geometric changes of the nucleus such as elongation, long-axis orientation, area, volume, and surface area.

    2.1 Actin Stress Fibers

    As one of the actin cytoskeleton architectures, actin stress fibers are the major mediators of cell contraction. They are composed of bundles of 10-30 actin filaments [17]. The bundles are tightened with the help of actin-crosslinking proteins like α-actinin, fascin, espin, and filamin [18]. α-actinin crosslinking proteins hold the actin filaments in parallel arrays, as displayed in figure 2.1a.

    In order to enable the sliding contraction between each block of bundled actin filaments, each block must have opposite polarity to its successive block, so that the myosin motors which are interconnected within the bundles can slide between the bundles to cause contraction or retraction. This polarity is represented in terms of a plus (barbed) end and a minus (pointed) end, and the myosin motors move towards the barbed end. The distribution of polarity can be uniform or random [17, 18].

    Furthermore, the actin stress fibers also play an important role in mechanosensing. Contractile forces generated by stress fibers regulate the assembly and dynamics of focal adhesions. Cells assemble stress fibers only when they encounter mechanical stress (force). Most animal cell types form stress fibers and focal adhesions which are aligned along the major cell axis when they are grown on a rigid matrix. When they are grown on a compliant matrix, focal adhesions are smaller and stress fibers are poorly aligned. This is demonstrated in figure 2.2. Cells on soft



    (a) Models of stress fibers structure and contractility [18]

    (b) Model of stress fibers formation [18]

    (c) Types of stress fibers in cultured animal cells [19]

    Figure 2.1: Actin stress fibers structures

    substrates have a diffuse cytoskeleton, composed of near-random arrangements of actin filaments. In contrast, cells on stiff substrates contain many stress fibers, aggregations of actin and other proteins that slowly contract under the influence of nonmuscle myosin II [19].


    Figure 2.2: Effect of substrate stiffness on stress fibers. Left: fibroblast cells plated on a rigid (2 MPa) substrate display thick and well-aligned stress fibers. Right: cells plated on a compliant (5 kPa) substrate display thinner and poorly oriented stress fibers [19].

    Formation of Actin Stress Fibers

    Actin stress fibers form at least four different categories according to their morphology and association with focal adhesions, shown briefly in figure 2.1c [19]:

    1. Dorsal stress fibers
       Dorsal stress fibers are anchored to focal adhesions at their distal ends. They typically do not contain myosin II, and therefore cannot contract. However, dorsal stress fibers appear to serve as a platform for the assembly of other types of stress fibers, as well as to link them to focal adhesions.

    2. Transverse arc stress fibers
       These stress fibers are curved bundles displaying a periodic α-actinin–myosin pattern, which is typical of contractile actomyosin bundles. They do not attach directly to the surrounding environment, but they can transfer contractile force to the connected dorsal stress fibers, and therefore forces can still be transferred to the surroundings.

    3. Ventral stress fibers
       The most distinguishing characteristic of these fibers is that they attach to focal adhesions at both ends. Ventral stress fibers are often located at the posterior part of cells, where occasional contraction cycles promote rear constriction and facilitate cell movement.

    4. The perinuclear cap stress fibers
       These stress fibers are located above the nucleus and regulate the shape of the nucleus in interphase cells. They also convey force from the environment to the nucleus, thereby acting as mechanotransducers. Certain stress fibers also connect to the nucleus through specific membrane proteins to stabilize the position of the nucleus and to regulate cell movements.

    2.2 Cell-Matrix Adhesion Contacts

    Cells can attach to their ECM with the help of matrix adhesion contacts on their cell surface, which contain integrin receptors as their main transmembrane proteins. The assembly of


    cell-matrix adhesions is initiated at protruding cell edges, and adhesion to the ECM is initiated within lamellipodia or filopodia at the cell periphery [20]. The outer parts of the matrix contacts (extracellular domains) bind to specific ECM components (like fibronectin) with the help of distinct integrin heterodimers (like αvβ1), whereas the inner parts (cytoplasmic tails) connect to the F-actin cytoskeleton with the help of adaptor proteins (figure 2.3a). The physical connection of the F-actin cytoskeleton with the ECM regulates the adhesion by dynamic assemblies of structural and signalling proteins that couple the cytoskeleton and the ECM [21].

    There are four phases of matrix contact maturation that can be distinguished, namely nascent adhesions, focal complexes, focal adhesions (or focal contacts), and fibrillar adhesions (figure 2.4). Adhesion of a protruding cell edge is initiated by the nascent adhesions, whose size is smaller than 0.25 µm. A subpopulation of the nascent adhesions disassembles within minutes as the leading edge advances, whereas the remainder grow and mature into focal complexes, whose size is about 5 µm, and mature further into focal adhesions [21].

    The characteristics of focal complexes are described by their size which is small (


    (a) Formation of focal complexes (b) Formation of focal adhesions (or focal contacts) which mature from focal complexes

    (c) Fibrillar adhesions are pulled out from focal contacts in the process of extracellular fibronectin matrix assembly

    Figure 2.3: Schematic representation of matrix adhesion contacts commonly found in fibroblasts cultured on 2D substrates. Nascent adhesions are not shown here [22].

    Figure 2.4: Schematic diagram of morphological phases of adhesion maturation. Curved black arrows show the adhesion turnover after a certain amount of time [21].

    the unmasked sites results in focal adhesion growth and reinforces the linkage between the ECM and the cytoskeleton [21].


    (a) Side view schematic. (1) The lamellipodium (LP) drives nascent adhesion assembly at the leading cell edge. (2) Nascent adhesions mature by recruiting adaptor proteins (such as talin and kindlin, which enhance the linkage strength to the F-actin) and ECM molecules (which increase the size of the adhesion spots). (3) The adhesion spots mature further, which occurs within the lamella (LM).

    (b) Top view schematic. TA denotes transverse arcs, which are parallel to the cell edge and the LP-LM boundary. SF denotes stress fibers, which are anchored in focal adhesions and perpendicular to the cell edge.

    Figure 2.5: Schematic diagram of adhesion assembly stages [20].

    2.3 Microcontact Printing

    Microcontact printing (µCP) is one of the best methods to create micrometer- or submicrometer-sized patterns on a variety of substrates [30]. It is done by coating a molecular ink onto an elastomeric stamp bearing a raised relief pattern. By bringing the raised areas of the stamp into contact with the substrate surface, the ink is transferred from the stamp to the substrate, where it forms self-assembled monolayers (SAM).

    According to [30], there are three kinds of stamp deformations that cause the most undesired consequences:

    1. Lateral sticking of high aspect ratio plates
       This kind of deformation occurs when the capillary forces due to liquids retained on the plates are strong enough to cause them to come into contact. Once contact occurs, the plates may stick to each other as a result of surface adhesive forces.


    2. Buckling of high aspect ratio plates
       Buckling might happen if the aspect ratio (h/2a) is too large, so that the plates collapse when loaded or even under their own weight.

    3. Roof collapse of low aspect ratio plates (raised regions of the stamp that carry the pattern)
       Roof collapse might happen if the aspect ratio is too small, so that the recessed area of the stamp deforms and makes contact with the substrate surface.

    (a) Stamp with rectangular cross section

    (b) Buckling (c) Lateral Collapse (d) Roof Collapse

    Figure 2.6: Micropattern stamp and its failure modes [30]


    2.4 Image Formation

    Generally, people are familiar with an image as a recorded picture captured by an optical imaging system such as a camera, a microscope, or a telescope. That recorded picture contains information about the physical object being observed. Formally speaking, an image can therefore be defined as a signal carrying information about a physical object [31]. The image received by any imaging system (a microscope in particular), however, does not convey the whole object and all the original information contained in it, due to the diffraction limit. Moreover, there is another strong factor adding to the degradation: noise. The degradation can thus be attributed to two main sources, as described in [31]:

    1. Degradation because of the process of image formation
       The associated degradation is called blurring.


    2. Degradation because of the process of image recording
       The associated degradation is called noise.

    Based on the degradation definitions above, the image formation path, from the object emitting the signal until it is recorded, is depicted in figure 2.7. By definition, the image is considered a noise-free (but blurred) image after passing through the imaging system. Then, after being recorded by the recording system, noise is added to the image signal, resulting in a noisy image.

    In this thesis, the imaging is considered incoherent, because in fluorescence microscopy no fixed phase relationship exists among the fluorescent molecules that compose the object [32].

    Figure 2.7: Image formation pathway from the object to the recorded image [31].

    2.4.1 Degradation Due to the Blurring Process

    As outlined previously, blurring is a degradation due to the diffraction limit (the frequency-limited nature) of the imaging system, which cannot collect all the information contained in the object. It is also important to point out that the blurring process happens during the imaging process, before detection by the sensor. Hence the formed image is a noiseless image.

    It turns out that the diffraction limit is not the only cause of degradation. Other degradations, such as motion occurring while the image is captured, also contribute; further examples are aberrations produced by the lens and atmospheric disturbances. In order to simplify the problem,


    the only degradation considered in this thesis is that due to the diffraction limit, which can be modelled by the point spread function (PSF) of the optical system (in this case, the microscope). The lens is also assumed to be free of manufacturing defects. Motion blurring can be neglected since the microscope is firmly fixed to its stage, and atmospheric disturbances can be neglected because the temperature is relatively constant and the distance between the object and the optical system is relatively short.

    The diffraction limit cannot be avoided, even if the lenses of the optical system were produced so perfectly that no aberration takes place. The diffraction limit arises from the failure of the optical system to collect all the frequency information contained in the object. In order to understand which frequency information the object contains, it is helpful to turn to Fourier optics, in which the signal is analyzed in the frequency domain. From this point of view, an image is considered a superposition of harmonic functions of different frequencies.

    To understand Fourier optics, one should start from the Fourier series. According to the Fourier theorem, any periodic signal can be expressed as a weighted linear combination of harmonic functions (sines and cosines in particular) of different frequencies. Non-periodic signals, however, can be expressed as an integral of sines and cosines multiplied by a weighting function, as long as the area under the curve of the signal is finite. The Fourier series is given mathematically by equation 2.1.

    g(x) = \sum_{n=0}^{\infty} a_n \cos\left(\frac{2\pi n x}{\lambda}\right) + \sum_{n=0}^{\infty} b_n \sin\left(\frac{2\pi n x}{\lambda}\right) = \sum_{n=0}^{\infty} a_n \cos(k_n x) + \sum_{n=0}^{\infty} b_n \sin(k_n x)    (2.1)

    where

    g(x) = signal in the spatial domain
    \lambda = spatial period
    k_n = spatial frequency = \frac{2\pi n}{\lambda}
    a_n = Fourier coefficient = \frac{2}{\lambda} \int_{-\lambda/2}^{\lambda/2} g(x) \cos(k_n x)\, dx
    b_n = Fourier coefficient = \frac{2}{\lambda} \int_{-\lambda/2}^{\lambda/2} g(x) \sin(k_n x)\, dx

    Consider as a simple example the expansion of a unit-step function into a Fourier series, depicted in figure 2.8. Notice that as the expansion takes more and more terms into account, the expanded signal approaches the original signal. It can be explicitly stated that the expanded signal


    which has more terms contains more high-frequency information than one with fewer terms. Signals with fewer terms contain only the lower-frequency information of the original signal.

    Figure 2.8: Expansion of a unit-step function using Fourier series [33].
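    The effect of truncating the series can also be sketched numerically. The snippet below is an illustration of mine, not code from the thesis: it sums the odd harmonics of a unit-amplitude square wave (a periodic relative of the step function in figure 2.8) and evaluates the partial sums at the centre of a plateau, where the true value is 1. With more terms, i.e. more high-frequency content, the partial sum converges towards the original signal.

```python
import numpy as np

def square_wave_partial_sum(x, n_terms, period=2 * np.pi):
    """Partial Fourier sum of a unit-amplitude square wave.

    Only odd harmonics contribute; their coefficients are b_n = 4 / (pi * n).
    More terms add higher spatial frequencies and sharpen the edges.
    """
    k = 2 * np.pi / period
    g = np.zeros_like(np.asarray(x, dtype=float))
    for n in range(1, 2 * n_terms, 2):  # odd harmonics 1, 3, 5, ...
        g += (4.0 / (np.pi * n)) * np.sin(n * k * x)
    return g

# Evaluate at the plateau centre x = pi/2, where the square wave equals 1.
for n_terms in (1, 5, 50):
    value = square_wave_partial_sum(np.array([np.pi / 2]), n_terms)[0]
    print(f"{n_terms:3d} terms: g(pi/2) = {value:.4f}")
```

    The single-term sum overshoots noticeably, while 50 terms land within about a percent of the plateau; the residual wiggle near the jumps (the Gibbs phenomenon) never vanishes entirely, mirroring the edge artifacts discussed for band-limited images.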

    A lens actually performs a Fourier transform of the object to be imaged, and one can observe the image with the help of an objective lens, which performs the inverse transform. Images with higher and lower frequency components are compared in figure 2.9. From that comparison, one can immediately distinguish the differences between the high-pass and low-pass filtered images. When the image is low-pass filtered, only the lower frequency components of the object are allowed to pass the imaging system. The lower frequency components correspond to areas with little variation in intensity (like the background). This results in a blurry image, indicating that the finer structures are lost and therefore the resolution is decreased. In the high-pass filtered image, the finer details look more prominent (high contrast), but halos are observed around the edges.
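    Low-pass and high-pass filtering of the kind shown in figure 2.9 can be imitated with a discrete Fourier transform and an ideal circular frequency mask. The sketch below is illustrative only (the function name, cutoff value, and synthetic test image are my own choices, not taken from the thesis):

```python
import numpy as np

def frequency_filter(image, cutoff, mode="low"):
    """Filter an image in the frequency domain with an ideal circular mask.

    cutoff is the mask radius as a fraction of the half-bandwidth;
    mode 'low' keeps frequencies inside the circle (blurring),
    mode 'high' keeps those outside (edge enhancement with halos).
    """
    F = np.fft.fftshift(np.fft.fft2(image))       # DC component moved to centre
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    mask = r <= cutoff if mode == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# A synthetic "cell": a bright disc on a dark background.
h = w = 128
yy, xx = np.mgrid[0:h, 0:w]
img = (np.hypot(yy - 64, xx - 64) < 30).astype(float)

low = frequency_filter(img, 0.1, "low")    # smooth, edges washed out
high = frequency_filter(img, 0.1, "high")  # mostly edges survive
print("max edge step, original vs low-pass:",
      np.abs(np.diff(img[64])).max(), np.abs(np.diff(low[64])).max())
```

    The low-pass result keeps the mean intensity (the DC component lies inside the mask) but flattens the sharp rim of the disc, while the high-pass result has zero mean and retains mainly the rim, with the ringing halos typical of an ideal mask.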

    The description of images in the frequency domain has now been sketched briefly. It remains to give the reason why the image is degraded. Recall from the previous section that the blurring process is defined as the degradation that occurs during image formation in the optical imaging system, before recording, and is caused mainly by the limitation of the imaging system in collecting all the information contained in the object. This limitation, in turn, is caused by the finite size of the entrance pupil of the imaging system (a spatial limitation), which cannot admit the entire incoming optical wave. This situation is explained beautifully by Goodman [35] (see figure 2.10) and is repeated briefly here.


    (a) Original image without frequency component distortion

    (b) Low-pass filtered image

    (c) High-pass filtered image (d) Band-pass filtered image

    Figure 2.9: Image comparison showing how the image looks when the frequency components are distorted [34]

    Figure 2.10: A generalized model of an imaging system [35].

    The imaging system in figure 2.10 is assumed to be able to produce real images in space. The black box in the system is a lumped system containing a set of lenses. The most important parts of the black box are the terminal input and output, represented as the entrance and exit pupils. To represent the imaging system as a black box, one only needs to know the mapping between an input distribution (the object) and an output distribution (the image of the object), neglecting the complexity of how the imaging system is built.

    Diffraction comes into play in the passage of light through the black box. Because the pupils are limited in size, some information about the object cannot be transferred completely to the image plane (the u-v plane in figure 2.10). This is the main reason for the diffraction limit, and it degrades the resolution of the output.

  • 24 Chapter 2. Theoretical Background

    Ernst Abbe (1873) explained this limited resolution during his studies of coherent imaging with a microscope, depicted in figure 2.11 for the case of an object that is a grating with several orders. He gave his famous formula connecting the wavelength \lambda and the numerical aperture NA (equation 2.2).

    d = \frac{\lambda}{2\,\mathrm{NA}}    (2.2)

    Figure 2.11: Diffraction limit according to Abbe's theory [35].

    Figure 2.11 shows that, due to the limited size of the lens, not all diffraction orders can be collected by the lens. The uncollected information lies in the higher orders of the diffraction pattern and corresponds to the high-frequency components of the object.
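    As a quick numerical illustration of equation 2.2, the sketch below evaluates the Abbe limit for a few common fluorescence excitation wavelengths and a high-NA oil-immersion objective. The specific wavelengths and the NA of 1.4 are generic example values of my choosing, not parameters from this thesis.

```python
def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable spacing d = lambda / (2 NA), per equation 2.2."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Illustrative laser lines (nm) and an oil-immersion objective (NA = 1.4).
for wavelength in (488.0, 561.0, 640.0):
    d = abbe_limit(wavelength, numerical_aperture=1.4)
    print(f"lambda = {wavelength:5.0f} nm -> d = {d:6.1f} nm")
```

    The resolvable spacing stays well below the micrometer scale of cellular structures but far above molecular dimensions, which is why sub-diffraction detail is lost in conventional fluorescence images.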

    This limitation also plays a role in biological imaging. The sizes of biological samples are in the range of micrometers and nanometers; therefore diffraction also plays a role during the imaging of the sample. Accordingly, the limited resolution can be explained in the same way as before, and it causes the image formed by an imaging system to look blurry before being recorded by the recording system (see figure 2.7).

    How blurry the image formed by the imaging system is, is determined by the so-called point spread function (PSF). This function describes how a point object in the object plane will appear in the image plane; thus, it specifies the behaviour of the imaging system. Additionally, the image will also be affected by noise. An example of a PSF is given in figure 2.12. The lateral extent of the PSF is around three times smaller than the axial extent, meaning that the lateral resolution is roughly three times better than the axial one.

    The behaviour of the imaging system in describing the object is mathematically given in equation


    Figure 2.12: PSF of a 200 nm bead taken with a Confocal Laser Scanning Microscope (LSM510, Zeiss) at KIT, Institute of Zoology. The protocol for the PSF measurement is based on [36].

    2.3, and a schematic representation is given in figure 2.13.

    g(x, y) = \iint f(x', y')\, h(x, y; x', y')\, dx'\, dy'    (2.3)

    where
    g(x, y) = image function in the image space (x, y)
    h(x, y; x', y') = point spread function (PSF)
    f(x', y') = object function in the object space (x', y')

    Figure 2.13: A model of the image degradation and restoration process (adapted from [37]).

    Equation 2.3 states that the response g(x, y) of the imaging system in the image plane (x, y) is given by a weighted sum over all input points described by f(x', y') in the object plane. It means that the output located at (x, y) in the image plane is affected by all points

  • 26 Chapter 2. Theoretical Background

    in the object plane weighted with the PSF h(x, y;x, y). In describing the image of the object,most imaging systems obey 2 important rules which are of primary importance in simplifying theimaging process both analytically and numerically. Those rules are linearity and shift-invariance(or space-invariance) rules. Under those two rules, then the imaging equation 2.3 becomes:

g(x, y) = ∫∫ f(x′, y′) h(x − x′, y − y′) dx′ dy′   (2.4)

It is clear from equation 2.4 that a large spread of the PSF contributes to severe blurring of the image, since many points in the object plane then contribute to one point in the image plane. Consequently, the resolution of the image is reduced. Besides blurring, other factors contributing to the degradation are the background originating from autofluorescence, scattering, and offsets in the detector gain.
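To make the convolution model concrete, the following sketch blurs a point object with a sampled Gaussian PSF via the FFT. It is a minimal illustration of equation 2.4, not the processing pipeline of this thesis; the 64 × 64 grid, the 15-pixel kernel and σ = 2 pixels are arbitrary assumptions.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Sampled 2D Gaussian PSF, normalized to unit sum (flux conservation)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return h / h.sum()

def blur(obj, psf):
    """Convolve object and PSF via the FFT (circular boundary conditions);
    this implements g = f * h from equation 2.4, up to a cyclic shift."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf, s=obj.shape)))

# Hypothetical object: a single point source ("bead") in a dark field.
f = np.zeros((64, 64))
f[32, 32] = 1.0
g = blur(f, gaussian_psf(15, 2.0))
# The point is spread out: the total flux is preserved, the peak value drops.
```

Because the PSF is normalized, the blurred image keeps the total flux of the object while the peak intensity is spread over many pixels, which is exactly the loss of resolution described above.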

    2.4.2 Degradation Due to Noise

The noise discussed in this section is caused by the photodetectors (the camera of the microscope). Noise obviously reduces the quality of the detected signal; therefore, a good model of the noise is an important step towards improving the quality of the observed image. In terms of physical nature, there are two types of noise: quantum noise (or shot noise) and thermal noise. Quantum noise results from the statistical nature of quantum events, due to the particle character of photons. Thermal noise, also known as Johnson noise or Nyquist noise in electronics and photonics, is the consequence of thermal fluctuations and is directly associated with thermal radiation [38]. Quantum noise is the one of importance in this thesis. Thus, equation 2.4 can be further refined:

g(x, y) = ∫∫ f(x′, y′) h(x, y; x′, y′) dx′ dy′ + n(x, y)   (2.5)

where
n(x, y) = noise function

The photons emitted by the object are not distributed uniformly in time but arrive at the detector at random times. This random behaviour causes the optical signal, and hence the number of photons received in a given time interval, to fluctuate around an average value; these fluctuations are characterized by Poisson statistics. Therefore, the first noise contribution to the observed image is due to the radiation emitted by the object being a Poisson process. The Poisson distribution is given by equation 2.6. For low intensities, the expected photon count is small and the relative fluctuation is therefore larger, causing the intensity to vary noticeably from one pixel to the next.

p(n) = e^{−λ} λ^n / n!   (2.6)

where
λ = expected value of the count
n = number of particles
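The scaling of shot noise can be checked numerically. The sketch below draws Poisson counts for a dim and a bright hypothetical pixel (the expected counts 10 and 10000 are arbitrary choices, not measurements) and compares their relative fluctuations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expected photon counts for a dim and a bright pixel.
lam_dim, lam_bright = 10.0, 10000.0
dim = rng.poisson(lam_dim, size=100_000)
bright = rng.poisson(lam_bright, size=100_000)

# For a Poisson random variable the variance equals the mean, so the
# relative fluctuation sigma/mean is 1/sqrt(lambda): dim pixels are
# relatively much noisier than bright ones.
rel_dim = dim.std() / dim.mean()          # ~ 1/sqrt(10)
rel_bright = bright.std() / bright.mean() # ~ 1/sqrt(10000)
```

This is why pixel-to-pixel intensity fluctuations are most visible in the dark regions of a fluorescence image.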


The second contribution comes from the background emission due to the autofluorescence of the medium embedding the sample. This emission again follows a Poisson distribution [39]. Other sources of noise, for instance dark current, electronic noise, and reflections of the excitation light, also follow Poisson distributions [40].

    2.5 Image Restoration

Because degradation due to blurring and noise affects the accuracy of quantitative measurements of the image, the distortion of the image signal needs to be corrected. Image restoration refers to the attempt to provide solutions to these problems.

Coming back to the imaging equation 2.4: provided that the PSF and the statistical characteristics of the noise are known, the true object function f(x, y) can in principle be reconstructed. This approach is also known as deconvolution, although deconvolution can be regarded as a subfield of image restoration; deconvolution refers more specifically to techniques that invert the blurring process in a deterministic way [32].

Formally speaking, deconvolution is a computational approach, based on objective criteria, to improve the quality of an image through knowledge of the physical processes that lead to image formation in the focal plane [41, 33, 42]. The concept is to reconstruct the degraded observed image by using a priori knowledge of how the image was degraded during image formation, and then to apply an inverse process (hence the name de-convolution) to recover the original object [43]. The attempt to find the estimated object from the observed image is also known as an inverse problem.

An inverse problem is the opposite of a forward problem. Consider, for example, Newtonian classical mechanics, where the force applied to an object equals its mass multiplied by its acceleration (F = ma). In the forward problem, the mass m and the acceleration a of the object are given, so the force causing the motion follows directly. In the inverse problem, only the force is given, and the task is to estimate the mass m and the acceleration a of the object. Consider another example, the black box of the imaging system mentioned before. In the forward problem, the true object is given and is imaged by the black box, which generates an image according to the parameters of the black box. In the inverse problem, the output image and the parameters of the black box are known, but the input object has to be found. This task is obviously more difficult than the forward problem, because there might be no solution at all.

A convenient way to do the inverse calculation is to Fourier-transform the imaging equation, so that the integral can be treated with simple arithmetic (this is, however, a naive approach, since it cannot be applied over the whole frequency range). One purpose of the Fourier transform here is to simplify the calculation by moving it into the frequency domain. Putting this together mathematically, the imaging equation in the frequency domain is given by equation 2.7.

G(kx, ky) = H(kx, ky) F(kx, ky) + N(kx, ky)   (2.7a)
G(kx, ky) = G0(kx, ky) + N(kx, ky)   (2.7b)
G0(kx, ky) = H(kx, ky) F(kx, ky)   (2.7c)

where
G(kx, ky) = Fourier transform of the recorded image
F(kx, ky) = Fourier transform of the true object
H(kx, ky) = Fourier transform of the PSF (optical transfer function / OTF)
G0(kx, ky) = noiseless image
N(kx, ky) = Fourier transform of the noise

The band-limited nature of the imaging system is described by the OTF H(kx, ky), which is zero outside a limited frequency domain. This frequency band is therefore called the bandwidth of the imaging system, and its boundary the cut-off frequency of the optical system. Given that the OTF is band-limited, the noiseless image G0(kx, ky) is obviously band-limited as well. This property implies that the imaging system behaves like a low-pass filter, meaning that it only allows low-frequency components of the object to be transmitted to the image plane.

Noise, in contrast, is not band-limited: it exists over the whole frequency range. Consequently, although the noiseless image is band-limited, the observed image need not be band-limited, because of the added noise term in equation 2.7b.

2.5.1 Non-iterative Image Restoration

The task of deconvolution is to compute an estimate F̂ that is as close as possible to the true object F. If the noise is small, there is no background (only the signal of interest reaches the detector) and the PSF is known, then the observed image can be calculated using equation 2.7a as follows:

G(kx, ky) = F̂(kx, ky) H(kx, ky)   (2.8)

where
F̂(kx, ky) = Fourier transform of the estimated object

Equation 2.8 already shows the difficulties in image deconvolution. As explained before, the PSF is band-limited while the observed image generally is not. Therefore, inconsistencies arise outside the frequency band: a solution of the equation might not exist or might not be unique. This kind of problem is often referred to as an ill-posed problem.

An ill-posed problem concerns three properties: existence, uniqueness and stability of the solution. Stability here means that small perturbations of the data may lead to large errors when solving the inverse problem. Substituting equation 2.7a into 2.8 and solving for the estimated object F̂ yields:

F̂(kx, ky) = F(kx, ky) + N(kx, ky) / H(kx, ky)   (2.9)

An example of how this type of deconvolution works is given in figure 2.14, which shows a NIH3T3 cell adhering to a crossbow-shaped micropatterned substrate. The image was taken with a wide-field fluorescence microscope. The inverse filtering was done under the assumptions that the PSF has a Gaussian form with a lateral sigma of 0.25 µm and that there is no noise. The Gaussian functions used in this thesis have the form given in equation 2.10.

h(x, y) = e^{−(x² + y²)/(2σr²)}   (2.10a)
h(r, z) = e^{−r²/(2σr²)} e^{−z²/(2σz²)}   (2.10b)
FWHMr = 2 √(2 ln 2) σr   (2.10c)
FWHMz = 2 √(2 ln 2) σz   (2.10d)

(a) Original image (b) Actin filaments in grayscale (c) Actin filaments after inverse filtering

Figure 2.14: Image comparison showing how the image looks after inverse filtering. The image shows a NIH3T3 cell adhering to a crossbow-shaped micropatterned substrate.

Obviously, the result of the inverse filtering cannot be used for further quantitative analysis; the result is unrecognizable (2.14c). The main reason is that the true PSF of the microscope is unknown, which results in the amplification of noise. As stated before, inverse filtering computes over the whole range of frequencies. Since the PSF was generated under the assumption of a Gaussian form, it has a cut-off frequency: the PSF is band-limited while the noise is not. The noise can have low values (even zero) in the high-frequency regime; once the inverse filtering is applied, division by low values of the OTF produces large values (unstable solutions) in the estimated image F̂. Furthermore, the noise term in equation 2.9 becomes dominant in the high-frequency regime.

The previous discussion makes clear that the band-limited nature of the imaging system leads to serious problems in image deconvolution. The inability of the imaging system to convey the true object information at all frequencies (plus the degradation due to noise and background) means there is no unique solution to the imaging equation for estimating the true object. Concretely, this naive approach cannot reconstruct information at frequencies which, in fact, never reached the image plane because the system failed to transmit it. To be more precise, the imaging system fails to transmit complete information about the Fourier transform of the object at certain frequencies. As stated in [44, 31]: "A lack of information cannot be remedied by any mathematical trickery."
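The noise amplification can be reproduced in a few lines. The sketch below blurs a hypothetical square object with a Gaussian PSF (σ = 2 pixels, an arbitrary choice rather than the 0.25 µm used for the figure), adds a tiny amount of Gaussian read noise, and applies the naive inverse filter of equation 2.9:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))       # OTF; tiny at high frequencies

f = np.zeros((n, n))
f[28:36, 28:36] = 1.0                        # hypothetical square object
g0 = np.fft.ifft2(np.fft.fft2(f) * H).real   # noiseless blurred image
g = g0 + 1e-3 * rng.standard_normal((n, n))  # blurred image plus tiny noise

f_naive = np.fft.ifft2(np.fft.fft2(g) / H).real  # inverse filter (eq. 2.9)

err_blur = np.abs(g0 - f).max()       # moderate: blurring alone
err_naive = np.abs(f_naive - f).max() # enormous: N/H explodes where H ~ 0
```

Even with a noise level a thousand times smaller than the signal, the division by near-zero OTF values makes the "restored" image orders of magnitude worse than the merely blurred one.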

In order to deal with this ill-posed problem, one must reject the concept of an exact solution of the imaging equation and look for approximate solutions, i.e. objects which approximately reproduce the noisy (observed) image. A second approach is to use knowledge of additional properties of the unknown object (also known as a priori information) to select, from the set of approximate solutions, those which are physically meaningful [31].

    The Wiener-Helstrom filter

Although this type of filter is not used in this thesis, a brief explanation of it is a good introduction for understanding the deconvolution problem better. As discussed in the previous section, the incomplete information transmission of the imaging system leads to a band-limited problem. As a result of this incomplete information in the image plane, it is very difficult to recover the true object from the observed image. Since the direct approach of deconvolution by inverse filtering tries to solve the problem over the whole frequency domain, it induces an ill-posed problem and yields unsatisfying results.

One way to avoid the ill-posed problem is to introduce a modified filter WH(kx, ky) (see equation 2.11) that satisfies the following conditions:

1. In the high-frequency regime, the filter should avoid division by zero in order to prevent noise amplification. This is done by setting the modified filter to zero. Since in the high-frequency regime the signal is dominated by noise, setting the modified filter to zero also prevents restoring information which does not belong to the true object.

2. In the low-frequency regime (where the noise component is much smaller than the object signal), the modified filter should recover the true object signal.

3. In the frequency regime where the noise and the object signal are comparable, the modified filter should make a compromise between complete acceptance and total suppression of the noise.

F̂(kx, ky) = WH(kx, ky) G(kx, ky)
          = WH(kx, ky) [H(kx, ky) F(kx, ky) + N(kx, ky)]   (2.11)


The Wiener-Helstrom filter fulfils these requirements. For brevity, the Wiener-Helstrom filter is defined as:

WH(kx, ky) = H*(kx, ky) / (|H(kx, ky)|² + NSR(kx, ky))   (2.12)

where
NSR(kx, ky) = noise-to-signal ratio = WN(kx, ky) / WF(kx, ky)
WN(kx, ky) = noise power spectrum = |N(kx, ky)|²
WF(kx, ky) = input power spectrum = |F(kx, ky)|²

The Wiener-Helstrom filter is the least-squares solution of an optimization criterion which minimizes the cost function defined in equation 2.13. As given in equation 2.12, the NSR term enables the filter to accept low-noise frequency components and reject high-noise frequency components. Figure 2.15 illustrates how this deconvolution method works.

Q = ∫∫ [F(x, y) − F̂(x, y)]² dx dy   (2.13)

(a) Original image (b) Actin filaments after applying the Wiener-Helstrom filter with NSR = 15.5437 (c) Actin filaments after applying the Wiener-Helstrom filter with NSR = 100

Figure 2.15: Image comparison showing how the image looks after Wiener-Helstrom filtering. The PSF is assumed to have a Gaussian shape with a lateral sigma of 0.25 µm, and the image is distorted by random noise with NSR values of 15.5437 and 100. The image processing was done with Matlab.

Figure 2.15 clearly shows that this deconvolution works much better than the inverse filtering method. The PSF is again assumed to have a Gaussian shape with a lateral sigma of 0.25 µm. For comparison, the NSR term was generated with Matlab with the values 15.5437 and 100. With an NSR of 15.5437, the deconvolved image subjectively seems to show more detailed structures than the original image, but the regions where no cell is adhering look worse. Moreover, there are many artifacts, such as halo patterns along the curved boundaries of the actin filaments, which make the image more difficult to analyze. With the NSR set to 100, the background looks better, but the image shows less detailed structure. The main reason for this variation is that the true PSF of the microscope and the true noise are unknown; they were merely assumed, not measured.

As stated in equation 2.12, the Wiener-Helstrom filter needs the power spectrum of the true object, which is not directly available. Thus, the Wiener-Helstrom filter cannot be implemented exactly to estimate the true object, but it is nevertheless a substantial improvement over the previous generation of deconvolution methods.
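A minimal numerical sketch of equation 2.12 with a constant NSR shows how the regularizing term stabilizes the inversion. The Gaussian PSF with σ = 2 pixels, the square test object and the noise level are arbitrary assumptions, not the parameters used for the figures:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))            # OTF

f = np.zeros((n, n))
f[24:40, 24:40] = 1.0                             # hypothetical square object
g = np.fft.ifft2(np.fft.fft2(f) * H).real + 0.01 * rng.standard_normal((n, n))

def wiener(img, otf, nsr):
    """Wiener-Helstrom filter (equation 2.12) with a constant NSR."""
    wh = np.conj(otf) / (np.abs(otf)**2 + nsr)
    return np.fft.ifft2(wh * np.fft.fft2(img)).real

f_wiener = wiener(g, H, nsr=1e-3)   # regularized inversion
f_inverse = wiener(g, H, nsr=0.0)   # degenerates to the naive inverse filter

rmse_wiener = np.sqrt(np.mean((f_wiener - f)**2))
rmse_inverse = np.sqrt(np.mean((f_inverse - f)**2))
```

Setting NSR = 0 reduces the filter to 1/H and reproduces the catastrophic noise amplification of the previous section, while a small positive NSR keeps the reconstruction error bounded.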

    2.5.2 Statistical Image Restoration

The previous discussion made clear that another approach is needed to reconstruct the true object from the observed image. The non-iterative approaches still suffer from ill-posedness, implying that a solution might not exist, might not be unique, or might not be stable. Therefore, additional information is necessary to reduce the uncertainty of the approximate solutions. To this end, the problem must be reformulated taking into account all the available information, both on the process of image acquisition (noise) and on the object itself (a priori information, such as non-negativity). The first step in this reformulation of image deconvolution is to model the noise which corrupts the data [45].

Before modelling the noise, it is appropriate to introduce the matrix formulation (digitized version) of equation 2.5, since in digital computing continuous signals are converted into digital signals for further processing. The discretized version of the imaging equation is:

y = Hx + b   (2.14)

Here y = {yi}, i ∈ S, is the output or image column vector containing all pixels of the image, where i is a multi-index of the image and S is the appropriate range of i; x = {xj}, j ∈ R, is the input column vector, where j is a multi-index of the object and R is the range of j. H is the discrete form of the PSF, describing the transfer of each pixel in the input to each and every pixel in the output; it is also known as the transfer or imaging matrix. b is the background column vector.

The sets S and R do not need to have the same cardinality. Moreover, the imaging matrix H has to satisfy the following conditions [46]:

Hi,j ≥ 0;   Σ_{i∈S} Hi,j > 0 for all j ∈ R;   Σ_{j∈R} Hi,j > 0 for all i ∈ S   (2.15)

Equation 2.15 means that for each fixed value of the index or multi-index i or j, there exists at least one non-zero entry. The matrix H is also assumed to satisfy the normalization condition:

Σ_{i∈S} Hi,j = 1,   for all j ∈ R   (2.16)

If the vector b is 0, this implies that the total number of photons is the same in the original object and in the image, i.e. the object and the image have the same total flux [46].
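As a toy illustration of conditions 2.15 and 2.16, the sketch below builds a 1D imaging matrix H from a hypothetical three-point blur kernel, renormalizes its columns to unit sum, and checks that the resulting image conserves the flux of the object (all sizes and weights are arbitrary assumptions):

```python
import numpy as np

n = 32
kernel = {-1: 0.25, 0: 0.5, 1: 0.25}   # assumed 1D blur weights

# Each column j of H is the kernel centred on pixel j, truncated at the borders.
H = np.zeros((n, n))
for j in range(n):
    for offset, w in kernel.items():
        i = j + offset
        if 0 <= i < n:
            H[i, j] = w
H /= H.sum(axis=0)                     # enforce equation 2.16 (column sums = 1)

x = np.zeros(n)
x[10:20] = 5.0                         # non-negative hypothetical object
y = H @ x                              # noiseless image with b = 0
```

Because every column of H sums to one and b = 0, `y.sum()` equals `x.sum()`: the image carries the same total flux as the object, exactly as stated above.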

In the case of Poisson noise, the detected value yi of the image y is the realization of a Poisson random variable (RV) Yi with expected value (Hx + b)i, i.e.

Yi ∼ Poisson((Hx + b)i)   (2.17)

Assuming statistical independence of the RVs Yi (each pixel in a recorded image is statistically independent of the others), the probability distribution of Y, for given H, x and b, is the product of the individual probabilities [47]:

pY(y; x) = Π_{i=1}^{m} e^{−(Hx+b)i} (Hx + b)i^{yi} / yi!   (2.18)

Equation 2.18 can be used to find an estimate x̂ of the unknown object corresponding to the observed image y. Since the probability density pY(y; x) of the data is assumed to be known, and the unknown object appears in this density as a set of unknown parameters, the estimation of the object reduces to a parameter estimation problem. It can be solved with the so-called maximum likelihood (ML) approach by introducing the likelihood function defined by:

L^Y_y(x) = pY(y; x)   (2.19)

Thus, the estimated object x̂ is the one which maximizes this likelihood function, i.e.

x̂ = arg max_{x∈Rⁿ} L^Y_y(x)   (2.20)

Moreover, the likelihood function is more conveniently handled in negative logarithmic form (neglog), since it contains a large number of factors. The maximization problem is thereby transformed into a minimization problem:

x̂ = arg min_{x∈Rⁿ} J0(x; y),   J0(x; y) = −log L^Y_y(x)   (2.21)

In the case of Poisson noise, the function J0(x; y) is given by the so-called Kullback-Leibler (KL) divergence (or Csiszár I-divergence), defined by:

DKL(y⁽¹⁾, y⁽²⁾) = Σ_{i=1}^{m} { yi⁽¹⁾ ln(yi⁽¹⁾ / yi⁽²⁾) + yi⁽²⁾ − yi⁽¹⁾ }   (2.22)

Hence

J0(x; y) = DKL(y; Hx + b) = Σ_{i=1}^{m} { yi ln(yi / (Hx + b)i) + (Hx + b)i − yi }   (2.23)
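One classical iteration that minimizes the KL divergence of equation 2.23 under a non-negativity constraint is the Richardson-Lucy algorithm; it is not the SGP method used in this thesis, but a useful reference point. The sketch below applies it to a hypothetical 1D problem with b = 0; the blur kernel, test object and iteration count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64

# 1D imaging matrix: columns are shifted copies of an assumed blur kernel,
# renormalized so that each column sums to 1 (equation 2.16).
kernel = [0.1, 0.2, 0.4, 0.2, 0.1]
H = np.zeros((n, n))
for j in range(n):
    for offset, w in zip(range(-2, 3), kernel):
        if 0 <= j + offset < n:
            H[j + offset, j] = w
H /= H.sum(axis=0)

x_true = np.zeros(n)
x_true[20:30] = 50.0
x_true[40] = 200.0
y = rng.poisson(H @ x_true).astype(float)   # Poisson-noisy data (eq. 2.17)

def kl_div(y, m):
    """D_KL(y; m) from equation 2.23 with b = 0 (terms with y_i = 0 vanish)."""
    pos = y > 0
    return np.sum(y[pos] * np.log(y[pos] / m[pos])) + np.sum(m - y)

x = np.ones(n)                              # positive starting guess
d_start = kl_div(y, H @ x)
for _ in range(100):
    # Multiplicative Richardson-Lucy update: keeps x >= 0 automatically.
    x *= H.T @ (y / np.maximum(H @ x, 1e-12))
d_end = kl_div(y, H @ x)
```

Each update multiplies the current estimate by a non-negative correction factor, so non-negativity comes for free, and with unit column sums the total flux of the iterate matches that of the data after the first step.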

    Scaled Gradient Projection (SGP)

The scaled gradient projection method is a class of iterative image restoration algorithms that minimize convex non-linear functions subject to a non-negativity constraint and, optionally, a flux conservation constraint [48]:

min J(x)  subject to  x ≥ 0

or

min J(x)  subject to  x ≥ 0  and  Σ_{i=0}^{N} xi = c   (2.24)

Here J(x) is a continuously differentiable convex function measuring the difference between reconstructed and measured data, possibly containing a penalty term expressing additional information on the solution. The convergence of this method is proven in the same paper [48].

    The objective function is the Kullback-Leibler divergence:

J0(x; y) = DKL(y; Ax + bg)
         = Σ_{i=0}^{N} { (Σ_{j=0}^{N} Aij xj + bg) − yi − yi log((Σ_{j=0}^{N} Aij xj + bg) / yi) }   (2.25)


    2.6 Structure Tensor

The detection of edges and lines has many applications in image processing, since it provides a way to recognize objects in an image. Patterns in an image can be recognized because of spatial brightness changes, which allow the direction of a neighbourhood to be modelled by applying a gradient operator that is sensitive to changes in brightness. However, determining the orientation merely from gray-value changes is not robust against noise [49]. Therefore, another approach is needed that can determine a unique orientation. The structure tensor is a suitable representation for determining the orientation of a local neighbourhood. In this thesis, the orientation of actin stress fibers is measured with the help of the structure tensor. The basic idea is to compute the image gradient and determine its dominant local orientation.

The structure tensor evaluates the local orientation in a small region of an image [50, 51]. Orientation here means the direction of a vector perpendicular to the lines of constant gray values. This technique can be used to determine the presence and orientation of edges, the orientation of texture, and the velocity in image sequences [52].

First, orientation has to be defined, since it is easily confused with direction. The angle of orientation in the structure tensor calculation is defined as the angle with the least deviation from the image gradient [53, 52]. The gradient vector of a structure in an image always points towards the higher-intensity object (the brighter or whiter location in the image). The gradient is represented as a vector with a magnitude and an angle ranging from 0 to 360 degrees. The case is different for orientation: its angle is only defined in the range of 0 to 180 degrees. Assume a boundary between black and white regions as shown in figure 2.16. The gradient points towards the white region, and the orientation points in the same direction as the gradient. If the image is rotated by 180 degrees, the gradient points in the opposite direction (downward), but the orientation does not change, because the relevant information is that the boundary between black and white still lies in the same (horizontal) direction. Therefore, it is sufficient to limit the range of the orientation angle to 180 degrees.

Although the formal definition demands that the structure tensor orientation be parallel to the intensity gradient vector, it can be converted to be perpendicular to the gradient vector. This is done for simplicity when observing the orientations of the actin stress fibers together with the orientation of the long axis of the nucleus (see the chapter Materials and Methods for the discussion). Figure 2.17 demonstrates how the orientation is represented in the structure tensor calculation used in this thesis.

Let f(r) denote a two-dimensional image with spatial coordinates r = (x, y), and assume that ∇f has a dominant orientation in a region Ω. Since the orientation vector u = (cos θ, sin θ) should deviate as little as possible from ∇f, the dot product between the two vectors is to be maximized:

max (∇f(r) · u)² = max ( |∇f(r)|² |u|² cos² ∠(∇f(r), u) )   (2.26)

Thus, the dot product is maximized if the orientation is parallel or anti-parallel to the gradient vector, and it is lowest when the two are perpendicular to each other.

(a) A boundary represented as black and white (b) Rotated image

Figure 2.16: Gradient vector showing the direction from the black region to the white region.

Finding θ (denoting the orientation) is done by maximizing the integral of this dot product over the whole frame of the image (within the local neighbourhood Ω) [54], meaning:

E(u) = max_{‖u‖=1} ∫_Ω (uᵀ ∇f(r))² dΩ
     = max_{‖u‖=1} ∫_Ω (uᵀ ∇f(r)) (∇f(r)ᵀ u) dΩ
     = max_{‖u‖=1} ∫_Ω uᵀ (∇f(r) ∇f(r)ᵀ) u dΩ
     = max_{‖u‖=1} uᵀ [ ∫_Ω ∇f(r) ∇f(r)ᵀ dΩ ] u   (2.27)

Equation 2.27 becomes more convenient if the structure tensor J is introduced, defined as:

J = ∫_Ω ∇f(r) ∇f(r)ᵀ dΩ
  = ∫_Ω [ fx²    fx fy
          fy fx  fy²  ] dΩ   (2.28)

Hence, by substituting equation 2.28 into equation 2.27,

E(u) = max_{‖u‖=1} uᵀ J u   (2.29)

(a) Original image (b) Structure tensor calculation showing the filtered structure; the colour indicates the orientation. (c) Histogram of the orientation angles

Figure 2.17: Orientation definition in the structure tensor calculation.

In order to introduce a local neighbourhood around a pixel r, a window function w(r − r′) is introduced into the structure tensor matrix, yielding:

Jw = ∫ w(r − r′) ∇f(r′) ∇f(r′)ᵀ dΩ′
   = [ ⟨fx, fx⟩w   ⟨fx, fy⟩w
       ⟨fy, fx⟩w   ⟨fy, fy⟩w ]   (2.30)

where
⟨g, h⟩w = ∫ w(r − r′) g(r′) h(r′) dΩ′

In practice, the structure tensor can be computed by applying a gradient operator, such as the Sobel operator, and then applying a smoothing filter (such as a Gauss filter) for the windowing computation. If the smoothing operator is denoted by B and the gradient operators with respect to the coordinates p and q are denoted by Dp and Dq, respectively, then

Jpq = B(Dp · Dq)   (2.31)

    where the dot sign (.) denotes the pixel-wise multiplication.

The tensor J is a symmetric, positive semi-definite matrix; therefore there exist two orthogonal eigenvectors with non-negative eigenvalues, where the eigenvector with the largest eigenvalue λ1 maximizes equation 2.29 and gives the local orientation [55, 50].

Having found the components of the structure tensor matrix, the local orientation, coherency, and energy of each pixel can be calculated easily using equation 2.32 [51, 56].

tan 2θ = 2Jxy / (Jyy − Jxx)   (2.32a)

C = √((Jyy − Jxx)² + 4Jxy²) / (Jxx + Jyy)   (2.32b)

E = Trace(J) = Jxx + Jyy   (2.32c)

where
C = coherency
θ = orientation angle
E = energy

Coherency ranges between 0 and 1, with 1 indicating highly oriented structures and 0 indicating an isotropic area. High energy means high gradient values, indicating abrupt changes in brightness. Therefore, pixels with higher energy values indicate locations where the structure is prominent.

The eigenvalues provide information for classifying the orientation:

1. λ1 = λ2 ≈ 0: the region is homogeneous; no preferred orientation occurs.

2. λ1 > 0, λ2 ≈ 0: the region has a single dominant orientation.

3. λ1 > 0, λ2 > 0: the region contains multiple orientations.
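The pipeline of equations 2.30-2.32 can be sketched compactly. The code below is an illustrative assumption-laden variant: it uses np.gradient instead of a Sobel operator, an FFT-based Gaussian window as the smoothing operator B, and a synthetic stripe pattern as the test image:

```python
import numpy as np

def gauss_smooth(a, sigma):
    """Gaussian windowing (the smoothing operator B) applied via the FFT."""
    ky = np.fft.fftfreq(a.shape[0])[:, None]
    kx = np.fft.fftfreq(a.shape[1])[None, :]
    g = np.exp(-2 * np.pi**2 * sigma**2 * (ky**2 + kx**2))
    return np.fft.ifft2(np.fft.fft2(a) * g).real

def structure_tensor(img, sigma=3.0):
    """Per-pixel orientation (eq. 2.32a) and coherency (eq. 2.32b)."""
    fy, fx = np.gradient(img)                 # simple gradients instead of Sobel
    Jxx = gauss_smooth(fx * fx, sigma)
    Jyy = gauss_smooth(fy * fy, sigma)
    Jxy = gauss_smooth(fx * fy, sigma)
    theta = 0.5 * np.arctan2(2 * Jxy, Jyy - Jxx)          # eq. 2.32a
    coh = np.sqrt((Jyy - Jxx)**2 + 4 * Jxy**2) / np.maximum(Jxx + Jyy, 1e-12)
    return theta, coh

# Hypothetical test pattern: stripes running vertically (constant along y).
yy, xx = np.mgrid[0:64, 0:64]
stripes = np.sin(2 * np.pi * xx / 8.0)
theta, coh = structure_tensor(stripes)
# In the interior, |theta| is pi/2 (stripes run vertically) and coherency ~ 1.
```

For this pattern the gradient is purely horizontal, so the extracted orientation lies along the stripes (perpendicular to the gradient, matching the convention stated above) and the coherency is close to 1, as expected for a highly oriented structure.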


Chapter 3

    Materials and Methods

The experimental methods discussed here include stamp fabrication, sample preparation, data acquisition, and image processing.

3.1 Stamp Fabrication

The master stamp for the 2D substrates is produced using a two-photon lithography process (direct laser writing, DLW). It involves the simultaneous absorption of two photons in the focal spot of a femtosecond laser within a photosensitive liquid material. Once the energy of the absorbed photons exceeds a specific threshold of the photoinitiator molecules inside the photoresist, a highly localized chemical polymerization event occurs within the focus of the laser beam [57].

The pattern design of one master stamp consists of 10 × 10 fields. The dimension of one field is 300 × 300 µm, corresponding to the stage size defined by the DLW machine. Within one field, there are four identical patterns, as shown in figure 3.1. The design is generated with Matlab and consists of coordinates defining the tracks along which the piezo stage writes the patterns.

The writing process is done by polymerizing the photoresist along a line track defined by the movement of the piezo stage. The stamp is written line by line: the piezo stage moves from left to right horizontally, then shifts vertically by a defined distance (here, the distance between lines is 0.25 µm) and repeats the line writing. While the piezo stage is in horizontal motion, the laser power can be turned on or off as defined by the user. The on state is indicated by blue lines and the off state by red lines in figure 3.1. Since polymerization occurs during the on state, the tracks depicted by blue lines become embossed, while the tracks depicted by red lines form caved patterns. The depth of the patterns is about half the size of the laser beam focus (about 500 nm).

There are five different pattern designs on one stamp, so each design occupies 20 fields. The writing process for one master stamp may take up to 70 hours. After the writing process is done, the sample is taken out of the machine and developed. The developing procedure is as follows:

1. Incubate the sample in a 1:1 (by volume) mixture of isopropanol and MIBK (methyl isobutyl ketone) for 10 min.


Figure 3.1: Design of the pattern in one field of the stamp.

    2. Dry the sample carefully with N2 gas so that the patterns are not damaged.

3. Once the sample is cleaned, put it into the plasma chamber. When the pressure reaches 0.3 mbar, apply the plasma for 1 min.

4. Take the sample out of the plasma chamber and put it into a glass Petri dish filled with trichloro(octadecyl)silane in toluene (1 mM). Wait for 10 min.

    5. Rinse the sample with isopropanol and dry it again with N2 gas carefully.

6. Mount the sample on a microscope slide; the master stamp is then ready to serve as a negative replica (template) for making the replicate stamp.

Once the master is produced, it becomes the template for making a replicate stamp. The replicate stamp is made of the silicone PDMS (polydimethylsiloxane) and reproduces the cavity patterns of the master stamp. After PDMS casting and peeling off, the cavity patterns on the master stamp form relief patterns on the PDMS stamp, i.e. the replicate stamp has inverted patterns of the master stamp. To mold a stamp from the template, the steps are as follows:

1. Prepare the PDMS at a ratio of 10:1, ten parts of base (liquid PDMS) and one part curing agent, and mix well so that base and curing agent are homogeneously combined. During mixing, a lot of air is incorporated, trapping gas bubbles inside the solution.

2. Wait about 30 minutes for degassing so that the trapped gases escape.

    3. Dispense a tiny amount of mixture (a single drop would be sufficient) onto the template.

    4. Put a glass stripe onto the drop and add some weight (about 0.5 g) on the glass stripe.

5. For the curing process, put them into the incubator at 60 °C and wait for about 2 hours.


6. The next step is to peel the stamp off the template.

7. The final step is to cut the boundary of the thin PDMS layer to obtain a homogeneous thickness. The best cutting line is an octagon with the patterns in its middle.

    3.2 2D Environments

    3.2.1 Sample Preparations for 2D Substrates

The PDMS stamp is then used to functionalize the gold substrate, i.e. to print a variety of molecules in submicrometer-resolution patterns on gold substrates, where the cells will attach and spread. The flexible PDMS stamp was coated with a hydrophobic thiol (octadecyl mercaptan/ODM, Aldrich; fibronectin protein binds to these molecules) and dried gently with inert N2 gas. It was then brought into tight contact with the surface of a cover glass carrying a thin layer of deposited gold (on a thin titanium adhesion layer). This step is critical for transferring the patterns onto the gold substrate, because the low aspect ratio makes the stamp easily deformable, as mentioned in the theoretical section. This is why the glass stripe is used as a backbone for the PDMS stamp.

The stamp is applied and removed again within about 10-20 seconds. The hydrophobic alkanethiol molecules are transferred only to the regions of the glass surface that contact the raised regions of the stamp, so the printed patterns correspond to the shape of those raised regions. On the gold surface, these molecules self-assemble into a molecular monolayer (self-assembled monolayer, SAM) that is limited to the regions of the islands created on the original master. After removing the stamp, the gold substrate is rinsed with pure ethanol and dried again with N2 gas. Next, a solution containing a non-adhesive thiol is added to the patterned substrate to passivate the remaining regions, so that cells can adhere only to the functionalized regions. Hydroxyl-terminated hexa(ethylene glycol) alkanethiol, HS(CH2)11(OCH2CH2)6OH, referred to as the EG-6OH solution, is used for the passivation with an incubation time of 10-15 minutes. The non-adhesive thiol self-assembles between the hydrophobic SAM-covered islands, thus forming a continuous SAM over the entire substrate. Figure 3.2 is a schematic drawing of the stamping procedure.

After incubation, the substrate is again cleaned with pure ethanol and dried carefully with N2 gas. The gold substrate is now ready to be coated with the FN protein. The protein is diluted 1:100 by volume (from a stock solution of 1 mg/ml): 1 part protein to 100 parts PBS buffer (phosphate-buffered saline). The incubation time is 1 hour in the incubator at 37 °C. The FN adsorbed only onto the hydrophobic surfaces of the defined islands, while the intervening PEG-covered barrier regions remained uncoated (figure 3.3) and hence non-adhesive.

After the FN incubation, NIH3T3 fibroblasts were plated on the FN-coated micropatterned substrates in DMEM (Dulbecco's Modified Eagle Medium). The cells preferentially attached and spread where the surface was functionalized. Before fixation and staining,

  • 44 Chapter 3. Materials and Methods

(a) Preparation of a PDMS stamp using replica molding. (b) Pattern transfer by microcontact printing (µCP).

Figure 3.2: A schematic outline of patterning: preparation of a PDMS stamp using replica molding, and pattern transfer by microcontact printing (µCP). [58]

the cells were incubated for 2.5-3 hours in the incubator at 37 °C.

3.2.2 Cell Fixation and Staining

After incubation, the cells were fixed in 100 µl of 4% paraformaldehyde (PFA) in PBS for 10 min. Washing was then done 3 times with 0.1% Triton X-100 in PBS, 5-10 min per washing step. The cells were stained in two antibody incubation rounds:

    1. Primary antibody staining:

(a) Anti-fibronectin polyclonal antibody (rabbit) with concentration 1:400 in BSA (bovine serum albumin).

(b) Anti-paxillin monoclonal antibody (mouse) with concentration 1:500 in BSA.

    2. Incubate for 1 hour and wash with 0.1% Triton X-100 in PBS 3 times for 5-10 min in eachwashing step.

    3. Secondary antibody staining:

(a) Cell nucleus staining with DAPI (4′,6-diamidino-2-phenylindole) with concentration 1:1000 in BSA.

    (b) Actin filaments staining with Phalloidin Alexa 488 with concentration 1:200 in BSA.

    (c) Fibronectin staining with anti-rabbit Cy3 with concentration 1:500 in BSA.


(a) 60° V-shape. The length is 30 µm, the width is 3.5 µm.

(b) Equilateral triangle. The length is 30 µm, the width is 3.5 µm.

(c) Perpendicular triangle. The length is 30 µm, the width is 3.5 µm.

(d) L-shape. The length is 30 µm, the width is 3.5 µm.

(e) Crossbow. The length is 45 µm, the width is 3.5 µm and the radius is 18 µm.

(f) Arrow. The length is 45 µm, the width is 3.5 µm and the length of the head is 25 µm.

    Figure 3.3: Fibronectin pattern on the functionalized surface

(d) Paxillin staining with anti-mouse Alexa 647 with concentration 1:200 in BSA.

    4. Incubate for 1 hour and wash with 0.1% Triton X-100 in PBS 3 times for 5-10 min in eachwashing step.

BSA is used to reduce unspecific antibody binding. After fixation and staining, the samples are mounted on a microscope slide using Mowiol solution.

    3.2.3 Data Acquisition

An Axio Observer microscope equipped with an ApoTome module was used for data acquisition of the samples. To obtain high-quality images, a 63x oil immersion objective (NA = 1.4) was used. The illumination time was set automatically by the microscope system.

    3.2.4 Image Processing for 2D Cases

Matlab is used for most of the image processing and analysis. In the 2D case, the analyses include cell body and nucleus area measurements, nucleus orientation, nucleus elongation and stress fiber orientation.


    Cell Body and Nuclei Area Measurement

The actin channel is considered to represent the cell body and is therefore used for the measurement of the cell body area. The cell nucleus area is measured in the DAPI channel. The image processing steps for measuring the area are depicted in figure 3.4. First, Canny edge detection is performed to detect the boundary of the cell. Then the image is dilated with a diamond structuring element of 1-3 pixels to connect unconnected boundary pixels. Once all pixels along the boundary are connected, hole filling yields a binary image of the cell body. The cell area is the total number of white pixels in the binary image multiplied by the pixel area. The same method is applied to measure the cell nucleus area.
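The steps above can be sketched in code. The thesis pipeline is in Matlab; the following is an equivalent Python/NumPy sketch on a hypothetical synthetic cell image, with the Canny detector replaced by a simple Sobel gradient threshold to keep the example self-contained (the 0.081 µm pixel size is the value quoted later for the real data):

```python
import numpy as np
from scipy import ndimage

# Hypothetical "actin channel": a bright 40x40 px cell on a dark background.
img = np.zeros((100, 100))
img[30:70, 30:70] = 1.0

# 1. Edge detection (Sobel gradient magnitude in place of Matlab's Canny).
edges = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)) > 0.5

# 2. Dilate with a diamond structuring element to connect broken pixels.
diamond = ndimage.generate_binary_structure(2, 1)  # 4-connected "diamond"
edges = ndimage.binary_dilation(edges, structure=diamond, iterations=2)

# 3. Fill the enclosed holes to obtain a binary mask of the cell body.
mask = ndimage.binary_fill_holes(edges)

# 4. Area = number of white pixels times the pixel area (0.081 um/px assumed).
area_um2 = mask.sum() * 0.081 ** 2
```

With real data, the edge-detection threshold and the number of dilation iterations (1-3 pixels in the thesis) would need to be tuned per image.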

    Figure 3.4: Steps in measuring cell body area with Matlab.

A significance test (Student's t-test) is performed on the following pairs: (angular bar; eq. triangle), (angular bar; perp. triangle), (angular bar; L-shape), (eq. triangle; perp. triangle), (eq. triangle; L-shape), (perp. triangle; L-shape), (arrow; crossbow).
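For each shape pair, the comparison amounts to a two-sample Student's t-test on the measured areas. The sketch below uses hypothetical area values and scipy.stats.ttest_ind as a stand-in for the Matlab test used in the thesis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical cell-body areas (um^2) measured on two island shapes.
areas_eq_triangle = rng.normal(loc=1200.0, scale=100.0, size=20)
areas_l_shape = rng.normal(loc=900.0, scale=100.0, size=20)

# Two-sample Student's t-test, one call per shape pair.
t_stat, p_value = stats.ttest_ind(areas_eq_triangle, areas_l_shape)
```

A small p-value (conventionally below 0.05) indicates that the mean areas of the two groups differ significantly.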

    Cell Nucleus and Actin Stress Fiber Orientation Measurement

The cell nucleus orientation is measured by fitting an ellipse; the angle between the major ellipse axis and the x axis is the orientation angle of the nucleus. The distribution of actin stress fiber orientations is measured using the structure tensor algorithm. For both the nucleus and the actin stress fibers, the orientation is defined between 0° and 180°.
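The ellipse-based orientation measurement amounts to computing the second central moments of the binary nucleus mask and taking the angle of the eigenvector with the larger eigenvalue. A self-contained Python sketch on a synthetic ellipse (rotated by 30°, so the expected answer is known; all names and sizes are hypothetical) is:

```python
import numpy as np

# Synthetic binary "nucleus": an ellipse rotated by 30 degrees.
yy, xx = np.mgrid[0:200, 0:200]
theta = np.deg2rad(30)
xr = (xx - 100) * np.cos(theta) + (yy - 100) * np.sin(theta)
yr = -(xx - 100) * np.sin(theta) + (yy - 100) * np.cos(theta)
nucleus = (xr / 60) ** 2 + (yr / 20) ** 2 <= 1

# Second central moments: the major axis is the eigenvector of the
# coordinate covariance matrix with the larger eigenvalue.
ys, xs = np.nonzero(nucleus)
coords = np.stack([xs - xs.mean(), ys - ys.mean()])
cov = coords @ coords.T / coords.shape[1]
evals, evecs = np.linalg.eigh(cov)
major = evecs[:, np.argmax(evals)]

# Orientation angle relative to the x axis, folded into 0..180 degrees.
angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
```

The modulo 180° accounts for the sign ambiguity of the eigenvector, matching the 0°-180° orientation convention used in the thesis.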

The performance of the structure tensor was first evaluated with a synthetic random fiber network by comparing the measured orientation distribution with the expected values.


    Evaluation of Structure Tensor

Similar to the evaluation of the deconvolution in the previous section, the structure tensor is evaluated in two ways: first on a synthetic random fiber network, and afterwards on real data.

A synthetic image in the form of a random fiber network was used to evaluate the performance of the structure tensor. The results are shown in figure 3.5.

(a) Original image (b) HSV mode, showing the fiber network orientation calculated by the structure tensor

(c) Relative coherency of the orientation distribution (d) Rose plot showing the evaluation of the structure tensor

Figure 3.5: The orientation of the synthetic random fiber network calculated by the structure tensor.

The comparison between the true orientation distribution and the one calculated by the structure tensor is presented in the rose plot in figure 3.5d. The rose plot shows that the orientation distribution calculated by the structure tensor correlates well with the true data. The difference in the number of data points entering the calculation causes the mismatch in the amplitudes of the rose plot: the number of true data points was 95 (there are 95 bars in figure 3.5a), while the structure tensor calculation used about 25,000 data points. The number of data points processed by the structure tensor is much greater


than for the true data, because of the local neighbourhood calculation, which depends on the size of the smoothing window function.
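A minimal structure tensor implementation illustrates both points: every pixel contributes one orientation sample (hence the ~25,000 values), and the Gaussian window sigma sets the neighbourhood size. The Python sketch below uses a synthetic stripe image with a known orientation; the exact filters and window of the thesis code may differ:

```python
import numpy as np
from scipy import ndimage

# Synthetic test image: stripes constant along x + y, i.e. oriented at
# 135 degrees in the pixel coordinate convention used below.
yy, xx = np.mgrid[0:128, 0:128]
img = np.sin(2 * np.pi * (xx + yy) / 16.0)

# Image gradients.
gx = ndimage.sobel(img, axis=1)
gy = ndimage.sobel(img, axis=0)

# Structure tensor: Gaussian-smoothed products of the gradients; the
# window sigma sets the size of the local neighbourhood.
sigma = 3.0
Jxx = ndimage.gaussian_filter(gx * gx, sigma)
Jxy = ndimage.gaussian_filter(gx * gy, sigma)
Jyy = ndimage.gaussian_filter(gy * gy, sigma)

# Dominant gradient orientation per pixel, then rotated by 90 degrees so
# that the orientation lies along the iso-intensity lines (the "fibers").
grad_orient = 0.5 * np.degrees(np.arctan2(2 * Jxy, Jxx - Jyy))
fiber_orient = (grad_orient + 90.0) % 180.0

# Coherency: 1 for perfectly aligned neighbourhoods, 0 for isotropic ones.
trace = Jxx + Jyy
coherency = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2) / (trace + 1e-12)

center = fiber_orient[64, 64]
```

Every pixel yields one (orientation, coherency) pair, which is why the rose plot of the structure tensor aggregates far more samples than the 95 true fibers.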

For the real-data measurement, a cell on the arrow island was used as an example. The results are given in figure 3.6. Note that the orientation can also be represented in a rose plot, as displayed in figure 3.6d, after the angles are converted to be perpendicular to the gradient vector, i.e. the orientation then lies along the iso-gray intensity values.

(a) Original image (b) HSV mode, displaying the color code for the orientation

(c) Relative coherency of the actin stress fibers (d) A rose plot showing the orientation of the stress fibers

Figure 3.6: The actin stress fibers of a cell adhering on the arrow island, evaluated by the structure tensor.


    3.3 2.5D Environments

    3.3.1 Sample Preparations for 2.5D Substrates

The 2.5D microscaffold structures on cover slips were also built with the DLW machine; the writing process may take one day for 10x10 fields. The fields consisted of microscaffolds shaped like the letter U (30×80 µm) and the letter V with a length of 80 µm and angle variations of 30°, 45°, 60°, and 90°. The height was set to 20 µm. The writing process was done in two steps. The first step was to polymerize the Ormocomp resist (a member of the ORMOCER (ORganically MOdified CERamics) family, a protein-binding photoresist containing Irgacure 369 as a photoinitiator [57]) according to the design (see figure 3.7a). The radius of the bars was 2 µm and the radius of the Ormo pillar was 4.5 µm. The second step was to coat the supporting pillars of the structure with a photoresist composed of the monomer PEG-DA (poly(ethylene glycol) diacrylate), pentaerythritol tetraacrylate (PETA, Sigma Aldrich) and Irgacure 369 (Ciba) as the photoinitiator [59] (see figure 3.7b). The radius of the PEG pillar was 7.5 µm. The pillar coating was done after washing the polymerized Ormocomp structure from the first writing step with ethanol. Once the writing was done, the sample was rinsed with a mixture of ethanol and MIBK, then dried carefully with N2 gas. A schematic of the structures is given in figure 3.7.

(a) Ormo structures. The radius of the Ormo pillar was 4.5 µm, the height was 20 µm and the length was 80 µm.

(b) PEG pillar structures. The radius of the PEG pillar was 7.5 µm and the height was 20 µm.

Figure 3.7: Two steps of writing: Ormo writing and PEG pillar coating. The pictures are not to scale.


The incubation time for fibronectin was 30 minutes, at the same concentration as in the 2D case. Afterwards, the sample was washed with PBS; NIH3T3 cells were then seeded on the structures and incubated in DMEM medium for 2.5-3 hours in the incubator at 37 °C.

3.3.2 Cell Fixation and Staining

After incubation, the cells were fixed in 100 µl of 4% paraformaldehyde (PFA) for 10 min. Washing was done 3 times with 0.1% Triton X-100 in PBS, 10 min per washing step. The cells were then stained in two antibody incubation rounds:

    1. Primary antibody staining:

(a) Anti-fibronectin polyclonal antibody (rabbit) with concentration 1:400 in BSA.

(b) Anti-paxillin monoclonal antibody (mouse) with concentration 1:500 in BSA.

    2. Incubate for 1 hour and wash with 0.1% Triton X-100 in PBS 3 times for 5-10 min in eachwashing step.

    3. Secondary antibody staining:

(a) Cell nucleus staining with DAPI with concentration 1:1000 in BSA.

(b) Actin filament staining with phalloidin Alexa 568 with concentration 1:100 in BSA.

(c) Fibronectin staining with anti-rabbit Alexa 647 with concentration 1:200 in BSA.

(d) Paxillin staining with anti-mouse Alexa 488 with concentration 1:200 in BSA.

    4. Incubate for 1 hour and wash with 0.1% Triton X-100 in PBS 3 times for 5-10 min in eachwashing step.

3.3.3 Data Acquisition for 2.5D Cases

The acquisition was done using a confocal laser scanning microscope (LSM510, Zeiss). For the best resolution, a 63x oil immersion objective (NA = 1.4) was used. The general procedure followed the protocol in [60].

3.3.4 Image Processing for 2.5D Cases

The image processing and analysis involved in the 2.5D case are cell surface area and volume measurements of both the cell body and the nucleus. Since images taken with a confocal microscope have a poor SNR, the images are first deconvolved with the SGP algorithm, chosen for its fast processing and good results [61], to increase the accuracy of the geometric measurements. Before the deconvolution algorithm is applied to the real data, it is evaluated on synthetic 3D images (a cube and a sphere) and on a 2D synthetic random fiber network.

The synthetic objects are convolved with a 3D Gaussian PSF (a 2D Gaussian PSF for the synthetic random fiber network) and Poisson noise is added, to mimic the imaging process of the confocal microscope. The performance of the SGP algorithm is assessed in terms of how well the results correlate with the true data, the number of iterations, the least-squares error (LSE) with respect to the original data, and the discrepancy error (the error between the current and the previous iteration).


    Evaluation of Deconvolution

The deconvolution performance is evaluated by analyzing synthetic images (random fiber network, cubes and spheres) generated with Matlab, and by comparison with real data taken in the lab.

    Synthetic Image Analysis

The 2D synthetic image is generated with Matlab. It is a fiber network whose orientation, length, and location are random. The original image can then be assumed to be the true object. This true object is blurred (convolved) with a normalised Gaussian PSF (sigma radius 0.25 µm), representing passage through the microscope system. Then, to represent the detector recording, Poisson noise is added to the noiseless image. The original image, the blurred image, and the blurred-noisy image are shown in figure 3.8.

    (a) Synthetic fiber network in grayscale

(b) Blurred synthetic fiber network with Gaussian PSF (c) Blurred and noisy synthetic fiber network. The noise is Poisson distributed

    Figure 3.8: Synthetic random fiber network
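The forward model used for figure 3.8 (blur with a normalised Gaussian PSF, then Poisson detector noise) can be sketched as follows. The thesis generates the images in Matlab; this Python version uses a hypothetical object, and the PSF sigma of 0.25 µm translates to about 3 px at the 0.081 µm pixel size quoted later:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Hypothetical "true" object: two bright bars on a dark background.
obj = np.zeros((128, 128))
obj[40:44, 20:100] = 100.0
obj[60:110, 70:74] = 100.0

# Blur with a normalised Gaussian PSF (sigma ~ 0.25 um / 0.081 um/px ~ 3 px).
blurred = ndimage.gaussian_filter(obj, sigma=3.0)

# Detector model: photon counting is Poisson distributed, with the blurred
# intensity as the expected count in each pixel.
noisy = rng.poisson(blurred).astype(float)
```

Because the Gaussian filter is normalised, the total intensity is conserved by the blur, while the Poisson step only perturbs it by shot noise.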

The SGP deconvolution Matlab code has several optional input arguments. Four of them were used in this thesis: INITIALIZATION, STOPCRITERION, BG, and TOL; the other input arguments are left at their defaults. The INITIALIZATION argument is the choice of starting point and has 5 options:


1. 0: the starting point is initialized with all zeros.

2. 1: the starting point is random.

3. 2: the starting point is initialized with the same values as the input image matrix gn.

4. 3: the starting point is initialized with: ones(size(gn))*sum(gn(:) - bg) / numel(gn)

5. x0: the starting point is supplied by the user (double array data type).

The value of BG is defined by estimating the background of the input image matrix. The background estimation was done using the imopen command and taking the average over all pixels. TOL values lie between 10^-1 and 10^-2 for both actin and nucleus deconvolution. Four kinds of stopping criteria are used in the SGP deconvolution program:

1. iter > MAXIT: the iterations are stopped when the iteration count reaches the set maximum number of iterations.

2. ||x_k - x_{k-1}|| <= TOL: the iterations are stopped when the reconstruction error reaches the minimum value denoted by the tolerance. The reconstruction error is calculated from the absolute difference between the current and the previous iterate.

3. |KL_k - KL_{k-1}| <= TOL: the iterations are stopped when the change in the KL divergence falls below the tolerance.

4. (2/N) * KL_k <= TOL: the iterations are stopped when the discrepancy error falls below the tolerance.
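The SGP code itself is taken from [61] and is not reproduced here, but the stopping criteria can be illustrated with the classical Richardson-Lucy iteration, which minimises the same KL divergence for Poisson noise. In this Python sketch (object, PSF and tolerance are hypothetical), criterion 3 is applied in a relative form, which is an adaptation rather than the exact SGP behaviour, and the discrepancy of criterion 4 is tracked along the way:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# Forward model: true object, Gaussian PSF blur, Poisson noise.
obj = np.zeros((64, 64))
obj[20:44, 30:34] = 50.0

def blur(x):
    # H and H^T coincide for a symmetric, normalised Gaussian PSF.
    return ndimage.gaussian_filter(x, sigma=2.0)

g = rng.poisson(blur(obj)).astype(float)
N = g.size

def kl_div(x):
    # KL data term for Poisson noise: sum(g*log(g/Hx) + Hx - g).
    hx = blur(x) + 1e-12
    g_safe = np.where(g > 0, g, 1.0)  # g*log(g) -> 0 for g == 0
    return float(np.sum(np.where(g > 0, g * np.log(g_safe / hx), 0.0) + hx - g))

# Richardson-Lucy iterations, stopped by criterion 3 in relative form:
# |KL_k - KL_{k-1}| <= tol * KL_{k-1}. H^T 1 = 1, so no extra normalisation.
tol, max_iter = 1e-3, 200
x = np.full_like(g, max(g.mean(), 1e-6))   # flat start (like INITIALIZATION 3)
kl_prev = kl_div(x)
disc = [(2.0 / N) * kl_prev]               # discrepancy of criterion 4
for k in range(max_iter):
    x = x * blur(g / (blur(x) + 1e-12))    # multiplicative RL update
    kl = kl_div(x)
    disc.append((2.0 / N) * kl)
    if abs(kl - kl_prev) <= tol * kl_prev:
        break
    kl_prev = kl
```

The KL objective decreases monotonically under this iteration, so the discrepancy trace is decreasing and the tolerance directly controls how early the loop stops, mirroring the role of TOL in the SGP code.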

Different variations of the input arguments yield different results. For actin cytoskeleton and nucleus deconvolution, stopping criterion number 2 or 3 is usually used. An INITIALIZATION value of 2 or 3 is typically used for the actin cytoskeleton and 0 for the nucleus. The deconvolution is mainly evaluated subjectively: how much the blurriness is reduced across the whole image stack, and whether structures vanish. Deconvolving the nucleus is easier than deconvolving the actin cytoskeleton.

The deconvolution of the object in figure 3.8 is shown in figure 3.9 for different tolerance values.

As shown in figure 3.9, the tolerance value of 10^-4 seems to give the best-looking result. At a tolerance of 10^-6, some structures seem to vanish. In practice, however, a tolerance of 10^-1 is often used because of the poor signal in the actin cytoskeleton channel.


(a) Original image (b) Tolerance 10^-3. The number of iterations is 25

(c) Tolerance 10^-4. The number of iterations is 46 (d) Tolerance 10^-5. The number of iterations is 75

(e) Tolerance 10^-6. The number of iterations is 170 (f) Tolerance 10^-5. The number of iterations is 14

Figure 3.9: Deconvolution of the synthetic fiber network with different tolerance values.

Therefore, to prevent vanishing structures, the tolerance is set between 10^-1 and 10^-2 when deconvolving the real data, depending on a subjective assessment of vanishing structures. The LSE and discrepancy values are given in figure 3.10.

Both the LSE and the discrepancy error of the deconvolution give the same results at the same iterations for every tolerance value, as can be observed from the overlapping graphs in figure 3.10. This is expected, because the tolerance is only used to stop the iteration; it does not change the LSE or the discrepancy error at any given iteration. Looking at the LSE graph in figure 3.10,


the lowest LSE value is reached at iteration 14, and the result of the deconvolution at the 14th iteration with tolerance 10^-5 is shown in figure 3.9f.

Notably, although the LSE is lowest at the 14th iteration, the deconvolution result looks worse. The LSE is evidently not always a good parameter for evaluating the results.

    Evaluation of Minkowski Algorithm for Geometric Measurements

The geometric measurements are done with the help of the Matlab code from [62]. The performance of the Minkowski algorithm for geometric measurements such as volume and surface area is assessed on a synthetic cube and sphere that are blurred and noisy. The measurements are also repeated after the cube and the sphere are deconvolved, to see whether the deconvolution increases the accuracy. The edge length of the cube is 5 µm and the radius of the sphere is 2 µm. The objects are convolved with a 3D Gaussian PSF whose radial FWHM is 0.394 µm and whose axial FWHM is 0.626 µm; these values are the average of the PSF measurements for the LSM510 in the lab. After convolution, Poisson noise is added to the objects. The voxel sizes in x, y, z are assumed to be 0.081 µm, 0.081 µm, and 0.31 µm respectively, mimicking the voxel sizes of the real data. The original, blurred, blurred-noisy, and deconvolved cube are shown in figure 3.11; figure 3.12 shows the corresponding images for the sphere.
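The idea behind the voxel-based measurement can be illustrated on the synthetic sphere. The Python sketch below voxelises a 2 µm sphere on the anisotropic grid quoted above, takes the volume as voxel count times voxel volume, and estimates the surface by counting exposed voxel faces. Note that this naive face-counting estimator overestimates smooth surfaces by up to a factor of 1.5; the Minkowski code from [62] uses corrected weights, which is why the values in table 3.1 are closer to theory:

```python
import numpy as np

# Anisotropic voxel sizes used in the thesis (um).
dx, dy, dz = 0.081, 0.081, 0.31

# Voxelise a sphere of radius 2 um on that grid (voxel-centre sampling).
r = 2.0
x = (np.arange(80) - 39.5) * dx
y = (np.arange(80) - 39.5) * dy
z = (np.arange(20) - 9.5) * dz
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
sphere = X**2 + Y**2 + Z**2 <= r**2

# Volume: foreground voxel count times the physical voxel volume.
volume = sphere.sum() * dx * dy * dz        # theory: 4/3*pi*r^3 ~= 33.51 um^3

# Surface: exposed voxel faces along each axis, weighted by the physical
# face areas (a staircase estimator; ~1.5x the smooth area of 50.27 um^2).
b = sphere.astype(np.int8)
surface = (np.abs(np.diff(b, axis=0)).sum() * dy * dz
           + np.abs(np.diff(b, axis=1)).sum() * dx * dz
           + np.abs(np.diff(b, axis=2)).sum() * dx * dy)
```

The volume estimate is close to the analytic value even on the coarse axial grid, while the raw face count illustrates why a corrected (Minkowski-style) weighting is needed for accurate surface areas.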

The blurring process and the noise degrade the lateral and axial resolution. Table 3.1 presents comparison tests assessing the significance of this degradation, as well as demonstrating the performance of the Minkowski algorithm in computing the volume and the surface area.

Table 3.1: Comparison tests for assessing the significance of blurring and noise degradation, as well as demonstrating the performance of the Minkowski algorithm.

Shape    Condition        Start slice   End slice   Volume (µm^3)           Surface area (µm^2)

Cube     Original         42            58          128.66 (theory: 125)    128.10 (theory: 150)
Cube     Blurry           35            63          236.12                  208.83
Cube     Blurry + noisy   35            63          260.60                  267.81
Cube     Deconvolved      41            60          148.47                  139.89

Sphere   Original         45            55          33.62 (theory: 33.51)   50.63 (theory: 50.27)
Sphere   Blurry           43            57          47.99                   72.63
Sphere   Blurry + noisy   43            57          45.02                   74.00
Sphere   Deconvolved      44            58          37.74                   55.22

As demonstrated in table 3.1, the blurring and the noise cause the starting and ending slice numbers to differ from those of the original objects. After deconvolution, they are closer to the original ones.

The calculated volume and surface area depend strongly on the quality of the segmentation, because the Minkowski code needs the 3D image input in binary format. There are some errors between the expected and the measured values. The errors are worse for


shapes with sharp edges or corners, like the cube. For shapes without many sharp edges, like the sphere, the errors decrease, as shown in table 3.1: for the original sphere, the measured volume and surface area are quite close to the theoretical values. These results show that the deconvolution is able to increase the accuracy of the geometric measurements.

    Cell Nucleus and Cell Body Volume Measurement in 2.5D Cases

The volume and the surface area of cells adhering on the 2.5D microscaffolds are estimated as follows:

1. Create a mask from the maximum intensity projection (MIP) of the 3D image stack, using the Canny operator from the Matlab Image Processing Toolbox. For the nucleus in particular, the mask is enlarged by several pixels to ensure that the nucleus periphery is preserved during the pixelwise multiplication in the next step.

2. Multiply every slice of the image pixelwise by this mask.

3. Estimate a global threshold using Otsu's method (in this thesis, Otsu's method is applied to the MIP) and apply this value to all slices. If the threshold returned by Otsu's method is zero, the threshold is set manually to 0.0039, which works well in most cases.

4. Apply median filtering (with a [10 10] kernel in this thesis) to all slices to reduce the speckles resulting from the thresholding.

5. The resulting 3D binary image (see figure 3.13f) is then used to estimate the volume and the surface area. The calculation is done directly with the Matlab routine from [62], which is freely distributed by its author on the Matlab File Exchange website.

These steps are also applied to measure the cell body volume, including for the deconvolved 3D images.
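Steps 1-5 above can be sketched as follows. The thesis pipeline is in Matlab; this Python version uses a hypothetical synthetic stack, an inline implementation of Otsu's method, and a simple threshold-plus-dilation mask in place of the Canny step:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# Hypothetical 3D stack: an ellipsoidal "nucleus" plus Poisson background.
Z, Y, X = np.mgrid[0:20, 0:64, 0:64]
inside = (((X - 32) / 15.0) ** 2 + ((Y - 32) / 15.0) ** 2
          + ((Z - 10) / 6.0) ** 2) <= 1.0
stack = rng.poisson(30.0 * inside + 2.0).astype(float)

def otsu_threshold(img, nbins=256):
    """Otsu's method: the threshold maximising between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist).astype(float)
    w1 = w0[-1] - w0
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1.0)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1.0)
    return centers[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]

# 1.-2. Mask from the maximum intensity projection (MIP), grown by a few
# pixels, then applied slicewise (thresholding stands in for Canny here).
mip = stack.max(axis=0)
thr = otsu_threshold(mip)
mask2d = ndimage.binary_dilation(mip > thr, iterations=3)
masked = stack * mask2d[None, :, :]

# 3. Global MIP-derived threshold applied to every slice.
binary = masked > thr

# 4. Per-slice median filtering to remove thresholding speckles.
binary = ndimage.median_filter(binary.astype(np.uint8), size=(1, 5, 5)) > 0

# 5. Volume from the binary stack and the voxel sizes (um) of the thesis.
dx, dy, dz = 0.081, 0.081, 0.31
volume_um3 = binary.sum() * dx * dy * dz
```

The surface area would then be computed from the same binary stack with the Minkowski routine of [62], which is not reproduced here.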

    Actin Distribution at the Bottom Side and the Top Side of the Nucleus

In 2.5D environments, the actin filaments cover the whole surface of the nucleus to support its position in the cell. Given this supporting function, it is natural to estimate and compare the distribution of the filaments on the bottom and the top side of the nucleus. The procedure is as follows:

    1. Perform the MIP of the DAPI channel and crop the nucleus (see figure 3.14b).

2. Perform edge detection with the Canny operator to detect the boundary of the nucleus. This creates a binary image of the nucleus (see figure 3.14b).

    3. Find the centroid and estimate the radius of the nucleus binary image.


4. Create a bigger mask by increasing the estimated radius by 2-3 µm, depending on how close the nucleus is to the structure. Avoid creating a mask that includes the structure, so that actin filaments bound to the structure are not included in the calculation (see figure 3.14d).

5. Threshold the MIP of the 3D stack in the actin channel using Otsu's method, and apply this value to all slices.

6. Perform a pixelwise multiplication of the big mask with all slices of the thresholded 3D image stack.

7. Assess the density of the actin filaments on the bottom and the top side of the nucleus. The bottom side is defined as the slices from the middle section of the nucleus down to the first slice; the top side as the slices from the middle section up to the last slice. The middle section is defined by estimating the middle slice number of the deconvolved nucleus since the dec