
Cortical Thickness Atlas Building

Sophia Han

April 15, 2009

Contents

1. Pipeline of the steps
2. Software Description
3. Preprocessing Steps
4. Cortical Thickness Atlas Building
4.1 Atlas Building Procedure
4.1.1) Cortical Thickness computation for each image using the CortThick tool
4.1.2) Groupwise Registration
4.1.2.1) Input and Output files
4.1.2.2) Parameters
4.1.2.3) Running GroupwiseRegistration
4.1.3) Compute Outputs
4.1.4) Compute Deformation Fields
4.1.5) Multiply Images by factor f
4.1.6) Transformation
4.1.7) Atlas Tissue Segmentation
4.1.8) Extract label with white matter and grey matter masks from the tissue segmentation (ImageMath)
4.1.9) Combining all the deformed cortical thickness maps into one image using AtCortThick
4.1.10) Create Mesh Cortical Thickness using MeshCortThick
4.1.11) 3D Cortical Thickness Visualization using KWMeshVisuUNC
References


In this project I built an atlas for cortical thickness; this document provides an overview of the procedure and its software environment. Before being used as input, the T1 brain images of two- and four-year-old patients were post-processed with the expectation-maximization segmentation (EMS) algorithm.

This documentation has four sections. The first is the pipeline of the steps, the second is the software description, the third covers the preprocessing steps, and the fourth is the procedure used for building the cortical thickness atlas.

1. Pipeline of the steps

Figure 1. Pipeline of the cortical thickness atlas building.

Preprocessing Steps:
1. Original Image
2. Tissue Segmentation and Labeling (itkEMS)
3. Skull Stripping

Cortical Thickness Atlas Building:
4. Cortical Thickness Computation for each image using the CortThick tool
5. Multiply Images using ImageMath
6. Group-wise Image Registration
7. Compute Outputs
8. Compute Deformation Fields
9. Compute Scalar Transformation
10. Atlas Tissue Segmentation
11. Extract Label with white and grey matter using ImageMath
12. Combine all the deformed cortical thickness maps into one image (AtCortThick)

3D Visualization:
13. Create Mesh for 3D Visualization using MeshCortThick
14. 3D Cortical Thickness Visualization using KWMeshVisuUNC

2. Software Description

Building the cortical thickness atlas requires multiple software tools. The following is a list of those tools:

CortThick: computes a cortical thickness map for each individual case.
CortThick is located in
/usr/sci/projects/neuro/software_kraken/bin/CortThick

GroupwiseRegistration: builds a B-spline atlas from multiple input images.
GroupwiseRegistration is located in
/usr/sci/projects/neuro/software_kraken/na-mic_sandbox/multiimagereg-bin/bin/GroupwiseRegistration

ComputeOutputs: once the registration has run, computes the deformed images as output.
It is located in
/usr/sci/projects/neuro/software_kraken/na-mic_sandbox/multiimagereg-bin/bin/ComputeOutputs

ComputeDeformationFields: converts B-spline parameter files into deformation fields.
It is located in
/usr/sci/projects/neuro/software_kraken/na-mic_sandbox/multiimagereg-bin/bin/ComputeDeformationFields

scalartransform: transforms an image using a deformation field.
scalartransform is located in
/usr/sci/projects/neuro/software_kraken/bin/scalartransform

Tissue Segmentation: the tissue segmentation tool is located in
/usr/sci/projects/neuro/software_kraken/bin/brainseg_1.8d_linux64


ImageMath: creates the white and grey matter labels from the tissue segmentation of the atlas; it can also multiply images by a constant. ImageMath is located in
/usr/sci/projects/neuro/software_kraken/bin/ImageMath

AtCortThick: builds the cortical thickness atlas by combining all the deformed white and grey matter distance maps. AtCortThick is located in
/usr/sci/projects/neuro/software_kraken/bin/AtCortThick

MeshCortThick: creates the mesh files used to visualize the cortical thickness in 3D. MeshCortThick is located in
/usr/sci/projects/neuro/software_kraken/bin/MeshCortThick

KWMeshVisuUNC: a 3D visualization tool that renders the cortical thickness mesh file. KWMeshVisuUNC is located in
/usr/sci/projects/neuro/software_kraken/bin/KWMeshVisuUNC

3. Preprocessing steps

The preprocessing procedure includes three steps. The first is obtaining the original image data set. The second is tissue segmentation and labeling using the itkEMS tool. The last is skull stripping of the segmented image.

4. Cortical Thickness Atlas Building

4.1 Atlas Building Procedure

4.1.1) Cortical Thickness computation for each image using the CortThick tool

To compute the cortical thickness for each individual case, the -seg option of the CortThick tool is used, which requires a segmentation for each image in which the white matter and the grey matter are labeled 1 and 2, respectively. The EMS tissue segmentation label images are used as input.

The command is as follows:

usage: CortThick -seg <SegImageFileName> <labelWhite> <labelGrey> <OutputFileName> [options]
usage: CortThick -sep <WhiteMatterFileName> <GreyMatterFileName> <OutputFileName> [options]

Input:

-seg <SegImageFileName> <labelWhite> <labelGrey> <OutputFileName>
-sep <WhiteMatterFileName> <GreyMatterFileName> <OutputFileName>

Note:

-seg: Load the segmentation file (1 segmentation image), e.g. segmented into white matter, grey matter, and CSF.
-sep: Load the white matter and grey matter separately (2 images: 1 image for white matter and 1 image for grey matter).

Options:

-par <ParcellationFileName>: Parcellation file
-Wm:     Write white matter distance map image averaged along the boundary
-Gm:     Write Danielsson map on the grey matter
-Wc:     WhiteMatterComponent
-Gc:     GreyMatterComponent
-Vtk:    Write Vtk file
-Sdm:    Save cortical thickness on the white matter border (values in the histogram)
-BvsI:   Write two images: boundary cortical thickness and non-boundary cortical thickness
-GMMaps: Write two images: the distance map on the grey matter boundary and the average values along the grey matter boundary

An example is as follows:

> CortThick -seg 5066-004-02_10_T1_labels_EMS.gipl.gz 1 2 /cortthick/ -Wm -Sdm -GMMaps

The following figures show the two- and four-year-old cases after running CortThick:

Figure 2. Two-year-old 5066-004-01_10_T1_labels_EMS Grey Matter Image
Left: Average cortical thickness on the grey matter along the boundary. Right: Cortical thickness values computed by the algorithm (scattered points) on the grey matter.

Output files are:

1) 5066-004-01_10_T1_labels_EMS-DistanceMapAverageOnGrey.mha
2) 5066-004-01_10_T1_labels_EMS-DistanceMapOnGrey.mha


Figure 3. Two-year-old 5066-004-01_10_T1_labels_EMS White Matter Image
Left: Average cortical thickness on the white matter along the boundary. Right: Cortical thickness values computed by the algorithm (scattered points) on the white matter.

Output files are:

1) 5066-004-01_10_T1_labels_EMS-DistanceMapOnWhiteAvg.mha
2) 5066-004-01_10_T1_labels_EMS-DistanceMapOnWhite.mha

Figure 4. Four-year-old 5066-004-02_10_T1_labels_EMS Grey Matter Image


Left: Average cortical thickness on the grey matter along the boundary. Right: Cortical thickness values computed by the algorithm (scattered points) on the grey matter.

Output files are:

1) 5066-004-02_10_T1_labels_EMS-DistanceMapAverageOnGrey.mha
2) 5066-004-02_10_T1_labels_EMS-DistanceMapOnGrey.mha

Figure 5. Four-year-old 5066-004-02_10_T1_labels_EMS White Matter Image
Left: Average cortical thickness on the white matter along the boundary. Right: Cortical thickness values computed by the algorithm (scattered points) on the white matter.

Output files are:

1) 5066-004-02_10_T1_labels_EMS-DistanceMapOnWhiteAvg.mha
2) 5066-004-02_10_T1_labels_EMS-DistanceMapOnWhite.mha
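The per-case command above can be scripted over all cases. The following is a minimal sketch, not from the original document: it assumes the EMS label images sit in the current directory and that ./cortthick/ is the desired output directory, with labels 1 and 2 as white and grey matter as described above.

```shell
# Hypothetical batch driver for CortThick (output directory is an assumption).
CORTTHICK=/usr/sci/projects/neuro/software_kraken/bin/CortThick
OUTDIR=./cortthick/

cortthick_cmd () {
    # Build the CortThick command for one EMS label image:
    # label 1 = white matter, label 2 = grey matter.
    echo "$CORTTHICK -seg $1 1 2 $OUTDIR -Wm -Sdm -GMMaps"
}

for seg in *_labels_EMS.gipl.gz; do
    [ -e "$seg" ] || continue    # nothing to do if no images match
    cortthick_cmd "$seg" | sh    # drop "| sh" to preview the commands
done
```

Each case then produces its four distance-map outputs in the same output directory.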

4.1.2) Group-wise Registration

A B-spline basis for unbiased atlas building, a technique released by Polina Golland and Serdar Balci at MIT, was used to build the atlas. Group-wise registration is one of the B-spline atlas building tools; it takes a list of image files as input and produces B-spline atlas images as output.


4.1.2.1) Input and Output files

Two parameter files are specified for B-spline atlas building. The first is a text file that specifies a list of input files and an input and output directory. The ten input images are T1 skull-stripped images. The second is a text file of parameters, which is explained in subsection 4.1.2.2.

The following is an example of skull-filename.txt:

#
# The path of the input folder for images.
# All images are assumed to be in the same folder.
# (don't forget to have a slash as the last character)
#
# If images are in different folders, ignore
# this parameter and supply the full pathname
# as filename.
#
-i /home/sci/ehan/nworkspace/Research/CortThickData/skullstrip/

#
# The path of the output folder.
# All outputs are saved to this folder
#
-o /home/sci/ehan/nworkspace/Research/CortThickData/computeOutputs/

# names of the input files
# if inputFolder is specified, the pathname is relative to that
# folder. Otherwise supply the full pathname
#
-f 5066-004-01_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5090-003-02_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5066-004-02_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5132-005-01_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5084-003-01_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5132-005-02_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5084-003-02_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5158-004-01_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5090-003-01_10_T1_labels_EMS_skullstrip.gipl.gz
-f 5158-004-02_10_T1_labels_EMS_skullstrip.gipl.gz
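Maintaining the -f list by hand invites drift between the file and the data. A small sketch (an assumption, not part of the original tooling) that regenerates the list from whatever .gipl.gz images are actually in the skull-strip folder; the -o path is the one from the example above:

```shell
# Hypothetical generator for skull-filename.txt; paths are illustrative.
make_filenames_txt () {
    dir=$1    # input folder, with trailing slash
    echo "-i $dir"
    echo "-o /home/sci/ehan/nworkspace/Research/CortThickData/computeOutputs/"
    for f in "$dir"*.gipl.gz; do
        [ -e "$f" ] || continue
        echo "-f $(basename "$f")"   # filenames are relative to -i
    done
}

# make_filenames_txt /home/sci/ehan/nworkspace/Research/CortThickData/skullstrip/ > skull-filename.txt
```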


The following are figures of input images:


Figure 6. Input of ten skull-stripped images for group-wise registration.

Input files are:

1) 5066-004-01_10_T1_labels_EMS_skullstrip.gipl.gz
2) 5066-004-02_10_T1_labels_EMS_skullstrip.gipl.gz
3) 5084-003-01_10_T1_labels_EMS_skullstrip.gipl.gz
4) 5084-003-02_10_T1_labels_EMS_skullstrip.gipl.gz
5) 5090-003-01_10_T1_labels_EMS_skullstrip.gipl.gz
6) 5090-003-02_10_T1_labels_EMS_skullstrip.gipl.gz
7) 5132-005-01_10_T1_labels_EMS_skullstrip.gipl.gz
8) 5132-005-02_10_T1_labels_EMS_skullstrip.gipl.gz
9) 5158-004-01_10_T1_labels_EMS_skullstrip.gipl.gz
10) 5158-004-02_10_T1_labels_EMS_skullstrip.gipl.gz

4.1.2.2) Parameters

The second text file takes a list of parameters. The useBspline option should be enabled. The useBsplineHigh option should be disabled initially; it controls whether the B-spline control points are refined after the initial B-spline registration. Users can experiment with this option, but it does not always run reliably. The three numberOfSpatialSamples options control the percentage of the image used in computing the image match term. Increasing these values improves the stability of the optimization but requires more computation time; if the objective function does not decrease consistently, try increasing them. The useNormalizeFilter option should be disabled, because the intensities of the images should be standardized prior to running this algorithm.

Using Affine Alignment to a Template

If users wish to use an affine alignment of each image to a template from a previous registration, they should create the directory tree Affine/TransformFiles in the output directory. In this directory there should be one file for each input image specified in the skull-filename.txt file. Each file should have a .txt extension and contain an ITK AffineTransform in double precision.

The following is an example of parameters.txt:

#
# PARAMETERS OF BINARY FILE
#
#
# metricType specifies which objective function to use
# possible options:
#
# entropy: congealing with entropy
# variance: registering to the mean template image using
#           sum of square differences
-metricType variance
#
# If useBspline is set off, only affine registration is done;
# if it is on and useBsplineHigh is off, bspline registration
# is done with the specified grid region. If useBsplineHigh is on,
# bspline registration with mesh refinement is performed
#
# Options to use:
# useBspline on/off
# useBsplineHigh on/off
#
-useBspline on
-useBsplineHigh off
#
# defines the initial bspline grid size along each dimension
#
-bsplineInitialGridSize 8
#
# When using Bspline grid refinement, this option defines
# how many refinements to use. After each level the number
# of B-spline control points is doubled (8->16->32)
#
-numberOfBsplineLevel 2
#
# All objective functions make use of stochastic subsampling;
# the following options define the number of spatial samples as a
# percentage of the total number of voxels in the image (try
# to increase the number of samples if the registration
# accuracy is poor)
#
-numberOfSpatialSamplesAffinePercentage 0.050
-numberOfSpatialSamplesBsplinePercentage 0.1
-numberOfSpatialSamplesBsplineHighPercentage 0.2
#
# the following options define the number of multiresolution levels
# used in optimization; if set to one, no multiresolution
# optimization is performed. Affine/Bspline/BsplineHigh
# define the number of multiresolution levels used for each
# registration stage
#
# (For high resolution anatomical images at 256x256x128 voxels
# we used 3 levels; decrease the number of levels if the
# resolution of the input image is low)
-multiLevelAffine 2
-multiLevelBspline 2
-multiLevelBsplineHigh 2
#
# The following options define the number of iterations to be
# performed. Optimization is terminated after a fixed number
# of iterations; no other termination options are used
#
-optAffineNumberOfIterations 50
-optBsplineNumberOfIterations 40
-optBsplineHighNumberOfIterations 40
#
# The following options define the learning rate of the
# optimizers for each stage of the registration
#
# (decrease the learning rate if you get an
# "all samples mapped outside" error)
#
-optAffineLearningRate 1e-10
-optBsplineLearningRate 1e-6
-optBsplineHighLearningrate 1e-7
#
# Currently there are three optimizer types. "gradient" is
# a fixed step gradient descent search. "lineSearch" is
# gradient descent search where the step size is determined
# using line search.
#
# optimizerType gradient/lineSearch/SPSA
#
-optimizerType lineSearch
#
# Specifies the percentage increase in the sampling rate
# after each multiresolution level
#
-affineMultiScaleSamplePercentageIncrease 4.0
-bsplineMultiScaleSamplePercentageIncrease 4.0
#
# Specifies the increase in the number of iterations
# after each resolution level
#
-affineMultiScaleMaximumIterationIncrease 2.0
-bsplineMultiScaleMaximumIterationIncrease 2.0
#
# Specifies the optimizer step length increase after each
# multiresolution level
#
-affineMultiScaleStepLengthIncrease 4.0
-bsplineMultiScaleStepLengthIncrease 4.0
#
# the width of the parzen window to compute the entropy;
# used by all metric types computing entropy
#
-parzenWindowStandardDeviation 10.0
#
# Use the normalize filter to normalize input images
# to have mean zero and standard deviation 1.
#
-useNormalizeFilter off
#
# Write 3D images to file.
# turn off to save disk space
#
-write3DImages on
#
#############################################
# ADVANCED OPTIONS
#############################################
#
# the level of registration to be started.
# Use this option if you want to start the registration
# using the results of a previous registration
#
# 0 (default): no initialization, all registrations are
#              performed
# 1 : Affine parameters are read from the file
# 2 : Bspline parameters are read from file (initial size should
#     match the transform from file)
#
-StartLevel 0
#
# Uses a mask on the images. Only pixels inside the mask are
# considered during the registration. Possible options:
# mask none/single/all
# none: do not use a mask
# single: only use a mask for the first image
# all: use a mask for all images
#
#-mask all
#
# specifies the mask type
# possible options:
# maskType connectedThreshold/neighborhoodConnected
#
# connectedThreshold: adds all pixels to the mask if the value
#                     is smaller than threshold1, then adds connected
#                     pixels whose value is smaller than threshold2
# neighborhoodConnected: same as connectedThreshold but a
#                        pixel is added only if it is all
#                        connected within a radius of one
#
#-maskType connectedThreshold
#-threshold1 0
#-threshold2 1
#
# specifies the translation scale coefficients with respect
# to the affine coefficients; smaller values mean larger step
# size along translation directions (1/scale is used!)
-translationScaleCoeffs 1e-4
#
# Maximum number of iterations performed for a line search
# if the optimizer is lineSearch
#
-maximumLineIteration 6
-BSplineRegularizationFlag on
-gaussianFilterKernelWidth 5


4.1.2.3) Running GroupwiseRegistration

The command for running GroupwiseRegistration is as follows:

> GroupwiseRegistration filenames.txt parameters.txt

After running GroupwiseRegistration, the output files are stored in
output_directory/Bspline_Grid_8/TransformFiles/.

4.1.3) Compute Outputs

Once the registration has run, the deformed images are computed using the following command:

> ComputeOutputs skull-filename.txt parameters.txt

This step uses the same input files and the same output directory as GroupwiseRegistration. After running ComputeOutputs, the images are located in output_directory/Bspline_Grid_8/Images/.

The following are the resulting images:


Figure 7. Bspline_Grid_8 images after running group-wise, B-spline atlas registration.

Output files are:

1) 5066-004-01_10_T1_labels_EMS_skullstrip.gipl.gz
2) 5066-004-02_10_T1_labels_EMS_skullstrip.gipl.gz
3) 5084-003-01_10_T1_labels_EMS_skullstrip.gipl.gz
4) 5084-003-02_10_T1_labels_EMS_skullstrip.gipl.gz
5) 5090-003-01_10_T1_labels_EMS_skullstrip.gipl.gz
6) 5090-003-02_10_T1_labels_EMS_skullstrip.gipl.gz
7) 5132-005-01_10_T1_labels_EMS_skullstrip.gipl.gz
8) 5132-005-02_10_T1_labels_EMS_skullstrip.gipl.gz
9) 5158-004-01_10_T1_labels_EMS_skullstrip.gipl.gz
10) 5158-004-02_10_T1_labels_EMS_skullstrip.gipl.gz

4.1.4) Compute Deformation Fields

The B-spline parameters must be converted into deformation fields in order to apply the deformations to the cortical thickness images. Run ComputeDeformationFields as follows:

> ComputeDeformationFields skull-filename.txt parameters.txt

This creates deformation fields for each level of transformation. The output files are stored in
output_directory/Bspline_Grid_8/DeformationImage/.
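GroupwiseRegistration, ComputeOutputs, and ComputeDeformationFields all take the same two argument files, so the three stages can be driven from one list. A minimal sketch (the sandbox bin path is the one listed in Section 2; the driver itself is an assumption):

```shell
# Emit the three registration-stage commands in pipeline order.
BIN=/usr/sci/projects/neuro/software_kraken/na-mic_sandbox/multiimagereg-bin/bin
FILES=skull-filename.txt
PARAMS=parameters.txt

registration_steps () {
    for tool in GroupwiseRegistration ComputeOutputs ComputeDeformationFields; do
        echo "$BIN/$tool $FILES $PARAMS"
    done
}

registration_steps            # dry run: list the commands
# registration_steps | sh -e  # execute, stopping at the first failure
```

Running the stages through one driver guarantees they see identical argument files, which the tools require.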

4.1.5) Multiply Images by factor f (ImageMath)

To avoid losing floating-point precision in the transformation, the CortThick images are multiplied by a factor f; for the cortical thickness images we use a factor of 1000. To multiply an image "image1.mha" by a factor f, use ImageMath:

> /usr/sci/projects/neuro/software_kraken/bin/ImageMath <infile: im.mha or im.gipl> -constOper 2,f -outfile outfile_1000.mha

The -constOper option applies the same operation to all the voxels of an image, and the number "2" specifies multiplication.
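For f = 1000 the multiplication can be applied to every CortThick output in one pass. A sketch, assuming the distance maps live in a ./cortthick/ directory and that a _1000 suffix is acceptable for the scaled files (both are assumptions):

```shell
# Hypothetical batch scaling of CortThick distance maps by 1000.
IMAGEMATH=/usr/sci/projects/neuro/software_kraken/bin/ImageMath

scale_cmd () {
    # Build the ImageMath command that multiplies one image by 1000
    # (-constOper 2,1000: operation 2 = multiply, constant = 1000).
    in=$1
    out="${in%.mha}_1000.mha"
    echo "$IMAGEMATH $in -constOper 2,1000 -outfile $out"
}

for img in ./cortthick/*.mha; do
    [ -e "$img" ] || continue
    scale_cmd "$img" | sh   # drop "| sh" to preview the commands
done
```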

4.1.6) Transformation

The deformation of the individual cortical thickness images into the atlas space is done using scalartransform with the nearest-neighbor interpolation option. The deformation fields created in /Bspline_Grid_8/DeformationImage are used to deform the *.mha images created by the CortThick tool in step 4.1.1.

The command is as follows:

> scalartransform -i <input image> -o <output image> -d <deformation field (text file)> --interpolation nearestneighbor

For example:

> scalartransform -i InsideCorticalThickness.mha -o InsideCorticalThicknessScalarTransform.mha -d forward-5158-004-02_10_T1_labels_EMS_skullstrip.mhd --interpolation nearestneighbor
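Each case has four distance maps (see 4.1.1), and each must be deformed with that case's own field. A per-case driver sketch; the forward-<case>_skullstrip.mhd field name follows the example above and the map suffixes follow the CortThick output names, but the naming convention is an assumption to verify against your data:

```shell
# Hypothetical per-case scalartransform driver (file naming is an assumption).
warp_case () {
    subj=$1    # e.g. 5066-004-01_10_T1_labels_EMS
    field="forward-${subj}_skullstrip.mhd"
    for map in DistanceMapAverageOnGrey DistanceMapOnGrey \
               DistanceMapOnWhiteAvg DistanceMapOnWhite; do
        echo "scalartransform -i ${subj}-${map}.mha" \
             "-o ${subj}-${map}ScalarTransform.mha" \
             "-d $field --interpolation nearestneighbor"
    done
}

warp_case 5066-004-01_10_T1_labels_EMS          # dry run: print the commands
# warp_case 5066-004-01_10_T1_labels_EMS | sh   # execute
```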

The following are results of the scalartransform:


Figure 8. Transformed images of the distance maps on white matter and grey matter (two-year-old). Left images: average distance maps on grey matter and white matter. Right images: scattered distance maps on grey matter and white matter.

Output files are:

1) 5066-004-01_10_T1_labels_EMS-DistanceMapAverageOnGreyScalarTransform.mha
2) 5066-004-01_10_T1_labels_EMS-DistanceMapOnGreyScalarTransform.mha
3) 5066-004-01_10_T1_labels_EMS-DistanceMapOnWhiteAvgScalarTransform.mha
4) 5066-004-01_10_T1_labels_EMS-DistanceMapOnWhiteScalarTransform.mha


Figure 9. Transformed images of the distance maps on white matter and grey matter (four-year-old). Left images: average distance maps on grey matter and white matter. Right images: scattered distance maps on grey matter and white matter.

Output files are:

1) 5066-004-02_10_T1_labels_EMS-DistanceMapAverageOnGreyScalarTransform.mha
2) 5066-004-02_10_T1_labels_EMS-DistanceMapOnGreyScalarTransform.mha
3) 5066-004-02_10_T1_labels_EMS-DistanceMapOnWhiteAvgScalarTransform.mha
4) 5066-004-02_10_T1_labels_EMS-DistanceMapOnWhiteScalarTransform.mha

4.1.7) Atlas Tissue Segmentation

The atlas tissue segmentation takes an XML file. The XML file specifies the Mean.gipl.gz input file, the atlas directory, the output directory, and other options.

The command is as follows:

> brainseg_1.8d_linux64 EMSparam.xml


An example EMSparam.xml is as follows:

<?xml version="1.0"?>
<!DOCTYPE SEGMENTATION-PARAMETERS>
<SEGMENTATION-PARAMETERS>
<SUFFIX>EMS</SUFFIX>
<ATLAS-DIRECTORY>/home/sci/gouttard/Projects/04-Autism-ACENetwork/PhantomDataAll/processing/Atlases/segAtlas</ATLAS-DIRECTORY>
<ATLAS-ORIENTATION>PSL</ATLAS-ORIENTATION>
<OUTPUT-DIRECTORY>/home/sci/ehan/nworkspace/Research/CortThickData/computeOutputs/AtlasTissueSegmentation</OUTPUT-DIRECTORY>
<OUTPUT-FORMAT>GIPL</OUTPUT-FORMAT>
<IMAGE>
  <FILE>/home/sci/ehan/nworkspace/Research/CortThickData/computeOutputs/Bspline_Grid_8/MeanImages/Mean.gipl.gz</FILE>
  <ORIENTATION>RIA</ORIENTATION>
</IMAGE>
<FILTER-ITERATIONS>10</FILTER-ITERATIONS>
<FILTER-TIME-STEP>0.01</FILTER-TIME-STEP>
<FILTER-METHOD>Curvature flow</FILTER-METHOD>
<MAX-BIAS-DEGREE>4</MAX-BIAS-DEGREE>
<PRIOR-1>1.2</PRIOR-1>
<PRIOR-2>1</PRIOR-2>
<PRIOR-3>0.7</PRIOR-3>
<PRIOR-4>1</PRIOR-4>
<DO-ATLAS-WARP>0</DO-ATLAS-WARP>
<ATLAS-WARP-GRID-X>7</ATLAS-WARP-GRID-X>
<ATLAS-WARP-GRID-Y>7</ATLAS-WARP-GRID-Y>
<ATLAS-WARP-GRID-Z>7</ATLAS-WARP-GRID-Z>
</SEGMENTATION-PARAMETERS>


The images below are the results of the atlas tissue segmentation:

Figure 10. Atlas labeled tissue segmentation image.

Output files are:

1) Mean_posterior1_EMS.gipl
2) Mean_posterior2_EMS.gipl
3) Mean_registered_EMS.gipl
4) Mean_corrected_EMS.gipl
5) Mean_labels_EMS.gipl
6) Mean_posterior0_EMS.gipl

To verify the results, I overlaid the transformed images on the mean atlas tissue segmentation image. As shown in Figures 9 and 10, there is little difference between the scalar-transformed images and the mean corrected atlas tissue segmentation, which shows that each cortical thickness image was deformed into the atlas space correctly.

The following are the overlays of the transformed images on the mean corrected atlas tissue segmentation image:


Figure 11. Overlay images between transformed image (5066-004-01_10_T1_labels-*) and mean corrected atlas tissue segmentation.


Figure 12. Overlay of the transformed image (5066-004-02_10_T1_labels-*) on the mean corrected atlas tissue segmentation.


4.1.8) Extract white matter and grey matter masks from the tissue segmentation (ImageMath)

We now extract separate white matter and grey matter masks from the tissue segmentation of the atlas. This is done with ImageMath using the -extractLabel option.

Usage of this method is:

1) Create white matter label from the tissue segmentation of the atlas (Mean_labels_EMS.gipl)

>ImageMath ../AtlasTissueSegmentation/Mean_labels_EMS.gipl -extractLabel 1 -outfile Mean_labels_EMS-WM.gipl

2) Create grey matter label from the tissue segmentation of the atlas (Mean_labels_EMS.gipl)

>ImageMath ../AtlasTissueSegmentation/Mean_labels_EMS.gipl -extractLabel 2 -outfile Mean_labels_EMS-GM.gipl
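Conceptually, -extractLabel binarizes the label image: every voxel carrying the requested label becomes 1, everything else becomes 0. The following Python sketch is a hypothetical illustration of that operation on a flattened voxel list (the label values 1 = white matter, 2 = grey matter follow the commands above):

```python
def extract_label(image, label):
    """Binary mask: 1 where the voxel carries the given label, else 0.
    Mimics the effect of ImageMath -extractLabel on a flattened voxel list."""
    return [1 if v == label else 0 for v in image]

# Toy label image: 0 = background, 1 = white matter, 2 = grey matter, 3 = CSF
labels = [0, 1, 2, 2, 1, 3, 0, 1]
wm = extract_label(labels, 1)   # analogous to -extractLabel 1
gm = extract_label(labels, 2)   # analogous to -extractLabel 2
print(wm)  # → [0, 1, 0, 0, 1, 0, 0, 1]
print(gm)  # → [0, 0, 1, 1, 0, 0, 0, 0]
```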

The output images are as follows:

Figure 13. Extracted grey matter and white matter labels from the tissue segmentation of the atlas.

Output files are:

1) Mean_labels_EMS-GM.gipl
2) Mean_labels_EMS-WM.gipl


4.1.9) Combining all the deformed cortical thickness maps into one image using AtCortThick

Once the grey and white matter labels have been extracted from the atlas tissue segmentation, we want to combine all the deformed images in the atlas space into one image.

The usage of AtCortThick is as follows:

>AtCortThick --help

-w  / --white  : WhiteMatter Image
-g  / --grey   : GreyMatter Image
-d  / --distance : White DistanceMaps Directory, Grey DistanceMaps Directory
-o  / --output : Output Directories for whiteDestinationMap, Output Directories for greyDestinationMap
-Im / --IntermediateMap : Write the intermediate Map for every case
-h  / --help   : display this help

For example:

> AtCortThick
  -w <White matter image created in section 4.1.8>
  -g <Grey matter image created in section 4.1.8>
  -d <1: White matter: deformed images in the atlas space (output from section 4.1.6)>
     <2: Grey matter: deformed images in the atlas space (output from section 4.1.6)>
  -o <Output directory for white matter> <Output directory for grey matter>

> AtCortThick -w ../ImageMath_ouput/Mean_labels_EMS-WM.gipl -g ../ImageMath_ouput/Mean_labels_EMS-GM.gipl -d ../scalarTransformOutput/WM/ ../scalarTransformOutput/GM/ -o ../AtCorThick_output/WM/ ../AtCorThick_output/GM/


The following are results of AtCortThick:

Figure 14. Output images of grey matter and white matter after running AtCortThick.

Output files are:

1) average­GM.mha 2) average­WM.mha
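The combination step amounts to a voxelwise average of the per-subject thickness maps, restricted to the tissue mask. The following Python sketch is a hypothetical illustration of that idea on flattened toy lists (it is not the AtCortThick implementation):

```python
def average_maps(maps, mask):
    """Voxelwise mean of several thickness maps, restricted to a binary mask.
    Voxels outside the mask are set to 0, mimicking a combined average image."""
    n = float(len(maps))
    return [sum(m[i] for m in maps) / n if mask[i] else 0.0
            for i in range(len(mask))]

# Toy flattened thickness maps from three subjects, already in atlas space.
maps = [[0.0, 3.0, 2.5, 0.0],
        [0.0, 3.5, 3.0, 0.0],
        [0.0, 2.5, 2.0, 0.0]]
mask = [0, 1, 1, 0]              # e.g. a grey matter mask from section 4.1.8
print(average_maps(maps, mask))  # → [0.0, 3.0, 2.5, 0.0]
```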

4.1.10) Create Mesh Cortical Thickness (MeshCortThick)

To visualize cortical thickness in 3D, we create a 3D mesh using MeshCortThick.

MeshCortThick takes two inputs with the -i option. The first input is a white matter or grey matter mask of the atlas created by ImageMath, and the second input is the cortical thickness along the boundary created with AtCortThick. This produces three outputs: a mesh file (*.meta), a text file (*.txt), and a vtk file (*.vtk). The mesh file contains volume information, and the text file contains the scalar cortical thickness at each mesh point.

The usage of MeshCortThick is as follows:

usage: MeshCortThick -i <ImageToMeshFileName> <ThicknessFileName>
or     MeshCortThick -m <MeshFileName> <ThicknessFileName>

Input:
-i <ImageToMeshFileName> <ThicknessFileName>
-m <MeshFileName> <ThicknessFileName>


For example:

1) White Matter:
>MeshCortThick -i <ImageToMeshFileName: White Matter mask of the atlas created by ImageMath (section 4.1.8)> <ThicknessFileName: Cortical thickness along the White Matter boundary created with AtCortThick>

2) Grey Matter:
>MeshCortThick -i <ImageToMeshFileName: Grey Matter mask of the atlas created by ImageMath (section 4.1.8)> <ThicknessFileName: Cortical thickness along the Grey Matter boundary created with AtCortThick>
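The text file output pairs one scalar thickness with each mesh point, so simple summary statistics can be read straight from it. The following Python sketch is a hypothetical illustration; it assumes, for simplicity, one thickness value per line, which may differ from the exact layout of the MeshCortThick *.txt output:

```python
import io

# Hypothetical per-vertex attribute file: one scalar thickness per mesh
# point, one value per line (stand-in for a MeshCortThick *.txt output).
attribute_txt = io.StringIO("3.1\n2.8\n3.4\n2.9\n")

thickness = [float(line) for line in attribute_txt if line.strip()]
mean = sum(thickness) / len(thickness)
print("points: %d, mean thickness: %.2f mm" % (len(thickness), mean))
# → points: 4, mean thickness: 3.05 mm
```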

4.1.11) 3D Cortical Thickness Visualization (KWMeshVisuUNC)

Once the 3D mesh cortical thickness is created, we visualize the cortical thickness map on the white matter and grey matter with the tool KWMeshVisuUNC. Two steps are required to render the 3D cortical thickness: first, load the mesh file (*.meta) created by MeshCortThick; second, load the 1D attribute file (*.txt) created by MeshCortThick.

The command is as follows:

>/usr/sci/projects/neuro/software_kraken/bin/KWMeshVisuUNC &

The following are results of 3D cortical thickness on the grey matter and white matter:

Figure 15. 3D Visualization of cortical thickness on the grey matter.


Figure 16. 3D Visualization of cortical thickness on the white matter.



