
Chapter 2

Literature survey


Chapter 2: Literature survey

2.1 Introduction
2.2 Literature Survey on Image Processing and its Applications
2.3 Literature Survey on MATLAB based Image Processing Medical Applications
2.4 Literature Survey on ANN based Image Processing Applications
2.5 Literature Survey for the Detection of Tuberculosis
2.6 Motivation for the Present Work
2.7 Objectives for the Present Research Work

References


2.1 Introduction

This chapter deals with the literature survey of digital image processing and applications of image processing using different techniques. It also surveys MATLAB based image processing medical applications, ANN based image processing applications, and the medical and technical methods used for the detection of tuberculosis. From this elaborated literature survey, the motivation for the present work is presented. References are provided at the end of the chapter.

2.2 Literature survey on image processing and its applications

Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, the Massachusetts Institute of Technology, Bell Laboratories, and the University of Maryland. Early applications such as satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement were also explored [1].

Suezou Nakadate et al. [2] discussed the use of digital image processing techniques for electronic speckle pattern interferometry. A digital TV-image processing system with a large frame memory allowed them to perform precise and flexible operations such as subtraction, summation, and level slicing. Digital image processing techniques made it easier than analog techniques to generate high-contrast fringes.

Satoshi Kawata et al. [3] discussed the characteristics of the iterative image-restoration

method modified by the reblurring procedure through an analysis in frequency space. An

iterative method for solving simultaneous linear equations for image restoration has an

inherent problem of convergence. The introduction of the procedure called “reblur”

solved this convergence problem. This reblurring procedure also served to suppress noise

amplification. Two-dimensional simulations using this method indicated that a noisy

image degraded by linear motion can be well restored without noticeable noise

amplification.

61

William H [4] highlighted the progress in the image processing and analysis of digital

images during the past ten years. The topics included digitization and coding, filtering,

enhancement and restoration, reconstruction from projections, hardware and software,

feature detection, matching, segmentation, texture and shape analysis, and pattern

recognition and scene analysis.

David W. Robinson [5] presented the application of a general-purpose image-processing

computer system to automatic fringe analysis. Three areas of application were examined

where the use of a system based on a random access frame store has enabled a processing

algorithm to be developed to suit a specific problem. Furthermore, it enabled automatic

analysis to be performed with complex and noisy data. The applications considered were

strain measurement by speckle interferometry, position location in three axes, and fault

detection in holographic nondestructive testing. A brief description of each problem is

presented, followed by a description of the processing algorithm, results, and timings.

S V Ahmed [6] discussed work concentrating on the simulation and image processing aspects of data transmission over subscriber lines, toward the development of an image processing system for eye statistics.

P K Sahoo et al. [7] presented a survey of thresholding techniques, updating earlier survey work. An attempt was made to evaluate the performance of some automatic global thresholding methods using criterion functions such as uniformity and shape measures. The evaluation was based on real-world images.
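As an illustrative sketch only (the specific uniformity and shape criteria of [7] are not reproduced here), one common automatic global thresholding scheme is the iterative-mean (ISODATA-style) threshold, assuming a roughly bimodal image:

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """ISODATA-style automatic global threshold: repeatedly split the
    pixels at t and move t to the midpoint of the two class means."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

# synthetic bimodal "image": dark background (40) and bright object (200)
img = np.concatenate([np.full(100, 40.0), np.full(100, 200.0)])
t = iterative_threshold(img)     # lands between the two modes
```

The criterion functions mentioned above would then be used to compare the binarizations produced by competing thresholds.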

Marc Antonini et al. [8] proposed a new scheme for image compression taking psychovisual features into account in both the space and frequency domains. The method involves two steps. First, a wavelet transform is used to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. Second, according to Shannon's rate-distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. Furthermore, to encode the wavelet coefficients, a noise-shaping bit allocation procedure was proposed which assumes that details at high resolution are less visible to the human eye. Finally, in order to allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme was presented. It was shown that the wavelet transform is particularly well adapted to progressive transmission.
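The pyramidal decomposition step can be illustrated with the simplest wavelet, the Haar transform (a sketch only; Antonini et al. use biorthogonal filter banks, not Haar):

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet decomposition (pyramidal step)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass
    ll = (a[0::2] + a[1::2]) / 2.0            # approximation subband
    lh = (a[0::2] - a[1::2]) / 2.0            # horizontal detail
    hl = (d[0::2] + d[1::2]) / 2.0            # vertical detail
    hh = (d[0::2] - d[1::2]) / 2.0            # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d(img)   # four half-resolution subbands
```

Repeating the step on `ll` yields the multi-scale pyramid; the detail subbands are what a coder would quantize most coarsely.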


Harpen MD [9] presented a wavelet theory geared specifically for the radiological

physicist. As a result, the radiological physicist can expect to be confronted with

elements of wavelet theory as diagnostic radiology advances into teleradiology, PACS,

and computer aided feature extraction and diagnosis.

Salem Saleh Al-amri et al. [10] undertook a study of image segmentation techniques using five threshold methods: the mean method, the P-tile method, the histogram dependent technique (HDT), the edge maximization technique (EMT), and the visual technique. These were compared with one another so as to choose the best technique for threshold-based segmentation. The techniques were applied to three satellite images to choose base guesses for threshold segmentation.

Wiecek B. et al. [11] proposed new image processing tools for the conversion of thermal and visual images, mainly for applications in medicine and biology. A novel method for area and distance evaluation based on statistical differencing was discussed. In order to increase measurement accuracy, interpolation and sub-pixel bitmap processing were chosen.

Patnaik et al. [12] presented an image compression method using an auto-associative neural network and embedded zero-tree coding. The role of the neural network (NN) is to decompose the image stage by stage, which enables an analysis similar to wavelet decomposition. This works on the principle of principal component extraction (PCE). Network training is achieved through a recursive least squares (RLS) algorithm. The coefficients are arranged in a four-quadrant sub-band structure, and the zero-tree coding algorithm is employed to quantize them. The system outperformed the embedded zero-tree wavelet scheme in a rate-distortion sense, with the best perceptual quality for a given compression ratio.

Shanhui Sun, Christian Bauer et al. [13] presented a fully automated approach for

segmentation of lungs in CT datasets. The method was specifically designed to robustly

segment lungs with cancer masses and consists of three processing steps. First, a ribcage

detection algorithm is utilized to initialize the model-based segmentation method. Second,

a robust active shape model matching approach is applied to roughly segment the outline

of the lungs. Third, the outline of the matched model is further adapted to the image data

by means of an optimal surface finding approach. The method was evaluated on the


LOLA11 test set, consisting of 55 chest CT scans with a variety of different lung diseases

and scan protocols. Compared to a reference standard, mean average and median

volumetric overlap scores of 0.949 and 0.990 were achieved, respectively. Several

examples demonstrated the ability of our method to successfully segment lungs with

cancer masses.

Sonal et al. [14] presented various types of image compression techniques. There are basically two types of compression: lossless compression and lossy compression. Comparing the performance of compression techniques is difficult unless identical data sets and performance measures are used. Some of these techniques are well suited to certain applications such as security technologies; some perform well for certain classes of data and poorly for others.
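The lossless/lossy distinction can be made concrete with run-length encoding, one of the simplest lossless schemes: the decoder reconstructs the input exactly, whereas a lossy scheme would trade exactness for a higher compression ratio.

```python
def rle_encode(data):
    """Run-length encoding: store each run as a (value, count) pair."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    return [v for v, n in pairs for _ in range(n)]

row = [0, 0, 0, 255, 255, 0, 0]      # one row of a binary-ish image
enc = rle_encode(row)                # [(0, 3), (255, 2), (0, 2)]
assert rle_decode(enc) == row        # lossless: exact reconstruction
```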

Suzuki K. et al. [15] developed an image-processing technique for suppressing the contrast of ribs and clavicles in chest radiographs by means of a multiresolution massive training artificial neural network (MTANN). An MTANN is a highly nonlinear filter that can be trained using input chest radiographs and corresponding "teaching" images; "bone" images obtained with a dual-energy subtraction technique were employed as the teaching images. A validation test database consisting of 118 chest radiographs with pulmonary nodules and an independent test database consisting of 136 digitized screen-film chest radiographs with 136 solitary pulmonary nodules, collected from 14 medical institutions, were used in this study.

Weixing Wang et al. [16] presented a newly developed ridge detection algorithm to

diagnose indeterminate nodules correctly, allowing curative resection of early-stage

malignant nodules and avoiding the morbidity and mortality of surgery for benign

nodules. The algorithm was compared to some traditional image segmentation algorithms.

All the results are satisfactory for diagnosis.

Md. Foisal Hossain et al. [17] presented an enhancement technique based upon a new

application of contrast limited adaptive histograms on transform domain coefficients

called logarithmic transform coefficient adaptive histogram equalization (LTAHE). The

method is based on the properties of logarithmic transform domain histogram and contrast

limited adaptive histogram equalization. A measure of enhancement based on contrast

measure with respect to transform was used as a tool for evaluating the performance of


the proposed enhancement technique and for finding optimal values for variables

contained in the enhancement. The algorithm's performance was compared quantitatively

to classical histogram equalization using the aforementioned measure of

enhancement. Experimental results were presented to show the performance of the

proposed algorithm alongside classical histogram equalization.
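A minimal sketch of the underlying idea, equalizing the histogram of log-transformed intensities, is shown below; this global, rank-based version omits the contrast-limited adaptive tiling of the actual LTAHE method:

```python
import numpy as np

def log_hist_equalize(img):
    """Equalize the histogram of log-transformed intensities (global
    version; a sketch of the idea behind LTAHE, not the method itself)."""
    log_img = np.log1p(img.astype(float))
    flat = log_img.ravel()
    ranks = flat.argsort().argsort()    # rank of each pixel value
    cdf = (ranks + 1) / flat.size       # empirical CDF
    return (cdf * 255.0).reshape(img.shape)

img = np.array([[10, 10], [50, 200]], dtype=np.uint8)
out = log_hist_equalize(img)            # contrast spread over 0..255
```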

Wenhong Li et al. [18] presented a paper currency classification system using image processing techniques. The processing effect and recognition accuracy of RMB are an important part of such a system. According to the characteristics of RMB images, the work applied digital image processing and pattern recognition theory to RMB image processing, based on the processing and recognition of the RMB serial-number region, a linear perceptron trained by a reward-and-punishment method, and a serial-number character extraction method. Experiments on a paper currency classification system using a CIS sensor for image acquisition verified that this recognition method has high feasibility and recognition accuracy.

Li Minxia et al. [19] designed defect extraction by image segmentation. First, on the basis of wavelet analysis, a new wavelet adaptive threshold denoising method based on genetic algorithm optimization was proposed. Second, a multi-scale morphological algorithm for local contrast enhancement was designed. Finally, the background was simulated and the defect regions were extracted using a digital subtraction algorithm. The experimental results indicated that these methods can achieve automatic extraction of the defect region, which is a good foundation for flaw feature parameter extraction and selection.

Kanwal, N. et al. [20] dealt with contrast enhancement of X-ray images and presented a new approach based upon an adaptive neighborhood technique. A hybrid enhancement methodology was presented. Comparative analysis of the proposed technique against the existing major contrast enhancement techniques was performed, and the results of the proposed technique are promising.

Noorhayati Mohamed Noor et al. [21] presented the enhancement capability of adaptive histogram equalization (AHE) on soft tissue lateral neck radiographs for suspected fish bone ingestion. An embedded fish bone lodged in the throat is not easily visible in an unprocessed plain radiograph, and serious complications can include perforation at the lodgement site and inflammation that can progress to an abscess. Due to the high resolution, the images were cropped before being processed with adaptive histogram equalization. The quality of the images was assessed and evaluated pre- and post-processing by radiologists. The results showed AHE to be a promising contrast enhancement for the detection of fish bones in soft tissue on lateral neck radiographs.

Lu Zhang et al. [22] described diffraction-enhanced imaging (DEI) and investigated its capability to observe different types of tissue. DEI is a synchrotron-based imaging technique that generates high spatial resolution and contrast for both calcified and soft tissues. The technique provides not only the absorption information of conventional X-ray imaging but also refraction and scattering properties. In this study, MIR was used to extract information from a series of DEI images.

Md. Foisal Hossain et al. [23] proposed a method of medical image enhancement based upon a non-linear technique and logarithmic transform coefficient histogram equalization, using EME as a measure of performance. The performance of this algorithm was compared to a classical histogram equalization enhancement technique. The method improves the visual quality of images that contain dark shadows due to the limited dynamic range of imaging modalities such as X-ray. Experimental results ascertained that the proposed technique outperforms commonly used enhancement techniques like histogram equalization, both qualitatively and quantitatively.

Hasan Demirel [24] introduced a new face recognition technique based on the gray-level co-occurrence matrix (GLCM). The GLCM represents the distributions of the intensities and the information about the relative positions of neighboring pixels of an image. Two methods were proposed to extract feature vectors using the GLCM for face classification. The first method extracts the well-known Haralick features from the GLCM, and the second method directly uses the GLCM by converting the matrix into a vector that can be used in the classification process. The results demonstrated that the second method, which uses the GLCM directly, is superior to the first method, which uses the feature vector containing the statistical Haralick features, in both nearest neighbour and neural network classifiers. The proposed GLCM based face recognition system not only outperforms well-known techniques such as principal component analysis and linear discriminant analysis, but also has comparable performance with local binary patterns and Gabor wavelets.
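A gray-level co-occurrence matrix simply counts how often pairs of gray levels occur at a fixed pixel offset; the sketch below computes one (the offset, number of levels, and the tiny test image are illustrative choices, not those of [24]):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 0]])
m = glcm(img, levels=3)   # counts of horizontal neighbor pairs
vec = m.ravel()           # "method two" idea: use the GLCM itself as a feature vector
```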

Pu J et al [25] presented a shape “break-and-repair” strategy for medical image

segmentation and applied it to the segmentation of human lung and pulmonary nodules in

this study. In this approach, the regions that may cause any problems in segmentation

were removed and then estimated using implicit surface fitting based on RBFs. Its most

important characteristic is the capability of segmenting anatomical structures depicted on

medical images in a unified framework within a single pass. The preliminary assessment

results are encouraging and demonstrated the feasibility, generality, and robustness of this

strategy in segmentation.

Hongsheng Li et al [26] proposed a novel predictive model, active volume model (AVM),

for object boundary extraction. It is a dynamic “object” model whose manifestation

includes a deformable curve or surface representing a shape, a volumetric interior

carrying appearance statistics, and an embedded classifier that separates object from

background based on current feature information. The model focused on an accurate

representation of the foreground object’s attributes, and does not explicitly represent the

background. They showed, however, that the model is capable of reasoning about background statistics to detect when a change is sufficient to invoke a boundary decision.

Sadeer G. Al-Kindi et al. [27] proposed a novel hybrid and repetitive smoothing-sharpening technique (HRSST) and assessed its impact on beneficially enhancing sonogram and mammogram images. The technique aims to combine the advantages of the sharpening process, which highlights sudden changes in image intensity, with those of iterative image smoothing, which is usually applied to remove random noise from digital images. The developed technique also eliminates the drawbacks that each of the sharpening and smoothing techniques exhibits when applied individually in image processing. The proposed technique was tested on both breast ultrasound (BUS) images and breast X-ray mammograms. Results showed that the proposed methodology has high potential to advantageously enhance image contrast, giving extra aid to radiologists in detecting and classifying sonograms and mammograms.

67

Sandeep Kumar et al. [28] provided a framework for denoising enhanced images based on prior knowledge of histogram equalization. Several image enhancement schemes, contrast limited adaptive histogram equalization (CLAHE), equal area dualistic sub-image histogram equalization (DSIHE), and dynamic histogram equalization (DHE), were implemented and compared after denoising using wavelet thresholding. The performance of all these methods with denoising was analyzed, and a number of practical experiments on real-time images were presented. From the experimental results, it was found that the three techniques with denoising yield different aspects for different parameters.
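The wavelet-thresholding denoising step referred to above typically shrinks small (noise-dominated) wavelet coefficients toward zero; a minimal soft-thresholding sketch:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding: zero coefficients below t in magnitude and
    shrink the rest toward zero by t (a standard wavelet denoising step)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 4.0])   # toy wavelet coefficients
d = soft_threshold(c, 1.0)             # small coefficients vanish
```

In a full pipeline the thresholded coefficients would be passed through the inverse wavelet transform to obtain the denoised image.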

Xiaoyan Xu [29] implemented the embedded zero-tree wavelet (EZW) algorithm, a simple yet remarkably effective image compression algorithm. The experiment was done on a set of standard images, and the results show the good performance of this algorithm compared to some other compression schemes. EZW has proved to be a very effective image compression method based on the mean-square error (MSE) distortion measure. Coding results shown in the paper illustrate the performance of this approach.

Arpita Mittal et al. [30] reported that, since to date there is no proven cure for rheumatoid arthritis (RA), close monitoring of the disease is important in its medical treatment. An application of image processing techniques for identification of this common disease was chosen. In this paper, finger and knee images of patients with RA were analyzed through morphological image processing techniques. The processed images find application in the field of medical science and can be beneficial for doctors in identifying disease stages from a monitoring point of view.

Jagadeesh et al. [31] presented preprocessing methods for leukemic blast cell images in order to generate features that well characterize different types of cells. The problems solved include segmentation of the bone marrow aspirate by applying the watershed transformation, selection of individual cells, and feature generation on the basis of texture, statistical, and geometrical analysis of the cells.

Kimmi Verma et al. [32] carried out research using software with edge detection and segmentation methods, which gave the edge pattern and segmentation of the brain and of the brain tumor itself. The research provides a foundation of segmentation and edge detection methods, reviewed with an emphasis on revealing their advantages and disadvantages for medical imaging applications. The use of image segmentation in different imaging modalities is also described, along with the difficulties encountered in each modality.

Ashraf Anwar et al. [33] introduced an inexpensive, user-friendly, general-purpose image processing and visualization program, specifically designed in MATLAB, to detect brain disorders as early as possible. The application provides clinical and quantitative analysis of medical images. Minute structural differences in the brain gradually result in major disorders such as schizophrenia, epilepsy, inherited speech and language disorders, and Alzheimer's dementia. The main focus here is on diagnosing disease related to the brain and its psychic nature (Alzheimer's disease). Medical imaging is otherwise expensive and highly sophisticated because of proprietary software and the expert personnel required.

Pallavi T. Suradkar [34] reviewed image analysis studies aimed at automated diagnosis or

screening of malaria infection in microscope images of thin blood film smears.

Md. Amran Hossen Bhuiyan et al. [35] reported that, in order to identify skin cancer at an early stage without performing unnecessary skin biopsies, digital images of melanoma skin lesions were investigated. To achieve this goal, feature extraction was considered an essential tool for analyzing an image appropriately. In this paper, different digital images were analyzed using unsupervised segmentation techniques; feature extraction techniques were then applied to the segmented images, and a comprehensive discussion was developed based on the obtained results. Signal and imaging investigations are currently a basic step in the diagnostic, prognostic, and follow-up processes for heart diseases.

R K Samantaray et al. [36] presented an effective way to achieve a high-level integration of signal and image processing methods in the general process of care by means of a clinical decision support system (CDSS), and discussed the advantages of such an approach. In particular, significant and suitably designed image and signal processing algorithms were introduced to objectively and reliably evaluate important features that, in collaboration with the CDSS, could facilitate decision problems in the heart failure domain. Furthermore, additional signal and image processing tools enrich the model base of the CDSS.

S. Kannadhasan et al. [37] described a method which not only effectively detects the presence of cancer cells but also reduces the overall time taken for diagnosis by carrying out the whole process under biotelemetry. Biotelemetry is mostly used for one-dimensional signals; in this project it was extended to the transfer of two-dimensional signals, i.e., images, so that a complex or time-consuming diagnosis process can be completed in a short duration. The telemetry link was provided by ZigBee transceivers, and diagnosis was carried out with the help of digital image processing techniques.

Hardik Pandit [38] discussed an application of digital image processing and analysis techniques that can be useful in the healthcare domain to predict some major human diseases. The application is an image processing system that works on the basis of medical palmistry. Images of the human palm form the input to the system. The system then applies digital image processing and analysis techniques to the input images to identify certain features. Using a knowledge base of medical palmistry, it analyzes these features and predicts probable diseases.

2.3 Literature survey on MATLAB based image processing medical applications

M Bister [39] illustrated some of the important points with fast implementations of bilinear interpolation, watershed segmentation, and volume rendering in MATLAB. MATLAB has often been considered an excellent environment for fast algorithm development but is generally perceived as slow, and hence unfit for routine medical image processing, where large data sets are now available, e.g., high-resolution CT image sets with typically hundreds of 512x512 slices. Yet, with proper programming practices (vectorization, pre-allocation, and specialization) applications in MATLAB can run as fast as in the C language.
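The same vectorization principle applies outside MATLAB; as an illustration in NumPy (not Bister's code), replacing an explicit per-pixel loop with one array expression gives identical results while pushing the loop into optimized native code:

```python
import numpy as np

def scale_loop(a):
    """Per-pixel loop: one interpreted operation per pixel (slow)."""
    out = np.empty_like(a)          # pre-allocation, as recommended
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = 2.0 * a[i, j]
    return out

def scale_vec(a):
    """Vectorized: one array expression, no interpreter-level loop."""
    return 2.0 * a

img = np.random.rand(64, 64)
same = np.allclose(scale_loop(img), scale_vec(img))   # identical output
```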

Jiři Blahuta [40] presented the processing of medical ultrasound images with MATLAB. This processing is useful for the potential diagnosis of Parkinson's disease in the brain-stem area. The work also introduced the DICOM standard for medical imaging and modern 3D/4D scanning, which offers a level of diagnostic accuracy higher than traditional 2D scanning.


Joaquim Jose Furtado et al. [41] aimed to realize image classification using MATLAB software. The images were classified using three and five classes, with a population size of 20 and times of 30, 50, and 100. The results showed that the time seems to affect the classification more than the number of classes.

S. Allin Christe et al. [42] presented an efficient architecture for various image filtering algorithms and tumour characterization using the Xilinx System Generator (XSG). This architecture offers an alternative through a graphical user interface that combines MATLAB, Simulink, and XSG, and explores important aspects of hardware implementation. The performance of this architecture, implemented on a SPARTAN-3E Starter Kit (XC3S500E-FG320), exceeds that of architectures with similar or greater resources. The proposed architecture reduced the resources used on the target device by 50%.

Nasrul Humaimi Mahmood et al. [43] reported a survey of image processing algorithms developed for the detection of masses and for segmentation. Thirty-five students from a university campus participated in the "Development of Biomedical Image Processing Software Package for New Learners" survey, investigating the use of software packages for processing and editing images. Composed of 19 questions, the survey built a comprehensive picture of the software packages, programming languages, and tool workflows used, and captured the attitudes of the respondents. The results showed that MATLAB is among the most popular software packages, and the findings are expected to be beneficial in assisting users with effective image processing and analysis in a newly developed software package.

Ching Yee Yong et al. [44] made a survey of image processing algorithms developed for the detection of masses and for segmentation. The results showed that MATLAB is among the most popular software packages: more than 60% of the respondents preferred MATLAB for their image processing work, and Microsoft Photo Editor was the second most popular software for image editing. More than 30% of respondents were very likely to use a ready-to-use package for processing images rather than given source code. The results are expected to be beneficial in assisting users with effective image processing and analysis in a newly developed software package. A preliminary image processing tool prototype that was developed is also presented in the paper.


Deepak Kumar Garg et al. [45] discussed a method that processes ECG paper records with an efficient and iterative set of digital image processing techniques to convert ECG paper image data into time-series digitized signal form, resulting in convenient storage and retrieval of ECG information. The method involves calculation of heart rate, QRS width, and stability (variation in R-R peaks) from the extracted signal. Comparison of these calculated parameters with manually calculated parameters showed an accuracy of 96.4%, proving the effectiveness of the process. The authors also proposed the development of a fuzzy-logic-based ECG diagnosis system to assist doctors in diagnosis.
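The heart-rate step in such a pipeline reduces to measuring R-R intervals on the digitized signal; a sketch (the sampling rate and peak positions below are assumed toy values, not data from [45]):

```python
import numpy as np

def heart_rate_bpm(r_peak_samples, fs):
    """Mean heart rate from R-peak sample indices.
    fs: sampling rate in Hz (an assumed digitization rate)."""
    rr = np.diff(r_peak_samples) / fs   # R-R intervals in seconds
    return 60.0 / rr.mean()             # beats per minute

# R-peaks every 200 samples at 250 Hz -> 0.8 s intervals -> 75 bpm
peaks = np.array([100, 300, 500, 700, 900])
bpm = heart_rate_bpm(peaks, fs=250.0)
```

The same `rr` array also gives the stability measure (variation in R-R peaks) mentioned above, e.g. via `rr.std()`.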

Shirui Gao [46] emphasized MATLAB based medical image processing tools, including theoretical background and examples. Through MATLAB, the paper introduced post-imaging quality in medical technology and medical imaging. It also introduces medical image processing technology and describes processing techniques including organ contouring, interpolation, filtering, and segmentation. In medicine, DICOM image data processing using MATLAB is also widely used for this type of image processing.

Bhausaheb Shinde et al. [47] proposed a method to improve the accuracy of MRI, cancer, X-ray, and brain images for easy diagnosis. For this experimental work they took different medical images (MRI, cancer, X-ray, and brain), calculated the standard deviation and mean of each image in the presence of Gaussian noise, and then applied a median filtering technique for noise removal. After removing the noise by median filtering, the standard deviation and mean were evaluated again. The results achieved were useful and proved helpful for general medical practitioners in analyzing patients' symptoms with ease.
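The noise-removal step can be sketched with a plain 3x3 median filter, which replaces each interior pixel by the median of its neighborhood and so removes isolated impulse-like outliers:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (edge pixels left unchanged for brevity)."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 255.0                 # one impulse-noise pixel
clean = median_filter3(img)       # outlier replaced by neighborhood median
```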

2.4 Literature survey on ANN based image processing applications

Artificial neural networks (ANNs) are a branch of artificial intelligence and have been accepted as a new technology in computer science. Neural networks are currently a 'hot' research area in medicine, particularly in the fields of radiology, urology, cardiology, and oncology. They have huge applications in many areas such as education, business, medicine, engineering, and manufacturing, and are finding many uses in medical diagnosis. Neural networks also play an important role in decision support systems.

Hery Purnomo et al. [48] explored performance testing of the PCNN method compared to the classical standard method for tuberculosis detection. There was significant improvement in processing time and diagnosis percentage; the images were first corrupted with additive white Gaussian noise (AWGN) for reliability testing of the method.

Juan A. et al. [49] described some important aspects of recent visual cortex-based ANN models and discussed the conclusions reached throughout the process.

N Ganesan et al. [50] made an attempt to use neural networks in the medical field, specifically in carcinogenesis (a pre-clinical study). In carcinogenesis, artificial neural networks have been successfully applied to problems in both pre-clinical and post-clinical diagnosis. The main aim of research in medical diagnostics is to develop more cost-effective and easy-to-use systems, procedures, and methods for supporting clinicians. Neural networks have been used to analyze demographic data from lung cancer patients with a view to developing diagnostic algorithms that might improve triage practices in the emergency department. For the lung cancer diagnosis problem, the concise rules extracted from the network achieved a high accuracy rate on both the training data set and the test data set.

Dilip Roy Chowdhury et al. [51] reported the use of artificial neural networks in predicting neonatal disease diagnosis. The proposed technique involves training a multilayer perceptron (MLP) with a backpropagation (BP) learning algorithm to recognize patterns for the diagnosis and prediction of neonatal diseases. A comparative study of different MLP training algorithms, Quick Propagation and Conjugate Gradient Descent, showed which gives the higher prediction accuracy. The backpropagation algorithm was used to train the ANN architecture, which was then tested on various categories of neonatal disease. About 94 cases with different sign and symptom parameters were tested in this model. The study exhibited ANN-based prediction of neonatal disease and achieved a diagnosis accuracy of 75% with improved stability.
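The MLP-with-backpropagation scheme summarized above can be sketched in a few lines of NumPy. The "sign/symptom" vectors, labelling rule, network size and learning rate below are illustrative assumptions for the sketch, not the data or architecture of [51]:

```python
import numpy as np

# Minimal one-hidden-layer perceptron trained with backpropagation,
# sketching the kind of symptom-pattern classifier described in [51].
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000):
    """Full-batch gradient descent with the logistic-loss delta rule."""
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = out - y[:, None]            # backward pass (cross-entropy delta)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

def predict(X, params):
    W1, b1, W2, b2 = params
    return (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)[:, 0] > 0.5).astype(int)

# Synthetic binary "sign/symptom" vectors; the label follows an arbitrary
# toy rule (disease present if at least two of the first three signs are).
X = rng.integers(0, 2, (200, 5)).astype(float)
y = (X[:, :3].sum(axis=1) >= 2).astype(float)
params = train_mlp(X[:150], y[:150])
acc = (predict(X[150:], params) == y[150:]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

A real diagnostic model would replace the synthetic rule with clinically labelled records and validate with cross-validation rather than a single held-out split.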

Qeethara Kadhim Al-Shayea [52] presented a method to evaluate artificial neural networks in disease diagnosis. Two cases were studied: the first is acute nephritis, where the data are the disease symptoms; the second is heart disease, where the data are cardiac Single Proton Emission Computed Tomography (SPECT) images. Each patient is classified into one of two categories: infected and non-infected. Classification is an important tool in medical diagnosis decision support. A feed-forward backpropagation neural network is used as a classifier to distinguish between infected and non-infected persons in both cases. The results of applying the artificial neural network methodology to acute nephritis diagnosis, based upon selected symptoms, showed the ability of the network to learn the patterns corresponding to the symptoms of the person.

Eleyan and Demirel [53] introduced a new face recognition technique based on the gray-level co-occurrence matrix (GLCM). The GLCM represents the distribution of intensities and information about the relative positions of neighboring pixels in an image. Two methods were proposed to extract feature vectors from the GLCM for face classification. The first extracts the well-known Haralick features from the GLCM; the second uses the GLCM directly by converting the matrix into a vector that can be used in the classification process. The results demonstrated that the second method, which uses the GLCM directly, is superior to the first, which uses the feature vector of statistical Haralick features, with both nearest-neighbor and neural network classifiers. The proposed GLCM-based face recognition system not only outperforms well-known techniques such as principal component analysis and linear discriminant analysis, but also has performance comparable with local binary patterns and Gabor wavelets.
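As a concrete illustration, a GLCM and two of the Haralick statistics built on it (contrast, and angular second moment, often called energy) can be computed directly. The 4x4 image, single-pixel horizontal offset and four gray levels below are arbitrary choices for the sketch:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized co-occurrence matrix for offset (dy, dx)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    P = P + P.T               # count each pixel pair in both directions
    return P / P.sum()

def haralick_contrast(P):
    """Sum of squared gray-level differences, weighted by co-occurrence."""
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

def haralick_energy(P):
    """Angular second moment: sum of squared matrix entries."""
    return (P ** 2).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
print(haralick_contrast(P), haralick_energy(P))
```

For face (or texture) classification, several such statistics over multiple offsets and angles would be concatenated into the feature vector described above.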

Mussarat Yasmin et al. [54] summarized recent research and development, highlighting the role of neural networks in the advancement of medical imaging.

Zhenghao Shi and Lifeng He [55] reviewed the application of artificial neural networks in medical image preprocessing and in medical image object detection and recognition. The main advantages and drawbacks of artificial neural networks were discussed; through this survey, the paper tried to identify the major strengths and weaknesses of applying neural networks to medical image processing.


2.5 Literature Survey for the Detection of Tuberculosis

2.5.1 Clinical methods

Tuberculosis is diagnosed by finding Mycobacterium tuberculosis bacteria in a clinical

specimen taken from the patient. While other investigations may strongly suggest

tuberculosis as the diagnosis, they cannot confirm it.

A complete medical evaluation for tuberculosis (TB) must include a medical history, a

physical examination, a chest X-ray and microbiological examination (of sputum or some

other appropriate sample). It may also include a tuberculin skin test, other scans and X-rays, or a surgical biopsy.

A definitive diagnosis of tuberculosis can only be made by culturing Mycobacterium

tuberculosis organisms from a specimen taken from the patient (most often sputum, but

may also include pus, CSF, biopsied tissue, etc.). A diagnosis made other than by culture

may only be classified as "probable" or "presumed". For a diagnosis negating the

possibility of tuberculosis infection, most protocols require that two separate cultures both

test negative.[56]

2.5.1.1 Sputum

Sputum smears and cultures are done for acid-fast bacilli if the patient is producing

sputum.[57] The preferred method for this is fluorescence microscopy (auramine-

rhodamine staining), which is more sensitive than conventional Ziehl-Neelsen

staining.[58] In cases where there is no spontaneous sputum production, a sample can be

induced, usually by nebulized inhalation of a saline or saline with bronchodilator solution.

A comparative study found that inducing three sputum samples is more sensitive than

three gastric washings.[59]

2.5.1.2 Abreugraphy

A variant of the chest X-Ray, abreugraphy (from the name of its inventor, Dr. Manuel

Dias de Abreu) was a small radiographic image, also called miniature mass radiography

(MMR) or miniature chest radiograph. Though its resolution is limited (it does not allow the diagnosis of lung cancer, for example), it is sufficiently accurate for the diagnosis of tuberculosis [60].


Much less expensive than a traditional X-ray, MMR was quickly adopted and extensively utilized in some countries in the 1950s. In Brazil and Japan, for example, tuberculosis prevention laws went into effect requiring about 60% of the population to undergo MMR screening.

The procedure went out of favor as the incidence of tuberculosis dramatically decreased, but it is still used in certain situations, such as the screening of prisoners and immigration applicants.

2.5.1.3 Immunological test

ALS Assay

Antibodies from Lymphocyte Secretion, also known as Antibody in Lymphocyte Supernatant or the ALS assay, is an immunological assay to detect active diseases such as tuberculosis, cholera and typhoid. Recently, the ALS assay has attracted the attention of the scientific community, as it is increasingly used for the diagnosis of tuberculosis. The principle is based on the secretion of antibody from in vivo activated plasma B cells, which are found in the blood circulation for a short period of time in response to TB antigens during active, rather than latent, TB infection [61].

2.5.1.4 Nucleic acid amplification tests (NAAT)

This is a heterogeneous group of tests that use the polymerase chain reaction (PCR) technique to detect mycobacterial nucleic acid. These tests vary in the nucleic acid sequence they detect and in their accuracy. The two most common commercially

available tests are the amplified mycobacterium tuberculosis direct test (MTD, Gen-

Probe) and Amplicor (Roche Diagnostics) [62]. In 2007, a systematic review of NAAT

by the NHS Health Technology Assessment Program concluded that "NAAT test

accuracy to be far superior when applied to respiratory samples as opposed to other

specimens. Although the results were not statistically significant, the AMTD test appears

to perform better than other currently available commercial tests."[63].

As the sputum test is the universally accepted medical test for the detection of TB, it has been chosen as the clinical reference in this work.


2.5.2 Technical Methods

Plikaytis et al. [64] presented a computerized pattern recognition model used to identify mycobacterial species based on their restriction fragment length polymorphism (RFLP) banding patterns. Thirty-nine independent strains of known origin, not included in the probability matrix, were used to test the accuracy of the method in classifying unknowns: 37 of 39 (94.9%) were classified correctly. An additional set of 16 strains of known origin, representing species not included in the model, was tested to gauge the robustness of the probability matrix. Every sample was correctly identified as an outlier, i.e., a member of a species not included in the original matrix.

S. A. Patil [65] presented a computer algorithm for texture analysis of TB chest radiographs. The algorithm included the important steps of image acquisition, image pre-processing, lung field segmentation and feature extraction. A total of 49 images were used in the experiments to estimate first- and second-order texture features; the gray-level co-occurrence matrix (GLCM) technique was used to estimate the texture features.

K. Veropoulos et al. [66] presented an automated method for the detection of tubercle bacilli in clinical specimens, principally sputum smears, to improve the diagnostic process. A preliminary investigation was presented, which makes use of image processing techniques and neural network classifiers for the automatic identification of TB bacilli in auramine-stained sputum specimens. The developed system showed a sensitivity of 93.5% for the identification of individual bacilli. As there are usually fairly numerous TB bacilli in the sputum of patients with active pulmonary TB, the overall diagnostic accuracy for sputum smear-positive patients was expected to be very high. Potential benefits of automated screening for TB are rapid and accurate diagnosis, increased screening of the population, and reduced health risk to the staff processing slides.

Rachna H. B. et al. [67] proposed an algorithm based on image processing techniques for the identification of TB bacteria in sputum, since availability of expertise, time and cost constrain examinations based on human intervention. The method is based on Otsu thresholding and a k-means clustering approach, and the performance of the clustering and thresholding algorithms for segmenting TB bacilli in tissue sections is compared. The developed automated technique showed good accuracy and efficiency.
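The two segmentation routes compared in [67] can be sketched side by side. The synthetic "smear" image below, a dim background with one bright blob, is a placeholder for a real stained slide, and the image sizes and intensity ranges are arbitrary assumptions:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    prob = np.bincount(img.ravel(), minlength=levels) / img.size
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, levels) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def kmeans_1d(values, k=2, iters=20):
    """Plain k-means on pixel intensities (the clustering alternative)."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = values[labels == c].mean()
    return labels, centers

# Synthetic "smear": dim background plus one bright 10x20 blob.
rng = np.random.default_rng(1)
img = rng.integers(20, 60, (64, 64))
img[20:30, 20:40] = rng.integers(180, 220, (10, 20))

t = otsu_threshold(img)
mask_otsu = img >= t
labels, centers = kmeans_1d(img.ravel().astype(float))
mask_kmeans = (labels == centers.argmax()).reshape(img.shape)
print(t, mask_otsu.sum(), mask_kmeans.sum())
```

On this well-separated bimodal image both routes recover the same blob; on real smears their results differ, which is what the comparison in [67] evaluates.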


P. Sadaphal et al. [68] demonstrated the proof of principle of an innovative computational algorithm that successfully recognized Ziehl-Neelsen (ZN) stained acid-fast bacilli (AFB) in digital images. Automated, multi-stage, color-based Bayesian segmentation identified possible ‘TB objects’, removed artifacts by shape comparison and color-labeled objects as ‘definite’, ‘possible’ or ‘non-TB’, bypassing photomicrographic calibration. Superimposed AFB clusters, extreme stain variation and low depth of field were challenges. This novel method facilitated electronic diagnosis of TB, permitting wider application in developing countries where fluorescence microscopy is currently inaccessible and unaffordable.

Stefan Jaeger et al. [69] described the medical background of TB detection in chest X-rays and presented a survey of recent approaches using computer-aided detection. After a thorough search of the computer science literature for such systems or related methods, 16 papers were identified, including their own, written between 1996 and early 2013. These papers showed that TB screening is a challenging task and an open research problem. They reported on the progress to date and described experimental screening systems that have been developed.

Manuel G. Forero et al. [70] developed a new autofocus algorithm and a new bacilli detection technique with the aim of attaining a high specificity rate and reducing the time consumed in analyzing sputum samples. The technique is based on the combined use of invariant shape features together with a simple thresholding operation on the chromatic channels. Feature descriptors were extracted from bacilli shapes using an edited dataset of samples. A k-means clustering technique was applied for classification purposes, and the sensitivity versus specificity results were evaluated using a standard ROC analysis procedure.

Ajay Divekar et al. [71] developed a stepwise classification (SWC) algorithm to remove different types of false positives, one type at a time, and to increase the detection of TB bacilli at different concentrations. Based on the Shannon cofactor expansion of the Boolean classification function, both bacilli and non-bacilli objects are first analyzed and classified into several categories, including scanty positive, high-concentration positive, and several non-bacilli categories: small bright objects, beaded objects, dim elongated objects, etc. The morphological and contrast features were extracted based on prior clinical knowledge. The SWC is composed of several individual classifiers. The classifier that increases the bacilli counts utilizes an adaptive algorithm based on a microbiologist's statistical heuristic decision process; the classifier that reduces false positives is derived through minimization of a binary decision tree that classifies different types of true and false positives based on feature vectors. Finally, the detection algorithm was tested on 102 independent confirmed negative and 74 positive cases. A multi-class task analysis showed high concordance rates for negative, scanty and high-concentration cases of 88.24%, 56.00% and 97.96%, respectively.

Jeannette Chang [72] presented an algorithm for automated TB detection in smear images taken by digital microscopes such as CellScope, a novel low-cost, portable device capable of brightfield and fluorescence microscopy. Automated processing on such platforms could save lives by bringing healthcare to rural areas with limited access to laboratory-based diagnostics. Though the focus of the study was the application of the automated algorithm to CellScope images, the method may be readily generalized for use with images from other digital fluorescence microscopes. The algorithm applies morphological operations and template matching with a Gaussian kernel to identify TB-object candidates. Moment, geometric, photometric and oriented-gradient features are then used to characterize these objects and perform discriminative support vector machine classification. The algorithm was tested on a large set of CellScope fluorescence images from sputum smears collected at clinics in Uganda (594 images corresponding to 290 patients). The object-level classification is highly accurate, with an average precision of 89.2% ± 2.1%. For slide-level classification, the algorithm performed at the level of human readers, demonstrating the potential for making a significant impact on global healthcare.
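The candidate-identification step, template matching with a Gaussian kernel, can be sketched as normalized cross-correlation against a small Gaussian template. The template size, sigma and the synthetic one-spot image below are illustrative assumptions, not the parameters of [72]:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Small 2-D Gaussian template approximating a bright spot-like object."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def match_template(img, tmpl):
    """Normalized cross-correlation of every template-sized window with tmpl."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tnorm = np.sqrt((t ** 2).sum())
    out = np.zeros((img.shape[0] - th + 1, img.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = img[y:y + th, x:x + tw]
            win = win - win.mean()
            wnorm = np.sqrt((win ** 2).sum())
            out[y, x] = (win * t).sum() / (wnorm * tnorm) if wnorm > 0 else 0.0
    return out

# Synthetic fluorescence-like image: flat background plus one Gaussian spot.
img = np.zeros((40, 40))
tmpl = gaussian_kernel()
img[10:17, 22:29] += gaussian_kernel()   # plant a spot at row 10, column 22
scores = match_template(img, tmpl)
peak = np.unravel_index(scores.argmax(), scores.shape)
print(peak, scores.max())
```

Peaks in the score map above a chosen threshold would become the TB-object candidates passed on to feature extraction and SVM classification.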

The main drawback of these methods is that they attempt to replace a clinical test. They work on the detection of bacilli using techniques such as neural networks, which becomes complicated because further classification requires yet another neural network, i.e., the use of multiple neural networks. These methods also require high-resolution images, which are expensive to acquire.

The main problem in the texture analysis of chest radiographs is the complex

“background” of superimposed normal anatomical structures to which the analysis must

be somehow insensitive.


The enhancement method [21] is quite sufficient for diagnosing fish bone impaction quickly. However, the enhanced images amplify not only the fish bone but also the noise present in the radiographs. All these factors motivated the author to design and develop a novel technique for TB detection.

2.6 Motivation for the Present Research Work

Tuberculosis (TB) is one of the most important public health problems worldwide, with 9 million new TB cases and nearly 2 million TB deaths each year. Case-finding and the management of pulmonary tuberculosis are essential targets of tuberculosis control programs. However, pulmonary tuberculosis (PTB) is becoming an increasingly serious problem, particularly in countries affected by epidemics of human immunodeficiency virus (HIV)-TB co-infection. Diagnosing PTB using prompt and accurate methods is a crucial step in controlling the occurrence and prevalence of TB. However, the diagnosis of PTB is quite complex and there is no unified standard at present; over-diagnosis and missed diagnosis are frequent, and this is a thorny question in the field of TB control. That is why TB was chosen as the parameter: if it is not diagnosed at an early stage, it may lead to death. In this work, image processing techniques together with an artificial neural network are used to design the diagnosing system, and hence provide a better basis for further treatment.

An elaborate literature survey showed that most reported studies on TB diagnosis used either conventional clinical tests or technical methods separately to identify TB. These tests involve identifying the presence or absence of the bacillus that causes TB. Besides, no studies have reported on the severity (percentage) of TB in a patient.

In addition, manual screening for bacillus identification is a labour-intensive task with a high false-negative rate. Automatic screening entails several advantages, such as a substantial reduction in the workload of clinicians, improved test sensitivity, and better diagnostic accuracy through the increased number of images that can be analyzed by computer.

Though the disease may seem simple, it is highly infectious and has to be cured in time, in other words at an early stage. As far as treatment is concerned, diagnosis therefore becomes the primary and crucial stage of the disease. There are many ways of diagnosing TB. One of them is the sputum examination, a test accepted worldwide. X-ray analysis is another technique for diagnosing TB; in this method, a radiologist or consultant physician has to take a print of the X-ray and analyze it for the presence of the disease. This second method is quite expensive, tedious and may not yield precise results. This motivated the author to design a system which can diagnose the presence of TB without taking an X-ray film print. According to a survey of senior doctors, the diagnosis of TB is also very difficult for inexperienced doctors. The cost can be reduced because the X-ray print need not be taken. Moreover, according to the survey, previous work has used either the chest X-ray, the sputum result, or the image of the smear to find the bacilli; in the proposed system, the sputum result as well as the X-ray image are used, which has led to more accuracy in identifying and predicting the percentage of TB.

As the present system diagnoses pulmonary tuberculosis with the help of the X-ray and sputum results that a specialist would in any case require to detect TB, the system is not very expensive. At the same time, an inexperienced junior doctor can use the system for the diagnosis of pulmonary TB.

Overall, the aim is to design a system which helps the patient through the doctor. Also, as X-ray systems are already digitized, it is very convenient to obtain soft copies of the images for diagnosing TB along with the sputum examination result. These factors have motivated this research work. Hence, the proposed design is a novel approach combining the conventional (sputum analysis) and modern (X-ray analysis) methods to diagnose, and thus better treat, fatal TB in human beings.


2.7 Objectives for the Present Research Work

To collect the lung X-ray images of PTB and normal patients.

To preprocess the X-ray images.

To extract the features from the X- ray images.

To design an ANN for further investigation.

To train the ANN with the extracted features.

To design a GUI for user.

To test unknown X-ray images for the detection of PTB.

To check the severity of PTB.

A detailed study of the methodology, i.e., the design and development of the overall system, is dealt with in the next chapter.


References

[1] Azriel Rosenfeld, “Picture Processing by Computer”, New York: Academic Press,1969.

[2] SuezouNakadate, ToyohikoYatagai, and Hiroyoshi Saito “Electronic speckle patterninterferometry using digital image processing techniques”Applied Optics, vol. 19,Issue 11, 1980, pp. 1879-1883.

[3] Satoshi Kawata and YoshikiIchioka “Iterative image restoration for linearlydegraded images. II. Reblurring procedure,” Journal of the Optical society ofAmerica(JOSA), vol. 70, 1980, pp. 768–772.

[4] William H. Carter San Diego “Evaluation of Peak Location Algorithms WithSubpixel Accuracy For Mosaic Focal Planes” Processing of Images and Data fromOptical Sensors, Conference Volume 0292, 1981.

[5] David W. Robinson “Automatic fringe analysis with a computer image-processingsystem” Applied Optics, vol. 22, Issue 14, 1983, pp. 2169-2176.

[6] S. V. Ahamed ,V. B. “An image processing systemforeye statistics from eyediagrams” Lawrence IAPR Workshop on CV- SpealHarclware and IndustrialApplications October 12-14. 1988. Tokyo.

[7] P. K. Sahoo, S. Soltaniand A. K. C. Wong “A Survey of thresholding Techniques”Computer vision, graphics, and image processing,vol. 41,1988, pp. 233-260.

[8] Marc Antonini, Michel Barlaud“Image Coding Using Wavelet Transform”IEEEtransactions on image processing, vol. 1, no.2. APRIL 1992.

[9] Harpam MD “An introduction to wavelet theory and application for the radiologicalphysicist”. Med Phys. 1998, vol. 25,no.10, pp.1985-93.

[10] Salem Saleh Al-amri ,N.V. Kalyankar and KhamitkarS.D “Image Segmentation byUsing Thershod Techniques” Journal of computing, vol. 2, Issue 5, May 2010,pp.83-86.

[11] Wiecek, B., Danych, R. ; Zwolenik, Z.Jung, A.“Engineering in Medicine andBiology Society”, Proceedings of the 23rd Annual International Conference of theIEEE, vol. 3, 2001, pp. 2805 – 2807.

[12] Patnaik, S.Pal, R.N. “ Image compression using auto-associative neural networkand embedded zero-tree coding”, IEEE Third Workshop on WirelessCommunications Wireless Communications, 2001,pp. 388-390.

[13] Shanhui Sun Christian Bauer and ReinhardBeichel“Robust Active Shape ModelBased Lung Segmentation in CT Scans”, LOLA11 Challenge pp. 213 -223.

[14] Sonal, Dinesh Kumar “A study of various image compression techniques”.www.rimtengg.com/coit2007/proceedings/pdfs/43.pdf


[15] Suzuki K, Abe H, MacMahon H, Doi K, “Image-processing technique forsuppressing ribs in chest radiographs by means of massive training artificial neuralnetwork (MTANN)”. IEEE Transactions on Medical Imaging., 2006, vol. 25,no.4,pp. 406-416.

[16] WeixingWang,Shuguang Wu “A Study on Lung Cancer Detection by ImageProcessing” international conference on Communications,Circuits and SystemsProceedings, 2006, pp. 371-374.

[17] Md. FoisalHossain, Mohammad Reza Alsharif “Image Enhancement Based onLogarithmic Transform Coefficient and Adaptive HistogramEqualization”International Conference on Convergence Information Technology,21-23 November, 2007, pp. 1439 – 1444.

[18] Wenhong Li,Yonggang Li,KexueLuo, “Application of Image ProcessingTechnology in Paper Currency Classification System”, IEEE transactions 22-24Oct. 2008, pp. 1- 5.

[19] Li Minxia, ZhengMeng “A Study of Automatic Defects Extraction of X-ray WeldImage Based on Computed Radiography System” International Conference onMeasuring Technology and Mechatronics Automation - ICMTMA, 2011.

[20] Kanwal, N. , Girdhar, A. ; Gupta, S “Region Based Adaptive ContrastEnhancement of Medical X-Ray Images”, Bioinformatics and BiomedicalEngineering, (ICBBE) 2011 5th International Conference, pp. 1-5.

[21] Noorhayati Mohamed Noor, Noor Elaiza Abdul Khalid. “Fish Bone ImpactionUsing Adaptive Histogram Equalization (AHE)” Proceedings of the SecondInternational Conference on Computer Research and Development 2010, IEEEComputer society Washington, pp.163-167.

[22] Lu Zhang, Dongyue Li, ShuqianLuo , “Information extraction of bone fractureimages based on diffraction enhanced imaging” International Conference ofMedical Image Analysis and Clinical Application (MIACA) 10-17 June 2010, pp.106 -108.

[23] Md.FoisalHossain, Mohammad Reza Alsharif, and Katsumi Yamashita “MedicalImage Enhancement Based on Nonlinear Technique and Logarithmic TransformCoefficient Histogram Matching”, IEEE/ICME International Conference onComplex Medical Engineering July,2010, pp. 13-15.

[24] HasanDemirel “Co-occurrence matrix and its statistical features as a new approachfor face recognition”Turk J ElectricalEngeneeringand Computer Society, vol.19,No.1, 2011,pp.97-107.

[25] Pu J, Paik DS, Meng X, Roos JE, Rubin GD“Shape “Break-and-Repair” Strategyand Its Application to Automated Medical Image Segmentation”,IEEE transactionson visualization and computer graphics, vol.17, no. 1, january 2011.

[26] TianShen, Hongsheng LiXiaolei Huang “ Active Volume Models for MedicalImage Segmentation” Medical Imaging, IEEE Transactions on vol. 30, Issue 3March 2011, pp. 774 – 791.


[27] Sadeer G. Al-Kindi, “Breast Sonogram and Mammogram Enhancement UsingHybrid and Repetitive Smoothing-Sharpening Technique”, 1st Middle EastConference onBiomedical Engineering (MECBME), 21-24 Feb. 2011, pp. 446 –449.

[28] Sandeep Kumar, PuneetVerma “Comparison of Different Enhanced ImageDenoising with Multiple Histogram Techniques”, International Journal of SoftComputing and Engineering (IJSCE), vol. 2, Issue. 2, May 2012.

[29] XiaoyanXu, “Embedded Zero Tree as Image Coding”, School of Engineering,UniversityofGuelph.qh.eng.ua.edu/classes/spring2007/...files/.../project_report_jpeg2000.pdf

[30] Arpita Mittal, Sanjay Kumar Dubey , “Analysis of MRI Images of RheumatoidArthritis through Morphological Image Processing Techniques” IJCSI InternationalJournal of Computer Science Issues, vol. 10, Issue 2, No. 3, March 2013,pp.118-122.

[31] S.JagadeeshDr.E.NagabhooshanamDr.S.Venkatachalam , “Image processing basedapproach to cancer cell prediction in blood samples”International Journal ofTechnology and Engineering Sciences, vol. 1, pp.1- 4.

[32] KimmiVerma, AruMehrotra, VijayetaPandey, Shardendu Singh, “Image processingtechniques for the enhancement of brain tumour patterns”.

[33] Ashraf Anwar &ArsalanIqbal, “Image Processing Technique for Brain AbnormalityDetection” International Journal of Image Processing (IJIP), vol. 7, Issue 1, 2013,pp. 51-61.

[34] Pallavi T. Suradkar , “Detection of Malarial Parasite in Blood Using ImageProcessing”International Journal of Engineering and Innovative Technology (IJEIT),vol. 2, Issue 10, April 2013.

[35] Md.AmranHossenBhuiyan, Ibrahim Azad, Md.KamalUddi“ Image Processing forSkin Cancer Features Extraction” International Journal of Scientific & EngineeringResearch,Volume 4, Issue 2, February 2013.

[36] R K Samantaray, T K Mohanta , “Image Processing for Decision Support in HeartFailure”.Researcher 2013, Volume 5, no.4, pp.1-8.

[37] S.Kannadhasan, N.BasheerAhamed, M.RajeshBaba , “Cancer Diagonsis with thehelp digital Image Processing using ZIGBEE Technology”, International Journal ofEmerging Trends in Electrical and Electronics (IJETEE), Volume 1, Issue 2, March2013.

[38] HardikPandit ,Dr. D M Shah “Application of Digital Image Processing and Analysisin Healthcare Based on Medical Palmistry”International Conference on IntelligentSystems and Data Processing ICISD 2011 Special Issue published by InternationalJournal of Computer Applications (IJCA), pp. 56-59.

[39] M Bister*, “Increasing the speed of medical image processing in MATLAB”,Biomedical Imaging and Intervention Journal , pp. 2-12.


[40] JiřiBlahuta, TomašSoukup, PetrČermak, ”Image processing of medical diagnosticneurosonographical images in MATLAB” Recent Researches in Computer Science,ISBN: 978-1-61804-019-0, pp.85-90.

[41] Joaquim Jose Furtado ,ZhihuaCai , Liu Xiaobo, “Digital image processing:supervised classification using genetic algorithm in matlab toolbox” Report andOpinion, 2010, Volume2, no.6, pp. 53-61.

[42] Mrs. S. AllinChriste, Mr.M.Vignesh, Dr.A.Kandaswamy, “An efficient FPGAimplementation of MRI image filtering and tumour characterization using xilinxsystem generator” International Journal of VLSI design & CommunicationSystems (VLSICS), Volume 2, No.4, December 2011.

[43] NasrulHumaimiMahmood, Ching Yee Yong, Kim Mey Chew and Ismail Ariffin,“Image Processing Software Package in Medical Imaging”, A review InternationalJournal of Computational Engineering Research IJCER, Volume 2, Issue No.1 , Jan-Feb 2012, pp. 199-203.

[44] Ching Yee Yong, Kim Mey Chew, NasrulHumaimiMahmood and Ismail Ariffin“Image Processing Tools Package in Medical Imaging in MATLAB”, Internationaljournal of education and information technologies, Volume 6 Issue 3, 2012.

[45] Deepak Kumar Garg ,seema Sharma, swetha Bharadwaj , Diksha Thakur , “ECGPaper Records Digitization through Image Processing Techniques”, InternationalJournal of Computer Applications Volume 48,No.13, June 2012.

[46] ShiruiGao, “Research on Medical Image Processing Method Based on theMATLAB”, Informatics and management science Lecture notes in electricalengineering Volume. 208, 2013, pp. 687-694.

[47] Bhausaheb Shinde*, DnyandeoMhaske, MachindraPatare, A.R. Dani “NoiseDetection and Noise Removal Techniques in Medical Images”.

[48] Hery Purnomo, Hiroshi Hasegawa, Kazuo Shigeta and Hideya Takahashi, “Pulse Coupled Neural Network for Identifying the Tuberculosis on Human Lung”, Volume 44, 2003, pp. 23-30.

[49] Juan A. Ramírez-Quintana, Mario I. Chacon-Murguia and Jose F. Chacon-Hinojos,“Artificial Neural Image Processing Applications: A Survey”, Engineering Letters,2009, volume 20, issue 1, pp 68-80.

[50] Dr. N. Ganesan, Dr. K. Venkatesh, Dr. M. A. Rama, “Application of Neural Networks in Diagnosing Cancer Disease Using Demographic Data”, International Journal of Computer Applications, Volume 1, No. 26, pp. 76-85.

[51] Dilip Roy Chowdhury, Mridula Chatterjee and R. K. Samanta, “An Artificial Neural Network Model for Neonatal Disease Diagnosis”, International Journal of Artificial Intelligence and Expert Systems (IJAE), Volume 2, Issue 3, 2011.

[52] Qeethara Kadhim Al-Shayea, “Artificial Neural Networks in Medical Diagnosis”, IJCSI International Journal of Computer Science Issues, Volume 8, Issue 2, ISSN (Online): 1694-0814, March 2011.


[53] Alaa Eleyan, Hasan Demirel, “Co-occurrence matrix and its statistical features as a new approach for face recognition”, Turkish Journal of Electrical Engineering and Computer Sciences, Volume 19, No. 1, 2011, pp. 97-107.

[54] Mussarat Yasmin, Muhammad Sharif and Sajjad Mohsin, “Neural Networks in Medical Imaging Applications: A Survey”, World Applied Sciences Journal, Volume 22, No. 12, 2013, pp. 85-96.

[55] Zhenghao Shi and Lifeng He, “Application of Neural Networks in Medical Image Processing”, Proceedings of the Second International Symposium on Networking and Network Security, 4 April 2010, pp. 023-026.

[56] Burke and Parnell. “Minimal Pulmonary Tuberculosis” Canadian MedicalAssociation Journals, 59:348.

[57] Steingart K, Henry M, Ng V, et al. "Fluorescence versus conventional sputum smearmicroscopy for tuberculosis: a systematic review". (2006). Lancet InfectDiseasesVolume 6, No.9, pp. 570–581.

[58] Brown M, Varia H, Bassett P, Davidson RN, Wall R, Pasvol G "Prospective studyof sputum induction, gastric washing, and bronchoalveolar lavage for the diagnosisof pulmonary tuberculosis in patients who are unable to expectorate". Clin InfectDis, Volume44 ,No.11, 2007, pp.1415–1420.

[59] Drobniewski F, Caws M, Gibson A, Young D "Modern laboratory diagnosis oftuberculosis". Lancet Infect Disseases, 2003,Volume.3,No.3, pp. 141–147.

[60] Moore D, Evans C, Gilman R, Caviedes L, Coronel J, Vivar A, Sanchez E, PiñedoY, Saravia J, Salazar C, Oberhelman R, Hollm-Delgado M, LaChira D, Escombe A,Friedland J "Microscopic-observation drug-susceptibility assay for the diagnosis ofTB". (2006). N Engl J Med Volume 355, No. (15):2006 1539–50.//www.ncbi.nlm.nih.gov/pmc/articles/PMC1780278/.

[61] Rossi, S. E.; Franquet, T.; Volpacchio, M.; Gimenez, A. Aguilar, G. "Tree-in-BudPattern at Thin-Section CT of the Lungs: Radiologic-Pathologic Overview".Radiographics,Volume 25, No. 3, 2005, pp. 789–801.

[62] CDC - Guidelines for Using the QuantiFERON-TB Gold Test for DetectingMycobacterium tuberculosis Infection, United States.

[63] Dinnes J, Deeks J, Kunst H, Gibson A, Cummins E, Waugh N, Drobniewski F,Lalvani A "A systematic review of rapid diagnostic tests for the detection oftuberculosis infection" Health Technology Assess, Volume 11,No.3, 2007, pp. 1–314.

[64] Plikaytis BD, Plikaytis BB, Shinnick TM, “Computer-assisted pattern recognition model for the identification of slowly growing mycobacteria including Mycobacterium tuberculosis”, Journal of General Microbiology, 1992, 138(11).

[65] Patil S. A., “Texture analysis of TB X-ray images using image processing techniques”, Journal of Biomedical and Bioengineering, Volume 3, Issue 1, 2012, pp. 53-56.


[66] K. Veropoulos, C. G. Learmonth, B. J. Simpson, “The Automated Identification of Tubercle Bacilli using Image Processing and Neural Computing Techniques”, Proceedings of the 8th International Conference on Artificial Neural Networks, Skövde, Sweden, 2-4 September 1998, Volume 2, pp. 797-802.

[67] Rachna H. B., M. S. MallikarjunaSwamy,”Detection of Tuberculosis Bacilli usingImage Processing Techniques”, International Journal of Soft Computing andEngineering (IJSCE), Volume. 3, Issue. 4, September 2013.

[68] P. Sadaphal, J. Rao, G. W. Comstock, M. F. Beg, “Image processing techniques for identifying Mycobacterium tuberculosis in Ziehl-Neelsen stains”, Int J Tuberc Lung Dis, Volume 12, No. 5, 2008, pp. 579-582.

[69] Stefan Jaeger, Alexandros Karargyris, Sema Candemir, Jenifer Siegelman, Les Folio, Sameer Antani, George Thoma, Clement J. McDonald, “Automatic screening for tuberculosis in chest radiographs: a survey”, Quantitative Imaging in Medicine and Surgery, Volume 3, No. 2, April 2013, pp. 89-99.

[70] Manuel G. Forero, Filip Sroubek, Gabriel Cristóbal, “Identification of tuberculosis bacteria based on shape and color”, Real-Time Imaging — Special issue on imaging in bioinformatics, Part III, Volume 10, Issue 4, August 2004, pp. 251-262.

[71] Ajay Divekar, Corina Pangilinan, Gerrit Coetzee, Tarlochan Sondh, Fleming Y. M. Lure, and Sean Kennedy, “Automated detection of tuberculosis on sputum smeared slides using stepwise classification”, Proceedings of the SPIE Medical Imaging Conference, 2012, pp. 1-9.

[72] Jeannette Chang, “Automated Tuberculosis Diagnosis Using Fluorescence Images from a Mobile Microscope”, thesis.

