An Efficient Classification Approach Based on Grid Code Transformation and Mask-Matching Method


1

An Efficient Classification Approach Based on Grid Code Transformation and Mask-Matching Method

Presenter: Yo-Ping Huang

2

Outline
1. Introduction
2. The proposed classification approach
3. The coarse classification scheme
4. The fine classification scheme
5. Experimental results
6. Conclusion

3

1. Introduction
Paper documents → computer codes: OCR (Optical Character Recognition)
The design of classification systems consists of two subproblems:
- Feature extraction
- Classification

4

Classification
Classification of objects (or patterns) into a number of predefined classes has been studied extensively in a wide variety of applications, such as:
- Optical character recognition (OCR)
- Speech recognition
- Face recognition

5

Feature extraction
Features are functions of the measurements that enable a class to be distinguished from other classes.
Feature extraction has not found a general solution in most applications.
Our purpose is to design a general classification scheme that is less dependent on domain-specific knowledge.
To do that, reliable and general features are required.

6

Discrete Cosine Transform (DCT)
The DCT helps separate an image into parts of differing importance with respect to the image's visual quality.
Due to the energy-compaction property of the DCT, much of the signal energy tends to lie at low frequencies.

7

Four advantages of applying the DCT
1. The features extracted by the DCT are general and reliable; it can be applied to most vision-oriented applications.
2. The amount of data to be stored can be reduced tremendously.
3. Multiresolution classification and progressive matching are achieved naturally.
4. The DCT is scale-invariant and less sensitive to noise and distortion.

8

Two philosophies of classification
- Statistical: the measurements that describe an object are treated only formally as statistical variables, neglecting their “meaning”.
- Structural: objects are regarded as compositions of structural units, usually called primitives.

9

Two stages of classification
- Coarse classification (frequency domain): DCT, grid code transformation (GCT)
- Fine classification (spatial domain): template matching, mask matching, matching degree, statistical matching, and finally statistical mask-matching

10

2. The proposed classification approach

The ultimate goal of classification is to assign an unknown pattern x to one of M possible classes (c1, c2, …, cM).

Each pattern is represented by a set of D features, viewed as a D-dimensional feature vector.

11

Figure 1. The framework of our classification approach. In training, each training pattern passes through Preprocessing, Feature Extraction via DCT, Quantization, and Grid Code Transformation, followed by Sorting Codes and Elimination of Duplicated Codes (coarse classification) and Calculate Mask Probability (fine classification). In classification, each test pattern passes through the same Preprocessing, Feature Extraction via DCT, Quantization, and Grid Code Transformation steps; Searching Candidates yields the candidate set, and Statistical Mask Matching produces the final decision.

12

In the training mode: GCT, positive mask, negative mask, mask probability.
In the classification mode: GCT (coarse classification), statistical mask matching (fine classification).

13

3. The coarse classification scheme

Feature extraction via DCT
The DCT coefficients F(u, v) of an N×N image represented by x(i, j) can be defined as

$$F(u,v) = \frac{2}{N}\,\alpha(u)\,\alpha(v)\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} x(i,j)\cos\!\left(\frac{(2i+1)u\pi}{2N}\right)\cos\!\left(\frac{(2j+1)v\pi}{2N}\right),$$

where

$$\alpha(w) = \begin{cases}\dfrac{1}{\sqrt{2}}, & w = 0,\\[4pt] 1, & \text{otherwise.}\end{cases}$$

14

Figure 2. The DCT coefficients of the character image of “為”.

15

Grid code transformation (GCT): quantization
The 2-D DCT coefficient F(u, v) is quantized to F′(u, v).
Thus, the dimension of the feature vector can be reduced after quantization.
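The slide's quantization equation is not reproduced in this transcript; a common choice, assumed here purely for illustration, is uniform quantization with step size Q:

```python
import numpy as np

def quantize(F, Q=16.0):
    # Uniform quantizer assumed for illustration; the paper's exact
    # quantization rule is defined on the original slide.
    return np.round(F / Q).astype(int)
```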

16

The features of each training sample are first extracted by the DCT and quantized.
The D most significant coefficients are quantized and transformed into a code, called the grid code (GC).
Given a sample Oi, it is quantized into a feature vector of the form [qi1, qi2, …, qiD].

17

The coefficients are sorted in zigzag order: F(0,0), F(0,1), F(1,0), F(2,0), F(1,1), F(0,2), F(0,3), F(1,2), F(2,1), F(3,0), F(3,1), and so on.
This order follows from the energy-compaction property: low-frequency DCT coefficients are often more important than high-frequency ones.
In this way, object Oi can be transformed into a D-digit GC.
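A minimal sketch of the zigzag ordering and grid-code construction (the GC is represented here as a tuple of quantized values rather than a packed D-digit number; the names are ours):

```python
def zigzag_indices(D):
    """First D (u, v) pairs in the zigzag order listed above:
    (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), (0,3), ..."""
    order = []
    s = 0
    while len(order) < D:
        diag = [(u, s - u) for u in range(s + 1)]  # all (u, v) with u + v = s
        order.extend(reversed(diag) if s % 2 == 0 else diag)
        s += 1
    return order[:D]

def grid_code(Fq, D=6):
    """Grid code of a quantized coefficient matrix Fq: the D most
    significant coefficients read off in zigzag order."""
    return tuple(Fq[u, v] for (u, v) in zigzag_indices(D))
```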

18

Illustration of Extracting the 2-D DCT Coefficients

19

Grid code sorting and elimination

All training samples are transformed by GCT into a list of triplets (Ti, Ci, GCi), where:
- Ti is the ID of a training sample,
- Ci is the class ID, and
- GCi is the grid code of the training sample.
The list is then sorted in ascending order of GC.
Redundancy might occur when training samples belonging to the same class have the same GC; such duplicated codes are eliminated.

20

In summary, the information about the classes within each GC is gathered in the training phase.

In the test phase, when classifying a test sample, a reduced set of candidate classes can be retrieved from the lookup table according to the GC of the test sample.
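A sketch of the training-side lookup table with duplicate elimination, using the triplet representation above (the names are ours):

```python
from collections import defaultdict

def build_lookup(triplets):
    """triplets: iterable of (Ti, Ci, GCi). Returns a table mapping each
    GC to its set of candidate class IDs; storing a set eliminates
    duplicated (class, GC) entries automatically."""
    table = defaultdict(set)
    for Ti, Ci, GCi in triplets:
        table[GCi].add(Ci)
    return table

def coarse_candidates(table, gc):
    """Candidate classes for a test sample with grid code gc."""
    return sorted(table.get(gc, ()))
```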

21

4. The fine classification scheme

Mask generation
- A kind of template-matching method.
- The border bits are unreliable.
- The goal is to find the bits that are reliably black (or white).
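A sketch of mask generation by superimposing the training bitmaps of one class; the exact reliability criterion is not given in this transcript, so the threshold below is an assumption for illustration:

```python
import numpy as np

def make_masks(bitmaps, thresh=0.95):
    """bitmaps: list of binary arrays (1 = black) for one class.
    Returns (positive, negative): positions that are reliably black
    and reliably white across the superimposed samples."""
    freq = np.mean(np.stack(bitmaps), axis=0)  # fraction black per pixel
    positive = freq >= thresh        # reliably black (positive mask)
    negative = freq <= 1.0 - thresh  # reliably white (negative mask)
    return positive, negative
```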

22

Figure 3. Mask generation: (a) superimposed characters of “佛”, (b) the positive mask of “佛”, and (c) the negative mask of “佛”.

23

Bayes’ classification

P(ci | x): the probability that x belongs to class i, given that x is observed.
P(x | ci): the probability of the feature being observed when the class is present.
P(ci): the probability of that class being present.
P(x): the probability of feature x.

$$P(c_i \mid x) = \frac{P(x \mid c_i)\,P(c_i)}{P(x)}$$

24

Measures for mask matching

The degree of matching between an unknown character x and the positive mask of class i, $m_i$, can be defined by

$$d(x, m_i) = \frac{M_b(x, m_i)}{N_b(m_i)}.$$

Similarly, for the negative mask $\bar{m}_i$,

$$d(x, \bar{m}_i) = \frac{M_w(x, \bar{m}_i)}{N_w(\bar{m}_i)},$$

where $N_b(f)$ is the number of black bits in bitmap f and $M_b(f, g)$ is the number of black bits at the same positions in both f and g ($N_w$ and $M_w$ are defined analogously for white bits).
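These two measures translate directly into code; a sketch with boolean bitmaps (True = black, and the negative mask stored as True at its reliably white positions):

```python
import numpy as np

def match_positive(x, m_pos):
    """d(x, m_i) = M_b(x, m_i) / N_b(m_i): the fraction of the positive
    mask's black bits that are also black in x."""
    return np.logical_and(x, m_pos).sum() / m_pos.sum()

def match_negative(x, m_neg):
    """d(x, negative mask of i) = M_w / N_w: the fraction of the negative
    mask's white positions that are also white in x."""
    return np.logical_and(~x, m_neg).sum() / m_neg.sum()
```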

25

Def. 1. If x matches the positive mask of class i at degree α, i.e., $d(x, m_i) \ge \alpha$, then x is said to α-match the positive mask of class i, denoted by $x \models_\alpha m_i$.

Def. 2. If x matches the negative mask of class i at degree β, i.e., $d(x, \bar{m}_i) \ge \beta$, then x is said to β-match the negative mask of class i, denoted by $x \models_\beta \bar{m}_i$.
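Continuing the matching sketch above, the two definitions become simple threshold tests, with α and β as in the definitions:

```python
def alpha_matches(x, m_pos, alpha):
    # x |=_alpha m_i  iff  d(x, m_i) >= alpha
    return match_positive(x, m_pos) >= alpha

def beta_matches(x, m_neg, beta):
    # x |=_beta (negative mask of i)  iff  d(x, negative mask) >= beta
    return match_negative(x, m_neg) >= beta
```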

26

Statistical mask-matching

Write $x_i^+$ for the event that x α-matches the positive mask of class i. The probability that x belongs to class i when $x_i^+$ is observed can be described by

$$P(c_i \mid x_i^+) = \frac{P(x_i^+ \mid c_i)\,P(c_i)}{P(x_i^+)}.$$

Similarly, writing $x_i^-$ for the event that x β-matches the negative mask, we get

$$P(c_i \mid x_i^-) = \frac{P(x_i^- \mid c_i)\,P(c_i)}{P(x_i^-)}.$$
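The transcript does not spell out how these probabilities (the "mask probability" computed in training) are estimated; a plausible count-based sketch, assumed here for illustration, applies Bayes' rule to match frequencies observed over the training samples:

```python
import numpy as np

def estimate_posterior(matched, labels, class_id):
    """matched: boolean array, True where a training sample alpha-matched
    the mask of class_id; labels: class ID of each training sample.
    Estimates P(c_i | x_i) = P(x_i | c_i) P(c_i) / P(x_i) from counts."""
    in_class = labels == class_id
    p_match_given_c = matched[in_class].mean()  # P(x_i | c_i)
    p_class = in_class.mean()                   # P(c_i)
    p_match = matched.mean()                    # P(x_i)
    if p_match == 0:
        return 0.0  # the mask was never matched in training
    return p_match_given_c * p_class / p_match
```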

27

Statistical decision rule

Rule AMP (Average Matching Probability):

$$E(x) = \arg\max_{1 \le i \le N} \left\{ \frac{P(c_i \mid x_i^+) + P(c_i \mid x_i^-)}{2} \right\}$$

28

5. Experimental Results

The test bed is a famous handwritten rare book, the Kin-Guan bible (金剛經): 18,600 samples in 640 classes.

29

Figure 4. Reduction and accuracy rate using our coarse classification scheme.

The best value of D is 6.

30

Figure 5. Accuracy rate using both coarse and fine classification.

A good reduction rate does not sacrifice the performance of fine classification.

31

Figure 6. Accuracy rate using both coarse and fine classification under different values of AMP.

32

6. Conclusions
This paper presents a two-stage classification approach for vision-based applications.
The first stage is coarse classification, which employs the DCT to extract features from each character image.
The grid code transformation (GCT) method is then applied to quantize the most significant DCT coefficients into a finite number of grids.

33

The second stage is fine classification, which uses a statistical mask-matching method to identify the individual target within the candidate set given by the first stage.
The statistical mask-matching method proves effective in recognizing handwritten Chinese characters.

34

The experimental results show that:
- The good reduction rate provided by coarse classification does not sacrifice the performance of fine classification.
- The more confident the decision, the better the accuracy rate.
- By selecting features of strong confidence, classification accuracy could be further improved.