
Journal of Visual Communication and Image Representation

Vol. 21, Issues 5-6, 2010

Mehrdad Panahpour Tehrani, Akio Ishikawa, Shigeyuki Sakazawa and Atsushi Koike

Presented by Shu Ran

School of Electrical Engineering and Computer Science

Kyungpook National Univ.

Iterative Colour Correction of Multicamera Systems Using Corresponding Feature Points

Abstract

Color distortion in multicamera systems

– Proposing a novel color correction method

• Advantages of the proposed method

− Works for both dense and sparse multicamera arrays

− Obtains an average color pattern among all cameras

• Correction starts from any camera on the array and proceeds sequentially

− Following a certain path until reaching the starting point, which triggers the iterations

• Color correction transformation based on energy minimisation

− Using dynamic programming

» On a nonlinearly weighted Gaussian-based kernel density function

» Built from geometrically corresponding feature points

• Guarantees convergence of the iteration procedure

− Without any visible color distortion


Introduction

Humans perceive scenes in a 3-D manner

– Changing our view position according to interest

• Used as a communication component

− FTV and 3D-TV

– Video sequences are captured at the same time from different viewpoints

• Dissimilarities in luminance and chrominance between viewpoints

− Appropriate color correction compensates for the inconsistency

– Basic color correction in multicamera systems

• Adjusting parameters

− Self-calibrating each camera

» Using automatic gain and white balance adjustment

• Image processing

− Generally divided into two categories


– Methods using a color pattern board

• Achieving an average color intensity among viewpoints

− Useless for large areas and outdoor situations

– Methods without a color pattern board are more popular

• Performed after capture and applicable both indoors and outdoors

− But an average color intensity cannot be achieved

• Classified into illumination-based and feature-based approaches

− Illumination-based: easy to implement and basically a linear transformation

» Without considering the geometrical characteristics of the multicamera system

− Feature-based: estimating a nonlinear color transformation using geometrical characteristics

• Conventional methods make all views similar to one reference viewpoint

− Pairs at far distances from the reference are hard to correct well


Proposed method

– Advantages of the proposed method

• Works for both dense and sparse multicamera arrays

• Achieves an average color intensity among viewpoints

– Without a color pattern board

• Feature-based method

− Nonlinear transformation obtained from geometrical characteristics

– Assumed conditions

• Lambertian condition and similarly manufactured cameras

− Similar color filters across the multicamera system

− Captured images obtained without RGB cross-matrix calculation

– Goals

• Scalability and accuracy

• Average color intensities and adaptivity

• Fully automatic operation


– Outline of the iterative color correction (each step is sketched in code later in this section)

• Obtaining corresponding intensities for a neighbouring pair

− Reference and target cameras, using modified SIFT

» Correction can start from any location on the camera array

• Suppressing outliers from the matched features

− Collecting more corresponding intensities

» From blurred intensities at the corresponding locations that survive suppression

» From several frames of the reference and target videos

» With the same procedure

• Using the proposed nonlinearly weighted kernel density function

− Generating a nonparametric statistical representation of the collected correspondences

» For each color channel

− Nonlinear weighting approach

» Decreasing the effect of corresponding intensities with large differences

» Which would otherwise cause errors in the correction transformation


Previous works

Several previous approaches

– Two main processes within any color correction

• Obtaining a list of corresponding intensities

• Generating a transfer function for color correction

– Previous methods using color pattern boards

• Generating a linear transformation

− Based on linear least-squares matching

» Simultaneously modifying RGB at pixels using a 3x3 matrix

• Linear RGB-RGB transformation using a 3x4 matrix

• Capturing the whole hue range and detecting corresponding intensities

− Useless for outdoor scenes and wide-space multicamera coverage

» Since providing a color pattern board there is hard


– Methods without a color pattern board targeted specific applications

• Obtaining corresponding intensities from the histograms of two cameras

− Generating a LUT by using the histograms of the two views

− Generating a linear transformation for the YUV channels

» Insufficient for occlusion areas

• Compensating illumination to increase coding efficiency

− Using macroblock-based illumination change

− Using the average and variance within a block

− Adding offsets to compensate blocks

– Nonlinear approaches without a color pattern board

• Content-adaptive color correction

− Using a linear transformation in matrix form

• Choosing one reference camera

− All target cameras are corrected toward the color pattern of the reference


Proposed color correction

Procedure of proposed method

– Choosing a reference camera and its nearest neighbour as the target

• Matching the color pattern of the target camera to that of the reference

− Using the proposed color correction algorithm

» Generating nonlinear transformation functions as a LUT

» By minimisation of a correspondence energy function

» Through dynamic programming (sketched in code after Eq. (15))


Fig. 1. Flowchart of the proposed colour correction method.

– Obtaining the energy function through the following steps


Step 1: For the first frame of the pair, use SIFT to select feature point locations and determine the corresponding feature points.

Step 2: Suppress outliers and save the locations of corresponding points in set "Lf".

Step 3: Blurring of different orders (i.e., several Gaussian filters) is applied to the pair. Then, corresponding intensities are collected in set "P" from the original and blurred pairs at the locations of the corresponding points obtained in Step 2.

Step 4: For several further frames of the pair, using the same approach as Steps 1 to 3, append the corresponding feature points to set "P".

Step 5: A Gaussian-based kernel density function is generated using all corresponding intensities.

Step 6: Within the kernel density function, the Gaussian function is nonlinearly weighted using the distribution of corresponding intensities in set "P".

Step 7: Finally, we generate an energy function for each colour channel using the statistical model from Step 6 and a regularization energy function. Then, a lookup table for each colour channel is generated for correction.

Algorithm of proposed method

– Step 1: detection of corresponding feature points

• Using modified SIFT to find corresponding features (a plain SIFT matching sketch follows Eq. (1))

• Modified matching process


$L_f = \{(x_1^r, y_1^r, x_1^t, y_1^t), \ldots, (x_i^r, y_i^r, x_i^t, y_i^t), \ldots, (x_M^r, y_M^r, x_M^t, y_M^t)\}$   (1)

where $(x_i^r, y_i^r, x_i^t, y_i^t)$ is the $i$th pair of corresponding locations in the target "t" and reference "r" cameras. Symbol "$f$" denotes the frame number in which the corresponding points are detected for the pair. The optimal value "M = 300" is estimated experimentally.
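As a concrete illustration of Step 1, here is a minimal sketch using plain OpenCV SIFT with Lowe's ratio test; the paper's modified SIFT and modified matching process are not reproduced, and the ratio threshold 0.75 and the helper name are assumptions.

    # Sketch of Step 1 with standard OpenCV SIFT (the paper's modified
    # SIFT and matching process are not reproduced); M = 300 per the slide.
    import cv2
    import numpy as np

    def corresponding_points(ref_img, tgt_img, M=300):
        sift = cv2.SIFT_create()
        kp_r, des_r = sift.detectAndCompute(ref_img, None)
        kp_t, des_t = sift.detectAndCompute(tgt_img, None)
        # Brute-force matching with Lowe's ratio test (0.75 is an assumption).
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_r, des_t, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        good.sort(key=lambda m: m.distance)
        # L_f: rows of (x_r, y_r, x_t, y_t) for the best M pairs, as in Eq. (1).
        return np.array([(*kp_r[m.queryIdx].pt, *kp_t[m.trainIdx].pt)
                         for m in good[:M]])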

– Step 2: suppressing outliers

• Proposing a new method for suppressing outliers (see the sketch after Fig. 2)

− Generating a histogram of the angles of the lines connecting correspondences

− Calculating the mean and standard deviation of the histogram

− Choosing a part of the histogram around the mean

» Treating the rest as outliers


$L_f = \{(x_1^r, y_1^r, x_1^t, y_1^t)_f, \ldots, (x_i^r, y_i^r, x_i^t, y_i^t)_f, \ldots, (x_N^r, y_N^r, x_N^t, y_N^t)_f\}$   (2)

Fig. 2. Suppression of outliers: (a) correspondences detected by modified SIFT, (b) histogram of the angles of the lines in (a) that connect corresponding points before suppression, (c) histogram after suppression of outliers, and (d) remaining corresponding points after suppression.
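A minimal sketch of the Step 2 suppression, assuming the kept part of the histogram is the interval within one standard deviation of the mean angle (the exact cut used in the paper is not given on the slide):

    import numpy as np

    def suppress_outliers(L_f, n_std=1.0):
        """Keep pairs whose connecting-line angle is close to the mean angle."""
        x_r, y_r, x_t, y_t = L_f.T
        angles = np.degrees(np.arctan2(y_t - y_r, x_t - x_r))
        mean, std = angles.mean(), angles.std()
        keep = np.abs(angles - mean) <= n_std * std   # inlier part of histogram
        return L_f[keep]                              # N <= M pairs, Eq. (2)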

– Step 3: correspondences from blurred pairs

• Images contain high spatial resolution and noise

− Which impair color correction

• Generating several orders of blurred image pairs (sketched in code after Fig. 3)

− Applying a Gaussian filter G times

− Using all corresponding colors of the filtered images at a given time

− Blurring increases the total number of corresponding intensities

» And picks better correspondences at edges


$P^f = \{P_0^f, P_1^f, \ldots, P_i^f, \ldots, P_G^f\}$

$P_i^f = \{(P_i^f(x_1^r, y_1^r), P_i^f(x_1^t, y_1^t)), \ldots, (P_i^f(x_N^r, y_N^r), P_i^f(x_N^t, y_N^t))\}$   (3)

where "$P_i^f$" is the set of corresponding intensities for a colour channel when the pair is blurred "$i$" times.


• Procedure for applying Gaussian filtering three times


Fig. 3. Gaussian filtering and picking up of corresponding intensities in the filtered images at the same locations detected by modified SIFT in the original images.
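A minimal sketch of Step 3 for one colour channel, assuming a fixed 5x5 Gaussian kernel per pass (kernel size and sigma are not given on the slides):

    import cv2
    import numpy as np

    def collect_intensities(ref_ch, tgt_ch, L_f, G=3):
        """Collect (r, t) intensity pairs from the original and G blurred pairs."""
        xs_r, ys_r = L_f[:, 0].astype(int), L_f[:, 1].astype(int)
        xs_t, ys_t = L_f[:, 2].astype(int), L_f[:, 3].astype(int)
        pairs = []
        for i in range(G + 1):               # i = 0 is the unblurred pair
            pairs += list(zip(ref_ch[ys_r, xs_r], tgt_ch[ys_t, xs_t]))
            ref_ch = cv2.GaussianBlur(ref_ch, (5, 5), 0)
            tgt_ch = cv2.GaussianBlur(tgt_ch, (5, 5), 0)
        return np.array(pairs)               # contribution to set "P", Eq. (3)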

– Step 4: correspondences from several time instances

• Choosing more frames to increase the number of corresponding points

− Within a limited time range of frames

− The number of corresponding intensities changes slightly at each time instance

» Eq. (6) gives the total number of correspondences for each color channel

• Example of set P


$P = \bigcup_{i=1,\ldots,F} P^{f_i} = \{(r_i, t_i)\}_{i=1,\ldots,N_S}$   (5)

$N_S = (1 + G) \sum_{i=1}^{F} N_i$   (6)

where $N_i$ is the number of corresponding points detected in the $i$th selected frame.

Table. 1. Example of set ‘‘P” for RGB channels.

– Step 5: generation of a Gaussian-based kernel density function

• Building a statistical representation of the corresponding intensities in P

− Using general nonparametric kernel density estimation

» Without any assumption about the underlying distributions

− Estimating the density at a point x (a sketch of Eq. (8) follows)


$P(x) = \frac{1}{N} \sum_{i=1}^{N} K(x - x_i)$   (7)

where "$K$" is a kernel function with a bandwidth. For a pair of corresponding intensities $(r_n, t_n)$, the 2-D density is

$P(r_n, t_n) = \frac{1}{N} \sum_{i=1}^{N} G_J(r_n - r_i)\, G_J(t_n - t_i)$   (8)

where the same kernel function is used with a suitable bandwidth "$J$" for each dimension.
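A minimal sketch of the density of Eq. (8), evaluated on the full 256x256 grid of (r, t) levels; the bandwidth value J = 8.0 is an assumption:

    import numpy as np

    def kernel_density(P, J=8.0):
        """2-D Gaussian kernel density on the 256x256 (r, t) grid, Eq. (8)."""
        r_i, t_i = P[:, 0].astype(float), P[:, 1].astype(float)
        grid = np.arange(256.0)
        # Separable 1-D Gaussians, one per dimension, shared bandwidth J.
        G_r = np.exp(-(grid[:, None] - r_i) ** 2 / (2 * J ** 2))
        G_t = np.exp(-(grid[:, None] - t_i) ** 2 / (2 * J ** 2))
        return (G_r @ G_t.T) / len(P)        # density[r_n, t_n]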

– Step 6: nonlinear weighting of the kernel density function

• Problem of corresponding intensities with large differences

− They deviate the correction transformation

− And cause color distortion during iteration

• To correct target images with large intensity changes

− A weighting coefficient is added during generation of the statistical model

• Proposing a nonlinearly calculated weighting coefficient

− For each pair of corresponding intensities

• Nonlinearly weighted 2-D Gaussian kernel density function


$J(r_n, t_n) = \frac{1}{N} \sum_{i=1}^{N} \omega(r_i, t_i)\, G_J(r_n - r_i)\, G_J(t_n - t_i)$   (9)

where the weighting coefficients $\omega(r_i, t_i)$ are calculated as follows:

$\omega(r_i, t_i) = \exp\left(-\frac{x_i^2}{2 Q^2}\right)$   (10)

where "$x_i$" is the offset of $(r_i, t_i)$ from the line $r = t$ and "$Q$" is the number of corresponding intensities in the histogram along the line through $(r_i, t_i)$ parallel to $r = t$.

• Procedure of Step 6 (sketched in code below)

− Calculating the histogram of set P

− Locally cutting the histogram along lines normal to the line r = t

» Giving the highest coefficient value to the most emphasized correspondences

» Reducing the effect of large intensity changes
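A minimal sketch of the weighting of Eq. (10) under the reading above: pairs are grouped along diagonals parallel to r = t, x_i is the signed offset from r = t, and Q counts the pairs on the same diagonal; both readings are assumptions, since the slide does not define them precisely.

    import numpy as np

    def weighting_coefficients(P):
        """omega(r_i, t_i) of Eq. (10) under the stated (assumed) reading."""
        d = P[:, 0].astype(int) - P[:, 1].astype(int)   # diagonal index r - t
        x = d / np.sqrt(2.0)                 # signed distance to the line r = t
        counts = np.bincount(d - d.min())    # histogram along the diagonals
        Q = counts[d - d.min()].astype(float)
        return np.exp(-x ** 2 / (2 * Q ** 2))

With these coefficients, Eq. (9) is Eq. (8) with each Gaussian product scaled by the corresponding omega.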


Fig. 4. Procedure to calculate the weighting coefficient of each corresponding intensity for the Gaussian-based weighted kernel, for a given set of corresponding intensities in a colour channel.

– Step 7: calculation of the lookup table and performing color correction

• Calculating the nonlinear transformation as a LUT for each color channel

• Example of a LUT for the RGB color channels


$r_i = f(t_i)$   (11)

$r \in \{r_0, r_1, r_2, \ldots, r_i, \ldots, r_{255}\}$   (12)

$t \in \{0, 1, 2, \ldots, t_i, \ldots, 255\}$   (13)

Table. 2. Example of LUT for RGB colour channels.

• Defining an energy function for calculating $f(t)$

− Its global minimum gives the transfer function

− It consists of two parts

» The energy of corresponding intensities obtained from the model

» A step-by-step regularization energy function

» Enforcing $f$ to lie along the $r = t$ line

− The energy value for a node $(r_i, t_j)$ is accumulated by dynamic programming, Eq. (15)

$f = \arg\min_r \left[ E_1(r) + E_2(r) \right]$   (14)

$E_1(r) = -\sum_{n=0}^{255} J(r_n, t_n), \qquad E_2(r) = \sum_{n=1}^{255} (t_n - t_{n-1})^2, \qquad r_i = f(t_i)$

$e(r_i, t_j) = -J(r_i, t_j) + \min_{k=1,\ldots,j} \left[ e(r_{i-1}, t_k) + (t_j - t_k)^2 \right]$   (15)

where $J(r_i, t_j)$ is the energy of corresponding intensities obtained from the model and the squared difference term is the step-by-step regularization energy.

– Iteration

• Proposing an iteration-based approach for color correction

− General architecture of iteration-based colour correction

» For different multicamera configurations


Fig. 5. Architecture of iteration-based colour correction.

• Step 1: choosing a camera randomly as the reference camera

− Calculate the total amount of change "e1" over all colour intensities in the LUT

• Step 2: assigning the target camera of the last correction as the new reference camera

− Collecting e2 and adding it to e1

• Step 3: deciding when to stop the iteration of colour correction

− Calculating the average value of the total change "eave" per iteration

» Stop when it falls below a threshold (a sketch follows Eq. (16))

Color correction of multiview video

– Color correction for all frames within the proposed W1 window

• Using the generated LUT

− Applying the LUT obtained by iteration to the whole window


$e_1 = \frac{1}{3} \sum_{RGB} \sum_{i=0}^{255} \left| f(t_i) - i \right|$   (16)
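A minimal sketch of the stopping rule: e1 of Eq. (16) measures how much the three RGB LUTs change the intensities. Here correct_pair is a hypothetical helper standing in for one pairwise correction, and the threshold and iteration cap are assumptions.

    import numpy as np

    def lut_change(luts_rgb):
        """e1 of Eq. (16): mean total LUT change over the three channels."""
        t = np.arange(256, dtype=float)
        return sum(np.abs(lut.astype(float) - t).sum() for lut in luts_rgb) / 3.0

    def iterate_ring(cameras, correct_pair, threshold=100.0, max_iter=20):
        """Walk the ring of cameras until the average LUT change is small."""
        total, steps = 0.0, 0
        for _ in range(max_iter):
            for ref, tgt in zip(cameras, cameras[1:] + cameras[:1]):
                total += lut_change(correct_pair(ref, tgt))   # e1, e2, ...
                steps += 1
            if total / steps < threshold:     # average change e_ave: stop
                break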

Experiments

Evaluating the proposed method

– Using 5 kinds of scenes

– Using the similarity of hue histograms as the measure, Eq. (17) (sketched below)

Adjustment of parameters

– Measuring the sensitivity of the method in the RGB channels to its parameters

– Sensitivity to parameters

• Changing one parameter at a time


$\mathrm{similarity}(h_1, h_2) = \frac{1}{H} \sum_{k=0}^{H-1} \left( 1 - \frac{\left| h_1(k) - h_2(k) \right|}{\max\left( h_1(k), h_2(k) \right)} \right)$   (17)
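A minimal sketch of Eq. (17); treating bins where both histograms are empty as fully similar is an assumption:

    import numpy as np

    def hue_similarity(h1, h2):
        """similarity(h1, h2) of Eq. (17) for two hue histograms of H bins."""
        h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
        m = np.maximum(h1, h2)
        safe = np.where(m > 0, m, 1.0)        # avoid division by zero
        term = 1.0 - np.abs(h1 - h2) / safe   # equals 1 for empty bins
        return term.mean()                    # average over the H bins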

Fig. 6. Sensitivity of the proposed method to parameters that decide the total number of corresponding intensities (a)-(d), and to parameters during generation of the LUT (e) and (f).

• Sensitivity of proposed method to the different colour channels


Fig. 7. Comparison of the proposed colour correction in RGB, HSV, and YUV colour domains applied to four multiview sequences.

• Resulting images of different colour channels


Fig. 8. Examples of the proposed colour correction in RGB, HSV, and YUV colour domains applied to the "flamenco" sequence.

• Sensitivity of proposed method to the starting camera


Fig. 9. Performance of the proposed colour correction when the starting camera is changed.

– Estimation of parameters

• The value of "k" is proportional to the texture density of the given multiview images

− Measuring the texture density of the images

» By defining a structure matrix (a sketch follows Fig. 10)

− Relating "k" to the measured average texture density

» For a pair of images from all five multiview sequences


$S(x, y) = \sum_{b_x = x-p}^{x+p} \sum_{b_y = y-p}^{y+p} \nabla r(b_x, b_y)\, \nabla r(b_x, b_y)^T, \qquad \nabla r = (g_x, g_y)^T$   (18)

Fig. 10. Estimation curve of "k" (regulation coefficient during dynamic programming) for optimal (i.e., no colour distortion and maximum similarity value) colour correction vs. the average texture density for a pair of images.
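A minimal sketch of a texture-density score built from the structure matrix of Eq. (18); summarising it by the average trace is an assumption, since the slides do not give the exact scalar measure.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def texture_density(channel, p=3):
        """Average trace of the structure matrix over (2p+1)^2 windows."""
        gy, gx = np.gradient(channel.astype(float))
        w = 2 * p + 1
        # Window sums of the diagonal entries g_x^2 and g_y^2 of S(x, y).
        sxx = uniform_filter(gx * gx, size=w) * w * w
        syy = uniform_filter(gy * gy, size=w) * w * w
        return (sxx + syy).mean()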

– Comparison and analysis

• Performance of the proposed method against the conventional method

− Applied to a sparse multicamera system

• Comparing the performances of the conventional and proposed architectures and algorithms


Fig. 11. Comparison of the proposed method with the conventional method

• Comparing the performance of the proposed colour correction with the conventional one


Fig. 12. Comparison of the proposed method with the conventional method

• Performance of proposed method in comparison with RTRT0100


Fig. 13. Examples of generated LUT using (a) RTRT0100 [8,9]; and (b) RTRT1111.

• Performance of the proposed method in comparison with RTRT0100

− Shown in the table below


Table. 3. Performance of the proposed method in comparison with RTRT0100.

• Examples of generated model


Fig. 14. Examples of generated model.

– Resulting images


Fig. 15. Result of colour correction using the proposed method on "crowd" (iteration started from the leftmost image).

Fig. 16. Result of colour correction using the proposed method on "flamenco" (iteration started from the leftmost image).

– More resulting images


Fig. 17. Result of colour correction using the proposed method on "object" (iteration started from the leftmost image).

Fig. 18. Result of colour correction using the proposed method on "race" (iteration started from the leftmost image).

– Resulting images


Fig. 19. Result of colour correction using the proposed method on "cheer" (iteration started from the leftmost image).

– Impact on stereo matching in a 3-D system

• Comparing a viewpoint generated between two stereo cameras

− With the actual image captured by an existing camera

» Using colour-corrected and uncorrected sequences at different disparities


Fig. 20. Comparison of block-matching PSNR for free viewpoint synthesis using colour-corrected and uncorrected sequences.

Discussion and conclusion

Proposed a general-purpose method

– Iteration-based color correction of multiview cameras

• Without using a color pattern board

– Generating the correction transformation

• Considering the geometrical consistency of the multicamera system

– Using suppression and weighting processes

• Reducing wrong correspondences
