
Method for predicting motion artifacts in matrix displays

Kees Teunissen, Yuning Zhang, Xiaohua Li, Ingrid Heynderickx

Abstract — A method is proposed to measure and characterize motion artifacts in matrix displays. By using a fast, V(λ)-corrected photodiode and a data-acquisition system, accurate measurements of the temporal luminance behavior (step response) are recorded. The motion artifacts of LCDs and PDP displays are predicted from these measurements using the properties of the human visual system. The method is validated with perceptual evaluation experiments, for which a new evaluation protocol is established. Finally, new measures are proposed to quantify the motion-rendering performance of these matrix displays.

Keywords — Motion artifacts, matrix displays, LCD, plasma, display characterization, human visual perception.

1 Introduction

The trend towards larger and flatter displays can no longer be stopped. The good old cathode-ray tube (CRT) was for years the one and only display for television (TV) and monitor applications. The rapid development of flat displays, such as liquid-crystal displays (LCDs) and plasma display panels (PDPs), makes it difficult for the CRT to survive.

Although the relative share of televisions with a CRT display decreases over time, the performance of a CRT display is currently, in some aspects, still superior to that of an LCD (Yamada et al., 2005). In the study conducted by Heynderickx et al. (2005), it was also found that a CRT TV is still preferred to other technologies, such as LCD, PDP, and LCoS. One reason for this may be the occurrence of temporal artifacts in these other technologies, which deteriorate the overall perceived quality. CRTs also suffer from temporal artifacts, such as large-area flicker. There is, however, a relatively easy way to avoid this large-area flicker, namely, driving the CRT at higher frame rates (75 Hz or higher). Large-area flicker is not so much a problem in LCDs or PDP-based displays, but it is well known that they exhibit other artifacts, such as motion blur in LCDs. PDPs also suffer from motion artifacts, but in this case they are called dynamic false contours (Doyen et al., 2003; Eo, 2005).

It is known that motion blur in an LCD is caused by two factors: (1) the LC response time (LC-RT) and (2) the sample-and-hold effect, i.e., the time that a pixel maintains its driving level. When an observer tracks a pattern on the display, the pattern is stable during a frame time (for a continuous backlight), but the eye moves with a constant speed. This results in a blurred image on the retina. This phenomenon is well described in the literature (Kurita, 2001; Sekiya et al., 2002; Shimodaira, 2003; Klompenhouwer, 2005). The LC-RT is used in the LCD panel specification, and sometimes it is suggested that this value is representative of the dynamic performance of LCDs. In fact, it is the frame period (or the related sample-and-hold effect) that becomes the dominant factor for motion blur when the LC-RT is sufficiently short (Kurita, 2001; Klompenhouwer, 2005).

For a PDP, the motion artifacts resulting in dynamic false contours are a direct consequence of its sub-field addressing. Due to the binary state of the plasma cell (ON or OFF), different luminance levels are created by dividing the frame interval into several weighted sub-fields. This is illustrated in Fig. 1 for an arrangement with which 32 luminance levels can be generated by selecting the appropriate sub-fields. Assuming that (Kurita, 2001) (1) the viewer perfectly tracks the viewed object with smooth eye movements and (2) the light stimulus within one frame period is perfectly integrated in the visual system, this results in different values on the retina for an object during motion than when the object is still. The light distribution during motion is perceived as having false contours.

Two groups of measuring techniques for motion blur in LCDs are known in the literature, viz., the pursuit or camera-tracking method and simulation methods. In the pursuit method, the edge of a moving pattern is projected onto a fixed position of the camera sensor. A rotating camera or a rotating mirror is used to accurately track the motion and fix the image on the camera sensor (Someya et al., 2005). With this method, the edge blur is measured directly by the camera system. However, these measuring systems are complicated (accurate tracking system and synchronization), expensive, and not able to include all artifacts related to motion blur (Miseli, 2004). The second approach, via simulation, uses the luminance transition curves (temporal step responses) together with eye tracking and temporal integration to predict and evaluate motion artifacts via the “blur edge” (Sasaki, 2002; Miseli, 2004; Miseli et al., 2005; Su, 2005). The disadvantage of the simulation method is that a time-to-space conversion is needed to determine the edge blur, but the system is flexible and easy to build (no accurate tracking system required).

Kees Teunissen is with Philips Consumer Electronics, Innovation Laboratory, Bldg. SFJ 5.22, Glaslaan 2, 5616 LW, Eindhoven, The Netherlands; telephone +31-40-27-33114, fax +31-40-27-34-482, e-mail: [email protected].

Yuning Zhang and Xiaohua Li are with Southeast University, Nanjing, China.

Ingrid Heynderickx is with Philips Research Laboratories, Eindhoven, The Netherlands, and Delft University of Technology, Delft, The Netherlands.

© Copyright 2006 Society for Information Display 1071-0922/06/1410-0957$1.00

Journal of the SID 14/10, 2006 957

In this paper, our measuring tool, based on the simulation method, is described, and it is demonstrated that it can be used to characterize motion artifacts in both LCDs and PDPs. We first present the measurement system that was developed to measure the temporal step response (luminance behavior as a function of time) of the displays. After that, the simulation method to predict the various motion-related artifacts is described in more detail. Next, the results of the perceptual evaluation experiments to validate the simulation method are presented in Section 4. In Section 5, the results are discussed and ideas to quantify motion artifacts are proposed. The conclusions of this study are presented in Section 6.

2 Measurement system

In this section, we present the set-up to measure the luminance behavior as a function of time. A schematic representation of the measurement equipment used to characterize the motion artifacts is shown in Fig. 2.

A fast-response, eye-sensitivity-corrected photodiode is used to capture the temporal luminance variations of the displays. Additional circuitry is used for amplification and low-pass filtering of the photodiode signal. The low-pass characteristics are tuned to the display response speed, which is much faster for a PDP (especially the blue phosphor) than for an LCD. Furthermore, a specially designed programmable video-pattern generator, consisting of a large FPGA and the required interfaces, is used to generate test patterns and to send stable, synchronized triggering signals to the data-acquisition system (analog-to-digital signal conversion) to start acquiring data. A computer is used to control the entire measurement system.

The output voltage of the photodiode is calibrated with a luminance meter. In this way, the time-varying voltage signal v(t) from the photodiode can be converted into the measured luminance signal L(t) according to Eq. (1):

L(t) = Lmax · (v(t) − voff) / (vmax − voff)    (1)

The average offset voltage (voff) is measured when the sensor is darkened, and the maximum luminance (Lmax) and the average maximum voltage (vmax) are measured for the maximum input code.
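As a sketch of how this calibration could be applied in software (the function and variable names are ours, not part of the measurement system described above):

```python
import numpy as np

def voltage_to_luminance(v, v_off, v_max, L_max):
    """Convert a sampled photodiode voltage trace v(t) into luminance L(t)
    via Eq. (1): L(t) = L_max * (v(t) - v_off) / (v_max - v_off).

    v_off : average offset voltage, measured with the sensor darkened
    v_max : average voltage measured for the maximum input code
    L_max : luminance (cd/m^2) measured for the maximum input code
    """
    return L_max * (np.asarray(v, dtype=float) - v_off) / (v_max - v_off)

# A darkened sensor maps to 0 cd/m^2, the maximum code to L_max:
print(voltage_to_luminance([0.10, 1.075, 2.05],
                           v_off=0.10, v_max=2.05, L_max=400.0))
```

The numbers in the example call are illustrative, not measured values from the paper.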

The synchronization accuracy with which the data are captured (see Fig. 3) makes data averaging over multiple acquisitions easy. Hence, the set-up allows detailed measurements of various gray-to-gray transitions with a high signal-to-noise ratio. As an example, Fig. 4 shows the measured luminance of the red, green, and blue phosphors of a PDP display with maximal driving level. These curves exhibit the sub-field arrangement (visible in blue) and phosphor-decay information (visible in red and green).
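The averaging step itself is straightforward once the acquisitions are trigger-aligned; a minimal sketch (names are ours):

```python
import numpy as np

def averaged_response(acquisitions):
    """Average N trigger-synchronized acquisitions of the same transition.

    acquisitions : array-like of shape (N, n_samples). Because the trigger
    aligns every trace, a plain sample-wise mean suppresses random noise
    by roughly a factor sqrt(N).
    """
    traces = np.asarray(acquisitions, dtype=float)
    return traces.mean(axis=0)

# Two noisy repeats of the same step response average to the clean step:
print(averaged_response([[0.0, 0.9, 1.1], [0.0, 1.1, 0.9]]))  # [0. 1. 1.]
```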


FIGURE 1 — A 5-bit binary sub-field arrangement to explain the principle of motion artifacts. Only the active sub-fields are indicated. When no motion is present, the eyes will perceive the values indicated at the bottom. When the image starts moving, the eye will integrate the sub-field values along the motion trajectory, resulting in the values indicated at the top.

FIGURE 2 — Schematic representation of the measurement system to measure the temporal behavior of displays under test (DUT).

FIGURE 3 — An example of an LCD response curve averaged over 40 data acquisitions. The zoom-in shows the synchronization accuracy.


3 Motion artifacts

The model for perceived motion artifacts in LCDs and PDPs essentially starts in both cases from the accurate measurement of the temporal response curves, and adds the assumptions of smooth-pursuit eye tracking and temporal integration, as mentioned earlier in the literature (Kurita, 2001). A more-detailed description, including some results, is given below for the LCD and PDP separately.

3.1 Simulating LCD motion blur

For an LCD with a long response time, the transition from one gray level to another takes longer than one frame, as can be seen, e.g., in Fig. 3. In some cases, two or three frames are needed to reach the target gray level, and hence the complete temporal step response should be considered in the model. In our simulation model, we consider the temporal step response to extend over a maximum of three frame times.

When we assume that a white block of n × n pixels moves across a dark background with, for instance, 4 pixels/frame, the blur of the front edge can be calculated as shown in Fig. 5 (assuming that the block size n is at least 4 times larger than the distance moved in one frame). Y1, Y2, and Y3 represent the first, second, and third frame-transition curves, respectively. For this slow transition, it is obvious that the final gray level is not reached in the first frame, but only in the third frame. The incremental step size becomes smaller for each successive frame because of the exponential behavior of the liquid crystal. The light intensity V0 as perceived by the eyes over a three-frame period can be calculated from Eq. (2):

V0(xi) = (1/Tf) Σj=0..2 ∫[jTf → (j+1)Tf] Y(xi, j, t) dt,    (2)

with

Y(xi, j, t) = YF(xi)           if xi + vj ≥ v,
              Y1(xi + vj, t)   if 0 ≤ xi + vj ≤ v − 1,
              Y2(xi + vj, t)   if −v ≤ xi + vj ≤ −1,
              Y3(xi + vj, t)   if −2v ≤ xi + vj ≤ −v − 1,
              Y0(xi)           if xi + vj ≤ −2v − 1,

where xi is the position projected on the retina, j is an integration index, Tf is the frame time, v is the motion speed of the image, and Y is the measured temporal luminance behavior (assumed identical for each pixel). Y0 and YF are the initial and final luminance, respectively. From V0(xi), the edge-blur width is calculated as the distance (in pixels) between a luminance of 10 and 90%. The tail can be calculated in the same way, using the falling gray-level transitions instead of the rising ones, and defining the edge-blur width as the distance between 90 and 10% luminance. Since Fig. 3 indicates that our LCD exhibits a different temporal response for transitions from low to high luminance than from high to low luminance, both the head and tail edge blur have to be calculated to characterize motion blur in an LCD.
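The 10–90% blur-edge-width computation can be sketched as follows. The linear-ramp example assumes an idealized sample-and-hold display with an instantaneous LC response, in which case tracking smears a step edge into a ramp of width v pixels and the 10–90% width is 0.8·v; the function names are ours:

```python
import numpy as np

def blur_edge_width(x, profile):
    """Distance (in pixels) between the 10% and 90% points of a
    monotonically rising perceived-edge profile V0(x)."""
    p = np.asarray(profile, dtype=float)
    span = p.max() - p.min()
    lo, hi = p.min() + 0.1 * span, p.min() + 0.9 * span
    # invert the profile by interpolation (requires a rising profile)
    x10 = np.interp(lo, p, x)
    x90 = np.interp(hi, p, x)
    return x90 - x10

# Idealized sample-and-hold display, edge moving at v pixels/frame:
v = 8.0
x = np.linspace(-10.0, 10.0, 2001)
ramp = np.clip(x / v + 0.5, 0.0, 1.0)     # retinal edge profile
print(blur_edge_width(x, ramp))           # ~6.4 (= 0.8 * v)
```

With a measured step response from Eq. (2), the same function would be applied to the simulated profile V0(xi) instead of the ideal ramp.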

To fully characterize the motion performance of an LCD, a 19 × 19 matrix was filled with luminance-transition data, where the luminance levels were spaced such that the corresponding lightness levels were distributed equidistantly on the CIE 1976 lightness scale. The 19 levels also included the seven levels indicated as a minimum requirement in the literature (VESA, 2005). The matrix contained all rising and falling transition information. With this information, the perceived blur at different motion speeds was modeled. The resulting edge-blur widths for different gray-to-gray transitions and different motion speeds, both for head and tail, are given in Fig. 6. It illustrates that the edge blur increases with increasing motion speed, that it is larger for smaller transitions, and that it is different for the rising and falling edge. How well these simulations correspond to what we actually perceive is validated with a perceptual evaluation, the results of which are described in Section 4.1.
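The spacing of those 19 levels can be reproduced from the standard CIELAB definition of lightness L*; a sketch (the two-piece inverse-L* formula is the standard CIE one, the helper name is ours):

```python
import numpy as np

def lstar_to_Y(lstar, Y_white=1.0):
    """Relative luminance Y for a CIE 1976 lightness L* (inverse of the
    CIELAB lightness formula, with the usual two-piece definition)."""
    lstar = np.asarray(lstar, dtype=float)
    fy = (lstar + 16.0) / 116.0
    return Y_white * np.where(lstar > 8.0, fy ** 3, lstar / 903.3)

# 19 gray levels equidistant on the L* scale, as used to fill the
# 19 x 19 transition matrix:
levels = lstar_to_Y(np.linspace(0.0, 100.0, 19))
print(levels[0], levels[-1])   # 0.0 1.0
```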

3.2 Simulating PDP dynamic false contours

For a PDP display, the principle for calculating motion artifacts is similar. In Fig. 7, a moving one-dimensional picture


FIGURE 4 — Example of temporal impulse responses for the different phosphors (green having the highest luminance, and blue the fastest response) of a PDP display at maximal driving level and 60-Hz refresh rate.

FIGURE 5 — Example of how the step response is used in the calculation of motion blur.


is presented. The blocks (A, B, C, D, and E) are pixels in the original input image. They move horizontally with a speed of 2 pixels/frame. The dashed lines indicate the integration trajectory and the regions over which the integration should take place. The area Sxt in Fig. 7 can be considered as an integration region for a new pixel, when perceived during motion. The value for this pixel can be calculated with Eq. (3):

Lj = 1/(Xp Tf) ∫∫[Sxt] L(x, t) dx dt    (3)

Lj is the amount of light for the corresponding pixel area, Tf is the corresponding frame period, Xp is the integration area, and L(x, t) is the original luminance for each particular position and time. The integration formula (3) needs information about the light distribution over space and time.

In our current measurement system, only the luminance as a function of time is available. However, this information is sufficient when we consider that the light is generated in the center of each RGB triplet. This assumption is valid for normal viewing conditions, when the viewing distance is large enough compared to the pixel size. In that case, Lj can be derived from the recorded luminance L(t) of the contributing pixels, in this case pixels C, D, and E.

Hence, from a given original still image, together with the temporal luminance measurements for each input code and each phosphor (for example, see Fig. 4), we can predict the perceived result when it moves at a certain speed (according to Figs. 1 and 7). In this way, most of the motion artifacts, such as dynamic false contours and motion blur, can be predicted. The seriousness of the artifacts is determined not only by the different phosphor behavior and sub-field arrangement but also by the motion speed. See, for instance, Fig. 8, showing an original image and how it would be perceived during motion. Both the dynamic false contour and the blurring effect can clearly be observed. The correctness of this simulation model is also validated with a perceptual evaluation experiment, of which the results are presented in Section 4.2.
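As an illustration of this prediction principle (Figs. 1 and 7), the following sketch integrates a 5-bit binary sub-field arrangement along the motion trajectory for one pixel row. The evenly spaced sub-field instants and all names are simplifying assumptions of ours, not the measured timing of Fig. 4:

```python
# Assumed 5-bit binary sub-field arrangement: weights 1, 2, 4, 8, 16,
# each assumed to emit at an evenly spaced instant within the frame.
WEIGHTS = [1, 2, 4, 8, 16]
SF_TIMES = [(k + 0.5) / 5.0 for k in range(5)]   # in fractions of a frame

def perceived_row(codes, speed_ppf):
    """Predict retinal values for one pixel row moving at speed_ppf
    pixels/frame, under the two assumptions from the text: perfect
    smooth-pursuit tracking and full temporal integration over a frame."""
    codes = list(codes)
    out = []
    for i in range(len(codes)):
        value = 0
        for bit, (w, t) in enumerate(zip(WEIGHTS, SF_TIMES)):
            # pixel under the tracking point when this sub-field fires,
            # clamped at the row borders
            src = int(round(i + speed_ppf * t))
            src = min(max(src, 0), len(codes) - 1)
            value += w * ((codes[src] >> bit) & 1)
        out.append(value)
    return out

row = [15, 15, 15, 16, 16, 16]        # adjacent input codes 15 and 16
print(perceived_row(row, 0))          # still image: codes are reproduced
print(perceived_row(row, 2))          # motion: false-contour values appear
```

At 2 pixels/frame the 15|16 boundary yields retinal values well above and between the two input codes, the bright/dark lines described for Fig. 1.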

4 Perceptual validation

4.1 Perceived LCD motion blur

A perception study was done to validate the simulation of the motion blur for LCD panels. A limited number of transition levels and speeds were selected to create the simulated images. In the first experiment, a CRT monitor was used to render the simulated blurred images, and an LCD monitor displayed the intrinsic motion blur. The set-up is presented in Fig. 9. Someya et al. (2005) used the same test set-up and reported a good correspondence between simulated and actual edge blur.

Both monitors displayed the same images: the original image on the top and the blurred image on the bottom side.


FIGURE 6 — Calculated edge-blur width (10 → 90%) for head and tail as a function of motion speed, with the different luminance transitions as a parameter.

FIGURE 8 — Example of an original input image (on the left) and the simulated image (on the right) perceived during motion at a given motion speed.

FIGURE 7 — Basic principle for the prediction of dynamic false contours. The light distribution (active sub-fields), indicated on the time axis, depends on the input code.

FIGURE 9 — Set-up of the first experiment to evaluate the correctness of the simulation model. The left-hand side indicates what was presented behind the black cover of the actual set-up (presented on the right).


On the CRT, only the blurred image was visible to the viewer, and on the LCD only the original image (see Fig. 9) was visible. Because of the motion, the original sharp image on the LCD became blurred on the retina. The CRT displayed the simulated blurred image and was assumed not to introduce additional (motion) blur. Drawing conclusions from the results obtained in this experiment was not straightforward because:

• The physical properties of the LCD and the CRT differed (different white-point setting, different black and white levels, etc.).

• A large tail could be seen in the CRT images. This was caused by the long phosphor decay time. Thus, this CRT also demonstrated blur!

In our case, simulation of motion blur on a CRT display appeared to be a poor choice.

A new experimental set-up that enabled presentation of both simulated and intrinsically blurred images on the same LCD was introduced (see Fig. 10). A still image with the simulated motion-blurred edge was shown on the top half of the screen. A moving sharp image, whose speed could be adjusted, was shown on the bottom half of the screen. In our experiment, the sharp block appeared from behind a vertical bar, traversed the window (with a visual angle of 30°) with a user-adjustable speed, and disappeared behind the other bar. The block reappeared from the left vertical bar 500 msec after it disappeared behind the right bar. This was to allow the observer to reset his eyes to the left bar before the block appeared again. Otherwise, at some point in time the direction of the observer's eyes would have been opposite to the direction of the moving block, and unwanted effects might have appeared in the image. With the introduced “dead time,” the observer could lock in to the motion speed much faster, resulting in a longer time to accurately track and judge the sharpness of the edges. This methodology was derived from experiments performed earlier by Westerink & Teunissen (1995, 1996).

In our set-up, the block distance, block size, and background gray level can be adjusted. In the experiment, the distance between the still and moving block was 50 pixels and the block size was 100 × 100 pixels.

For the experiment, five transition levels were selected from the seven equidistant lightness levels defined in CIE 1976: Y0 → Y6, Y0 → Y5, Y1 → Y5, Y2 → Y4, and Y2 → Y4. Three additional transitions were included with the same levels, but with a negative contrast (a dark block moving on a bright background): Y6 → Y0, Y5 → Y1, and Y4 → Y2. Four motion speeds were used: 4, 8, 12, and 16 pixels/frame (the latter corresponds to 29°/sec). Care was taken that the highest speed was well below the limit at which the image starts to disintegrate. In total, (5 + 3) × 4 = 32 test stimuli were created for this perception study. The corresponding differences in calculated edge blur were already given in Fig. 6.
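The conversion from pixels/frame to degrees of visual angle per second can be sketched as follows. The 60-Hz refresh rate and the 50-cm viewing distance are from the text; the 0.264-mm pixel pitch is an assumed, typical value that happens to reproduce the quoted 29°/sec:

```python
import math

def ppf_to_deg_per_sec(speed_ppf, refresh_hz=60.0,
                       pixel_pitch_mm=0.264, view_dist_mm=500.0):
    """Motion speed in pixels/frame expressed as degrees of visual
    angle per second, for a given refresh rate, pixel pitch, and
    viewing distance."""
    px_per_sec = speed_ppf * refresh_hz
    deg_per_px = math.degrees(math.atan(pixel_pitch_mm / view_dist_mm))
    return px_per_sec * deg_per_px

print(round(ppf_to_deg_per_sec(16), 1))   # 29.0
```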

During the experiment, the observers (17 in total, between 22 and 48 years old) were asked to adjust the speed of the moving block such that its appearance corresponded best to that of the simulated still image. For that particular speed, the observer had to assess the match of the leading and trailing edge blur between the simulated block and the actual moving block separately. This procedure was repeated for each of the eight transitions and all four speeds. The test conditions were in correspondence with ITU-R BT.500-10, “Methodology for the subjective assessment of the quality of television pictures”: the viewing distance (50 cm) was chosen such that the observers were able to resolve the individual pixels of the display, and the illumination level was less than 5 lux, measured on the screen. For scoring the match between the two images at the adjusted speed, the ITU-R BT.500 quality scale was used. The subjects were allowed to use half-point values while assessing the match in our experiment. The question to the observer was: “What is the match in blurred-edge width between the moving image, at its optimal speed, and the still image with the simulated edge blur?”

The results of the selected matching speed for the eight transitions and four moving speeds are shown in Fig. 11. On average, the adjusted speed was very close to the simulated speed (the correlation coefficient is 0.9999). The standard error in the mean is indicated in Fig. 11 with the error bars. For 16 pixels/frame (at a 60-Hz refresh rate, this corresponds to 29°/sec), the deviation in the adjusted speed was a bit larger, since for some observers the block was moving too fast to make a proper judgment.

Statistical analysis of the matching data for the actual moving edge blur and the simulated edge blur indicated an equal match for leading and trailing blur. The average scores for the match were above 4.4 in all cases, implying that the edge blur of the simulated image was almost identical to the edge blur perceived in the moving image. Hence, we can conclude that the perceived edge blur during motion is very well simulated with our model, and thus that this method is a good alternative to the smooth-pursuit camera system.

FIGURE 10 — Experimental set-up to evaluate the correctness of the simulation results. The top part of the display presents the image with the calculated edge blur, and the bottom part presents the sharp image, of which the speed can be adjusted.


4.2 Perceived PDP dynamic false contours

For the perception experiment validating the model for PDP motion artifacts, two pictures were displayed simultaneously on the same PDP screen (see Fig. 12). The top half of the display showed the original picture, moving with a user-adjustable speed. The bottom part of the display showed the picture simulated when moving at a given speed. The participants controlled the motion speed of the top picture and adjusted it until the two images showed the best match. After adjustment, a score for the correspondence between the two images, again following the ITU-R BT.500 quality scale, was given. Figure 12 shows the three images used in the experiment: one complicated image (eyeball) and two simple test patterns (RGB blocks with three slightly different levels, and gray blocks). Various simulated motion speeds in the range between –8 and +12 pixels/frame were used. The subjects were seated at a distance of about six times the screen height (3.1 m), in an otherwise dark room. In total, 13 subjects attended the experiment.

Figure 13 shows the average adjusted speed. The standard error in the mean is on average 0.26 pixels per frame (ppf), with a maximum of 0.55 ppf (for the eyeball image at a speed of 12 ppf) and a minimum of 0.1 ppf (for the gray blocks moving at 4 ppf). The overall average matching score was 4.4, and each match was on average scored higher than 4 (good match). There was one exception, for the RGB blocks when the simulated speed was 1 ppf; its average score was 3.8. The reason for the lower matching score was the visibility of the lines in the RGB blocks. These lines became visible in the simulation model due to the slight difference in input code within the colored blocks. It appeared to be difficult to simulate this line correctly for this low speed.

In general, one can conclude from the results of the experiment that different motion speeds result in different levels of the motion artifacts, and that the simulation model is able to differentiate them. Hence, the simulation model proposed in this paper is also a very valuable tool to model motion artifacts in PDPs: the artifacts that will be perceived during motion can be presented on a still image.

FIGURE 11 — The average values for the adjusted speeds together with their standard errors in the mean. The number of observations for each speed is 136.

FIGURE 12 — The images displayed on the screen during the perception experiment. On the top, the image gray blocks; in the middle, the image RGB blocks; and on the bottom, the image eyeballs are shown. Each image consisted of two parts, viz., the top part with the original image and the bottom part with the simulated image. The top part of each image could move in the horizontal direction with a user-adjustable speed.

FIGURE 13 — The average speed adjusted by the observers vs. the speed used for simulation. A positive speed indicates movement to the right.


5 Discussion

In the literature, several measures to indicate motion performance can be found. The most often used ones are related to the blur-edge width (BEW) (VESA, 2005) or the blur-edge time (BET), which can be calculated from the BEW (Oka et al., 2004). These measures are determined with the smooth-pursuit moving-camera system. In our study, we also used the BEW to characterize motion blur. We demonstrated that its value can be accurately determined with our simple photodiode system. But this does not necessarily imply that this value is the best measure to characterize perceived motion blur. Including the contrast sensitivity function (CSF) of the eye in the BEW calculations provides a better correlation with the perceived edge blur (Oka et al., 2005).

For a PDP, a model that predicts the level of image degradation as a function of motion artifacts is not readily available. There are several articles that describe how to reduce these types of artifacts (Klompenhouwer et al., 2000; Hoppenbrouwers et al., 2002; Kim, 2005; Eo et al., 2005), but it is still difficult to quantify the obtained improvements. With our model, the appearance of an image during motion can be predicted. The difference between the original image and the image perceived during motion could be calculated and expressed in, for instance, a ∆E2000 value or a mean squared error (MSE). In our study, the MSE values for the images simulated at different motion speeds were calculated. The trend of the calculated MSE ratio is in line with what we expect, but more extensive experiments should be conducted to validate the use of this measure.
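The MSE part of this comparison is simple to state in code (a ∆E2000 comparison would additionally require a full color-difference implementation); a minimal sketch with our own function name:

```python
import numpy as np

def mse(original, simulated):
    """Mean squared error between the original still image and the image
    predicted to be perceived during motion (any matching array shapes)."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(simulated, dtype=float)
    return float(np.mean((a - b) ** 2))

# Identical images give 0; a uniform error of 2 gray levels gives 4:
print(mse([[10, 20], [30, 40]], [[10, 20], [30, 40]]))   # 0.0
print(mse([[10, 20], [30, 40]], [[12, 22], [32, 42]]))   # 4.0
```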

6 Conclusions

A measurement system, a simulation method, and a perception protocol have been established to evaluate motion artifacts of matrix displays. The temporal luminance variation can be acquired precisely by using a fast, eye-sensitivity-compensated photodiode in combination with a suitable data-acquisition system. The temporal response waveforms (step and impulse responses) are used as input for the simulation model. The model assumes smooth-pursuit eye movements and uses temporal integration of the luminance data over one frame time. The models were validated with perception experiments, and a high correlation was found between the simulated artifacts applied to a still image and the actual artifacts perceived on the moving image. As a next step, we will use these models to identify the physical properties that correlate best with the perceived LCD motion blur.
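The eye-trace integration at the heart of the simulation can be illustrated for the simplest case, an ideal hold-type display with an instantaneous step response. This is a hedged sketch under that simplifying assumption: the function name, the sub-frame step count, and the test signal are our own, and the full model additionally convolves in the measured step/impulse responses.

```python
import numpy as np

def simulate_hold_type_blur(line, speed_px_per_frame, steps=16):
    """Eye-trace integration for an ideal hold-type display: during one
    frame the eye tracks the moving object (smooth pursuit) while the
    displayed line stays fixed, so the retina averages the luminance
    along an eye trajectory of `speed_px_per_frame` pixels."""
    line = np.asarray(line, dtype=float)
    x = np.arange(line.size, dtype=float)
    acc = np.zeros_like(line)
    for k in range(steps):
        shift = speed_px_per_frame * k / steps   # sub-frame eye displacement
        acc += np.interp(x + shift, x, line)     # luminance sample on the retina
    return acc / steps

# A sharp edge smears into a ramp roughly as wide as the motion per frame
edge = np.concatenate([np.zeros(50), np.ones(50)])
blurred = simulate_hold_type_blur(edge, speed_px_per_frame=8.0)
```

Averaging the shifted copies is equivalent to convolving the line with a box filter whose width equals the displacement per frame, which is why the blur edge width of a hold-type display grows linearly with motion speed.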

References

1 D. Doyen et al., "Compensation of false contours on a PDP using pixel-based motion estimator combined with an efficient coding technique," SID Symposium Digest Tech. Papers 34, 780–783 (2003).
2 Y. Eo et al., "Histogram-based subfield LUT selection for reducing dynamic false contours in PDPs," SID Symposium Digest Tech. Papers 36, 606–609 (2005).
3 I. Heynderickx et al., "Image quality comparison of PDPs, LCDs, CRTs, and LCoS projection," SID Symposium Digest Tech. Papers 36, 1502–1505 (2005).
4 J. J. L. Hoppenbrouwers et al., "100-Hz upconversion in plasma displays," SID Symposium Digest Tech. Papers 33, 922–925 (2002).
5 C. Kim, "Improving motion picture quality of plasma display panels," Proc. 2005 International Symposium on Intelligent Signal Processing and Communication Systems, 633–636 (2005).
6 M. A. Klompenhouwer et al., "Optimally reducing motion artifacts in plasma displays," SID Symposium Digest Tech. Papers 29, 388–391 (2000).
7 M. A. Klompenhouwer, "Temporal impulse response and bandwidth of displays in relation to motion blur," SID Symposium Digest Tech. Papers 36, 1578–1581 (2005).
8 T. Kurita, "Moving picture quality improvement for hold-type AM-LCDs," SID Symposium Digest Tech. Papers 32, 986–990 (2001).
9 J. Miseli, "Motion artifacts," SID Symposium Digest Tech. Papers 35, 86–89 (2004).
10 J. Miseli et al., "Evaluation of motion artifacts and evolving compensation techniques for LCD monitors," SID Symposium Digest Tech. Papers 36, 1014–1017 (2005).
11 K. Oka et al., "Moving picture response time (MPRT) measurement system," SID Symposium Digest Tech. Papers 35, 1266–1269 (2004).
12 K. Oka et al., "Image quality degradation of moving pictures: Perceived blur edge width," Proc. IDW '05, 815–818 (2005).
13 D. Sasaki et al., "Motion picture simulation for designing high-picture-quality hold-type displays," SID Symposium Digest Tech. Papers 33, 926–929 (2002).
14 K. Sekiya et al., "Eye-trace integration effect on the perception of moving pictures and a new possibility for reducing blur on hold-type displays," SID Symposium Digest Tech. Papers 33, 930–933 (2002).
15 Y. Shimodaira, "Fundamental phenomena underlying artifacts induced by image motion and the solutions for decreasing the artifacts on FPDs," SID Symposium Digest Tech. Papers 34, 1034–1037 (2003).
16 J. Someya et al., "Relationship between MPRT measurement and perceived LCD motion blur," Conf. Record EuroDisplay, 78–81 (2005).
17 T. Su et al., "LCD visual quality analysis by moving picture simulation," SID Symposium Digest Tech. Papers 36, 1010–1013 (2005).
18 VESA FPDM Update: Video Electronics Standards Association, Flat Panel Display Measurement Standard, Version 2.0 (May 19, 2005).
19 J. H. D. M. Westerink and C. Teunissen, "Perceived sharpness in complex moving images," Displays 16, No. 2, 89–97 (1995).
20 J. H. D. M. Westerink and C. Teunissen, "Disintegration of images moving at high velocities," Intl. J. Imaging Systems Technol. 7, 92–96 (1996).
21 Y. Yamada et al., "Technology trend for high quality display image of LC-TV," Proc. IDW '05, 227–230 (2005).

Kees Teunissen received his B.S. degree in electrical engineering from the Poly-Technical School of Dordrecht, the Netherlands, in 1986. In 1987, he joined Philips Research and worked for 9 years in the field of visual perception, with a focus on the development of objective metrics. In 1999, he switched to Philips Consumer Electronics (PCE) and was involved in the development of the first plasma TVs. Since 2004, he has been working with the Innovation Laboratory of PCE as Program Manager for display-related activities. His field of research includes display characterization in a perceptually relevant way.

Yuning Zhang received his B.S. and M.S. degrees in electronic engineering from Southeast University of China in 2003 and 2005, respectively. He spent a 1-year training period (from February 2005 to January 2006) at the Innovation Laboratory of Philips CE, Eindhoven, The Netherlands. He is currently working towards his Ph.D. degree in electronic engineering at Southeast University of China. His research interests include electronic-display devices and display-characterization systems.

Journal of the SID 14/10, 2006 963

Xiaohua Li is a professor at Southeast University, Nanjing, China. He received his B.S. degree in electronics engineering from Nanjing Institute of Technology in 1983 and his M.S. degree in electron physics from Southeast University in 1988. Since 1983, he has been engaged in teaching and research on display devices at the Nanjing Institute of Technology (renamed Southeast University in 1988). His research interests are electronic-display devices and systems, as well as measurement standards for display devices.

Ingrid Heynderickx received her Ph.D. degree in physics from the University of Antwerp in December 1986. In 1987, she joined the Philips Research Laboratories in Eindhoven and worked in different areas of research: optical design of displays, processing of liquid-crystalline polymers, and functionality of personal-care devices. Since 2006, she has been a Research Fellow in the Visual Experiences group, where she heads the project on Visual Perception of Display Systems. She is a member of the Society for Information Display (SID), for which she chairs the Applied Vision subcommittee. In 2005, she was appointed Visiting Professor at the Southeast University of Nanjing and part-time Full Professor at the Delft University of Technology.


