
Digital Signal Processing 22 (2012) 367–375


Expanding the horizontal capabilities of CRT monitors using artificial inter-pixel steps for neuroscience experiments ✩

Mario Gazziro ∗, Nelson Fernandes, Lirio Almeida, Paulo Matias, Jan Slaets

IFSC, Universidade de São Paulo, Zip Code 13560-970, P.O. Box 369, Brazil


Article history: Available online 21 January 2011

Keywords: Digital signal processing; Neuroscience; FPGA; Real-time video processing; CRT monitors

This article describes the development of a visual stimulus generator to be used in neuroscience experiments with invertebrates such as flies. The experiment consists in the visualization of a fixed image that is displaced horizontally according to the stimulus data. The system is capable of displaying 640 × 480 pixels with 256 intensity levels at 200 frames per second (FPS) on conventional raster monitors. To double the number of possible horizontal positions from 640 to 1280, a novel technique is presented, introducing artificial inter-pixel steps. The implementation consists in using two video frame buffers, each containing a distinct view of the desired image pattern. This implementation generates a visual effect capable of doubling the horizontal positioning capabilities of the visual stimulus generator, allowing more precise and more continuous movements.

© 2011 Elsevier Inc. All rights reserved.

1. Introduction

1.1. Motivation

Discovering how information is encoded in neural systems is an important scientific challenge. A lot of ongoing research seeks to discover how stimulus information is encoded and transmitted by the trains of spikes produced by neurons. An often-studied system is the visual processing system of the fly. Its electrophysiological and anatomical characteristics have been thoroughly studied, and its optic center in the brain can be regarded as the best-known insect visual system. Giant neurons located at the lobula plate, which is regarded as the fly's neural computing center of visual motion, are capable of detecting horizontal and vertical visual motions. Some of these giant neurons are direction selective.

One of the fly's most studied visual information encoding systems is based on the analysis of the spike trains produced by the giant neuron H1, which is sensitive to ipsilateral regressive horizontal motion [1]. Its size, easy identification, immunity to other brain activity and responsiveness over long periods make it an ideal neuron for in vivo experiments [2].

✩ This article was due to be included in a special issue on New Tools, Algorithms and Transforms for Time-Frequency-Shape Analysis with Applications in Pattern Recognition organized by Professor R.C. Guido. The issue was discontinued but this article is published with thanks to the authors and organizer.

* Corresponding author. E-mail address: [email protected] (M. Gazziro).

1051-2004/$ – see front matter © 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.dsp.2011.01.009

Fig. 1. Fly’s compound eye [3].

1.2. The compound eye

Compound eyes produce a mosaic image of the environment because each of the facets (or ommatidia) of the eye is directed to a slightly different spot in space (Fig. 1). The image resolution is, therefore, limited by the number of ommatidia and the angle between them.

In addition, there are different conditions for day vision and night vision. Insects that need to see efficiently during the day have seven or eight photo-receptive cells grouped together beneath each lens of the ommatidia. Such cells are arranged so that their light-collecting surfaces (the rhabdomeres) are merged in the center of each ommatidium, thus creating a structure called the rhabdom, which is similar to a short baton (Fig. 2a).


Fig. 2. How the eye of the fly (d) and (e) differs from all other arthropods (a), (b) and (c) [13].

Every ommatidium is encapsulated in a tube of pigment cells, so that light beams arriving at angles different from the orientation of the tube are removed (Fig. 2b). This way, many photons are lost, but the resolution is improved.

Insects that need to see efficiently at night have a smaller quantity of pigments in the encapsulation of the rhabdoms, thus allowing light coming from a larger field to reach their photo-receptors (Fig. 2c). This boosts sensitivity, but sacrifices resolution. Many insects are able to displace the pigment cells and thus configure the compound eye so that it performs its best both during the day and at night [13].

Around 100 million years ago, the Diptera order (mosquitoes and flies) developed a trick to give their compound eyes better resolution: they separated the photo-receptor rhabdomeres in each ommatidium so that they would be able to see seven spots instead of only one (Fig. 2d). This is called the 'neural superposition eye', which also involved some reconnections inside the brain of the fly (Fig. 2e) [14].

Another aspect of the fly's compound eye is the temperature dependence of the photoreceptor response. At 34 °C, the response speed of a fly's photoreceptors is more than double that recorded at 19 °C [15], as displayed in Fig. 3.

An important concept related to the temporal response of any biological visual system is flicker fusion, also known as the CFFT (Critical Flicker Fusion Threshold). It determines (in Hz) the threshold at which an animal begins to see strobes as if they were a continuous beam [23]. The flicker fusion value for flies was evaluated at 200 Hz [17]. However, Tatler et al. determined that this value might range between 400 and 500 Hz at extreme ambient temperatures of 34 °C [15].

Fig. 3. Response time of fly's photo-receptors [15].

1.3. Stimuli generation

Living animals have to reconstruct a representation of the external world from the output of their sensory systems in order to correctly react to the demands of a rapidly varying environment. In many cases, this sensory output is encoded into a sequence of identical action potentials, called spikes.


Fig. 4. Stimuli: variable displacement velocity of a fixed image. In the first 5 s the same stimulus was shown, whereas in the next 5 s the fly saw different stimuli.

If we represent the external world by a time-dependent stimulus function s(t), the animal has to reconstruct s(t) from a set of spikes. This decoding procedure, therefore, takes the set of spike trains and generates an estimate se(t) of the stimulus, acting as a digital-to-analog converter.

The stimulus was a rigidly moving fixed image with horizontal velocity v(t). The horizontal position is changed every 5 ms according to a position file in the computer. This set of positions is obtained by integrating a series of velocities generated by a statistical process named the Ornstein–Uhlenbeck process [13,12,11]. This time series of velocities V_t is obtained in the following way:

V_{t+\Delta} = c + \alpha V_t + \Phi_t, \quad |\alpha| < 1 \quad (1)

in which the values \Phi_t are obtained from a Gaussian distribution with zero mean and variance \sigma_\Phi^2. The term \alpha represents how much the next value V_{t+\Delta} is correlated with the previous value V_t. The choice of \alpha determines the correlation time scale between the values of V_t (Eq. (3)), and the term c represents a constant velocity.

\langle V_{t+N} V_t \rangle = \frac{\sigma_\Phi^2}{1 - \alpha^2} \, e^{-N/\tau} \quad (2)

\tau = -\frac{1}{\ln |\alpha|} \quad (3)

The choice of this distribution and of these parameters is based on the usual characteristics of natural fly behavior [16].

We discretize time in bins of 5 ms, which is near the refractory period of the H1 neurons (2 ms) [21]. The fly therefore saw a new frame on the monitor every δt = 5 ms, whose change in position δx was given by δx(t) = v(t)δt.
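As an illustration of Eqs. (1)–(3) and of the 5 ms discretization, the velocity and position series can be sketched in a few lines of Python. This is a minimal sketch: the parameter values (α, σ_Φ, the number of bins) are illustrative assumptions, not the values used in the experiments, and the function name is ours.

```python
import numpy as np

def ou_velocity_series(n, alpha, sigma_phi, c=0.0, v0=0.0, seed=0):
    """Velocity series of Eq. (1): V[t+1] = c + alpha*V[t] + Phi[t], |alpha| < 1."""
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, sigma_phi, size=n)   # Gaussian increments, zero mean
    v = np.empty(n)
    v[0] = v0
    for t in range(n - 1):
        v[t + 1] = c + alpha * v[t] + phi[t]
    return v

# Illustrative parameters: 5 ms bins; |alpha| < 1 gives the correlation
# time of Eq. (3), tau = -1/ln|alpha|, measured in bins.
alpha, sigma_phi, dt = 0.9, 1.0, 0.005
tau = -1.0 / np.log(abs(alpha))        # ~9.49 bins, i.e. ~47 ms
v = ou_velocity_series(2000, alpha, sigma_phi)
x = np.cumsum(v * dt)                  # integrated positions: dx(t) = v(t)*dt
```

The cumulative sum implements the integration of velocities into the position file that drives the display.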

Experimental runs lasted 20 minutes, consisting of 10-s long segments. In each segment, in the first 5 s the same stimulus was shown, whereas in the next 5 s the fly saw different stimuli, as shown in Fig. 4.

2. Related work

The most common technology in experiments with the visual system of insects involves the use of LEDs (Light Emitting Diodes). Reiser and Dickinson [22] performed experiments with tethered Drosophila melanogaster in a cylindrical arena composed of 44 panels, used to test the contrast dependence of object orientation behavior, above a panel floor display used to examine the effects of ground motion on orientation during flight. They designed a modular system, based on panels composed of an 8 × 8 array of individual LEDs, that may be connected together to 'tile' an experimental environment with a controllable display (Fig. 5). Wertz et al. also used the LED arenas proposed by Reiser and Dickinson in their experiments with optical flow in flies.

Fig. 5. Dickinson LED arena: the panel modules are connected to form controllable displays. (A) Individual panel showing the 64-LED dot-matrix display (LEDs 3 mm in diameter each). (B) The panels are configured as a flight arena, constructed as a 4 × 11 cylinder of panels, 128 mm in height and 123 mm in diameter.

The use of cathode ray tube (CRT) displays in fly visual system experiments was first reported by Neri et al. in 2006 [19]. The fly was positioned facing the monitor (ViewSonic PT795), driven by a VSG graphics card (Cambridge Research Systems) at 180 Hz (Fig. 6).

Although the VSG equipment meets the timing requirements for experiments with the visual system of insects, it does not meet the spatial resolution requirements and offers no determinism in the stimulus–response correlation. This motivated the development of the present work.

There are also high-level programming libraries based on Python for performing human vision experiments on CRT monitors.


Fig. 6. Neri visual experiments: visual stimuli consisted of a sequence of ‘motion frames’, each frame lasting 220 ms and containing 4 × 4 moving patches.

One of these libraries is PsychoPy [20], which uses OpenGL to generate very precise visual stimuli on standard personal computers. However, this library does not meet the time requirements needed to carry out experiments with the visual system of insects, operating at a maximum of 85 Hz.

3. Materials and methods

3.1. Fly setup

The signals under analysis correspond to the neural action potentials of H1, a motion-sensitive neuron in the fly's visual system. This neuron is sensitive to horizontal stimuli: it is excited by back-to-front moving scenery and inhibited by stimuli moving in the opposite direction [4].

These signals are acquired extracellularly, using a tungsten electrode, while the fly views a fixed image or pattern moving horizontally across its visual field. The experimental setup is shown in Fig. 7.

With the aid of a microscope and a mini scalpel, incisions are made, followed by the removal of the protecting exoskeleton in the upper part of the head of the fly. The inner part of the head is then exposed, and with a micro-hook, a hard-to-see transparent muscle located along the inner corner of the exposed hemisphere is broken. If this procedure is not properly performed, the data acquisition is compromised, since this muscle is a great source of noise due to its periodic contractions. Using the hook again, the protecting tissue that covers the layers of the brain is carefully removed. Then, the reference wire is placed in the external lower corner of the head of the fly (Fig. 8).

The next step entails tracking down the signal of the H1 neuron (which has a size of 20 microns) with an extracellular microelectrode. Such a task might last for hours and demands experience and intuition from the experimenter, since the search is performed in the dark, within a region where the neuron is most likely to be found, with the aid of the amplified audio output of the captured electrical signal (Fig. 9b). Once the neuron signal is located, microadjustments to the microelectrode are made in order to amplify the signal and reduce the noise coming from other neurons and the respiratory tracheas found in the region.

Extracellular acquisition was chosen: the microelectrode is placed in an external region close to the neuron in question. This technique, being less invasive, allows the acquisition of neuronal signals as close as possible to the natural conditions of H1 functioning, without seriously compromising the internal dynamics of the cell, while still allowing an adequate level of detail for our purposes.

Fig. 7. The experimental setup.

The capture of this signal was carried out with a tungsten microelectrode of the FHC brand, which is appropriate for this kind of experiment. This microelectrode has its body covered in epoxy insulating varnish, with only its tip exposed. The signal selectivity of the microelectrode is higher or lower according to the exposed tip area.

Due to the large number of neurons that fire in response to visual stimuli, it is important to have a high degree of signal selectivity so that we essentially capture only the signal in question. Thus, we try to use microelectrodes with smaller exposed areas, which guarantee a higher signal selectivity at the cost of a higher electrical resistance – though, at these neural signal levels, they will not contribute significant thermal noise. The impedances of the microelectrodes used vary from 2.7–5 MΩ.

3.2. Digital hardware components

We use electronic kits, models DE3 and DE2 by Terasic, with Altera Stratix III and Cyclone II FPGA chips. The Stratix chip has almost 1 MB of on-chip memory available to the designer. The DE2 kit also has a video DAC from Analog Devices, model ADV7123, a triple 10-bit high-speed video DAC with 140 MHz of video bandwidth.

Fig. 8. (a) Reference wire placed in the external lower corner of the head of the fly. (b) Indication of possible H1 neuron location [10].

Fig. 9. (a) Correct positioning of the microelectrode. (b) Insertion of the extracellular microelectrode [10].

Fig. 10. Block diagram of main components.

We only use the video DAC of the DE2 kit, so the whole architecture was fitted into the DE3 kit, using the Stratix III chip. The Cyclone II chip was only used as a pass-through to allow the Stratix III to access the video DAC. The connection between the kits was made through the GPIO interface, using standard 40-way IDE cables with IDC connectors.

The CRT monitor is a 221U model by LG, with a 21-inch flat square tube, deflection angle of 90° and low radiation (TCO-99). The maximum scanning frequency is 115 kHz horizontal and 200 Hz vertical. The active video area is 406.4 × 304.8 mm. This monitor has P22 phosphor and a slot pitch of 0.26 mm [5]. P22 is the standard phosphor on color monitors, consisting of R, G and B components which decay to 10% peak emission in 1.5, 6.0 and 4.8 ms, respectively [6].

Using an IBM-PC compatible running Matlab™ scripts, we provide the desired stimulus data at 200 Hz through the parallel port (in EPP mode). The main components of the system are presented in Fig. 10.

Fig. 11 shows the system at work at 200 Hz. The displayed image can be dynamically changed during the experiment execution. The neural data were acquired with the equipment developed by Almeida et al. [7] in his master's thesis, composed of an analog front-end and a time-stamp registering system, shown as part of Fig. 7.

Fig. 11. Visual stimulation system using FPGA kits and a CRT monitor.

3.3. Experiment requirements

A mandatory requirement for visual neuroscience experiments with invertebrates is a monitor with a high refresh rate. Warzecha and Egelhaaf [8] carried out experiments with a refresh rate of 183 Hz and achieved good results. In this project the refresh rate was fixed exactly at the maximum vertical scanning frequency of our CRT monitor. At the minimum resolution – 640 × 480 pixels – the maximum refresh rate is 200 Hz.

At high refresh rates, with frame duration shorter than the phosphor decay time, the nominal image presented on any given frame merges with residual luminance from the previous frame [9]. We cannot use the green color, since P22's green pigment has a decay time of 6.0 ms, which is greater than 5.0 ms, the period of the 200 Hz vertical scan. Since the fly eye is not excited by the red band of the light spectrum, we cannot use the red color either. We therefore only use the blue channel of the DAC output to provide the visual data to the CRT monitor.
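The channel-selection reasoning above amounts to two constraints per color channel: the phosphor must decay within one frame period, and the channel must be visible to the fly. A small Python sketch (decay values taken from the text; variable names are ours) makes the check explicit:

```python
# Frame period at 200 Hz vertical scan, and P22 decay times to 10% peak (ms)
frame_period_ms = 1000.0 / 200.0            # 5.0 ms
decay_ms = {"R": 1.5, "G": 6.0, "B": 4.8}   # decay values quoted from Sherr [6]
fly_visible = {"G", "B"}                    # the fly's eye is not excited by red

usable = [ch for ch, d in decay_ms.items()
          if d < frame_period_ms and ch in fly_visible]
# Red fails the visibility constraint, green fails the decay constraint;
# only the blue channel satisfies both.
```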

Since the experiment is executed in real time, the stimulus data must be sent from the IBM-PC before the FIFO memory becomes empty; otherwise, the whole experiment is ruined.

Another requirement in visual neuroscience experiments is the horizontal positioning resolution. To achieve the maximum refresh rate on the CRT monitor we set it at minimum resolution, decreasing its horizontal positioning precision. To improve the horizontal positioning capabilities, a novel architecture is proposed.

3.3.1. Amplification of the horizontal capacity

So far, the only aspect of the fly's visual system taken into consideration has been the sample rate of 200 pictures per second. This section analyzes how the spatial resolution of the displayed image is perceived by the photoreceptors in the ommatidia of the fly.

First of all, it is necessary to determine the effective dot pitch of the CRT monitor used. It is the Studioworks 221U model of the LG brand, with a 21-inch screen diagonal, 250 MHz bandwidth, maximum resolution of 1600 × 1200 pixels and nominal dot pitch of 0.266 mm [5]. In order to reach the rate of 200 pictures per second, it is necessary to reduce the resolution of the monitor so that the supported video bandwidth is not exceeded. Therefore, the resolution of 640 × 480 pixels was used in this system.

With this resolution and a monitor with a 21-inch screen, we obtain an effective dot pitch of 0.666 mm, according to Eq. (4), extracted from Compton [18], in terms of the effective dot pitch (EDP), the horizontal resolution H in pixels, the vertical resolution V in pixels and the main diagonal D in inches:

EDP = \frac{D \times 25.4}{\sqrt{H^2 + V^2}} \quad (4)

Fig. 12. Structure of the ommatidia of the fly with approximately 1° alignment [13].

If we consider the distribution structure of the ommatidia of the fly, we see that there is a 1° alignment between adjacent ommatidia. With the fly placed at a distance of 10 cm (R = 100 mm) from the monitor screen (Fig. 12), and considering Eq. (5), x = 1.745 mm is obtained.

x = \frac{R \pi}{180} \quad (5)

In a first analysis, the system is satisfactory, since the minimum value of the image displacement (0.666 mm) is lower than the minimum value detected by the ommatidia at the distance of 10 cm (1.745 mm). However, taking into account what was discussed in Section 1.2, the compound eye of flies has open rhabdoms, which contributes to an effective increase of the spatial resolution perceived by the insect.
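Eqs. (4) and (5) can be checked numerically. The following Python sketch (function names are ours) reproduces the ≈ 0.666 mm and ≈ 1.745 mm values quoted in the text:

```python
import math

def effective_dot_pitch(h_px, v_px, diag_in):
    """Eq. (4): screen diagonal in mm divided by the diagonal in pixels."""
    return (diag_in * 25.4) / math.hypot(h_px, v_px)

def ommatidial_step(r_mm, angle_deg=1.0):
    """Eq. (5): arc length (mm) on the screen subtended by the 1-degree
    inter-ommatidial angle at viewing distance r_mm."""
    return r_mm * math.pi * angle_deg / 180.0

edp = effective_dot_pitch(640, 480, 21)   # ~0.667 mm at 640x480 on a 21-inch tube
x = ommatidial_step(100)                  # ~1.745 mm at R = 100 mm
# The minimum image displacement (one EDP) is smaller than the minimum
# step the ommatidia resolve at 10 cm, so the display is sufficient
# in this first analysis.
```

At the nominal 1600 × 1200 resolution the same formula gives ≈ 0.267 mm, matching the monitor's nominal dot pitch.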

As there are no conclusive studies on the effective spatial resolution perceived by flies, the challenge lies in creating a visual system whose horizontal displacement resolution is finer than the pixel resolution of the monitor itself.

3.3.2. The technique 'artificial inter-pixel steps'

To double the number of possible horizontal positions from 640 to 1280, a novel technique is presented, introducing artificial inter-pixel steps. The implementation consists in using two video frame buffers, each containing a distinct view of the desired image pattern, as shown in Fig. 13.

A first 'normal' picture representing the used image pattern is stored in the first frame buffer and will be used with odd stimulus values. A second picture, taken from the image pattern shifted to the left by half a horizontal inter-pixel distance, is stored in the second frame buffer and will be displayed with even stimulus displacement values. This is illustrated in Fig. 14.

Depending on the parity of the applied stimulus value, the odd or even frame buffer is visualized on the CRT monitor. The received stimulus value divided by two determines the horizontal offset, in pixels, of the displayed image frame.
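The buffer-selection rule described above can be sketched as follows. This is a minimal Python sketch of the logic the FPGA implements in hardware; the function name is ours, and the parity convention follows the text (even values select the half-shifted buffer, odd values the normal one):

```python
def select_frame(stimulus_value):
    """Map a horizontal stimulus value in 0..1279 to (use_shifted, pixel_offset).

    use_shifted is True for even stimulus values (the frame buffer holding
    the image shifted left by half an inter-pixel distance) and False for
    odd values (the normal image). The whole-pixel offset is the stimulus
    value divided by two.
    """
    use_shifted = (stimulus_value % 2 == 0)   # parity selects the frame buffer
    pixel_offset = stimulus_value // 2        # horizontal offset in whole pixels
    return use_shifted, pixel_offset

# Consecutive stimulus values alternate buffers while advancing the
# whole-pixel offset only every second step: effective half-pixel positioning.
steps = [select_frame(v) for v in range(4)]
# steps == [(True, 0), (False, 0), (True, 1), (False, 1)]
```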

The following code shows how the generation of sampled images for sets of two groups of neighboring pixels is accomplished using the Matlab software.

Code 1. Matlab script to sample two images from an original one.

I = imread('image.bmp');         % Original image (1280x960)
I = rgb2gray(I);                 % Convert to gray scale
A = I(1:2:end, 1:2:end);         % Extract pixels (first sample)
B = I(1:2:end, 2:2:end);         % Extract pixels (second sample)
imwrite(A, 'sample1.bmp', 'BMP') % Store image A (640x480)
imwrite(B, 'sample2.bmp', 'BMP') % Store image B (640x480)


Fig. 13. Architecture of ‘artificial inter-pixel steps’.

Fig. 14. Image generation with different samples using the technique 'artificial inter-pixel steps': (a) original image; (b) even and odd columns; (c) odd image; (d) even image.

Fig. 15 illustrates the resulting effect with an exact magnification of the same region in the two generated images. The visual impression of displacement is clearly noticeable, although the effect is easier to see with the image in motion.

4. Results

For large stimulus variations, the H1 neuron needs to encode a greater quantity of information, which increases its firing rate. A greater firing rate implies shorter inter-spike intervals. When we apply the inter-pixel effect we observe a decrease in the number of short inter-spike intervals, confirming through these experiments the smoothing of the stimuli. See the histograms in Fig. 16.

In other words, the occurrences of these short intervals are due to artifacts initially present in the traditional stimulus generation system, artifacts which do not exist in nature (continuous stimuli, δT → 0).

As we aim to generate increasingly natural stimuli, we consider the smoothing technique very advantageous.

Fig. 15. Idea of the 'inter-pixel' effect: a bigger image, sampled into two (or more) sub-sampled images, produces a small displacement in each one of them.

Fig. 16. Inter-spike-interval histograms: (a) standard; (b) artificial inter-pixel steps; (c) difference between standard and inter-pixel. When the inter-pixel effect is applied, a decrease in the number of short inter-spike intervals is observed, confirming the smoothing of the stimuli.

5. Conclusion

We analyzed experiments on the visual system of a fly, in which we recorded from the H1 neuron while its visual system was stimulated by our developed equipment.

When the inter-pixel effect is applied, we observe a decrease in short inter-spike intervals; these experiments confirm the smoothing of the stimuli.

A novel architecture was developed, and its implementation generates a visual effect capable of doubling the horizontal positioning capabilities of the visual stimulus generator, allowing more precise and more continuous movements.

Acknowledgments

We thank I. Zuccoloto and I.M. Esteves for their help with the experiments. We also thank A.P. Sieh for his help in translation. The laboratory was partially funded by FAPESP grant 0203565-4. We thank Altera Corporation for their University Program and Terasic for their support.

References

[1] K. Hausen, Monokulare und binokulare Bewegungsauswertung in der Lobula Plate der Fliege, Verh. Dtsch. Zool. Ges. (1981) 49–70.

[2] K. Hausen, Motion sensitive interneurons in the optomotor system of the fly,Biol. Cybernet. 46 (1) (1982) 67–79.

[3] K. Hausen, The lobula-complex of the fly: structure, function and significance in visual behaviour, in: M.A. Ali (Ed.), Photoreception and Vision in Invertebrates, Plenum Press, New York, 1984, pp. 523–559.

[4] R.R. de Ruyter van Steveninck, G.D. Lewen, S.P. Strong, R. Koberle, W. Bialek, Reproducibility and variability in neural spike trains, Science 275 (1997) 1805–1808.

[5] LG Corporation, Studioworks 221U repairing manual, Technical manual, 1999,30 pp.

[6] S. Sherr, Electronic Displays, 2nd edn., Wiley, New York, 1993.

[7] L.O.B. Almeida, Desenvolvimento de instrumentação eletrônica para estudos de codificações neurais no duto óptico em moscas, Master's thesis, Universidade de São Paulo, 2006, 86 pp.

[8] A. Warzecha, M. Egelhaaf, On the performance of biological movement detectors and ideal velocity sensors in the context of optomotor course stabilization, Visual Neurosci. 15 (1998) 113–122.

[9] M.A. Garcia-Perez, E. Peli, Luminance artifacts of cathode-ray tube displays for vision research, Spatial Vision 14 (2) (2001) 201–215.

[10] N. Horner, Multifractalidade no código neural da mosca, Master's thesis, Universidade de São Paulo, 2008, 70 pp.

[11] N. Fernandes, et al., Recording from two neurons: second order stimulus reconstruction from spike trains, Neural Comput. 22 (10) (2010) 2537–2557.

[12] T.M. Cover, J.A. Thomas, Elements of Information Theory, 2nd edn., Wiley–Interscience, Hoboken, NJ, 2006.

[13] K. Moses, Fly eyes get the whole picture, Nature 443 (12) (2006) 638–639.

[14] A.C. Zelhof, R.W. Hardy, A. Becker, C.S. Zuker, Transforming the architecture of compound eyes, Nature 443 (12) (2006) 696–699.

[15] B. Tatler, D.C. O'Carroll, S.B. Laughlin, Temperature and the temporal resolving power of fly photoreceptors, J. Comp. Physiol. A 186 (4) (2000) 399–407.

[16] J.H. van Hateren, R. Kern, G. Schwerdtfeger, M. Egelhaaf, Function and coding in the blowfly H1 neuron during naturalistic optic flow, J. Neurosci. 25 (17) (2005) 4343–4352.

[17] G.D. McCann, G.F. MacGinitie, Optomotor response studies of insect vision, Proc. Roy. Soc. B: Biol. Sci. 163 (992) (1965) 369–401.

[18] K. Compton, Image Performance in CRT Displays, SPIE Publications, 2003.

[19] P. Neri, Spatial integration of optic flow signals in fly motion-sensitive neurons, J. Neurophysiol. 95 (3) (2006) 1608–1619.

[20] J.W. Peirce, Generating stimuli for neuroscience using PsychoPy, Front. Neuroinform. 2 (2009) 1–8.

[21] A. Wertz, J. Haag, A. Borst, Local and global motion preferences in descending neurons of the fly, J. Comp. Physiol. A 195 (2009) 1107–1120.

[22] M.B. Reiser, M.H. Dickinson, A modular display system for insect behavioral neuroscience, J. Neurosci. Methods 167 (2008) 127–139.

[23] A.F. Husain, S. Hayes, M. Young, D. Shah, Visual evoked potentials with CRT and LCD monitors: When newer is not better, Neurology 72 (2009) 162–164.

Further reading

[24] G.E. Uhlenbeck, L.S. Ornstein, On the theory of the Brownian motion, Phys. Rev. 36 (5) (1930) 823–841.

Mario Gazziro received the BSc degree (2003) and MSc degree (2005) in computer science at Instituto de Ciências Matemáticas e de Computação/USP, Brazil. He received the PhD degree (2009) in applied physics at Instituto de Física e Química de São Carlos/USP, Brazil (with an internship at Instituto Superior Técnico de Lisboa, Portugal). He is also a specialist in microelectronics by Cadence (2008), USA, and by Toshiba Semiconductors (2010), Japan. His research includes advanced cybernetics, digital signal processing, neurobiophysics and magnetic resonance imaging.

Nelson Fernandes received the BSc degree (2004) and PhD degree (2010) in applied physics, both at Instituto de Física de São Carlos/USP, Brazil. His research includes applied physics and neurobiophysics.

Lirio Almeida received the BSc degree (2003) in computer science at Centro Universitário Central Paulista/UNICEP, Brazil, and the MSc degree (2006) in computational physics at Instituto de Física de São Carlos/USP, Brazil, where he is currently working towards the PhD degree. His research includes electronic instrumentation and neurobiophysics.

Paulo Matias received the BSc degree (2009) in computational physics at Instituto de Física de São Carlos/USP, Brazil, where he is working towards the MSc degree in applied physics. His research includes scientific instrumentation and neurobiophysics.

Jan Slaets is a full professor in the computational physics and applied instrumentation group. He received the BSc degree (1970) in Electronics Engineering at Hoger Rijksinstituut voor Technisch Onderwijs/Hasselt, Belgium, and the MSc degree (1976) and PhD degree (1979), both in applied physics, at Instituto de Física e Química de São Carlos/USP, Brazil. His research includes electronic instrumentation and neurobiophysics.

