
Mon. Not. R. Astron. Soc. 000, 1–9 (2014) Printed 8 December 2014 (MN LATEX style file v2.2)

The Geneva Reduction and Analysis Pipeline for High-contrast Imaging of planetary Companions

J. Hagelberg, D. Ségransan, S. Udry, and F. Wildi
Observatoire de Genève, Université de Genève, 51 Chemin des Maillettes, 1290 Versoix, Switzerland

Accepted 1988 December 15. Received 1988 December 14; in original form 1988 October 11

ABSTRACT
We present GRAPHIC, a new angular differential imaging (ADI) reduction pipeline in which all geometric image operations are based on Fourier transforms. To achieve this goal the entire pipeline is parallelised, making it possible to reduce large amounts of observation data without the need to bin them. The rotation and shift algorithms based on Fourier transforms are described, and performance comparisons with conventional interpolation algorithms are given. Tests using fake companions injected into real science frames demonstrate the significant gain obtained by using geometric operations based on Fourier transforms compared to conventional interpolation. This also translates into better point spread function and speckle subtraction with respect to conventional reduction pipelines. Flux conservation of the companions is also demonstrated. The pipeline is currently able to reduce science data produced by VLT/NACO, Gemini/NICI, VLT/SPHERE, and Subaru/SCExAO.

Key words: methods: data analysis – techniques: image processing – stars: planetary systems

1 INTRODUCTION

Eighteen years after the first discovery of an exoplanet around a sun-like star (Mayor & Queloz 1995) and the unambiguous detection of three brown dwarfs (Basri & Marcy 1995; Nakajima et al. 1995; Rebolo, Osorio & Martín 1995), more than 900 exoplanets and nearly 1300 brown dwarfs have been discovered, and about 3000 candidates from the Kepler space mission are waiting to be confirmed. These numbers are growing ever faster as the pace of new detections increases, thanks to newly built instruments purposely designed to search for sub-stellar objects, but also to the optimisation of data analysis techniques.

The vast majority of exoplanets are currently detected with the radial velocity or transit techniques. However, orbital periods longer than the time-span of the observations can hardly be detected by these two techniques, inducing a sharp decrease in detectability beyond ≈ 5 AU and leaving a large area of the mass-separation parameter space unprobed. Direct imaging, by contrast, probes the outer orbital regions not accessible to the two previous techniques, but the high contrast that needs to be reached at small separation makes it one of the most challenging exoplanet detection techniques. The main hurdle in detecting companions by high-contrast imaging is to remove the stellar point spread function (PSF) without diminishing the signal from the faint companion. This can be achieved through instrumental improvements or by improving the observing and data reduction techniques, with efforts focusing on these two fronts concurrently.

Since the first planets around stars were directly imaged (Marois et al. 2008; Lagrange et al. 2009), the rate of exoplanets detected by direct imaging has been steadily increasing, thanks to progress in overcoming the many technical challenges and to careful selection of the target samples. But the small total number of detections contrasts with the many direct imaging surveys which generated only a few detections, or none at all (e.g., Masciadri et al. 2005; Biller et al. 2007; Lafrenière et al. 2007a; Chauvin et al. 2010; Heinze et al. 2010; Vigan et al. 2012; Bowler et al. 2012; Nielsen et al. 2013; Wahhaj et al. 2013; Janson et al. 2013; Crepp et al. 2012).

The technical challenge of subtracting the host star point spread function is currently addressed by two complementary differential imaging methods, sharing the same core idea: generating a point spread function as similar as possible to the one to be subtracted, but without any potential companion signal in it. The difficulty is that the speckle structure of the point spread function evolves in time, with many speckles in the stellar halo having a shape and intensity similar to what would be expected from a companion. The first method, called Simultaneous Differential Imaging (SDI), is based on simultaneous observations in multiple bands and takes advantage of the chromaticity of the speckles; in other words, the speckle pattern scales with wavelength while a potential companion stays on the same spot (Racine et al. 1999; Lenzen et al. 2004). The other method, known as Angular Differential Imaging (ADI), is based on the rotation of the field (Liu 2004; Marois et al. 2006) and has proven to be currently the most efficient method for point spread function subtraction. These two methods do not require the use of a coronagraph, even though one can increase the detection limits in certain cases. Finally, the two methods can be combined by letting the field rotate while observing simultaneously in multiple bands. Nearly every survey developed


its own reduction pipeline, most often based on either the Locally Optimized Combination of Images (LOCI; Lafrenière et al. 2007b) or, more recently, Principal Component Analysis (PCA; Soummer, Pueyo & Larkin 2012; Amara & Quanz 2012).

Here we present the Geneva Reduction and Analysis Pipeline for High-contrast Imaging of planetary Companions (GRAPHIC), based on ADI for point spread function subtraction, which makes intensive use of Fourier analysis. It was specifically developed for the Geneva high-contrast imaging search of companions revealed by radial velocity trends in the HARPS and CORALIE surveys.

2 THE GENEVA HIGH-CONTRAST IMAGING SEARCH OF COMPANIONS REVEALED BY RADIAL VELOCITY TRENDS IN THE HARPS AND CORALIE SURVEYS

Our campaign aims at detecting, with direct imaging, companions revealed by the radial velocity (RV) trend they cause, based on data from our two CORALIE and HARPS RV planet-search surveys. The radial velocity data span more than a decade, with a precision reaching below 1 m/s in the case of HARPS, so that trends induced by sub-stellar companions on wide orbits can readily be detected. The selected targets are observed using VLT/NACO and the angular differential imaging technique, with deep observations of up to four hours on target in order to reach the faint companions which have had time to cool down. Our targets are all bright, which results in integration times below one second to reach saturation; in order not to resort to frame binning we use the cube mode offered by NACO, where frames are stacked into a data cube. Each cube, containing hundreds of frames, is then saved into a single FITS file, with the benefit of reducing readout overheads during observations.

2.1 Parallelisation

The four hours of observation with NACO used in our campaign lead to roughly 100 GB of data and 100'000 frames. A straightforward single-core reduction would take an extremely long time and would run out of memory before finishing, due to the many complex operations involved in the data reduction, mostly based on Fast Fourier Transforms (FFT). The most widely used and easiest solution to this large data handling issue would be to average-bin the data.

By suitably binning the frames, one can decrease the total amount of data to a quantity which fits the hardware limitations. The drawback is that valuable information gets lost in the binning process. First of all, the characteristics of atmospheric turbulence are not constant in time, and neither is the quality of the adaptive optics turbulence correction. The Strehl ratio of a binned frame is the mean of the frames in the bin, so that if half of the frames have poor adaptive optics (AO) correction the final binned frame will also have a below-average Strehl ratio, even though the other half of the frames had good Strehl. Furthermore, binning frames before recentring and correcting for the field rotation smears out the companion point spread function, which in turn decreases its signal in the final product.

The different algorithms of the pipeline fit very well into a data parallelism scheme, which focuses on distributing the data across different parallel computing nodes. A master node shares the data between the slave nodes, which in turn only have a fraction of the data to process. Once the slaves have finished, the data is gathered by the master and reassembled. Two different types of parallelisation are used, which differ in the way the data is shared between the nodes. If the operations are pixel based, the spatial parallelisation scheme is used. In this scheme the datacube is cut into pieces along the time axis, which means that each node receives one specific region of all the frames (see Figure 1a). The other scheme, temporal parallelisation, is used when the whole frame is needed for a specific operation; this is for example the case for shifts and rotations. The datacube is then separated into frame packages, and each node receives a different package containing full frames (see Figure 1b).

The pipeline is implemented in PYTHON, using C and FORTRAN libraries for the computation-intensive parts. The parallelisation is achieved using the Open Message Passing Interface (OPENMPI, Gabriel et al. 2004). Parallelisation can be distributed transparently among many different nodes, independently of their architecture. The interface between the Python code and OPENMPI is handled by the MPI4PY module (Dalcín et al. 2008). All the data reduction steps given in this paper are parallelised, using either spatial or temporal parallelisation depending on the specificity of the process.
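As an illustration of the temporal parallelisation scheme, the following sketch distributes frame packages with MPI4PY and gathers the results on the master node. It is a minimal example under stated assumptions, not the pipeline's actual code; the random datacube and the process_frames placeholder are illustrative.

```python
# Minimal sketch of temporal parallelisation with mpi4py (run with e.g.
# "mpirun -n 4 python this_script.py"). The master (rank 0) splits the
# datacube into frame packages, each node processes its package, and the
# results are gathered back in order.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def process_frames(frames):
    # Placeholder for any whole-frame operation (e.g. a Fourier shift).
    return frames - np.median(frames, axis=(1, 2), keepdims=True)

if rank == 0:
    cube = np.random.rand(1000, 256, 256)          # stand-in for a NACO datacube
    packages = np.array_split(cube, size, axis=0)  # one frame package per node
else:
    packages = None

local = comm.scatter(packages, root=0)   # distribute the packages
local = process_frames(local)            # each node works only on its frames
gathered = comm.gather(local, root=0)    # reassemble on the master

if rank == 0:
    cube_out = np.concatenate(gathered, axis=0)
```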

3 ADI DATA REDUCTION

The pipeline was initially developed to reduce non-coronagraphic saturated ADI observations in L′ band (3.8 µm) from the CONICA 1024×1024 InSb Aladdin 3 detector, which is part of the NACO instrument at the VLT (Lenzen et al. 2003). It was later extended to process coronagraphic data from NACO and Gemini/NICI (Chun et al. 2008) in any band. Work is ongoing to adapt the pipeline to VLT/SPHERE (Beuzit et al. 2010) and Subaru/SCExAO (Martinache & Guyon 2009).

The philosophy of this pipeline is to preserve the companion photometry by relying on techniques which have the least impact on the signal. Conservation of the noise structure is also a priority in order to achieve an efficient noise subtraction. This is achieved by applying the geometric transformations for centring and rotation in Fourier space. Another important aspect of the pipeline is the scalable parallelisation, which makes it possible to run it on anything from a high performance computing (HPC) cluster down to any modern laptop.

The reduction procedure can be partitioned into four sections:

(i) registration (Section 3.1)
(ii) cosmetics (Section 3.2)
(iii) point spread function subtraction (Section 3.3)
(iv) derotation (Section 3.4)

3.1 Registration

The very first step of the reduction process is the registration. Every data frame is analysed and a table is generated containing, for every frame, the time of exposure, the parallactic angle, the star centre, and other point spread function characteristics needed for the different reduction steps. Even though the data-cubes are not modified at this step, it is a key element of the reduction, as any error on the angle or star centre determination smears out the companion signal.

The registration process is also used to assess the quality of the adaptive optics correction for every single frame. This quality estimate is then used later to keep only the best frames, in a form of lucky imaging with adaptive optics.


Figure 1. The two data parallelism schemes used by the pipeline. In spatial parallelisation (a), the observation frames are cut along the time axis and each node processes a different region of every frame. In time parallelisation (b), the frames are distributed among the nodes and each node processes a different frame, or rather a different group of frames.

3.1.1 Parallactic angle

The parallactic angle varies in time as the star moves across the sky and its hour angle changes. Based on the "Local Sidereal Time" (LST) given in seconds in the FITS header, and the right ascension α of the star (given in degrees), the hour angle h is given by:

h = \frac{15 \cdot \mathrm{LST}}{3600} - \alpha

When observing with NACO in cube mode the frames are stored in data-cubes of 30 to more than 500 frames, with only one single header. This implies that the hour angle for each single frame cannot be derived directly from the header, since only one LST value is given. The observing time of every single frame therefore has to be interpolated in order to get the correct parallactic angles.
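The per-frame timing can be illustrated with a short sketch: assuming the frames within a cube are evenly spaced between the cube's header LST and the start of the following cube, the hour angle of each frame follows from the relation above. The variable names and the linear spacing are assumptions, not the pipeline's actual interpolation.

```python
# Sketch: per-frame hour angles from a single header LST per cube,
# assuming evenly spaced frames between consecutive cube start times.
import numpy as np

def frame_hour_angles(lst_start, lst_next, n_frames, ra_deg):
    """lst_start, lst_next: LST of this cube and of the next one [seconds];
    ra_deg: right ascension of the target [degrees]."""
    lst = np.linspace(lst_start, lst_next, n_frames, endpoint=False)
    return 15.0 * lst / 3600.0 - ra_deg   # h = 15 * LST / 3600 - alpha [degrees]

# h_deg = frame_hour_angles(lst_start=41000.0, lst_next=41060.0,
#                           n_frames=120, ra_deg=201.3)
```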

Frame loss can be as high as 39 per cent when using sub-second integration times on NACO, which in turn induces errors on the interpolated individual frame hour angles. These errors eventually yield wrong parallactic angles, which have two adverse effects on a possible companion. First, the companion signal gets smeared, thus lowering the signal-to-noise ratio; second, the risk of companion self-subtraction is higher, as non-optimal point spread functions are used for subtraction (see 3.3).

3.1.2 Star point spread function registration

The determination of the star centre position is a complex task, particularly in the case of a saturated point spread function and/or coronagraphic imaging, because the information at the core of the point spread function is often lost. The accuracy of the centring method is also limited by the fact that the exact point spread function centre is unknown. In order to make the point spread function registration work on any combination of saturated/unsaturated, full/coronagraphic, and AO/non-AO modes, the algorithm is split into a two-stage process. First a basic centroid search is done, and once the centroid is found a two-dimensional function is fitted to the point spread function, which takes into account possible coronagraphic or saturated cores by masking out pixels. This method also works if there is more than one star in the field of view, as long as no other star has exactly the same flux as the target.

The first step, the centroiding algorithm, searches for a patch of contiguous pixels above a given threshold value. If the patch size is within the range given by the user, the centre of mass of the patch is calculated. These values are then fed into the point spread function fitting algorithm as initial values.
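A possible form of this first-pass centroiding, sketched here with SCIPY's image-labelling utilities (the pipeline's actual implementation may differ), is:

```python
# Sketch of the centroiding step: label patches of contiguous pixels above a
# threshold, keep those whose size lies in the user-given range, and return
# their centres of mass as initial guesses for the PSF fit.
import numpy as np
from scipy import ndimage

def rough_centroids(frame, threshold, min_size, max_size):
    mask = frame > threshold
    labels, n = ndimage.label(mask)                      # contiguous patches
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # pixels per patch
    keep = [i + 1 for i, s in enumerate(sizes) if min_size <= s <= max_size]
    return ndimage.center_of_mass(frame, labels, keep)   # list of (y, x)

# centres = rough_centroids(frame, threshold=5000.0, min_size=10, max_size=500)
```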

The Moffat (1969) function was initially derived from the convolution of a Gaussian function with an aperture diffraction function and a scattering function of the photographic emulsion, to describe the instrumental response when observing through atmospheric turbulence with photographic plates. This function later turned out to be well adapted to CCD imaging under very good seeing conditions as well (Bendinelli, Zavatti & Parmeggiani 1988), and even to diffraction-limited observations (Trujillo et al. 2001). The pipeline uses the following form of the Moffat function to fit the point spread function centred on (x_0, y_0):

I(x, y, \alpha, \beta) = I_0 \cdot \frac{\beta - 1}{\pi \alpha^2} \cdot \left( 1 + \frac{(x_0 - x)^2 + (y_0 - y)^2}{\alpha^2} \right)^{-\beta} + B_g

The full width at half maximum (FWHM) of the point spread function is then given by the two fitting variables α and β:

\mathrm{FWHM}(\alpha, \beta) = 2\alpha \sqrt{2^{1/\beta} - 1}

The Moffat fit to the point spread function is performed using the modified Levenberg-Marquardt algorithm of lmdif, which is part of the Fortran MINPACK library (Moré, Garbow & Hillstrom 1980).
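For illustration, the fit can be sketched with SCIPY's leastsq, which wraps the same MINPACK lmdif routine; the parameter ordering, the masking of saturated pixels, and the handling of the initial guess are assumptions rather than the pipeline's exact implementation.

```python
# Sketch of the Moffat fit: masked least-squares fit of I(x, y, alpha, beta)
# and derivation of the FWHM from alpha and beta.
import numpy as np
from scipy.optimize import leastsq

def moffat(p, x, y):
    x0, y0, i0, alpha, beta, bg = p
    r2 = (x0 - x) ** 2 + (y0 - y) ** 2
    return i0 * (beta - 1) / (np.pi * alpha ** 2) * (1 + r2 / alpha ** 2) ** (-beta) + bg

def fit_moffat(frame, guess, mask=None):
    y, x = np.indices(frame.shape)
    if mask is None:
        mask = np.isfinite(frame)            # e.g. exclude saturated/NaN pixels
    resid = lambda p: moffat(p, x[mask], y[mask]) - frame[mask]
    params, _ = leastsq(resid, guess)        # Levenberg-Marquardt (MINPACK lmdif)
    x0, y0, i0, alpha, beta, bg = params
    fwhm = 2 * alpha * np.sqrt(2 ** (1 / beta) - 1)
    return params, fwhm
```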

3.1.3 Frame selection

In order to increase the signal-to-noise ratio of the companion, we keep only the frames with good adaptive optics correction. A first rough selection is based on the centroiding algorithm, by simply discarding frames where no centroid has been found.


Figure 2. A single non-zero pixel surrounded by a zero background (left) is shifted in Fourier space by ∆x = 0.3 and ∆y = 1.5 pixels (right), resulting in the typical pattern of the Gibbs phenomenon.

The second step is based on the point spread function geometry. When the detector integration time is more than a few seconds, bad tip-tilt correction can create a sharp, elongated point spread function, especially in coronagraphic mode. Simple selection on maximum signal strength hardly detects this kind of frame, even though the point spread function shape is asymmetric. Using the values of the point spread function fit has proven to be a robust way to verify the quality of the adaptive optics correction.

Furthermore, by using individual frame point spread function fitting instead of the widely used cross-correlation method, we can ascertain that only good and optimally centred frames are used. The independent centring also ensures that poor centring on one frame will not affect the centring of the other frames.

Once the frame selection has been done, an optional quick-look algorithm can be run which bins the recentred good frames to decrease the total number of frames to process in the following steps.

3.2 Cosmetics

Cosmetics play a key role when using Fourier transforms, because of the Gibbs (1898) phenomenon. This phenomenon causes a chequerboard pattern to appear, with pixels alternately overshooting towards positive and negative values around an image discontinuity such as bad pixels, a saturated point spread function, or sharp image borders. The reason for these oscillations is that a discontinuity would need an infinite Fourier series to be correctly characterised; since we are dealing with finite discrete Fourier transforms, the discontinuity is not well approximated.

Frame preparation is thus a key step for the use of Fourier-based operations, where no deviant pixel should be left uncorrected. The two key steps of the cosmetics preparation are the sky generation and subtraction, and the bad pixel correction.

3.2.1 Master sky

The sky background varies very rapidly when observing in the near infrared, especially in L′ band. Observations are done in dithering mode in order to use the science frames directly to determine the sky background. To decrease the effect of the stellar point spread function on the sky determination, a mask is applied on the region containing the star. A median of N time-contiguous cubes with N different masked dithering positions is then calculated. As dithering positions change every one or two seconds, and we are using four to five dithering positions, a sky frame is produced for every five to ten seconds of observation. Using this method we obtain a median sky as close as possible to the sky background of the science frames. Once the master skies have been generated, they are subtracted from each frame, using each time the master sky frame closest in time.
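A schematic version of this master-sky construction, assuming a simple circular mask around the star and a NaN-aware median combination (the exact masking used by the pipeline may differ), is:

```python
# Sketch of the master-sky step: mask the star in N time-contiguous frames
# taken at different dithering positions, then median-combine them while
# ignoring the masked pixels.
import numpy as np

def master_sky(frames, centres, mask_radius):
    """frames: (N, ny, nx) stack; centres: list of (y, x) star positions."""
    y, x = np.indices(frames[0].shape)
    stack = frames.astype(float)
    for frame, (yc, xc) in zip(stack, centres):
        frame[(y - yc) ** 2 + (x - xc) ** 2 < mask_radius ** 2] = np.nan
    return np.nanmedian(stack, axis=0)   # masked pixels are ignored

# sky = master_sky(cube, centres, mask_radius=60)
# science = cube - sky                   # subtract the closest-in-time master sky
```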

3.2.2 Bad pixel correction

First a bad pixel map is generated using either a master dark or a master sky. A simple sigma-clipping routine flags the deviant pixels, where a bad pixel is defined as a pixel whose value varies by more than C·σ from the median value of the frame. By changing the value of the C coefficient the selection criterion can be adapted to ensure that all bad pixels are cleaned.

Since each bad pixel will be spread over many pixels as a consequence of the sub-pixel recentring and the derotation process, it is not possible to simply keep a bad pixel map to mask out the pixels. Furthermore, the discontinuities caused by bad pixels induce Gibbs overshooting when performing Fourier transforms (Fig. 2). To clean the bad pixels, they are all first set to NaN. The bad pixel values are then replaced by the median of their neighbours, ignoring any NaN pixels. This ensures that bad pixel clumps are correctly flattened.
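A minimal sketch of this two-step cleaning, assuming a fixed clipping coefficient and a 3×3 neighbourhood (both choices are illustrative), is:

```python
# Sketch of the bad-pixel correction: sigma-clip a master dark/sky to flag
# deviant pixels, set them to NaN, then replace each with the NaN-ignoring
# median of its neighbours.
import numpy as np

def bad_pixel_map(master, c=5.0):
    return np.abs(master - np.median(master)) > c * np.std(master)

def clean_bad_pixels(frame, bpm, box=1):
    out = frame.astype(float)
    out[bpm] = np.nan
    ny, nx = out.shape
    for yb, xb in zip(*np.where(bpm)):
        y0, y1 = max(yb - box, 0), min(yb + box + 1, ny)
        x0, x1 = max(xb - box, 0), min(xb + box + 1, nx)
        out[yb, xb] = np.nanmedian(out[y0:y1, x0:x1])   # ignores NaN neighbours
    return out
```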

3.3 Point spread function subtraction

The point spread function subtraction is the core of the reduction process. It is based on the well established ADI algorithm, which aims at subtracting the stellar point spread function and speckles by using the field rotation, in order to increase the sensitivity to surrounding point sources (Marois et al. 2006). We generate a specific point spread function for every single frame, in a similar way to LOCI (Lafrenière et al. 2007b) but without using any combination coefficients, in an effort to preserve the flux.

Consider an observing sequence composed of C datacubes, each containing N frames; the total number of frames is then T = C · N. The frame f_i = (c_j; n_k) will have a parallactic angle α_i given by

\alpha_i = \arctan\left( \frac{\cos\phi \sin h_i}{\cos\delta \sin\phi - \sin\delta \cos\phi \cos h_i} \right)

with φ the observatory latitude, δ the target declination, and h_i the hour angle at observing time t_i.
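In code this is a one-line transcription of the formula above; the sketch below uses arctan2 to keep the correct quadrant and assumes all angles are supplied in radians.

```python
# Parallactic angle alpha_i for per-frame hour angles h_i, observatory
# latitude phi, and target declination delta (all in radians).
import numpy as np

def parallactic_angle(h, phi, delta):
    return np.arctan2(np.cos(phi) * np.sin(h),
                      np.cos(delta) * np.sin(phi)
                      - np.sin(delta) * np.cos(phi) * np.cos(h))

# e.g. alpha = parallactic_angle(np.deg2rad(h_deg),
#                                np.deg2rad(-24.6),    # approximate VLT latitude
#                                np.deg2rad(dec_deg))
```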

A point spread function is generated for frame f_i using the frames f_k fulfilling the condition on maximum time separation t_max,

|t_k - t_i| < t_{max}

and on minimum field rotation α_min, such that

|\alpha_k - \alpha_i| > \alpha_{min} = 2 \cdot \sin\left( \frac{n_{FWHM} \cdot \mathrm{FWHM}}{2 \cdot r_{min}} \right)

where r_min is the minimum radius in pixels to consider, FWHM is the point spread function full width at half maximum from the fitting (Section 3.1), and n_FWHM is the minimum number of point spread function displacements required to prevent companion self-subtraction.
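The selection of reference frames for a given frame i can be sketched as follows; the array names, the example values, and the plain median used to combine the selected frames are assumptions.

```python
# Sketch: indices of the frames usable to build the reference PSF of frame i,
# i.e. close enough in time but rotated by more than alpha_min (both criteria
# as defined above). t and alpha are per-frame arrays in consistent units.
import numpy as np

def reference_frames(i, t, alpha, t_max, r_min, fwhm, n_fwhm=1.0):
    alpha_min = 2.0 * np.sin(n_fwhm * fwhm / (2.0 * r_min))
    keep = (np.abs(t - t[i]) < t_max) & (np.abs(alpha - alpha[i]) > alpha_min)
    keep[i] = False                          # never use the frame itself
    return np.where(keep)[0]

# One possible way to combine the selected frames into a reference PSF:
# psf_i = np.median(cube[reference_frames(i, t, alpha, t_max=600.0,
#                                         r_min=10.0, fwhm=4.5)], axis=0)
```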


3.3.1 Fourier shift

All the geometric operations on the images are based on one- and two-dimensional Fourier transforms. As a short reminder, we give the definition of the two-dimensional Fourier transform of a function f(x, y):

\hat{f}(\nu_x, \nu_y) = \iint_{-\infty}^{\infty} f(x, y)\, e^{-i 2\pi (\nu_x x + \nu_y y)}\, dx\, dy

and its inverse Fourier transform:

f(x, y) = \hat{f}(\nu_x, \nu_y)^{\vee} = \iint_{-\infty}^{\infty} \hat{f}(\nu_x, \nu_y)\, e^{i 2\pi (\nu_x x + \nu_y y)}\, d\nu_x\, d\nu_y

The one-dimensional case is a trivial simplification of the two-dimensional case; the one-dimensional Fourier transform along the x axis will be denoted \hat{f}(\nu_x, y), and similarly the transform along the y axis will be denoted \hat{f}(x, \nu_y).

To perform a shift of the image we use the translation property of Fourier transforms. If \hat{f}(\nu) is the Fourier transform of the one-dimensional function f(x), then the Fourier transform of f(x + a) is \exp(-i 2\pi \nu a)\, \hat{f}(\nu). A spatial shift is thus equivalent to multiplying the Fourier transform \hat{f}(\nu) by a phasor e^{-i 2\pi \nu a}. A shift along the x axis in the two-dimensional case can thus be expressed as

f_x(x + a, y) = \int_{-\infty}^{\infty} e^{-i 2\pi \nu_x a}\, \hat{f}(\nu_x, y)\, e^{i 2\pi \nu_x x}\, d\nu_x

and the more general case of a shift a in x and b in y is then obtained by multiplication by a phasor e^{-i 2\pi (\nu_x a + \nu_y b)}:

f(x + a, y + b) = \iint_{-\infty}^{\infty} e^{-i 2\pi (\nu_x a + \nu_y b)}\, \hat{f}(\nu_x, \nu_y)\, e^{i 2\pi (\nu_x x + \nu_y y)}\, d\nu_x\, d\nu_y

The frames we are recentring are defined on a finite area, whereas the shift property holds for infinite domains. We can nonetheless apply this operation to the frames provided we introduce zero-padding, which also prevents the appearance of Gibbs oscillations at the borders; this implies that the operations have to be applied on frames at least twice the original size.
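A compact numpy version of such a Fourier shift, with the zero-padding mentioned above, is sketched below; the padding strategy and the sign convention are illustrative and may differ from the pipeline's implementation.

```python
# Sketch of an FFT-based sub-pixel shift: zero-pad the frame to twice its
# size, multiply its Fourier transform by the phasor
# exp(-2*pi*i*(nu_x*dx + nu_y*dy)), transform back, and crop.
import numpy as np

def fft_shift(frame, dx, dy):
    ny, nx = frame.shape
    padded = np.zeros((2 * ny, 2 * nx))
    padded[:ny, :nx] = frame
    nu_x = np.fft.fftfreq(2 * nx)
    nu_y = np.fft.fftfreq(2 * ny)
    phasor = np.exp(-2j * np.pi * (nu_x[None, :] * dx + nu_y[:, None] * dy))
    shifted = np.fft.ifft2(np.fft.fft2(padded) * phasor).real
    return shifted[:ny, :nx]

# recentred = fft_shift(frame, dx=-3.2, dy=1.7)   # shift by a sub-pixel amount
```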

The frames are finally median-combined, which has been shown to be less sensitive to the typical speckle noise at small separation (Brandt et al. 2013).

3.4 Derotation

The final step of the ADI processing is to correct each frame for the field rotation and merge all the frames. In order to preserve the companion signal, and also to keep the noise structure unchanged, we perform the rotation using Fourier transforms.

The rotation algorithm we use exploits, through Fourier transforms, the property that a rotation matrix can be decomposed into three shear matrices (Unser, Thévenaz & Yaroslavsky 1995; Eddy, Fitzgerald & Noll 1996; Larkin, Oldfield & Klemm 1997; Yoon, Weinberg & Katz 2011; Welling, Eddy & Young 2006). This method is widely used in satellite imagery of the Earth and in medical imaging. The galaxy image decomposition tool GALPHAT is an example of its previous use in astronomical imaging (Yoon, Weinberg & Katz 2011). We only give a brief description of the method, based on the detailed description of the algorithm given by Larkin, Oldfield & Klemm (1997).

Any rotation matrix Rθ of a given angle θ can be expressed asthe product of three shear matrices:

\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & -\tan\frac{\theta}{2} \\ 0 & 1 \end{pmatrix}}_{S_x} \underbrace{\begin{pmatrix} 1 & 0 \\ \sin\theta & 1 \end{pmatrix}}_{S_y} \underbrace{\begin{pmatrix} 1 & -\tan\frac{\theta}{2} \\ 0 & 1 \end{pmatrix}}_{S_x}

where S_x and S_y are shear matrices along the x axis and y axis respectively.

To shear an image described by the function f(x, y) by a factor a = \tan\frac{\theta}{2} in the x direction, we apply the transformation s_x(x, y) = f(x + ay, y), which can readily be adapted to Fourier transforms using their shift property. The shear matrix S_x applied to the image f(x, y) can then be expressed in terms of Fourier transforms as the function s_x:

s_x(x, y) = \int_{-\infty}^{\infty} e^{-i 2\pi \nu_x a y}\, \hat{f}(\nu_x, y)\, e^{i 2\pi \nu_x x}\, d\nu_x

Noting b = -\sin\theta, the product S_y S_x becomes

s_{yx}(x, y) = \int_{-\infty}^{\infty} e^{-i 2\pi \nu_y b x}\, \hat{s}_x(x, \nu_y)\, e^{i 2\pi \nu_y y}\, d\nu_y

and the rotation S_x S_y S_x:

s_{xyx}(x, y) = \int_{-\infty}^{\infty} e^{-i 2\pi \nu_x a y}\, \hat{s}_{yx}(\nu_x, y)\, e^{i 2\pi \nu_x x}\, d\nu_x

As in the shift case, the frames need to be padded. This implies that in order to rotate one frame, six FFTs have to be applied on a double-sized frame, resulting in a significant increase in computation time compared to standard interpolation methods. This rotation technique can only be applied to such a large number of frames thanks to parallelisation.
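A bare-bones version of the three-shear rotation, with each shear applied as a row- or column-dependent Fourier shift, is sketched below. It assumes an already padded and centred frame; signs may need to be flipped depending on the desired rotation direction, so it should be read as an illustration of the decomposition rather than the pipeline's exact implementation.

```python
# Sketch of the S_x S_y S_x rotation: each shear is a 1-D Fourier shift whose
# amount is proportional to the perpendicular coordinate measured from the
# frame centre, with a = tan(theta/2) and b = -sin(theta).
import numpy as np

def shear_x(img, a):
    ny, nx = img.shape
    nu_x = np.fft.fftfreq(nx)
    y = np.arange(ny) - ny / 2.0
    phasor = np.exp(-2j * np.pi * nu_x[None, :] * a * y[:, None])
    return np.fft.ifft(np.fft.fft(img, axis=1) * phasor, axis=1).real

def shear_y(img, b):
    ny, nx = img.shape
    nu_y = np.fft.fftfreq(ny)
    x = np.arange(nx) - nx / 2.0
    phasor = np.exp(-2j * np.pi * nu_y[:, None] * b * x[None, :])
    return np.fft.ifft(np.fft.fft(img, axis=0) * phasor, axis=0).real

def fft_rotate(img, theta):
    a, b = np.tan(theta / 2.0), -np.sin(theta)
    return shear_x(shear_y(shear_x(img, a), b), a)

# derotated = fft_rotate(padded_frame, np.deg2rad(rotation_angle_deg))
```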

4 PIPELINE PERFORMANCE

Contrast curves obtained for HD 142527 observations with NICI, using GRAPHIC and PCA, are displayed in Figure 6. At small separation we reach a higher contrast than PCA, while the low-pass filtering from interpolation results in better contrasts at larger separation, where the noise is mainly Gaussian.

To test the pipeline performance we developed an algorithm to inject fake companions. These companions are generated by first including their signal in a plane wavefront, which is then convolved with a pupil based on the main optical characteristics of the telescope used. With this technique we have precise control over the companion flux, and furthermore the point spread function scales precisely with wavelength. Poisson noise is finally added to these nearly perfect point spread functions before adding them to the real science frames.
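A schematic version of this injection, using an idealised circular pupil and a hypothetical inject_companion helper (neither is taken from the pipeline, and the wavelength scaling is omitted), is:

```python
# Sketch of the fake-companion generation: an ideal PSF is the squared modulus
# of the Fourier transform of the telescope pupil; it is scaled to the requested
# flux, given Poisson noise, and added at the companion position (assumed to be
# well inside the frame).
import numpy as np

def pupil_psf(npix, pupil_radius):
    y, x = np.indices((npix, npix)) - npix / 2.0
    pupil = (np.hypot(x, y) < pupil_radius).astype(float)    # circular pupil
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()

def inject_companion(frame, psf, flux, y0, x0):
    ny, nx = psf.shape
    companion = np.random.poisson(psf * flux).astype(float)  # photon noise
    out = frame.astype(float)
    out[y0 - ny // 2:y0 + ny // 2, x0 - nx // 2:x0 + nx // 2] += companion
    return out

# fake = inject_companion(frame, pupil_psf(64, 16), flux=2e4, y0=300, x0=420)
```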

4.1 Performance of geometric transformations

To characterise the performance of the shift and rotation algorithms, we generated a test image with fake companions. This image is composed of an L′ short-exposure image with a saturated point spread function, which has been sky-subtracted, cleaned of bad pixels, and median-filtered. To this image we added fake companions with magnitude differences to the star ranging stepwise from 1 to 8, and with separations from 0.5 to 6.5 arcseconds in one-arcsecond steps; Poisson noise was included in the process.

To test the performance of the shift algorithm, we shifted the original image in a (∆x, ∆y) direction and then shifted it back in the opposite direction (−∆x, −∆y). This double-shifted image can then be compared with the original, non-shifted image.


Figure 3. Root mean square of the noise caused by the shift algorithms, as a function of separation from the centre (standard deviation versus separation in arcseconds). As a reference, the root mean square of the original image is given by the black dotted line. To test the algorithms we shift an image first by ∆x = 3.5, ∆y = 2.7 pixels and then back. The difference between the original image and the one shifted back to the initial position is then plotted for the interpolation and Fourier shift, blue dashed and red solid lines respectively.

By subtracting the original image from the double-shifted one, the effects induced by the different shift algorithms become visible. The dotted black line in Figure 3 shows the normalised root mean square of the original test frame, calculated in concentric annuli. The injected fake companions cause the peaks at 0.5, 1.5, and 2.5 arcseconds. The two additional lines show the root mean square of the original test frame after subtraction of the double-shifted frames, where the interpolation and Fourier shift are represented by the blue dashed and red solid lines respectively. These two lines would be flat, with no residual root mean square, if the shift algorithms were perfect. For the spline interpolation this is clearly not the case: the curve reflects two phenomena, in and out of the peaks, which are both caused by the fact that interpolation in image space acts as an uncontrolled low-pass filter. When the image is interpolated, the structure of the noise is modified, so that the high-frequency noise is not removed by the subtraction, as it is missing from the double-shifted frame. This effect leaves an overall noise continuum. The effect on the peaks comes from the fact that the fake companions are flattened by the interpolation, so that part of the fake companion signal is not removed.

For the rotation algorithm we applied a similar test. We first rotated a test image by an angle α = 11.3 degrees, followed by a rotation by −α. Ideally such a double rotation should return the original image. Changes in the image structure induced by the rotation can be found by subtracting the original image from the double-rotated one. The root mean square as a function of separation from the central point spread function of the original image is plotted in Figure 5 with a black dotted line, and the subtracted rotations using interpolation and Fourier shears are represented on the same figure by the blue dashed and red solid lines respectively.

The images from the rotation test are shown in Figure 4, where 4a is the original image, and 4b and 4c are the images resulting from a rotation followed by an inverse rotation using interpolation and the 3-shear algorithm respectively. The most striking difference between the two rotated images is the residual noise structure: the 3-shear algorithm preserves the noise structure, while the interpolation algorithm acts as an uncontrolled low-pass filter.

Figure 5. Root mean square of the noise caused by the rotation algorithms, as a function of separation from the centre (standard deviation versus separation in arcseconds). As a reference, the root mean square of the original image is given by the black dotted line. To test the algorithms we rotate the image first by an angle α = 11.3 degrees and then back. The difference between the original image and the one rotated back to the initial position is then plotted for the interpolation and Fourier rotation, blue dashed and red solid lines respectively.

Figure 6. Angular differential imaging detection limits (flux ratio and magnitude difference versus separation in arcseconds) for Gemini/NICI data in the CH4-K5%S band, using a 0.22″ semi-transparent coronagraphic mask with ≈ 40 degrees of field rotation. The three lines are detection limits at 5σ. The black dashed line is the detection limit achieved by GRAPHIC before correction for self-subtraction, while the blue solid line is the detection limit of GRAPHIC corrected to take into account the flux loss. The blue dashed line is the detection limit using a principal component analysis pipeline, which was used in Casassus et al. (2013).

The different effects of the two rotation algorithms are even more evident when the original image is subtracted from the rotated images, as can be seen in Figure 4d for the interpolation and Figure 4e for the 3-shear algorithm. In the case of interpolation, the companions become dark blue points surrounded by signal, which means that the companion point spread function is spread out, with signal being transferred from the centre of the point spread function into the wings. The 3-shear residuals show no structure, indicating that the star and companion signals are not altered by the rotation. Some numerical noise can be noted, but due to its very high frequency it does not alter the image much.


Figure 4. The original, non-rotated test image I0(x, y) is shown in panel (a). Panels (b) and (c) show the result of two consecutive 11.3 and −11.3 degree rotations of the original image, I2 = Rot(Rot(I0, α), −α), using a third-order spline interpolation (b) and the 3-shear algorithm (c). The residuals I2 − I0 are computed by subtracting the original image from the interpolation-rotated (d) and three-shear-rotated (e) images. All panels share the same intensity scale in ADU, except for panel (e) where the cuts are reduced by a factor of 20 to reveal some of the induced noise.

4.2 Photometric accuracy

Test data-sets are created by injecting fake companions into all the raw frames, with an angle following the field rotation. By injecting the companions into the raw frames we are able to take into account nearly all the steps of the reduction, namely cosmetics, recentring, point spread function subtraction, derotation, and the final collapse. Centre and parallactic angle determination are the only two operations we cannot test with this method, as the companion injection already relies on these two parameters.

The flux loss induced by the pipeline has to be well characterised in order to obtain accurate photometry of the companions. Using our fake planet injection algorithm we measured the flux loss as a function of initial flux and separation. We determined an uncertainty on the companion flux of four per cent, independent of the separation, as long as the flux is above the detection limit. The goal of creating an efficient pipeline with high photometric accuracy has thus been reached.

4.3 Performance and observation length

In order to test our observing strategy of long observations, we defined a specific test case. Taking one of our nearly three-hour observation data sets, we added fake companions to the raw images. The observation was then reduced using three different data set sub-samples. For the first one we used the whole data-set, trimmed in order to have as much observing time before as after meridian transit. This results in a two-hour data-set, with one hour before and one hour after meridian transit. We did the same for a one-hour sub-sample and a 30-minute sub-sample from the same initial data set, each time centred on the meridian transit.

The resulting detection limits obtained by reducing the sub-samples with exactly the same reduction parameters are plotted in Figure 7. The detection limits clearly show that the reduction was tuned for the inner region within 0.5 arcseconds. At 0.3 arcseconds separation, the achieved magnitude differences are 7, 7.6, and 8 for 30 minutes, one hour, and two hours respectively. At 1.5 arcseconds separation these limits become 9, 9.3, and 9.7. With two-hour observations we thus gain one magnitude in sensitivity with respect to a short 30-minute sequence, and half a magnitude with respect to a conventional one-hour observation.


Figure 7. Angular differential imaging detection limits (flux ratio and magnitude difference versus separation in arcseconds) for the same observation centred on meridian transit, using 30-minute, one-hour, and two-hour sub-samples, plotted as blue dashed, black dotted, and red solid lines respectively.

A one-magnitude difference in L′ band is what separates a 13 MJ companion from a 20 MJ companion, based on Allard (2014) at 1 Gyr.

5 CONCLUSIONS

We presented a new pipeline for angular differential imaging which, to our knowledge, is the first in which all image processing relies only on Fourier transforms. The gain in performance delivered by the use of Fourier transforms for shifts and rotations has been demonstrated through various comparison tests. Contrast curves obtained on the same data with different pipelines were also presented for comparison. Conservation of the companion photometric signal has also been demonstrated using fake companions injected into the raw data.

GRAPHIC is able to process up to 100'000 frames with no binning thanks to massive parallelism. This parallelisation of the pipeline also makes it possible to implement further computationally demanding algorithms. Further development is planned to use graphics processing units (GPUs) for a gain in processing time, and to add wavelet filtering. GRAPHIC has also very recently been adapted to process VLT/SPHERE and Subaru/SCExAO data.

GRAPHIC has also been applied to an extended source, with results published in Casassus et al. (2013) under the development name PADIP.

ACKNOWLEDGEMENTS

To develop the pipeline the authors also made use of SCIPY (Jones et al. 2001), NUMPY (Oliphant 2007), ASTROPY (Astropy Collaboration et al. 2013), BOTTLENECK, IPYTHON (Pérez & Granger 2007), and MATPLOTLIB (Hunter 2007).

REFERENCES

Allard F., 2014, in , pp. 271–272
Amara A., Quanz S. P., 2012, MNRAS, 427, 948
Astropy Collaboration et al., 2013, Astronomy and Astrophysics, 558, 33
Basri G., Marcy G. W., 1995, The Astronomical Journal, 109, 762
Bendinelli O., Zavatti F., Parmeggiani G., 1988, Journal of Astrophysics and Astronomy, 9, 17
Beuzit J.-L. et al., 2010, in , p. 231
Biller B. A. et al., 2007, The Astrophysical Journal Supplement Series, 173, 143
Bowler B. P., Liu M. C., Shkolnik E. L., Dupuy T. J., Cieza L. A., Kraus A. L., Tamura M., 2012, The Astrophysical Journal, 753, 142
Brandt T. D. et al., 2013, The Astrophysical Journal, 764, 183
Casassus S. et al., 2013, Nature, 493, 191
Chauvin G. et al., 2010, Astronomy and Astrophysics, 509, 52
Chun M. et al., 2008, arXiv:0809.3017 [astro-ph], 70151V
Crepp J. R. et al., 2012, The Astrophysical Journal, 761, 39
Dalcín L., Paz R., Storti M., D'Elía J., 2008, Journal of Parallel and Distributed Computing, 68, 655
Eddy W. F., Fitzgerald M., Noll D. C., 1996, Magnetic Resonance in Medicine, 36, 923
Gabriel E. et al., 2004, in Recent Advances in Parallel Virtual Machine and Message Passing Interface, Springer, pp. 97–104
Gibbs J. W., 1898, Nature, 59, 200
Heinze A. N., Hinz P. M., Kenworthy M., Meyer M., Sivanandam S., Miller D., 2010, The Astrophysical Journal, 714, 1570
Hunter J. D., 2007, Computing in Science & Engineering, 9, 90
Janson M. et al., 2013, The Astrophysical Journal, 773, 73
Jones E., Oliphant T., Peterson P., et al., 2001, SciPy: Open source scientific tools for Python
Lafrenière D. et al., 2007a, The Astrophysical Journal, 670, 1367
Lafrenière D., Marois C., Doyon R., Nadeau D., Artigau É., 2007b, The Astrophysical Journal, 660, 770
Lagrange A.-M. et al., 2009, Astronomy and Astrophysics, 493, L21
Larkin K. G., Oldfield M. A., Klemm H., 1997, Optics Communications, 139, 99
Lenzen R., Close L., Brandner W., Biller B., Hartung M., 2004, in , pp. 970–977
Lenzen R. et al., 2003, in , pp. 944–952
Liu M. C., 2004, Science, 305, 1442
Marois C., Lafrenière D., Doyon R., Macintosh B., Nadeau D., 2006, The Astrophysical Journal, 641, 556
Marois C., Macintosh B., Barman T., Zuckerman B., Song I., Patience J., Lafrenière D., Doyon R., 2008, Science, 322, 1348
Martinache F., Guyon O., 2009, in , pp. 74400O–74400O–9
Masciadri E., Mundt R., Henning T., Alvarez C., Barrado y Navascués D., 2005, The Astrophysical Journal, 625, 1004
Mayor M., Queloz D., 1995, Nature, 378, 355
Moffat A. F. J., 1969, Astronomy and Astrophysics, 3, 455
Moré J. J., Garbow B. S., Hillstrom K. E., 1980, User guide for MINPACK-1
Nakajima T., Oppenheimer B. R., Kulkarni S. R., Golimowski D. A., Matthews K., Durrance S. T., 1995, Nature, 378, 463
Nielsen E. L. et al., 2013, The Astrophysical Journal, 776, 4
Oliphant T. E., 2007, Computing in Science & Engineering, 9, 10
Pérez F., Granger B. E., 2007, Computing in Science & Engineering, 9, 21
Racine R., Walker G., Nadeau D., Doyon R., Marois C., 1999, Publications of the Astronomical Society of the Pacific, 111, 587
Rebolo R., Osorio M. R. Z., Martín E. L., 1995, Nature, 377, 129
Soummer R., Pueyo L., Larkin J., 2012, The Astrophysical Journal Letters, 755, L28
Trujillo I., Aguerri J. A. L., Cepa J., Gutiérrez C. M., 2001, Monthly Notices of the Royal Astronomical Society, 328, 977
Unser M., Thévenaz P., Yaroslavsky L., 1995, IEEE Transactions on Image Processing, 4, 1371
Vigan A. et al., 2012, Astronomy and Astrophysics, 544, 9
Wahhaj Z. et al., 2013, The Astrophysical Journal, 773, 179
Welling J. S., Eddy W. F., Young T. K., 2006, Graphical Models, 68, 356
Yoon I., Weinberg M. D., Katz N., 2011, Monthly Notices of the Royal Astronomical Society, 414, 1625

