
IEEE Transactions on Nuclear Science, Vol. NS-32, No. 4, August 1985

UA1 DATA ACQUISITION SYSTEM

Esko Pietarinen

Department of High Energy Physics

University of Helsinki

Siltavuorenpenger 20 C

SF-00170 Helsinki, Finland

The UA1 experiment observes particles produced in proton-antiproton collisions at the CERN Super Proton Synchrotron. The data acquisition system of UA1 must handle very high data rates and perform sophisticated triggering and filtering in order to store the physically interesting events. In this report particular emphasis is given to the new VME based readout system designed to provide the required speed, modularity and expandability.

1. The layout of the UA1 detector complex

Particles produced in the collision of a proton and an antiproton in the center of the detector are observed based on the information which they leave in the different layers of the apparatus. Fig. 1 describes schematically the location of the various detector components [1]. The timing and the amount of raw data generated by each detector part are considered below. The interval between beam crossings is 3.8 µs, which determines the main timing of the trigger system.

Fig. 1. The UA1 detector. (Labels: muon chambers, muon wall, streamer tubes, gondolas, electromagnetic calorimeter, end caps, central detector, bunch crossing position.)

1.1 The central detector

The central detector is a cylindrical drift chamber containing 6250 sense wires along the direction of a 0.7 T magnetic dipole field. The drift velocity of the electrons is 5.3 cm/µs and the drift time is measured in 4 ns units. Signals from both ends of each wire are amplified and fed into 6-bit A/D converters to measure the pulse height and the charge division. The amount of data is 256 bytes per wire, giving a total of 1.6 Mbytes/event. Zero suppression reduces this to typically 300 kbytes, and the actual hit information after formatting is about 80 kbytes. 110 Read Out Processor modules (ROP's) containing MC68B00 and 8X300 CPU's perform the data reduction and formatting. ROP readout is carried out with the REMUS (CAMAC) system. Central detector digitizers have a fast memory, which enables them to store another event only 4 µs after the previous one. However, when both buffers are full, it takes about 35 ms until the next trigger is possible.

1.2 Electromagnetic and hadron calorimeters

Electrons and photons are detected in the electromagnetic calorimeter, which consists of lead and scintillator sheets. Hadrons often pass through the electromagnetic calorimeter and are absorbed in the hadron calorimeter, which consists of iron and scintillator material. The iron also serves as a return yoke for the magnetic field. The produced light is collected by photomultiplier tubes and the signals are digitized by LRS 2282 ADC's. The total conversion time is about 3 ms and the amount of calorimeter data is about 14 kbytes. Part of the calorimeter data is collected separately to produce a fast trigger within 3 µs of beam crossing. A digital buffer memory after the ADC's allows a second trigger after the conversion time. This time (3 ms) can be reduced by installing analog buffers before the ADC's. Forward detectors are needed to complete the energy summation. The calorimeter data are crucial in the selection of interesting events. The third level filter program in 168/E emulators refines the trigger decision using mainly these data.

1.3 Muon detector and streamer tubes

Muons pass through the calorimeters without difficulty and are detected by muon drift chambers and streamer tubes. Hit information from the drift chambers is used for the fast first level trigger, which exploits a look-up table to search for tracks pointing to the interaction vertex within 150 mrad. Drift chamber information is further processed by the FAMP multiprocessor system to provide a second level trigger in 1 ms. Streamer tube data come from 40000 channels and are packed and formatted to a few kbytes by the readout electronics and a VME system consisting of 6 CPUA1 VME processors. The muon detector readout is also doubly buffered.

Fig. 2. The timing of trigger levels 1 and 2 and filter level 3. (The diagram shows, following the pp collision: the level 1 calorimeter and muon triggers within a few µs of the crossing, the level 2 muon trigger in about 1 ms, digitization (ADC) in 3 ms, data reduction in 40 ms, then parallel readout, the level 3 event filter in 50-500 ms, and mass storage.)

2. Parallel readout and event builder systems

2.1 REMUS readout

The data acquisition of UA1 is based on the REMUS multicrate system (modified CAMAC) [2]. After digitization, data appear scattered in about 150 CAMAC crates, from which they must be read out as fast as possible. The present multibranch system has a total transfer speed of over 50 Mbytes/sec. Until now data were read into REMUS data buffers (RDB's) and collected in a tree-like structure into 5 main branches. A special module reads the data from these branches with a 0.5 µs CAMAC cycle time into one of the 5 168/E emulator processors for third level filtering [3]. Since the average event size is about 120 kbytes, 35-40 ms are needed for a typical event.

0018-9499/85/0008-1463$01.00 © 1985 IEEE

Fig. 3. REMUS readout crate layout. (Branches shown for the forward chambers and the central detector.)

Although the above readout is quite efficient, it has several points where improvement is necessary. The main problems are due to the difficulty of extending the size, speed and control of the buffer memories, as well as the lack of certain diagnostic, autotest and control features, which would need local CPU power. Also the implementation of event task units (processes requiring event data) is either nonstandard or occurs with some sacrifice of speed. Starting from the next runs the readout system will be partly implemented in VME, which adds the required properties.

2.2 VME readout system

VME is an industry standard multiprocessor bus [4]. Originally it was not designed for such data acquisition tasks as are needed in high energy physics, but equipped with a few new modules it becomes highly competitive also in those applications.

One of the most important requirements is that the parallel readout must be done along an auxiliary bus (e.g. a dedicated or VMX bus) without loading the main VME bus. Equally important is that a CPU can access this bus for data reduction or second level trigger purposes. In practice this means that data are written into dual port memories via the VMX bus and transferred further for event building via the VME bus. Since the auxiliary VMX bus is connected with a flat cable, it can be easily adapted for such configurations.

The connection between VME and REMUS is made by the VME/VMX REMUS branch driver (see fig. 4) [5]. Readout control and event data busses are separated in order not to delay control functions because of the high speed block transfers (which are interruptible) occurring on the event data bus. The VME/VMX REMUS branch driver is a secondary master on the VMX bus with direct VMX memory access. The writing speed is limited by CAMAC to 2 Mbytes/sec.

Fig. 4. VME/VMX REMUS branch driver parallel readout unit (connected to a REMUS branch).

Another example of VME readout is the streamer tube analog readout unit (STAR) [6], which uses dedicated busses to transfer digitized data to the VME/VMX STAR controllers. The actual readout electronics at the streamer tubes performs A/D conversion and double buffering.

Fig. 5. Streamer tube analog readout controller

2.3 Event task units

Parallel readout units write data into dual port memories, from where the event builder reads and broadcasts the event to one or more event task units. The VME crate interconnect modules perform the block transfer with speeds up to 10 Mbytes/sec. Broadcasting on VME is a nonstandard feature implemented in the memories. The most important event task unit is data storage, which at the same time contains the third level filter using 168/E's [3]. Later a group of 3081/E emulators will be connected in a similar way to perform on line and even off line data analysis. The data are finally written to tapes at a speed of 3-4 Hz. The task of the third level filter is to match the higher event rate from the parallel readout (up to 30 Hz) to the tape writing speed. During the next data taking, optical disk storage will be tested as an alternative storage medium. Two video disk drives with 1 Gbyte storage and 0.5 Mbyte/sec writing speed will be installed as event task units in the VME system.

3. VME system modules and utilities

3.1 VME crate interconnect module

The purpose of the crate interconnect module [7] is to combine the VME multiprocessor systems into a network. The transfer speed required in high energy physics experiments like UA1 exceeds the capacity of commercial local area network systems by one or two orders of magnitude. The crate interconnect was constructed as a single width VME module connected to a flat cable (the crate interconnect bus) on the multidrop principle, transferring 32 bits in parallel and carrying a limited number of address and control signals. The system is hierarchical, with the master of the crate interconnect bus always located in the same crate, although a transfer of mastership is also possible by software. The distance between crates is normally limited to a few metres, since TTL logic is used. However, an interface with differential drivers and receivers has also been built to cover transfer ranges up to a few hundred metres.

The VME crate interconnect operates in two modes, one for control access and another for block transfers. A control access has to be fully handshaked, and speed is not of primary importance. It is possible to map any 64 kbyte long segment of the memory map of a remote VME crate into the memory map of the master crate by first writing the access address modifiers and the desired (window) base address to the remote module. After that it is possible to perform VME cycles in the remote crate as if they were accesses to the master crate. The required software is just a call to set up the required window mode, after which a single instruction is sufficient to move the window if the given 64 kbyte range is not sufficient. Digital filtering of the strobe and acknowledge signals ensures reliable operation.

Fig. 6. The VME crate interconnect. (Operating modes: DMA mode at 10 Mbytes/sec, and memory window mode with a 64 kB window and programmable base address.)

Block transfers can be done between any pair of crates. For this purpose the master CPU writes the appropriate start addresses and byte counts to the source and destination modules and initiates the transfer, which proceeds autonomously. The speed is limited by the memory cycle time, which is about 400 ns, giving a speed of 10 Mbytes/sec. A parity check is performed, and the number of transferred words is checked to be correct at the source and destination ends. For broadcasting, the appropriate data transfer acknowledge signal can be generated by the crate interconnect module. The crate interconnect can transmit an interrupt to the master crate. In this way service requests originating from the remote crates can be handled by the master.

3.2 VME processor modules and memories

The processor and memory modules [8] were built according to the specifications from UA1. Corresponding boards are now available also from other manufacturers. One frequently needed feature is that CPUA1 has a mailbox, a dual port RAM accessible from both the CPU internal bus and the VME bus. In addition, writing to the first location of the mailbox causes an interrupt on the CPU. In a multiprocessor environment like our VME system (60 VME processors) this is both an efficient and simple way of system management. Other features of particular importance are the timers, 256 kbytes of RAM and a serial RS-232 port. A few parallel I/O lines are also useful for interrupt signal connections and control.

The dual port RAM module (DPRX) [8] exploits 8k bytewide static CMOS RAM's to make a 128 kbyte module, or 256 kbytes with a piggyback board. The memories have a special broadcast mode. Accessing them with a special address modifier does not give the usual data transfer acknowledge signal on VME; instead a front panel signal is generated, which matches the logic and the connector in the crate interconnect module for a global data transfer acknowledge signal. An important property of DPRX is a programmable base address on VME. In connection with the virtual memory 68010 processor in CPUA1 this allows running large programs on the processor.

Recently at least two companies have announced 68020 based VME modules [9][10], which are extremely interesting also for high energy physics use. Both modules have an auxiliary bus and the option of using coprocessors such as floating point (68881 already available) or a memory management processor. The 5 processors [10] to be delivered soon for UA1 have a large dual ported VME accessible internal memory of 1 Mbyte. The high computing power of such VME boards will be applied in on line calculation of calibration constants, histogramming and event display.

3.3 Other VME modules supporting data acquisition

Several other commercially available VME modules have been utilized in the UA1 data acquisition, like EPROM or battery backup RAM boards, graphics processors, large RAM's and parallel I/O modules. Also an interrupt vector generator with TTL/NIM inputs has been developed [11]. For high speed data transfer between mainframes and VME, the crate interconnect bus can be connected to newly developed CAMAC or Unibus modules [12]. For monitoring and software development a personal computer interface to the 68000 based Macintosh [13] has been developed at CERN [14]. This provides a user friendly control access to VME as well as CAMAC crates. From the VME bus the Macintosh appears similar to any other VME processor.


3.4 VME software

All processors contain a specially developed monitor and system software package on their on board EPROM's [15]. This contains, in addition to normal monitor functions, facilities to communicate via the mailboxes, serial links, CAMAC and the crate interconnect network in a device independent way. Also timer, interrupt and I/O handling is included.

A Fortran compiler has also been developed with enhancements suited to the data acquisition [16]. This compiler, Real Time Fortran for 68k (RTF68K), has Fortran 77 features with additions like pointers, interrupt handling, and the possibility to access internal registers as well as to add embedded assembly code. Even the new 68020 and floating point 68881 instructions are implemented.

The data acquisition software [17] contains a number of self checking properties, which make the system fault tolerant. The actual configuration of the memory maps and the CPU's can be obtained rapidly by scanning through specific locations in all crates. The base address of the mailbox of each CPU has an offset depending on the CPU number. In case of a problem the system can reconfigure itself in a fraction of a second and attempt data acquisition without the faulty module. There is a good separation between event tasks, since they are carried out by separate processors.

3.5 Outlook

In the future the performance of VME based data acquisition will increase rapidly. It will soon be cost effective to produce memories with block transfer speeds approaching 100 ns cycle times. Fast memories are also needed to exploit fully the power of the new 32 bit processors.

The VME crate interconnect module will appear equipped with a fibre optics interface. The new module will have two optical inputs and two outputs to permit various ring structures, and will most likely be implemented as a 68020 coprocessor. Much higher transfer speeds make feasible its use for the second level trigger, e.g. in connection with the new uranium calorimeter planned for the UA1 experiment.

The increasing processor power located in the VME system, connected with the high speed optical disk storage, will permit off line analysis to be carried out by the data acquisition system after the data taking periods. The raw data are written on the optical disks leaving gaps for the results of a later analysis. Since the access times are short, the relevant information is found easily and additional information can be written in place with minimal system overhead.

Acknowledgements

I am grateful to colleagues in the UA1 on line computer working group for discussions and nice collaboration.

The support by the Academy of Finland is gratefully acknowledged.

References:

1 A 4π solid angle detector for the SPS used as a proton-antiproton collider at a centre of mass energy of 540 GeV (UA1 proposal), CERN/SPSC 78-6 (1978)

2 P.J. Ponting, A guide to Romulus/Remus data acquisition systems, CERN-EP 80-1 report (1980) and references therein

3 J.T. Carrol et al., Data acquisition using 168/E, Proceedings of the "Three-day in-depth review on the impact of specialized processors in elementary particle physics", Padua (1983) p. 47

4 VME bus specification manual, e.g. Motorola Inc. MVMEBS/D1 (1982) and revisions

5 C. Engster and L.G. van Koningsveld, REMUS-VME-VMX branch driver (RVMX) type V385, CERN-EP (1984)

6 S. Centro, Streamer Tube Analog Readout, Padua report (1984)

7 E. Pietarinen, VME bus interconnect manual, Univ. of Helsinki report (1985)

8 CPUA1 and DPRX are products of Data-Sud S.A., Montpellier, France

9 Motorola Inc., MVME 130 processor with MC68020

10 Robcon Oy, P.O. Box 9, SF-00391 Helsinki, VME-020 processor with 68020

11 A. Arignon and Ph. Fathouat, VME interrupt vector generator, DPhPE-SEIPE CENS, F-91191 Gif-sur-Yvette Cedex, France

12 Unibus is a trademark of Digital Equipment Corp. Crate interconnect-Unibus interface: D. Pascoli et al., University of Padua (to be published)

13 Macintosh is a trademark of Apple Inc.

14 B.G. Taylor, MACVEE interface, CERN-EP report (1985)

15 M. Demioulin, The CPUA1 monitor, UA1-note (1985)

16 H. von der Schmitt, RTF68K users manual, CERN-EP and University of Heidelberg (1985)

17 S. Cittolin, (private communication)

