
The Use of Trigger and DAQ in High Energy Physics Experiments Lecture 2: The LHC O. Villalobos...



Transcript

The Use of Trigger and DAQ in High Energy Physics Experiments
Lecture 2: The LHC
O. Villalobos Baillie, School of Physics and Astronomy, The University of Birmingham
September 14th 2013, ICTDHEP Jammu

Contents (Lecture 2)
- Introduction to the LHC
- Pipelining
- Examples from ALICE
- Data Acquisition systems
- Commissioning
- Examples from early collisions
- Summary

CERN LHC
- 2 contra-rotating beams with up to 3564 bunches each
- Bunch crossing rate of 40 MHz
- CM energies from 900 GeV to 14 TeV
- 2 injection points: Beam 1 comes in near ALICE; Beam 2 near LHCb

Physics Objectives of the Experiments
The four major LHC experiments together cover very different areas of physics.
- ALICE is designed principally for Pb-Pb collisions, which have very high multiplicities. This necessitates the use of slower detectors, putting an upper limit on usable luminosities. In Pb-Pb mode the LHC can deliver 8 kHz of interactions, while in pp mode the maximum interaction rate the experiment can handle is about 100 kHz.
- ATLAS and CMS aim at rare processes in pp interactions, for which they need the highest possible luminosity (>40 MHz collision rate).
- LHCb specializes in beauty decays, but is faced with a very high level-1 trigger rate (around 1 MHz). By using the trigger to select interesting decay modes, this rate is reduced to a final-level trigger rate of around 200 Hz.
In pp mode, the physics potential comes both from the greatly increased energy and the greatly increased luminosity, offering access to processes that until now have been too rare to be studied. In AA mode, annual data collection rates are comparable to RHIC, but there is a 25-fold increase in centre-of-mass energy. A challenge is that AA running time is only ~1 month, so data acquisition rates need to be an order of magnitude higher than at RHIC.
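To get a feel for the rates quoted above, a back-of-the-envelope estimate in Python. The luminosity and inelastic cross-section values below are illustrative assumptions (nominal L = 10^34 cm^-2 s^-1, sigma_inel ~ 70 mb), not numbers taken from the slides:

```python
# Rough interaction-rate and pile-up estimate at nominal LHC conditions.
# Assumed inputs (illustrative, not from the slides):
L = 1e34                 # instantaneous luminosity, cm^-2 s^-1
sigma_inel = 70e-27      # inelastic pp cross-section, cm^2 (~70 mb)
bc_rate = 40e6           # bunch crossing rate, Hz

interaction_rate = L * sigma_inel     # interactions per second
mu = interaction_rate / bc_rate       # mean interactions per bunch crossing

print(f"{interaction_rate:.1e} interactions/s, mu = {mu:.1f} per crossing")
```

This is why the trigger must make a fresh decision every 25 ns: there is no empty crossing to catch up in.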
LHC pushing the boundaries
Each of the LHC experiments pushes the limits achieved in previous experiments for trigger rates or data collection rates in some form. This has led to new architectures for both trigger and DAQ.

LHC Challenge
- The first and most obvious challenge of the LHC is that the time between bunch crossings is very short. LEP-style trigger and DAQ strategies will not work; a new approach is needed to deal with the high bunch crossing rate.
- The high intensity of the LHC beams means that radiation levels are much higher than at LEP, leading to severe restrictions on access. Many operations therefore have to be done remotely, and cable lengths are significantly greater than ever before.

Pipelining
- Most experiments (except ALICE) have between ~1 and ~10 interactions per BC, and must therefore be able to process each BC.
- The time between two BCs (25 ns) is far too short to allow a trigger to be processed. The solution is to break the algorithm into tasks that can each be performed in one BC, and to arrange that data from each successive BC are stored.
- When the complete algorithm finishes (after a fixed number of BCs), a decision is made. In this way a new set of data (for one BC) enters the system at each BC, and a new trigger decision is made for each BC, a fixed time (the trigger latency) after the data arrived.
- Data from non-triggering detectors are also stored in shift registers, advancing one position per BC. If, when the trigger decision is made, it turns out the data are not needed, they are discarded.

Pipelining (continued)
Although the trigger algorithm can take some time, the important thing is that the trigger decisions are deferred, and a new set of data is collected at each BC.
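The shift-register idea above can be sketched in a few lines. This is a toy model with invented names and a made-up latency, not any experiment's firmware; the point is only that one BC's data enters per tick and its decision emerges a fixed number of ticks later:

```python
from collections import deque

LATENCY = 4  # trigger latency in bunch crossings (illustrative value)

class ToyPipeline:
    """Toy pipelined trigger: one BC's data enters per tick, and the
    decision for the BC that entered LATENCY ticks earlier comes out."""
    def __init__(self, depth=LATENCY):
        # shift register: one slot per pipeline stage, pre-filled with None
        self.stages = deque([None] * depth)

    def tick(self, bc_id, et):
        # the oldest entry leaves the pipeline as a finished decision
        decision = self.stages.popleft()
        # in hardware each stage does one BC's worth of work; here we
        # compute the whole (trivial) "ET over threshold" decision up
        # front and simply let it ride through the shift register
        self.stages.append((bc_id, et > 10.0))
        return decision

pipe = ToyPipeline()
decisions = [pipe.tick(bc, et=5.0 + 2.0 * bc) for bc in range(8)]
# the first LATENCY ticks yield None while the pipeline fills;
# afterwards one decision emerges per BC, LATENCY ticks after its data
```

Note that the throughput is one decision per BC regardless of how long the full algorithm takes; only the latency grows with the number of stages.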
Pipelining as a production line
The processing is a typical production-line process, delivering a fresh decision each BC.

How to select on partonic physics
- The key to partonic processes is to select on transverse momentum or energy.
- The high-pT part of the event stands out from the underlying event.
- The trigger can exploit this to make a dramatic reduction in rate.
- Examples from Tevatron data.

Trigger Rates for some ATLAS triggers (at L = 10^34 cm^-2 s^-1)

  Condition                                           Rate
  1 muon with pT > 20 GeV/c                           ~11 kHz
  2 muons with pT > 6 GeV/c                           ~1 kHz
  1 e/γ with ET > 30 GeV                              ~22 kHz
  2 e/γ with ET > 20 GeV                              ~5 kHz
  1 jet with ET > 290 GeV                             ~200 Hz
  1 jet with ET > 100 GeV AND missing ET > 100 GeV    ~500 Hz
  3 jets with ET > 130 GeV                            ~200 Hz
  4 jets with ET > 90 GeV                             ~200 Hz

pp(bar) Cross Sections
[Cross-section plot: W, Z production; gluon-to-Higgs fusion; squarks and gluinos (m ~ 1 TeV); high-pT QCD jets; quark-flavour production.]

Trigger Signatures
[CMS detector sketch: jet, e and μ signatures in the IDET, ECAL, HCAL and MuDET; proton beams.]
Features distinguishing new physics from the bulk of the SM cross-section:
- Presence of high-pT objects from decays of heavy particles (min. bias ~ 0.6 GeV)
- More specifically, the presence of isolated high-pT leptons or photons
- The presence of known heavy particles (W, Z)
- Missing transverse energy (either from high-pT neutrinos, or from new invisible particles)
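As an illustration of the last signature, missing transverse energy is just the magnitude of the negative vector sum of the measured transverse energies. A sketch with invented tower values (not any experiment's actual algorithm):

```python
import math

# Toy calorimeter towers: (ET in GeV, phi in rad); values are invented
towers = [(50.0, 0.1), (45.0, 3.0), (30.0, 1.2), (25.0, 4.4)]

# MET components: negative vector sum over all towers
ex = -sum(et * math.cos(phi) for et, phi in towers)
ey = -sum(et * math.sin(phi) for et, phi in towers)
met = math.hypot(ex, ey)

print(f"MET = {met:.1f} GeV")
```

A perfectly balanced event gives MET near zero; a high-pT neutrino (or an invisible new particle) shows up as a large imbalance.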
Overview of ATLAS Level-1 Trigger
[Drawing by Nick Ellis (CERN), 2006: Calorimeter trigger and Muon trigger feeding the Central Trigger Processor (CTP); Timing, Trigger, Control (TTC) distribution; Pre-processor (analogue ~ ET); Muon Barrel Trigger and Muon End-cap Trigger feeding the Muon central trigger processor; Cluster Processor (e/γ, τ/h); Jet / Energy-sum Processor; Local Trigger Processors (LTP).]
- ~7200 calorimeter trigger towers; O(1M) RPC/TGC channels.
- The design is all digital, except the input stage of the calorimeter trigger (the Pre-processor).
- Latency limit: 2.5 μs.

ATLAS Level-1 e/γ Trigger
- The ATLAS e/γ trigger is based on 4×4 overlapping, sliding windows of trigger towers.
- Each trigger tower is 0.1×0.1 in η×φ; there are ~3500 such towers in each of the EM and hadronic calorimeters, and ~3500 such windows per system.
- Each tower participates in the calculations for 16 windows. This is a driving factor in the trigger design.
- De-clustering: a cluster must have more ET than the 8 surrounding 2×2 ones. This avoids double counting.

ATLAS Level-1 Calorimeter Trigger
- Analogue electronics on the detector sums signals to form trigger towers.
- Signals are received and digitised; the digital data are processed to determine ET per tower for each BC.
- Tower data are transmitted to the Cluster Processor, which performs the object finding.
- Values needed in more than one crate must be fanned out, requiring a compact design. Within a CP crate, values need to be fanned out between electronic modules, and between processing elements on the modules.
- Connectivity and data-movement issues drive the design.
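The sliding-window search with de-clustering can be sketched on a toy 2D grid of tower ETs. The grid size, deposit values and threshold are invented for illustration; the real trigger does this in parallel hardware, not in a Python loop:

```python
# Toy de-clustered sliding-window finder: a 2x2 cluster fires only if its
# ET sum exceeds a threshold AND is a local maximum among the 8 overlapping
# neighbour clusters (the de-clustering step that avoids double counting).
grid = [[0.0] * 8 for _ in range(8)]   # tower ETs in GeV (invented values)
grid[3][4] = 12.0
grid[3][5] = 6.0
grid[2][4] = 3.0

def cluster_sum(i, j):
    """ET sum of the 2x2 cluster whose lower-left tower is (i, j)."""
    return grid[i][j] + grid[i+1][j] + grid[i][j+1] + grid[i+1][j+1]

def fires(i, j, threshold=15.0):
    s = cluster_sum(i, j)
    if s <= threshold:
        return False
    # de-clustering: compare against the 8 overlapping neighbour clusters
    neighbours = [cluster_sum(i + di, j + dj)
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0)]
    return all(s > n for n in neighbours)

hits = [(i, j) for i in range(1, 6) for j in range(1, 6) if fires(i, j)]
```

The shared deposit spills into several overlapping windows, but the local-maximum requirement ensures exactly one of them reports the cluster.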
Pre-processor
Pre-processor module (PPr):
- Takes signals from the ATLAS calorimeters as shaped analogue pulses
- Digitises and synchronises them
- Identifies the BC from which each pulse originated (BCID)
- Performs the ET calibration
- Prepares the digital signals for serial transmission
The PPr serves the JEP and CP systems.
More details: heidelberg.de/Elektronik/EWweb/PPrSysKomp/PPrSysKomp.html

Jet and Cluster Processors
Jet/Energy Processor (JEP):
- Looks for extended jet-like objects in the calorimeters, and for the sum of missing transverse energy
- Receives towers with twice coarser granularity from the PPr
Cluster Processor (CP):
- Identifies objects whose energy deposits are contained in narrow calorimeter regions (e, γ, τ, h)

The ATLAS Level-1 Calorimeter Trigger System
[Photos: Level-1 Calorimeter Pre-processor crate; analogue trigger cables received in the electronics cavern.]

Bunch-Crossing Identification
- Calorimeter signals extend over many bunch crossings (25 ns sampling), so information from a sequence of measurements must be combined to estimate the energy and to identify the bunch crossing where the energy was deposited.
- Apply a Finite Impulse Response (FIR) filter; pass the result through a look-up table (LUT) to convert the value to ET; a peak finder on the result determines the BC where the energy was deposited.
- Care is needed with signal distortion for very large pulses: don't lose the most interesting physics!
- An ASIC (application-specific integrated circuit) incorporates all of the above.
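A toy version of this BCID chain in Python. The sample values, FIR coefficients and LUT (a simple pedestal-plus-gain stand-in) are all invented; the real ASIC uses its own calibrated parameters:

```python
# Toy BCID chain: FIR filter -> LUT (pedestal + gain) -> peak finder.
samples = [2, 3, 10, 25, 14, 5, 3, 2]   # ADC counts per 25 ns sample (invented)
fir = [1, 4, 9, 4, 1]                   # FIR coefficients (invented)

def fir_filter(x, coeffs):
    """Sliding dot product of the FIR coefficients over the samples."""
    k = len(coeffs)
    return [sum(c * x[i + j] for j, c in enumerate(coeffs))
            for i in range(len(x) - k + 1)]

def lut(v):
    # stand-in for the ET look-up table: pedestal subtraction + gain
    return max(0, (v - 40) // 10)

filtered = [lut(v) for v in fir_filter(samples, fir)]

# peak finder: flag the BC whose value beats both neighbours
peaks = [i for i in range(1, len(filtered) - 1)
         if filtered[i - 1] < filtered[i] >= filtered[i + 1]]
```

The pulse spreads over several samples, but the peak finder assigns the energy to a single bunch crossing, which is exactly what the downstream trigger logic needs.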
Data Transmission and Cluster Processor
- The array of ET values computed in the previous stage has to be transmitted to the CP, over digital electrical links to the Cluster Processor modules: ~5000 links at 400 Mbit/s, fanned out to 8 large FPGAs per module.
- The e/γ (together with the τ/h) algorithm is implemented in FPGAs (Field Programmable Gate Arrays, i.e. reprogrammable logic). This has only become feasible with recent advances in FPGA technology; it requires very large and fast devices.
- Each FPGA handles 4×2 windows and needs data from 7×5×2 towers (E/H).
- Data are fanned out to neighbouring modules over a high-density custom backplane: ~800 pins per slot in a 9U crate, 160 Mbit/s point-to-point.
- The algorithm is described in a programming language that can be converted into an FPGA configuration file, so algorithms can be adapted with experience. Parameters can be changed easily: e.g. the cluster-ET thresholds are held in registers that can be programmed.

HLT Electron Trigger
- The L1 e/γ trigger is already very selective; complex algorithms and full-granularity detector data are needed in the HLT.
- Calorimeter selection: sharpen the ET cut; use shower-shape variables to improve jet rejection; optimise signal efficiency and background rejection; multivariate techniques may be used already in the trigger!
- The HLT is implemented in software, running on farms of PCs: almost full flexibility within the constraints of the available computing resources. The available time per event is from tens of ms to a few seconds (second and third trigger levels).
- Associate a track in the inner detector with the matching calorimeter cluster and compute E/p.
- [Photon turn-on curve for the 20 GeV photon trigger.]

Comments
The whole algorithm takes about 2 μs, a considerably shorter time than the OPAL algorithm, and delivers a fresh decision every bunch crossing, thanks to the pipelined architecture.
However, note that pipelining such a large detector implies very large (temporary) data storage, and is expensive. Note also the improvement on the L1 decision that comes from using a High Level Trigger (HLT) (more later).

ALICE Detector
- The ALICE detector capabilities have been chosen to allow the detector to handle the very high multiplicity events expected in Pb-Pb collisions.
- The detector design gives more importance to tracking and charged-particle identification, and less to calorimetry, compared to the other LHC experiments.
- A low material budget is essential, hence the choice of a Time Projection Chamber (TPC) as the principal tracking detector.
- The lower rates relative to the other experiments mean that full pipelining becomes difficult to justify: most detector systems in ALICE are not pipelined.

[ALICE detector layout: TPC, PHOS, Muon arm, TOF, TRD, HMPID, PMD, ITS, ACORDE.]

[ALICE heavy-ion event display.]

ALICE Trigger
- The ALICE trigger architecture is different from that of the other experiments.
- Individual ALICE detectors can select on global features of the event (multiplicity, ET, E_ZDC) or on the presence of e, μ, jets.
- Globally, there is no requirement to try to correlate in (η, φ) as in other experiments (though provision exists). However, there is a requirement to try to optimize the use of the detector.

Silicon Pixel Detector and ALICE Pixel Trigger
[Diagram: half-stave (sensor, pixel chips, readout MCM; dimensions 76 mm / 39 mm / 141 mm / 400 mm); Fast-OR extraction from the pixel chips; 120 G-Link channels to the control room; optical splitters feed the Pixel Trigger System processing and the CTP, with a copy to the DAQ; stage latencies 800 ns / 400 ns / 150 ns / 250 ns.]
10 chips per half-stave.
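A pixel Fast-OR trigger reduces each chip to a single "hit present" bit per bunch crossing. A minimal multiplicity-style decision on those bits might look like the sketch below; the chip count and threshold are invented, and this is not the real ALICE Pixel Trigger logic:

```python
# Toy Fast-OR trigger: fire if enough pixel chips report at least one hit.
N_CHIPS = 1200                       # Fast-OR bits per BC (illustrative)

def fastor_decision(bits, min_chips=10):
    """bits: iterable of 0/1 Fast-OR flags, one per pixel chip."""
    return sum(bits) >= min_chips

quiet = [0] * N_CHIPS
busy = [1 if i % 50 == 0 else 0 for i in range(N_CHIPS)]  # 24 chips hit

print(fastor_decision(quiet), fastor_decision(busy))  # prints "False True"
```

The appeal of the Fast-OR scheme is that the decision needs only one bit per chip, so it can be formed within the tight L0 latency budget.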
Pixel Trigger data rates
- This trigger electronics involves very high data rates. Input bandwidth: 96 Gb/s (192 Gb/s in total); output bandwidth: 10 Mb/s.
- Careful attention must be paid to heat dissipation in the G-Link receiver boards. Even with cooling, hot spots reach temperatures > 100 °C.

Transition Radiation Detector (TRD)
- 1.2 million channels, 1.4 million ADCs, MCMs, 540 modules.

TRD Operation Principle
- High-voltage anode, amplification factor 5000.
- Gas ionization by charged particles; charge amplifiers at the cathode pads.
- ADC samples at 10 MHz.

TRD Trigger Timing: Global Tracking
Inside the GTU (Global Tracking Unit):
- Analyze up to 20,000 tracklets. Objective: find high-momentum tracks.
- Search for tracklets belonging together; combine tracklets from all six layers.
- Reconstruct pT, compare to a threshold and generate the trigger.
- The trigger decision is required after 6 μs, to prevent loss of data in other detectors (TPC). Charge drift and data pre-processing use most of this time; only approx. 1.5 μs of processing time remains for global tracking and the trigger decision.

The Trigger Processor GTU
- Tracklets and raw data from the detector: 240 GByte/s via 1080 links.
- Each node receives the data from one detector stack: 2.7 GByte/s via 12 fibre links.
- TRD trigger contribution of raw data to HLT/DAQ: 3.5 GByte/s.
Online Track Reconstruction
- 3D track matching: find the tracklets belonging to one track.
- Tracklets are projected onto virtual planes; an intelligent sliding-window algorithm in y, vertex and z finds candidates.
- A track is found if 4 tracklets from different layers fall inside the same window.
- Each stack: up to 240 tracklets per event.

Particle Momentum Reconstruction
- Assumption: the particle originates at the collision point.
- pT is estimated from the line parameter a: pT = const. / a, giving a fast cut condition for the trigger: |a| < const. / pT,min.
- High precision: ΔpT/pT < 1%.

Online Tracking Performance
- GTU pT reconstruction precision: 0.3%, depending on tracklet quality (comparison between the GTU and an offline circle-fitting algorithm).
- Detection efficiency depends on occupancy and on the number of tracklets delivered by the detector: > 98% for clean tracks with 6 tracklets (measured data from a PS test beam in Nov. 2007).

Installation at CERN
- Complete trigger processor system installed at CERN (70 m underground).
- Commissioning and extensive read-out and interoperability tests performed successfully.
- Continuous effective operation during beam tests and cosmics runs.
[Photo: GTU installation at CERN (end of 2008, work in progress); 2 out of 18 GTU segments.]
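The fast momentum cut above amounts to comparing the fitted line parameter against a bound that can be precomputed once per threshold. The constant and the parameter values below are invented, and the real GTU does this in FPGA fixed-point arithmetic rather than Python floats:

```python
# Toy GTU-style pT cut: pT ~ CONST / |a|, so
#   pT > pt_min   <=>   |a| < CONST / pt_min
CONST = 0.5            # field/geometry constant (invented units)
PT_MIN = 2.0           # trigger threshold, GeV/c (invented)

A_MAX = CONST / PT_MIN   # precomputed once: the bound on |a|

def high_pt_trigger(a):
    """a: fitted line parameter of a matched track (toy units)."""
    return abs(a) < A_MAX

# a stiff (high-pT) track has a small line parameter
print(high_pt_trigger(0.1), high_pt_trigger(0.4))  # prints "True False"
```

Turning a division (pT = const./a) into a single precomputed comparison is exactly the kind of transformation that makes a cut cheap enough for the ~1.5 μs tracking budget.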
Partitions and Clusters
- It is convenient to be able to select sub-systems of the detector and allow them to operate independently (partitioning). This requires separation of the trigger, the DAQ and the Detector Control System (DCS); one ends up with an independent smaller experiment.
- Partitioning is usually done on a run-by-run basis as part of the configuration of the detector (changing at most every few hours).
- ALICE also allows consecutive events to activate different sections of the detector. This arrangement makes more efficient use of the detector, since (especially in Pb-Pb) the readout times of different detector systems can vary by orders of magnitude.

Context diagram of the Central Trigger Processor (CTP)
- CTP inputs: LHC timing (BC, Orbit); 60 trigger inputs (24 L0, 24 L1, 12 L2); 24 BUSY inputs.
- CTP outputs: 24 independent sets, 7 outputs per sub-detector, 168 signals total.
- CTP interface: ECS, DAQ, RoIP.
- CTP readout: trigger data for events accepted at L2 level; the Interaction Record.

Block diagram of the CTP
- Synchronous processor clocked by the 40 MHz bunch-crossing clock (BC).
- Logic blocks designed as individual VME boards: 6U form factor, 8 PCB layers, moderate density.

CTP boards in a VME crate
- Front-panel connections: timing inputs, trigger inputs, BUSY inputs, CTP outputs, interface links.
- Internal connections: custom backplane.

Need for Clean Events
- ALICE rates are such that there are large gaps between events. At the same time, especially in Pb-Pb conditions, the multiplicities are so extreme that pile-up events become unmeasurable.
- It is important to monitor pile-up, and to select those events where it has not occurred.
- Pile-up circuits monitor sliding windows of time, appropriate for the TPC (drift time ~100 μs) or the ITS (the SDD has a drift time of ~6 μs).
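To put numbers on how often such a sliding window stays clean, one can assume Poisson-distributed interactions. The rate below is invented for illustration; only the drift-window arithmetic (~100 μs at 25 ns per BC, i.e. ~4000 crossings) follows the figures above:

```python
import math

def p_clean(mu, n_bc):
    """Probability that a window of n_bc bunch crossings contains no
    interaction, assuming Poisson statistics with mean mu per crossing."""
    return math.exp(-mu * n_bc)

mu = 1e-3        # mean interactions per bunch crossing (illustrative)
window = 4000    # ~100 us TPC drift window at 25 ns per BC

print(f"P(clean TPC window) = {p_clean(mu, window):.3f}")
```

Even a modest per-crossing rate gives a small clean-window probability once the window spans thousands of crossings, which is why the long-drift detectors dominate ALICE's pile-up bookkeeping.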
Pile-up
Two questions:
(i) What is the number of interactions in one bunch crossing?
(ii) What is the probability of ≥1 interaction in n bunch crossings?
(i) Suppose the mean number of interactions per bunch crossing is μ. The number n of interactions follows a Poisson distribution, P(n) = μⁿ e^(−μ) / n!. If μ ≪ 1, μ can be taken as the probability p of an interaction.
(ii) Then the probability of having no interactions in an interval of one bunch crossing on either side of an interaction is the probability of having no interactions in 2 successive bunch crossings, (1 − p)². More generally, the probability of at least one interaction in n bunch crossings is 1 − (1 − p)ⁿ.

Past-future Protection circuit
- 4 independently programmable circuits at each trigger level (+1 for the Test Class).
- A sliding time window during which the interaction signal (INTa/b) is counted; 2 identical blocks, based on dual-port memory.
- Programmable parameters: protection interval (Ta/b); 2 thresholds (THa1/2, THb1/2); output delay (a/b); output logic function; delay and alignment of the output signals.

CTP Monitoring
Two types:
- The trigger provides full information to the offline system on how each decision was taken: the triggers activated and the trigger inputs received. Scalers are also recorded regularly and summarized at the end of the run; during the run, the scalers are monitored directly.
- Additional, very detailed tests are available using snapshots: a record of all BCs (whether triggered or not) during a 32 ms period, used to analyse how the trigger arrived at a decision, possible errors, trigger alignment, busy behaviour, etc.

Data Acquisition
- Very challenging for all experiments. Use all known tricks!
- Decouple data read-out from triggering via adequate buffering, possibly in more than one place.
- Have (but hope not to use) a back-pressure mechanism to stop the data flow if the DAQ is having trouble keeping up.

[ALICE DAQ architecture diagram.]
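The decoupling idea can be sketched with a bounded buffer: the readout pushes events in, the DAQ drains them asynchronously, and back-pressure (BUSY) is asserted when the buffer fills. A toy model with invented names and capacity, not ALICE's actual DAQ software:

```python
from collections import deque

class EventBuffer:
    """Toy multi-event buffer with a back-pressure (BUSY) signal."""
    def __init__(self, capacity=4):
        self.buf = deque()
        self.capacity = capacity

    def busy(self):
        # back-pressure signal: tell the trigger to hold off
        return len(self.buf) >= self.capacity

    def push(self, event):
        if self.busy():
            return False          # event rejected, trigger sees BUSY
        self.buf.append(event)
        return True

    def drain(self):
        # DAQ side: ship the oldest event out, if any
        return self.buf.popleft() if self.buf else None

meb = EventBuffer(capacity=4)
accepted = [meb.push(f"ev{i}") for i in range(6)]   # DAQ not draining yet
meb.drain()                      # DAQ ships one event out...
recovered = meb.push("ev6")      # ...and the trigger can fire again
```

This is also the hook for the rare-trigger suppression described on the next slides: by watching how full the buffer is, frequent triggers can be paused before the hard BUSY limit is ever reached.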
More Realistic Data Flow Modelling
[Model: DET → L0 → DL → L1 → DL → L2 → DL → F2D-DL, with L0/L1/L2 single-event buffer (SEB) and multi-event buffer (MEB) stages, RORC and data link.]
- Detailed modelling is required to decide the optimum parameters for buffering.
- The model above (for ALICE) is generic, in that not all the features apply to every detector: decide whether a given detector has both (or either) an SEB (single-event buffer) or an MEB (multi-event buffer).
- Choose a modelling scheme (both generic C++ and PTOLEMY have been used in ALICE).
- Note that the full system, including the DAQ (not shown here), is needed to avoid pitfalls.

Suppression of Rare Triggers
- If all the buffering in the DAQ fills, the rate drops because of back-pressure (this takes some seconds).
- By monitoring the status of the buffers in the DAQ, warnings can be issued when space is running low. At this point the frequent triggers are temporarily switched off, allowing full bandwidth for the rare triggers; when the buffering is OK again, the frequent triggers are re-instated.
- This results in a cyclic pattern for the frequent triggers and constant data taking for the rare ones (minimum bias is only on cyclically).

HLT Architecture
- Totally configurable: all or none of the functions are performed in any run.
- HLT trigger decisions are either flagged or acted upon.

ATLAS DAQ Overall Structure
[Diagram.]

Clocks
The LHC timing is distributed in a common way to all experiments using the Timing, Trigger and Control (TTC) system. This provides:
- An accurate (~40 ps precision) clock, centrally synchronized to the LHC machine and distributed to all experiments, together with other timing signals (e.g. flagging of orbit changes, etc.)
- A means to send the principal trigger signal
- A means to send a set of (short) messages to the trigger systems, either as broadcasts or to individual locations.
- The system uses single-mode optical fibres for communication, and specially commissioned quartz crystals used with custom QPLL circuits to regenerate accurate signals at many points along the chain.
- In a newer (4-year-old) development, the master clock is delivered to the experiments from the 400 MHz clock used by the machine itself, scaled down to the 40 MHz cycle used to identify bunch crossings by the 6U VME RF2TTC module.
- All experiments use the clock distribution and the trigger distribution. ALICE uses the message system to reset the orbit counter, to send a pretrigger for calibration at a fixed BC number, and to send the other trigger levels along with trigger information on the trigger types and clusters fired.

TTC continued
- The TTC carries clock, trigger and data by running at a higher frequency than 40 MHz. Each 25 ns period is divided into two parts: TTCa and TTCb.
- The TTCa transition is dedicated to the trigger (always available); the TTCb transition is dedicated to signals (with a priority mechanism to avoid signal collisions).
- Orbit alignment is done by looking at the structure of the orbit, which (even when the machine is fully commissioned) will contain characteristic gaps at known places.

Commissioning
So far, the LHC experiments have run in three basic modes:
- Cosmic ray detection
- Beam splashes
- Collisions
All these phases have been crucial for preparing the detector for operations.

Cosmic Rays
- Cosmic ray detection has been crucial for the geometrical alignment of the detectors. The same tracks passing through different detectors are a much more efficient and precise way to align detector components. (ALICE has, in the ITS alone, a large number of modules, each with 6 parameters to be specified.)
- For the trigger, the main problem is that cosmic rays bear no relation to the LHC clock, so triggering is only approximate. Rates are often very low.
- For example, ALICE ITS alignment required tracks to pass through the smallest detector (the Silicon Pixel Detector), whose outer layer has a radius of 7 cm. The rate was ~0.4 Hz.
- Tracks come from the wrong direction (mostly downwards), so timing between detectors is either not possible (the same tracks do not go through the requisite detectors) or a bit misleading (tracks go through the upper layers first, instead of the inner layers first).

Spectacular Results
Events are mainly one track, but occasionally spectacular events occur.

Beam Splashes
- As the preparations for the LHC progressed, there were a series of beam extraction tests, in which the beam was brought to the entrance of the LHC and then dumped.
- These were particularly useful for the experiments near the injection points (ALICE and LHCb), as they gave a supply of measurements with the correct timing structure.
- Note, however, that the timing is correct only on the outgoing side. On the incoming side, tracks pass through the detectors in the wrong order compared to collisions. This still gives a lot of information, and allows relevant trigger timing to be performed.
- Less useful for CMS and (especially) ATLAS, as the beam only gets to these detectors late in the commissioning process.
- A beam splash on the TED (a beam stopper in the LHC beam pipe) stops the beam and generates a very large number of particles. This means only suitable detectors can take part.

Beam Splash Event
First LHC event, June 15th 2008. The beam was stopped just outside the LHC, but tracks went through the detector.

Beam 2 debris going through LHCb
- These events are useful for trigger timing.
- The inner tracking devices were off, but the outer tracker, calorimeters, RICH detectors and muon chambers were recording.
September 10th 2008 and all that
- Circulating beam provided the opportunity to do these tests with much more statistical precision.
- Procedures were improved considerably over the long wait until 2009, meaning that things went very smoothly at the startup in November.

Beam Timing
[Trigger timing (before alignment) versus bunch number for the SPD, V0, beam-pickup (BPTX) and T0 triggers.]

Luminosity monitor (V0)
[Single turn; double turn, beam 1 back at point 2!]

Collisions
- 2009 brought the first collisions, at very short notice, on November 23rd. The long wait was worth it, and all techniques had improved from the lessons of 2008.
- After a pause of a week or so, collisions continued for about three weeks. However, there was no attempt at focussing the beams to increase intensity, so rates were only in the region of ~2 Hz in each experimental area most of the time.
- Each experiment collected a few 10^5 events: enough for some total cross-section physics tests, but the intensity was so low it did not allow rare triggers to be tested.

Initial beam settings
[FA - General First Physics Meeting - 16 October 2009]
- 22 bunches, delayed by 75 ns for each orbit (and for each experiment): 1 bunch-bunch encounter (bb), 1 bunch-empty encounter (be), 1 empty-bunch encounter (eb).
- Reminder: transverse beam dimension ~85; average number of interactions per bb crossing determined by the bunch intensity; energy ~ GeV scale per beam below the 5 TeV design value.
- Also small pilot bunches, which show up on the counters.

Trigger Alignment
Alignment was in good shape: largely done in advance with beam splashes and circulating beam.

November 23rd 2009: First Events
[Event displays.]
First CMS Event
[I. Mikulec, CMS. Mon 23 Nov 19:21: first candidate collision.]

Orbit structure, early runs
- 44 bunches (plus pilots): 2 colliding, 2 non-colliding.
- As the number of bunches goes up, near misses start to appear, which collide not far from the nominal interaction point and result in tracks going into the apparatus. Such interactions will be very much a part of normal running conditions when the machine is running with its full complement of bunches.

Prospects for 2010
- 2009 showed us that the LHC works. In 2010 there will be many new things to do:
- Beam intensity will go up, and energy will go up.
- Triggers which were not tested in 2009 will become active.
- As rates go up, pile-up between events and within a single event will occur frequently.
- DAQ systems will be tested much more severely.
- Plenty of interesting things to do!

Summary
- In this lecture we have looked at some specific issues with trigger and DAQ in the LHC environment.
- For most LHC experiments, pipelining is a must, as the only way to cope with the short period between collisions.
- We have looked at two trigger detector systems and one Central Trigger Processor.
- DAQ systems decouple from triggering and draw in events from buffers asynchronously.
- HLT systems help to filter events; in ALICE the HLT is also used for data compression.

