The LHC is designed to collide bunches of about 10^11 protons at a frequency of 40 MHz. This corresponds to an average of 35 inelastic proton-proton collisions every 25 ns. The Level-1 trigger has to reduce the interaction rate of 40 MHz down to 100 kHz by selecting those events which contain traces of interesting physics. Because of the limited size of the on-detector data buffers, a decision has to be taken within at most 2.5 microseconds after each event takes place.
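The quoted rates and latency can be cross-checked with a short back-of-the-envelope calculation (purely illustrative; all input numbers are taken from the text above):

```python
# Back-of-the-envelope cross-check of the Level-1 trigger numbers (illustrative only).
bunch_crossing_rate_hz = 40e6                  # LHC bunch crossing frequency
bunch_spacing_ns = 1e9 / bunch_crossing_rate_hz
print(f"bunch spacing: {bunch_spacing_ns:.0f} ns")           # 25 ns between crossings

l1_accept_rate_hz = 100e3                      # Level-1 output rate
rejection_factor = bunch_crossing_rate_hz / l1_accept_rate_hz
print(f"rejection factor: {rejection_factor:.0f}")           # keep 1 in 400 crossings

latency_s = 2.5e-6                             # maximum Level-1 decision latency
crossings_in_flight = latency_s * bunch_crossing_rate_hz
print(f"crossings buffered: {crossings_in_flight:.0f}")      # buffers hold ~100 crossings
```

The last number shows why the latency limit is so strict: every front-end buffer must hold on the order of a hundred bunch crossings while the decision is pending.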
To perform this task, the Level-1 trigger searches for highly energetic particles in the calorimeters and the muon system of the ATLAS detector. It consists of three different subsystems. The Level-1 Calorimeter Trigger (L1Calo) analyzes the energy depositions in the calorimeters to find electrons, photons, taus and jets. It also computes global sums of total and missing energy. The Level-1 Muon Trigger (L1Muon) uses data from the muon spectrometers to locate muon candidates. The information from these two subsystems is combined in the Central Trigger Processor (CTP), which makes the final Level-1 trigger decision. Only events which contain particles that pass configurable energy- or momentum-thresholds are accepted.
The L1Calo system uses reduced-granularity data to find objects in the calorimeters of the ATLAS detector. It is divided into several different processors, all of which are implemented in dedicated hardware and process the incoming data in parallel:
The PreProcessor (PPr) receives ~7200 pre-summed analogue signals from the calorimeters for each bunch crossing of the LHC. It was developed at the KIP in Heidelberg. The PPr digitizes these signals at a frequency of 40 MHz, the same as the LHC bunch crossing frequency. Afterwards it determines the amount of energy and the bunch crossing of each measured energy deposition. The resulting energy values for the different channels are then routed to the two following processors.
The Cluster Processor (CP) uses a sliding window algorithm to search for energy depositions that originate from electrons, photons or hadronically decaying taus.
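The sliding-window idea used by the CP can be sketched in a few lines. The snippet below is only an illustration of the principle: the window size, threshold, and local-maximum rule are placeholder assumptions, not the actual CP firmware, which uses specific window geometries, declustering rules, and reduced-precision arithmetic.

```python
import numpy as np

def sliding_window_candidates(et, window=2, threshold=5.0):
    """Toy sliding-window search on a 2D grid of tower transverse energies (GeV).

    A `window` x `window` block slides over the grid; a candidate is reported
    where the summed window energy exceeds `threshold` and is a local maximum
    among neighbouring windows. Parameters are illustrative placeholders.
    """
    ny, nx = et.shape
    sums = np.zeros((ny - window + 1, nx - window + 1))
    for iy in range(sums.shape[0]):
        for ix in range(sums.shape[1]):
            sums[iy, ix] = et[iy:iy + window, ix:ix + window].sum()
    candidates = []
    for iy in range(sums.shape[0]):
        for ix in range(sums.shape[1]):
            s = sums[iy, ix]
            if s <= threshold:
                continue
            # local-maximum requirement against the overlapping neighbour windows
            neighbourhood = sums[max(0, iy - 1):iy + 2, max(0, ix - 1):ix + 2]
            if s >= neighbourhood.max():
                candidates.append((iy, ix, s))
    return candidates

# A single hot tower is picked up by every window that contains it:
grid = np.zeros((8, 8))
grid[3, 4] = 10.0
print(sliding_window_candidates(grid))
```

Note that overlapping windows with equal sums tie, as in this example; real trigger hardware resolves such ties with asymmetric comparisons so that each deposition yields exactly one candidate.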
The Jet/Energy Processor (JEP) finds jet-candidates and measures global energy sums covering the whole of the calorimeter, i.e. the total transverse energy and the missing transverse energy. Like the CP, the JEP uses sliding window algorithms to fulfill its task.
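The global sums the JEP computes follow from a simple definition: the total transverse energy is the scalar sum over towers, while the missing transverse energy is the magnitude of the negative vector sum. The sketch below illustrates these quantities with floating-point arithmetic; the actual JEP firmware works with fixed-point sums in hardware.

```python
import math

def energy_sums(towers):
    """Total and missing transverse energy from a list of towers.

    Each tower is (et, phi): transverse energy in GeV and azimuthal angle in
    radians. Illustrative definition only, not the JEP's fixed-point firmware.
    """
    sum_et = sum(et for et, _ in towers)                  # scalar sum: total ET
    ex = sum(et * math.cos(phi) for et, phi in towers)    # vector sum, x component
    ey = sum(et * math.sin(phi) for et, phi in towers)    # vector sum, y component
    met = math.hypot(ex, ey)                              # missing ET magnitude
    return sum_et, met

# Two back-to-back depositions balance: large total ET, vanishing missing ET.
print(energy_sums([(30.0, 0.0), (30.0, math.pi)]))
```

A single unbalanced tower, by contrast, produces missing ET equal to its own ET, which is the signature used to select events with invisible particles such as neutrinos.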
For the upcoming Run-3 data-taking period, the above system will run in parallel to a new FPGA-based system built around so-called feature extractors (FEXes). In this upgraded system, tower information is digitized either directly on the front-end electronics of the Liquid Argon calorimeter or, for the Tile calorimeter, on new boards called Tile Rear Extension (TREX) modules. The identification of electrons, photons, taus and jets, as well as the calculation of the missing transverse energy, is performed by three FEX systems. The FPGAs within the FEXes allow for greater flexibility in the identification algorithms and are necessary in order to select the interesting physics events at high efficiency under the much harsher LHC running conditions.
The Tile Rear Extension Modules
The Phase-I upgrade of the ATLAS L1Calo trigger system for the Run-3 data-taking period introduces new subsystems for the identification of isolated particles and jets, called feature extractors (FEXes). In order to provide digitized data from the Tile Calorimeter to the FEXes via high-speed optical links, the L1Calo PreProcessor subsystem is being extended with new Tile Rear Extension (TREX) modules. Housing modern FPGAs and high-speed optical transmitters, the TREX provides digitized hadronic transverse energy results at the LHC clock frequency to the FEX processors via optical fibers operating at 11.2 Gbps, while also maintaining the data path to the legacy L1Calo processors via electrical cables. For verification of the trigger decision, the TREX gathers, formats and transfers event data to the DAQ system via the Front-End Link Exchange (FELIX) board or the legacy ROD module. The pre-production TREX modules include a new multi-processor system-on-chip (MPSoC) device, which manages the slow-control functionality and periodically sends the operating conditions via Ethernet to the DCS.
Software and Calibrations with the Phase-I System
The Phase-I upgrade introduces new hardware for L1Calo, including new Layer Sum Boards (LSBs) and new digitizing boards such as the LAr Trigger Digitizer Board (LTDB). With this improved hardware in place, the Run-2 analogue system, which is planned to run in parallel to the upgraded system at the start of Run 3, will need to be recalibrated. Compared to the system installed during Run 2, the new boards route the signals from the detector along different paths in order to process the higher spatial resolution of the calorimeter. As a consequence, the travel time of part of the signal through the system has changed. The analogue system relies on processing the signals in parallel and simultaneously, and therefore requires precise knowledge of the timing delays along each path. These changes can be compensated by introducing new delays in the Tower Builder Boards (TBBs), which allow target signals to be slowed down. Deriving these specific delays is part of the work done at KIP.
The L1Calo PreProcessor
One of the main contributions of the KIP ATLAS group to the trigger system is the construction and operation of the L1Calo PreProcessor. To handle the input of over 7000 analogue calorimeter channels, the PPr system is divided into ~120 PreProcessor modules (PPM) in eight VME crates. Each PPM is capable of processing 64 channels in parallel. The main processing tasks are carried out by Multichip Modules (MCM), of which there are 16 on each PPM. Each MCM is capable of handling four channels.
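The channel bookkeeping implied by these numbers is easy to verify (illustrative arithmetic only; the installed count of ~120 PPMs includes geometry constraints and spare capacity beyond the minimum derived here):

```python
import math

# Channel counts as quoted in the text.
channels_per_mcm = 4
mcms_per_ppm = 16
channels_per_ppm = channels_per_mcm * mcms_per_ppm
print(channels_per_ppm)                     # 64 channels per PPM, as stated

total_channels = 7200                       # approximate analogue input count
ppms_needed = math.ceil(total_channels / channels_per_ppm)
print(ppms_needed)                          # minimum module count, consistent with ~120 PPMs installed
```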
Several chips are soldered onto each MCM, including Flash ADCs, which digitize the incoming analogue signals, and an ASIC, which carries out the main signal processing.
During the running of the LHC, the KIP group was heavily involved in monitoring and maintaining the L1Calo system. This task includes the calibration of PPr parameters for the energy determination, the optimization of the fine timing, and the development of monitoring tools for the different variables. Technical support for the installed PPMs was also provided.
The new Multichip Module
In Run 1 of the LHC (2010-2012), the PreProcessor system performed its task very well and with high efficiency. After the LHC upgrade during the long shutdown, which ended in 2015, the number of particles colliding at the same time ('pile-up') increased, leading to harsher conditions for the trigger system. Therefore an upgraded replacement for the MCM was developed, the 'new MCM' (nMCM). Its main features are two dual-channel Flash ADCs which work at a higher frequency than the previous ones (80 MHz instead of 40 MHz), as well as a modern FPGA that replaces the ASIC. As a new feature, the nMCM includes an on-board signal generator to enable independent tests of the board's functionality. Prototypes were produced and successfully tested. The production of the final nMCMs and their tests was completed in late 2013, with installation and commissioning at CERN taking place in early 2014.
These hardware changes allow for a more precise processing of the digital calorimeter signals. The upgrade thus offers an opportunity to develop and implement new and enhanced trigger algorithms on the nMCM. For example, the KIP ATLAS group studies algorithms for the bunch crossing identification (BCID) of saturated signals. By exploiting the higher digitization frequency of the nMCM, maximum efficiency can be reached even for the highest possible energies. Ways to dynamically correct for pile-up-induced fluctuations of the signal pedestal were also developed.
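The dynamic pedestal correction can be sketched as a running baseline estimate per bunch-crossing position. Everything in this sketch is a labeled assumption: the parameter values, the exponential-moving-average update, and the signal veto are illustrative choices, not the nMCM firmware, which implements the correction in fixed-point hardware logic.

```python
class PedestalCorrector:
    """Toy dynamic pedestal correction, illustrating the idea only.

    The baseline (pedestal) of each bunch-crossing position in the orbit is
    tracked with an exponential moving average and subtracted from incoming
    ADC samples, so that pile-up-induced baseline shifts do not fake energy.
    All parameters and the update rule are illustrative assumptions.
    """

    def __init__(self, initial_pedestal=32.0, alpha=0.5, signal_cut=10.0):
        self.alpha = alpha                # averaging weight for new samples
        self.signal_cut = signal_cut      # veto obvious signal from the average
        self.initial_pedestal = initial_pedestal
        self.pedestal = {}                # per-bunch-position baseline estimates

    def correct(self, bcid, adc_sample):
        ped = self.pedestal.get(bcid, self.initial_pedestal)
        # Update the running pedestal only for samples consistent with the
        # baseline, so that real energy depositions do not bias the estimate.
        if abs(adc_sample - ped) < self.signal_cut:
            self.pedestal[bcid] = (1 - self.alpha) * ped + self.alpha * adc_sample
        return adc_sample - ped

corrector = PedestalCorrector(initial_pedestal=30.0)
print(corrector.correct(0, 30.0))   # baseline sample: corrected to zero
print(corrector.correct(0, 100.0))  # signal sample: pedestal left untouched
```

Tracking the pedestal per bunch position matters because pile-up varies systematically along the bunch train, so a single global baseline would over- or under-correct different crossings.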