-- System Components for
the Pre-Processor in the ATLAS Level-1 Calorimeter Trigger --
The Pre-Processor Module (PPM) carries the
functionality required to transform analog calorimeter signals into digital
values of "transverse energy". These are transmitted as serial data-streams
to the subsequent processors, where "object-finding" is performed.
The "Cluster Processor" (->
RAL) identifies objects, whose energy-deposits are contained
in a small space-region of the calorimetry. Examples of such objects are
photons, electrons, tau-leptons ...
The "Jet/Et Processor" (->
Mainz) looks for extended objects like particle-jets, the sum
of "missing transverse energy" ...
Hence, two data-paths are formed at the PPM's output. The first represents
energy-deposits on a "fine" granularity (0.1 in pseudo-rapidity × 0.1 in
azimuthal angle), adequate for the "Cluster-Processor". The second path
presents energy-deposits on a "coarse" granularity (0.2 × 0.2), adapted for
the identification of extended objects.
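The fine-to-coarse summing can be pictured with a small sketch. The following Python fragment is purely illustrative (the grid values and layout are invented, not ATLAS data): it adds 2 × 2 blocks of "fine" 0.1 × 0.1 towers into "coarse" 0.2 × 0.2 elements.

```python
# Sketch (not ATLAS code): summing "fine" 0.1 x 0.1 trigger towers
# into "coarse" 0.2 x 0.2 elements by adding 2 x 2 blocks.
# Tower values below are invented transverse-energy counts.

def sum_to_coarse(fine):
    """Sum a 2D grid of fine towers into 2x2 coarse elements."""
    n_eta, n_phi = len(fine), len(fine[0])
    assert n_eta % 2 == 0 and n_phi % 2 == 0
    return [
        [
            fine[2 * i][2 * j] + fine[2 * i][2 * j + 1]
            + fine[2 * i + 1][2 * j] + fine[2 * i + 1][2 * j + 1]
            for j in range(n_phi // 2)
        ]
        for i in range(n_eta // 2)
    ]

fine = [
    [1, 2, 0, 0],
    [3, 4, 0, 1],
    [0, 0, 5, 5],
    [0, 2, 5, 5],
]
print(sum_to_coarse(fine))  # [[10, 1], [2, 20]]
```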
The analog input signals (ca. 7200 in total) enter the PPMs at the front-panel.
Each Module receives 64 calorimeter cells as differential signal pairs.
Pre-processing each signal implies several
steps before digital data can be passed on to the object-finding processors:
- Conditioning the analog input for digitization.
- Determination of the time to attribute the signal to the "bunch-crossing"
in the storage-ring accelerator where the collision took place. One possibility
is the application of a threshold in a discriminator to mark this point in time.
- Digitization to a 10-bit value, with sampling exactly on the signal's peak.
- Alignment of all pipelined values in terms of "bunch-crossing" clock-ticks.
The detector is huge, hence the spread in signal-propagation times to the
central trigger location is much larger than the clock interval.
- Fast monitoring of trigger-cell occupancy by means of histogramming.
- "Bunch-Crossing" Identification (BCID) for ALL domains of signal amplitude.
- Fine-calibration of the transverse energy by means of a Look-Up Table (LUT).
- Multiplexing channels, where possible, to economise.
- Adding cells to coarse granularity for detection of "extended" objects
(particle-jets, transverse energy in hemispheres).
- Serialization of the data-streams for transmission, because parallel transmission
of 7200 channels is ruled out by cable logistics.
- Provision of event-by-event readout.
Setting up, calibrating and on-line monitoring of such a complex system can
only be achieved by means of computerized tools. Also, verification of a
trigger decision in a quantitative off-line analysis is only possible when
the raw input data are available.
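Two of the steps above, BCID and LUT fine-calibration, can be sketched in software. The fragment below is a minimal illustration, not the algorithm as implemented in the PPrASIC; the threshold, pedestal and gain values are invented for the example.

```python
# Illustrative sketch (not the PPrASIC implementation): one possible
# BCID scheme -- flag the sample that is a local maximum above a
# threshold -- followed by a LUT that converts the 10-bit FADC value
# to a calibrated transverse energy. All settings are invented.

THRESHOLD = 40  # FADC counts, assumed discriminator setting

# Hypothetical LUT: maps a 10-bit FADC count to E_T (here simply
# pedestal subtraction and a gain factor, clamped at zero).
PEDESTAL, GAIN = 32, 0.25
LUT = [max(0, round((adc - PEDESTAL) * GAIN)) for adc in range(1024)]

def bcid_peak(samples):
    """Return indices of samples that are local maxima above threshold."""
    peaks = []
    for i in range(1, len(samples) - 1):
        if samples[i] > THRESHOLD and samples[i - 1] < samples[i] >= samples[i + 1]:
            peaks.append(i)
    return peaks

samples = [30, 33, 60, 180, 120, 50, 31]  # one 25 ns sample per bunch-crossing
peaks = bcid_peak(samples)
print(peaks)                              # [3]
print([LUT[samples[i]] for i in peaks])   # [37]
```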
The AnIn daughterboard converts the signal
to unipolar form, applies a programmable voltage-offset and transmits the
conditioned signal to a FADC for digitization to 10 bits at 40 MHz. In
parallel, a comparator marks the crossing of a programmable threshold in
time to identify the LHC "bunch crossing", in which the triggered proton-proton
collision took place.
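A rough software model of this conditioning chain, with all numeric settings assumed for illustration, looks as follows: a programmable offset is added, each 40 MHz sample is clipped to the 10-bit FADC range, and a comparator marks the first sample crossing a programmable threshold.

```python
# Sketch of AnIn-style conditioning, with invented numbers:
# apply a programmable offset, clip to the 10-bit FADC range, and
# run a comparator that marks the first threshold crossing
# (the digital time-marker for bunch-crossing identification).

FADC_MAX = 1023      # 10-bit range
OFFSET = 50          # ASSUMED programmable offset, in counts
THRESHOLD = 100      # ASSUMED comparator threshold, in counts

def condition_and_digitize(analog_counts):
    """Apply the offset and clip each sample to 10 bits."""
    return [min(FADC_MAX, max(0, s + OFFSET)) for s in analog_counts]

def time_marker(samples):
    """Index of the first sample at or above threshold, else None."""
    for i, s in enumerate(samples):
        if s >= THRESHOLD:
            return i
    return None

raw = [-60, 10, 200, 900, 1100, 400, 20]
digitized = condition_and_digitize(raw)
print(digitized)               # [0, 60, 250, 950, 1023, 450, 70]
print(time_marker(digitized))  # 2
```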
The analog signal as well as the digital time-marker are transported to
the Pre-Processor Multi-Chip
Module (PPrMCM). The MCM carries
a number of "bare" silicon-dies bonded to the substrate. These are 4 FADCs
by "Analog Devices", a Phos4 timer chip developed at CERN, a Pre-Processor
ASIC (PPrASIC) developed at the Kirchhoff-Institute's ASIC-Laboratory
and three LVDS transmitters by "National Semiconductor". The MCM makes it
possible to combine "commercial chips" with a dedicated ASIC (Application-Specific
Integrated Circuit), which contains experiment-specific functionality
not available on the market.
It should be noted further that the compactness as well as the processing
speed of the final system for ATLAS could only be achieved by making use
of the integration techniques offered by MCM and ASIC technology. This was
shown in studies resulting in a so-called "demonstrator".
The PPrASIC forms the "core" of the Pre-Processor
system. Its functional content is described using Verilog HDL (Hardware
Description Language). The design is synthesized using a "standard cell"
library provided by the manufacturer AMS (Austria MicroSystems) for the
0.6 µm CMOS process. Memory cells are also provided as blocks of
requested size by AMS. The physical size of the resulting silicon-die is
8.4 mm by 8.4 mm. The design is simulated at the Verilog-HDL level as well as
at the physical-layout level, where real "wire" delays are included. The design
output data will be transmitted to AMS for production of wafers.
Parallel data emanating from the PPrASIC are serialized in LVDS-dies on
the MCM for transmission across the board. ATLAS requires duplication
of signals from specific parts of the detector (fan-out) as well as transmission
across cables to the "trigger-processors" located at some meters distance.
This is achieved on yet another daughterboard of the PPM, the LVDS-Sender.
The board also carries components to pre-compensate amplitude-losses due
to cable properties. The "real-time" digital signals leave the crate through
a back-plane fragment, where cables are connected from the rear.
The trigger system has to work in "real time" driven by the LHC accelerator's
"bunch crossing" clock running at 40.08 MHz. The system incorporating distribution
of the clock and other protocol signals ("Level-1 Accept", "bunch counter
reset" ...) is the Trigger and Timing Control (TTC),
whose optical output is received in one Trigger
Control Module (TCM, see also below) for each Pre-Processor
crate. Electrical point-to-point wiring on the VME back-plane carries the
protocol to each PPM, where it is received and decoded in a CERN-developed
TTCrx chip, such that individual signals are available on the board.
A mandatory task in any trigger system is the continuous control and
monitoring of its performance. A Level-1 trigger makes its decision in "real time".
There is no possibility of reversal or recovery of measurement data.
Furthermore, the signals used in the trigger are branched off at
the detector's front-end analog electronics. Thus, no access to these measurement
values is possible except in the trigger system itself. Seen from this
point of view, the Level-1 trigger forms a "detector" in its own right.
Any experiment puts data from each of its "detectors" onto its read-out
system usually called DAQ (Data AcQuisition) for recording and storage.
The data-volume to be collected locally in the Pre-Processor system is
not terribly big, but the rate is high, since the trigger can produce a
"Level-1 Accept" at rates of up to 100 kHz.
To master the task of transporting a high data-volume per unit of time,
a special, 32-bit wide system called PipeLineBus
(PLBus) has been implemented, matching the transport-capacity of the subsequent
SLink (a CERN development). The crucial component in the PLBus scheme is
the ReM_FPGA (ReadoutMerger_Field Programmable Gate Array) on the PPM.
Its task is the collection of data on the PPM through various interfaces
on board as well as formatting and transmission under the PLBus protocol.
The PLBus system is bi-directional. Hence, set-up data can also be loaded
into the Pre-Processor system, when "routine"-DAQ runs take place at LHC.
A second means of access from computing infrastructure is given by the
implementation of VME bus. At first sight, PipeLineBus and VMEbus look
like needless duplication ("double knitting"). However, the data-volume to be
read out rules out VME as the only bus system. Nevertheless, setting-up, debugging, running-in
and local monitoring at the experiment as well as stand-alone testing of
PPMs are greatly facilitated by the presence of a "standard" data-bus system.
Furthermore, crate hardware with "standard" back-planes and power-supplies
is available on the market. The PLBus system has been implemented as an
"addition" on so-called "user-defined" pins on VME-connector J2.
The experiment's control system has to have an overview of environmental parameters
to ensure safe operation of the apparatus. In the case of the trigger system,
such parameters are: supply-voltages in the crates and temperatures
on particular parts of the electronic boards. These parameters are constantly
measured and sent via CAN bus
to the control-room.
The next station on the DAQ-path is the Pre-Processor ReadOut Driver (PPROD),
where collection from a section of PLBus-slaves (i.e. PPMs) is mastered.
Data are formatted further for transmission on SLink to the DAQ system
- hence the name "driver". In addition, local copies of data can be kept
on the crate-level for debugging / monitoring purposes in the local CPU
The Crate-Controller (PPCC)
is basically a PC-motherboard, whose PCI bus has been mapped onto VME by
means of the TUNDRA interface chip. This "TUNDRA PCI-to-VME interface"
is a printed-circuit board developed in-house at KIP. It connects
on one hand via a short cable (ca. 20 cm) to PCI and plugs on the other
hand into the VME back-plane. The rest of the assembly is purchased (PC-motherboard,
local disk etc) and mounted in a frame, which occupies 2 slots in a 9U
VME-crate. The PC runs under the Linux operating system without local graphics.
The full system in the ATLAS experiment will consist of EIGHT Pre-Processor
crates. Coverage of half the detector in terms of racks
and crates yields a rather complex arrangement, even though electronics
integration is pushed to the highest degree possible. Each Pre-Processor
crate holds 2 × 8 PPMs, which are supplied with Level-1 protocol signals from
the TCM located in the center. Two PPRODs collect the readout data from
the respective PipeLineBus rings. A crate-controller CPU allows controlling
/ debugging / monitoring of the hardware in the crate via the VME data bus.
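The channel counts quoted in the text can be checked with simple arithmetic: eight crates of 2 × 8 PPMs, each receiving 64 calorimeter cells, comfortably cover the roughly 7200 analog inputs.

```python
# Consistency check of the numbers quoted in the text:
# 8 crates x (2 * 8) PPMs per crate x 64 channels per PPM
# must cover the ~7200 analog input signals.

crates = 8
ppms_per_crate = 2 * 8
channels_per_ppm = 64

total_channels = crates * ppms_per_crate * channels_per_ppm
print(total_channels)  # 8192, above the ~7200 needed
```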