Design of the Level-1 Global Calorimeter Trigger


"For I reckon that the sufferings of this present time are not worthy to be compared with the glory which shall be revealed to us."
The epistle of Paul the apostle to the Romans, ch. 8, v. 18

1. Introduction

This document describes the Global Calorimeter Trigger, part of the Level-1 trigger logic for CMS. This section and the next contain an introductory discussion of the Level-1 trigger, and a description of the calorimeter trigger aims and geometry. This is followed in Section 3 by the detailed specification of the Global Calorimeter Trigger requirements, updating the material in the Level-1 Trigger Technical Design Report, Section 6.2. An overview description of the system implementation follows in Section 4. More detailed discussion of the different system aspects and key technology tests can be found in the accompanying documentation set [1]-[9]. Project management and schedule information is summarised elsewhere.

The CMS Level-1 trigger uses a subset of the data from the muon detectors and calorimeters in order to select bunch-crossings containing interesting events at a rate of no more than 100 kHz. The Level-1 latency is required to be less than 128 LHC bunch-crossings (3.2 µs) in order to limit the length of readout buffers. No significant deadtime is allowed; the system is therefore implemented in custom hardware, using fully pipelined logic synchronised to the LHC bunch clock.

The trigger algorithms are based upon the identification of trigger objects (µ, e/γ, jet, τ) and on measurement of missing and total transverse energy. Trigger objects are identified in the muon and calorimeter systems separately, and a subset of identified objects is selected from each according to transverse energy and other criteria. The full information describing the energy and position of the selected objects is sent to the global trigger, which combines information from the muon and calorimeter systems, and makes a Level-1 trigger decision based upon object energies and event topology.

Figure 1: The CMS Level-1 trigger system

The Level-1 trigger system is shown in outline in Figure 1. The calorimeter trigger consists of the following subsystems:

- The Trigger Primitive Generator (TPG) system, which processes the digitised calorimeter signals to produce coarser-grained trigger input data.
- The Regional Calorimeter Trigger (RCT), which implements e/γ identification algorithms and calculates local transverse energy sums, in each of several separate detector regions.
- The Global Calorimeter Trigger (GCT), which fulfils the functions described below.

A first conceptual design for the GCT system has already been presented. Since the initial design study, the functional and technical requirements for the GCT and the Level-1 trigger system as a whole have been extended and refined, and new technologies have become available. We present here an updated design in the light of these changes.

The GCT is the final component in the calorimeter trigger chain. Its purpose is to implement the stages of the trigger algorithms that require information from the entire CMS calorimeter system. The GCT receives trigger object data from the RCT, performs several stages of data processing, and sends a reduced amount of information to the Global Trigger (GT). This note describes an implementation of the GCT that includes a large part of the jet and energy sum processing. The GCT functionality consists of two major parts:

- Final-stage sorting of e/γ trigger objects according to rank.
- Jet finding and sorting, and energy sum calculation, based on input energies covering the whole of the calorimeter system.

Additionally, the GCT is required to perform luminosity monitoring using L1 trigger data. The detailed specifications for each of these functions are given in Section 3. The GCT implementation is then described, along with the interfaces to other trigger system components and the strategies for system control, setup and test. Finally, the GCT prototyping programme and project status and schedule are discussed. The incorporation of jet finding into the GCT represents a significant increase in functionality over the baseline system described in the Trigger TDR. The impact of this increase on the design of the system is discussed in an Appendix.

2. Calorimeter Trigger Geometry

CMS has high-resolution electromagnetic and hadronic calorimeters (ECAL, HCAL) covering the pseudorapidity range |η| < 3, and coarser-grained coverage up to |η| = 5 via the forward hadron calorimeter HF. The ECAL and HCAL detectors are divided into barrel and endcap portions. The calorimeter trigger receives E_T and fine-grain profile information from the calorimeter electronics and finds isolated and non-isolated electrons or photons, τ candidates, jets and missing E_T. Electron/photon (e/γ) and τ recognition is implemented over the range |η| ≤ 2.5, while jets are found over the full calorimeter system.

The trigger tower is the smallest geometrical subdivision of the calorimeter seen by the trigger electronics; the (φ,η) dimension of a trigger tower varies over the η range. Trigger towers are grouped together into regions, which are more nearly constant in size. The information input to the GCT is sent at the level of trigger regions. This section describes the trigger tower and region geometry in detail.

The trigger tower (φ,η) dimension results from a compromise between the background rate of the electron/photon trigger, which increases with the cell size, and the number of trigger channels, which must be as small as possible for cost reasons. In total the CMS calorimeter trigger has 4176 towers, corresponding to 2448, 1584 and 144 towers respectively in the barrel, endcap and forward calorimeters. In the barrel and endcap, the boundaries of ECAL and HCAL trigger towers follow each other. Each ECAL half-barrel is divided into 72 towers in φ and 17 towers in η. A calorimeter trigger tower in the barrel has dimensions Δφ×Δη = 0.087×0.087, and is formed by 5×5 crystals. In the ECAL endcap, the trigger tower average (φ,η) boundaries are Δφ×Δη = 0.087×0.087 up to |η| ≈ 2. The η dimension then grows with η, so that the physical size of the towers does not become unreasonably small. The number of crystals per trigger tower varies between 25 at |η| ≈ 1.5 and 10 at |η| ≈ 2.8. The trigger towers correspond to the (φ,η) size of an HCAL physical tower, except for |η| > 1.74 where the HCAL tower has twice the φ dimension of the trigger tower. In this region, the HCAL tower energy is divided in equal amounts and assigned to the two trigger towers contained in it. In the barrel-endcap transition region, barrel and endcap HCAL segments are added together.

Table 1: Calorimeter trigger regions, giving for each group of regions in η the constituent towers, the maximum η value, and the calorimeter subdivision (barrel, overlap, endcap or forward).

Information at the level of trigger towers is used in the e/γ processing. For the jet and energy processing, and for reporting the position of e/γ objects, the barrel and endcap trigger towers are organised into calorimeter regions. Each region is 4×4 trigger towers. The trigger segmentation of the HF is not required to have small φ binning, since this detector does not participate in the electron/photon trigger. However, we do need seamless coverage for the jet and missing E_T algorithms. Therefore we keep 18 HF φ divisions, which exactly match the trigger boundaries of the 4×4 trigger tower regions in the HB and HE. The HF towers are treated as individual regions and their φ division matches the regions in the barrel and endcap.

These calorimeter regions form the basis of the jet and energy triggers. The dimensions of the calorimeter region are adequate for the jet trigger algorithm, which is based on sliding windows of 3×3 calorimeter regions (12×12 trigger towers). Missing E_T is computed using φ divisions of 20° over the entire (φ,η) plane. The (φ,η) indices of the calorimeter regions are used to identify the location of trigger objects (e/γ and jets) in the GCT and Global Trigger. Table 1 shows the details of the region definitions in η. A region covers 20° in φ and an η slice of typically 0.35. A total of 396 regions (18 in φ by 22 in η) covers the whole calorimeter system, with 14 regions in η in the barrel/endcap and four in each HF.

3. System specification

In this section, the GCT algorithms are specified, and the functional requirements for data capture and system control are given. An overview of the GCT processing is shown in Figure 2.

3.1. Sort processing

The principal function of the GCT is to reduce the number of trigger object candidates that need to be considered by the GT. This is achieved by sorting the trigger objects according to a computed rank, and forwarding to the GT only a fixed number of objects with the highest rank. The rank of each object is in general based on its E_T, but may be modified by other factors such as isolation status.

Figure 2: Processing layout in the GCT

Sorting is performed on five categories of object: three types of jets and two classes of e/γ candidate. The pattern recognition to find the jet objects takes place in the GCT, as described in Section 3.2, while the e/γ feature extraction is located in the RCT crates. The sorting for all object types follows closely analogous paths; we illustrate the procedure here for the e/γ sort.

The RCT sends the four non-isolated and four isolated e/γ objects of the highest rank from each barrel/endcap trigger crate to be sorted in the GCT. The rank ordering of each incoming set of four objects is not specified by the RCT; that is, the objects are sent to the GCT in random order. The GCT sorts incoming objects separately within each of the two classes, and forwards four from each class to the GT. All sorting is performed strictly according to object rank; in the rare case of two or more objects of a class having identical rank, a simple priority algorithm is applied, based on object positions.

The information received by the GCT for each trigger object consists of a rank and a location in (φ,η) space. The position information identifies the trigger region in which each object lies. Only the relative position within the trigger crate is specified; each crate covers 2×7 regions, and so the location can be represented in four bits. The location information for each object is recoded and expanded during GCT processing to reflect the absolute position within the global (φ,η) space; this requires nine bits per object at the GCT output. The rank is encoded in six bits, and is not altered by the GCT.

3.2. Jet and energy sum processing

In the implementation considered here, the remaining information received by the GCT consists of transverse energy sums from each of the trigger regions in the barrel/endcap and forward calorimeters. The GCT examines the pattern of energy deposition to find jets over the whole calorimeter system. Jets are found and sorted in three categories, as described above. The energy input is also summed to find the total energy in the calorimeter, E_T, and the missing energy vector, E_T^miss. Additionally, the jet objects are used to generate further quantities as input for multi-jet and activity triggers. Numbers of jets satisfying various energy-threshold and other conditions are accumulated; and the total calibrated transverse energy in jets, known as H_T, is also calculated as a measure of overall event activity.

Each region energy value is accompanied by a feature bit. In the barrel and endcap, the feature bits represent a τ veto, while in the forward calorimeter they may be used to distinguish between narrow, relatively low-E_T jets and more diffuse activity or pileup fluctuations. The functions to be performed using these region energy inputs are:

- Find jets over the full (φ,η) range of the calorimeter. Divide the jets into forward and central jets at some η value. Further divide the central jets into τ and non-τ jets using the τ feature bits.
- For jet object triggers: assign each jet a rank based on its E_T and on η. Sort the ranks separately in each of the three categories to find the largest four, as for e/γ candidates.
- For multi-jet triggers: count jets found above a number of thresholds, selected to enhance searches for specific physics signatures.
- For activity triggers: sum the total transverse energy and the components of the missing E_T vector, and perform the missing E_T calculation. Sum the total calibrated energy in jets to form H_T.

Thus the eventual outputs are the four highest central τ, non-τ and forward jets, some jet counts, the total event E_T and H_T, and the magnitude and direction of the missing E_T vector. For each jet we send 6 bits of rank, plus the φ and η region coordinates in 9 bits. Up to twelve 5-bit jet counts are foreseen. The total E_T and H_T are each sent in 12 bits of value plus an overflow bit. The missing E_T is 19 bits in total: 12 bits of magnitude with an overflow bit, and 6 bits of direction.
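To make the output formats above concrete, the short Python sketch below packs a jet word (6 bits of rank plus a 9-bit region coordinate) and the 19-bit missing-E_T word. The field widths are those quoted in this section; the bit ordering within each word is our own assumption for illustration, not the layout defined in the output specification [7].

# Illustrative packing of GCT output words. Field widths follow Section 3;
# the bit ordering within each word is an assumption, not the spec in [7].

def pack_jet(rank: int, position: int) -> int:
    """15-bit jet word: 6-bit rank plus 9-bit (phi,eta) region coordinate."""
    assert 0 <= rank < 2**6 and 0 <= position < 2**9
    return (position << 6) | rank

def pack_etmiss(magnitude: int, overflow: bool, direction: int) -> int:
    """19-bit missing-ET word: 12-bit magnitude, overflow bit, 6-bit phi."""
    assert 0 <= magnitude < 2**12 and 0 <= direction < 2**6
    return (direction << 13) | (int(overflow) << 12) | magnitude

# Four objects per sorted category at 15 bits each give the 60-bit rows of
# Table 2 in Section 4.1; a quick numerical check:
print(4 * 15, pack_jet(rank=0x2A, position=0x155), pack_etmiss(100, False, 9))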

Jet object triggers

The jet processing is based on a 3×3-region sliding window algorithm. A jet is found in a particular region if the energy in that region is larger than in the 8 surrounding regions. In this case the jet E_T is the sum of the 9 region energies. The algorithm takes care of the rare case where two neighbouring regions have large and equal E_T, and ensures that one and only one jet is found. Jets found in the central 14 regions in η can be classified as τ jets, if none of the 9 regions contributing to the energy sum has its τ veto bit set. Each jet found by the algorithm is assigned a 6-bit rank, taking account of possible η-dependent effects in the E_T calibration. The ranks are then sorted for each of the three jet types.

Multi-jet triggers

The total numbers of jets found above certain energy or rank thresholds must be counted. Other conditions, such as cuts on particular η ranges, can also be applied to the jet counts. Flexibility is required here to allow the cuts to be selected according to the physics requirements when searching for multiple jet signatures.

Activity triggers

The start of the energy sum processing is integrated with the jet clustering. The clustering is organised so as to enable the summation of region energies into strips of 20° in φ. For the missing E_T calculation, these strip sums must be transformed into E_x and E_y, and further summed over the whole calorimeter. The components of the missing E_T vector are then transformed into a magnitude and a direction in φ. In parallel, the raw strip energies are also summed to form the total E_T. The summing to form H_T also proceeds in parallel, starting from the calibrated energies of identified jets above a chosen threshold.

3.3. Online luminosity measurement

Accurate knowledge of the LHC luminosity at the CMS interaction point is necessary in order to measure physics cross-sections. Absolute luminosity measurements will be performed using specialised detectors, with the resulting data evaluated offline. However, it is also important to perform continuous online monitoring of luminosity in order to provide rapid feedback to the CMS and LHC operators, and to monitor the luminosity for each LHC bunch-pair individually. Since the GCT receives data from the whole calorimeter system for every bunch crossing, it is possible to provide online luminosity monitoring on a bunch-by-bunch basis.

In principle, the rate of any distinguishable physics signal can be used as a measure of relative luminosity. In practice, it is necessary to use signals with a rate that is low enough to count, but high enough to provide a rapid statistical measurement, and which are reasonably free of contamination by background and pile-up effects. It is desirable to identify quantities whose rate, while sufficiently high, is also relatively insensitive to calibration-dependent threshold shifts. The exact choice of channels is under study, but the measurements are likely to be based on rates of high-p_T jets and/or global energy flows. It is therefore probable that the luminosity subsystem of the GCT will make direct use of the jet counts described above, along with summary data from the global energy calculation subsystem.
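The following Python sketch is a behavioural model of the missing-E_T arithmetic described under Activity triggers above: 18 φ-strip sums at 20° spacing are converted to E_x and E_y, summed, and transformed back into a magnitude and a φ direction. The hardware implements the same arithmetic in fixed point (Section 4.7); treating the strip centres as multiples of 20° is our assumption about the φ convention.

import math

# Behavioural model of the missing-ET calculation from 18 phi-strip sums.
# The strip-centre convention (k * 20 degrees) is an assumption; the real
# system uses fixed-point multiplies and compares rather than floating point.

def missing_et(strip_et):
    """strip_et: 18 transverse-energy sums, one per 20-degree phi strip."""
    assert len(strip_et) == 18
    ex = ey = 0.0
    for k, et in enumerate(strip_et):
        phi = math.radians(20.0 * k)
        ex -= et * math.cos(phi)   # missing ET opposes the deposited flow
        ey -= et * math.sin(phi)
    return math.hypot(ex, ey), math.degrees(math.atan2(ey, ex)) % 360.0

# A single 50 GeV deposit at phi = 0 gives 50 GeV missing ET at phi = 180.
print(missing_et([50.0] + [0.0] * 17))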

The calculated bunch luminosities are sent to the CMS detector control system at regular intervals. The goal is to provide an updated luminosity estimate for each LHC bunch every few seconds during normal running, with a relative precision of less than 10% for measurements integrated on this time scale.

3.4. Trigger Data Capture

The GCT is required to provide information to the CMS DAQ system for each event passing the L1 trigger. This information will be used for online performance monitoring and offline analysis of trigger efficiency. The DAQ and higher-level trigger systems may also make use of event summary data from the L1 trigger in order to identify regions of interest.

All input and output data associated with the GCT are made available to the DAQ system, along with selected intermediate data. This allows the performance of the GCT to be monitored and communication errors with other components of the L1 trigger system to be identified. The fraction of the available data that is sent to the DAQ system is programmable, as more detailed information will be required during setup and diagnostic periods than during normal running. The interface between the GCT and DAQ systems is via the CMS standard SLINK-64 interface.

The GCT captures information from the trigger data stream and buffers it for the duration of the remaining L1 latency. Upon receipt of a L1 accept signal (L1A), the data are passed into derandomisers, formatted into a single event block, and placed in the SLINK-64 buffers. Sufficient capacity is provided by the various buffers to allow, for example, the storage of all GCT input/output data corresponding to 4 bunch-crossings before and after each bunch-crossing passing the L1 trigger, at the maximum L1A rate of 100 kHz.

3.5. Control, Test and Monitoring

Automated control, test and monitoring of the GCT system is an important requirement. It must be possible to perform system setup, reset, test and reconfiguration without physical access at any time during LHC running. The GCT is capable of performing rigorous self-test at chip, board and system level whilst in situ, and without manual intervention. Tests of almost every system component may be performed using simulated physics data generated by software, previously captured input data, or real input data from the RCT. The GCT can also fulfil a variety of test functions in cooperation with other trigger subsystems, in order to test synchronisation and communication link performance.

Continuous online monitoring of GCT operation is performed using data captured from the normal trigger data stream. The system is controlled by software capable of diagnosing problems using the monitoring data, and of alerting the supervising trigger control system to take corrective action (e.g. halt, system reset, buffer flush) when necessary. Fast status signals are also derived directly from the GCT processor logic in order to detect conditions such as synchronisation loss and buffer overflow, and are sent to the trigger control system via dedicated hardware links. High-level control of the GCT system will be coordinated through the trigger control system; the interface to this system will be implemented at a software level.
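As a rough plausibility check of the bunch-by-bunch precision goal of Section 3.3, the snippet below applies Poisson counting statistics: a 10% relative precision needs at least 100 counts per bunch per measurement interval. The LHC revolution frequency is a machine parameter; the interval length and per-crossing signal probability are illustrative assumptions only.

import math

# Poisson statistics for the luminosity monitor: relative precision is
# roughly 1/sqrt(N), so <10% needs N > 100 counts per bunch per interval.

ORBIT_HZ = 11_245.0     # LHC revolution frequency
interval_s = 5.0        # assumed reporting interval ("every few seconds")
p_signal = 2.0e-3       # assumed signal probability per bunch crossing

counts = p_signal * ORBIT_HZ * interval_s     # expected counts for one bunch
print(f"N = {counts:.0f}, precision = {1.0 / math.sqrt(counts):.1%}")
# -> N = 112, precision = 9.4%: per-crossing rates at the 1e-3 level are
#    roughly the minimum needed for the stated goal.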

4. System Overview

In this section, the implementation of the GCT system is described. The GCT, which is housed in a single electronics rack within the CMS underground counting room USC55, comprises two crates of input modules and a single processing crate. The system layout is shown in Figure 3. The function of the input modules is to receive the data from the 18 RCT crates, synchronise them to the local clock and route them to the processing modules. Muon pattern bits calculated in the RCT are sent directly to the Global Muon Trigger. The processing crate contains FPGA-based processing modules of a single, generic design, with interfaces to the central timing, control and data acquisition systems of CMS. Timing and control signals for the input modules are routed through the processing modules.

Figure 3: Layout of the GCT system within USC55 (input cables from the RCT feed the two input crates, which in turn feed the processor crate; the GCT rack connects to the GT/GMT rack)

We begin by listing the trigger input to and output from the system. In subsections 4.2 to 4.8 we then provide overview descriptions of different aspects of the implementation. For each of these subsections, a corresponding accompanying document is available which presents the relevant material in more detail.

4.1. Input and output data

The RCT processing is organised into 18 crates. One crate covers two regions in φ and half of the calorimeter in η. A crate therefore sends e/γ objects from 2×7 regions and E_T sums from 2×12 regions to the GCT. These data arrive in differential ECL on six 34-pair cables from each crate; the total number of cables is 108. Each input cable carries up to 64 bits of data per 25 ns beam-crossing interval (bx). Two of the cables per RCT crate transfer the non-isolated and isolated e/γ candidates, together with pattern bits required by the Global Muon Trigger. The other four cables are filled with the (φ,η) map of the energy deposition in the calorimeter.

Table 2 specifies the input [6] and output [7] data for the GCT. A total of 6264 bits is received by the input modules per bx. 504 bits are staged directly to the GMT. The output of the GCT processing is 405 bits per bx sent to the GT.

Table 2: GCT input and output data

  Input data                                  Bits per item per bx    Total bits per bx
  72 isolated e/γ                             6 rank, 4 position      720
  72 non-isolated e/γ                         6 rank, 4 position      720
  Muon pattern bits, 18(φ) × 14(η) regions                            504
  36 cables of e/γ data                                               1944
  Barrel/endcap region energy, 18(φ) × 14(η)
  Forward region energy, 18(φ) × 8(η)
  72 cables of energy data                                            4320
  Total input                                                         6264

  Output data                                 Bits per item per bx    Total bits per bx
  Data to Global Muon Trigger                                         504
  4 isolated e/γ                              6 rank, 9 position      60
  4 non-isolated e/γ                          6 rank, 9 position      60
  4 central jets                              6 rank, 9 position      60
  4 forward jets                              6 rank, 9 position      60
  4 tau-flagged jets                          6 rank, 9 position      60
  Total transverse energy E_T                 12 value, 1 overflow    13
  Total calibrated energy in jets H_T         12 value, 1 overflow    13
  Magnitude of missing energy E_T^miss        12 value, 1 overflow    13
  Direction of missing energy E_T^miss        6 bits                  6
  12 jet counts                               5 bits per count        60
  Data to Global Trigger                                              405

4.2. Input modules

The data from the RCT are first received in the GCT by a set of input modules (IMs). Each IM receives six of the 108 input cables. The data are transferred from the IMs to the processor modules and to the GMT on serial links. One link has a maximum data-carrying capacity equivalent to 32 bits per bx. Each IM has twelve output links and each processor module 24 input links.

The IM is shown in cartoon form in Figure 4. It consists of a 6U 220 mm card with a passive daughtercard, occupying two standard-width crate slots. The function of the daughtercard is to host three of the six input connectors and carry half of the input data onto the motherboard.
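The bit totals in Table 2 can be cross-checked from the quoted field widths; the sketch below reproduces the 1944-bit e/γ cable total, the 6264-bit input total handled by the input modules, and the 405-bit Global Trigger output.

# Cross-check of the Table 2 bit budgets from the quoted field widths.

egamma_in = 2 * 72 * (6 + 4)   # iso + non-iso candidates, 6 rank + 4 position
muon_bits = 504                # pattern bits staged through to the GMT
assert egamma_in + muon_bits == 1944      # total on the 36 e/gamma cables

assert 1944 + 4320 == 6264                # adding the 72 energy cables

objects_out = 5 * 4 * (6 + 9)  # 5 sorted categories x 4 objects x 15 bits
sums_out = 3 * (12 + 1) + 6    # ET, HT, ETmiss magnitude, 6-bit direction
counts_out = 12 * 5            # twelve 5-bit jet counts
assert objects_out + sums_out + counts_out == 405   # matches the GT output
print(egamma_in + muon_bits, objects_out + sums_out + counts_out)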

The IM has to synchronise the incoming data to the local GCT clock and to compensate for any timing skew between different data bits. This is performed using FPGAs, which also incorporate some routing of data to the output links. For the e/γ data, the IM packs the 24 candidates from six cables into eight links. The accompanying muon pattern bits are sent to the GMT separately, using the remaining four links. The full data for each class of e/γ candidate arrive on 18 cables from the RCT. Three IMs receive these data, and forward them on 24 links to a single processor module for each e/γ class. For the region energy data, the IM forwards the data on each cable to the processor modules using two links, with no reformatting. The 72 cables of region energy data require 12 IMs, and the 144 links are sent to six processor modules.

Figure 4: The input module (ECL line receivers, synchronisation/routing FPGAs, serialisers, clock and control, and the 12 output links to the processor modules/Global Trigger)

4.3. Processor modules

All trigger processing functions in the GCT are performed by a single design of generic Trigger Processor Module (TPM) [8]. The TPM layout is shown in cartoon form in Figure 5. Its physical format is a single-width, 9U 400 mm module. The processing is performed by four large (minimum 3 Mgate) FPGAs arranged in a 3+1 configuration. The three Proc A devices receive the input and perform the first stage of processing. Subsequent stages are handled by the Proc B device. The input from the IMs, and the Global Trigger output, are both via the front panel on serial links. Higher-speed, local links operate between processors within the crate over a configurable cable backplane. A total of 12 additional, smaller FPGAs are used for control of the link hardware and also take care of routing of data between the processing FPGAs and the links. Data arriving from the RCT and IMs through the front panel can also be routed directly to the fast backplane links for low-latency sharing of data between modules. The input and backplane link FPGAs also incorporate latency buffers to capture data at various stages for DAQ and local GCT diagnostic purposes, while another FPGA handles control functions on the module.
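A quick budget check shows why eight links suffice for the e/γ packing just described: each link carries 32 bits per bx, and each candidate needs 10 bits (6 rank, 4 position, as in Table 2). The sketch below is a sanity check under those figures.

# Link budget for the e/gamma path through an input module (Section 4.2).

LINK_BITS_PER_BX = 32                    # capacity of one IM output link
needed = 24 * (6 + 4)                    # 24 candidates x 10 bits = 240 bits
links = -(-needed // LINK_BITS_PER_BX)   # ceiling division
assert links == 8                        # the eight links quoted above

# The region energy path is simpler: each 64-bit cable maps onto two
# 32-bit links with no reformatting.
assert 64 == 2 * LINK_BITS_PER_BX
print(needed, links)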

Figure 5: The processor module (three Proc A devices and one Proc B device, input/output block, clock and control; two groups of 12 links from the input modules, 6 links to the Global Trigger, and 20 Gb/s and 40 Gb/s duplex i/o between processing stages)

4.4. Data links

This section describes the link technologies used in the GCT. More detail can be found in the accompanying note [3], which also collects results from bit-error-rate measurements and other testing of the different technologies.

Input data from RCT

The data transmission from the RCT to the GCT has been specified in CMS IN-2001/017. The data are sent from the RCT in differential ECL over 34-pair universal SCSI cable from Madison Cable. The GCT input connectors used are standard SCSI-2. Tests have shown that a cable gauge of 30 AWG gives acceptably low bit-error rates. The data are received by Texas Instruments SN65LVDS352 line receiver chips and input to Xilinx Spartan-IIE FPGAs. These devices perform the synchronisation and send the data onwards via serial links to the TPMs.

TPM front panel links

The links from the input to the processor modules, and the output to the Global Trigger, GMT and DAQ, use the National Semiconductor DS92LV16 serialiser/deserialiser (serdes) device. The serialiser circuit accepts 16 bits of parallel data at 80 MHz and sends 18 bits serially in differential LVDS at 1.44 Gbit/s. Since only 16 of the 18 bits contain trigger data, the effective transfer rate of one link is 1.28 Gbit/s. At the destination this bit stream is received by a deserialiser, which recovers the 80 MHz parallel data. The control of the devices is performed by FPGAs, of the Spartan-IIE family on the IM end and the Virtex-II on the TPM.

For transmission of the serial data we will use Infiniband connectors and halogen-free Skewclear cable from Amphenol. Infiniband connectors are available in two sizes: the 1X connector can accommodate two differential pairs, and the 4X eight. To allow flexibility in the IM-to-TPM connections, and to allow the necessary connections to be made both for the electron and jet processing, we will use hydra cable assemblies consisting of a single 4X connector at the TPM end, with four 1X connectors at the IM end. The input to the GT, GMT and DAQ will arrive on 1X connectors.

Cable backplane links

For communication between modules within the processor crate we use faster serial links than the DS92LV16, to achieve a higher density of information. The device selected is the Vitesse VSC7226 serdes. This contains four transmit and four receive circuits, which convert between 8-bit parallel data and 10-bit serial streams. The parallel data will be clocked at 320 MHz, so that the signals on the cable backplane are 3.2 Gbit/s differential LVPECL, although the effective data rate is 2.56 Gbit/s. The TPM has six Vitesse devices, each controlled by a Virtex-II FPGA, giving 24 input and 24 output links and an aggregate bandwidth of over 60 Gbit/s in each direction. Two input and two output links are reserved for the exchange of control information, with the remainder available for algorithm data.

The cable backplane uses the VHDM-HSD connectors from Teradyne, which allow a signal density of two pairs per 2 mm of card edge. The signals are transmitted between modules over Gore Eye-Opener 6 cables, which are purpose-designed to mate with the VHDM-HSD connectors. Each cable contains two active signal pairs and can be used either to transfer data in both directions at 2.56 Gbit/s, or in one direction at twice this rate. Of the 24 cable positions on each processor module, ten are used for data transfer in one direction (five input, five output) and the remainder for bi-directional data.

Communication between processing blocks on the TPM

The Virtex-II family of FPGAs will be used for the four processors on the TPM as well as for the control of the front panel and backplane links. These devices offer significant advantages over the earlier families of Virtex FPGAs on which much of our algorithm development work has been undertaken. These advantages include more flexible clocking mechanisms as well as improved support for high-speed, low-swing input/output standards. They can support both LVTTL, needed for communication with the DS92LV16s, and SSTL2 for the Vitesse serdes. The 320 MHz parallel transmission rate can be achieved using the dual data rate (DDR) registers built into the Virtex-II i/o blocks, with a 160 MHz on-chip clock. For communication between the FPGAs on a processor module, the SSTL2/I signalling standard will be used with a signal rate of 160 MHz. The four algorithm FPGAs each have over 300 i/o pins in use, giving a data rate in excess of 50 Gbit/s.
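The link rates quoted in this section follow directly from the serdes frame structures; the short sketch below reproduces them, and the backplane aggregate, from the parallel widths and clock rates given above.

# Line and payload rates for the two serdes families used in the GCT.

def rates(payload_bits: int, frame_bits: int, clock_mhz: float):
    """Return (line rate, payload rate) in Gbit/s for one serial link."""
    return frame_bits * clock_mhz / 1e3, payload_bits * clock_mhz / 1e3

# DS92LV16 front-panel links: 16 data bits in an 18-bit frame at 80 MHz.
print(rates(16, 18, 80))        # (1.44, 1.28) Gbit/s, as quoted above

# VSC7226 backplane links: 8-bit payload in 10-bit frames at 320 MHz.
line, payload = rates(8, 10, 320)
print(line, payload)            # 3.2 and 2.56 Gbit/s

# 24 links per direction give the "over 60 Gbit/s" aggregate quoted above.
print(24 * payload)             # 61.44 Gbit/s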

4.5. System hardware layout

A total of nine TPMs is required to perform the GCT functions specified in Section 3. Two of these take care of the sorting of isolated and non-isolated e/γ candidates. Each e/γ TPM receives data from three IMs and provides the clock and control for these. A further six modules process the region energy data, including jet object recognition and the generation of jet and energy trigger data for the GT as described in Section 3.2. Each jet/energy TPM receives data from two IMs. The ninth TPM handles the interface to the data acquisition, and the accumulation of summaries of activity indicators for the luminosity calculation. This module is also used to distribute control information to the other eight.

Figure 6: Module layout and front panel connections (the two input crates feed the processor crate via 1.44 Gb/s serial links; the TPM crate houses the DAQ module and the Cjet0-2, Fjet0-2, elec and isoe processing modules)

The layout of modules in the GCT is illustrated in Figure 6. On the left we show the two input crates and the processor crate, with the front panel connections carrying the input data to the processors. The right-hand view shows the processor crate with the processing function for each of the nine modules indicated. The two rightmost TPMs perform the non-isolated and isolated e/γ sort. The next six perform the jet and energy processing, and are labelled alternately Cjet and Fjet, for central and forward jets. The luminosity, data acquisition and control module is on the left. The system clock and control interfaces, hosted in a separate Communications Module on the left of the processor crate, are not shown here.

The two input module crates are 6U high by 160 mm deep. They have only a simple backplane that distributes 48 V power and JTAG signals to alternate slots. Both the power and the JTAG are distributed from the processor crate. The processor module crate is 9U high, 400 mm deep. The top part of the backplane carries reduced, non-standard VME together with 48 V power and JTAG. The remainder is the cable backplane, which covers 18 of the 21 slots, allowing for additional trigger functionality to be implemented if required using GCT technology. The remaining slots house the Communications Module and a standard VME controller. The crate backplanes are illustrated in Figure 7 and specified in more detail in [9].

Figure 7: The processor crate control backplane (Metral signal and power connectors), the processor crate cable backplane (Teradyne VHDM-HSD connectors) and the input crate backplane

4.6. Processing layout

In this section we illustrate in more detail the processing tasks performed by the GCT, and the allocation of these tasks to FPGAs in the nine modules. The full specification for this layout is contained in [1]. Here we consider separately the e/γ and the jet/energy processing.

Electron sorting

The processing for each of the two categories of e/γ candidate takes place on a single module. Figure 8 shows the processing flows. The 72 candidates input from the RCT arrive on 18 cables and are received by three input modules. Eight serial links from each module transmit the data to the TPM. Four hydra cable assemblies are used to carry the data. Each has eight differential signal pairs, of which six are used for data and two to transmit clock and control information between the TPM and the input modules. On the TPM, 24 candidates are input to each of the three Proc A devices. These perform the first stages of the sort algorithm and the Proc B finds the final four output objects.

Figure 8: Data flow from input to processor module for e/γ objects (four electrons per RCT cable; 24 electrons into each Proc A device; the Proc B device selects the 4 final electrons sent to the GT)

Jet and energy processing

The jet and energy processing is illustrated in Figure 9. Here each TPM communicates with two input modules. The first two cartoons show a central jet and a forward jet processor module. The Proc A devices on these modules perform the jet object recognition. Around 3000 bits of data per bx are exchanged between the modules over the cable backplane for this part of the processing. On the right of the figure we show three modules executing the algorithm for the central η regions, and three covering the forward regions, with the energy information in the endcap regions 6-7 being transferred from the forward to the central modules. The Proc B FPGAs on the six modules are used to produce the sorted jets and activity trigger information for the GT, and a further 1275 data bits per bx are exchanged here. A total of 39 links on the cable backplane is used to achieve the communication between modules required for this processing.

4.7. GCT algorithms and firmware

Trigger algorithms are to be implemented in the GCT to perform object sorting, jet pattern recognition and energy summing [2]. The algorithms will be loaded into the Proc A and Proc B devices on the TPMs, with the transfer of data between processing stages being handled by the front panel and backplane routing FPGAs. A common firmware framework will be implemented on all FPGAs on the TPM to handle common functions. These functions include clock signal handling and distribution, and on-chip FIFO storage for data capture and playback.

Figure 9: Data flows in the jet and energy processing (central and forward jet input and processor modules, with raw jets, forward/central jet exchange, φ- and η-neighbour data and final jets/energy to the GT passed over the cable backplane)

In this section we describe briefly the framework and the various algorithms under development. Figure 10 shows the components of the firmware loaded into every FPGA. The ALGO processing block implements the required trigger functionality. Virtex-II Digital Clock Manager (DCM) components are used to generate clocks and enables. The fundamental clock frequency for the processing is 160 MHz. The data routing layer insulates the ALGO block from the details of the device implementation and pin-out. Buffering is provided for DAQ and test purposes. A simple bus for DAQ and control visits all FPGAs on the TPM. Timing and control issues are discussed further in Section 4.8.

Figure 10: Structure of the firmware loaded into each processing FPGA, showing the trigger processing block ALGO and other components

In Figure 11(a) we show the algorithms required in the e/γ sort tree, while Figure 11(b) shows the jet and energy processing structure. In each case the data enter from the left and are synchronised to the local GCT clock in the IMs. The remaining processing takes place in the Proc A and Proc B FPGAs within the TPMs. The three main types of algorithm required are for object sorting, jet finding, and energy summing. We discuss each of these in turn.

Object sorting

In both the e/γ and jet processing we require Sort blocks that find the four highest-ranked objects from some input. The number of input objects is either 24 or 12. The data format for all objects is standardised, with six bits for the object rank or transverse energy, plus additional bits to encode the (φ,η) position of the object. The Sort block expands the position information as it goes. The fastest way to perform a sort is to compare all pairs of inputs in parallel and use priority-encoding logic to select the highest-ranked candidates. At earlier stages in the GCT development the amount of logic required for this appeared prohibitive, and so we developed an alternative algorithm that significantly reduces the number of comparison operations required, while maintaining very low latency. The key step here is to arrange the incoming objects into groups of four, and pre-sort the objects in each group. Then the known rank ordering within the groups means that many of the comparisons that would otherwise be needed are redundant. Within the tree-structured sort operations needed for the GCT, the pre-sort has to be performed only at the first stage, since the input to subsequent stages is already sorted. Further, the pre-sorting of groups of four objects fits well with the data transmission between FPGAs at 160 MHz, four times the beam-crossing frequency. The objects arrive sequentially at the ALGO input and can be processed one at a time, leading to a very efficient use of logic. (A behavioural sketch of this scheme is given below.)

Figure 11: Trigger processing algorithms: (a) e/γ sort tree, showing input synchronisation/routing, pre-sort and the 24-to-4 and final 12-to-4 sort stages; (b) jet and energy processing

Jet finding

There are two jetfind ALGO blocks required, for central (barrel and endcap) and forward jets. Each implements the full 3×3 sliding-window jet cluster. The two processes differ in the number of calorimeter (φ,η) regions covered, and also in the data representations from the central and forward calorimeters. The operations required are additions, to find the total energy in a 3×3-region window, and comparisons of neighbouring region energies to establish the local maximum condition. The addition and comparison operations proceed in parallel. They are followed by a final processing stage that selects the identified jets and assigns each object a rank, based on its raw total E_T and on η, and a position. A processing architecture has been developed that achieves low latency while making efficient use of resources through the re-use of logic elements for different functions during the 25 ns beam-crossing clock cycle.
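The pre-sort scheme described under Object sorting can be captured in a few lines. The sketch below is a behavioural model, not the firmware: groups of four are pre-sorted, after which only the group heads ever need comparing. Breaking rank ties by input position is our reading of the "simple priority algorithm" of Section 3.1.

# Behavioural model of the GCT sort tree: pre-sort each group of four, then
# repeatedly take the best remaining group head. Because every group is
# already ordered, only the heads need comparing, which is the saving the
# text describes. Ties are broken by position (lower index wins).

import random

def presort(group):
    """Order one group of four (rank, position) objects, best first."""
    return sorted(group, key=lambda obj: (-obj[0], obj[1]))

def top_four(objects):
    """Return the four highest-ranked of 24 (or 12) (rank, position) pairs."""
    groups = [presort(objects[i:i + 4]) for i in range(0, len(objects), 4)]
    winners = []
    for _ in range(4):
        best = max(groups, key=lambda g: (g[0][0], -g[0][1]) if g else (-1, 0))
        winners.append(best.pop(0))
    return winners

# 24 candidates in random order, as delivered by three input modules.
cands = [(random.randrange(64), pos) for pos in range(24)]
print(top_four(cands))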
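The 3×3 sliding-window search performed by the jetfind blocks is summarised by the second sketch. It is deliberately simplified: φ wraps around the 18 divisions while η does not, and the local-maximum test is applied directly, without the equal-energy tie-break logic noted in Section 3.2.

# Simplified behavioural model of the 3x3 sliding-window jet finder.
# regions[eta][phi] holds one region ET; phi wraps (18 divisions), eta does
# not. The equal-ET tie-break of the real algorithm is omitted here.

N_PHI = 18

def find_jets(regions):
    """Return (jet_et, eta, phi) where a region exceeds all 8 neighbours."""
    n_eta = len(regions)
    jets = []
    for eta in range(n_eta):
        for phi in range(N_PHI):
            centre = regions[eta][phi]
            window = [(e, p % N_PHI)
                      for e in range(eta - 1, eta + 2) if 0 <= e < n_eta
                      for p in range(phi - 1, phi + 2)]
            if all(centre > regions[e][p] for e, p in window
                   if (e, p) != (eta, phi)):
                # The jet ET is the sum over the full window (Section 3.2).
                jets.append((sum(regions[e][p] for e, p in window), eta, phi))
    return jets

# Toy 6 x 18 region map with one hot region at (2, 5) and a soft neighbour.
toy = [[0] * N_PHI for _ in range(6)]
toy[2][5], toy[2][6] = 40, 10
print(find_jets(toy))   # [(50, 2, 5)]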

Energy summing

In addition to jet objects for sorting, the two jetfind ALGO blocks produce input for the energy and activity triggers. This consists of φ-strip sums of E_T, together with H_T and jet count primitives. The majority of the processing to be performed on these data consists of summing trees; each of the 18 jetfind blocks on the six Cjet and Fjet processor modules produces essentially the same primitives, and these are summed in two stages of logic. The φ-strip E_T sums are also used as input to the missing E_T calculation. Here the conversion to E_x and E_y for each strip is implemented as fast multiply operations. Techniques have also been developed to handle the transformation from total E_x, E_y to a magnitude and direction using multiply and compare arithmetic with acceptable latency.

4.8. GCT system issues

In this section we discuss aspects of the GCT design not directly concerned with the trigger processing function. These include clock signal and fast control distribution; fast monitoring and status reporting; data buffering and collection for DAQ; system control, initialisation and test sequences; and monitoring and error handling. We describe the strategies used for control, configuration, monitoring and DAQ functions in the GCT system, and the hardware and software used to support them. We also discuss the interfaces to systems outside the Level-1 trigger: the TTC system, the DAQ and the central CMS control software. The accompanying documents provide more details on timing, synchronisation and fast signals [5], and on other aspects of control, test and monitoring [4].

Overview

From the control point of view, the GCT consists of a collection of independent IMs and TPMs. The modules are designed to be assembled into a system with many of the control interfaces passing through a single TPM. The DAQ TPM is used to distribute and collect control information to all other TPMs, via serial links on the cable backplane. During normal operation, these links will be used to carry the DAQ traffic together with synchronous control and status signals. In the TPM crate, additional paths are provided for control via VMEbus and via JTAG.

On each TPM there is a single control FPGA that performs locally a similar function to that of the DAQ TPM in the system as a whole; that is, the control FPGA distributes and collects control to all other FPGAs on the board. The control path is extended from the TPM to its associated IMs via serial links. The system-wide JTAG bus is also extended to the IM crates.

The hardware interfaces to the TTC and the DAQ systems, as defined centrally within CMS, are housed on a purpose-built Communications Module (CM). This module fans out the 40 MHz system clock to all TPMs, and handles all other required functionality via information exchange with the DAQ TPM. An FPGA on the CM provides VME and JTAG interfaces and minimal setup and control for the other devices on the board.

The TPM crate also accommodates a VME controller to allow the GCT to be controlled either from the CMS detector control system or from test software; the GCT control software therefore forms part of a larger software system, and will largely take the form of modules and drivers used in conjunction with other software packages. The hardware and software systems for control and monitoring have to support system and module configuration; low-level access to registers and memories; self-test of system components; collection of event data for DAQ; spy and playback functions for monitoring; and checking and adjustment of operating parameters such as link synchronisation delays. In the following sections, we first describe the interfaces and components that make up the control and monitoring hardware. We then describe their implementation within the GCT modules, and finally describe the overall strategies for control and monitoring.

Control system components

This section summarises each of the major components of the GCT control and monitoring hardware.

System control buses

The TPM crate backplane includes an A24/D16 VME-compatible bus. The physical implementation of the bus is non-standard, in that a more compact connector type is used, and only those signal lines required for a restricted subset of VME modes are bussed. The TPMs incorporate VME slave functionality, and support the VME64 extensions recommended for all CMS VMEbus systems. The crate is designed to accommodate a standard 6U VME controller in slot 1.

Both the TPM and the IM crates incorporate JTAG backplanes. Each board on the bus may be individually selected for debugging. Any TPM in the processor crate may act as JTAG controller, though this function is usually allocated to the DAQ TPM. The JTAG backplane in each of the input crates is driven by a TPM via a cable link. In addition to the standard JTAG signals, the JTAG backplane also incorporates a hard reset signal and several general-purpose signals. The hard reset signal may be driven by the TTCrx, allowing the system to be returned to a known state.

In addition to providing a path for trigger data, the serial backplane is used to pass control information between the processor modules. A single duplex backplane link connects each board to the DAQ TPM. DAQ data and fast monitoring bits pass from each board to the DAQ system while, in the reverse direction, synchronous control bits from the TTC are forwarded to each TPM.

Board-level control and configuration

On the TPM, all data paths for control, monitoring and configuration functions pass through the control FPGA. The FPGA firmware includes a CPU core, the VME interface, and controllers for the other devices on the board. A synchronous bus interface is provided to the processing and link control FPGAs. Communication with the Vitesse links and environmental monitoring devices is via serial buses. For external communication, the control FPGA has exclusive use of two out of the 24 serial backplane links. Additional general-purpose i/o lines are provided to allow information exchange either via the backplane bus or through a front-panel connector.

The FPGA configuration data for each TPM are held on a removable 256 Mbit CompactFlash card. FPGA programming is carried out by a Xilinx SystemACE controller using the TPM JTAG chain. The control FPGA is connected to the processor interface port of the SystemACE. This means that the Flash card contents are accessible either to the board control software, running in the CPU core, or to the VME interface. The much simpler control of the IMs is managed from the associated TPM, via an extension of the synchronous control bus through a dedicated serial link connection. For configuration, the FPGA programs are held within a Xilinx reprogrammable PROM device. The PROM may be reprogrammed via JTAG at any time.

Each module, TPM or IM, contains a single JTAG chain that includes all JTAG-enabled devices on that board and may be addressed via a Test Access Controller device connected to the backplane. The JTAG chain is used both for board-level connectivity tests and for configuration of programmable devices. On the TPM, the JTAG chain may be driven by any one of several devices: the backplane, a JTAG header used in standalone testing, or the SystemACE controller. The SystemACE is responsible for arbitrating control of the JTAG chain, and may itself be addressed via JTAG.

External interfaces

The interface to the CMS TTC system is provided by a single TTCrx ASIC and optical receiver. The interface to the CMS DAQ system is provided by a single SLINK-64 daughtercard. Both interfaces are accommodated on the Communications Module. Control signals from the TTC system are sent to the DAQ concentrator TPM via a front-panel serial link. DAQ data are formatted by the DAQ TPM and transferred to the CM via front-panel serial links. Fast monitoring signals describing the GCT hardware status are calculated on the DAQ TPM, and sent directly from the front-panel GPIO connector to the Fast Monitoring Module.

Timing and fast signals

TTC interface

In common with all other components of the CMS trigger, the basic timing reference for the GCT is given by the CMS TTC system. This provides a stable 40 MHz clock signal, along with synchronous control information (L1A, etc.) generated by the trigger control system (TCS). In addition, it provides a BX0 signal that has a known timing relationship to the LHC orbit structure. The GCT contains a single TTC optical receiver and TTCrx decoder ASIC, housed on the CM.

Clock distribution

The TTCrx clock signal is cleaned up, if necessary, on the CM using the QPLL or other appropriate device, and fanned out for transmission to the TPMs. Each TPM receives a 40 MHz clock signal from the CM on a point-to-point cable link. These clock links are transmitted using the Teradyne connector/Gore cable combination, and are contained within the cable backplane enclosure.

Each TPM contains a balanced clock tree, which supplies all ICs on the board with a similarly phased clock. The number of clock destinations is such that a two-level fanout tree must be used. A 40 MHz clock is sent to all FPGAs on the module; the serdes devices require an 80 MHz frequency. Since the TPM clock signals must be


More information

S.Cenk Yıldız on behalf of ATLAS Muon Collaboration. Topical Workshop on Electronics for Particle Physics, 28 September - 2 October 2015

S.Cenk Yıldız on behalf of ATLAS Muon Collaboration. Topical Workshop on Electronics for Particle Physics, 28 September - 2 October 2015 THE ATLAS CATHODE STRIP CHAMBERS A NEW ATLAS MUON CSC READOUT SYSTEM WITH SYSTEM ON CHIP TECHNOLOGY ON ATCA PLATFORM S.Cenk Yıldız on behalf of ATLAS Muon Collaboration University of California, Irvine

More information

FPGA Design. Part I - Hardware Components. Thomas Lenzi

FPGA Design. Part I - Hardware Components. Thomas Lenzi FPGA Design Part I - Hardware Components Thomas Lenzi Approach We believe that having knowledge of the hardware components that compose an FPGA allow for better firmware design. Being able to visualise

More information

Logic Devices for Interfacing, The 8085 MPU Lecture 4

Logic Devices for Interfacing, The 8085 MPU Lecture 4 Logic Devices for Interfacing, The 8085 MPU Lecture 4 1 Logic Devices for Interfacing Tri-State devices Buffer Bidirectional Buffer Decoder Encoder D Flip Flop :Latch and Clocked 2 Tri-state Logic Outputs

More information

CMS Tracker Synchronization

CMS Tracker Synchronization CMS Tracker Synchronization K. Gill CERN EP/CME B. Trocme, L. Mirabito Institut de Physique Nucleaire de Lyon Outline Timing issues in CMS Tracker Synchronization method Relative synchronization Synchronization

More information

University of Oxford Department of Physics. Interim Report

University of Oxford Department of Physics. Interim Report University of Oxford Department of Physics Interim Report Project Name: Project Code: Group: Version: Atlas Binary Chip (ABC ) NP-ATL-ROD-ABCDEC1 ATLAS DRAFT Date: 04 February 1998 Distribution List: A.

More information

An FPGA based Topological Processor Prototype for the ATLAS Level-1 Trigger Upgrade

An FPGA based Topological Processor Prototype for the ATLAS Level-1 Trigger Upgrade Preprint typeset in JINST style - HYPER VERSION An FPGA based Topological Processor Prototype for the ATLAS Level-1 Trigger Upgrade Bruno Bauss, Volker Büscher, Reinhold Degele, Weina Ji, Sebastian Moritz,

More information

Racks, Cabling and Latency

Racks, Cabling and Latency Racks, Cabling and Latency Murrough Landon 2 November 2000 Overview Rack Layout Cabling paths Latency estimates Outstanding issues "! #%$"! & % &(' Racks Layout Original Requirements Minimise the overall

More information

Advanced Training Course on FPGA Design and VHDL for Hardware Simulation and Synthesis. 26 October - 20 November, 2009

Advanced Training Course on FPGA Design and VHDL for Hardware Simulation and Synthesis. 26 October - 20 November, 2009 2065-28 Advanced Training Course on FPGA Design and VHDL for Hardware Simulation and Synthesis 26 October - 20 November, 2009 Starting to make an FPGA Project Alexander Kluge PH ESE FE Division CERN 385,

More information

A pixel chip for tracking in ALICE and particle identification in LHCb

A pixel chip for tracking in ALICE and particle identification in LHCb A pixel chip for tracking in ALICE and particle identification in LHCb K.Wyllie 1), M.Burns 1), M.Campbell 1), E.Cantatore 1), V.Cencelli 2) R.Dinapoli 3), F.Formenti 1), T.Grassi 1), E.Heijne 1), P.Jarron

More information

Data Quality Monitoring in the ATLAS Inner Detector

Data Quality Monitoring in the ATLAS Inner Detector On behalf of the ATLAS collaboration Cavendish Laboratory, University of Cambridge E-mail: white@hep.phy.cam.ac.uk This article describes the data quality monitoring systems of the ATLAS inner detector.

More information

Data Converters and DSPs Getting Closer to Sensors

Data Converters and DSPs Getting Closer to Sensors Data Converters and DSPs Getting Closer to Sensors As the data converters used in military applications must operate faster and at greater resolution, the digital domain is moving closer to the antenna/sensor

More information

GALILEO Timing Receiver

GALILEO Timing Receiver GALILEO Timing Receiver The Space Technology GALILEO Timing Receiver is a triple carrier single channel high tracking performances Navigation receiver, specialized for Time and Frequency transfer application.

More information

Optical Link Evaluation Board for the CSC Muon Trigger at CMS

Optical Link Evaluation Board for the CSC Muon Trigger at CMS Optical Link Evaluation Board for the CSC Muon Trigger at CMS 04/04/2001 User s Manual Rice University, Houston, TX 77005 USA Abstract The main goal of the design was to evaluate a data link based on Texas

More information

Short summary of ATLAS Japan Group for LHC/ATLAS upgrade review Liquid Argon Calorimeter

Short summary of ATLAS Japan Group for LHC/ATLAS upgrade review Liquid Argon Calorimeter Preprint typeset in JINST style - HYPER VERSION Short summary of ATLAS Japan Group for LHC/ATLAS upgrade review Liquid Argon Calorimeter ATLAS Japan Group E-mail: Yuji.Enari@cern.ch ABSTRACT: Short summary

More information

SignalTap Plus System Analyzer

SignalTap Plus System Analyzer SignalTap Plus System Analyzer June 2000, ver. 1 Data Sheet Features Simultaneous internal programmable logic device (PLD) and external (board-level) logic analysis 32-channel external logic analyzer 166

More information

SingMai Electronics SM06. Advanced Composite Video Interface: HD-SDI to acvi converter module. User Manual. Revision 0.

SingMai Electronics SM06. Advanced Composite Video Interface: HD-SDI to acvi converter module. User Manual. Revision 0. SM06 Advanced Composite Video Interface: HD-SDI to acvi converter module User Manual Revision 0.4 1 st May 2017 Page 1 of 26 Revision History Date Revisions Version 17-07-2016 First Draft. 0.1 28-08-2016

More information

IPRD06 October 2nd, G. Cerminara on behalf of the CMS collaboration University and INFN Torino

IPRD06 October 2nd, G. Cerminara on behalf of the CMS collaboration University and INFN Torino IPRD06 October 2nd, 2006 The Drift Tube System of the CMS Experiment on behalf of the CMS collaboration University and INFN Torino Overview The CMS muon spectrometer and the Drift Tube (DT) system the

More information

Major Differences Between the DT9847 Series Modules

Major Differences Between the DT9847 Series Modules DT9847 Series Dynamic Signal Analyzer for USB With Low THD and Wide Dynamic Range The DT9847 Series are high-accuracy, dynamic signal acquisition modules designed for sound and vibration applications.

More information

EECS150 - Digital Design Lecture 12 - Video Interfacing. Recap and Outline

EECS150 - Digital Design Lecture 12 - Video Interfacing. Recap and Outline EECS150 - Digital Design Lecture 12 - Video Interfacing Oct. 8, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John

More information

The Pixel Trigger System for the ALICE experiment

The Pixel Trigger System for the ALICE experiment CERN, European Organization for Nuclear Research E-mail: gianluca.aglieri.rinella@cern.ch The ALICE Silicon Pixel Detector (SPD) data stream includes 1200 digital signals (Fast-OR) promptly asserted on

More information

BEMC electronics operation

BEMC electronics operation Appendix A BEMC electronics operation The tower phototubes are powered by CockroftWalton (CW) bases that are able to keep the high voltage up to a high precision. The bases are programmed through the serial

More information

ALICE Muon Trigger upgrade

ALICE Muon Trigger upgrade ALICE Muon Trigger upgrade Context RPC Detector Status Front-End Electronics Upgrade Readout Electronics Upgrade Conclusions and Perspectives Dr Pascal Dupieux, LPC Clermont, QGPF 2013 1 Context The Muon

More information

First LHC Beams in ATLAS. Peter Krieger University of Toronto On behalf of the ATLAS Collaboration

First LHC Beams in ATLAS. Peter Krieger University of Toronto On behalf of the ATLAS Collaboration First LHC Beams in ATLAS Peter Krieger University of Toronto On behalf of the ATLAS Collaboration Cutaway View LHC/ATLAS (Graphic) P. Krieger, University of Toronto Aspen Winter Conference, Feb. 2009 2

More information

Logic Analysis Basics

Logic Analysis Basics Logic Analysis Basics September 27, 2006 presented by: Alex Dickson Copyright 2003 Agilent Technologies, Inc. Introduction If you have ever asked yourself these questions: What is a logic analyzer? What

More information

FPGA Development for Radar, Radio-Astronomy and Communications

FPGA Development for Radar, Radio-Astronomy and Communications John-Philip Taylor Room 7.03, Department of Electrical Engineering, Menzies Building, University of Cape Town Cape Town, South Africa 7701 Tel: +27 82 354 6741 email: tyljoh010@myuct.ac.za Internet: http://www.uct.ac.za

More information

Field Programmable Gate Arrays (FPGAs)

Field Programmable Gate Arrays (FPGAs) Field Programmable Gate Arrays (FPGAs) Introduction Simulations and prototyping have been a very important part of the electronics industry since a very long time now. Before heading in for the actual

More information

Logic Analysis Basics

Logic Analysis Basics Logic Analysis Basics September 27, 2006 presented by: Alex Dickson Copyright 2003 Agilent Technologies, Inc. Introduction If you have ever asked yourself these questions: What is a logic analyzer? What

More information

DT9834 Series High-Performance Multifunction USB Data Acquisition Modules

DT9834 Series High-Performance Multifunction USB Data Acquisition Modules DT9834 Series High-Performance Multifunction USB Data Acquisition Modules DT9834 Series High Performance, Multifunction USB DAQ Key Features: Simultaneous subsystem operation on up to 32 analog input channels,

More information

Objectives. Combinational logics Sequential logics Finite state machine Arithmetic circuits Datapath

Objectives. Combinational logics Sequential logics Finite state machine Arithmetic circuits Datapath Objectives Combinational logics Sequential logics Finite state machine Arithmetic circuits Datapath In the previous chapters we have studied how to develop a specification from a given application, and

More information

Scalable, intelligent image processing board for highest requirements on image acquisition and processing over long distances by optical connection

Scalable, intelligent image processing board for highest requirements on image acquisition and processing over long distances by optical connection i Product Profile of Scalable, intelligent image processing board for highest requirements on image acquisition and processing over long distances by optical connection First Camera Link HS F2 Frame grabber

More information

KEK. Belle2Link. Belle2Link 1. S. Nishida. S. Nishida (KEK) Nov.. 26, Aerogel RICH Readout

KEK. Belle2Link. Belle2Link 1. S. Nishida. S. Nishida (KEK) Nov.. 26, Aerogel RICH Readout S. Nishida KEK Nov 26, 2010 1 Introduction (Front end electronics) ASIC (SA) Readout (Digital Part) HAPD (144ch) Preamp Shaper Comparator L1 buffer DAQ group Total ~ 500 HAPDs. ASIC: 36ch per chip (i.e.

More information

THE WaveDAQ SYSTEM FOR THE MEG II UPGRADE

THE WaveDAQ SYSTEM FOR THE MEG II UPGRADE Stefan Ritt, Paul Scherrer Institute, Switzerland Luca Galli, Fabio Morsani, Donato Nicolò, INFN Pisa, Italy THE WaveDAQ SYSTEM FOR THE MEG II UPGRADE DRS4 Chip 0.2-2 ns Inverter Domino ring chain IN Clock

More information

The Read-Out system of the ALICE pixel detector

The Read-Out system of the ALICE pixel detector The Read-Out system of the ALICE pixel detector Kluge, A. for the ALICE SPD collaboration CERN, CH-1211 Geneva 23, Switzerland Abstract The on-detector electronics of the ALICE silicon pixel detector (nearly

More information

Libera Hadron: demonstration at SPS (CERN)

Libera Hadron: demonstration at SPS (CERN) Creation date: 07.10.2011 Last modification: 14.10.2010 Libera Hadron: demonstration at SPS (CERN) Borut Baričevič, Matjaž Žnidarčič Introduction Libera Hadron has been demonstrated at CERN. The demonstration

More information

LHC Physics GRS PY 898 B8. Trigger Menus, Detector Commissioning

LHC Physics GRS PY 898 B8. Trigger Menus, Detector Commissioning LHC Physics GRS PY 898 B8 Lecture #5 Tulika Bose Trigger Menus, Detector Commissioning Trigger Menus Need to address the following questions: What to save permanently on mass storage? Which trigger streams

More information

White Paper Lower Costs in Broadcasting Applications With Integration Using FPGAs

White Paper Lower Costs in Broadcasting Applications With Integration Using FPGAs Introduction White Paper Lower Costs in Broadcasting Applications With Integration Using FPGAs In broadcasting production and delivery systems, digital video data is transported using one of two serial

More information

Development of beam-collision feedback systems for future lepton colliders. John Adams Institute for Accelerator Science, Oxford University

Development of beam-collision feedback systems for future lepton colliders. John Adams Institute for Accelerator Science, Oxford University Development of beam-collision feedback systems for future lepton colliders P.N. Burrows 1 John Adams Institute for Accelerator Science, Oxford University Denys Wilkinson Building, Keble Rd, Oxford, OX1

More information

A video signal processor for motioncompensated field-rate upconversion in consumer television

A video signal processor for motioncompensated field-rate upconversion in consumer television A video signal processor for motioncompensated field-rate upconversion in consumer television B. De Loore, P. Lippens, P. Eeckhout, H. Huijgen, A. Löning, B. McSweeney, M. Verstraelen, B. Pham, G. de Haan,

More information

FRONT-END AND READ-OUT ELECTRONICS FOR THE NUMEN FPD

FRONT-END AND READ-OUT ELECTRONICS FOR THE NUMEN FPD FRONT-END AND READ-OUT ELECTRONICS FOR THE NUMEN FPD D. LO PRESTI D. BONANNO, F. LONGHITANO, D. BONGIOVANNI, S. REITO INFN- SEZIONE DI CATANIA D. Lo Presti, NUMEN2015 LNS, 1-2 December 2015 1 OVERVIEW

More information

Laboratory 4. Figure 1: Serdes Transceiver

Laboratory 4. Figure 1: Serdes Transceiver Laboratory 4 The purpose of this laboratory exercise is to design a digital Serdes In the first part of the lab, you will design all the required subblocks for the digital Serdes and simulate them In part

More information

VIDEO GRABBER. DisplayPort. User Manual

VIDEO GRABBER. DisplayPort. User Manual VIDEO GRABBER DisplayPort User Manual Version Date Description Author 1.0 2016.03.02 New document MM 1.1 2016.11.02 Revised to match 1.5 device firmware version MM 1.2 2019.11.28 Drawings changes MM 2

More information

Digital Audio Broadcast Store and Forward System Technical Description

Digital Audio Broadcast Store and Forward System Technical Description Digital Audio Broadcast Store and Forward System Technical Description International Communications Products Inc. Including the DCM-970 Multiplexer, DCR-972 DigiCeiver, And the DCR-974 DigiCeiver Original

More information

1 Digital BPM Systems for Hadron Accelerators

1 Digital BPM Systems for Hadron Accelerators Digital BPM Systems for Hadron Accelerators Proton Synchrotron 26 GeV 200 m diameter 40 ES BPMs Built in 1959 Booster TT70 East hall CB Trajectory measurement: System architecture Inputs Principles of

More information

Front End Electronics

Front End Electronics CLAS12 Ring Imaging Cherenkov (RICH) Detector Mid-term Review Front End Electronics INFN - Ferrara Matteo Turisini 2015 October 13 th Overview Readout requirements Hardware design Electronics boards Integration

More information

A MISSILE INSTRUMENTATION ENCODER

A MISSILE INSTRUMENTATION ENCODER A MISSILE INSTRUMENTATION ENCODER Item Type text; Proceedings Authors CONN, RAYMOND; BREEDLOVE, PHILLIP Publisher International Foundation for Telemetering Journal International Telemetering Conference

More information

Figure 1: Feature Vector Sequence Generator block diagram.

Figure 1: Feature Vector Sequence Generator block diagram. 1 Introduction Figure 1: Feature Vector Sequence Generator block diagram. We propose designing a simple isolated word speech recognition system in Verilog. Our design is naturally divided into two modules.

More information

Front End Electronics

Front End Electronics CLAS12 Ring Imaging Cherenkov (RICH) Detector Mid-term Review Front End Electronics INFN - Ferrara Matteo Turisini 2015 October 13 th Overview Readout requirements Hardware design Electronics boards Integration

More information

Exercise 1-2. Digital Trunk Interface EXERCISE OBJECTIVE

Exercise 1-2. Digital Trunk Interface EXERCISE OBJECTIVE Exercise 1-2 Digital Trunk Interface EXERCISE OBJECTIVE When you have completed this exercise, you will be able to explain the role of the digital trunk interface in a central office. You will be familiar

More information

CONTROL OF THE LOW LEVEL RF SYSTEM OF THE LARGE HADRON COLLIDER

CONTROL OF THE LOW LEVEL RF SYSTEM OF THE LARGE HADRON COLLIDER 10th ICALEPCS Int. Conf. on Accelerator & Large Expt. Physics Control Systems. Geneva, 10-14 Oct 2005, PO1.028-1 (2005) CONTROL OF THE LOW LEVEL RF SYSTEM OF THE LARGE HADRON COLLIDER A. Butterworth 1,

More information

SingMai Electronics SM06. Advanced Composite Video Interface: DVI/HD-SDI to acvi converter module. User Manual. Revision th December 2016

SingMai Electronics SM06. Advanced Composite Video Interface: DVI/HD-SDI to acvi converter module. User Manual. Revision th December 2016 SM06 Advanced Composite Video Interface: DVI/HD-SDI to acvi converter module User Manual Revision 0.3 30 th December 2016 Page 1 of 23 Revision History Date Revisions Version 17-07-2016 First Draft. 0.1

More information

National Park Service Photo. Utah 400 Series 1. Digital Routing Switcher.

National Park Service Photo. Utah 400 Series 1. Digital Routing Switcher. National Park Service Photo Utah 400 Series 1 Digital Routing Switcher Utah Scientific has been involved in the design and manufacture of routing switchers for audio and video signals for over thirty years.

More information

CMS Note Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland

CMS Note Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland Available on CMS information server CMS NOTE 2007/000 The Compact Muon Solenoid Experiment CMS Note Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland DRAFT 23 Oct. 2007 The CMS Drift Tube Trigger

More information

EE178 Lecture Module 4. Eric Crabill SJSU / Xilinx Fall 2005

EE178 Lecture Module 4. Eric Crabill SJSU / Xilinx Fall 2005 EE178 Lecture Module 4 Eric Crabill SJSU / Xilinx Fall 2005 Lecture #9 Agenda Considerations for synchronizing signals. Clocks. Resets. Considerations for asynchronous inputs. Methods for crossing clock

More information

EECS150 - Digital Design Lecture 10 - Interfacing. Recap and Topics

EECS150 - Digital Design Lecture 10 - Interfacing. Recap and Topics EECS150 - Digital Design Lecture 10 - Interfacing Oct. 1, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)

More information

A Fast Constant Coefficient Multiplier for the XC6200

A Fast Constant Coefficient Multiplier for the XC6200 A Fast Constant Coefficient Multiplier for the XC6200 Tom Kean, Bernie New and Bob Slous Xilinx Inc. Abstract. We discuss the design of a high performance constant coefficient multiplier on the Xilinx

More information

Digilent Nexys-3 Cellular RAM Controller Reference Design Overview

Digilent Nexys-3 Cellular RAM Controller Reference Design Overview Digilent Nexys-3 Cellular RAM Controller Reference Design Overview General Overview This document describes a reference design of the Cellular RAM (or PSRAM Pseudo Static RAM) controller for the Digilent

More information

How to overcome/avoid High Frequency Effects on Debug Interfaces Trace Port Design Guidelines

How to overcome/avoid High Frequency Effects on Debug Interfaces Trace Port Design Guidelines How to overcome/avoid High Frequency Effects on Debug Interfaces Trace Port Design Guidelines An On-Chip Debugger/Analyzer (OCD) like isystem s ic5000 (Figure 1) acts as a link to the target hardware by

More information

ADF-2 Production Readiness Review

ADF-2 Production Readiness Review ADF-2 Production Readiness Review Presented by D. Edmunds 11-FEB-2005 The ADF-2 circuit board is part of the new Run IIB Level 1 Calorimeter Trigger. The purpose of this note is to provide the ADF-2 Production

More information

FPGA Design with VHDL

FPGA Design with VHDL FPGA Design with VHDL Justus-Liebig-Universität Gießen, II. Physikalisches Institut Ming Liu Dr. Sören Lange Prof. Dr. Wolfgang Kühn ming.liu@physik.uni-giessen.de Lecture Digital design basics Basic logic

More information

A Terabyte Linear Tape Recorder

A Terabyte Linear Tape Recorder A Terabyte Linear Tape Recorder John C. Webber Interferometrics Inc. 8150 Leesburg Pike Vienna, VA 22182 +1-703-790-8500 webber@interf.com A plan has been formulated and selected for a NASA Phase II SBIR

More information

TransitHound Cellphone Detector User Manual Version 1.3

TransitHound Cellphone Detector User Manual Version 1.3 TransitHound Cellphone Detector User Manual Version 1.3 RF3 RF2 Table of Contents Introduction...3 PC Requirements...3 Unit Description...3 Electrical Interfaces...4 Interface Cable...5 USB to Serial Interface

More information

BUSES IN COMPUTER ARCHITECTURE

BUSES IN COMPUTER ARCHITECTURE BUSES IN COMPUTER ARCHITECTURE The processor, main memory, and I/O devices can be interconnected by means of a common bus whose primary function is to provide a communication path for the transfer of data.

More information

1ms Column Parallel Vision System and It's Application of High Speed Target Tracking

1ms Column Parallel Vision System and It's Application of High Speed Target Tracking Proceedings of the 2(X)0 IEEE International Conference on Robotics & Automation San Francisco, CA April 2000 1ms Column Parallel Vision System and It's Application of High Speed Target Tracking Y. Nakabo,

More information

Contents Circuits... 1

Contents Circuits... 1 Contents Circuits... 1 Categories of Circuits... 1 Description of the operations of circuits... 2 Classification of Combinational Logic... 2 1. Adder... 3 2. Decoder:... 3 Memory Address Decoder... 5 Encoder...

More information

arxiv: v1 [physics.ins-det] 1 Nov 2015

arxiv: v1 [physics.ins-det] 1 Nov 2015 DPF2015-288 November 3, 2015 The CMS Beam Halo Monitor Detector System arxiv:1511.00264v1 [physics.ins-det] 1 Nov 2015 Kelly Stifter On behalf of the CMS collaboration University of Minnesota, Minneapolis,

More information

Radar Signal Processing Final Report Spring Semester 2017

Radar Signal Processing Final Report Spring Semester 2017 Radar Signal Processing Final Report Spring Semester 2017 Full report report by Brian Larson Other team members, Grad Students: Mohit Kumar, Shashank Joshil Department of Electrical and Computer Engineering

More information

EE178 Spring 2018 Lecture Module 5. Eric Crabill

EE178 Spring 2018 Lecture Module 5. Eric Crabill EE178 Spring 2018 Lecture Module 5 Eric Crabill Goals Considerations for synchronizing signals Clocks Resets Considerations for asynchronous inputs Methods for crossing clock domains Clocks The academic

More information

HIGH SPEED ASYNCHRONOUS DATA MULTIPLEXER/ DEMULTIPLEXER FOR HIGH DENSITY DIGITAL RECORDERS

HIGH SPEED ASYNCHRONOUS DATA MULTIPLEXER/ DEMULTIPLEXER FOR HIGH DENSITY DIGITAL RECORDERS HIGH SPEED ASYNCHRONOUS DATA MULTIPLEXER/ DEMULTIPLEXER FOR HIGH DENSITY DIGITAL RECORDERS Mr. Albert Berdugo Mr. Martin Small Aydin Vector Division Calculex, Inc. 47 Friends Lane P.O. Box 339 Newtown,

More information

Update on DAQ for 12 GeV Hall C

Update on DAQ for 12 GeV Hall C Update on DAQ for 12 GeV Hall C Brad Sawatzky Hall C Winter User Group Meeting Jan 20, 2017 SHMS/HMS Trigger/Electronics H. Fenker 2 SHMS / HMS Triggers SCIN = 3/4 hodoscope planes CER = Cerenkov(s) STOF

More information

The CMS Drift Tube Trigger Track Finder

The CMS Drift Tube Trigger Track Finder Preprint typeset in JINST style - HYPER VERSION The CMS Drift Tube Trigger Track Finder J. Erö, Ch. Deldicque, M. Galánthay, H. Bergauer, M. Jeitler, K. Kastner, B. Neuherz, I. Mikulec, M. Padrta, H. Rohringer,

More information