Data Quality Monitoring in the ATLAS Inner Detector

On behalf of the ATLAS collaboration
Cavendish Laboratory, University of Cambridge
E-mail: white@hep.phy.cam.ac.uk

This article describes the data quality monitoring systems of the ATLAS inner detector. A brief description of the ATLAS detector is provided along with a summary of the software frameworks available for developers. There follows a tour of the packages developed by each subsystem of the inner detector, and a discussion of recent tests.

Vertex 2008, 17th International Workshop on Vertex detectors
Utö Island, Sweden
July 28 - August 1, 2008

Speaker.

© Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Licence. http://pos.sissa.it/

1. Introduction

The Large Hadron Collider (LHC) at CERN, Geneva, is the largest particle accelerator and collider in the world. Protons are accelerated to an energy of 7 TeV and collide head on at four points around the 27 km circumference tunnel previously used by the LEP electron-positron collider. At each of the four collision points, activity is monitored by one of the four primary LHC experiments: ATLAS, CMS, LHCb and ALICE. The largest of these, ATLAS and CMS, are specifically designed to search for new physics beyond the Standard Model, and each is a complex combination of subsystems designed to measure the momentum and energy of the particles produced in collisions. The LHC final design luminosity of 1 × 10^34 cm^-2 s^-1 leads to immense challenges in event selection and offline data storage, in addition to creating an extremely harsh radiation environment for the detectors themselves.

In such complex conditions, it is essential to develop accurate monitoring systems in order to calibrate and commission each detector and to track their performance over time. Furthermore, since detector problems can easily fake new physics signatures, it is vital to have foolproof mechanisms for classifying data sets as good or bad so that physicists can declare new discoveries with confidence.

This talk reviews the data quality monitoring systems of the ATLAS inner detector (ID). Following an ID-centric review of the ATLAS detector in section 2, the relevant ATLAS software frameworks are discussed in section 3. This is followed by a description of the monitoring packages developed by each subsystem community in section 4 and a final summary.

2. The ATLAS Detector

2.1 Detector description

The basic design criteria of the ATLAS experiment [1] include very good electromagnetic calorimetry for measurements of photons and electrons, and full-coverage hadronic calorimetry for jet and missing transverse energy measurements. Efficient tracking is expected at both high and low luminosity, thus fulfilling an important requirement for high-p_T lepton momentum measurements, electron and photon identification and heavy flavour identification. The detector has a large acceptance in η and almost full coverage in φ, whilst also being able to trigger at low p_T thresholds.

Accurate measurements of particles in accelerator experiments rely on precise measurements of particle momentum and charge, together with information such as whether a particle was produced at the primary interaction vertex or at a secondary vertex (the latter is important for b-tagging and tau reconstruction). The ATLAS inner detector is designed to perform all of these measurements by observing the bending of particle tracks in the 2 T magnetic field of the central solenoid. The field points along the beam axis, and hence particle tracks are bent according to the transverse momentum of the particle.

The ATLAS inner detector (shown in figure 1) is divided into three separate parts: a semiconductor pixel detector that provides high granularity near the vertex region, a semiconductor tracker (SCT) that utilises silicon micro-strip technology, and a straw tube tracker (TRT) that provides continuous track-following with much less material per point. The detector as a whole consists of a barrel region and two endcap regions, and the design is such that no single component dominates the momentum measurement.
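As a rough illustration of the momentum measurement described above, the sketch below (illustrative only, not ATLAS code) evaluates the standard relation p_T [GeV] ≈ 0.3 B [T] R [m] and the resulting track sagitta; the 1 m lever arm is an assumed round number for the transverse track length in the inner detector.

```python
# Illustrative sketch (not ATLAS code): a track of transverse momentum p_T
# bends in the solenoid field with radius R given by p_T ~ 0.3 * B * R.
B_FIELD_T = 2.0      # central solenoid field, as quoted in the text
LEVER_ARM_M = 1.0    # assumed transverse track length inside the tracker

def bending_radius_m(pt_gev: float, b_tesla: float = B_FIELD_T) -> float:
    """Radius of curvature of a singly charged particle in a uniform field."""
    return pt_gev / (0.3 * b_tesla)

def sagitta_mm(pt_gev: float) -> float:
    """Sagitta of the measured arc, s ~ L^2 / (8R), in the small-angle limit."""
    r = bending_radius_m(pt_gev)
    return 1000.0 * LEVER_ARM_M ** 2 / (8.0 * r)

if __name__ == "__main__":
    for pt in (1.0, 10.0, 100.0):
        print(f"p_T = {pt:6.1f} GeV -> R = {bending_radius_m(pt):7.1f} m, "
              f"sagitta = {sagitta_mm(pt):6.3f} mm")
```

The rapidly shrinking sagitta at high p_T is one way to see why precise space points from the silicon layers are needed for the momentum measurement.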

Figure 1: The ATLAS inner detector (pixel detectors, barrel and forward SCT, and TRT), taken from [2].

2.2 A brief guide to ATLAS dataflow

The path of data through the ATLAS experiment is illustrated in figure 2 and, for the uninitiated, is an enigmatic mess of acronyms. Essentially, data from the detector pass to a Read Out Driver (ROD) when the first level trigger (LVL1) has accepted an event. The ROD formats the data and sends a basic event fragment to a Read Out Buffer (ROB), which stores it until the level 2 trigger (LVL2) has reached a decision. If the event is good, a Read Out Subsystem (ROS) moves the data block to the Event Builder (through a Sub Farm Input, SFI), which holds it until the Event Filter (EF) can apply the full ATLAS reconstruction software and produce hits, tracks and other derived quantities from the raw data. This represents the final result of the processing chain, and the events are subsequently written to disk through a Sub Farm Output (SFO).

Figure 2: The path of data through the ATLAS experiment [3]. The abbreviations are defined in the text.

The main aim of figure 2 is to illustrate the various points where one might choose to monitor data in ATLAS. Several monitoring frameworks have been developed, and each is capable of monitoring the data at any point in the data flow. Given that the rate of data flow drops at each stage of the trigger process, there is often an advantage to monitoring data early in the chain if possible. For more sophisticated analysis, one must use data from the Event Filter, and hence must accept a lower rate of data collection.
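To untangle the acronyms, the following sketch (purely illustrative, not part of the ATLAS online software) encodes the chain of section 2.2 as an ordered list of stages and notes what kind of data a monitoring task would see if it sampled at each point.

```python
# Illustrative summary of the ATLAS dataflow chain described in section 2.2.
# The stages and descriptions follow the text; this is not ATLAS software.
from dataclasses import dataclass

@dataclass
class Stage:
    acronym: str
    description: str
    data_available: str   # what a monitoring task sees if it samples here

DATAFLOW = [
    Stage("ROD", "Read Out Driver, filled after a LVL1 accept", "raw event fragments"),
    Stage("ROB", "Read Out Buffer, holds fragments during the LVL2 decision", "raw event fragments"),
    Stage("ROS", "Read Out Subsystem, ships accepted fragments onwards", "raw fragments, errors, hit counts"),
    Stage("SFI", "Sub Farm Input, feeds the Event Builder", "complete built events"),
    Stage("EF",  "Event Filter, runs the full reconstruction", "hits, clusters, tracks"),
    Stage("SFO", "Sub Farm Output, writes events to disk", "final reconstructed events"),
]

def print_chain() -> None:
    """Print the chain in order, earliest (highest rate) first."""
    for stage in DATAFLOW:
        print(f"{stage.acronym:>3}: {stage.description} -> monitoring sees {stage.data_available}")

if __name__ == "__main__":
    print_chain()
```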

3. ATLAS monitoring software frameworks

As is usual in large collaborations, several software frameworks have been developed in order to perform monitoring tasks in ATLAS. Some have merged over the years, but others have remained intact because they either serve a unique function or provide a vital cross-check when combined with another system. Before discussing these systems in depth, it is worth pointing out that there are three distinct use cases for a general monitoring system:

1. Online running: whilst the detector is in operation, it is vital that shift workers and experts are able to observe the detector in real time in order to debug problems as they arise. Monitoring systems running online are forced to sample the data stream, and hence there will always be a limit to the statistics accumulated in online running.

2. Offline running: once data have been collected on disk, one can perform much more thorough and detailed studies using the full ATLAS reconstruction software. This can in principle use a large number of machines (using GRID computing, for example), and the output can be stored in offline databases in order to record detector parameters and classify the quality of datasets.

3. Semi-offline running: in ATLAS, there is a 24 hour delay between live data taking and the commencement of the full offline processing, and this can be used to store parameters that are needed before the full reconstruction is run. As an example, one might want to store, and subsequently veto, noisy detector components that might otherwise disturb the track reconstruction algorithms.

There are two dominant online monitoring systems in ATLAS. The GNAM framework [3] is capable of monitoring at all levels shown in figure 2, but is mostly used for obtaining fast feedback from relatively raw data such as hits, errors, bunch crossing information and very simple derived objects. This is computationally quick and can be performed at lower trigger levels to maximise the event rate. A second monitoring system uses the ATLAS Athena software framework. This is typically run at the Event Filter stage of the trigger process, leading to a much lower rate of approximately 1 Hz. In addition, some detectors have custom systems built into their RODs.
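The practical consequence of these rates is easy to quantify. The back-of-the-envelope sketch below uses the approximate figures quoted in this article (about 1 kHz for ROS-level GNAM sampling in the pixel system, about 1 Hz for Event Filter level Athena monitoring); the run length is an arbitrary example value.

```python
# Back-of-the-envelope comparison of the statistics accumulated by a fast,
# low-level monitor and a full-reconstruction monitor. Rates are the rough
# values quoted in the text; the 8 hour run length is just an example.
SECONDS_PER_HOUR = 3600

def events_sampled(rate_hz: float, hours: float) -> int:
    """Events seen by a monitoring task sampling at a fixed rate."""
    return int(rate_hz * hours * SECONDS_PER_HOUR)

if __name__ == "__main__":
    run_hours = 8.0
    for name, rate in [("GNAM at the ROS (~1 kHz)", 1000.0),
                       ("Athena at the Event Filter (~1 Hz)", 1.0)]:
        print(f"{name}: ~{events_sampled(rate, run_hours):,} events in {run_hours:.0f} h")
```

The three orders of magnitude between the two numbers is what drives the division of labour between fast, low-level checks and detailed reconstruction-based monitoring.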

The online monitoring tools publish ROOT histograms to an online histogram server, from which they can be displayed by a series of collaboration-wide tools such as the Online Histogram Presenter [4]. A subset of histograms is also sent to an automatic checking system that is designed to highlight basic problems in the form of a traffic light reporting system.

Most offline monitoring in ATLAS uses the Athena software framework, often with the same code as the online system. The greater processing power available offline enables more detailed studies to be performed. There are also custom tools available in some subsystems that were developed before the use of Athena became widespread. The offline tools store histograms in ROOT files, as well as sending histograms to an offline version of the automatic histogram checking system described above. For permanent reference, output from the monitoring systems can be stored in a COOL database.

All systems have had ample opportunity for recent tests, both in combined commissioning runs with cosmic rays and in staged challenges that pipe simulated data through the complete data flow chain (the so-called Full Dress Rehearsal challenges).

4. Monitoring systems by subsystem

4.1 Pixel detector

The pixel detector community use three different monitoring systems, each for a different purpose. Online, their GNAM based system samples data from their ROSs at a rate of approximately 1 kHz, and monitors the total number of hits in each ROS (including the total number per event), the errors flagged by each ROS and the occupancy of the pixel layers. An Athena based system samples events from the SFI and reconstructs pixel clusters at a maximum rate of about 20 Hz. This plots occupancies at the module level as well as time over threshold information, the number of pixels per cluster and time dependent quantities. There is also a built-in histogramming function at the hardware level, but this has yet to be used in any organised way. Offline, their Athena code runs in an expanded version to provide more thorough data quality checks.

The online and offline functionality has been exercised extensively in the recent tests, and all packages were found to be in good shape. The focus has now shifted from code development to training a shift team for full time data taking.

4.2 SCT

Physicists working on the SCT detector also have a variety of monitoring systems at their disposal. A GNAM based system samples data fragments from the ROSs and plots errors, hits and hit coincidences (i.e. primitive spacepoints) in the form of maps and trends. This provides very fast basic feedback online and has been widely used during this year's commissioning work to time in the detector and spot noisy detector components.

An Athena online monitoring system (whose development is dealt with in detail in [5]) samples data from the SFIs and uses full event data (hits, clusters, tracks, etc.) to provide more detailed checks at a much lower rate of approximately 1 Hz. The SCT Athena monitoring system is unique in having amendments that allow dynamic rebooking of histograms. This allows an expert to examine each strip of the SCT in real time during live data taking without exceeding the limited bandwidth allocation for sending histograms to and from the online server.
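A minimal sketch of the idea behind this dynamic rebooking is given below. It is an illustration of the concept only, not the actual SCT Athena code: cheap per-module summaries are always kept, while expensive per-strip histograms are booked only for the modules an expert explicitly requests, so the histogram traffic to the online server stays small. The module names in the example are hypothetical.

```python
# Conceptual sketch (not ATLAS code) of on-demand booking of per-strip
# histograms to respect a limited histogram-publishing bandwidth.
from collections import defaultdict

class StripHistogramManager:
    def __init__(self, strips_per_module: int = 768):  # 768 strips per SCT module side
        self.strips_per_module = strips_per_module
        self.module_hits = defaultdict(int)      # always-on, cheap summary
        self.strip_hists = {}                    # per-strip detail, booked on demand

    def book_module(self, module_id: str) -> None:
        """Expert request: start keeping a per-strip histogram for one module."""
        self.strip_hists.setdefault(module_id, [0] * self.strips_per_module)

    def release_module(self, module_id: str) -> None:
        """Expert request: drop the detailed histogram again to free bandwidth."""
        self.strip_hists.pop(module_id, None)

    def fill(self, module_id: str, strip: int) -> None:
        """Called for every sampled hit; detail is filled only if booked."""
        self.module_hits[module_id] += 1
        if module_id in self.strip_hists:
            self.strip_hists[module_id][strip] += 1

# Example: only the module under investigation carries per-strip detail.
mgr = StripHistogramManager()
mgr.book_module("barrel3_module42")       # hypothetical module name
mgr.fill("barrel3_module42", strip=511)   # summary plus per-strip detail
mgr.fill("barrel3_module17", strip=12)    # summary only, no detail booked
```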

The system automatically updates a display for shift workers to show the most problematic modules at the strip level. The wider system monitors noise occupancies, errors, efficiencies and track parameters such as residuals, pulls and fit χ² values. The system is running smoothly, and is being used to commission the detector and train shift workers for next year.

The Athena monitoring package also forms the basis for the semi-offline and offline monitoring strategies. For semi-offline processing, the SCT use a stripped down version of the Athena package (with tracking turned off) to log noisy and dead strips in a COOL database. For pure offline running, the package runs the full Athena reconstruction and plots similar quantities to the online package, though with greatly increased statistics.

4.3 TRT

The TRT use a combination of an Athena monitoring package and an earlier custom tool known as TRTViewer, which has remained in use due to its excellent visual representation of the TRT. In its most recent incarnation, TRTViewer acts simply as a display program for the output of the Athena monitoring package, or for histograms produced using a standalone ROS level monitoring package. Quantities of interest include the high level and low level occupancies of the detector (which has two thresholds), the average drift time, the number of tracks and the track residuals. Variables are plotted in a variety of ways, including versus time and versus φ angle. The TRT monitoring systems are already interfaced with the ATLAS automatic data quality checker, and have performed well in recent commissioning tests.

4.4 Global Inner Detector Monitoring

In addition to the monitoring systems developed for each subsystem, there is an Athena based monitoring package designed to investigate issues that involve combinations of sub-detectors, such as detector synchronisation (in the form of bunch crossing and level 1 trigger information), correlations in detector occupancy, track parameters for global tracks, and angles between track segments in different detectors. This can run either offline or online, in which case it samples the event stream from the SFI. The tool is split into three packages:

1. An alignment monitoring tool focusing on hit efficiencies, residuals and track properties.

2. A beam spot monitoring tool that compares the measured beam spot with reconstructed vertex positions.

3. A performance monitoring tool that looks at some classic particle resonances and assesses the detector performance by comparing the mass, width and yield to the expected values (a minimal illustration of this kind of check is sketched below).

As would be expected during early commissioning with cosmic rays, the first of these has proved most useful so far, though the other two tools continue to be developed using simulated data.
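The sketch referred to in item 3 is given here. It is a hypothetical, simplified illustration of a resonance-based performance check, not the ATLAS tool itself: a dimuon invariant mass is built from track parameters and the observed peak position is compared with the nominal Z boson mass in a traffic-light style verdict. The mass window and tolerance are arbitrary example values.

```python
# Hypothetical sketch of a resonance-based performance check (not ATLAS code):
# reconstruct a dimuon invariant mass and compare the peak with the nominal Z mass.
import math

Z_MASS_GEV = 91.19        # nominal Z mass used as the reference value
MUON_MASS_GEV = 0.10566

def four_vector(pt, eta, phi, mass=MUON_MASS_GEV):
    """(E, px, py, pz) from transverse momentum, pseudorapidity and azimuth."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return e, px, py, pz

def invariant_mass(track1, track2):
    """Invariant mass of a two-track candidate from (pt, eta, phi) tuples."""
    e1, *p1 = four_vector(*track1)
    e2, *p2 = four_vector(*track2)
    return math.sqrt(max((e1 + e2)**2 - sum((a + b)**2 for a, b in zip(p1, p2)), 0.0))

def check_peak(masses, tolerance_gev=2.0):
    """Traffic-light style verdict on the mean mass in a window around the Z."""
    in_window = [m for m in masses if abs(m - Z_MASS_GEV) < 15.0]
    if not in_window:
        return "red: no candidates in the mass window"
    shift = sum(in_window) / len(in_window) - Z_MASS_GEV
    return "green" if abs(shift) < tolerance_gev else f"red: peak shifted by {shift:+.2f} GeV"

# Example with two made-up muon tracks (pt [GeV], eta, phi):
m = invariant_mass((45.0, 0.1, 0.3), (46.0, -0.2, 0.3 + math.pi))
print(f"candidate mass = {m:.1f} GeV, verdict = {check_peak([m])}")
```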

5. Summary

The detectors of the LHC are amongst the most complicated apparatus ever built, and accurate monitoring systems are essential to evaluate their performance, provide feedback to shift workers and track changes in detector parameters over time. The monitoring systems of ATLAS use a variety of custom software frameworks, and comprise detailed monitoring packages for each sub-detector and a global tool designed to monitor synchronisation, alignment and global performance. All systems have been tested in commissioning runs involving cosmic rays and in simulation exercises, and all are at the final stages of debugging and commissioning.

As a final note, it is worth stating that the arrival of beam at the LHC will no doubt provide surprises for the monitoring systems, most immediately a jump in the average occupancy of the detectors above which problems must be detected. This will require a re-evaluation of the performance of the algorithms already in place, but it is not expected to cause major problems for any of the systems described in this article.

6. Acknowledgements

I am hugely grateful to Dr Richard Brenner for his invitation to present the talk on which this article is based. I thank the organisers of the Vertex 2008 conference for their spectacular hospitality, and the UK-based Science and Technology Facilities Council for their partial funding of the visit. Finally, I would like to thank my colleagues in the ATLAS inner detector collaboration for helpful comments and discussions during the preparation of this final draft.

References

[1] G. Aad et al., The ATLAS Experiment at the CERN Large Hadron Collider, JINST 3 (2008) S08003.

[2] ATLAS Inner Detector Technical Design Report, CERN/LHCC/97-16 and CERN/LHCC/97-17, 1997.

[3] P. Adragna et al., GNAM: A low-level monitoring program for the ATLAS experiment, IEEE Trans. Nucl. Sci. 53 (2006) 1317-1322.

[4] P. Adragna et al., The GNAM monitoring system and the OHP histogram presenter for ATLAS, in Proc. 14th IEEE-NPSS Real Time Conference, June 2005.

[5] M. J. White, Searching for supersymmetry with the ATLAS detector, CERN-THESIS-2008-052.