CERN Prototype


protoDUNE

The protoDUNE experimental program is designed to test and validate the technologies and design that will be applied to the construction of the DUNE Far Detector at the Sanford Underground Research Facility (SURF). The protoDUNE detectors will be run in a dedicated beam line at the CERN SPS accelerator complex. The rate and volume of data produced by these detectors will be substantial and will require extensive system design and integration effort.

protoDUNE CERN proposal: DUNE DocDB 186


As of Fall 2015, "protoDUNE" is the official name for the two apparatuses to be used in the CERN beam test: a single-phase and a dual-phase LArTPC detector. Each has received a formal CERN experiment designation:

  • NP02 for the dual-phase detector.
  • NP04 for the single-phase detector.

North Area Extension: installation schedule

Materials and Meetings

Infrastructure

Expected Data Volume and Rates

Baseline parameters as per the proposal

Estimates in this area were developed over an extended period. Both the data rate and the data volume are determined primarily by the number of cosmic-ray muon tracks recorded within the readout window, which is comparable to the electron collection time in the TPC (~2 ms).

A few key numbers summarizing the data rates, data volume and related requirements:

  • Planned trigger rate: 200 Hz
  • Instantaneous data rate in the DAQ: 1 GB/s
  • Sustained average: 200 MB/s

Based on these numbers, the nominal network bandwidth required to link the DAQ to the CERN storage elements is ~2 GB/s. This assumes that zero suppression will be applied in all measurements. There are also considerations for taking some portion of the data in non-ZS mode, which would require approximately 20 GB/s of connectivity. Since WA105 has specified this as their requirement, DUNE-PT may be able to obtain a link in this range.

The measurement program is still being refined; the total volume of data to be taken will be of the order of 1 PB. Brief notes on the statistics can be found in Appendix II of the "Materials" page.
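
The figures above can be cross-checked with some simple arithmetic. The sketch below derives the implied average event size, the DAQ duty factor and the time needed to ship ~1 PB over the nominal link; only the inputs come from this page, and the derived quantities are illustrative estimates, not numbers from the proposal.

```python
# Back-of-the-envelope check of the protoDUNE rate and volume figures quoted above.
# Inputs marked "from the text" come from this page; derived quantities are
# illustrative estimates only.

trigger_rate_hz   = 200     # planned trigger rate (from the text)
instantaneous_gbs = 1.0     # instantaneous DAQ data rate, GB/s (from the text)
sustained_mbs     = 200.0   # sustained average rate, MB/s (from the text)
zs_link_gbs       = 2.0     # nominal DAQ-to-storage bandwidth, zero-suppressed
non_zs_link_gbs   = 20.0    # approximate bandwidth needed without zero suppression
total_volume_gb   = 1.0e6   # ~1 PB total data volume, expressed in GB

# Implied average (zero-suppressed) event size.
event_size_mb = instantaneous_gbs * 1000.0 / trigger_rate_hz     # ~5 MB/event

# Fraction of the time the DAQ runs at the instantaneous rate.
duty_factor = sustained_mbs / (instantaneous_gbs * 1000.0)       # ~20%

# Time needed to move ~1 PB over the nominal 2 GB/s link, ignoring overheads.
transfer_days = total_volume_gb / zs_link_gbs / 86400.0          # ~6 days

print(f"implied event size      : {event_size_mb:.1f} MB")
print(f"DAQ duty factor         : {duty_factor:.0%}")
print(f"~1 PB over nominal link : {transfer_days:.1f} days")
print(f"non-ZS bandwidth factor : {non_zs_link_gbs / zs_link_gbs:.0f}x")
```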

Taking data in non-ZS mode

As of Q2 2016, non-ZS mode is being considered for implementation in protoDUNE (single-phase).

Software and Computing

Overview

Due to the short time available for data taking, the data collected during the experiment are considered "precious" (impossible or hard to reproduce), and redundant storage must be provided for them. One primary copy will be stored on tape at CERN, another at FNAL, and auxiliary copies will be shared between US sites, e.g. BNL and NERSC. The aim is to reuse existing software from other experiments to move data between CERN and the US with an appropriate degree of automation, error checking and correction, and monitoring.
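
The replication machinery itself has not been finalized; as a minimal sketch of the bookkeeping such a redundancy policy implies, the snippet below tracks the tape copies at CERN and FNAL plus auxiliary US copies and verifies each replica against the checksum of the original. The class, function and site names are illustrative, not part of any existing DUNE tool.

```python
import hashlib
from dataclasses import dataclass, field

def file_checksum(path: str) -> str:
    """Checksum used to verify that every replica matches the original file."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

@dataclass
class RawFile:
    """Bookkeeping record for one 'precious' raw data file."""
    name: str
    checksum: str
    replicas: dict = field(default_factory=dict)   # site label -> storage URL

    def register_replica(self, site: str, url: str, checksum: str) -> None:
        # Refuse to record a replica whose checksum does not match the original.
        if checksum != self.checksum:
            raise ValueError(f"checksum mismatch for {self.name} at {site}")
        self.replicas[site] = url

    def is_safe(self) -> bool:
        # "Safe" here means both primary tape copies exist; auxiliary copies
        # at US sites (e.g. BNL, NERSC) add further redundancy.
        return {"CERN_TAPE", "FNAL_TAPE"}.issubset(self.replicas)
```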

An effort will be made to implement near-time processing and monitoring of data quality, including full tracking in express production streams. This can be done on a subset of the raw data in order to process it at roughly the same speed as it is collected. A very rough estimate indicates that, for off-line processing, ~5000 cores would be sufficient to make a first reconstruction pass over the data at the same rate as it is received.
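
As a rough consistency check of the ~5000-core figure, the sketch below works out the per-event CPU budget available if the first reconstruction pass is to keep up with the sustained data rate; the ~5 MB event size is the value implied by the DAQ numbers above, not a measured quantity.

```python
# Rough consistency check of the ~5000-core estimate for keeping pace with data taking.

sustained_mb_per_s = 200.0   # sustained average data rate (from the text)
event_size_mb      = 5.0     # implied zero-suppressed event size (estimate)
cores              = 5000    # off-line cores quoted in the text

events_per_second = sustained_mb_per_s / event_size_mb    # ~40 events/s arriving

# With this many cores running in parallel, each core must finish one event
# within this time for reconstruction to keep pace with data taking.
seconds_per_event_budget = cores / events_per_second       # ~125 s per event

print(f"event arrival rate   : {events_per_second:.0f} events/s")
print(f"CPU budget per event : {seconds_per_event_budget:.0f} s on one core")
```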

Handling the data

Current design ideas are summarized in the document created during a meeting with FNAL data experts in mid-March 2016. The main idea is to leverage a few existing systems (F-FTS, SAM, etc.) in order to satisfy a number of requirements. A few of the initial requirements are documented below; the design document referenced above contains considerably more detail.
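
In this scheme F-FTS moves the files and SAM catalogues them. Purely as an illustration, the snippet below shows the kind of per-file metadata such a catalogue would need to carry to support automated transfer, error checking and bookkeeping; the field names and the file naming convention are hypothetical and do not reproduce the actual SAM schema.

```python
# Hypothetical per-file metadata record for protoDUNE raw data, illustrating the
# information a SAM-like catalogue would track.  Field names and the file name
# are invented for this example; they are not the real SAM schema or convention.

raw_file_metadata = {
    "file_name":     "np04_raw_run000001_0001.dat",   # hypothetical naming scheme
    "file_size":     5_000_000_000,                   # bytes (~5 GB, illustrative)
    "checksum":      "adler32:1a2b3c4d",              # used to verify transfers
    "data_tier":     "raw",
    "detector":      "NP04",                          # single-phase protoDUNE
    "run_number":    1,
    "subrun_number": 1,
    "start_time":    "<run start time, UTC>",         # placeholder
    "locations":     ["CERN_EOS", "CERN_TAPE", "FNAL_TAPE"],
}
```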

Storage at CERN

More information (including fairly technical bits) can be found on the CERN Data Handling page.

  • EOS is a high-performance distributed disk storage system based on XRootD. It is used by the major LHC experiments as the main destination for writing raw data (a minimal usage sketch follows this list).
  • CASTOR is the principal tape storage system at CERN. It has a built-in disk layer, which was previously used in production and other activities, but this is no longer the case since that functionality is handled more efficiently by EOS. For that reason, the disk storage that exists in CASTOR serves as a buffer for I/O and system functions.
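
Since EOS is accessed through the XRootD protocol, a raw file can be written to it with the standard xrdcp client; the minimal sketch below simply wraps that call. The EOS host name and target directory are placeholders, not the real protoDUNE namespace.

```python
import subprocess

def copy_to_eos(local_path: str,
                eos_host: str = "eosXXXX.cern.ch",                 # placeholder instance
                eos_dir: str = "/eos/experiment/protodune/raw"):   # placeholder path
    """Copy one local raw data file into EOS over the XRootD protocol."""
    file_name = local_path.rsplit("/", 1)[-1]
    destination = f"root://{eos_host}/{eos_dir}/{file_name}"
    # xrdcp exits non-zero on failure; check=True raises an exception so the
    # caller can retry the transfer or raise an alarm.
    subprocess.run(["xrdcp", local_path, destination], check=True)
```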


Data Transfer