=== Links of interest ===
Note: some of these links may be restricted to users associated with the respective LHC experiments. This will be resolved at a later date (i.e. the relevant public information will be extracted, reduced and systematized).
 
* CERN
** [http://iopscience.iop.org/1742-6596/396/4/042030/pdf/1742-6596_396_4_042030.pdf Overview of LHC storage operations at CERN]
** [http://castor.web.cern.ch/ CASTOR]
** [https://cern.service-now.com/service-portal/article.do?n=KB0001998 Beginner's Tutorial] for EOS
** [https://twiki.cern.ch/twiki/bin/view/FIOgroup/TsiSection Technology and Storage Infrastructure group at CERN]
** '''CDR''' (Central Data Recording, not to be confused with the Conceptual Design Report)
*** [https://twiki.cern.ch/twiki/bin/view/DSSGroup/TabCdr Apparently the '''main''' CDR wiki]
*** [https://twiki.cern.ch/twiki/bin/view/FIOgroup/CDR_Problem CDR Troubleshooting]
*** [https://twiki.cern.ch/twiki/bin/view/FIOgroup/CDR_Config CDR Configuration]
*** [https://twiki.cern.ch/twiki/bin/view/P326/CDR An older CDR configuration example]
* Misc
** [https://twiki.cern.ch/twiki/bin/view/AtlasComputing/ATLASStorageAtCERN ATLAS Storage at CERN] (main link)
** [https://indico.cern.ch/event/119650/contribution/4/material/slides/1.pdf ATLAS Migration of disk pools] from CASTOR to EOS
** [https://twiki.cern.ch/twiki/bin/view/AtlasComputing/TierZeroExpertOnCallNotes ATLAS documentation: Tier-0 expert on-call notes]
** [https://en.wikipedia.org/wiki/ATLAS_experiment General description of ATLAS] (take a look at the data rates)
* Some older but informative links
** [https://twiki.cern.ch/twiki/pub/Main/DaqTierZeroTierOnePlanning/ALICE__DAQ-T0-T1_architecture_and_time_schedules.doc ALICE DAQ Architecture Document]
** [https://twiki.cern.ch/twiki/pub/Main/DaqTierZeroTierOnePlanning/CDRandT0scenarios3.doc LHC-wide document analyzing data recording]

=== Materials and Meetings ===

=== Infrastructure ===

=== Expected Data Volume and Rates ===

Estimates in this area were developed over a period of time. Both the data rate and the data volume are determined primarily by the number of tracks from cosmic-ray muons recorded within the readout window, which is commensurate with the electron collection time in the TPC (~2 ms).

For a quick summary of the data rates, data volume and related requirements, see:

A few numbers:

* Planned trigger rate: 200 Hz
* Instantaneous data rate in the DAQ: 1 GB/s
* Sustained average: 200 MB/s

The measurement program is still being refined; the total volume of data to be taken will be of the order of 1 PB. Brief notes on the statistics can be found in Appendix II of the "Materials" page.
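
As a sanity check on these figures, the sketch below works out the event size and running time implied by the rates quoted above. The derived numbers are illustrative back-of-the-envelope values, not official estimates.

<syntaxhighlight lang="python">
# Back-of-the-envelope check of the rates quoted above (illustrative only).

TRIGGER_RATE_HZ = 200        # planned trigger rate
INST_RATE_B_PER_S = 1e9      # instantaneous DAQ rate, ~1 GB/s
SUSTAINED_B_PER_S = 200e6    # sustained average, ~200 MB/s
TARGET_VOLUME_B = 1e15       # total data set, ~1 PB

# Implied raw event size at the instantaneous rate.
event_size_b = INST_RATE_B_PER_S / TRIGGER_RATE_HZ      # ~5 MB per event

# Daily volume and days of running needed at the sustained rate.
daily_volume_b = SUSTAINED_B_PER_S * 86400               # ~17 TB per day
days_to_target = TARGET_VOLUME_B / daily_volume_b        # ~58 days

print(f"event size   : {event_size_b / 1e6:.1f} MB")
print(f"daily volume : {daily_volume_b / 1e12:.1f} TB")
print(f"days to 1 PB : {days_to_target:.0f}")
</syntaxhighlight>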

=== Software and Computing ===

==== Intro ====

As of March 2015, this is work in progress. In accordance with common requirements, we anticipate preserving three copies of the "precious" data to be collected during the experiment. One primary copy would be stored on tape at CERN, another at FNAL, and auxiliary copies would be shared among US sites, e.g. BNL and NERSC. There are proposals to reuse software proven in the IceCube and Daya Bay experiments to move data between CERN and the US with an appropriate degree of automation, error checking and correction, and monitoring.
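
A minimal sketch of the bookkeeping such a transfer system needs (checksum before shipping, verify after arrival) is shown below. The file names, directory layout and the local copy step are hypothetical placeholders, not part of any DUNE tool.

<syntaxhighlight lang="python">
# Hypothetical sketch: checksum-verified replication bookkeeping.
# Paths and the copy step are placeholders for the real CERN -> US transfer.
import hashlib
import shutil
from pathlib import Path

def file_checksum(path: Path) -> str:
    """MD5 of the file contents, read in chunks to keep memory bounded."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(src: Path, dst: Path) -> bool:
    """Copy src to dst and confirm the checksums match before declaring success."""
    before = file_checksum(src)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)        # stand-in for the actual wide-area transfer
    after = file_checksum(dst)
    return before == after

if __name__ == "__main__":
    ok = replicate(Path("raw/run000123.dat"), Path("replica/fnal/run000123.dat"))
    print("transfer verified" if ok else "checksum mismatch - retry needed")
</syntaxhighlight>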

The salient point of the Software and Computing plan is near-real-time processing and monitoring of data quality, including full tracking in express production streams; this can be done on a subset of the raw data. At the same time, a rough estimate indicates that, for off-line processing, ~5000 cores will be sufficient to process the data at about the same speed as it is collected.
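
A rough check of that figure, assuming reconstruction must keep pace with the 200 Hz trigger rate, is sketched below; the per-event CPU budget is a derived quantity, not a measured one.

<syntaxhighlight lang="python">
# Illustrative check: the CPU time per event that ~5000 cores allow
# while keeping up with a 200 Hz trigger (derived figure, not a measurement).

TRIGGER_RATE_HZ = 200
CORES = 5000

# To process events as fast as they arrive, each event may use at most
# CORES / TRIGGER_RATE_HZ core-seconds of reconstruction time.
max_cpu_per_event_s = CORES / TRIGGER_RATE_HZ
print(f"budget per event: {max_cpu_per_event_s:.0f} core-seconds")   # ~25 s
</syntaxhighlight>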

==== Handling the data ====

===== Storage at CERN =====

In the early 2000s, the CASTOR system, which provides a front end to mass storage in the form of both tape and disk pools, was deployed at CERN. In the early 2010s, the disk pools were largely migrated to EOS, a newer, high-performance system with better functionality for managing large disk pools. CASTOR is still used for custodial data on tape.

EOS is derived from xrootd, and ROOT files stored in it are accessible natively over the xroot protocol.
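
For example, a ROOT file on EOS can be opened directly by URL; the short sketch below uses PyROOT, and the endpoint and path are hypothetical placeholders.

<syntaxhighlight lang="python">
# Opening a ROOT file on EOS natively over the xroot protocol (PyROOT).
# The endpoint and path below are hypothetical placeholders.
import ROOT

f = ROOT.TFile.Open("root://eospublic.cern.ch//eos/experiment/some/path/file.root")
if f and not f.IsZombie():
    f.ls()       # list the objects stored in the file
    f.Close()
else:
    print("could not open the file over xrootd")
</syntaxhighlight>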

