CERN Prototype Materials

Revision as of 20:02, 8 September 2015

Select Documents and Meetings

  • Current (2015)
    • Workup to the proposal
      • DocDB 10385: Brief review of Computing Requirements for the test
      • DocDB 10428: Draft proposal for the test and comments on the document
    • Proposal

Appendix I

An early example of the measurement program

Particle Type    Momentum Range (GeV/c)    Bin Width (MeV/c)
p                0.1-2.0                   100
p                2.0-10.0                  200
π±               0.1-2.0                   100
π±               2.0-10.0                  200
μ±               0.1-1.0                   50
μ±               1.0-10.0                  200
e±               0.1-2.0                   100
e±               2.0-10.0                  200
K+               0.1-1.0                   100
γ(π0)            0.1-2.0                   100
γ(π0)            2.0-5.0                   200

Appendix II

The detector characterization covers a few distinct areas, and the goals of the measurements in each area drive the desired statistics. A few items are presented below.

Energy Scale and Resolution

Important parameters for detector characterization include the energy scale and resolution, for both single tracks and showers, hadronic and electromagnetic. Let's consider these first (using comments from T. Junk):

  • Energy scale: for a Gaussian distribution, the uncertainty on the mean is sigma/sqrt(n). Assuming a resolution of 1% and aiming for ±0.1% precision on the scale, only ~100 events would be needed.
  • Hadronic showers: older calorimeters had a resolution of about 80%/sqrt(E). Since the sampling fraction in a LAr TPC is higher, we are likely to do better than this, but conservatively assume O(10%)/sqrt(E). Qualitatively, we can follow arguments similar to the previous item. It follows that O(10^3-10^4) events will be enough for the purposes of this measurement. Indeed, looking at typical test beam and calibration practices (per published papers), 10^4 events is the typical statistics for a given incident beam momentum.

In summary, depending on the case, this part of the measurement program can be accomplished with an event sample of size ~O(10^3-10^4), and in some cases less; the short sketch below illustrates the arithmetic.
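
A minimal counting sketch of the estimates above, in Python. The 1% track resolution and the ~10%/sqrt(E) shower term are taken from the bullets; the 4 GeV reference point, the 0.1% target for the shower case, and the function name are illustrative assumptions, not part of the proposal.

  import math

  def events_for_mean_precision(resolution, target_precision):
      """Events needed so that the statistical uncertainty on the mean,
      sigma / sqrt(n), reaches the target precision."""
      return (resolution / target_precision) ** 2

  # Energy scale: 1% resolution, aiming for 0.1% precision -> ~100 events.
  print(round(events_for_mean_precision(0.01, 0.001)))

  # Hadronic showers: assume a ~10%/sqrt(E) stochastic term, i.e. ~5% at an
  # illustrative 4 GeV, pinned down to 0.1% -> ~2500 events, i.e. within the
  # O(10^3-10^4) range quoted above.
  print(round(events_for_mean_precision(0.10 / math.sqrt(4.0), 0.001)))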

PID

Measuring the "fake" rate, i.e. particle mis-identification, is important for certain physics to be addressed by the experiment (cf. proton decay). If the probability of mis-identification is p, the statistical uncertainty can be expressed as sqrt(p*(1-p)/n). This can also be understood in terms of precise measurements of the "tails" of certain distributions. If we are looking at mis-identification probabilities of the order of 10^-6, this translates into quite substantial statistics. At the time of writing we need more guidance in this area, but in general it appears that we would indeed be motivated to take as much data as practically feasible. This means we will aim to take a few million events in each of a few momentum bins (TBD); see the sketch below. Reading: Nucleon Decay Searches
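
To make the binomial estimate concrete, a small Python sketch. The 10^-6 probability and the few-million-event sample size come from the paragraph above; the specific numbers plugged in and the function name are only illustrative.

  import math

  def misid_uncertainty(p, n):
      """Absolute and relative statistical uncertainty on a mis-identification
      probability p measured with n events: sigma_p = sqrt(p * (1 - p) / n)."""
      sigma = math.sqrt(p * (1.0 - p) / n)
      return sigma, sigma / p

  # A mis-ID probability of 1e-6 probed with 3 million events gives only
  # ~3 expected fakes, so the relative uncertainty is still close to 60%,
  # which is why taking as much beam data as feasible is attractive.
  sigma, rel = misid_uncertainty(1e-6, 3e6)
  print(f"sigma_p = {sigma:.1e}, relative = {rel:.0%}")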

Systematics

Let's assume we want to measure a signal with 5% precision and the signal-to-background ratio is 1:10. To reach this level of accuracy, we will need ~10^5 events in the momentum bin of interest; a rough counting estimate is sketched below.
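
A back-of-the-envelope counting version of this estimate, in Python, assuming pure Poisson statistics with the background taken as known; how the background is actually determined and subtracted is not specified above and would push the number up.

  def events_for_signal_precision(s_over_b, target_rel_precision):
      """Total events N = S + B needed so that sqrt(S + B) / S, the Poisson
      uncertainty on a signal on top of a known background, reaches the
      target relative precision."""
      k = 1.0 / s_over_b                      # background-to-signal ratio
      # S = N / (1 + k), so sqrt(N) / S = (1 + k) / sqrt(N).
      return ((1.0 + k) / target_rel_precision) ** 2

  # S:B = 1:10 and 5% precision on the signal yield -> N ~ 5e4 events;
  # having to measure the background itself (e.g. from sidebands) roughly
  # doubles the variance and moves the requirement toward the ~10^5 quoted above.
  print(round(events_for_signal_precision(1.0 / 10.0, 0.05)))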

Appendix III

DAQ references

Note: some of these links may be restricted to users associated with the respective LHC experiments. This will be resolved at a later date (i.e. the relevant public information will be extracted, reduced and systematized).

Data Storage Links

General CERN Storage Info

  • In the early 2000s, the CASTOR system was deployed at CERN; it provides a front-end to mass storage in the form of both tape and disk pools. In the early 2010s, the disk pools were largely migrated to EOS, a newer, high-performance system based on XRootD with better functionality for managing large disk pools. CASTOR is still used for custodial data on tape.
  • Overview of LHC storage operations at CERN: http://iopscience.iop.org/1742-6596/396/4/042030/pdf/1742-6596_396_4_042030.pdf
  • CASTOR: http://castor.web.cern.ch/
  • Technology and Storage Infrastructure group at CERN: https://twiki.cern.ch/twiki/bin/view/FIOgroup/TsiSection
  • CDR (Central Data Recording, not to be confused with Conceptual Design Report)
    • Apparently the main CDR wiki: https://twiki.cern.ch/twiki/bin/view/DSSGroup/TabCdr
    • CDR Troubleshooting: https://twiki.cern.ch/twiki/bin/view/FIOgroup/CDR_Problem
    • CDR Configuration: https://twiki.cern.ch/twiki/bin/view/FIOgroup/CDR_Config
    • An older CDR configuration example: https://twiki.cern.ch/twiki/bin/view/P326/CDR

EOS

  • Setting up EOS: http://eos.cern.ch/index.php?option=com_content&view=article&id=87:using-eos-at-cern&catid=31:general&Itemid=41
  • Beginner's Tutorial for EOS: https://cern.service-now.com/service-portal/article.do?n=KB0001998

Misc

  • ATLAS Storage at CERN (main link): https://twiki.cern.ch/twiki/bin/view/AtlasComputing/ATLASStorageAtCERN
  • ALICE DAQ Architecture Document: https://twiki.cern.ch/twiki/pub/Main/DaqTierZeroTierOnePlanning/ALICE__DAQ-T0-T1_architecture_and_time_schedules.doc
  • LHC-wide document analyzing data recording: https://twiki.cern.ch/twiki/pub/Main/DaqTierZeroTierOnePlanning/CDRandT0scenarios3.doc

Data Management

  • ATLAS DDM (CHEP07): https://dcameron.web.cern.ch/dcameron/docs/chep07-atlas-ddm.pdf
  • ATLAS DDM (CHEP10): http://iopscience.iop.org/1742-6596/219/6/062037/pdf/1742-6596_219_6_062037.pdf