CERN Prototype Materials

Select Documents and Meetings

  • Current (2015)
    • Workup to the proposal
      • DocDB 9905: Meeting 10/29/2014
      • DocDB 10385: Brief review of Computing Requirements for the test
      • DocDB 10428: Draft proposal for the test and comments on the document

Appendix I

An early example of the measurements program

Particle Type    Momentum Range (GeV/c)    Bin Width (MeV/c)
p                0.1-2.0                   100
p                2.0-10.0                  200
π±               0.1-2.0                   100
π±               2.0-10.0                  200
μ±               0.1-1.0                   50
μ±               1.0-10.0                  200
e±               0.1-2.0                   100
e±               2.0-10.0                  200
K+               0.1-1.0                   100
γ(π0)            0.1-2.0                   100
γ(π0)            2.0-5.0                   200

Appendix II

The detector characterization includes a few distinct areas, and the measurement goals in each determine the desired statistics. A few items are presented below.

Energy Scale and Resolution

In terms of detector characterization, some of the important parameters are the energy scale and resolution, for both single tracks and showers (hadronic and EM). Let's consider these first (using comments from T. Junk):

  • Energy scale: for a Gaussian distribution, the statistical uncertainty on the mean is sigma/sqrt(n). Assuming a resolution of 1% and aiming for ±0.1% precision, only ~100 events would be needed.
  • Hadronic showers: older calorimeters had a resolution of 80%/sqrt(E). Since the sampling fraction in a LAr TPC is higher, we are likely to do better than this, but we still conservatively assume O(10%)/sqrt(E). Qualitatively, we can follow arguments similar to the previous item; it follows that O(10³-10⁴) events will be enough for the purposes of this measurement. Indeed, looking at typical test-beam and calibration practices (per published papers), 10⁴ events is the typical statistics collected at a given incident beam momentum.

In summary, depending on the case, this part of the measurement program can be accomplished with event samples of size ~O(10³-10⁴), and in some cases fewer.
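
As a rough numerical illustration of the counting arguments above, here is a minimal sketch in Python. The 1% scale resolution, the ±0.1% scale target, and the O(10%)/sqrt(E) shower resolution come from the text; the function names, the 2 GeV beam setting, and the 0.1% target on the shower mean are illustrative assumptions.

import math

def events_for_mean_precision(resolution, target):
    """Rounded estimate of the events n needed so that the statistical
    uncertainty on the mean, resolution/sqrt(n), reaches the target."""
    return round((resolution / target) ** 2)

def shower_resolution(stochastic_term, energy_gev):
    """Stochastic-only resolution model: sigma/E = a/sqrt(E)."""
    return stochastic_term / math.sqrt(energy_gev)

# Energy scale: 1% resolution, +-0.1% target -> ~100 events, as quoted above.
print(events_for_mean_precision(0.01, 0.001))            # -> 100

# Hadronic showers: assume O(10%)/sqrt(E). At a hypothetical 2 GeV beam setting
# the per-event resolution is ~7%; pinning the mean down to 0.1% (an assumed
# target) then takes ~5000 events, consistent with the O(10^3-10^4) above.
sigma_e = shower_resolution(0.10, 2.0)
print(round(sigma_e, 3), events_for_mean_precision(sigma_e, 0.001))  # -> 0.071 5000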

PID

Measuring the "fake" rate, i.e. particle mis-identification, is important for certain physics to be addressed by the experiment (cf. proton decay). If the probability of mis-PID is "p", then its statistical uncertainty can be expressed as sqrt(p*(1-p)/n). This can also be understood in terms of precise measurements of the "tails" of certain distributions. If we are looking at mis-identification probabilities of the order of 10⁻⁶, this translates into quite substantial statistics. At the time of writing we need more guidance in this area, but in general it appears that in this case we would indeed be motivated to take as much data as practically feasible. This means that we will aim to take a few million events in each of a few momentum bins (TBD). Reading: Nucleon Decay Searches
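
A minimal numerical sketch of the same binomial argument: the mis-ID probability of 10⁻⁶ comes from the text, while the sample sizes and the function name are illustrative assumptions.

import math

def misid_uncertainty(p, n):
    """Absolute statistical uncertainty on a mis-ID probability p
    measured with n beam events (binomial approximation)."""
    return math.sqrt(p * (1.0 - p) / n)

p = 1e-6
for n in (1e5, 1e6, 1e7):
    rel = misid_uncertainty(p, n) / p   # relative uncertainty on p
    print(f"n = {n:.0e}: relative uncertainty on p ~ {rel:.0%}")

# At p = 1e-6 one expects only ~1 mis-identified event per 1e6 beam events,
# i.e. ~100% relative uncertainty; hence the push to take as much data as
# practically feasible (a few million events per momentum bin).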

Systematics

Let's assume we want to measure a signal with 5% precision and that the signal-to-background ratio is 1:10. To provide this scale of accuracy, we will need ~10⁵ events in the momentum bin of interest.
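
One way to arrive at a number of this order (a sketch under an assumed counting model, not necessarily the calculation behind the quoted figure): if the background under the signal is subtracted using an equal-size control sample, the uncertainty on the subtracted signal is sqrt(S + 2B), and requiring sqrt(S + 2B)/S = 5% with S:B = 1:10 gives roughly 10⁵ total events.

def events_needed(s_over_b, rel_precision):
    """Total events N such that sqrt(S + 2B)/S = rel_precision,
    with S = N * s_over_b/(1 + s_over_b) and B = N - S
    (background subtracted using an equal-size control sample)."""
    frac_s = s_over_b / (1.0 + s_over_b)   # signal fraction of the sample
    # sqrt(S + 2B)/S = sqrt(2 - frac_s) / (frac_s * sqrt(N))  =>  solve for N
    return (2.0 - frac_s) / (frac_s * rel_precision) ** 2

print(f"{events_needed(0.1, 0.05):.1e}")   # -> ~9e+04, i.e. ~10^5 events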