* [[35t DAQ]]
 
= Recent documents =  
 
* [http://lbne2-docdb.fnal.gov:8080/cgi-bin/ShowDocument?docid=9677 DocDB 9677, Proposed Run Modes for the 35t test]

= Run Modes (per DocDB 9677) =

{| class="wikitable"
! No. !! Mode !! Description !! Comment
|-
| 1 || Continuous main DAQ mode || Similar to the final 'triggerless' far detector running mode, i.e. a parallel trigger farm looks for nice muons in real time. Needs zero suppression (ZS) of the TPC data to work. || ---
|-
| 2 || Triggered main DAQ mode || Main selection is the external trigger counter. || ---
|-
| 3 || Immediate triggered mode || Use TOC triggered mode (only outputs data in a window) or SSP triggered mode to avoid a bottleneck. || This is a fallback option if there is a bottleneck in either the RCE or SSP data.
|-
| 4 || Triggered window mode || TPC in triggered, non-zero-suppressed mode. || Use before we are happy with ZS in the RCE.
|-
| 5 || Wide-window burst mode || TPC in burst mode with the longest window possible (512 MB buffer = 0.5 s). || Special run mode for offline ZS studies (see the estimate after the table).
|-
| 6 || Burst mode || Collect data in a certain time window (determined either from a NOvA time hash function or from the Penn board trigger input). || Fallback before ZS works in the RCE; this is also the mode for measuring noise, i.e. the mode for the November vertical slice test.
|}
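
The window quoted for mode 5 follows from a simple rate calculation: buffer size divided by the non-zero-suppressed TPC data rate. The sketch below shows that arithmetic; the channel count, sampling rate, and bytes per sample are illustrative assumptions, not values taken from this page or DocDB 9677.

<syntaxhighlight lang="python">
# Back-of-the-envelope estimate of how long a fixed-size buffer can hold
# non-zero-suppressed TPC data.  All detector parameters are assumptions
# used only to illustrate the calculation.

BUFFER_BYTES = 512 * 1024 * 1024   # 512 MB buffer quoted for mode 5
CHANNELS = 256                     # assumed channels feeding this buffer
SAMPLING_RATE_HZ = 2_000_000       # assumed 2 MHz ADC sampling
BYTES_PER_SAMPLE = 2               # assumed 2 bytes per unpacked sample

rate_bytes_per_s = CHANNELS * SAMPLING_RATE_HZ * BYTES_PER_SAMPLE
window_s = BUFFER_BYTES / rate_bytes_per_s

print(f"Assumed data rate : {rate_bytes_per_s / 1e6:.0f} MB/s")
print(f"Burst window      : {window_s:.2f} s")   # ~0.5 s with these numbers
</syntaxhighlight>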

= Issues =

* Purposes and architecture of the proposed online farm
* Design of the metadata
* Handling TPC stream data within a framework that is trigger-oriented; issues of timing and of multiple interactions in temporally close slices
* Possible need to duplicate parts of the raw data to create overlaps, in order to make operation of the online farm practical (see the sketch after this list)
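
To make the overlap idea concrete, the sketch below carves a continuous stream of ticks into fixed-length slices that share a configurable overlap, so activity near a slice boundary is fully contained in at least one slice. The slice length, overlap size, and function name are illustrative assumptions, not part of the 35t design.

<syntaxhighlight lang="python">
def overlapping_slices(n_ticks, slice_len, overlap):
    """Yield (start, end) tick ranges for fixed-length slices that share
    `overlap` ticks with their neighbours.  The extra data duplicated is
    a fraction overlap / (slice_len - overlap) of the raw stream."""
    step = slice_len - overlap
    start = 0
    while start < n_ticks:
        yield (start, min(start + slice_len, n_ticks))
        start += step

# Example: 10,000 ticks, 3,000-tick slices, 500-tick overlap (20% duplication)
for lo, hi in overlapping_slices(10_000, 3_000, 500):
    print(lo, hi)
</syntaxhighlight>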

= Misc Questions =

In no particular order, these are the questions pertaining to the DAQ and its interface with the offline:

* what is the low-level DAQ file format/encoding (ROOT? binary?)
* what schema does the content of the DAQ files follow?
* where is a diagram showing all the parts of the DAQ data stream with their names (millislice, microblock, tick, "trigger", etc.)?
* what are the start/stop criteria that define the highest-level "chunk" of data (e.g. a "trigger")?
* how does this "chunk" correspond to a trigger for each expected trigger criterion?
* what time ordering is expected of data coming out of the DAQ, particularly between disparate sources (e.g. wires/PDs)?
* what unit of data "chunk" ("event") is required and desired for offline analysis?
* what offline analysis decisions/limitations can we, as a collaboration, be comfortable "baking in" to this choice of unit?
* how are DAQ data "chunks" (triggers) numbered? How are offline data "chunks" ("events") numbered?
* what is the mapping from the former to the latter? (see the sketch after this list)
* what changes to art and/or larsoft are needed to accommodate the above answers?
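
The numbering and mapping questions above point at some explicit bookkeeping between the two numbering schemes. The sketch below only illustrates the kind of many-to-many map being asked about; the class, its methods, and the (run, trigger) / (run, subrun, event) tuples are assumptions, not the actual DAQ or art/larsoft convention.

<syntaxhighlight lang="python">
from collections import defaultdict

class TriggerEventMap:
    """Record which offline events were built from which DAQ triggers.

    One DAQ trigger may be split into several offline events, and one
    offline event may draw on several triggers (e.g. overlapping slices),
    so the relation is kept many-to-many in both directions."""

    def __init__(self):
        self._events_by_trigger = defaultdict(set)
        self._triggers_by_event = defaultdict(set)

    def link(self, daq_trigger, offline_event):
        self._events_by_trigger[daq_trigger].add(offline_event)
        self._triggers_by_event[offline_event].add(daq_trigger)

    def events_for(self, daq_trigger):
        return sorted(self._events_by_trigger[daq_trigger])

    def triggers_for(self, offline_event):
        return sorted(self._triggers_by_event[offline_event])

# Example: DAQ trigger 42 of run 5 is split into two offline events.
m = TriggerEventMap()
m.link((5, 42), (5, 1, 7))    # (run, trigger) -> (run, subrun, event)
m.link((5, 42), (5, 1, 8))
print(m.events_for((5, 42)))  # [(5, 1, 7), (5, 1, 8)]
</syntaxhighlight>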