ProtoDUNE DAQ

Baseline Plan

  • We keep the basic artdaq process roles the same initially, with the addition of support for multiple data-logging Aggregators:
    • BoardReaders are responsible for configuring upstream hardware, reading out data fragments, and sending the fragments to EventBuilders. The number and location of BoardReader processes will be matched to the upstream electronics in a reasonable way based on data volumes and network connections between the electronics and the computers on which we run the BoardReaders.
    • EventBuilders are responsible for assembling full events and running compression or analysis modules (art modules).
    • Aggregator(s) are responsible for writing data to disk and serving events to online monitoring.
  • We choose the number of EventBuilders based on the processing load, and we choose the number of disk-writing Aggregators based on the number of streams that we need to write. The layout of the EventBuilders will be based on the available processing power on the DAQ nodes. The layout of the Aggregators will be based on which computers have the disks installed and any other hardware boundary conditions that apply.
    • Keeping the processing function separate from the disk-writing function allows us to size each piece appropriately. (Yes, I recall that I proposed writing data from EBs originally, and I'll admit that I'm currently thinking about a different model.)
  • In preparation for protoDUNE running, we run tests which either verify that this model will work or provide clear guidance on what needs to be changed in the model. (We have dummy art modules that simply consume CPU for a configurable amount of time, so we can simulate processing loads in the EBs.)
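
The dummy art modules mentioned in the last bullet simply consume CPU for a configurable time per event. As a rough illustration of that idea, the following standalone C++ sketch shows the core of such a load generator; it is not the actual artdaq module, and the function and parameter names are invented for this example. In the real modules, the per-event duration is a configurable parameter rather than a hard-coded value.

 // Standalone sketch of a configurable CPU-burning routine, similar in spirit
 // to the dummy art modules used to simulate EventBuilder processing load.
 // This is NOT the actual artdaq module; names and structure are invented.
 #include <chrono>
 #include <cstdint>
 #include <iostream>
 
 // Spin the CPU for approximately the requested time.  In the real modules the
 // duration comes from the module configuration.
 void burn_cpu_for(std::chrono::milliseconds duration)
 {
     auto const start = std::chrono::steady_clock::now();
     volatile std::uint64_t sink = 0;   // volatile so the loop is not optimized away
     while (std::chrono::steady_clock::now() - start < duration) {
         sink = sink + 1;
     }
 }
 
 int main()
 {
     // Simulate ten events, each needing ~50 ms of "processing" in an EB.
     for (int event = 0; event < 10; ++event) {
         burn_cpu_for(std::chrono::milliseconds{50});
         std::cout << "processed event " << event << '\n';
     }
 }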

Artdaq processes

Q/A

  • With regard to the layout of the artdaq processes, some sample questions include the following:
    • Q. What maximum data rate to disk needs to be supported? (This is related to the performance of the disks that are purchased, of course.)
    • A. 230 MB/s (TBD)
    • Q. Do the accepted events need to be sequential in a single file stream?
    • A. This is being discussed now; the likely outcome is that this requirement will be lifted
    • Q. Are there any special requirements on the fraction of the events that are seen by online monitoring, or is any pseudo-random subset acceptable?
    • A. TBD
    • Q. How much processing (e.g. compression or filtering) needs to be done in the software? (cf. the number and performance of computers that are purchased)
    • A. TBD
  • With regard to the purchase of computer servers, network switches, and disks, the usual questions apply:
    • Q. What event and data rates need to be supported?
    • A. See DUNE DocDB 1212
    • Q. How much software processing (compression, filtering, etc.) needs to be done?
    • Q. What is the required data rate to disk?
    • Q. If the optimal combination of servers, switches, and disks can not be purchased because of financial concerns, which requirements should be relaxed first?
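
To make the sizing questions above concrete, here is a back-of-the-envelope sketch of how the number of disk-writing streams, and hence Aggregators, falls out of the required aggregate rate. The 230 MB/s figure is taken from the Q/A above; the per-stream write bandwidth and headroom factor are assumed placeholders, not measured numbers for the protoDUNE hardware.

 // Back-of-the-envelope sizing sketch: how many disk-writing streams (and hence
 // Aggregators) are needed for a given aggregate rate to disk?  The per-stream
 // bandwidth and headroom values are ASSUMED placeholders, not measurements.
 #include <cmath>
 #include <iostream>
 
 int main()
 {
     double const total_rate_MB_per_s = 230.0;  // required aggregate rate to disk (see Q/A above)
     double const per_stream_MB_per_s = 100.0;  // assumed sustained write rate of one disk stream
     double const headroom            = 1.5;    // assumed safety margin for bursts and overheads
 
     int const streams_needed = static_cast<int>(
         std::ceil(total_rate_MB_per_s * headroom / per_stream_MB_per_s));
 
     std::cout << "disk-writing streams (Aggregators) needed: " << streams_needed << '\n';
     // With these assumptions: ceil(230 * 1.5 / 100) = 4 streams.
 }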

Functionality

Moving functionality between the different types of artdaq processes can certainly be done, but we need to determine that it is necessary before anyone does that work.

  • Regarding putting Huffman encoding in the BR: we already have an art module that does Huffman encoding in the art thread of the EB (a generic encoding sketch is shown after this list). We could do this in the BR, but we would need to decide whether to duplicate the code or add an art thread to the BR so that it could run the existing code. (The latter has come up in other contexts, but again, I'd like to understand that it is really necessary before we do the work.)
  • To add a layer that is dedicated to Huffman encoding, I would imagine adding an earlier layer of EBs to do this. But first we should learn whether a suitably sized single EB layer could do that job.
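
For reference, the sketch below is a textbook Huffman-coding construction in plain C++, shown only to make the placement discussion concrete. It is not the existing art module; the toy input data, symbol type, and all names are assumptions for illustration.

 // Generic Huffman-coding sketch, illustrating the kind of work the existing
 // compression art module performs in the EventBuilder.  This is a textbook
 // construction, NOT the actual module; the toy "ADC sample" input and all
 // names are invented for illustration.
 #include <cstdint>
 #include <iostream>
 #include <map>
 #include <memory>
 #include <queue>
 #include <string>
 #include <vector>
 
 struct Node {
     std::uint64_t weight;
     int symbol;      // leaf symbol; -1 for internal nodes
     Node* left;
     Node* right;
 };
 
 // Order nodes so that the priority queue pops the lowest weight first.
 struct ByWeight {
     bool operator()(Node const* a, Node const* b) const { return a->weight > b->weight; }
 };
 
 // Walk the tree and record the bit string assigned to each leaf symbol.
 void collect_codes(Node const* n, std::string const& prefix,
                    std::map<int, std::string>& codes)
 {
     if (n->left == nullptr && n->right == nullptr) {
         codes[n->symbol] = prefix.empty() ? "0" : prefix;  // single-symbol edge case
         return;
     }
     collect_codes(n->left,  prefix + "0", codes);
     collect_codes(n->right, prefix + "1", codes);
 }
 
 int main()
 {
     // Toy "ADC sample" stream; the real input would be raw data fragments.
     std::vector<int> samples = {5, 5, 5, 5, 6, 6, 7, 5, 6, 5, 8, 5};
 
     // 1. Count symbol frequencies.
     std::map<int, std::uint64_t> freq;
     for (int s : samples) ++freq[s];
 
     // 2. Build the Huffman tree with a min-heap; an arena owns every node.
     std::vector<std::unique_ptr<Node>> arena;
     auto make_node = [&arena](std::uint64_t w, int sym, Node* l, Node* r) {
         arena.push_back(std::make_unique<Node>(Node{w, sym, l, r}));
         return arena.back().get();
     };
     std::priority_queue<Node*, std::vector<Node*>, ByWeight> heap;
     for (auto const& entry : freq) heap.push(make_node(entry.second, entry.first, nullptr, nullptr));
     while (heap.size() > 1) {
         Node* a = heap.top(); heap.pop();
         Node* b = heap.top(); heap.pop();
         heap.push(make_node(a->weight + b->weight, -1, a, b));
     }
 
     // 3. Derive the code table and encode the stream as a bit string.
     std::map<int, std::string> codes;
     collect_codes(heap.top(), "", codes);
     std::string encoded;
     for (int s : samples) encoded += codes[s];
 
     for (auto const& entry : codes)
         std::cout << "symbol " << entry.first << " -> " << entry.second << '\n';
     std::cout << "encoded " << samples.size() << " samples into "
               << encoded.size() << " bits\n";
 }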
