CERN Storage

ATLAS

TDAQ

  • ATLAS TDAQ dataflow in Run 1: http://iopscience.iop.org/1742-6596/331/2/022007/pdf/1742-6596_331_2_022007.pdf

Note: some of these links may be restricted to users associated with the respective LHC experiments. This will be resolved at a later date (i.e. the relevant public information will be extracted, reduced and systematized).

ATLAS SFO

  • "SFO" means subfarm output processor. It's function is to receive event data which passed triggers of all levels and assemble it before committing to mass storage

managed by CERN central services. SFO nodes use XFS file system for data storage.
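
The sketch below (Python, illustrative only and not ATLAS code) mimics the flow just described: accepted events are buffered, assembled into a file on a local volume standing in for the XFS storage, and the closed file is then handed off to a directory standing in for mass storage. All paths, file names and the events-per-file threshold are assumptions.

  # Illustrative sketch only -- not ATLAS SFO code. Accepted events are buffered,
  # assembled into a file on local storage (an XFS volume in the real system),
  # and the closed file is handed off to a stand-in for mass storage.
  import os
  import shutil

  LOCAL_BUFFER = "sfo_buffer"        # stand-in for a local XFS volume
  MASS_STORAGE = "mass_storage_in"   # stand-in for the CERN mass-storage hand-off
  EVENTS_PER_FILE = 1000             # illustrative assembly threshold

  def assemble_and_commit(event_stream):
      """Group accepted events into files, then move each closed file onward."""
      os.makedirs(LOCAL_BUFFER, exist_ok=True)
      os.makedirs(MASS_STORAGE, exist_ok=True)
      buffer, file_index = [], 0
      for event in event_stream:     # events that passed all trigger levels
          buffer.append(event)
          if len(buffer) >= EVENTS_PER_FILE:
              path = os.path.join(LOCAL_BUFFER, f"run_file_{file_index:05d}.dat")
              with open(path, "wb") as f:
                  for ev in buffer:
                      f.write(ev)
              # Hand the completed file off to "mass storage" (here: a simple move).
              shutil.move(path, os.path.join(MASS_STORAGE, os.path.basename(path)))
              buffer, file_index = [], file_index + 1

  if __name__ == "__main__":
      fake_events = (b"\x00" * 64 for _ in range(2500))  # fake event payloads
      assemble_and_commit(fake_events)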

ATLAS Dynamic Data Management

Misc ATLAS Storage

CMS

  • "storman" - the system that moves data from the detector to mass storage
  • github repository
  • in CMS, information about the status of raw data files in the processing chain is kept in a dedicated database, but such files are not registered in the metadata sense (see the sketch after this list)
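
As a rough illustration of keeping raw-file status in a dedicated database (while the files themselves are not registered in a metadata catalog), here is a minimal Python/SQLite sketch. The table layout, state names and file paths are assumptions, not the CMS schema.

  # Minimal sketch of a raw-file status database, assuming a simple state
  # progression (NEW -> TRANSFERRING -> ON_TAPE). This is NOT the CMS schema;
  # it only illustrates tracking file status outside of a metadata catalog.
  import sqlite3

  conn = sqlite3.connect("rawfile_status.db")
  conn.execute(
      """CREATE TABLE IF NOT EXISTS raw_files (
             lfn    TEXT PRIMARY KEY,   -- logical file name
             status TEXT NOT NULL       -- e.g. NEW, TRANSFERRING, ON_TAPE
         )"""
  )

  def register(lfn):
      """Record a freshly written raw file."""
      conn.execute("INSERT OR IGNORE INTO raw_files VALUES (?, 'NEW')", (lfn,))
      conn.commit()

  def advance(lfn, new_status):
      """Move a file to the next stage of the processing chain."""
      conn.execute("UPDATE raw_files SET status = ? WHERE lfn = ?", (new_status, lfn))
      conn.commit()

  register("/store/data/run123/file001.raw")        # placeholder path
  advance("/store/data/run123/file001.raw", "TRANSFERRING")
  advance("/store/data/run123/file001.raw", "ON_TAPE")
  print(conn.execute("SELECT * FROM raw_files").fetchall())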

CERN Data Storage

General CERN Storage Info

  • In the early 2000s, the CASTOR system was deployed at CERN to provide a front end to mass storage in the form of both tape and disk pools. In the early 2010s, the disk pools were largely migrated to EOS, a newer, high-performance system based on XRootD with better functionality for managing large disk pools. CASTOR is still used for custodial data on tape. A client-side access sketch follows this list.
  • Overview of LHC storage operations at CERN
  • CASTOR
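
Since EOS is accessed over the XRootD protocol, a client-side copy is typically done with an XRootD URL. The Python sketch below shells out to the standard xrdcp client (assumed to be installed); the file path is a placeholder, not a real dataset.

  # Sketch of copying a file from an EOS instance over the XRootD protocol,
  # assuming the xrdcp command-line client is installed.
  import shutil
  import subprocess

  EOS_URL = "root://eospublic.cern.ch//eos/experiment/some/path/file.root"  # placeholder
  LOCAL_COPY = "file.root"

  if shutil.which("xrdcp") is None:
      raise SystemExit("xrdcp not found; install the XRootD client tools first")

  # xrdcp <source> <destination> copies a file over XRootD.
  result = subprocess.run(["xrdcp", EOS_URL, LOCAL_COPY], capture_output=True, text=True)
  if result.returncode == 0:
      print("copied to", LOCAL_COPY)
  else:
      print("xrdcp failed:", result.stderr)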