DUNE xrootd

Documentation

dCache/xrootd

  • A comprehensive review of dCache/xrootd. This document is quite relevant as it explains how dCache storage at FNAL is equipped with an "xroot door", exposing it to the outside world via the xrootd protocol.

A pedestrian view of running an xrootd service

There is more than one way to start the xrootd service (see the documentation). The most primitive way is to start the requisite daemon processes from the command line. A few details are given below.

Starting the xrootd daemon by itself is enough to serve data from a single node.

xrootd -c xr1.cfg /path/to/data
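
Here the exported path is given on the command line; it can equally be set in the configuration file. A minimal xr1.cfg for a standalone server might be just the following sketch (the path and port are placeholders; 1094 is the conventional xrootd port):

all.export /path/to/data
xrd.port 1094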

This can be tested by using the xrdcp client from any machine from which the server is accessible, e.g.

xrdcp myFile.txt root://serverIP//path/to/data
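
The file can then be read back the same way, with source and destination swapped:

xrdcp root://serverIP//path/to/data/myFile.txt /tmp/myFile.txt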

In a clustered environment, you also need to start the cluster manager daemon, e.g.

xrootd -c xr1.cfg /path/to/data
cmsd -c xr1.cfg /path/to/data
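
Both the data servers and the manager (redirector) node run this pair of daemons; the role of each node is set in the configuration file. As a sketch, assuming the manager address used in the server example below, the manager-side configuration might look like:

all.role manager
all.manager 192.168.0.191:3121
all.export /home/maxim
xrd.port 1094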

An example of a working configuration file suitable for a server node (not for the manager node):

all.role server
all.export /home/maxim
all.manager 192.168.0.191:3121
xrd.port 1094
acc.authdb /home/maxim/auth_file

In the example above, the IP address of the manager is arbitrary; it needs to be set to the actual address of the manager node.
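
The file referenced by the acc.authdb directive contains the authorization rules, one per line, in the form "idtype id path privs". A minimal sketch, with placeholder user and paths (privileges: a = all, r = read, l = lookup):

u maxim /home/maxim a
u * /home/maxim/public rl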

xrootd@BNL

Currently there is a small DUNE cluster (for historical reasons named the "lbne cluster") at Brookhaven National Lab, under the umbrella of the RACF (RHIC and ATLAS Computing Facility). The machines have names like lbne0001, etc. Xrootd software is deployed on all of these. To utilize it, the user needs to be authenticated with an X.509 certificate by the xrootd service and authorized to access it by the system administrators (please contact Brett Viren or Maxim Potekhin for further information).

Once authorized on the site, the user will need to use the following commands to obtain a Grid proxy:

setenv GLOBUS_LOCATION /afs/rhic.bnl.gov/@sys/opt/vdt/globus
source $GLOBUS_LOCATION/etc/globus-user-env.csh
grid-proxy-init

...and enter the passphrase as required. This will make sure the user can be authenticated to the xrootd service and is allowed to use it.
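
The resulting proxy can be inspected, e.g. to check its remaining lifetime, with the standard Globus client:

grid-proxy-info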

The following is an example of a shell command that will transfer a single file from the BNL xrootd service to local disk:

xrdcp root://lbnelrd.rcf.bnl.gov//lbne/mc/lbne/simulated/001/singleparticle_antimu_20140801_Simulation1.root \
/tmp/singleparticle_antimu_20140801_Simulation1.root

Possible xrootd architecture for the medium term

The idea behind the architecture proposed here is to achieve federation of storage, and of access to data, across a few data centers (e.g. national labs) with a modest amount of effort and resources. In this approach, federation is effectively achieved by using a "global redirector", which allows xrootd services to locate a particular piece of data anywhere within the federation.

[Image: Xrootd-arch.png, diagram of the proposed federation architecture]
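
In xrootd terms the global redirector is a "meta-manager" to which each site's local redirector subscribes. A configuration sketch, assuming a hypothetical meta-manager host global.example.org (1213 is the conventional meta-manager port):

# on the global redirector
all.role meta manager
all.manager meta global.example.org:1213

# on each site redirector (in addition to its local cluster settings)
all.role manager
all.manager meta global.example.org:1213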

Misc

For xrootd we can have global paths like:

root://data.<tbd>.org/path/to/file.root

But in the future we may want to serve data files over other protocols within the same domain/namespace, e.g.:

http://data.<tbd>.org/path/to/file.root

Since the two protocols are served on different ports, this should be okay.
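
As a sketch of how this might be configured, xrootd can serve both protocols from the same daemon by loading the XrdHttp protocol plugin on its own port (the library name/path may vary by installation):

xrd.port 1094
xrd.protocol http:8080 libXrdHttp.so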

Back to Main Page (DUNE)

Back to DUNE Computing