Basic XRootD

Disclaimer

The information below is not meant to replace the XRootD documentation. It may be helpful for experimenting with a small xrootd cluster and getting familiar with the basics of XRootD configuration.

Installing or Building XRootD

The XRootD team provides RPM packages for installation on Scientific Linux. If your OS is not supported in this manner, building from source is a workable option and is not difficult. Download the archive and follow the instructions on the official XRootD site.

Be sure to consult the README file included in the archive. One small caveat: the "source directory" mentioned in the README is not "src", as most people would expect, but the directory one level above it (i.e. the directory which contains the unzipped content of the archive you downloaded).

A recent version of CMake is required for the build. The build script may be finicky when testing the features of the C++ compiler found on the system, so you may need to upgrade the compiler or install an alternative one and reconfigure PATH and other elements of your setup accordingly. Expect to take care of a few dependencies as well, for example installing packages such as zlib.
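
A minimal out-of-source build might look like the following sketch. The archive name and version are placeholders and the install prefix is an assumption; consult the README for the options appropriate to your version.

tar xzf xrootd-X.Y.Z.tar.gz
cd xrootd-X.Y.Z
mkdir build && cd build
# here ".." is the "source directory" in the sense of the README, i.e. one level above "src"
cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local
make
sudo make install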

Running XRootD

Controlling your cluster

Basic experimentation with XRootD can start with a few machines which are idle or aren't heavily loaded by other applications. It is very helpful to have the "sudo" privilege (e.g. for installation in standard directories and for configuring auxiliary services), but strictly speaking this is not necessary. Some Linux tips helpful for controlling your cluster can be found on the Linux Tools page.
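
For example, a plain ssh loop is often enough to run the same command on every node. This is only a sketch: the hostnames are placeholders and working ssh keys are assumed.

# run the same command on each node in turn
for host in node1 node2 node3; do
    ssh $host "uptime"
done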

Configuration and Log Files

In most cases you will want to run two processes on each machine, "cmsd" and "xrootd". Each of them will need a proper configuration file; see the corresponding sections below. Be sure to include paths for log files in the configuration so the log information is readily available for debugging.

Starting a simple instance of the xrootd service

There is more than one way to start the xrootd service (see the documentation). The simplest is to start the requisite daemon processes from the command line. Starting the xrootd daemon by itself is enough to serve data from a single node (i.e. without creating a storage cluster):

xrootd -c configFile.cfg /path/to/data &

The "/path/to/data" denotes the designated directory from which the server is allowed to serve data. In xrootd terminology, this path is "exported". In the above example the file configFile.cfg contains the rest of the configuration. Without options or a config file present, xrootd will still run, and simple defaults will be assumed which allow for testing, but such a setup isn't likely to be useful for any real application. For example, if you just type "xrootd" at the OS prompt, you will get a service running on that computer with the following limitations:

  • the directory /tmp will be exported, for security reasons
  • there will be no clustering, just this one server ready to serve the data

This, however, allows for straightforward basic testing (e.g. whether the build works) right out of the box.
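
As a sketch of such a test on a single machine (assuming the default port and the default /tmp export):

# serve with defaults (/tmp exported), then copy a file back through the server
xrootd &
echo "hello" > /tmp/hello.txt
xrdcp root://localhost//tmp/hello.txt /tmp/hello_copy.txt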

In addition to the command line option, the path to be exported can also be defined in the configuration file (which is preferable), in which case it's not necessary to put it on the command line. See the corresponding section below.
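
For example, a single directive in the configuration file replaces the command-line argument shown above:

all.export /path/to/data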

The "-b" option will start the process in the background, and the "-l" option can be used to specify the path to the log file (otherwise stderr will be used). Examples:

cmsd -b -l /path/to/log/cmsd.log -c client.cfg
xrootd -b -l /path/to/log/xrootd.log -c client.cfg

The "cmsd" is the clustering daemon, which is explained in one of the following sections.


If the "path to data" is not explicitly defined, xrootd will default to /tmp, which might work for initial testing but isn't practical otherwise. Whether xrootd is running as expected can be tested by using the xrdcp client from any machine from which the server is accessible, e.g.

xrdcp myFile.txt root://serverIP//path/to/data

Clustering

In a clustered environment, you also need to start the cluster manager daemon, e.g.

xrootd -c configFile.cfg /path/to/data &
cmsd -c configFile.cfg /path/to/data &

Alternatively,

cmsd -b -l /path/to/log/cmsd.log -c client.cfg
xrootd -b -l /path/to/log/xrootd.log -c client.cfg

...in which case the log files are explicitly defined on the command line (as opposed to the default stderr) and the processes run as daemons. In this example the exported path is defined in the config file, so there is no need to put it on the command line.

The data in the cluster is exposed through the manager node, whose address is to be used in queries. Example:

xrdcp -f root://managerIP//my/path/foo local_foo

The file "foo" will be located and, if it exists, copied to "local_foo" on the machine running the xrdcp client. Caveat: if multiple files exist in the system under the same path, which one gets fetched is arbitrary.

Configuration File

An example of a working configuration file suitable for a server node (not for the manager node):

all.role server
all.export /path/to/data
all.manager 192.168.0.191:3121
xrd.port 1094
acc.authdb /path/to/data/auth_file

In the example above the IP address of the manager needs to be set correctly; the address shown is arbitrary.

authdb

The "authdb" bit is important: things mostly won't work without proper authorization (quite primitive in this case, as it relies on a file listing the permissions). If all users are given access to all data, the content of the file can be as simple as

u * /path/to/data lr
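
As a sketch of a slightly more restrictive file (the username is hypothetical), one could give a single user full privileges while everyone else gets read-only access:

u alice /path/to/data a
u * /path/to/data lr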

Redirector

The redirector coordinates the function of the cluster. For example, it locates the data based on the path given by clients such as xrdcp, without the client having to know which node contains this bit of data. A crude (but working) example of the redirector configuration (for cmsd):

all.manager managerIP:3121
all.role manager
xrd.port 3121
all.export /path/to/data
acc.authdb /path/to/data/auth_file

Note the port number. This is not the data port but the service port used for communication inside the cluster (e.g. for metadata).

The configuration of the xrootd daemon on the same (manager) node might look like this:

all.manager managerIP:3121
all.role manager
xrd.port 1094
all.export /path/to/data
acc.authdb /path/to/data/auth_file

The simplest way to initialize the redirector service on this node is as follows:

xrootd -c server.cfg /path/to/data &
cmsd -c redir.cfg /path/to/data &

xrdfs

File Info

xrdfs provides filesystem-like functionality. Examples:

xrdfs managerIP ls -l /my/path
xrdfs managerIP ls -u /my/path

In the above, the first command performs similarly to "ls -l" in a Linux shell, while the second prints the URLs of the files.

The following command locates the path, i.e. returns the address(es) of the server(s) which physically hold it (there can be multiple machines):

xrdfs managerIP locate /my/path

Adding the "-r" option will force the server to refresh, i.e. to perform a fresh query. Otherwise, a cached result will be used if one exists.
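
For example:

xrdfs managerIP locate -r /my/path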

The "stat" command provides information similar to the Linux "stat" utility:

xrdfs managerIP stat /my/path

The "rm" command does what the name suggests, with the usual caveat that if the same path is present on several machines, the result is arbitrary: only one of the replicas will be deleted per invocation.
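
Example:

xrdfs managerIP rm /my/path/to/file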

Host Info

The "query config" command reports a host's configuration values, for example its role in the cluster:

xrdfs hostIP query config role
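
Other configuration variables can be queried the same way; for example (assuming the variable below is supported by your version):

xrdfs hostIP query config version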

Checksum

XRootD hosts can report checksums for files, with a few checksum algorithms available. To enable this on a host, a special line needs to be added to the configuration file, for example:

xrootd.chksum md5

As usual, it is only necessary to query the redirector to obtain this information via the xrdfs client:

xrdfs managerIP query checksum /my/path/to/file
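
xrdcp can also make use of checksums during transfers. In reasonably recent client versions something like the line below prints the checksum of the transferred file; the exact flag is an assumption here, so check xrdcp --help for your version.

xrdcp --cksum md5:print root://managerIP//my/path/to/file local_copy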

Hardware Options

  • Neut Cluster at CERN: https://twiki.cern.ch/twiki/bin/view/CENF/NeutrinoClusterCERN