CVMFS


Introduction

The CERN Virtual Machine File System (CVMFS) is a distributed file system that provides an experiment's code and libraries to interactive nodes and grid sites worldwide. It is used by CMS and ATLAS as well as most experiments at FNAL.

The code manager copies a code release to a CVMFS work space and "publishes" it. This process examines the code, compresses it, and inserts it into a database. The original database is called the tier 0 copy. Remote sites may support tier 1 copies of the database, synced to the tier 0.

The user's grid job sees a mounted CVMFS disk containing a copy of the experiment's code, which can be accessed in any way the code would be accessed on a standard disk. The disk is actually a custom network file system with a small (~8 GB) local cache on the node and a backend that sends file requests to a squid web cache. The squid may get its data from a tier 1 database, if available, or from the tier 0. As a practical matter, most grid jobs do not access much of a release, usually just a small set of shared-object libraries, and these end up cached on the worker node or on the squid, thereby avoiding a long-distance network transfer.
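You can inspect the client-side cache and the squid proxy in use on a node with the standard CVMFS client tool (a hedged suggestion; the exact output fields vary with the CVMFS client version):

cvmfs_config stat mu2e.opensciencegrid.org

Among other things, this reports the current cache usage and the proxy the client is talking to.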

CVMFS is efficient only for distributing code and small data files that are required by a large number of nodes on the grid. Datasets, such as event data files, are the opposite case: many files, each sent to only one node during a grid job. CVMFS is not efficient for this type of data distribution or for this data volume. Data files should be distributed through dCache, which is designed to deliver each file to one node and to handle the data volume. A single large file that is to be distributed to all nodes also needs to be avoided, since it would churn or overflow the small local caches. Examples of this sort of file are the Genie flux files or a large analysis fit-template library. The lab is developing an Alien Cache feature for CVMFS to address this sort of file. The mu2e B-field files and stopped-muon ntuples are distributed by CVMFS, but are about at the limit of appropriate file size.

The mu2e CVMFS partition is available on all grid sites that mu2e can submit to. The Fermigrid local farm and interactive nodes have the same setup, so you do not need to modify your script for onsite versus offsite. CVMFS is mounted on the mu2e interactive nodes, and is the default source for code and products.

Using CVMFS

Here is the recommended setup sequence, which will work on interactive nodes and in all grid jobs.
source /cvmfs/fermilab.opensciencegrid.org/products/common/etc/setups
setup mu2e
Note that the source of the products/common setups file may already have been done by the grid startup procedure; sourcing it a second time causes no known problems.

The second method, which is not preferred, is to simply source the mu2e startup script:

source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh

To set up a base release of the mu2e Offline code (after choosing the version, SLF5/6 flavor, and prof/debug build):

source /cvmfs/mu2e.opensciencegrid.org/Offline/v5_2_2/SLF6/prof/Offline/setup.sh

To summarize:

source /cvmfs/fermilab.opensciencegrid.org/products/common/etc/setups
setup mu2e
source /cvmfs/mu2e.opensciencegrid.org/Offline/v5_2_2/SLF6/prof/Offline/setup.sh

These recipes assume you have not done any other setup actions, but even if you have, "setup mu2e" or its equivalent should force your PRODUCTS path to point to CVMFS for mu2e and common products, no matter what.
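As a quick check (a minimal sketch; the exact directories depend on the release and product versions):

echo $PRODUCTS
# expect paths under /cvmfs/mu2e.opensciencegrid.org and /cvmfs/fermilab.opensciencegrid.org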

Occasionally, on remote OSG sites, we can land on a node that knows of the latest version of the CVMFS contents but still returns the next-to-latest version of the contents for one minute. To avoid this (rare) case, you can run these commands before accessing CVMFS:

/cvmfs/grid.cern.ch/util/cvmfs-uptodate /cvmfs/fermilab.opensciencegrid.org
/cvmfs/grid.cern.ch/util/cvmfs-uptodate /cvmfs/mu2e.opensciencegrid.org

Installing CVMFS

If you have root access, it is straightforward to install a read-only CVMFS client on a remote Linux system. For a small load (a few desktops), please use this recipe and set CVMFS_HTTP_PROXY=DIRECT. For a large installation like a farm, you will need to use these instructions and investigate a local "squid" cache for CVMFS_HTTP_PROXY.
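For the small-load case, a minimal sketch, assuming an RPM-based system (e.g. SL6/SL7) and the standard CVMFS yum repository (the package names and default.local settings shown are the usual ones, but check your site's conventions):

sudo yum install https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
sudo yum install cvmfs cvmfs-config-default
sudo cvmfs_config setup
# in /etc/cvmfs/default.local:
#   CVMFS_REPOSITORIES=mu2e.opensciencegrid.org,fermilab.opensciencegrid.org
#   CVMFS_HTTP_PROXY=DIRECT
sudo cvmfs_config probe mu2e.opensciencegrid.org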

It is preferred that all mu2e sites use this as a local code disk.

There was a Jan 2017 discussion on hypernews about installing cvmfs on SL7.

Filling CVMFS

These are the steps to copy code into the cvmfs repository and publish it so that it goes out to all sites. Since only base releases and products are distributed this way, this procedure is performed only by the code manager.

Here is some background information on maintaining cvmfs and its limitations.

Base releases are built by hand or on the Jenkins build server and installed on CVMFS (and on /mu2e/app/Offline if required). The download instructions give a good overview of how to pull art products, DataFiles, and setupmu2e-art.sh.

art products are distributed as a set called a manifest. The manifests are listed at scisoft.fnal.gov/bundles/mu; they tell you the arguments to the pullProducts script below. We use relocatable UPS products, which can also be pulled individually as tarballs.

Note! G4beamline is not pulled with pullProducts; it is not a real product. You will need to copy it separately, as shown below. Note! If you have ups set up from anywhere while running pullProducts, the script will not pull ups even if it is on the manifest ("unsetup ups" to avoid this).

To publish files on the mu2e CVMFS server, you will need to follow this pattern:

# login to the cvmfs server node (permission through the .k5login there)
ssh -X -l cvmfsmu2e oasiscfs.fnal.gov

# this tells the cvmfs server to record the changes you are about to make
cvmfs_server transaction mu2e.opensciencegrid.org

# easier to work from here, where the data is going
cd /cvmfs/mu2e.opensciencegrid.org

# copy the new files in here
rsync, pullProducts, wget, etc.

# NOTE!!!! do not "mv" any files or dirs, instead delete and recopy if necessary
# (it will hang). This is a bug in cvmfs as of 2/2015

# must cd out of the data dir or the next step will fail!
cd ~

# record and publish to all of cvmfs the changes just made
cvmfs_server publish mu2e.opensciencegrid.org

Here is an example of using rsync. In the rsync command, if the source is the name of a directory and that name has a trailing slash, rsync interprets this to mean the contents of the directory, not the directory itself.
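A minimal illustration of the trailing-slash behavior (hypothetical src and dest directories):

rsync -a src dest     # creates dest/src/...
rsync -a src/ dest    # copies the contents of src directly into dest/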

ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org
rsync -aur rlc@mu2egpvm01:/mu2e/app/Offline/tmp/DataFiles .
rsync -aur rlc@mu2egpvm01:/mu2e/app/Offline/tmp/setupmu2e-art.sh .
rsync -aur rlc@mu2egpvm01:/grid/fermiapp/products/mu2e/artexternals/G4beamline artexternals
cd ~
cvmfs_server publish mu2e.opensciencegrid.org

Here is an example of pulling the products from an art manifest directly on the cvmfs host machine.

ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org/artexternals
~/pullProducts $PWD slf5 mu-v1_17_02 s20-e9 prof
rm -f *.bz2 *MANIFEST.txt
cd ~
cvmfs_server publish mu2e.opensciencegrid.org

Here is an example of pulling a release built on Jenkins directly on the cvmfs host machine. This will try to copy the SLF5 and SLF6, prof and debug builds.

ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org/Offline
~/copyFromJenkins.sh v5_5_1
# delete the intermediate .os and other temp files
~/cleanupRelease.sh $PWD/v5_5_1/SLF6/prof/Offline
~/cleanupRelease.sh $PWD/v5_5_1/SLF6/debug/Offline
~/cleanupRelease.sh $PWD/v5_5_1/SLF5/prof/Offline
~/cleanupRelease.sh $PWD/v5_5_1/SLF5/debug/Offline
cd ~
cvmfs_server publish mu2e.opensciencegrid.org

Here is an example of creating a link release: a release that consists mostly of symbolic links into an existing base release, with selected files replaced by real copies where needed.

ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org/OfflineSpecial
mkdir -p v5_7_9-cosmic_target5-v3/SLF6/prof/Offline
cd v5_7_9-cosmic_target5-v3/SLF6/prof/Offline
find /cvmfs/mu2e.opensciencegrid.org/Offline/v5_7_9/SLF6/prof/Offline -maxdepth 1 | while read FF; do ln -s $FF; done
rm JobControl
mkdir JobControl
cd JobControl
find /cvmfs/mu2e.opensciencegrid.org/Offline/v5_7_9/SLF6/prof/Offline/JobControl -maxdepth 1 | while read FF; do ln -s $FF; done
rm cd3
mkdir cd3

# continue with links and replacing with real files where needed

# check that real files are there
find /cvmfs/mu2e.opensciencegrid.org/OfflineSpecial/v5_7_9-cosmic_target5-v3 -type f

cd ~
cvmfs_server publish mu2e.opensciencegrid.org

Here is an example of installing a single product from scisoft.fnal.gov/packages directly on the cvmfs host machine.

ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org/artexternals
wget http://scisoft.fnal.gov/scisoft/packages/geant4/v4_10_1_p02a/geant4-4.10.1.p02a-slf6-x86_64-e9-prof.tar.bz2 
tar -xjf geant4-4.10.1.p02a-slf6-x86_64-e9-prof.tar.bz2
rm -f *.bz2
cd ~
cvmfs_server publish mu2e.opensciencegrid.org
Here are some products which are pulled occasionally by hand:
allinea       - a debugger
historoot     - needed for g4beamline
artdaq        - not clear if we need this
mu2egrid      - we only need the most recent version
mpich         - for testing with multithreading; used by Mike Wang in trigger studies - we need it
toyExperiment - take only the most recent version; used by art_workbook
upd           - probably should be added so that we can install from kits
.updfiles     - same
valgrind      - memory checker

Pulled by hand for G4beamline:
g4radiative v3_6
g4photon v2_3
root v5_28_00c

To see the size of the CVMFS cache on a node:

df -h /cvmfs/mu2e.opensciencegrid.org/

This cache is shared by all CVMFS partitions. Also useful is the config file:

cat /etc/cvmfs/default.local

which contains CVMFS_QUOTA_LIMIT, in MB. The actual cache is usually under /var/cache. mu2egpvm seems to have 8 GB; grid nodes have 3.6 GB.
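The effective client configuration, including the quota and the cache location, can also be dumped with the client tool (a hedged suggestion; supported by recent CVMFS client versions):

cvmfs_config showconfig mu2e.opensciencegrid.org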

The compressed database of files is under

/srv/cvmfs/mu2e.opensciencegrid.org
on oasiscfs.fnal.gov.

Guidelines on Usage

On September 6, 2016, we asked Dave Dykstra for information about quotas and/or other usage guidelines. The short version of his reply is that we have a quota of about 191 GB on:
/srv/cvmfs/mu2e.opensciencegrid.org
and no quota (i.e., usage limited only by the free space on the file system) on:
/cvmfs/mu2e.opensciencegrid.org
Throughout this discussion, 1 GB means 1024^3 bytes.

On Sep 6, 2016 the Mu2e quota on /srv/cvmfs/mu2e.opensciencegrid.org was doubled from 95 GB to 191 GB. This directory holds the de-duplicated and compressed database and is visible only on oasiscfs.fnal.gov. The partition that contains this directory also holds databases for other experiments; as of Sept 6, 2016, the partition has about 4 TB, of which about 50% is used.

To check the Mu2e quota and the used fraction of the quota:

quota -s /srv/cvmfs/mu2e.opensciencegrid.org
and look for the information about:
/dev/mapper/vgData-srv_cvmfs
You can also check the usage on this disk using du -sh; its answer should agree with the answer given by quota.
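For example (run on oasiscfs.fnal.gov, where the directory is visible):

du -sh /srv/cvmfs/mu2e.opensciencegrid.org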

If necessary, we may ask for an increase in the quota.

On oasiscfs.fnal.gov the directory:

/cvmfs/mu2e.opensciencegrid.org
contains the source from which the de-duplicated, compressed database is created. On other machines it is a cached image of that database. On oasiscfs.fnal.gov there is no Mu2e quota on this disk.

The following table gives the disk space used on /cvmfs/mu2e.opensciencegrid.org/ (Uncompressed) and /srv/cvmfs/mu2e.opensciencegrid.org (Compressed) at various times:

Date            Uncompressed (GB)   Compressed (GB)
Sep 3, 2015     80                  32
Dec 20, 2015    144                 --
Sep 6, 2016     236                 96

The ratio of compressed to uncompressed is roughly constant with time.

Some other details:

  1. About the compressed and de-duplicated database: /srv/cvmfs/mu2e.opensciencegrid.org.
    1. The total space used by all experiments is 2.0 TB and there is 2.0 TB free.
    2. Another 4 TB file system is available to handle overflow.
    3. NOvA is the big user with 880 GB (on Sep 3, 2015; probably more now)
  2. If the present ratio of compressed to uncompressed (96/236, about 0.41) remains unchanged, the 191 GB quota on the compressed de-duplicated database corresponds to about 470 GB, roughly 500 GB, in the published repository.
  3. Other factoids:
    1. The compressed and de-duplicated repository is what is copied to the stratum 1 servers. The owners of these servers prefer that usage on the servers does not grow "too fast"; at this time the BNL server is tight for space.
    2. Removing files from the published repo does not automatically recover space in the compressed and de-duplicated repo.
    3. Space is only recovered if a snapshot is taken, which is a manual process and only happens rarely.
    4. The snapshot process needs to be done at each stratum 1 server.

