CVMFS
The code manager copies a code release to a CVMFS work space and "publishes" it. This process examines the code, compresses it, and inserts it in a database. The original database is called the tier 0 copy. Remote sites may support tier 1 copies of the database, synced to the tier 0.
The user's grid job sees a CVMFS disk mounted and containing a copy of the experiment's code, which can be accessed in any way the code would be accessed on a standard disk. The disk is actually served by a custom nfs server with a small (~8 GB) local cache on the node and a backend that sends file requests to a squid web cache. The squid may get its data from the tier 1 database, if available, or from the tier 0. As a practical matter, most grid jobs do not access much in a release, usually just a small set of shared object libraries, and these end up cached on the worker node, or on the squid, thereby avoiding a long-distance network transfer.
CVMFS is efficient only for distributing code and small data files that are required by a large number of nodes on the grid. Datasets, such as event data files, on the other hand, consist of many files, each sent to only one node during a grid job; CVMFS is not efficient for this type of data distribution or for this data volume. Data files should be distributed through dCache, which is designed to deliver each file to one node and to handle the data volume. A single large file that is to be distributed to all nodes also needs to be avoided, since it would churn or overflow the small local caches. Examples of this sort of file are the Genie flux files or a large analysis fit template library. The lab is developing an Alien Cache feature for CVMFS to address this sort of file. The mu2e B-field files and stopped muon ntuples are distributed by CVMFS, but are about at the limit of appropriate file size.
The mu2e CVMFS partition is available on all grid sites that mu2e can submit to. The Fermigrid local farm and interactive nodes have the same setup, so you do not need to modify your script for onsite versus offsite. CVMFS is mounted on the mu2e interactive nodes, and is the default source for code and products.
The first, preferred method:

source /cvmfs/fermilab.opensciencegrid.org/products/common/etc/setups
setup mu2e

Note that the source of the setups file in products/common may already be done by the grid startup procedure. It causes no known problem if it is sourced a second time.
The second method, which is not preferred, is to simply source the mu2e startup script:
source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh
To setup a base release of the mu2e Offline code (after checking version, SLF5/6, and prof/debug):
source /cvmfs/mu2e.opensciencegrid.org/Offline/v5_2_2/SLF6/prof/Offline/setup.sh
To summarize:
source /cvmfs/fermilab.opensciencegrid.org/products/common/etc/setups
setup mu2e
source /cvmfs/mu2e.opensciencegrid.org/Offline/v5_2_2/SLF6/prof/Offline/setup.sh

These recipes assume you have not done any other setup actions, but even if you have, "setup mu2e" or its equivalent should force your PRODUCTS path to point to cvmfs, for mu2e and common products, no matter what.
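As a quick sanity check, you can list the entries of the PRODUCTS path and confirm they point at cvmfs. This is only an illustrative sketch: the PRODUCTS value set below is an assumed example, not necessarily what "setup mu2e" produces on your node.

```shell
# Assumed example of a PRODUCTS value after "setup mu2e"; on a real node,
# skip this assignment and inspect the PRODUCTS the setup actually exported.
PRODUCTS=/cvmfs/mu2e.opensciencegrid.org/artexternals:/cvmfs/fermilab.opensciencegrid.org/products/common/db

# Print one path entry per line and count how many live on cvmfs.
echo "$PRODUCTS" | tr ':' '\n'
echo "$PRODUCTS" | tr ':' '\n' | grep -c '^/cvmfs/'   # -> 2
```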
Occasionally, on remote OSG sites, we can land on a node that knows of the latest version of the cvmfs contents but still returns the next-to-latest version of the contents for one minute. To avoid this (rare) case, you can run these commands before accessing cvmfs:
/cvmfs/grid.cern.ch/util/cvmfs-uptodate /cvmfs/fermilab.opensciencegrid.org
/cvmfs/grid.cern.ch/util/cvmfs-uptodate /cvmfs/mu2e.opensciencegrid.org
It is preferred that all mu2e sites use cvmfs as the local code disk.
There was a Jan 2017 discussion on hypernews about installing cvmfs on SL7.
Here is some background information on maintaining cvmfs and its limitations.
Base releases are built by hand or on the Jenkins build server and installed on cvmfs (and on /mu2e/app/Offline if required). The Download instructions page gives a good overview of how to pull art products, DataFiles, and setupmu2e-art.sh.
art products are distributed as a set called a manifest. The manifests are listed on scisoft.fnal.gov/bundles/mu; they will tell you the arguments to the pullProducts script below. We use relocatable ups products, which can also be pulled individually as tarballs.
Note! G4beamline is not pulled with pullProducts; it is not a real ups product. You will need to copy it separately, as shown below. Note! If you have ups set up from anywhere while running pullProducts, it will not pull ups, even if it is on the manifest (unsetup ups to avoid this).
To publish files on the mu2e CVMFS server, you will need to follow this pattern:
# login to the cvmfs server node (permission through the .k5login there)
ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
# this tells the cvmfs server to record the changes you are about to make
cvmfs_server transaction mu2e.opensciencegrid.org
# easier to work from here, where the data is going
cd /cvmfs/mu2e.opensciencegrid.org
# copy the new files in here with rsync, pullProducts, wget, etc.
# NOTE!!!! do not "mv" any files or dirs; delete and recopy if necessary
# (mv will hang). This is a bug in cvmfs as of 2/2015
# must cd out of the data dir or the next step will fail!
cd ~
# record and publish to all of cvmfs the changes just made
cvmfs_server publish mu2e.opensciencegrid.org

NOTE!!!! Do not "mv" any files or dirs; delete and recopy if necessary (mv will hang). This is a bug in cvmfs as of 2/2015.
Here is an example of using rsync. In the rsync command, if the source is the name of a directory and the directory name has a trailing slash, rsync interprets this as a wildcard of the directory contents, not the directory itself.
ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org
rsync -aur rlc@mu2egpvm01:/mu2e/app/Offline/tmp/DataFiles .
rsync -aur rlc@mu2egpvm01:/mu2e/app/Offline/tmp/setupmu2e-art.sh .
rsync -aur rlc@mu2egpvm01:/grid/fermiapp/products/mu2e/artexternals/G4beamline artexternals
cd ~
cvmfs_server publish mu2e.opensciencegrid.org
Here is an example of pulling the products from an art manifest directly on the cvmfs host machine.
ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org/artexternals
~/pullProducts $PWD slf5 mu-v1_17_02 s20-e9 prof
rm -f *.bz2 *MANIFEST.txt
cd ~
cvmfs_server publish mu2e.opensciencegrid.org
Here is an example of pulling a release built on Jenkins directly on the cvmfs host machine. This will try to copy the SLF5 and SLF6, prof and debug builds.
ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org/Offline
~/copyFromJenkins.sh v5_5_1
# delete the intermediate .os and other temp files
~/cleanupRelease.sh $PWD/v5_5_1/SLF6/prof/Offline
~/cleanupRelease.sh $PWD/v5_5_1/SLF6/debug/Offline
~/cleanupRelease.sh $PWD/v5_5_1/SLF5/prof/Offline
~/cleanupRelease.sh $PWD/v5_5_1/SLF5/debug/Offline
cd ~
cvmfs_server publish mu2e.opensciencegrid.org
Here is an example of creating a link release:
ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org/OfflineSpecial
mkdir -p v5_7_9-cosmic_target5-v3/SLF6/prof/Offline
cd v5_7_9-cosmic_target5-v3/SLF6/prof/Offline
find /cvmfs/mu2e.opensciencegrid.org/Offline/v5_7_9/SLF6/prof/Offline -maxdepth 1 | while read FF; do ln -s $FF; done
rm JobControl
mkdir JobControl
cd JobControl
find /cvmfs/mu2e.opensciencegrid.org/Offline/v5_7_9/SLF6/prof/Offline/JobControl -maxdepth 1 | while read FF; do ln -s $FF; done
rm cd3
mkdir cd3
# continue with links, replacing with real files where needed
# check that the real files are there
find /cvmfs/mu2e.opensciencegrid.org/OfflineSpecial/v5_7_9-cosmic_target5-v3 -type f
cd ~
cvmfs_server publish mu2e.opensciencegrid.org
Here is an example of installing a single product from scisoft.fnal.gov/packages directly on the cvmfs host machine:
ssh -X -l cvmfsmu2e oasiscfs.fnal.gov
cd ~
cvmfs_server transaction mu2e.opensciencegrid.org
cd /cvmfs/mu2e.opensciencegrid.org/artexternals
wget http://scisoft.fnal.gov/scisoft/packages/geant4/v4_10_1_p02a/geant4-4.10.1.p02a-slf6-x86_64-e9-prof.tar.bz2
tar -xjf geant4-4.10.1.p02a-slf6-x86_64-e9-prof.tar.bz2
rm -f *.bz2
cd ~
cvmfs_server publish mu2e.opensciencegrid.org

Here are some products which are pulled occasionally by hand:
allinea - a debugger
historoot - needed for g4beamline
artdaq - not clear if we need this
mu2egrid - we only need the most recent version
mpich - for testing with multithreading; used by Mike Wang in trigger studies - we need it
toyExperiment - take only the most recent version; used by art_workbook
upd - probably should be added so that we can install from kits
.updfiles - same
valgrind - memory checker

Pulled by hand for G4Beamline:

g4radiative v3_6
g4photon v2_3
root v5_28_00c
To see the size of the cvmfs cache on a node:

df -h /cvmfs/mu2e.opensciencegrid.org

This cache is shared by all cvmfs partitions. Also useful is the config file:
cat /etc/cvmfs/default.local

which contains CVMFS_QUOTA_LIMIT, in MB. The actual cache is usually in /var/cache. mu2egpvm seems to have 8 GB; grid nodes have 3.6 GB.
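Since CVMFS_QUOTA_LIMIT is given in MB, a quick conversion to GB can be handy when comparing it to the numbers quoted here. This is only a sketch: the config file below is a made-up stand-in for /etc/cvmfs/default.local, and the quota value in it is an assumed example.

```shell
# Made-up stand-in for /etc/cvmfs/default.local (value is an assumed example)
cat > /tmp/default.local.example <<'EOF'
CVMFS_QUOTA_LIMIT=8000
EOF

# Extract the quota (MB) and convert to GB (integer division by 1024)
mb=$(sed -n 's/^CVMFS_QUOTA_LIMIT=//p' /tmp/default.local.example)
echo "cache quota: $((mb / 1024)) GB"   # -> cache quota: 7 GB
```

On a real node, point the sed command at /etc/cvmfs/default.local instead of the example file.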
The compressed database of files is under

/srv/cvmfs/mu2e.opensciencegrid.org

on oasiscfs.fnal.gov. There is a Mu2e quota on

/srv/cvmfs/mu2e.opensciencegrid.org

and no quota (i.e. usage limited only by the free space on the file system) on:

/cvmfs/mu2e.opensciencegrid.org

Throughout this discussion 1 GB means 1024^3 bytes.
On Sep 6, 2016 the Mu2e quota on /srv/cvmfs/mu2e.opensciencegrid.org was doubled from 95 GB to 191 GB. This directory holds the de-duplicated and compressed database and is visible only on oasiscfs.fnal.gov. The partition that contains this directory also contains databases for other experiments; as of Sep 6, 2016, the partition has about 4 TB, of which about 50% is used.
To check the Mu2e quota and the used fraction of the quota:
quota -s /srv/cvmfs/mu2e.opensciencegrid.org

and look for the information about:

/dev/mapper/vgData-srv_cvmfs

You can also check the usage on this disk using du -sh; its answer should agree with the answer given by quota.
If necessary, we may ask for an increase in the quota.
On oasiscfs.fnal.gov the directory:
/cvmfs/mu2e.opensciencegrid.org

contains the source from which the de-duplicated compressed database is created. On other machines it is a cached image of that database. On oasiscfs.fnal.gov there is no Mu2e quota on this disk.
The following table gives the disk space used on /cvmfs/mu2e.opensciencegrid.org/ (Uncompressed) and /srv/cvmfs/mu2e.opensciencegrid.org (Compressed) at various times:
Date         | Uncompressed (GB) | Compressed (GB)
-------------|-------------------|----------------
Sep 3, 2015  | 80                | 32
Dec 20, 2015 | 144               | --
Sep 6, 2016  | 236               | 96
The ratio of compressed to uncompressed is roughly constant with time.
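The claim can be checked from the table above: the two dates with both numbers give ratios of 32/80 and 96/236. A minimal sketch of the arithmetic:

```shell
# Compressed/uncompressed ratios from the table above
awk 'BEGIN {
  printf "Sep 2015: %.2f\n", 32/80    # -> 0.40
  printf "Sep 2016: %.2f\n", 96/236   # -> 0.41
}'
```

Both dates give a ratio of about 0.4, consistent with the statement that the ratio is roughly constant.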
Some other details: