Scratch Dcache
There is also a tape-backed dCache pool, not addressed on this page, which is written through an upload process and read through SAM.
Create your directory and copy files in (this also works from grid nodes). You will need a Kerberos ticket.

mkdir /pnfs/mu2e/scratch/users/$USER

Set up the tools and copy a file in:

setup mu2e
setup ifdhc
ifdh cp local-file /pnfs/mu2e/scratch/users/$USER/new-file-name

or, to leave the file name the same (trailing slash required):

ifdh cp local-file /pnfs/mu2e/scratch/users/$USER/

You can copy files out:

ifdh cp /pnfs/mu2e/scratch/users/$USER/file .
Wildcards are not allowed - move one file per command. Try to keep the number of files in a directory under 1000, and avoid excessive numbers of small files or frequent renaming of files.
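Since wildcards are not allowed, a simple shell loop can issue one ifdh cp per file. This is only a sketch: the file names are placeholders, and echo is used so the commands are printed rather than executed.

```shell
# Sketch only: the file names below are placeholders.
# 'echo' prints each ifdh command; remove it to actually copy.
for f in hits_001.root hits_002.root hits_003.root; do
  echo ifdh cp "$f" /pnfs/mu2e/scratch/users/$USER/
done
```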
/pnfs is not a plain directory; it is an interface to a database, implemented as an NFS server, and it comes with restrictions. You can use commands such as ls, rm, mv, mkdir, rmdir, chmod, cp, cat, more, and less in /pnfs. On SL5 (mu2egpvm04), however, the commands that read file contents (cp, cat, more, less, etc.) do not work. You should not run commands which make large demands on the database, such as "find ." or "ls -lr *".
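One database-friendly way to check the 1000-files guideline is a plain ls piped to wc, which avoids the expensive recursive commands mentioned above. A sketch (the directory path is an example; substitute your own):

```shell
# Count entries with a plain ls (cheap) rather than 'find .' or 'ls -lr *'.
DIR=/pnfs/mu2e/scratch/users/$USER
n=$(ls "$DIR" 2>/dev/null | wc -l)
if [ "$n" -ge 1000 ]; then
  echo "warning: $DIR holds $n entries; consider subdirectories"
else
  echo "ok: $DIR holds $n entries"
fi
```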
You can see basic metadata about a file in dCache through the /pnfs/mu2e/scratch filesystem, where all files in dCache appear. /pnfs looks like a file system but is actually an NFS server with the dCache file database as a backend. It is mounted on many lab machines, but only as needed: many subdirectories are possible under /pnfs, yet only a selected few are mounted on any particular machine, so we see only /pnfs/mu2e.
dCache instances come in read-only, write-only, or read/write forms. Write-only instances are usually used to get raw data to tape. Read-only instances might serve fixed datasets. The most common form is read/write, like /pnfs/mu2e/scratch, which acts like a scratch disk. There are also tape-backed instances, where every file written to dCache is migrated to tape and can be staged back from tape into dCache on demand.
When you issue commands to read or write data in dCache, you use one of several protocols. We will usually use the ifdh script and let it pick the most appropriate protocol. In any protocol, the procedure starts by contacting a head node with a request to read or write a /pnfs file spec. The head node is a load balancer which directs you to a "door" process on a server. The door looks up your request, determines how to satisfy it, and passes it to a queue on a disk server node, which actually serves or receives the file. Your transfer request will hang until it gets through the queue and completes the transfer. Since the transfer comes directly from the disk server node to your machine, dCache provides the aggregate throughput of all the disk server nodes, typically dozens of Gb connections. A dCache instance can be tuned in the number and size of servers, the number and type of doors and queues, and other parameters.
dCache has a file-access load-balancing system. If it detects that some files are frequently requested, it can spread those files across many servers to greatly increase their overall throughput. It can keep multiple copies of a file to help with frequent access or as a planned backup mechanism. dCache regularly checks all file checksums and can gracefully handle local hardware failures without stopping the rest of the system.
dccp /pnfs/mu2e/scratch/users/$USER/filename .

dccp is installed on mu2egpvm, but it may have to be installed or set up (the product is named dcap) on other nodes.
on SL5:

export LD_PRELOAD=/usr/lib64/libpdcap.so.1
root [0] ff = TFile::Open("/pnfs/mu2e/scratch/users/$USER/file")

on SL6:

root [0] ff = TFile::Open("/pnfs/mu2e/scratch/users/$USER/file")

There are other versions of this access, through different plugins which trigger different authentication, protocols, and transfer queues.
curl -1 -L --cacert $X509_USER_PROXY --capath /etc/grid-security/certificates --cert /tmp/x509up_u1311 https://fndca1.fnal.gov:2880/pnfs/fnal.gov/usr/mu2e/scratch/users/rlc/s1 (not currently working)
kinit
getcert
export X509_USER_CERT=/tmp/x509up_u`id -u`
export X509_USER_KEY=$X509_USER_CERT
export X509_USER_PROXY=$X509_USER_CERT
grid-proxy-init
voms-proxy-init -noregen -rfc -voms fermilab:/fermilab/mu2e/Role=Analysis
globus-url-copy gsiftp://fndca1.fnal.gov:2811/scratch/users/$USER/file-name file:///$PWD/file-name