art Native Services

This page describes services that are distributed as part of art. It does not discuss services that are part of the Mu2e Offline software.
  1. Tracer
  2. TimeTracker
  3. MemoryTracker
  4. TFileService
  5. RandomNumberService


Tracer

This service prints an informational message at the start and end of every call to a module or to a user service; it indicates depth within the event loop state machine with "++" for the top level, "++++" for the next level, and so on.

The service can be enabled or disabled from the mu2e command line:

  > mu2e --trace   -c input.fcl
  > mu2e --notrace -c input.fcl
It can also be enabled by adding the following fragment within the services parameter set in your .fcl file:
services : {
  scheduler: { wantTracer : true }
}
If there is both a wantTracer: true/false parameter in the .fcl file and a command line argument, the command line argument takes precedence.

Under the covers, the Tracer service registers a callback for every state machine transition for which a service may register. At each transition, art calls the registered callback, which prints an informational message to the log file. For the underlying code, see the Tracer source in the art repository.

One can use this code as a model for how to register service code to respond to state machine transitions.
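As an illustration, here is a minimal sketch of a service that registers callbacks with the art ActivityRegistry. The class name TransitionWatcher is invented for this example; the set of available signals and their callback signatures vary among art versions, so consult ActivityRegistry.h in your release before copying this.

#include "art/Framework/Services/Registry/ActivityRegistry.h"
#include "art/Framework/Services/Registry/ServiceMacros.h"
#include "fhiclcpp/ParameterSet.h"

#include <iostream>

namespace mu2e {

  // Hypothetical service that prints a message at selected transitions.
  class TransitionWatcher {
  public:
    TransitionWatcher(fhicl::ParameterSet const&, art::ActivityRegistry& reg){
      // Watch two of the no-argument signals; many others exist,
      // including per-module and per-event signals with arguments.
      reg.sPostBeginJob.watch([](){ std::cout << "++ post begin job\n"; });
      reg.sPostEndJob.watch([](){ std::cout << "++ post end job\n"; });
    }
  };

} // namespace mu2e

DECLARE_ART_SERVICE(mu2e::TransitionWatcher, LEGACY)
DEFINE_ART_SERVICE(mu2e::TransitionWatcher)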


TimeTracker

The art TimeTracker service provides you with information about the time used by each module in an art job. It can be configured to provide just summary information in the job log or to write an SQLite database that has detailed information about how long each module spent on each event.

For documentation, see the art wiki page describing the TimeTracker service.
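For example, a configuration fragment along the following lines enables both the printed summary and the per-event database output; the parameter names below are taken from the art wiki and should be checked against the documentation for your art release:

services : {
  TimeTracker : {
    printSummary : true
    dbOutput     : {
      filename  : "timing.db"
      overwrite : false
    }
  }
}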

art contains a related service, the Timing service, which is deprecated and may be removed from art at any time. If you are still using the Timing service, you should switch your workflows to use the TimeTracker service as soon as possible. In the meantime you can still access the documentation on this deprecated service.

Caveats About Event Timing

There are two important caveats about event timing. First, when running Mu2e GEANT4 simulations, there are outlier events that take up to a few hundred times longer than the median event; these events occur at the percent level. When you run a short grid job in order to estimate the time for a much longer job, be aware that your estimate could be off because too many or too few of these outliers are in the small sample. Be sure to run test jobs long enough to average this out; a test job that executes for 30 minutes should usually be safe.
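To see the size of the effect, consider an illustrative calculation with assumed numbers: if the median event takes 1 s and 1% of events take 200 s, the true mean is about 3 s per event. A 100-event test job that happens to contain no outliers would estimate 1 s per event, while one that happens to contain three would estimate about 7 s per event.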

I presume that a similar issue exists for running full reconstruction on simulated events but I have not yet quantified it.

A corollary of the above is that the numbers from the timing report should be more robust than the estimate you would get by running two test jobs, one with few events and one with many, in order to measure the startup overhead; this method is vulnerable to one of the events in the first job being a long-cpu-time outlier.

The second issue is that different grid slots have different CPU power. Most of the GP Fermigrid nodes have the same CPU power per slot as do detsim or mu2egpvm*. But there are some older machines around, some of which are slower per core and some of which are faster per core (but with fewer cores per chip). If you run jobs opportunistically, which the mu2egrid scripts do by default, then you may be exposed to machines with either more or less CPU power than those on which you ran your test jobs.

Taken together, these two effects can lead to a factor of two difference in execution time among grid processes in one grid job cluster.


MemoryTracker

The art MemoryTracker service provides you with information about the memory used by each module in an art job. It can be configured to provide just summary information in the job log or to write an SQLite database that has detailed information about how much memory was used by each module for each event.

For documentation, see the art wiki page describing the MemoryTracker service.
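As with TimeTracker, a configuration fragment along the following lines requests both printed summaries and the per-event database output; the parameter names below are taken from the art wiki and should be checked against the documentation for your art release:

services : {
  MemoryTracker : {
    printSummaries : [ "general", "event", "module" ]
    dbOutput       : {
      filename  : "memory.db"
      overwrite : false
    }
  }
}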

art contains a related service, the SimpleMemoryCheck service, which is deprecated and may be removed from art at any time. If you are still using the SimpleMemoryCheck service, you should switch your workflows to use the MemoryTracker service as soon as possible. In the meantime you can still access the documentation on this deprecated service.


TFileService

When you want to make histograms inside a module you will normally use a package named ROOT. Here "histograms" is shorthand for histograms, ntuples, TTrees, TGraphs and all other sorts of data presentation and data summary objects. When other people wish their modules to do the same, they will also use ROOT. In addition, the art event-data IO system uses ROOT's IO subsystem. You are free to use other data presentation tools, but the only supported tool is ROOT.

ROOT, however, is fragile and, when several independent pieces of code all use ROOT, it is very easy for them to collide with each other. Sometimes this will crash the program, but more often it will produce subtly incorrect results.

The remainder of this section presumes that you have a minimal familiarity with ROOT: you know how to create, fill and write out histograms and ntuples.

art supplies a service, named TFileService, that, if used as intended, will automatically avoid these problems without anyone needing to be aware of what others are doing. TFileService opens one ROOT output file to hold all of the histograms, ntuples, TTrees, TGraphs, TCanvases, etc. that are produced by the modules. This file is distinct from the event-data output files that are created by output modules; it is only created as needed and will not be created if no histograms, ntuples, etc. are made.

Within the ROOT output file, TFileService makes a subdirectory for each module instance; the subdirectory is named using the module label. When a module instance runs, any histograms that it creates are automatically created inside the correct subdirectory. In this way two modules can each make a histogram named "multiplicity" and the two histograms will automatically be distinguished. Moreover, if an art job contains two or more instances of the same module, the histograms from each module instance are distinguished because each is created in a uniquely named directory. Finally, TFileService ensures that user-created histograms do not interfere with event-data IO.

The name TFileService comes from TFile, the ROOT class that manages disk files.

There are two parts to the pattern of using TFileService. The first part is to include the following fragments in your .fcl file.

services :
{
  TFileService : { fileName : "readback.root" }
}
physics :
{
  analyzers:{
    mylabel1 : {
       module_type : MyClass
       eMin        : 0.001
    }
  }
}
The fileName parameter to TFileService specifies the name of the file in which the histograms for all modules will be found. If none of the modules in your job create histograms, then the TFileService parameter set may be omitted; that, however, should be very, very rare. In this example, the module that makes the histograms happens to be an analyzer, but nothing significant changes if it is a producer or a filter.

The second part of the pattern is inside your module class:

#include "art/Framework/Core/EDAnalyzer.h"
#include "art/Framework/Principal/Event.h"
#include "art/Framework/Principal/Handle.h"
// In newer art releases TFileService lives in art_root_io.
#include "art/Framework/Services/Optional/TFileService.h"
#include "art/Framework/Services/Registry/ServiceHandle.h"
#include "fhiclcpp/ParameterSet.h"

// Mu2e Offline header; its location may differ among releases.
#include "RecoDataProducts/inc/StrawHitCollection.hh"

#include "TH1F.h"

#include <string>

namespace mu2e {

  class MyClass : public art::EDAnalyzer {
  public:

    explicit MyClass(fhicl::ParameterSet const& pset);
    void beginJob() override;
    void analyze(const art::Event& event) override;

  private:
    double eMin;
    std::string _makerModuleLabel;
    TH1F* _hNSimulated;
  };

  MyClass::MyClass(fhicl::ParameterSet const& pset):
     art::EDAnalyzer(pset),
     eMin(pset.get<double>("eMin")),
     _makerModuleLabel("makeSH"),
     _hNSimulated(nullptr){
  }

  // At the start of the job, create histograms.
  void MyClass::beginJob(){
    art::ServiceHandle<art::TFileService> tfs;
    _hNSimulated = tfs->make<TH1F>( "hNSimulated", "Number of SimParticles", 100, 0., 300. );
  }

  void MyClass::analyze(const art::Event& event) {

    art::Handle<StrawHitCollection> hitsHandle;
    event.getByLabel(_makerModuleLabel,hitsHandle);
    StrawHitCollection const& hits = *hitsHandle;

    // Count the hits above the energy threshold.
    int n(0);
    for ( StrawHitCollection::const_iterator i=hits.begin(), e=hits.end();
          i != e; ++i ){
       StrawHit const& hit = *i;
       if ( hit.energyDep() > eMin ){
          ++n;
       }
    }

    _hNSimulated->Fill(n);
  }

} // namespace mu2e

This example creates and fills one histogram. That histogram will be found in the output file with the name /mylabel1/hNSimulated.

In the beginJob method, the first step is to get a ServiceHandle to the TFileService. The next step is to use the TFileService to create a histogram; compared to native ROOT, the only real difference is replacing the "new TH1F" call with "tfs->make<TH1F>". You may create additional ROOT objects using a similar syntax. You may also make ROOT subdirectories and create new histogram objects in those directories.

Once the histogram has been created, access to it is available via the bare pointer that is held as member data of the class. In the above example, the analyze method fills the histogram using this pointer. Another standard ROOT behaviour remains unchanged: the histogram will automatically be written to the ROOT output file when that file is closed.

During each call to tfs->make<TH1F>, TFileService checks whether it needs to open the ROOT output file and whether it needs to create the per-module-label subdirectory, and performs these actions if they are necessary. Therefore, if a module creates no histograms, there will be no corresponding per-module-label subdirectory in the output ROOT file; and if an entire art job creates no histograms, no ROOT output file will be created. This is part of the art philosophy that art will have no unnecessary side effects.

Behind the scenes, TFileService does the following work:

  1. Before calling modules, save selected parts of the state of ROOT.
  2. On any call to tfs->make<TH1F>, open the ROOT output file and create the per-module-label subdirectory if they do not yet exist, then create the requested object inside that subdirectory.
  3. On return from the call to MyClass::beginJob, restore the state of ROOT to that saved in step 1.
  4. Before subsequent calls to most methods of the module labeled mylabel1, cd to the ROOT directory /mylabel1; the methods that TFileService knows about are the constructor, analyze/produce/filter, beginJob, endJob, beginRun, endRun, beginSubRun, and endSubRun. TFileService does not do this for module methods that respond to the opening and closing of input and output event-data files.
  5. On return from these calls, restore the state of ROOT to that saved in step 1.

In the above example, the histograms were created in the beginJob method. Alternatively, they could have been created in the beginRun method; at present, the end result would be the same. We are considering adding an option to TFileService such that histograms created in beginRun will be maintained on a per-run, not per-job, basis.

One could also have chosen to create the histograms in the constructor of MyClass. For this simple example, creating histograms in the constructor would have worked. But be aware that the geometry and conditions data will, in general, only be defined on a per run, or per subRun basis; therefore histograms that require this sort of information at creation time can only be created after that information is defined.
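A minimal sketch of booking in beginRun instead, assuming the same member data as in the MyClass example above; the run-dependent computation is left as a comment:

  void MyClass::beginRun(art::Run const& run){
    art::ServiceHandle<art::TFileService> tfs;
    // Here the binning could be computed from geometry or conditions
    // information, which is defined on a per-run basis and is therefore
    // not yet available at beginJob time.
    _hNSimulated = tfs->make<TH1F>( "hNSimulated", "Number of SimParticles", 100, 0., 300. );
  }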

Now consider the following change to the run time configuration:

physics :
{
  analyzers:{
    mylabel1 : {
       module_type : MyClass
       eMin        : 0.001
    }
    mylabel2 : {
       module_type : MyClass
       eMin        : 0.002
    }
  }
}
In this case TFileService will make two subdirectories, /mylabel1 and /mylabel2, and there will be two histograms, named /mylabel1/hNSimulated and /mylabel2/hNSimulated.

We strongly advise that you do not open your own ROOT files and manage them; it is very easy to cause conflicts with the art IO subsystem. If you do need to do this, speak with the Mu2e Software team before you do so.

Making ROOT Subdirectories

In your module it is possible to create subdirectories of the directory that TFileService has made for you. The syntax is:

   art::ServiceHandle<art::TFileService> tfs;
   art::TFileDirectory tfdir           = tfs->mkdir( "EjectedNeutronGun" );
   _hMultiplicity                      = tfdir.make<TH1D>( "hMultiplicity", "Neutron Multiplicity", 20, 0, 20  );
where _hMultiplicity is a member datum of type TH1D*.
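A TFileDirectory can itself make subdirectories, so the pattern nests. A brief sketch, with the directory and histogram names invented for the example:

   art::TFileDirectory nested = tfdir.mkdir( "Daughters" );
   _hDaughterEnergy           = nested.make<TH1D>( "hDaughterEnergy", "Daughter Energy", 100, 0., 10. );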


RandomNumberService

This is important enough to have a whole page of its own.

