Mu2e Home
Mu2e Software Workbook

Table of Contents


The purpose of the exercises in this workbook is to develop broad familiarity with the Mu2e Offline software. This is much more than "learning C++" or "learning the framework"; in order to work with the Mu2e Offline software you will need to develop an overview understanding of many of the following:

These last two bullets are by far the hardest to learn. Depending on what you are doing, you will also need to develop detailed understanding of some of the above.

What you will see in these exercises will be the recommended way of doing things within the Mu2e software environment. In a different environment, many of the recommendations could be very different; this even includes recommendations about how to use features in the C++ core language! So don't think of this workbook as something that is teaching you C++; it is teaching you C++ constrained by a large set of boundary conditions.

Starting from the beginning and developing a detailed understanding of all of the above is a multi-year effort. And it is several months of work for someone who is already experienced in the software of another modern HEP experiment. The way to work effectively is to get an overview; then focus on one project and learn the details necessary to complete that project; then move on to the next project.

Some Caveats

In many places we had to make tradeoffs in order to get work done on time for CD-1. As we move towards CD-2, and beyond, those tradeoffs will change. And there are also a few instances in which things we thought were good ideas did not work out as well as we had hoped. The result is that many low level details, even some mid level details, will change in the coming months and years. Indeed some very visible changes will take place in the few weeks immediately following the workshop.

On the other hand, we do believe that most of the big ideas are now in close to final form and, therefore, that this is an appropriate time for you to make the investment in learning the Offline software.

In many places we have tried to make the code "symmetric" in the sense that if you do something one way in a particular context, the same ideas will work in related contexts. There are lots of places in which we have succeeded but there are many places that still need work.

Finally, this writeup is still just a draft. Let us know where it needs help.


To do the workbook exercises you may log into either detsim or mu2evm.

You will normally check out code onto /grid/fermiapp/mu2e/users/<your_kerberos_principal>. You will build the code in place. You should write all output files, including root files, data files and log files to /mu2e/data/users/<your_kerberos_principal>. If a Mu2e disk has a users subdirectory, you have permission to create your own working directory in that users area.

Additional information is available about computing resources provided by Fermilab.

Units and Coordinate Systems

The Mu2e Offline software uses the same units as Geant4; this choice was made because it removes the need for unit transformations at the boundaries between Mu2e code and Geant4 code. Fortuitously these happen to be natural units for Mu2e.

Quantity          Unit
Electric Charge   Charge of the proton = +1
Plane Angle       radian
Solid Angle       steradian
Magnetic Field    weird: see below

In the Mu2e magnetic field maps, both in the files and in memory, the field values are stored in Tesla. However Geant4 internally works in units in which 1 T = 0.001 MV ns mm^-2. The routines that interface the Mu2e internal field maps to Geant4 make this transformation on each call.

If you need to perform unit conversions in your code, conversion factors can be found in the CLHEP SystemOfUnits package. After you have set up a release of the Mu2e software, the header file for this package can be found at

Please use the conversion factors from this package; do not invent your own. Need link to a description of how to use it.

In the long run, there are three coordinate systems that all users of the Mu2e Offline software need to understand. In addition many subsystems define their own internal coordinate systems. At this time, there are two more coordinate systems that you need to know; one of these is a legacy system that will be phased out in the next weeks and the other is a subsystem-specific system that is inappropriately used in a global context.

The five coordinate systems that you need to know about are:

The coordinate axes and origin of the Mu2e system are defined in the figure below:
The origin of the Mu2e system is at the center of symmetry of the straight section TS3; the z axis points along the symmetry axes of the PS and DS; the y axis is up, towards the surface, and the x axis is defined by the cross product, y cross z.

The z axis of the Detector system is along the symmetry axis of the DS magnetic field in the region of the tracker; this is, and for the foreseeable future this will be, parallel to the z axis of the Mu2e system; the other axes are also parallel to those of the Mu2e system. The origin of the Detector system is at the center of the tracker. In a few years we will start to think more carefully about as-built measurements and alignment. At that time the relationship between the Mu2e system and the Detector system will evolve and will, in general, include a rotation as well as a translation. Recently a new feature was added to the code to provide routines that transform between these two systems; some of the older code still hand codes the translation and we are fixing this as targets of opportunity. Please write all new code using the general transformation routines; these are found in the DetectorSystem object within the GeometryService.

The origin of the Geant4 world system is, by definition, at the center of the Geant4 world box; the axes of the Geant4 world system are parallel to those of the Mu2e system but the location of the origin of the Mu2e system within the Geant4 system is computed on the fly for each job; no quantities of physics interest are, or will be, presented to the end user in Geant4 coordinates. The Mu2e code transforms to and from Geant4 coordinates at the boundaries between Mu2e code and Geant4 code. This convention permits us to do things like changing the dimensions of the hall or extending the Geant4 world to study very shallow cosmic rays without requiring code changes, only changes to the geometry file. The only Mu2e members who will need to know about the Geant4 coordinate system are those who are modifying geometry objects near the top of the hierarchy and those who are extracting information from Geant4 to export it for use by the rest of us.

The production target system has its z axis parallel to the symmetry axis of the production target and its positive direction in the sense of the positive z axis in the Mu2e world system; its x and y axes are given by the rotation, in the Goldstein convention, that moves the z axis of the Mu2e system on to the z axis of the production target system. The origin of the production target system is at the center of the proton-upstream face of the production target. In this coordinate system, the nominal proton beam direction is along the -z axis. The only place that most people will see quantities in this coordinate system is when looking at generated particles created by the primary proton gun. In the next few weeks this will change so that the primary proton gun performs the transformation to Mu2e coordinates before putting the generated particle into the event.

The deprecated legacy coordinate system is one in which the axes are parallel to those of the Detector system and the origin is on the symmetry axis of the DS at zMu2e=12,000 mm. The only geometry objects expressed in this system are those that make up the stopping target system. In the near future they will be changed so that they will provide information in either the Mu2e system, or the Detector system, at the request of the caller. In addition some generated particles are described in this system: see the next page.

The GenParticle Coordinate System Circus

Each generated particle carries a code that identifies which generator created it. One of the brilliant ideas that is considerably less brilliant in hindsight is that generated particles now describe their creation position and 3-momentum in a coordinate system that is "natural" for the generator that generated them; everything can be sorted out by keying on the generator Id code. At present generated particles exist that use the Mu2e system, the production target system and the deprecated legacy system. The net result is that too many places in the code need to know about the transformations among these coordinate systems; and we need to waste your time and my own time talking about this here.

In the next few weeks all generated particles will have their positions and momenta described in the Mu2e coordinate system. This change was not made earlier because it will invalidate older files that were important for CD-1 and which are used as input files for the exercises in this workbook!

Footnotes to the Geometry

In the current Offline geometry, the stopping targets, the tracker and the calorimeter are at slightly different z locations than those described in the CDR; as a consequence there are a few related differences between the CDR description of the Muon Beam Stop (MBS) and its software model. In the near future these will all be changed to match the CDR. They were not changed earlier because we did not want to invalidate many files of simulated event-data in the run up to the CDR; these files are also needed for this workshop.

In the PS and DS sections, the magnetic field component Bz is in the positive z direction. This is true for all Mu2e produced field maps. However we also have magnetic field maps used by MECO, in which the magnetic field points in the negative z direction.

Input Files for the Exercises

The input files prepared for the first few exercises can be found in /grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/. They were prepared using the v2_0_1 tag of the Mu2e Offline and the .fcl files found in Workbook/InputFiles.

In the filenames starting with "conversionOnly", each event contains a single generated particle, a conversion electron. The generator started from the results of a previous simulation that computed the points and times at which muons stopped in the target foils. The stopped muons were allowed to decay exponentially with the lifetime of muonic Aluminium and the decay times were wrapped into one 1694 ns cycle of the muon beamline. A generated particle was created at the time and place just described, uniformly in 4pi and with an energy equal to the conversion energy of Aluminium. The generated particles were processed through Geant4 and the output of Geant4 was turned into StrawHits, which describe a single data-like hit on one straw, and CaloHits, which describe a single data-like hit on one APD. The CaloHits were then combined with other CaloHits on the same crystal to form CaloCrystalHits. Many of the events contain no hits in the tracker or calorimeter because the track did not have enough transverse momentum.

The files conversionOnly_01_data.root and conversionOnly_02_data.root have 20 events each. The file conversionOnly_03_data.root has 2000 events.

In the filenames starting with "selectVD3", each generated event contains a single proton, at the nominal beam energy, incident on the production target.

Getting Started

Almost everything discussed in this workbook is case sensitive. This includes computer names, usernames, file names, variable names, class names, and parameter names within the run-time configuration files.

Log in to either detsim or mu2evm following the instructions on how to log in to the Mu2e interactive machines. The short version of the instructions is:

cd /grid/fermiapp/mu2e/users/your_kerberos_principal
setup mu2e
cvs co Workbook
cd Workbook
source /grid/fermiapp/mu2e/Offline/v2_0_1/g4_4942/bin/createTestRelease

Long Version of the Instructions

If you have not already done so, make a working directory for yourself on either /mu2e/app/users or /grid/fermiapp/mu2e/users:

 cd /grid/fermiapp/mu2e/users
 mkdir your_kerberos_principal
 cd your_kerberos_principal
If you already have such a directory, just cd to it:
 cd /grid/fermiapp/mu2e/users/your_kerberos_principal
The next step is to establish the site specific Mu2e environment. At Fermilab this is done by:
setup mu2e
which will produce output like
Mu2e external products are rooted at:  /grid/fermiapp/products/mu2e
MU2E_DATA_PATH is:  /grid/fermiapp/mu2e/DataFiles
On remote Mu2e sites, the name of the file to be sourced will be different. But after that all of the remaining instructions should work without change. The next step is to check out the Workbook from cvs:
cvs co Workbook
which will produce output that starts something like
cvs checkout: Updating Workbook
cvs checkout: Updating Workbook/Ex01
U Workbook/Ex01/
U Workbook/Ex01/SConscript
U Workbook/Ex01/ex01.fcl
U Workbook/Ex01/ex01.list
cvs checkout: Updating Workbook/Ex02
U Workbook/Ex02/
U Workbook/Ex02/SConscript
U Workbook/Ex02/ex02a.fcl
U Workbook/Ex02/ex02b.fcl
... and so on
cd Workbook
If you do an ls you should see something like:
CVS  Ex01  Ex02  Ex03  Ex04  Ex05  Ex06

Base Releases and Test Releases

The next step is to create a test release based on an existing base release of the Mu2e software. For those familiar with SRT, the ideas of test release and base release have the same meaning as in SRT. There is a long story about why we are not using SRT and we do have proper replacements for most of its functions; however the support of test releases is still a bit of a hack.

For those of you not familiar with SRT, here are the basic ideas. In the exercises in this workbook you will checkout a small subset of the Mu2e code from the cvs repository. You will compile, link and run this code. In order to compile the code, you need to know where to find the other Mu2e header files. In order to link the code, you will need to know where to find the other Mu2e libraries. In order to run the code, you will need to know where to find the other Mu2e shared libraries. The solution to all three questions is to look for these things in a pre-built release of the complete Mu2e code. That prebuilt release is called a base release. The code in the tree descended from your current working directory is called the test release. Once you have established a base release and a test release you can know where to find them by looking at the environment variables MU2E_BASE_RELEASE and MU2E_TEST_RELEASE.

The Mu2e build system knows that when it needs a file, and if a test release is defined, it should look first in the test release and, only if that fails should it look in the base release. The Mu2e run-time environment also has the same policy: if a test release is defined, look there first.

To create a test release, issue the command:

source /grid/fermiapp/mu2e/Offline/v2_0_1/g4_4942/bin/createTestRelease

This will strip bin/createTestRelease off of the path name and use what's left as the root of the base release:

The createTestRelease command does not set up that directory to be the base release; that comes later. The output of createTestRelease will look like:
Creating test release:
Making matching directory for output files: /mu2e/data/users/your_kerberos_principal/Workbook

Now do some more exploring. The lines starting with > are what you should type (not including the >) and the other lines are what you should see:

> ls
bin  CVS  Ex01  Ex02  Ex03  Ex04  Ex05  Ex06  lib  out  SConstruct
> ls -l out
lrwxrwxrwx 1 mu2e mu2e 34 Aug 11 20:02 out -> /mu2e/data/users/your_kerberos_principal/Workbook/out
The createTestRelease script did the following:
  1. It created two directories, bin and lib.
  2. It copied two files, SConstruct and bin/ from /grid/fermiapp/mu2e/Offline/v2_0_1/g4_4942.
  3. It wrote the file, described below.
  4. It created a directory on /mu2e/data/users to hold the output files from your work.
  5. It made a symbolic link from the current directory to that working space.

The last step in the getting started instructions is to source the setup script:

which will produce output like
Base release directory is:  /grid/fermiapp/mu2e/Offline/v2_0_1/g4_4942
MU2E_SEARCH_PATH:   /grid/fermiapp/mu2e/Offline/v2_0_1/g4_4942/:/grid/fermiapp/mu2e/DataFiles/
MU2E_SEARCH_PATH:   /grid/fermiapp/mu2e/users/your_kerberos_principal/Workbook/:/grid/fermiapp/mu2e/Offline/v2_0_1/g4_4942/:/grid/fermiapp/mu2e/DataFiles/
Sourcing will establish the base release and establish the current working directory as the test release. These two operations define some environment variables, append to other environment variables and add elements to the front of your path.

Logging out and Logging In Again

If you log out from your session and log in again, the following steps are needed to re-establish the working environment.
cd /grid/fermiapp/mu2e/users/your_kerberos_principal/Workbook
setup mu2e
The order of these two lines is not important.

Rebinding a Test release to a new Base Release

This section discusses a future scenario that you need to be aware of; it is not part of the getting started instructions.

In the future you may wish to bind your test release to a new base release. This will happen if you want to build your test release against a newer base release in order to keep up with changes in the head. It may also happen if the base release you have been using is declared out of date and is deleted.

To rebind to a new base release, follow these steps:

  1. Optional: use tar to make backup of your work.
  2. Do a full clean: scons -c
  3. Log out.
  4. Log in again and cd to the root directory of your test release. Then:
setup mu2e
where path_to_new_base_release is the path to the new base release that you wish to use. The second script will make a subdirectory in your working area named oldBaseRelease. It will copy three files into this new directory: SConstruct, bin/ and It will then make new versions of these files, copying the first two from the base release and writing the third itself. The new will tie your test release to the base release.

You might wish to compare the saved files against their new counterparts in order to see if anything has changed in the new base release.

If your working directory already contains a directory named oldBaseRelease, then the rebind script will detect this and stop. To continue, remove or rename this directory and rerun the rebind script.

A First Look at art

The Mu2e Offline software uses a framework whose name is art. The name art is not an acronym and it is always written in lower case. art is an evolution of the CMS framework, cmsrun, that was forked about 3 years ago; we no longer share code with CMS. Mu2e uses art as an external product, just as it uses ROOT, CLHEP and Geant4; that is, when you build the Mu2e software, you do not need to build art, you just link to it.

What is a framework? To a physicist I would say that art is the thing that drives the event loop. To a computer scientist I would say that it is a state machine that holds run-time configurable lists of callbacks to be executed on each state transition.

In addition, art has many other features that will be discussed as they become important.

What is an event? At this stage we have the flexibility that an event could be all of the data associated with the live window (about 900 ns) of one cycle of the muon beamline. Or an event could be the data associated with one hardware trigger. Both options can be accommodated without difficulty.

One feature that we inherit from our choice of art is that events have a three-part event id. The parts are named the run number, the subrun number and the event number. Mu2e has not yet defined what unit of data will constitute a run and what will constitute a subrun. At this stage we should view them simply as keys within a tiered bookkeeping system that we can use in whatever way we find convenient.

When you want to do a HEP reconstruction or analysis project, you often want to write several distinct pieces of code that communicate with each other and form a coherent whole. For example, you may wish to write:

The way that you do this in art is that you write a C++ class that obeys a few rules established by art and a few other rules established by the Mu2e build system. A class that obeys these rules is called an art module. A module class is required to have a member function that is called for every event; it may also have member functions called at beginJob, beginRun, endRun, endJob and several other state transitions.

Not every class you write will be an art module (although the workbook exercises will make it seem that way!). Once you start a real project, most of your classes will be ordinary C++ classes that have no special constraints imposed by art; most of your classes will be subsystems and tools that are called by your modules.

When the build system builds your module, it will build it into a shared library, sometimes called a plugin. The shared library has a name that follows one of the conventions defined by art, which ensures that art knows where to find it.

The name of the art main program for the Mu2e experiment is mu2e. When this main program is linked, it is NOT linked to any of the Mu2e written modules.

When you run mu2e you must give a command line argument that holds the name of an art run time configuration file. This file tells art what input files it should read, what output files it should write, which modules it should run and so on; it also tells art which groups of modules must be run in a specified order and which groups of modules are free to be run in any order. When you tell art to run your module, art looks in a list of directories to find a shared library whose filename contains the name of your module (it has protection against name collisions). It then loads your shared library and uses it. Adding a new module to your mu2e job does not require relinking the main program; you need only update the run time configuration file and, if not already done, build the shared library for the new module.

Within one art job it is possible to run a particular module more than once, giving each instance of the module a different run time configuration. For example one might wish to compare the output of track finding with loose cuts to that of track finding with tight cuts. Using art, one can do both tasks in a single job and art will properly label the output of each module instance so that downstream users can tell which is which.

There are five types of modules:

The prefix ED refers to Event-Data. A producer module is permitted to read information from the current event and to add information to it. An analyzer module is permitted to read information from the current event but it may not add information to it; presumably an analyzer module will do something like fill histograms or make printed output. A filter module is permitted to alter the flow of processing modules within an event; a filter module may also add information to the event, just like a producer. The flow control properties of filter modules will be discussed later. Input and Output modules will not be discussed in this workbook.

About the word event-data. A few decades ago the Particle Data Group admonished the HEP community for using the word "data" to refer, interchangeably, to either simulated events or to actual experimental data. The HEP software community has since adopted the word Event-Data to refer to the software details of dealing with the information found in events, whether those events are experimental data or are simulated. Thus the files that contain our simulated events are called event-data files.

Exercise 1

Make sure you have completed the getting started instructions.

In the first exercise you will build a module that prints out the event id and the number of reconstructed tracks found in each event. You will then run a job that reads an input file, runs the track finder and fitter and then runs your module. For technical reasons we cannot yet store the fitted tracks in the event-data file; that will come soon and this exercise will be reworked so that you do not need the step that does the track finding and fitting.

This exercise has four files:

Ex01: Building, Running and Checking the Output

To build and run the exercise

scons lib/
mu2e -c Ex01/ex01.fcl >& out/ex01.log
Look near the end of out/ex01.log. There should be a line
Art has completed and will exit with status 0.
If the last number is anything except 0, then art did not complete successfully and you must investigate. If you cannot figure out what the error is and how to fix it, speak with the software team. In almost all circumstances this will be the last line in the file but it is possible to link in external code that will make printout following this line.

Every time you run an art job, you should check the completion status.

You will also notice that this job created the file out/ex01.root; this contains some diagnostics created by TrkPatRec; you should ignore it for now.

The source code for the module run in this job is found in the file Ex01/ The core of this file is the fragment shown below:

  void Ex01::analyze( const art::Event& event ) {

    art::Handle<KalRepCollection> krepsHandle;
    event.getByLabel("trkPatRec", krepsHandle);
    KalRepCollection const& kreps(*krepsHandle);

    std::cout << "Ex01: ReadKalmanFits  for event: "
              <<
              << "  Number of fitted tracks: "
              << kreps.size()
              << std::endl;

The method analyze is called once for each event. The first three lines in the body of the method are the recommended pattern for accessing information in an event. In this case we want to access the collection of fitted tracks; the details of how this works will be discussed later. For the time being it is enough to know that KalRep is the name of the class that holds all of the information about one fitted track and KalRepCollection is, under the covers, just a std::vector<KalRep const*> ( hidden a few layers down so that the intermediate layers can provide some additional functions ). A KalRepCollection contains one entry for each fitted track in the event and the expression kreps.size() tells you how many entries there are in the collection, which is the same as the number of reconstructed tracks in the event.

If you look at the output file out/ex01.log, you will find the following 10 lines:

Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 1  Number of fitted tracks: 0
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 2  Number of fitted tracks: 1
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 3  Number of fitted tracks: 1
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 4  Number of fitted tracks: 0
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 5  Number of fitted tracks: 0
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 6  Number of fitted tracks: 0
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 7  Number of fitted tracks: 0
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 8  Number of fitted tracks: 1
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 9  Number of fitted tracks: 1
Ex01: ReadKalmanFits  for event: run: 1 subRun: 0 event: 10  Number of fitted tracks: 0
The lines may be scattered throughout the file because of buffering issues when many output streams are writing to the same output file. To see them all at once use the unix command:
 grep Ex01: out/ex01.log
Because the simulated conversion electrons in these files were generated uniformly over 4pi, many of them will not make enough hits in the tracker to be reconstructed. So it is not surprising that many events have no reconstructed tracks.

Some exercises with the source block

Now look at the file Ex01/ex01.fcl, which is the file that tells art what to do. Find the block:

source : {
  module_type : RootInput
  fileNames   : [ "/grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/conversionOnly_01_data.root" ]
  maxEvents   : 10
}
Note that the name of the input file is quoted; this is required because the filename contains special characters, the slashes, the dot and the underscores.

Here is a set of suggested exercises:

  1. Run the mu2e command without the argument "-c Ex01/ex01.fcl" and observe the error message.
  2. Change the value of maxEvents to 15 and rerun the program; check the printout to verify that it ran the correct number of events.
  3. Write a malformed number as the value of the maxEvents parameter and observe the error.
  4. Set maxEvents to -1; art will run until it reaches the end of file. In this case 20 events.
  5. Change the name of the input file to conversionOnly_02_data.root, in the same directory. This file also has 20 events but those events have run number 2, not 1. Rerun the job and check the output.
  6. Tell art to read both files, one after the other. To do this put both filenames in the definition of the parameter fileNames, inside the [] and separated by a comma. Inside the [] and outside the quotes, whitespace is not significant and a newline counts as white space; so you can spread the two filenames over several lines:
      fileNames   : [ "/grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/conversionOnly_01_data.root",
                      "/grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/conversionOnly_02_data.root" ]
      maxEvents   : -1
  7. Change Ex01/ex01.fcl back to reading in just one file and test it with maxEvents : -1.
  8. Get the mu2e command line help: mu2e --help
  9. Modify the number of events to process from the command line using the -n argument.
  10. Read a list of filenames from a file: mu2e -c Ex01/ex01.fcl -S Ex01/ex01.list
  11. Try out the --nskip and -s options and verify that they behave as you expect.
If a parameter is specified in both the .fcl file and on the command line, the command line takes precedence. There is no intrinsic limit to the number of files that can be specified in the fileNames parameter of the source block or via the --source-list option.

At present art is configured so that you can only change a few parameters from the command line. It is possible to reconfigure art to permit changing any parameter from the command line. I think that the right answer is somewhere in between these two extremes and it is possible to write any set of rules that we want.

Ex01: Introducing Module Labels and Paths

The file Ex01/ex01.fcl contains the following fragment:

physics : {

  producers : {
    trkPatRec : @local::DownstreameMinus
  }

  analyzers : {
    readfits : { module_type : Ex01   }
  }

  p1 : [ trkPatRec ]
  e1 : [ readfits ]

  trigger_paths  : [p1]
  end_paths      : [e1]
}

This fragment defines the run time configurations for one producer module and one analyzer module. The producer module is the module that runs the track finder and fitter. It has a long complicated configuration that you can find if you look for the definition of TrkPatRec in the file
The meaning of this configuration will be discussed in Dave Brown's lectures later this week.

The analyzer module has a trivial configuration. The only element is the name of the class to use, Ex01; this is the class whose source is in Ex01/ . In the context of the configuration of a module the parameter name module_type is a keyword reserved to art.

In this fragment two identifiers are highlighted in red, trkPatRec and readfits. These are almost arbitrary names chosen by the user and they are known as module labels. A module label may contain only letters and numbers; in particular it must not contain an underscore character; module labels are case sensitive.

A module label is different from the class name of the module. These ideas are distinguished because a module label represents a module class name plus its run time configuration. It is meaningful to put multiple instances of one module class in one job, with each instance having its own configuration. Each of these instances is identified by its module label. It is perfectly correct art/fhicl to choose a module label that is the same as the class name but it might confuse others.

It is required that all module labels within a single art job be unique.
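As an illustration of multiple instances, the fragment below configures two instances of the same producer class, each with its own module label. The labels and the direction parameter are invented for illustration and do not appear in the Workbook files:

```
physics : {
  producers : {
    # Two instances of one module class; each label identifies one
    # instance plus its own configuration.
    trkPatRecDown : { module_type : TrkPatRec  direction : "downstream" }
    trkPatRecUp   : { module_type : TrkPatRec  direction : "upstream"   }
  }
}
```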

Now look again at the analyze method of

 void Ex01::analyze( const art::Event& event ) {

    art::Handle<KalRepCollection> krepsHandle;
    event.getByLabel("trkPatRec","DownstreameMinus",krepsHandle);
    KalRepCollection const& kreps(*krepsHandle);

    std::cout << "Ex01: ReadKalmanFits  for event: " << event.id()
              << "  Number of fitted tracks: "
              << kreps.size()
              << std::endl;

This code has hard coded in it the module label of the module that produced the fitted tracks. If, in the .fcl file, you change the module label of the track fitting module, you must change it here and recompile. For this reason it is a bad idea to hard code module labels into code; it is done here so that you can feel the pain when you do the next set of exercises. Exercise Ex02 will show you how to avoid the need for recompilation.

In the above fragment from ex01.fcl, there are two names highlighted in blue, p1 and e1. These are known as path names. They are almost arbitrary names that must be unique at the scope of the physics block; in addition they must not be any of the keywords defined by art at top level scope within the physics block: producers, analyzers, filters, trigger_paths and end_paths. It does not make much sense to talk much about paths until we have a richer example. If you feel ambitious you can read the web page that discusses reconstruction modes.

Ex01: Exercises with Module Labels and Path Names

Do the following exercises:

  1. In ex01.fcl, change readfits to some other name. Do this in only one of the two places; then run the code and observe the error message.
  2. Change readfits in the other place so that the names match again. Run and observe that it works again.
  3. Change trkPatRec in both places. Run the code and observe the error message.
  4. Edit Ex01/ and change the name trkPatRec to match what you did in the previous exercise. Rebuild the code:
            scons lib/        
    Run the code and see that it works again.
  5. Change p1 to some other name at its definition but leave it unchanged in the definition of trigger_paths. Run and observe the error.
  6. Change p1 in the definition of trigger_paths to the new name you used in the previous step. Run and observe that the code works again.
  7. Repeat the last two exercises for e1.

Ex01: What scons did

Mu2e uses a build system named scons; it is a replacement for gmake and its friends. The command scons runs the build system, which will do the following operations:

  1. Compile Ex01/ to produce the object file Ex01/Ex01_module.os
  2. Link Ex01/ to produce the shared library lib/
The information about the compiler and linker options, the include path (-I), the link path (-L) and the required link libraries (-l) is found in the two files SConstruct and Ex01/SConscript. These will be discussed later. The output of the scons command will look like

scons lib/
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
g++ -o Ex01/Ex01_module.os -c -g -O3 -fno-omit-frame-pointer ...  Ex01/
g++ -o lib/ -shared Ex01/Ex01_module.os ..
scons: done building targets.
where I have cut out a lot from the two lines that start g++. If you read those lines on your screen you can identify the compiler options, include path, link path and link libraries.

Mu2e has a web page that has some additional details about scons.

What are .os and .so files? In the bad old days, a build system compiled a .cc file into an object file ending in .o and put the .o file into a library that ended in .a. The .a libraries are still around but Mu2e uses shared libraries that end in .so; these libraries have many advantages over .a libraries but that discussion is beyond the scope of this section. Scons uses the convention that object files destined for a .a library end in .o while object files destined for a .so library end in .os. This convention simplifies the task of keeping the bookkeeping straight if you want to build both .a and .so libraries from the same source and you need different compile options for the two cases.

The command mu2e runs the mu2e main program. In the example above, the -c option tells mu2e to find its instructions in the file Ex01/ex01.fcl. And the output of the run is redirected to the file out/ex01.log ( instead of going to the terminal screen). The file out is actually a symbolic link to a directory on a large data disk; the disk that holds your Workbook files is backed up and, therefore, has a limited size. It is important that you not fill up your working disk with data files, log files and root files.

The Rest of Ex01/ex01.fcl

This section gives an overview of the file Ex01/ex01.fcl. This file is written in a language called the Fermilab Hierarchical Configuration Language ( FHICL, pronounced "fickle"), a language that was developed at Fermilab to support run-time configuration for several projects, including art. By convention, the names of FHICL files end in .fcl.

The first block of code in this file is the three lines:

#include "minimalMessageService.fcl"
#include "standardProducers.fcl"
#include "standardServices.fcl"
These include directives behave exactly like #include directives in C++ source code files and they are used to predefine standard parameter definitions that will be used later in ex01.fcl. When you see a parameter defined using the syntax, @local::xxxx, then look for the definition of the FHICL name xxxx in the included files; inclusion is recursive so you may need to open several files to find the definition.
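For example, a definition and its @local:: reference fit together roughly as sketched below. The parameter contents here are invented for illustration; the real definitions live in files such as standardProducers.fcl:

```
# Inside an included file, definitions are usually wrapped in a prolog:
BEGIN_PROLOG
  DownstreameMinus : {
    module_type : TrkPatRec
    # ... many more parameters ...
  }
END_PROLOG

# Later, in ex01.fcl, @local:: pastes in the block defined above:
trkPatRec : @local::DownstreameMinus
```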

The next block is the single line:

process_name : Exercise01
The process name may contain only letters and numbers; in particular it must not contain the underscore character. There will be more discussion of the process name when we discuss the naming of data products.

The next block is the source block, which has already been discussed. After that comes the services block:

services : {
  message      : @local::default_message
  TFileService : { fileName : "out/ex01.root" }

  user : {
    GeometryService        : { inputFile      : "Mu2eG4/test/geom_01.txt"            }
    ConditionsService      : { conditionsfile : "Mu2eG4/test/conditions_01.txt"      }
    GlobalConstantsService : { inputFile      : "Mu2eG4/test/globalConstants_01.txt" }
  }
}
This block provides configuration information for five entities called services. The first two are part of art proper; the last three are written by Mu2e and are found in the Mu2e base release. If we add another Mu2e-written service, its parameter set must be put into the user block of the services block. Any service is callable by any method of any module.

The message service is used to control the behaviour of the message logger subsystem. The TFileService will be discussed in Exercise Ex03.

A module that needs to know about the geometry of some subsystem can access that information via the geometry service. The geometry service learns about the geometry of the experiment by reading the file "Mu2eG4/test/geom_01.txt". To use a different geometry, change the inputFile parameter in the GeometryService parameter set.

In the future, the information found in the geometry file will be put into a database. At that time the inputFile parameter will be replaced with something like a database key. We have written the geometry subsystem so that it has a front end, seen by the user, and a backend that is never seen by the user. If we have done this right, when we replace the file with a data base, only the backend need change and user code can stay unchanged.

Similarly the ConditionsService holds the time dependent calibration information. It is in a very primitive form. It too has a front end and a back end, in preparation for eventual migration from a file to a database.

The purpose of the GlobalConstantsService is similar to the ConditionsService but it may only hold information that is available at the start of the job and is certain to be constant for the entire job. This includes things like the particle data table, the measured lifetime of muonic aluminum and so on.

Services are also permitted to use other services, but the dependency graph has to be acyclic.

Mu2e maintains a web page with more information about Services.

The last block of code is the physics block, which was discussed previously.

For more information, there is a webpage with Mu2e maintained documentation about the FHICL language; this includes a section on the structure of an art run-time configuration file.

Data Products and their Names

In Ex01/ we encountered KalRepCollection; this is an example of a data product. A pretty good functional definition of a data product is that it is an object that can be accessed by calling one of the get methods of the art::Event class; an example is the getByLabel method used in

Data products are identified by a name with four parts, all of which are strings:

  1. A string representation of the type name.
  2. The module label of the module that put the data product into the event.
  3. A string called the instance name.
  4. The process name of the job that created the data product; this is the process_name parameter from the .fcl file.
The full name of a data product is formed by sticking the four parts together, separated by underscore characters; this is why module labels and process names may not contain underscores.

The designed use of the instance name is to allow one to distinguish data products that would otherwise have identical names. You will see this use when we discuss the output of the Mu2e Geant4 module.

In the case of the track pattern recognition code, it is used to encode meta-data about how the pattern recognition module was configured; this will be described in Dave Brown's talk.

Some additional information is available about the identifiers of data products.

Getting Data Products from the Event

In the analyze method of Ex01/, there is a block of three lines that is the recommended pattern to retrieve the fitted tracks from the event:

  void Ex01::analyze( const art::Event& event ) {

    art::Handle<KalRepCollection> krepsHandle;
    event.getByLabel("trkPatRec","DownstreameMinus",krepsHandle);
    KalRepCollection const& kreps(*krepsHandle);

To understand these lines you need to know some background information. In future exercises it will become clear how you would know these things for yourself. They are:
  1. The class that holds information about fitted tracks is named KalRep.
  2. The class that holds all of the fitted tracks in the event is named KalRepCollection.
  3. In the general case, an event may contain many KalRepCollections, so we need to ask for the one we want.
  4. The KalRepCollection that we want was made by the module with the module label "trkPatRec".
  5. The KalRepCollection that we want has an instance name of "DownstreameMinus". The upcoming presentations by Dave Brown will discuss what this means.
The first line in this block declares that the variable krepsHandle is of type art::Handle<KalRepCollection>. For those of you who are not very familiar with C++, the angle brackets <> tell you that art::Handle is a class template. You do not need to understand this in any detail; all that you need to know is that you can use a handle just as if it were a pointer and that the thing that goes in the angle brackets is the name of the type that you want to get out of the event. Here the word type has its formal C++ meaning: a type is one of the following: a built-in type, a class, a struct or a typedef. In this case it is the name of the typedef KalRepCollection.

The second line in this block asks the event to look for a data product that has the type KalRepCollection, the module label "trkPatRec" and the instance name "DownstreameMinus". In this search, the event object ignores the process name part of the data product name. If the event object finds exactly one match it will fill the variable krepsHandle with a pointer to that data product. If the event object finds no matches, or more than one match, it will not fill a pointer into krepsHandle, leaving it in an invalid state.

It's clear that the second line asks for the given module label and instance name; they are the first two arguments. But how did the event know the requested type? The answer is that it was able to figure that out from the type of the third argument. The details of how this works are beyond the scope of this discussion.

If, in line 2, the event successfully found a unique match, then the third line dereferences the pointer inside the handle and creates a new variable, kreps, which is a const reference to the data product. In the code fragment above the ampersand, denoting the reference, is highlighted in red. This creation of a reference is fast: a reference is essentially just a compile time alias. If the ampersand were not present, the variable kreps would be a copy of the data product. This would be a much slower operation because KalReps are very big objects. The reference must be a const reference because once a data product is in the event no other code is permitted to modify it; this is enforced by the handle having only const accessors.

One of the most common beginner's mistakes is to omit the ampersand and make an unnecessary copy.

If, in line 2, the event did not find a unique match, then the handle remains in its default constructed, invalid state. When the code tries to dereference the pointer, code inside the handle class will recognize that the handle is invalid and will throw an exception. The policy of art is that users should not catch exceptions; instead we let art catch exceptions. art can be configured at run time to respond in different ways to different sorts of exceptions. For this exception the default behaviour is to stop processing the current event and try the next event. For most other exceptions the default behaviour is to shutdown the program as gracefully as possible: this ensures that your work up to the exception will be saved; for example the histogram file and any event-data output files will be properly closed and all log files will be flushed.

You can check for an invalid handle as follows:

  if ( ! krepsHandle.isValid() ){
       // do something because the handle is not valid.
  }
If your code expects to always find the requested data product then there is no point to doing this check yourself; not finding the data product is a true error and you should let art deal with it. If, on the other hand, your code will run on events that sometimes will not have the requested data product, then you should do this test yourself.

One might ask why we went through all of the bother of using a handle class. After all, we could have achieved almost identical functionality by adopting the convention that getByLabel will return a bare pointer; if the pointer is non-null it points to the data product, but if the pointer is null then getByLabel could not find a unique match. The main reason is that, with a bare pointer, if you forget to check it and it happens to be null, then the program will abort. When this happens you do not get a useful error message saying why it stopped and you will need to use a debugger to figure it out. In addition, all of your work to that point will be lost. The handle, by contrast, throws an exception that art handles gracefully. In our judgement this benefit far outweighs the minor inconvenience of a slightly longer learning curve.

One could have written the analyze method with one fewer line:

 void Ex01::analyze( const art::Event& event ) {

    art::Handle<KalRepCollection> krepsHandle;
    event.getByLabel("trkPatRec","DownstreameMinus",krepsHandle);

    std::cout << "Ex01: ReadKalmanFits  for event: " << event.id()
              << "  Number of fitted tracks: "
              << krepsHandle->size()
              << std::endl;

Here the handle is used as a pointer and this version indeed works correctly. But you should be aware of one thing: every time that you dereference the handle the validity check is performed. If you use a handle only a few times the extra validity checks are not significant. If, on the other hand, you use a handle deep inside nested loops, the extra work may add up. In such a case you should certainly dereference the handle outside of the loop, yielding a const reference ( or a pointer to const ) to be used inside the loop. The bottom line is that, with a reasonable optimizing compiler, the three line version will never be slower than the two line version but there are circumstances in which it may be faster.

One final comment is in order. A key design feature of art is that once a data product has been put into an event, it will remain in the event and will remain unchanged until the end of processing of the event. Therefore once the analyze method has a handle, reference or pointer to a data product, that object is guaranteed to remain valid until the end of the analyze method.

Exercises with art::Handles

Do these exercises.

  1. Remove the const from the declaration of kreps. Recompile and observe the compilation error.
  2. Misspell either the module label or the instance name. Recompile, run and observe the error.
  3. Try the alternate two line form for accessing the data product.

About The Rest of Ex01/

This file starts with a lot of include directives. The header files included from Mu2e are all relative paths rooted at $MU2E_BASE_RELEASE. So you can find the header file for KalRepCollection at:

You can also find it using the Mu2e cvs code browser. The header files included from art are relative paths rooted at $ART_INC. So you can find the header file for EDAnalyzer.h at:
You can also find the source for art code in the art redmine code browser.

There are some anti-intuitive aspects to both code browsers. In the Mu2e code browser, it is straightforward to navigate to a page that shows all of the different versions of a file; click on a version number to see the contents of that version. In the art code browser, after you have navigated to a file and clicked on the file name, you will see a long list of hash codes that are version numbers. Look at the top of the page and click "View" to see the file.

After the includes, the file opens the namespace mu2e and declares a class with only a constructor and one method, named analyze. In the following I show the contents of the file, but with the definitions of the functions removed.

namespace mu2e {

  class Ex01 : public art::EDAnalyzer {

  public:

    explicit Ex01(fhicl::ParameterSet const& pset);
    // Accept compiler written d'tor.

    void analyze(const art::Event& e);

  };

  // Implementations of c'tor and analyze not shown.

}
Mu2e requires that all Mu2e code live in the mu2e namespace. This provides a layer of insurance that our class names will not accidentally collide with the class names used by one of our external products. There are good reasons and bad reasons to make sub namespaces below the mu2e namespace; if you want to use a sub namespace for your project, speak with the software team first. If you want to define a new top level namespace, speak with the software team first. We ask you to speak with us first to ensure that you are considering all issues, even those outside the scope of your immediate problem.

What makes this file a module? There are three elements:

  1. This class inherits from one of the five module base classes, in this case art::EDAnalyzer.
  2. DEFINE_ART_MODULE(mu2e::Ex01);
  3. The class contains a method with the signature analyze( const art::Event& );
About point 1: you do not need to understand inheritance as used here; you can treat it as just a mantra.

About point 2: DEFINE_ART_MODULE is a C-PreProcessor (CPP) macro that writes a few additional functions into the .os file; these functions are used by art when it comes time to load the shared library and to instantiate this module. No Mu2e member will ever need to understand the details of how this works but feel free to figure it out if you want to. The CPP macro is defined in the file art/Framework/Core/ModuleMacros.h .

About point 3: if the analyze method were not present with the required signature, this class would not compile. ( For the experts, the analyze method is declared pure virtual in the art::EDAnalyzer base class ).

In general the use of CPP macros is strongly discouraged in Mu2e code; their use here is one of the few permitted uses. The other permitted uses are for include directives and header guards. Use of the ifdef and ifndef macros to enable debug sections of code is permitted. If you want to use a CPP macro for something else, consult with the software team; if there is a way to do what you want in the core language, this is strongly preferred because CPP macros do not have the type safety features of the core language.

A few final comments on the file Ex01/ are in order:

There is no destructor - what's up? If you do not write a destructor, then the compiler will write one for you. If your class contains no data members that manage resources, such as pointers, ostreams, istreams and the like, then the compiler written destructor will very likely be correct. We recommend that, if the compiler written destructor will do the right thing, then let the compiler write the code. This class has no data members so the compiler will certainly do the right thing; if, in a different case, you are not sure that the compiler will do the right thing, ask the software team.

One of the oddities of C++ is that the following two declarations have exactly the same meaning:

  const art::Event& event;
  art::Event const& event;
You will see both throughout the Mu2e code.

Finally, most people find it odd that the class declaration and its definition are all in one file; among other things, this means that there is no header file for other classes or functions to include. I understand the answer, and you can ask, but I don't know how to write a short concise answer that will mean anything to most people.

The Final Ex01 Exercises

  1. The method returns an object of type art::EventID. The header for the Event class is found in
    The header for the EventID class is found in
    Look in the EventID class to find the methods that return the run and event numbers. Using these methods, modify the printout so that it only prints out the run and event numbers ( I am asking you to print two numbers not to modify the streamer of EventID).
  2. Make a new module as follows:
  3. Comment out the analyze method from both the declaration and the definition. Recompile and observe the error message.

Exercise 2

This exercise introduces the concept of parameter sets for modules. The code in this exercise has exactly the same functionality as that of Ex01 but it promotes two hard coded text strings to parameters of the run-time configuration system.

To build this exercise and run it:

scons lib/
mu2e -c Ex02/ex02a.fcl >& out/ex02a.log
mu2e -c Ex02/ex02b.fcl >& out/ex02b.log
The file ex02a.fcl runs just Ex02 while ex02b.fcl runs both Ex01 and Ex02.

The code for this exercise is found in Ex02/; that code is shown below and the changes with respect to Ex01/ are highlighted in red.

This code has two new private data members. These data members are initialized in the constructor by getting information from the .fcl file and they are used in the analyze method.

namespace mu2e {

  class Ex02 : public art::EDAnalyzer {

  public:

    explicit Ex02(fhicl::ParameterSet const& pset);
    // Accept compiler written d'tor.

    void analyze(const art::Event& e);

  private:

    string _trkPatRecModuleLabel;
    string _instanceName;

  };

  Ex02::Ex02(fhicl::ParameterSet const& pset):
    _trkPatRecModuleLabel(pset.get<string>("trkPatRecModuleLabel")),
    _instanceName(pset.get<string>("instanceName")){
  }

  void Ex02::analyze( const art::Event& event ) {

    art::Handle<KalRepCollection> kRepsHandle;
    event.getByLabel(_trkPatRecModuleLabel,_instanceName ,kRepsHandle);
    KalRepCollection const& kReps = *kRepsHandle;

    cout << "Ex02: ReadKalmanFits  for event: " << event.id()
         << "  Number of fitted tracks: "
         << kReps.size()
         << endl;
  }

}

The new fcl file, Ex02/ex02a.fcl has two additional lines in the readfits parameter set; they are highlighted in red. And the module_type is changed to Ex02.
     readfits : {
       module_type          : Ex02
       trkPatRecModuleLabel : "trkPatRec"
       instanceName         : "DownstreameMinus"
     }

In Ex01, the argument to the constructor was not used and we did not comment on it. When the constructor of Ex02 is called, art makes sure that the argument of the constructor contains an image of the readfits block from ex02a.fcl. This image is in a format named fhicl::ParameterSet that is described in the header file:

There is also an online code browser for FHICLCPP. The package FHICLCPP is the C++ binding to the FHICL language, the language used for .fcl files. All classes within FHICLCPP are in the namespace fhicl. The FHICLCPP package is separate from art but is maintained by the same group of people.

If you are not familiar with the colon initializer syntax of the constructor, we could have written the constructor as:

  Ex02::Ex02(fhicl::ParameterSet const& pset){
    _trkPatRecModuleLabel = pset.get<string>("trkPatRecModuleLabel");
    _instanceName         = pset.get<string>("instanceName");
  }
This would have produced exactly the same end result. The difference is that this alternate syntax tells the compiler to default construct _trkPatRecModuleLabel and then copy the result of a function call into it; similarly for _instanceName. The preferred option tells the compiler to construct the two data members in place, using the output of the function calls.

For this example there is no significant difference between the two options. Indeed a good optimizing compiler might well generate the same code in the two cases. But the bottom line is that the preferred syntax will never be slower than the alternate syntax and it may sometimes be much faster; so it seems wise to get into the habit of doing it right so that you don't forget about it when it actually is important.

Consider the following expression:

double xxx = pset.get<double>("xxx");
This tells the pset object to look within its parameter set for a parameter named xxx. If it finds this parameter, it will convert its value to a double and return that double by value. Internally, parameter sets hold all values as strings until the user requests one, either as a string or as some other type. Again the angle brackets denote that the get method is a template. All that you really need to know about this template is that the name of a type goes between the angle brackets; and that type must match the type of the variable that is on the left hand side of the equals sign.

If the parameter set object cannot find the parameter with the requested name then it will throw an exception. Again you should not catch the exception but should let art catch it. When art catches an exception of this type its default behaviour is to perform a graceful shutdown.

If you read the header file for ParameterSet you will see that it has several ways to inspect the parameter set to see if a named parameter is present. You can get all of the keys as a std::vector<std::string>; and you can query just one parameter name using the get_if_present method.

There is no requirement that the variable name that receives the value from the parameter set have the same name as the parameter within the parameter set. But it seems wise to make them as close as other conventions allow. In this case they differ only by the underscore that is used to mark member data.

Exercises for Ex02

  1. Remove the definition of one of the parameters in the readfits block and observe the error.
  2. Add the following lines to the readfits block in ex02a.fcl
              a : 7.5
              b : true
              c : [ 1, 2, 3]
              d : { e : 4 f : 5}
  3. Add code to the body of the constructor of Ex02 to read each of these values from the parameter set and print it out. Read the parameter a as a double, the parameter b as a bool, the parameter c as a vector and d as a fhicl::ParameterSet; then read e and f from the parameter set d.
  4. When the previous part is working, try reading c as an int and observe the error message.

Exercise 3

This exercise introduces three ideas.

  1. Using root to make histograms.
  2. Optional parameters in parameter sets.
  3. Some additional features in FHICL files.

To build this exercise and run it:

scons lib/
mu2e -c Ex03/ex03.fcl >& out/ex03.log
root -l Ex03/ex03.cint
The last line will run root to display the histogram created by the second line; it will also create a pdf file of the histogram, out/ex03.pdf. When root returns to its command line, exit root by typing ".q" ( without the quotes). The histogram contains the same information as the printed output: the number of events with 0 fitted tracks, the number with 1 fitted track, and so on. Check that the printout and the histograms agree! The histogram should look like:

To view the root file interactively, start root with the shell command:

root -l
At the root prompt, type:
TFile *  tf = new TFile("out/ex03.root");
TBrowser* b = new TBrowser("Browser",tf);

This will open a TBrowser window. In the left hand panel, you will see two root subdirectories named trkPatRec and readfits; the first was created by the track pattern recognition code. The second was created by your module. Click on the directory named "readfits". Then click on the histogram object named "hNTracks"; the histogram will be drawn in the browser. Return to the root command window and type .q to quit.

There is an easier way to view histograms interactively:

browse out/ex03.root
This will invoke the script $MU2E_BASE_RELEASE/bin/browse. This will run root, telling it to do exactly what was done by hand in the previous instructions.

There is one important thing to notice about both ex03.cint and the browse program. Both of these tell root to show the count of all entries, the number of underflows, the number of overflows and the total count within the bounds of the histogram. We strongly recommend including this information unless there is a compelling reason not to.

Now look at Ex03/ . The outline for creating and filling histograms is:

  1. Make a data member that is a TH1F*.
  2. Create the histogram at beginJob ( or maybe beginRun ). Save a pointer to the histogram in the data member.
  3. In the analyze method, use the data member to fill the histogram.
  4. Count on art to properly close the file and write it out at the end of the job.

The only odd feature is the two lines:

 void Ex03::beginJob(){
    art::ServiceHandle<art::TFileService> tfs;
    _hNTracks   = tfs->make<TH1F>( "hNTracks", "Number of tracks per event.", 10, 0., 10. );
 }
The TFileService is a service provided by art itself. The way that you ask to use a service is to instantiate the art::ServiceHandle class template with the class name of the service as the template argument. The handle object can be checked for validity and used like a pointer. An art::ServiceHandle is valid for the life of an art job; once a service is instantiated its memory location never changes. Its internal state may change but the handle is valid for the life of the job.

The job of the TFileService is to help modules manage their use of root resources so that

If you want to use the TFileService, it must have a configuration block present in the .fcl file. Normally that block has only one argument, the name of the root output file to which histograms, ntuples, TTrees etc from all modules will be written.

Within an art job, the first time that the TFileService is requested it will open the root output file. If a file with that name already exists, it will be overwritten. I have not checked recently but I think that this behaviour can be controlled using other parameters in the parameter set.

For each module instance in an art job, if that module uses the TFileService, then the TFileService will create a directory in the root output file. The name of that directory will be the module label of that module instance. Because module labels are guaranteed unique within an art job, this ensures that every module instance has a unique subdirectory to work in. The TFileService also makes sure that on every transition from one module to another, the root current working directory is changed to the right directory.

When you create a histogram, ntuple or TTree, you should instantiate it via the TFileService, as shown in this exercise.

You are free to open your own TFile and to write root objects to that file. If you do this please ensure that whenever you exit any method of your module, that you leave all of the root global state in exactly the condition that you found it.

You may make root subdirectories within the directory managed on your behalf by TFileService. This will be discussed in a later exercise.

The second new element in this exercise is optional parameters in a parameter set. Consider the call in the module's constructor that reads the maxPrint parameter.

Compared to previous uses of fhicl::ParameterSet, there is an extra argument, which serves as a default value. If the parameter maxPrint is present in the parameter set then FHICL will return its value to the caller. If that parameter is absent, then FHICL will return the default value to the caller.

One should use default values with care. It makes sense to provide default values for parameters that control debugging and logging printout. But it is usually not wise to provide default values for parameters that control the physics content of the job.

The syntax to provide a default value for an array type is:

vector<int> pdgIDs = pset.get<vector<int> >("interestingPDGIDs", vector<int>() );
The last argument is a default constructed vector<int>.

Exercises for Ex03 - Part 1

  1. Change the maxPrint parameter in ex03.fcl and observe the behaviour.
  2. Remove the maxPrint parameter from the parameter set and observe that the code does not throw and that the expected behaviour occurs.
  3. Change the name of the root output file. Run the code. Edit Ex03/ex03.cint so that it uses the new root file.

Redefining a FHICL Parameter

Before starting this section, run:

cvs update -PdA Ex03
to pick up some changes that I just committed on Monday evening, August 13, 2012.

Copy the file ex03.fcl to ex03b.fcl. Then edit ex03b.fcl and add the following line to the bottom of the file:

physics.analyzers.readfits.maxPrint : 5

Hopefully the meaning of this line is clear: it changes the value of maxPrint, previously defined in the readfits analyzer parameter set, from 20 to 5. If a parameter is defined more than once, then the rule is that the last definition wins.

FHICL supports two representations of the hierarchical structure of the document. You have already seen the nested-braces representation earlier in ex03b.fcl and in previous .fcl files. This line shows the dot-separated representation of one element in that hierarchical structure; any element can be replaced by using its full dot-separated name. Moreover, if maxPrint had not previously existed, this line would have created it.
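A sketch of the two forms side by side; the dotted line must be written at the outermost scope, and because the last definition wins it overrides the value given inside the braces:

```
physics : {
  analyzers : {
    readfits : { maxPrint : 20 }
  }
}

# The same element, addressed by its full dot-separated name; last definition wins:
physics.analyzers.readfits.maxPrint : 5
```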

Rerun mu2e and observe that there is now printout for only 5 events, not for 20 events.

mu2e -c Ex03/ex03b.fcl

A weakness in FHICL is that the dot separated identifiers only work if the dot separated name is a complete name, starting from the outermost scope. For example, the following does NOT work:

physics : {
  analyzers : {
    readfits : {
      module_type          : Ex03
      trkPatRecModuleLabel : "trkPatRec"
      instanceName         : "DownstreameMinus"
      maxPrint             : 20
  analyzers.readfits.maxPrint : 7   // Gives a parse error.  Not in outermost scope.
In fact this piece of FHICL will cause the parser to issue an error and stop.

Viewing the Canonical Form of a FHICL Document

You may request that FHICL print the final form of a document, after all substitutions have taken place. The syntax for doing so is weird.

ART_DEBUG_CONFIG=1 mu2e -c Ex03/ex03.fcl
Setting this environment variable tells art that it should not do the work described by the FHICL file; instead it should just print the final form of the FHICL file and stop execution. The shell syntax of setting a variable, followed by whitespace, followed by a valid unix command, sets the variable for that one command only, so it is automatically unset afterwards. If you manage to permanently set ART_DEBUG_CONFIG, so that art always just prints the FHICL file and never actually runs, you can turn it off by:

unset ART_DEBUG_CONFIG

The printed version of the FHICL file appears in a format called the canonical form of the FHICL document. If you compare the canonical form of ex03.fcl to its original form, two properties of the FHICL language are instantly clear:
  1. There are very few places in which white space is significant.
  2. Order is only meaningful within a FHICL sequence ( the lists delimited by [] ).

Using @local:: and PROLOGs

The file Ex03/ex03a.fcl does the same job as Ex03/ex03.fcl but introduces a new element of FHICL (well, it does change the maxPrint variable but nothing else). Look at the file Ex03/ex03a.fcl and compare it to Ex03/ex03.fcl:

diff Ex03/ex03.fcl Ex03/ex03a.fcl
In ex03.fcl, the parameter set for the module label readfits was defined as:
 analyzers : {
    readfits : {
      module_type          : Ex03
      trkPatRecModuleLabel : "trkPatRec"
      instanceName         : "DownstreameMinus"
      maxPrint             : 20
In ex03a.fcl, the parameter set is defined as:
  analyzers : {
    readfits : @local::readfitsDefault
The @local:: prefix tells FHICL to look earlier in the file and find a parameter in the outermost scope whose name is readfitsDefault; it then sets the parameter physics.analyzers.readfits equal to the value of readfitsDefault. The definition of readfitsDefault is found in the file Ex03/readfits.fcl, which is included near the top of ex03a.fcl:
#include "Ex03/readfits.fcl"
That file contains:

BEGIN_PROLOG
readfitsDefault : {
  module_type          : Ex03
  trkPatRecModuleLabel : "trkPatRec"
  instanceName         : "DownstreameMinus"
  maxPrint             : 0
}
END_PROLOG

The definition of readfitsDefault is obvious. The function of the BEGIN/END PROLOG markers is as follows: the material between the BEGIN_PROLOG and END_PROLOG may be used to resolve @local:: references but it may not be used for other purposes and it will be deleted from the final FHICL document.

To illustrate this, do the following exercise:

ART_DEBUG_CONFIG=1 mu2e -c Ex03/ex03a.fcl >& out/ex03_withProlog.fcl
Edit the file Ex03/readfits.fcl to comment out the BEGIN_PROLOG and END_PROLOG lines. Then do:
ART_DEBUG_CONFIG=1 mu2e -c Ex03/ex03a.fcl >& out/ex03_noProlog.fcl
diff out/ex03_withProlog.fcl out/ex03_noProlog.fcl
You should see that, when the BEGIN/END PROLOG markers are not present, there is an additional top level parameter in the final document: the definition of readfitsDefault. This parameter does not mean anything to art and is just useless clutter. So the PROLOG markers are used to remove it.

A sequence of PROLOGs, one following the other, may appear in a FHICL document but PROLOGs may not nest; that is, once you start a prolog, you must close it with its END_PROLOG before starting another PROLOG.
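A sketch of the rule: consecutive prologs are legal, nested ones are not.

```
BEGIN_PROLOG
a : 1
END_PROLOG

BEGIN_PROLOG      # a second prolog immediately after the first is fine
b : 2
END_PROLOG

# A BEGIN_PROLOG inside another prolog, before its END_PROLOG, is an error.
```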

Mu2e makes extensive use of PROLOGs and we encapsulate each PROLOG within a file that is included into FHICL documents. For example look at:

This file defines many standard configurations for Mu2e modules. A typical job may only use a few of these definitions but it is very convenient to have them all in one place. The use of PROLOGs ensures that the unused parameters do not clutter up the final FHICL file.

Where to Find FHICL Include Files

A FHICL include directive has the form:

#include "relative_path_name"
where the second element may be just a file name or a relative path to a file. When FHICL processes an include directive, it searches for the requested file in the search path defined by the environment variable FHICL_FILE_PATH. You can see the value of this environment variable by doing:

echo $FHICL_FILE_PATH
This is a colon separated list of paths to directories. When FHICL processes an include directive, it takes the first element in the FHICL_FILE_PATH, appends a "/" character if necessary, appends the relative path from the include directive and asks if a file with that pathname exists. If such a file exists, then FHICL decides that its search is over and includes that file. If such a file does not exist, it repeats the exercise with the second element of FHICL_FILE_PATH.

At this time, when you setup a release of the Mu2e Offline software, it defines:

If you subsequently setup a Mu2e test release, it will add two new elements to the front of the path:

Exercise 4

Before doing this exercise, update the code to pick up a change committed on the evening of Monday August 13.

cvs update -PdA Ex04

This exercise introduces the following ideas:

  1. It shows how to loop over fitted tracks and does a very shallow survey of the information available about fitted tracks.
  2. It shows how to write a root cint script to make an output pdf file with multiple pages.

A detailed discussion of the information available about each track will be deferred until after Dave Brown's presentations this week. To build this exercise and run it:

scons lib/
mu2e -c Ex04/ex04.fcl >& out/ex04.log
This job is configured to use the input file with 2000 events. It should take about 4 minutes of CPU time to run. If the computers or disks are heavily loaded, the real time may be a few times longer than the CPU time.

In the module source file in Ex04/, the analyze method begins the same way as in the previous exercises. Then there is a loop over all fitted tracks in the event:

    for ( KalRepCollection::const_iterator i=kreps.begin(); i != kreps.end(); ++i ){

      // Reference to one fitted track.
      KalRep const& krep = **i;

      // Use the reference to get information about this track ...

Note the use of the const qualifier in two places; if you omit this qualifier the code will not compile, because the event only grants const access to data products. Also note the use of references to avoid unnecessary copies. One could also have written the last non-trivial line in the above fragment as,
      KalRep const* krep = *i;
and then changed the downstream code from krep. to krep->. Both versions are acceptable.

Next, run the root script to produce a 3 page .pdf file that contains some of the histograms.

root -l Ex04/ex04.cint
After the first page of histograms has been drawn, the following text will appear in the root command window:
Info in <TCanvas::Print>: Current canvas added to pdf file out/ex04.pdf
Double click or hit return in the last active pad to continue:
After this text appears, double click in the middle of the histogram in the lower right hand corner. This will tell root to clear the screen and draw the next page of histograms. After that page has been drawn, again double click in the histogram in the lower right hand corner. This will tell root to draw the last page of histograms. Instead of double clicking to advance pages, you may also single click and then hit the return key. If you are running off site over a slow network, neither technique for advancing to the next page works particularly well.

The main thing to learn from ex04.cint is how to create a multipage pdf file to hold a summary of your work. You can also inspect ex04.cint to see how to do a few simple customizations in root. The statistics box has been customized to show overflows, underflows and the sum of the entries inside the histogram boundaries. The histogram line colour has been changed to blue. Each page has been divided to show 4 histograms on one page. Finally the last page shows how to draw two histograms on the same vertical scale and how to overlay two histograms.

Exercise 5

This exercise illustrates all of the callback methods that are currently supported for an EDAnalyzer module. It also shows how to copy several input files into one output file. To build and run this exercise:

scons lib/
mu2e -c Ex05/ex05.fcl >& out/ex05.log
This is the first exercise that does not run the track pattern recognition code, so the log file is much cleaner. Hopefully both the module source and ex05.fcl are reasonably clear. The one significant new feature in ex05.fcl is the pattern used to write an output file. There are two parts to this pattern. First, you must create the parameter set for an output module:
outputs: {

  outFile : {
    module_type : RootOutput
    fileName    : "out/ex05_data.root"
  }
}
At this time there is only one supported type of output module, RootOutput. The name of the output file is given by the fileName parameter. Second, you must add the module_label of the output module to one of the end_paths.
  e1        : [ allMethods, outFile  ]
  end_paths : [ e1 ]

As with all other module labels, the module label of the output module is an almost arbitrary string that must be unique within the art job; the string may not contain any underscore characters. The name outputs, appearing at the outermost scope, has special meaning to art.

Exercises for Ex05

Rerun ex03.fcl using as input the output event-data file from Ex05.

mu2e -c Ex03/ex03.fcl -s out/ex05_data.root  >& out/ex03_ex05data.log
Compare the log file from this step with out/ex03.log to see that the events read from the new file behave the same as the original events.

Exercise 6

This example:

  1. Introduces the mu2e GeometryService
  2. Introduces the Target geometry object.
To build and run the example:
scons lib/
mu2e -c Ex06/ex06.fcl >& out/ex06.log

The file ex06.fcl tells art to run only one event because all of the action is in the beginRun method of the module. Also, the .fcl file does not initialize the TFileService because it is not needed for this exercise.

The Mu2e geometry is described by a text file. The standard geometry is found in:

This format is not a very convenient representation of the geometry; nor is it intended to be. It is intended to be the smallest set of numbers needed to fully describe the detector. To provide more convenient access to the geometry information, Mu2e has written an art service named the GeometryService. All Mu2e code should get its geometry information via the GeometryService; no Mu2e code outside of the GeometryService should access the geometry file directly. In most of the .fcl files seen so far, there is a FHICL parameter set named services.user.GeometryService that is used to configure the GeometryService; this parameter set has one parameter, the name of the file that holds the geometry information.

Internally the GeometryService holds a TTracker object that parses the TTracker section of geom_01.txt and builds a useful description of the TTracker. The GeometryService also holds a Calorimeter object that parses the Calorimeter section of geom_01.txt and builds a useful description of the Calorimeter. Similarly for the other major subsystems: the stopping target, the production target, the muon beam stop and so on. User code that wants to access, for example, the Target object needs to contact the GeometryService and ask it for the Target object; this is illustrated by the code in this example.

The current implementation of each geometry object is our best guess at what people might find useful. If you find some other view of the information to be more useful, we should add that view to the objects in the GeometryService. Do not build your own independent geometry system. Speak with the Mu2e software team and we will plan the work of integrating both views of the geometry into a coherent system. This will keep all geometry related code in a small number of well defined places, which will aid in development and maintenance.

Once we have real experimental data, some of the geometry information will be time dependent. For this reason, the GeometryService does not create any of its geometry objects until the first beginRun call. The present implementation just ignores subsequent beginRun calls and presumes that the geometry will be unchanged for the entire job. In a few years, when we start to model a time dependent geometry, this will all change. Our goal is that user code that uses geometry objects will not need to change - only the code inside the service that is responsible for keeping everything up to date.

In the module source, although the analyze method is empty, it must be present for this code to compile; the reason is that, in the EDAnalyzer base class, the analyze method is declared pure virtual. In the beginRun method, the first block of code is

    GeomHandle<Target> target;
    GeomHandle<DetectorSystem> detectorSystem;
You have already seen two handle classes: an art::Handle, which is used to access data products in the event, and an art::ServiceHandle, which is used to access art services. A GeomHandle behaves similarly. On the first line of this example, the constructor of the GeomHandle contacts the Mu2e GeometryService and asks the service if it has an object whose type is Target. If the GeometryService has a Target object, it will return a pointer to it and the local variable target can be used exactly as if it were an object of type Target*. If the GeometryService does not have a Target object, it will throw an exception; art will issue an error message and will shut down gracefully.

The Target class describes the stopping target; it should have been called StoppingTarget and probably will be some day. The header files relevant for the stopping target subsystem are in

Look at these files and see the structure of the information. The Target class holds a vector of TargetFoil objects and it also describes a cylinder that bounds the target foils; this cylinder does not correspond to a physical object but it is used to build the Geant4 model of the detector.

The section of this document that discussed coordinate systems mentioned that the Target and TargetFoil report their positions in a strange, legacy coordinate system. The code in this example shows how to transform these positions to useful coordinate systems, the Mu2e coordinate system and the Detector coordinate system. In the near future, this bug in the Target system will be fixed and the code fragment that makes this correction will be removed.

The Mu2eBuilding class describes many properties of the Mu2e building. For historical reasons that is the place that we stashed the numbers necessary to make the fix described in the previous paragraph.

Exercises for Ex06

Extend the module from this exercise to do the following:

  1. Write a loop over the TargetFoils.
  2. For each foil, write out the following information:
    1. The center of the foil, in both detector coordinates and mu2e coordinates.
    2. The outer radius of the disk.
    3. The thickness of the disk
    4. The name of the material that the disk is made from.

Exercise 7

This example introduces two main ideas:

  1. The mu2e data product class GenParticle.
  2. Some new ROOT classes, TH2F and TNtuple.
To build and run the example:
scons lib/
mu2e -c Ex07/ex07.fcl >& out/ex07.log
The example looks at all 2000 events in the large file but it completes in a few seconds because it does so little work.

In the previous exercises the histograms were booked in the beginJob method, but in this exercise the histograms are booked in the beginRun method. The code that books the histograms contacts the geometry service to discover a convenient binning for the histograms that show z positions. As was discussed in Ex06, the GeometryService does not initialize the geometry until the first beginRun; therefore these histograms cannot be booked at beginJob time and the first available opportunity is at beginRun.

The analyze method of this exercise begins with the usual three line pattern to extract a data product from the event:

    art::Handle<GenParticleCollection> gensHandle;
    event.getByLabel(_moduleLabel, gensHandle);  // member name assumed; the label comes from the .fcl file
    GenParticleCollection const& gens(*gensHandle);
In this case the data product is of type GenParticleCollection, the module label is read from the .fcl file, and the instance name is the empty string. When the instance name is the empty string, it does not need to be specified explicitly.

How did we know the correct module label and correct instance name? That will be discussed in Ex08.

The header files for GenParticleCollection and GenParticle are found in:


Exercises for Ex07

Running the Simulation Chain

In this exercise you will recreate the first two input files used by this Workbook.

time mu2e -c InputFiles/conversionOnly_01.fcl >& out/conversionOnly_01.log
This will take about 4 minutes of CPU time to complete; while it is running you can read the material below and look at the .fcl file. This job will make only 20 events but almost all of that time will be spent doing the initialization of Geant4; the job actually takes about 50 ms per event during the event processing phase. For debugging there are ways to tell Geant4 to use a less complete physics list that runs much, much faster, but this exercise is to produce some events with the best configuration we have.

In InputFiles/conversionOnly_01.fcl the source block is different from what you have seen before.

source : {
  module_type : EmptyEvent
  maxEvents   : 20
  firstRun    :  1
}
When you choose the source module_type to be EmptyEvent, art starts processing each event by creating an empty event; an empty event contains only an event id ( run number, subrun number and event number ). That event is sent through the module chain just as if it had been read from an input file.

There are also three new services defined, shown in abridged form in the fragment below.

  RandomNumberGenerator : { }

  user : {
    G4Helper               : { }
    SeedService            : @local::automaticSeeds
  }
A detailed discussion of these services will be deferred until later; for now all that you need to know is that they must be configured exactly as shown for any simulation job that you run.

The meat of this exercise is the producers block:

  producers: {
    generate             : @local::generate
    g4run                : @local::g4run
    makeSH               : @local::makeSH
    CaloReadoutHitsMaker : @local::CaloReadoutHitsMaker
    CaloCrystalHitsMaker : @local::CaloCrystalHitsMaker
    randomsaver          : @local::randomsaver
  }
This defines module labels for all of the elements of the simulation chain and gives each element its standard parameter set; the standard parameter sets are found in the file:
The module labeled generate runs an event generator; later in the .fcl file, the default behaviour of the generator is overridden so that it will generate one conversion electron per event and nothing else. The module labeled g4run takes the output of the generator and runs those particles through Geant4. The module labeled makeSH takes the output of Geant4 and creates StrawHits, which are objects that represent data-like hits on straws. In the Mu2e calorimeter system each crystal is viewed by two avalanche photodiodes (APDs). The module labeled CaloReadoutHitsMaker takes the output of Geant4 and creates CaloHit objects; these represent data-like hits on the APDs. The module labeled CaloCrystalHitsMaker is the first step in the reconstruction chain; it reads the CaloHit objects and forms them into objects that represent hit crystals. One of the new services introduced above is the RandomNumberGenerator service; among other things, this service saves, at the start of each event, the state of every random number engine. The module labeled randomsaver contacts the RandomNumberGenerator service, gets a copy of the engine state information that was saved at the start of the event, and writes it to the event.

The analyzer section is not particularly interesting; it defines configurations for some modules that make diagnostic histograms that you can inspect.

The next block defines the two paths:

  p1 : [generate, g4run, makeSH, CaloReadoutHitsMaker, CaloCrystalHitsMaker, randomsaver ]
  e1 : [checkhits, readStrawHits, outfile]

  trigger_paths  : [p1]
  end_paths      : [e1]
We finally have enough complexity in an exercise to talk a little about paths. The important thing is that the module labels listed in p1 will be executed in the listed order; moreover, that list may contain only the names of producer modules and filter modules. On the other hand, art is free to execute the modules listed in e1 in any order; only analyzer and output modules are permitted in e1. Some of the reasoning behind these rules is discussed on the Mu2e web page about art paths.

The next block defines the output file; there is nothing new here.

The next block is used to configure the random number engines:

services.user.SeedService.baseSeed         :   0
services.user.SeedService.maxUniqueEngines :  20
The random number engines used by Mu2e can be seeded by giving them an integer in the interval [0,900000000]. Each seed produces a very, very long sequence of random variates. All random number engines used by Mu2e must get their seeds from an art service written by Mu2e, the SeedService. Within an art job, the first random engine to ask for a seed will be given the value baseSeed; the next will be given baseSeed+1, and so on. If there are more than maxUniqueEngines requests for seeds within one art job, the SeedService will throw an exception.

The next line changes the default behaviour of the event generator module:

  physics.producers.generate.inputfile  : "EventGenerator/defaultConfigs/conversionGun.txt"

The remaining lines in the file lower the verbosity of the informational messages from some of the modules that have verbose defaults.

The Event Generator Module

If you look at the file:

you will see the default configuration for the event generator module. Recall that the conversionOnly_01.fcl file replaced the value of the inputfile parameter. After this substitution, the parameter set for the module labeled generate looks like:
generate : {
  module_type          : EventGenerator
  inputfile            : "EventGenerator/defaultConfigs/conversionGun.txt"
  allowReplacement     : true
  messageOnReplacement : true
}
Ultimately the inputfile will be resolved to:
This file provides additional configuration information for the event generator. This configuration mechanism was written before FHICL was invented and we have not yet migrated this component of the configuration system to FHICL. The details of this configuration language will be discussed elsewhere. What you need to know for now is that this file tells the event generator to create one conversion electron per event, with the details described in the section Input Files for the Exercises, above.

Running a Second Job With Independent Random Numbers

The .fcl file InputFiles/conversionOnly_02.fcl runs essentially the same job as InputFiles/conversionOnly_01.fcl but with an independent sequence of random numbers, a different run number for the generated events and different output files:

To see this, diff the two .fcl files:

diff InputFiles/conversionOnly_01.fcl InputFiles/conversionOnly_02.fcl
< # $Id: conversionOnly_01.fcl,v 1.3 2012/08/14 14:14:19 kutschke Exp $
> # $Id: conversionOnly_02.fcl,v 1.3 2012/08/14 14:14:19 kutschke Exp $
<   firstRun    :  1
>   firstRun    :  2
<   TFileService          : { fileName : "out/conversionOnly_01.root" }
>   TFileService          : { fileName : "out/conversionOnly_02.root" }
<     fileName    : "out/conversionOnly_01_data.root"
>     fileName    : "out/conversionOnly_02_data.root"
< services.user.SeedService.baseSeed         :   0
> services.user.SeedService.baseSeed         :  20
The key to getting independent sequences of random numbers is to change the baseSeed parameter in the SeedService. We recommend that from job to job you increment it by maxUniqueEngines. More details of the Mu2e random number system are available: there are basic instructions and complete instructions.

The third file in the directory, InputFiles/conversionOnly_03.fcl, runs the third job in the series, again with independent random numbers. This job runs 2000 events and takes about 8 minutes to complete.

Exercises for the InputFiles section

Run the three jobs, InputFiles/*.fcl. These jobs will write their event-data output files to the out subdirectory. Pick a few of the previous exercises and change their .fcl files to use the event-data files you just created as input. Verify that they produce the same results as they did earlier.

Exercise 8

This example introduces two main ideas:

  1. The mu2e data product class SimParticle.
  2. The idea of art::Ptr
To build and run the example:
scons lib/
mu2e -c Ex08/ex08.fcl >& out/ex08.log

The SimParticle class represents all of the information available about one particle that was processed by Geant4. There is one SimParticle for every GenParticle. In addition, there is one SimParticle for every secondary particle that was tracked by Geant4. For those of you who know Geant4 internals, there is one SimParticle for every G4Track that was both stacked and tracked; each SimParticle corresponds to one G4Track. Each G4Track has a track ID, which is just a number; the corresponding SimParticle also has an ID, with the same value as that of the corresponding G4Track. There are two small subtleties with Geant4 track IDs and these are reflected in the IDs of SimParticles.

  1. G4Track ID numbers start at 1, not at 0.
  2. When Geant4 creates a track, it first assigns the track an ID and then calls Mu2e code to ask "should I track this particle?". Mu2e code sometimes says no; in this case the ID will not appear among the list of SimParticles.
The short way of saying the previous two bullets is that the SimParticle ID numbers are not a dense set of integers starting at 0. So we cannot use a std::vector as a container for a collection of SimParticles.

Instead we use a custom written container class template, cet::map_vector. To the end user, this feels a lot like std::map with some additional convenience features added. There are a few technical reasons why just using std::map would have been awkward. More complete information is available in the Mu2e web page about the class template cet::map_vector. The namespace cet is used for classes and functions that are found in the CETLIB library. You can find the header files for this library at $CETLIB_INC. CETLIB is a utility library that is not part of art but is maintained by the art team; the library holds code that does not have a better home. The acronym CET stands for Computing Enabling Technologies, which is the name of the group within the Fermilab Computing Division in which the art team is orgcharted.

The code in this example shows how to write a loop over a SimParticleCollection. It also shows how to access some of the information inside a SimParticle; in particular it shows how to navigate the parent/child information. The parent/child information is encoded using the class template art::Ptr, to make objects of type art::Ptr<SimParticle>. An art::Ptr<SimParticle> is a minimal persistable pointer class; that is, it behaves very much like a bare pointer, with the exception that you can write it out to an event-data file, read it back in a subsequent job and still have it do the right thing. The code also shows several ways of using an art::Ptr<SimParticle>. The main features are:

  1. It has operator->() const and operator*() const both of which behave just as if it were a SimParticle const*.
  2. To get the underlying bare pointer, use the get() const method.

Exercises for Ex08

To perform these exercises you will need to navigate the information in the header files of many classes. Most (all?) of them will be found in:


  1. Start with the module from this exercise and add to it a histogram of the number of SimParticles in each event.
  2. For each SimParticle, compute its generation number relative to the primary particle. By definition, the primary particle has a generation number of 0, its children have a generation number of 1, and so on. Make a histogram of the depth of each SimParticle.
  3. Plot the kinetic energy of all electrons created in any of the Straw gas volumes.
  4. For all of the electrons created in any of the straw gas volumes, count the number of times they were created by pair creation, by the Compton effect, by ionization, and so on. Print out a table at the end of the job. Hint: you will want to make a data member of type std::map<ProcessCode,int>.

Exercise 9

This example introduces four main ideas:

  1. StepPointMC objects
  2. The TTracker geometry
  3. The Calorimeter geometry
  4. How to tie StepPointMC objects to geometry and to SimParticles.
To build and run the example:
scons lib/
mu2e -c Ex09/ex09.fcl >& out/ex09.log
When we run Geant4, it computes trajectories of particles through the geometry of the Mu2e detector. The fundamental piece of each trajectory is a Geant4 object called a G4Step. Most G4Steps are not very interesting - for example there may be many steps as a particle moves through the vacuum. When we build the Geant4 geometry we tell Geant4 that certain volumes are interesting to us; in Geant4 speak, we tell Geant4 to treat them as sensitive detectors. All of the following are sensitive detectors: the straw gas, the calorimeter crystals, the calorimeter readout devices, the scintillator bars in the cosmic ray veto system, the pixel detectors of the Fermilab design for the extinction monitor and the time of flight counters of the UCI design for the extinction monitor. All of these are devices that will produce some sort of readout in the real Mu2e experiment; Mu2e has now, or will soon have, code that processes this information to form data-like hits.

In addition to the sensitive detectors listed above, Mu2e declares a number of other volumes to be sensitive detectors; these volumes do not produce data-like hits. Instead their role is to help monitor what happened in the event, as an aid to detector design, and to provide tools for the development, debugging and characterization of reconstruction algorithms. These volumes include the stopping targets, the proton absorber, the support disks of the TTracker and a series of virtual detectors. Virtual detectors are thin disks, made of vacuum, inserted at various places along the muon beamline; their job is just to record particles that pass through them.

Mu2e records every G4Step that takes place in every sensitive detector. We make a summary of the information found in a G4Step and we put it into a Mu2e defined class called StepPointMC. You will find the source for this in:

The Mu2e code that interfaces to Geant4 places several collections of StepPointMCs into each event. All of the StepPointMCs that occur in any straw gas volume are put into one collection. All of those that occur in any crystal are put into another. All of those that occur in any crystal readout device are put into a third. And so on. These collections are put into the event as data products of type StepPointMCCollection; they are distinguished from each other using the data product instance name. You may wish to reread the section above on the naming conventions for data products to remind yourself what an instance name is. As of cvs tag v2_0_1, the StepPointMCCollection instances that are written to the event are:
Instance Name      Sensitive Volume
tracker            straw gas volumes
virtualdetector    virtual detectors
timeVD             (see below - a different meaning)
stoppingtarget     foils in the stopping target
CRV                bars of the cosmic ray veto scintillators
calorimeter        calorimeter crystals
calorimeterRO      readout device volumes in the calorimeter
ExtMonFNAL         pixel detector for the FNAL extinction monitor design
ExtMonUCITof       TOF devices for the UCI extinction monitor design
ttrackerDS         support hoops for the TTracker
protonabsorber     proton absorber
PSVacuum           anywhere in the PS vacuum

All of the above objects are put into every event but some are empty by default. They are only filled if a run-time switch is set to fill the data product.

When you use StepPointMCs be aware that one track may take several steps to go through one sensitive volume, even if that volume is very thin. Moreover, a track may enter a sensitive volume, exit it and re-enter it. The Mu2e code records all of these steps; as a consumer of this information it is your job to avoid double counting.

The information for the timeVD instance has a different sort of meaning. One may, at run-time, define a list of times to which the timeVD code will react; by default the list is empty. Mu2e code will inspect each G4Step and identify, for each time in the list, the G4Step that is the earliest step with a time equal to or greater than the listed time. This set of identified G4Steps is recorded as StepPointMCs in the timeVD instance.

There is an enum-matched-to-string class that defines the above strings; the syntax is a little awkward but it will produce a compile-time error if you make a spelling mistake. This class is found in

One way to use it is illustrated in

The contents of a StepPointMC are described below. Pay close attention to the volume Id and the position; their meaning differs from one StepPointMCCollection to another:

  1. an art::Ptr<SimParticle> that points to the SimParticle that made the step; the data member _trackId is obsolete and will be removed.
  2. a volume id. This changes meaning for each StepPointMC instance - and some of the meanings are weird.
  3. the total energy deposited along the step
  4. the energy deposit due to ionization
  5. The 3-position at the START of the G4Step. For the tracker StepPointMCCollection this position is given in Detector coordinates. For all other StepPointMCCollections it is given in Mu2e coordinates.
  6. The 3-momentum at the START of the G4Step.
  7. The Geant4 global time at the start of the G4Step.
  8. The proper time, in the rest frame of the particle, at the start of the G4Step.
  9. The step length.
  10. The Geant4 physics process that caused the step to end.

For all of the StepPointMCCollections, the volumeId is an identifier of the volume in which the step took place; G4Steps are always contained within a single volume. Mu2e does not have a global volume identifier system, so the volumeId of a StepPointMC is an identifier of a volume within one subsystem; therefore it changes meaning when we change from tracker to calorimeter to CRV and so on. For example, if there are N straws in the Mu2e detector, then the volumeId is a straw number in the range [0...N-1]. The TTracker geometry object has an accessor that will return all of the information about a straw given such a number. The catch is that the accessor does not take a plain old integer as an argument; the integer must be turned into a data type called a StrawIndex. This is an attempt to make it harder for code to accidentally cross-stitch indices; the jury is still out on whether or not this has helped. To support this model, the StepPointMC class has a second accessor for the volumeId. The virtual detectors and the CRV system have a similar special index type. For this reason, the volumeId has 4 accessor methods:

   typedef unsigned long VolumeId_type;
   VolumeId_type           volumeId()          const { return                         _volumeId;  }
   StrawIndex              strawIndex()        const { return StrawIndex(             _volumeId); }
   VirtualDetectorId       virtualDetectorId() const { return VirtualDetectorId(      _volumeId); }
   CRSScintillatorBarIndex barIndex()          const { return CRSScintillatorBarIndex(_volumeId); }

For the calorimeter system the volumeId has yet another meaning. Suppose there are nc crystals in the calorimeter and nro readouts per crystal; then there are nc*nro total readouts. The volumeId found in the StepPointMCCollection for a calorimeter crystal is a number in the range [0...nc-1]. A volumeId from the StepPointMCCollection for the calorimeter readout devices is a number in the range [0...nc*nro-1]. The catch is that, for the calorimeter geometry code, all of the accessors are written to take a calorimeter readout Id as their input argument! There are none that take a crystal number. The solution is that the end user has to turn the crystal number into a readout number by multiplying it by the number of readouts per crystal. In the future we will clean this up. The argument to these accessors is an ordinary int, not a special type as for the three cases in the previous paragraph.

In the remaining StepPointMC instances, the volumeId is just a simple int and it is a key for looking up properties in the corresponding subsystem.

One unexpected result of this is that the local origin of a crystal is not at the center of the crystal! We have asked the calorimeter group to comment on this.

Exercises for Ex09

  1. Make a histogram of the ionizing energy deposition in the straws.
  2. Look up the straw radius in the TTracker geometry. Turn the current radius variable into a normalized radius. Make a histogram of the normalized radius - there should be no entries larger than 1.
  3. For each event, compute the total energy deposited in the calorimeter crystals by summing it over all StepPointMCs in the calorimeter. Make a histogram.

Exercise 10

This example shows how to find out:

  1. What data products are in a file
  2. What run-time configuration was used for the modules that made these products.

To find out what data products are in a data file, you can run the dumpDataProducts.fcl script.

mu2e -c Analyses/test/dumpDataProducts.fcl -s /grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/conversionOnly_01_data.root
This command will find the dumpDataProducts.fcl script under $MU2E_BASE_RELEASE.

This will read one event and write out information for every data product that it finds in the event, the run and the subrun. For the conversionOnly_01_data.root file used in these exercises, the output looks like:

Found 28 data products in this Event
Data products:
        Friendly Class Name          Module Label        Instance Name      Process Name     Product ID
         mu2e::StepPointMCs                 g4run       protonabsorber    ConversionOnly           1:20
         mu2e::StepPointMCs                 g4run       stoppingtarget    ConversionOnly           1:21
         mu2e::StepPointMCs                 g4run              tracker    ConversionOnly           1:23
             mu2e::CaloHits  CaloReadoutHitsMaker                         ConversionOnly           1:6
  mu2e::CaloCrystalOnlyHits  CaloReadoutHitsMaker                         ConversionOnly           1:4
        art::TriggerResults        TriggerResults                       DumpDataProducts           2:1
         mu2e::StepPointMCs                 g4run           ttrackerDS    ConversionOnly           1:24
        art::TriggerResults        TriggerResults                         ConversionOnly           1:2
         mu2e::GenParticles              generate                         ConversionOnly           1:7
            mu2e::StrawHits                makeSH                         ConversionOnly           1:27
         mu2e::StepPointMCs                 g4run             PSVacuum    ConversionOnly           1:17
     mu2e::StrawHitMCTruths                makeSH                         ConversionOnly           1:26
      mu2e::CaloHitMCTruths  CaloReadoutHitsMaker                         ConversionOnly           1:5
         mu2e::StepPointMCs                 g4run        calorimeterRO    ConversionOnly           1:19
         mu2e::StepPointMCs                 g4run                  CRV    ConversionOnly           1:14
mu2e::StepPointMCart::Ptrss  CaloReadoutHitsMaker  CaloHitMCReadoutPtr    ConversionOnly           1:12
mu2e::StepPointMCart::Ptrss  CaloReadoutHitsMaker  CaloHitMCCrystalPtr    ConversionOnly           1:11
             mu2e::StatusG4                 g4run                         ConversionOnly           1:10
         mu2e::StepPointMCs                 g4run          calorimeter    ConversionOnly           1:18
         mu2e::StepPointMCs                 g4run               timeVD    ConversionOnly           1:22
    mu2e::PointTrajectorymv                 g4run                         ConversionOnly           1:8
         mu2e::StepPointMCs                 g4run      virtualdetector    ConversionOnly           1:25
        mu2e::SimParticlemv                 g4run                         ConversionOnly           1:9
          art::RNGsnapshots           randomsaver                         ConversionOnly           1:1
         mu2e::StepPointMCs                 g4run         ExtMonUCITof    ConversionOnly           1:16
      mu2e::CaloCrystalHits  CaloCrystalHitsMaker                         ConversionOnly           1:3
mu2e::StepPointMCart::Ptrss                makeSH        StrawHitMCPtr    ConversionOnly           1:13
         mu2e::StepPointMCs                 g4run           ExtMonFNAL    ConversionOnly           1:15

Found 0 data products in this SubRun
Found 1 data products in this Run
Data products:
      Friendly Class Name  Module Label  Instance Name    Process Name     Product ID
mu2e::PhysicalVolumeInfos         g4run                 ConversionOnly           0:0

Art has completed and will exit with status 0.
For some background on how to read this output, read the Mu2e web page on the four-part data product id. This includes a description of how to parse the "Friendly Class Name" field.

There is one new idea introduced here. The rightmost column contains a pair of small integers, the Product ID. Art knows each data product by two names:

  1. The string format name (which has four underscore-separated fields).
  2. The product ID.
When a human readable representation is needed, the first form is used. This form is also used as the branch name of the data product in the Event TTree. When a compact representation is required, the second form is used. For example an art::Ptr<SimParticle> internally holds a product ID and a key that identifies the pointee within that product.

The other tool for inspecting the contents of a data file is the config_dumper program. This program is found in:

This directory is already in your path; it is the same directory in which the mu2e executable lives. To run this program:
config_dumper -s /grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/conversionOnly_01_data.root
The output for the given file is:
diagLevel: 0
g4ModuleLabel: "g4run"
maxFullPrint: 0
module_type: "MakeCaloReadoutHits"

fileName: "out/conversionOnly_01_data.root"
module_type: "RootOutput"

diagLevel: 0
g4ModuleLabel: "g4run"
maxFullPrint: 0
module_type: "MakeStrawHit"

module_type: "RandomNumberSaver"

diagLevel: 0
makerModuleLabel: "makeSH"
maxFullPrint: 0
module_type: "ReadStrawHit"

generatorModuleLabel: "generate"
module_type: "G4"

firstRun: 1
maxEvents: 20
module_type: "EmptyEvent"

allowReplacement: "true"
inputfile: "EventGenerator/defaultConfigs/conversionGun.txt"
messageOnReplacement: "true"
module_type: "EventGenerator"

diagLevel: 0
g4ModuleLabel: "g4run"
maxFullPrint: 0
maximumEnergy: 1000
minimumEnergy: 0
minimumTimeGap: 100
module_type: "MakeCaloCrystalHits"

caloReadoutModuleLabel: "CaloReadoutHitsMaker"
diagLevel: 0
g4ModuleLabel: "g4run"
generatorModuleLabel: "generate"
maxFullPrint: 0
minimumEnergy: 0
module_type: "ReadBack"
Compare this against InputFiles/conversionOnly_01.fcl.

Exercises for Ex10

Run both of the commands yourself and understand the two outputs:

mu2e -c Analyses/test/dumpDataProducts.fcl -s /grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/conversionOnly_01_data.root
config_dumper -s /grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/conversionOnly_01_data.root

Exercise 11

This example introduces StrawHits. To build and run this example,

scons lib/
mu2e -c Ex11/ex11.fcl >& out/ex11.log
Then look at the histograms in out/ex11.root and understand their shapes.

In Ex09 the idea of StepPointMCs was introduced. It is important to understand that a StepPointMC is not a "hit". Here a "hit" refers to something that looks like it could have come from the actual experiment; a StepPointMC, on the other hand, contains a lot of information that will not be available in the experiment.

For the tracker, the lowest level data object that will come from the experiment will look something like:

  1. An electronics channel Id (something like: crate/board/chip/channel).
  2. Two TDC values
  3. A digitized waveform (a series of ADC values).
The first step in processing this hit will be to turn it into something that looks like:
  1. A straw number that is understood by the geometry system.
  2. A time and a time difference; the time is relative to the time at which the protons hit the production target.
  3. The total deposited energy integrated over the waveform.
  4. A calibrated version of the digitized waveform, in which the vertical axis is something like deposited energy or a pulse height instead of an integer ADC value.

The Mu2e software does not yet represent the first of these quantities. The second is what we call a StrawHit (but we do not yet have the calibrated version of the digitized waveform; all of the other elements are present). The header file for the StrawHit is found in:

This class resides in RecoDataProducts because it is something that could be used during the analysis of actual experimental data. Classes that contain any MC information must not be in RecoDataProducts; they must be in MCDataProducts. The Mu2e code that reads in StepPointMCs and forms them into StrawHits was run, with the module label makeSH, when the input file for this example was created.

The code in Ex11/ gets the StrawHits from the event and histograms the number of StrawHits per event, the measured time, the measured time difference and the measured energy deposition. Several of these are histogrammed on different horizontal scales; the reason for this will be apparent later in this section.

The code also illustrates how to turn the measured time difference into a position along the wire. The radius of this point, relative to the axis of the tracker, is one of the important quantities used early in the pattern recognition to build a sample enriched in signal hits.

Finally, the code shows how to access an important part of the Monte Carlo truth chain: it shows how to find all of the StepPointMCs that were combined to form each hit. In the present hit making code, each StepPointMC is assigned uniquely to one StrawHit. The example makes a histogram of the number of StepPointMCs that contribute to each StrawHit.

Exercises for Ex11 - Part 1

  1. Compute the number of times that each straw was hit during this event. Make a histogram of the number of times a straw was hit once in an event, twice in an event, and so on; suppress the zero bin.

Introducing EventMixing

In one typical 1694 ns cycle of the muon beamline, about 20,000 decay-in-orbit (DIO) electrons will be produced. In addition, muon nuclear capture will break up nuclei, producing about 3,100 protons, 37,400 neutrons and 62,300 photons. These numbers are current as of August 2012 and are documented in Mu2e-doc-2351-v1 (the link takes you to the most recent version, which may be newer than v1).

It is not practical to simulate, all at once, a complete event with all of these background particles. We considered putting generator-level cuts on the background species in order to reduce the multiplicity and make simulations of a complete event practical. However, all corners of the generator phase space have some probability of making background hits, so we abandoned this strategy.

To solve the problem, we did the following. We generated 400,000,000 events, each of which contains exactly one DIO electron; these were passed through Geant4 and we wrote out every event that had at least one StepPointMC in the tracker. Only 197,505 events were written to the output file. Following a calculation done in Mu2e-doc-2351, we can simulate the DIO content of a typical event by choosing, on average, 9.8 events from the output file and overlaying them on top of each other. To be more specific, we draw a random variate from a Poisson distribution with a mean of 9.8 and we overlay that many events on top of each other.

We repeated this for the proton, neutron and photon backgrounds from nuclear breakup. The names of the output files from this step, and the correct Poisson means, are documented in Mu2e-doc-2351. This document also describes where to find the scripts that made the mix-in files. The files are found in:

Their names and the Poisson means for mixing are:

Filename                     Poisson Mean
dioBG_Tracker_data.root               9.8
neutronBG_Tracker_data.root          17.0
protonBG_Tracker_data.root           14.0
photonBG_Tracker_data.root           47.8

Creating an Event Mixed File

The script Workbook/InputFiles/mixing_01.fcl will make an event-mixed file, containing exactly 1 signal event plus the background cocktail described above. To check this out:

  1. cd to your Workbook directory:
  2. cd ..
  3. cvs co Workbook/InputFiles/mixing_01.fcl
  4. cd Workbook
It takes about 15 minutes to run this file. A copy of the output is available in

Exercises for Ex11 - Part 2

  1. Run the program on the file of mixed events:
      mu2e -c Ex11/ex11.fcl -T out/ex11Mixed.root -s /grid/fermiapp/mu2e/DataFiles/ExampleDataFiles/Workbook/mixedEvents_01_data.root
    Use the version of the program that includes the histogram that you added.
  2. Run the program on the four files used for mixing input. These files have StepPointMCs but not StrawHits so you must run the makeSH producer module before your analyzer module:
      mu2e -c Ex11/ex11makeSH.fcl
    This will run on the proton mix-in file. You can run on the other three input files without editing the .fcl file by using the -s and -T options as illustrated above ( or you can edit the .fcl file).
  3. Compare the histograms produced by the 6 jobs. Note where the histograms for the background distributions differ from those of the signal distributions. Understand the physical processes that make them different.

This file last modified Wednesday, 12-Aug-2015 21:23:19 CDT