Difference: FullChainAnalysis (1 vs. 25)

Revision 25 - 25 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running Athena Full Chain in Release 14

TABLE OF CONTENTS

Revision 24 - 25 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running Athena Full Chain in Release 14

TABLE OF CONTENTS
Line: 1010 to 1010
 
META FILEATTACHMENT attachment="Batch.Generation.sh" attr="" comment="script that submits generation to the LXBATCH" date="1218105491" name="Batch.Generation.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Generation.sh" size="816" stream="D:\Documents\Projects\CERN\TWiki\Batch.Generation.sh" tmpFilename="" user="MartinZeman" version="2"
META FILEATTACHMENT attachment="Batch.Simulation.sh" attr="" comment="script that submits simulation to the LXBATCH" date="1218105502" name="Batch.Simulation.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Simulation.sh" size="846" stream="D:\Documents\Projects\CERN\TWiki\Batch.Simulation.sh" tmpFilename="" user="MartinZeman" version="2"
Changed:
<
<
META FILEATTACHMENT attachment="Generation.JobTransformation.sh" attr="" comment="generation job transformation that is to be run on LXBATCH" date="1218105395" name="Generation.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Generation.JobTransformation.sh" size="3906" stream="D:\Documents\Projects\CERN\TWiki\Generation.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
>
>
META FILEATTACHMENT attachment="Generation.JobTransformation.sh" attr="" comment="generation job transformation that is to be run on LXBATCH" date="1219631328" name="Generation.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Generation.JobTransformation.sh" size="6207" stream="D:\Documents\Projects\CERN\TWiki\Generation.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="2"
 
META FILEATTACHMENT attachment="Simulation.JobTransformation.sh" attr="" comment="simulation job transformation that is to be run on LXBATCH" date="1218105458" name="Simulation.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Simulation.JobTransformation.sh" size="4021" stream="D:\Documents\Projects\CERN\TWiki\Simulation.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Batch.Reconstruction.sh" attr="" comment="script that submits generation to the LXBATCH" date="1218105598" name="Batch.Reconstruction.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Reconstruction.sh" size="1029" stream="D:\Documents\Projects\CERN\TWiki\Batch.Reconstruction.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Reconstruction.JobTransformation.sh" attr="" comment="reconstruction job transformation that is to be run on LXBATCH" date="1218105614" name="Reconstruction.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Reconstruction.JobTransformation.sh" size="4311" stream="D:\Documents\Projects\CERN\TWiki\Reconstruction.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Reconstruction.Custom.sh" attr="" comment="customisable reconstruction that is to be run on LXBATCH" date="1218126269" name="Reconstruction.Custom.sh" path="D:\Documents\Projects\CERN\TWiki\Reconstruction.Custom.sh" size="8949" stream="D:\Documents\Projects\CERN\TWiki\Reconstruction.Custom.sh" tmpFilename="" user="MartinZeman" version="1"
Added:
>
>
META FILEATTACHMENT attachment="Submit.Generation.sh" attr="" comment="convenient LXBATCH generation submitter" date="1219631371" name="Submit.Generation.sh" path="D:\Documents\Projects\CERN\TWiki\Submit.Generation.sh" size="7923" stream="D:\Documents\Projects\CERN\TWiki\Submit.Generation.sh" tmpFilename="" user="MartinZeman" version="1"

Revision 23 - 14 Aug 2008 - FZU.PavelJez

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running Athena Full Chain in Release 14

TABLE OF CONTENTS
Line: 95 to 95
  6. Logout.
Changed:
<
<
7. Login. Enter your CMT_HOME directory and source the Athena setup with release specifications (14.2.10 32-bit).
>
>
7. Login. Enter your CMT_HOME directory and source the Athena setup with release specifications (14.2.10).
 
> cd cmt-fullchain
Changed:
<
<
> source setup.sh -tag=14.2.10,32
>
>
> source setup.sh -tag=14.2.10
 

Congratulations, your lxplus environment is now ready

Line: 111 to 111
  Load Athena environment:
Changed:
<
<
> source setup.sh -tag=14.2.0,32
>
>
> source setup.sh -tag=14.2.10
 

0.0.1 Employing Startup Script

Line: 134 to 134
  echo "Your CMT home directory:" $CMT_HOME

# Use function Load/Athena

Changed:
<
<
Load/Athena 14.2.0,32
>
>
Load/Athena 14.2.10
# Load Environment Variables
# LOCAL
export FULL_CHAIN=${HOME}/testarea/FullChain/
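The Load/Athena helper itself is not shown in this excerpt. A minimal sketch of what such a function could look like (this is an assumption about its behaviour, not the original helper; it merely enters ${CMT_HOME} and sources the Athena setup with the given tag):

function Load/Athena {
    # sketch only: $1 is the release tag, e.g. 14.2.10
    cd ${CMT_HOME}
    source setup.sh -tag=$1
    cd - > /dev/null
}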

Revision 22 - 11 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running Athena Full Chain in Release 14

TABLE OF CONTENTS
Line: 242 to 242
 

0.0.1 How to get Particle Data Group Table

The PDG Table is a database of particles, their masses, charges and code names used throughout Athena. You can get it like this:
Changed:
<
<
> get_files MeV?
>
>
> get_files -jo MeV?
 

0.0.1 How to get Job Option files:

Line: 986 to 986
 

Notes:

Changed:
<
<
  • Each XML output file has approx. 300 kB, be careful about your quota
>
>
  • Each XML output file size can range from a few kB to a few MB, so be careful about your quota
 
  • Many of the JiveXML outputs can be empty (no visible event reconstructed)
  • (Of course) you need to run the reconstruction again after making these changes.

Revision 21 - 11 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running Athena Full Chain in Release 14

TABLE OF CONTENTS
Line: 917 to 917
 fi
Changed:
<
<
11. In the end, we just list all files and directories in the workspace for debugging purposes:
   
ls -lRt
>
>
11. In the end, we just list all files and directories in the workspace for debugging purposes:
ls -lRt
  12. Finally, clean workspace and exit:

Revision 20 - 08 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running Athena Full Chain in Release 14

TABLE OF CONTENTS
Line: 209 to 209
 
> bkill <jobID>
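To merely check the status of your submitted jobs, the standard LSF monitoring command can be used in the same way (listed here for completeness):

> bjobs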
Added:
>
>

0.1 Using GANGA

First run:
> ganga -g

Configuration file .gangarc

gangadir = /afs/cern.ch/user/LETTER/NAME/PLACE/gangadir 
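A minimal sketch of the corresponding .gangarc entry (the section name placement and the path are examples; substitute your own directory for LETTER/NAME/PLACE as above):

[Configuration]
gangadir = /afs/cern.ch/user/m/mzeman/scratch0/gangadir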
 

Revision 19 - 07 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running Athena Full Chain in Release 14

TABLE OF CONTENTS
Line: 7 to 7
  This tutorial is an extension of the Regular Computing Tutorial held at CERN: https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial
Changed:
<
<
Its purpose is mainly creating scripts to run Athena Full Chain on LXBATCH and the GRID. Everything is fully compatible with the current 14.2.10 version.
>
>
Its purpose is mainly creating scripts to run Athena Full Chain on LXBATCH and the GRID. If you have some experience with writing scripts for Athena, you can safely ignore this TWiki and download the files directly; I have tried to make them well commented and understandable.

The scripts have been successfully tested on the current 14.2.10 version.

 

About Notation

Added:
>
>

Notation Throughout the TWiki

 
Symbol Meaning
> something type something into the shell
script.sh font used for file names and code
Line: 19 to 22
 
< SOMETHING > substitute for your instance of something
IMPORTANT anything important within the text
Changed:
<
<
IDEA STRUCTURE
>
>

Idea Structure

 
ASSUMPTIONS everything that needs to be done before starting the procedures
PROCEDURES what to do to obtain the result
NOTES what to know to avoid being killed by Athena
Added:
>
>

File Name Structure

To keep the produced files organised, all scripts here are written to produce unique file names that depend on the run parameters. In general, the file names look as follows:
STEP FILE NAME FILE TYPE
Generation <JOB OPTION>.<EVENTS>.<ID>.pool.root generated pool
Simulation <JOB OPTION>.<SKIP>-<SKIP + EVENTS>of<GENERATION TOTAL>.<ID>.sim.pool.root simulated pool (sim)
Digitization <JOB OPTION>.<SKIP>-<SKIP + EVENTS>of<GENERATION TOTAL>.<ID>.rdo.pool.root Raw Data Object (RDO)
Reconstruction <JOB OPTION>.<SKIP>-<SKIP + EVENTS>of<DIGITIZATION TOTAL>.<ID>.esd.pool.root Event Summary Data (ESD)
Reconstruction <JOB OPTION>.<SKIP>-<SKIP + EVENTS>of<DIGITIZATION TOTAL>.<ID>.aod.pool.root Analysis Object Data (AOD)
Reconstruction <JOB OPTION>.<SKIP>-<SKIP + EVENTS>of<DIGITIZATION TOTAL>.<ID>.tag.pool.root Tagged Data (TAG)
Reconstruction <JOB OPTION>.<SKIP>-<SKIP + EVENTS>of<DIGITIZATION TOTAL>.<ID>.ntuple.root Combined ntuple (NTUPLE)
Reconstruction <JOB OPTION>.<SKIP>-<SKIP + EVENTS>of<DIGITIZATION TOTAL>.<ID>.JiveXML.tar JiveXML (XML)
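As an illustration of this convention, the scripts below decompose such names with tr, roughly like this (the file name is a made-up example following the table above):

FILE=PythiaZee.110.Local.pool.root
PARSE=(`echo ${FILE} | tr '.' ' '`)   # split the name on '.' into an array
echo ${PARSE[0]}   # PythiaZee -> job option
echo ${PARSE[1]}   # 110       -> number of events
echo ${PARSE[2]}   # Local     -> run ID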
 

Line: 288 to 301
 

0.0.1 Making generation script for LXBATCH step by step:

Added:
>
>
The following describes how the code of the Generation.JobTransformation.sh script was written.
  1. First we want to specify the environment variables, so the script works generally everywhere and if something needs to be changed, it needs to be done only in one place, which can be easily found at the beginning of the file:
Line: 384 to 398
 

0.0.1 LXBATCH Generation Submitter

The script we have just made needs to be run on the LXBATCH machine, which is done via the bsub command. For this reason, we create a batch submitter:
Added:
>
>
1. First get the parameters into variables:
 
 
Changed:
<
<
#!/bin/bash
### INPUT PARAMETERS
export JOBOPTIONS=$1 # which file to run
>
>
export JOBOPTIONS=$1 # which file to run (string)
export EVENTS=$2 # number of events to process (int)
export ID=$3 # unique run identifier of your choice (string)
Added:
>
>
 
Changed:
<
<
# Parse environment variables amongst points
>
>
2. We want to parse some of the variables, so that we can adhere to the file name notation (the symbol . is used as a separator):
PARSE=(`echo ${JOBOPTIONS} | tr '.' ' '`)
OUTPUT=${PARSE[0]}.${PARSE[2]} # name of the CASTOR output for easy orientation
Changed:
<
<
## Remove all the parameters from $1, $2 and $3, so they don't get picked up by the next script
while [ $# -gt 0 ] ; do shift ; done
>
>
 
Changed:
<
<
bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh -o ${SCRATCH}/${OUTPUT}.${EVENTS}.${ID}.Screen.txt Generation.sh ${JOBOPTIONS} ${EVENTS} ${ID}
>
>
3. To ensure the parameters do not get picked up twice, we need to magically remove them:
while [ $# -gt 0 ] ; do shift ; done
 
Added:
>
>
4. And finally submit:
bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh  -o ${SCRATCH}/Generation.${OUTPUT}.${EVENTS}.${ID}.Screen.txt Generation.JobTransformation.sh ${JOBOPTIONS} ${EVENTS} ${ID}
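Putting it together, a typical submission could then look like this (the job options file, event count and ID are only examples):

> ./Batch.Generation.sh MC8.105144.PythiaZee.py 5000 Batch01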
 

0.1 Generation on the GRID

Line: 462 to 483
 
  • Simulation JobTransformation runs together with digitization, and therefore takes very long. Make sure you try to simulate 50 events at most; if you want to simulate more, you need to modify the Batch.Simulation.sh script to run in longer queues (for instance change the 8 hour queue 8nh to the 2 day queue 2nd)

0.1 Making simulation script for LXBATCH step by step:

Changed:
<
<
The whole code is very similar to Generation.JobTransformation.sh, it only works with more parameters and more outputs and it takes much longer to execute.
>
>
The following describes how the code of the Simulation.JobTransformation.sh script was written. The code is very similar to Generation.JobTransformation.sh; it just works with more parameters and more outputs, and it takes much longer to execute.
  1. First we want to specify the environment variables, so the script works generally everywhere and if something needs to be changed, it needs to be done only in one place, which can be easily found at the beginning of the file. It is essentially the same as in the Generation.JobTransformation.sh script:
Line: 544 to 565
 

0.1 LXBATCH Simulation Submitter

Changed:
<
<
The script we have just made needs to be run on the LXBATCH machine, which is executed by the bsub command. For this reason, we create a batch submitter:
>
>

0.1 LXBATCH Simulation Submitter

Again, the script we have just made needs to be run on the LXBATCH machine, which is done via the bsub command. This script is very similar to the Batch.Generation.sh script; it just works with more parameters. 1. First get the parameters into variables:
 
 
Deleted:
<
<
#!/bin/bash
export INPUT=$1 # input Generation POOL ROOT
export EVENTS=$2 # number of events to process
export SKIP=$3 # number of events to skip
export ID=$4 # unique run identifier of your choice
Added:
>
>
 
Changed:
<
<
# Parse environment variables amongst points
>
>
2. Now we have to do some more parsing, just as in the script before:
PARSE=(`echo ${INPUT} | tr '.' ' '`)
OUTPUT=${PARSE[0]}.${PARSE[1]} # name of the CASTOR output for easy orientation
TOTAL=${PARSE[2]} # total number of events generated in the input file
LAST=$[${EVENTS}+${SKIP}] # arithmetic evaluation
Added:
>
>
 
Changed:
<
<
## Remove all the parameters from $1, $2, $3 and $4, so they don't get picked up by the next script
>
>
3. Again, here comes the magical line:
 while [ $# -gt 0 ] ; do shift ; done
Added:
>
>
 
Added:
>
>
4. And submit:
 bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh -o ${SCRATCH}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.Screen.txt Simulation.sh ${INPUT} ${EVENTS} ${SKIP} ${ID}
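Again, a typical submission could look like this (the input file name, event counts and ID are only examples; the input must already sit in your CASTOR generation folder):

> ./Batch.Simulation.sh PythiaZee.5000.Batch01.pool.root 50 0 Batch01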
Deleted:
<
<
 


Line: 571 to 599
 

1 Digitization

Digitization is run together with simulation using the JobTransformation. That is why it takes so long. Simulation produces the hits.pool.root file and digitization produces the rdo.pool.root file. If for some reason you need to run digitization separately, use the following Job Transformation:
Changed:
<
<
csc_digi_trf.py ...
>
>
> csc_digi_trf.py ...
  You can simply change the csc_simul_trf.py command in the simulation script to use digitization instead.
Line: 615 to 643
 

0.1 Reconstruction on LXPLUS

Changed:
<
<
Since simulation takes so incredibly long, we can now reconstruct only one event to the ESD, AOD and NTUPLE outputs:
>
>
You can use Job Transformation to run reconstruction on the RDO file to obtain ESD, AOD and/or other outputs:
 
> csc_reco_trf.py MC8.PythiaZee.0-1of110.Local.rdo.pool.root esd.pool.root aod.pool.root ntuple.root 1 0 ATLAS-CSC-02-01-00 DEFAULT 
Line: 633 to 661
  Now all you need to do is run the Batch.Reconstruction.sh script, which submits your job to the LXBATCH. The script has four parameters you HAVE TO specify. To run the job, issue the following from the directory where you put BOTH of the scripts:
Changed:
<
<
> ./Batch.Simulation.sh
>
>
> ./Batch.Reconstruction.sh
 
  • DIGITIZATION POOL ROOT is a file we obtained from digitization (in our case this step is identical to the simulation step). It should be in your $CASTOR_HOME/fullchain/digitization folder. All you need to do is to COPY/PASTE its name; the script downloads it and accesses it automatically (string).
  • EVENTS is a number of events you want to reconstruct (int).
Line: 645 to 673
 
  • All directories and files specified in the environment variables exist.
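A hypothetical example of such a submission (file name, counts and ID are placeholders following the naming convention above):

> ./Batch.Reconstruction.sh PythiaZee.0-50of5000.Batch01.rdo.pool.root 10 0 Reco01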

0.0.1 Making customizable LXBATCH reconstruction script

Changed:
<
<
Now at this point there is little sense in repeating how to write a Job Transformation LXBATCH script, because all you need to do is to change a few lines in the code. The script has been provided as a attachement. What we should do know is to write a more customizable reconstruction script that allows us to play with the Job Options and Athena packages.
>
>
Now at this point there is little sense in repeating how to write a Job Transformation LXBATCH script, because all you need to do is to change a few lines in the code. The script has been provided as an attachment (Reconstruction.JobTransformation.sh). What we should do now is write a more customizable reconstruction script that allows us to play with the Job Options and Athena packages (Reconstruction.Custom.sh):
 
Added:
>
>
1. First things first, double-check that the following environment variables suit your setup, as in all the previous scripts:
 
Deleted:
<
<

###################################################################################################

### ENVIRONMENT SETUP
## LOCAL (your AFS environment)
Changed:
<
<
# export HOME=/afs/cern.ch/user/m/mzeman # uncomment and change this in case you are missing these environment variables for some reason
# export CASTOR_HOME=/castor/cern.ch/user/m/mzeman # uncomment and change this in case you are missing these environment variables for some reason
>
>
# export HOME=/afs/cern.ch/user/m/mzeman # uncomment and change if missing
# export CASTOR_HOME=/castor/cern.ch/user/m/mzeman # uncomment and change if missing
 export CMT_HOME=${HOME}/cmt-fullchain
Deleted:
<
<
export CMT_VERSION=v1r20p20080222 # your CMT version
export RELEASE=14.2.10 # your Athena Release (e.g.: 14.2.10)
export PCACHE=14.2.10 # your PCache (e.g.: 14.2.10.1)
 export FULL_CHAIN=${HOME}/testarea/FullChain/

# CASTOR (your CASTOR environment)

Line: 668 to 691
export CASTOR_RECONSTRUCTION=${CASTOR_HOME}/fullchain/reconstruction
export CASTOR_TEMP=${CASTOR_HOME}/fullchain/temp
export CASTOR_LOG=${CASTOR_HOME}/fullchain/log
Added:
>
>
 
Changed:
<
<
###################################################################################################
>
>
2. Now let us again go through the input parameters. Apologies for the overly complicated parsing; it is needed to extract the total number of digitised events from the RDO input file name.
 ### INPUT PARAMETERS
Changed:
<
<
export INPUT=$1 # input POOL ROOT file
>
>
export INPUT=$1 # input RDO file
export EVENTS=$2 # number of events to process
export SKIP=$3 # number of generated events to skip
export ID=$4 # unique run identifier of your choice
Line: 688 to 713
## Remove all the parameters from $1, $2, $3 and $4, otherwise "source setup.sh ..." would pick them up and probably fail
while [ $# -gt 0 ] ; do shift ; done
Added:
>
>
 
Changed:
<
<
###################################################################################################
### ISSUE CODE
## CREATE TEMPORARY WORKSPACE
echo "###################################################################################################"
echo "CREATING WORKSPACE"
>
>
3. As for the code itself, it is quite different from the one used for Job Transformations; the workspace setup, however, is the same:
# Delete directory if exists
if [ -d Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID} ] ; then
Deleted:
<
<
echo "###################################################################################################" echo "DELETING ALREADY EXISTENT DIRECTORY"
rm -fR Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}
fi
Added:
>
>
# Create new directory
mkdir Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}
cd Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}
Line: 706 to 728
# Show the power of the processor
grep MHz /var/log/dmesg
Added:
>
>
 
Changed:
<
<
# Copy entire run directory in my working place
#rfcp ${CASTOR_DIGITIZATION}/${INPUT} .

## SETUP CMT
echo "###################################################################################################"
echo "SETTING UP CMT"

>
>
4. Since normal Athena jobs require a CMT setup, we have to create a ${CMT_HOME} directory:
 export CURRENTDIR=`pwd` # remember the current directory

# Create CMT directory
mkdir cmt
cd cmt

Added:
>
>
 
Changed:
<
<
# Create REQUIREMENTS file
>
>
5. Create the requirements file:
 touch requirements
Added:
>
>
cat <<EOF >| requirements
#---- CMT HOME REQUIREMENTS FILE ---------------------------------
set CMTSITE CERN
Line: 738 to 760
#----------------------------------------------------------------
EOF
Deleted:
<
<
echo ""
 echo "YOUR REQUIREMENTS FILE:" cat requirements
Added:
>
>
 
Added:
>
>
We use cat to make sure the contents of the requirements file appear in the screen output of bsub.

6. Source CMT!

export CMT_ROOT=/afs/cern.ch/sw/contrib/CMT/${CMT_VERSION}
source ${CMT_ROOT}/mgr/setup.sh
Line: 750 to 776
source setup.sh -tag=${RELEASE},32
export CMTPATH=/afs/cern.ch/atlas/software/releases/$RELEASE/AtlasProduction/$PCACHE
source /afs/cern.ch/atlas/software/releases/${RELEASE}/AtlasOffline/${PCACHE}/AtlasOfflineRunTime/cmt/setup.sh
Added:
>
>
 
Changed:
<
<
### RUN THE JOB
>
>
7. Insert the RDO input file into the Pool File Catalog:
# Go back to working directory
echo ${CURRENTDIR}
cd ${CURRENTDIR}
Deleted:
<
<
echo "###################################################################################################" echo "BUILDING POOL FILE CATALOG"
if [ -e PoolFileCatalog.xml ] ; then
Deleted:
<
<
echo "###################################################################################################" echo "DELETING ALREADY EXISTENT POOL FILE CATALOG"
rm -f PoolFileCatalog.xml
fi

pool_insertFileToCatalog rfio:${CASTOR_DIGITIZATION}/${INPUT} # create PoolFileCatalog.xml
FCregisterLFN -p rfio:${CASTOR_DIGITIZATION}/${INPUT} -l ${CASTOR_DIGITIZATION}/`whoami`.${INPUT}

Added:
>
>
 
Changed:
<
<
echo "" echo "###################################################################################################" echo "RUNNING RECONSTRUCTION"

# Get the JobOption file
#get_files -jo RecExCommon/myTopOptions.py

>
>
8. Now we need to write our own Job Options file and save it as myTopOptions.py. The maximum number of events EvtMax is written first, according to the number of events specified when running the script. Please note that the order of the flags and includes in the code DOES MATTER:
 touch myTopOptions.py

# FLAGS NEED TO COME FIRST

Line: 840 to 858
 echo "YOUR JOB OPTIONS:" cat myTopOptions.py echo ""
Added:
>
>
 
Added:
>
>
Again we use cat to show the contents of myTopOptions.py in the screen output.

9. Et voilà! Run the job on myTopOptions.py:

 echo "RUNNING: athena.py myTopOptions.py" athena.py myTopOptions.py
Added:
>
>
 
Changed:
<
<
### COPY OUT RESULTS IF EXIST
echo ""
echo "###################################################################################################"
echo "COPYING SIMULATION OUTPUT"
>
>
10. Copy out the results, if they exist (ESD, AOD, TAG, JiveXML put together in one tar and NTUPLE):
if [ -e ESD.pool.root ] ; then
echo "ESD file found, copying ..."
rfcp ESD.pool.root ${CASTOR_RECONSTRUCTION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.esd.pool.root
Line: 882 to 903
 else echo "No NTUPLE file found." fi
Added:
>
>
 
Changed:
<
<
### LIST WORKSPACE CONTENT (for debugging purposes)
echo ""
echo "###################################################################################################"
echo "LISTING WORKSPACE CONTENT"
>
>
11. In the end, we just list all files and directories in the workspace for debugging purposes:
   
 ls -lRt
Added:
>
>
 
Changed:
<
<
### CLEAN WORKSPACE BEFORE EXIT
echo ""
echo "###################################################################################################"
echo "CLEANING WORKSPACE"
>
>
12. Finally, clean workspace and exit:
cd ..
rm -fR Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}

Revision 18 - 07 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"
Changed:
<
<

Running General Full Chain

>
>

Running Athena Full Chain in Release 14

 TABLE OF CONTENTS

Changed:
<
<
This tutorial is an extension of the Regular Computing Tutorial held at CERN: https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#Running_Athena.
>
>
This tutorial is an extension of the Regular Computing Tutorial held at CERN: https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial
 
Changed:
<
<
Its purpose is mainly creating scripts to run Athena Full Chain on LXBATCH and the GRID.
>
>
Its purpose is mainly creating scripts to run Athena Full Chain on LXBATCH and the GRID. Everything is fully compatible with the current 14.2.10 version.
 
Added:
>
>

About Notation

Symbol Meaning
> something type something into the shell
script.sh font used for file names and code
Terminus Technicus any technical term
< SOMETHING > substitute for your instance of something
IMPORTANT anything important within the text

IDEA STRUCTURE
ASSUMPTIONS everything that needs to be done before starting the procedures
PROCEDURES what to do to obtain the result
NOTES what to know to avoid being killed by Athena



 

1 Configuration

The following is a procedure for setting up your lxplus account without having to read much. The official guide on how to set up your environment can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookSetAccount.
Line: 237 to 255
 

0.1 Generation on LXPLUS

Changed:
<
<
Now in our case, we obtained MC8.105145.PythiaZmumu.py and modified file and since we are running locally we want to generate only about ~110 events of Z->e+,e- decay. Make sure you have changed evgenConfig.minevents to 100, otherwise the generation will crash on Too Few Events Requested.
>
>
Now in our case, we obtained and modified MC8.105144.PythiaZee.py, and since we are running locally we want to generate only about ~110 events of the Z->e+,e- decay. Make sure you have changed evgenConfig.minevents to 100, otherwise the generation will crash on Too Few Events Requested.
 
Changed:
<
<
> csc_evgen08_trf.py 105145 1 110 1324354657 ./MC8.105145.PythiaZmumu.py 105145.pool.root
>
>
> csc_evgen08_trf.py 105144 1 110 1324354657 ./MC8.105144.PythiaZee.py PythiaZee.110.Local.pool.root
 

0.1 Generation on LXBATCH

Line: 269 to 287
 
  • The script creates your environment on LXBATCH machine and runs the generation setup of your choosing
Changed:
<
<

0.1 Making generation script for LXBATCH step by step:

>
>

0.0.1 Making generation script for LXBATCH step by step:

  1. First we want to specify the environment variables, so the script works generally everywhere and if something needs to be changed, it needs to be done only in one place, which can be easily found at the beginning of the file:
Line: 364 to 382
 rm -fR Generation.${OUTPUT}.${EVENTS}.${ID}
Changed:
<
<

0.1 LXBATCH Generation Submitter

>
>

0.0.1 LXBATCH Generation Submitter

 The script we have just made needs to be run on the LXBATCH machine, which is executed by the bsub command. For this reason, we create a batch submitter:
 
#!/bin/bash   
Line: 383 to 401
 bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh -o ${SCRATCH}/${OUTPUT}.${EVENTS}.${ID}.Screen.txt Generation.sh ${JOBOPTIONS} ${EVENTS} ${ID}

Added:
>
>

0.1 Generation on the GRID

 

1 Simulation

Changed:
<
<

0.1 Running Simulation JobTransformation?

>
>

0.1 Running Simulation

Simulation is run just as any job in Athena using the athena.py script on a Job Options file:
> athena.py <JOB OPTIONS> | tee Simulation.Output.txt

Screen output is saved into the Simulation.Output.txt file. Make sure you have enough disk space.

0.1.1 Running Simulation JobTransformation

 You can run simulation together with digitization using Geant4 by running csc_simul_trf.py script (accessible after sourcing Athena). Type the help command
> csc_simul_trf.py -h
Changed:
<
<
to get information about script parameters.
>
>
to get information about the script parameters. The most important ones are the generation input file, the type of geometry, and the SIM and RDO output names.
 

0.1 Simulation on LXPLUS

Changed:
<
<
Running simulation on LXPLUS with the JobTransformation? is very easy, however it takes INCREDIBLY LONG:
>
>
Assumptions: Your generation output file (if you followed the tutorial, its name should be MC8.PythiaZee.110.Local.pool.root) is in the same directory in which you are trying to run the Job Transformation.

Procedures: Running simulation on LXPLUS with the JobTransformation is very easy; however, it takes incredibly long:

 
Changed:
<
<
> csc_simul_trf.py hits.pool.root rdo.pool.root 1 0 1324354656 ATLAS-CSC-02-01-00 100 1000
>
>
> csc_simul_trf.py PythiaZee.110.Local.pool.root PythiaZee.0-1of110.Local.hits.pool.root PythiaZee.0-1of110.Local.rdo.pool.root 1 0 1324354656 ATLAS-CSC-02-01-00 100 1000
 
Changed:
<
<
Don't forget to substitute < GENERATION POOL ROOT > for the path to your generated pool root file already obtained. Since simulating more than 10 events on LXPLUS is problematic, we need to use LXBATCH.
>
>
Notes: Since simulating more than 10 events on LXPLUS is problematic, we need to use LXBATCH.
 

0.1 Simulation on LXBATCH

To run simulation on LXBATCH, you need to have the following scripts. If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.
Line: 429 to 462
 
  • Simulation JobTransformation? runs together with digitization, therefore it takes very long. Make sure you try to simulate 50 events at most, if you want to simulate more, you need to modify the Batch.Simulation.sh script to run in longer queues (for instance change the 8 hour queue 8nh to 2 day queue 2nd)

0.1 Making simulation script for LXBATCH step by step:

Added:
>
>
The whole code is very similar to Generation.JobTransformation.sh; it only works with more parameters and more outputs, and it takes much longer to execute.
 1. First we want to specify the environment variables, so the script works generally everywhere and if something needs to be changed, it needs to be done only in one place, which can be easily found at the beginning of the file. It is essentially the same as in the Generation.JobTransformation.sh script:
Changed:
<
<
JobTransformation.sh
>
>
JobTransformation.sh
  Make sure ALL these paths are in accord with your actual directories. If that is not the case, you will undoubtedly FAIL.
Line: 453 to 488
 while [ $# -gt 0 ] ; do shift ; done
Changed:
<
<
3. Now we need to setup the workspace and CMT completely anew, since we are running on a remote machine:
>
>
3. Now we need to set up the workspace:
 
### ISSUE CODE
## CREATE TEMPORARY WORKSPACE
Line: 481 to 516
 . /afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10/AtlasProductionRunTime/cmt/setup.sh
Changed:
<
<
4. Now we can finally leave the CMT be and go back to our working directory, where we can RUN THE JOB:
>
>
4. Now we run the job:
 
# Go back to working directory and run the job
echo ${CURRENTDIR}
Line: 494 to 529
 csc_simul_trf.py ${INPUT} hits.pool.root rdo.pool.root ${EVENTS} ${SKIP} 1324354656 ATLAS-CSC-02-01-00 100 1000
Changed:
<
<
5. Finally we need to copy our generation results from the LXBATCH, the most convenient way is to put it to castor using rfcp command.
>
>
5. Finally we need to copy our simulation results from the LXBATCH; the most convenient way is to put them to CASTOR using the rfcp command. Again, this is almost the same as in Generation.JobTransformation.sh, with the following exception:
 
Changed:
<
<
# Copy out the results if exist echo "###################################################################################################" echo "COPYING SIMULATION OUTPUT"
>
>
. . .
if [ -e hits.pool.root ] ; then
rfcp hits.pool.root ${CASTOR_SIMULATION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.sim.pool.root
rfcp rdo.pool.root ${CASTOR_SIMULATION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.rdo.pool.root
fi
Changed:
<
<
# List content of the working directory for debugging purposes
ls -lRt

# Clean workspace before exit
cd ..
rm -fR Simulation.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}

>
>
. . .
 

0.1 LXBATCH Simulation Submitter

Line: 548 to 579
 

1 Reconstruction

Changed:
<
<
Reconstruction is the last step before you can view your data. Generally, it runs on the Reconstruction/RecExample/RecExCommon package. More information about how it works and ho write your JobOptions? can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/RunningReconstruction
>
>
Reconstruction is the last step before you can view and analyse your data. Generally, it runs on the Reconstruction/RecExample/RecExCommon package. More information about how it works and how to write your JobOptions can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/RunningReconstruction
  Documentation: https://twiki.cern.ch/twiki/bin/view/Atlas/ReconstructionDocumentation

0.1 Running Reconstruction

Changed:
<
<
Generation is run just as any job in Athena using the athena.py script on a Job Options file. It is also good to print the output on the screen and store it into a file using | tee:
>
>
Again, reconstruction is run just as any job:
 
Changed:
<
<
> athena.py | tee Generation.Output.txt
>
>
> athena.py jobOptions.py | tee Reconstruction.Output.txt
 
Added:
>
>
However, this requires that you specify your input file, which you can do in your myTopOptions.py. Note that it has to be an RDO file (the digitization output). You also have to insert this file into the PoolFileCatalog.

Notes:

  • Make sure you have over 50 MB of free space if you are running on LXPLUS.
  • By default, the jobOptions.py script is used; it is linked to myTopOptions.py, which you can modify to suit your needs.
  • Be careful about the geometry and trigger flags in your myTopOptions.py. Their order matters.
 

0.0.1 How to insert file to PoolFileCatalog

Changed:
<
<
The classical F.A.Q.; you need to
>
>
The classical F.A.Q.; you need to do just this:
 
> pool_insertFileToCatalog  <PATH>/<FILENAME>
> FCregisterLFN -p <PATH>/<FILENAME> -l <USERNAME>.<FILENAME> 
If you are using a file from CASTOR, do not forget to add the rfio protocol like this: rfio:/<PATH>/<FILENAME>
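For instance, for an RDO file kept on CASTOR, the two commands could look like this (the path and file name are hypothetical examples following this tutorial's layout):

> pool_insertFileToCatalog rfio:/castor/cern.ch/user/m/mzeman/fullchain/digitization/PythiaZee.0-1of110.Local.rdo.pool.root
> FCregisterLFN -p rfio:/castor/cern.ch/user/m/mzeman/fullchain/digitization/PythiaZee.0-1of110.Local.rdo.pool.root -l mzeman.PythiaZee.0-1of110.Local.rdo.pool.root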
Added:
>
>

0.0.1 Customizing Job Options

 

0.0.1 Running Reconstruction using Job Transformation

Changed:
<
<
You can run generation using Pythia Job Transformation by issuing:
>
>
You can run reconstruction using Job Transformation as follows:
 
> csc_reco_trf.py <INPUT RDO.pool.root> esd.pool.root aod.pool.root ntuple.root <MAX EVENTS> <SKIP> <GEOMETRY> DEFAULT 
Deleted:
<
<
Again the big advantage is you don't have to source your CMT home for the Athena to understand the command.
 

0.1 Reconstruction on LXPLUS

Changed:
<
<
Now in our case, we obtained MC8.105145.PythiaZmumu.py and modified file and since we are running locally we want to generate only about ~110 events of Z->e+,e- decay. Make sure you have changed evgenConfig.minevents to 100, otherwise the generation will crash on Too Few Events Requested.
>
>
Since simulation takes so incredibly long, we can now reconstruct only one event to the ESD, AOD and NTUPLE outputs:
 
Changed:
<
<
> csc_evgen08_trf.py 105145 1 110 1324354657 ./MC8.105145.PythiaZmumu.py 105145.pool.root
>
>
> csc_reco_trf.py PythiaZee.0-1of110.Local.rdo.pool.root esd.pool.root aod.pool.root ntuple.root 1 0 ATLAS-CSC-02-01-00 DEFAULT
 

0.1 Reconstruction on LXBATCH

To run reconstruction on LXBATCH, you need to have the following scripts. If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.
Changed:
<
<
>
>
  What you need to do ONLY ONCE is to make the scripts executable by:
Changed:
<
<
> chmod +x Simulation.sh
> chmod +x Batch.Simulation.sh
>
>
> chmod +x JobTransformation.sh
> chmod +x Batch.Reconstruction.sh
 
Changed:
<
<
Now all you need to do is run the Batch.Simulation.sh script, which submits your job to the LXBATCH. The script has four parameters you HAVE TO specify. To run the job, issue the following from the directory where you put BOTH of the scripts:
>
>
Now all you need to do is run the Batch.Reconstruction.sh script, which submits your job to the LXBATCH. The script has four parameters you HAVE TO specify. To run the job, issue the following from the directory where you put BOTH of the scripts:
 
Changed:
<
<
> ./Batch.Simulation.sh
>
>
> ./Batch.Simulation.sh
 
Changed:
<
<
  • GENERATION POOL ROOT is a file we obtained from generation. It should be in your $CASTOR_HOME/fullchain/generation folder. All you need to do is to COPY/PASTE its name, the scripts downloads it and accesses it automatically (string).
  • EVENTS is a number of events you want to simulate (int).
  • SKIP is the number of events you want to skip (for example you want to simulate between 2000 and 2200 events of your generation file)
>
>
  • DIGITIZATION POOL ROOT is a file we obtained from digitization (in our case this step is identical to the simulation step). It should be in your $CASTOR_HOME/fullchain/digitization folder. All you need to do is to COPY/PASTE its name; the script downloads it and accesses it automatically (string).
  • EVENTS is a number of events you want to reconstruct (int).
  • SKIP is the number of events you want to skip (int)
 
  • ID is an identifier of your choosing (string).
Changed:
<
<
A few notes:
  • You can run the submitter Batch.Simulation.sh from any folder (public, private - it does not matter)
  • Make sure both scripts are executable before panicking.
  • Make sure all directories and files specified in the environment variables exist !!! (if you followed this tutorial EXACTLY, everything should be working)
  • The script creates your environment on LXBATCH machine and runs the simulation setup of your choosing
  • Simulation JobTransformation? runs together with digitization, therefore it takes very long. Make sure you try to simulate 50 events at most, if you want to simulate more, you need to modify the Batch.Simulation.sh script to run in longer queues (for instance change the 8 hour queue 8nh to 2 day queue 2nd)
>
>
NOTES: You can run the submitter Batch.Reconstruction.sh from any folder (public, private - it does not matter). Again double-check that:
  • Both scripts are executable.
  • All directories and files specified in the environment variables exist.
 
Added:
>
>

0.0.1 Making customizable LXBATCH reconstruction script

Now at this point there is little sense in repeating how to write a Job Transformation LXBATCH script, because all you need to do is to change a few lines in the code. The script has been provided as an attachment. What we should do now is to write a more customizable reconstruction script that allows us to play with the Job Options and Athena packages.
 
Added:
>
>
 
Added:
>
>
###################################################################################################
### ENVIRONMENT SETUP
## LOCAL (your AFS environment)
# export HOME=/afs/cern.ch/user/m/mzeman # uncomment and change this in case you are missing these environment variables for some reason
# export CASTOR_HOME=/castor/cern.ch/user/m/mzeman # uncomment and change this in case you are missing these environment variables for some reason
export CMT_HOME=${HOME}/cmt-fullchain
export CMT_VERSION=v1r20p20080222 # your CMT version
export RELEASE=14.2.10 # your Athena Release (e.g.: 14.2.10)
export PCACHE=14.2.10 # your PCache (e.g.: 14.2.10.1)
export FULL_CHAIN=${HOME}/testarea/FullChain/
 
Added:
>
>
# CASTOR (your CASTOR environment)
export CASTOR_GENERATION=${CASTOR_HOME}/fullchain/generation
export CASTOR_SIMULATION=${CASTOR_HOME}/fullchain/simulation
export CASTOR_DIGITIZATION=${CASTOR_HOME}/fullchain/digitization
export CASTOR_RECONSTRUCTION=${CASTOR_HOME}/fullchain/reconstruction
export CASTOR_TEMP=${CASTOR_HOME}/fullchain/temp
export CASTOR_LOG=${CASTOR_HOME}/fullchain/log
 
Added:
>
>
###################################################################################################
### INPUT PARAMETERS
export INPUT=$1 # input POOL ROOT file
export EVENTS=$2 # number of events to process
export SKIP=$3 # number of generated events to skip
export ID=$4 # unique run identifier of your choice
 
Changed:
<
<
Compatible with 14.2.10
cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
>
>
# Parse environment variables amongst points
PARSE=(`echo ${INPUT} | tr '.' ' '`)
OUTPUT=${PARSE[0]}.${PARSE[1]} # name of the CASTOR output for easy orientation
 
Changed:
<
<
# main jobOption
include ("RecExCommon/RecExCommon_topOptions.py")
>
>
PARSE=(`echo ${PARSE[2]} | tr '-' ' '`) # another parsing to obtain the total number of simulated events from the filename
PARSE=(`echo ${PARSE[1]} | tr 'of' ' '`)
TOTAL=${PARSE[0]} # total number of events generated in the input file

LAST=$[${EVENTS}+${SKIP}] # arithmetic evaluation

 
Added:
>
>
## Remove all the parameters from $1, $2, $3 and $4, otherwise "source setup.sh ..." would pick them up and probably fail
while [ $# -gt 0 ] ; do shift ; done
 
Added:
>
>
###################################################################################################
### ISSUE CODE
## CREATE TEMPORARY WORKSPACE
echo "###################################################################################################"
echo "CREATING WORKSPACE"
# Delete directory if exists
if [ -d Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID} ] ; then
echo "###################################################################################################"
echo "DELETING ALREADY EXISTENT DIRECTORY"
rm -fR Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}
fi
# Create new directory
mkdir Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}
cd Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}

# Show the power of the processor
grep MHz /var/log/dmesg

# Copy entire run directory in my working place
#rfcp ${CASTOR_DIGITIZATION}/${INPUT} .

## SETUP CMT
echo "###################################################################################################"
echo "SETTING UP CMT"
export CURRENTDIR=`pwd` # remember the current directory

# Create CMT directory
mkdir cmt
cd cmt

# Create REQUIREMENTS file
touch requirements
cat <<EOF >| requirements
#---- CMT HOME REQUIREMENTS FILE ---------------------------------
set CMTSITE CERN
set SITEROOT /afs/cern.ch
macro ATLAS_DIST_AREA \${SITEROOT}/atlas/software/dist
macro ATLAS_TEST_AREA /afs/cern.ch/user/m/mzeman/testarea/FullChain

apply_tag oneTest # use ATLAS working directory
apply_tag setup # use working directory
apply_tag 32 # use 32-bit
apply_tag ${RELEASE}

use AtlasLogin AtlasLogin-* \$(ATLAS_DIST_AREA)

#----------------------------------------------------------------
EOF

echo ""
echo "YOUR REQUIREMENTS FILE:"
cat requirements

export CMT_ROOT=/afs/cern.ch/sw/contrib/CMT/${CMT_VERSION}
source ${CMT_ROOT}/mgr/setup.sh

which cmt
cmt config
source setup.sh -tag=${RELEASE},32
export CMTPATH=/afs/cern.ch/atlas/software/releases/$RELEASE/AtlasProduction/$PCACHE
source /afs/cern.ch/atlas/software/releases/${RELEASE}/AtlasOffline/${PCACHE}/AtlasOfflineRunTime/cmt/setup.sh

### RUN THE JOB
# Go back to working directory
echo ${CURRENTDIR}
cd ${CURRENTDIR}

echo "###################################################################################################"
echo "BUILDING POOL FILE CATALOG"
if [ -e PoolFileCatalog.xml ] ; then
echo "###################################################################################################"
echo "DELETING ALREADY EXISTENT POOL FILE CATALOG"
rm -f PoolFileCatalog.xml
fi

pool_insertFileToCatalog rfio:${CASTOR_DIGITIZATION}/${INPUT} # create PoolFileCatalog.xml
FCregisterLFN -p rfio:${CASTOR_DIGITIZATION}/${INPUT} -l ${CASTOR_DIGITIZATION}/`whoami`.${INPUT}

echo ""
echo "###################################################################################################"
echo "RUNNING RECONSTRUCTION"

# Get the JobOption file
#get_files -jo RecExCommon/myTopOptions.py
touch myTopOptions.py

# FLAGS NEED TO COME FIRST
if [ ${EVENTS} -ne 0 ] ; then
echo "Setting EvtMax to $EVENTS"
echo "### NUMBER OF EVENTS" >> myTopOptions.py
echo "EvtMax=${EVENTS}" >> myTopOptions.py
fi

cat >> myTopOptions.py << EOF

### INPUT FILE (POOL FILE CATALOG needs to be defined)
PoolRDOInput = ["rfio:${CASTOR_DIGITIZATION}/${INPUT}"]

### GEOMETRY SELECTION
DetDescrVersion="ATLAS-CSC-02-01-00" # new geometry for Job Transformations
# DetDescrVersion="ATLAS-CSC-01-02-00" # default geometry

### GENERAL FLAGS
# doTrigger = False # for example do not run trigger simulation
# doTruth=False

### INCLUDE YOUR OWN ALGORITHM(s)
# UserAlgs=[ "MyPackage/MyAlgorithm_jobOptions.py" ]

### ESD output CONFIGURATION
# doESD=False
# doWriteESD=False

### TRIGGER CONFIGURATION
# (see https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerFlags)
include ("TriggerRelease/TriggerFlags.py") # Trigger Flags
TriggerFlags.doLVL1=True
TriggerFlags.doLVL2=True
TriggerFlags.doEF=True

# ANALYSIS OBJECT DATA output CONFIGURATION
# (see https://twiki.cern.ch/twiki/bin/view/Atlas/UserAnalysisTest#The_AOD_Production_Flags)
# doAOD=False
# doWriteAOD=False
# doWriteTAG=False
# from ParticleBuilderOptions.AODFlags import AODFlags

### DETECTOR FLAGS
# switch off Inner Detector, Calorimeters, or Muon Chambers
#include ("RecExCommon/RecExCommon_flags.py")
#DetFlags.Muon_setOff()
#DetFlags.ID_setOff()
#DetFlags.Calo_setOff()

### MAIN JOB OPTIONS
include ("RecExCommon/RecExCommon_topOptions.py")

### USER MODIFIER
## ATLANTIS
# if needed to create JiveXML for Atlantis

 include( "JiveXML/JiveXML_jobOptionBase.py" ) include( "JiveXML/DataTypes_All.py" )
Changed:
<
<
# user modifier should come here
doJiveXML=True
JiveXML=True
OnlineJiveXML=False
AtlantisGeometry=True
>
>
EOF

echo "YOUR JOB OPTIONS:" cat myTopOptions.py echo ""

echo "RUNNING: athena.py myTopOptions.py" athena.py myTopOptions.py

### COPY OUT RESULTS IF EXIST echo "" echo "###################################################################################################" echo "COPYING SIMULATION OUTPUT" if [ -e ESD.pool.root ] ; then echo "ESD file found, copying ..." rfcp ESD.pool.root ${CASTOR_RECONSTRUCTION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.esd.pool.root else echo "No ESD file found." fi

if [ -e AOD.pool.root ] ; then echo "AOD file found, copying ..." rfcp AOD.pool.root ${CASTOR_RECONSTRUCTION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.aod.pool.root else echo "No AOD file found." fi

if [ -e TAG.pool.root ] ; then echo "TAG files found, copying ..." rfcp TAG.pool.root ${CASTOR_RECONSTRUCTION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.tag.pool.root # ZIP ALL JiveXML? outputs to one file for FILE in `ls Jive*` do tar cf JiveXML? .tar Jive* done rfcp JiveXML? .tar ${CASTOR_RECONSTRUCTION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.JiveXML.tar else echo "No TAG files found." fi

if [ -e ntuple.root ] ; then echo "NTUPLE file found, copying ..." rfcp ntuple.root ${CASTOR_RECONSTRUCTION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.ntuple.root else echo "No NTUPLE file found." fi

### LIST WORKSPACE CONTENT (for debugging purposes) echo "" echo "###################################################################################################" echo "LISTING WORKSPACE CONTENT" ls -lRt

### CLEAN WORKSPACE BEFORE EXIT echo "" echo "###################################################################################################" echo "CLEANING WORKSPACE" cd .. rm -fR Reconstruction.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}

0.0.1 LXBATCH reconstruction submitter

This file is essentially the same as the LXBATCH generation and simulation submitters. It has been attached. Now let us go through writing a script that will enable us to submit more jobs in parallel and choose whether we want them customized or not.

.
.
.
still at work
.
.
.
 
Added:
>
>

0.1 Reconstruction on the GRID

 

1 Analysis

Added:
>
>

0.1 Analysis Packages

cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
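After checking the package out, the usual CMT sequence should build it (a sketch of the standard steps under the assumption of a default test-area layout; adapt the path if yours differs):

> cd PhysicsAnalysis/AnalysisCommon/AnalysisExamples/cmt
> cmt config
> source setup.sh
> gmake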

0.2 ROOT

Simple. Run your X server and type:
> root <MY RECONSTRUCTED FILE>
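For example, to open one of the combined ntuples produced by the reconstruction step (the file name here is hypothetical) and then browse it with a TBrowser from the ROOT prompt:

> root PythiaZee.0-1of110.Local.ntuple.root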
 

0.1 Atlantis

It is a Java-based event display tool that can be run on any computer and is therefore not dependent on Athena. Download here:
Line: 686 to 977
 

Deleted:
<
<

Running General Full Chain on the GRID

  -- MartinZeman - 28 Jul 2008
-- MichalMarcisovsky - 28 Jul 2008
Line: 696 to 986
 
META FILEATTACHMENT attachment="Generation.JobTransformation.sh" attr="" comment="generation job transformation that is to be run on LXBATCH" date="1218105395" name="Generation.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Generation.JobTransformation.sh" size="3906" stream="D:\Documents\Projects\CERN\TWiki\Generation.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Simulation.JobTransformation.sh" attr="" comment="simulation job transformation that is to be run on LXBATCH" date="1218105458" name="Simulation.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Simulation.JobTransformation.sh" size="4021" stream="D:\Documents\Projects\CERN\TWiki\Simulation.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Batch.Reconstruction.sh" attr="" comment="script that submits generation to the LXBATCH" date="1218105598" name="Batch.Reconstruction.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Reconstruction.sh" size="1029" stream="D:\Documents\Projects\CERN\TWiki\Batch.Reconstruction.sh" tmpFilename="" user="MartinZeman" version="1"
Changed:
<
<
META FILEATTACHMENT attachment="Reconstruction.JobTransformation.sh" attr="" comment="generation job transformation that is to be run on LXBATCH" date="1218105614" name="Reconstruction.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Reconstruction.JobTransformation.sh" size="4311" stream="D:\Documents\Projects\CERN\TWiki\Reconstruction.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
>
>
META FILEATTACHMENT attachment="Reconstruction.JobTransformation.sh" attr="" comment="reconstruction job transformation that is to be run on LXBATCH" date="1218105614" name="Reconstruction.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Reconstruction.JobTransformation.sh" size="4311" stream="D:\Documents\Projects\CERN\TWiki\Reconstruction.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Reconstruction.Custom.sh" attr="" comment="customisable reconstruction that is to be run on LXBATCH" date="1218126269" name="Reconstruction.Custom.sh" path="D:\Documents\Projects\CERN\TWiki\Reconstruction.Custom.sh" size="8949" stream="D:\Documents\Projects\CERN\TWiki\Reconstruction.Custom.sh" tmpFilename="" user="MartinZeman" version="1"

Revision 17 - 07 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain

TABLE OF CONTENTS
Line: 244 to 244
 

0.1 Generation on LXBATCH

If we want to generate more events, for instance the default minimum 5000 events, we need to run Generation on LXBATCH. To do this you need to have the following scripts:
Changed:
<
<
>
>
 If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.

What you need to do ONLY ONCE is to make the scripts executable by:

Changed:
<
<
> chmod +x Generation.sh
>
>
> chmod +x JobTransformation.sh
 > chmod +x Batch.Generation.sh
Line: 269 to 269
 
  • The script creates your environment on LXBATCH machine and runs the generation setup of your choosing
Changed:
<
<

0.1 Optional - Making generation script for LXBATCH step by step:

>
>

0.1 Making generation script for LXBATCH step by step:

  1. First we want to specify the environment variables, so the script works generally everywhere and if something needs to be changed, it needs to be done only on one place which can be easily found at the beginning of the file:
Line: 403 to 403
 

0.1 Simulation on LXBATCH

To run simulation on LXBATCH, you need to have the following scripts. If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.
Changed:
<
<
>
>
  What you need to do ONLY ONCE is to make the scripts executable by:
Changed:
<
<
> chmod +x Simulation.sh
>
>
> chmod +x JobTransformation.sh
 > chmod +x Batch.Simulation.sh
Line: 428 to 428
 
  • The script creates your environment on LXBATCH machine and runs the simulation setup of your choosing
  • Simulation JobTransformation? runs together with digitization, therefore it takes very long. Make sure you try to simulate 50 events at most, if you want to simulate more, you need to modify the Batch.Simulation.sh script to run in longer queues (for instance change the 8 hour queue 8nh to 2 day queue 2nd)
Changed:
<
<

0.1 Optional - Making simulation script for LXBATCH step by step:

1. First we want to specify the environment variables, so the script works generally everywhere and if something needs to be changed, it needs to be done only on one place which can be easily found at the beginning of the file. It is essentially the same as in the Generation.sh script.:
>
>

0.1 Making simulation script for LXBATCH step by step:

1. First we want to specify the environment variables, so the script works generally everywhere and if something needs to be changed, it needs to be done only in one place, which can be easily found at the beginning of the file. It is essentially the same as in the Generation.JobTransformation.sh script:
 
Changed:
<
<
### ENVIRONMENT SETUP
## LOCAL (your AFS environment)
# export HOME=/afs/cern.ch/user/m/mzeman # uncomment and change this if missing
# export CASTOR_HOME=/castor/cern.ch/user/m/mzeman # uncomment and change this if missing
export CMT_HOME=${HOME}/cmt-fullchain
export FULL_CHAIN=${HOME}/testarea/FullChain/

# CASTOR (your CASTOR environment)
export CASTOR_GENERATION=${CASTOR_HOME}/fullchain/generation
export CASTOR_SIMULATION=${CASTOR_HOME}/fullchain/simulation
export CASTOR_DIGITIZATION=${CASTOR_HOME}/fullchain/digitization
export CASTOR_RECONSTRUCTION=${CASTOR_HOME}/fullchain/reconstruction
export CASTOR_TEMP=${CASTOR_HOME}/fullchain/temp
export CASTOR_LOG=${CASTOR_HOME}/fullchain/log

>
>
JobTransformation.sh
  Make sure ALL these paths are in accord with your actual directories. If that is not the case, you will undoubtedly FAIL.
Line: 561 to 548
 

1 Reconstruction

Added:
>
>
Reconstruction is the last step before you can view your data. Generally, it runs on the Reconstruction/RecExample/RecExCommon package. More information about how it works and how to write your JobOptions can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/RunningReconstruction

Documentation: https://twiki.cern.ch/twiki/bin/view/Atlas/ReconstructionDocumentation

0.1 Running Reconstruction

Generation is run just as any job in Athena using the athena.py script on a Job Options file. It is also good to print the output on the screen and store it into a file using | tee:

> athena.py <JOB OPTIONS> | tee Generation.Output.txt

0.1.1 How to insert file to PoolFileCatalog

The classical F.A.Q.; you need to
> pool_insertFileToCatalog  <PATH>/<FILENAME>
> FCregisterLFN -p <PATH>/<FILENAME> -l <USERNAME>.<FILENAME> 
If you are using a file from CASTOR, do not forget to add the rfio protocol like this: rfio:/<PATH>/<FILENAME>

0.1.2 Running Reconstruction using Job Transformation

You can run generation using Pythia Job Transformation by issuing:
> csc_reco_trf.py <INPUT RDO.pool.root> esd.pool.root aod.pool.root ntuple.root <MAX EVENTS> <SKIP> <GEOMETRY> DEFAULT 

Again the big advantage is you don't have to source your CMT home for the Athena to understand the command.

0.2 Reconstruction on LXPLUS

Now in our case, we obtained MC8.105145.PythiaZmumu.py and modified file and since we are running locally we want to generate only about ~110 events of Z->e+,e- decay. Make sure you have changed evgenConfig.minevents to 100, otherwise the generation will crash on Too Few Events Requested.
> csc_evgen08_trf.py 105145 1 110 1324354657 ./MC8.105145.PythiaZmumu.py 105145.pool.root

0.3 Reconstruction on LXBATCH

To run simulation on LXBATCH, you need to have the following scripts. If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.

What you need to do ONLY ONCE is to make the scripts executable by:

> chmod +x Simulation.sh
> chmod +x Batch.Simulation.sh

Now all you need to do is run the Batch.Simulation.sh script, which submits your job to the LXBATCH. The script has four parameters you HAVE TO specify. To run the job, issue the following from the directory where you put BOTH of the scripts:

> ./Batch.Simulation.sh <GENERATION POOL ROOT> <EVENTS> <SKIP> <ID>
  • GENERATION POOL ROOT is a file we obtained from generation. It should be in your $CASTOR_HOME/fullchain/generation folder. All you need to do is to COPY/PASTE its name, the scripts downloads it and accesses it automatically (string).
  • EVENTS is a number of events you want to simulate (int).
  • SKIP is the number of events you want to skip (for example you want to simulate between 2000 and 2200 events of your generation file)
  • ID is an identifier of your choosing (string).

A few notes:

  • You can run the submitter Batch.Reconstruction.sh from any folder (public, private - it does not matter)
  • Make sure both scripts are executable before panicking.
  • Make sure all directories and files specified in the environment variables exist !!! (if you followed this tutorial EXACTLY, everything should be working)
  • The script creates your environment on the LXBATCH machine and runs the reconstruction setup of your choosing
  • Reconstruction of many events takes very long. Try to reconstruct at most 50 events; if you want more, you need to modify the Batch.Reconstruction.sh script to run in longer queues (for instance, change the 8 hour queue 8nh to the 2 day queue 2nd)

 Compatible with 14.2.10
cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
Line: 586 to 641
 

1 Analysis

1.1 Atlantis

Changed:
<
<
Atlantis is a Java-based event display tool that can run on any computer and is therefore not dependent on Athena. Download here: http://www.hep.ucl.ac.uk/atlas/atlantis/?q=download
>
>
Atlantis is a Java-based event display tool that can run on any computer and is therefore not dependent on Athena. Download here:

Documentation and download: http://cern.ch/atlantis
Online Atlantis (requires Java): http://www.hep.ucl.ac.uk/atlas/atlantis/webstart/atlantis.jnlp

0.0.1 How to create JiveXML?

Assumptions: your reconstruction package is installed and working
  http://www.hep.ucl.ac.uk/atlas/atlantis/?q=jivexml
Changed:
<
<

0.1 Virtual Point 1

Virtual Point 1 is a 3D visualization tool within Athena: http://cern.ch/atlas-vp1
>
>
Solutions: a) Either insert the following includes into your myTopOptions.py:
include("JiveXML/JiveXML_jobOptionBase.py")
include("JiveXML/DataTypes_All.py")

OR b) change the flags directly in your package (e.g. RecExCommon_flags.py) as follows:

# --- write Atlantis xml file 
JiveXML = True
OnlineJiveXML = False

# --- write Atlantis geometry xml file (JiveXML needs to be set True)
AtlantisGeometry = True

Notes:

  • Each XML output file is approx. 300 kB; be careful about your quota
  • Many of the JiveXML outputs can be empty (no visible event reconstructed)
  • (Of course) you need to run the reconstruction again after making these changes; see the example below.
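For example, if you went with option a), re-run your reconstruction as usual (the output file name here is just an example):
> athena.py myTopOptions.py | tee Reconstruction.JiveXML.Output.txt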
 
Deleted:
<
<
In order to run the program, you need to use SSH connection with enabled X11 forwarding and have your X server running. Windows users look here: http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/WindowsRelated
 
Changed:
<
<
To run the Virtual Point 1, type:
>
>

0.1 Virtual Point 1

Virtual Point 1 is a 3D ATLAS visualization tool within Athena. In order to run it, you need an SSH connection with X11 forwarding enabled and your X server running. Windows users look here: http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/WindowsRelated

You need to run the Virtual Point 1 on an ESD file like this:

 
Changed:
<
<
> vp1 <YOUR ESD FILE>.pool.root
>
>
> vp1
 
Added:
>
>
Documentation here: http://cern.ch/atlas-vp1
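For example, using the ESD file produced by the reconstruction Job Transformation earlier in this tutorial:
> vp1 esd.pool.root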



 

Running General Full Chain on the GRID

-- MartinZeman - 28 Jul 2008

Revision 1607 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"
Changed:
<
<

Running General Full Chain on LXBATCH

TABLE OF CONTENTS:
>
>

Running General Full Chain

TABLE OF CONTENTS
 
Added:
>
>
This tutorial is an extension of the Regular Computing Tutorial held at CERN: https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#Running_Athena.

Its main purpose is to create scripts that run the Athena Full Chain on LXBATCH and on the GRID.

 

1 Configuration

The following is a procedure for setting up your lxplus account without having to read much. The official guide on how to set up your environment can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookSetAccount.
Line: 178 to 182
 

1 Generation

Changed:
<
<

0.1 Running Generation JobTransformation?

You can run generation using Pythia by issuing:
>
>
More information can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookGeneration

0.1 Running Generation

Generation is run just like any job in Athena, using the athena.py script on a Job Options file. It is also good to print the output to the screen while storing it in a file using | tee:
 
Changed:
<
<
> csc_evgen08_trf.py <RUN no.> <FIRST EVENT> <MAX EVENTS> <RANDOM SEED> ./<JOB OPTIONS .py> <OUTPUT run.pool.root>
>
>
> athena.py <JOB OPTIONS> | tee Generation.Output.txt

You will need to obtain these two files:

PDGTABLE.MeV
jobOptions.pythia.py

0.0.1 How to get Particle Data Group Table

The PDG Table is a database of particles, their energies, charges and code-names used throughout Athena. You can get it like this:
> get_files PDGTABLE.MeV
 

0.0.1 How to get Job Option files:

Line: 190 to 208
 
  • MC8.105144.PythiaZee.py for Z->e,e decay,
  • and many others
Changed:
<
<
Use the following command to get Job Options you want:
>
>
Use the following command to get the Job Options you want; we are going to use the Z->e+,e- decay:
 
Changed:
<
<
> get_files -jo MC8.105145.PythiaZmumu.py
>
>
> get_files -jo MC8.105144.PythiaZee.py
 

0.0.1 How to change minimum number of events:

Changed:
<
<
The default value is 5000, therefore if you choose your <MAX EVENTS> below 5000, you will get into problems and generation will crash. What you need to do is edit the JobOptions.py file (e.g. MC8.105144.PythiaZee.py) and add this line to the end:
>
>
The default value in MC8.105144.PythiaZee.py is 5000 events, therefore if you choose your <MAX EVENTS> below 5000, you will get into problems and generation will crash. What you need to do is edit the JobOptions.py file (e.g. MC8.105144.PythiaZee.py) and add this line to the end:
 
evgenConfig.minevents = 100 # default is 5000
Changed:
<
<
On LXBATCH we can of course leave the default 5000, however since we use get_files -jo on LXBATCH to obtain the JobOptions? file, it gets the unmodified version from the central repositories.
>
>
On LXBATCH we can of course leave the default 5000; however, since we use get_files -jo on LXBATCH to obtain the JobOptions file, it uses the unmodified version from the central repositories, not the modified version from your AFS account.

0.0.1 Running Generation using Job Transformation

You can run generation using Pythia Job Transformation by issuing:
> csc_evgen08_trf.py <RUN no.> <FIRST EVENT> <MAX EVENTS> <RANDOM SEED> ./<JOB OPTIONS .py> <OUTPUT run.pool.root>

Why would we want to do this? Because the Job Transformation does not require any CMT setup or requirements files. You only need to source these global repository CMT setups and then you can run the job:

>
  . /afs/cern.ch/atlas/software/releases/14.2.10/cmtsite/setup.sh -tag=14.2.10 
  export CMTPATH=/afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10 
  . /afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10/AtlasProductionRunTime/cmt/setup.sh 
>
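A quick sanity check that the transforms are now on your PATH after the setup above (any of the csc_*_trf.py scripts will do):
> which csc_evgen08_trf.py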
 

0.1 Generation on LXPLUS

Changed:
<
<
Now in our case, we obtained the MC8.105145.PythiaZmumu.py file and therefore want to generate ~110 events of Z->mu,mu decay; do:
>
>
Now in our case, we have obtained and modified the MC8.105145.PythiaZmumu.py file, and since we are running locally we want to generate only about ~110 events of Z->mu,mu decay. Make sure you have changed evgenConfig.minevents to 100, otherwise the generation will crash with Too Few Events Requested.
 
> csc_evgen08_trf.py 105145 1 110 1324354657 ./MC8.105145.PythiaZmumu.py 105145.pool.root

0.1 Generation on LXBATCH

Changed:
<
<
To run Generation on LXBATCH (with default minimum 5000 events), you need to have the following scripts. If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.
>
>
If we want to generate more events, for instance the default minimum 5000 events, we need to run Generation on LXBATCH. To do this you need to have the following scripts:
 
Added:
>
>
If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.
  What you need to do ONLY ONCE is to make the scripts executable by:
Line: 352 to 386
 

Changed:
<
<

1 Simulation on LXBATCH

>
>

1 Simulation

 

0.1 Running Simulation JobTransformation?

You can run simulation together with digitization using Geant4 by running the csc_simul_trf.py script (accessible after sourcing Athena). Type the help command
Line: 516 to 550
 

Changed:
<
<

1 Digitization on LXBATCH

>
>

1 Digitization

 Digitization is run together with simulation using the JobTransformation. That is why it takes so long. Simulation produces the hits.pool.root file and digitization produces the rdo.pool.root file. If for some reason you need to run digitization separately, use the following Job Transformation:
csc_digi_trf.py <INPUT hits.pool.root> <OUTPUT rdo.pool.root> <MAX EVENTS> <SKIP EVENTS> <GEOMETRY VERSION> <SEEDS> ...
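For example, a sketch reusing the geometry tag and the two seed values from the simulation call in this tutorial (check csc_digi_trf.py -h for the authoritative parameter list):
> csc_digi_trf.py hits.pool.root rdo.pool.root 50 0 ATLAS-CSC-02-01-00 100 1000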
Line: 526 to 560
 

Changed:
<
<

1 Reconstruction on LXBATCH

>
>

1 Reconstruction

 Compatible with 14.2.10
cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
Line: 549 to 583
 

Added:
>
>

1 Analysis

1.1 Atlantis

Atlantis is a Java-based event display tool that can run on any computer and is therefore not dependent on Athena. Download here: http://www.hep.ucl.ac.uk/atlas/atlantis/?q=download

http://www.hep.ucl.ac.uk/atlas/atlantis/?q=jivexml

1.2 Virtual Point 1

Virtual Point 1 is a 3D visualization tool within Athena: http://cern.ch/atlas-vp1

In order to run the program, you need an SSH connection with X11 forwarding enabled and your X server running. Windows users look here: http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/WindowsRelated

To run the Virtual Point 1, type:

> vp1 <YOUR ESD FILE>.pool.root
 

Running General Full Chain on the GRID

Deleted:
<
<

1 Configuration on the GRID

2 Generation on the GRID

3 Simulation on the GRID

4 Reconstruction on the GRID

  -- MartinZeman - 28 Jul 2008
-- MichalMarcisovsky - 28 Jul 2008
Changed:
<
<
META FILEATTACHMENT attachment="Generation.sh" attr="" comment="generation job that is to be run on LXBATCH" date="1217860680" name="Generation.sh" path="D:\Documents\Projects\CERN\TWiki\Generation.sh" size="3906" stream="D:\Documents\Projects\CERN\TWiki\Generation.sh" tmpFilename="" user="MartinZeman" version="2"
META FILEATTACHMENT attachment="Batch.Generation.sh" attr="" comment="script that submits Generation.sh to the LXBATCH" date="1217863788" name="Batch.Generation.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Generation.sh" size="787" stream="D:\Documents\Projects\CERN\TWiki\Batch.Generation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Batch.Simulation.sh" attr="" comment="script that submits Simulation.sh to the LXBATCH" date="1217863800" name="Batch.Simulation.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Simulation.sh" size="818" stream="D:\Documents\Projects\CERN\TWiki\Batch.Simulation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Simulation.sh" attr="" comment="simulation job that is to be run on LXBATCH" date="1217863850" name="Simulation.sh" path="D:\Documents\Projects\CERN\TWiki\Simulation.sh" size="4036" stream="D:\Documents\Projects\CERN\TWiki\Simulation.sh" tmpFilename="" user="MartinZeman" version="1"
>
>
META FILEATTACHMENT attachment="Batch.Generation.sh" attr="" comment="script that submits generation to the LXBATCH" date="1218105491" name="Batch.Generation.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Generation.sh" size="816" stream="D:\Documents\Projects\CERN\TWiki\Batch.Generation.sh" tmpFilename="" user="MartinZeman" version="2"
META FILEATTACHMENT attachment="Batch.Simulation.sh" attr="" comment="script that submits simulation to the LXBATCH" date="1218105502" name="Batch.Simulation.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Simulation.sh" size="846" stream="D:\Documents\Projects\CERN\TWiki\Batch.Simulation.sh" tmpFilename="" user="MartinZeman" version="2"
META FILEATTACHMENT attachment="Generation.JobTransformation.sh" attr="" comment="generation job transformation that is to be run on LXBATCH" date="1218105395" name="Generation.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Generation.JobTransformation.sh" size="3906" stream="D:\Documents\Projects\CERN\TWiki\Generation.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Simulation.JobTransformation.sh" attr="" comment="simulation job transformation that is to be run on LXBATCH" date="1218105458" name="Simulation.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Simulation.JobTransformation.sh" size="4021" stream="D:\Documents\Projects\CERN\TWiki\Simulation.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Batch.Reconstruction.sh" attr="" comment="script that submits generation to the LXBATCH" date="1218105598" name="Batch.Reconstruction.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Reconstruction.sh" size="1029" stream="D:\Documents\Projects\CERN\TWiki\Batch.Reconstruction.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Reconstruction.JobTransformation.sh" attr="" comment="generation job transformation that is to be run on LXBATCH" date="1218105614" name="Reconstruction.JobTransformation.sh" path="D:\Documents\Projects\CERN\TWiki\Reconstruction.JobTransformation.sh" size="4311" stream="D:\Documents\Projects\CERN\TWiki\Reconstruction.JobTransformation.sh" tmpFilename="" user="MartinZeman" version="1"

Revision 1504 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LXBATCH

TABLE OF CONTENTS:
Line: 103 to 103
# Load Environment Variables
# LOCAL
export FULL_CHAIN=${HOME}/testarea/FullChain/
Added:
>
>
export SCRATCH=${HOME}/scratch0/
 
Changed:
<
<
# CASTOR (you need to create these)
>
>
# CASTOR
export CASTOR_GENERATION=${CASTOR_HOME}/fullchain/generation/
export CASTOR_SIMULATION=${CASTOR_HOME}/fullchain/simulation/
export CASTOR_DIGITIZATION=${CASTOR_HOME}/fullchain/digitization/
Line: 117 to 118
  #----------------------------------------------------------------
Added:
>
>
!!! Please note that $SCRATCH and all environment variables beginning with $CASTOR_ are necessary for the script to run correctly.
 

0.1 Preparing CASTOR

In case you have not already done so, create the directories on CASTOR (the *C*ERN *A*dvanced *STOR*age manager for large amounts of data: http://castor.web.cern.ch/castor/). It is necessary for handling large files, since your AFS space quota is tight; the directories are used in the scripts here:
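A minimal sketch of creating them, matching the $CASTOR_* variables used throughout this tutorial (adjust if your layout differs):
> rfmkdir ${CASTOR_HOME}/fullchain
> rfmkdir ${CASTOR_HOME}/fullchain/generation
> rfmkdir ${CASTOR_HOME}/fullchain/simulation
> rfmkdir ${CASTOR_HOME}/fullchain/digitization
> rfmkdir ${CASTOR_HOME}/fullchain/reconstruction
> rfmkdir ${CASTOR_HOME}/fullchain/temp
> rfmkdir ${CASTOR_HOME}/fullchain/log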
Line: 529 to 532
cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
Added:
>
>
# main jobOption
include ("RecExCommon/RecExCommon_topOptions.py")


include( "JiveXML/JiveXML_jobOptionBase.py" )
include( "JiveXML/DataTypes_All.py" )
# user modifier should come here
doJiveXML=True
JiveXML=True
OnlineJiveXML=False
AtlantisGeometry=True
 

Revision 1404 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"
Changed:
<
<

1 Running General Full Chain on LXBATCH

>
>

Running General Full Chain on LXBATCH

TABLE OF CONTENTS:
 
Changed:
<
<

0.1 Configuration

>
>

1 Configuration

 The following is a procedure for setting up your lxplus account without having to read much. The official guide on how to set up your environment can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookSetAccount.
Changed:
<
<

0.1. Setting up the CMT Environment

>
>

0.1 Setting up the CMT Environment

 1. Login to lxplus.cern.ch using your credentials. (http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login)

2. Prepare the necessary directories. Create your $CMT_HOME directory for your configuration management (see http://www.cmtsite.org):

Line: 65 to 68
  Congratulations, your lxplus environment is now ready
Changed:
<
<

0.2. What to do every time you log on?

>
>

0.1 What to do every time you log on?

 Go to your CMT_HOME directory:
> cd cmt-fullchain
Line: 76 to 79
 > source setup.sh -tag=14.2.0,32
Changed:
<
<

0.2.1. Employing Startup Script

>
>

0.0.1 Employing Startup Script

 You can simplify this process by employing a startup script with functions and environment variables, see http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login. Here is an example of a startup script with functions employed for the Full Chain:
#---- LXPLUS STARTUP SCRIPT ---------------------------------
Line: 115 to 118
 #----------------------------------------------------------------
Changed:
<
<

0.2.2. Preparing CASTOR

>
>

0.1 Preparing CASTOR

 In case you have not already done so, create the directories on CASTOR (the *C*ERN *A*dvanced *STOR*age manager for large amounts of data: http://castor.web.cern.ch/castor/). It is necessary for handling large files, since your AFS space quota is tight; the directories are used in the scripts here:
>
Line: 130 to 133
 
Changed:
<
<

0.3. Running Full Chain on LXPLUS

>
>

0.1 Running Full Chain on LXPLUS

 Follow the tutorial here: https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#Setting_up_job_transformation

Disadvantages:

Line: 140 to 143
 In conclusion, using lxplus on a larger scale is impossible. The solution is to use LXBATCH or the GRID.
Changed:
<
<

0.4. Using LXBATCH

>
>

0.1 Using LXBATCH

 LX batch submitting works in a way that the job you want to run is processed on some other computer than the one you are logged in to. There are some things to know about LXBATCH:
  • You are starting up clean, so there is
    • NO startup script (no functions and environment variables of your own),
Line: 168 to 171
 
> bkill <jobID>
Added:
>
>


 
Changed:
<
<

I. Generation

I.1. Running Generation JobTransformation?

>
>

1 Generation

1.1 Running Generation JobTransformation?

 You can run generation using Pythia by issuing:
> csc_evgen08_trf.py <RUN no.> <FIRST EVENT> <MAX EVENTS> <RANDOM SEED> ./<JOB OPTIONS .py> <OUTPUT run.pool.root>
Changed:
<
<

I.2. How to get Job Option files:

>
>

0.0.1 How to get Job Option files:

 You can choose from a variety of files available on http://reserve02.usatlas.bnl.gov/lxr/source/atlas/Generators/EvgenJobOptions/share/:
  • MC8.105145.PythiaZmumu.py for Z->mu,mu decay,
  • MC8.105144.PythiaZee.py for Z->e,e decay,
Line: 187 to 192
 > get_files -jo MC8.105145.PythiaZmumu.py
Changed:
<
<

I.3. How to change minimum number of events:

>
>

0.0.1 How to change minimum number of events:

 The default value is 5000, therefore if you choose your <MAX EVENTS> below 5000, you will get into problems and generation will crash. What you need to do is edit the JobOptions.py file (e.g. MC8.105144.PythiaZee.py) and add this line to the end:
evgenConfig.minevents = 100 # default is 5000
Line: 195 to 200
  On LXBATCH we can of course leave the default 5000, however since we use get_files -jo on LXBATCH to obtain the JobOptions? file, it gets the unmodified version from the central repositories.
Changed:
<
<

I.4. Generation on LXPLUS

>
>

0.1 Generation on LXPLUS

 Now in our case, we obtained the MC8.105145.PythiaZmumu.py file and therefore want to generate ~110 events of Z->mu,mu decay; do:
> csc_evgen08_trf.py 105145 1 110 1324354657 ./MC8.105145.PythiaZmumu.py 105145.pool.root
Changed:
<
<

I.5. Generation on LXBATCH

>
>

0.1 Generation on LXBATCH

 To run Generation on LXBATCH (with default minimum 5000 events), you need to have the following scripts. If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.
Changed:
<
<
>
>
 
Changed:
<
<
A few notes:
  • You can run this script from any folder (public, private - it does not matter)
  • Make sure you make it executable using > chmod +x Generation.sh
  • Make sure all directories and files specified in the environment variables exist !!! (if you followed this tutorial EXACTLY, everything should be working)
  • The script creates your environment on LXBATCH machine and runs the generation setup of your choosing
>
>
What you need to do ONLY ONCE is to make the scripts executable by:
> chmod +x Generation.sh
> chmod +x Batch.Generation.sh
 
Changed:
<
<
The script has three parameters you HAVE TO specify. To run the job, issue the following from the directory where you put BOTH of the scripts:
>
>
Now all you need to do is run the Batch.Generation.sh script, which submits your job to the LXBATCH. The script has three parameters you HAVE TO specify. To run the job, issue the following from the directory where you put BOTH of the scripts:
 
Changed:
<
<
> ./BatchGenerate.sh <JOBOPTIONS> <EVENTS> <ID>
>
>
> ./Batch.Generation.sh <JOBOPTIONS> <EVENTS> <ID>
 
Changed:
<
<
Making generation script for LXBATCH step by step:
>
>
A few notes:
  • You can run the submitter Batch.Generation.sh from any folder (public, private - it does not matter)
  • Make sure both scripts are executable before panicking.
  • Make sure all directories and files specified in the environment variables exist !!! (if you followed this tutorial EXACTLY, everything should be working)
  • The script creates your environment on LXBATCH machine and runs the generation setup of your choosing

0.1 Optional - Making generation script for LXBATCH step by step:

1. First we want to specify the environment variables, so the script works generally everywhere; if something needs to be changed, it needs to be done only in one place, which can easily be found at the beginning of the file:
### ENVIRONMENT SETUP
## LOCAL (your AFS environment)
Changed:
<
<
# export HOME=/afs/cern.ch/user/<LETTER>/<USERNAME> # uncomment and change in case you are missing these
# export CASTOR_HOME=/castor/cern.ch/user/<LETTER>/<USERNAME> # uncomment and change in case you are missing these
>
>
# export HOME=/afs/cern.ch/user/m/mzeman # uncomment and change if missing
# export CASTOR_HOME=/castor/cern.ch/user/m/mzeman # uncomment and change if missing
export CMT_HOME=${HOME}/cmt-fullchain
export FULL_CHAIN=${HOME}/testarea/FullChain/
Line: 242 to 253
  Make sure ALL these paths are in accord with your actual directories. If that is not the case, you will undoubtedly FAIL.
Changed:
<
<
2. Secondly, we need to process the input parameters coming from the BatchGenerate.sh script:
>
>
2. Secondly, we need to process the input parameters coming from the Batch.Generation.sh script:
 
### INPUT PARAMETERS
Changed:
<
<
export JOBOPTIONS=$1 #MC8.105144.PythiaZee.py # which file to run (string)
export EVENTS=$3 # number of events to process (int)
export ID=$4 # unique run identifier of your choice (string)
>
>
export JOBOPTIONS=$1 # which file to run
export EVENTS=$2 # number of events to process (int)
export ID=$3 # unique run identifier of your choice (string)
# Parse environment variables amongst points
PARSE=(`echo ${JOBOPTIONS} | tr '.' ' '`)
Added:
>
>
OUTPUT=${PARSE[0]}.${PARSE[2]} # name of the CASTOR output for easy orientation
 RUN=${PARSE[1]} # generation Job Transformation requires RUN number parsed from the JobOptions? filename
Deleted:
<
<
OUTPUT=${PARSE[0]}.${PARSE[1]}.${PARSE[2]} # name of the CASTOR output for easy orientation
 
Changed:
<
<
## Remove all the parameters from $1, $2 and $3 (source setup.sh could pick them up)
>
>
## Remove all the parameters from $1, $2 and $3, otherwise "source setup.sh ..." would pick them up and probably fail
 while [ $# -gt 0 ] ; do shift ; done

3. Now we need to set up the workspace and CMT completely anew, since we are running on a remote machine:

Added:
>
>
### ISSUE CODE
## CREATE TEMPORARY WORKSPACE
echo "###################################################################################################"
echo "CREATING WORKSPACE"
Line: 295 to 307
 echo "###################################################################################################" echo "RUNNING GENERATION"
Changed:
<
<
csc_evgen08_trf.py ${RUN} 1 ${EVENTS} 1324354657 ./${JOBOPTIONS} ${OUTPUT}.${EVENTS}.${ID}pool.root
>
>
csc_evgen08_trf.py ${RUN} 1 ${EVENTS} 1324354657 ./${JOBOPTIONS} ${OUTPUT}.${EVENTS}.${ID}.pool.root
 

5. Finally we need to copy our generation results from the LXBATCH; the most convenient way is to put them on CASTOR using the rfcp command.

Line: 303 to 315
# Copy out the results if they exist
echo "###################################################################################################"
echo "COPYING GENERATION OUTPUT"
Changed:
<
<
if [ -e ${OUTPUT}.${EVENTS}.${ID}pool.root ] ; then
rfcp ${OUTPUT}.${EVENTS}.${ID}pool.root ${CASTOR_GENERATION}/${OUTPUT}.${EVENTS}.${ID}pool.root
>
>
if [ -e ${OUTPUT}.${EVENTS}.${ID}.pool.root ] ; then
rfcp ${OUTPUT}.${EVENTS}.${ID}.pool.root ${CASTOR_GENERATION}/${OUTPUT}.${EVENTS}.${ID}.pool.root
 fi
Deleted:
<
<
 
Deleted:
<
<
6. In the final step we clean our mess:
# List content of the working directory for debugging purposes
ls -lRt

Line: 318 to 327
 rm -fR Generation.${OUTPUT}.${EVENTS}.${ID}
Changed:
<
<

I.6. LXBATCH Generation Submitter

>
>

0.1 LXBATCH Generation Submitter

 The script we have just made needs to be run on an LXBATCH machine, which is done with the bsub command. For this reason, we create a batch submitter:
 
Changed:
<
<
# export SCRATCH=${HOME}/scratch0/
export JOBOPTIONS=$1 #MC8.105144.PythiaZee.py # which file to run, look here: http://reserve02.usatlas.bnl.gov/lxr/source/atlas/Generators/EvgenJobOptions/share/
export EVENTS=$2 # number of events to process
export ID=$3 # unique run identifier of your choice
>
>
#!/bin/bash
### INPUT PARAMETERS
export JOBOPTIONS=$1 # which file to run
export EVENTS=$2 # number of events to process (int)
export ID=$3 # unique run identifier of your choice (string)
 
Added:
>
>
# Parse environment variables amongst points
 PARSE=(`echo ${JOBOPTIONS} | tr '.' ' '`)
Changed:
<
<
OUTPUT=${PARSE[0]}.${PARSE[1]}.${PARSE[2]} # name of the Screen Output
>
>
OUTPUT=${PARSE[0]}.${PARSE[2]} # name of the CASTOR output for easy orientation

## Remove all the parameters from $1, $2 and $3, so they don't get picked up by the next script
while [ $# -gt 0 ] ; do shift ; done

bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh -o ${SCRATCH}/${OUTPUT}.${EVENTS}.${ID}.Screen.txt Generation.sh ${JOBOPTIONS} ${EVENTS} ${ID}

 
Deleted:
<
<
bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh -o ${SCRATCH}/ScreenOutput.${OUTPUT}.${EVENTS}.${ID}.txt Generation.sh ${JOBOPTIONS} ${EVENTS} ${ID}
 
Added:
>
>


 
Changed:
<
<

II. Simulation on LXBATCH

>
>

1 Simulation on LXBATCH

1.1 Running Simulation JobTransformation?

You can run simulation together with digitization using Geant4 by running the csc_simul_trf.py script (accessible after sourcing Athena). Type the help command
> csc_simul_trf.py -h
to get information about script parameters.
 
Added:
>
>

0.1 Simulation on LXPLUS

Running simulation on LXPLUS with the JobTransformation is very easy; however, it takes INCREDIBLY LONG:
 
Changed:
<
<
#----------------------------------------------------------------
#!/bin/zsh
# These commands are executed on LXbatch to run the Full Chain
>
>
> csc_simul_trf.py <GENERATION POOL ROOT> hits.pool.root rdo.pool.root 1 0 1324354656 ATLAS-CSC-02-01-00 100 1000
Don't forget to substitute <GENERATION POOL ROOT> with the path to the generated pool root file you have already obtained. Since simulating more than ~10 events on LXPLUS is problematic, we need to use LXBATCH.
 
Changed:
<
<
# ENVIROMENT SETUP
# LOCAL
# export HOME=/afs/cern.ch/user/m/mzeman
# export CASTOR_HOME=/castor/cern.ch/user/m/mzeman
>
>

0.1 Simulation on LXBATCH

To run simulation on LXBATCH, you need to have the following scripts. If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.

What you need to do ONLY ONCE is to make the scripts executable by:

> chmod +x Simulation.sh
> chmod +x Batch.Simulation.sh

Now all you need to do is run the Batch.Simulation.sh script, which submits your job to the LXBATCH. The script has four parameters you HAVE TO specify. To run the job, issue the following from the directory where you put BOTH of the scripts:

> ./Batch.Simulation.sh <GENERATION POOL ROOT> <EVENTS> <SKIP> <ID>
  • GENERATION POOL ROOT is a file we obtained from generation. It should be in your $CASTOR_HOME/fullchain/generation folder. All you need to do is COPY/PASTE its name; the script downloads and accesses it automatically (string).
  • EVENTS is a number of events you want to simulate (int).
  • SKIP is the number of events you want to skip (for example, if you want to simulate events 2000 to 2200 of your generation file) (int).
  • ID is an identifier of your choosing (string).

A few notes:

  • You can run the submitter Batch.Simulation.sh from any folder (public, private - it does not matter)
  • Make sure both scripts are executable before panicking.
  • Make sure all directories and files specified in the environment variables exist !!! (if you followed this tutorial EXACTLY, everything should be working)
  • The script creates your environment on LXBATCH machine and runs the simulation setup of your choosing
  • The Simulation JobTransformation runs together with digitization, therefore it takes very long. Try to simulate at most 50 events; if you want to simulate more, you need to modify the Batch.Simulation.sh script to run in longer queues (for instance, change the 8 hour queue 8nh to the 2 day queue 2nd)

0.2 Optional - Making simulation script for LXBATCH step by step:

1. First we want to specify the environment variables, so the script works generally everywhere; if something needs to be changed, it needs to be done only in one place, which can easily be found at the beginning of the file. It is essentially the same as in the Generation.sh script:
### ENVIRONMENT SETUP
## LOCAL (your AFS environment)
# export HOME=/afs/cern.ch/user/m/mzeman # uncomment and change this if missing
# export CASTOR_HOME=/castor/cern.ch/user/m/mzeman # uncomment and change this if missing
export CMT_HOME=${HOME}/cmt-fullchain
export FULL_CHAIN=${HOME}/testarea/FullChain/
Changed:
<
<
# CASTOR
>
>
# CASTOR (your CASTOR environment)
export CASTOR_GENERATION=${CASTOR_HOME}/fullchain/generation
export CASTOR_SIMULATION=${CASTOR_HOME}/fullchain/simulation
export CASTOR_DIGITIZATION=${CASTOR_HOME}/fullchain/digitization
export CASTOR_RECONSTRUCTION=${CASTOR_HOME}/fullchain/reconstruction
export CASTOR_TEMP=${CASTOR_HOME}/fullchain/temp
export CASTOR_LOG=${CASTOR_HOME}/fullchain/log
Added:
>
>
Make sure ALL these paths are in accord with your actual directories. If that is not the case, you will undoubtedly FAIL.

2. Secondly, we need to process the input parameters coming from the Batch.Simulation.sh script:

### INPUT PARAMETERS
export INPUT=$1    # input POOL ROOT file
export EVENTS=$2 # number of events to process
export SKIP=$3 # number of generated events to skip
export ID=$4 # unique run identifier of your choice
 
Changed:
<
<
# INPUTS
export GENERATION_INPUT=${CASTOR_GENERATION}/105144.pool.root # input RDO file
export EVENTS=$1 # number of events to process
export SKIP=$2 # number of generated events to skip
export DSN=$3 # unique run identifier of your choice
>
>
# Parse environment variables amongst points
PARSE=(`echo ${INPUT} | tr '.' ' '`)
OUTPUT=${PARSE[0]}.${PARSE[1]} # name of the CASTOR output for easy orientation
TOTAL=${PARSE[2]} # total number of events generated in the input file
LAST=$[${EVENTS}+${SKIP}] # arithmetic evaluation
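To see what this parsing does, here is a minimal sketch you can run in any bash shell (the input name is hypothetical, following the naming convention the generation script produces):

INPUT=MC8.PythiaZee.5000.TEST01.pool.root
PARSE=(`echo ${INPUT} | tr '.' ' '`)   # split the name at the dots
echo ${PARSE[0]}.${PARSE[1]}           # prints MC8.PythiaZee (the OUTPUT stem)
echo ${PARSE[2]}                       # prints 5000 (TOTAL events in the file)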
 
Changed:
<
<
# Remove all the parameters from $1, $2 and $3, otherwise # "source setup.sh ..." would pick them up and probably fail
>
>
## Remove all the parameters from $1, $2, $3 and $4, otherwise "source setup.sh ..." would pick them up and probably fail
 while [ $# -gt 0 ] ; do shift ; done
Added:
>
>
 
Changed:
<
<
# Create temporary workspace
>
>
3. Now we need to set up the workspace and CMT completely anew, since we are running on a remote machine:
### ISSUE CODE
## CREATE TEMPORARY WORKSPACE
 echo "###################################################################################################" echo "CREATING WORKSPACE"
Changed:
<
<
mkdir Temp.$DSN.$EVENTS.$SKIP
cd Temp.$DSN.$EVENTS.$SKIP
>
>
mkdir Simulation.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}
cd Simulation.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}
  # Copy entire run directory in my working place
Changed:
<
<
rfcp ${GENERATION_INPUT} .
>
>
rfcp ${CASTOR_GENERATION}/${INPUT} .
# Setup Athena environment
# Have some experience with newer Athena releases,
Line: 381 to 449
export CURRENTDIR=`pwd` # remember the current directory
cd ${CMT_HOME}
Changed:
<
<
echo "Your CMT home directory:" $CMT_HOME source ${CMT_HOME}/setup.sh -tag=14.2.0,32
>
>
echo "Your CMT home directory:" ${CMT_HOME} source ${CMT_HOME}/setup.sh -tag=14.2.10,32
 
Changed:
<
<
. /afs/cern.ch/atlas/software/releases/14.2.0/cmtsite/setup.sh -tag=14.2.0
export CMTPATH=/afs/cern.ch/atlas/software/releases/14.2.0/AtlasProduction/14.2.0.1
. /afs/cern.ch/atlas/software/releases/14.2.0/AtlasProduction/14.2.0.1/AtlasProductionRunTime/cmt/setup.sh
>
>
. /afs/cern.ch/atlas/software/releases/14.2.10/cmtsite/setup.sh -tag=14.2.10
export CMTPATH=/afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10
. /afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10/AtlasProductionRunTime/cmt/setup.sh
 
Added:
>
>
4. Now we can finally leave the CMT be and go back to our working directory, where we can RUN THE JOB:
 # Go back to working directory and run the job
Added:
>
>
echo ${CURRENTDIR}
cd ${CURRENTDIR}

# Run the Job

 echo "###################################################################################################" echo "RUNNING SIMULATION"
Deleted:
<
<
echo $CURRENTDIR
cd $CURRENTDIR
csc_simul_trf.py 105144.pool.root hits.pool.root rdo.pool.root ${EVENTS} ${SKIP} 1324354656 ATLAS-CSC-02-01-00 100 1000
 
Added:
>
>
csc_simul_trf.py ${INPUT} hits.pool.root rdo.pool.root ${EVENTS} ${SKIP} 1324354656 ATLAS-CSC-02-01-00 100 1000

5. Finally we need to copy our simulation results from the LXBATCH; the most convenient way is to put them on CASTOR using the rfcp command.

# Copy out the results if they exist
echo "###################################################################################################"
echo "COPYING SIMULATION OUTPUT"
if [ -e hits.pool.root ] ; then
Changed:
<
<
rfcp hits.pool.root ${CASTOR_SIMULATION}/hits.pool.$DSN.$SKIP.root
rfcp rdo.pool.root ${CASTOR_SIMULATION}/rdo.pool.$DSN.$SKIP.root
>
>
rfcp hits.pool.root ${CASTOR_SIMULATION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.sim.pool.root
rfcp rdo.pool.root ${CASTOR_SIMULATION}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.rdo.pool.root
 fi

# List content of the working directory for debugging purposes

Line: 408 to 485
# Clean workspace before exit
cd ..
Changed:
<
<
rm -fR Temp.$DSN.$EVENTS.$SKIP

#----------------------------------------------------------------

>
>
rm -fR Simulation.${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}
 
Changed:
<
<
LX batch submitter
>
>

0.1 LXBATCH Simulation Submitter

The script we have just made needs to be run on an LXBATCH machine, which is done with the bsub command. For this reason, we create a batch submitter:
 
Changed:
<
<
# export SCRATCH=${HOME}/scratch0/
export EVENTS=$1 # number of events to process
export SKIP=$2 # number of events to skip
export DSN=$3 # unique run identifier of your choice
>
>
#!/bin/bash
export INPUT=$1 # input Generation POOL ROOT
export EVENTS=$2 # number of events to process
export SKIP=$3 # number of events to skip
export ID=$4 # unique run identifier of your choice
 
Changed:
<
<
bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh -o ${SCRATCH}/ScreenOutput.$DSN.$EVENTS.$SKIP.txt Simulate.sh ${EVENTS} ${SKIP} ${DSN}
>
>
# Parse environment variables amongst points
PARSE=(`echo ${INPUT} | tr '.' ' '`)
OUTPUT=${PARSE[0]}.${PARSE[1]} # name of the CASTOR output for easy orientation
TOTAL=${PARSE[2]} # total number of events generated in the input file
LAST=$[${EVENTS}+${SKIP}] # arithmetic evaluation
 
Changed:
<
<
#----------------------------------------------------------------
>
>
## Remove all the parameters from $1, $2, $3 and $4, so they don't get picked up by the next script
while [ $# -gt 0 ] ; do shift ; done

bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh -o ${SCRATCH}/${OUTPUT}.${SKIP}-${LAST}of${TOTAL}.${ID}.Screen.txt Simulation.sh ${INPUT} ${EVENTS} ${SKIP} ${ID}

 
Deleted:
<
<
Compatible with 14.2.10
cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
 
Changed:
<
<

Digitization on LXBATCH

>
>


1 Digitization on LXBATCH

 Digitization is run together with simulation using the JobTransformation. That is why it takes so long. Simulation produces the hits.pool.root file and digitization produces the rdo.pool.root file. If for some reason you need to run digitization separately, use the following Job Transformation:
csc_digi_trf.py <INPUT hits.pool.root> <OUTPUT rdo.pool.root> <MAX EVENTS> <SKIP EVENTS> <GEOMETRY VERSION> <SEEDS> ...
You can simply change the csc_simul_trf.py command in the simulation script to use digitization instead.
Changed:
<
<

Reconstruction on LXBATCH

>
>


 
Changed:
<
<

Running General Full Chain on the GRID

>
>

1 Reconstruction on LXBATCH

Compatible with 14.2.10
cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
 
Changed:
<
<
-- MartinZeman - 28 Jul 2008
>
>


 
Changed:
<
<
-- MichalMarcisovsky - 28 Jul 2008
>
>

Running General Full Chain on the GRID

1 Configuration on the GRID

2 Generation on the GRID

3 Simulation on the GRID

4 Reconstruction on the GRID

 
Changed:
<
<
>
>
-- MartinZeman - 28 Jul 2008
-- MichalMarcisovsky - 28 Jul 2008
 
Changed:
<
<
META FILEATTACHMENT attachment="Generation.sh" attr="" comment="generation job to be submitted to the LXBATCH" date="1217588382" name="Generation.sh" path="D:\Documents\Projects\CERN\LXBATCH\Generation.sh" size="3140" stream="D:\Documents\Projects\CERN\LXBATCH\Generation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="BatchGenerate.sh" attr="" comment="Script to submit Generation.sh to the LXBATCH" date="1217588404" name="BatchGenerate.sh" path="D:\Documents\Projects\CERN\LXBATCH\BatchGenerate.sh" size="601" stream="D:\Documents\Projects\CERN\LXBATCH\BatchGenerate.sh" tmpFilename="" user="MartinZeman" version="1"
>
>
META FILEATTACHMENT attachment="Generation.sh" attr="" comment="generation job that is to be run on LXBATCH" date="1217860680" name="Generation.sh" path="D:\Documents\Projects\CERN\TWiki\Generation.sh" size="3906" stream="D:\Documents\Projects\CERN\TWiki\Generation.sh" tmpFilename="" user="MartinZeman" version="2"
META FILEATTACHMENT attachment="Batch.Generation.sh" attr="" comment="script that submits Generation.sh to the LXBATCH" date="1217863788" name="Batch.Generation.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Generation.sh" size="787" stream="D:\Documents\Projects\CERN\TWiki\Batch.Generation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Batch.Simulation.sh" attr="" comment="script that submits Simulation.sh to the LXBATCH" date="1217863800" name="Batch.Simulation.sh" path="D:\Documents\Projects\CERN\TWiki\Batch.Simulation.sh" size="818" stream="D:\Documents\Projects\CERN\TWiki\Batch.Simulation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="Simulation.sh" attr="" comment="simulation job that is to be run on LXBATCH" date="1217863850" name="Simulation.sh" path="D:\Documents\Projects\CERN\TWiki\Simulation.sh" size="4036" stream="D:\Documents\Projects\CERN\TWiki\Simulation.sh" tmpFilename="" user="MartinZeman" version="1"

Revision 1304 Aug 2008 - FZU.OldrichKepka

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"
Changed:
<
<

Running General Full Chain on LXBATCH

>
>

1 Running General Full Chain on LXBATCH

 
Changed:
<
<

0. Configuration

>
>

0.1 Configuration

 The following is a procedure for setting up your lxplus account without having to read much. The official guide on how to set up your environment can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookSetAccount.

0.1. Setting up the CMT Environment

Revision 1204 Aug 2008 - FZU.OldrichKepka

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LXBATCH

Added:
>
>
 

0. Configuration

The following is a procedure for setting up your lxplus account without having to read much. The official guide on how to set up your environment can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookSetAccount.

Revision 1102 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LXBATCH

Line: 177 to 177
 

I.2. How to get Job Option files:

You can choose from a variety of files available on http://reserve02.usatlas.bnl.gov/lxr/source/atlas/Generators/EvgenJobOptions/share/:
Changed:
<
<
  • MC8.105145.PythiaZmumu.py for Z->mu,mu
  • MC8.105144.PythiaZee.py for Z->e,e
>
>
  • MC8.105145.PythiaZmumu.py for Z->mu,mu decay,
  • MC8.105144.PythiaZee.py for Z->e,e decay,
 
  • and many others

Use the following command to get Job Options you want:

Line: 220 to 220
 
  • ID is an identifier of your choosing (string)

Making generation script for LXBATCH step by step:

Added:
>
>

 1. First we want to specify the environment variables, so the script works generally everywhere; if something needs to be changed, it needs to be done only in one place, which can easily be found at the beginning of the file:
Changed:
<
<
### ENVIROMENT SETUP
>
>
### ENVIRONMENT SETUP
## LOCAL (your AFS environment)
# export HOME=/afs/cern.ch/user/<LETTER>/<USERNAME> # uncomment and change in case you are missing these
# export CASTOR_HOME=/castor/cern.ch/user/<LETTER>/<USERNAME> # uncomment and change in case you are missing these
Line: 241 to 243
  2. Secondly, we need to process the input parameters coming from the BatchGenerate.sh script:
Deleted:
<
<
###################################################################################################
### INPUT PARAMETERS
export JOBOPTIONS=$1 #MC8.105144.PythiaZee.py # which file to run (string)
export EVENTS=$3 # number of events to process (int)
Line: 256 to 257
 while [ $# -gt 0 ] ; do shift ; done
Changed:
<
<
3. Now
>
>
3. Now we need to set up the workspace and CMT completely anew, since we are running on a remote machine:
 
Changed:
<
<
# Create temporary workspace
>
>
## CREATE TEMPORARY WORKSPACE
 echo "###################################################################################################" echo "CREATING WORKSPACE"
Changed:
<
<
mkdir Generation.$JOBOPTIONS.$EVENTS.$DSN
cd Generation.$JOBOPTIONS.$EVENTS.$DSN

# Copy entire run directory in my working place
get_files -jo ${JOBOPTIONS}

>
>
mkdir Generation.${OUTPUT}.${EVENTS}.${ID}
cd Generation.${OUTPUT}.${EVENTS}.${ID}
# Setup Athena environment
# Have some experience with newer Athena releases,
Line: 275 to 273
export CURRENTDIR=`pwd` # remember the current directory
cd ${CMT_HOME}
Changed:
<
<
echo "Your CMT home directory:" $CMT_HOME
>
>
echo "Your CMT home directory:" ${CMT_HOME}
 source ${CMT_HOME}/setup.sh -tag=14.2.10,32

. /afs/cern.ch/atlas/software/releases/14.2.10/cmtsite/setup.sh -tag=14.2.10
export CMTPATH=/afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10
. /afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10/AtlasProductionRunTime/cmt/setup.sh

Added:
>
>
 
Added:
>
>
4. Now we can finally leave the CMT be and go back to our working directory, where we can RUN THE JOB:
# Go back to working directory
echo ${CURRENTDIR}
cd ${CURRENTDIR}
 
Changed:
<
<
# Go back to working directory and run the job
>
>
# Download the Job Options for the current run:
get_files -jo ${JOBOPTIONS}

# Run the Job

 echo "###################################################################################################"
Changed:
<
<
echo "RUNNING SIMULATION" echo $CURRENTDIR cd $CURRENTDIR csc_evgen08_trf.py ${RUN} 1 ${EVENTS} 1324354657 ./${JOBOPTIONS} $JOBOPTIONS.$EVENTS.$DSN.pool.root
>
>
echo "RUNNING GENERATION"
 
Changed:
<
<
# Copy out the results if exist
>
>
csc_evgen08_trf.py ${RUN} 1 ${EVENTS} 1324354657 ./${JOBOPTIONS} ${OUTPUT}.${EVENTS}.${ID}pool.root

5. Finally we need to copy our generation results from the LXBATCH; the most convenient way is to put them on CASTOR using the rfcp command.

# Copy out the results if they exist
 echo "###################################################################################################"
Changed:
<
<
echo "COPYING SIMULATION OUTPUT" if [ -e $JOBOPTIONS.$EVENTS.$DSN.pool.root ] ; then rfcp $JOBOPTIONS.$EVENTS.$DSN.pool.root ${CASTOR_GENERATION}/$JOBOPTIONS.$EVENTS.$DSN.pool.root
>
>
echo "COPYING GENERATION OUTPUT" if [ -e ${OUTPUT}.${EVENTS}.${ID}pool.root ] ; then rfcp ${OUTPUT}.${EVENTS}.${ID}pool.root ${CASTOR_GENERATION}/${OUTPUT}.${EVENTS}.${ID}pool.root
 fi
Added:
>
>
 
Added:
>
>
6. In the final step we clean our mess:
# List content of the working directory for debugging purposes
ls -lRt

# Clean workspace before exit
cd ..

Changed:
<
<
rm -fR Generation.$JOBOPTIONS.$EVENTS.$DSN
>
>
rm -fR Generation.${OUTPUT}.${EVENTS}.${ID}
 
Added:
>
>

I.6. LXBATCH Generation Submitter

The script we have just made needs to be run on an LXBATCH machine, which is done with the bsub command. For this reason, we create a batch submitter:
 
# export SCRATCH=${HOME}/scratch0/
export JOBOPTIONS=$1 #MC8.105144.PythiaZee.py                        # which file to run, look here: http://reserve02.usatlas.bnl.gov/lxr/source/atlas/Generators/EvgenJobOptions/share/
export EVENTS=$2                                       # number of events to process
export ID=$3                                          # unique run identifier of your choice

PARSE=(`echo ${JOBOPTIONS} | tr '.' ' '`)
OUTPUT=${PARSE[0]}.${PARSE[1]}.${PARSE[2]}         # name of the Screen Output

bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh  -o ${SCRATCH}/ScreenOutput.${OUTPUT}.${EVENTS}.${ID}.txt Generation.sh ${JOBOPTIONS} ${EVENTS} ${ID}
 

II. Simulation on LXBATCH

Line: 410 to 435
 
csc_digi_trf.py <INPUT hits.pool.root> <OUTPUT rdo.pool.root> <MAX EVENTS> <SKIP EVENTS> <GEOMETRY VERSION> <SEEDS> ...
Changed:
<
<
You can just simply change the csc_simul_trf.py command to suit this one.
>
>
You can just simply change the csc_simul_trf.py command in the simulation script to use the digitization.
 

Reconstruction on LXBATCH

Revision 1001 Aug 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"
Changed:
<
<

Running General Full Chain on LX batch

>
>

Running General Full Chain on LXBATCH

 
Changed:
<
<

0. Setting up the CMT Environment

>
>

0. Configuration

 The following is a procedure for setting up your lxplus account without having to read much. The official guide on how to set up your environment can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookSetAccount.
Added:
>
>

0.1. Setting up the CMT Environment

 1. Login to lxplus.cern.ch using your credentials. (http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login)

2. Prepare the necessary directories. Create your $CMT_HOME directory for your configuration management (see http://www.cmtsite.org):

Line: 55 to 56
  6. Logout.
Changed:
<
<
7. Login. Enter your CMT_HOME directory and source the Athena setup with release specifications (14.2.0 32-bit).
>
>
7. Login. Enter your CMT_HOME directory and source the Athena setup with release specifications (14.2.10 32-bit).
 
> cd cmt-fullchain
Changed:
<
<
> source setup.sh -tag=14.2.0,32
>
>
> source setup.sh -tag=14.2.10,32
 

Congratulations, your lxplus environment is now ready

Changed:
<
<

What to do every time you log on?

>
>

0.2. What to do every time you log on?

 Go to your CMT_HOME directory:
> cd cmt-fullchain
Line: 74 to 75
 > source setup.sh -tag=14.2.0,32
Changed:
<
<

Employing Startup Script

>
>

0.2.1. Employing Startup Script

 You can simplify this process by employing a startup script with functions and environment variables, see http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login. Here is an example of a startup script with functions employed for the Full Chain:
#---- LXPLUS STARTUP SCRIPT ---------------------------------
Line: 113 to 114
 #----------------------------------------------------------------
Changed:
<
<

Preparing CASTOR

>
>

0.2.2. Preparing CASTOR

 In case you have not already done so, create the directories on CASTOR (the *C*ERN *A*dvanced *STOR*age manager for large amounts of data: http://castor.web.cern.ch/castor/). It is necessary for handling large files, since your AFS space quota is tight; the directories are used in the scripts here:
>
Line: 128 to 129
 
Changed:
<
<

Running Full Chain on LX plus

>
>

0.3. Running Full Chain on LXPLUS

 Follow the tutorial here: https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#Setting_up_job_transformation

Disadvantages:

Line: 136 to 137
 
  • Your session gets busy, therefore you lose one terminal
  • Logging off lxplus, losing connection or turning off your computer causes the run to stop
Changed:
<
<
In conclusion, using lxplus on larger scale is impossible. The solution is to use LX batch or Grid.
>
>
In conclusion, using lxplus on a larger scale is impossible. The solution is to use LXBATCH or GRID.
 
Changed:
<
<

Using LX batch

LX batch submitting works in a way that the job you want to run is processed on some other computer than the one you are just logged in. There are some things to know about LX batch:
>
>

0.4. Using LXBATCH

LX batch submitting works in a way that the job you want to run is processed on some other computer than the one you are logged in to. There are some things to know about LXBATCH:
 
  • You are starting up clean, so there is
    • NO startup script (no functions and environment variables of your own),
    • NO CMT configured,
Line: 167 to 168
 > bkill
Changed:
<
<

I. Generation on LX batch

>
>

I. Generation

I.1. Running Generation JobTransformation?

You can run generation using Pythia by issuing:
 
Changed:
<
<
get_files -jo
>
>
> csc_evgen08_trf.py <RUN no.> <FIRST EVENT> <MAX EVENTS> <RANDOM SEED> ./<JOB OPTIONS .py> <OUTPUT run.pool.root>
 
Changed:
<
<
You can choose from a variety of files:
>
>

I.2. How to get Job Option files:

You can choose from a variety of files available on http://reserve02.usatlas.bnl.gov/lxr/source/atlas/Generators/EvgenJobOptions/share/:
 
  • MC8.105145.PythiaZmumu.py for Z->mu,mu
  • MC8.105144.PythiaZee.py for Z->e,e
  • and many others
Changed:
<
<
Edit: evgenConfig.minevents = 100
>
>
Use the following command to get Job Options you want:
> get_files -jo MC8.105145.PythiaZmumu.py

I.3. How to change minimum number of events:

The default value is 5000, therefore if you choose your <MAX EVENTS> below 5000, you will get into problems and generation will crash. What you need to do is edit the JobOptions.py file (e.g. MC8.105144.PythiaZee.py) and add this line to the end:
evgenConfig.minevents = 100 # default is 5000

On LXBATCH we can of course leave the default 5000, however since we use get_files -jo on LXBATCH to obtain the JobOptions? file, it gets the unmodified version from the central repositories.

I.4. Generation on LXPLUS

Now in our case, we obtained the MC8.105145.PythiaZmumu.py file and therefore want to generate ~110 events of Z->mu,mu decay; do:
> csc_evgen08_trf.py 105145 1 110 1324354657 ./MC8.105145.PythiaZmumu.py 105145.pool.root

I.5. Generation on LXBATCH

To run Generation on LXBATCH (with default minimum 5000 events), you need to have the following scripts. If you did everything just as it is in this tutorial (including all directory and file names), you can run it without modifying anything.

A few notes:

  • You can run this script from any folder (public, private - it does not matter)
  • Make sure you make it executable using > chmod +x Generation.sh
  • Make sure all directories and files specified in the environment variables exist !!! (if you followed this tutorial EXACTLY, everything should be working)
  • The script creates your environment on LXBATCH machine and runs the generation setup of your choosing

The script has three parameters you HAVE TO specify. To run the job, issue the following from the directory where you put BOTH of the scripts:

> ./BatchGenerate.sh <JOBOPTIONS> <EVENTS> <ID>

Making generation script for LXBATCH step by step:
1. First we want to specify the environment variables, so the script works generally everywhere; if something needs to be changed, it needs to be done only in one place, which can easily be found at the beginning of the file:

### ENVIROMENT SETUP
## LOCAL (your AFS environment)
# export HOME=/afs/cern.ch/user/<LETTER>/<USERNAME>   # uncomment and change in case you are missing these
# export CASTOR_HOME=/castor/cern.ch/user/<LETTER>/<USERNAME>   # uncomment and change in case you are missing these
export CMT_HOME=${HOME}/cmt-fullchain
export FULL_CHAIN=${HOME}/testarea/FullChain/

# CASTOR (your CASTOR environment)
export CASTOR_GENERATION=${CASTOR_HOME}/fullchain/generation
export CASTOR_SIMULATION=${CASTOR_HOME}/fullchain/simulation
export CASTOR_DIGITIZATION=${CASTOR_HOME}/fullchain/digitization
export CASTOR_RECONSTRUCTION=${CASTOR_HOME}/fullchain/reconstruction
export CASTOR_TEMP=${CASTOR_HOME}/fullchain/temp
export CASTOR_LOG=${CASTOR_HOME}/fullchain/log
Make sure ALL these paths are in accord with your actual directories. If that is not the case, you will undoubtedly FAIL.

2. Secondly, we need to process the input parameters coming from the BatchGenerate.sh script:

###################################################################################################
### INPUT PARAMETERS
export JOBOPTIONS=$1 #MC8.105144.PythiaZee.py # which file to run (string)
export EVENTS=$3 # number of events to process (int)
export ID=$4 # unique run identifier of your choice (string)

# Parse environment variables amongst points
PARSE=(`echo ${JOBOPTIONS} | tr '.' ' '`)
RUN=${PARSE[1]} # generation Job Transformation requires RUN number parsed from the JobOptions filename
OUTPUT=${PARSE[0]}.${PARSE[1]}.${PARSE[2]} # name of the CASTOR output for easy orientation

## Remove all the parameters from $1, $2 and $3 (source setup.sh could pick them up)
while [ $# -gt 0 ] ; do shift ; done

3. Now

# Create temporary workspace
echo "###################################################################################################"
echo "CREATING WORKSPACE"
mkdir Generation.$JOBOPTIONS.$EVENTS.$DSN
cd Generation.$JOBOPTIONS.$EVENTS.$DSN
      
# Copy entire run directory in my working place
get_files -jo ${JOBOPTIONS}
      
# Setup Athena environment
# Have some experience with newer Athena releases,
# that first setup.sh have to be done in its directory
echo "###################################################################################################"
echo "SETTING UP CMT HOME"
export CURRENTDIR=`pwd`      # remember the current directory
cd ${CMT_HOME}

echo "Your CMT home directory:" $CMT_HOME
source ${CMT_HOME}/setup.sh -tag=14.2.10,32

. /afs/cern.ch/atlas/software/releases/14.2.10/cmtsite/setup.sh -tag=14.2.10 
export CMTPATH=/afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10 
. /afs/cern.ch/atlas/software/releases/14.2.10/AtlasProduction/14.2.10/AtlasProductionRunTime/cmt/setup.sh 
 
Deleted:
<
<
Run: csc_evgen08_trf.py 105145 1 110 1324354657 ./MC8.105145.PythiaZmumu.py 105145.pool.root
 
Changed:
<
<

II. Simulation on LX batch

>
>
# Go back to working directory and run the job
echo "###################################################################################################"
echo "RUNNING SIMULATION"
echo $CURRENTDIR
cd $CURRENTDIR
csc_evgen08_trf.py ${RUN} 1 ${EVENTS} 1324354657 ./${JOBOPTIONS} $JOBOPTIONS.$EVENTS.$DSN.pool.root

# Copy out the results if exist
echo "###################################################################################################"
echo "COPYING SIMULATION OUTPUT"
if [ -e $JOBOPTIONS.$EVENTS.$DSN.pool.root ] ; then
rfcp $JOBOPTIONS.$EVENTS.$DSN.pool.root ${CASTOR_GENERATION}/$JOBOPTIONS.$EVENTS.$DSN.pool.root
fi

# List content of the working directory for debugging purposes
ls -lRt

# Clean workspace before exit
cd ..
rm -fR Generation.$JOBOPTIONS.$EVENTS.$DSN

II. Simulation on LXBATCH

 
#----------------------------------------------------------------
Line: 279 to 403
 Compatible with 14.2.10
cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
Changed:
<
<
¨
>
>

Digitization on LXBATCH

Digitization is run together with simulation using JobTransformation? . That is why it takes so long. Simulation produces the hits.pool.root and digitization produces rdo.pool.root file. If for some reason you need to run digitization separately, use the following Job Transformation:
csc_digi_trf.py <INPUT hits.pool.root> <OUTPUT rdo.pool.root> <MAX EVENTS> <SKIP EVENTS> <GEOMETRY VERSION> <SEEDS> ...
You can just simply change the csc_simul_trf.py command to suit this one.

Reconstruction on LXBATCH

Running General Full Chain on the GRID

  -- MartinZeman - 28 Jul 2008

-- MichalMarcisovsky - 28 Jul 2008

Added:
>
>

META FILEATTACHMENT attachment="Generation.sh" attr="" comment="generation job to be submitted to the LXBATCH" date="1217588382" name="Generation.sh" path="D:\Documents\Projects\CERN\LXBATCH\Generation.sh" size="3140" stream="D:\Documents\Projects\CERN\LXBATCH\Generation.sh" tmpFilename="" user="MartinZeman" version="1"
META FILEATTACHMENT attachment="BatchGenerate.sh" attr="" comment="Script to submit Generation.sh to the LXBATCH" date="1217588404" name="BatchGenerate.sh" path="D:\Documents\Projects\CERN\LXBATCH\BatchGenerate.sh" size="601" stream="D:\Documents\Projects\CERN\LXBATCH\BatchGenerate.sh" tmpFilename="" user="MartinZeman" version="1"

Revision 9 - 31 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LX batch

Changed:
<
<

0. Preparing the LX plus environment

>
>

0. Setting up the CMT Environment

The following is a procedure for setting up your lxplus account without much reading. The official guide on how to set up your environment can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookSetAccount.
 1. Login to lxplus.cern.ch using your credentials. (http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login)

2. Prepare the necessary directories. Create your $CMT_HOME directory for your configuration management (see http://www.cmtsite.org):

Revision 8 - 31 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LX batch

Line: 19 to 19
  3. Create the requirements environment for the CMT. You can do it in the console like this:
Added:
>
>
> cd cmt-fullchain
 > touch requirements
> mcedit requirements
Line: 40 to 41
 #----------------------------------------------------------------
Changed:
<
<
4. Download the CMT environment to your $CMT_HOME. The v1r20p20070208 (an iteration for the v1r20 release) is tested well-working environment.
>
>
4. Download the CMT environment to your $CMT_HOME. The v1r20p20070208 (an iteration for the v1r20 release) is a tested, well-working environment.
 
Deleted:
<
<
> cd cmt-fullchain
 > source /afs/cern.ch/sw/contrib/CMT/v1r20p20070208/mgr/setup.sh
Line: 72 to 72
 > source setup.sh -tag=14.2.0,32
Changed:
<
<
You can simplify this process by employing startup script with functions and environment variables, see http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login

Here is an example of a script employed for the FullChain? :

>
>

Employing Startup Script

You can simplify this process by employing a startup script with functions and environment variables; see http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login. Here is an example of a startup script with functions employed for the Full Chain:
 
#---- LXPLUS STARTUP SCRIPT ---------------------------------
# Load Athena Function
Line: 112 to 111
 #----------------------------------------------------------------
Changed:
<
<
In case you have not already done it, do create the directories on Castor:
>
>

Preparing CASTOR

If you have not already done so, create the directories on CASTOR (the *C*ERN *A*dvanced *STOR*age manager for large amounts of data: http://castor.web.cern.ch/castor/). CASTOR is necessary for handling large files, since your AFS space quota is tight; these directories are used by the scripts here:
 
>
  rfmkdir ${CASTOR_HOME}/fullchain
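You can verify that the directories were created with rfdir, for example:
> rfdir ${CASTOR_HOME}/fullchain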
Line: 150 to 150
 
> bsub -q <QUEUE>[ -R "type==<OS>&&swp><# megabytes>&&pool><# megabytes>" ] <SCRIPT NAME> [ parameters ]
Added:
>
>
You can check out which queues are available using:
> bqueues
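For example, a concrete submission to the one-hour queue might look like this (the script name MyJob.sh and its parameters are purely illustrative):
> bsub -q 8nh -R "type==SLC4&&swp>4000&&pool>2000" -o output.txt MyJob.sh 100 0 test01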
  Listing the jobs currently running:

Revision 7 - 31 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"
Changed:
<
<

Running General Full Chain on LX batch using Job Transformations

>
>

Running General Full Chain on LX batch

 

0. Preparing the LX plus environment

1. Login to lxplus.cern.ch using your credentials. (http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login)

Revision 6 - 30 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LX batch using Job Transformations

Line: 17 to 17
 > mkdir testarea/FullChain
Changed:
<
<
3. Download the CMT environment to your $CMT_HOME. The v1r20p20070208 (an iteration for the v1r20 release) is tested well-working environment.
> cd cmt-fullchain
> source /afs/cern.ch/sw/contrib/CMT/v1r20p20070208/mgr/setup.sh

4. Create the requirements environment for the CMT.

>
>
3. Create the requirements environment for the CMT. You can do it in the console like this:
 
> touch requirements
> mcedit requirements
Changed:
<
<
or use pico or whichever editor you prefer and copy this
>
>
or use pico or whichever editor you prefer (you can also edit the file remotely over sFTP with your favourite editor, see: http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/WindowsRelated) and copy this into the requirements file:
 
#---- CMT HOME REQUIREMENTS FILE ---------------------------------
Line: 46 to 40
 #----------------------------------------------------------------
Added:
>
>
4. Download the CMT environment to your $CMT_HOME. The v1r20p20070208 (an iteration for the v1r20 release) is tested well-working environment.
> cd cmt-fullchain
> source /afs/cern.ch/sw/contrib/CMT/v1r20p20070208/mgr/setup.sh
 5. Configure the CMT to your AFS account (you do this only once per each new CMT_HOME you are using):
> cmt config
Line: 114 to 114
  In case you have not already done it, do create the directories on Castor:
Changed:
<
<
> rfmkdir ${CASTOR_HOME}/fullchain
> rfmkdir ${CASTOR_HOME}/fullchain/generation
> rfmkdir ${CASTOR_HOME}/fullchain/simulation
. . .
>
>
>
rfmkdir ${CASTOR_HOME}/fullchain
rfmkdir ${CASTOR_HOME}/fullchain/generation
rfmkdir ${CASTOR_HOME}/fullchain/simulation
rfmkdir ${CASTOR_HOME}/fullchain/digitization/
rfmkdir ${CASTOR_HOME}/fullchain/reconstruction/
rfmkdir ${CASTOR_HOME}/fullchain/temp/
rfmkdir ${CASTOR_HOME}/fullchain/log/
 

Revision 5 - 30 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LX batch using Job Transformations

Line: 267 to 267
 #----------------------------------------------------------------
Added:
>
>
Compatible with 14.2.10
cmt co -r AnalysisExamples-00-20-14 PhysicsAnalysis/AnalysisCommon/AnalysisExamples
¨
 -- MartinZeman - 28 Jul 2008

-- MichalMarcisovsky - 28 Jul 2008

Revision 4 - 30 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LX batch using Job Transformations

0. Preparing the LX plus environment

1. Login to lxplus.cern.ch using your credentials. (http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login)
Changed:
<
<
2. Prepare the necessary directories. Create your CMT_HOME directory for your configuration management (see http://www.cmtsite.org):
>
>
2. Prepare the necessary directories. Create your $CMT_HOME directory for your configuration management (see http://www.cmtsite.org):
 
> cd $HOME
> mkdir cmt-fullchain
Line: 17 to 17
 > mkdir testarea/FullChain
Changed:
<
<
3. Download the CMT environment to your CMT_HOME. The v1r20p20070208 (an iteration for the v1r20 release) is tested well-working environment.
>
>
3. Download the CMT environment to your $CMT_HOME. The v1r20p20070208 (an iteration for the v1r20 release) is tested well-working environment.
 
> cd cmt-fullchain
> source /afs/cern.ch/sw/contrib/CMT/v1r20p20070208/mgr/setup.sh

Revision 3 - 29 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"

Running General Full Chain on LX batch using Job Transformations

Line: 19 to 19
3. Download the CMT environment to your CMT_HOME. The v1r20p20070208 (an iteration for the v1r20 release) is tested well-working environment.
Changed:
<
<
> cd cmt-cosmics
>
>
> cd cmt-fullchain
 > source /afs/cern.ch/sw/contrib/CMT/v1r20p20070208/mgr/setup.sh

Revision 2 - 28 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
 
META TOPICPARENT name="AtlasSoftware"
Added:
>
>

Running General Full Chain on LX batch using Job Transformations

 
Added:
>
>

0. Preparing the LX plus environment

1. Login to lxplus.cern.ch using your credentials. (http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login)
 
Changed:
<
<

Z->ee

>
>
2. Prepare the necessary directories. Create your CMT_HOME directory for your configuration management (see http://www.cmtsite.org):
> cd $HOME
> mkdir cmt-fullchain

Create your $TestArea, where your packages will be installed:

> mkdir testarea
> mkdir testarea/FullChain

3. Download the CMT environment to your CMT_HOME. The v1r20p20070208 (an iteration for the v1r20 release) is tested well-working environment.

> cd cmt-cosmics
> source /afs/cern.ch/sw/contrib/CMT/v1r20p20070208/mgr/setup.sh

4. Create the requirements environment for the CMT.

> touch requirements
> mcedit requirements
or use pico or whichever editor you prefer and copy this

#---- CMT HOME REQUIREMENTS FILE ---------------------------------
set   CMTSITE  CERN
set   SITEROOT /afs/cern.ch
macro ATLAS_DIST_AREA ${SITEROOT}/atlas/software/dist
macro ATLAS_TEST_AREA /afs/cern.ch/user/m/mzeman/testarea/FullChain

apply_tag oneTest  # use ATLAS working directory
apply_tag setup    # use working directory
apply_tag 32       # use 32-bit

use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)

#----------------------------------------------------------------

5. Configure the CMT to your AFS account (you do this only once per each new CMT_HOME you are using):

> cmt config

6. Logout.

7. Login. Enter your CMT_HOME directory and source the Athena setup with release specifications (14.2.0 32-bit).

> cd cmt-fullchain
> source setup.sh -tag=14.2.0,32

Congratulations, your lxplus environment is now ready

What to do every time you log on?

Go to your CMT_HOME directory:
> cd cmt-fullchain

Load Athena environment:

> source setup.sh -tag=14.2.0,32

You can simplify this process by employing a startup script with functions and environment variables, see http://www-hep2.fzu.cz/twiki/bin/view/ATLAS/AthenaRelated#LXplus_Login

Here is an example of a script employed for the FullChain:

#---- LXPLUS STARTUP SCRIPT ---------------------------------
# Load Athena Function
function Load/Athena {
    source ${CMT_HOME}/setup.sh -tag=$*
   echo "Athena" $* "Loaded"
    shift
}

# Full Chain Setup
function FullChain {
   echo "Loading Full Chain Environment"
   
   # Specify CMT home directory
   export CMT_HOME=${HOME}/cmt-fullchain
   echo "Your CMT home directory:" $CMT_HOME

   # Use function Load/Athena
   Load/Athena 14.2.0,32
   # Load Environment Variables
   # LOCAL
   export FULL_CHAIN=${HOME}/testarea/FullChain/

   # CASTOR (you need to create these)
   export CASTOR_GENERATION=${CASTOR_HOME}/fullchain/generation/
   export CASTOR_SIMULATION=${CASTOR_HOME}/fullchain/simulation/
   export CASTOR_DIGITIZATION=${CASTOR_HOME}/fullchain/digitization/
   export CASTOR_RECONSTRUCTION=${CASTOR_HOME}/fullchain/reconstruction/
   export CASTOR_TEMP=${CASTOR_HOME}/fullchain/temp/
   export CASTOR_LOG=${CASTOR_HOME}/fullchain/log/
   
   echo "Environment Ready"
}

#----------------------------------------------------------------
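
After sourcing this script at login (for instance from your shell profile; the exact mechanism is up to you), a single command prepares the whole session:

> FullChain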

In case you have not already done it, do create the directories on Castor:

> rfmkdir ${CASTOR_HOME}/fullchain
> rfmkdir ${CASTOR_HOME}/fullchain/generation
> rfmkdir ${CASTOR_HOME}/fullchain/simulation
.
.
.

Running Full Chain on LX plus

Follow the tutorial here: https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#Setting_up_job_transformation

Disadvantages:

  • LX plus will kill all your runs after 40 minutes, therefore you can simulate and reconstruct only a few events.
  • Your session gets busy, therefore you lose one terminal
  • Logging off lxplus, losing the connection or turning off your computer causes the run to stop

In conclusion, using lxplus on a larger scale is impractical. The solution is to use LX batch or the Grid.

Using LX batch

LX batch works by running your job on a different machine from the one you are logged in to. There are some things to know about LX batch:
  • You are starting up clean, so there is
    • NO startup script (no functions and environment variables of your own),
    • NO CMT configured,
    • NO Athena version is loaded
  • Your home directory /afs/cern.ch/user/<initial>/<username>/ is accessible as if you were logged in (not just the public folder)
  • All other ATLAS repositories are visible normally (/afs/cern.ch/atlas/software/... and other users)
  • CASTOR is visible normally

Submitting jobs to LX batch is done with the following command:

> bsub -q <QUEUE>[ -R "type==<OS>&&swp><# megabytes>&&pool><# megabytes>" ] <SCRIPT NAME> [ parameters ]

Listing the jobs currently running:

> bjobs [ -l ] [ -w ] [ jobID ] 

Killing a job assuming you know its ID (use bjobs to find out)

> bkill <jobID>

I. Generation on LX batch

get_files -jo <FILE NAME>

You can choose from a variety of files:

  • MC8.105145.PythiaZmumu.py for Z->mu,mu
  • MC8.105144.PythiaZee.py for Z->e,e
  • and many others

Edit: evgenConfig.minevents = 100
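
If you prefer to make this edit non-interactively, something along these lines should work (a sketch, assuming GNU sed and an existing evgenConfig.minevents assignment in the file):

> sed -i 's/evgenConfig.minevents *= *[0-9]*/evgenConfig.minevents = 100/' MC8.105145.PythiaZmumu.py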

Run: csc_evgen08_trf.py 105145 1 110 1324354657 ./MC8.105145.PythiaZmumu.py 105145.pool.root

II. Simulation on LX batch

#----------------------------------------------------------------
#!/bin/zsh      
# These commands are executed on LXbatch to run the Full Chain

# ENVIRONMENT SETUP
# LOCAL
# export HOME=/afs/cern.ch/user/m/mzeman
# export CASTOR_HOME=/castor/cern.ch/user/m/mzeman
export CMT_HOME=${HOME}/cmt-fullchain
export FULL_CHAIN=${HOME}/testarea/FullChain/

# CASTOR
export CASTOR_GENERATION=${CASTOR_HOME}/fullchain/generation
export CASTOR_SIMULATION=${CASTOR_HOME}/fullchain/simulation
export CASTOR_DIGITIZATION=${CASTOR_HOME}/fullchain/digitization
export CASTOR_RECONSTRUCTION=${CASTOR_HOME}/fullchain/reconstruction
export CASTOR_TEMP=${CASTOR_HOME}/fullchain/temp
export CASTOR_LOG=${CASTOR_HOME}/fullchain/log

# INPUTS
export GENERATION_INPUT=${CASTOR_GENERATION}/105144.pool.root      # input evgen file from the generation step
export EVENTS=$1                                       # number of events to process
export SKIP=$2                                         # number of generated events to skip
export DSN=$3                                          # unique run identifier of your choice

# Remove all the parameters from $1, $2 and $3, otherwise
# "source setup.sh ..." would pick them up and probably fail
while [ $# -gt 0 ] ; do shift ; done

# Create temporary workspace
echo "###################################################################################################"
echo "CREATING WORKSPACE"
mkdir Temp.$DSN.$EVENTS.$SKIP
cd Temp.$DSN.$EVENTS.$SKIP
      
# Copy entire run directory in my working place
rfcp ${GENERATION_INPUT} .
      
# Setup Athena environment
# Experience with newer Athena releases shows that the
# first setup.sh has to be sourced from its own directory
echo "###################################################################################################"
echo "SETTING UP CMT HOME"
export CURRENTDIR=`pwd`      # remember the current directory
cd ${CMT_HOME}

echo "Your CMT home directory:" $CMT_HOME
source ${CMT_HOME}/setup.sh -tag=14.2.0,32

. /afs/cern.ch/atlas/software/releases/14.2.0/cmtsite/setup.sh -tag=14.2.0 
export CMTPATH=/afs/cern.ch/atlas/software/releases/14.2.0/AtlasProduction/14.2.0.1 
. /afs/cern.ch/atlas/software/releases/14.2.0/AtlasProduction/14.2.0.1/AtlasProductionRunTime/cmt/setup.sh 

# Go back to working directory and run the job
echo "###################################################################################################"
echo "RUNNING SIMULATION"
echo $CURRENTDIR
cd $CURRENTDIR
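# csc_simul_trf.py positional arguments as used below (our reading; check the
# transform's help output for the authoritative list): input evgen file, output
# hits file, output RDO file, max events, skip events, random seed, geometry
# version tag, and two digitization seed offsets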
csc_simul_trf.py 105144.pool.root hits.pool.root rdo.pool.root ${EVENTS} ${SKIP} 1324354656 ATLAS-CSC-02-01-00 100 1000 

# Copy out the results if they exist
echo "###################################################################################################"
echo "COPYING SIMULATION OUTPUT"
if [ -e hits.pool.root ] ; then
    rfcp hits.pool.root ${CASTOR_SIMULATION}/hits.pool.$DSN.$SKIP.root
    rfcp rdo.pool.root ${CASTOR_SIMULATION}/rdo.pool.$DSN.$SKIP.root
fi
       
# List content of the working directory for debugging purposes
ls -lRt
      
# Clean workspace before exit
cd ..
rm -fR Temp.$DSN.$EVENTS.$SKIP

#----------------------------------------------------------------

LX batch submitter

# export SCRATCH=${HOME}/scratch0/
export EVENTS=$1   # number of events to process
export SKIP=$2      # number of events to skip
export DSN=$3      # unique run identifier of your choice

bsub -R "type==SLC4&&swp>4000&&pool>2000" -q 8nh  -o ${SCRATCH}/ScreenOutput.$DSN.$EVENTS.$SKIP.txt Simulate.sh ${EVENTS} ${SKIP} ${DSN}
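
# Example usage (illustrative; assumes this submitter is saved as BatchSimulate.sh):
# > ./BatchSimulate.sh 100 0 test01    # 100 events, skip 0, run tag "test01"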

#----------------------------------------------------------------
 
Deleted:
<
<
-- MichalMarcisovsky - 28 Jul 2008
-- MartinZeman - 28 Jul 2008
Added:
>
>
-- MichalMarcisovsky - 28 Jul 2008

Revision 1 - 28 Jul 2008 - FZU.MartinZeman

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="AtlasSoftware"

Z->ee

-- MichalMarcisovsky - 28 Jul 2008
-- MartinZeman - 28 Jul 2008

 