Program file: aess_v2_0.tar.gz (4163 Kbytes)
Manuscript Title: Introducing MCgrid 2.0: projecting cross section calculations on grids
Authors: Enrico Bothmann, Nathan Hartland, Steffen Schumann
Program title: MCgrid
Catalogue identifier: AESS_v2_0
Distribution format: tar.gz
Journal reference: Comput. Phys. Commun. 196 (2015) 617
Programming language: C++, shell, Python.
Computer: PC running Linux, Mac.
Operating system: Linux, Mac OS.
RAM: Varying
Keywords: QCD NLO calculations, Event generator, PDFs.
Classification: 11.2, 11.5, 11.9.

External routines: HepMC [1], Rivet [2], APPLgrid [3] and fastNLO [4]. A SHERPA [5] installation is also required.

Does the new version supersede the previous version?: Yes

Nature of problem:
Efficient filling of cross section grid files from fully exclusive parton level Monte Carlo events.

Solution method:
Monte Carlo events are analysed via the Rivet program, which projects them onto discretized cross section tables from APPLgrid [3] or fastNLO [4].
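
The sketch below illustrates this workflow in shell form. It assumes an MCgrid-instrumented Rivet analysis named MCGRID_EXAMPLE; the analysis name, file names and the mcgrid pkg-config package are illustrative and depend on the local installation. The analysis is compiled into a Rivet plugin and then run over fully exclusive parton-level events in HepMC format.

  # Compile the MCgrid-instrumented analysis into a Rivet plugin. The extra
  # compiler/linker flags for MCgrid and its interpolation backend are assumed
  # to be available via pkg-config; adapt to the local installation.
  rivet-buildplugin RivetMCgridExample.so MCGRID_EXAMPLE.cc \
      $(pkg-config --cflags --libs mcgrid)

  # Make the plugin visible to Rivet and choose where grids are written.
  export RIVET_ANALYSIS_PATH=$PWD
  export MCGRID_OUTPUT_PATH=$PWD/grids

  # Run the analysis over parton-level events in HepMC format.
  rivet -a MCGRID_EXAMPLE events.hepmc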

Reasons for new version:

  • Previous MCgrid releases, cf. [6], supported only a single interpolation tool: APPLgrid [3]. Interfacing to more than one is important for cross checks and allows the use of MCgrid in a wider range of existing workflows.
  • The recently released SHERPA 2.2.0 provides more information in the HepMC [1] event record, allowing the exact next-to-leading-order expansion of an MC@NLO calculation (see e.g. [7] for details) to be filled into an interpolation grid. Modifications to MCgrid were necessary to process this additional information and to adopt the new weight naming convention used in SHERPA 2.2.0. The possibility of filling grids for MC@NLO-type calculations broadens the scope of MCgrid: it allows the residual parton-shower dependencies beyond the fixed-order approximation to be quantified. Understanding these dependencies, and eventually taking them into account automatically during the creation of interpolation grids, would help in fitting PDFs to data that are not appropriately described by fixed-order calculations.

Summary of revisions:

  • fastNLO [4] is now supported as an additional interpolation tool. This is the first time the fastNLO package can be used in conjunction with a multi-purpose Monte Carlo event generator. The required version of the fastNLO toolkit [8] is 2.3.1pre-2125 or later. With APPLgrid and fastNLO, all currently available interpolation tools for fixed-order QCD cross sections can now be used in conjunction with MCgrid.
  • Modifications have been made in order to adopt the new naming conventions in the HepMC event record format introduced in SHERPA 2.2.0.
  • The filling of the exact next-to-leading-order expansion of MC@NLO calculations has been implemented. The required information must be provided with the HepMC event record, which is the case for SHERPA 2.2.0.
  • The MCgrid::BinnedGrid class has been added. It corresponds to the Rivet::BinnedHistogram class and allows for the direct creation of grids for every Rivet histogram combined therein.
  • The environment variable MCGRID_OUTPUT_PATH has been introduced for specifying the grid output directory; see the sketch after this list.
  • An automatic counter suffix for grid file names has been added to prevent overwriting.
  • The API has been streamlined for easier enabling of Rivet analyses for MCgrid.
  • The provided examples have been updated for use with SHERPA 2.2.0 and Rivet 2.2.1.
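
The output-path handling and overwrite protection mentioned above can be illustrated with the following shell sketch (directory and file names are purely illustrative):

  # Grids are written below the directory given by MCGRID_OUTPUT_PATH.
  export MCGRID_OUTPUT_PATH=$PWD/grids

  # Repeating a run does not overwrite existing grid files: an automatic
  # counter suffix is appended to the name of any additional grid file.
  ls -R "$MCGRID_OUTPUT_PATH"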

Running time:
Approximately 6 minutes per 1 million Drell-Yan events from SHERPA, including both the event generation and the MCgrid computations. Running times can vary considerably with the process. For the test case, which uses a relatively quick process, 1 million events take about 2 minutes 30 seconds for the initial (phase-space fill) run and about 3 minutes for the second and final run on a 2.9 GHz Ivy Bridge i7 processor.
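
The two runs referred to above might be driven as in the following sketch, assuming a SHERPA run card Run.dat that enables the MCgrid-instrumented Rivet analysis via SHERPA's Rivet interface; the run card name and the EVENTS command-line override are assumptions and must match the local setup.

  # First run: SHERPA generates the events and the analysis establishes the
  # grid phase space (about 2 minutes 30 seconds in the test case).
  Sherpa -f Run.dat EVENTS=1M

  # Second and final run: the interpolation grid is filled
  # (about 3 minutes in the test case).
  Sherpa -f Run.dat EVENTS=1M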

References:
[1] M. Dobbs, J. B. Hansen, The HepMC C++ Monte Carlo event record for High Energy Physics, Comput.Phys.Commun. 134 (2001) 41-46. doi: 10.1016/S0010-4655(00)00189-2.
[2] A. Buckley, J. Butterworth, L. Lönnblad, D. Grellscheid, H. Hoeth, et al., Rivet user manual, Comput.Phys.Commun. 184 (2013) 2803-2819. arXiv: 1003.0694, doi:10.1016/j.cpc.2013.05.021.
[3] T. Carli, D. Clements, A. Cooper-Sarkar, C. Gwenlan, G. P. Salam, et al., A posteriori inclusion of parton density functions in NLO QCD final-state calculations at hadron colliders: The APPLGRID Project, Eur.Phys.J. C66 (2010) 503-524. arXiv:0911.2985, doi:10.1140/epjc/s10052-010-1255-0.
[4] T. Kluge, K. Rabbertz, M. Wobisch, FastNLO: Fast pQCD calculations for PDF fits (2006) 483-486. arXiv:hep-ph/0609285.
[5] T. Gleisberg, S. Höche, F. Krauss, M. Schönherr, S. Schumann, et al., Event generation with SHERPA 1.1, JHEP 0902 (2009) 007. arXiv: 0811.4622, doi:10.1088/1126-6708/2009/02/007.
[6] L. Del Debbio, N. P. Hartland, S. Schumann, MCgrid: projecting cross section calculations on grids, Comput.Phys.Commun. 185 (2014) 2115-2126. arXiv:1312.4460, doi:10.1016/j.cpc.2014.03.023.
[7] S. Höche, F. Krauss, M. Schönherr, F. Siegert, A critical appraisal of NLO+PS matching methods, JHEP 1209 (2012) 049. arXiv:1111.1220, doi:10.1007/JHEP09(2012)049.
[8] D. Britzger, K. Rabbertz, F. Stober, M. Wobisch, New features in version 2 of the fastNLO project (2012) 217-221. arXiv:1208.3641, doi:10.3204/DESY-PROC-2012-02/165.