Programs in Physics & Physical Chemistry
|Download: acgv_v1_0.gz (32 Kbytes)|
|Manuscript Title: Pattern recognition in high energy physics with artificial neural networks: JETNET 2.0.|
|Authors: L. Lönnblad, C. Peterson, T. Rögnvaldsson|
|Program title: JETNET 2.0|
|Catalogue identifier: ACGV_v1_0|
|Distribution format: gz|
|Journal reference: Comput. Phys. Commun. 70(1992)167|
|Programming language: Fortran.|
|Computer: DECstation 3100.|
|Operating system: ULTRIX RISC 4.2.|
|RAM: 90K words|
|Word size: 32|
|Keywords: Pattern recognition, Jet identification, Artificial neural network.|
|Classification: 6.4, 11.9.|
Nature of problem:
High energy physics offers many challenging pattern recognition problems, such as separating photons from leptons based on calorimeter information, or identifying a quark based on the kinematics of its hadronic fragmentation products. The standard procedure for such recognition problems is the introduction of relevant cuts in the multi-dimensional data.
Artificial neural networks (ANN) have turned out to be a very powerful paradigm for automated feature recognition in a wide range of problem areas. In particular, feed-forward multilayer networks are widely used due to their simplicity and good performance. JETNET 2.0 implements such a network with the back-propagation updating algorithm in Fortran 77; a self-organizing map algorithm is also included. JETNET 2.0 consists of a number of subroutines, most of which handle training and test data, which should be linked with a user-supplied, application-specific main program. The package was originally intended mainly for jet triggering applications, where it has been used with success for heavy-quark tagging and quark-gluon separation, but it is of a general nature and can be used for any pattern recognition problem.
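As a rough illustration of the scheme (not JETNET's actual Fortran 77 code, and with layer sizes, learning rate and the toy XOR patterns chosen for illustration only), a minimal feed-forward network with one hidden layer trained by back-propagation might look like:

```python
# Minimal back-propagation sketch in Python (JETNET itself is Fortran 77);
# all parameter choices here are illustrative, not JETNET defaults.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FeedForwardNet:
    """One hidden layer, sigmoid units, online gradient descent."""
    def __init__(self, n_in, n_hidden, n_out, eta=1.0):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)
        self.eta = eta

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)        # hidden activations
        return sigmoid(self.W2 @ self.h + self.b2)     # network output

    def backprop(self, x, target):
        y = self.forward(x)
        d2 = (y - target) * y * (1.0 - y)                 # output-layer delta
        d1 = (self.W2.T @ d2) * self.h * (1.0 - self.h)   # hidden-layer delta
        self.W2 -= self.eta * np.outer(d2, self.h)        # gradient-descent
        self.b2 -= self.eta * d2                          # weight updates
        self.W1 -= self.eta * np.outer(d1, x)
        self.b1 -= self.eta * d1

# Toy pattern set (XOR): input vectors and target features.
patterns = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]

def total_error(net):
    return sum(0.5 * np.sum((net.forward(np.array(x, float))
                             - np.array(t, float)) ** 2)
               for x, t in patterns)

net = FeedForwardNet(2, 4, 1)
err_before = total_error(net)
for epoch in range(5000):
    for x, t in patterns:
        net.backprop(np.array(x, float), np.array(t, float))
err_after = total_error(net)
```

In JETNET the equivalent forward pass and weight update are performed by the package's subroutines, with the user's main program supplying patterns and reading back the outputs.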
The only restriction on the complexity of an application is set by available memory and CPU time (see below). For a problem encoded with ni input nodes, no output (feature) nodes, and H layers of hidden nodes with nh(j) (j=1,...,H) nodes in each layer, the program requires (with full connectivity) the storage of 2Mc real numbers for the connections, given by

Mc = ni*nh(1) + Sum[j=1..H-1] nh(j)*nh(j+1) + nh(H)*no.

In addition, the neurons require the storage of 4Mn real numbers, according to

Mn = ni + Sum[j=1..H] nh(j) + no.

One also needs, of course, to store the patterns, at least temporarily: Mp = ni + no real numbers per pattern.
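The storage formulas above can be evaluated directly; a small sketch (in Python rather than the package's Fortran 77, purely for illustration), applied to the b-quark example quoted below (16 inputs, one hidden layer of 10 nodes, one output):

```python
# Evaluate the JETNET 2.0 storage formulas for a fully connected
# feed-forward network.

def storage(ni, nh, no):
    """ni inputs, nh = list of hidden-layer sizes, no outputs."""
    H = len(nh)
    # Mc: number of connections (weights)
    mc = ni * nh[0] + sum(nh[j] * nh[j + 1] for j in range(H - 1)) + nh[-1] * no
    # Mn: number of neurons
    mn = ni + sum(nh) + no
    # Mp: real numbers needed per stored pattern
    mp = ni + no
    return mc, mn, mp

mc, mn, mp = storage(16, [10], 1)
print(mc, mn, mp)        # 170 27 17
print(2 * mc + 4 * mn)   # reals stored for connections and neurons: 448
```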
The CPU time consumption for the learning process is proportional to Mc, the number of training patterns Np, and the number of learning passes (or epochs) Nepoch needed:

tau proportional to Nepoch * Np * Mc.

As an example, take b-quark identification where 100 epochs are required to train a network with 16 input nodes, one output node and one hidden layer with 10 nodes. With 6000 training patterns, tau is approximately 360 seconds (DECstation 3100).
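Under this proportionality, the quoted timing can be used to calibrate a rough, machine-dependent constant and extrapolate to other configurations. A sketch (the constant below is derived from the DECstation 3100 figure in the text; the "larger network" is a hypothetical example, not from the original):

```python
# Calibrate tau = k * Nepoch * Np * Mc from the quoted example:
# 100 epochs, 6000 patterns, Mc = 16*10 + 10*1 = 170 connections,
# tau ~ 360 s on a DECstation 3100.  k is machine-dependent.
K = 360.0 / (100 * 6000 * 170)   # seconds per connection-pattern-epoch

def train_time(nepoch, npatterns, mc, k=K):
    """Rough CPU-time estimate (seconds) for back-propagation training."""
    return k * nepoch * npatterns * mc

print(train_time(100, 6000, 170))   # reproduces the 360 s example
# Hypothetical larger network: 16 inputs, two hidden layers of 20, 1 output.
mc_big = 16 * 20 + 20 * 20 + 20 * 1   # 740 connections
print(train_time(100, 6000, mc_big))
```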