Programs in Physics & Physical Chemistry
|Distribution file: aaot_v1_0.gz (11 Kbytes)|
|Manuscript Title: Monte Carlo simulation of pure U(N) and SU(N) lattice gauge theories with fundamental and adjoint couplings.|
|Authors: R.W.B. Ardill, K.J.M. Moriarty, M. Creutz|
|Program title: SUUNFA|
|Catalogue identifier: AAOT_v1_0|
|Distribution format: gz|
|Journal reference: Comput. Phys. Commun. 29(1983)97|
|Programming language: Fortran.|
|Computer: CDC 6600.|
|Operating system: CDC NOS/BE, SCOPE.|
|RAM: 26K words|
|Word size: 60|
|Keywords: Elementary particle physics, Scattering, Structure, Lattice gauge theory, U(N), SU(N), U(N)/Z(N) and SU(N)/Z(N) gauge theories, Fundamental and adjoint representations, Yang-Mills theory, Abelian and non-Abelian gauge theories, QCD and QED models, Non-perturbative effects, Phase transitions, Confining and deconfining phases, Quark theory, Statistical mechanical analogies, Action per plaquette, Metropolis algorithm, Monte Carlo techniques.|
Nature of problem:
The program simulates thermal equilibrium for U(N) and SU(N) lattice gauge theories with couplings in both the fundamental and adjoint representations. Gauge theories on a lattice were originally proposed by Wilson and Polyakov.
A Monte Carlo simulation of the system set up on a lattice of variable dimensionality and lattice size generates a sequence of field configurations on the lattice links. The Metropolis algorithm, originally developed for Monte Carlo simulations in statistical mechanics, is used to generate statistical equilibrium. New configurations are generated link by link and convergence to equilibrium is accelerated by performing the Metropolis algorithm NTMAX times on a given link before passing to the next link. The matrix for a given link is updated using a table of matrices of the correct group symmetry. The program permits the choice of a cold (ordered) or hot (disordered) start.
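The update scheme just described — link-by-link Metropolis sweeps with NTMAX hits per link and a choice of hot (disordered) or cold (ordered) start — can be sketched for the simplest case, a 2-dimensional U(1) lattice with Wilson action beta*(1 - cos theta_p). This is an illustrative toy in Python, not the catalogued Fortran program: the function names, the proposal width `delta`, and the brute-force recomputation of the total action at every hit are choices made here for clarity, where the real program instead updates SU(N)/U(N) matrices from a precomputed table and evaluates only the local action change.

```python
import math
import random

def total_action(theta, L, beta):
    # Wilson action S = beta * sum_p (1 - cos theta_p) on an L x L U(1)
    # lattice; theta[x][y][mu] is the link angle in direction mu (0 or 1).
    S = 0.0
    for x in range(L):
        for y in range(L):
            tp = (theta[x][y][0] + theta[(x + 1) % L][y][1]
                  - theta[x][(y + 1) % L][0] - theta[x][y][1])
            S += beta * (1.0 - math.cos(tp))
    return S

def cold_start(L):
    # Ordered start: all links set to the identity (angle 0).
    return [[[0.0, 0.0] for _ in range(L)] for _ in range(L)]

def hot_start(L, rng=random):
    # Disordered start: link angles drawn uniformly from (-pi, pi).
    return [[[rng.uniform(-math.pi, math.pi) for _ in range(2)]
             for _ in range(L)] for _ in range(L)]

def metropolis_sweep(theta, L, beta, ntmax, delta=0.5, rng=random):
    # One complete pass through the lattice; each link receives ntmax
    # Metropolis hits before moving on, as in the catalogued program.
    accepted = 0
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                for _ in range(ntmax):
                    old = theta[x][y][mu]
                    s_old = total_action(theta, L, beta)
                    theta[x][y][mu] = old + rng.uniform(-delta, delta)
                    s_new = total_action(theta, L, beta)
                    # Accept with probability min(1, exp(-dS)); else revert.
                    if s_new <= s_old or rng.random() < math.exp(s_old - s_new):
                        accepted += 1
                    else:
                        theta[x][y][mu] = old
    return accepted
```

Repeated sweeps from either start converge to the same equilibrium distribution of the action per plaquette, which is the standard check that equilibrium has been reached.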
In practice, the storage requirement is crucially connected with the array ALAT, which stores the link matrices for a given configuration on the lattice. This array is placed via a LEVEL2 statement in the LARGE CORE MEMORY of the CDC 7600 computer, the statement being ignored by the CDC 6600. ALAT is a complex array requiring a total storage of 2*D*S**D*N**2 words, where D is the dimensionality of the lattice, S the number of sites per dimension and N the degree of the group (i.e., U(N) or SU(N)). For efficient runs N should be 2 or more. The U(1) case is included merely for completeness and for testing the program against other U(1) programs. It is inefficient for two reasons:
(i) The heat bath method is usually more efficient than the Metropolis algorithm for U(1).
(ii) For uniformity the program employs 1*1 arrays for the U(1) case; from a computational point of view, simple real or complex scalar variables would clearly be more efficient than real or complex 1*1 arrays. A user wishing to make a series of U(1) runs would do better to use a program such as that of Comput. Phys. Commun. 22(1981)433. It should be noted that for U(1) the fundamental and adjoint representations are identical. For the test run N, S, D took the values 2, 4, 4, respectively. Certain other arrays, to be found in COMMON blocks throughout the program and as local arrays in subroutines MONTE and RENORM, have dimensions that depend on the values of N and D. Comments in the program indicate how these arrays should be dimensioned.
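The storage formula quoted above can be checked against the test-run parameters with a few lines of Python (illustrative only; the helper name `alat_words` is invented here, and the figure covers ALAT alone, not the smaller COMMON-block and local arrays that make up the rest of the quoted 26K-word requirement):

```python
def alat_words(D, S, N):
    # Words of storage for the complex link array ALAT:
    # D links per site, S**D sites, N*N complex entries per link matrix,
    # and 2 (60-bit) words per complex number: 2*D*S**D*N**2.
    return 2 * D * S**D * N**2

# Test-run values D=4, S=4, N=2: 2*4*4**4*2**2 = 8192 words for ALAT.
print(alat_words(4, 4, 2))
```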
The execution time increases with the number of links, the degree N of the group and the number of complete Monte Carlo iterations (or "passes") through the lattice. It also depends on the value of NTMAX ("number of hits per link"): the time per pass increases with NTMAX, though convergence towards equilibrium is accelerated, so there can be an ultimate payoff in making NTMAX fairly large, say 20. For the test run NTMAX was set to 5 and S and D were both set to 4; the time for the 15 SU(2) iterations shown was 109 s (i.e. about 0.1 s per link) on the CDC 6600 computer, the CDC 7600 being approximately 5 times faster.
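The link count behind these timings follows directly from the lattice geometry — D links per site and S**D sites — which a short sketch makes explicit (the helper name `links` and the derived per-link figure are ours, computed from the timing quoted above):

```python
def links(D, S):
    # Number of lattice links: D links per site times S**D sites.
    return D * S**D

# Test run: D = S = 4 gives 4 * 4**4 = 1024 links; the quoted 109 s for
# the 15 SU(2) iterations works out to roughly 0.1 s per link overall.
per_link = 109.0 / links(4, 4)
print(links(4, 4), round(per_link, 3))
```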