Computer Physics Communications Program Library

Distribution file: admg_v2_0.tar.gz (3709 Kbytes)
Manuscript Title: SCELib2: The New Revision of SCELib, the Parallel Computational Library of Molecular Properties in the Single Center Approach.
Authors: N. Sanna, G. Morelli
Program title: SCELib
Catalogue identifier: ADMG_v2_0
Distribution format: tar.gz
Journal reference: Comput. Phys. Commun. 162 (2004) 51
Programming language: C.
Computer: HP ES45 and rx2600, SUN ES4500, IBM SP and any single CPU workstation based on Alpha, SPARC, POWER, Itanium2 and X86 processors.
Operating system: HP Tru64 V5.X, SUNOS V5.8, IBM AIX V5.X, Linux RedHat V8.0.
Has the code been vectorised or parallelized?: yes
Number of processors used: 1 to 32
RAM: 10 Mwords. Up to 2000 Mwords depending on the molecular system and runtime parameters
Word size: 64 bits
Keywords: Single Center Expansion Library, SCE molecular properties, electron-molecule scattering.
PACS: 34.50.Gb, 34.80.Bm.
Classification: 16.1, 16.5.

Nature of problem:
This set of codes implements an efficient procedure to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and exchange/correlation potentials can then be used, via a proper Application Programming Interface (API), to describe the target molecular system in electron-molecule scattering calculations. The molecular properties expanded over a single center are also of more general applicability, and some possible uses in quantum chemistry, biomodelling and drug design are outlined.

Solution method:
The polycentric Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTOs), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ, φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential, and two different models of the correlation/polarization potential induced by the impinging electron, both with the correct asymptotic behaviour for the leading dipole molecular polarizabilities.

Restrictions:
Depending on the molecular system under study and on the operating conditions, the program data may exceed the available RAM. In that case a feature of the program is to memory map a disk file, so that the data are accessed efficiently through the disk device.

Unusual features:
The code has been engineered to use dynamic, runtime-determined global parameters so that, whenever possible, all data fit in RAM. With large values of these parameters the program may nevertheless suffer unexpected performance losses from runtime bottlenecks such as memory swapping, which depend strongly on the hardware used. In such cases a parallel execution of the code is generally sufficient to solve the problem, since the data are partitioned over the available processors. When no suitable parallel system is available, a memory-mapped file mechanism can be used instead: with this option enabled, all available memory acts as a buffer for a disk file holding the whole data set, giving better throughput than the traditional swapping/paging of the Unix OS.

Running time:
The execution time depends strongly on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r, θ, φ) grid size and to the number of angular basis functions used. From the program printout of the memory occupancy of the main arrays, the user can therefore estimate the computer time needed for a given calculation in serial mode. For parallel executions the overall efficiency must also be taken into account; since this depends on the number of processors used as well as on the parallel architecture chosen, no simple general law can be given at present.