Computer Physics Communications Program Library

Distribution file: aeyr_v1_0.tar.gz (152 Kbytes)
Manuscript Title: corr3p_tr: A particle approach for the three-body problem
Authors: S. Edvardsson, K. Karlsson, H. Olin
Program title: corr3p_tr
Catalogue identifier: AEYR_v1_0
Distribution format: tar.gz
Journal reference: Comput. Phys. Commun. 200(2016)259
Programming language: ANSI C.
Computer: 64-bit Linux PC.
Operating system: 64-bit Linux.
RAM: 300 Mbytes
Keywords: Second-order dynamical system, Schrödinger equation, mass polarization, correlation, exotic atoms, S-states.
PACS: 31.15.Fx, 31.25.-v, 31.25.Eb, 31.25.Jf.
Classification: 2.7, 2.8, 2.9.

Nature of problem:
The Schrödinger equation for an arbitrary three-particle system is solved using finite differences, and the resulting eigenvalue problem is solved with a fast particle method [15].

Solution method:
A fast eigensolver is applied (see Appendix). The solver works for both symmetric and nonsymmetric matrices, which allows more accurate nonsymmetric finite difference expressions to be applied at the boundaries. The three-particle Schrödinger equation is transformed in two major steps. The first step is to introduce the function Q(r1, r2, μ) = r1 r2 (1 - μ^2) φ(r1, r2, μ), where μ = cos(θ12). The cusps (r1 = r2, μ = 1) are then transformed into boundary conditions. The derivatives of Q are then continuous in the whole computational space, so the finite difference expressions are well defined. Three-particle coalescence (r1 = r2 = 0) is treated in the same way. The second step is to replace Q(r1, r2, μ) with (2√(x1 x2))^(-1) Q(x1, x2, μ). The space (x1, x2, μ) is much better suited to a finite difference approach, since the square roots x1 = √r1, x2 = √r2 allow the boundaries to be placed much further out. The non-linearity of the x-grid also gives a finer description near the nucleus and a coarser one further out, which saves grid points. Also, in contrast to the usual variable r12, we use μ, which is an independent variable; this simplifies the mathematics and the numerical treatment. Several different grids can naturally be run completely independently of each other, making parallel computation trivial. From the results on several grids, the physical property of interest is extrapolated to continuum space. The extrapolations are made in a MATLAB m-script in which all computations can be made symbolically, so the loss of decimal figures is minimized. The computer code, which includes correlation effects and mass polarization, is highly optimized and handles either triangular or quadratic domains in (x1, x2).

Restrictions:
The amount of CPU time may become unreasonable for states that require boundary conditions very far from the origin. If the condition number of the corresponding Hamiltonian matrix is very high, the number of iterations also grows. The use of double precision computations further limits the accuracy of extrapolated results to about 6-7 decimal figures.

Unusual features:
The numerical solver is based on a particle method presented in [15,16,26]. The Appendix provides specific details on treating eigenvalue problems. The program requires a 64-bit environment (64-bit Linux). Parallel runs can be made conveniently through a simple bash script.

Additional comments:
The discretized wavefunction is complete on every given grid. New interactions can therefore be added to the Hamiltonian conveniently, without the need to search for an appropriate basis set.

Running time:
Given a modern CPU such as an Intel Core i5, and outer boundary conditions for r1 and r2 limited to, say, 16 atomic units, the total CPU time for a serial run over 10 grids is typically a few minutes. One can then expect about 6-7 correct figures in the extrapolated eigenvalue. A single grid with, say, h1 = h2 = h3 = 1/16 converges in less than 1 s (with an error in the eigenvalue of about 1 percent). Parallel runs are possible and can further reduce CPU times for more demanding tasks.

References:
[15] S. Edvardsson, M. Gulliksson, and J. Persson. J. Appl. Mech. ASME, 79:021012, 2012.
[16] S. Edvardsson, M. Neuman, P. Edstrom, and H. Olin. Solving equations through particle dynamics. Submitted to Comput. Phys. Commun., 2015.
[26] M. Gulliksson, S. Edvardsson, and A. Lind. http://arxiv.org/abs/1303.5317, 2013.