Program file: aedu_v3_0.tar.gz (155 Kbytes)
Manuscript Title: Hybrid OpenMP/MPI programs for solving the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap
Authors: Bogdan Satarić, Vladimir Slavnić, Aleksandar Belić, Antun Balaž, Paulsamy Muruganandam, Sadhan K. Adhikari
Program title: GP-SCL-HYB package, consisting of: (i) imagtime3d-hyb, (ii) realtime3d-hyb.
Catalogue identifier: AEDU_v3_0
Distribution format: tar.gz
Journal reference: Comput. Phys. Commun. 200 (2016) 411
Programming language: C/OpenMP/MPI.
Computer: Any modern computer with a C compiler supporting OpenMP and an MPI implementation installed.
Operating system: Linux, Unix, Mac OS X, Windows.
RAM: Total memory required to run the programs with the supplied input files, distributed over the MPI nodes used: (i) 310 MB, (ii) 400 MB. Larger grid sizes require more memory, which scales with the total number of grid points Nx*Ny*Nz.
Supplementary material: A pdf of the full manuscript for this version can be downloaded. It includes an individual summary for each of the above programs and the "Summary of revisions" information.
Keywords: Bose-Einstein condensate, Gross-Pitaevskii equation, Split-step Crank-Nicolson scheme, Real- and imaginary-time propagation, C program, MPI, OpenMP, Partial differential equation.
PACS: 02.60.Lj, 02.60.Jh, 02.60.Cb, 03.75.-b.
Classification: 2.9, 4.3, 4.12.

Does the new version supersede the previous version?: No

Nature of problem:
These programs are designed to solve the time-dependent Gross-Pitaevskii (GP) nonlinear partial differential equation in three spatial dimensions in a fully anisotropic trap using a hybrid OpenMP/MPI parallelization approach. The GP equation describes the properties of a dilute trapped Bose-Einstein condensate.
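For orientation, in its standard mean-field form (the precise dimensionless rescaling actually used by the programs is given in the manuscript and in Refs. [1, 2]) the equation reads

    i\hbar \frac{\partial \psi(\mathbf{r},t)}{\partial t} =
      \left[ -\frac{\hbar^2}{2m}\nabla^2
      + \frac{m}{2}\left(\omega_x^2 x^2 + \omega_y^2 y^2 + \omega_z^2 z^2\right)
      + \frac{4\pi\hbar^2 a N}{m}\,|\psi(\mathbf{r},t)|^2 \right] \psi(\mathbf{r},t),

where m is the atomic mass, a the s-wave scattering length, N the number of atoms, omega_x, omega_y, omega_z the (generally different) trap frequencies, and the wavefunction psi is normalized to unity.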

Solution method:
The time-dependent GP equation is solved by the split-step Crank-Nicolson method using discretization in space and time. The discretized equation is then solved by propagation, in either imaginary or real time, over small time steps. The method yields solutions of stationary and/or non-stationary problems.
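Schematically, following the formulation of Refs. [1, 2] (the exact discretized expressions are given there and in the manuscript), the GP Hamiltonian is split into a non-derivative part and the three kinetic parts,

    H = H_1 + H_2 + H_3 + H_4, \qquad
    H_1 = V(\mathbf{r}) + \frac{4\pi\hbar^2 a N}{m}\,|\psi|^2, \qquad
    H_{k+1} = -\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x_k^2} \quad (x_1, x_2, x_3 = x, y, z),

and one time step Delta is performed as

    \psi^{(1)} = e^{-i\Delta H_1/\hbar}\,\psi^{n}, \qquad
    \left(1 + \frac{i\Delta}{2\hbar}H_k\right)\psi^{(k)}
      = \left(1 - \frac{i\Delta}{2\hbar}H_k\right)\psi^{(k-1)}, \quad k = 2, 3, 4,

with \psi^{n+1} = \psi^{(4)}. The H_1 step involves no derivatives and is applied by direct multiplication, while each Crank-Nicolson step reduces, with second-order finite differences, to tridiagonal linear systems along the corresponding grid lines. Imaginary-time propagation corresponds to the substitution t -> -i*tau, with the wavefunction renormalized after each step.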

Reasons for new version:
Previous C [1] and Fortran [2] programs are widely used within the ultracold atoms and nonlinear optics communities, as well as in various other fields [3]. This new version extends the two previously OpenMP-parallelized programs for propagation in imaginary and real time in three spatial dimensions (imagtime3d-th and realtime3d-th) to hybrid, fully distributed OpenMP/MPI programs (imagtime3d-hyb and realtime3d-hyb). The hybrid extensions enable interested researchers to numerically study Bose-Einstein condensates in much greater detail (i.e., with a much finer spatial resolution) than the OpenMP-only codes allow. In the OpenMP (threaded) versions of the programs, the numbers of discretization points in the X, Y, and Z directions are bounded by the total amount of memory available on the single computing node where the code is executed. The new, hybrid versions are not limited in this way, since large numbers of grid points in each spatial direction can be evenly distributed among the nodes of a cluster, effectively spreading the required memory over many MPI nodes.
This is the first reason for developing hybrid versions of the 3D codes. The second reason is the speedup in the execution of numerical simulations that can be gained by using multiple computing nodes with the OpenMP/MPI codes.
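To make the memory-distribution argument concrete, the following is a minimal, self-contained C/OpenMP/MPI sketch of a slab decomposition of an Nx x Ny x Nz grid along the X direction. The variable names (local_nx, psi) and the example grid size are purely illustrative; the actual data layout of the package is described in the manuscript.

    /* Illustrative sketch: distribute the grid over MPI ranks as slabs of
       x-planes, with OpenMP threads sharing the work within each rank. */
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const long Nx = 800, Ny = 640, Nz = 480;   /* example global grid */
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each rank owns a contiguous block of x-planes; the first
           Nx % nprocs ranks take one extra plane to balance the load. */
        long base = Nx / nprocs, rem = Nx % nprocs;
        long local_nx = base + (rank < rem ? 1 : 0);

        /* Only the local slab is allocated, so the memory for the full
           grid is spread over all MPI nodes. */
        double *psi = malloc((size_t) local_nx * Ny * Nz * sizeof *psi);
        if (psi == NULL) MPI_Abort(MPI_COMM_WORLD, 1);

        /* OpenMP threads share the initialization of the local slab. */
        #pragma omp parallel for
        for (long i = 0; i < local_nx * Ny * Nz; i++)
            psi[i] = 0.;

        free(psi);
        MPI_Finalize();
        return 0;
    }

Compiled with, e.g., mpicc -fopenmp and launched with mpirun, each rank allocates only local_nx * Ny * Nz values, so the per-node memory requirement decreases as the number of MPI processes grows.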

Summary of revisions:
See "Supplemenary material" above.

Additional comments:
This package consists of two programs; see the Program title above. Both are hybrid programs, combining threaded (OpenMP) and distributed (MPI) parallelization. For the particular purpose of each program, see the full manuscript (Supplementary material).

Special features: (1) Since the condensate wavefunction data are distributed among the MPI nodes, each MPI process saves its part of the wavefunction output into a separate file, to avoid I/O issues; concatenating the corresponding files from all MPI processes creates the complete wavefunction file (see the illustrative sketch below). (2) Due to a known bug in OpenMPI up to version 1.8.4, memory allocation for an indexed datatype on a single node may fail for large grids (such as 800x640x480). The fix for this bug is available in the 3c489ea branch and is included in OpenMPI as of version 1.8.5.
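As an illustration of the per-process output in feature (1), a minimal C/MPI sketch is given below; the file-name pattern psi3d-out.NNN and the fixed local_n are hypothetical and do not reproduce the package's actual output format, which is described in the manuscript.

    /* Illustrative sketch: each MPI rank writes its local portion of the
       wavefunction to its own file (psi3d-out.000, psi3d-out.001, ...). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Hypothetical local slab of wavefunction data owned by this rank. */
        long local_n = 1000;
        double *psi = calloc((size_t) local_n, sizeof *psi);

        /* Rank-specific file name avoids concurrent writes to one file. */
        char fname[64];
        snprintf(fname, sizeof fname, "psi3d-out.%03d", rank);

        FILE *f = fopen(fname, "wb");
        if (f != NULL) {
            fwrite(psi, sizeof *psi, (size_t) local_n, f);
            fclose(f);
        }

        free(psi);
        MPI_Finalize();
        return 0;
    }

Concatenating the per-rank files in rank order (e.g., with cat) then reproduces the complete wavefunction file, provided the rank order matches the slab order of the decomposition.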

Running time:
All running times refer to programs compiled with the GCC compiler and OpenMPI, and executed on 8 to 32 cluster nodes, each with two 8-core Sandy Bridge Xeon 2.6 GHz processors and 32 GB of RAM, connected by an InfiniBand QDR interconnect. With the supplied input files for small grid sizes, wall-clock running times of several minutes are required on 8 to 10 MPI nodes.

References:
[1] D. Vudragović, I. Vidanović, A. Balaž, P. Muruganandam, S. K. Adhikari, C programs for solving the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap, Comput. Phys. Commun. 183 (2012) 2021.
[2] P. Muruganandam and S. K. Adhikari, Fortran programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap, Comput. Phys. Commun. 180 (2009) 1888.
[3] R. K. Kumar and P. Muruganandam, J. Phys. B: At. Mol. Opt. Phys. 45 (2012) 215301;
L. E. Young-S. and S. K. Adhikari, Phys. Rev. A 86 (2012) 063611;
S. K. Adhikari, J. Phys. B: At. Mol. Opt. Phys. 45 (2012) 235303;
I. Vidanović, N. J. van Druten, and M. Haque, New J. Phys. 15 (2013) 035008;
S. Balasubramanian, R. Ramaswamy, and A. I. Nicolin, Rom. Rep. Phys. 65 (2013) 820;
L. E. Young-S. and S. K. Adhikari, Phys. Rev. A 87 (2013) 013618;
H. Al-Jibbouri, I. Vidanović, A. Balaž, and A. Pelster, J. Phys. B: At. Mol. Opt. Phys. 46 (2013) 065303;
X. Antoine, W. Bao, and C. Besse, Comput. Phys. Commun. 184 (2013) 2621;
B. Nikolić, A. Balaž, and A. Pelster, Phys. Rev. A 88 (2013) 013624;
H. Al-Jibbouri and A. Pelster, Phys. Rev. A 88 (2013) 033621;
S. K. Adhikari, Phys. Rev. A 88 (2013) 043603;
J. B. Sudharsan, R. Radha, and P. Muruganandam, J. Phys. B: At. Mol. Opt. Phys. 46 (2013) 155302;
R. R. Sakhel, A. R. Sakhel, and H. B. Ghassib, J. Low Temp. Phys. 173 (2013) 177;
E. J. M. Madarassy and V. T. Toth, Comput. Phys. Commun. 184 (2013) 1339;
R. K. Kumar, P. Muruganandam, and B. A. Malomed, J. Phys. B: At. Mol. Opt. Phys. 46 (2013) 175302;
W. Bao, Q. Tang, and Z. Xu, J. Comput. Phys. 235 (2013) 423;
A. I. Nicolin, Proc. Rom. Acad. Ser. A-Math. Phys. 14 (2013) 35;
R. M. Caplan, Comput. Phys. Commun. 184 (2013) 1250;
S. K. Adhikari, J. Phys. B: At. Mol. Opt. Phys. 46 (2013) 115301;
Ž. Marojević, E. Göklü, and C. Lämmerzahl, Comput. Phys. Commun. 184 (2013) 1920;
X. Antoine and R. Duboscq, Comput. Phys. Commun. 185 (2014) 2969;
S. K. Adhikari and L. E. Young-S., J. Phys. B: At. Mol. Opt. Phys. 47 (2014) 015302;
K. Manikandan, P. Muruganandam, M. Senthilvelan, and M. Lakshmanan, Phys. Rev. E 90 (2014) 062905;
S. K. Adhikari, Phys. Rev. A 90 (2014) 055601;
A. Balaž, R. Paun, A. I. Nicolin, S. Balasubramanian, and R. Ramaswamy, Phys. Rev. A 89 (2014) 023609;
S. K. Adhikari, Phys. Rev. A 89 (2014) 013630;
J. Luo, Commun. Nonlinear Sci. Numer. Simul. 19 (2014) 3591;
S. K. Adhikari, Phys. Rev. A 89 (2014) 043609;
K.-T. Xi, J. Li, and D.-N. Shi, Physica B 436 (2014) 149;
M. C. Raportaru, J. Jovanovski, B. Jakimovski, D. Jakimovski, and A. Mishev, Rom. J. Phys. 59 (2014) 677;
S. Gautam and S. K. Adhikari, Phys. Rev. A 90 (2014) 043619;
A. I. Nicolin, A. Balaž, J. B. Sudharsan, and R. Radha, Rom. J. Phys. 59 (2014) 204;
K. Sakkaravarthi, T. Kanna, M. Vijayajayanthi, and M. Lakshmanan, Phys. Rev. E 90 (2014) 052912;
S. K. Adhikari, J. Phys. B: At. Mol. Opt. Phys. 47 (2014) 225304;
R. K. Kumar and P. Muruganandam, Numerical studies on vortices in rotating dipolar Bose-Einstein condensates, Proceedings of the 22nd International Laser Physics Workshop, J. Phys. Conf. Ser. 497 (2014) 012036;
A. I. Nicolin and I. Rata, Density waves in dipolar Bose-Einstein condensates by means of symbolic computations, High-Performance Computing Infrastructure for South East Europe's Research Communities: Results of the HP-SEE User Forum 2012, in Springer Series: Modeling and Optimization in Science and Technologies 2 (2014) 15;
S. K. Adhikari, Phys. Rev. A 89 (2014) 043615;
R. K. Kumar and P. Muruganandam, Eur. Phys. J. D 68 (2014) 289;
J. B. Sudharsan, R. Radha, H. Fabrelli, A. Gammal, and B. A. Malomed, Phys. Rev. A 92 (2015) 053601;
S. K. Adhikari, J. Phys. B: At. Mol. Opt. Phys. 48 (2015) 165303;
F. I. Moxley III, T. Byrnes, B. Ma, Y. Yan, and W. Dai, J. Comput. Phys. 282 (2015) 303;
S. K. Adhikari, Phys. Rev. E 92 (2015) 042926;
R. R. Sakhel, A. R. Sakhel, and H. B. Ghassib, Physica B 478 (2015) 68;
S. Gautam and S. K. Adhikari, Phys. Rev. A 92 (2015) 023616;
D. Novoa, D. Tommasini, and J. A. Nóvoa-López, Phys. Rev. E 91 (2015) 012904;
S. Gautam and S. K. Adhikari, Laser Phys. Lett. 12 (2015) 045501;
K.-T. Xi, J. Li, and D.-N. Shi, Physica B 459 (2015) 6;
R. K. Kumar, L. E. Young-S., D. Vudragović, A. Balaž, P. Muruganandam, and S. K. Adhikari, Comput. Phys. Commun. 195 (2015) 117;
S. Gautam and S. K. Adhikari, Phys. Rev. A 91 (2015) 013624;
A. I. Nicolin, M. C. Raportaru, and A. Balaž, Rom. Rep. Phys. 67 (2015) 143;
S. Gautam and S. K. Adhikari, Phys. Rev. A 91 (2015) 063617;
E. J. M. Madarassy and V. T. Toth, Phys. Rev. D 91 (2015) 044041.
[4] Open Message Passing Interface (OpenMPI), http://www.open-mpi.org/ (2015).
[5] Message Passing Interface Chameleon (MPICH), https://www.mpich.org/ (2015).
[6] J. Choi, J. J. Dongarra, D. W. Walker, Parallel matrix transpose algorithms on distributed memory concurrent computers, Parallel Comput. 21 (1995) 1387.