Computer Physics Communications Program Library

Distribution file: aeyn_v1_0.tar.gz (53 Kbytes)
Manuscript Title: Parallel implementation of the time-evolving block decimation algorithm for the Bose-Hubbard model
Authors: Miroslav Urbanek, Pavel Soldán
Program title: TEBDOL
Catalogue identifier: AEYN_v1_0
Distribution format: tar.gz
Journal reference: Comput. Phys. Commun. 199 (2016) 170
Programming language: Common Lisp.
Computer: x86-64.
Operating system: Linux.
Has the code been vectorised or parallelized?: Parallelized using MPI
RAM: 1-64 GB
Keywords: Bose-Hubbard model, Common Lisp, Message Passing Interface, Optical lattice, Tensor network, Time-evolving block decimation.
Classification: 7.7.

External routines: Basic Linear Algebra Subprograms (BLAS), Linear Algebra Package (LAPACK), Message Passing Interface (MPI)

Nature of problem:
A system of neutral atoms in an optical lattice is a many-body quantum system that can be described by the Bose-Hubbard model. The Hilbert space dimension of a many-body quantum model grows exponentially with the number of particles, so simulating time evolution in the Bose-Hubbard model is a hard problem even for a moderate number of particles.
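For the Bose-Hubbard model this growth is easy to quantify: N bosons on L sites span a Fock space of dimension C(N + L - 1, N). A short Python sketch of the count (illustrative only; TEBDOL itself is written in Common Lisp, and the function name here is ours):

```python
# Hilbert-space dimension of the Bose-Hubbard model: the number of ways to
# distribute N indistinguishable bosons over L sites is C(N + L - 1, N).
from math import comb

def bose_hubbard_dim(n_particles, n_sites):
    """Dimension of the Fock space with a fixed total particle number."""
    return comb(n_particles + n_sites - 1, n_particles)

# At unit filling (N particles on N sites) the dimension explodes quickly:
for n in (4, 8, 16):
    print(n, bose_hubbard_dim(n, n))
```

Already at 16 particles on 16 sites the dimension exceeds 3 × 10^8, which is why exact propagation of the full state vector quickly becomes infeasible.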

Solution method:
A system state is represented by a tensor network. Its time evolution is then simulated using the time-evolving block decimation (TEBD) algorithm.
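The elementary TEBD move contracts two neighboring matrix-product-state tensors, applies a two-site evolution gate, and splits the result back with a truncated singular value decomposition. A minimal Python/NumPy sketch of that single step (TEBDOL itself is written in Common Lisp; the tensor shapes, names, and the omitted Trotter sweep are our assumptions for illustration):

```python
# One TEBD update: apply a two-site gate to a pair of MPS tensors and
# truncate the bond with an SVD, keeping at most chi_max singular values.
import numpy as np

def apply_two_site_gate(A, B, gate, chi_max):
    """A: (chiL, d, chi), B: (chi, d, chiR), gate: (d*d, d*d)."""
    chiL, d, _ = A.shape
    chiR = B.shape[2]
    # Contract the two site tensors into a single two-site block...
    theta = np.tensordot(A, B, axes=(2, 0))          # (chiL, d, d, chiR)
    theta = theta.reshape(chiL, d * d, chiR)
    # ...apply the gate to the combined physical index...
    theta = np.tensordot(theta, gate, axes=(1, 1))   # (chiL, chiR, d*d)
    theta = theta.transpose(0, 2, 1).reshape(chiL * d, d * chiR)
    # ...and split back with an SVD, discarding small singular values.
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    k = min(chi_max, len(S))
    S = S[:k] / np.linalg.norm(S[:k])                # renormalize after truncation
    A_new = U[:, :k].reshape(chiL, d, k)
    B_new = (np.diag(S) @ Vh[:k]).reshape(k, d, chiR)
    return A_new, B_new
```

In a full TEBD sweep, `gate` would be the Trotterized exponential of a two-site term of the Bose-Hubbard Hamiltonian; the `chi_max` cutoff here corresponds to the "maximal allowed tensor dimensions" mentioned under Running time.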

Restrictions:
TEBDOL is limited to one-dimensional systems. The times accessible in the simulations are restricted by the growth of entanglement in the system.

Unusual features:
Tensor networks in TEBDOL support a global Abelian symmetry, i.e., the program conserves the total number of particles. Models with multiple particle species are supported as well. TEBDOL is implemented in Common Lisp and can run in parallel on a computer cluster.

Running time:
Running time depends on the lattice size, the number of particles, and the maximum allowed tensor dimensions. Simulations of simple models take a few seconds on a single CPU, while simulations of large, complex models can take up to a week on a cluster with hundreds of CPUs.