Computer Physics Communications Program Library — Programs in Physics & Physical Chemistry

Distribution file: aenz_v1_2.tar.gz (1363 Kbytes)

Manuscript Title: MCMC^{2} (version 1.1.1): A Monte Carlo code for multiply charged clusters

Authors: David A. Bonhommeau

Program title: MCMC^{2}

Catalogue identifier: AENZ_v1_2

Distribution format: tar.gz

Journal reference: Comput. Phys. Commun. 196 (2015) 614

Programming language: Fortran 90 with MPI extensions for parallelization.

Computer: x86 and IBM platforms.

Operating system:
- CentOS 5.6, Intel Xeon X5670 2.93 GHz, gfortran/ifort (version 13.1.0) + MPICH2;
- CentOS 5.3, Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2;
- Red Hat Enterprise 5.3, Intel Xeon X5650 2.67 GHz, gfortran + IntelMPI;
- IBM Power 6 4.7 GHz, xlf + PESS (IBM parallel library).

Has the code been vectorised or parallelized?: Yes, parallelized using MPI extensions. Number of CPUs used: up to 999.

RAM: 10-20 MB per CPU core. The physical memory needed for the simulation depends on the cluster size; the values indicated are typical of small or medium-sized clusters (N ≤ 300-400). The size of A^{n+}_{N} clusters (N = number of particles, n = number of charged particles, with n ≤ N) should not exceed 1.6 x 10^{4} (respectively 2.0 x 10^{4}) particles on servers with 2 GB (respectively 3 GB) of RAM per CPU core if n = 0 (neutral clusters) or n = N ("fully charged" clusters). For charged clusters composed of both neutral and charged particles (e.g., n = N/2), the maximum cluster size can drop to 1.4 x 10^{4} and 1.8 x 10^{4} particles on servers with 2 GB and 3 GB of RAM per CPU core, respectively (see the figure given in the Supplementary Material).
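The per-core limits quoted above are consistent with a quadratic memory scaling in the number of particles. As a hedged back-of-the-envelope sketch (an assumption for illustration, not taken from the program source), suppose the dominant cost is a single N x N double-precision pair array:

```python
# Illustrative estimate of O(N^2) memory scaling per replica.
# Assumption (not from the MCMC^2 source): the dominant memory cost
# is one N x N array of 8-byte reals.
def pair_matrix_gb(n_particles):
    """Memory (in GB) of an N x N array of 8-byte reals."""
    return n_particles ** 2 * 8 / 1e9

# N = 1.6e4 (neutral cluster) is close to the quoted 2 GB per-core limit:
print(round(pair_matrix_gb(16000), 2))  # 2.05
# N = 2.0e4 is close to the quoted 3 GB per-core limit:
print(round(pair_matrix_gb(20000), 2))  # 3.2
```

Under this assumption the two quoted size limits fall out directly from the available RAM per core.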

Supplementary material: A figure showing the amount of RAM required per replica as a function of the size of A^{n+}_{N} clusters can be downloaded.

Keywords: Monte Carlo simulations, Coarse-grained models, Charged clusters, Charged droplets, Electrospray ionisation, Parallel Tempering, Parallel Charging.

PACS: 05.10.Ln, 36.40.Wa, 36.40.Ei, 36.40.Qv.

Classification: 23.

Does the new version supersede the previous version?: Yes

Nature of problem: We provide a general parallel code to investigate structural and thermodynamic properties of multiply charged clusters.

Solution method: Parallel Monte Carlo methods are implemented for the exploration of the configuration space of multiply charged clusters. Two parallel Monte Carlo methods were found appropriate to achieve this goal: Parallel Tempering, where replicas of the same cluster at different temperatures are distributed among different CPUs, and Parallel Charging, where replicas (at the same temperature) with different particle charges or numbers of charged particles are distributed among different CPUs.
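The exchange step underlying Parallel Tempering can be sketched serially as follows. This is a minimal illustration of the standard replica-exchange acceptance rule, min(1, exp(Δβ ΔE)); in MCMC^{2} each replica runs on its own MPI rank, and the function and variable names here are illustrative assumptions, not the program's Fortran interface:

```python
import math

def swap_replicas(energies, betas, rng):
    """Attempt exchanges between neighbouring temperature slots.

    energies[i] is the current energy of replica i; order[k] records which
    replica currently sits at inverse temperature betas[k]. A swap between
    neighbours is accepted with probability min(1, exp(dbeta * dE))."""
    order = list(range(len(betas)))
    for k in range(len(betas) - 1):
        d_beta = betas[k] - betas[k + 1]
        d_e = energies[order[k]] - energies[order[k + 1]]
        # Accept unconditionally when the exponent is non-negative
        # (this also avoids overflow in math.exp for large arguments).
        if d_beta * d_e >= 0 or rng() < math.exp(d_beta * d_e):
            order[k], order[k + 1] = order[k + 1], order[k]
    return order

# A hot, low-energy replica swaps down into the cold slot deterministically:
print(swap_replicas([5.0, 1.0], [1.0, 0.5], rng=lambda: 0.5))  # [1, 0]
```

Parallel Charging follows the same exchange logic with the particle charge (or the number of charged particles) playing the role of the exchanged parameter instead of the temperature.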

Reasons for new version: This new version of the MCMC^{2} program for modeling the thermodynamic and structural properties of multiply charged clusters fixes some minor bugs present in earlier versions. A figure representing the required RAM per replica as a function of the cluster size (N ≤ 20000) is also provided as a benchmark.

Summary of revisions:

- Additional features of MCMC^{2} version 1.1.1: same as in the previous version.
- Modifications or corrections to MCMC^{2} version 1.1 [2,3]:
  (a) Several minor bugs were fixed in this version:
    i. A default value for the integer "irand", used to select the type of random number generator (keyword SEED, subkeyword METHOD), was missing. It is now set to 0.
    ii. The subkeyword "EVERY", used to define the frequency of statistics printing (keyword "STATISTICS"), was missing and has been implemented in the program. Before version 1.1.1, the choice entered in the setup file was simply ignored and the frequency was always set to its default value, namely one printing every 100 Monte Carlo sweeps.
  (b) Some unused integers were removed from subroutines in lib4-pol.f90 and lib4-dampol.f90, and some test runs were slightly modified. In particular, in test run 2, the particle and probe diameters used to evaluate the number of surface particles had been set to 0.8 and 1.2, respectively (see keyword "SURFACE"). Since the probe diameter should actually be smaller than the particle diameter [4], the two values were swapped.
  (c) The subroutines dLJ_nopol_hom (in lib4-nopol.f90), dLJ_pol_hom (in lib4-pol.f90), and dLJ_dampol_hom (in lib4-dampol.f90) were renamed dLJ_nopol, dLJ_pol, and dLJ_dampol, respectively, to avoid any ambiguity. The suffix "_hom", which stood for "homogeneity" and indicated that the Lennard-Jones interactions between particles were identical, was improper since homogeneity is commonly related to invariance by translation, and the properties of multiply charged clusters cannot, in the most general case, be considered invariant by translation. The renaming of these three subroutines obviously has no influence on the results, and some related comments have been modified accordingly.

Restrictions: The current version of the code uses Lennard-Jones interactions, as the main cohesive interaction between spherical particles, and electrostatic interactions (charge-charge, charge-induced dipole, induced dipole-induced dipole, polarisation). Furthermore, the Monte Carlo simulations can only be performed in the NVT ensemble, and the size of charged clusters should not exceed 2.0 x 10^{4} particles on CPU cores with less than 3 GB of RAM each. The latter restriction is not significantly crippling, since MCMC^{2} is mainly intended for the investigation of medium-sized cluster properties, given the difficulty of converging Monte Carlo simulations on large systems (N ≥ 10^{3}) [1].
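As a hedged sketch of the interaction terms named above, the following computes the Lennard-Jones (12-6) plus bare charge-charge energy of one particle pair in reduced units. The polarisation terms of MCMC^{2} are omitted, and the parameter values and function name are illustrative assumptions, not the program's model:

```python
# Illustrative pair energy in reduced units: LJ(12-6) + charge-charge
# Coulomb term. Induced-dipole (polarisation) terms are omitted.
def pair_energy(r, q1, q2, epsilon=1.0, sigma=1.0):
    """Energy of two particles with charges q1, q2 at separation r."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * epsilon * (sr6 * sr6 - sr6)   # 12-6 Lennard-Jones term
    coulomb = q1 * q2 / r                     # bare charge-charge term
    return lj + coulomb

# Two neutral particles at the LJ minimum r = 2^(1/6) sigma:
print(round(pair_energy(2 ** (1 / 6), 0.0, 0.0), 12))  # -1.0
```

For a charged pair the repulsive Coulomb term simply adds to the Lennard-Jones well, which is what drives the fission instabilities of multiply charged clusters.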

Unusual features: The Parallel Charging method, based on the same philosophy as Parallel Tempering but with the particle charges or the number of charged particles as parameters instead of the temperature, is an interesting new approach to exploring energy landscapes. Splitting of the simulations is allowed, and averages are updated accordingly.

Running time: The running time depends on the number of Monte Carlo steps, the cluster size, and the type of interactions selected (e.g., polarisation turned on or off, and the method used for calculating the induced dipoles). Typically, a complete simulation can last from a few tens of minutes or a few hours for small clusters (N ≤ 100, not including polarisation interactions), to one week for large clusters (N ≥ 1000, not including polarisation interactions), and several weeks for large clusters (N ≥ 1000) when polarisation interactions are included. A restart procedure has been implemented that enables the accumulation phase of a simulation to be split.

References:

[1] E. Pahl, F. Calvo, L. Koci, P. Schwerdtfeger, Accurate Melting Temperatures for Neon and Argon from Ab Initio Monte Carlo Simulations, Angew. Chem. Int. Ed. 47 (2008) 8207.

[2] D. A. Bonhommeau, M.-P. Gaigeot, MCMC^{2}: A Monte Carlo code for multiply-charged clusters, Comput. Phys. Commun. 184 (2013) 873-884.

[3] D. A. Bonhommeau, M. Lewerenz, M.-P. Gaigeot, MCMC^{2} (version 1.1): A Monte Carlo code for multiply-charged clusters, Comput. Phys. Commun. 185 (2014) 1188-1191.

[4] M. A. Miller, D. A. Bonhommeau, C. J. Heard, Y. Shin, R. Spezia, M.-P. Gaigeot, Structure and stability of charged clusters, J. Phys.: Condens. Matter 24 (2012) 284130.
