GROMACS Tutorial

Step Eight: Production MD

Upon completion of the two equilibration phases, the system is now well-equilibrated at the desired temperature and pressure. We are now ready to release the position restraints and run production MD for data collection. The process is just like we have seen before: we pass the checkpoint file (which in this case contains the preserved pressure coupling information) to grompp. We will run a 1-ns MD simulation using the parameters in md.mdp.
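The production settings differ from the NPT phase in only a few places. A minimal sketch of the key lines is below, assuming the 2-fs timestep used throughout this tutorial; consult the full md.mdp for the complete parameter set:

integrator   = md        ; leap-frog integrator
dt           = 0.002     ; 2 fs timestep
nsteps       = 500000    ; 2 fs * 500,000 steps = 1000 ps = 1 ns
continuation = yes       ; restarting after NPT equilibration
; no "define = -DPOSRES" line: the position restraints are released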

gmx grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md_0_1.tpr

gmx mdrun -deffnm md_0_1

Note the line that grompp printed when it processed the input:

Estimate for the relative computational load of the PME mesh part: 0.25

This estimate of the PME load dictates how many processors should be dedicated to the PME calculation, and how many to the PP (particle-particle) calculations. Refer to the GROMACS 4 publication and the manual for details. For a cubic box, the optimal setup has a PME load of 0.25 (3:1 PP:PME, so we're in luck!); for a dodecahedral box, the optimal PME load is 0.33 (2:1 PP:PME). When executing mdrun, the program should automatically determine the best number of processors to assign to the PP and PME calculations, so make sure you request an appropriate number of processes for your run (the value of -np X) to get the best performance. For this system, I achieved roughly 14 ns/day on 24 CPUs (18 PP, 6 PME).
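If you prefer to control the split yourself rather than rely on mdrun's automatic assignment, the -npme option fixes the number of dedicated PME ranks. As a sketch, assuming a thread-MPI build and the 24-core, 3:1 split mentioned above:

gmx mdrun -deffnm md_0_1 -ntmpi 24 -npme 6

With an MPI-enabled build, the total rank count would instead be set by the MPI launcher (e.g., mpirun -np 24) rather than by -ntmpi.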

Running GROMACS on GPUs

As of version 4.6, GROMACS supports the use of GPU accelerators for running MD simulations. The nonbonded interactions are calculated on the GPU, while bonded and PME interactions are calculated on standard CPU hardware. When building GROMACS (see www.gromacs.org for installation instructions), GPU hardware is detected automatically, if present. The minimum requirements for GPU acceleration are the CUDA libraries and SDK, and a GPU with a compute capability of at least 2.0. NVIDIA publishes a list of its cards and their compute capabilities. To use a GPU, the only change to the .mdp file described above is to add the following line to enable the Verlet cutoff scheme (the old group scheme is not supported on GPUs):

cutoff-scheme = Verlet

Assuming you have one GPU available, the mdrun command to make use of it is as simple as:

gmx mdrun -deffnm md_0_1 -nb gpu
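If the machine has several GPUs and you want to pin the run to a particular device, mdrun's -gpu_id option selects it; device 0 below is assumed purely for illustration:

gmx mdrun -deffnm md_0_1 -nb gpu -gpu_id 0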

If you have more than one card available, or require customization of how the work is divided up via the hybrid parallelization scheme available in GROMACS, please consult the GROMACS manual and webpage. Such technical details are beyond the scope of this tutorial.

Site design and content copyright 2008-2015 by Justin Lemkul