
Running NAMD on Argo

The NAMD program (Nanoscale Molecular Dynamics, formerly "Not Another Molecular Dynamics program") is similar to AMBER and GROMACS in its capabilities. We currently have two versions installed: a multi-node distributed version and a GPU version. Because NAMD uses Charm++ instead of MPI to implement its multiprocessing, there are some important details that you need to be aware of if you want to use it on the ARGO cluster.

Single-Core NAMD Jobs

Running NAMD on a single core is relatively simple. The first step is to find and load the appropriate module.

Loading the NAMD Module

To see what NAMD modules are available, use the following command:

module avail namd
You can load a NAMD module with this command (for example):
module load namd/2.13/ibverbs/smp

Running NAMD on a Single Core

You can copy some example files for testing NAMD with the following command:

cp -r $NAMD_HOME/lib .
To test NAMD, change to this demonstration directory:

cd lib/namdcph/examples/ace
And then type this command to run NAMD:

namd2 ace.namd
You should see a lot of output, and it should finish with something like this:
TCL: namdcph) *************************************************
TCL: namdcph) move label : attempts accept. rate
TCL: namdcph) MOL:1:ACE : 5 0.00
TCL: namdcph) *************************************************
The last position output (seq=-2) takes 0.014 seconds, 274.883 MB of memory in use


WallClock: 77.182838 CPUTime: 76.197739 Memory: 274.882812 MB
Program finished after 77.186453 seconds.
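If you capture the run in a log file (for example, `namd2 ace.namd > ace.log`), the timing summary is easy to pull out afterwards. A minimal sketch, using a mocked log line in the format shown above (since running namd2 itself requires the cluster environment):

```shell
# Mock a log file containing the WallClock summary line shown above
printf 'WallClock: 77.182838 CPUTime: 76.197739 Memory: 274.882812 MB\n' > ace.log

# Extract the wall-clock seconds from the summary line
awk '/^WallClock:/ {print "wall seconds:", $2}' ace.log
# prints: wall seconds: 77.182838
```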

Submitting a Single-Core NAMD Job to Slurm

The following Slurm submission script can be used to run your job on a compute node. NAMD writes a number of output files to the current directory by default, so this script first does a "cd" to $SCRATCH so that those files will not be written to your /home/ directory. /home is read-only on the compute nodes, and attempting to write to it will cause errors.


#!/bin/bash
#SBATCH --job-name=NAMD_single
#SBATCH --partition=all-HiPri

## Deal with output and errors. Separate into 2 files (not the default).
## May help to put your result files in a directory: e.g. /scratch/%u/logs/...
## NOTE: %u=userID, %x=jobName, %N=nodeID, %j=jobID, %A=arrayID, %a=arrayTaskID
#SBATCH --output=/scratch/%u/%x-%N-%j.out # Output file
#SBATCH --error=/scratch/%u/%x-%N-%j.err # Error file
#SBATCH --mail-user=<GMUnetID> # Put your GMU email address here

## Load the relevant modules needed for the job
module load namd/2.13/ibverbs/smp

## Run your program or script
cd $SCRATCH/lib/namdcph/examples/ace
namd2 ace.namd
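The `%` patterns in the `--output` and `--error` lines expand per job. A small sketch of where a job's output file would land, using made-up values (jdoe, NAMD_single, node042, and 12345 are hypothetical stand-ins for what Slurm substitutes):

```shell
# Hypothetical stand-ins for Slurm's %u, %x, %N, and %j expansions
USER_ID=jdoe
JOB_NAME=NAMD_single
NODE_ID=node042
JOB_ID=12345

# Mirrors: #SBATCH --output=/scratch/%u/%x-%N-%j.out
OUT_FILE="/scratch/${USER_ID}/${JOB_NAME}-${NODE_ID}-${JOB_ID}.out"
echo "$OUT_FILE"
# prints: /scratch/jdoe/NAMD_single-node042-12345.out
```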

Distributed NAMD

One of the great advantages of using an HPC cluster is that significant reductions in computation time can be achieved by distributing tasks over multiple nodes and cores. NAMD has been designed specifically to make this possible.

Here we use the same version of NAMD that we used in the single-core example:

module load namd/2.13/ibverbs/smp
The "ibverbs" library plays a role similar to MPI: it passes information between nodes over the fast InfiniBand network used in the ARGO cluster. The "smp" build is used to divide tasks between multiple cores on a node, taking advantage of shared memory in the process.

Submitting a Distributed NAMD Job to Slurm

NAMD uses Charm++ for launching multi-process jobs, and so must be launched with the charmrun program rather than with mpirun or srun. This means that we'll have to do a little bit of translation in order to get it to work well with Slurm. The Slurm script below shows how to launch a distributed NAMD job. For more details on how to use NAMD with Charm++, see the NAMD release notes.



#!/bin/bash
#SBATCH --partition=all-HiPri
#SBATCH --job-name=<jobName>

#SBATCH --output=/scratch/%u/%x-%N-%j.out # Output file
#SBATCH --error=/scratch/%u/%x-%N-%j.err # Error file
#SBATCH --mail-user=<GMUnetID> # Put your GMU email address here

##SBATCH --time=<d-hh:mm:ss> # Uncomment and set these to improve job priority
##SBATCH --mem=<x>G

#SBATCH --ntasks-per-node=1 # We're using threads, so set this to 1
#SBATCH --nodes=<n> # The number of nodes you want to use
#SBATCH --ntasks=<n> # ntasks must == nodes
#SBATCH --cpus-per-task=<c> # Num cpus per node: between 2 and 16
                            # 1 cpu is reserved to inter-node communication

# Load needed modules
module load namd/2.13/ibverbs/smp

# Make sure we use ssh
export CONV_RSH=ssh

NODELIST=$SCRATCH/nodelist.$SLURM_JOBID
echo "group main" > $NODELIST
for n in $(scontrol show hostnames $SLURM_NODELIST); do
  echo "host $n ++cpus 16" >> $NODELIST # All ARGO nodes have 16 cpus or more
done

# Calculate total processes (P) and procs per node (PPN)
# (one cpu per node is reserved for inter-node communication)
PPN=$(( SLURM_CPUS_PER_TASK - 1 ))
P=$(( PPN * SLURM_NNODES ))

# NAMD wants to write to the current directory, so we need to cd to $SCRATCH
cd $SCRATCH/lib/namdcph/examples/ace

charmrun $NAMD_HOME/namd2 ++p $P ++ppn $PPN ++nodelist $NODELIST ace.namd
     # +setcpuaffinity # This is recommended, but it seems to give errors
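The nodelist and process-count arithmetic in the script above can be exercised off the cluster with stand-in values. A sketch, assuming 2 nodes with 16 cpus per task (node001/node002 are made-up hostnames, and mktemp stands in for a file under $SCRATCH):

```shell
# Stand-ins for what Slurm would provide on the cluster
SLURM_CPUS_PER_TASK=16
NNODES=2
HOSTNAMES="node001 node002"   # what 'scontrol show hostnames' would print

# Build a Charm++ nodelist file, as the script does
NODELIST=$(mktemp)
echo "group main" > "$NODELIST"
for n in $HOSTNAMES; do
  echo "host $n ++cpus 16" >> "$NODELIST"
done

# One cpu per node is reserved for inter-node communication
PPN=$(( SLURM_CPUS_PER_TASK - 1 ))
P=$(( PPN * NNODES ))
echo "P=$P PPN=$PPN"          # prints: P=30 PPN=15
cat "$NODELIST"
```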

The test job used here is very small, so there is almost no speedup at all from using more nodes and cores. With larger jobs, however, the speedup can be significant.

Running NAMD Jobs on a GPU

Tests with the GPU version have shown that better performance can be achieved with the distributed version. The NAMD Release Notes also warn of certain important caveats when using this version in the "CUDA GPU Acceleration" section. Because of this, making the GPU version available has become a lower priority. However, if this version is important for your work, please let us know and we will increase the priority.