Tutorials - NAMD GPU Jobs

Submitting and Running NAMD GPU Jobs

NAMD is a parallel molecular dynamics simulation program used to model large biomolecular systems on high performance computing clusters. The program is freely available for academic work. If you are interested in running NAMD simulations, you should also install a local copy of VMD on your own computer. VMD is a molecular viewing and editing program that is used to set up NAMD simulations and to analyze and visualize NAMD output.

Developers have recently enabled NAMD to take advantage of nVidia's CUDA platform, which offloads computation to the Graphics Processing Unit (GPU) and can speed up simulations by a factor of 8 to 16 over single-CPU jobs. VPAC's Tango cluster now has three nVidia GTX-280 cards attached to its main processing nodes, available to users for development and research purposes. Jobs can be run on the GPUs by setting the appropriate parameters in the PBS script.

It should be noted that the CUDA version of NAMD is still in development and does not have the full functionality of regular NAMD. The current CUDA-enabled version of NAMD is not meant for production quality molecular dynamics simulations!

VMD can be freely obtained here

Additional tutorials and information about NAMD are available here

Tutorials and information about CUDA are available here

Information about GPU computing is available here

Running a NAMD GPU Job

An example of a solvated HIV protease simulation is available for running on the GPU cluster. This particular job is quite short compared to a regular NAMD simulation and should be finished in about 20 minutes.
After logging on to the Tango cluster, copy the entire example directory to your home directory as follows:

cp -r /common/examples/NAMD_CUDA_hiv_example .

Change into this directory and launch the job using the "qsub" command followed by the name of the PBS script:

cd NAMD_CUDA_hiv_example
qsub pbs_script_cuda_example

Check the job is running with the "showq" command:

showq
The "showq" command shows all currently running jobs on the cluster. When the cluster is busy, it might be difficult to find your job amongst the others that are running. In order to display only your own jobs, type the following:

showq -u username

As the job runs, various output files are produced, but the main one you will be interested in is the trajectory file with the .dcd suffix.
After some time has elapsed, check the status of your job using:

showq -u username

The directory you ran the job from will now contain the various output files produced by NAMD.
The .dcd file is the main output, while the .xsc, .coor, and .vel files are used to restart the simulation at a later date. The .BAK files are backups, and the .txt file contains the text output from the simulation.
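A listing of the job directory should therefore look something like the following (illustrative only; the actual file prefix is set by the outputName parameter in the configuration file and will differ in your run):

ls
hiv.coor  hiv.coor.BAK  hiv.dcd  hiv.vel  hiv.vel.BAK  hiv.xsc  hiv.xsc.BAK  namd_output.txt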

This particular example only generates 10 trajectory frames. If you want to run a longer example, modify the configuration file to include a longer run time, e.g.

run 500000

and also modify the dcd write frequency so you don't generate too much data, e.g.

dcdfreq 5000

(This new configuration would generate 100 frames; with a typical 2 fs timestep, 500,000 steps represent 1 nanosecond of simulation.)
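For reference, the relevant configuration file entries for such a longer run might look like the following. This is a minimal sketch only; the timestep and outputName values shown are assumptions and should match whatever is already set in the example configuration file:

timestep     2.0       ;# integration timestep in femtoseconds
outputName   hiv_long  ;# prefix for output files (illustrative)
dcdfreq      5000      ;# write a trajectory frame every 5000 steps
run          500000    ;# 500,000 steps x 2 fs = 1 nanosecond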

Make sure to also set a longer walltime in the PBS script, e.g.

#PBS -l walltime=24:0:0


You have just run a short molecular dynamics simulation using the GPUs attached to the Tango cluster. The program VMD can be used to visualize the trajectory data of the molecule you have generated. See the "Visualizing NAMD results with VMD" section of the NAMD tutorial for further information.
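For example, the trajectory can be loaded into VMD from the command line by giving the structure file followed by the trajectory file (the file names here are illustrative; use the .psf file from the example directory and the .dcd file produced by your run):

vmd hiv.psf hiv.dcd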

Additional PBS Script Parameters for GPU-enabled Jobs

The PBS script contains various parameters that are used to run the simulation on the cluster (see the NAMD tutorial for further information). In the case of NAMD GPU jobs, several additional commands are required.
These are:

#PBS -q gpu

module load namd/2.7b1-openmpi-intel-cuda

The line “#PBS -q gpu” asks for the job to be submitted to the GPU nodes. CUDA jobs launched on the regular cluster nodes won't work! The line beginning with “module load” requests the specific CUDA-enabled version of NAMD. The regular version of NAMD won't gain any acceleration from running on the GPU-enabled nodes.
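Putting these together, a complete PBS script for a GPU job might look something like the following. This is a minimal sketch only; the job name, resource request, and the exact NAMD launch line are assumptions, so check the pbs_script_cuda_example file supplied with the example for the launch command actually used on Tango:

#!/bin/bash
# Job name (illustrative), GPU queue, resource request and a 24 hour walltime.
#PBS -N namd_gpu_example
#PBS -q gpu
#PBS -l nodes=1:ppn=1
#PBS -l walltime=24:0:0

# Load the CUDA-enabled build of NAMD.
module load namd/2.7b1-openmpi-intel-cuda

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR

# Launch NAMD on the configuration file (file names are illustrative).
# The +idlepoll option is recommended for CUDA builds of NAMD.
namd2 +idlepoll hiv_example.conf > namd_output.txt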
