
Hopper Quick Start Guide

About Hopper

Read more about Hopper on this page.

Getting an account

All GMU faculty, staff and students can get an account on the Argo and Hopper clusters. Account requests must be sponsored by a GMU faculty member.

New users can request an account by filling out the account request form.

If you already have an Argo account but are not yet able to submit jobs on Hopper, send us an email at orchelp@gmu.edu. Once you log in, you should see that the /home, /scratch and /projects directories are all mounted on Hopper just as they are on Argo.

To log in to Hopper, use

ssh -X userID@hopper.orc.gmu.edu

Transferring files

The process is similar to that for Argo transfers; see the Argo file transfer instructions for details.
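For example, a single file or a whole directory can be copied from your local machine to your Hopper home directory with scp or rsync (userID and the file names below are placeholders):

# copy one file to your home directory on Hopper
scp results.csv userID@hopper.orc.gmu.edu:~/

# copy a directory, preserving timestamps and showing progress
rsync -avP my_project/ userID@hopper.orc.gmu.edu:~/my_project/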

Partitions on Hopper

The partition/queue structure on Hopper is different from that on Argo:

Partition     Time Limit (Days-Hours:Min)   Description
debug         0-01:00                       intended for very short tests
interactive   0-12:00                       interactive jobs, e.g. Open OnDemand sessions
normal        3-00:00                       default queue
contrib       6-00:00                       open to all users, but jobs are subject to preemption
gpuq          1-00:00                       GPU node access
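To confirm the current partition names and time limits directly from SLURM, you can run, for example:

sinfo --summarize

# show the full configuration of a specific partition, including its time limit
scontrol show partition normal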

Equivalence between Argo and Hopper partitions

The table below compares the Argo partitions to the ones on Hopper. It's important to note that contributor partitions on Argo do not necessarily carry over to Hopper.

Argo                                                                       Hopper
gpuq                                                                       gpuq
all-LoPri, all-HiPri, bigmem-HiPri, bigmem-LoPri, all-long, bigmem-long   normal
CDS_q, COS_q, CS_q, EMH_q                                                  contrib
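In practice, moving a job script from Argo to Hopper usually means changing only the partition directive, for example:

# Argo
#SBATCH --partition=all-HiPri

# Hopper
#SBATCH --partition=normal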

More details on the main differences between Argo and Hopper can be found on the page Argo vs. Hopper.

Setting up your Environment

Hopper uses a hierarchical module scheme (LMOD) that displays only modules built with the particular compiler and MPI library you have loaded at a given time to avoid incompatibilities.
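As a sketch of how the hierarchy behaves (the compiler and MPI module names below, gnu10 and openmpi, are illustrative and may differ on Hopper):

module avail          # lists only modules compatible with the currently loaded compiler/MPI
module load gnu10     # load a compiler module (illustrative name)
module load openmpi   # load an MPI library built with that compiler
module avail          # the listing now also includes packages built against gnu10 + openmpi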

A detailed explainer on navigating LMOD modules on Hopper can be found on this page.

For a quick start, the best way to search for packages is with the

module spider <package_name> 

command. That will report all available packages matching <package_name>. To see the available versions of a package on Hopper, use (ml is an LMOD shorthand for module):

ml spider <package>

To find detailed information about a particular package, you must specify the version if more than one version is available:

module spider <package>/<version>

For example, to see the available versions of R on Hopper:

$ module spider r

------------------------------------------------------------------------------------------------------------------------------------------------------------------
  r:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Versions:
        r/3.6.3
        r/4.0.3-hx
        r/4.0.3-ta

You can then load the relevant version with

module load <package>/<version>
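For example, to load one of the R versions listed above and confirm what is loaded:

module load r/4.0.3-ta

module list     # show the modules currently loaded in your environment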

CPU Jobs on Hopper

Sample SLURM job script

Jobs are submitted to the cluster, and resources are managed, through SLURM. Below is a sample script for submitting to the default 'normal' partition:

#!/bin/bash
#SBATCH --partition=normal          # submit to the normal (default) partition
#SBATCH --job-name=r-test           # name the job
#SBATCH --output=r-test-%j.out      # write stdout to this file (%j expands to the job ID)
#SBATCH --error=r-test-%j.err       # write stderr to this file
#SBATCH --time=0-02:00:00           # run for a maximum of 2 hrs, 00 mins, 00 secs
#SBATCH --nodes=1                   # request 1 node
#SBATCH --cpus-per-task=48          # request 48 cores on that node
#SBATCH --mem-per-cpu=2GB           # request 2 GB of RAM per core

# load modules
module load r   # loads the default R version

Rscript r-hello.R
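Assuming the script above is saved as r-test.slurm (the file name is arbitrary), submit and monitor it with:

sbatch r-test.slurm     # submit the job; SLURM prints the job ID
squeue -u $USER         # list your queued and running jobs
scancel <jobID>         # cancel a job if necessary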

GPU Jobs on Hopper

Sample SLURM job script

Sample script for submitting to the gpuq partition:

#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --qos=gpu                       # quality-of-service for GPU jobs
#SBATCH --partition=gpuq                # submit to the GPU partition
#SBATCH --gres=gpu:A100.40gb:1          # request one A100 40GB GPU

#SBATCH --output=gpu_job-%j.out         # write stdout to this file (%j expands to the job ID)
#SBATCH --error=gpu_job-%j.err          # write stderr to this file

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

#SBATCH --mem=10G                       # memory per node
#SBATCH --time=1-23:59:00               # requested wall time

# load the modules your application needs
module load <package>

# execute
python script.py
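As with CPU jobs, the script is submitted with sbatch gpu_job.slurm (file name is arbitrary). Inside the job, you can verify that a GPU was actually allocated, for example by adding to the script:

nvidia-smi                        # list the GPU(s) visible to this job
echo $CUDA_VISIBLE_DEVICES        # SLURM typically sets this to the allocated device index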

Refer to the DGX User Guide for detailed instructions and examples on running GPU jobs and containers on the Hopper GPU nodes.

Interactive Computing on Hopper

Interactive session on a node

To work directly on a compute node, you can request an interactive session using the salloc <slurm parameters> command.
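For example, a one-hour interactive session with a single task in the interactive partition might look like this (adjust the resources to your needs):

salloc --partition=interactive --ntasks=1 --mem=4GB --time=0-01:00:00

Depending on the SLURM configuration, the allocation either drops you straight into a shell on the compute node or you start one with srun --pty /bin/bash; type exit to release the allocation.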

Interactive apps using Open OnDemand

The Open OnDemand server enables launching interactive apps such as RStudio, JupyterLab and MATLAB through a web interface. These interactive sessions can be used for up to 12 hours.

To access the web server, login to

https://ondemand.orc.gmu.edu

using your GMU credentials. Currently, you must have the VPN active to access the site; we hope to relax this restriction once Shibboleth authentication is enabled.

More details on using Open OnDemand to work on Hopper can be found here.