
Argo Frequently Asked Questions

What is the Argo Cluster?

The Argo Cluster is a High Performance compute cluster operated by the Office of Research Computing. It is located in the Aquia Data Center on the Fairfax Campus.

How many nodes are in the cluster?

The cluster currently comprises 77 nodes (75 compute nodes and 2 head nodes) with a total of 1,496 compute cores and over 8 TB of RAM. Here is a summary of the compute nodes:

Nodes | CPU Cores | RAM | Hardware Arch | Total Nodes
1-33, 36-39, 58-68 | 16 | 64 GB | Intel SSE (Sandy Bridge) | 48
34, 35 | 64 | 512 GB | AMD Opteron | 2
41-45 | 20 | 96 GB | Intel Haswell | 5
46-49, 51-54, 57 | 24 | 128 GB | Intel Broadwell | 9
69, 70 | 24 | 512 GB | Intel Broadwell | 2
71-75 | 28 | 128 GB | Intel Skylake | 5

GPU Nodes:

Nodes | CPU Cores | GPU Info | RAM | Hardware Arch | Total Nodes
40 | 24 | 4x K80 | 128 GB | Intel Haswell | 1
50 | 24 | 2x K80 | 128 GB | Intel Broadwell | 1
55 | 24 | 2x K80 | 512 GB | Intel Broadwell | 1
56 | 24 | 2x K80 | 256 GB | Intel Broadwell | 1

How do I access the cluster?

You SSH into a head node using the hostname “argo.orc.gmu.edu”. The Argo cluster has two head nodes, argo-1 and argo-2, and users are logged into one of them in a round-robin manner to balance the load. Use your GMU NetID and password to log into the cluster.

$ ssh <GMU-netID>@argo.orc.gmu.edu

How does one get an account on the Argo cluster?

Faculty can get an account on the Argo cluster. Students must be sponsored by a GMU faculty member.

To request an account, please fill out the account form.

How do I get help?

If you cannot find the answer you are looking for here in the FAQ, or elsewhere in the Argo Wiki, you can email the support staff at orchelp@gmu.edu. This address feeds a ticketing system that gives the entire staff access to your request and allows us to track open requests to ensure they are handled in a timely manner.

Please refrain from emailing staff members directly. They may not see your message immediately, and when they ultimately do, they will simply forward it to that address anyway.

Is Python installed on the cluster?

Yes, Python is installed. To find various available versions, use the command:

$ module avail python

Use the following command to load Python for your use:

$ module load python/<version>

The Anaconda distribution of Python is also available. Check with the command:

$ module avail anaconda
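
A typical session might then look like the following sketch (the version number and file name are only placeholders; use a version reported by module avail):

$ module load python/3.7.4    # placeholder version: pick one from "module avail python"
$ python --version            # confirm which interpreter is now on your PATH
$ python my_script.py         # run your own script (example file name)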

Is R installed on the cluster?

Yes, R is installed on the Argo cluster. To find the available versions, use the following command:

$ module avail R
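
As a quick sketch (the version string and file name are placeholders), you would typically load one of the listed versions and run a script non-interactively with Rscript:

$ module load R/3.6.1        # placeholder version
$ Rscript my_analysis.R      # run an R script from the command line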

More information can be found here: How to run R on Argo

Is Matlab installed on the cluster?

Yes, MATLAB is installed. Use the following command to find available versions:

$ module avail matlab

To run MATLAB jobs on the compute nodes, you will need to compile your code. See How to Run MATLAB on Argo for more details.
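
As a rough sketch, assuming the MATLAB compiler (mcc) is provided by the matlab module, compiling a script into a standalone executable looks like this (the version and file names are placeholders):

$ module load matlab/R2018a   # placeholder version
$ mcc -m my_analysis.m        # produces a standalone executable plus a run_my_analysis.sh wrapper script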

What are modules?

The Argo cluster uses a system called Environment Modules to manage applications. Modules make sure that your environment variables are set up for the software you want to use. The main commands are:

Command | Description
module avail | Shows all the available modules
module avail <prefix> | Shows the available modules with the given prefix
module load <module>, module add <module> | Loads the given module into your environment
module list | Shows the modules that you currently have loaded
module unload <module>, module rm <module> | Removes the given module from your environment
module purge | Removes all modules from your environment
module show <module>, module display <module> | Gives a description of the module and shows what it will do to your environment
module | Gives a list of all the module subcommands (i.e. avail, load, list, purge, etc.)

See Environment Modules for more details.
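
For example, a short session that inspects, loads, and then clears a module (the module name and version are placeholders):

$ module avail python         # list the python modules that are installed
$ module show python/3.7.4    # see what loading it would change in your environment
$ module load python/3.7.4    # load it
$ module list                 # confirm that it is loaded
$ module purge                # remove all loaded modules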

Can I run jobs on the head node?

You can use the head node to develop, compile, and test a sample of your job before submitting it to the queue. Users may not run computationally intensive jobs on the head nodes; such jobs will be killed without notice.

All jobs must be submitted from the head node via the Slurm scheduler, which schedules them to run on the compute nodes.

Can I log into individual nodes to submit jobs?

Users should not log into individual nodes to run jobs. Jobs must be submitted to the Slurm scheduler on the head node. Compute-intensive jobs running on nodes outside scheduler control (i.e. started directly on the nodes) will be killed without notice.

Users may log into nodes on which jobs they previously submitted through the scheduler are currently running. This ability to SSH into individual nodes is only for checking on the job(s) running on that node. Please note that if users use this access to start new jobs on nodes without going through the scheduler, their ability to SSH into nodes will be removed.
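
To see which node a running job was assigned to before connecting, ask the scheduler first; for example:

$ squeue -u $USER     # your running jobs, with the assigned node(s) in the NODELIST column
$ ssh <node-name>     # connect only to a node on which one of your jobs is currently running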

Do you have a quota for each user?

The amount of file space used in your /home directory should not exceed 50 GB. You can check your current usage with the following command:

$ du -sh $HOME

PhD students or their advisors can request additional space on the /projects filesystem. Usage here should not exceed 1 TB per student.

A /scratch/$USER directory is available to each user for temporary storage, such as job results. We will perform occasional sweeps of this filesystem, removing any files that are older than 120 days.
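
The same kind of check works for your scratch space; for example:

$ du -sh /scratch/$USER                      # current scratch usage
$ find /scratch/$USER -type f -mtime +120    # files old enough to be removed by the next sweep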

What are the partition (queue) names?

Partition Name | Nodes in Partition | Restricted Access
all-HiPri* | 1-39, 41-49, 51-54, 57-75 | no
all-LoPri | 1-39, 41-49, 51-54, 57-75 | no
bigmem-HiPri | 34, 35, 69, 70 | no
bigmem-LoPri | 34, 35, 69, 70 | no
gpuq | 40, 50 | no
COS_q | 28-35 | yes
CS_q | 7-24, 56 | yes
CDS_q | 46-49, 51 | yes
STATS_q | 36, 37 | yes
HH_q | 25-27, 55 | yes
GA_q | 40 | yes
ES_q | 57 | yes

*all-HiPri is the default partition (queue).

all-HiPri and bigmem-HiPri both have a run time limit of 12 hours; jobs exceeding the limit will be killed. all-LoPri and bigmem-LoPri both have a 5-day run time limit. The bigmem-HiPri and bigmem-LoPri partitions are intended for jobs that require a large amount of memory. Access to the queues marked as “restricted access” is limited to members of research groups and departments that have funded nodes in the cluster.
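
For example, to send a longer job to all-LoPri, add the corresponding Slurm directives to your batch script (the time value shown matches that partition's 5-day limit):

#SBATCH --partition=all-LoPri    # request the all-LoPri partition
#SBATCH --time=5-00:00:00        # wall-clock limit (days-hours:minutes:seconds)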

How do I submit jobs?

Jobs are submitted through Slurm, a workload manager for Linux that handles job submission, deletion, and monitoring.

The command for submitting a batch job is:

$ sbatch <job_script>
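
A minimal batch script might look like the following sketch (the file name, job name, module version, and resource values are only examples; adjust them for your work):

#!/bin/bash
#SBATCH --job-name=my_job        # name shown in the queue
#SBATCH --partition=all-HiPri    # default partition, 12-hour limit
#SBATCH --ntasks=1               # number of tasks
#SBATCH --mem=4G                 # memory requested
#SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)

module load python/3.7.4         # placeholder version
python my_script.py              # the work you want to run

Save it as, say, job.slurm, then submit it and check its status:

$ sbatch job.slurm
$ squeue -u $USER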