About Hopper

The Hopper Cluster is a batch computing resource available to all faculty and their students at George Mason University.

Hardware

Node Type        # Nodes  CPU Architecture    Cores/Node  Memory/Node  GPUs
Login Nodes      2        Intel               48          384 GB       No GPU
Login Nodes      2        AMD                 64          256 GB       1 Nvidia T4
Compute Nodes    74       Intel (3552 cores)  48          192 GB       No GPU
Compute Nodes    48       AMD (3072 cores)    64          256 GB       No GPU
Compute Nodes    20       AMD (1280 cores)    64          512 GB       No GPU
Compute Nodes    12       AMD (768 cores)     64          1024 GB      No GPU
Compute Nodes    8        AMD (1024 cores)    128         2048 GB      No GPU
Compute Nodes    2        AMD (256 cores)     128         4096 GB      No GPU
GPU Nodes        31       AMD (1984 cores)    64          512 GB       124 Nvidia A100 (80 GB)
GPU Nodes (DGX)  2        AMD (256 cores)     128         1024 GB      16 Nvidia A100 (40 GB)

Storage

  • /home - each user's home directory is subject to a 60 GB quota
  • /projects - is shared among members of a group or project
  • /scratch - a 1.5 PB (1500 TB) VAST flash-based high-performance filesystem. Files in /scratch are subject to a 90-day purge policy
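
The quota and purge limits above can be checked with standard Linux tools from a login node. A minimal sketch, assuming per-user scratch directories live under /scratch/$USER (an assumption, not stated above):

```bash
# Show how much space your home directory uses against the 60 GB quota
du -sh $HOME

# Show overall capacity and usage of the shared filesystems
df -h /home /projects /scratch

# List scratch files older than 60 days, i.e. approaching the 90-day purge
find /scratch/$USER -type f -mtime +60
```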

Users with large data transfers can use Globus to move data to and from /home, /projects, and /scratch.
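
For command-line transfers, the Globus CLI can drive the same endpoints as the web interface. A minimal sketch; the endpoint UUIDs and paths below are placeholders, not Hopper's actual collection IDs:

```bash
# Authenticate the CLI against your Globus account
globus login

# Placeholder endpoint UUIDs - look them up in the Globus web app
# or with 'globus endpoint search <name>'
SRC=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
DST=11111111-2222-3333-4444-555555555555

# Asynchronously copy a directory into scratch (paths are illustrative)
globus transfer --recursive "$SRC:/data/mydataset" "$DST:/scratch/$USER/mydataset" \
    --label "dataset to Hopper scratch"
```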

Networking

  • 100 Gbps internal Mellanox HDR100 InfiniBand interconnect
  • 25 Gbps internal Ethernet network
  • 20 Gbps public connection via the login nodes

Software

Hopper is built using standard OpenHPC 2.x tools, namely

  • Warewulf for provisioning custom CentOS 8 images to nodes
  • Slurm for job scheduling and resource management
  • OpenHPC repositories for essential HPC software
  • Lmod modules for software provisioning to users

Applications built with this stack are made available to users through Lmod environment modules.
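
The typical workflow is to load the software you need through Lmod and submit the work to Slurm as a batch job. The sketch below is illustrative only; the partition and module names are placeholders rather than Hopper-specific values:

```bash
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=normal        # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00           # walltime limit (HH:MM:SS)

# Load software through Lmod (module names are illustrative)
module load gcc
module load python

# Run the actual work
srun python my_script.py
```

Submit the script with sbatch, monitor it with squeue -u $USER, and browse available modules with module avail or module spider <name>.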

Spack

Much of the software on Hopper is built with Spack on top of the base tools provided by OpenHPC.
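
Spack-built packages are normally surfaced to users through the same Lmod module tree. If the spack command itself is available on the login nodes (an assumption, not stated above), the installation can also be queried directly; the package names below are examples:

```bash
# List packages Spack has installed
spack find

# Show versions, hashes, and variants for one package
spack find -lv hdf5

# The same software is typically reachable as a module
module avail hdf5
```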

Containers

Users are encouraged to use Singularity containers to make applications more portable and easier to run. Docker containers must be converted to Singularity images before they can be run on Hopper.
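
Singularity can pull an image from a Docker registry and convert it to its own SIF format in one step. A short sketch, using a public Docker Hub image as a stand-in for whatever container you actually need:

```bash
# Pull a Docker image and convert it to a Singularity image file (SIF)
singularity pull python_3.11.sif docker://python:3.11

# Run a single command inside the container
singularity exec python_3.11.sif python3 --version

# Or open an interactive shell inside it
singularity shell python_3.11.sif
```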

See Also