
About HOPPER

[Image: Some of the HOPPER Cluster.]

The Hopper Cluster is a batch computing resource available to all faculty and their students at George Mason University.

Key Statistics: 12,352 cores, 84 TB RAM, 40 NVIDIA A100 GPUs.


Hardware

Login Nodes

hopper1-2
  • 2x Dell PowerEdge R640
  • 1x NVIDIA Tesla T4 GPU
  • 2x Intel Xeon Gold 6240R CPUs @ 2.40 GHz
  • 48 cores per node
  • 384 GB DDR4 RAM
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]

hop-amd-1-2
  • 2x Dell PowerEdge R6525
  • 2x AMD Epyc Milan 7543 CPUs @ 2.8 GHz
  • 64 cores per node
  • 256 GB DDR4 RAM
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]

Compute Nodes

hop001-hop074
  • 74x Dell PowerEdge R640
  • 2x Intel Xeon Gold 6240R CPUs @ 2.40 GHz
  • 48 cores per node
  • 192 GB DDR4 RAM
  • 960 GB local SSD storage
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]

amd001-048
  • 48x Dell PowerEdge C6525
  • 2x AMD Epyc Milan 7543 CPUs @ 2.8 GHz
  • 64 cores per node
  • 256 GB DDR4 RAM
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]

amd049-068
  • 20x Dell PowerEdge C6525
  • 2x AMD Epyc Milan 7543 CPUs @ 2.8 GHz
  • 64 cores per node
  • 512 GB DDR4 RAM
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]

amd069-080
  • 12x Dell PowerEdge C6525
  • 2x AMD Epyc Milan 7543 CPUs @ 2.8 GHz
  • 64 cores per node
  • 1 TB DDR4 RAM
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]

amd081-088
  • 8x Dell PowerEdge R6525
  • 2x AMD Epyc Milan 7763 CPUs @ 2.45 GHz
  • 128 cores per node
  • 2 TB DDR4 RAM
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]

amd089-090
  • 2x Dell PowerEdge R6525
  • 2x AMD Epyc Milan 7763 CPUs @ 2.45 GHz
  • 128 cores per node
  • 4 TB DDR4 RAM
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]

GPU Nodes

dgx001-002
  • 2x NVIDIA DGX-A100
  • 8x NVIDIA A100-SXM4-40GB GPUs
  • 2x AMD Epyc Rome 7742 CPUs @ 2.60 GHz
  • 128 cores per node
  • 1 TB DDR4 RAM
  • RHEL8 [release 8.5 (Ootpa)]

gpu001-024
  • 24x Dell XE8545
  • 4x NVIDIA A100-SXM4-80GB GPUs
  • 2x AMD Epyc Milan 7543 CPUs @ 2.80 GHz
  • 64 cores per node
  • 512 GB DDR4 RAM
  • RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]


Storage

  • /home - subject to a 60 GB per-user quota
  • /projects - shared among members of a group or project
  • /scratch - a 1.5 PB (1,500 TB) VAST flash-based high-performance filesystem; scratch files are subject to a 90-day purge policy (see the example below)
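
For reference, files that have sat unused in /scratch for 90 days are candidates for removal. A minimal sketch for spotting them, assuming per-user directories under /scratch/$USER and that the purge is based on access time (switch -atime to -mtime if the policy uses modification time):

    # List files under your scratch directory that have not been accessed in 90+ days
    find /scratch/$USER -type f -atime +90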

Heavy users can use Globus to transfer data to and from the /home, /projects, and /scratch filesystems.
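
For scripted transfers, the Globus CLI can also be used, assuming it is installed where you run it. A rough sketch with placeholder endpoint UUIDs (look up the real source and destination endpoints in the Globus web app):

    # Placeholder endpoint UUIDs - replace with the actual source and destination endpoints
    SRC_EP="aaaaaaaa-1111-2222-3333-444444444444"
    DST_EP="bbbbbbbb-5555-6666-7777-888888888888"
    # Recursively copy a directory from the source endpoint into /scratch
    globus transfer --recursive "$SRC_EP:/data/input" "$DST_EP:/scratch/$USER/input"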

Networking

  • 100Gbps internal Mellanox HDR100 InfiniBand interconnect
  • 25Gbps internal Ethernet network
  • 20Gbps public connection via the login nodes

Software

Hopper is built using standard OpenHPC 2.x tools, namely:

  • Warewulf for provisioning custom Rocky Linux 8 images to nodes
  • SLURM for job scheduling and resource management (a sample batch script follows this list)
  • OpenHPC repositories for essential HPC software
  • Lmod modules for software provisioning to users
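
As an illustration of the batch workflow, a minimal SLURM job script might look like the following; the partition and module names are assumptions, so check sinfo and module avail for the actual values on Hopper:

    #!/bin/bash
    #SBATCH --job-name=hello              # job name shown in the queue
    #SBATCH --partition=normal            # partition name is an assumption; check sinfo
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --mem=8G
    #SBATCH --time=00:10:00               # walltime limit (HH:MM:SS)
    #SBATCH --output=hello-%j.out         # %j expands to the job ID

    module load gnu9                      # hypothetical module name; see module avail
    srun hostname                         # run one task per allocated core

Save it as hello.slurm, submit it with sbatch hello.slurm, and check its status with squeue -u $USER.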

Spack is used to build additional software on top of the packages provided by OpenHPC.

Singularity containers are provided as a complement to, or replacement for, native applications in many cases.

These applications are provisioned to users using Lmod modules.
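
For instance, Lmod-provisioned software is discovered and loaded with the standard module commands; the package names below are illustrative, so run module avail on Hopper to see what is actually installed:

    module avail                  # list software visible with the current compiler/MPI stack
    module spider python          # search across all modules, including hierarchical ones
    module load gnu9 openmpi4     # hypothetical module names: a compiler plus an MPI stack
    module list                   # show the modules currently loaded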

OpenHPC

  • Base OS: RHEL8 [Rocky Linux release 8.5 (Green Obsidian)]
  • Compilers: GCC 4.8.5
  • MPI libraries: OpenMPI, MPICH, MPICH2, MVAPICH2, Intel MPI (IMPI)
  • Software provisioning: Lmod
  • Scheduler: SLURM 20.05
  • Math/numerical libraries: OpenBLAS, Intel MKL, ATLAS, ScaLAPACK, Boost, GSL, FFTW, Hypre, PETSc, SuperLU, Trilinos
  • I/O libraries: HDF5 (pHDF5), NetCDF, ADIOS
  • Development tools: Autotools (autoconf, automake, libtool), Valgrind, numactl, hwloc
  • Debugging and profiling tools: gprof, TAU, LIKWID, Dimemas, REMORA
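
To show how these components fit together, a typical session loads a compiler/MPI pair, builds against it, and runs under SLURM; the module names and task count below are assumptions to adapt as needed:

    module load gnu9 openmpi4           # hypothetical compiler + MPI module names
    mpicc -O2 -o hello_mpi hello_mpi.c  # compile an MPI C program with the loaded toolchain
    srun -n 8 ./hello_mpi               # launch 8 MPI ranks through SLURM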

Spack

Spack is used to build much of the additional software on top of the toolchains provided by OpenHPC.
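
If the Spack instance itself is visible to users (an assumption; on many systems Spack-built packages simply appear as Lmod modules), installed packages can be inspected and loaded directly:

    spack find                 # list packages installed in the Spack instance
    spack find --deps fftw     # show an installed package together with its dependencies
    spack load fftw            # add a Spack-built package to the current environment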

Containers

Users are encouraged to use Singularity containers to make applications more portable and easier to run. Docker containers must be converted to Singularity images before they can be run on Hopper.
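
A sketch of that conversion; the image name is only an example, and the singularity module name is an assumption:

    module load singularity                       # module name is an assumption
    # Pull a Docker image and convert it to a Singularity image file (SIF)
    singularity pull docker://python:3.10-slim
    # Run a command inside the resulting container
    singularity exec python_3.10-slim.sif python3 --version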

See Also