About HOPPER

[Image: Some of the HOPPER Cluster.]

The HOPPER Cluster is a batch computing resource available to all faculty and their students at George Mason University.

Hardware

Compute

Login nodes

  • 2x Dell PowerEdge R640
    • 2x Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz
    • 48 cores per node
    • 384 GB DDR4 RAM
    • CentOS 8

Compute Nodes

  • named hop001-hop074
  • 74x Dell PowerEdge R640
    • 2x Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz
    • 48 cores per node
    • 192 GB DDR4 RAM
    • 960 GB local SSD storage
    • CentOS 8

GPU Nodes

  • named dgx-a100-01
  • 1x NVIDIA DGX-A100
    • 8x NVIDIA A100-SXM4-40GB GPUs
    • 2x AMD EPYC Rome 7742 CPUs @ 2.60 GHz
    • 128 cores per node
    • 1 TB DDR4 RAM
    • 14 TB local NVMe SSD storage
    • Ubuntu 20.04 LTS

Storage

  • /home - subject to a 60 GB per-user quota
  • /projects - shared among the members of a group or project
  • /scratch - a 1.5 PB (1500 TB) VAST flash-based high-performance filesystem; users' scratch files are subject to a 90-day purge policy
  • /tmp - ~800 GB of local SSD storage on each compute node, accessible while a user has an active job on that node (see the example job script below)

Heavy users can use Globus to transfer data to and from /home, /projects, and /scratch.
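
As a minimal sketch of how these filesystems work together in practice, the SLURM batch script below stages input through node-local /tmp and copies results back to /scratch before the job ends. The program name and the paths under /scratch/$USER are illustrative assumptions, not documented Hopper defaults.

    #!/bin/bash
    #SBATCH --job-name=stage-demo
    #SBATCH --ntasks=1
    #SBATCH --time=01:00:00

    # Stage input from /scratch to the fast node-local /tmp (illustrative paths)
    cp /scratch/$USER/input.dat /tmp/input-$SLURM_JOB_ID.dat

    # Run the computation against the local copy (my_program is a placeholder)
    ./my_program /tmp/input-$SLURM_JOB_ID.dat > /tmp/output-$SLURM_JOB_ID.dat

    # Copy results back to /scratch; /tmp is only reachable while the job is active
    mkdir -p /scratch/$USER/results
    cp /tmp/output-$SLURM_JOB_ID.dat /scratch/$USER/results/

Staging through /tmp avoids repeated small I/O over the network, but anything left there cannot be retrieved after the job finishes.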

Networking

  • 100 Gbps internal Mellanox HDR100 InfiniBand interconnect
  • 25 Gbps internal Ethernet network
  • 20 Gbps public connection via the login nodes

Software

Hopper is built using standard OpenHPC 2.x tools, namely:

  • Warewulf for provisioning custom CentOS 8 images to nodes
  • SLURM for job scheduling and resource management
  • OpenHPC repositories for essential HPC software
  • Lmod modules for software provisioning to users

Spack is used to build other software on top of those provided by OpenHPC.

Singularity containers are provided as a complement to, or a replacement for, native applications in many cases.

All of these applications are provisioned to users through Lmod modules.
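
As a quick sketch of that workflow, users typically discover and load software with standard Lmod commands; the module names below are illustrative, and module avail shows what is actually installed on Hopper:

    module avail               # list modules visible in the current hierarchy
    module spider hdf5         # search every module tree for a package
    module load gnu9 openmpi4  # load a compiler/MPI stack (names assumed)
    module list                # show currently loaded modules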

OpenHPC

  • Base OS: CentOS 8
  • Compilers: GNU 9.3, Intel 2020 update 2
  • MPI libraries: OpenMPI, MPICH, MPICH2, MVAPICH2, Intel MPI (IMPI)
  • Software provisioning: Lmod
  • Scheduler: SLURM 20.05
  • Math/numerical libraries: OpenBLAS, Intel MKL, ATLAS, ScaLAPACK, Boost, GSL, FFTW, Hypre, PETSc, SuperLU, Trilinos
  • I/O libraries: HDF5 (pHDF5), NetCDF, ADIOS
  • Development tools: Autotools (autoconf, automake, libtool), Valgrind, numactl, hwloc
  • Debugging and profiling tools: gprof, TAU, LIKWID, Dimemas, REMORA
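
As an illustrative sketch of using these components together, the session below compiles and runs an MPI program with the GNU compilers and OpenMPI under SLURM. The module names follow common OpenHPC conventions and are assumptions rather than verified Hopper names:

    module load gnu9 openmpi4        # assumed OpenHPC-style module names
    mpicc -O2 -o hello_mpi hello_mpi.c
    srun --ntasks=4 ./hello_mpi      # launch four MPI ranks through SLURM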

Spack

Spack is used to build much of the software stack on top of the tools provided by OpenHPC.
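
For reference, a typical Spack build-and-load cycle looks like the sketch below; the package name is illustrative, and on Hopper the Spack-built software is generally exposed to users through Lmod modules rather than built by users themselves:

    spack install fftw    # build FFTW and all of its dependencies from source
    spack find fftw       # confirm which spec was installed
    spack load fftw       # add the package to the current shell environment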

Containers

Users are encouraged to use Singularity containers to make applications more portable and easier to run. Docker containers must be converted to Singularity format before they can be run on Hopper.
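
As a minimal sketch, singularity pull can fetch a Docker image and convert it to Singularity's .sif format in one step; the image name here is just an example:

    singularity pull docker://python:3.9-slim               # writes python_3.9-slim.sif
    singularity exec python_3.9-slim.sif python3 --version  # run a command inside it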
