
In addition to Open OnDemand (OOD) on the HPC clusters, you can use MATLAB in the following modes:

  1. Interactive mode: There are two ways to use MATLAB interactively on the HPC clusters. One is through OOD, which provides its own GUI. The other is via X-forwarding (using ssh -X or ssh -Y) when you log in to the cluster from a local computer (e.g. your laptop). This option lets you run the MATLAB GUI just as you would on your laptop. An interactive session usually runs on a single CPU-core, also known as a serial job.

  2. Batch mode: In this mode, a job is submitted to be run on a compute node. A batch job consists of a MATLAB script and a Slurm script. You can run serial as well as parallel jobs in this mode.

Each of these modes is discussed below.
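For the X-forwarding option, the login command from a local terminal looks like the following sketch (the login-node address is a placeholder; substitute your cluster's hostname):

```
ssh -Y <YourNetID>@<cluster-login-node>
```

The -Y flag enables trusted X11 forwarding; an X server must be running on your local machine (e.g. XQuartz on macOS) for the MATLAB GUI to display.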

Interactive mode – from /scratch directory or a compute node

Interactive mode is the ‘usual’ mode of using MATLAB on a local computer (e.g. your laptop). Typically, it runs on a single CPU-core. On the HPC cluster nodes, interactive mode is intended for developing and/or debugging your code.

The login nodes are not intended to be used interactively for compute-intensive processes or I/O processes. They are mainly to be used as a location for your program scripts and Slurm scripts, which are needed to submit batch jobs. We recommend that interactive jobs be run from the /scratch directory or by allocating resources on a compute node.

For computation-intensive or I/O-intensive MATLAB jobs, you can run your jobs from the /scratch directory OR go to a specific compute node and run interactive MATLAB jobs from that node. Both are discussed below.

Running MATLAB jobs interactively in the /scratch directory

You need to ssh into the Hopper cluster, go to the /scratch directory, and load the matlab module. This will then allow you to interactively use MATLAB from the terminal prompt.

The steps required are as follows. From a terminal window on your local computer (e.g. your laptop):

$ ssh <YourNetID>
$ cd /scratch/<YourNetID>
$ mkdir mymatlab
$ cd mymatlab
$ module load matlab
$ matlab
MATLAB is selecting SOFTWARE OPENGL rendering.
                                            < M A T L A B (R) >
                              Copyright 1984-2022 The MathWorks, Inc.
                         R2022b Update 1, 64-bit (glnxa64)
                                         September 28, 2022

To get started, type `doc`.
For product information, visit

This brings you to the >> prompt and you can use MATLAB interactively in the /scratch directory.

(Note: You can also submit batch jobs from the /scratch directory, covered in a later section.)

Running a MATLAB job interactively from a compute node

You need to first allocate resources for the job using the salloc command from the command line of a login node. You do not need to specify a particular node; it is allocated automatically on the basis of the options provided in the salloc command, and the available resources at that time. In the example below, the node hop057 was assigned by the resource manager. The steps shown below can be used as a guide and modified to suit your needs.

$ ssh <YourNetID>
$ module list
$ module load matlab

$ salloc --partition=normal --nodes=1 --cpus-per-task=1 --mem=4GB --constraint=intel
salloc: Granted job allocation 78815
salloc: Waiting for resource configuration
salloc: Nodes hop057 are ready for job

[YourNetID@hop057 YourNetID]$ matlab
MATLAB is selecting SOFTWARE OPENGL rendering.

                                                < M A T L A B (R) >
                                      Copyright 1984-2022 The MathWorks, Inc.
                                 R2022b Update 1, 64-bit (glnxa64)
                                                 September 28, 2022

To get started, type doc.
For product information, visit

You can now use MATLAB interactively from the allocated compute node hop057.

Note: Mason ITS currently provides r2021a for individual users, which you may be using on your laptop. If you need or prefer a version other than the default r2022b on the cluster, issue the command module avail matlab before loading the module to see the versions available.

$ module avail matlab

----------------------- Independent -----------------------------------------
matlab/r2020b matlab/r2021a matlab/r2022b (D)

D: Default Module

You can then choose one of the above MATLAB versions for your jobs.

Submitting Batch Jobs to the Slurm Scheduler

In the interactive mode, you can develop your code and debug it. For compute-intensive jobs, you need to submit your MATLAB jobs to be run as a batch job via the Slurm scheduler.

A MATLAB batch job consists of a MATLAB script and a Slurm script:

  1. MATLAB script: This is your program file (e.g. myjob.m, with the .m extension) that you would use on your laptop, which performs the computations and I/O operations for your work – course project(s), research, etc.
  2. Slurm script: This specifies the resources needed on the cluster, sets the environment, and lists the commands to be run.

Example scripts are provided below for two common modes for MATLAB jobs on the cluster nodes – serial and parallel. In the example shown below, the MATLAB script is the same for both cases but with different settings for each mode.

Running a Serial MATLAB Job

The MATLAB script is shown below (job.m). This script can be used for serial as well as parallel jobs by appropriately specifying the parameters, as discussed below.

% Setting the number of workers is not supported when creating
% thread-based parallel pools. To set the number of workers, use
% the 'local' pool.

start = tic;
clear A
parfor i = 1:100000000
        A(i) = i;
end
pend = toc(start);
fprintf('time it takes is %12.9f secs\n',pend)


The Slurm script (job.slurm) below can be used for serial jobs:


#!/bin/bash
#SBATCH --job-name=parfor1                 # name for your job

#SBATCH  --partition=normal                # name of Slurm partition (queue)
#SBATCH  --qos=normal

#SBATCH --output=/scratch/%u/%x-%N-%j.out  # Output file
#SBATCH --error=/scratch/%u/%x-%N-%j.err   # Error file

#SBATCH  --time=0-01:00:00                 # run time limit (DD-HH:MM:SS)
#SBATCH  --nodes=1                         # node count
#SBATCH  --ntasks-per-node=1
#SBATCH  --cpus-per-task=1        #cpu-cores per task (>1 for multi-threaded)
#SBATCH  --mem-per-cpu=4GB                 # memory per cpu-core (2G default)

## Load the relevant modules needed for the job
module load matlab/r2022b

## Run your program or script
matlab -nodisplay -nosplash -nodesktop -r "job; exit"

In the last line, the -nodisplay -nosplash options suppress the GUI. The -nodesktop option speeds up starting the job. The option cpus-per-task=1 signifies that the job is a serial job. For a parallel job, this number is changed to greater than 1.
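On R2019a and newer releases, the -batch option is a simpler alternative: it implies the non-display flags, runs the named script non-interactively, and exits with a status of 0 on success or nonzero on error:

```
matlab -batch "job"
```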

To run the MATLAB script, submit the job to the scheduler with the following command:

$ sbatch job.slurm

After the job completes, the error and output files are written out. If the job runs to completion, the error file (.err) is empty, and the output file (.out) contains information that can be viewed with an editor. An example of an output file is shown below.


                            < M A T L A B (R) >
                  Copyright 1984-2022 The MathWorks, Inc.
             R2022b Update 1, 64-bit (glnxa64)
                             September 28, 2022

To get started, type doc.
For product information, visit

Starting parallel pool (parpool) using the 'threads' profile ...
Connected to the parallel pool (number of workers: 1).

pend =


time it takes is  1.314994000 secs
Elapsed time is 1.318487 seconds.

Running a Multi-threaded MATLAB Job with the Parallel Computing Toolbox

MATLAB has a Parallel Computing Toolbox (e.g., parfor) with which compute-intensive jobs can run on multiple workers; in addition, many built-in operations benefit from the implicit multi-threading of MATLAB's BLAS implementation. (A ‘thread’ in MATLAB terminology corresponds to a CPU-core of the Slurm scheduler.) One can use up to all the CPU-cores on a single node in this mode.

To determine the value of --cpus-per-task for optimum performance usually requires some trial-and-error. For example, the criterion for such a procedure could be the ‘Elapsed time’ shown in the output file provided above. For 2 threads, the elapsed time for the above example was about 0.95 seconds (averaged over a number of runs).
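For the implicit (BLAS) multi-threading, one way to experiment without resubmitting jobs is MATLAB's maxNumCompThreads function. The sketch below (matrix size and thread counts are arbitrary choices) times a matrix multiply at several thread counts:

```matlab
% Time a matrix multiply at several computational thread counts.
N = 2000;
A = rand(N); B = rand(N);
for nt = [1 2 4]
    maxNumCompThreads(nt);    % cap MATLAB's computational threads
    t = tic;
    C = A*B;
    fprintf('%d thread(s): %.3f s\n', nt, toc(t));
end
```

The thread count that stops producing a meaningful speedup is a reasonable value for --cpus-per-task.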

Setting the number of cores used by a MATLAB job

By default, MATLAB will try to use all available cores of a node for a job. While this may be acceptable on a laptop with, say, 4 CPU-cores, it is not recommended for an HPC cluster job: a cluster node has 48 (or more) cores, and a different number of cores may be available at different times, depending on node usage. Moreover, using all cores may not be an optimal use of computing resources for the job.

To override the default mode of MATLAB, the user can specify the number of cores in two ways – using Slurm options or in the MATLAB code itself.

In Slurm, the number of CPU-cores can be specified with the --cpus-per-task option, either interactively from the command line (salloc) or in the Slurm script (#SBATCH), as follows:

$ salloc --partition=normal --nodes=1 --cpus-per-task=8

#SBATCH --cpus-per-task=8        #cpu-cores per task (>1 for multi-threaded)
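Inside a job, Slurm exports the granted value as the environment variable SLURM_CPUS_PER_TASK. A sketch of reading it in a shell (Slurm) script, falling back to 1 when it is unset (e.g., outside a Slurm job):

```shell
# Number of cores granted by Slurm; default to 1 outside a Slurm job.
NCORES="${SLURM_CPUS_PER_TASK:-1}"
echo "Using ${NCORES} CPU-core(s)"
```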

In the MATLAB code, the number of CPU-cores (NumWorkers in MATLAB terminology) can be specified when creating the parallel pool, for example, with 8 threads:

poolobj = parpool('local', 8);
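To keep the MATLAB code in sync with the Slurm request, a common pattern (shown here as a sketch; SLURM_CPUS_PER_TASK is set by Slurm inside the job) is to size the pool from the environment:

```matlab
% Size the parallel pool from the Slurm allocation; fall back to
% a single worker when not running under Slurm (e.g., on a laptop).
nc = str2double(getenv('SLURM_CPUS_PER_TASK'));
if isnan(nc)
    nc = 1;
end
poolobj = parpool('local', nc);
```

With this pattern, changing --cpus-per-task in the Slurm script is enough; the MATLAB code does not need to be edited.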

If you use more than one thread, you need to make sure that your code can take advantage of all the CPU-cores that you assign to the job. Some of the ways of determining this are discussed below.

Determining the Effectiveness of Parallelizing Your Code

A parfor statement is a clear indication of a parallelized MATLAB code. However, there are cases when the parallelization is not obvious. For example, a code that uses linear algebra operations such as matrix multiplication will use the BLAS library which offers multithreaded routines.

There are two common ways to determine whether or not a MATLAB code can take advantage of parallelism without knowing anything about the code.

  1. Run the code using 1 CPU-core and then do a second run using, say, 4 CPU-cores. Look to see if there is a significant difference in the execution time of the two codes.
  2. Launch the job using, say, 4 CPU-cores then ssh to the compute node where the job is running and use top -u $USER to inspect the CPU usage.

To get the name of the compute node where your job is running use the following command:

$ squeue -u $USER

The rightmost column labeled "NODELIST(REASON)" gives the name of the node where your job is running. SSH to this node, for example:

$ ssh amd068

Once on the compute node, run top -u $USER. If your job is running in parallel you should see a process using much more than 100% in the %CPU column. For 4 CPU-cores this number would ideally be 400%.

Where to Store Your Files

You should run your jobs from the /scratch/ directory on the HPC clusters. The /scratch filesystem performs I/O operations much faster than the /home and /projects filesystems, and provides vast amounts of temporary storage.

Do not run jobs out of the /home or /projects directories. That is, you should never be writing the output of actively running jobs to these filesystems. This is because, unlike the /scratch directory, the disk space on these directories is limited to 50 GB per user, and the filesystems do not have the high-speed I/O capabilities of the /scratch filesystem. This means that running compute jobs or I/O-intensive jobs on these directories adversely affects the cluster performance and the work of other users who are also logged in at that time on the login nodes.

The /home and /projects directories should only be used for backing up the files that you produce on /scratch/ by using the following command. The -r option below is for recursively copying an entire folder; it should be omitted if you are copying individual files.

$ cp -r /scratch/<YourNetID>/myjob /home/<YourNetID>
Some Useful Links and Tips on Parallelizing Your MATLAB Code

The following link shows you how to start on your local multicore desktop and measure the time required to run a calculation, as a function of increasing numbers of workers.

Scale Up parfor-Loops to Cluster and Cloud

It enables you to measure the decrease in time required for the calculation if you add more workers. You can then decide whether it is useful to increase the number of workers in your parallel pool, and scale up to cluster and cloud computing.

Basic Guidelines for parfor-loops

The following link lists 3 basic guidelines when using parfor-loops to parallelize your code:

Convert for-Loops into parfor-Loops

The 3 guidelines can be summarized as follows:

  1. The body of the parfor-loop must be independent. One loop iteration cannot depend on a previous iteration, because the iterations are executed in parallel in a nondeterministic order.
  2. You cannot nest a parfor-loop inside another parfor-loop. See the next link for nested parfor-loops.
  3. parfor-loop variables must be consecutive increasing integers. Thus, for example, if you are running a non-integer parameter sweep, the code needs to be modified to transform the parameter into an integer variable for use in the parfor-loop.
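For instance (a sketch; the parameter values and loop body are placeholders), a sweep over non-integer values can be recast to loop over integer indices:

```matlab
% Recast a non-integer parameter sweep as a loop over integer indices.
params = 0.1:0.1:1.0;            % the non-integer values to sweep
results = zeros(size(params));
parfor k = 1:numel(params)       % k is a consecutive increasing integer
    p = params(k);
    results(k) = p^2;            % placeholder for the real computation
end
```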

Nested parfor-loops and for-loops

The following link provides numerous examples and useful tips on when to use parfor-loops to replace for-loops to effectively utilize computing resources.

Nested parfor-Loops and for-Loops

These include:

  • converting nested for-loops to parfor-loops. There are 5 examples, and the take-home lesson of the examples is labeled as Tip. The main point of these examples is: If you want to speed up your code, always run the outer loop in parallel, because you reduce parallel overhead.

  • parfor-Loop limitations: examples of valid and invalid program structures placed side-by-side for ease of comparison. If you want to convert a nested for-loop to a parfor-loop, you must ensure that your loop variables are properly classified. If your code does not adhere to the guidelines and restrictions labeled as Required, you get an error. MATLAB catches some of these errors at the time it reads the code. There are 6 examples of such errors on this page. These errors are labeled as Required (static).
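The ‘run the outer loop in parallel’ tip can be illustrated with a sketch (the array sizes and loop body are placeholders):

```matlab
% Parallelize only the outer loop; the inner loop remains a plain
% for-loop, so the parallel overhead is paid once per outer iteration.
M = 100; N = 200;
A = zeros(M, N);
parfor i = 1:M
    for j = 1:N
        A(i, j) = i + j;         % placeholder for the real computation
    end
end
```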