
Running MySQL

The following instructions provide details on how to run the containerized version of MySQL on the cluster.

Setting up to run

Two configuration files are required. Templates can be found in /containers/hopper/Containers/mysql:

cp /containers/hopper/Containers/mysql/my.cnf       ${HOME}/.my.cnf
cp /containers/hopper/Containers/mysql/mysqlrootpw  ${HOME}/.mysqlrootpw

Change the root password in both files (default is 'my-secret-pw').
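If the default string appears literally in both files, one way to substitute your own password is a single sed call (a minimal sketch; NEW_PW is a placeholder for the password you choose):

# Replace the default password in both configuration files
# (assumes the default string 'my-secret-pw' appears literally in each file).
NEW_PW='choose-a-strong-password'
sed -i "s/my-secret-pw/${NEW_PW}/" ${HOME}/.my.cnf ${HOME}/.mysqlrootpw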

Create a directory for the database. This directory houses the data and settings for the database and persists after the instance is stopped. You can copy it off the cluster for permanent storage, or delete it if you want to start a fresh MySQL instance.

mkdir -p ${PWD}/mysql/var/lib/mysql ${PWD}/mysql/run/mysqld
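If you later want to keep the data, one way to archive the directory for copying off the cluster (a sketch, assuming the instance has been stopped and the directory is no longer in use) is:

# Bundle the database directory into a dated archive for permanent storage.
tar -czf mysql-backup-$(date +%F).tar.gz -C ${PWD} mysql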

Running the instance

Load the Singularity module

module load singularity

Start the Singularity instance

singularity instance start --bind ${HOME} \
    --bind ${PWD}/mysql/var/lib/mysql/:/var/lib/mysql \
    --bind ${PWD}/mysql/run/mysqld:/run/mysqld \
    /containers/hopper/Containers/mysql/mysql.simg mysql
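You can verify that the instance was created:

# Lists the Singularity instances started by your user; 'mysql' should appear.
singularity instance list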

Run the instance

singularity run instance://mysql
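This starts the MySQL server process in the foreground. Inside a batch script you will usually want it to keep running while the rest of the job proceeds; a minimal sketch (the sleep length is an arbitrary assumption, meant only to give the server time to initialize):

# Start the server in the background and give it time to come up.
singularity run instance://mysql &
sleep 30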

If jobs running on multiple nodes need to write to this database instance, a user with remote access privileges must be created. A script is provided within the image, which can be run with the following command:

singularity exec instance://mysql create_remote_admin_user.sh

This will output a username (remote_usr) and a random password. Use these credentials from the remote nodes (more information in the following section).
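If you need the password later in the same job script, one option is to capture the script's output (a sketch; the exact output format is not shown here, so adjust any parsing to match it):

# Save the generated credentials so they can be reused later in the job script.
CREDS=$(singularity exec instance://mysql create_remote_admin_user.sh)
echo "${CREDS}"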

Do not forget to stop the instance when done:

singularity instance stop mysql

Accessing the database instance

Locally

To access the instance through the local socket (from the same node on which the instance is running):

module load mysql
mysql -S ${PWD}/mysql/run/mysqld/mysqld.sock
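For example, a quick non-interactive check that the server is reachable through the socket:

# Should print the list of databases if the server is up.
mysql -S ${PWD}/mysql/run/mysqld/mysqld.sock -e 'SHOW DATABASES;'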

Remotely

To access the instance remotely (from another node), you will need the IP address or hostname of the node running the container (NODE_IP), as well as the random password for the remote_usr account (see the previous section).

mysql -h ${NODE_IP} -u remote_usr -p
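For example, to create a database for job output non-interactively (jobdata is a placeholder name; -p prompts for the generated password):

# Create a database that worker nodes can write into.
mysql -h ${NODE_IP} -u remote_usr -p -e 'CREATE DATABASE IF NOT EXISTS jobdata;'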

Multi-node job

In a multi-node job in which all the nodes write data to the same MySQL instance, your script needs a reference to the node that is running that instance.

The Slurm variable SLURM_NODELIST can be parsed for this purpose:

FIRST_NODE=$(echo $SLURM_NODELIST | cut -d ',' -f 1)
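Note that SLURM_NODELIST may be in Slurm's compressed form (for example node[001-004]), which the simple cut approach above does not handle; scontrol can expand it. A sketch of how the first node might be used inside a job script:

# Expand the (possibly compressed) node list and take the first host.
FIRST_NODE=$(scontrol show hostnames "${SLURM_NODELIST}" | head -n 1)

# Worker tasks on the other nodes can then point at the instance, e.g.:
# mysql -h ${FIRST_NODE} -u remote_usr -p ...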