MPICH
MPI (Message Passing Interface) is a widely used parallel programming model that establishes a practical, portable, efficient, and flexible standard for passing messages between the ranks of a parallel application. Cray MPI is derived from MPICH, developed at Argonne National Laboratory, and implements the MPI-3.1 standard.
By default, PrgEnv-cray is loaded together with the Cray MPICH module, so MPI programs can be built without loading any additional modules.
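You can verify this with module list; PrgEnv-cray and the Cray MPICH module (typically named cray-mpich) should appear among the loaded modules:
module list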
MPICH Example
The example program below showcases the basics of MPI programming using functions declared in the mpi.h header file.
#include <stdio.h>
#include <mpi.h>
int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello from rank %d\n", rank);
    MPI_Finalize();
    return 0;
}
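The example above only prints the rank identifier and does not exchange any messages. As a minimal sketch of point-to-point message passing (assuming the program is launched with at least two ranks), rank 0 can send an integer to rank 1 with MPI_Send and MPI_Recv:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;                 /* message payload sent by rank 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}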
Introduction to Message Passing Interface (MPI): https://cpe.ext.hpe.com/docs/mpt/mpich/intro_mpi.html
MPICH CPU
The program can be compiled with the cc compiler wrapper provided by the PrgEnv-cray module:
cc mpi.c
You can check that the binary is dynamically linked against the Cray MPI library:
$ ldd a.out | grep mpi
libmpi_cray.so.12 => /opt/cray/pe/lib64/libmpi_cray.so.12 (0x00007f5613dc2000)
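The batch job examples below launch executables named mpi-cpu and mpi-gpu; if you follow that naming, the binary can be given an explicit name at compile time instead of the default a.out:
cc mpi.c -o mpi-cpu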
MPICH GPU
Before compiling for GPU, the craype-accel-nvidia80 module must be loaded to enable GPU offload. This module provides support for the NVIDIA Ampere architecture A100 GPUs available on the Komondor GPU partitions.
module load craype-accel-nvidia80
export CRAY_ACCEL_TARGET=nvidia80
cc mpi.c
You can check that the binary is dynamically linked against the Cray MPI library and the GTL library that provides GPU offload:
$ ldd a.out | grep mpi
libmpi_cray.so.12 => /opt/cray/pe/lib64/libmpi_cray.so.12 (0x00007f2b5e715000)
libmpi_gtl_cuda.so.0 => /opt/cray/pe/lib64/libmpi_gtl_cuda.so.0 (0x00007f2b5e4cf000)
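With the GTL library linked in and MPICH_GPU_SUPPORT_ENABLED=1 set at runtime (see the batch job example below), MPI calls can operate directly on GPU device memory. The following is a minimal sketch, assuming at least two ranks and that the CUDA runtime is available at build time (for example via a cudatoolkit module); rank 0 sends a value that lives in GPU memory straight to rank 1:
#include <stdio.h>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Allocate a one-element buffer in GPU device memory. */
    double *d_buf;
    cudaMalloc((void **)&d_buf, sizeof(double));

    if (rank == 0) {
        double value = 42.0;
        cudaMemcpy(d_buf, &value, sizeof(double), cudaMemcpyHostToDevice);
        /* With GPU-aware MPI the device pointer is passed directly to MPI_Send. */
        MPI_Send(d_buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        double value;
        cudaMemcpy(&value, d_buf, sizeof(double), cudaMemcpyDeviceToHost);
        printf("Rank 1 received %.1f directly from rank 0's GPU buffer\n", value);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}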
MPICH CPU Batch Job
The following batch job example runs 8 MPI tasks on each of 4 nodes, i.e. 32 MPI tasks in total.
#!/bin/bash
#SBATCH -A hpcteszt
#SBATCH --partition=cpu
#SBATCH --job-name=mpi-cpu
#SBATCH --output=mpi-cpu.out
#SBATCH --time=06:00:00
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
srun ./mpi-cpu
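Assuming the script is saved as, for example, mpi-cpu.sbatch, it can be submitted and its results inspected as follows:
sbatch mpi-cpu.sbatch      # submit the job
squeue -u $USER            # check its status in the queue
cat mpi-cpu.out            # inspect the output once the job has finished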
MPICH GPU Batch Job
For GPU-aware MPI, the MPICH_GPU_SUPPORT_ENABLED environment variable must be set to 1 at runtime. This enables the application to pass GPU device buffers directly to MPI calls.
#!/bin/bash
#SBATCH -A hpcteszt
#SBATCH --partition=gpu
#SBATCH --job-name=mpi-gpu
#SBATCH --output=mpi-gpu.out
#SBATCH --time=06:00:00
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:1
export MPICH_GPU_SUPPORT_ENABLED=1
srun ./mpi-gpu
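Note that if a GPU-aware program, such as the device-buffer sketch above, is run without MPICH_GPU_SUPPORT_ENABLED=1, MPI calls that receive device pointers may fail at runtime, so keep the export line before the srun command as shown.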