Intel MPI
Intel Compiler
CPE ships with built-in support for the Intel compiler in the programming environment.
The Intel compiler can be loaded as follows:
module swap PrgEnv-cray PrgEnv-intel
module load intel/compiler
The Intel oneAPI suite is installed in the Komondor module environment and provides the following compilers:
icx
icpx
ifx (beta, not installed)
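As a minimal sketch (the source file names hello.c and mpi_hello.c are only placeholders), code can be compiled either directly with the Intel compiler or through the Cray compiler wrapper, which under PrgEnv-intel invokes the Intel compiler and links Cray MPICH automatically:
module swap PrgEnv-cray PrgEnv-intel
module load intel/compiler
icx -O2 hello.c -o hello          # direct Intel compiler call, no MPI linked
cc -O2 mpi_hello.c -o mpi_hello   # Cray wrapper: Intel compiler plus cray-mpich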
More information: https://cpe.ext.hpe.com/docs/guides/CPE/user_guides/About_The_Intel_Compiler.html
Intel MPICH
The Cray programming environment supports MPICH ABI compliant applications, so programs built with Intel MPI 5.0 or later are also compatible with HPE Cray MPI. To use it, swap out or unload the regular cray-mpich module before loading the cray-mpich-abi module.
module swap PrgEnv-cray PrgEnv-gnu
module unload cray-mpich
module load cray-mpich-abi
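With the ABI module loaded, a binary that was linked against Intel MPI (here ./app_intelmpi, a placeholder name) picks up the Cray MPICH library at run time and can be launched with srun, for example:
srun --nodes=2 --ntasks-per-node=8 ./app_intelmpi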
Many older ISV applications (built with Intel MPI versions prior to Intel MPI 5.0) can also work with HPE Cray MPI. Some earlier Intel MPI versions are ABI compatible with the MPI library but use different library names and version numbers. These applications can be used by loading the cray-mpich-abi-pre-intel-5.0 module instead of the cray-mpich-abi module.
module swap PrgEnv-cray PrgEnv-gnu
module unload cray-mpich
module load cray-mpich-abi-pre-intel-5.0
More information: https://www.intel.com/content/www/us/en/developer/articles/technical/intel-mpi-itac-and-mps-in-a-cray-environment.html
Intel Hybrid MPI
Hybrid MPI/OpenMP applications can be compiled with the GNU compiler, for example:
module swap PrgEnv-cray PrgEnv-gnu
cc hybrid.c -fopenmp
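The hybrid binary is then started with one MPI rank per group of OpenMP threads. As a sketch (the node, task, and thread counts below are only illustrative):
export OMP_NUM_THREADS=4
srun --nodes=2 --ntasks-per-node=8 --cpus-per-task=4 ./a.out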
More information: https://www.intel.com/content/www/us/en/developer/articles/technical/beginning-hybrid-mpiopenmp-development.html
Note
Using the intel/mpi module instead of the Cray Programming Environment may result in reduced performance.
Intel mpirun
The Cray MPI library does not support the mpirun and mpiexec commands; srun is used instead. If mpirun is required, the Intel MPI environment can be loaded:
$ module load intel/mpi
Loading mpi version 2021.9.0
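To confirm that the Intel MPI launcher is the one active in the current shell, it can be checked, for example, with:
which mpirun
mpirun -V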
The parallel program can be started with the mpirun command as follows:
#!/bin/bash
#SBATCH -A hpcteszt
#SBATCH --job-name=mpirun
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=12:00:00
mpirun --report-pid ${TMPDIR}/mpirun.pid ./a.out
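Assuming the script above is saved as mpirun_job.sbatch (the file name is arbitrary) and the intel/mpi module is loaded in the submitting shell, the job is submitted in the usual way:
sbatch mpirun_job.sbatch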