
INTEL MPI

Intel's implementation of the Message Passing Interface (MPI v2.x).

Category:  MPI

 

INTEL MPI has multi-network support (TCP/IP, InfiniBand, Myrinet, etc.); by default the best available network is tried first.

Compiler wrappers are provided for the C, C++ and Fortran compilers.  A wrapper is a script that links in the required include and library files; a short compile example follows the list below.

  • INTEL compiler wrappers:  mpiicc, mpiicpc, mpiifort
  • GNU compiler wrappers:  mpicc, mpicxx, mpif77, mpif90
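
For example, a simple MPI program can be compiled with either set of wrappers. The sketch below assumes the appropriate compiler and intel-mpi modules are already loaded; hello.c is a placeholder source file:

mpiicc -o hello hello.c     # Intel compiler wrapper for C
mpicc  -o hello hello.c     # GNU compiler wrapper for C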

Usage Notes:

Show the software versions:  module avail intel-mpi

Loading the module sets the path and any necessary environment variables:  module load intel-mpi/version

Load the INTEL compiler module to use the INTEL compiler wrappers:  module load intel/version
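
Putting these notes together, a typical front-end session might look like the following sketch. The version strings are placeholders for whatever module avail reports, and hello.c is a placeholder source file:

module avail intel-mpi            # list the available Intel MPI versions
module load intel/version         # Intel compilers (needed by mpiicc, mpiifort, ...)
module load intel-mpi/version     # Intel MPI (sets paths and environment variables)
mpiicc -o hello hello.c           # compile with the Intel C wrapper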

 

Job Startup

Intel MPI is integrated with the SLURM resource manager and performs best when jobs are launched with the SLURM srun command, which is the task launcher of choice for Intel MPI jobs on the UB CCR cluster. Alternative launchers such as mpiexec.hydra and mpirun are not recommended.

 

Code samples are written in the bash shell.

Sample code for launching MPI tasks with srun:

 

 
# Point Intel MPI at SLURM's PMI library so srun can launch the MPI tasks
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
srun -n $SLURM_NPROCS ./a.out

The intel-mpi module automatically sets environment variables when it is loaded. Some of these settings are chosen based on the type of InfiniBand communications hardware present on the nodes, so it is a good idea to unload and then reload the intel-mpi module prior to launching the MPI application. This ensures that the correct module settings are in place; otherwise the settings might be inherited from modules previously loaded on the front-end. The following example illustrates the reload procedure:

 
module unload intel-mpi
module load intel-mpi
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
srun -n $SLURM_NPROCS ./a.out
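
For reference, these pieces can be combined into a complete batch script. This is only a sketch: the #SBATCH resource requests are assumptions to adjust for the actual job, and a.out is the executable from the examples above:

#!/bin/bash
#SBATCH --nodes=2                 # assumed node count - adjust as needed
#SBATCH --ntasks-per-node=8       # assumed tasks per node - adjust as needed
#SBATCH --time=01:00:00           # assumed wall time - adjust as needed

# Reload intel-mpi so its settings match the hardware on the compute node
module unload intel-mpi
module load intel-mpi

# Use SLURM's PMI library and launch the tasks with srun
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
srun -n $SLURM_NPROCS ./a.out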