
This page describes how to use Intel MPI 2017 with the OFI libfabric GNI provider. These instructions are intended for the NERSC Edison and Cori systems, but may be used at other Cray XC installations running SLURM and CLE 5.2up04 or newer. SLURM PMI must also be available on the system; Intel MPI does not work with the Cray PMI library.

This wiki assumes OFI libfabric 1.4 or higher has been built and installed on the system. See building OFI libfabric with GNI provider. OFI libfabric can be downloaded from the libfabric release page.
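
As a quick sanity check, you can confirm that the GNI provider is present in your libfabric installation with the fi_info utility that ships with libfabric (a sketch, assuming fi_info was built and installed alongside the library; substitute your own install path):

% path-to-libfabric-install/bin/fi_info -p gni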


Using Intel MPI 2017 with OFI libfabric

First you will need to set the following environment variable to tell Intel MPI where to find the libfabric shared library:

% export I_MPI_OFI_LIBRARY=path-to-libfabric-library/libfabric.so

Another environment variable is used to specify the location of the SLURM PMI library. For Cori the setting is

% export I_MPI_PMI_LIBRARY=/usr/lib64/slurmpmi/libpmi.so

while for Edison it is

% export I_MPI_PMI_LIBRARY=/opt/slurm/default/lib64/slurmpmi/libpmi.so

Next, load the Intel 17 compiler and Intel MPI 2017 modules. As of this writing, on the NERSC Edison system this would be

module load intel/17.0.0.098
module load impi/2017

After loading the Intel MPI module, a third environment variable needs to be set to tell Intel MPI to use OFI libfabric:

% export I_MPI_FABRICS=ofi

Note that the Intel MPI module at NERSC sets this environment variable itself, so the order matters: load the module first, then set I_MPI_FABRICS, otherwise your setting will be overwritten. The full sequence is summarized in the sketch below.
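
Putting the steps together, a minimal sketch of the full setup on Cori might look like the following (the libfabric path is a placeholder, and module versions may differ on your system):

% export I_MPI_OFI_LIBRARY=path-to-libfabric-library/libfabric.so
% export I_MPI_PMI_LIBRARY=/usr/lib64/slurmpmi/libpmi.so
% module load intel/17.0.0.098
% module load impi/2017
% export I_MPI_FABRICS=ofi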

If you have an application that makes use of MPI RMA (one-sided) communication, you may also want to set the following environment variable:

% export I_MPI_OFI_DIRECT_RMA=1

Note that this environment variable is not necessary for the gold update release of Intel MPI.

Applications compiled with Intel MPI can be launched using SLURM's srun command.
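
For example, a minimal sketch of compiling and launching (the source file, executable name, and rank count are placeholders; compile with the Intel MPI wrapper and run srun from within a SLURM allocation):

% mpiicc -o hello hello.c
% srun -n 64 ./hello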

Known Issues

There is currently a bug in the MPICH one-sided accumulate operation that results in a runtime error. The issue is understood and is fixed in the Intel MPI 2017 Update 1 release.