mpi-utils

This is a simple repository containing header files that provide utility functions for MPI, which I use in my HPC projects.

The assignment.hpp file contains functions for sending blocks of multi-dimensional arrays of varying sizes, where the offsets (in each dimension) on the sending node can differ from the offsets on the receiving node. This is well suited to simulations, where a partitioned space can use different indices on different nodes.

Compilation

You can compile the provided examples with mpic++ and run them with mpirun.
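
For example, to build and run the assignment example on two processes (a minimal sketch assuming a standard MPI installation; exact flags may differ on your system, and -fopenmp is only needed for the omp_ variants):

mpic++ examples/test_assignment.cpp -o test_assignment -fopenmp
mpirun -n 2 ./test_assignment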

Usage

assignment

available functions

To use the send and receive functions provided by this repository, you need to include the assignment.hpp file.

int mpi::send(int to, int start, int length, T *arr)
int mpi::send(int to, int (&sizes)[NDIM], int (&start)[NDIM], int (&length)[NDIM], T *arr)
int mpi::receive(int from, int start, int length, T *arr)
int mpi::receive(int from, int (&sizes)[NDIM], int (&start)[NDIM], int (&length)[NDIM], T *arr)

/* only if compiled with openmp */
int mpi::omp_send(int dev, int to, int start, int length, T *arr)
int mpi::omp_send(int dev, int to, int (&sizes)[NDIM], int (&start)[NDIM], int (&length)[NDIM], T *arr)
int mpi::omp_receive(int dev, int from, int start, int length, T *arr)
int mpi::omp_receive(int dev, int from, int (&sizes)[NDIM], int (&start)[NDIM], int (&length)[NDIM], T *arr)
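
Based on these signatures, here is a minimal sketch of how the one-dimensional overloads could be used, assuming two ranks and that the functions return 0 on success (as in the example below); as stated above, the start offset on the sender can differ from the one on the receiver:

#include <mpi.h>
#include "assignment.hpp"

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  double arr[10] = {0};

  int err = 0;
  if (rank == 0) {
    for (int i = 0; i < 10; i++) arr[i] = i; //recognizable values
    err = mpi::send(1, 2, 4, arr); //send arr[2..5] to rank 1
  } else if (rank == 1) {
    err = mpi::receive(0, 5, 4, arr); //store them in arr[5..8]
  }

  MPI_Finalize();
  return err;
}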

example

You can see example code in the examples/test_assignment.cpp file, which is explained below:

//testing
double mat[3][5][5];
int sizes[3] = {3, 5, 5};
int length[3] = {2, 4, 3}; //length of the block to copy

We first create a 3D array and define its sizes and the length of the block to copy.

We then fill it with recognizable values (see Results), and print it.
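
A possible way to produce such values (a sketch in which element (i, j, k) holds i + 0.1*j + 0.01*k, matching the printout in Results; the actual example may fill the matrix differently):

for (int i = 0; i < sizes[0]; i++)
  for (int j = 0; j < sizes[1]; j++)
    for (int k = 0; k < sizes[2]; k++)
      mat[i][j][k] = i + 0.1*j + 0.01*k; //e.g. mat[1][2][3] = 1.23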

We then choose the offsets of the block that will be sent, and send it to the other node:

//sending matrix
if (rank == 0) {
  int start[3] = {0, 0, 2};

  err = mpi::send(1, sizes, start, length, &mat[0][0][0]); if (err != 0) return err;
}

Next, we choose the offsets at which the block will be received, and receive it from the other node:

if (rank == 1) {
  //receiving matrix
  int start[3] = {1, 1, 0};

  err = mpi::receive(0, sizes, start, length, &mat[0][0][0]); if (err != 0) return err;

  //print the matrix
}

We can see that the program works as intended (see Results): a block of the right size is copied, with the correct offsets on both the sending and the receiving end.

CUDA (and GPU) aware

You can send directly to a CUDA device (leveraging CUDA-aware MPI) by using the same functions with a device buffer (allocated with cudaMalloc, or by OpenMP or OpenACC inside a target region).
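
For instance, a sketch using a buffer allocated with cudaMalloc (assuming an MPI build with CUDA support; device initialization and error checking omitted):

double *d_mat;
cudaMalloc((void **) &d_mat, 3 * 5 * 5 * sizeof(double));
//... fill d_mat on the device ...

//same call as before, but with a device pointer
err = mpi::send(1, sizes, start, length, d_mat); if (err != 0) return err;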

You can also send to any OpenMP device by using the same functions with an omp_ prefix, passing as the first argument the device number of the device from which to send or receive.
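
For instance, a sketch using a buffer allocated with omp_target_alloc (any other way of obtaining a device pointer should work the same way):

int dev = omp_get_default_device();
double *d_mat = (double *) omp_target_alloc(3 * 5 * 5 * sizeof(double), dev);
//... fill d_mat on the device ...

//same arguments as before, with the device number prepended
err = mpi::omp_send(dev, 1, sizes, start, length, d_mat); if (err != 0) return err;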

You might prefer to implement your own OpenMP send and receive functions for more specialized use cases, as those provided by this repository allocate and free intermediate CPU buffers.

Results

The first matrix print returned:

  0.00     0.01     0.02     0.03     0.04   
  0.10     0.11     0.12     0.13     0.14   
  0.20     0.21     0.22     0.23     0.24   
  0.30     0.31     0.32     0.33     0.34   
  0.40     0.41     0.42     0.43     0.44   

  1.00     1.01     1.02     1.03     1.04   
  1.10     1.11     1.12     1.13     1.14   
  1.20     1.21     1.22     1.23     1.24   
  1.30     1.31     1.32     1.33     1.34   
  1.40     1.41     1.42     1.43     1.44   

  2.00     2.01     2.02     2.03     2.04   
  2.10     2.11     2.12     2.13     2.14   
  2.20     2.21     2.22     2.23     2.24   
  2.30     2.31     2.32     2.33     2.34   
  2.40     2.41     2.42     2.43     2.44

And the second print returned:

  0.00     0.00     0.00     0.00     0.00   
  0.00     0.00     0.00     0.00     0.00   
  0.00     0.00     0.00     0.00     0.00   
  0.00     0.00     0.00     0.00     0.00   
  0.00     0.00     0.00     0.00     0.00   

  0.00     0.00     0.00     0.00     0.00   
  0.02     0.03     0.04     0.00     0.00   
  0.12     0.13     0.14     0.00     0.00   
  0.22     0.23     0.24     0.00     0.00   
  0.32     0.33     0.34     0.00     0.00   

  0.00     0.00     0.00     0.00     0.00   
  1.02     1.03     1.04     0.00     0.00   
  1.12     1.13     1.14     0.00     0.00   
  1.22     1.23     1.24     0.00     0.00   
  1.32     1.33     1.34     0.00     0.00
