MPI_Iallgather function
Gathers data from all members of a group and sends the data to all members of the group in a non-blocking way.
Syntax
int MPIAPI MPI_Iallgather(
_In_opt_ const void *sendbuf,
_In_ int sendcount,
_In_ MPI_Datatype sendtype,
_Out_opt_ void *recvbuf,
_In_ int recvcount,
_In_ MPI_Datatype recvtype,
_In_ MPI_Comm comm,
_Out_ MPI_Request *request
);
Parameters
sendbuf [in, optional]
The pointer to the data to be sent to all processes in the group. The number and data type of the elements in the buffer are specified in the sendcount and sendtype parameters. Each element in the buffer corresponds to a process in the group.
If the comm parameter references an intracommunicator, you can specify an in-place option by specifying MPI_IN_PLACE in all processes. In that case, the sendcount and sendtype parameters are ignored. Each process supplies its contribution in the element of the receive buffer that corresponds to its rank; that is, the nth process places its data in the nth element of the receive buffer. (See the sketch after the parameter list.)
sendcount [in]
The number of elements in the buffer that is specified in the sendbuf parameter. If sendcount is zero, the data part of the message is empty.
sendtype [in]
The MPI data type of the elements in the send buffer.
recvbuf [out, optional]
The pointer to a buffer that contains the data that is received from each process. The number and data type of the elements in the buffer are specified in the recvcount and recvtype parameters.
recvcount [in]
The number of elements that are received from each process. If the count is zero, the data part of the message is empty.
recvtype [in]
The MPI data type of the elements in the receive buffer.
comm [in]
The MPI_Comm communicator handle.
request [out]
The MPI_Request handle representing the communication operation.
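The following is a minimal sketch of the in-place option described for the sendbuf parameter. It assumes that MPI has already been initialized, that comm is an intracommunicator, and that each process contributes a single int; the helper name inplace_allgather is hypothetical.
#include <mpi.h>
#include <stdlib.h>

/* Sketch: in-place non-blocking allgather.
   Assumes MPI is initialized and comm is an intracommunicator. */
void inplace_allgather(MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Each process contributes one int; the contribution is placed
       directly in the slot of recvbuf that corresponds to its rank. */
    int *recvbuf = malloc(size * sizeof(int));
    recvbuf[rank] = rank * 100;   /* this process's contribution (arbitrary value) */

    MPI_Request request;
    /* With MPI_IN_PLACE, the sendcount and sendtype parameters are ignored. */
    MPI_Iallgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                   recvbuf, 1, MPI_INT, comm, &request);
    MPI_Wait(&request, MPI_STATUS_IGNORE);

    /* recvbuf now holds the contribution of every rank, in rank order. */
    free(recvbuf);
}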
Return value
Returns MPI_SUCCESS on success. Otherwise, the return value is an error code.
In Fortran, the return value is stored in the IERROR parameter.
Fortran
MPI_IALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, REQUEST, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, REQUEST, IERROR
Remarks
A non-blocking call initiates a collective operation which must be completed in a separate completion call. Once initiated, the operation may progress independently of any computation or other communication at participating processes. In this manner, non-blocking collective operations can mitigate possible synchronizing effects of collective operations by running them in the "background."
All completion calls (e.g., MPI_Wait) are supported for non-blocking collective operations.
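For example, the following sketch initiates the operation, performs unrelated work, and then completes it with MPI_Wait. The buffer size and the function name overlap_example are assumptions for illustration; it also assumes MPI_Init has already been called and that the group has at most 64 processes.
#include <mpi.h>

/* Sketch: start the allgather, do independent work,
   then complete the operation with MPI_Wait. */
void overlap_example(void)
{
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendval = rank;   /* one int contributed per process */
    int recvbuf[64];      /* assumes size <= 64 for this sketch */

    MPI_Request request;
    MPI_Iallgather(&sendval, 1, MPI_INT,
                   recvbuf, 1, MPI_INT,
                   MPI_COMM_WORLD, &request);

    /* Independent computation can proceed here while the
       gather progresses in the background. */

    MPI_Wait(&request, MPI_STATUS_IGNORE);
    /* recvbuf[0..size-1] now holds the value from each rank. */
}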
Requirements
Product | Microsoft MPI v7
Header  | Mpi.h; Mpif.h
Library | Msmpi.lib
DLL     | Msmpi.dll