Module RMPI

Rimu.RMPI (Module)

Module for providing MPI functionality for Rimu. This module is unexported. To use it, run

using Rimu.RMPI

MPIData

Rimu.RMPI.MPIData (Type)
MPIData(data; kwargs...)

Wrapper used for signaling that this data is part of a distributed data structure and that communication should happen with MPI. MPIData can generally be used where an AbstractDVec would be used otherwise. Unlike AbstractDVecs, MPIData does not support indexing or iteration over keys, values, and pairs.

Keyword arguments:

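A minimal sketch of wrapping a local vector, assuming the default communication strategy that MPIData selects when no keyword arguments are passed; the address and starting value below are arbitrary:

using Rimu, Rimu.RMPI

dv = DVec(BoseFS((1, 1, 1)) => 1.0)   # a small local vector on each rank
mdv = MPIData(dv)                     # declare dv as part of an MPI-distributed structure
wn = walkernumber(mdv)                # reductions like this one now synchronise across ranks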

Setup functions

The following functions set up MPIData with the available distribution strategies. They are unexported.

Rimu.RMPI.mpi_one_sided (Function)
mpi_one_sided(data, comm = mpi_comm(), root = mpi_root; capacity)

Declare data as MPI-distributed and set the communication strategy to one-sided with remote memory access (RMA). capacity sets the capacity of the RMA windows.

Sets up the MPIData structure with MPIOneSided strategy.

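A hedged sketch of the call, reusing dv from the MPIData example above; the capacity value is arbitrary and should be chosen large enough for the buffers exchanged between ranks:

mdv = mpi_one_sided(dv; capacity = 10_000)   # uses the default communicator mpi_comm() and root mpi_root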

Strategies

Rimu.RMPI.MPIPointToPoint (Type)
MPIPointToPoint{N,A}

Point-to-point communication strategy. Uses circular communication via MPI.Send and MPI.Recv!.

Constructor

  • MPIPointToPoint(::Type{P}, np, id, comm): Construct an instance with pair type P on np processes with current rank id.
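As a sketch of the documented constructor, the strategy can be built with a concrete pair type; mpi_size() is assumed here to return the number of ranks, by analogy with the mpi_rank() and mpi_comm() helpers used elsewhere in this module:

addr = BoseFS((1, 1, 1))                        # example address; any isbits address type works
P = Pair{typeof(addr),Float64}                  # concrete pair type used for the communication buffers
s = MPIPointToPoint(P, mpi_size(), mpi_rank(), mpi_comm())

In typical use the strategy is selected through the setup functions above rather than constructed by hand.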
Rimu.RMPI.MPIOneSided (Type)
MPIOneSided(nprocs, myrank, comm, ::Type{T}, capacity)

Communication buffer for use with MPI one-sided communication (remote memory access). Up to capacity elements of type T can be exchanged between MPI ranks via put. It is important that isbitstype(T) == true. Objects of type MPIOneSided have to be freed manually with a (blocking) call to free().

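A minimal sketch of managing such a buffer by hand; Int stands in for any isbitstype element, mpi_size() is the same assumed helper as above, and whether free needs to be qualified as RMPI.free depends on what the module exports:

buf = MPIOneSided(mpi_size(), mpi_rank(), mpi_comm(), Int, 1_000)
# ... exchange up to 1_000 Ints between ranks via one-sided put operations ...
free(buf)   # blocking call; has to be issued manually when the buffer is no longer needed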
Rimu.RMPI.MPIAllToAll (Type)
MPIAllToAll

All-to-all communication strategy. The communication works in two steps: first MPI.Alltoall! is used to communicate the number of walkers each rank wants to send to other ranks, then MPI.Alltoallv! is used to send the walkers around.

Constructor

  • MPIAllToAll(::Type{P}, np, id, comm): Construct an instance with pair type P on np processes with current rank id.
Rimu.RMPI.MPINoWalkerExchange (Type)
MPINoWalkerExchange(nprocs, my_rank, comm)

Strategy for not exchanging walkers between ranks. Consequently there will be no cross-rank annihilations.

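The remaining strategies follow the same constructor pattern; a brief sketch reusing P and the assumed helpers from the MPIPointToPoint example above:

s_a2a = MPIAllToAll(P, mpi_size(), mpi_rank(), mpi_comm())        # collective two-step exchange
s_none = MPINoWalkerExchange(mpi_size(), mpi_rank(), mpi_comm())  # keep all walkers on their own rank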

MPI convenience functions

Rimu.RMPI.mpi_combine_walkers! (Method)
mpi_combine_walkers!(target, source, [strategy])

Distribute the entries of source to the target data structure such that all entries in the target are on the process with the correct MPI rank as controlled by targetrank(). MPI synchronizing.

Note: the storage of the source is communicated rather than the source itself.

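A hedged sketch of a redistribution step; it assumes that passing an MPIData wrapper as the target lets the call pick up the strategy stored in it, while source is a plain local vector:

source = DVec(BoseFS((1, 1, 1)) => 1.0)   # entries generated locally on each rank
target = MPIData(similar(source))         # distributed structure that receives them
mpi_combine_walkers!(target, source)      # MPI synchronizing; entries end up on their target ranks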
Rimu.RMPI.mpi_seed! (Function)
mpi_seed!(seed = rand(Random.RandomDevice(), UInt))

Re-seed the random number generators in an MPI-safe way. If seed is provided, the random numbers from rand will follow a deterministic sequence.

Independence of the random number generators on different MPI ranks is achieved by adding hash(mpi_rank()) to seed.

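For example, either drawing a fresh seed or fixing one for reproducible runs:

mpi_seed!()     # re-seed from Random.RandomDevice(); not reproducible
mpi_seed!(17)   # reproducible; each rank effectively uses 17 + hash(mpi_rank())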
Rimu.RMPI.@mpi_root (Macro)
@mpi_root expr

Evaluate the expression only on the root rank. Extra care needs to be taken as expr must not contain any code that involves synchronising MPI operations, i.e. actions that would require synchronous action of all MPI ranks.

Example:

wn = walkernumber(dv)   # an MPI synchronising function call that gathers
                        # information from all MPI ranks
@mpi_root @info "The current walker number is" wn # print info message on root only
