Example 2: Rimu with MPI

In this example, we will demonstrate using Rimu with MPI.

A runnable script for this example is available as BHM-example-mpi.jl. Run it with mpirun julia BHM-example-mpi.jl.

We start by importing Rimu and Rimu.RMPI, which contains MPI-related functionality.

using Rimu
using Rimu.RMPI

Note that it is not necessary to initialise the MPI library, as this is already done automatically when Rimu is loaded.
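Because Rimu takes care of the MPI setup, the environment can be queried directly with the helpers exported by Rimu.RMPI. As a small sketch (when launched without mpirun, there is a single rank):

```julia
using Rimu
using Rimu.RMPI

# MPI is initialised automatically when Rimu is loaded, so we can
# immediately ask which rank we are and how many ranks are running.
println("rank ", mpi_rank(), " of ", mpi_size())
```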

We will compute the ground state of a Bose-Hubbard model in momentum space with 10 particles in 10 sites.

First, we define the Hamiltonian. We want to start from an address with zero total momentum, which we get by placing all particles in the zero-momentum mode (mode 5 of 10).

address = BoseFS(10, 5 => 10)
BoseFS{10,10}(0, 0, 0, 0, 10, 0, 0, 0, 0, 0)

We will set the interaction strength u to 6.0. The hopping strength t defaults to 1.0.

H = HubbardMom1D(address; u=6.0)
HubbardMom1D(BoseFS{10,10}(0, 0, 0, 0, 10, 0, 0, 0, 0, 0); u=6.0, t=1.0)

Next, we construct the starting vector. We use a PDVec, which is automatically MPI-distributed if MPI is available. We set the vector's stochastic style to IsDynamicSemistochastic, which improves statistics and reduces the sign problem.

initial_vector = PDVec(address => 1.0; style=IsDynamicSemistochastic())
1-element PDVec: style = IsDynamicSemistochastic{Float64,ThresholdCompression,DynamicSemistochastic}()
  fs"|0 0 0 0 10 0 0 0 0 0⟩" => 1.0

We set a reporting strategy. We will use ReportToFile, which writes the reports directly to a file. This reduces memory use in long-running jobs, as the results need not be kept in memory, and it also lets us inspect partial results before the computation finishes and recover data if it fails. Setting save_if=is_mpi_root() ensures that only the root MPI rank writes to the file. The chunk_size parameter determines how many rows of data are accumulated before being written to the file. Progress messages are suppressed with io=devnull.

r_strat = ReportToFile(
    filename="result.arrow",
    save_if=is_mpi_root(),
    reporting_interval=1,
    chunk_size=1000,
    io=devnull
)
ReportToFile{Symbol}("result.arrow", 1, 1000, true, false, Base.DevNull(), :zstd, nothing)

Now, we can set the other parameters as usual. We will perform the computation with a target population of 10_000 walkers. We will also compute the projected energy.

s_strat = DoubleLogUpdate(targetwalkers=10_000)
post_step = ProjectedEnergy(H, initial_vector)
ProjectedEnergy{HubbardMom1D{Float64, 10, BoseFS{10, 10, BitString{19, 1, UInt32}}, 6.0, 1.0}, Rimu.DictVectors.FrozenPDVec{BoseFS{10, 10, BitString{19, 1, UInt32}}, Float64, 1}, Rimu.DictVectors.FrozenPDVec{BoseFS{10, 10, BitString{19, 1, UInt32}}, Float64, 1}}(:vproj, :hproj, HubbardMom1D(BoseFS{10,10}(0, 0, 0, 0, 10, 0, 0, 0, 0, 0); u=6.0, t=1.0), Rimu.DictVectors.FrozenPDVec{BoseFS{10, 10, BitString{19, 1, UInt32}}, Float64, 1}((Pair{BoseFS{10, 10, BitString{19, 1, UInt32}}, Float64}[fs"|0 0 0 0 10 0 0 0 0 0⟩" => 1.0],)), Rimu.DictVectors.FrozenPDVec{BoseFS{10, 10, BitString{19, 1, UInt32}}, Float64, 1}((Pair{BoseFS{10, 10, BitString{19, 1, UInt32}}, Float64}[fs"|1 0 0 0 8 0 0 0 1 0⟩" => 5.692099788303083, fs"|0 0 0 0 8 0 0 0 0 2⟩" => 4.024922359499621, fs"|0 0 0 0 10 0 0 0 0 0⟩" => 7.0, fs"|0 0 1 0 8 0 1 0 0 0⟩" => 5.692099788303083, fs"|0 0 0 1 8 1 0 0 0 0⟩" => 5.692099788303083, fs"|0 1 0 0 8 0 0 1 0 0⟩" => 5.692099788303083],)))

The @mpi_root macro performs an action on the root rank only, which is useful for printing.

@mpi_root println("Running FCIQMC with ", mpi_size(), " rank(s).")
Running FCIQMC with 1 rank(s).

Finally, we can run the computation.

lomc!(H, initial_vector; r_strat, s_strat, post_step, dτ=1e-4, laststep=10_000);
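After the run, the report in result.arrow can be read back and analysed. A minimal sketch, assuming Rimu's load_df, shift_estimator, and projected_energy helpers and the column names produced by the ProjectedEnergy post-step above:

```julia
using Rimu

# Load the report written by ReportToFile back into a DataFrame.
df = load_df("result.arrow")

# Estimate the ground-state energy from the shift, discarding the
# first 1000 steps as equilibration.
se = shift_estimator(df; skip=1000)

# The projected energy is estimated from the ratio of the hproj and
# vproj columns recorded by the ProjectedEnergy post-step.
pe = projected_energy(df; skip=1000)
```

Both estimators return values with statistical uncertainties, so the two estimates of the ground-state energy can be checked for consistency.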

This page was generated using Literate.jl.