Package Overview

QuOp_MPI provides an object-oriented framework for the design and simulation of QVAs. It enables researchers with any level of parallel programming experience to design simulation workflows that are efficiently scalable on massively parallel systems.

QVA Simulation

Predefined Algorithms

Predefined algorithm classes are provided for combinatorial optimisation problems and for the optimisation of continuous multivariable functions.

User-Defined Algorithms

Novel QVAs may be designed by working directly with the Ansatz class and propagator submodules.

Adaptive Operator and Optimisation Schemes

The QuOp_MPI Ansatz and Unitary classes are configured via QuOp Functions. These allow the implementation of arbitrarily parameterised operators and adaptive optimisation schemes. QuOp_MPI includes default QuOp Functions that support the interfacing of user-defined serial Python functions with its parallelisation scheme for QVA simulation. Users may also define MPI-compatible custom QuOp Functions with minimal parallel programming experience.

Key Features

Toolkit Module

The toolkit module provides convenience functions for constructing quantum operators:

  • Pauli operators: I(), X(), Y(), Z() — single-qubit Pauli matrices acting on specified qubits in an n-qubit system

  • Kronecker products: kron(), kron_power() — utilities for constructing tensor products

  • String operators: string() — for building operators from string representations

These are particularly useful for defining cost Hamiltonians in combinatorial optimisation problems (see the maxcut example).
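Conceptually, these helpers assemble n-qubit operators from Kronecker products of single-qubit matrices. The following is a dense NumPy sketch of that idea only; the toolkit itself works with sparse representations, and the function names and signatures below are illustrative, not the QuOp_MPI API:

```python
import numpy as np

# Single-qubit identity and Pauli-Z matrices.
I2 = np.eye(2)
Z1 = np.array([[1.0, 0.0], [0.0, -1.0]])

def pauli_z(i, n):
    """Pauli-Z acting on qubit i of an n-qubit register,
    built as I ⊗ ... ⊗ Z ⊗ ... ⊗ I."""
    ops = [Z1 if q == i else I2 for q in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def kron_power(op, k):
    """k-fold Kronecker power of op."""
    out = op
    for _ in range(k - 1):
        out = np.kron(out, op)
    return out

# Maxcut-style cost term for edge (0, 1) on 3 qubits:
# C = 0.5 * (I - Z_0 Z_1); the diagonal is 1 when qubits 0 and 1 differ.
n = 3
C = 0.5 * (np.eye(2**n) - pauli_z(0, n) @ pauli_z(1, n))
```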

Initial States and Observables

QuOp_MPI provides predefined functions for common initial states and observable configurations:

  • Initial states (state): equal (uniform superposition), basis (computational basis state), serial (user-defined function), array (from NumPy array), position_grid (for multivariable problems)

  • Observables (observable): serial (user-defined function), csv (from CSV file), hdf5 (from HDF5 file), array (from NumPy array), rand (random observables for testing)

See set_initial_state() and set_observables().
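For illustration, the equal and basis options correspond to the following states, shown here as dense NumPy arrays (QuOp_MPI itself generates them in distributed form across the MPI communicator):

```python
import numpy as np

def equal_state(n_qubits):
    """Uniform superposition over all 2**n_qubits basis states."""
    dim = 2**n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=np.complex128)

def basis_state(n_qubits, index=0):
    """A single computational basis state |index>."""
    state = np.zeros(2**n_qubits, dtype=np.complex128)
    state[index] = 1.0
    return state
```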

Parameter Mapping

The set_parameter_map() method enables flexible control over variational parameters, allowing:

  • Reduction of the number of free parameters (e.g., parameter sharing across ansatz layers)

  • Custom mappings from a reduced parameter space to the full variational parameter vector

  • Improved optimisation efficiency for problems with inherent symmetries
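A parameter map is simply a function from a reduced parameter vector to the full variational vector. The hypothetical example below ties one (gamma, t) pair across all ansatz layers, so two free parameters expand to 2 * n_layers variational parameters; the exact signature expected by set_parameter_map() may differ:

```python
import numpy as np

def tied_parameter_map(reduced_params, n_layers):
    """Hypothetical map: one (gamma, t) pair is shared by every
    ansatz layer, expanding 2 free parameters to 2 * n_layers."""
    gamma, t = reduced_params
    return np.tile([gamma, t], n_layers)
```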

Optimiser Support

QuOp_MPI supports classical optimisers from both SciPy and NLopt:

  • SciPy: All methods from scipy.optimize.minimize (default: L-BFGS-B)

  • NLopt: Gradient-based and derivative-free methods (requires pip install 'quop_mpi[nlopt]')

Configure via set_optimiser().
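For reference, the SciPy default corresponds to calls of the following form, shown here with a stand-in quadratic objective in place of the QVA expectation value:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective: a convex quadratic with minimum at x = (1, 1, 1, 1).
def objective(x):
    return np.sum((x - 1.0) ** 2)

# L-BFGS-B is QuOp_MPI's default SciPy method.
result = minimize(objective, x0=np.zeros(4), method="L-BFGS-B")
```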

Custom Objective Functions

While QuOp_MPI defaults to minimising the expectation value of the observables, custom objective functions can be defined via set_objective(). This enables:

  • Alternative figures of merit (e.g., CVaR, Gibbs objective)

  • Multi-objective optimisation schemes

  • Problem-specific objective formulations
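For instance, a CVaR objective replaces the mean over all sampled observable values with the mean over only the best alpha-fraction. A minimal NumPy sketch of the figure of merit itself (the hook signature expected by set_objective() is not shown here):

```python
import numpy as np

def cvar(samples, alpha=0.25):
    """CVaR objective: the mean of the lowest alpha-fraction of
    sampled observable values (for minimisation)."""
    s = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(s))))
    return s[:k].mean()
```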

Sampling Simulation

The set_sampling() method enables simulation of quantum measurement, returning sampled basis states and their associated observable values rather than just the expectation value.
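The underlying idea can be sketched in a few lines: basis-state indices are drawn with probability given by the squared amplitudes of the state. This is a NumPy illustration of the concept, not the QuOp_MPI implementation:

```python
import numpy as np

def sample_measurements(state, n_shots, rng=None):
    """Simulate projective measurement: draw basis-state indices
    with probability |amplitude|**2."""
    rng = np.random.default_rng(rng)
    probs = np.abs(state) ** 2
    probs /= probs.sum()  # guard against rounding error
    return rng.choice(len(state), size=n_shots, p=probs)
```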

Data I/O

QuOp_MPI provides comprehensive data persistence:

  • HDF5 output: Save final states, observables, and optimisation results via save() (parallel HDF5 for large-scale simulations)

  • CSV logging: Record optimisation progress across multiple runs via set_log()

  • Benchmark data: Automated saving during benchmark() runs

Parallelisation Schemes

QuOp_MPI implements several mutually compatible MPI-based parallelisation schemes. Of these, parallel quantum simulation is relevant to QVA simulation on personal computers, workstations, and clusters, while parallel gradient computation and Ansatz swarms support simulation workflows on clusters.

Quantum Simulation

For an MPI (sub)communicator with \(m\) processes, each with integer ID \(\text{rank} \in \{0, \ldots, m - 1\}\), the system state and observables of a simulated QVA are partitioned over the (sub)communicator with,

\[\text{local_i}\]

elements per rank and a global index offset of,

\[\text{local_i_offset} = \sum_{i < \text{rank}} \text{local_i}^{(i)},\]

where \(\text{local_i}^{(i)}\) is the \(\text{local_i}\) of rank \(i\).

These distributed arrays are acted on by instances of the Unitary class, which provides an interface to efficient Python extensions which compute the action of the QVA unitaries in MPI parallel.
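The partitioning described above can be sketched as follows; QuOp_MPI's backends may choose a different split in practice (for example, one dictated by the parallel FFT layout):

```python
def partition(n_elements, m):
    """Split n_elements over m ranks: local_i elements per rank,
    with the remainder spread over the lowest ranks, and
    local_i_offset equal to the sum of local_i over lower ranks."""
    base, rem = divmod(n_elements, m)
    local_i = [base + (1 if rank < rem else 0) for rank in range(m)]
    local_i_offset = [sum(local_i[:rank]) for rank in range(m)]
    return local_i, local_i_offset
```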

Gradient Evaluation

For optimisation methods that make use of gradient information, computation of the objective function gradient may be carried out in MPI parallel by duplicating an Ansatz over multiple MPI subcommunicators (see set_parallel_jacobian()).

Ansatz Swarms

Optimisation of QVA variational parameters over a large search domain, or other QVA meta-optimisation tasks, can be accelerated through creation of a swarm, which manages multiple QVA simulation instances.

Parallel Overview

The diagram below depicts a swarm of two QVA simulation instances with parallel gradient evaluation (see set_parallel_jacobian()). Each QVA simulation occurs over three MPI subcommunicators: two carry out computation of the partial derivatives of the objective function, while the third manages optimisation of the variational parameters and evaluation of the objective function. The six Ansatz subcommunicators call the propagate() method of Unitary instances, which compute the action of the QVA's phase-shift and mixing unitaries in MPI parallel.

digraph "sphinx-ext-graphviz" {
    node [fontsize="10"];

    mpi[label="MPI.COMM_WORLD\n12 nodes, 1536 cores", shape=oval]

    swarm[label="Interrelated QVA simulations.\n(swarm)", shape="trapezium"];

    ansatz_0[label="QVA Simulation 1\n(Ansatz)",shape="invhouse"];
    ansatz_N[label="QVA Simulation 2\n(Ansatz)",shape="invhouse"];

    gradient_0_0[label="Gradient\ncomputation.\n(Ansatz)",shape="rectangle"];
    gradient_0_N[label="Gradient\ncomputation.\n(Ansatz)",shape="rectangle"];
    gradient_N_0[label="Gradient\ncomputation.\n(Ansatz)",shape="rectangle"];
    gradient_N_N[label="Gradient\ncomputation.\n(Ansatz)",shape="rectangle"];

    propagation_0[label="Objective function\nminimisation.\n(Ansatz)",shape="rectangle"];
    propagation_N[label="Objective function\nminimisation.\n(Ansatz)",shape="rectangle"];

    propagation_gradient_0_0[label="Parallel quantum\nstate propagation\n(Unitary)", shape="rectangle"];
    propagation_gradient_0_N[label="Parallel quantum\nstate propagation\n(Unitary)", shape="rectangle"];
    propagation_gradient_N_0[label="Parallel quantum\nstate propagation\n(Unitary)", shape="rectangle"];
    propagation_gradient_N_N[label="Parallel quantum\nstate propagation\n(Unitary)", shape="rectangle"];

    obj_propagation_0[label="Parallel quantum\nstate propagation\n(Unitary)", shape="rectangle"];
    obj_propagation_N[label="Parallel quantum\nstate propagation\n(Unitary)", shape="rectangle"];

    mpi -> swarm [style=dashed, dir=none];

    swarm-> ansatz_0 [label=768];
    swarm-> ansatz_N [label=768];

    ansatz_0 -> propagation_0 [label=256];
    ansatz_N -> propagation_N [label=256];

    propagation_0 -> obj_propagation_0 [style=dashed, dir=none];
    propagation_N -> obj_propagation_N [style=dashed, dir=none];

    ansatz_0 -> gradient_0_0 [label=256];
    ansatz_0 -> gradient_0_N [label=256];

    ansatz_N -> gradient_N_0 [label=256];
    ansatz_N -> gradient_N_N [label=256];

    gradient_0_0 -> propagation_gradient_0_0 [style="dashed", dir="none"];
    gradient_0_N -> propagation_gradient_0_N [style="dashed", dir="none"];
    gradient_N_0 -> propagation_gradient_N_0 [style="dashed", dir="none"];
    gradient_N_N -> propagation_gradient_N_N [style="dashed", dir="none"];

}

Example parallel structure for QuOp_MPI running on a cluster with 128 cores per node. Solid arrows indicate the splitting of an MPI (sub)communicator, and dashed lines indicate the sharing of an MPI (sub)communicator. Numbered edges give the MPI subcommunicator size. Relevant QuOp_MPI classes are indicated in parentheses.

Support for Clusters with Job-Scheduling

For clusters with time-limited job-scheduling, QuOp_MPI supports automated job suspension and resumption for long-running workflows. This allows simulations to be split across multiple job submissions without manual intervention.

Note

Suspend/resume functionality is available for multi-execution workflows only:

  • benchmark() — systematic studies across ansatz depths and repeats

  • execute_swarm() — parallel execution of multiple QVA instances

  • benchmark_swarm() — benchmarking across swarm configurations

A single execute() call cannot be suspended and resumed, as it represents one atomic optimisation run.

Suspend and Resume

When a time limit is set, QuOp_MPI monitors execution time and suspends before the limit is reached, saving progress to a suspend file. On the next job submission, execution resumes from where it left off—completed iterations are skipped and only remaining work is performed.
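The mechanism can be sketched as follows, with a JSON record of completed repeats standing in for QuOp_MPI's suspend file; the file name, format, and function are illustrative only:

```python
import json
import os
import time

def run_repeats(n_repeats, time_limit, suspend_path, work):
    """Sketch of suspend/resume for a multi-execution workflow:
    completed repeats are recorded so that a later job skips them."""
    state_file = suspend_path + ".json"
    done = []
    if os.path.exists(state_file):
        with open(state_file) as f:
            done = json.load(f)  # resume from a previous job
    start = time.time()
    for i in range(n_repeats):
        if i in done:
            continue  # already completed in a previous job
        if time.time() - start > time_limit:
            break  # suspend before the time limit is reached
        work(i)
        done.append(i)
        with open(state_file, "w") as f:
            json.dump(done, f)  # record progress after each repeat
    return done
```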

Example usage:

alg.benchmark(
    ansatz_depths=range(1, 21),
    repeats=10,
    time_limit=3600,        # Suspend after ~1 hour
    suspend_path="my_simulation"
)

Environment Variables

The suspend/resume behaviour can be configured via environment variables, which is useful for cluster job scripts:

  • QUOP_TIME_LIMIT: Total allocated time in seconds. Overrides the time_limit parameter.

  • QUOP_SUSPEND_PATH: Path for suspend files. Overrides the suspend_path parameter.

  • QUOP_FORCE_RESUME: If set to 1, forces resumption from the suspend file even if the source code has changed.

Example SLURM job script:

#!/bin/bash
#SBATCH --time=04:00:00
#SBATCH --nodes=4

export QUOP_TIME_LIMIT=14000  # ~3.9 hours (leave margin for cleanup)
export QUOP_SUSPEND_PATH="simulation_checkpoint"

srun python my_simulation.py

Performance Profiling

For performance analysis, QuOp_MPI includes a built-in profiler that traces function calls and execution times:

QUOP_PROFILE=1 mpiexec -n 4 python my_simulation.py

This creates a quop_profile_<timestamp>/ directory containing per-rank trace files with timing information for all QuOp_MPI function calls.