Quantum Tomography
PastaQ.tomography — Function

```julia
tomography(data::Matrix{Pair{String, Int}}, sites::Vector{<:Index};
           method::String = "linear_inversion",
           fillzeros::Bool = true,
           trρ::Number = 1.0,
           max_iters::Int = 10000,
           kwargs...)

tomography(data::Matrix{Pair{String, Pair{String, Int}}}, sites::Vector{<:Index};
           method::String = "linear_inversion", kwargs...)
```
Run full quantum tomography for a set of input measurement data `data`. If the input data consists of a list of `Pair{String, Int}`, it is interpreted as quantum state tomography. Each data point is a single measurement outcome, e.g. `"X" => 1` to refer to a measurement in the `X` basis with binary outcome `1`. If instead the input data is a collection of `Pair{String, Pair{String, Int}}`, it is interpreted as quantum process tomography, with each data point corresponding to the input of a given state to the channel, followed by a measurement in a basis, e.g. `"X+" => ("Z" => 0)`, referring to an input $|+\rangle$ state, followed by a measurement in the `Z` basis with outcome `0`.
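The two data formats can be illustrated with small hand-built matrices, where rows index measurement shots and columns index qubits; the specific bases and outcomes below are made up for illustration:

```julia
# Quantum state tomography data: each entry is basis => outcome.
# Rows are measurement shots, columns are qubits (values are illustrative).
state_data = [("X" => 1) ("Z" => 0);
              ("Y" => 0) ("Z" => 1)]

# Quantum process tomography data: each entry is
# input preparation => (measurement basis => outcome).
process_data = [("X+" => ("Z" => 0)) ("Z-" => ("Y" => 1));
                ("Y+" => ("X" => 0)) ("X-" => ("Z" => 1))]

println(eltype(state_data))    # Pair{String, Int64}
println(eltype(process_data))  # Pair{String, Pair{String, Int64}}
```

The element types of these two matrices are what `tomography` dispatches on to decide between state and process tomography.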
There are three methods to perform tomography (we show state tomography here as an example):

- `method = "linear_inversion"` (or `"LI"`): optimize a variational density matrix $\rho$ (or Choi matrix)
- `method = "least_squares"` (or `"LS"`)
- `method = "maximum_likelihood"` (or `"ML"`)
```julia
tomography(train_data::AbstractMatrix, L::LPDO;
           optimizer::Optimizer,
           observer! = nothing,
           kwargs...)
```
Run quantum tomography using a variational model `L` to fit `train_data`. Depending on the type of `train_data`, run state or process tomography (see the full `tomography` docs above for the data format).
For quantum state tomography, optimize the average Kullback-Leibler (KL) divergence for projective measurements in a set of bases:

$$C = -\frac{1}{|D|}\sum_{k=1}^{|D|} \log P(x_k^{(b_k)})$$

where the probability is computed as

$$P(x^{(b)}) = |\langle x^{(b)}|U_b|\psi\rangle|^2$$

for input MPS variational wavefunction $|\psi\rangle$, and

$$P(x^{(b)}) = \langle x^{(b)}|U_b\,\rho\,U_b^\dagger|x^{(b)}\rangle$$

for input LPDO variational density operators. Here $U_b$ is the depth-1 unitary that rotates the variational state into the measurement basis $b$ for outcome $x^{(b)}$.
For quantum process tomography, optimize the KL divergence for the process probability distribution
where $\xi$ is the input state to the channel. The cost function is computed as
for input MPO variational unitary operator (trated as a MPS $|\Phi\rangle$ after appropriate vectorization), and
for input LPDO variational Choi matrix. Here we refer to $\tilde\Phi$ and $\tilde\Lambda$ respectively as the projection of the unitary operator or Choi matrix into the input state $|\xi\rangle$ to the channel.
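As a minimal numerical illustration of the state-tomography cost (plain Julia, not using PastaQ itself), consider a single qubit in $|0\rangle$ measured in the $X$ basis, where the rotation $U_b$ is the Hadamard gate and the outcome record is made up for illustration:

```julia
# Single-qubit illustration of C = -(1/|D|) Σ_k log P(x_k),
# with P(x) = |⟨x|U_b|ψ⟩|² and U_b the Hadamard (X-basis rotation).
U_X = [1 1; 1 -1] / sqrt(2)     # Hadamard gate
ψ = [1.0, 0.0]                  # |0⟩
P = abs2.(U_X * ψ)              # outcome probabilities in the X basis: [0.5, 0.5]

outcomes = [0, 1, 0, 0]         # hypothetical measurement record
C = -sum(log(P[x + 1]) for x in outcomes) / length(outcomes)
println(C)                      # log(2) ≈ 0.6931
```

Since $|0\rangle$ gives each $X$-basis outcome with probability $1/2$, every term contributes $\log 2$ and the average cost is exactly $\log 2$.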
Keyword arguments:

- `train_data`: pairs of preparation / (basis, outcome): `("X+"=>"X"=>0, "Z-"=>"Y"=>1, "Y+"=>"Z"=>0, …)`.
- `L`: variational model (MPO/LPDO).
- `optimizer`: optimizer object for stochastic optimization (from Optimisers.jl).
- `observer!`: if provided, keep track of training metrics (from Observers.jl).
- `batch_size`: number of samples used for one gradient update.
- `epochs`: number of training iterations.
- `target`: target quantum state/process for distance measures (e.g. fidelity).
- `test_data`: data used for cross-validation.
- `observer_step = 1`: how often the Observer is called to record measurements.
- `outputlevel = 1`: amount of information printed on screen during training.
- `outputpath`: if provided, save metrics to file.
- `savestate`: if `true`, save the variational state to file.
- `print_metrics = []`: print these metrics on screen during training.
- `earlystop`: if `true`, use the pre-defined early-stop function. A function can also be passed as `earlystop`, in which case the training is halted whenever the function (evaluated at each iteration) returns `true`.
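A hedged sketch of a variational fit using the keyword arguments above; the data-generating helpers (`qubits`, `randomstate`, `getsamples`) are assumed from PastaQ's examples and may differ across versions, and `Optimisers.Descent` is a standard Optimisers.jl rule:

```julia
using PastaQ, ITensors, Optimisers

# Sketch (assumed API): generate synthetic training data from a random
# target state, then fit a variational MPS model to it.
N = 4
sites = qubits(N)
ψ_target = randomstate(sites; χ = 4)
train_data = getsamples(ψ_target, 1000; local_basis = ["X", "Y", "Z"])

ψ0 = randomstate(sites; χ = 10)        # initial variational MPS
ψ = tomography(train_data, ψ0;
               optimizer = Optimisers.Descent(0.01),
               batch_size = 100,
               epochs = 20,
               target = ψ_target,      # track fidelity to the target
               observer_step = 1,
               outputlevel = 1)
```

Passing `target` lets the training loop report distance measures (e.g. fidelity) against the known state, which is useful for benchmarking on synthetic data.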