Algorithms
Here is a collection of the algorithms that have been added to MPSKit.jl. If a particular algorithm is missing, feel free to let us know via an issue, or contribute via a PR.
Groundstates
One of the most prominent use-cases of MPS is to obtain the groundstate of a given (quasi-) one-dimensional quantum Hamiltonian. In MPSKit.jl, this can be achieved through find_groundstate:
MPSKit.find_groundstate
— Function
find_groundstate(ψ₀, H, [environments]; kwargs...) -> (ψ, environments, ϵ)
find_groundstate(ψ₀, H, algorithm, environments) -> (ψ, environments, ϵ)
Compute the groundstate for Hamiltonian H with initial guess ψ₀. If not specified, an optimization algorithm will be attempted based on the supplied keywords.
Arguments
ψ₀::AbstractMPS: initial guess
H::AbstractMPO: operator for which to find the groundstate
[environments]: MPS environment manager
algorithm: optimization algorithm
Keywords
tol::Float64: tolerance for the convergence criterion
maxiter::Int: maximum number of iterations
verbosity::Int: display progress information
Returns
ψ::AbstractMPS: converged groundstate
environments: environments corresponding to the converged state
ϵ::Float64: final convergence error upon terminating the algorithm
There is a variety of algorithms that have been developed over the years, and many of them have been implemented in MPSKit. Keep in mind that some of them are exclusive to finite or infinite systems, while others may work for both. Many of these algorithms have different advantages and disadvantages, and figuring out the optimal algorithm is not always straightforward, since this may strongly depend on the model. Here, we enumerate some of their properties in hopes of pointing you in the right direction.
DMRG
Probably the most widely used algorithm for optimizing groundstates with MPS is DMRG and its variants. This algorithm sweeps through the system, optimizing a single site while keeping all others fixed. Since this local problem can be solved efficiently, the globally optimal state follows by alternating sweeps through the system. However, because of the single-site nature of this algorithm, the bond dimension of the state can never change, so there is no way of dynamically increasing the precision. This becomes particularly relevant when symmetries are involved, since finding a good distribution of charges is then also required. To circumvent this, it is also possible to optimize over two sites at the same time with DMRG2, followed by a truncation back to a single-site state. This can dynamically change the bond dimension, but comes at an increased cost.
MPSKit.DMRG
— Type
struct DMRG{A, F} <: MPSKit.Algorithm
Single-site DMRG algorithm for finding the dominant eigenvector.
Fields
tol::Float64: tolerance for the convergence criterion
maxiter::Int64: maximal number of iterations
finalize::Any: callback function applied after each iteration, of signature finalize(iter, ψ, H, envs) -> ψ, envs
verbosity::Int64: setting for how much information is displayed
eigalg::Any: algorithm used for the eigenvalue solvers
MPSKit.DMRG2
— Type
struct DMRG2{A, F} <: MPSKit.Algorithm
Two-site DMRG algorithm for finding the dominant eigenvector.
Fields
tol::Float64: tolerance for the convergence criterion
maxiter::Int64: maximal number of iterations
finalize::Any: callback function applied after each iteration, of signature finalize(iter, ψ, H, envs) -> ψ, envs
verbosity::Int64: setting for how much information is displayed
eigalg::Any: algorithm used for the eigenvalue solvers
trscheme::TensorKit.TruncationScheme: algorithm used for truncation of the two-site update
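To illustrate the difference, here is a minimal sketch (the keyword constructors are assumed from the field lists above; the transverse_field_ising model from MPSKitModels is also used in the examples further below). DMRG keeps the bond dimension of the initial guess fixed, while DMRG2 can grow it through its truncation scheme.
using MPSKit, MPSKitModels, TensorKit
g = 10.0
L = 16
H = transverse_field_ising(FiniteChain(L); g)
ψ₀ = FiniteMPS(L, ℂ^2, ℂ^16)
# single-site DMRG: the bond dimension of ψ₀ is kept fixed
ψ₁, envs₁, ϵ₁ = find_groundstate(ψ₀, H, DMRG(; maxiter=100))
# two-site DMRG: trscheme controls how the bond dimension may grow
ψ₂, envs₂, ϵ₂ = find_groundstate(ψ₀, H, DMRG2(; trscheme=truncdim(32)))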
For infinite systems, a similar approach can be used by dynamically adding new sites to the middle of the system and optimizing over them. This gradually increases the system size until the boundary effects are no longer felt. However, because of this approach, for critical systems this algorithm can be quite slow to converge, since the number of steps needs to be larger than the correlation length of the system. Again, both a single-site and a two-site version are implemented, to have the option of dynamically increasing the bond dimension at a higher cost.
MPSKit.IDMRG
— Type
struct IDMRG{A} <: MPSKit.Algorithm
Single-site infinite DMRG algorithm for finding the dominant eigenvector.
Fields
tol::Float64: tolerance for the convergence criterion
maxiter::Int64: maximal number of iterations
verbosity::Int64: setting for how much information is displayed
alg_gauge::Any: algorithm used for gauging the MPS
alg_eigsolve::Any: algorithm used for the eigenvalue solvers
MPSKit.IDMRG2
— Type
struct IDMRG2{A} <: MPSKit.Algorithm
Two-site infinite DMRG algorithm for finding the dominant eigenvector.
Fields
tol::Float64: tolerance for the convergence criterion
maxiter::Int64: maximal number of iterations
verbosity::Int64: setting for how much information is displayed
alg_gauge::Any: algorithm used for gauging the MPS
alg_eigsolve::Any: algorithm used for the eigenvalue solvers
trscheme::TensorKit.TruncationScheme: algorithm used for truncation of the two-site update
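As a minimal sketch (the keyword constructor is assumed from the field list above; note that the two-site update requires a unit cell of at least two sites):
using MPSKit, MPSKitModels, TensorKit
H = heisenberg_XXX()
H₂ = repeat(H, 2)  # repeat the Hamiltonian over a two-site unit cell
ψ₀ = InfiniteMPS([ℂ^3, ℂ^3], [ℂ^32, ℂ^32])
ψ, envs, ϵ = find_groundstate(ψ₀, H₂, IDMRG2(; trscheme=truncdim(48)))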
VUMPS
VUMPS is an (I)DMRG-inspired algorithm that can be used to Variationally find the groundstate as a Uniform (infinite) Matrix Product State. In particular, a local update is followed by a re-gauging procedure that effectively replaces the entire network with the newly updated tensor. Compared to IDMRG, this often achieves a higher rate of convergence, since updates are felt throughout the system immediately. Nevertheless, this algorithm only works when the state is injective, i.e. when there is a unique ground state. Since this is a single-site algorithm, it cannot alter the bond dimension.
MPSKit.VUMPS
— Type
struct VUMPS{F} <: MPSKit.Algorithm
Variational optimization algorithm for uniform matrix product states, based on the combination of DMRG with matrix product state tangent space concepts.
Fields
tol::Float64: tolerance for the convergence criterion
maxiter::Int64: maximal number of iterations
verbosity::Int64: setting for how much information is displayed
alg_gauge::Any: algorithm used for gauging the InfiniteMPS
alg_eigsolve::Any: algorithm used for the eigenvalue solvers
alg_environments::Any: algorithm used for the MPS environments
finalize::Any: callback function applied after each iteration, of signature finalize(iter, ψ, H, envs) -> ψ, envs
References
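A minimal usage sketch, reusing the Heisenberg setup from the excitations examples below:
using MPSKit, MPSKitModels, TensorKit
H = heisenberg_XXX()
ψ₀ = InfiniteMPS(ℂ^3, ℂ^48)
# single-site variational optimization; the bond dimension stays at 48
ψ, envs, ϵ = find_groundstate(ψ₀, H, VUMPS(; tol=1e-10, maxiter=200))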
Gradient descent
Both finite and infinite matrix product states can be parametrized by a set of isometric tensors, which we can optimize over. By making use of the geometry of the manifold (a Grassmann manifold), we can greatly outperform naive optimization strategies. Compared to the other algorithms, the convergence rate in the tail of the optimization procedure is often higher, such that the fastest method frequently combines a different algorithm far from convergence with this one close to convergence. Since this is again a single-site algorithm, there is no way to alter the bond dimension.
MPSKit.GradientGrassmann
— Type
struct GradientGrassmann{O<:OptimKit.OptimizationAlgorithm, F} <: MPSKit.Algorithm
Variational gradient-based optimization algorithm that keeps the MPS in left-canonical form, as points on a Grassmann manifold. The optimization is then a Riemannian gradient descent with a preconditioner to induce the metric from the Hilbert space inner product.
Fields
method::OptimKit.OptimizationAlgorithm: optimization algorithm
finalize!::Any: callback function applied after each iteration, of signature finalize!(x, f, g, numiter) -> x, f, g
References
Constructors
GradientGrassmann(; kwargs...)
Keywords
method=ConjugateGradient: instance of optimization algorithm, or type of optimization algorithm to construct
finalize!: finalizer algorithm
tol::Float64: tolerance for the convergence criterion
maxiter::Int: maximum number of iterations
verbosity::Int: level of information display
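In practice, this suggests a two-stage approach, sketched below (keyword values are arbitrary choices):
using MPSKit, MPSKitModels, TensorKit
H = heisenberg_XXX()
ψ₀ = InfiniteMPS(ℂ^3, ℂ^48)
# far from convergence: a few cheap VUMPS sweeps
ψ, envs, = find_groundstate(ψ₀, H, VUMPS(; tol=1e-6, maxiter=50))
# close to convergence: polish with Riemannian gradient descent
ψ, envs, ϵ = find_groundstate(ψ, H, GradientGrassmann(; tol=1e-12), envs)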
Time evolution
Given a particular state, it can also often be useful to have the ability to examine the evolution of certain properties over time. To that end, there are two main approaches to solving the Schrödinger equation in MPSKit.
\[i \frac{d}{dt} \Psi = H \Psi \implies \Psi(t) = \exp{\left(-iH(t - t_0)\right)} \Psi(t_0)\]
MPSKit.timestep
— Function
timestep(ψ₀, H, t, dt, [alg], [envs]; kwargs...) -> (ψ, envs)
timestep!(ψ₀, H, t, dt, [alg], [envs]; kwargs...) -> (ψ₀, envs)
Time-step the state ψ₀ with Hamiltonian H over a given time step dt at time t, solving the Schrödinger equation $i ∂ψ/∂t = H ψ$.
Arguments
ψ₀::AbstractMPS: initial state
H::AbstractMPO: operator that generates the time evolution (can be time-dependent)
t::Number: starting time of the time-step
dt::Number: time-step magnitude
[alg]: algorithm to use for the time evolution, defaults to TDVP
[envs]: MPS environment manager
MPSKit.time_evolve
— Function
time_evolve(ψ₀, H, t_span, [alg], [envs]; kwargs...) -> (ψ, envs)
time_evolve!(ψ₀, H, t_span, [alg], [envs]; kwargs...) -> (ψ₀, envs)
Time-evolve the initial state ψ₀ with Hamiltonian H over a given time span, by stepping through each of the time points obtained by iterating t_span.
Arguments
ψ₀::AbstractMPS: initial state
H::AbstractMPO: operator that generates the time evolution (can be time-dependent)
t_span::AbstractVector{<:Number}: time points over which the time evolution is stepped
[alg]: algorithm to use for the time evolution, defaults to TDVP
[envs]: MPS environment manager
MPSKit.make_time_mpo
— Function
make_time_mpo(H::MPOHamiltonian, dt::Number, alg)
Construct an MPO that approximates $\exp(-iH dt)$.
TDVP
The first is focused on approximately solving the equation for a small timestep, and repeating this until the desired evolution is achieved. This can be done by projecting the equation onto the tangent space of the MPS, and then solving the resulting equations. This procedure is commonly referred to as the TDVP algorithm, which again has a two-site variant to allow for dynamically altering the bond dimension.
MPSKit.TDVP
— Type
struct TDVP{A, F} <: MPSKit.Algorithm
Single-site MPS time-evolution algorithm based on the Time-Dependent Variational Principle.
Fields
integrator::Any: algorithm used in the exponential solvers
tolgauge::Float64: tolerance for the gauging algorithm
gaugemaxiter::Int64: maximal number of iterations for the gauging algorithm
finalize::Any: callback function applied after each iteration, of signature finalize(iter, ψ, H, envs) -> ψ, envs
References
MPSKit.TDVP2
— Type
struct TDVP2{A, F} <: MPSKit.Algorithm
Two-site MPS time-evolution algorithm based on the Time-Dependent Variational Principle.
Fields
integrator::Any: algorithm used in the exponential solvers
tolgauge::Float64: tolerance for the gauging algorithm
gaugemaxiter::Int64: maximal number of iterations for the gauging algorithm
trscheme::TensorKit.TruncationScheme: algorithm used for truncation of the two-site update
finalize::Any: callback function applied after each iteration, of signature finalize(iter, ψ, H, envs) -> ψ, envs
References
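As a sketch, evolving a finite state with TDVP and TDVP2 might look as follows (model, time step, and truncation are arbitrary choices):
using MPSKit, MPSKitModels, TensorKit
L = 16
H = transverse_field_ising(FiniteChain(L); g=1.5)
ψ = FiniteMPS(L, ℂ^2, ℂ^32)
# a single step from t = 0 over dt = 0.05; the bond dimension stays fixed
ψ, envs = timestep(ψ, H, 0.0, 0.05, TDVP())
# stepping through a full time span; TDVP2 can grow the bond dimension
ψ, envs = time_evolve(ψ, H, 0:0.05:1.0, TDVP2(; trscheme=truncdim(64)))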
Time evolution MPO
The other approach instead tries to first approximately represent the evolution operator, and only then applies this operator to the initial state. Typically, the first step happens through make_time_mpo, while the second can be achieved through approximate. Here, several algorithms are available:
MPSKit.WI
— Constant
const WI = TaylorCluster(; N=1, extension=false, compression=false)
First-order Taylor expansion for a time-evolution MPO.
MPSKit.WII
— Type
struct WII <: MPSKit.Algorithm
Generalization of the Euler approximation of the operator exponential for MPOs.
Fields
tol::Float64: tolerance for the convergence criterion
maxiter::Int64: maximal number of iterations
References
MPSKit.TaylorCluster
— Type
struct TaylorCluster <: MPSKit.Algorithm
Algorithm for constructing the Nth-order time-evolution MPO using the Taylor cluster expansion.
Fields
N::Int64: order of the Taylor expansion
extension::Bool: include higher-order corrections
compression::Bool: approximate compression of corrections, accurate up to order N
References
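For example, the different schemes can be compared as follows (a sketch; the keyword constructors are assumed from the field lists above). The resulting MPO can then be applied to a state with approximate, as discussed further below.
using MPSKit, MPSKitModels
H = transverse_field_ising(; g=1.5)  # infinite transverse-field Ising Hamiltonian
dt = 0.05
U₁ = make_time_mpo(H, dt, WI)     # first-order Taylor expansion
U₂ = make_time_mpo(H, dt, WII())  # WII generalization
U₃ = make_time_mpo(H, dt, TaylorCluster(; N=3, extension=true, compression=true))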
Excitations
It might also be desirable to obtain information beyond the lowest energy state of a given system, and study the dispersion relation. While it is typically not feasible to resolve states in the middle of the energy spectrum, there are several ways to target a few of the lowest-lying energy states.
MPSKit.excitations
— Function
excitations(H, algorithm::QuasiparticleAnsatz, ψ::FiniteQP, [left_environments],
            [right_environments]; num=1) -> (energies, states)
excitations(H, algorithm::QuasiparticleAnsatz, ψ::InfiniteQP, [left_environments],
            [right_environments]; num=1, solver=Defaults.solver) -> (energies, states)
excitations(H, algorithm::FiniteExcited, ψs::NTuple{<:Any, <:FiniteMPS};
            num=1, init=copy(first(ψs))) -> (energies, states)
excitations(H, algorithm::ChepigaAnsatz, ψ::FiniteMPS, [envs];
            num=1, pos=length(ψ)÷2) -> (energies, states)
excitations(H, algorithm::ChepigaAnsatz2, ψ::FiniteMPS, [envs];
            num=1, pos=length(ψ)÷2) -> (energies, states)
Compute the first excited states and their energy gap above a groundstate.
Arguments
H::AbstractMPO: operator for which to find the excitations
algorithm: optimization algorithm
ψ::QP: initial quasiparticle guess
ψs::NTuple{N, <:FiniteMPS}: N first excited states
[left_environments]: left groundstate environment
[right_environments]: right groundstate environment
Keywords
num::Int: number of excited states to compute
solver: algorithm for the linear solver of the quasiparticle environments
init: initial excited state guess
pos: position of perturbation
Quasiparticle Ansatz
The Quasiparticle Ansatz offers an approach to compute low-energy eigenstates in quantum systems, playing a key role in both finite and infinite systems. It leverages localized perturbations for approximations, as detailed in (Haegeman et al., 2013).
Finite Systems:
In finite systems, we approximate low-energy states by altering a single tensor in the Matrix Product State (MPS) for each site, and summing these across all sites. This method introduces additional gauge freedoms, which are utilized to ensure orthogonality to the ground state. Optimizing within this framework translates to solving an eigenvalue problem. For example, in the transverse field Ising model, we calculate the first excited state as shown in the provided code snippet, and check the accuracy against theoretical values. Some deviations are expected, due to both finite-bond-dimension and finite-size effects.
# Model parameters
g = 10.0
L = 16
H = transverse_field_ising(FiniteChain(L); g)
# Finding the ground state
ψ₀ = FiniteMPS(L, ℂ^2, ℂ^32)
ψ, = find_groundstate(ψ₀, H; verbosity=0)
# Computing excitations using the Quasiparticle Ansatz
Es, ϕs = excitations(H, QuasiparticleAnsatz(), ψ; num=1)
isapprox(Es[1], 2(g - 1); rtol=1e-2)
true
Infinite Systems:
The ansatz in infinite systems maintains translational invariance by perturbing every site in the unit cell in a plane-wave superposition, requiring momentum specification. The Haldane gap computation in the Heisenberg model illustrates this approach.
# Setting up the model and momentum
momentum = π
H = heisenberg_XXX()
# Ground state computation
ψ₀ = InfiniteMPS(ℂ^3, ℂ^48)
ψ, = find_groundstate(ψ₀, H; verbosity=0)
# Excitation calculations
Es, ϕs = excitations(H, QuasiparticleAnsatz(), momentum, ψ)
isapprox(Es[1], 0.41047925; atol=1e-4)
true
Charged excitations:
When dealing with symmetric systems, the default optimization is for eigenvectors with trivial total charge. However, quasiparticles with different charges can be obtained using the sector keyword. For instance, in the transverse field Ising model, we consider an excitation built up by flipping a single spin, corresponding to Z2Irrep(1).
g = 10.0
L = 16
H = transverse_field_ising(Z2Irrep, FiniteChain(L); g)
ψ₀ = FiniteMPS(L, Z2Space(0 => 1, 1 => 1), Z2Space(0 => 16, 1 => 16))
ψ, = find_groundstate(ψ₀, H; verbosity=0)
Es, ϕs = excitations(H, QuasiparticleAnsatz(), ψ; num=1, sector=Z2Irrep(1))
isapprox(Es[1], 2(g - 1); rtol=1e-2) # infinite analytical result
true
MPSKit.QuasiparticleAnsatz
— Type
struct QuasiparticleAnsatz{A} <: MPSKit.Algorithm
Optimization algorithm for quasi-particle excitations on top of MPS groundstates.
Fields
alg::Any: algorithm used for the eigenvalue solvers
Constructors
QuasiparticleAnsatz()
QuasiparticleAnsatz(; kwargs...)
QuasiparticleAnsatz(alg)
Create a QuasiparticleAnsatz algorithm with the given algorithm, or by passing the keyword arguments to Arnoldi.
References
Finite excitations
For finite systems we can also do something else: find the groundstate of the Hamiltonian $H + \text{weight} \sum_i | \psi_i ⟩ ⟨ \psi_i |$. This is also supported by calling
# Model parameters
g = 10.0
L = 16
H = transverse_field_ising(FiniteChain(L); g)
# Finding the ground state
ψ₀ = FiniteMPS(L, ℂ^2, ℂ^32)
ψ, envs, = find_groundstate(ψ₀, H; verbosity=0)
E₀ = real(sum(expectation_value(ψ, H, envs)))
# FiniteExcited returns total energies rather than gaps,
# so we subtract the groundstate energy before comparing
Es, ϕs = excitations(H, FiniteExcited(), ψ; num=1)
isapprox(Es[1] - E₀, 2(g - 1); rtol=1e-2)
true
MPSKit.FiniteExcited
— Type
struct FiniteExcited{A} <: MPSKit.Algorithm
Variational optimization algorithm for excitations of finite MPS, by minimizing the energy of
\[H - λᵢ |ψᵢ⟩⟨ψᵢ|\]
Fields
gsalg::Any: optimization algorithm
weight::Float64: energy penalty for enforcing orthogonality with the previous states
"Chepiga Ansatz"
Computing excitations in critical systems poses a significant challenge due to the diverging correlation length, which requires very large bond dimensions. However, we can leverage this long-range correlation to effectively identify excitations. In this context, the left/right-gauged MPS tensors, serving as isometries, effectively project the Hamiltonian into the low-energy sector. This projection method is particularly effective in long-range systems, where excitations are distributed throughout the entire system. Consequently, the low-lying energy spectrum can be extracted by diagonalizing the effective Hamiltonian (without any additional DMRG costs!). The states of these excitations are then represented by the ground state MPS, with one site substituted by the corresponding eigenvector. This approach is often referred to as the 'Chepiga ansatz', named after one of the authors of the paper in which it was proposed (Chepiga and Mila, 2017).
This is supported via the following syntax:
g = 10.0
L = 16
H = transverse_field_ising(FiniteChain(L); g)
ψ₀ = FiniteMPS(L, ComplexSpace(2), ComplexSpace(32))
ψ, envs, = find_groundstate(ψ₀, H; verbosity=0)
E₀ = real(sum(expectation_value(ψ, H, envs)))
Es, ϕs = excitations(H, ChepigaAnsatz(), ψ, envs; num=1)
isapprox(Es[1] - E₀, 2(g - 1); rtol=1e-2) # infinite analytical result
true
In order to improve the accuracy, a two-site version also exists, which varies two neighbouring sites:
Es, ϕs = excitations(H, ChepigaAnsatz2(), ψ, envs; num=1)
isapprox(Es[1] - E₀, 2(g - 1); rtol=1e-2) # infinite analytical result
true
changebonds
Many of the previously mentioned algorithms do not possess a way to dynamically change the bond dimension. This is often a problem, as the optimal bond dimension is not known a priori, or may need to increase because of entanglement growth throughout the course of a simulation. changebonds exposes a way to change the bond dimension of a given state.
MPSKit.changebonds
— Function
changebonds(ψ::AbstractMPS, H, alg, envs) -> ψ′, envs′
changebonds(ψ::AbstractMPS, alg) -> ψ′
Change the bond dimension of ψ using the algorithm alg, and return the new ψ and the new envs.
See also: SvdCut, RandExpand, VUMPSSvdCut, OptimalExpand
There are several different algorithms implemented, each having their own advantages and disadvantages:
SvdCut: The simplest method for changing the bond dimension is to locally truncate the state using an SVD decomposition. This yields a (locally) optimal truncation, but clearly cannot be used to increase the bond dimension. Note that a globally optimal truncation can be obtained by using the SvdCut algorithm in combination with approximate. Since the output of this method might have a truncated bond dimension, the new state might not be identical to the input state. The truncation is controlled through trscheme, which dictates how the singular values of the original state are truncated.
OptimalExpand: This algorithm is based on the idea of expanding the bond dimension by investigating the two-site derivative, and adding the most important blocks which are orthogonal to the current state. From the point of view of a local two-site update, this procedure is optimal, but it requires evaluating a two-site derivative, which can be costly when the physical space is large. The state will remain unchanged, but a one-site scheme will now be able to push the optimization further. The subspace used for expansion can be truncated through trscheme, which dictates how many singular values will be added.
RandExpand: This algorithm similarly adds blocks orthogonal to the current state, but does not attempt to select the most important ones, instead selecting them at random. The advantage here is that this is much cheaper than the optimal expansion, and if the bond dimension is grown slowly enough, this still yields a very good expansion scheme. Again, the state will remain unchanged, and a one-site scheme will now be able to push the optimization further. The subspace used for expansion can be truncated through trscheme, which dictates how many singular values will be added.
VUMPSSvdCut: This algorithm is based on the VUMPS algorithm, and consists of performing a two-site update, and then truncating the state back down. Because of the two-site update, this can again become expensive, but the algorithm has the option of both expanding as well as truncating the bond dimension. Here, trscheme controls the truncation of the full state after the two-site update.
Leading boundary
For statistical mechanics partition functions, we want to find the approximate leading boundary MPS. Again, this can be done with VUMPS:
th = nonsym_ising_mpo()
ts = InfiniteMPS([ℂ^2], [ℂ^20])
ts, envs, = leading_boundary(ts, th, VUMPS(; maxiter=400, verbosity=0))
If the MPO satisfies certain properties (it is positive and Hermitian), it may also be possible to use GradientGrassmann.
MPSKit.leading_boundary
— Function
leading_boundary(ψ₀, O, [environments]; kwargs...) -> (ψ, environments, ϵ)
leading_boundary(ψ₀, O, algorithm, environments) -> (ψ, environments, ϵ)
Compute the leading boundary MPS for operator O with initial guess ψ₀. If not specified, an optimization algorithm will be attempted based on the supplied keywords.
Arguments
ψ₀::AbstractMPS: initial guess
O::AbstractMPO: operator for which to find the leading boundary
[environments]: MPS environment manager
algorithm: optimization algorithm
Keywords
tol::Float64: tolerance for the convergence criterion
maxiter::Int: maximum number of iterations
verbosity::Int: display progress information
Returns
ψ::AbstractMPS: converged leading boundary MPS
environments: environments corresponding to the converged boundary
ϵ::Float64: final convergence error upon terminating the algorithm
approximate
Often, it is useful to approximate a given MPS by another, typically one with a different bond dimension. This is achieved by approximating the application of an MPO to the initial state by a new state.
MPSKit.approximate
— Function
approximate(ψ₀, (O, ψ), algorithm, [environments]; kwargs...) -> (ψ, environments)
approximate!(ψ₀, (O, ψ), algorithm, [environments]; kwargs...) -> (ψ, environments)
Compute an approximation to the application of an operator O to the state ψ, in the form of an MPS ψ₀.
Arguments
ψ₀::AbstractMPS: initial guess of the approximated state
(O::AbstractMPO, ψ::AbstractMPS): operator O and state ψ to be approximated
algorithm: approximation algorithm. See below for a list of available algorithms.
[environments]: MPS environment manager
Keywords
tol::Float64: tolerance for the convergence criterion
maxiter::Int: maximum number of iterations
verbosity::Int: display progress information
Algorithms
DMRG: Alternating least-squares method for maximizing the fidelity with a single-site scheme.
DMRG2: Alternating least-squares method for maximizing the fidelity with a two-site scheme.
IDMRG: Variant of DMRG for maximizing the fidelity density in the thermodynamic limit.
IDMRG2: Variant of DMRG2 for maximizing the fidelity density in the thermodynamic limit.
VOMPS: Tangent-space method for truncating uniform MPS.
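For instance, a time-evolved infinite state can be compressed back down to a fixed bond dimension. The following sketch assumes the default VOMPS constructor applies; the target bond dimension is set by the initial guess.
using MPSKit, MPSKitModels, TensorKit
H = transverse_field_ising(; g=1.5)
ψ = InfiniteMPS(ℂ^2, ℂ^32)
U = make_time_mpo(H, 0.05, WII())
ψ₀ = InfiniteMPS(ℂ^2, ℂ^16)  # guess with the desired (smaller) bond dimension
ψ_trunc, envs = approximate(ψ₀, (U, ψ), VOMPS())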
Varia
What follows is a medley of lesser-known (or less used) algorithms that don't entirely fit under one of the above categories.
Dynamical DMRG
Dynamical DMRG has been described in other papers and is a way to find the propagator. The basic idea is that to calculate $G(z) = ⟨ V | (H - z)^{-1} | V ⟩$, one can variationally solve $(H - z) |W⟩ = |V⟩$ for $|W⟩$, after which the propagator simply equals $G(z) = ⟨ V | W ⟩$.
MPSKit.propagator
— Function
propagator(ψ₀::AbstractFiniteMPS, z::Number, H::MPOHamiltonian, alg::DynamicalDMRG; init=copy(ψ₀))
Calculate the propagator $\frac{1}{E₀ + z - H}|ψ₀⟩$ using the dynamical DMRG algorithm.
MPSKit.DynamicalDMRG
— Type
struct DynamicalDMRG{F<:MPSKit.DDMRG_Flavour, S} <: MPSKit.Algorithm
A dynamical DMRG method for calculating dynamical properties and excited states, based on a variational principle for dynamical correlation functions.
Fields
flavour::MPSKit.DDMRG_Flavour: flavour of the algorithm to use, either of type NaiveInvert or Jeckelmann
solver::Any: algorithm used for the linear solvers
tol::Float64: tolerance for the convergence criterion
maxiter::Int64: maximal number of iterations
verbosity::Int64: setting for how much information is displayed
References
MPSKit.NaiveInvert
— Type
struct NaiveInvert <: MPSKit.DDMRG_Flavour
An alternative approach to the dynamical DMRG algorithm, without quadratic terms but with a less controlled approximation. This algorithm minimizes the following cost function:
\[⟨ψ|(H - E)|ψ⟩ - ⟨ψ|ψ₀⟩ - ⟨ψ₀|ψ⟩\]
which is equivalent to the original approach if
\[|ψ₀⟩ = (H - E)|ψ⟩\]
See also Jeckelmann for the original approach.
MPSKit.Jeckelmann
— Type
struct Jeckelmann <: MPSKit.DDMRG_Flavour
The original flavour of dynamical DMRG, which minimizes the following (quadratic) cost function:
\[|| (H - E) |ψ⟩ - |ψ₀⟩ ||\]
See also NaiveInvert for a less costly but less accurate alternative.
References
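A minimal sketch of its use (the flavour keyword is assumed from the field list above; the frequency and broadening are arbitrary):
using MPSKit, MPSKitModels, TensorKit
L = 16
H = transverse_field_ising(FiniteChain(L); g=10.0)
ψ, = find_groundstate(FiniteMPS(L, ℂ^2, ℂ^32), H; verbosity=0)
# evaluate G(z) at a frequency with a small imaginary broadening
z = 0.5 + 0.1im
G, = propagator(ψ, z, H, DynamicalDMRG(; flavour=NaiveInvert()))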
Fidelity susceptibility
The fidelity susceptibility measures how much the groundstate changes when tuning a parameter in the Hamiltonian. Divergences occur at phase transitions, making it a valuable measure when no order parameter is known.
MPSKit.fidelity_susceptibility
— Function
fidelity_susceptibility(state::Union{FiniteMPS,InfiniteMPS}, H₀::T,
                        Vs::AbstractVector{T}, [henvs=environments(state, H₀)];
                        maxiter=Defaults.maxiter, tol=Defaults.tol) where {T<:MPOHamiltonian}
Computes the fidelity susceptibility of the ground state state of a base Hamiltonian H₀ with respect to a set of perturbing Hamiltonians Vs. Each of the perturbing Hamiltonians can be interpreted as corresponding to a tuning parameter $aᵢ$ in a 'total' Hamiltonian $H = H₀ + ∑ᵢ aᵢ Vᵢ$.
Returns a matrix containing the overlaps of the elementary excitations on top of state corresponding to each of the perturbing Hamiltonians.
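As a sketch, one could probe the transverse-field Ising transition as follows; the perturbing Hamiltonian is built here with MPSKitModels' @mpoham macro, and this particular construction is an assumption rather than part of the API above:
using MPSKit, MPSKitModels, TensorKit
H₀ = transverse_field_ising(; g=1.0)
# perturbing Hamiltonian: the transverse-field term on its own
V = @mpoham sum(σˣ(){i} for i in vertices(InfiniteChain(1)))
ψ, = find_groundstate(InfiniteMPS(ℂ^2, ℂ^32), H₀; verbosity=0)
χ = fidelity_susceptibility(ψ, H₀, [V])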
Boundary conditions
You can impose periodic or open boundary conditions on an infinite Hamiltonian, to generate a finite counterpart. In particular, for periodic boundary conditions we still return an MPO that does not form a closed loop, such that it can be used with regular matrix product states. This is straightforward to implement, and while it effectively squares the bond dimension, it is still competitive with more advanced periodic MPS algorithms.
MPSKit.open_boundary_conditions
— Function
open_boundary_conditions(mpo::InfiniteMPO, L::Int) -> FiniteMPO
open_boundary_conditions(mpo::InfiniteMPOHamiltonian, L::Int) -> FiniteMPOHamiltonian
Convert an infinite MPO into a finite MPO of length L, by applying open boundary conditions.
MPSKit.periodic_boundary_conditions
— Function
periodic_boundary_conditions(mpo::AbstractInfiniteMPO, L::Int)
Convert an infinite MPO into a finite MPO of length L, by mapping the periodic boundary conditions onto an open system.
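For example (a sketch, assuming the default infinite lattice of transverse_field_ising):
using MPSKit, MPSKitModels
H∞ = transverse_field_ising(; g=1.0)               # infinite Hamiltonian
H_open = open_boundary_conditions(H∞, 10)          # finite chain, open boundaries
H_periodic = periodic_boundary_conditions(H∞, 10)  # finite chain, periodic boundaries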
Exact diagonalization
As a side effect, our code supports exact diagonalization. The idea is to construct a finite matrix product state with maximal bond dimension, and then optimize the middle site. Because we never truncate the bond dimension, this single site effectively parametrizes the entire Hilbert space.
MPSKit.exact_diagonalization
— Function
exact_diagonalization(opp::FiniteMPOHamiltonian;
                      sector=first(sectors(oneunit(physicalspace(opp, 1)))),
                      len::Int=length(opp), num::Int=1, which::Symbol=:SR,
                      alg=Defaults.alg_eigsolve(; dynamic_tols=false)) -> vals, state_vecs, convhist
Use KrylovKit.eigsolve to perform exact diagonalization on a FiniteMPOHamiltonian, finding its eigenvectors as FiniteMPS of maximal rank, essentially equivalent to dense eigenvectors.
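As a sketch, computing the three lowest eigenstates of a short chain:
using MPSKit, MPSKitModels
L = 10
H = transverse_field_ising(FiniteChain(L); g=1.0)
# three lowest eigenvalues (:SR = smallest real part) and their MPS eigenvectors
vals, states, convhist = exact_diagonalization(H; num=3, which=:SR)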