Decompositions
A rather large class of matrix algebra methods consists of taking a single input A and determining some factorization of that input. To streamline these functions, they all follow a common code pattern. For a given factorization f, this consists of the following methods:
f(A; kwargs...) -> F...
f!(A, [F]; kwargs...) -> F...
Here, the input matrix is always the first argument, and optionally the output can be provided as well. The keywords are algorithm-specific, and can be used to influence the behavior of the algorithms. Importantly, for generic code patterns it is recommended to always use the output F explicitly, since some implementations may not be able to reuse the provided memory. Additionally, the f! method typically assumes that it is allowed to destroy the input A, and relying on the contents of A afterwards is undefined behavior.
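For example, with qr_compact standing in for a generic f (a minimal sketch of the calling pattern; the actual keywords are factorization-specific):
using MatrixAlgebraKit

A = randn(4, 3)

# Out-of-place call: the input A is preserved.
Q, R = qr_compact(A)

# In-place call: the input may be destroyed, so always rebind to the returned values.
Q, R = qr_compact!(copy(A))
A ≈ Q * R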
QR and LQ Decompositions
The QR decomposition transforms a matrix A into a product Q * R, where Q is orthonormal and R upper triangular. This is often used to solve linear least squares problems, or construct orthogonal bases, since it is typically less expensive than the Singular Value Decomposition. If the input A is invertible, Q and R are unique if we require the diagonal elements of R to be positive.
For rectangular matrices A of size (m, n), there are two modes of operation, qr_full and qr_compact. The former ensures that the resulting Q is a square unitary matrix of size (m, m), while the latter creates an isometric Q of size (m, min(m, n)).
Similarly, the LQ decomposition transforms a matrix A into a product L * Q, where L is lower triangular and Q orthonormal. This is equivalent to the transpose of the QR decomposition of the transpose matrix, but can be computed directly. Again there are two modes of operation, lq_full and lq_compact, with the same behavior as the QR decomposition.
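For example, for a tall input the two modes produce factors of the following sizes (a minimal sketch):
using MatrixAlgebraKit

A = randn(5, 3)             # m = 5, n = 3

Q, R = qr_full(A)           # Q is 5 × 5 unitary, R is 5 × 3 upper triangular
Qc, Rc = qr_compact(A)      # Qc is 5 × 3 isometric, Rc is 3 × 3 upper triangular
L, Qlq = lq_compact(A)      # L is 5 × 3 lower triangular, Qlq is 3 × 3

A ≈ Q * R ≈ Qc * Rc ≈ L * Qlq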
MatrixAlgebraKit.qr_full — Function
qr_full(A; kwargs...) -> Q, R
qr_full(A, alg::AbstractAlgorithm) -> Q, R
qr_full!(A, [QR]; kwargs...) -> Q, R
qr_full!(A, [QR], alg::AbstractAlgorithm) -> Q, R
Compute the full QR decomposition of the rectangular matrix A, such that A = Q * R where Q is a square unitary matrix with the same number of rows as A and R is an upper triangular matrix with the same size as A.
The bang method qr_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided QR as output.
See also qr_compact(!).
MatrixAlgebraKit.qr_compact — Function
qr_compact(A; kwargs...) -> Q, R
qr_compact(A, alg::AbstractAlgorithm) -> Q, R
qr_compact!(A, [QR]; kwargs...) -> Q, R
qr_compact!(A, [QR], alg::AbstractAlgorithm) -> Q, R
Compute the compact QR decomposition of the rectangular matrix A of size (m,n), such that A = Q * R where the isometric matrix Q of size (m, min(m,n)) has orthogonal columns spanning the image of A, and the matrix R of size (min(m,n), n) is upper triangular.
The bang method qr_compact! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided QR as output.
The compact QR decomposition is equivalent to the full QR decomposition when m >= n. Some algorithms may require m >= n.
See also qr_full(!).
MatrixAlgebraKit.lq_full — Function
lq_full(A; kwargs...) -> L, Q
lq_full(A, alg::AbstractAlgorithm) -> L, Q
lq_full!(A, [LQ]; kwargs...) -> L, Q
lq_full!(A, [LQ], alg::AbstractAlgorithm) -> L, Q
Compute the full LQ decomposition of the rectangular matrix A, such that A = L * Q where Q is a square unitary matrix with the same number of columns as A and L is a lower triangular matrix with the same size as A.
The bang method lq_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided LQ as output.
See also lq_compact(!).
MatrixAlgebraKit.lq_compact — Function
lq_compact(A; kwargs...) -> L, Q
lq_compact(A, alg::AbstractAlgorithm) -> L, Q
lq_compact!(A, [LQ]; kwargs...) -> L, Q
lq_compact!(A, [LQ], alg::AbstractAlgorithm) -> L, Q
Compute the compact LQ decomposition of the rectangular matrix A of size (m,n), such that A = L * Q where the matrix Q of size (min(m,n), n) has orthogonal rows spanning the image of A', and the matrix L of size (m, min(m,n)) is lower triangular.
The bang method lq_compact! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided LQ as output.
The compact LQ decomposition is equivalent to the full LQ decomposition when m >= n. Some algorithms may require m <= n.
See also lq_full(!).
Alongside these functions, a LAPACK-based implementation for dense arrays is provided by the following algorithms:
MatrixAlgebraKit.LAPACK_HouseholderQR — Type
LAPACK_HouseholderQR(; blocksize, positive = false, pivoted = false)
Algorithm type to denote the standard LAPACK algorithm for computing the QR decomposition of a matrix using Householder reflectors. The specific LAPACK function can be controlled using the keyword arguments: ?geqrt is chosen if blocksize > 1; with blocksize == 1, ?geqrf is chosen if pivoted == false and ?geqp3 if pivoted == true. The keyword positive = true can be used to ensure that the diagonal elements of R are non-negative.
MatrixAlgebraKit.LAPACK_HouseholderLQ — Type
LAPACK_HouseholderLQ(; blocksize, positive = false)
Algorithm type to denote the standard LAPACK algorithm for computing the LQ decomposition of a matrix using Householder reflectors. The specific LAPACK function can be controlled using the keyword arguments: ?gelqt is chosen if blocksize > 1 and ?gelqf if blocksize == 1. The keyword positive = true can be used to ensure that the diagonal elements of L are non-negative.
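For example, these algorithm types can be passed directly as the alg argument of the QR and LQ functions. The sketch below assumes the remaining constructor keywords take their default values:
using MatrixAlgebraKit
using LinearAlgebra

A = randn(6, 4)

# Explicitly select the Householder QR driver and request a non-negative diagonal for R.
Q, R = qr_compact(A, LAPACK_HouseholderQR(; positive = true))
A ≈ Q * R && all(>=(0), diag(R))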
Eigenvalue Decomposition
The Eigenvalue Decomposition transforms a square matrix A into a product V * D * V⁻¹. Equivalently, it finds V and D that satisfy A * V = V * D.
Not all matrices can be diagonalized, and some real matrices can only be diagonalized using complex arithmetic. In particular, the resulting decomposition can only be guaranteed to be real for real symmetric inputs A. Therefore, we provide eig_ and eigh_ variants, where eig always results in complex-valued V and D, while eigh requires symmetric (hermitian) inputs but retains the scalartype of the input.
The full set of eigenvalues and eigenvectors can be computed using the eig_full and eigh_full functions. If only the eigenvalues are required, the eig_vals and eigh_vals functions can be used. These functions return the diagonal elements of D in a vector.
Finally, it is also possible to compute a partial or truncated eigenvalue decomposition, using the eig_trunc and eigh_trunc functions. To control the behavior of the truncation, we refer to Truncations for more information.
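As a small illustration of the difference between the two variants (a minimal sketch):
using MatrixAlgebraKit

A = randn(4, 4)
H = (A + A') / 2            # hermitian (here real symmetric) input for the eigh_ variants

D, V = eig_full(A)          # D and V are complex-valued, even for real A
Dh, Vh = eigh_full(H)       # real eigenvalues, eigenvectors with the same eltype as H

A * V ≈ V * D && H * Vh ≈ Vh * Dh
eig_vals(A)                 # eigenvalues only, returned as a vector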
Symmetric Eigenvalue Decomposition
For symmetric matrices, we provide the following functions:
MatrixAlgebraKit.eigh_full — Function
eigh_full(A; kwargs...) -> D, V, ϵ
eigh_full(A, alg::AbstractAlgorithm) -> D, V, ϵ
eigh_full!(A, [DV]; kwargs...) -> D, V, ϵ
eigh_full!(A, [DV], alg::AbstractAlgorithm) -> D, V, ϵ
Compute the full eigenvalue decomposition of the symmetric or hermitian matrix A, such that A * V = V * D, where the unitary matrix V contains the orthogonal eigenvectors and the real diagonal matrix D contains the associated eigenvalues.
The function also returns ϵ, the truncation error defined as the 2-norm of the discarded eigenvalues.
The bang method eigh_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided DV as output.
Note that eigh_full and its variants assume that the input matrix is hermitian (symmetric if the input is real). The resulting algorithms exploit this structure, and return eigenvalues that are always real, and eigenvectors that are orthogonal and have the same eltype as the input matrix. If the input matrix does not have this structure, the generic eigenvalue decomposition provided by eig_full and its variants should be used instead.
See also eigh_vals(!) and eigh_trunc(!).
MatrixAlgebraKit.eigh_trunc — Function
eigh_trunc(A; [trunc], kwargs...) -> D, V, ϵ
eigh_trunc(A, alg::AbstractAlgorithm) -> D, V, ϵ
eigh_trunc!(A, [DV]; [trunc], kwargs...) -> D, V, ϵ
eigh_trunc!(A, [DV], alg::AbstractAlgorithm) -> D, V, ϵ
Compute a partial or truncated eigenvalue decomposition of the symmetric or hermitian matrix A, such that A * V ≈ V * D, where the isometric matrix V contains a subset of the orthogonal eigenvectors and the real diagonal matrix D contains the associated eigenvalues, selected according to a truncation strategy.
The function also returns ϵ, the truncation error defined as the 2-norm of the discarded eigenvalues.
Truncation
The truncation strategy can be controlled via the trunc keyword argument. This can be either a NamedTuple or a TruncationStrategy. If trunc is not provided or nothing, all values will be kept.
trunc::NamedTuple
The supported truncation keyword arguments are:
atol::Real: Absolute tolerance for the truncation
rtol::Real: Relative tolerance for the truncation
maxrank::Real: Maximal rank for the truncation
maxerror::Real: Maximal truncation error
filter: Custom filter to select truncated values
trunc::TruncationStrategy
For more control, a truncation strategy can be supplied directly. By default, MatrixAlgebraKit supplies the following:
Keyword arguments
Other keyword arguments are passed to the algorithm selection procedure. If no explicit alg is provided, these keywords are used to select and configure the algorithm through MatrixAlgebraKit.select_algorithm. The remaining keywords after algorithm selection are passed to the algorithm constructor. See MatrixAlgebraKit.default_algorithm for the default algorithm selection behavior.
When alg is a TruncatedAlgorithm, the trunc keyword cannot be specified as the truncation strategy is already embedded in the algorithm.
The bang method eigh_trunc! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided DV as output.
Note that eigh_full and its variants assume that the input matrix is hermitian (symmetric if the input is real). The resulting algorithms exploit this structure, and return eigenvalues that are always real, and eigenvectors that are orthogonal and have the same eltype as the input matrix. If the input matrix does not have this structure, the generic eigenvalue decomposition provided by eig_full and its variants should be used instead.
See also eigh_full(!), eigh_vals(!), and Truncations for more information on truncation strategies.
MatrixAlgebraKit.eigh_vals — Function
eigh_vals(A; kwargs...) -> D
eigh_vals(A, alg::AbstractAlgorithm) -> D
eigh_vals!(A, [D]; kwargs...) -> D
eigh_vals!(A, [D], alg::AbstractAlgorithm) -> D
Compute the list of (real) eigenvalues of the symmetric or hermitian matrix A.
The bang method eigh_vals! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided D as output.
Note that eigh_full and its variants assume that the input matrix is hermitian (symmetric if the input is real). The resulting algorithms exploit this structure, and return eigenvalues that are always real, and eigenvectors that are orthogonal and have the same eltype as the input matrix. If the input matrix does not have this structure, the generic eigenvalue decomposition provided by eig_full and its variants should be used instead.
See also eigh_full(!) and eigh_trunc(!).
The eigenvectors returned by these functions have residual phase degrees of freedom. By default, MatrixAlgebraKit applies a gauge fixing convention to ensure reproducible results. See Gauge choices for more details.
Alongside these functions, we provide a LAPACK-based implementation for dense arrays, as provided by the following algorithms:
MatrixAlgebraKit.LAPACK_Bisection — Type
LAPACK_Bisection(; fixgauge::Bool = true)
Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the Bisection algorithm. The fixgauge keyword can be used to toggle whether or not to fix the gauge of the eigen or singular vectors, see also gaugefix!.
MatrixAlgebraKit.LAPACK_DivideAndConquer — Type
LAPACK_DivideAndConquer(; fixgauge::Bool = true)
Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the Divide and Conquer algorithm. The fixgauge keyword can be used to toggle whether or not to fix the gauge of the eigen or singular vectors, see also gaugefix!.
MatrixAlgebraKit.LAPACK_MultipleRelativelyRobustRepresentations — Type
LAPACK_MultipleRelativelyRobustRepresentations(; fixgauge::Bool = true)
Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix using the Multiple Relatively Robust Representations algorithm. The fixgauge keyword can be used to toggle whether or not to fix the gauge of the eigenvectors, see also gaugefix!.
MatrixAlgebraKit.LAPACK_QRIteration — Type
LAPACK_QRIteration(; fixgauge::Bool = true)
Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the QR Iteration algorithm. The fixgauge keyword can be used to toggle whether or not to fix the gauge of the eigen or singular vectors, see also gaugefix!.
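For example, a specific driver can be requested by passing it as the alg argument (a minimal sketch using the default constructor keywords):
using MatrixAlgebraKit

B = randn(50, 50)
H = (B + B') / 2

D, V = eigh_full(H, LAPACK_MultipleRelativelyRobustRepresentations())
H * V ≈ V * D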
Eigenvalue Decomposition
For general matrices, we provide the following functions:
MatrixAlgebraKit.eig_full — Function
eig_full(A; kwargs...) -> D, V
eig_full(A, alg::AbstractAlgorithm) -> D, V
eig_full!(A, [DV]; kwargs...) -> D, V
eig_full!(A, [DV], alg::AbstractAlgorithm) -> D, V
Compute the full eigenvalue decomposition of the square matrix A, such that A * V = V * D, where the invertible matrix V contains the eigenvectors and the diagonal matrix D contains the associated eigenvalues.
The bang method eig_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided DV as output.
See also eig_vals(!) and eig_trunc(!).
MatrixAlgebraKit.eig_trunc — Function
eig_trunc(A; [trunc], kwargs...) -> D, V, ϵ
eig_trunc(A, alg::AbstractAlgorithm) -> D, V, ϵ
eig_trunc!(A, [DV]; [trunc], kwargs...) -> D, V, ϵ
eig_trunc!(A, [DV], alg::AbstractAlgorithm) -> D, V, ϵ
Compute a partial or truncated eigenvalue decomposition of the matrix A, such that A * V ≈ V * D, where the (possibly rectangular) matrix V contains a subset of eigenvectors and the diagonal matrix D contains the associated eigenvalues, selected according to a truncation strategy.
The function also returns ϵ, the truncation error defined as the 2-norm of the discarded eigenvalues.
Truncation
The truncation strategy can be controlled via the trunc keyword argument. This can be either a NamedTuple or a TruncationStrategy. If trunc is not provided or nothing, all values will be kept.
trunc::NamedTuple
The supported truncation keyword arguments are:
atol::Real: Absolute tolerance for the truncation
rtol::Real: Relative tolerance for the truncation
maxrank::Real: Maximal rank for the truncation
maxerror::Real: Maximal truncation error
filter: Custom filter to select truncated values
trunc::TruncationStrategy
For more control, a truncation strategy can be supplied directly. By default, MatrixAlgebraKit supplies the following:
Keyword Arguments
Other keyword arguments are passed to the algorithm selection procedure. If no explicit alg is provided, these keywords are used to select and configure the algorithm through MatrixAlgebraKit.select_algorithm. The remaining keywords after algorithm selection are passed to the algorithm constructor. See MatrixAlgebraKit.default_algorithm for the default algorithm selection behavior.
When alg is a TruncatedAlgorithm, the trunc keyword cannot be specified as the truncation strategy is already embedded in the algorithm.
The bang method eig_trunc! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided DV as output.
See also eig_full(!), eig_vals(!), and Truncations for more information on truncation strategies.
MatrixAlgebraKit.eig_vals — Function
eig_vals(A; kwargs...) -> D
eig_vals(A, alg::AbstractAlgorithm) -> D
eig_vals!(A, [D]; kwargs...) -> D
eig_vals!(A, [D], alg::AbstractAlgorithm) -> D
Compute the list of eigenvalues of A.
The bang method eig_vals! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided D as output.
See also eig_full(!) and eig_trunc(!).
The eigenvectors returned by these functions have residual phase degrees of freedom. By default, MatrixAlgebraKit applies a gauge fixing convention to ensure reproducible results. See Gauge choices for more details.
Alongside these functions, a LAPACK-based implementation for dense arrays is provided by the following algorithms:
Schur Decomposition
The Schur decomposition transforms a complex square matrix A into a product Q * T * Qᴴ, where Q is unitary and T is upper triangular. It rewrites an arbitrary complex square matrix as unitarily similar to an upper triangular matrix whose diagonal elements are the eigenvalues of A. For real matrices, the same decomposition can be achieved in real arithmetic by allowing T to be quasi-upper triangular, i.e. triangular with blocks of size (1, 1) and (2, 2) on the diagonal.
This decomposition is also useful for computing the eigenvalues of a matrix, which is exposed through the schur_vals function.
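For example (a minimal sketch):
using MatrixAlgebraKit

A = randn(4, 4)

T, Z, vals = schur_full(A)
A * Z ≈ Z * T               # Z is unitary (here real orthogonal), T is quasi-upper triangular
vals                        # complex eigenvalues, read off the (quasi-)diagonal of T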
MatrixAlgebraKit.schur_full — Function
schur_full(A; kwargs...) -> T, Z, vals
schur_full(A, alg::AbstractAlgorithm) -> T, Z, vals
schur_full!(A, [TZv]; kwargs...) -> T, Z, vals
schur_full!(A, [TZv], alg::AbstractAlgorithm) -> T, Z, vals
Compute the full Schur decomposition of the square matrix A, such that A * Z = Z * T, where the orthogonal or unitary matrix Z contains the Schur vectors and the square matrix T is upper triangular (in the complex case) or quasi-upper triangular (in the real case). The list vals contains the (complex-valued) eigenvalues of A, as extracted from the (quasi-)diagonal of T.
The bang method schur_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided TZv as output.
MatrixAlgebraKit.schur_vals — Function
schur_vals(A; kwargs...) -> vals
schur_vals(A, alg::AbstractAlgorithm) -> vals
schur_vals!(A, [vals]; kwargs...) -> vals
schur_vals!(A, [vals], alg::AbstractAlgorithm) -> vals
Compute the list of eigenvalues of A by computing the Schur decomposition of A.
The bang method schur_vals! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided vals as output.
See also eig_full(!) and eig_trunc(!).
The LAPACK-based implementation for dense arrays is provided by the following algorithms:
Singular Value Decomposition
The Singular Value Decomposition transforms a matrix A into a product U * Σ * Vᴴ, where U and Vᴴ are unitary and Σ is diagonal, real, and non-negative. If the singular values are distinct, the decomposition is unique up to the residual phases discussed in Gauge choices.
For rectangular matrices A of size (m, n), there are two modes of operation, svd_full and svd_compact. The former ensures that the resulting U and Vᴴ remain square unitary matrices of size (m, m) and (n, n), with a rectangular Σ of size (m, n). The latter creates an isometric U of size (m, min(m, n)) and V = (Vᴴ)' of size (n, min(m, n)), with a square Σ of size (min(m, n), min(m, n)).
It is also possible to compute the singular values only, using the svd_vals function. This then returns a vector of the values on the diagonal of Σ.
Finally, we also support computing a partial or truncated SVD, using the svd_trunc function.
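As a small sketch of the different modes for a tall input:
using MatrixAlgebraKit
using LinearAlgebra

A = randn(5, 3)

U, S, Vᴴ = svd_full(A)         # U is 5 × 5, S is 5 × 3, Vᴴ is 3 × 3
Uc, Sc, Vᴴc = svd_compact(A)   # Uc is 5 × 3, Sc is 3 × 3 diagonal, Vᴴc is 3 × 3

A ≈ U * S * Vᴴ ≈ Uc * Sc * Vᴴc
svd_vals(A) ≈ diag(Sc)         # the singular values as a vector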
MatrixAlgebraKit.svd_full — Function
svd_full(A; kwargs...) -> U, S, Vᴴ
svd_full(A, alg::AbstractAlgorithm) -> U, S, Vᴴ
svd_full!(A, [USVᴴ]; kwargs...) -> U, S, Vᴴ
svd_full!(A, [USVᴴ], alg::AbstractAlgorithm) -> U, S, Vᴴ
Compute the full singular value decomposition (SVD) of the rectangular matrix A of size (m, n), such that A = U * S * Vᴴ. Here, U and Vᴴ are unitary matrices of size (m, m) and (n, n) respectively, and S is a diagonal matrix of size (m, n).
The bang method svd_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided USVᴴ as output.
See also svd_compact(!), svd_vals(!) and svd_trunc(!).
MatrixAlgebraKit.svd_compact — Function
svd_compact(A; kwargs...) -> U, S, Vᴴ
svd_compact(A, alg::AbstractAlgorithm) -> U, S, Vᴴ
svd_compact!(A, [USVᴴ]; kwargs...) -> U, S, Vᴴ
svd_compact!(A, [USVᴴ], alg::AbstractAlgorithm) -> U, S, Vᴴ
Compute the compact singular value decomposition (SVD) of the rectangular matrix A of size (m, n), such that A = U * S * Vᴴ. Here, U is an isometric matrix (orthonormal columns) of size (m, k), whereas Vᴴ is a matrix of size (k, n) with orthonormal rows and S is a square diagonal matrix of size (k, k), with k = min(m, n).
The bang method svd_compact! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided USVᴴ as output.
See also svd_full(!), svd_vals(!) and svd_trunc(!).
MatrixAlgebraKit.svd_vals — Function
svd_vals(A; kwargs...) -> S
svd_vals(A, alg::AbstractAlgorithm) -> S
svd_vals!(A, [S]; kwargs...) -> S
svd_vals!(A, [S], alg::AbstractAlgorithm) -> S
Compute the vector of singular values of A, such that for an M×N matrix A, S is a vector of size K = min(M, N), the number of kept singular values.
See also svd_full(!), svd_compact(!) and svd_trunc(!).
MatrixAlgebraKit.svd_trunc — Function
svd_trunc(A; [trunc], kwargs...) -> U, S, Vᴴ, ϵ
svd_trunc(A, alg::AbstractAlgorithm) -> U, S, Vᴴ, ϵ
svd_trunc!(A, [USVᴴ]; [trunc], kwargs...) -> U, S, Vᴴ, ϵ
svd_trunc!(A, [USVᴴ], alg::AbstractAlgorithm) -> U, S, Vᴴ, ϵ
Compute a partial or truncated singular value decomposition (SVD) of A, such that A * (Vᴴ)' ≈ U * S. Here, U is an isometric matrix (orthonormal columns) of size (m, k), whereas Vᴴ is a matrix of size (k, n) with orthonormal rows and S is a square diagonal matrix of size (k, k), where k is set by the truncation strategy.
The function also returns ϵ, the truncation error defined as the 2-norm of the discarded singular values.
Truncation
The truncation strategy can be controlled via the trunc keyword argument. This can be either a NamedTuple or a TruncationStrategy. If trunc is not provided or nothing, all values will be kept.
trunc::NamedTuple
The supported truncation keyword arguments are:
atol::Real: Absolute tolerance for the truncation
rtol::Real: Relative tolerance for the truncation
maxrank::Real: Maximal rank for the truncation
maxerror::Real: Maximal truncation error
filter: Custom filter to select truncated values
trunc::TruncationStrategy
For more control, a truncation strategy can be supplied directly. By default, MatrixAlgebraKit supplies the following:
Keyword arguments
Other keyword arguments are passed to the algorithm selection procedure. If no explicit alg is provided, these keywords are used to select and configure the algorithm through MatrixAlgebraKit.select_algorithm. The remaining keywords after algorithm selection are passed to the algorithm constructor. See MatrixAlgebraKit.default_algorithm for the default algorithm selection behavior.
When alg is a TruncatedAlgorithm, the trunc keyword cannot be specified as the truncation strategy is already embedded in the algorithm.
The bang method svd_trunc! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided USVᴴ as output.
See also svd_full(!), svd_compact(!), svd_vals(!), and Truncations for more information on truncation strategies.
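As a sketch of typical usage with a NamedTuple truncation strategy (the rank bound of 5 is purely illustrative):
using MatrixAlgebraKit

A = randn(20, 20)

# Keep at most the 5 largest singular values.
U, S, Vᴴ, ϵ = svd_trunc(A; trunc = (maxrank = 5,))

size(U, 2) <= 5
A * Vᴴ' ≈ U * S               # the retained singular triples still satisfy this relation
ϵ                             # 2-norm of the discarded singular values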
The singular vectors returned by these functions have residual phase degrees of freedom. By default, MatrixAlgebraKit applies a gauge fixing convention to ensure reproducible results. See Gauge choices for more details.
MatrixAlgebraKit again ships with LAPACK-based implementations for dense arrays:
MatrixAlgebraKit.LAPACK_Bisection — Type
LAPACK_Bisection(; fixgauge::Bool = true)
Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the Bisection algorithm. The fixgauge keyword can be used to toggle whether or not to fix the gauge of the eigen or singular vectors, see also gaugefix!.
MatrixAlgebraKit.LAPACK_DivideAndConquer — Type
LAPACK_DivideAndConquer(; fixgauge::Bool = true)
Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the Divide and Conquer algorithm. The fixgauge keyword can be used to toggle whether or not to fix the gauge of the eigen or singular vectors, see also gaugefix!.
MatrixAlgebraKit.LAPACK_Jacobi — Type
LAPACK_Jacobi(; fixgauge::Bool = true)
Algorithm type to denote the LAPACK driver for computing the singular value decomposition of a general matrix using the Jacobi algorithm. The fixgauge keyword can be used to toggle whether or not to fix the gauge of the singular vectors, see also gaugefix!.
MatrixAlgebraKit.LAPACK_QRIteration — Type
LAPACK_QRIteration(; fixgauge::Bool = true)
Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the QR Iteration algorithm. The fixgauge keyword can be used to toggle whether or not to fix the gauge of the eigen or singular vectors, see also gaugefix!.
Polar Decomposition
The Polar Decomposition of a matrix A is a factorization A = W * P, where W is unitary and P is positive semi-definite. If A is invertible (and therefore square), the polar decomposition always exists and is unique. For non-square matrices A of size (m, n), the decomposition A = W * P with P positive semi-definite of size (n, n) and W isometric of size (m, n) exists only if m >= n, and is unique if A, and thus P, has full rank. For m <= n, we can analogously decompose A as A = P * Wᴴ with P positive semi-definite of size (m, m) and Wᴴ of size (m, n) such that W = (Wᴴ)' is isometric. Only in the case m = n do both decompositions exist.
The decompositions A = W * P or A = P * Wᴴ can be computed with the left_polar and right_polar functions, respectively.
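A small sketch of both orientations:
using MatrixAlgebraKit
using LinearAlgebra

A = randn(5, 3)               # m >= n: A = W * P with W isometric
W, P = left_polar(A)
A ≈ W * P && W' * W ≈ I

B = randn(3, 5)               # m <= n: B = P * Wᴴ with (Wᴴ)' isometric
P2, Wᴴ = right_polar(B)
B ≈ P2 * Wᴴ && Wᴴ * Wᴴ' ≈ I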
MatrixAlgebraKit.left_polar — Function
left_polar(A; kwargs...) -> W, P
left_polar(A, alg::AbstractAlgorithm) -> W, P
left_polar!(A, [WP]; kwargs...) -> W, P
left_polar!(A, [WP], alg::AbstractAlgorithm) -> W, P
Compute the full polar decomposition of the rectangular matrix A of size (m, n) with m >= n, such that A = W * P. Here, W is an isometric matrix (orthonormal columns) of size (m, n), whereas P is a positive (semi)definite matrix of size (n, n).
The bang method left_polar! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided WP as output.
See also right_polar(!).
MatrixAlgebraKit.right_polar — Function
right_polar(A; kwargs...) -> P, Wᴴ
right_polar(A, alg::AbstractAlgorithm) -> P, Wᴴ
right_polar!(A, [PWᴴ]; kwargs...) -> P, Wᴴ
right_polar!(A, [PWᴴ], alg::AbstractAlgorithm) -> P, Wᴴ
Compute the full polar decomposition of the rectangular matrix A of size (m, n) with n >= m, such that A = P * Wᴴ. Here, P is a positive (semi)definite matrix of size (m, m), whereas Wᴴ is a matrix of size (m, n) with orthonormal rows (its adjoint is isometric).
The bang method right_polar! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided PWᴴ as output.
See also left_polar(!).
These functions can be implemented by first computing a singular value decomposition and then constructing the polar decomposition from the singular values and vectors. Alternatively, the polar decomposition can be computed using an iterative scheme based on Newton's method, which can be more efficient for large matrices, especially if they are already close to isometric.
MatrixAlgebraKit.PolarViaSVD — Type
PolarViaSVD(svd_alg)
Algorithm for computing the polar decomposition of a matrix A via the singular value decomposition (SVD) of A. The svd_alg argument specifies the SVD algorithm to use.
MatrixAlgebraKit.PolarNewton — Type
PolarNewton(; maxiter = 10, tol = defaulttol(A))
Algorithm for computing the polar decomposition of a matrix A via scaled Newton iteration, with a maximum of maxiter iterations and until convergence up to tolerance tol.
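Both algorithm types can be passed as the alg argument of left_polar or right_polar. A sketch, with illustrative keyword values:
using MatrixAlgebraKit

A = randn(100, 20)

# Polar decomposition through an SVD backend...
W1, P1 = left_polar(A, PolarViaSVD(LAPACK_DivideAndConquer()))
# ...or through scaled Newton iteration, which can pay off for large, nearly isometric inputs.
W2, P2 = left_polar(A, PolarNewton(; maxiter = 20))

A ≈ W1 * P1 && A ≈ W2 * P2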
Orthogonal Subspaces
Often it is useful to compute orthogonal bases for particular subspaces defined by a matrix. Given a matrix A, we can compute an orthonormal basis for its image or coimage, and factorize the matrix accordingly. These bases are accessible through left_orth and right_orth respectively.
Overview
The left_orth function computes an orthonormal basis V for the image (column space) of A, along with a corestriction matrix C such that A = V * C. The resulting V has orthonormal columns (V' * V ≈ I or isisometric(V)).
Similarly, right_orth computes an orthonormal basis for the coimage (row space) of A, i.e., the image of A'. It returns matrices C and Vᴴ such that A = C * Vᴴ, where V = (Vᴴ)' has orthonormal columns (isisometric(Vᴴ; side = :right)).
These functions serve as high-level interfaces that automatically select the most appropriate decomposition based on the specified options, making them convenient for users who want orthonormalization without worrying about the underlying implementation details.
MatrixAlgebraKit.left_orth — Function
left_orth(A; [alg], [trunc], kwargs...) -> V, C
left_orth!(A, [VC], [alg]; [trunc], kwargs...) -> V, C
Compute an orthonormal basis V for the image of the matrix A, as well as a matrix C (the corestriction) such that A factors as A = V * C.
This is a high-level wrapper where the keyword arguments can be used to specify and control the specific orthogonal decomposition that should be used to factor A, whereas trunc can optionally be used to control the precision in determining the rank of A, typically via its singular values.
Truncation
The optional truncation strategy can be controlled via the trunc keyword argument, and any non-trivial strategy typically requires an SVD-based decomposition. This keyword can be either a NamedTuple or a TruncationStrategy.
trunc::NamedTuple
The supported truncation keyword arguments are:
atol::Real: Absolute tolerance for the truncation
rtol::Real: Relative tolerance for the truncation
maxrank::Real: Maximal rank for the truncation
maxerror::Real: Maximal truncation error
filter: Custom filter to select truncated values
trunc::TruncationStrategy
For more control, a truncation strategy can be supplied directly. By default, MatrixAlgebraKit supplies the following:
Keyword arguments
There are 3 major modes of operation, based on the alg keyword, with slightly different application purposes.
alg::Nothing
This default mode uses the presence of a truncation strategy trunc to determine an optimal decomposition type, which will typically be QR-based for no truncation, or SVD-based for truncation. The remaining keyword arguments are passed on directly to the algorithm selection procedure of the chosen decomposition type.
alg::Symbol
Here, the driving selector is alg, which is used to select the kind of decomposition. The remaining keyword arguments are passed on directly to the algorithm selection procedure of the chosen decomposition type. By default, the supported kinds are:
:qr: Factorize via QR decomposition, with further customizations through the other keywords. This mode requires isnothing(trunc), and is equivalent to
V, C = qr_compact(A; kwargs...)
:polar: Factorize via polar decomposition, with further customizations through the other keywords. This mode requires isnothing(trunc), and is equivalent to
V, C = left_polar(A; kwargs...)
:svd: Factorize via SVD, with further customizations through the other keywords. This mode further allows truncation, which can be selected through the trunc argument, and is roughly equivalent to:
V, S, C = svd_trunc(A; trunc, kwargs...)
C = lmul!(S, C)
alg::AbstractAlgorithm
In this expert mode the algorithm is supplied directly, and the kind of decomposition is deduced from that. This is achieved either directly by providing a LeftOrthAlgorithm{kind}, or automatically by attempting to deduce the decomposition kind with left_orth_alg(alg).
The bang method left_orth! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided VC as output.
See also right_orth(!), left_null(!) and right_null(!).
MatrixAlgebraKit.right_orth — Function
right_orth(A; [alg], [trunc], kwargs...) -> C, Vᴴ
right_orth!(A, [CVᴴ], [alg]; [trunc], kwargs...) -> C, Vᴴ
Compute an orthonormal basis V = adjoint(Vᴴ) for the coimage of the matrix A, i.e. for the image of adjoint(A), as well as a matrix C such that A factors as A = C * Vᴴ.
This is a high-level wrapper where the keyword arguments can be used to specify and control the specific orthogonal decomposition that should be used to factor A, whereas trunc can optionally be used to control the precision in determining the rank of A, typically via its singular values.
Truncation
The optional truncation strategy can be controlled via the trunc keyword argument, and any non-trivial strategy typically requires an SVD-based decomposition. This keyword can be either a NamedTuple or a TruncationStrategy.
trunc::NamedTuple
The supported truncation keyword arguments are:
atol::Real: Absolute tolerance for the truncation
rtol::Real: Relative tolerance for the truncation
maxrank::Real: Maximal rank for the truncation
maxerror::Real: Maximal truncation error
filter: Custom filter to select truncated values
trunc::TruncationStrategy
For more control, a truncation strategy can be supplied directly. By default, MatrixAlgebraKit supplies the following:
Keyword arguments
There are 3 major modes of operation, based on the alg keyword, with slightly different application purposes.
alg::Nothing
This default mode uses the presence of a truncation strategy trunc to determine an optimal decomposition type, which will typically be LQ-based for no truncation, or SVD-based for truncation. The remaining keyword arguments are passed on directly to the algorithm selection procedure of the chosen decomposition type.
alg::Symbol
Here, the driving selector is alg, which is used to select the kind of decomposition. The remaining keyword arguments are passed on directly to the algorithm selection procedure of the chosen decomposition type. By default, the supported kinds are:
:lq: Factorize via LQ decomposition, with further customizations through the other keywords. This mode requires isnothing(trunc), and is equivalent to
C, Vᴴ = lq_compact(A; kwargs...)
:polar: Factorize via polar decomposition, with further customizations through the other keywords. This mode requires isnothing(trunc), and is equivalent to
C, Vᴴ = right_polar(A; kwargs...)
:svd: Factorize via SVD, with further customizations through the other keywords. This mode further allows truncation, which can be selected through the trunc argument, and is roughly equivalent to:
C, S, Vᴴ = svd_trunc(A; trunc, kwargs...)
C = rmul!(C, S)
alg::AbstractAlgorithm
In this expert mode the algorithm is supplied directly, and the kind of decomposition is deduced from that. This is achieved either directly by providing a RightOrthAlgorithm{kind}, or automatically by attempting to deduce the decomposition kind with right_orth_alg.
The bang method right_orth! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided CVᴴ as output.
See also left_orth(!), left_null(!) and right_null(!).
Algorithm Selection
Both functions support multiple decomposition drivers, which can be selected through the alg keyword argument:
For left_orth:
alg = :qr (default without truncation): Uses QR decomposition via qr_compact
alg = :polar: Uses polar decomposition via left_polar
alg = :svd (default with truncation): Uses SVD via svd_compact or svd_trunc
For right_orth:
alg = :lq (default without truncation): Uses LQ decomposition via lq_compact
alg = :polar: Uses polar decomposition via right_polar
alg = :svd (default with truncation): Uses SVD via svd_compact or svd_trunc
When alg is not specified, the function automatically selects :qr/:lq for exact orthogonalization, or :svd when a truncation strategy is provided.
Extending with Custom Algorithms
To register a custom algorithm type for use with these functions, you need to define the appropriate conversion function, for example:
# For left_orth
MatrixAlgebraKit.left_orth_alg(alg::MyCustomAlgorithm) = LeftOrthAlgorithm{:qr}(alg)
# For right_orth
MatrixAlgebraKit.right_orth_alg(alg::MyCustomAlgorithm) = RightOrthAlgorithm{:lq}(alg)
The type parameter (:qr, :lq, :polar, or :svd) indicates which factorization backend will be used. The wrapper algorithm types handle the dispatch to the appropriate implementation:
MatrixAlgebraKit.left_orth_alg — Function
left_orth_alg(alg::AbstractAlgorithm) -> LeftOrthAlgorithm
Convert an algorithm to a LeftOrthAlgorithm wrapper for use with left_orth.
This function attempts to deduce the appropriate factorization kind (:qr, :polar, or :svd) from the algorithm type and wraps it in a LeftOrthAlgorithm. Custom algorithm types can be registered by defining:
MatrixAlgebraKit.left_orth_alg(alg::CustomAlgorithm) = LeftOrthAlgorithm{kind}(alg)
where kind specifies the factorization backend to use.
See also LeftOrthAlgorithm, left_orth.
MatrixAlgebraKit.right_orth_alg — Function
right_orth_alg(alg::AbstractAlgorithm) -> RightOrthAlgorithm
Convert an algorithm to a RightOrthAlgorithm wrapper for use with right_orth.
This function attempts to deduce the appropriate factorization kind (:lq, :polar, or :svd) from the algorithm type and wraps it in a RightOrthAlgorithm. Custom algorithm types can be registered by defining:
MatrixAlgebraKit.right_orth_alg(alg::CustomAlgorithm) = RightOrthAlgorithm{kind}(alg)
where kind specifies the factorization backend to use.
See also RightOrthAlgorithm, right_orth.
MatrixAlgebraKit.RightOrthAlgorithm — Type
RightOrthAlgorithm{Kind, Alg <: AbstractAlgorithm}(alg)
Wrapper type to denote the Kind of factorization that is used as a backend for right_orth. By default Kind is a symbol, which can be either :lq, :polar or :svd.
Examples
Basic orthogonalization:
using MatrixAlgebraKit
using LinearAlgebra
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
V, C = left_orth(A)
(V' * V) ≈ I && A ≈ V * C
Using different algorithms:
A = randn(4, 3)
V1, C1 = left_orth(A; alg = :qr)
V2, C2 = left_orth(A; alg = :polar)
V3, C3 = left_orth(A; alg = :svd)
A ≈ V1 * C1 ≈ V2 * C2 ≈ V3 * C3
With truncation:
A = [1.0 0.0; 0.0 1e-10; 0.0 0.0]
V, C = left_orth(A; trunc = (atol = 1e-8,))
size(V, 2) == 1 # Only one column retained
Null Spaces
Similarly, it can be convenient to obtain an orthogonal basis for the kernel or cokernel of a matrix. These are the orthogonal complements of the coimage and image, respectively, and can be computed using the left_null and right_null functions.
Overview
The left_null function computes an orthonormal basis N for the cokernel (left nullspace) of A, which is the nullspace of A'. This means A' * N ≈ 0 and N' * N ≈ I.
Similarly, right_null computes an orthonormal basis for the kernel (right nullspace) of A. It returns Nᴴ such that A * Nᴴ' ≈ 0 and Nᴴ * Nᴴ' ≈ I, where N = (Nᴴ)' has orthonormal columns.
These functions automatically handle rank determination and provide convenient access to nullspace computation without requiring detailed knowledge of the underlying decomposition methods.
MatrixAlgebraKit.left_null — Function
left_null(A; [alg], [trunc], kwargs...) -> N
left_null!(A, [N], [alg]; [trunc], kwargs...) -> N
Compute an orthonormal basis N for the cokernel of the matrix A, i.e. the nullspace of adjoint(A), such that adjoint(A) * N ≈ 0 and N' * N ≈ I.
This is a high-level wrapper where the keyword arguments can be used to specify and control the underlying orthogonal decomposition that should be used to find the null space of A', whereas trunc can optionally be used to control the precision in determining the rank of A, typically via its singular values.
Truncation
The optional truncation strategy can be controlled via the trunc keyword argument, and any non-trivial strategy typically requires an SVD-based decomposition. This keyword can be either a NamedTuple or a TruncationStrategy.
trunc::NamedTuple
The supported truncation keyword arguments are:
atol::Real: Absolute tolerance for the truncation
rtol::Real: Relative tolerance for the truncation
maxnullity::Real: Maximal nullity (number of retained null vectors) for the truncation
trunc::TruncationStrategy
For more control, a truncation strategy can be supplied directly. By default, MatrixAlgebraKit supplies the following:
Here notrunc has special meaning, and signifies keeping the values that correspond to the exact zeros determined from the additional columns of A.
Keyword arguments
There are 3 major modes of operation, based on the alg keyword, with slightly different application purposes.
alg::Nothing
This default mode uses the presence of a truncation strategy trunc to determine an optimal decomposition type, which will be QR-based for no truncation, or SVD-based for truncation. The remaining keyword arguments are passed on directly to the algorithm selection procedure of the chosen decomposition type.
alg::Symbol
Here, the driving selector is alg, which is used to select the kind of decomposition. The remaining keyword arguments are passed on directly to the algorithm selection procedure of the chosen decomposition type. By default, the supported kinds are:
:qr: Factorize via QR nullspace, with further customizations through the other keywords. This mode requires isnothing(trunc), and is equivalent to
N = qr_null(A; kwargs...)
:svd: Factorize via SVD, with further customizations through the other keywords. This mode further allows truncation, which can be selected through the trunc argument. It is roughly equivalent to:
U, S, _ = svd_full(A; kwargs...)
N = truncate(left_null, (U, S), trunc)
alg::AbstractAlgorithm
In this expert mode the algorithm is supplied directly, and the kind of decomposition is deduced from that. This is achieved either directly by providing a LeftNullAlgorithm{kind}, or automatically by attempting to deduce the decomposition kind with left_null_alg(alg).
The bang method left_null! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided N as output.
See also right_null(!), left_orth(!) and right_orth(!).
MatrixAlgebraKit.right_null — Function
right_null(A; [alg], [trunc], kwargs...) -> Nᴴ
right_null!(A, [Nᴴ], [alg]; [trunc], kwargs...) -> Nᴴ
Compute an orthonormal basis N = adjoint(Nᴴ) for the kernel of the matrix A, i.e. the nullspace of A, such that A * Nᴴ' ≈ 0 and Nᴴ * Nᴴ' ≈ I.
This is a high-level wrapper where the keyword arguments can be used to specify and control the underlying orthogonal decomposition that should be used to find the null space of A, whereas trunc can optionally be used to control the precision in determining the rank of A, typically via its singular values.
Truncation
The optional truncation strategy can be controlled via the trunc keyword argument, and any non-trivial strategy typically requires an SVD-based decomposition. This keyword can be either a NamedTuple or a TruncationStrategy.
trunc::NamedTuple
The supported truncation keyword arguments are:
atol::Real: Absolute tolerance for the truncation
rtol::Real: Relative tolerance for the truncation
maxnullity::Real: Maximal nullity (number of retained null vectors) for the truncation
trunc::TruncationStrategy
For more control, a truncation strategy can be supplied directly. By default, MatrixAlgebraKit supplies the following:
Here notrunc has special meaning, and signifies keeping the values that correspond to the exact zeros determined from the additional rows of A.
Keyword arguments
There are 3 major modes of operation, based on the alg keyword, with slightly different application purposes.
alg::Nothing
This default mode uses the presence of a truncation strategy trunc to determine an optimal decomposition type, which will be LQ-based for no truncation, or SVD-based for truncation. The remaining keyword arguments are passed on directly to the algorithm selection procedure of the chosen decomposition type.
alg::Symbol
Here, the driving selector is alg, which is used to select the kind of decomposition. The remaining keyword arguments are passed on directly to the algorithm selection procedure of the chosen decomposition type. By default, the supported kinds are:
:lq: Factorize via LQ nullspace, with further customizations through the other keywords. This mode requires isnothing(trunc), and is equivalent to
Nᴴ = lq_null(A; kwargs...)
:svd: Factorize via SVD, with further customizations through the other keywords. This mode further allows truncation, which can be selected through the trunc argument. It is roughly equivalent to:
_, S, Vᴴ = svd_full(A; kwargs...)
Nᴴ = truncate(right_null, (S, Vᴴ), trunc)
alg::AbstractAlgorithm
In this expert mode the algorithm is supplied directly, and the kind of decomposition is deduced from that. This is achieved either directly by providing a RightNullAlgorithm{kind}, or automatically by attempting to deduce the decomposition kind with right_null_alg(alg).
The bang method right_null! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided Nᴴ as output.
See also left_null(!), left_orth(!) and right_orth(!).
Algorithm Selection
Both functions support multiple decomposition drivers, which can be selected through the alg keyword argument:
For left_null:
alg = :qr (default without truncation): Uses QR-based nullspace computation via qr_null
alg = :svd (default with truncation): Uses SVD via svd_full with appropriate truncation
For right_null:
alg = :lq (default without truncation): Uses LQ-based nullspace computation via lq_null
alg = :svd (default with truncation): Uses SVD via svd_full with appropriate truncation
When alg is not specified, the function automatically selects :qr/:lq for exact nullspace computation, or :svd when a truncation strategy is provided to handle numerical rank determination.
For nullspace functions, notrunc has special meaning when used with the default QR/LQ algorithms. It indicates that the nullspace should be computed from the exact zeros determined by the additional rows/columns of the extended matrix, without any tolerance-based truncation.
Extending with Custom Algorithms
To register a custom algorithm type for use with these functions, you need to define the appropriate conversion function:
# For left_null
MatrixAlgebraKit.left_null_alg(alg::MyCustomAlgorithm) = LeftNullAlgorithm{:qr}(alg)
# For right_null
MatrixAlgebraKit.right_null_alg(alg::MyCustomAlgorithm) = RightNullAlgorithm{:lq}(alg)
The type parameter (:qr, :lq, or :svd) indicates which factorization backend will be used. The wrapper algorithm types handle the dispatch to the appropriate implementation:
MatrixAlgebraKit.RightNullAlgorithm — Type
RightNullAlgorithm{Kind, Alg <: AbstractAlgorithm}(alg)
Wrapper type to denote the Kind of factorization that is used as a backend for right_null. By default Kind is a symbol, which can be either :lq or :svd.
MatrixAlgebraKit.left_null_alg — Function
left_null_alg(alg::AbstractAlgorithm) -> LeftNullAlgorithm
Convert an algorithm to a LeftNullAlgorithm wrapper for use with left_null.
This function attempts to deduce the appropriate factorization kind (:qr or :svd) from the algorithm type and wraps it in a LeftNullAlgorithm. Custom algorithm types can be registered by defining:
MatrixAlgebraKit.left_null_alg(alg::CustomAlgorithm) = LeftNullAlgorithm{kind}(alg)
where kind specifies the factorization backend to use.
See also LeftNullAlgorithm, left_null.
MatrixAlgebraKit.right_null_alg — Function
right_null_alg(alg::AbstractAlgorithm) -> RightNullAlgorithm
Convert an algorithm to a RightNullAlgorithm wrapper for use with right_null.
This function attempts to deduce the appropriate factorization kind (:lq or :svd) from the algorithm type and wraps it in a RightNullAlgorithm. Custom algorithm types can be registered by defining:
MatrixAlgebraKit.right_null_alg(alg::CustomAlgorithm) = RightNullAlgorithm{kind}(alg)
where kind specifies the factorization backend to use.
See also RightNullAlgorithm, right_null.
Examples
Basic nullspace computation:
A = [1.0 2.0 3.0; 4.0 5.0 6.0] # Rank 2 matrix
N = left_null(A)
size(N) == (2, 0)
Nᴴ = right_null(A)
size(Nᴴ) == (1, 3) && norm(A * Nᴴ') < 1e-14 && isisometric(Nᴴ; side = :right)
Computing nullspace with rank detection:
A = [1.0 2.0; 2.0 4.0; 3.0 6.0] # Rank 1 matrix (second column = 2*first)
N = left_null(A; alg = :svd, trunc = (atol = 1e-10,))
size(N) == (3, 2) && norm(A' * N) < 1e-12 && isisometric(N)
Using different algorithms:
A = [1.0 0.0 0.0; 0.0 1.0 0.0]
N1 = right_null(A; alg = :lq)
N2 = right_null(A; alg = :svd)
norm(A * N1') < 1e-14 && norm(A * N2') < 1e-14 &&
isisometric(N1; side = :right) && isisometric(N2; side = :right)
Gauge choices
Both eigenvalue and singular value decompositions have residual gauge degrees of freedom even when the eigenvalues or singular values are unique. These arise from the fact that even after normalization, the eigenvectors and singular vectors are only determined up to a phase factor.
Phase Ambiguity in Decompositions
For the eigenvalue decomposition A * V = V * D, if v is an eigenvector with eigenvalue λ and |v| = 1, then so is e^(iθ) * v for any real phase θ. When λ is non-degenerate (i.e., has multiplicity 1), the eigenvector is unique up to this phase.
Similarly, for the singular value decomposition A = U * Σ * Vᴴ, the singular vectors u and v corresponding to a non-degenerate singular value σ are unique only up to a common phase. We can replace u → e^(iθ) * u and vᴴ → e^(-iθ) * vᴴ simultaneously.
Gauge Fixing Convention
To remove this phase ambiguity and ensure reproducible results, MatrixAlgebraKit implements a gauge fixing convention by default. The convention ensures that the entry with the largest magnitude in each eigenvector or left singular vector is real and positive.
For eigenvectors, this means that for each column v of V, we multiply by conj(sign(v[i])) where i is the index of the entry with largest absolute value.
For singular vectors, we apply the phase factor to both u and v based on the entry with largest magnitude in u. This preserves the decomposition A = U * Σ * Vᴴ while fixing the gauge.
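The following sketch spells out that convention for the columns of V; fix_phase_gauge is a hypothetical helper used only for illustration here, while the package itself applies the convention through gaugefix! (documented below):
# Hypothetical helper, not part of MatrixAlgebraKit: rotate each column so that its
# largest-magnitude entry becomes real and positive.
function fix_phase_gauge(V::AbstractMatrix)
    W = copy(V)
    for j in axes(W, 2)
        v = view(W, :, j)
        i = argmax(abs.(v))                  # index of the entry with largest magnitude
        phase = sign(v[i])                   # its phase factor (±1 for real vectors)
        iszero(phase) || (v .*= conj(phase)) # multiply the column by conj(sign(v[i]))
    end
    return W
end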
Disabling Gauge Fixing
Gauge fixing is enabled by default for all eigenvalue and singular value decompositions. If you prefer to obtain the raw results from the underlying computational routines without gauge fixing, you can disable it using the fixgauge keyword argument:
# With gauge fixing (default)
D, V = eigh_full(A)
# Without gauge fixing
D, V = eigh_full(A; fixgauge = false)
The same keyword is available for the eig_full, eig_trunc, svd_full, svd_compact, and svd_trunc functions. Additionally, the default value can also be controlled with a global toggle using MatrixAlgebraKit.default_fixgauge.
MatrixAlgebraKit.gaugefix! — Function
gaugefix!(f_eig, V)
gaugefix!(f_svd, U, Vᴴ)
Fix the residual gauge degrees of freedom in the eigen or singular vectors that are obtained from the decomposition performed by f_eig or f_svd. This is achieved by ensuring that the entry with the largest magnitude in V or U is real and positive.
MatrixAlgebraKit.default_fixgauge — Function
default_fixgauge() -> current_value
default_fixgauge(new_value::Bool) -> previous_value
Global toggle for enabling or disabling the default behavior of gauge fixing the output of the eigen- and singular value decompositions.