Decompositions

A large class of matrix algebra methods takes a single input A and determines some factorization of that input. To streamline these functions, they all follow a common code pattern. For a given factorization f, this consists of the following methods:

f(A; kwargs...) -> F...
f!(A, [F]; kwargs...) -> F...

Here, the input matrix is always the first argument, and the output can optionally be provided as well. The keywords are algorithm-specific and can be used to influence the behavior of the algorithms. Importantly, for generic code it is recommended to always use the returned F explicitly, since some implementations may not be able to reuse the provided memory. Additionally, the f! method typically assumes that it is allowed to destroy the input A, and relying on the contents of A afterwards should be considered undefined behavior.
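As a concrete illustration of this pattern, here is a minimal sketch using qr_compact (documented below). It assumes MatrixAlgebraKit is loaded; passing the previously returned factors back in as the output structure follows the generic signature above, but whether that memory is actually reused is implementation-specific.

using MatrixAlgebraKit

A = randn(4, 3)

# Allocating variant: returns the factors directly.
Q, R = qr_compact(A)

# In-place variant: may reuse and destroy its input, so we pass a copy.
# Always continue with the returned factors, not the provided output.
Q, R = qr_compact!(copy(A), (Q, R))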

QR and LQ Decompositions

The QR decomposition transforms a matrix A into a product Q * R, where Q is unitary (orthogonal in the real case) and R is upper triangular. This is often used to solve linear least squares problems or to construct orthonormal bases, since it is typically less expensive than the Singular Value Decomposition. If the input A is invertible, Q and R are unique if we require the diagonal elements of R to be positive.

For rectangular matrices A of size (m, n), there are two modes of operation, qr_full and qr_compact. The former ensures that the resulting Q is a square unitary matrix of size (m, m), while the latter creates an isometric Q of size (m, min(m, n)).

Similarly, the LQ decomposition transforms a matrix A into a product L * Q, where L is lower triangular and Q has orthonormal rows. This is equivalent to taking the adjoint of the QR decomposition of the adjoint of A, but it can be computed directly. Again there are two modes of operation, lq_full and lq_compact, analogous to the QR case.
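The difference between the full and compact modes is easiest to see from the sizes of the factors. A small sketch, with the expected sizes (following the definitions above) indicated in the comments:

A = randn(5, 3)            # tall matrix: m = 5, n = 3
Q, R = qr_full(A)          # size(Q) == (5, 5), size(R) == (5, 3)
Qc, Rc = qr_compact(A)     # size(Qc) == (5, 3), size(Rc) == (3, 3)

B = randn(3, 5)            # wide matrix: m = 3, n = 5
L, QL = lq_full(B)         # size(L) == (3, 5), size(QL) == (5, 5)
Lc, QLc = lq_compact(B)    # size(Lc) == (3, 3), size(QLc) == (3, 5)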

MatrixAlgebraKit.qr_full - Function
qr_full(A; kwargs...) -> Q, R
qr_full(A, alg::AbstractAlgorithm) -> Q, R
qr_full!(A, [QR]; kwargs...) -> Q, R
qr_full!(A, [QR], alg::AbstractAlgorithm) -> Q, R

Compute the full QR decomposition of the rectangular matrix A, such that A = Q * R where Q is a square unitary matrix with the same number of rows as A and R is an upper triangular matrix with the same size as A.

Note

The bang method qr_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided QR as output.

See also qr_compact(!).

MatrixAlgebraKit.qr_compact - Function
qr_compact(A; kwargs...) -> Q, R
qr_compact(A, alg::AbstractAlgorithm) -> Q, R
qr_compact!(A, [QR]; kwargs...) -> Q, R
qr_compact!(A, [QR], alg::AbstractAlgorithm) -> Q, R

Compute the compact QR decomposition of the rectangular matrix A of size (m,n), such that A = Q * R where the isometric matrix Q of size (m, min(m,n)) has orthogonal columns spanning the image of A, and the matrix R of size (min(m,n), n) is upper triangular.

Note

The bang method qr_compact! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided QR as output.

Note

The compact QR decomposition is equivalent to the full QR decomposition when m <= n. Some algorithms may require m >= n.

See also qr_full(!).

MatrixAlgebraKit.lq_full - Function
lq_full(A; kwargs...) -> L, Q
lq_full(A, alg::AbstractAlgorithm) -> L, Q
lq_full!(A, [LQ]; kwargs...) -> L, Q
lq_full!(A, [LQ], alg::AbstractAlgorithm) -> L, Q

Compute the full LQ decomposition of the rectangular matrix A, such that A = L * Q where Q is a square unitary matrix with the same number of columns as A and L is a lower triangular matrix with the same size as A.

Note

The bang method lq_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided LQ as output.

See also lq_compact(!).

MatrixAlgebraKit.lq_compact - Function
lq_compact(A; kwargs...) -> L, Q
lq_compact(A, alg::AbstractAlgorithm) -> L, Q
lq_compact!(A, [LQ]; kwargs...) -> L, Q
lq_compact!(A, [LQ], alg::AbstractAlgorithm) -> L, Q

Compute the compact LQ decomposition of the rectangular matrix A of size (m,n), such that A = L * Q where the matrix Q of size (min(m,n), n) has orthogonal rows spanning the image of A', and the matrix L of size (m, min(m,n)) is lower triangular.

Note

The bang method lq_compact! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided LQ as output.

Note

The compact LQ decomposition is equivalent to the full LQ decomposition when m >= n. Some algorithms may require m <= n.

See also lq_full(!).

Alongside these functions, we provide a LAPACK-based implementation for dense arrays, as provided by the following algorithm:

MatrixAlgebraKit.LAPACK_HouseholderQR - Type
LAPACK_HouseholderQR(; blocksize, positive = false, pivoted = false)

Algorithm type to denote the standard LAPACK algorithm for computing the QR decomposition of a matrix using Householder reflectors. The specific LAPACK function can be controlled using the keyword arguments, i.e. ?geqrt will be chosen if blocksize > 1. With blocksize == 1, ?geqrf will be chosen if pivoted == false and ?geqp3 will be chosen if pivoted == true. The keyword positive=true can be used to ensure that the diagonal elements of R are non-negative.

MatrixAlgebraKit.LAPACK_HouseholderLQ - Type
LAPACK_HouseholderLQ(; blocksize, positive = false)

Algorithm type to denote the standard LAPACK algorithm for computing the LQ decomposition of a matrix using Householder reflectors. The specific LAPACK function can be controlled using the keyword arguments, i.e. ?gelqt will be chosen if blocksize > 1 or ?gelqf will be chosen if blocksize == 1. The keyword positive=true can be used to ensure that the diagonal elements of L are non-negative.

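For instance, algorithm options can be passed as keywords of the driver function, or the algorithm object can be constructed explicitly. A hedged sketch (the keyword forwarding follows the generic pattern described at the top of this page):

A = randn(6, 4)

# Algorithm-specific keywords passed through the driver function ...
Q, R = qr_compact(A; positive = true)    # diagonal of R non-negative

# ... or an explicit algorithm object as positional argument.
alg = LAPACK_HouseholderQR(; blocksize = 1, positive = true)
Q, R = qr_full(A, alg)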

Eigenvalue Decomposition

The Eigenvalue Decomposition transforms a square matrix A into a product V * D * V⁻¹, where D is diagonal. Equivalently, it finds V and D that satisfy A * V = V * D.

Not all matrices can be diagonalized, and some real matrices can only be diagonalized using complex arithmetic. In particular, the resulting decomposition can only be guaranteed to be real for real symmetric inputs A. Therefore, we provide eig_ and eigh_ variants, where eig always results in complex-valued V and D, while eigh requires symmetric or hermitian inputs but retains the scalartype of the input.

If only the eigenvalues are required, the eig_vals and eigh_vals functions can be used. These functions return the diagonal elements of D in a vector.

Finally, it is also possible to compute a partial or truncated eigenvalue decomposition, using the eig_trunc and eigh_trunc functions. To control the behavior of the truncation, we refer to Truncations for more information.
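A short sketch of the difference between the eig_ and eigh_ variants, with the element types as described above:

A = randn(4, 4)
H = (A + A') / 2          # real symmetric matrix

D, V = eig_full(A)        # D and V are complex-valued
Dh, Vh = eigh_full(H)     # Dh and Vh stay real for real symmetric input

vals  = eig_vals(A)       # vector of (complex) eigenvalues
valsh = eigh_vals(H)      # vector of real eigenvalues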

Symmetric Eigenvalue Decomposition

For symmetric matrices, we provide the following functions:

MatrixAlgebraKit.eigh_full - Function
eigh_full(A; kwargs...) -> D, V
eigh_full(A, alg::AbstractAlgorithm) -> D, V
eigh_full!(A, [DV]; kwargs...) -> D, V
eigh_full!(A, [DV], alg::AbstractAlgorithm) -> D, V

Compute the full eigenvalue decomposition of the symmetric or hermitian matrix A, such that A * V = V * D, where the unitary matrix V contains the orthogonal eigenvectors and the real diagonal matrix D contains the associated eigenvalues.

Note

The bang method eigh_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided DV as output.

Note

Note that eigh_full and its variants assume additional structure on the input, and therefore will retain the eltype of the input for the eigenvalues and eigenvectors. For generic eigenvalue decompositions, see eig_full.

See also eigh_vals(!) and eigh_trunc(!).

MatrixAlgebraKit.eigh_trunc - Function
eigh_trunc(A; kwargs...) -> D, V
eigh_trunc(A, alg::AbstractAlgorithm) -> D, V
eigh_trunc!(A, [DV]; kwargs...) -> D, V
eigh_trunc!(A, [DV], alg::AbstractAlgorithm) -> D, V

Compute a partial or truncated eigenvalue decomposition of the symmetric or hermitian matrix A, such that A * V ≈ V * D, where the isometric matrix V contains a subset of the orthogonal eigenvectors and the real diagonal matrix D contains the associated eigenvalues, selected according to a truncation strategy.

Note

The bang method eigh_trunc! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided DV as output.

Note

Note that eigh_full and its variants assume additional structure on the input, and therefore will retain the eltype of the input for the eigenvalues and eigenvectors. For generic eigenvalue decompositions, see eig_full.

See also eigh_full(!) and eigh_vals(!).

MatrixAlgebraKit.eigh_vals - Function
eigh_vals(A; kwargs...) -> D
eigh_vals(A, alg::AbstractAlgorithm) -> D
eigh_vals!(A, [D]; kwargs...) -> D
eigh_vals!(A, [D], alg::AbstractAlgorithm) -> D

Compute the list of (real) eigenvalues of the symmetric or hermitian matrix A.

Note

The bang method eigh_vals! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided D as output.

Note

Note that eigh_full and its variants assume additional structure on the input, and therefore will retain the eltype of the input for the eigenvalues and eigenvectors. For generic eigenvalue decompositions, see eig_full.

See also eigh_full(!) and eigh_trunc(!).

Alongside these functions, we provide a LAPACK-based implementation for dense arrays, as provided by the following algorithms:

MatrixAlgebraKit.LAPACK_Bisection - Type
LAPACK_Bisection()

Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the Bisection algorithm.

MatrixAlgebraKit.LAPACK_DivideAndConquer - Type
LAPACK_DivideAndConquer()

Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the Divide and Conquer algorithm.

MatrixAlgebraKit.LAPACK_QRIteration - Type
LAPACK_QRIteration()

Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the QR Iteration algorithm.

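As a minimal sketch, any of these algorithm types can be passed as the second positional argument of the eigh_ functions:

A = randn(100, 100)
H = (A + A') / 2          # make the input symmetric

D, V = eigh_full(H, LAPACK_DivideAndConquer())
vals = eigh_vals(H, LAPACK_QRIteration())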

General Eigenvalue Decomposition

For general matrices, we provide the following functions:

MatrixAlgebraKit.eig_full - Function
eig_full(A; kwargs...) -> D, V
eig_full(A, alg::AbstractAlgorithm) -> D, V
eig_full!(A, [DV]; kwargs...) -> D, V
eig_full!(A, [DV], alg::AbstractAlgorithm) -> D, V

Compute the full eigenvalue decomposition of the square matrix A, such that A * V = V * D, where the invertible matrix V contains the eigenvectors and the diagonal matrix D contains the associated eigenvalues.

Note

The bang method eig_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided DV as output.

Note

Note that eig_full and its variants do not assume additional structure on the input, and therefore will always return complex eigenvalues and eigenvectors. For the real eigenvalue decomposition of symmetric or hermitian operators, see eigh_full.

See also eig_vals(!) and eig_trunc(!).

MatrixAlgebraKit.eig_trunc - Function
eig_trunc(A; kwargs...) -> D, V
eig_trunc(A, alg::AbstractAlgorithm) -> D, V
eig_trunc!(A, [DV]; kwargs...) -> D, V
eig_trunc!(A, [DV], alg::AbstractAlgorithm) -> D, V

Compute a partial or truncated eigenvalue decomposition of the matrix A, such that A * V ≈ V * D, where the (possibly rectangular) matrix V contains a subset of eigenvectors and the diagonal matrix D contains the associated eigenvalues, selected according to a truncation strategy.

Note

The bang method eig_trunc! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided DV as output.

Note

Note that eig_full and its variants do not assume additional structure on the input, and therefore will always return complex eigenvalues and eigenvectors. For the real eigenvalue decomposition of symmetric or hermitian operators, see eigh_full.

See also eig_full(!) and eig_vals(!).

MatrixAlgebraKit.eig_vals - Function
eig_vals(A; kwargs...) -> D
eig_vals(A, alg::AbstractAlgorithm) -> D
eig_vals!(A, [D]; kwargs...) -> D
eig_vals!(A, [D], alg::AbstractAlgorithm) -> D

Compute the list of eigenvalues of A.

Note

The bang method eig_vals! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided D as output.

Note

Note that eig_full and its variants do not assume additional structure on the input, and therefore will always return complex eigenvalues and eigenvectors. For the real eigenvalue decomposition of symmetric or hermitian operators, see eigh_full.

See also eig_full(!) and eig_trunc(!).

Alongside these functions, we provide a LAPACK-based implementation for dense arrays, as provided by the following algorithms:

MatrixAlgebraKit.LAPACK_Expert - Type
LAPACK_Expert()

Algorithm type to denote the expert LAPACK driver for computing the Schur or non-Hermitian eigenvalue decomposition of a matrix.

MatrixAlgebraKit.LAPACK_Simple - Type
LAPACK_Simple()

Algorithm type to denote the simple LAPACK driver for computing the Schur or non-Hermitian eigenvalue decomposition of a matrix.

Schur Decomposition

The Schur decomposition transforms a complex square matrix A into a product Q * T * Qᴴ, where Q is unitary and T is upper triangular; in other words, A is unitarily similar to an upper triangular matrix whose diagonal elements are its eigenvalues. For real matrices, the same decomposition can be achieved with real Q and T if T is allowed to be quasi-upper triangular, i.e. block upper triangular with blocks of size (1, 1) and (2, 2) on the diagonal.

This decomposition is also useful for computing the eigenvalues of a matrix, which is exposed through the schur_vals function.
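A minimal usage sketch:

A = randn(4, 4)

T, Z, vals = schur_full(A)    # A * Z ≈ Z * T; vals holds the (complex-valued) eigenvalues
vals2 = schur_vals(A)         # eigenvalues only, computed via the Schur form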

MatrixAlgebraKit.schur_full - Function
schur_full(A; kwargs...) -> T, Z, vals
schur_full(A, alg::AbstractAlgorithm) -> T, Z, vals
schur_full!(A, [TZv]; kwargs...) -> T, Z, vals
schur_full!(A, [TZv], alg::AbstractAlgorithm) -> T, Z, vals

Compute the full Schur decomposition of the square matrix A, such that A * Z = Z * T, where the orthogonal or unitary matrix Z contains the Schur vectors and the square matrix T is upper triangular (in the complex case) or quasi-upper triangular (in the real case). The list vals contains the (complex-valued) eigenvalues of A, as extracted from the (quasi-)diagonal of T.

Note

The bang method schur_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided TZv as output.

MatrixAlgebraKit.schur_vals - Function
schur_vals(A; kwargs...) -> vals
schur_vals(A, alg::AbstractAlgorithm) -> vals
schur_vals!(A, [vals]; kwargs...) -> vals
schur_vals!(A, [vals], alg::AbstractAlgorithm) -> vals

Compute the list of eigenvalues of A by computing the Schur decomposition of A.

Note

The bang method schur_vals! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided vals as output.

See also eig_full(!) and eig_trunc(!).

The LAPACK-based implementation for dense arrays is provided by the following algorithms:

MatrixAlgebraKit.LAPACK_Expert - Type
LAPACK_Expert()

Algorithm type to denote the expert LAPACK driver for computing the Schur or non-Hermitian eigenvalue decomposition of a matrix.

MatrixAlgebraKit.LAPACK_Simple - Type
LAPACK_Simple()

Algorithm type to denote the simple LAPACK driver for computing the Schur or non-Hermitian eigenvalue decomposition of a matrix.

Singular Value Decomposition

The Singular Value Decomposition (SVD) transforms a matrix A into a product U * Σ * Vᴴ, where U and V are unitary (orthogonal in the real case) and Σ is diagonal with real, non-negative entries. If the singular values are distinct, the decomposition is essentially unique, up to simultaneous phase factors in the corresponding columns of U and V.

For rectangular matrices A of size (m, n), there are two modes of operation, svd_full and svd_compact. The former ensures that the resulting U, and V remain square unitary matrices, of size (m, m) and (n, n), with rectangular Σ of size (m, n). The latter creates an isometric U of size (m, min(m, n)), and V of size (n, min(m, n)), with a square Σ of size (min(m, n), min(m, n)).

It is also possible to compute only the singular values, using the svd_vals function, which returns the diagonal of Σ as a vector.

Finally, we also support computing a partial or truncated SVD, using the svd_trunc function.
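The sizes of the factors in the two modes, together with svd_vals, in a small sketch (sizes follow the definitions above):

A = randn(5, 3)               # m = 5, n = 3

U, S, Vᴴ = svd_full(A)        # sizes (5, 5), (5, 3), (3, 3)
Uc, Sc, Vᴴc = svd_compact(A)  # sizes (5, 3), (3, 3), (3, 3)
σ = svd_vals(A)               # vector of length 3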

MatrixAlgebraKit.svd_full - Function
svd_full(A; kwargs...) -> U, S, Vᴴ
svd_full(A, alg::AbstractAlgorithm) -> U, S, Vᴴ
svd_full!(A, [USVᴴ]; kwargs...) -> U, S, Vᴴ
svd_full!(A, [USVᴴ], alg::AbstractAlgorithm) -> U, S, Vᴴ

Compute the full singular value decomposition (SVD) of the rectangular matrix A of size (m, n), such that A = U * S * Vᴴ. Here, U and Vᴴ are unitary matrices of size (m, m) and (n, n) respectively, and S is a diagonal matrix of size (m, n).

Note

The bang method svd_full! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided USVᴴ as output.

See also svd_compact(!), svd_vals(!) and svd_trunc(!).

MatrixAlgebraKit.svd_compact - Function
svd_compact(A; kwargs...) -> U, S, Vᴴ
svd_compact(A, alg::AbstractAlgorithm) -> U, S, Vᴴ
svd_compact!(A, [USVᴴ]; kwargs...) -> U, S, Vᴴ
svd_compact!(A, [USVᴴ], alg::AbstractAlgorithm) -> U, S, Vᴴ

Compute the compact singular value decomposition (SVD) of the rectangular matrix A of size (m, n), such that A = U * S * Vᴴ. Here, U is an isometric matrix (orthonormal columns) of size (m, k), whereas Vᴴ is a matrix of size (k, n) with orthonormal rows and S is a square diagonal matrix of size (k, k), with k = min(m, n).

Note

The bang method svd_compact! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided USVᴴ as output.

See also svd_full(!), svd_vals(!) and svd_trunc(!).

MatrixAlgebraKit.svd_vals - Function
svd_vals(A; kwargs...) -> S
svd_vals(A, alg::AbstractAlgorithm) -> S
svd_vals!(A, [S]; kwargs...) -> S
svd_vals!(A, [S], alg::AbstractAlgorithm) -> S

Compute the vector of singular values of A, such that for a matrix A of size (m, n), S is a vector of length min(m, n) containing all of the singular values.

See also svd_full(!), svd_compact(!) and svd_trunc(!).

MatrixAlgebraKit.svd_trunc - Function
svd_trunc(A; kwargs...) -> U, S, Vᴴ
svd_trunc(A, alg::AbstractAlgorithm) -> U, S, Vᴴ
svd_trunc!(A, [USVᴴ]; kwargs...) -> U, S, Vᴴ
svd_trunc!(A, [USVᴴ], alg::AbstractAlgorithm) -> U, S, Vᴴ

Compute a partial or truncated singular value decomposition (SVD) of A, such that A * (Vᴴ)' ≈ U * S. Here, U is an isometric matrix (orthonormal columns) of size (m, k), whereas Vᴴ is a matrix of size (k, n) with orthonormal rows and S is a square diagonal matrix of size (k, k), where k is set by the truncation strategy.

Note

The bang method svd_trunc! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided USVᴴ as output.

See also svd_full(!), svd_compact(!) and svd_vals(!).

MatrixAlgebraKit again ships with LAPACK-based implementations for dense arrays:

MatrixAlgebraKit.LAPACK_Bisection - Type
LAPACK_Bisection()

Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the Bisection algorithm.

MatrixAlgebraKit.LAPACK_DivideAndConquer - Type
LAPACK_DivideAndConquer()

Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the Divide and Conquer algorithm.

MatrixAlgebraKit.LAPACK_Jacobi - Type
LAPACK_Jacobi()

Algorithm type to denote the LAPACK driver for computing the singular value decomposition of a general matrix using the Jacobi algorithm.

MatrixAlgebraKit.LAPACK_QRIteration - Type
LAPACK_QRIteration()

Algorithm type to denote the LAPACK driver for computing the eigenvalue decomposition of a Hermitian matrix, or the singular value decomposition of a general matrix using the QR Iteration algorithm.

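For example, a specific driver can be selected by passing the algorithm object as the second positional argument; a brief sketch:

A = randn(5, 3)
U, S, Vᴴ = svd_compact(A, LAPACK_Jacobi())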

Polar Decomposition

The Polar Decomposition of a matrix A is a factorization A = W * P, where W is unitary (or isometric in the rectangular case) and P is positive semi-definite. The positive semi-definite factor P always exists and is unique. The factor W is unique whenever A has full rank; in particular, it is unique when A is invertible.

This decomposition can be computed for both sides, resulting in the left_polar and right_polar functions.
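A minimal usage sketch:

A = randn(5, 3)            # m >= n for left_polar

W, P = left_polar(A)       # A ≈ W * P with W' * W ≈ I and P positive semi-definite

B = randn(3, 5)            # n >= m for right_polar
P2, Wᴴ = right_polar(B)    # B ≈ P2 * Wᴴ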

MatrixAlgebraKit.left_polar - Function
left_polar(A; kwargs...) -> W, P
left_polar(A, alg::AbstractAlgorithm) -> W, P
left_polar!(A, [WP]; kwargs...) -> W, P
left_polar!(A, [WP], alg::AbstractAlgorithm) -> W, P

Compute the full polar decomposition of the rectangular matrix A of size (m, n) with m >= n, such that A = W * P. Here, W is an isometric matrix (orthonormal columns) of size (m, n), whereas P is a positive (semi)definite matrix of size (n, n).

Note

The bang method left_polar! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided WP as output.

See also right_polar(!).

MatrixAlgebraKit.right_polar - Function
right_polar(A; kwargs...) -> P, Wᴴ
right_polar(A, alg::AbstractAlgorithm) -> P, Wᴴ
right_polar!(A, [PWᴴ]; kwargs...) -> P, Wᴴ
right_polar!(A, [PWᴴ], alg::AbstractAlgorithm) -> P, Wᴴ

Compute the full polar decomposition of the rectangular matrix A of size (m, n) with n >= m, such that A = P * Wᴴ. Here, P is a positive (semi)definite matrix of size (m, m), whereas Wᴴ is a matrix with orthonormal rows (its adjoint is isometric) of size (n, m).

Note

The bang method right_polar! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided PWᴴ as output.

See also left_polar(!).

These functions are implemented by first computing a singular value decomposition, and then constructing the polar decomposition from the singular values and vectors. Therefore, the relevant LAPACK-based implementation is the one for the SVD:

MatrixAlgebraKit.PolarViaSVD - Type
PolarViaSVD(svdalg)

Algorithm for computing the polar decomposition of a matrix A via the singular value decomposition (SVD) of A. The svdalg argument specifies the SVD algorithm to use.

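For example, a brief sketch combining PolarViaSVD with one of the SVD drivers listed above:

A = randn(5, 3)
W, P = left_polar(A, PolarViaSVD(LAPACK_DivideAndConquer()))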

Orthogonal Subspaces

Often it is useful to compute orthogonal bases for a particular subspace defined by a matrix. Given a matrix A we can compute an orthonormal basis for its image or coimage, and factorize the matrix accordingly. These bases are accessible through left_orth and right_orth respectively. This is implemented through a combination of the decompositions mentioned above, and serves as a convenient interface to these operations.
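A brief sketch of both functions; the keyword values follow the docstrings below, and the comments on the defaults assume the zero-tolerance case:

A = randn(6, 4)

V, C = left_orth(A)                                # A ≈ V * C with V' * V ≈ I (QR-based by default)
Vs, Cs = left_orth(A; kind = :svd, rtol = 1e-12)   # rank-revealing, SVD-based variant

C2, Vᴴ = right_orth(A)                             # A ≈ C2 * Vᴴ with Vᴴ * Vᴴ' ≈ I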

MatrixAlgebraKit.left_orth - Function
left_orth(A; [kind::Symbol, atol::Real=0, rtol::Real=0, alg]) -> V, C
left_orth!(A, [VC]; [kind::Symbol, atol::Real=0, rtol::Real=0, alg]) -> V, C

Compute an orthonormal basis V for the image of the matrix A of size (m, n), as well as a matrix C (the corestriction) such that A factors as A = V * C. The keyword argument kind can be used to specify the specific orthogonal decomposition that should be used to factor A, whereas atol and rtol can be used to control the precision in determining the rank of A via its singular values.

This is a high-level wrapper and will use one of the decompositions qr!, svd!, and left_polar! to compute the orthogonal basis V, as controlled by the keyword arguments.

When kind is provided, its possible values are

  • kind == :qrpos: V and C are computed using the positive QR decomposition. This requires iszero(atol) && iszero(rtol) and left_orth!(A, [VC]) is equivalent to qr_compact!(A, [VC], alg) with a default value alg = select_algorithm(qr_compact!, A; positive=true)

  • kind == :qr: V and C are computed using the QR decomposition. This requires iszero(atol) && iszero(rtol) and left_orth!(A, [VC]) is equivalent to qr_compact!(A, [VC], alg) with a default value alg = select_algorithm(qr_compact!, A)

  • kind == :polar: V and C are computed using the polar decomposition. This requires iszero(atol) && iszero(rtol) and left_orth!(A, [VC]) is equivalent to left_polar!(A, [VC], alg) with a default value alg = select_algorithm(left_polar!, A)

  • kind == :svd: V and C are computed using the singular value decomposition svd_trunc!, where V will contain the left singular vectors corresponding to the singular values that are larger than max(atol, rtol * σ₁), where σ₁ is the largest singular value of A. C is computed as the product of the singular values and the right singular vectors, i.e. with U, S, Vᴴ = svd_trunc!(A), we have V = U and C = S * Vᴴ.

When kind is not provided, the default value is :qrpos when iszero(atol) && iszero(rtol) and :svd otherwise. Finally, finer control is obtained by providing an explicit algorithm using the alg keyword argument, which should be compatible with the chosen or default value of kind.

Note

The bang method left_orth! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided VC as output.

See also right_orth(!), left_null(!), right_null(!)

MatrixAlgebraKit.right_orth - Function
right_orth(A; [kind::Symbol, atol::Real=0, rtol::Real=0, alg]) -> C, Vᴴ
right_orth!(A, [CVᴴ]; [kind::Symbol, atol::Real=0, rtol::Real=0, alg]) -> C, Vᴴ

Compute an orthonormal basis V = adjoint(Vᴴ) for the coimage of the matrix A, i.e. for the image of adjoint(A), as well as a matrix C such that A = C * Vᴴ. The keyword argument kind can be used to specify the specific orthogonal decomposition that should be used to factor A, whereas atol and rtol can be used to control the precision in determining the rank of A via its singular values.

This is a high-level wrapper and will call one of the decompositions lq!, svd!, and right_polar! to compute the orthogonal basis Vᴴ, as controlled by the keyword arguments.

When kind is provided, its possible values are

  • kind == :lqpos: C and Vᴴ are computed using the positive LQ decomposition. This requires iszero(atol) && iszero(rtol) and right_orth!(A, [CVᴴ]) is equivalent to lq_compact!(A, [CVᴴ], alg) with a default value alg = select_algorithm(lq_compact!, A; positive=true)

  • kind == :lq: C and Vᴴ are computed using the LQ decomposition. This requires iszero(atol) && iszero(rtol) and right_orth!(A, [CVᴴ]) is equivalent to lq_compact!(A, [CVᴴ], alg) with a default value alg = select_algorithm(lq_compact!, A)

  • kind == :polar: C and Vᴴ are computed using the polar decomposition. This requires iszero(atol) && iszero(rtol) and right_orth!(A, [CVᴴ]) is equivalent to right_polar!(A, [CVᴴ], alg) with a default value alg = select_algorithm(right_polar!, A)

  • kind == :svd: C and Vᴴ are computed using the singular value decomposition svd_trunc!, where V = adjoint(Vᴴ) will contain the right singular vectors corresponding to the singular values that are larger than max(atol, rtol * σ₁), where σ₁ is the largest singular value of A. C is computed as the product of the left singular vectors and the singular values, i.e. with U, S, Vᴴ = svd_trunc!(A), we have C = rmul!(U, S) and Vᴴ is used as is.

When kind is not provided, the default value is :lqpos when iszero(atol) && iszero(rtol) and :svd otherwise. Finally, finer control is obtained by providing an explicit algorithm using the alg keyword argument, which should be compatible with the chosen or default value of kind.

Note

The bang method right_orth! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided CVᴴ as output.

See also left_orth(!), left_null(!), right_null(!)

Null Spaces

Similarly, it can be convenient to obtain an orthonormal basis for the kernel or cokernel of a matrix. These are the orthogonal complements of the coimage and image respectively, and can be computed using the left_null and right_null functions. Again, this is typically implemented through a combination of the decompositions mentioned above, and serves as a convenient interface to these operations.
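A brief sketch:

A = randn(6, 4)          # tall, generically full column rank
N = left_null(A)         # basis for the cokernel: A' * N ≈ 0 and N' * N ≈ I

B = randn(4, 6)          # wide, generically full row rank
Nᴴ = right_null(B)       # basis for the kernel: B * Nᴴ' ≈ 0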

MatrixAlgebraKit.left_null - Function
left_null(A; [kind::Symbol, atol::Real=0, rtol::Real=0, alg]) -> N
left_null!(A, [N]; [kind::Symbol, atol::Real=0, rtol::Real=0, alg]) -> N

Compute an orthonormal basis N for the cokernel of the matrix A of size (m, n), i.e. the nullspace of adjoint(A), such that adjoint(A)*N ≈ 0 and N'*N ≈ I. The keyword argument kind can be used to specify the specific orthogonal decomposition that should be used to factor A, whereas atol and rtol can be used to control the precision in determining the rank of A via its singular values.

This is a high-level wrapper and will use one of the decompositions qr! or svd! to compute the orthogonal basis N, as controlled by the keyword arguments.

When kind is provided, its possible values are

  • kind == :qrpos: N is computed using the positive QR decomposition. This requires iszero(atol) && iszero(rtol) and left_null!(A, [N], kind=:qrpos) is equivalent to qr_null!(A, [N], alg) with a default value alg = select_algorithm(qr_compact!, A; positive=true)

  • kind == :qr: N is computed using the (nonpositive) QR decomposition. This requires iszero(atol) && iszero(rtol) and left_null!(A, [N], kind=:qr) is equivalent to qr_null!(A, [N], alg) with a default value alg = select_algorithm(qr_compact!, A)

  • kind == :svd: N is computed using the singular value decomposition and will contain the left singular vectors corresponding to the singular values that are smaller than max(atol, rtol * σ₁), where σ₁ is the largest singular value of A.

When kind is not provided, the default value is :qrpos when iszero(atol) && iszero(rtol) and :svd otherwise. Finally, finer control is obtained by providing an explicit algorithm using the alg keyword argument, which should be compatible with the chosen or default value of kind.

Note

The bang method left_null! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided N as output.

See also right_null(!), left_orth(!), right_orth(!)

MatrixAlgebraKit.right_null - Function
right_null(A; [kind::Symbol, atol::Real=0, rtol::Real=0, alg]) -> Nᴴ
right_null!(A, [Nᴴ]; [kind::Symbol, atol::Real=0, rtol::Real=0, alg]) -> Nᴴ

Compute an orthonormal basis N = adjoint(Nᴴ) for the kernel or nullspace of the matrix A of size (m, n), such that A*adjoint(Nᴴ) ≈ 0 and Nᴴ*adjoint(Nᴴ) ≈ I. The keyword argument kind can be used to specify the specific orthogonal decomposition that should be used to factor A, whereas atol and rtol can be used to control the precision in determining the rank of A via its singular values.

This is a high-level wrapper and will use one of the decompositions lq! or svd! to compute the orthogonal basis Nᴴ, as controlled by the keyword arguments.

When kind is provided, its possible values are

  • kind == :lqpos: Nᴴ is computed using the positive LQ decomposition. This requires iszero(atol) && iszero(rtol) and right_null!(A, [Nᴴ], kind=:lqpos) is equivalent to lq_null!(A, [Nᴴ], alg) with a default value alg = select_algorithm(lq_compact!, A; positive=true)

  • kind == :lq: Nᴴ is computed using the (nonpositive) LQ decomposition. This requires iszero(atol) && iszero(rtol) and right_null!(A, [Nᴴ], kind=:lq) is equivalent to lq_null!(A, [Nᴴ], alg) with a default value alg = select_algorithm(lq_compact!, A)

  • kind == :svd: N = adjoint(Nᴴ) is computed using the singular value decomposition and will contain the right singular vectors corresponding to the singular values that are smaller than max(atol, rtol * σ₁), where σ₁ is the largest singular value of A.

When kind is not provided, the default value is :lqpos when iszero(atol) && iszero(rtol) and :svd otherwise. Finally, finer control is obtained by providing an explicit algorithm using the alg keyword argument, which should be compatible with the chosen or default value of kind.

Note

The bang method right_null! optionally accepts the output structure and possibly destroys the input matrix A. Always use the return value of the function as it may not always be possible to use the provided Nᴴ as output.

See also left_null(!), left_orth(!), right_orth(!)
