Diagonal matrix






Matrix whose only nonzero elements are on its main diagonal

In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero. The term usually refers to square matrices. An example of a 2-by-2 diagonal matrix is $\begin{bmatrix}3&0\\0&2\end{bmatrix}$; the following matrix is a 3-by-3 diagonal matrix: $\begin{bmatrix}6&0&0\\0&7&0\\0&0&19\end{bmatrix}$. An identity matrix of any size, or any multiple of it, will be a diagonal matrix.





Background


As stated above, the off-diagonal entries are zero. That is, the matrix $D = (d_{i,j})$ with n columns and n rows is diagonal if

$\forall i,j \in \{1,2,\ldots,n\},\quad i \neq j \implies d_{i,j} = 0.$

However, the main diagonal entries are unrestricted.
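
For illustration, this definition can be checked numerically; the following sketch (assuming NumPy, with is_diagonal as a purely illustrative helper name) tests whether all off-diagonal entries of a matrix are zero:

```python
import numpy as np

def is_diagonal(D):
    """Return True if every entry off the main diagonal of D is zero."""
    D = np.asarray(D)
    i, j = np.indices(D.shape)
    return bool(np.all(D[i != j] == 0))

print(is_diagonal([[3, 0], [0, 2]]))                     # True
print(is_diagonal([[6, 0, 0], [0, 7, 1], [0, 0, 19]]))   # False: one off-diagonal entry is 1
```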



Rectangular diagonal matrices


The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form $d_{i,i}$ being zero. For example:

$\begin{bmatrix}1&0&0\\0&4&0\\0&0&-3\\0&0&0\end{bmatrix}$ or $\begin{bmatrix}1&0&0&0&0\\0&4&0&0&0\\0&0&-3&0&0\end{bmatrix}$
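
Such a rectangular diagonal matrix can be built by writing the given values onto the main diagonal of an m-by-n zero matrix. A minimal NumPy sketch (rect_diag is a hypothetical helper name, not a library function):

```python
import numpy as np

def rect_diag(entries, m, n):
    """Return an m-by-n matrix with `entries` on its main diagonal, zeros elsewhere."""
    D = np.zeros((m, n))
    k = min(m, n, len(entries))
    D[np.arange(k), np.arange(k)] = entries[:k]
    return D

print(rect_diag([1, 4, -3], 4, 3))   # the 4-by-3 example above
print(rect_diag([1, 4, -3], 3, 5))   # the 3-by-5 example above
```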


Symmetric diagonal matrices


The following matrix is a symmetric diagonal matrix:


$\begin{bmatrix}1&0&0\\0&4&0\\0&0&-2\end{bmatrix}$

If the entries are real numbers or complex numbers, then it is a normal matrix as well.


In the remainder of this article we will consider only square matrices.



Scalar matrix


A square diagonal matrix with all its main diagonal entries equal is a scalar matrix, that is, a scalar multiple λI of the identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3×3 scalar matrix has the form:


$\begin{bmatrix}\lambda &0&0\\0&\lambda &0\\0&0&\lambda \end{bmatrix}\equiv \lambda \boldsymbol{I}_3$

The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. A diagonal matrix that is not scalar commutes with all diagonal matrices, but not with arbitrary matrices.[1] Intuitively, this stems from the fact that a scalar matrix is simply the identity matrix multiplied by a scalar.
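
This commutation behaviour is easy to check numerically; a small sketch assuming NumPy, with arbitrarily chosen entries:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # an arbitrary square matrix
S = 5.0 * np.eye(2)                       # a scalar matrix, 5I
D = np.diag([5.0, 7.0])                   # diagonal but not scalar

print(np.allclose(S @ A, A @ S))   # True: a scalar matrix commutes with every matrix
print(np.allclose(D @ A, A @ D))   # False: a non-scalar diagonal matrix need not
```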


For an abstract vector space V (rather than the concrete vector space $K^n$), or more generally a module M over a ring R, with the endomorphism algebra End(M) (the algebra of linear operators on M) replacing the algebra of matrices, the analog of scalar matrices are scalar transformations. Formally, scalar multiplication is a linear map, inducing a map $R \to \operatorname{End}(M)$ (sending a scalar λ to the corresponding scalar transformation, multiplication by λ) that exhibits End(M) as an R-algebra. For vector spaces, or more generally free modules $M \cong R^n$, for which the endomorphism algebra is isomorphic to a matrix algebra, the scalar transformations are exactly the center of the endomorphism algebra; similarly, the invertible scalar transformations are the center of the general linear group GL(V), where they are denoted Z(V), following the usual notation for the center.



Matrix operations


The operations of matrix addition and matrix multiplication are especially simple for symmetric diagonal matrices. Write $\operatorname{diag}(a_1, \ldots, a_n)$ for a diagonal matrix whose diagonal entries starting in the upper left corner are $a_1, \ldots, a_n$. Then, for addition, we have

$\operatorname{diag}(a_1, \ldots, a_n) + \operatorname{diag}(b_1, \ldots, b_n) = \operatorname{diag}(a_1 + b_1, \ldots, a_n + b_n)$

and for matrix multiplication,

$\operatorname{diag}(a_1, \ldots, a_n) \cdot \operatorname{diag}(b_1, \ldots, b_n) = \operatorname{diag}(a_1 b_1, \ldots, a_n b_n).$

The diagonal matrix $\operatorname{diag}(a_1, \ldots, a_n)$ is invertible if and only if the entries $a_1, \ldots, a_n$ are all nonzero. In this case, we have

$\operatorname{diag}(a_1, \ldots, a_n)^{-1} = \operatorname{diag}(a_1^{-1}, \ldots, a_n^{-1}).$

In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices.
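
These entrywise rules, and the inversion formula, can be confirmed with a short NumPy check (the diagonal entries below are arbitrary nonzero values):

```python
import numpy as np

a = np.array([2.0, 3.0, 5.0])
b = np.array([7.0, 11.0, 13.0])

# Addition and multiplication of diagonal matrices act entrywise on the diagonals.
print(np.allclose(np.diag(a) + np.diag(b), np.diag(a + b)))      # True
print(np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b)))      # True

# Since all entries of a are nonzero, the inverse exists and inverts each entry.
print(np.allclose(np.linalg.inv(np.diag(a)), np.diag(1.0 / a)))  # True
```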


Multiplying an n-by-n matrix A from the left with diag(a1, ..., an) amounts to multiplying the ith row of A by ai for all i; multiplying the matrix A from the right with diag(a1, ..., an) amounts to multiplying the ith column of A by ai for all i.
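
The row- and column-scaling effect can be seen directly; a minimal NumPy example with an arbitrary 3-by-3 matrix:

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)     # arbitrary 3-by-3 matrix
d = np.array([2.0, 3.0, 5.0])
D = np.diag(d)

print(np.allclose(D @ A, d[:, None] * A))  # True: left multiplication scales row i by d[i]
print(np.allclose(A @ D, A * d[None, :]))  # True: right multiplication scales column i by d[i]
```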



Operator matrix in eigenbasis



As explained in determining coefficients of operator matrix, there is a special basis, e1, ..., en, for which the matrix A takes the diagonal form. Hence, in the defining equation $A\vec{e}_j = \sum_i a_{i,j}\vec{e}_i$, all coefficients $a_{i,j}$ with $i \neq j$ are zero, leaving only one term per sum. The surviving diagonal elements, $a_{i,i}$, are known as eigenvalues and designated with $\lambda_i$ in the equation, which reduces to $A\vec{e}_i = \lambda_i \vec{e}_i$. The resulting equation is known as the eigenvalue equation[2] and is used to derive the characteristic polynomial and, further, the eigenvalues and eigenvectors.


In other words, the eigenvalues of $\operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ are $\lambda_1, \ldots, \lambda_n$ with associated eigenvectors $e_1, \ldots, e_n$.
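
For example, assuming NumPy, the eigenvalues reported for a diagonal matrix are just its diagonal entries, and the eigenvectors are the standard basis vectors:

```python
import numpy as np

D = np.diag([6.0, 7.0, 19.0])
eigenvalues, eigenvectors = np.linalg.eig(D)

print(eigenvalues)    # approximately [6., 7., 19.]
print(eigenvectors)   # the identity matrix: columns are the standard basis vectors
```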



Properties


The determinant of $\operatorname{diag}(a_1, \ldots, a_n)$ is the product $a_1 \cdots a_n$.


The adjugate of a diagonal matrix is again diagonal.


A square matrix is diagonal if and only if it is triangular and normal.


Any square diagonal matrix is also a symmetric matrix.


A symmetric diagonal matrix can be defined as a matrix that is both upper- and lower-triangular. The identity matrix $I_n$ and any square zero matrix are diagonal. A one-dimensional (1-by-1) matrix is always diagonal.
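
A brief numeric check of two of these properties, assuming NumPy and arbitrary diagonal entries:

```python
import numpy as np

a = np.array([2.0, -3.0, 5.0])
D = np.diag(a)

print(np.isclose(np.linalg.det(D), np.prod(a)))  # True: determinant is the product of the entries
print(np.allclose(D, D.T))                        # True: a square diagonal matrix is symmetric
```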



Applications


Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix.


In fact, a given n-by-n matrix A is similar to a diagonal matrix (meaning that there is an invertible matrix X such that $X^{-1}AX$ is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable.


Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix: if $AA^* = A^*A$, then there exists a unitary matrix U such that $UAU^*$ is diagonal. Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that $U^*AV$ is diagonal with nonnegative entries.
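
Both statements can be illustrated with NumPy's standard routines eigh (eigendecomposition of a symmetric matrix, a special case of a normal matrix) and svd; the matrices below are arbitrary examples:

```python
import numpy as np

# Spectral theorem, real symmetric case: A = U diag(w) U^T with U orthogonal.
A = np.array([[2.0, 1.0], [1.0, 3.0]])        # symmetric, hence normal
w, U = np.linalg.eigh(A)
print(np.allclose(U.T @ A @ U, np.diag(w)))   # True: U^T A U is diagonal

# Singular value decomposition: B = U diag(s) V^*, with s nonnegative.
B = np.array([[1.0, 2.0], [0.0, -3.0]])
U2, s, Vh = np.linalg.svd(B)
print(np.allclose(U2.conj().T @ B @ Vh.conj().T, np.diag(s)))  # True: U^* B V is diagonal
```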



Operator theory


In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique for understanding operators is a change of coordinates (in the language of operators, an integral transform) that changes the basis to an eigenbasis of eigenfunctions, making the equation separable. An important example of this is the Fourier transform, which diagonalizes constant-coefficient differentiation operators (or more generally translation-invariant operators), such as the Laplacian operator in, say, the heat equation.


Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function; the values of the function at each point correspond to the diagonal entries of a matrix.
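
As a rough numerical illustration of this idea (a sketch only; the grid size, time step, and initial profile below are arbitrary choices), the discrete Fourier transform turns the periodic second-derivative operator into a multiplication operator, so advancing the heat equation u_t = u_xx by one time step amounts to multiplying each Fourier mode by a scalar:

```python
import numpy as np

n, L, dt = 128, 2 * np.pi, 0.01                # arbitrary grid size, domain length, time step
x = np.linspace(0.0, L, n, endpoint=False)
u = np.exp(-10.0 * (x - np.pi) ** 2)           # an arbitrary initial temperature profile

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # wavenumbers: the "diagonal" of the Laplacian
u_hat = np.fft.fft(u)                          # change of basis to the Fourier eigenbasis
u_hat *= np.exp(-(k ** 2) * dt)                # each mode is multiplied by a scalar
u_next = np.fft.ifft(u_hat).real               # back to the spatial basis

print(u_next.max() <= u.max() + 1e-12)         # True: diffusion smooths the profile
```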



See also




  • Anti-diagonal matrix

  • Banded matrix

  • Bidiagonal matrix

  • Diagonally dominant matrix

  • Jordan normal form

  • Multiplication operator

  • Tridiagonal matrix

  • Toeplitz matrix

  • Toral Lie algebra

  • Circulant matrix




Notes




  1. ^ "do-diagonal-matrices-always-commute". stackexchange. Retrieved August 4, 2018..mw-parser-output cite.citation{font-style:inherit}.mw-parser-output .citation q{quotes:"""""""'""'"}.mw-parser-output .citation .cs1-lock-free a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/6/65/Lock-green.svg/9px-Lock-green.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .citation .cs1-lock-limited a,.mw-parser-output .citation .cs1-lock-registration a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/d/d6/Lock-gray-alt-2.svg/9px-Lock-gray-alt-2.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .citation .cs1-lock-subscription a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Lock-red-alt-2.svg/9px-Lock-red-alt-2.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration{color:#555}.mw-parser-output .cs1-subscription span,.mw-parser-output .cs1-registration span{border-bottom:1px dotted;cursor:help}.mw-parser-output .cs1-ws-icon a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/12px-Wikisource-logo.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output code.cs1-code{color:inherit;background:inherit;border:inherit;padding:inherit}.mw-parser-output .cs1-hidden-error{display:none;font-size:100%}.mw-parser-output .cs1-visible-error{font-size:100%}.mw-parser-output .cs1-maint{display:none;color:#33aa33;margin-left:0.3em}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration,.mw-parser-output .cs1-format{font-size:95%}.mw-parser-output .cs1-kern-left,.mw-parser-output .cs1-kern-wl-left{padding-left:0.2em}.mw-parser-output .cs1-kern-right,.mw-parser-output .cs1-kern-wl-right{padding-right:0.2em}


  2. ^ Nearing, James (2010). "Chapter 7.9: Eigenvalues and Eigenvectors" (PDF). Mathematical Tools for Physics. ISBN 048648212X. Retrieved January 1, 2012.



References


  • Roger A. Horn and Charles R. Johnson, Matrix Analysis, Cambridge University Press, 1985.
    ISBN 0-521-30586-1 (hardback),
    ISBN 0-521-38632-2 (paperback).


