Determinant


In algebra, a determinant is a function that associates a scalar, det(A), to every n×n square matrix A. The fundamental geometric meaning of a determinant is that it is the scale factor for volume when A is regarded as a linear transformation. Determinants are important both in calculus, where they enter the substitution rule for several variables, and in multilinear algebra.

For a fixed positive integer n, there is a unique determinant function for the n×n matrices over any commutative ring R. In particular, this function exists when R is the field of real or complex numbers.

Vertical bar notation

The determinant of a matrix A is also sometimes denoted by |A|. This notation can be ambiguous, since the same bars are used for certain matrix norms and for the absolute value. However, the matrix norm is usually written with double vertical bars (e.g., ‖A‖), possibly with a subscript, so the vertical bar notation for the determinant is frequently used (e.g., in Cramer's rule and for minors). For example, for the matrix


A = \begin{bmatrix} a & b & c\\d & e & f\\g & h & i \end{bmatrix}\,

the determinant det(A) might be indicated by |A| or, more explicitly, as


|A| = \begin{vmatrix} a & b & c\\d & e & f\\g & h & i \end{vmatrix}.\,

That is, the square brackets around the matrix are replaced with elongated vertical bars.

Determinants of 2-by-2 matrices

The area of the parallelogram is the absolute value of the determinant of the matrix formed by the vectors representing the parallelogram's sides.

The 2×2 matrix


A = \begin{bmatrix} a & b\\c & d \end{bmatrix}\,

has determinant

\det(A)=ad-bc.\,

The interpretation when the matrix has real number entries is that this gives the oriented area of the parallelogram with vertices at (0,0), (a,b), (a + c, b + d), and (c,d). The oriented area is the same as the usual area, except that it is negative when the vertices are listed in clockwise order.

The assumption here is that the linear transformation is applied to row vectors as the vector-matrix product x^TA, where x is a column vector. The parallelogram in the figure is obtained by multiplying the row vectors  \begin{bmatrix} 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 \end{bmatrix} and \begin{bmatrix}1 & 1\end{bmatrix} (the vertices of the unit square) by A. With the more common matrix-vector product Ax, the parallelogram has vertices at \begin{bmatrix} 0 \\ 0  \end{bmatrix}, \begin{bmatrix} a \\ c \end{bmatrix}, \begin{bmatrix} a+b \\ c+d \end{bmatrix} and  \begin{bmatrix} b \\ d \end{bmatrix} (note that Ax = (x^TA^T)^T).

A formula for larger matrices will be given below.
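As a quick illustration, here is a minimal Python sketch of the 2×2 formula; the helper name det2 is ad hoc, not a library routine:

    def det2(a, b, c, d):
        """Determinant ad - bc of the matrix [[a, b], [c, d]]."""
        return a * d - b * c

    # A shear matrix preserves area, so its determinant is 1.
    print(det2(1, 1, 0, 1))   # 1
    # Swapping the two rows flips the orientation, and hence the sign.
    print(det2(0, 1, 1, 1))   # -1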

Determinants of 3-by-3 matrices

The volume of this parallelepiped is the absolute value of the determinant of the matrix formed by the rows r1, r2, and r3.

Consider the 3×3 matrix

A=\begin{bmatrix}a&b&c\\
d&e&f\\g&h&i\end{bmatrix}.

Using the cofactor expansion along the first row of the matrix, we get:

\begin{align}
\det(A) &= a\begin{vmatrix}e&f\\h&i\end{vmatrix}
-b\begin{vmatrix}d&f\\g&i\end{vmatrix}
+c\begin{vmatrix}d&e\\g&h\end{vmatrix} \\
&= aei-afh-bdi+bfg+cdh-ceg \\
&= (aei+bfg+cdh)-(gec+hfa+idb),
\end{align}
The determinant of a 3×3 matrix can be calculated by its diagonals.

which can be remembered as the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements when the copies of the first two columns of the matrix are written beside it as below:


\begin{matrix}
\color{blue}a & \color{blue}b & \color{blue}c & a & b \\
d & \color{blue}e & \color{blue}f & \color{blue}d & e \\
g & h & \color{blue}i & \color{blue}g & \color{blue}h
\end{matrix}
\quad - \quad
\begin{matrix}
a & b & \color{red}c & \color{red}a & \color{red}b \\
d & \color{red}e & \color{red}f & \color{red}d & e \\
\color{red}g & \color{red}h & \color{red}i & g & h
\end{matrix}

Note that this mnemonic does not carry over into higher dimensions.
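As an illustration, here is a minimal Python sketch of this diagonal rule (the rule of Sarrus); the helper name det3_sarrus is ad hoc:

    def det3_sarrus(m):
        """Determinant of a 3-by-3 matrix given as a list of rows."""
        (a, b, c), (d, e, f), (g, h, i) = m
        return (a*e*i + b*f*g + c*d*h) - (g*e*c + h*f*a + i*d*b)

    print(det3_sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3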

Applications

Determinants are used to characterize invertible matrices (i.e., exactly those matrices with non-zero determinants), and to explicitly describe the solution to a system of linear equations with Cramer's rule. They can be used to find the eigenvalues of the matrix A through the characteristic polynomial

p(x) = \det(xI - A) \,

where I is the identity matrix of the same dimension as A.

One often thinks of the determinant as assigning a number to every sequence of n vectors in \Bbb{R}^n, by using the square matrix whose columns are the given vectors. With this understanding, the sign of the determinant of a basis can be used to define the notion of orientation in Euclidean spaces. The determinant of a set of vectors is positive if the vectors form a right-handed coordinate system, and negative if left-handed.

Determinants are used to calculate volumes in vector calculus: the absolute value of the determinant of n real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if the linear map f: \Bbb{R}^n \rightarrow \Bbb{R}^n is represented by the matrix A, and S is any measurable subset of \Bbb{R}^n, then the volume of f(S) is given by \left| \det(A) \right| \times \operatorname{volume}(S). More generally, if the linear map f: \Bbb{R}^n \rightarrow \Bbb{R}^m is represented by the m-by-n matrix A, and S is any measurable subset of \Bbb{R}^{n}, then the n-dimensional volume of f(S) is given by \sqrt{\det(A^\mathrm{T} A)} \times \operatorname{volume}(S). By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify skew lines.

The volume of any tetrahedron, given its vertices a, b, c, and d, is (1/6)·|det(a − b, b − c, c − d)|, or any other combination of pairs of vertices that form a simply connected graph.
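As a quick numerical check, the following sketch (assuming NumPy is available) computes the volume of the unit tetrahedron this way:

    import numpy as np

    # Vertices of the unit tetrahedron, whose volume is 1/6.
    a, b, c, d = map(np.array, [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
    M = np.column_stack([a - b, b - c, c - d])
    print(abs(np.linalg.det(M)) / 6)   # 0.1666... = 1/6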

General definition and computation

The definition of the determinant comes from the following theorem.

Theorem. Let M_n(K) denote the set of all n \times n matrices over the field K. There exists exactly one function

F : M_n(K) \longrightarrow K

with the two properties:

  • F is alternating multilinear with regard to columns;
  • F(I) = 1.

One can then define the determinant as the unique function with the above properties.

In proving the above theorem, one also obtains the Leibniz formula:

\det(A) = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i=1}^n A_{i,\sigma(i)}.

Here the sum is computed over all permutations σ of the numbers {1,2,...,n} and sgn(σ) denotes the signature of the permutation σ: +1 if σ is an even permutation and −1 if it is odd.

This formula contains n! (factorial) summands, and it is therefore impractical to use it to calculate determinants for large n.
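Still, a direct transcription is instructive for very small matrices. Here is a minimal Python sketch (the helper names sign and det_leibniz are ad hoc) that does O(n · n!) work:

    import math
    from itertools import permutations

    def sign(sigma):
        """Signature of a permutation given as a tuple of 0-based indices."""
        s = 1
        for i in range(len(sigma)):
            for j in range(i + 1, len(sigma)):
                if sigma[i] > sigma[j]:   # count inversions
                    s = -s
        return s

    def det_leibniz(a):
        """Leibniz formula for a square matrix given as a list of rows."""
        n = len(a)
        return sum(sign(p) * math.prod(a[i][p[i]] for i in range(n))
                   for p in permutations(range(n)))

    print(det_leibniz([[1, 2], [3, 4]]))   # -2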

For small matrices, one obtains the following formulas:

  • if A is a 1-by-1 matrix, then \det(A) = A_{1,1}. \,
  • if A is a 2-by-2 matrix, then \det(A) = A_{1,1}A_{2,2} - A_{2,1}A_{1,2}. \,
  • for a 3-by-3 matrix A, the formula is more complicated:

\begin{matrix}
\det(A) & = & A_{1,1}A_{2,2}A_{3,3} + A_{1,3}A_{2,1}A_{3,2} + A_{1,2}A_{2,3}A_{3,1}\\
& & - A_{1,3}A_{2,2}A_{3,1} - A_{1,1}A_{2,3}A_{3,2} - A_{1,2}A_{2,1}A_{3,3}.
\end{matrix}\,

which takes the shape of Sarrus' scheme.


In general, determinants can be computed using Gaussian elimination together with the following rules:

  • If A is a triangular matrix, i.e. A_{i,j} = 0 \, whenever i > j or, alternatively, whenever i < j, then \det(A) =  A_{1,1} A_{2,2} \cdots A_{n,n} \, (the product of the diagonal entries of A).
  • If B results from A by exchanging two rows or columns, then \det(B) = -\det(A). \,
  • If B results from A by multiplying one row or column with the number c, then \det(B) = c\,\det(A). \,
  • If B results from A by adding a multiple of one row to another row, or a multiple of one column to another column, then \det(B) = \det(A). \,

Explicitly, starting out with some matrix, use the last three rules to convert it into a triangular matrix, keeping track of the sign changes caused by row exchanges and of any factors introduced by scaling rows, then use the first rule to compute its determinant.
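A minimal Python sketch of this procedure, using partial pivoting (the helper name det_gauss is ad hoc):

    def det_gauss(a):
        """Determinant via Gaussian elimination; a is a list of rows."""
        a = [list(map(float, row)) for row in a]   # work on a copy
        n, det = len(a), 1.0
        for k in range(n):
            # Pick the largest pivot in column k for numerical stability.
            p = max(range(k, n), key=lambda r: abs(a[r][k]))
            if a[p][k] == 0.0:
                return 0.0                 # singular matrix
            if p != k:
                a[k], a[p] = a[p], a[k]
                det = -det                 # a row swap flips the sign
            for r in range(k + 1, n):
                factor = a[r][k] / a[k][k]
                for c in range(k, n):
                    a[r][c] -= factor * a[k][c]   # leaves det unchanged
            det *= a[k][k]                 # product of the pivots
        return det

    print(det_gauss([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))   # 18.0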

It is also possible to expand a determinant along a row or column using Laplace's formula, which is efficient for relatively small matrices. To do this along row i, say, we write

\det(A) = \sum_{j=1}^n A_{i,j}C_{i,j} = \sum_{j=1}^n A_{i,j} (-1)^{i+j} M_{i,j}

where the C_{i,j} represent the matrix cofactors, i.e. C_{i,j} is (−1)^{i+j} times the minor M_{i,j}, which is the determinant of the matrix that results from A by removing the i-th row and the j-th column.
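A minimal Python sketch of this expansion along the first row (the helper name det_laplace is ad hoc); it is exponential in n, so only suitable for small matrices:

    def det_laplace(a):
        """Recursive cofactor expansion along the first row."""
        n = len(a)
        if n == 1:
            return a[0][0]
        total = 0
        for j in range(n):
            # Minor M_{1,j}: delete row 0 and column j.
            minor = [row[:j] + row[j+1:] for row in a[1:]]
            total += (-1) ** j * a[0][j] * det_laplace(minor)
        return total

    print(det_laplace([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))   # 18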

Example

Suppose we want to compute the determinant of

A = \begin{bmatrix}-2&2&-3\\
-1& 1& 3\\
2 &0 &-1\end{bmatrix}.

We can apply the Leibniz formula directly:

\det(A)\, =\, (-2\cdot 1 \cdot -1) + (-3\cdot -1 \cdot 0) + (2\cdot 3\cdot 2)
- (-3\cdot 1 \cdot 2) - (-2\cdot 3 \cdot 0) - (2\cdot -1 \cdot -1)
=\, 2 + 0 + 12 - (-6) - 0 - 2 = 18.\,

Alternatively, we can use Laplace's formula to expand the determinant along a row or column. It is best to choose a row or column with many zeros, so we will expand along the second column:

\det(A)\, =\, (-1)^{1+2}\cdot 2 \cdot \det \begin{bmatrix}-1&3\\ 2 &-1\end{bmatrix} + (-1)^{2+2}\cdot 1 \cdot \det \begin{bmatrix}-2&-3\\ 2&-1\end{bmatrix}
=\, (-2)\cdot((-1)\cdot(-1)-2\cdot3)+1\cdot((-2)\cdot(-1)-2\cdot(-3))
=\, (-2)(-5)+8 = 18.\,

A third way (and the method of choice for larger matrices) would involve the Gauss algorithm. When doing computations by hand, one can often shorten things dramatically by cleverly adding multiples of columns or rows to other columns or rows; this does not change the value of the determinant, but may create zero entries which simplifies the subsequent calculations. In this example, adding the second column to the first one is especially useful:

\begin{bmatrix}0&2&-3\\
0 &1 &3\\
2 &0 &-1\end{bmatrix}

and this determinant can be quickly expanded along the first column:

\det(A)\, =\, (-1)^{3+1}\cdot 2\cdot \det \begin{bmatrix}2&-3\\ 1&3\end{bmatrix}
=\, 2\cdot(2\cdot3-1\cdot(-3)) = 2\cdot 9 = 18.\,
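As a sanity check, a standard library routine gives the same value (a sketch assuming NumPy is available):

    import numpy as np

    A = np.array([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]])
    print(np.linalg.det(A))   # 18.0, up to floating-point rounding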

Properties

The determinant is a multiplicative map in the sense that

\det(AB) = \det(A)\det(B) \, for all n-by-n matrices A and B.

This is generalized by the Cauchy-Binet formula to products of non-square matrices.

It is easy to see that \det(rI_n) = r^n \, and thus

\det(rA) = \det(rI_n \cdot A) = r^n \det(A) \, for all n-by-n matrices A and all scalars r.
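Both properties are easy to check numerically; a sketch assuming NumPy is available:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.random((4, 4)), rng.random((4, 4))

    # det(AB) = det(A) det(B)
    print(np.allclose(np.linalg.det(A @ B),
                      np.linalg.det(A) * np.linalg.det(B)))   # True
    # det(rA) = r^n det(A), here with n = 4 and r = 3
    print(np.allclose(np.linalg.det(3 * A),
                      3**4 * np.linalg.det(A)))               # True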

A matrix over a commutative ring R is invertible if and only if its determinant is a unit in R. In particular, if A is a matrix over a field such as the real or complex numbers, then A is invertible if and only if det(A) is not zero. In this case we have

\det(A^{-1}) = \det(A)^{-1}. \,

Expressed differently: the vectors v1,...,vn in Rn form a basis if and only if det(v1,...,vn) is non-zero.

A matrix and its transpose have the same determinant:

\det(A^\mathrm{T}) = \det(A). \,

The determinants of a complex matrix and of its conjugate transpose are conjugate:

\det(A^*) = \det(A)^*. \,

(Note that for a real matrix, the conjugate transpose coincides with the transpose.)

The determinant of a matrix A exhibits the following properties under elementary matrix transformations of A:

  1. Exchanging rows or columns multiplies the determinant by −1.
  2. Multiplying a row or column by m multiplies the determinant by m.
  3. Adding a multiple of a row or column to another leaves the determinant unchanged.

This follows from the multiplicative property and the determinants of the corresponding elementary matrices.

If A and B are similar, i.e., if there exists an invertible matrix X such that A = X^{-1}BX, then by the multiplicative property,

\det(A) = \det(B). \,

This means that the determinant is a similarity invariant. Because of this, the determinant of a linear transformation T : V → V on a finite-dimensional vector space V is independent of the basis chosen for V. The relationship is one-way, however: there exist matrices which have the same determinant but are not similar.

If A is a square n-by-n matrix with real or complex entries and if λ1,...,λn are the (complex) eigenvalues of A listed according to their algebraic multiplicities, then

\det(A) = \lambda_{1}\lambda_{2} \cdots \lambda_{n}.\,

This follows from the fact that A is always similar to its Jordan normal form, an upper triangular matrix with the eigenvalues on the main diagonal.
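A numerical check of the eigenvalue product (a sketch assuming NumPy is available):

    import numpy as np

    A = np.array([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]], dtype=float)
    eigvals = np.linalg.eigvals(A)    # complex in general
    print(np.prod(eigvals).real)      # approximately 18.0 = det(A)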

Useful identities

For an m-by-n matrix A and an m-by-n matrix B, the following identities hold:

\det(I_n + A^\mathrm{T} B) = \det(I_m + A B^\mathrm{T}) = \det(I_n + B^\mathrm{T} A) = \det(I_m + B A^\mathrm{T}).

A consequence of these equalities, for column vectors x and y, is

\det(I + x y^\mathrm{T}) = 1 + y^\mathrm{T} x.

A generalized version of this identity, valid when A is invertible, is

\det(A + x y^T) = \det(A)\ (1 + y^T A^{-1} x) .
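This last identity is straightforward to verify numerically; a sketch assuming NumPy is available:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((4, 4)) + 4 * np.eye(4)   # comfortably invertible
    x, y = rng.random(4), rng.random(4)

    lhs = np.linalg.det(A + np.outer(x, y))
    rhs = np.linalg.det(A) * (1 + y @ np.linalg.solve(A, x))  # y^T A^-1 x
    print(np.allclose(lhs, rhs))   # True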



Block matrices

Suppose A, B, C, D are n\times n, n\times m, m\times n, m\times m matrices, respectively. Then

\det\begin{pmatrix}A& 0\\ C& D\end{pmatrix} = \det\begin{pmatrix}A& B\\ 0& D\end{pmatrix} = \det(A) \det(D) .

This can be seen quite easily from, e.g., the Leibniz formula. Employing the following identity, valid when A is invertible,

\begin{pmatrix}A& B\\ C& D\end{pmatrix} = \begin{pmatrix}A& 0\\ C& 1\end{pmatrix} \begin{pmatrix}1& A^{-1} B\\ 0& D - C A^{-1} B\end{pmatrix}

leads to

\det\begin{pmatrix}A& B\\ C& D\end{pmatrix} = \det(A) \det(D - C A^{-1} B) .

A similar identity with det(D) factored out can be derived analogously when D is invertible.
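A numerical check of the Schur-complement identity above (a sketch assuming NumPy is available):

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 3, 2
    A = rng.random((n, n)) + n * np.eye(n)   # invertible upper-left block
    B, C, D = rng.random((n, m)), rng.random((m, n)), rng.random((m, m))

    M = np.block([[A, B], [C, D]])
    schur = D - C @ np.linalg.solve(A, B)    # D - C A^{-1} B
    print(np.allclose(np.linalg.det(M),
                      np.linalg.det(A) * np.linalg.det(schur)))   # True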

If the blocks d_{ij} are square matrices of one fixed size that commute pairwise (for instance, if they are all diagonal), then

\det\begin{pmatrix}d_{11} & \ldots & d_{1r}\\ \vdots & & \vdots\\ d_{r1} & \ldots & d_{rr} \end{pmatrix} =
\det\left( \sum_{\sigma \in S_r} \sgn(\sigma) \prod_{i=1}^r d_{i,\sigma(i)} \right).

That is, one may first evaluate the determinant formally, treating the blocks as scalar entries, and then take the determinant of the resulting matrix. This is a special case of a theorem on determinants of block matrices with commuting blocks.
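A quick numerical illustration with diagonal (hence commuting) 2×2 blocks, assuming NumPy is available:

    import numpy as np

    d11, d12 = np.diag([1.0, 2.0]), np.diag([1.0, 1.0])
    d21, d22 = np.diag([1.0, 1.0]), np.diag([1.0, 1.0])

    M = np.block([[d11, d12], [d21, d22]])
    formal = d11 @ d22 - d12 @ d21     # formal 2x2 block determinant
    print(np.allclose(np.linalg.det(M), np.linalg.det(formal)))   # True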

Relationship to trace

From the connection between the determinant and the eigenvalues, one can derive a connection between the trace function, the exponential function, and the determinant:

\det(\exp(A)) = \exp(\operatorname{tr}(A)).

Performing the substitution \scriptstyle A \,\mapsto\, \log A in the above equation yields

 \det(A) = \exp(\operatorname{tr}(\log A)), \

which is closely related to the Fredholm determinant. Similarly,

 \operatorname{tr}(A) = \log(\det(\exp A)). \

For n-by-n matrices there are the relationships:

Case n = 1: \det(A) = \operatorname{tr}(A)
Case n = 2: \det(A) = \frac{1}{2}\left(\operatorname{tr}(A)^2 - \operatorname{tr}(A^2)\right)
Case n = 3: \det(A) = \frac{1}{6}\left(\operatorname{tr}(A)^3 - 3\operatorname{tr}(A)\operatorname{tr}(A^2) + 2\operatorname{tr}(A^3)\right)
Case n = 4: \det(A) = \frac{1}{24}\left(\operatorname{tr}(A)^4 - 6\operatorname{tr}(A)^2\operatorname{tr}(A^2) + 3\operatorname{tr}(A^2)^2 + 8\operatorname{tr}(A)\operatorname{tr}(A^3) - 6\operatorname{tr}(A^4)\right)
\ldots

which are closely related to Newton's identities.
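Both the exponential identity and the n = 3 case are easy to verify numerically; a sketch assuming NumPy and SciPy are available:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    A = rng.random((3, 3))

    # det(exp(A)) = exp(tr(A))
    print(np.allclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))  # True

    # Case n = 3 of the trace identities above
    t1, t2, t3 = np.trace(A), np.trace(A @ A), np.trace(A @ A @ A)
    print(np.allclose(np.linalg.det(A),
                      (t1**3 - 3*t1*t2 + 2*t3) / 6))                 # True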

Derivative

The determinant of real square matrices is a polynomial function from \Bbb{R}^{n \times n} to \Bbb{R}, and as such is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:

d \,\det(A) = \operatorname{tr}(\operatorname{adj}(A) \,dA)

where adj(A) denotes the adjugate of A. In particular, if A is invertible, we have

d \,\det(A) = \det(A) \,\operatorname{tr}(A^{-1} \,dA).

In component form, these are

 \frac{\partial \det(A)}{\partial A_{ij}}
= \operatorname{adj}(A)_{ji}
= \det(A)(A^{-1})_{ji}.

When ε is a small number these are equivalent to

\det(A + \epsilon X) - \det(A)
= \operatorname{tr}(\operatorname{adj}(A) X) \epsilon + {O}(\epsilon^2)
= \det(A) \,\operatorname{tr}(A^{-1} X) \epsilon + {O}(\epsilon^2).

The special case where A is equal to the identity matrix I yields

\det(I + \epsilon X) = 1 + \operatorname{tr}(X) \epsilon +O(\epsilon^2).
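Jacobi's formula can be checked against a finite difference; a sketch assuming NumPy is available:

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.random((3, 3)) + 3 * np.eye(3)   # comfortably invertible
    X = rng.random((3, 3))
    eps = 1e-6

    fd = (np.linalg.det(A + eps * X) - np.linalg.det(A)) / eps
    jacobi = np.linalg.det(A) * np.trace(np.linalg.solve(A, X))
    print(np.allclose(fd, jacobi, rtol=1e-3))   # True, up to O(eps)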


A useful property in the case of 3 × 3 matrices is the following:

If A is written as A = \begin{bmatrix}\bar{a} & \bar{b} & \bar{c}\end{bmatrix}, where \bar{a}, \bar{b}, \bar{c} are column vectors, then the gradient of det(A) over one of the three vectors may be written as the cross product of the other two:

\nabla_\bar{a}\det(A) = \bar{b} \times \bar{c}
\nabla_\bar{b}\det(A) = \bar{c} \times \bar{a}
\nabla_\bar{c}\det(A) = \bar{a} \times \bar{b}
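This follows from det(A) = \bar{a} \cdot (\bar{b} \times \bar{c}), the scalar triple product, which the following sketch (assuming NumPy is available) confirms for the first case:

    import numpy as np

    rng = np.random.default_rng(5)
    a, b, c = rng.random(3), rng.random(3), rng.random(3)

    A = np.column_stack([a, b, c])
    # det(A) = a . (b x c), which is linear in a with coefficient b x c,
    # so the gradient of det(A) with respect to a is b x c.
    print(np.allclose(np.linalg.det(A), a @ np.cross(b, c)))   # True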

Abstract formulation

An n × n square matrix A may be thought of as the coordinate representation of a linear transformation of an n-dimensional vector space V. Given any linear transformation

A:V\to V\,

we can define the determinant of A as the determinant of any matrix representation of A. This is a well-defined notion (i.e. independent of a choice of basis) since the determinant is invariant under similarity transformations.

As one might expect, it is possible to define the determinant of a linear transformation in a coordinate-free manner. If V is an n-dimensional vector space, then one can construct its top exterior power ΛnV. This is a one-dimensional vector space whose elements are written

v_1 \wedge v_2 \wedge \cdots \wedge v_n

where each vi is a vector in V and the wedge product ∧ is antisymmetric (i.e., u ∧ u = 0). Any linear transformation A : V → V induces a linear transformation of ΛnV as follows:

v_1 \wedge v_2 \wedge \cdots \wedge v_n \mapsto Av_1 \wedge Av_2 \wedge \cdots \wedge Av_n.

Since ΛnV is one-dimensional this operation is just multiplication by some scalar that depends on A. This scalar is called the determinant of A. That is, we define det(A) by the equation

Av_1 \wedge Av_2 \wedge \cdots \wedge Av_n = (\det A)\,v_1 \wedge v_2 \wedge \cdots \wedge v_n.

One can check that this definition agrees with the coordinate-dependent definition given above.

Algorithmic implementation

  • The naive method of implementing an algorithm to compute the determinant is to use Laplace's formula for expansion by cofactors. This approach is extremely inefficient in general, however, as it is of order n! (n factorial) for an n×n matrix M.
  • An improvement to order n^3 can be achieved by using LU decomposition to write M = LU for triangular matrices L and U. Now, det M = det LU = det L det U, and since L and U are triangular, the determinant of each is simply the product of its diagonal elements (see the sketch after this list). Alternatively, one can perform the Cholesky decomposition, if possible, or the QR decomposition, and find the determinant in a similar fashion.
  • Since the definition of the determinant does not need divisions, a question arises: do fast algorithms exist that do not need divisions? This is especially interesting for matrices over rings. Indeed, algorithms with run time proportional to n^4 exist. An algorithm of Mahajan and Vinay, and of Berkowitz, is based on closed ordered walks (short: clows). It computes more products than the determinant definition requires, but some of these products cancel, and the sum of these products can be computed more efficiently. The final algorithm looks very much like an iterated product of triangular matrices.
  • What is not often discussed is the so-called "bit complexity" of the problem, i.e. how many bits of accuracy one needs to store for intermediate values. For example, using Gaussian elimination, one can reduce the matrix to upper triangular form, then multiply the main diagonal to get the determinant (this is essentially a special case of the LU decomposition above), but a quick calculation shows that the bit size of intermediate values could potentially become exponential. One could discuss when it is appropriate to round intermediate values, but an elegant way of calculating the determinant uses the Bareiss algorithm, an exact-division method based on Sylvester's identity, which gives a run time of order n^3 and bit complexity roughly the bit size of the original entries in the matrix times n.
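The LU approach from the second item, sketched with SciPy (assuming NumPy and SciPy are available):

    import numpy as np
    from scipy.linalg import lu

    rng = np.random.default_rng(6)
    A = rng.random((5, 5))

    P, L, U = lu(A)   # A = P L U, with L unit lower triangular
    # det(L) = 1, det(U) is the product of its diagonal, det(P) = +/-1.
    det = np.linalg.det(P) * np.prod(np.diag(U))
    print(np.allclose(det, np.linalg.det(A)))   # True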

History

Historically, determinants were considered before matrices. Originally, a determinant was defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the 3rd century BC Chinese math textbook The Nine Chapters on the Mathematical Art. In Europe, two-by-two determinants were considered by Cardano at the end of the 16th century and larger ones by Leibniz and, in Japan, by Seki about 100 years later. Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrent law was first announced by Bézout (1764).

It was Vandermonde (1771) who first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order. Lagrange was the first to apply determinants to questions of elimination theory; he proved many special cases of general identities.

Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word determinant (Laplace had used resultant), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.

The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy-Binet formula.) In this he used the word determinant in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality.

The next important figure was Jacobi (from 1827). He early used the functional determinant which Sylvester later called the Jacobian, and in his memoirs in Crelle for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work.

The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the text-books on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.
