Determinants
As used in mathematics, the word determinant refers to a number associated with a square matrix, that is, an array of numerical quantities arranged in, say, n rows and n columns. Matrices of this sort typically arise as a means of representing systems of n linear equations in n unknowns.
Suppose the system of equations is
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$
Then the matrix of detached coefficients

$$A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}$$
is said to be nonsingular if and only if its determinant is nonzero. The existence and uniqueness of a solution to the system of equations are determined by the nonsingularity of A. If A lacks this property, it is said to be singular, and when this is the case, the system might have no solution (nonexistence) or infinitely many solutions (nonuniqueness).
The determinant of an n × n matrix A is the number

$$\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det(A_{ij}),$$

where $A_{ij}$ denotes the submatrix obtained from A by deleting its ith row and jth column. (For a 1 × 1 matrix, the determinant is simply its single entry, which grounds the recursion.) This definition of the determinant uses what is called expansion by a row, in this case by row i. There is an analogous definition of det(A) in terms of expansion by a column, say j, which says

$$\det(A) = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} \det(A_{ij}).$$
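As a concrete illustration of the row-expansion formula, the following is a minimal Python sketch (the function name is illustrative, not from the source) that computes a determinant recursively by expansion along the first row:

```python
def det(A):
    """Determinant by Laplace (cofactor) expansion along the first row.

    A is a square matrix given as a list of lists. This runs in O(n!)
    time, so it is practical only for very small matrices.
    """
    n = len(A)
    if n == 1:  # base case: the determinant of a 1x1 matrix is its entry
        return A[0][0]
    total = 0
    for j in range(n):
        # Submatrix obtained by deleting row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # -2
```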
These formulas are associated with the name of the French mathematician Pierre-Simon Laplace (1749–1827). From either of them it is evident that the determinant of the identity matrix I is 1, and hence it is nonsingular.
The determinant of a square matrix A and that of its transpose $A^T$ are always equal. Moreover, the determinant of the product of two square matrices is the product of their determinants. In symbols, if A and B are two n × n matrices, then

$$\det(AB) = \det(A)\det(B).$$
From this and the fact that the determinant of the identity matrix I is 1, it follows that when A is nonsingular,

$$\det(A^{-1}) = \frac{1}{\det(A)}.$$
Thus, the determinant of a nonsingular matrix and that of its inverse are reciprocals of each other.
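Both identities are easy to check numerically. The following sketch uses NumPy (assumed available; not part of the source) to verify them for a random nonsingular matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True

# det(A^{-1}) = 1 / det(A), provided A is nonsingular
print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                 1 / np.linalg.det(A)))  # True
```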
As can readily be appreciated, the calculation of the determinant of a large matrix by means of a row or column expansion can entail a significant amount of work. Fortunately, there are matrices whose determinants are not difficult to compute. Among these are diagonal matrices (the identity matrix being an example) and, more generally, lower triangular matrices. The determinant of any such matrix is the product of its diagonal elements. (The same is true for all upper triangular matrices.) Finding a determinant is aided by procedures (such as Gaussian elimination) that transform the matrix to another whose structure permits relatively easy calculation of its determinant.
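The following sketch illustrates this idea: it reduces a copy of the matrix to upper triangular form by Gaussian elimination with partial pivoting and then multiplies the diagonal entries, flipping the sign once for each row interchange. (This is one standard approach, offered as an assumption-laden sketch rather than a prescription from the source.)

```python
def det_gauss(A):
    """Determinant via Gaussian elimination with partial pivoting, O(n^3)."""
    U = [row[:] for row in A]  # work on a copy
    n = len(U)
    sign = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest entry in column k to the pivot row
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if U[p][k] == 0.0:
            return 0.0  # a zero pivot column means the matrix is singular
        if p != k:
            U[k], U[p] = U[p], U[k]
            sign = -sign  # each row swap flips the sign of the determinant
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    # The determinant of a triangular matrix is the product of its diagonal
    prod = sign
    for k in range(n):
        prod *= U[k][k]
    return prod

print(det_gauss([[2.0, 1.0], [4.0, 3.0]]))  # 2.0
```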
Cramer’s Rule for solving the system Ax = b proceeds from the assumption that A is nonsingular. In that case, the system has a unique solution: $x = A^{-1}b$. Cramer’s Rule gives formulas for the components of this vector in terms of the data, specifically as ratios of determinants. Expressing these ratios requires notation for the matrix obtained from A and b by replacing the jth column of A with the vector b. Let this notation be $A_j(b)$. Then Cramer’s Rule says that for each j = 1, …, n,

$$x_j = \frac{\det(A_j(b))}{\det(A)}.$$
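A direct transcription of the rule into code, assuming NumPy and its determinant routine (a minimal sketch, not a recommended production solver):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b via Cramer's Rule: x_j = det(A_j(b)) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("A is singular; Cramer's Rule does not apply")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b  # replace the j-th column of A with b
        x[j] = np.linalg.det(Aj) / d
    return x

print(cramer([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8 1.4]
```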
As an illustration, consider a system Ax = b of three equations in three unknowns for which the determinant of A is 15. To use Cramer’s Rule in this case, one would form the matrices $A_1(b)$, $A_2(b)$, $A_3(b)$ and compute

$$\det(A_1(b)) = -10, \qquad \det(A_2(b)) = -55, \qquad \det(A_3(b)) = 5.$$
Cramer’s Rule then yields

$$x_1 = \frac{-10}{15} = -\frac{2}{3}, \qquad x_2 = \frac{-55}{15} = -\frac{11}{3}, \qquad x_3 = \frac{5}{15} = \frac{1}{3}.$$
Although Cramer’s Rule is useful in numerically solving small systems of equations (those consisting of two equations in two unknowns, or three equations in three unknowns), it is not recommended for solving larger systems due to the difficulty in computing determinants of order larger than 3. This caution does not apply to situations in which the calculation is entirely symbolic. An example of the latter sort can be found in P. A. Samuelson’s Foundations of Economic Analysis (Harvard University Press, Cambridge, 1963; see equation 7, p. 14).
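For purely symbolic work of the kind Samuelson’s example involves, a computer algebra system can carry out such expansions exactly. A small sketch using SymPy (assumed available; not mentioned in the source):

```python
from sympy import Matrix, symbols

a, b, c, d = symbols("a b c d")
A = Matrix([[a, b], [c, d]])
print(A.det())  # a*d - b*c
```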
The task of solving square systems of linear equations also arises in least-squares problems, which in turn arise in linear regression analysis. The square system is typically of the form $A^TAx = A^Tb$; these are called the normal equations, and the problem is to find x. The first question one faces is whether the matrix $A^TA$ is nonsingular. If it is not (that is, if $\det(A^TA) = 0$), then Cramer’s Rule is not applicable. If it is nonsingular, then in principle the solution is $x = (A^TA)^{-1}A^Tb$. When n, the number of variables $x_1, \ldots, x_n$, is quite small (and $\det(A^TA) \neq 0$), the use of Cramer’s Rule for solving the equations can be considered. But most practical problems of this nature are not small and need to be solved with computers. Since exact arithmetic is then lost, several numerical issues come to the fore. On computational-efficiency grounds alone, the extensive use of determinants is not advisable. Another consideration, the condition number of $A^TA$, enters the picture here. As stated by Gilbert Strang, “Forming $A^TA$ can turn a healthy problem into a sick one, and it is much better (except for very small problems) to use either Gram-Schmidt or the singular value decomposition” (Strang 1976, p. 272).
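The following sketch contrasts the two approaches Strang alludes to: solving the normal equations directly versus using an orthogonalization-based routine (here NumPy’s lstsq, which relies on the singular value decomposition). The data are illustrative, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))  # tall data matrix for a regression
b = rng.standard_normal(100)

# Normal equations: x = (A^T A)^{-1} A^T b (squares the condition number)
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# SVD-based least squares, numerically preferable for ill-conditioned A
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))   # True for this well-conditioned A
# cond(A^T A) is roughly cond(A) squared, which is Strang's point
print(np.linalg.cond(A.T @ A), np.linalg.cond(A))
```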
SEE ALSO Inverse Matrix; Matrix Algebra; Simultaneous Equation Bias
BIBLIOGRAPHY
Aitken, A. C. 1964. Determinants and Matrices. Edinburgh: Oliver & Boyd.
Marcus, Marvin, and Henryk Minc. 1964. A Survey of Matrix Theory and Matrix Inequalities. Boston: Allyn & Bacon.
Strang, Gilbert. 1976. Linear Algebra and Its Applications. New York: Academic Press.
Richard W. Cottle