If \(A\) is an \(n\times n\) square matrix and if \(\vb{x}\in\RR^n\text{,}\) then \(\vb{x}\) and \(A\vb{x}\) have the same size. In other words, both \(\vb{x}\) and \(A\vb{x}\) live in the same vector space \(\RR^n\text{.}\) This makes some geometry involving \(A\) slightly easier. In particular, \(\col{A}\) must be a subspace of \(\RR^n\text{.}\)
Since \(\col{A}\) is a subspace of \(\RR^n\text{,}\) the linear system \(A\vb{x} = \vb{b}\) is consistent for every \(\vb{b}\in\RR^n\) if and only if \(\col{A} = \RR^n\text{.}\) Indeed, if this were not the case, then there would exist some vector \(\vb{b}\in\RR^n\) with \(\vb{b}\notin\col{A}\text{,}\) and for that \(\vb{b}\) the system \(A\vb{x}=\vb{b}\) would be inconsistent. Square matrices \(A\) for which \(\col{A}=\RR^n\) are therefore particularly well-behaved and useful, so we'd like to develop conditions that detect when a square matrix \(A\) satisfies this property.
One such condition is the following: \(\col{A}=\RR^n\) if and only if \(\dim\col{A}=n\text{,}\) which happens if and only if \(\rank{A}=n\text{.}\) Therefore \(A\vb{x}=\vb{b}\) is consistent for every \(\vb{b}\) if and only if \(\rank{A}=n\text{.}\) Another useful condition is geometric: \(\col{A}=\RR^n\) if and only if the columns of \(A\) determine an \(n\)-dimensional figure in \(\RR^n\text{.}\) To get an idea of why this should be true, consider a \(2\times2\) matrix \(A\) whose columns determine a parallelogram (as opposed to a line) in \(\RR^2\text{.}\)
Since \(\col{A}\) is the set of all linear combinations of the columns of \(A\text{,}\) and geometrically this is the set of all points in \(\RR^2\) that can be reached by scaling the two sides of the parallelogram and adding the results, it follows that \(\col{A} = \RR^2\text{.}\)
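The rank condition above is also easy to check numerically. A minimal sketch in Octave, using an arbitrarily chosen matrix:
% rank is built into Octave; the matrix A is an illustrative choice
A = [3 1; 1 2];
rank(A)   % returns 2, so col(A) = R^2 and Ax = b is consistent for every b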
The determinant makes this observation precise. Given an \(n\times n\) square matrix \(A\text{,}\) the determinant of \(A\) represents the (signed) volume of the parallelepiped determined by the columns of \(A\text{.}\) If this volume is nonzero then this means that the parallelepiped must be an \(n\)-dimensional figure in \(\RR^n\) and so the column space of \(A\) would be all of \(\RR^n\text{.}\) In the \(2\times2\) case it's not too difficult to compute the determinant. If
\begin{equation*}
A = \mqty[a & b \\ c & d],
\end{equation*}
then \(\det(A) = \mqty|a & b \\ c & d| = ad - bc\text{.}\) Note that \(ad-bc\) is exactly the signed area of the parallelogram determined by the columns of \(A\text{.}\) In three dimensions and higher the formula becomes more complicated and must be defined recursively.
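Before giving the recursive definition, here is the \(2\times2\) formula in action:
\begin{equation*}
\mqty|3 & 1 \\ 1 & 2| = 3\cdot 2 - 1\cdot 1 = 5,
\end{equation*}
which is the area of the parallelogram with sides \(\smqty[3 \\ 1]\) and \(\smqty[1 \\ 2]\text{.}\)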
Definition 1.6.1. Determinant of a Matrix.
Let \(A = \smqty[a_{ij}]\) be an \(n\times n\) matrix. Let \(A_{ij}\) denote the sub-matrix of \(A\) obtained by removing the \(i^\th\) row and \(j^\th\) column of \(A\) (the same row and column containing the entry \(a_{ij}\)). Then the determinant of \(A\) is defined recursively by the formula
\begin{equation*}
\det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A_{1j}),
\end{equation*}
where the determinant of a \(1\times1\) matrix \(\smqty[a_{11}]\) is defined to be \(a_{11}\text{.}\)
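To illustrate the definition, expand along the first row of the (arbitrarily chosen) matrix
\begin{equation*}
A = \mqty[2 & 0 & 1 \\ 3 & 4 & 0 \\ 1 & 2 & 5]:
\end{equation*}
\begin{equation*}
\det(A) = 2\mqty|4 & 0 \\ 2 & 5| - 0\mqty|3 & 0 \\ 1 & 5| + 1\mqty|3 & 4 \\ 1 & 2| = 2(20 - 0) - 0 + (6 - 4) = 42.
\end{equation*}
Such computations can be confirmed numerically with Octave's built-in det function:
% confirm the cofactor expansion above
A = [2 0 1; 3 4 0; 1 2 5];
det(A)   % returns 42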
When computing determinants by hand, it's often useful to expand along the row or column containing the most zeros instead of just the first row. The next result says this is permissible, as long as we're careful about signs.
Theorem 1.6.3. Cofactor Expansion.
Let \(A=[a_{ij}]\) be an \(n\times n\) matrix and define \(A_{ij}\) as in Definition 1.6.1. Then, for any fixed row index \(i\) and any fixed column index \(j\text{,}\)
\begin{equation*}
\det(A) = \sum_{k=1}^{n} (-1)^{i+k} a_{ik} \det(A_{ik}) = \sum_{k=1}^{n} (-1)^{k+j} a_{kj} \det(A_{kj}).
\end{equation*}
The first sum is the cofactor expansion along the \(i^\th\) row, and the second is the cofactor expansion along the \(j^\th\) column.
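For the matrix \(A\) in the example above, expanding along the second column (which contains a zero) gives the same answer with less work:
\begin{equation*}
\det(A) = -0\mqty|3 & 0 \\ 1 & 5| + 4\mqty|2 & 1 \\ 1 & 5| - 2\mqty|2 & 1 \\ 3 & 0| = 4(10 - 1) - 2(0 - 3) = 42.
\end{equation*}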
Computing determinants becomes very simple when working with triangular matrices.
Definition 1.6.5. Triangular Matrices.
A matrix \(A\) is lower (respectively, upper) triangular if all of the entries above (respectively, below) the main diagonal are \(0\text{.}\) A matrix is triangular if it is lower triangular or upper triangular.
Example 1.6.6. Computing the Determinant of a Triangular Matrix.
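For instance, repeatedly expanding along the first column of the (arbitrarily chosen) upper triangular matrix
\begin{equation*}
A = \mqty[3 & 1 & 4 \\ 0 & 2 & 7 \\ 0 & 0 & 5]
\end{equation*}
gives
\begin{equation*}
\det(A) = 3\mqty|2 & 7 \\ 0 & 5| = 3(2\cdot5 - 7\cdot0) = 3\cdot2\cdot5 = 30.
\end{equation*}
The zeros below the main diagonal eliminate every term except the product of the diagonal entries.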
Theorem 1.6.7. Determinant of a Triangular Matrix.
If \(A\) is an \(n\times n\) triangular matrix, then \(\det(A)\) is the product of the entries on the main diagonal of \(A\text{.}\)
Theorem 1.6.7 leads to another approach for finding determinants via row reduction. If we have a square matrix \(A\) and we can reduce it to echelon form, then the echelon form is upper triangular, so its determinant is easy to find by Theorem 1.6.7. If we can then relate this determinant back to \(\det(A)\text{,}\) then we can find \(\det(A)\) using the echelon form instead. It turns out this can be done as follows.
Theorem 1.6.8. Row Operations and the Determinant.
Let \(A\) and \(B\) denote square matrices of the same size and suppose that \(B\) is obtained from \(A\) by performing a single row operation.
If the row operation was row replacement, then \(\det(A) = \det(B)\text{.}\)
If the row operation was row scaling by a factor of \(k\text{,}\) then \(\det(B) = k\det(A)\text{.}\)
If the row operation was row interchange, then \(\det(B) = -\det(A)\text{.}\)
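Each of these facts can be tested numerically. A minimal sketch in Octave, applying one row operation of each type to an arbitrarily chosen matrix:
% A is an illustrative choice; compare each result against det(A)
A = [2 1 3; 0 4 1; 5 2 2];
B = A([2 1 3], :);                   % interchange rows 1 and 2
det(B)                               % equals -det(A)
C = A; C(2,:) = 7*A(2,:);            % scale row 2 by k = 7
det(C)                               % equals 7*det(A)
D = A; D(3,:) = A(3,:) + 2*A(1,:);   % replace row 3 with row 3 + 2*(row 1)
det(D)                               % equals det(A)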
Two other useful results about determinants are given below.
Theorem 1.6.9. Multiplicative Property.
Let \(A\) and \(B\) denote square matrices of the same size. Then \(\det(AB) = \det(A)\det(B)\text{.}\)
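As a quick numerical check in Octave (with arbitrarily chosen matrices):
A = [1 2; 3 4]; B = [0 1; 5 2];
det(A*B)        % returns 10
det(A)*det(B)   % also 10, since det(A) = -2 and det(B) = -5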
Theorem 1.6.10. Determinants and the Transpose.
Let \(A\) be a square matrix. Then \(\det(A) = \det(A^T)\text{.}\)
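Again, a quick Octave check (A chosen arbitrarily; for a real matrix A, the expression A' is its transpose):
A = [1 2; 3 4];
det(A)    % returns -2
det(A')   % also -2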