
Section 2.2 Eigenvalue Problems

Many important problems in mathematics and its applications reduce to statements of the form \(A\vb{x} = \lambda\vb{x}\text{.}\) Naturally, eigenvalues and eigenvectors are useful tools for tackling these problems. As a first example, we'll again consider a Markov process.

Example 2.2.1. Long-term Behavior of Markov Processes.

Recall that a Markov process describes the evolution of one state \(\vb{x}_{n}\) into a future state \(\vb{x}_{n+1}\) using the matrix equation \(A\vb{x}_n = \vb{x}_{n+1}\text{.}\) In such a process, the matrix \(A\) is a square matrix with non-negative entries whose columns sum to \(1\text{.}\) Starting from an initial state \(\vb{x}_0\text{,}\) we are often interested in whether the long-term evolution approaches a specific state vector \(\vb{x}\text{.}\) In symbols, we want to determine if \(A^n\vb{x}_0 \to \vb{x}\) as \(n\to\infty\text{.}\)
For such a vector, we should have \(A\vb{x} = A\lim_{n\to\infty}A^n\vb{x}_0 = \lim_{n\to\infty}A^{n+1}\vb{x}_0 = \vb{x}\text{.}\) In other words, \(\vb{x}\) is an eigenvector of \(A\) with eigenvalue \(1\text{.}\) We call this vector a steady-state vector of the Markov process.
Now let's suppose we model the weather with a Markov process with transition matrix
\begin{equation*} A = \mqty[0.33 & 0.25 & 0.40 \\ 0.52 & 0.42 & 0.40 \\ 0.15 & 0.33 & 0.20], \end{equation*}
corresponding to states \(S\text{,}\) \(C\text{,}\) and \(R\) (i.e., "sunny", "cloudy", and "rainy"), and we use an initial state vector \(\vb{x}_0 = \smqty[0 & 1 & 0]^T\text{.}\) To estimate the long-term probability of a cloudy day, we can compute \(A^n\vb{x}_0\) for large values of \(n\text{.}\) See the Octave cell below. If we do so, it appears that the long-term probability of a cloudy day settles in around \(44.6\%\text{.}\)
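The iteration just described can be sketched as follows. The text's cells use Octave; this is an equivalent NumPy version (NumPy being our substitution, not the text's tool):

```python
import numpy as np

# Transition matrix from the example; each column sums to 1.
A = np.array([[0.33, 0.25, 0.40],
              [0.52, 0.42, 0.40],
              [0.15, 0.33, 0.20]])

# Initial state: certainly cloudy.
x = np.array([0.0, 1.0, 0.0])

# Compute A^n x0 by repeated multiplication.
for _ in range(50):
    x = A @ x

print(x)  # approximately [0.3113, 0.4463, 0.2425]
```

The middle entry is the long-term probability of a cloudy day, about \(44.6\%\text{.}\)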
We can make this analysis more precise by looking for the steady-state vector using eig. If we take this approach, then we see that \(A\) has \(1\) as an eigenvalue with corresponding unit eigenvector \(\smqty[0.5225 & 0.7492 & 0.4070]^{T}\text{.}\) This is not a state vector, since its entries do not sum to \(1\) and therefore can't be probabilities. However, we can convert it into a state vector by dividing each entry by the sum \(0.5225+0.7492+0.4070 = 1.6787\text{,}\) which in turn gives the eigenvector
\begin{equation*} \vb{x} = \mqty[0.3113 \\ 0.4463 \\ 0.2425] \end{equation*}
confirming our earlier estimate. We can also read off the long-term probabilities of sunny and rainy days from this steady-state vector.
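The eig computation can be sketched in NumPy as well (again a stand-in for the text's Octave cell). Note that the returned eigenvectors have unit norm and arbitrary overall sign; the final rescaling fixes both issues at once:

```python
import numpy as np

A = np.array([[0.33, 0.25, 0.40],
              [0.52, 0.42, 0.40],
              [0.15, 0.33, 0.20]])

# Eigen-decomposition: columns of V are unit-norm eigenvectors.
evals, V = np.linalg.eig(A)

# Locate the eigenvalue closest to 1 and take its eigenvector.
k = np.argmin(np.abs(evals - 1.0))
v = np.real(V[:, k])

# Rescale so the entries sum to 1, giving the steady-state vector.
steady = v / v.sum()
print(steady)  # approximately [0.3113, 0.4463, 0.2425]
```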

Example 2.2.2. Singular Values and the Condition Number.

In numerical linear algebra, the condition number of an invertible matrix \(A\) gives an estimate of how solutions of \(A\vb{x} = \vb{b}\) can change in the presence of error. More precisely, the condition number measures the response of the solution \(\vb{x}\) when \(\vb{b}\) is perturbed by an error term. If the condition number is small, then a small error in \(\vb{b}\) produces only a small change in \(\vb{x}\text{.}\) If the condition number is large, however, small changes in \(\vb{b}\) can have significant effects on \(\vb{x}\text{.}\)
The condition number itself is denoted \(\kappa(A)\text{.}\) If we let \(A\vb{x} = \vb{b}\) denote the unperturbed system and \(A\hat{\vb{x}} = \hat{\vb{b}}\) denote the corresponding perturbed system, then
\begin{equation*} \frac{\norm{\hat{\vb{x}} - \vb{x}}}{\norm{\vb{x}}} \leq \kappa(A)\,\frac{\norm{\hat{\vb{b}} - \vb{b}}}{\norm{\vb{b}}}\text{;} \end{equation*}
that is, the relative error in \(\hat{\vb{x}}\) is at most \(\kappa(A)\) times the relative error in \(\hat{\vb{b}}\text{.}\)
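To see this bound in action, here is a small NumPy experiment; the \(2\times 2\) matrix is our own illustrative choice of an ill-conditioned system, not one from the text:

```python
import numpy as np

# An ill-conditioned system (illustrative example of our own).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)            # solution is (essentially) [1, 1]

# Perturb b slightly and solve again.
b_hat = b + np.array([0.0, 1e-4])
x_hat = np.linalg.solve(A, b_hat)    # solution jumps to (essentially) [0, 2]

rel_b = np.linalg.norm(b_hat - b) / np.linalg.norm(b)
rel_x = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
kappa = np.linalg.cond(A, 2)

# rel_x <= kappa * rel_b holds, and rel_x is far larger than rel_b.
print(rel_b, rel_x, kappa)
```

A relative perturbation of roughly \(10^{-4}\) in \(\vb{b}\) changes the solution by a relative factor of about \(1\text{,}\) consistent with the large condition number.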
The condition number \(\kappa(A)\) itself can be calculated from the eigenvalues of \(A^{T}A\text{:}\) the square roots \(\sqrt{\lambda(A^{T}A)}\) are the singular values of \(A\) (hence the title of this example), and \(\kappa(A)\) is the ratio of the largest to the smallest:
\begin{equation*} \kappa(A) = \frac{\sqrt{\lambda_{\text{max}}(A^{T}A)}}{\sqrt{\lambda_{\text{min}}(A^{T}A)}}. \end{equation*}
Using this formula, find \(\kappa(A)\) for
\begin{equation*} A = \mqty[1 & 3 & -2 \\ 0 & 2 & 4 \\ -3 & 2 & 3] \end{equation*}
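One way to carry out this computation is the following NumPy sketch (NumPy and the cross-check against the built-in 2-norm condition number are our additions, not part of the exercise):

```python
import numpy as np

A = np.array([[ 1.0, 3.0, -2.0],
              [ 0.0, 2.0,  4.0],
              [-3.0, 2.0,  3.0]])

# A^T A is symmetric, so its eigenvalues are real and non-negative;
# eigvalsh returns them in ascending order.
evals = np.linalg.eigvalsh(A.T @ A)

# kappa(A) = sqrt(lambda_max) / sqrt(lambda_min).
kappa = np.sqrt(evals[-1]) / np.sqrt(evals[0])
print(kappa)

# Cross-check: np.linalg.cond(A, 2) computes the same 2-norm
# condition number via the singular values of A.
print(np.linalg.cond(A, 2))
```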