# MATH 2270
# PROJECT 5. Eigenvalues and eigenvectors.
# November 22, 1999
#
# A Maple text version of this project may be found at the web site
# http://www.math.utah.edu/~kapovich/teaching2.html/
# Go to that page and click on the "5-th maple assignment".
#
# In this project we will find eigenvalues and eigenvectors algebraically
# and illustrate them geometrically. Recall that the (real) eigenvalues of
# a square matrix A are the real roots of the characteristic polynomial
# det(A - lambda*I), where I is the unit matrix. The eigenvectors
# corresponding to an eigenvalue lambda are the nonzero vectors x such
# that Ax = lambda*x. For each eigenvalue we will find a basis of the
# corresponding eigenvectors. Here is how to find eigenvalues using Maple:
> with(linalg): with(plots):
> A:= matrix([[2,-1], [-1,2]]);

                                  [ 2    -1]
                             A := [        ]
                                  [-1     2]

> p:= charpoly(A,t); # find the characteristic polynomial as a function of t

                                   2
                             p := t  - 4 t + 3

> r:= solve(p); # find the roots of the characteristic polynomial

                                 r := 1, 3

# Thus the matrix A has two distinct eigenvalues, 1 and 3. The individual
# roots are obtained as
> r[1]; r[2];

                                     1

                                     3

# To find a basis of the eigenvectors corresponding to r[1] do:
> N:= nullspace(evalm(A - r[1]*diag(1,1)));

                               N := {[1, 1]}

# We see that the basis in N consists of the single vector [1,1]. If there
# are several vectors in the basis you can produce them via the command:
> v1:= N[1];

                               v1 := [1, 1]

# Here we got only one vector, but if there are several of them use
# N[2], N[3], ...
> M:= nullspace(evalm(A - r[2]*diag(1,1)));

                               M := {[1, -1]}

> v2:= M[1];

                               v2 := [1, -1]

# Thus we got two eigenvectors for the matrix A. They form a basis of R^2.
# Note that the matrix A is symmetric, so the eigenvectors v1, v2 are
# mutually orthogonal. To find an orthonormal basis of eigenvectors we
# divide each of the vectors v1, v2 by its magnitude (do this only in the
# 1-st problem).
#
# Problem 1. Find the two eigenvalues and an orthonormal basis of two
# eigenvectors w1, w2 of the matrix:
> A:= matrix([[3,-1],[-1,2]]);

                                  [ 3    -1]
                             A := [        ]
                                  [-1     2]

# Problem 2. Use the procedures developed in our 2-nd Maple assignment to
# draw a plot with the straight lines L1, L2 through the origin in the
# directions of w1 and w2.
#
# Problem 3. Ellipses. It is a general fact that an invertible 2-by-2
# matrix maps circles to ellipses. Below we illustrate this using the
# following matrix:
> A:= matrix([[1,-1],[-1,2]]);

                                  [ 1    -1]
                             A := [        ]
                                  [-1     2]

# But first we have to learn how to draw circles.
> ngon:= n-> [[ cos(2*Pi*'i'/n), sin(2*Pi*'i'/n)] $ 'i'=1..n];

                            Pi 'i'           Pi 'i'
      ngon := n -> [[cos(2 --------), sin(2 --------)] $ ('i' = 1 .. n)]
                               n                n

# This command makes a regular n-gon centered at the origin whose vertices
# lie on the unit circle. If n is large, then the polygon ``looks like''
# the unit circle. In the picture below we get the polygon which has 60
# vertices.
# (If your Maple runs out of memory you can use a 15-gon instead of the
# 60-gon.)
> disk:= ngon(60): # make sure to use a colon!
> polygonplot(disk, axes=framed, scaling=constrained);
#
> f:= v-> evalm(A &* v);

                          f := v -> evalm(A `&*` v)

# Now let's draw the image of this circle under the transformation f:
> F1:= map(f, disk): P1:= polygonplot(F1, color=blue):
> display(P1);
#
# Assignment a): Display on the same plot (similarly to the 3-rd Maple
# assignment) the images of "disk" under the mapping f and its iterations
# f^2 and f^3.
#
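# A minimal sketch of one possible approach to Assignment a) is given
# below. It simply reuses f, disk, F1 and P1 defined above and applies f
# twice more; the colors and the use of a set in display are our own
# choices, not part of the assignment, so adapt them as needed.
> F2:= map(f, F1): F3:= map(f, F2):            # images of "disk" under f^2 and f^3
> P2:= polygonplot(F2, color=green): P3:= polygonplot(F3, color=red):
> display({P1, P2, P3}, scaling=constrained);  # overlay the three images
#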
# From this picture you will see that the iterations of f keep stretching
# the ellipse in a certain direction. Let's see what happens after the
# 10-th iteration. The following sequence of commands produces the images
# of the "disk" under the iterations from 1 to 10.
> iter:= proc(f,s,n)
>   local d,i;
>   d:= array(0..n);
>   d[0]:= s;
>   for i from 1 to n do
>     d[i]:= map(f, d[i-1]);
>   od;
>   convert(d,list);
> end;
> film:= proc( d )
>   local i, j, n, F;
>   n:= nops(d);
>   F:= array(1..n);
>   for i from 1 to n do
>     F[i]:= polygonplot(d[i]);
>   od;
>   convert(F, list);
> end:
> sequence:= iter(f, disk, 10):
> FF:= film(sequence):
> display(FF, insequence=true);
#
# Click on this picture to put it in a "box". Then the "player" bar will
# appear instead of the "Normal", ... commands in the 3-rd row from the
# top of the Maple screen. The arrows -> and <- indicate the direction in
# which you can play this movie (backward or forward). Put it in "forward"
# and then click several times on the button ->| to advance the film one
# frame at a time. You will see that after the 10-th iteration the ellipse
# becomes indistinguishable from a line.
#
# Assignment b):
# Find the eigenvectors of the matrix A and compare their slopes with the
# slope of the "line" which you see on the picture. What is your
# conclusion?
#
# Problem 4. Find all eigenvalues and a basis of the space of eigenvectors
# for the matrix:
> A:= matrix([[2,0,2],[-1,2,1],[0,0,3]]);

                                [ 2    0    2]
                                [            ]
                           A := [-1    2    1]
                                [            ]
                                [ 0    0    3]

# Try using the command
> eigenvectors(A);
# to solve this problem. Did you get a basis for the 3-dimensional space?
#
# Diagonalization. Recall that a matrix A is said to be diagonalizable if
# there is a matrix P and a diagonal matrix D so that A = PDP^{-1}. To
# diagonalize the matrix A means to find the matrices D, P, P^{-1}. If an
# n-by-n matrix A is such that R^n has a basis of eigenvectors of A, then
# A is diagonalizable; the diagonal entries of D are the eigenvalues and
# the columns of P are the corresponding eigenvectors. It is pretty simple
# if all the eigenvalues of A are distinct. However it might happen that
# some roots of the characteristic polynomial are multiple roots. For
# instance, the polynomial p(t) = (1-t)(3-t)(3-t) has the root t=1 of
# multiplicity 1 and the root t=3 of multiplicity 2. The Maple command
# eigenvectors(A) will tell you what the eigenvalues are and what their
# multiplicities are. If a root (say 3) has double multiplicity, and it
# has two linearly independent eigenvectors, then you put the root t=3 on
# the diagonal twice.
#
# Example 1:
> A:= matrix([[3,0,0],[0,3,1],[0,0,1]]);

                                [3    0    0]
                                [           ]
                           A := [0    3    1]
                                [           ]
                                [0    0    1]

> eigenvectors(A);

              [3, 2, {[1, 0, 0], [0, 1, 0]}], [1, 1, {[0, 1, -2]}]

# Thus the eigenvalue 3 has double multiplicity and the pair of
# eigenvectors corresponding to 3 is: (1,0,0), (0,1,0). We will put them
# both as columns of the matrix P (the third eigenvector (0,1,-2) will be
# the last column). The diagonalization of A is given by the following
# data:
> Diag:= matrix([[3,0,0],[0,3,0],[0,0,1]]); # the diagonal matrix

                                [3    0    0]
                                [           ]
                        Diag := [0    3    0]
                                [           ]
                                [0    0    1]

> P:= transpose([[1, 0, 0],[0, 1, 0],[0,1,-2]]);

                                [1    0     0]
                                [            ]
                           P := [0    1     1]
                                [            ]
                                [0    0    -2]

> inverseP:= inverse(P);

                                   [1    0      0 ]
                                   [              ]
                       inverseP := [0    1     1/2]
                                   [              ]
                                   [0    0    -1/2]

#
# Problem 5. Find all eigenvalues and a basis of eigenvectors for the
# matrix:
> B:= matrix([[2,2,1],[-3,-5,-3],[6,12,7]]);

                               [ 2     2     1]
                               [              ]
                          B := [-3    -5    -3]
                               [              ]
                               [ 6    12     7]

# In this case you will get 3 linearly independent eigenvectors. Find the
# diagonalization data of the matrix B, that is, find D, P, P^{-1}. Verify
# algebraically (using Maple!) that B = PDP^{-1}.
#
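# As a hint for the verification, here is a minimal sketch of how such a
# check might look for the matrices Diag, P, inverseP of Example 1 above;
# for Problem 5 substitute your own diagonalization data for B. (The
# linalg command equal, if your version provides it, compares two
# matrices entrywise and returns true or false.)
> evalm(P &* Diag &* inverseP);           # should reproduce the matrix A of Example 1
> equal(A, evalm(P &* Diag &* inverseP)); # returns true if the product equals A
#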
# Problem 6. Find the eigenvalues, bases of the corresponding eigenvectors,
# and the dimensions of the spaces of eigenvectors corresponding to each
# eigenvalue of the matrix U below. Finally, determine the matrices (if
# possible!) that diagonalize U. In the case when this is impossible,
# explain why.
> U:= matrix([[1,1,-1,2],[0,1,-1,1],[0,0,1,1],[0,0,0,1]]);

                              [1    1    -1    2]
                              [                 ]
                              [0    1    -1    1]
                         U := [                 ]
                              [0    0     1    1]
                              [                 ]
                              [0    0     0    1]

#
# Functions of diagonalizable matrices.
#
# Suppose that A is diagonalizable, A = PDP^{-1}. Recall that
# A^n = P D^n P^{-1}, as we saw in class. Similarly,
# A + A^2 + 2A^3 = P (D + D^2 + 2 D^3) P^{-1}.
# In general, if q(t) is any polynomial, say
> q(t) = c_0 + c_1*t + c_2*t^2 + c_3*t^3;

                                               2         3
                    q(t) = c_0 + c_1 t + c_2 t  + c_3 t

# then define
> q(A) = c_0*I + c_1*A + c_2*A^2 + c_3*A^3;

                                               2         3
                    q(A) = I c_0 + c_1 A + c_2 A  + c_3 A

# where I is the unit matrix (of the same shape as A). Then
# q(A) = P q(D) P^{-1}. The polynomial q(D) is easy to compute. If
# D = diag(d_1, d_2, d_3, ..., d_n) then q(D) is the diagonal matrix
# q(D) = diag(q(d_1), q(d_2), q(d_3), ..., q(d_n)).
#
# Problem 7. Take the polynomial q(t) = -1 + 2*t - 3*t^2 + 0.5*t^3. Using
# diagonalization compute the polynomial q(B) for the matrix B from
# Problem 5. Verify (using Maple!) that q(B) = P q(D) P^{-1} by computing
# q(B) directly.
#
# Similarly to the computation of polynomials of matrices we can compute
# other functions of diagonalizable matrices: if f(t) is a function and
# A = PDP^{-1}, then f(A) := P f(D) P^{-1}, where
# f(D) = diag(f(d_1), f(d_2), f(d_3), ..., f(d_n)).
# For instance, we could compute the exponential function exp(A),
# trigonometric functions like sin(A), cos(A), or the square root sqrt(A),
# which is a matrix B such that B^2 = A. Here we use the positive branch
# of the square root.
#
# Example 2. Consider the matrix A
> A:= matrix([[2,0,0],[0,2,1],[0,0,1]]);

                                [2    0    0]
                                [           ]
                           A := [0    2    1]
                                [           ]
                                [0    0    1]

> Diag:= matrix([[2,0,0],[0,2,0],[0,0,1]]);

                                [2    0    0]
                                [           ]
                        Diag := [0    2    0]
                                [           ]
                                [0    0    1]

> P:= transpose([[0, 1, 0],[1, 0, 0],[0,-1,1]]);

                                [0    1     0]
                                [            ]
                           P := [1    0    -1]
                                [            ]
                                [0    0     1]

> inverseP:= inverse(P);

                                   [0    1    1]
                                   [           ]
                       inverseP := [1    0    0]
                                   [           ]
                                   [0    0    1]

# Let's compute sqrt(A), the square root of A:
> sqrtD:= matrix([[sqrt(2),0,0],[0,sqrt(2),0],[0,0,1]]);

                           [sqrt(2)       0       0]
                           [                       ]
                  sqrtD := [   0       sqrt(2)    0]
                           [                       ]
                           [   0          0       1]

> B:= evalm(P &* sqrtD &* inverseP);

                      [sqrt(2)       0           0      ]
                      [                                  ]
                 B := [   0       sqrt(2)    sqrt(2) - 1 ]
                      [                                  ]
                      [   0          0           1       ]

# By computing B^2 let's check that B is indeed a square root of A:
> evalf(evalm(B^2));

                           [2.    0         0      ]
                           [                        ]
                           [0     2.    .9999999989 ]
                           [                        ]
                           [0     0         1.      ]

# This is, up to a small numerical error, the matrix A.
#
# Problem 8. Compute the exponential function exp(A) of the matrix
> A:= matrix([[3.0,4.0],[1.0,2.0]]);

                                 [3.0    4.0]
                            A := [          ]
                                 [1.0    2.0]
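#
# As a starting point for Problem 8, here is a minimal sketch of the
# recipe exp(A) = P exp(D) P^{-1}, written out for the matrices Diag, P,
# inverseP of Example 2 above (so it computes the exponential of the
# Example 2 matrix, not of the 2-by-2 matrix A just entered). The name
# expD is ours, chosen by analogy with sqrtD; for Problem 8 you will need
# the eigenvalue data of the new matrix A. If your version of linalg
# provides the exponential command, it can serve as a cross-check.
> expD:= matrix([[exp(2),0,0],[0,exp(2),0],[0,0,exp(1)]]); # exp applied to the diagonal entries
> evalm(P &* expD &* inverseP);                            # exp of the matrix from Example 2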