LINEAR ALGEBRA
(Math 405 Spring 2020)

Class Meetings:
MWF 12-12:50 (CHEM 1224)


Office number:
MTH 2102

Phone number:
301-405-5156

Web Address:
www.math.umd.edu/~millson

Office Hours:
Mondays and Wednesdays 1-1:50 (in MTH 2102)

Email:
millson@umd.edu

Please use this address to contact me. Messages sent through Canvas may not reach me, and I cannot
reply to them. Math questions are best answered in person, during office hours or right after class, while
email is the preferred way of handling administrative matters.

I will communicate with the class by email. You are expected to keep a current email address on file.
You can update your email address at http://www.testudo.umd.edu/apps/saddr/

Text:
Linear Algebra, An Introductory Approach by Charles Curtis, ISBN 0-387-90992-3, 3-540-90992-3.
This is published by Springer in the Undergraduate Texts in Mathematics series.
Note this is different from an older book by the same author, with the same title, published by Allyn and Bacon.

Course Lectures:
All my course lectures are available on my class web page. The address is above; alternatively,
put "John Millson" into the Google search box, then click "TEACHING", then "Math405".

Prerequisites
Prerequisite: 1 course with a minimum grade of C- from (MATH240, MATH461, MATH341);
and minimum grade of C- in MATH310.

I will assume all students know how to multiply matrices and know the theory of solving linear equations
by row reduction of matrices. I will cover this second topic (Chapter 2, Section 6) very quickly in the second week
and will assign three problems on it in the second homework assignment.

The Final Grade
Homework = 20%; two in-class exams (15% for Exam 1, 25% for Exam 2); final exam = 40%.
Students who get less than 50% total will receive an F and students who get 50% or more will receive at worst a C-.

The two in-class exams will be Friday, February 21 and Friday, April 3.
The final exam will be Tuesday, May 19, 8:00-10:00 am.

After each in-class exam, students have one week from when the exam is returned to appeal the grading.
Appeals for the final exam must be made in writing.

On exams students must write out the following pledge by hand and sign it:
"I pledge on my honor that I have not given or received any unauthorized assistance on this examination".

Makeup Policy
Makeups for in-class exams will only be given in the case of a documented absence due to illness,
religious observances, participation in a University activity at the request of University authorities or other
compelling circumstances.

ADS
Students who require special examination conditions must register with the Office of Accessibility and
Disability Services (ADS) in Shoemaker Hall. Documentation must be provided to the instructor.

Proper forms must be filled in and provided to the instructor before every exam. Students are responsible
for notifying the instructor within the first two weeks of the semester of any projected absences.
This is especially important for the final examination.

HOMEWORK

Homework for the week will be assigned during the week and will be due the following Monday.

No late homework will be accepted.

The scores on homework assignments missed due to one of the above reasons
(illness, religious observances, participation in a University activity at the request of University authorities or
other compelling circumstances) will be replaced by the average of your other homework grades.

Here is your first homework assignment.
It will be due at the beginning of class on Feb. 3.

Revised HW1

Section 4: 4 (a, b, c, and e only), 5, 6, 7, 8

Extra Problem
Prove that the set {1, x, x^2, ..., x^n, ...} is a basis for the vector space
of all polynomials in one variable x.

COURSE DESCRIPTION

We will cover the following topics in the order given. We will skip Chapter 1 except for the definition of a field.
  1. Abstract vector spaces, bases and dimension (Chapter 2).

    Axioms and properties of an abstract vector space.
    Spanning sets, linearly independent sets and bases.
    The theorem (with proof) that every vector space has a basis.
    The coordinates of a vector relative to a basis.

  2. Linear transformations and matrices (Chapter 3).

    How to write out the matrix of a linear transformation if you are given a basis of the vector space.
    I want everyone to become adept at computing with matrices.
    In particular, I want everyone to know:

    1. How the coordinates of a vector change when you change the basis of the vector space.
    2. How the matrix of a linear transformation changes when you change the basis of the vector space.

    I will not follow the text for this. The formula for 2 is hard to remember.
    Years ago I found a way to do it in a linear algebra text by D. Poole. It involves
    introducing the strange notation P_{A←B}, but once you do this the formula is automatic.

  3. Inner (dot) products (Chapter 4).

    If we assume that the vector space has an inner product, then we can do "Euclidean geometry in n dimensions".
    In particular, we can define lengths and angles by analogy with three dimensions (see the sketch after this list).
    There are special kinds of linear transformations, e.g. orthogonal, symmetric, etc., defined in relation to the inner product.

  4. Determinants (Chapter 5).

    This is a hard topic but it is very important.

  5. Polynomials and complex numbers (Chapter 6).

    Here we learn material which we need for the next chapter. There is a natural inner product on the space of polynomials of degree at most n.

  6. Normal forms for matrices (Chapter 7).

    Here we study the diagonal form for real symmetric matrices and the triangular and Jordan normal forms for all complex matrices.
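
Here is the sketch promised in item 3: a small numpy illustration, of my own making and not from the text, of lengths and angles in R^4 defined from the inner product. The vectors u and v are made up for the example.

    import numpy as np

    # Lengths and angles in R^4, defined from the inner product by analogy
    # with three dimensions (example vectors are made up).
    u = np.array([1., 0., 1., 0.])
    v = np.array([1., 1., 1., 1.])
    length_u = np.sqrt(u @ u)                   # |u| = sqrt(<u,u>) = sqrt(2)
    cos_theta = (u @ v) / (np.sqrt(u @ u) * np.sqrt(v @ v))
    print(length_u, np.degrees(np.arccos(cos_theta)))   # sqrt(2), 45 degrees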

THE COURSE LECTURES

01. Abstract vector spaces.
    Axioms and properties of an abstract vector space.

02. Spanning sets and linearly independent sets.
    Chapter 2. Linearly independent sets and spanning sets.

03. Bases.
    Chapter 2.
    • The definition of a basis.
    • The theorem (with proof) that every vector space has a basis.
    • The coordinates of a vector relative to a basis.

04. Linear transformations.
    We start Chapter 3 in the text: the definition of a linear transformation.

05. Linear transformations (continued).
    Chapter 3. More basic properties of linear transformations: the range R(T) and the null space N(T).

06. The matrix of a linear transformation.
    One of the most important lectures in the course. How to assign a matrix _B[T]_A (input basis A, output
    basis B) to a linear transformation T: V → W if you have a basis A for V and a basis B for W.
    This matrix is called the matrix of T relative to the bases A and B.

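Here is a short numpy sketch of this recipe (my own made-up map and bases, not from the text): apply T to each vector of A and record the B-coordinates of the result as a column.

    import numpy as np

    # Compute _B[T]_A for a sample map T: R^3 -> R^2 and sample bases A, B.
    def T(v):
        x, y, z = v
        return np.array([x + 2*y, 3*z])          # a made-up linear map

    A = [np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 1., 1.])]
    B_mat = np.column_stack([[1., 1.], [1., -1.]])   # columns: the basis B

    # Column j of _B[T]_A holds the B-coordinates of T applied to the
    # j-th vector of A.
    M = np.column_stack([np.linalg.solve(B_mat, T(a)) for a in A])
    print(M)
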
07. The right action of invertible n by n matrices on bases.
    The definition and properties of the right action of invertible n by n matrices on bases.
    This results in a new formula for the matrix attached to a linear transformation.

08. The first change of basis formula / the coordinates of a vector.
    Given two bases A and B for the same vector space, we define two change of basis matrices:
    the change of basis matrix from A to B (write A in terms of B), denoted P_{B←A}, and the change of basis matrix
    from B to A (write B in terms of A), denoted P_{A←B}. Each matrix is the inverse of the other. Then, given a
    fixed vector v in V, the coordinates of v relative to A, denoted [v]_A,
    and the coordinates of v relative to B, denoted [v]_B, are related by

        [v]_B = P_{B←A} [v]_A.

    It is hard to remember which of the two change of basis matrices to use.
    The above notation is my way of remembering the formula.
    If you can find another way of remembering the formula, that's fine with me.

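Here is a quick numerical check of the formula (my own 2 by 2 example): store each basis as the columns of a matrix, so P_{B←A} solves B P = A.

    import numpy as np

    # Check [v]_B = P_{B<-A} [v]_A in R^2 with made-up bases A and B.
    A = np.column_stack([[1., 0.], [1., 1.]])   # columns: the basis A
    B = np.column_stack([[2., 1.], [1., 1.]])   # columns: the basis B

    P_B_from_A = np.linalg.solve(B, A)          # writes A in terms of B

    v_A = np.array([3., -2.])                   # coordinates of v relative to A
    v_B = P_B_from_A @ v_A                      # the first change of basis formula
    assert np.allclose(B @ v_B, A @ v_A)        # both describe the same vector
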
09. The second change of basis formula / the matrix of a linear transformation.
    This is one of the trickiest formulas in the course. (The special case where V = W and A = B is Theorem 13.6 in the text.)
    Given a linear transformation T from V to W, a basis A for V, and a basis B for W, Lecture 06
    gives us the matrix _B[T]_A of T relative to the basis A for V and the basis B for W.
    Now suppose we are given a new basis C for V and a new basis D for W.
    Then the definition of Lecture 06 gives a new matrix _D[T]_C, namely the matrix of T relative to the bases C and D.
    The second change of basis formula tells us how these two matrices are related in terms of the four change of basis matrices
    P_{C←A} (from A to C), P_{A←C} (from C to A), P_{D←B} (from B to D), and P_{B←D} (from D to B):

        _D[T]_C = P_{D←B} _B[T]_A P_{A←C}.

    Note that on both sides of the formula the input basis is C and the output basis is D,
    and otherwise, on the right, the adjoining letters match in pairs, AA and BB.

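Here is a numerical check of the second formula (my own example with random matrices), with each basis stored as the columns of an invertible matrix and T given in standard coordinates.

    import numpy as np

    # Check _D[T]_C = P_{D<-B} _B[T]_A P_{A<-C} for a random T: R^2 -> R^2.
    rng = np.random.default_rng(0)
    A, C = rng.random((2, 2)), rng.random((2, 2))   # columns: two bases of V
    B, D = rng.random((2, 2)), rng.random((2, 2))   # columns: two bases of W
    T = rng.random((2, 2))                          # T in standard coordinates

    M_BA = np.linalg.solve(B, T @ A)                # _B[T]_A
    M_DC = np.linalg.solve(D, T @ C)                # _D[T]_C
    P_D_from_B = np.linalg.solve(D, B)              # P_{D<-B}
    P_A_from_C = np.linalg.solve(A, C)              # P_{A<-C}

    assert np.allclose(M_DC, P_D_from_B @ M_BA @ P_A_from_C)
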
10. Inner product spaces.
    We start Chapter 4. The n-dimensional analogue of the dot product from vector calculus.

11. Gram-Schmidt orthogonalization.
    Chapter 4. How to take a finite collection of vectors and make them into an orthonormal set.
    In particular, how to change any basis into an orthonormal basis.

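Here is a minimal sketch of the classical Gram-Schmidt procedure (my own illustration, not the text's presentation).

    import numpy as np

    # Turn a basis of R^n, given as a list of vectors, into an orthonormal basis.
    def gram_schmidt(vectors):
        orthonormal = []
        for v in vectors:
            w = v.astype(float).copy()
            # subtract the projections onto the vectors found so far
            for u in orthonormal:
                w -= (w @ u) * u
            orthonormal.append(w / np.linalg.norm(w))
        return orthonormal

    basis = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])]
    Q = np.column_stack(gram_schmidt(basis))
    assert np.allclose(Q.T @ Q, np.eye(3))   # the columns are orthonormal
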
12. Orthogonal groups.
    Chapter 4. The n-dimensional analogue of the group of rotations and reflections in the plane.

13. Direct sums.
    This lecture is very important. It is taken from Chapter 7, Section 23, pages 195-196.

14. Existence of the determinant.
    We start Chapter 5. Determinants are very tricky but very important; they come up everywhere in mathematics.
    You need to know how to compute determinants and how to multiply permutations.
    See Lecture 16 and the last subsection of Chapter 5 in the text.

15. Properties of the determinant.
    More on determinants: the determinant of a product is the product of the determinants,
    and a formula for the inverse of a square matrix.

16. Permutations.
    This lecture is important, as I said above.

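Here is a small sketch (my own, not from the text) of multiplying permutations and computing their signs, with a permutation of {0, ..., n-1} stored as a tuple p whose entry p[i] is the image of i.

    # Compose permutations and compute signs via inversion counting.
    def compose(p, q):
        # (p o q)(i) = p(q(i)): apply q first, then p
        return tuple(p[q[i]] for i in range(len(p)))

    def sign(p):
        # sign is (-1)^(number of inversions)
        n = len(p)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inversions % 2 else 1

    p = (1, 2, 0)        # the 3-cycle 0 -> 1 -> 2 -> 0, an even permutation
    q = (1, 0, 2)        # the transposition swapping 0 and 1
    print(compose(p, q), sign(p), sign(q))   # signs: +1 and -1
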
17. Polynomials.
    The division algorithm and unique factorization: every polynomial
    can be factored in a unique way as a product of powers of prime polynomials.
    I will say a little about complex numbers as well.

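As a quick illustration of the division algorithm (my own example, not from the text), numpy's polydiv divides one coefficient list by another:

    import numpy as np

    # Divide x^3 - 2x + 1 by x - 1; coefficients in order of decreasing degree.
    f = [1, 0, -2, 1]                 # x^3 - 2x + 1
    g = [1, -1]                       # x - 1
    q, r = np.polydiv(f, g)
    print(q, r)                       # quotient x^2 + x - 1, remainder 0
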
18. The minimal polynomial of a linear transformation.
    Given a polynomial p(x) and a linear transformation T in L(V, V), we form a new linear
    transformation p(T) by substituting T for x. The lowest-degree monic polynomial
    (monic: the highest-order term x^n has coefficient 1) such that p(T) = 0 is unique
    and is called the minimal polynomial of T, denoted m(x) or m_T(x).
    The polynomial m(x) contains a great deal of information about T but it is hard to compute.

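Here is a minimal sketch (my own example) of substituting a matrix T for x in p(x), using Horner's rule; the matrix below is made up so that (x - 1)(x - 2) kills it.

    import numpy as np

    # Evaluate p(T) for p given by coefficients in decreasing degree,
    # e.g. [1, -3, 2] means p(x) = x^2 - 3x + 2 = (x - 1)(x - 2).
    def poly_of_matrix(p, T):
        n = T.shape[0]
        result = np.zeros((n, n))
        for c in p:                   # Horner's rule: result = result*T + c*I
            result = result @ T + c * np.eye(n)
        return result

    T = np.diag([1., 2.])                  # eigenvalues 1 and 2
    print(poly_of_matrix([1, -3, 2], T))   # the zero matrix, so here m_T(x) = p(x)
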
19. The characteristic polynomial of a linear transformation.
    The characteristic polynomial of a matrix A, denoted h(x) or h_A(x), is defined by the formula

        h_A(x) = det(xI - A).

    The characteristic polynomial of a linear transformation T, denoted h(x) or h_T(x),
    is defined by first choosing a basis to get a matrix A representing T and then using the above formula.
    So h_T(x) := h_A(x). The result does not depend on the choice of basis.

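Numerically, numpy.poly returns the coefficients of h_A(x), highest degree first (my own example below, not from the text).

    import numpy as np

    # Characteristic polynomial of a made-up symmetric matrix.
    A = np.array([[2., 1.],
                  [1., 2.]])
    print(np.poly(A))   # [ 1. -4.  3.]: h_A(x) = x^2 - 4x + 3 = (x - 1)(x - 3)
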
20. T-invariant direct sums.
    The last part of the course will be concerned with special forms of a matrix A, e.g. diagonal form,
    triangular form, or Jordan normal form, which can be obtained by changing to a special basis,
    e.g. a basis of eigenvectors or a Jordan basis.
    So we want to find a matrix P^{-1} (the special basis written in terms of the standard basis)
    so that P A P^{-1} is in the desired special form.
    In our examples we will first find the matrix P^{-1} (which goes on the right) and then find P.
    The first step in finding P^{-1} will be to find a direct sum decomposition
    of the underlying vector space so that the summands in the decomposition
    are each carried into themselves by the linear transformation T corresponding to A.
    Such a decomposition is called a T-invariant direct sum decomposition.

21. Diagonalizing a real symmetric matrix.
    In this lecture we will prove the theorem that, given any real symmetric matrix A, there is a basis E of R^n
    consisting of eigenvectors of A. Hence A, rewritten in terms of this basis of eigenvectors E, is a diagonal matrix.
    So if S is the standard basis of R^n and P^{-1} is the matrix whose columns are the coordinates of
    the eigenvectors, so P^{-1} = P_{S←E}, we have

        D = P A P^{-1} = P_{E←S} A P_{S←E}.

    Remember: to diagonalize a symmetric matrix A, the matrix of eigenvectors P^{-1} goes on the right of A.
    The hard part of the theorem is proving that a real symmetric matrix A has at least one real eigenvalue.
    To do this, we are forced to go to complex matrices and prove the analogous theorem about Hermitian matrices.

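Here is a numerical sketch (my own 2 by 2 example) using numpy.linalg.eigh, which returns the eigenvalues of a real symmetric matrix together with an orthogonal matrix of eigenvectors; that matrix is P^{-1} = P_{S←E} and, as promised, goes on the right of A.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])
    eigenvalues, P_inv = np.linalg.eigh(A)   # columns of P_inv: eigenvectors
    P = P_inv.T                              # orthogonal, so inverse = transpose
    D = P @ A @ P_inv                        # D = P A P^{-1}
    assert np.allclose(D, np.diag(eigenvalues))
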
22. Diagonalizing a real symmetric matrix: examples.
    Two examples, one 2 by 2 and one 3 by 3.

23. Existence of a triangular form for a complex matrix.
    The proof that any complex matrix A can be put in triangular form.
    We use this to prove the Cayley-Hamilton theorem, that h_A(A) = 0.

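Here is a quick numerical check of the Cayley-Hamilton theorem (my own example): substitute A into its own characteristic polynomial and get the zero matrix.

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    h = np.poly(A)                  # coefficients of h_A, highest degree first
    result = np.zeros_like(A)
    for c in h:                     # Horner's rule, substituting A for x
        result = result @ A + c * np.eye(2)
    assert np.allclose(result, np.zeros((2, 2)))
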
24. Introduction to Jordan normal form.
    We will explain the Jordan normal form J(A) of a general complex matrix A by focusing on a seven-dimensional example.
    We will define a Jordan basis for a general complex matrix, generalizing the notion of an eigenbasis for a real symmetric matrix.

25. The minimal polynomial of a matrix in Jordan normal form.
    We will explain how to compute the minimal polynomial of a matrix A in Jordan normal form.

26. Computation of Jordan normal form for three examples.
    We will compute the Jordan normal form of two 3 by 3 matrices and one 4 by 4 matrix.

27. The first step in computing the Jordan normal form: the generalized eigenspace decomposition for a complex matrix A.
    This is the first of two main steps in proving the existence of the Jordan normal form of a complex matrix A.
    If A is an n by n matrix and λ is a complex number, then the generalized eigenspace G_λ(A) of λ
    is defined to be the null space of (A - λI)^n.
    The generalized eigenspace G_λ(A) is nonzero if and only if λ is an eigenvalue of A.
    The generalized eigenspace decomposition then states (for V = C^n):

    Theorem 1. We have an A-invariant direct sum decomposition V = Σ_λ G_λ(A), where the sum is over the eigenvalues λ of A.

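Here is a sketch (my own example, not from the text) of computing G_λ(A) numerically as the null space of (A - λI)^n, read off from the singular value decomposition.

    import numpy as np

    def generalized_eigenspace(A, lam, tol=1e-10):
        n = A.shape[0]
        M = np.linalg.matrix_power(A - lam * np.eye(n), n)   # (A - lambda*I)^n
        _, s, Vh = np.linalg.svd(M)
        rank = int((s > tol).sum())
        return Vh[rank:].conj().T    # columns span the null space G_lambda(A)

    A = np.array([[2., 1.],
                  [0., 2.]])         # one Jordan block, eigenvalue 2
    print(generalized_eigenspace(A, 2.0).shape[1])   # dimension 2: all of C^2
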
28. The second and last step: the decomposition of a generalized eigenspace into a sum of cyclic subspaces for A.
    The second, final, and hardest step in the proof of the existence of the Jordan normal form of a complex matrix A
    is the proof that each generalized eigenspace for A can be decomposed
    into a sum of cyclic subspaces for A - λI. Here, a subspace U of dimension k of the generalized eigenspace G_λ(A)
    is said to be a cyclic subspace if there exists a vector v in U such that U has a basis B_λ of the form

        B_λ = (v, (A - λI)v, (A - λI)^2 v, (A - λI)^3 v, ..., (A - λI)^{k-1} v).

    The vector v is said to be a cyclic vector for U and A.
    The last vector in the cyclic basis, (A - λI)^{k-1} v, is an eigenvector for A corresponding to the eigenvalue λ.
    If you reverse the order of the vectors in a cyclic basis for U, you get a Jordan basis for the cyclic subspace U.

    Theorem 2. Each generalized eigenspace G_λ(A) is the sum of cyclic subspaces (they can have different dimensions).

    The proof of Jordan normal form from Theorems 1 and 2.
    Reverse the order of the vectors in B_λ to get a new basis B_opposite
    starting with the eigenvector (A - λI)^{k-1} v.
    Write out the matrix of A relative to the new basis B_opposite; the resulting matrix is a Jordan block (see Lecture 24).
    Now do this for each generalized eigenspace. The resulting matrix will be in Jordan normal form.
    The collection of cyclic subspaces U belonging to a single generalized eigenspace G_λ(A)
    gives rise to all the Jordan λ-blocks, and the sum over the G_λ(A) provides all the collections
    of Jordan λ-blocks as λ varies.
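
Here is a small numerical illustration (my own 3 by 3 example) of this construction: start from a cyclic vector v, form the reversed cyclic basis, and check that A written in that basis is a single Jordan block.

    import numpy as np

    lam = 2.0
    A = np.array([[2., 1., 0.],
                  [0., 2., 1.],
                  [0., 0., 2.]])
    N = A - lam * np.eye(3)                  # nilpotent: N^3 = 0
    v = np.array([0., 1., 1.])               # a cyclic vector: N^2 v != 0
    basis = [N @ (N @ v), N @ v, v]          # reversed cyclic basis B_opposite
    P_inv = np.column_stack(basis)           # the basis in standard coordinates
    J = np.linalg.solve(P_inv, A @ P_inv)    # A relative to the new basis
    print(np.round(J, 10))                   # the 3 by 3 Jordan block for lambda = 2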