## News
• practice problems for the final exam, solutions
• topics for the final exam
• You can bring a "cheat sheet" for the final exam: letter size, 2-sided, must be handwritten
• Topics for Final Exam:
• Least squares problem: Normal equations, orthogonalization, QR decomposition
SVD will NOT be on final exam
• Approximation: Fourier series for periodic functions, Chebyshev series for functions on [-1,1]. Approximation by orthogonal projection and by interpolation. Error estimates in the L^2 or L^2_w norm and in the L^∞ norm. Decay of the coefficients if the m-th derivative has bounded variation, or if the function is analytic.
• Solving linear problems, finding minimum of quadratic functions
• Motivating problem: FEM for convection diffusion problem, only 1D
gives a linear system Au = b where A is tridiagonal, ||A||_2 ≤ C h^{-1},
the symmetric part H = (A + A^T)/2 is positive definite: λ_min(H) ≥ C' h with C' > 0
diffusion problem: A is symmetric, u minimizes f(u) := ½ u^T A u - b^T u
• Richardson method u_{k+1} := u_k + α_k (b - A u_k), where
(i) α_k = α is fixed, (ii) GMRES(1): α_k chosen to minimize ||r_{k+1}||,
(iii) for symmetric A, "steepest descent": α_k chosen to minimize f(u_{k+1})
• conjugate gradient method will NOT be on final
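The three variants of the Richardson iteration above can be sketched as follows (a minimal Python/NumPy sketch rather than Matlab; the small tridiagonal test system and the function names are my own toy illustration, not the FEM matrix from class):

```python
import numpy as np

# Toy symmetric positive definite tridiagonal system (illustration only)
n = 10
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

def richardson_fixed(A, b, alpha, iters=500):
    """(i) Richardson with fixed step: u <- u + alpha*(b - A u)."""
    u = np.zeros(len(b))
    for _ in range(iters):
        u = u + alpha*(b - A @ u)
    return u

def richardson_gmres1(A, b, iters=500, tol=1e-14):
    """(ii) GMRES(1): alpha_k minimizes ||r_{k+1}|| = ||r_k - alpha A r_k||."""
    u = np.zeros(len(b))
    for _ in range(iters):
        r = b - A @ u
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        u = u + ((r @ Ar) / (Ar @ Ar)) * r
    return u

def steepest_descent(A, b, iters=500, tol=1e-14):
    """(iii) for symmetric A: alpha_k minimizes f(u_{k+1}),
    which gives alpha_k = (r_k . r_k)/(r_k . A r_k)."""
    u = np.zeros(len(b))
    for _ in range(iters):
        r = b - A @ u
        if np.linalg.norm(r) < tol:
            break
        u = u + ((r @ r) / (r @ (A @ r))) * r
    return u
```

For a symmetric positive definite A the fixed-step variant converges only for 0 < α < 2/λ_max(A); the other two choose α_k adaptively from the current residual.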
• Solving nonlinear problems, finding local minimum of functions
• solve F(x)=0: Richardson iteration, Newton method
• quasi-Newton (Broyden) method will NOT be on final
• finding a local minimum of f(x): use a positive definite matrix B_k to find the search direction d_k, find λ > 0, let x_{k+1} := x_k + λ d_k
• B_k = I: steepest descent method
• B_k = F'(x_k) + αI: modified Newton method
• quasi-Newton (BFGS) will NOT be on final
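Newton's method for F(x)=0 mentioned above can be sketched as follows (a minimal Python/NumPy sketch rather than Matlab; the 2x2 example system is made up for illustration):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=50):
    """Newton's method for F(x)=0: solve J(x_k) d = -F(x_k), set x_{k+1} = x_k + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# Hypothetical example system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])  # Jacobian of F
x = newton(F, J, [2.0, 0.5])
```

Near a root with invertible Jacobian the convergence is quadratic; a poor initial guess may still cause divergence, which is one motivation for the damped line-search variants above.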
• How to solve nonlinear systems and unconstrained optimization problems in Matlab: see below
• Summary of basic results for nonlinear equations, Newton method
• Iterative methods for symmetric positive definite matrix: Steepest Descent Method and Conjugate Gradient Method (UPDATED!)
• Notes on iterative methods for linear systems
• MIDTERM EXAM will be on Mon, Nov. 7.
You can bring a "cheat sheet": letter size, handwritten, use front & back
It will cover the material in Notes on Approximation and Notes on Least Squares Approximation.
• approximation with polynomials and trigonometric polynomials
basic theorems: Theorems 3-8 in Notes on Approximation
Fourier series, Chebyshev series
methods of approximation: interpolation, least squares approximation in L^2, least squares approximation in L^2_w
interpolation with arbitrary nodes, Lagrange formula
interpolation with equidistant nodes for trigonometric polynomials, interpolation with Chebyshev nodes for polynomials
bounds for ||u - u_n||, ||u - p_n|| in terms of Fourier/Chebyshev coefficients
decay of Fourier/Chebyshev coefficients if (i) the k-th derivative has bounded variation, (ii) the function is analytic
• least squares approximation: normal equations, Gram-Schmidt algorithm, QR decomposition
Singular Value Decomposition for matrix, finding best lower rank approximations
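The least squares topic can be illustrated with a short comparison of the normal equations and the QR approach (a Python/NumPy sketch; the random overdetermined system is made up for illustration):

```python
import numpy as np

# Hypothetical overdetermined least squares problem min ||Ax - b||_2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

# (1) Normal equations A^T A x = A^T b (simple, but squares the condition number)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# (2) QR decomposition A = QR, then solve the triangular system R x = Q^T b
Q, R = np.linalg.qr(A)            # reduced QR: Q is 20x5, R is 5x5
x_qr = np.linalg.solve(R, Q.T @ b)

# Both solutions satisfy the orthogonality condition A^T (b - A x) = 0
residual_orth = np.linalg.norm(A.T @ (b - A @ x_qr))
```

For well-conditioned A the two answers agree; for ill-conditioned A the QR route is the numerically safer one.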
• Quadrature using interpolation with polynomials for I[u] = ∫_a^b u(x) w(x) dx
for any given nodes: find the weights w_1, ..., w_n; exactness for some polynomial degree k
special choices of nodes: (i) Chebyshev nodes (Clenshaw-Curtis quadrature), (ii) Gaussian quadrature
error estimates using ||u - p_n||
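As a quick check of the exactness property, here is a Gaussian quadrature sketch for w(x) = 1 on [-1,1] (Python/NumPy; `gauss_quad` is my own hypothetical helper name):

```python
import numpy as np

def gauss_quad(u, n):
    """Gauss-Legendre quadrature for I[u] = int_{-1}^{1} u(x) dx (w(x) = 1).
    With n nodes it is exact for all polynomials of degree <= 2n-1."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes x_i and weights w_i
    return w @ u(x)

# degree-5 polynomial: exact with only n = 3 nodes (2n-1 = 5)
I_poly = gauss_quad(lambda x: x**5 + x**2, 3)   # exact integral: 2/3

# smooth (analytic) integrand: the error decays very rapidly in n
I_exp = gauss_quad(np.exp, 8)                   # exact integral: e - 1/e
```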
• How to solve sparse linear systems in Matlab (m-file), m-file for initializing the stiffness matrix: laplacian2.m
• Assignment 2, due Friday, Oct. 7
When you plot error graphs: Use plot(...,...,'o-')
Problem 2: Use modlagr.m to evaluate the interpolating polynomial, see "Lagrange Interpolation" and Example below.
Problem 1(a): Typo: it should read [U_0, ..., U_n].
Example: for n = 4 we have a vector [U_0, ..., U_4]. You then extend this vector to a vector [U_0, ..., U_7], where the additional entries U_5, U_6, U_7 use the fact that u(t) is even.
Problem 2(a): determine the slope α experimentally from the slopes in the (n, log E_n) graph.
Problem 2(b): first determine the slope α experimentally from the slopes in the (n, log E_n) graph, then find a theoretical value for α.
Problem 3(a): Typo: it should read f_1(t) := |cos t|^{1/3}.
Hint for problem 3
• Class notes about approximation (Oct. 4)
• Interpolation of a periodic function with p ∈ T_n^cos:
Make sure you understand the code in the m-files cossin_transform.m, inv_cossin_transform.m
and the Example for interpolation (m-file)
• Principal Component Analysis (PCA):
Example 1: Classifying iris flowers, data irisdata.txt, m-file, nice3dn.m
Example 2: Nutrition in Europe, data proteindata.txt, m-file
• Course notes on Least Squares Approximation
• There was a TYPO in the homework: in problem 1(b) it should be "n-2" instead of "n-1": it should read
(log(n-2), log d_{n-2}), (log n, log d_n)
• Assignment 1, due Friday, Sept. 16
Hint: Only print out the values I ask for, nothing else. For the graph make sure that each point (log n, log d_n) is marked (if you only plot the connecting line you cannot see the individual points).
You can do this as follows:
```
for n = 2:2:30
    % ... code which computes d, condition, slope
    fprintf('n=%g, d=%g, slope=%g, cond=%g\n', n, d, slope, condition)
    plot(log(n), log(d), 'o'); hold on
end
hold off; grid on
```
• Since AMSC466 is a prerequisite I will assume that you are familiar with the basics of Numerical Analysis: roundoff error, linear systems (with matrix norms and condition numbers), interpolation, solution of nonlinear equations, numerical integration.
In class I will briefly summarize all results which I need for this course. I will provide references if you want to review the material in more detail.
• Example from class: Least squares approximation of sine function by quadratic polynomial
m-files: l2proj_ex.m, findmatrix.m, findrhsvector.m
Please look at this example and make sure you understand the code. Run the code in Matlab and answer the following questions:
Run this with n = 4, 5, 6. What should happen with ||u - p||_2 as n increases?
Run this with n=15. What is happening? Hint: Print out the condition number of the matrix.

• Least squares approximation of sign(x) on [-1,1] with polynomials of degree 10, 20, 40

## Course information

• Since Fall 2015 the course AMSC 666 has a new syllabus, combining the most important topics of the former AMSC666 and AMSC667. This makes the course more attractive:
• students who only want to take one graduate numerical analysis course get to see more relevant topics
• students who are interested in topics like numerical PDE methods can more quickly take courses like AMSC612, AMSC614
• The AMSC qualifying exam rules were updated accordingly: the sequences
AMSC666, AMSC612 and AMSC666, AMSC614
were added to the list of permitted 2-course sequences.

## Additional Course Material

• Solving a scalar nonlinear equation with Matlab: (one variable)
We want to find a zero of the function f in the interval [a,b] where f(a) and f(b) have different signs:
`xs = fzero(@f,[a,b])`
Here the function f is given in the m-file f.m. If the function f is defined by an @-expression use `xs=fzero(f,[a,b])`.
Example with @-function: ` f=@(x)sin(x); xs=fzero(f,[3,4])`
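The bracketing idea behind `fzero(@f,[a,b])` (a sign change of f on [a,b]) can be illustrated with a plain bisection sketch in Python; note that the actual fzero uses a faster combined algorithm, and this is only the simplest method with the same guarantee:

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Find a zero of f in [a, b], assuming f(a) and f(b) have different signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have different signs"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:        # the sign change is in [a, m]
            b, fb = m, fm
        else:                   # the sign change is in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

xs = bisect(math.sin, 3, 4)     # same example as above: zero of sin on [3, 4]
```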
• Solving a nonlinear system in Matlab using fsolve: (multiple variables)
• Without using the Jacobian:
Write an m-file f.m containing the function f:
```
function y = f(x)
y = ...
```
Then use `x = fsolve(@f,x0)` where x0 is the initial guess.
• With using the Jacobian:
Write an m-file f.m containing the function and the Jacobian:
```
function [y,A] = f(x)
y = ...
A = ...
```
Then use
```
opt = optimset('Jacobian','on');
x = fsolve(@f,x0,opt);
```
where x0 is the initial guess.
• You can specify additional options using optimset, e.g. the tolerance for the function value and the tolerance for x:
```
opt = optimset('TolFun',1e-5,'TolX',1e-5);
x = fsolve(@f,x0,opt);
```
There are many other options; type `doc fsolve` for more information.
• Solving an unconstrained optimization problem in Matlab:
• function of one variable:
`xs = fminbnd(@f,a,b)` finds a local minimum in the interval [a,b].
Example for fminbnd
• function of several variables
`xs = fminunc(@f,x0)` tries to find a local minimum of the function given in the m-file f.m
Example for fminunc
• Contraction Mapping Theorem (a.k.a. Banach Fixed Point Theorem)
• How to solve a linear system in Matlab:
```
[L,U,p] = lu(A,'vector'); % Gaussian elimination choosing the pivot candidate with largest absolute value
y = L\b(p);               % forward substitution with permuted vector b
x = U\y;                  % back substitution
```
• or just use the shortcut `x = A\b`, which executes the above steps (but without the option to save the result of the LU decomposition).
• How to solve sparse linear systems in Matlab:
A sparse matrix is a matrix where most elements are zero. In this case it is much more efficient to use the special sparse data structures in Matlab. All operations like *, \, lu have special efficient algorithms for matrices in sparse format. In particular the backslash command x=A\b uses clever row and column permutations to minimize "fill-in".
• initializing a sparse matrix: A sparse matrix can be described by giving a list of the nonzero elements as vectors iv (the i-values), jv (the j-values), av (the matrix entries). Then the sparse matrix is initialized by `A = sparse(iv,jv,av);`
Example: the full matrix given by `A = [0 7 0; 0 0 8; 9 0 0]` can be defined as a sparse matrix by `A = sparse([1 2 3],[2 3 1],[7 8 9])`
• Solving a linear system with a sparse matrix:
```
[L,U,p,q] = lu(A,'vector'); % Gaussian elimination with row and column permutations (chosen to minimize "fill-in")
y = L\b(p);                 % forward substitution with permuted vector b
x = U\y; x(q) = x;          % back substitution, then undo the column permutation q
```
• or just use `x = A\b`, which does the above steps automatically (but without the option to save the result of the LU decomposition).
• Lagrange Interpolation: Modified and Barycentric Formula
Implementation in Matlab: modlagr.m, Example (m-file)
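The barycentric formula above can be sketched in a few lines (Python/NumPy rather than the course's modlagr.m; the function names here are my own):

```python
import numpy as np

def barycentric_weights(x):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    w = np.ones(len(x))
    for j in range(len(x)):
        for k in range(len(x)):
            if k != j:
                w[j] /= (x[j] - x[k])
    return w

def barycentric_eval(x, y, w, t):
    """Evaluate the interpolant through (x_j, y_j) at points t
    using the second (barycentric) form of the Lagrange formula."""
    t = np.atleast_1d(t).astype(float)
    p = np.empty_like(t)
    for i, ti in enumerate(t):
        diff = ti - x
        hit = np.flatnonzero(diff == 0.0)
        if hit.size:                      # t coincides with a node
            p[i] = y[hit[0]]
        else:
            c = w / diff
            p[i] = (c @ y) / c.sum()
    return p

# interpolate u(x) = 1/(1+x^2) at 9 Chebyshev nodes on [-1, 1]
x = np.cos(np.pi * (2*np.arange(9) + 1) / 18)
y = 1.0 / (1.0 + x**2)
w = barycentric_weights(x)
```

The O(n^2) weight computation is done once; each evaluation then costs only O(n), which is the point of the barycentric form.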
• Approximation with polynomials, quadrature
• Approximation Theory and Approximation Practice (Errata) by L. N. Trefethen.
I will not follow this very closely, but we will cover parts of sections 2, 3, 4, 6, 7, 8, 19 in class.
• Approximation with trigonometric functions

## Topics

• Approximation Theory (3 weeks, [1,2,3])
• Vector, Matrix and Functional Norms
• Least Squares, QR, SVD
• Orthogonal Polynomials
• Chebyshev Expansions
• Numerical Solution of Initial-Value Problems (3 weeks, [4,5,6])
• Consistency, Stability, and Convergence Analysis
• One-Step Methods, Runge-Kutta Methods
• Multistep Methods
• Methods for Stiff Problems
• Error Estimation and Adaptivity
• Iterative Methods for Linear Algebraic Systems (3 weeks, [7,8,9,10])
• Motivation: Boundary-Value Problems for Elliptic PDEs
• Classic Iterative Methods: Jacobi, Gauss-Seidel, SOR methods
• Conjugate Gradient Method and Preconditioning
• GMRES
• Optimization (3 weeks, [9,10,11])
• Steepest Descent, Newton and Quasi-Newton Methods
• Line Search Methods and Trust Region Methods
• Rates of Convergence
• Nonlinear Conjugate Gradient Method
• Nonlinear Least Squares Problem

## Literature

1. James Demmel, Applied Numerical Linear Algebra, SIAM, 1997
freely available online via UMD Library
2. Rivlin, T. J., An Introduction to the Approximation of Functions, Dover, 1969
3. Gil, A., Segura, J., Temme, N., Numerical Methods for Special Functions, SIAM, 2007
chapter on Chebyshev Expansions is freely available online
4. Hairer, E., Nørsett, S. P., Wanner, G., Solving Ordinary Differential Equations I. Nonstiff Problems, Second Revised Edition, Springer, 1993
5. Hairer, E., Wanner, G., Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems, Second Revised Edition, Springer 1993
6. Deuflhard, P., Bornemann, F., Scientific Computing with Ordinary Differential Equations, Springer, 2002
7. Elman, H., Silvester, D., Wathen, A., Finite Elements and Fast Iterative Solvers, Second Edition, Oxford Science Publications, 2014
8. Morton, K. W., Mayers, D. F., Numerical Solution of Partial Differential Equations, Second Edition, Cambridge, 2005
9. Nocedal, J., Wright, S., Numerical Optimization, Second Edition, Springer 2006
freely available online via UMD Library
10. Kelley, C. T., Iterative Methods for Linear and Nonlinear Equations, SIAM 1995
11. Kelley, C. T., Iterative Methods for Optimization, SIAM 1999
freely available online via UMD Library

## Matlab programming

We will use Matlab to see how various algorithms perform (or fail).