NEWS:
- practice problems for the final exam, with solutions
- You can bring a "cheat sheet" for the final exam: letter size, 2-sided, must be handwritten
- Topics for Final Exam:
- Least squares problem: Normal equations, orthogonalization, QR decomposition
SVD will NOT be on final exam
- Approximation: Fourier series for periodic functions, Chebyshev series for functions on [-1,1]. Approximation by orthogonal projection and by interpolation. Error estimates for the L^{2} or L_{w}^{2} norms and for the L^{∞} norm. Decay of coefficients if the m-th derivative has bounded variation, or if the function is analytic.
- Solving linear problems, finding minimum of quadratic functions
- Motivating problem: FEM for a convection-diffusion problem in 1D
gives a linear system Au=b where A is tridiagonal, ||A||_{2} ≤ C h^{-1},
symmetric part H = (A+A^{T})/2 is positive definite: λ_{min}(H) ≥ C' h with C'>0
diffusion problem: A is symmetric, u minimizes f(u) := ½u^{T}Au - b^{T}u
- Richardson method where
(i) α is fixed (ii) GMRES(1): α_{k} chosen to minimize ||r_{k+1}||
(iii) for symmetric A: "steepest descent": α_{k} chosen to minimize f(u_{k+1})
- conjugate gradient method will NOT be on final
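As a sketch (not code from the course), Richardson iteration with the steepest-descent choice of α_{k} for a symmetric positive definite A might look like this; the matrix, right-hand side, and tolerances are hypothetical:

```matlab
% Richardson iteration u_{k+1} = u_k + alpha_k*r_k with residual r_k = b - A*u_k.
% Hypothetical example: tridiagonal SPD model matrix, random right-hand side.
n = 100;
A = gallery('tridiag',n,-1,2,-1);   % sparse tridiagonal SPD matrix
b = rand(n,1);
u = zeros(n,1);
for k = 1:500
  r = b - A*u;
  alpha = (r'*r)/(r'*(A*r));        % steepest descent: minimizes f(u_{k+1})
  % alpha = 0.4;                    % alternative (i): fixed step size
  u = u + alpha*r;
  if norm(r) < 1e-8, break; end
end
```

For nonsymmetric A, choice (ii) GMRES(1) replaces the formula for alpha by the one minimizing ||r_{k+1}||, namely alpha = (r'*(A*r))/((A*r)'*(A*r)).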
- Solving nonlinear problems, finding local minimum of functions
- solve F(x)=0: Richardson iteration, Newton method
- quasi-Newton (Broyden) method will NOT be on final
- finding local minimum of f(x): use positive definite matrix B_{k} to find search direction d_{k}, find λ>0, let x_{k+1}:=x_{k}+λd_{k}
- B_{k}=I: steepest descent method
- B_{k}=F'(x_{k})+αI: modified Newton method
- quasi-Newton (BFGS) will NOT be on final
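A minimal sketch of the Newton method for F(x)=0 mentioned above, with a hypothetical test system (not from the course notes):

```matlab
% Newton iteration for F(x)=0: solve F'(x_k) d = -F(x_k), set x_{k+1} = x_k + d.
% Hypothetical example: F(x) = [x1^2 + x2^2 - 1; x1 - x2].
F  = @(x) [x(1)^2 + x(2)^2 - 1; x(1) - x(2)];
DF = @(x) [2*x(1), 2*x(2); 1, -1];   % Jacobian F'(x)
x = [1; 0];                          % initial guess
for k = 1:20
  d = -DF(x)\F(x);                   % Newton step
  x = x + d;
  if norm(d) < 1e-12, break; end
end
```

For minimization one replaces the Newton step by solving B_{k} d_{k} = -∇f(x_{k}) with a positive definite B_{k}, and adds a step length λ>0 as described above.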
- How to solve nonlinear systems and unconstrained optimization problems in Matlab: see below
- Summary of basic results for nonlinear equations, Newton method
- Iterative methods for symmetric positive definite matrix: Steepest Descent Method and Conjugate Gradient Method (UPDATED!)
- Notes on iterative methods for linear systems
- MIDTERM EXAM will be on Mon, Nov. 7.
You can bring a "cheat sheet": letter size, handwritten, use front & back
It will cover the material in Notes on Approximation and Notes on Least Squares Approximation.
- approximation with polynomials and trigonometric polynomials
basic theorems: Theorems 3-8 in Notes on Approximation
Fourier series, Chebyshev series
method of approximation: interpolation, least squares approximation for L^{2}, least squares approximation for L_{w}^{2}
interpolation with arbitrary nodes, Lagrange formula
interpolation with equidistant nodes for trig. polynomials, interpolation with Chebyshev nodes for polynomials
bounds for ||u-u_{n}||_{∞}, ||u-p_{n}||_{∞} in terms of Fourier/Chebyshev coefficients
decay of Fourier/Chebyshev coefficients if (i) the k-th derivative has bounded variation, (ii) the function is analytic
- least squares approximation: normal equations, Gram-Schmidt algorithm, QR decomposition
Singular Value Decomposition of a matrix, finding best lower-rank approximations
- Quadrature using interpolation with polynomials for I[u]=∫_{a}^{b}u(x)w(x)dx
for any given nodes: find w_{1},...,w_{n} , exactness for some polynomial degree k
special choices of nodes: (i) Chebyshev nodes (Clenshaw-Curtis quadrature), (ii) Gaussian quadrature
error estimates using ||u-p_{n}||_{∞}
- How to solve sparse linear systems in Matlab (m-file), m-file for initializing the stiffness matrix: laplacian2.m
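The quadrature weights w_{1},...,w_{n} for given nodes can be found by requiring exactness for 1, x, ..., x^{n-1}. A sketch for the case w(x)=1 on [a,b], with hypothetical nodes:

```matlab
% Find quadrature weights for given nodes x_1,...,x_n so that
% sum w_i*u(x_i) = int_a^b u(x) dx holds exactly for all polynomials
% of degree <= n-1 (moment conditions for the monomials).
a = -1; b = 1;
x = [-0.9; -0.2; 0.5; 0.8];     % hypothetical nodes
n = length(x);
V = zeros(n); m = zeros(n,1);
for j = 1:n
  V(j,:) = (x.^(j-1))';         % row j: sum w_i*x_i^(j-1)
  m(j) = (b^j - a^j)/j;         % = int_a^b x^(j-1) dx
end
w = V\m;                        % quadrature weights
```

The same idea with a weight function w(x) just replaces the moments m(j) by ∫_{a}^{b} x^{j-1} w(x) dx.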
- Assignment 2, due Friday, Oct. 7
When you plot error graphs: Use plot(...,...,'o-')
Problem 2: Use modlagr.m to evaluate the interpolating polynomial, see "Lagrange Interpolation" and Example below.
Problem 1(a): Typo: It should read [U_{0},...,U_{n}]
Example: for n=4 we have a vector [U_{0},...,U_{4}] . You then extend this vector to a vector [U_{0},...,U_{7}] where the additional entries U_{5}, U_{6}, U_{7} use the fact that u(t) is even.
Problem 2(a): determine the value α experimentally from the slope of the (n, log E_{n}) graph.
Problem 2(b): first determine the value α experimentally from the slope of the (n, log E_{n}) graph. Then find a theoretical value for α.
Problem 3(a): Typo: It should read f_{1}(t) := |cos t|^{1/3} .
Hint for problem 3
- Class notes about approximation (Oct. 4)
- Interpolation of a periodic function with p∈ T_{n}^{cos} :
Make sure you understand the code in the m-files cossin_transform.m, inv_cossin_transform.m
and the Example for interpolation (m-file)
- Principal Component Analysis (PCA):
Example 1: Classifying iris flowers, data irisdata.txt, m-file, nice3dn.m
Example 2: Nutrition in Europe, data proteindata.txt, m-file
- Course notes on Least Squares Approximation
- There was a TYPO in the homework: In problem 1(b) it should be "n-2" instead of "n-1": It should read
(log(n-2), log d_{n-2}), (log n, log d_{n})
- Assignment 1, due Friday, Sept. 16
Hint: Only print out the values I ask for, nothing else. For the graph make sure that each point (log n, log d_{n}) is marked (if you only plot the connecting line you can't see each point).
You can do this as follows:
for n=2:2:30
  ... % code which computes d, condition, slope
  fprintf('n=%g, d=%g, slope=%g, cond=%g\n',n,d,slope,condition)
  plot( log(n), log(d), 'o'); hold on
end
hold off; grid on
- Since AMSC466 is a prerequisite I will assume that you are familiar with the basics of Numerical Analysis: roundoff error, linear systems (with matrix norms and condition numbers), interpolation, solution of nonlinear equations, numerical integration.
In class I will briefly summarize all results which I need for this course. I will provide references if you want to review the material in more detail.
- I recommend the following book:
E. Süli: "An Introduction to Numerical Analysis", Cambridge University Press, 2003
Chapters 1,2,6,7 in this book contain the prerequisites for this course.
Chapters 8,9,10,12,13,14 are related to the material I want to cover in this course.
- My notes about Linear systems:
Solving Linear Systems using LU-decomposition
Norms, Condition Number, Errors for Linear Systems
- Example from class: Least squares approximation of sine function by quadratic polynomial
m-files: l2proj_ex.m, findmatrix.m, findrhsvector.m
Please look at this example and make sure you understand the code. Run the code in Matlab and answer the following questions:
Run this with n=4,5,6. What should happen with ||u-p||_{2} as n increases?
Run this with n=15. What is happening? Hint: Print out the condition number of the matrix.
Least squares approximation of sign(x) on [-1,1] with polynomials of degree 10, 20, 40
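A sketch of the least squares projection behind this example (not the actual l2proj_ex.m; the interval [0,1] and the monomial basis are assumptions made here for illustration). It shows why the n=15 run behaves badly: the Gram matrix of the monomials is a Hilbert-type matrix, which is extremely ill-conditioned.

```matlab
% Least squares approximation of u(x)=sin(x) on [0,1] by a polynomial of
% degree n via the normal equations, using the monomial basis 1,x,...,x^n.
% Gram matrix entries: M(j+1,k+1) = int_0^1 x^j*x^k dx = 1/(j+k+1).
n = 4;
M = zeros(n+1); r = zeros(n+1,1);
for j = 0:n
  for k = 0:n
    M(j+1,k+1) = 1/(j+k+1);                     % inner products of basis
  end
  r(j+1) = integral(@(x) sin(x).*x.^j, 0, 1);   % int_0^1 u(x)*x^j dx
end
c = M\r;                                        % coefficients of best approx.
fprintf('condition number of M: %g\n', cond(M))
```

Using an orthogonal basis (Gram-Schmidt / QR, as in the course notes) avoids this ill-conditioning.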
SYLLABUS: Information about exams, homeworks, grades
xs = fzero(@f,[a,b])
Here the function f is given in the m-file f.m. If the function f is defined by an @-expression, use xs = fzero(f,[a,b]), e.g.
f=@(x)sin(x); xs=fzero(f,[3,4])
To solve a nonlinear system with fsolve, write an m-file f.m of the form
function y = f(x)
y = ...
Then use x = fsolve(@f,x0) where x0 is the initial guess.
If the Jacobian is also available, write an m-file of the form
function [y,A] = f(x)
y = ...
A = ...
Then use
opt = optimset('Jacobian','on');
x = fsolve(@f,x0,opt);
where x0 is the initial guess.
To change the tolerances use
opt = optimset('TolFun',1e-5,'TolX',1e-5);
x = fsolve(@f,x0,opt);
There are many other options, type doc fsolve for more information.
A = sparse(iv,jv,av);
Example: The full matrix given by A = [0 7 0; 0 0 8; 9 0 0] can be defined as a sparse matrix by
A = sparse([1 2 3],[2 3 1],[7 8 9])
We will use Matlab to see how various algorithms perform (or fail).
Matlab can be downloaded for free from TERPware.
How to hand in Matlab results for homeworks: