- Practice problems for final exam, with solutions
topics for final exam
- You can bring a "cheat sheet" for the final exam: letter size, 2-sided, must be handwritten
- Topics for Final Exam:
- Least squares problem: Normal equations, orthogonalization, QR decomposition
SVD will NOT be on final exam
- Approximation: Fourier series for periodic functions, Chebyshev series for functions on [-1,1]. Approximation by orthogonal projection and by interpolation. Error estimates in the L2 or weighted Lw2 norm and in the L∞ norm. Decay of the coefficients if the m-th derivative has bounded variation, or if the function is analytic.
- Solving linear problems, finding minimum of quadratic functions
- Motivating problem: FEM for convection diffusion problem, only 1D
gives a linear system Au = b where A is tridiagonal, ||A||_2 ≤ C h^(-1),
and the symmetric part H = (A + A^T)/2 is positive definite: λ_min(H) ≥ C' h with C' > 0
diffusion problem: A is symmetric, and u minimizes f(u) := (1/2) u^T A u - b^T u
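The tridiagonal structure and the h-scalings above can be checked numerically. Here is a minimal Python/numpy sketch (the course itself uses Matlab; the function name stiffness_1d and the unit-interval mesh are assumptions for illustration) assembling the standard linear-FEM stiffness matrix for the pure diffusion problem -u'' = f on [0,1]:

```python
import numpy as np

def stiffness_1d(n):
    # Linear FEM on a uniform mesh of [0,1] with n interior nodes, h = 1/(n+1):
    # tridiagonal matrix with 2/h on the diagonal and -1/h off the diagonal.
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 / h
        if i + 1 < n:
            A[i, i + 1] = A[i + 1, i] = -1.0 / h
    return A

A = stiffness_1d(10)
h = 1.0 / 11
assert np.allclose(A, A.T)                   # symmetric (pure diffusion)
assert np.linalg.eigvalsh(A).min() > 8 * h   # lambda_min >= C' h with C' > 0
```

For the convection-diffusion problem the convection term adds a skew-symmetric part, and only the symmetric part H = (A + A^T)/2 remains positive definite.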
- Richardson method u_{k+1} := u_k + α_k r_k with residual r_k = b - A u_k, where
(i) α_k = α is fixed, (ii) GMRES(1): α_k chosen to minimize ||r_{k+1}||,
(iii) for symmetric A, "steepest descent": α_k chosen to minimize f(u_{k+1})
- conjugate gradient method will NOT be on final
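These Richardson variants can be sketched in a few lines of Python/numpy (the course uses Matlab; the helper name richardson and the 2x2 test matrix are illustrative assumptions). With alpha=None the sketch uses the steepest-descent choice α_k = (r_k·r_k)/(r_k·A r_k) for symmetric positive definite A; the minimal-residual choice of GMRES(1) would instead be α_k = (r_k·A r_k)/(A r_k·A r_k):

```python
import numpy as np

def richardson(A, b, alpha=None, tol=1e-10, maxit=10000):
    """Richardson iteration u_{k+1} = u_k + alpha_k r_k, r_k = b - A u_k.
    alpha=None: alpha_k = (r.r)/(r.Ar) (steepest descent for SPD A);
    a fixed number alpha gives the classical Richardson method."""
    u = np.zeros_like(b, dtype=float)
    for _ in range(maxit):
        r = b - A @ u
        if np.linalg.norm(r) < tol:
            break
        a = alpha if alpha is not None else (r @ r) / (r @ (A @ r))
        u = u + a * r
    return u

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD example matrix
b = np.array([1.0, 2.0])
u = richardson(A, b)                     # steepest-descent alpha_k
```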
- Solving nonlinear problems, finding local minimum of functions
- solve F(x)=0: Richardson iteration, Newton method
- quasi-Newton (Broyden) method will NOT be on final
- finding a local minimum of f(x): use a positive definite matrix Bk to find the search direction dk = -Bk^(-1) ∇f(xk), find a step length λ>0, let xk+1:=xk+λdk
- Bk=I: steepest descent method
- Bk=F'(xk)+αI (with F = ∇f, and α ≥ 0 chosen so that Bk is positive definite): modified Newton method
- quasi-Newton (BFGS) will NOT be on final
- How to solve nonlinear systems and unconstrained optimization problems in Matlab: see below
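The descent framework above (direction from a positive definite B_k, then a step length λ) can be sketched in Python/numpy; Matlab is the course language, and the Armijo-type backtracking used for λ here is one common choice of step-length rule, not the only one:

```python
import numpy as np

def descent(f, grad, x0, B=None, tol=1e-8, maxit=5000):
    """d_k = -B_k^{-1} grad f(x_k); B=None means B_k = I (steepest descent).
    The step length lam is found by backtracking until f decreases enough."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g if B is None else -np.linalg.solve(B(x), g)
        lam = 1.0
        while lam > 1e-12 and f(x + lam * d) > f(x) + 1e-4 * lam * (g @ d):
            lam *= 0.5                   # Armijo backtracking
        x = x + lam * d
    return x

fq = lambda x: (x[0] - 1.0)**2 + 10.0 * x[1]**2   # simple test function
gq = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * x[1]])
xmin = descent(fq, gq, [0.0, 1.0])                # steepest descent, B_k = I
```

Passing a function B returning, e.g., a shifted Hessian would give the modified Newton direction instead of steepest descent.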
- Summary of basic results for nonlinear equations, Newton method
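To go with that summary, here is a minimal Python/numpy sketch of Newton's method for a system F(x) = 0 (the names newton, F, J and the circle/line example are illustrative assumptions; the course works in Matlab):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=50):
    # Newton: solve J(x_k) d = -F(x_k), then x_{k+1} = x_k + d
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# example: intersect the unit circle with the line x0 = x1
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton(F, J, [1.0, 0.5])
```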
- Iterative methods for symmetric positive definite matrix: Steepest Descent Method and Conjugate Gradient Method (UPDATED!)
- Notes on iterative methods for linear systems
- MIDTERM EXAM will be on Mon, Nov. 7.
You can bring a "cheat sheet": letter size, handwritten, use front & back
It will cover the material in Notes on Approximation and Notes on Least Squares Approximation.
- approximation with polynomials and trigonometric polynomials
basic theorems: Theorems 3-8 in Notes on Approximation
Fourier series, Chebyshev series
methods of approximation: interpolation, least squares approximation in L2, least squares approximation in the weighted norm Lw2
interpolation with arbitrary nodes, Lagrange formula
interpolation with equidistant nodes for trig. polynomials, interpolation with Chebyshev nodes for polynomials
bounds for ||u - u_n||_∞ and ||u - p_n||_∞ in terms of Fourier/Chebyshev coefficients
decay of Fourier/Chebyshev coefficients if (i) k-th derivative has bounded variation, (ii) function is analytic
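The coefficient-decay statements can be observed numerically. Below is a Python/numpy sketch (the helper name cheb_coeffs is an assumption; the course uses Matlab) that computes Chebyshev coefficients via interpolation at the Chebyshev points; for an analytic function such as e^x they decay essentially geometrically:

```python
import numpy as np

def cheb_coeffs(f, n):
    # Coefficients c_0..c_n of the interpolant sum_k c_k T_k at the
    # Chebyshev points x_j = cos(pi*(j+1/2)/(n+1)), using T_k(cos t) = cos(k t).
    j = np.arange(n + 1)
    theta = np.pi * (j + 0.5) / (n + 1)
    fj = f(np.cos(theta))
    c = np.array([2.0 / (n + 1) * np.sum(fj * np.cos(k * theta))
                  for k in range(n + 1)])
    c[0] /= 2.0
    return c

c = cheb_coeffs(np.exp, 30)   # e^x is analytic, so the c_k decay very fast
```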
- least squares approximation: normal equations, Gram-Schmidt algorithm, QR decomposition
Singular Value Decomposition for matrix, finding best lower rank approximations
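The two solution routes for the least squares problem min ||Ax - b||_2 can be compared directly. A Python/numpy sketch (Matlab is the course language; the random test data is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))      # tall matrix: overdetermined system
b = rng.standard_normal(20)

# route 1: normal equations  A^T A x = A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# route 2: QR decomposition  A = QR, then solve  R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)
```

QR avoids forming A^T A, whose condition number is the square of cond(A), which is why it is preferred for ill-conditioned problems.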
- Quadrature using interpolation with polynomials for I[u] = ∫_a^b u(x) w(x) dx
for any given nodes: find the weights w_1, ..., w_n; exactness for polynomials up to some degree k
special choices of nodes: (i) Chebyshev nodes (Clenshaw-Curtis quadrature), (ii) Gaussian quadrature
error estimates using ||u-pn||∞
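The "find w_1,...,w_n for given nodes" step is a linear system in the weights. A Python/numpy sketch for the weight function w(x) = 1 on [a,b] (the helper name interp_weights is an assumption), checked against Gauss-Legendre nodes, where exactness reaches degree 2n-1:

```python
import numpy as np

def interp_weights(nodes, a=-1.0, b=1.0):
    # Enforce exactness on 1, x, ..., x^{n-1}:  sum_i w_i x_i^k = int_a^b x^k dx
    n = len(nodes)
    V = np.vander(nodes, n, increasing=True).T     # V[k, i] = x_i^k
    moments = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    return np.linalg.solve(V, moments)

xg, wg = np.polynomial.legendre.leggauss(4)   # 4 Gauss-Legendre nodes
w = interp_weights(xg)                        # recovers the Gauss weights
```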
- How to solve sparse linear systems in Matlab (m-file), m-file for initializing the stiffness matrix: laplacian2.m
- Assignment 2, due Friday, Oct. 7
When you plot error graphs: Use plot(...,...,'o-')
Problem 2: Use modlagr.m to evaluate the interpolating polynomial, see "Lagrange Interpolation" and Example below.
Problem 1(a): Typo: It should read [U0,...,Un]
Example: for n=4 we have a vector [U0,...,U4] . You then extend this vector to a vector [U0,...,U7] where the additional entries U5, U6, U7 use the fact that u(t) is even.
Problem 2(a): determine the slope α experimentally from the slopes in the (n, log En) graph.
Problem 2(b): first determine the slope α experimentally from the slopes in the (n, log En) graph. Then find a theoretical value for α.
Problem 3(a): Typo: It should read f1(t) := |cos t|^(1/3) .
Hint for problem 3
- Class notes about approximation (Oct. 4)
- Interpolation of a periodic function with p ∈ T_n^cos :
Make sure you understand the code in the m-files cossin_transform.m, inv_cossin_transform.m
and the Example for interpolation (m-file)
- Principal Component Analysis (PCA):
Example 1: Classifying iris flowers, data irisdata.txt, m-file, nice3dn.m
Example 2: Nutrition in Europe, data proteindata.txt, m-file
- Course notes on Least Squares Approximation
- There was a TYPO in the homework: In problem 1(b) it should be "n-2" instead of "n-1": It should read
(log(n-2), log d_{n-2}) , (log n, log d_n)
- Assignment 1, due Friday, Sept. 16
Hint: Only print out the values I ask for, nothing else. For the graph make sure that each point (log n, log dn) is marked (if you only plot the connecting line you can't see each point).
You can do this as follows:
for n=2:2:30
    ...   % code which computes d, condition, slope
    fprintf('n=%g, d=%g, slope=%g, cond=%g\n', n, d, slope, condition)
    plot(log(n), log(d), 'o'); hold on
end
hold off; grid on
- Since AMSC466 is a prerequisite I will assume that you are familiar with the basics of Numerical Analysis: roundoff error, linear systems (with matrix norms and condition numbers), interpolation, solution of nonlinear equations, numerical integration.
In class I will briefly summarize all results which I need for this course. I will provide references if you want to review the material in more detail.
- I recommend the following book:
E. Süli, D. Mayers: "An Introduction to Numerical Analysis", Cambridge University Press, 2003
Chapters 1,2,6,7 in this book contain the prerequisites for this course.
Chapters 8,9,10,12,13,14 are related to the material I want to cover in this course.
- My notes about Linear systems:
Solving Linear Systems using LU-decomposition
Norms, Condition Number, Errors for Linear Systems
- Example from class: Least squares approximation of sine function by quadratic polynomial
m-files: l2proj_ex.m, findmatrix.m, findrhsvector.m
Please look at this example and make sure you understand the code. Run the code in Matlab and answer the following questions:
Run this with n=4,5,6. What should happen with ||u-p||2 as n increases?
Run this with n=15. What is happening? Hint: Print out the condition number of the matrix.
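A hint at what goes wrong for n=15: if the normal equations are set up in the monomial basis on [0,1], the Gram matrix of inner products ∫ x^i x^j dx is the Hilbert matrix, whose condition number explodes with n. A Python/numpy sketch of that effect (the monomial basis and interval are assumptions made for illustration; the actual m-files may use a different basis):

```python
import numpy as np

def hilbert(n):
    # Gram matrix G[i, j] = int_0^1 x^i x^j dx = 1/(i + j + 1)
    i = np.arange(n)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)

cond4 = np.linalg.cond(hilbert(4))
cond15 = np.linalg.cond(hilbert(15))   # catastrophically ill-conditioned
```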
Least squares approximation of sign(x) on [-1,1] with polynomials of degree 10, 20, 40
SYLLABUS: Information about exams, homeworks, grades
Solving a scalar nonlinear equation:
    xs = fzero(@f,[a,b])
Here the function f is given in the m-file f.m. If the function f is defined by an @-expression use xs = fzero(f,[a,b]).
Solving a nonlinear system F(x)=0: write an m-file f.m of the form
    function y = f(x)
    y = ...
Then use x = fsolve(@f,x0) where x0 is the initial guess.
If you also supply the Jacobian, write an m-file f.m of the form
    function [y,A] = f(x)
    y = ...
    A = ...
Then use
    opt = optimset('Jacobian','on');
    x = fsolve(@f,x0,opt);
where x0 is the initial guess.
To change the tolerances use, e.g.,
    opt = optimset('TolFun',1e-5,'TolX',1e-5);
    x = fsolve(@f,x0,opt);
There are many other options, type doc fsolve for more information.
Sparse matrices: given vectors iv, jv, av of row indices, column indices, and values use
    A = sparse(iv,jv,av);
Example: The full matrix given by A = [0 7 0; 0 0 8; 9 0 0] can be defined as a sparse matrix by
    A = sparse([1 2 3],[2 3 1],[7 8 9])
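The triplet form (iv, jv, av) is just a list of (row, column, value) entries. A Python/numpy sketch of the same idea with a dense array (note that Python indices are 0-based while Matlab's are 1-based):

```python
import numpy as np

iv, jv = [0, 1, 2], [1, 2, 0]    # row and column indices, 0-based
av = [7.0, 8.0, 9.0]             # values
A = np.zeros((3, 3))
for i, j, a in zip(iv, jv, av):
    A[i, j] += a                 # like sparse(), repeated entries are summed
```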
We will use Matlab to see how various algorithms perform (or fail).
Matlab can be downloaded for free from TERPware.
How to hand in Matlab results for homeworks: