1.1 Review of Calculus
Theorems: Mean Value Theorem and its variations (Rolle's Thm., Generalized Rolle's Thm., MVT for Integrals), Intermediate Value Theorem, Extreme Value Theorem, Taylor's Theorem
Exercises: 4, 7, 9, 11
  1. Show that if a < xk < b for k = 1,...,N and f is continuous on (a,b), then
    f(x1) + f(x2) + ... + f(xN) = N f(c) for some c in the interval (a,b).
  2. If f is in C1[0,1], show that
    \int_0^1 f(x)(x - 1/2) dx = (1/12) f'(c) for some c in [0,1].
    Hint: write f(x) = f(1/2) + [(f(x) - f(1/2))/(x - 1/2)](x - 1/2).
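A quick numerical sanity check of exercise 2 (this check, and the test function f(x) = e^x, are my own additions, not part of the assignment): compute the integral with a composite midpoint rule, then solve f'(c) = e^c = 12*I for c, which should land inside (0, 1).

```python
import math

def integral(f, a, b, n=10000):
    # composite midpoint rule with n subintervals
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = math.exp                       # test function; f' = f
I = integral(lambda x: f(x) * (x - 0.5), 0.0, 1.0)
c = math.log(12 * I)               # solve e^c = 12*I for c
print(I, c)                        # c lands inside (0, 1), as the identity predicts
```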

1.2 Roundoff Error & Computer Arithmetic
Key Ideas: Floating point number, Chopping, Rounding, Round-off error, Absolute error, Relative error, Significant figures, Computer arithmetic
Exercises: 3, 4, 5, 13, 17, 22, 26
  1. Let x* be an approximation to x; then f(x*) is an approximation to f(x). If f is in C1, estimate the absolute error in f(x*) in terms of the absolute error in x* and f'(x).
  2. Find the smallest number e on a k-digit base-10 computer with rounding such that 1 + e > 1. This number e is called the machine epsilon or unit roundoff.
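The same search can be run on an actual machine (IEEE double precision, which is binary rather than the k-digit decimal computer of the exercise); this sketch halves a candidate until adding half of it to 1 no longer changes 1.

```python
import sys

# find the smallest power of 2 with 1 + eps > 1 in double precision
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)                      # 2**-52 for IEEE double
print(sys.float_info.epsilon)   # Python reports the same value
```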
1.3 Algorithms & Convergence
Key Ideas: Stable, Unstable, Rate of Convergence
Exercises: 6, 7, 8, 15
  1. Let S = 1 - 1/2 + 1/3 - 1/4 + ... - 1/1000.
    Rank the following four methods for computing S from the most accurate to the least accurate. Justify your answer.
    1. From largest to smallest (left to right)
    2. From smallest to largest (right to left)
    3. From largest to smallest, summing the positive and negative terms separately
    4. From smallest to largest, summing the positive and negative terms separately
  2. Show that if xn = x + O(an), with x not 0, and yn = y + O(an), with an converging to 0, then
    xn + yn = x + y + O(an), and
    xn yn = xy + O(an).
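Exercise 1 can be explored by simulating finite precision in Python. The rounding helper `fl` and the choice of k = 5 significant digits are my own; the script reports each ordering's error against a double-precision reference without giving away the ranking.

```python
import math

def fl(x, k=5):
    # round x to k significant decimal digits (simulated finite precision)
    return float(f"{x:.{k - 1}e}")

def fl_sum(terms, k=5):
    # accumulate, rounding the partial sum after every addition
    s = 0.0
    for t in terms:
        s = fl(s + t, k)
    return s

terms = [(-1) ** (n + 1) / n for n in range(1, 1001)]   # 1 - 1/2 + ... - 1/1000
ref = sum(terms)                                        # double-precision reference

big_to_small = fl_sum(terms)                            # method 1
small_to_big = fl_sum(terms[::-1])                      # method 2
pos = [t for t in terms if t > 0]
neg = [t for t in terms if t < 0]
separate_big_first = fl(fl_sum(pos) + fl_sum(neg))      # method 3
separate_small_first = fl(fl_sum(pos[::-1]) + fl_sum(neg[::-1]))   # method 4

for name, s in [("1", big_to_small), ("2", small_to_big),
                ("3", separate_big_first), ("4", separate_small_first)]:
    print(name, abs(s - ref))
```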

3.1 Interpolation / Lagrange Polynomial
Key Ideas: Interpolation, Lagrange interpolating polynomial
Algorithms: Horner's Algorithm, also known as Nested Multiplication and Synthetic Division (Thm 2.18/Algo 2.7), Lagrange interpolating polynomial (Thm 3.2)
Theorems: Error Estimate (Thm 3.3)
Exercises: 1, 2, 3, 9, 10, 15, 17, 24
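A minimal evaluation of the Lagrange interpolating polynomial (the form in Thm 3.2); the test data f(x) = x^2 is my own choice, picked so that the degree-2 interpolant reproduces f exactly.

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[k], ys[k]) at x."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        L = 1.0                       # build the basis polynomial L_k(x)
        for j, xj in enumerate(xs):
            if j != k:
                L *= (x - xj) / (xk - xj)
        total += yk * L
    return total

xs = [1.0, 2.0, 4.0]
ys = [x * x for x in xs]              # f(x) = x^2, degree 2, so 3 nodes suffice
print(lagrange(xs, ys, 3.0))          # recovers x^2 up to roundoff: about 9.0
```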
3.2 Divided Differences / Newton's Polynomial
Key Ideas: Divided difference table (Table 3.8)
Theorems: Theorem 3.6
Algorithms: Newton's Interpolatory Divided-Difference (Algo 3.2)
Exercises: 1, 4, 6, 7, 10
  1. Show that if f is a polynomial of degree m and n > m, then
    f[x0,x1,...,xn] = 0
    for any choice of x0, x1,..., xn.
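The exercise can be checked numerically with a divided-difference table (in the spirit of Algo 3.2); the cubic test polynomial and nodes below are my own choices. The order-3 difference equals the leading coefficient, and every higher order vanishes.

```python
def divided_difference_top_row(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]] from a divided-difference table."""
    n = len(xs)
    coef = list(ys)
    top = [coef[0]]
    for j in range(1, n):
        # column j of the table: f[x_i, ..., x_{i+j}]
        coef = [(coef[i + 1] - coef[i]) / (xs[i + j] - xs[i])
                for i in range(n - j)]
        top.append(coef[0])
    return top

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
f = lambda x: x ** 3 - 2 * x + 1        # degree m = 3
top = divided_difference_top_row(xs, [f(x) for x in xs])
print(top)    # order 3 gives the leading coefficient 1; orders 4 and 5 vanish
```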

3.3 Hermite Interpolation
Key Ideas: Osculating polynomials, Hermite polynomials (Lagrange-Type and Divided Difference-Type)
Algorithms: Hermite Interpolation (Algo 3.3)
Theorems: Error Estimate (Thm 3.9)
Exercises: 1, 3, 4, 7
3.4 Cubic Splines
Key Ideas: Piecewise polynomial, Interpolating cubic spline, Natural spline, Clamped spline, Knot
Algorithms: Natural Cubic Spline (Algo 3.4)
Theorems: Error Estimate (Thm 3.13)
Exercises: 1, 2, 3cd, 7, 9, 24
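A compact transcription of the natural cubic spline construction in the spirit of Algo 3.4 (variable names and the linear test data are mine). Each piece is S_j(x) = a_j + b_j(x - x_j) + c_j(x - x_j)^2 + d_j(x - x_j)^3; the natural end conditions force c_0 = c_n = 0, and the interior c_j come from a tridiagonal solve.

```python
def natural_cubic_spline(xs, ys):
    """Per-interval coefficients (a, b, c, d) of the natural cubic spline."""
    n = len(xs) - 1
    a = list(ys)
    h = [xs[i + 1] - xs[i] for i in range(n)]
    alpha = [0.0] + [3 * (a[i + 1] - a[i]) / h[i] - 3 * (a[i] - a[i - 1]) / h[i - 1]
                     for i in range(1, n)]
    # forward sweep of the tridiagonal system (natural conditions c_0 = c_n = 0)
    l, mu, z = [1.0], [0.0], [0.0]
    for i in range(1, n):
        l.append(2 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1])
        mu.append(h[i] / l[i])
        z.append((alpha[i] - h[i - 1] * z[i - 1]) / l[i])
    b, c, d = [0.0] * n, [0.0] * (n + 1), [0.0] * n
    for j in range(n - 1, -1, -1):      # back substitution
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (a[j + 1] - a[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])
    return a[:n], b, c[:n], d

def eval_spline(xs, coeffs, x):
    a, b, c, d = coeffs
    j = 0
    while j < len(a) - 1 and x > xs[j + 1]:   # locate the interval containing x
        j += 1
    t = x - xs[j]
    return a[j] + b[j] * t + c[j] * t ** 2 + d[j] * t ** 3

xs = [0.0, 1.0, 2.0, 3.0]
coeffs = natural_cubic_spline(xs, [1.0, 3.0, 5.0, 7.0])   # data from y = 2x + 1
print(eval_spline(xs, coeffs, 1.5))   # a natural spline reproduces linear data: 4.0
```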
8.1 Discrete Least-Squares Approximation
Key Ideas: Least squares, Normal equations
Algorithms: Normal equations (8.3)
Exercises: 4, 7, 8
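For the linear case y = c0 + c1 x, the normal equations form a 2x2 system that can be solved directly by Cramer's rule; the exact-line test data below is my own.

```python
def least_squares_line(xs, ys):
    """Fit y = c0 + c1*x by solving the 2x2 normal equations."""
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # normal equations: [m  sx; sx sxx] [c0; c1] = [sy; sxy]
    det = m * sxx - sx * sx
    c0 = (sxx * sy - sx * sxy) / det
    c1 = (m * sxy - sx * sy) / det
    return c0, c1

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2x
print(least_squares_line(xs, ys))  # recovers (1.0, 2.0)
```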
8.2 Orthogonal Polynomials
Key Ideas: Linearly independent, Orthogonal, Orthonormal
Algorithms: Least-Squares for Orthogonal polynomials (Thm 8.5)
Exercises: 1ac, 2ac, 3ad
8.4 Rational Function Approximation (SKIP)

8.5 Trigonometric Polynomial Approximation
Key Ideas: Trigonometric Polynomial, Fourier Series (Discrete and Continuous)
Algorithms: Formulas for the Fourier Coefficients (Discrete and Continuous)
Exercises: 1, 2, 5, 7ab, 13, 14 (thus odd functions have only sine terms, even functions only cosine terms)
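A sketch of the discrete Fourier coefficient formulas; the sampling convention (2m equally spaced points on [-pi, pi)) is one common choice, and the test function sin(2x) is mine. It also illustrates the parenthetical note above: for an odd function every cosine coefficient vanishes.

```python
import math

def discrete_fourier_coeffs(f, m, K):
    """Discrete coefficients a_k, b_k (k = 0..K) from 2m samples on [-pi, pi)."""
    xs = [-math.pi + j * math.pi / m for j in range(2 * m)]
    fs = [f(x) for x in xs]
    a = [sum(fj * math.cos(k * xj) for fj, xj in zip(fs, xs)) / m
         for k in range(K + 1)]
    b = [sum(fj * math.sin(k * xj) for fj, xj in zip(fs, xs)) / m
         for k in range(K + 1)]
    return a, b

a, b = discrete_fourier_coeffs(lambda x: math.sin(2 * x), m=32, K=4)
print(a)   # odd function: all a_k vanish up to roundoff
print(b)   # only b_2 is (about) 1
```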
4.1 Numerical Differentiation
Key Ideas: Truncation error
Algorithms: n-point formulas (Eqn 4.2, 4.12-15, 4.20), General formula (Eqn 4.4)
Theorems: Truncation errors for n-point formulas
Exercises: 1, 2, 3, 10, 16
  1. Estimate f'(0) = 1 for f(x) = e^x by computing
    (e^h - e^-h)/(2h)
    for h = 10^-N, N = 1, ..., 20. Comment on the results.
  2. Estimate f'(0.3) as accurately as possible, using the data f(0.0) = 1, f(0.2) = 1.2 and f(0.3) = 1.2.
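Exercise 1 is easy to run directly; the script below tabulates the error of the central difference for each h = 10^-N. The error first shrinks like h^2 (truncation), then grows like eps/h (roundoff), and for h below machine precision the numerator evaluates to 0, giving error 1.

```python
import math

def central(h):
    # central (3-point midpoint) difference for f(x) = e^x at x = 0
    return (math.exp(h) - math.exp(-h)) / (2 * h)

errs = {}
for N in range(1, 21):
    h = 10.0 ** (-N)
    errs[N] = abs(central(h) - 1.0)
    print(N, errs[N])
```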

4.2 Richardson's Extrapolation
Algorithms: Richardson extrapolation (in class), On-line error estimates
Exercises: 6, 8
  1. Assume we have an algorithm T(h) which approximates T with the error estimate
    T = T(h) + Kh^2 + O(h^3).
    If T(0.1) = 1.326 and T(0.01) = 1.417, (1) use Richardson extrapolation to produce an improved approximation of T, and (2) estimate the value of h needed to have an error of less than 10^-6.
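The arithmetic of exercise 1 is short enough to script (ignoring the O(h^3) term, as the extrapolation does): solve the two equations T = T(h) + K h^2 for K, eliminate the K h^2 term to improve T, then pick h so |K| h^2 stays below 10^-6.

```python
# given data: T = T(h) + K*h^2 + O(h^3)
h1, h2 = 0.1, 0.01
T1, T2 = 1.326, 1.417

# solve T1 + K*h1^2 = T2 + K*h2^2 for K
K = (T2 - T1) / (h1 ** 2 - h2 ** 2)

# (1) eliminate the K*h^2 term: improved approximation of T
T_extrap = T2 + K * h2 ** 2

# (2) h needed so that |K|*h^2 < 1e-6
h_needed = (1e-6 / abs(K)) ** 0.5
print(T_extrap, K, h_needed)
```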

4.3 Elements of Numerical Integration
Key Ideas: Numerical quadrature, Quadrature rule, Degree of accuracy
Algorithms: Trapezoid rule (Eqn 4.40), Simpson's rule (Eqn 4.41), Midpoint rule (Eqn 4.46)
Theorems: Truncation errors
Exercises: 1abe, 2, 9
4.4 Composite Numerical Integration
Algorithms: Composite Simpson's (Algo 4.1), Composite Trapezoid (Thm 4.5), Composite Midpoint (Thm 4.6)
Theorems: Truncation errors (Thm 4.4-4.6)
Exercises: 1, 2, 3abcf, 6, 13, 22, 26ab
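A minimal composite Simpson's rule in the style of Algo 4.1 (the cubic test integrand is mine; Simpson's rule has degree of accuracy 3, so cubics are integrated exactly).

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd-index nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even-index interior nodes
    return s * h / 3

print(composite_simpson(lambda x: x ** 3, 0.0, 1.0, 4))  # exact for cubics: 0.25
```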
4.5 Romberg Integration
Algorithms: Romberg integration (Algo 4.3)
Exercises: 1abe, 12
4.6 Adaptive Quadrature Methods
Key Ideas: Adaptive quadrature
Algorithms: Adaptive quadrature (Algo 4.2)
Exercises:
  1. Use adaptive quadrature with the Trapezoid Rule and the exact error to estimate \int_0^1 x^3 dx to an accuracy of 10^-2.
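A recursive sketch of the exercise (with one substitution: instead of the exact error, it uses the standard computable estimate, comparing one trapezoid against two halves, where (T2 - T1)/3 estimates the error of T2 for smooth integrands).

```python
def adaptive_trapezoid(f, a, b, tol):
    """Adaptive trapezoid rule: accept the halved estimate T2 when
    (T2 - T1)/3 is below tol, otherwise split the interval."""
    m = (a + b) / 2
    T1 = (b - a) * (f(a) + f(b)) / 2                # one interval
    T2 = (b - a) * (f(a) + 2 * f(m) + f(b)) / 4     # two halves
    if abs(T2 - T1) / 3 < tol:
        return T2
    return (adaptive_trapezoid(f, a, m, tol / 2) +
            adaptive_trapezoid(f, m, b, tol / 2))

result = adaptive_trapezoid(lambda x: x ** 3, 0.0, 1.0, 1e-2)
print(result, abs(result - 0.25))   # within the requested accuracy of 10^-2
```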

4.7 Gaussian Quadrature
Key Ideas: Orthogonal functions, Weight function
Algorithms: Gaussian quadrature (Eqn 4.67), Change of interval (Eqn 4.72)
Theorems: Degree of accuracy (Thm 4.8)
Exercises: 1abc, 2abc
  1. Show that the coefficients given in Table 4.11 for n=3 are the weights obtained from integrating over the interval [-1,1] the functions Lk(x) (Lagrange interpolating polynomials) for the roots given.
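The n=3 rule is small enough to hard-code: nodes 0 and ±sqrt(3/5) with weights 8/9 and 5/9, plus the linear change of interval from [-1,1] to [a,b]. With n=3 the degree of accuracy is 2n-1 = 5, which the tests below exercise.

```python
import math

# 3-point Gauss-Legendre nodes and weights on [-1, 1]
NODES = [-math.sqrt(0.6), 0.0, math.sqrt(0.6)]
WEIGHTS = [5 / 9, 8 / 9, 5 / 9]

def gauss3(f, a=-1.0, b=1.0):
    # linear change of interval mapping [-1, 1] onto [a, b]
    return (b - a) / 2 * sum(
        w * f((b - a) / 2 * t + (a + b) / 2)
        for w, t in zip(WEIGHTS, NODES))

print(gauss3(lambda x: x ** 4))   # 2/5, recovered up to roundoff
```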

5.1 Theory of Initial-Value Problems
Key Ideas: Initial-Value Problem (IVP), Well-posed IVP
5.2 Euler's Method
Key Ideas: Mesh points, Step size
Algorithms: Euler's (Algo 5.1)
Theorems: Error estimate (Eqn 5.20)
Exercises: 1abc, 4
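Euler's method in a few lines (the test problem y' = y, y(0) = 1 with exact solution e^t is my own choice). The final error is roughly e*h/2, consistent with the first-order error estimate.

```python
import math

def euler(f, t0, y0, t_end, n):
    """Euler's method with n equal steps for y' = f(t, y), y(t0) = y0."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)     # w_{i+1} = w_i + h*f(t_i, w_i)
        t += h
    return y

approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)   # h = 0.001
print(approx, abs(approx - math.e))                   # error on the order of e*h/2
```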
5.3 Higher-Order Taylor's Methods
Key Ideas: Local truncation error
Algorithms: Taylor method of order n (Eqn 5.27)
Exercises: 1abc, 6abc
5.4 Runge-Kutta Methods
Algorithms: Midpoint (5.35), Modified Euler (5.37), Order-Four (Classic R-K) (Algo 5.2)
Theorems: Truncation errors
Exercises: 1ab, 2ab, 3ab, 6abc, 11
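The classical fourth-order Runge-Kutta method (Algo 5.2), again tried on y' = y, y(0) = 1 (my test choice); even with the coarse step h = 0.1 the error at t = 1 is on the order of 10^-6, as expected for a fourth-order method.

```python
import math

def rk4(f, t0, y0, t_end, n):
    """Classical fourth-order Runge-Kutta for y' = f(t, y)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

approx = rk4(lambda t, y: y, 0.0, 1.0, 1.0, 10)   # h = 0.1
print(abs(approx - math.e))                       # tiny even at this step size
```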
5.5 Error Control and Runge-Kutta-Fehlberg Method
Key Ideas: Error Control using two methods
Algorithms: Runge-Kutta-Fehlberg (5.3)
Exercises: 2a (program, not by hand)

5.6 Multistep Methods
Key Ideas: Multistep, Explicit, Implicit, Predictor-Corrector method
Algorithms: Adams-Bashforth Four-Step Method (Eqn 5.34), Adams-Moulton Three-Step Method (Eqn 5.37), Predictor-Corrector (Algo 5.4)
Theorems: Truncation errors
Exercises: 1b, 8a, 9a
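A sketch of the Adams-Bashforth four-step predictor with one Adams-Moulton three-step correction. For brevity the four starting values are supplied by the caller (here taken from the exact solution of my test problem y' = y; Algo 5.4 would generate them with Runge-Kutta).

```python
import math

def abm4(f, t0, h, starts, n):
    """AB4 predictor / AM3 corrector. `starts` holds y at t0, t0+h, t0+2h, t0+3h."""
    ys = list(starts)
    fs = [f(t0 + i * h, ys[i]) for i in range(4)]
    for i in range(3, n):
        t_next = t0 + (i + 1) * h
        # predictor: Adams-Bashforth four-step
        yp = ys[i] + h / 24 * (55 * fs[i] - 59 * fs[i - 1]
                               + 37 * fs[i - 2] - 9 * fs[i - 3])
        # corrector: Adams-Moulton three-step, using the predicted value
        yc = ys[i] + h / 24 * (9 * f(t_next, yp) + 19 * fs[i]
                               - 5 * fs[i - 1] + fs[i - 2])
        ys.append(yc)
        fs.append(f(t_next, yc))
    return ys

h = 0.1
starts = [math.exp(i * h) for i in range(4)]     # exact starting values for y' = y
ys = abm4(lambda t, y: y, 0.0, h, starts, 10)
print(abs(ys[-1] - math.e))                      # fourth-order accurate at t = 1
```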
5.9 Higher-Order Equations and Systems of Differential Equations
Key Ideas: Order of an equation, Systems, Conversion between the two
Exercises: 1ac, Convert #2abc to Systems
5.10 Stability
Key Ideas: Consistent, Stable, Convergent, Root Condition, Absolute Stability (5.11, Def 5.24)
Exercises: 4, 7
11.1 The Linear Shooting Method
Key Ideas: Boundary Value Problem, Linear DE, Shooting Method
Algorithms: Linear Shooting (11.1)
Exercises: 1, 2, 4, 5
11.2 The Shooting Method for Nonlinear Problems
Key Ideas: Nonlinear DE, Shooting Method
Algorithms: Non-Linear Shooting (11.2)
Exercises: 1, 2
11.3 Finite-Difference Methods for Linear Problems
Key Ideas: Finite Differences, Centered Difference
Algorithms: Linear Finite-Difference (11.3)
Exercises: 1, 2, 3, 5
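A sketch of the centered-difference idea for the special case y'' = r(x) with Dirichlet boundary conditions (the full linear BVP of Algo 11.3 also carries y' and y terms). Replacing y'' by (y_{i-1} - 2y_i + y_{i+1})/h^2 gives a tridiagonal system, solved here by forward elimination and back substitution; the quadratic test problem is mine (the centered difference is exact for quadratics).

```python
def linear_fd_poisson(r, a, b, alpha, beta, n):
    """Solve y'' = r(x), y(a) = alpha, y(b) = beta with centered differences
    on n interior points; the tridiagonal system has diagonal -2, off-diagonals 1."""
    h = (b - a) / (n + 1)
    xs = [a + (i + 1) * h for i in range(n)]
    rhs = [h * h * r(x) for x in xs]
    rhs[0] -= alpha            # move the known boundary values to the right side
    rhs[-1] -= beta
    diag = [-2.0] * n
    for i in range(1, n):      # forward elimination (Thomas algorithm)
        w = 1.0 / diag[i - 1]
        diag[i] -= w
        rhs[i] -= w * rhs[i - 1]
    y = [0.0] * n
    y[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        y[i] = (rhs[i] - y[i + 1]) / diag[i]
    return xs, y

# test: y'' = 2, y(0) = 0, y(1) = 1 has exact solution y = x^2
xs, y = linear_fd_poisson(lambda x: 2.0, 0.0, 1.0, 0.0, 1.0, 4)
print(list(zip(xs, y)))   # matches x^2 at the interior mesh points
```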

ccollins@math.utk.edu
Last Modified: November 29, 2000