 1.1 Review of Calculus
 Theorems: Mean Value Theorem and its variations (Rolle's Thm., Generalized Rolle's Thm.,
MVT for Integrals), Intermediate Value Theorem, Extreme Value Theorem,
Taylor's Theorem
 Exercises: 4, 7, 9, 11
 Show that if a < x_{k} < b for k = 1,...,N and f is
continuous on (a,b), then
f(x_{1}) + f(x_{2}) + ... + f(x_{N}) = N f(c)
for some c in the interval (a,b).
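A numerical illustration of the argument (not a proof): the average of the f(x_{k}) lies between the minimum and maximum of f, so the Intermediate Value Theorem supplies the point c. The function f = cos, the interval (0,2), and the sample points below are arbitrary choices for the demo, and bisection stands in for any root-finder.

```python
import math

f = math.cos
a, b = 0.0, 2.0
xs = [0.3, 0.7, 1.1, 1.9]           # points with a < x_k < b
avg = sum(f(x) for x in xs) / len(xs)

# Find c with f(c) = avg by bisection on g(x) = f(x) - avg;
# cos is decreasing on [0, 2], so g(a) > 0 > g(b).
lo, hi = a, b
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) - avg > 0:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
print(c, f(c), avg)
```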
 If f is in C^{1}[0,1], show that
\int_{0}^{1} f(x)(x - 1/2) dx = (1/12) f'(c)
for some c in [0,1].
Hint: write
f(x) = f(1/2) + ((f(x) - f(1/2))/(x - 1/2)) (x - 1/2)
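A quick sanity check of the identity with the concrete (arbitrary) choice f(x) = x^2, so f'(x) = 2x. The left side is approximated by a fine midpoint sum; it should equal (1/12) f'(c) = c/6 for some c in [0,1], which here forces c = 1/2.

```python
# Midpoint-rule approximation of \int_0^1 x^2 (x - 1/2) dx.
def g(x):
    return x * x * (x - 0.5)

n = 100_000
h = 1.0 / n
lhs = h * sum(g((j + 0.5) * h) for j in range(n))

# Solve (1/12) f'(c) = lhs with f'(c) = 2c, i.e. c = 6 * lhs.
c = 6.0 * lhs
print(lhs, c)
```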
 1.2 Roundoff Error & Computer Arithmetic
 Key Ideas: Floating point number, Chopping, Rounding, Roundoff error, Absolute
error, Relative error, Significant figures, Computer arithmetic
 Exercises:
3, 4, 5, 13, 17, 22, 26
 Let x^{*} be an approximation to x; then f(x^{*}) is an approximation
to f(x).
If f is in C^{1}, estimate the absolute error in f(x^{*}) in terms of the
absolute error in x^{*} and f'(x).
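A numerical illustration of the usual first-order estimate |f(x^{*}) - f(x)| ≈ |f'(x)| |x^{*} - x|. The function sin and the numbers below are arbitrary choices for the demo.

```python
import math

x, xstar = 1.0, 1.001               # |x* - x| = 1e-3
f, fprime = math.sin, math.cos

actual = abs(f(xstar) - f(x))
estimate = abs(fprime(x)) * abs(xstar - x)
print(actual, estimate)
```

The two numbers agree to about |f''(x)| |x^{*} - x|^2 / 2, the size of the neglected second-order term.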

Find the smallest number e on a k-digit, base-10, rounding
computer
such that 1 + e > 1. This number e is
called the machine epsilon or unit roundoff.
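The same search can be run on an actual machine; IEEE double precision is the binary analogue of the k-digit decimal computer in the exercise. The sketch below restricts the search to powers of two, halving eps until adding it no longer changes 1.0.

```python
import sys

# Find the smallest power of two eps with 1.0 + eps > 1.0 in double
# precision: halve eps until 1.0 + eps/2 rounds back to 1.0.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print(eps)                          # 2**-52 for IEEE double precision
```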
 1.3 Algorithms & Convergence
 Key Ideas:
Stable, Unstable, Rate of Convergence
 Exercises:
6, 7, 8, 15
 Let
S = 1 - 1/2 + 1/3 - 1/4 + ... - 1/1000.
Rank the following four methods for computing S from the most accurate to
the least accurate. Justify your answer.
 From largest to smallest (left to right)
 From smallest to largest (right to left)
 From largest to smallest, summing the positive and negative terms separately
 From smallest to largest, summing the positive and negative terms separately
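A way to experiment with the four orderings: in full double precision they all agree to roughly 1e-16, so the sketch below simulates a 5-significant-digit rounding machine (an arbitrary choice for the demo) and compares each result against an accurate reference sum.

```python
import math

def rnd(x):
    return float(f"{x:.4e}")        # round to 5 significant digits

terms = [(-1) ** (k + 1) / k for k in range(1, 1001)]
ref = math.fsum(terms)              # accurate reference value of S

def chopped_sum(ts):
    # Running sum with every term and every partial sum rounded.
    s = 0.0
    for t in ts:
        s = rnd(s + rnd(t))
    return s

pos = terms[0::2]                   # +1, +1/3, ..., +1/999 (largest first)
neg = terms[1::2]                   # -1/2, -1/4, ..., -1/1000

left_to_right = chopped_sum(terms)
right_to_left = chopped_sum(terms[::-1])
sep_l2r = rnd(chopped_sum(pos) + chopped_sum(neg))
sep_r2l = rnd(chopped_sum(pos[::-1]) + chopped_sum(neg[::-1]))
print(ref, left_to_right, right_to_left, sep_l2r, sep_r2l)
```

Ranking the four printed values by distance from ref is the exercise; the code only makes the differences visible.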
 Show that if
x_{n} = x + O(a_{n}), with x not 0, and
y_{n} = y + O(a_{n}), with a_{n} converging to 0,
then
x_{n} + y_{n} = x + y + O(a_{n}), and
x_{n} y_{n} = xy + O(a_{n}).
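A numerical check of the lemma with the concrete (arbitrary) choices x_{n} = 2 + 3 a_{n}, y_{n} = 5 - a_{n}, a_{n} = 1/n: the scaled errors |x_{n} + y_{n} - (x + y)| / a_{n} and |x_{n} y_{n} - xy| / a_{n} stay bounded as n grows, which is exactly the O(a_{n}) claim.

```python
x, y = 2.0, 5.0
ratios_sum, ratios_prod = [], []
for n in [10, 100, 1000, 10_000, 100_000]:
    a = 1.0 / n
    xn, yn = x + 3 * a, y - a
    ratios_sum.append(abs((xn + yn) - (x + y)) / a)
    ratios_prod.append(abs(xn * yn - x * y) / a)
print(ratios_sum, ratios_prod)
```

The sum ratios sit near 2 and the product ratios near 13, the constants coming from the particular sequences chosen.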
 3.1 Interpolation / Lagrange Polynomial
 Key Ideas:
Interpolation, Lagrange interpolating polynomial
 Algorithms:
Horner's Algorithm, also known as Nested Multiplication and Synthetic
Division (Thm 2.18/Algo 2.7),
Lagrange interpolating polynomial (Thm 3.2)
 Theorems:
Error Estimate (Thm 3.3)
 Exercises:
1, 2, 3, 9, 10, 15, 17, 24
 3.2 Divided Differences / Newton's Polynomial
 Key Ideas:
Divided difference table (Table 3.8)
 Theorems:
Theorem 3.6
 Algorithms:
Newton's Interpolatory Divided-Difference (Algo 3.2)
 Exercises:
1, 4, 6, 7, 10
 Show that if f is a polynomial of degree m and n > m,
then
f[x_{0},x_{1},...,x_{n}] = 0
for any choice of x_{0}, x_{1},..., x_{n}.
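The statement can be checked numerically with a small divided-difference routine. Below, f(x) = x^2 (degree m = 2) at four arbitrary distinct nodes, so the third divided difference f[x_{0}, x_{1}, x_{2}, x_{3}] (n = 3 > m) should vanish up to roundoff.

```python
def divided_difference(xs, ys):
    """Return the top divided difference f[x_0, ..., x_n]."""
    d = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            d[i] = (d[i + 1] - d[i]) / (xs[i + level] - xs[i])
    return d[0]

def f(x):
    return x * x

xs = [0.0, 0.7, 1.5, 2.2]           # arbitrary distinct nodes
top = divided_difference(xs, [f(x) for x in xs])
print(top)                          # zero up to roundoff
```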
 3.3 Hermite Interpolation
 Key Ideas:
Osculating polynomials, Hermite polynomials (Lagrange-Type and
Divided-Difference-Type)
 Algorithms:
Hermite Interpolation (Algo 3.3)
 Theorems:
Error Estimate (Thm 3.9)
 Exercises:
1, 3, 4, 7
 3.4 Cubic Splines
 Key Ideas:
Piecewise polynomial, Interpolating cubic spline, Natural spline,
Clamped spline, Knot
 Algorithms:
Natural Cubic Spline (Algo 3.4)
 Theorems:
Error Estimate (Thm 3.13)
 Exercises:
1, 2, 3cd, 7, 9, 24
 8.1 Discrete Least-Squares Approximation
 Key Ideas:
Least squares, Normal equations
 Algorithms:
Normal equations (8.3)
 Exercises:
4, 7, 8
 8.2 Orthogonal Polynomials
 Key Ideas:
Linearly independent, Orthogonal, Orthonormal
 Algorithms:
Least-Squares for Orthogonal polynomials (Thm 8.5)
 Exercises:
1ac, 2ac, 3ad
 8.4 Rational Function Approximation (SKIP)

 8.5 Trigonometric Polynomial Approximation
 Key Ideas:
Trigonometric Polynomial, Fourier Series (Discrete and Continuous)
 Algorithms:
Formulas for the Fourier Coefficients (Discrete and Continuous)
 Exercises:
1, 2, 5, 7ab, 13, 14 (these show that odd functions have only sine
terms and even functions only cosine terms)
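A numerical illustration of that last point, using the standard discrete Fourier coefficient formulas on 2m equally spaced points x_{j} = -pi + j pi/m (one common normalization; the test function sin(2x) and m = 8 are arbitrary choices). For this odd function every cosine coefficient a_{k} comes out numerically zero while b_{2} = 1.

```python
import math

m = 8
xs = [-math.pi + j * math.pi / m for j in range(2 * m)]
f = [math.sin(2 * x) for x in xs]   # an odd test function

# Discrete Fourier coefficients a_k (cosine) and b_k (sine).
a = [sum(f[j] * math.cos(k * xs[j]) for j in range(2 * m)) / m
     for k in range(m)]
b = [sum(f[j] * math.sin(k * xs[j]) for j in range(2 * m)) / m
     for k in range(1, m)]
print(max(abs(v) for v in a), b[1])  # b[1] is the coefficient of sin(2x)
```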
 4.1 Numerical Differentiation
 Key Ideas:
Truncation error
 Algorithms:
n-point formulas (Eqn 4.2, 4.12-4.15, 4.20), General formula (Eqn 4.4)
 Theorems:
Truncation errors for n-point formulas
 Exercises:
1, 2, 3, 10, 16
 Estimate f'(0) = 1 for f(x) = e^{x} by computing
(e^{h} - e^{-h})/(2h)
for h = 10^{-N}, N = 1, ..., 20. Comment on the results.
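The computation above can be run directly. The truncation error of the central difference shrinks like h^2 but the roundoff contribution grows like eps/h, so the error improves until h is around 1e-5 and then degrades again; for very small h the difference e^{h} - e^{-h} evaluates to exactly zero.

```python
import math

# Central difference (e^h - e^{-h}) / (2h) approximating f'(0) = 1.
errors = {}
for N in range(1, 21):
    h = 10.0 ** (-N)
    approx = (math.exp(h) - math.exp(-h)) / (2.0 * h)
    errors[N] = abs(approx - 1.0)
for N, e in errors.items():
    print(N, e)
```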
 Estimate f'(0.3) as accurately as possible, using the data
f(0.0) = 1, f(0.2) = 1.2 and f(0.3) = 1.2.
 4.2 Richardson's Extrapolation
 Algorithms:
Richardson extrapolation (in Class), Online error estimates
 Exercises:
6, 8
 Assume we have an algorithm T(h) which approximates T with
the error estimate
T = T(h) + Kh^{2} + O(h^{3}).
If T(0.1) = 1.326 and T(0.01) = 1.417, (1) use Richardson extrapolation
to produce an improved approximation of T, and (2) estimate the value of
h needed to have an error of less than 10^{-6}.
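A worked sketch of the computation under the stated error model, ignoring the O(h^{3}) term: the two equations T - T(h_i) = K h_i^2 determine K, Richardson extrapolation then removes the K h^2 term, and the same K estimates the h needed for a 10^{-6} error.

```python
h1, h2 = 0.1, 0.01
T1, T2 = 1.326, 1.417

# Subtracting T - T1 = K h1^2 and T - T2 = K h2^2 gives K:
K = (T2 - T1) / (h1 ** 2 - h2 ** 2)
T_improved = T2 + K * h2 ** 2       # Richardson-extrapolated value

# Error of T(h) is about K h^2; require K h^2 < 1e-6.
h_needed = (1e-6 / abs(K)) ** 0.5
print(T_improved, h_needed)
```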
 4.3 Elements of Numerical Integration
 Key Ideas:
Numerical quadrature, Quadrature rule, Degree of accuracy
 Algorithms:
Trapezoid rule (Eqn 4.40), Simpson's rule (Eqn 4.41), Midpoint rule
(Eqn 4.46)
 Theorems:
Truncation errors
 Exercises:
1abe, 2, 9
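The three basic rules of this section fit in a few lines each, and their degrees of accuracy can be verified directly: Trapezoid and Midpoint are exact for polynomials of degree at most 1, Simpson for degree at most 3 (the test integrand x^3 below is an arbitrary choice).

```python
# Single-interval quadrature rules on [a, b].
def trapezoid(f, a, b):
    return (b - a) * (f(a) + f(b)) / 2.0

def simpson(f, a, b):
    return (b - a) * (f(a) + 4.0 * f((a + b) / 2.0) + f(b)) / 6.0

def midpoint(f, a, b):
    return (b - a) * f((a + b) / 2.0)

def cubic(x):
    return x ** 3                   # exact integral on [0, 1] is 1/4

print(trapezoid(cubic, 0, 1), simpson(cubic, 0, 1), midpoint(cubic, 0, 1))
```

Simpson reproduces 1/4 exactly (degree of accuracy 3), while Trapezoid and Midpoint do not.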
 4.4 Composite Numerical Integration
 Algorithms:
Composite Simpson's (Algo 4.1), Composite Trapezoid (Thm 4.5),
Composite Midpoint (Thm 4.6)
 Theorems:
Truncation errors (Thm 4.4-4.6)
 Exercises:
1-3abcf, 6, 13, 22, 26ab
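A minimal sketch of Composite Simpson's rule (the test integral of sin over [0, pi], with exact value 2, is an arbitrary choice). The O(h^4) behavior shows up as roughly a 16-fold error drop when the number of subintervals doubles.

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        s += (4.0 if j % 2 else 2.0) * f(a + j * h)
    return s * h / 3.0

approx = composite_simpson(math.sin, 0.0, math.pi, 10)
print(approx, abs(approx - 2.0))
```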
 4.5 Romberg Integration
 Algorithms:
Romberg integration (Algo 4.3)
 Exercises:
1abe, 12
 4.6 Adaptive Quadrature Methods
 Key Ideas:
Adaptive quadrature
 Algorithms:
Adaptive quadrature (Algo 4.2)
 Exercises:

Use adaptive quadrature with the Trapezoid Rule and the exact
error to estimate \int_0^1 x^{3} dx to an accuracy of 10^{-2}.
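A sketch of that exercise: since the antiderivative x^4/4 is known, the exact local error of the Trapezoid Rule is available on every subinterval, and an interval is split in half whenever its error exceeds the tolerance allotted to it (the standard halve-the-tolerance bookkeeping).

```python
def f(x):
    return x ** 3

def exact(a, b):
    return (b ** 4 - a ** 4) / 4.0  # antiderivative of x^3 is x^4/4

def adaptive_trapezoid(a, b, tol):
    T = (b - a) * (f(a) + f(b)) / 2.0
    if abs(T - exact(a, b)) <= tol:
        return T
    m = 0.5 * (a + b)
    return adaptive_trapezoid(a, m, tol / 2) + adaptive_trapezoid(m, b, tol / 2)

approx = adaptive_trapezoid(0.0, 1.0, 1e-2)
print(approx, abs(approx - 0.25))
```

By construction the total error is at most the requested 10^{-2}.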
 4.7 Gaussian Quadrature
 Key Ideas:
Orthogonal functions, Weight function
 Algorithms:
Gaussian quadrature (Eqn 4.67), Change of interval (Eqn 4.72)
 Theorems:
Degree of accuracy (Thm 4.8)
 Exercises:
1abc, 2abc

Show that the coefficients given in Table 4.11 for n=3 are the
weights obtained by integrating over the interval [-1,1] the functions
L_{k}(x) (Lagrange interpolating polynomials) for the roots given.
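A numerical version of that exercise: for n = 3 the Gauss-Legendre nodes are -sqrt(3/5), 0, sqrt(3/5) with weights 5/9, 8/9, 5/9, and each weight should equal the integral of the corresponding Lagrange basis polynomial L_{k} over [-1, 1] (computed below with a fine composite trapezoid sum, an arbitrary choice of integrator).

```python
import math

r = math.sqrt(3.0 / 5.0)
nodes = [-r, 0.0, r]

def lagrange_basis(k, x):
    p = 1.0
    for i, xi in enumerate(nodes):
        if i != k:
            p *= (x - xi) / (nodes[k] - xi)
    return p

def integrate(g, a, b, n=20_000):
    # Composite trapezoid sum with n subintervals.
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + j * h) for j in range(1, n)) + 0.5 * g(b))

weights = [integrate(lambda x, k=k: lagrange_basis(k, x), -1.0, 1.0)
           for k in range(3)]
print(weights)                      # approximately [5/9, 8/9, 5/9]
```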
 5.1 Theory of InitialValue Problems
 Key Ideas:
InitialValue Problem (IVP), Wellposed IVP
 5.2 Euler's Method
 Key Ideas:
Mesh points, Step size
 Algorithms:
Euler's (Algo 5.1)
 Theorems:
Error estimate (Eqn 5.20)
 Exercises:
1abc, 4
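A minimal Euler's method sketch on the test problem y' = y, y(0) = 1 (exact solution e^t, an arbitrary choice for the demo). Cutting h by a factor of 10 cuts the error at t = 1 by roughly the same factor, consistent with the O(h) global error estimate.

```python
import math

def euler(f, t0, y0, t_end, n):
    """Euler's method with n equal steps for y' = f(t, y)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

err_10 = abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, 10) - math.e)
err_100 = abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, 100) - math.e)
print(err_10, err_100)
```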
 5.3 HigherOrder Taylor's Methods
 Key Ideas:
Local truncation error
 Algorithms:
Taylor method of order $n$ (Eqn 5.27)
 Exercises:
1abc, 6abc
 5.4 RungeKutta Methods
 Algorithms:
Midpoint (5.35), Modified Euler (5.37), Order-Four (Classic RK) (Algo
5.2)
 Theorems:
Truncation errors
 Exercises:
1ab, 2ab, 3ab, 6abc, 11
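A sketch of the classic fourth-order Runge-Kutta method, again on the test problem y' = y, y(0) = 1 (an arbitrary choice). With h = 0.1 the error at t = 1 is already on the order of 1e-6, reflecting the O(h^4) global accuracy.

```python
import math

def rk4(f, t0, y0, t_end, n):
    """Classic fourth-order Runge-Kutta with n equal steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

err = abs(rk4(lambda t, y: y, 0.0, 1.0, 1.0, 10) - math.e)
print(err)
```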
 5.5 Error Control and Runge-Kutta-Fehlberg Method
 Key Ideas: Error Control using two methods
 Algorithms: Runge-Kutta-Fehlberg (5.3)
 Exercises: 2a (program, not by hand)

 5.6 Multistep Methods
 Key Ideas:
Multistep, Explicit, Implicit, Predictor-Corrector method
 Algorithms:
Adams-Bashforth Four-Step Method (Eqn 5.34), Adams-Moulton Three-Step
Method (Eqn 5.37), Predictor-Corrector (Algo 5.4)
 Theorems:
Truncation errors
 Exercises:
1b, 8a, 9a
 5.9 HigherOrder Equations and Systems of Differential Equations
 Key Ideas: Order of an equation, Systems, Conversion between the
two
 Exercises:
1ac, Convert #2abc to Systems
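A sketch of the conversion: the second-order problem y'' = -y, y(0) = 0, y'(0) = 1 (exact solution sin t, an arbitrary choice) becomes the first-order system u1' = u2, u2' = -u1 with u1 = y, u2 = y', which any one-step method can then integrate; Euler's method is used here for brevity.

```python
import math

def euler_system(h, n):
    # u1 = y, u2 = y'; Euler step applied to the system u1' = u2, u2' = -u1.
    u1, u2 = 0.0, 1.0
    for _ in range(n):
        u1, u2 = u1 + h * u2, u2 + h * (-u1)
    return u1

y_approx = euler_system(1.0 / 1000, 1000)   # approximation to y(1) = sin(1)
print(y_approx, abs(y_approx - math.sin(1.0)))
```

Note the simultaneous tuple update: both components are advanced from the same old values, as the system form requires.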
 5.10 Stability
 Key Ideas: Consistent, Stable, Convergent, Root Condition,
Absolute Stability (5.11, Def 5.24)
 Exercises:
4, 7
 11.1 The Linear Shooting Method
 Key Ideas:
Boundary Value Problem, Linear DE, Shooting Method
 Algorithms:
Linear Shooting (11.1)
 Exercises:
1, 2, 4, 5
 11.2 The Shooting Method for Nonlinear Problems
 Key Ideas:
Nonlinear DE, Shooting Method
 Algorithms:
Nonlinear Shooting (11.2)
 Exercises:
1, 2
 11.3 Finite-Difference Methods for Linear Problems
 Key Ideas:
Finite Differences, Centered Difference
 Algorithms:
Linear Finite-Difference (11.3)
 Exercises:
1, 2, 3, 5
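A minimal centered-difference sketch on the linear BVP y'' = 2, y(0) = 0, y(1) = 0 (an arbitrary test problem with exact solution y = x^2 - x). The centered second difference is exact for quadratics, so the discrete solution matches the exact one up to roundoff; the tridiagonal system is solved with the Thomas algorithm.

```python
def solve_bvp(n):
    """Solve y'' = 2, y(0) = y(1) = 0 on n subintervals by centered differences."""
    h = 1.0 / n
    m = n - 1                       # unknowns y_1 .. y_{n-1}
    # Equations (y_{i-1} - 2 y_i + y_{i+1}) / h^2 = 2, boundary values 0.
    a = [1.0] * m                   # sub-diagonal
    b = [-2.0] * m                  # diagonal
    c = [1.0] * m                   # super-diagonal
    d = [2.0 * h * h] * m           # right-hand side
    # Thomas algorithm: forward elimination...
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # ...then back substitution.
    y = [0.0] * m
    y[m - 1] = d[m - 1] / b[m - 1]
    for i in range(m - 2, -1, -1):
        y[i] = (d[i] - c[i] * y[i + 1]) / b[i]
    return [(i + 1) * h for i in range(m)], y

xs, ys = solve_bvp(10)
print(max(abs(y - (x * x - x)) for x, y in zip(xs, ys)))
```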
ccollins@math.utk.edu
Last Modified: November 29, 2000