 1.1 Review of Calculus
 Theorems: Mean Value Theorem and its variations (Rolle's Thm., Generalized Rolle's Thm.,
MVT for Integrals), Intermediate Value Theorem, Extreme Value Theorem,
Taylor's Theorem
 Exercises: 4, 7, 9, 11
 Show that if a < x_{k} < b for k = 1, ..., N and f is
continuous on (a,b), then
f(x_{1}) + f(x_{2}) + ... + f(x_{N}) = N f(c)
for some c in the interval (a,b).
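One possible outline (not the only one), combining the Extreme and Intermediate Value Theorems:

```latex
% The average of the N values lies between the smallest and largest of them:
\min_{1 \le k \le N} f(x_k)
  \;\le\; \frac{f(x_1) + f(x_2) + \cdots + f(x_N)}{N}
  \;\le\; \max_{1 \le k \le N} f(x_k).
% The min and max are attained at some x_i, x_j in (a,b); since f is
% continuous, the IVT gives a c between x_i and x_j with
% f(c) = \tfrac{1}{N}\bigl(f(x_1) + \cdots + f(x_N)\bigr).
```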
 If f is in C^{1}[0,1], show that
int_{0}^{1} f(x) (x - 1/2) dx = (1/12) f'(c)
for some c in [0,1].
Hint: write
f(x) = f(1/2) + [(f(x) - f(1/2)) / (x - 1/2)] * (x - 1/2)
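Carrying the hint through (one possible route): the difference quotient equals f'(\xi_x) for some \xi_x between x and 1/2 by the MVT, so

```latex
\int_0^1 f(x)\left(x - \tfrac{1}{2}\right)dx
  = f\!\left(\tfrac{1}{2}\right)\underbrace{\int_0^1 \left(x - \tfrac{1}{2}\right)dx}_{=\,0}
    + \int_0^1 f'(\xi_x)\left(x - \tfrac{1}{2}\right)^{2}dx
  = f'(c)\int_0^1 \left(x - \tfrac{1}{2}\right)^{2}dx
  = \tfrac{1}{12}\,f'(c),
```

where the last step uses the MVT for Integrals, valid because the weight (x - 1/2)^2 does not change sign on [0,1].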
 1.2 Roundoff Error & Computer Arithmetic
 Key Ideas: Floating point number, Chopping, Rounding, Roundoff error, Absolute
error, Relative error, Significant figures, Computer arithmetic
 Exercises:
3, 4, 5, 13, 17, 22, 26
 Let x^{*} be an approximation to x; then f(x^{*}) is an approximation
to f(x).
If f is in C^{1}, estimate the absolute error in f(x^{*}) in terms of the
absolute error in x^{*} and f'(x).
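A one-line estimate via the Mean Value Theorem (assuming x^{*} is close to x):

```latex
|f(x^{*}) - f(x)| = |f'(\xi)|\,|x^{*} - x|
  \approx |f'(x)|\,|x^{*} - x|,
\qquad \xi \text{ between } x \text{ and } x^{*}.
```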

Find the smallest number e on a k-digit, base-10, rounding computer
such that 1 + e > 1. This number e is
called the machine epsilon or unit roundoff.
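The binary analogue is easy to find experimentally. A minimal Python sketch (IEEE double precision, so base 2 with rounding rather than base 10):

```python
import sys

# Halve eps until adding eps/2 to 1 no longer changes it; the last eps
# with 1 + eps > 1 in floating point is the machine epsilon.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

print(eps)                            # equals 2**-52 for IEEE doubles
print(eps == sys.float_info.epsilon)
```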
 1.3 Algorithms & Convergence
 Key Ideas:
Stable, Unstable, Rate of Convergence
 Exercises:
6, 7, 8, 15
 Let
S = 1 - 1/2 + 1/3 - 1/4 + ... - 1/1000.
Rank the following four methods for computing S from the most accurate to
the least accurate. Justify your answer.
 From largest to smallest (left to right)
 From smallest to largest (right to left)
 From largest to smallest, summing the positive and negative terms separately
 From smallest to largest, summing the positive and negative terms separately
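One way to experiment: simulate a k-digit decimal machine by rounding after every operation. A sketch with k = 6 significant digits (the helper names fl and rounded_sum are ours, not the text's):

```python
import math

def fl(x, digits=6):
    """Round x to `digits` significant decimal figures."""
    return 0.0 if x == 0.0 else float(f"{x:.{digits - 1}e}")

def rounded_sum(terms, digits=6):
    """Accumulate terms left to right, rounding after every addition."""
    s = 0.0
    for t in terms:
        s = fl(s + fl(t, digits), digits)
    return s

terms = [(-1) ** (k + 1) / k for k in range(1, 1001)]   # 1 - 1/2 + ... - 1/1000
exact = math.fsum(terms)                                # high-accuracy reference

pos = [t for t in terms if t > 0]    # 1, 1/3, 1/5, ...
neg = [t for t in terms if t < 0]    # -1/2, -1/4, ...

results = {
    "largest to smallest":           rounded_sum(terms),
    "smallest to largest":           rounded_sum(reversed(terms)),
    "separate, largest to smallest": fl(rounded_sum(pos) + rounded_sum(neg)),
    "separate, smallest to largest": fl(rounded_sum(reversed(pos)) + rounded_sum(reversed(neg))),
}
for name, val in results.items():
    print(f"{name:30s} {val:.6f}   error {abs(val - exact):.2e}")
```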
 Show that if
x_{n} = x + O(a_{n}), with x nonzero, and
y_{n} = y + O(a_{n}), with a_{n} converging to 0,
then
x_{n} + y_{n} = x + y + O(a_{n}), and
x_{n} y_{n} = xy + O(a_{n}).
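For the product, one standard decomposition does the work (the sum case follows directly from the triangle inequality):

```latex
x_n y_n - x y = x_n (y_n - y) + y (x_n - x),
\qquad\text{so}\qquad
|x_n y_n - x y| \;\le\; \Bigl(\sup_n |x_n|\Bigr) K_2 |a_n| + |y|\, K_1 |a_n| = O(a_n),
```

where K_1, K_2 are the constants implicit in x_{n} = x + O(a_{n}) and y_{n} = y + O(a_{n}), and sup_n |x_n| is finite because x_n converges.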
 6.1 Linear Systems of Equations
 Key Ideas:
Row operations, Triangular matrix, Augmented matrix, Flop count, Pivot element
 Algorithms:
Backward Substitution, Gaussian Elimination (6.1)
 Exercises:
2, 3, 5, 7
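The idea of the Backward Substitution step, as a minimal Python sketch (illustrative, not the textbook's pseudocode):

```python
def back_substitute(U, b):
    """Solve U x = b where U is upper triangular (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # last equation first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]             # requires U[i][i] != 0
    return x

print(back_substitute([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0]))   # [1.5, 2.0]
```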
 6.2 Pivoting Strategies
 Key Ideas:
Pivoting
 Algorithms:
Pivoting Strategies: GE w/ Partial Pivoting (6.2), Scaled PP (6.3)
 Exercises:
1, 2, 3, 6, 8, 16
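A compact sketch of elimination with partial pivoting (the row with the largest pivot magnitude is chosen at each step; illustrative, not Algorithm 6.2 verbatim):

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]                    # work on copies
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: bring the largest |entry| of column k to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]                # multiplier m_{ik}
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                                # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

With a zero in the (1,1) position, plain elimination fails immediately, but the pivoted version succeeds: gauss_solve([[0.0, 1.0], [2.0, 1.0]], [1.0, 3.0]) gives [1.0, 1.0].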
 6.5 Matrix Factorization
 Key Ideas:
Gaussian Transformation Matrix
 Algorithms:
LU (6.4) w/permutation
 Exercises:
1, 2, 4, 5
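A Doolittle-style sketch without pivoting, just to show the mechanics (L is unit lower triangular; the permuted version in the text additionally tracks row swaps):

```python
def lu_doolittle(A):
    """Doolittle LU factorization without pivoting: A = L U,
    L unit lower triangular, U upper triangular."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for j in range(k, n):                    # row k of U
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        for i in range(k + 1, n):                # column k of L
            L[i][k] = (A[i][k] - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U
```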
 6.6 Special Types of Matrices
 Key Ideas:
Diagonally Dominant, Positive Definite, Bandwidth, Band Matrix, Tridiagonal Matrix
 Algorithms:
Choleski (6.6), Crout (6.7)
 Theorems:
Cor. 6.27
 Exercises:
1, 2, 3, 6, 10, 15
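The factorization idea behind the Choleski algorithm, sketched for a symmetric positive definite matrix (a bare-bones version with no checks):

```python
import math

def cholesky(A):
    """Choleski factorization A = L L^T for symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L
```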
 7.1 Norms of Vectors and Matrices
 Key Ideas:
Vector norm, Matrix norm, Natural (Induced) matrix norm, l_{2} and
l_{infty} norms, Convergence
 Algorithms:
Computing the l_{infty} and l_{2} norms
 Theorems:
Equivalence of Norms (7.7 and paragraph after), Cor. 7.10, Thm 7.11
 Exercises:
1, 2, 3, 4, 5
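The two vector norms named above, as a quick sketch:

```python
def norm_inf(x):
    """l_infty norm: the largest component in absolute value."""
    return max(abs(xi) for xi in x)

def norm_2(x):
    """l_2 (Euclidean) norm."""
    return sum(xi * xi for xi in x) ** 0.5

print(norm_inf([3, -4]), norm_2([3, -4]))   # 4 5.0
```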
 7.2 Eigenvalues and Eigenvectors
 Key Ideas: Eigenvalue, Eigenvector, Spectral radius, Convergent
matrix
 Algorithms:
Characteristic Polynomial
 Theorems:
Spectral Radius and Norms (Thm. 7.15), Thm. 7.17
 Exercises:
1, 2, 3, 4, 5, 10
 7.3 Iterative Techniques for Solving Linear Systems
 Key Ideas:
Iteration Matrix, Residual vector
 Algorithms:
Jacobi (7.1), Gauss-Seidel (7.2), SOR (7.3)
 Theorems:
Iteration Convergence (Thm 7.19), Convergence bound (7.20), Stein-Rosenberg (7.22), Ostrowski-Reich (7.25)
 Exercises:
1, 2, 5, 8, 13
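The Jacobi update in a few lines (a sketch; Gauss-Seidel differs only in using updated components as soon as they are available):

```python
def jacobi(A, b, x0, iters=50):
    """Jacobi iteration for Ax = b: each component is solved from its own row
    using only the previous iterate.  Converges, e.g., when A is strictly
    diagonally dominant."""
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

For the diagonally dominant system 4x + y = 5, x + 3y = 4, jacobi([[4.0, 1.0], [1.0, 3.0]], [5.0, 4.0], [0.0, 0.0]) converges to the solution (1, 1).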
 7.4 Error Estimates and Iterative Refinement
 Key Ideas:
Condition Number, Ill and Well conditioned
 Algorithms:
Iterative Refinement (7.4)
 Theorems:
Error bounds (7.27 and 7.28)
 Exercises:
1, 2, 3
 9.1 Linear Algebra and Eigenvalues
 Key Ideas:
Eigenvalues and Eigenvectors; Similar matrices; Linearly independent;
Orthogonal
 Theorems: Theorem 9.9 (Schur Decomposition), Theorem 9.10 (Symmetric Decomposition)
 Exercises: 1, 6
 9.2 The Power Method
 Key Ideas: Power method
 Algorithms: Power Method (9.1), Inverse Power Method (9.3)
 Exercises: 1, 2
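The power method's core loop (a sketch with infinity-norm scaling; the textbook's algorithm adds a stopping test):

```python
def power_method(A, x, iters=100):
    """Approximate the dominant eigenvalue/eigenvector of A by repeated
    multiplication, rescaling by the largest-magnitude component."""
    n = len(A)
    mu = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        mu = max(y, key=abs)                 # current eigenvalue estimate
        x = [yi / mu for yi in y]            # rescale the iterate
    return mu, x
```

For A = [[2, 1], [1, 2]] (eigenvalues 3 and 1), the iterates approach the dominant pair: eigenvalue 3 with eigenvector proportional to (1, 1).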
 9.3 Householder's Method
 Key Ideas: Householder Transformation, Similarity transformation
 Algorithms: Householder's (9.5)
 Exercises: 1
 2.1 The Bisection Method
 Key Ideas:
Root, Zero, Stopping procedure
 Algorithms:
Bisection (2.1)
 Theorems:
Convergence Bound (Thm. 2.1)
 Exercises:
4, 8, 10, 13, 14, 15, 16
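The bisection loop in brief (a sketch of Algorithm 2.1's idea; Theorem 2.1's bound is |p_n - p| <= (b - a)/2^n):

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection: assumes f continuous with f(a) f(b) < 0."""
    fa = f(a)
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:       # sign change in [a, m]
            b = m
        else:                  # sign change in [m, b]
            a, fa = m, fm
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 0.0, 2.0))   # ~1.4142135623
```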
 2.2 Fixed-Point Iteration
 Key Ideas:
Fixed point, Fixed-point iteration
 Theorems:
Fixed-Point Theorem (Thm 2.3), Convergence Bound (Cor. 2.4, 2.5)
 Exercises:
1, 4, 9, 11, 20
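Fixed-point iteration in a few lines (a sketch; Theorem 2.3 guarantees convergence when g maps the interval into itself and |g'| <= k < 1 there):

```python
import math

def fixed_point(g, p0, tol=1e-12, max_iter=100):
    """Iterate p_{n+1} = g(p_n) until successive values agree to tol."""
    p = p0
    for _ in range(max_iter):
        q = g(p)
        if abs(q - p) < tol:
            return q
        p = q
    return p

print(fixed_point(math.cos, 1.0))   # ~0.739085, the fixed point of cos
```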
 2.3 The Newton-Raphson Method
 Algorithms:
Newton's (Algo 2.3), Secant (Algo 2.4), Method of False Position (Algo 2.5)
 Exercises:
6, 7, 8, 11, 12, 15, 24, 29
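Newton's iteration in brief (a sketch; the secant method replaces f'(x_n) with a difference quotient of the two latest iterates):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:     # stop when the update is negligible
            break
    return x

print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # ~1.41421356
```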
 2.4 Error Analysis for Iterative Methods
 Key Ideas:
Convergence of order alpha, Linear convergence, Quadratic convergence,
Root of multiplicity m, Simple zero
 Theorems:
Thm. 2.7, Thm. 2.8
 Exercises:
1, 2, 5, 8, 11
 2.5 Accelerating Convergence
 Key Ideas:
Aitken's Delta-Squared Method
 Algorithms:
Steffensen's Method (2.6)
 Exercises:
1, 4, 9, 12, 13
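Aitken's Delta-squared formula in one line (Steffensen's method applies it repeatedly, using g to generate the three iterates):

```python
def aitken(p0, p1, p2):
    """Aitken's Delta-squared extrapolation:
    p_hat = p0 - (p1 - p0)**2 / (p2 - 2*p1 + p0)."""
    return p0 - (p1 - p0) ** 2 / (p2 - 2 * p1 + p0)

# A linearly convergent sequence p_n = 0.5 + 0.1 * 0.5**n is extrapolated
# to its limit 0.5 in one step (exact up to rounding, since the error
# here is purely geometric):
print(aitken(0.6, 0.55, 0.525))
```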
 2.6 Zeros of Polynomials and Muller's Method
 Key Ideas:
Deflation
 Algorithms:
Horner's (Algo 2.7), Muller's (Algo 2.8)
 Exercises:
2(a,d,g), 4(a,d,g), 9, 10
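Horner's rule, which also yields the quotient coefficients used in deflation (a sketch; coefficients ordered from highest degree down):

```python
def horner(coeffs, x):
    """Evaluate p at x by Horner's rule; returns (p(x), quotient coeffs),
    where p(t) = (t - x) * q(t) + p(x)."""
    b = coeffs[0]
    quot = []
    for a in coeffs[1:]:
        quot.append(b)        # b's along the way are the quotient coeffs
        b = b * x + a
    return b, quot

# p(t) = 2t^3 - 6t^2 + 2t - 1 at t = 3:
print(horner([2, -6, 2, -1], 3))   # (5, [2, 0, 2])
```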
 10.1 Fixed Points for Functions of Several Variables
 Key Ideas:
 Algorithms:
 Theorems:
 Exercises:
 10.2 Newton's Method
 Key Ideas:
 Algorithms:
 Theorems:
 Exercises:
 10.3 Quasi-Newton Methods
 Key Ideas:
 Algorithms:
 Theorems:
 Exercises:
 10.4 Steepest Descent Techniques
 Key Ideas:
 Algorithms:
 Theorems:
 Exercises:
ccollins@math.utk.edu
Last Modified: January 3, 2001