Projects For Today
Solving a system of linear equations
- LUP decomposition
- Solving the resulting system
Given matrices A of size (m, n) and B of size (n, p)
Product is C[i, j] = sum(k <- 1..n, A[i, k] * B[k, j])
multiply A by B:
    for i <- 1 .. m
        for j <- 1 .. p
            C[i, j] <- A[i, 1] * B[1, j]
            for k <- 2 .. n
                C[i, j] <- C[i, j] + A[i, k] * B[k, j]
    return C
mp space for the result, mnp scalar multiplications (time)
For square matrices, these are O(n^2) and O(n^3)
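The pseudocode above can be sketched directly in Python (a minimal version using plain nested lists; `matrix_multiply` is a name chosen here, not from the book):

```python
def matrix_multiply(A, B):
    """Naive (m, n) x (n, p) matrix product, as in the pseudocode above."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(m)]        # mp space for the result
    for i in range(m):
        for j in range(p):
            for k in range(n):             # mnp scalar multiplies in total
                C[i][j] += A[i][k] * B[k][j]
    return C
```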
For a square matrix, let
A = [ A[1,1] A[1,2] ]
    [ A[2,1] A[2,2] ]
and similarly for B and C
Compute the matrix product as
C[i,j] = sum(k <- 1..2, A[i, k] . B[k, j])
where "." is matrix multiplication
Gives the same answer (easy proof), but recursive
"Strassen's method is not at all obvious. (This might be the biggest understatement in this book.)"
Let S = B[1,2] - B[2,2] and then define S[2..10] as sums or differences of blocks according to p. 80
Let P = A[1,1] . S and then define P[2..7] similarly according to p. 80
C[(1,1)..(2,2)] can be written as sums of various P according to p. 81
Note that only the P calculations involve a matrix multiplication, and that there are only 7 of them.
- Traditional block multiplication would give 2x2x2 = 8 multiplies
The P multiplies can be done by recursively applying Strassen's algorithm! This gives the recurrence T(n) = 7*T(n/2) + O(n^2), whose solution is O(n^(lg 7)) ~= O(n^2.8)
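The recursion above can be sketched as follows. This is a hedged illustration, not production code: it assumes square numpy arrays whose size is a power of two, and falls back to ordinary multiplication below a cutoff (the S sums/differences are inlined into the P products; see CLRS p. 80 for the named S1..S10):

```python
import numpy as np

def strassen(A, B):
    """Strassen multiply for square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= 64:                      # cutoff: fall back to ordinary multiply
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # the 7 P products (each argument is one of the S sums/differences)
    P1 = strassen(A11, B12 - B22)
    P2 = strassen(A11 + A12, B22)
    P3 = strassen(A21 + A22, B11)
    P4 = strassen(A22, B21 - B11)
    P5 = strassen(A11 + A22, B11 + B22)
    P6 = strassen(A12 - A22, B21 + B22)
    P7 = strassen(A11 - A21, B11 + B12)
    # reassemble the C blocks from sums of the P (CLRS p. 81)
    C = np.empty_like(A)
    C[:h, :h] = P5 + P4 - P2 + P6
    C[:h, h:] = P1 + P2
    C[h:, :h] = P3 + P4
    C[h:, h:] = P5 + P1 - P3 - P7
    return C
```

Note that each level of recursion makes 7 subcalls but also does a constant number of O(n^2) block additions, which is where the recurrence's +O(n^2) term comes from.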
Solving Systems Of Linear Equations
Solving n independent linear equations in n unknowns, with a unique answer
3*x1 + 5*x2 + 7*x3 = 9
4*x1 + 6*x2 + 8*x3 = 10
1*x1 + 0*x2 + 3*x3 = 4
Watch out for underconstrained systems (dependent equations, so infinitely many solutions) and overconstrained systems (inconsistent equations, so no solution)
Matrix formulation: write
b[1] = a[1,1] * x[1] + ... + a[1,n] * x[n]
b[2] = a[2,1] * x[1] + ... + a[2,n] * x[n]
...
Ax = b
where b and x are vectors of length n, and A is an (n, n) matrix of coefficients
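As a quick check of the formulation, the example system above in Ax = b form (using numpy's built-in solver only to verify; the decomposition-based solver comes later):

```python
import numpy as np

# the 3x3 example system from above, written as Ax = b
A = np.array([[3., 5., 7.],
              [4., 6., 8.],
              [1., 0., 3.]])
b = np.array([9., 10., 4.])
x = np.linalg.solve(A, b)   # the unique solution, since A is nonsingular
assert np.allclose(A @ x, b)
```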
Polynomial Multiply By FFT
- Take DFT of the input coefficient vectors (zero-padded)
- Then multiply the DFTs pointwise
- Then take inverse DFT of the result
Most efficient known method for very large n: O(n log n)
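The three steps above are the convolution theorem in action; a minimal sketch with numpy's real FFT (`poly_multiply` is a name chosen here):

```python
import numpy as np

def poly_multiply(a, b):
    """Multiply two coefficient vectors via the FFT (convolution theorem)."""
    n = len(a) + len(b) - 1          # number of coefficients in the product
    fa = np.fft.rfft(a, n)           # DFTs of the zero-padded inputs
    fb = np.fft.rfft(b, n)
    return np.fft.irfft(fa * fb, n)  # pointwise product, then inverse DFT
```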
Gaussian Elimination: Row-Reduction To "LUP" Form
Equations (rows) can be multiplied by a constant, added, swapped around.
Idea: Use first coefficient in first row to get rid of first coefficient in all other rows.
row[2] -= row[1] * (a[2,1] / a[1,1])
b[2]   -= b[1]   * (a[2,1] / a[1,1])
Continue to eliminate successive coefficients from successive rows.
Permutation (a row swap) is needed if the divisor (pivot) is zero, and is numerically wise if it is small. Preserve the permutation to keep track of which equation was which!
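The elimination loop with partial pivoting can be sketched as follows (an illustration, not CLRS's exact code: L's multipliers and U are packed into one matrix, and `pi[i]` records which original row ended up in position i, so that PA = LU):

```python
import numpy as np

def lup_decompose(A):
    """LUP decomposition with partial pivoting; returns (M, pi) where
    M packs the unit-lower-triangular L (below the diagonal) and U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    pi = list(range(n))
    for k in range(n):
        # choose the largest pivot in column k to avoid small divisors
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if A[p, k] == 0:
            raise ValueError("matrix is singular")
        pi[k], pi[p] = pi[p], pi[k]             # remember the swap
        A[[k, p]] = A[[p, k]]
        for i in range(k + 1, n):
            A[i, k] /= A[k, k]                  # store the multiplier (L)
            A[i, k + 1:] -= A[i, k] * A[k, k + 1:]  # eliminate (U)
    return A, pi
```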
Gaussian Elimination: Solving From "LUP" Form
Note that a[n,n] * x[n] = b[p[n]] is trivially solvable for x[n]
Substitute x[n] and subtract resulting constant from b[p[i]] to get equations like
a[n-1,n-1] * x[n-1] = ..
Continue until all x[i] have been computed, and return the results
Running time of the decomposition is O(n^3); solving from the decomposition is O(n^2)
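The substitution steps above can be sketched as a two-pass solver, given a decomposition PA = LU (an assumed interface: `pi[i]` is the original row of A that pivoting moved to position i):

```python
import numpy as np

def lup_solve(L, U, pi, b):
    """Solve Ax = b given PA = LU."""
    n = len(b)
    # forward substitution: L y = P b, top row first
    y = np.empty(n)
    for i in range(n):
        y[i] = b[pi[i]] - L[i, :i] @ y[:i]
    # back substitution: U x = y, starting from the trivially solvable
    # last equation u[n,n] * x[n] = y[n]
    x = np.empty(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

Both passes touch each matrix entry once, which is where the O(n^2) solving cost comes from.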
Finding A^-1 such that A . A^-1 = I
Can use Gaussian Elimination to find
- Set up n systems of equations, each of which solves for a column of A^-1
- Solve the systems in total time O(n^3)
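The column-by-column idea above, sketched in Python (a hedged illustration: it calls numpy's solver per column, whereas a real implementation would decompose A once with LUP and reuse the factors for all n right-hand sides):

```python
import numpy as np

def invert(A):
    """Find A^-1 column by column: the j-th column solves A x = e_j."""
    n = A.shape[0]
    I = np.eye(n)
    cols = [np.linalg.solve(A, I[:, j]) for j in range(n)]
    return np.column_stack(cols)
```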
Proof of equivalence of multiplication and inversion
- Important property: the reduction goes both ways (multiplication is no harder than inversion, and vice versa)
Practically super-important: least-squares fitting, i.e. fit a function to data points minimizing the sum of squared errors in y
See last section of the book and meditate hard for details
Look somewhere else (e.g. Numerical Recipes, web) for implementation hints
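As a taste of the idea, here is a straight-line least-squares fit via the normal equations (a sketch under the usual setup; real code should prefer `np.linalg.lstsq`, which uses a more numerically stable factorization):

```python
import numpy as np

# fit y ~ c0 + c1*x, minimizing the sum of squared y-errors
x = np.array([0., 1., 2., 3.])
y = np.array([1., 3., 5., 7.])               # data lying exactly on y = 1 + 2x
A = np.column_stack([np.ones_like(x), x])    # design matrix: one column per coefficient
c = np.linalg.solve(A.T @ A, A.T @ y)        # normal equations: (A^T A) c = A^T y
```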