Operations on matrices are at the heart of scientific computing. Efficient algorithms for working with matrices are therefore of considerable practical interest. This chapter provides a brief introduction to matrix theory and matrix operations, emphasizing the problems of multiplying matrices and solving sets of simultaneous linear equations.

A *matrix* is a rectangular array of numbers. For example,

The *transpose* of a matrix *A* is the matrix *A*^{T} obtained by exchanging the rows and columns of *A*.

A *vector* is a one-dimensional array of numbers. For example,

x^{T}= ( 2 3 5 ) .

The **unit vector** *e_{i}* is the vector whose *i*th element is 1 and all of whose other elements are 0. Usually, the size of a unit vector is clear from the context.

A *zero matrix* is a matrix whose every entry is 0. Such a matrix is often denoted 0, since the ambiguity between the number 0 and a matrix of 0's is usually easily resolved from context. If a matrix of 0's is intended, then the size of the matrix also needs to be derived from the context.

**Square** *n* × *n* matrices arise frequently. Several special cases of square matrices are of particular interest:

1. A *diagonal matrix* has *a_{ij}* = 0 whenever *i* ≠ *j*.

2. The *n* × *n* **identity matrix** *I_{n}* is a diagonal matrix with 1's along the diagonal:

I_{n} = diag(1, 1, . . . , 1).

3. A **tridiagonal matrix** *T* is one for which *t_{ij}* = 0 if |*i* - *j*| > 1: its nonzero entries appear only on the main diagonal and the two diagonals adjacent to it.

4. An **upper-triangular matrix** *U* is one for which *u_{ij}* = 0 if *i* > *j*: all entries below the diagonal are zero.

An upper-triangular matrix is *unit upper-triangular* if it has all 1's along the diagonal.

5. A **lower-triangular matrix** *L* is one for which *l_{ij}* = 0 if *i* < *j*: all entries above the diagonal are zero.

A lower-triangular matrix is *unit lower-triangular* if it has all 1's along the diagonal.

6. A **permutation matrix** *P* has exactly one 1 in each row and each column, and 0's elsewhere. An example of a permutation matrix is

7. A **symmetric matrix** *A* satisfies the condition *A* = *A*^{T}. For example,

The elements of a matrix or vector are numbers from a number system, such as the real numbers, the complex numbers, or integers modulo a prime. The number system defines how to add and multiply numbers. We can extend these definitions to encompass addition and multiplication of matrices.

We define *matrix addition* as follows. If *A* = (*a_{ij}*) and *B* = (*b_{ij}*) are *m* × *n* matrices, then their sum *C* = (*c_{ij}*) = *A* + *B* is the *m* × *n* matrix defined by

c_{ij} = a_{ij} + b_{ij}

for *i* = 1, 2, . . . , *m* and *j* = 1, 2, . . . , *n*. That is, matrix addition is performed componentwise. A zero matrix is the identity for matrix addition:

A + 0 = A
      = 0 + A.

If λ is a number and *A* = (*a_{ij}*) is a matrix, then λ*A* = (λ*a_{ij}*) is the *scalar multiple* of *A* obtained by multiplying each of its elements by λ. As a special case, the *negative* of a matrix *A* is (-1) · *A* = -*A*, and it satisfies

A + (-A) = 0
         = (-A) + A.

We define *matrix multiplication* as follows. We start with two matrices *A* and *B* that are *compatible*: the number of columns of *A* equals the number of rows of *B*. If *A* = (*a_{ij}*) is an *m* × *n* matrix and *B* = (*b_{jk}*) is an *n* × *p* matrix, then their product *C* = *AB* is the *m* × *p* matrix *C* = (*c_{ik}*) defined by

c_{ik} = Σ_{j=1}^{n} a_{ij} b_{jk}

for *i* = 1, 2, . . . , *m* and *k* = 1, 2, . . . , *p*. Identity matrices are identities for matrix multiplication:

I_{m}A = AI_{n} = A

for any *m* × *n* matrix *A*. Multiplying by a zero matrix gives a zero matrix:

A0 = 0 .

Matrix multiplication is associative:

A(BC) = (AB)C

for compatible matrices *A*, *B*, and *C*. Matrix multiplication distributes over addition:

A(B + C) = AB + AC,

(B + C)D = BD + CD.
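These definitions translate directly into code. The following sketch (plain Python, function names ours) implements componentwise addition and the triple-loop product, then checks the two distributive laws on a small example:

```python
def mat_add(A, B):
    """Componentwise sum of two equally sized matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    """Product of an m x n matrix A and an n x p matrix B."""
    n, p = len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]
D = [[1, 1], [1, 1]]

# A(B + C) = AB + AC and (B + C)D = BD + CD
assert mat_mul(A, mat_add(B, C)) == mat_add(mat_mul(A, B), mat_mul(A, C))
assert mat_mul(mat_add(B, C), D) == mat_add(mat_mul(B, D), mat_mul(C, D))
```

Associativity, *A*(*BC*) = (*AB*)*C*, can be checked the same way on any compatible triple.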

Multiplication of *n* × *n* matrices is not commutative, however, unless *n* = 1. For example, if *A* = (0 1; 0 0) and *B* = (0 0; 1 0) (rows separated by semicolons), then *AB* = (1 0; 0 0) but *BA* = (0 0; 0 1).

Matrix-vector products and vector-vector products are defined as if the vector were the equivalent *n* × 1 matrix (or a 1 × *n* matrix, in the case of a row vector). Thus, if *A* is an *m* × *n* matrix and *x* is a vector of size *n*, then *Ax* is a vector of size *m*. If *x* and *y* are vectors of size *n*, then

x^{T}y = Σ_{i=1}^{n} x_{i} y_{i}

is a number (actually a 1 × 1 matrix) called the *inner product* of *x* and *y*. The matrix *xy*^{T} is an *n* × *n* matrix *Z* called the *outer product* of *x* and *y*, with *z_{ij}* = *x_{i}y_{j}*. The *(euclidean) norm* ||*x*|| of a vector *x* of size *n* is defined by

||x|| = (x_{1}^{2} + x_{2}^{2} + · · · + x_{n}^{2})^{1/2} = (x^{T}x)^{1/2}.

Thus, the norm of *x* is its length in *n*-dimensional euclidean space.
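A minimal sketch of the inner product, outer product, and euclidean norm (Python, function names ours):

```python
import math

def inner(x, y):
    """Inner product x^T y: a single number."""
    return sum(a * b for a, b in zip(x, y))

def outer(x, y):
    """Outer product x y^T: a matrix with z_ij = x_i * y_j."""
    return [[a * b for b in y] for a in x]

def norm(x):
    """Euclidean norm ||x|| = (x^T x)^(1/2)."""
    return math.sqrt(inner(x, x))

x = [2, 3, 5]
y = [1, 0, 1]
assert inner(x, y) == 7
assert outer(x, y)[0] == [2, 0, 2]   # first row: x_1 * y
assert norm([3, 4]) == 5.0           # the 3-4-5 right triangle
```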

We define the *inverse* of an *n* × *n* matrix *A* to be the *n* × *n* matrix, denoted *A*^{-1} (if it exists), such that *AA*^{-1} = *I_{n}* = *A*^{-1}*A*.

Many nonzero *n* × *n* matrices do not have inverses. A matrix without an inverse is called *noninvertible*, or *singular*.

If a matrix has an inverse, it is called *invertible*, or *nonsingular*. Matrix inverses, when they exist, are unique. If *A* and *B* are nonsingular *n* × *n* matrices, then

(BA)^{-1} = A^{-1}B^{-1}.

The inverse operation commutes with the transpose operation:

(A^{-1})^{T} = (A^{T})^{-1}.

The vectors *x*_{1}, *x*_{2}, . . . , *x_{n}* are *linearly dependent* if there exist coefficients *c*_{1}, *c*_{2}, . . . , *c_{n}*, not all zero, such that *c*_{1}*x*_{1} + *c*_{2}*x*_{2} + · · · + *c_{n}x_{n}* = 0. Vectors that are not linearly dependent are *linearly independent*.

The *column rank* of a nonzero *m* × *n* matrix *A* is the size of the largest set of linearly independent columns of *A*. Similarly, the *row rank* of *A* is the size of the largest set of linearly independent rows of *A*. A fundamental property of any matrix is that its row rank equals its column rank, so we can refer simply to the *rank* of *A*. An equivalent definition is that the rank of a nonzero *m* × *n* matrix *A* is the smallest number *r* such that there exist an *m* × *r* matrix *B* and an *r* × *n* matrix *C* satisfying

A = BC.

A square *n* × *n* matrix has *full rank* if its rank is *n*.

A square matrix has full rank if and only if it is nonsingular.

An *m* × *n* matrix has *full column rank* if its rank is *n*.

A *null vector* for a matrix *A* is a nonzero vector *x* such that *Ax* = 0.

A matrix *A* has full column rank if and only if it does not have a null vector.

A square matrix *A* is singular if and only if it has a null vector.

The *ij*th *minor* of an *n* × *n* matrix *A*, for *n* > 1, is the (*n* - 1) × (*n* - 1) matrix *A*_{[ij]} obtained by deleting the *i*th row and *j*th column of *A*. The *determinant* of *A* can be defined recursively in terms of its minors: det(*A*) = *a*_{11} if *n* = 1, and

det(A) = Σ_{j=1}^{n} (-1)^{1+j} a_{1j} det(A_{[1j]})

if *n* > 1. The term (-1)^{*i*+*j*} det(*A*_{[ij]}) is known as the *cofactor* of the element *a_{ij}*.

The determinant of a square matrix *A* has the following properties:

If any row or any column of *A* is zero, then det(*A*) = 0.

The determinant of *A* equals the determinant of *A*^{T}.

The determinant of *A* is multiplied by -1 if any two rows (respectively, columns) are exchanged.

Also, for any square matrices *A* and *B*, we have det (*AB*) = det(*A*) det(*B*).

An *n* × *n* matrix *A* is singular if and only if det(*A*) = 0.

Positive-definite matrices play an important role in many applications. An *n* × *n* matrix *A* is *positive-definite* if *x*^{T}*Ax* > 0 for all size-*n* vectors *x* ≠ 0.

For any matrix *A* with full column rank, the matrix *A*^{T}*A* is positive-definite.

*Proof* We must show that *x*^{T}(*A*^{T}*A*)*x* > 0 for any nonzero vector *x*. For any vector *x*,

x^{T}(A^{T}A)x = (Ax)^{T}(Ax)    (by Exercise 31.1-3)
              = ||Ax||^{2}
              ≥ 0.

Note that ||*Ax*||^{2} is just the sum of the squares of the elements of the vector *Ax*. If ||*Ax*||^{2} = 0, then every element of *Ax* is 0, that is, *Ax* = 0. Since *A* has full column rank, *Ax* = 0 implies *x* = 0, and hence *A*^{T}*A* is positive-definite.
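This theorem is easy to check numerically. The sketch below builds *A*^{T}*A* for a small full-column-rank matrix *A* (our own example) and verifies that the quadratic form equals ||*Ax*||^{2} and is positive for several nonzero vectors:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def quad_form(M, x):
    """The quadratic form x^T M x."""
    return sum(xi * yi for xi, yi in zip(x, mat_vec(M, x)))

# A 3 x 2 matrix whose two columns are linearly independent.
A = [[1, 0], [1, 1], [0, 2]]
AtA = mat_mul(transpose(A), A)

for x in ([1, 0], [0, 1], [3, -2], [-1, 5]):
    Ax = mat_vec(A, x)
    # x^T (A^T A) x equals ||Ax||^2, which is positive for nonzero x.
    assert quad_form(AtA, x) == sum(v * v for v in Ax) > 0
```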

Other properties of positive-definite matrices will be explored in Section 31.6.

Prove that the product of two lower-triangular matrices is lower-triangular. Prove that the determinant of a (lower- or upper-) triangular matrix is equal to the product of its diagonal elements. Prove that the inverse of a lower-triangular matrix, if it exists, is lower-triangular.

Prove that if *P* is an *n* × *n* permutation matrix and *A* is an *n* × *n* matrix, then *PA* can be obtained from *A* by permuting its rows, and *AP* can be obtained from *A* by permuting its columns. Prove that the product of two permutation matrices is a permutation matrix. Prove that if *P* is a permutation matrix, then *P* is invertible, its inverse is *P*^{T}, and *P*^{T} is a permutation matrix.

Prove that (*AB*)^{T} = *B*^{T}*A*^{T} and that *A*^{T}*A* is always a symmetric matrix.

Prove that if *B* and *C* are inverses of *A*, then *B* = *C.*

Prove that for any two compatible matrices *A* and *B*,

rank(*AB*) ≤ min(rank(*A*), rank(*B*)),

where equality holds if either *A* or *B* is a nonsingular square matrix. (*Hint:* Use the alternate definition of the rank of a matrix.)

Given numbers *x*_{0}, *x*_{1}, . . . , *x*_{*n*-1}, prove that the determinant of the *n* × *n* *Vandermonde matrix* *V*(*x*_{0}, *x*_{1}, . . . , *x*_{*n*-1}), whose *ij*th entry is *x_{i}^{j}* (for 0 ≤ *i*, *j* ≤ *n* - 1), is

det(V(x_{0}, x_{1}, . . . , x_{n-1})) = Π_{0 ≤ j < k ≤ n-1} (x_{k} - x_{j}).

This section presents Strassen's remarkable recursive algorithm for multiplying *n* × *n* matrices that runs in Θ(*n*^{lg 7}) = *O*(*n*^{2.81}) time. For sufficiently large *n*, therefore, it outperforms the naive Θ(*n*^{3}) matrix-multiplication algorithm MATRIX-MULTIPLY from Section 26.1.

Strassen's algorithm can be viewed as an application of a familiar design technique: divide and conquer. Suppose we wish to compute the product *C* = *AB*, where each of *A*, *B*, and *C* is an *n* × *n* matrix. Assuming that *n* is an exact power of 2, we divide each of *A*, *B*, and *C* into four *n*/2 × *n*/2 matrices, rewriting the equation *C* = *AB* as

(r s; t u) = (a b; c d)(e g; f h)

(rows separated by semicolons), which corresponds to the four equations

r = ae + bf,

s = ag + bh,

t = ce + df,

u = cg + dh.

Each of these four equations specifies two multiplications of *n*/2 × *n*/2 matrices and the addition of their *n*/2 × *n*/2 products. Using them directly to define a divide-and-conquer algorithm yields the recurrence

T(n) = 8T(n/2) + Θ(n^{2}),

whose solution is *T*(*n*) = Θ(*n*^{3}): no better than the naive method. Strassen discovered how to compute the four result submatrices using only 7 recursive multiplications of *n*/2 × *n*/2 matrices and Θ(*n*^{2}) scalar additions and subtractions, yielding the recurrence

T(n) = 7T(n/2) + Θ(n^{2})
     = Θ(n^{lg 7})
     = O(n^{2.81}).
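The recursion can be sketched compactly in Python (assuming *n* is an exact power of 2). The particular combinations of seven products used below are one standard formulation of Strassen's method, not necessarily the *P*_{1}, . . . , *P*_{7} derived later in the text:

```python
def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n x n matrices, n an exact power of 2."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Split each matrix into four m x m quadrants.
    q = lambda M, i, j: [row[j*m:(j+1)*m] for row in M[i*m:(i+1)*m]]
    a11, a12, a21, a22 = q(A, 0, 0), q(A, 0, 1), q(A, 1, 0), q(A, 1, 1)
    b11, b12, b21, b22 = q(B, 0, 0), q(B, 0, 1), q(B, 1, 0), q(B, 1, 1)
    # The seven recursive products.
    p1 = strassen(add(a11, a22), add(b11, b22))
    p2 = strassen(add(a21, a22), b11)
    p3 = strassen(a11, sub(b12, b22))
    p4 = strassen(a22, sub(b21, b11))
    p5 = strassen(add(a11, a12), b22)
    p6 = strassen(sub(a21, a11), add(b11, b12))
    p7 = strassen(sub(a12, a22), add(b21, b22))
    # Recombine with additions and subtractions only.
    c11 = add(sub(add(p1, p4), p5), p7)
    c12 = add(p3, p5)
    c21 = add(p2, p4)
    c22 = add(sub(add(p1, p3), p2), p6)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bot = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bot

def naive(A, B):
    """Check against the definition of matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[1, 0, 2, 0], [0, 1, 0, 2], [3, 0, 1, 0], [0, 3, 0, 1]]
assert strassen(A, B) == naive(A, B)
```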

Strassen's method has four steps:

1. Divide the input matrices *A* and *B* into *n*/2 × *n*/2 submatrices, as in equation (31.9).

2. Using Θ(*n*^{2}) scalar additions and subtractions, compute 14 matrices *A*_{1}, *B*_{1}, *A*_{2}, *B*_{2}, . . . , *A*_{7}, *B*_{7}, each of which is *n*/2 × *n*/2.

3. Recursively compute the seven matrix products *P_{i}* = *A_{i}B_{i}* for *i* = 1, 2, . . . , 7.

4. Compute the desired submatrices *r*, *s*, *t*, *u* of the result matrix *C* by adding and/or subtracting various combinations of the *P_{i}* matrices, using only Θ(*n*^{2}) scalar additions and subtractions.

Let us guess that each matrix product *P_{i}* can be written in the form

P_{i} = A_{i}B_{i}
     = (α_{i1}a + α_{i2}b + α_{i3}c + α_{i4}d) · (β_{i1}e + β_{i2}f + β_{i3}g + β_{i4}h),

where the coefficients α_{ij} and β_{ij} are all drawn from the set {-1, 0, 1}.

The matrix *t* can be computed in a similar manner as *t* = *P*_{3} + *P*_{4}, where

By adding an additional product

By subtracting an additional product

Use Strassen's algorithm to compute the matrix product

V. Pan has discovered a way of multiplying 68 × 68 matrices using 132,464 multiplications, a way of multiplying 70 × 70 matrices using 143,640 multiplications, and a way of multiplying 72 × 72 matrices using 155,424 multiplications. Which method yields the best asymptotic running time when used in a divide-and-conquer matrix-multiplication algorithm? Compare it with the running time for Strassen's algorithm.
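A divide-and-conquer scheme that multiplies *s* × *s* matrices with *m* scalar multiplications runs in *O*(*n*^{log_{s} *m*}) time, so the exercise reduces to comparing three logarithms. A quick numerical sketch (the comparison of the three schemes is left to the exercise itself):

```python
import math

def exponent(s, m):
    """A scheme using m multiplications on s x s blocks gives O(n^(log_s m))."""
    return math.log(m) / math.log(s)

schemes = {68: 132464, 70: 143640, 72: 155424}
exps = {s: exponent(s, m) for s, m in schemes.items()}

# All three exponents are about 2.795, beating Strassen's lg 7 ~ 2.807.
strassen_exp = math.log2(7)
for e in exps.values():
    assert e < strassen_exp
```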

Show how to multiply the complex numbers *a* + *bi* and *c* + *di* using only three real multiplications. The algorithm should take *a*, *b*, *c*, and *d* as input and produce the real component *ac* - *bd* and the imaginary component *ad* + *bc* separately.

Let (*S*, ⊕, ⊙, 0, 1) denote a number system, where *S* is a set of elements, ⊕ and ⊙ are binary operations on *S* (the addition and multiplication operations, respectively), and 0 and 1 are distinct distinguished elements of *S*. This system is a *quasiring* if it satisfies the following properties:

1. (*S*, ⊕, 0) and (*S*, ⊙, 1) are *monoids*:

*S* is *closed* under ⊕; that is, *a* ⊕ *b* ∈ *S* for all *a*, *b* ∈ *S*.

⊕ is *associative*; that is, *a* ⊕ (*b* ⊕ *c*) = (*a* ⊕ *b*) ⊕ *c* for all *a*, *b*, *c* ∈ *S*.

0 is an *identity* for ⊕; that is, *a* ⊕ 0 = 0 ⊕ *a* = *a* for all *a* ∈ *S*.

(The same three properties hold for ⊙, with 1 as its identity.)

2. 0 is an *annihilator* for ⊙; that is, *a* ⊙ 0 = 0 ⊙ *a* = 0 for all *a* ∈ *S*.

3. The operator ⊕ is *commutative*; that is, *a* ⊕ *b* = *b* ⊕ *a* for all *a*, *b* ∈ *S*.

4. The operator ⊙ *distributes* over ⊕; that is, *a* ⊙ (*b* ⊕ *c*) = (*a* ⊙ *b*) ⊕ (*a* ⊙ *c*) and (*b* ⊕ *c*) ⊙ *a* = (*b* ⊙ *a*) ⊕ (*c* ⊙ *a*) for all *a*, *b*, *c* ∈ *S*.

Examples of quasirings include the *boolean quasiring* ({0, 1}, ∨, ∧, 0, 1), where ∨ denotes logical OR and ∧ denotes logical AND, and the natural number system (**N**, +, ·, 0, 1).

If (*S*, ⊕, ⊙, 0, 1) is a quasiring and *n* ≥ 1, then the corresponding system of *n* × *n* matrices over *S*, with matrix addition and matrix multiplication induced by ⊕ and ⊙ and with the *n* × *n* zero and identity matrices as distinguished elements, is a quasiring.

* Proof *The proof is left as Exercise 31.3-3.

Subtraction is not defined for quasirings, but it is for a *ring*, which is a quasiring that satisfies the following additional property:

5. Every element in *S* has an *additive inverse*; that is, for all *a* ∈ *S*, there exists an element -*a* ∈ *S* such that *a* ⊕ (-*a*) = (-*a*) ⊕ *a* = 0.

Given that the negative of any element is defined, we can define subtraction by *a *- *b* = *a *+ (-*b*).

The following corollary shows that Theorem 31.7 generalizes naturally to rings.

If (*S*, ⊕, ⊙, 0, 1) is a ring and *n* ≥ 1, then the corresponding system of *n* × *n* matrices over *S* is a ring.

* Proof *The proof is left as Exercise 31.3-3.

Using this corollary, we can prove the following theorem.

Strassen's matrix-multiplication algorithm works properly over any ring of matrix elements.

Strassen's algorithm cannot be used directly to multiply boolean matrices, since the boolean quasiring ({0,1}, , , 0, 1) is not a ring. There are instances in which a quasiring is contained in a larger system that is a ring. For example, the natural numbers (a quasiring) are a subset of the integers (a ring), and Strassen's algorithm can therefore be used to multiply matrices of natural numbers if we consider the underlying number system to be the integers. Unfortunately, the boolean quasiring cannot be extended in a similar way to a ring. (See Exercise 31.3-4.)

* Proof *Let the two matrices be

Thus, using Strassen's algorithm, we can perform boolean matrix multiplication in *O*(*n *^{lg 7}) time.
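The reduction just described can be sketched as follows: treat the 0-1 matrices as integer matrices, multiply them over the integers (a naive product stands in for Strassen's algorithm here), and map every positive entry of the result back to 1:

```python
def mat_mul_int(A, B):
    """Ordinary integer matrix product (a stand-in for Strassen's method)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bool_mat_mul(A, B):
    """Boolean product over ({0,1}, OR, AND): entry ij is 1 iff some k
    has a_ik = b_kj = 1.  Multiplying over the integers, entry ij counts
    such k's, so it is positive exactly when the boolean entry is 1."""
    C = mat_mul_int(A, B)
    return [[1 if c > 0 else 0 for c in row] for row in C]

A = [[1, 0], [1, 1]]
B = [[0, 1], [1, 1]]
assert bool_mat_mul(A, B) == [[0, 1], [1, 1]]
```

Since the integer entries never exceed *n*, each fits in a word of *O*(lg *n*) bits, as the time bound requires.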

A ring is a *field* if it satisfies the following two additional properties:

6. The operator ⊙ is *commutative*; that is, *a* ⊙ *b* = *b* ⊙ *a* for all *a*, *b* ∈ *S*.

7. Every nonzero element in *S* has a *multiplicative inverse*; that is, for all *a* ≠ 0 in *S*, there exists an element *b* ∈ *S* such that *a* ⊙ *b* = *b* ⊙ *a* = 1.

Such an element *b* is often called the *inverse* of *a* and is denoted *a*^{-1}.

Prove Theorem 31.7 and Corollary 31.8.

Show how to determine efficiently if a given undirected input graph contains a triangle (a set of three mutually adjacent vertices).

Show that computing the product of two *n* × *n* boolean matrices over the boolean quasiring is reducible to computing the transitive closure of a given directed 3*n*-vertex input graph.

Solving a set of simultaneous linear equations is a fundamental problem that occurs in diverse applications. A linear system can be expressed as a matrix equation in which each matrix or vector element belongs to a field, typically the real numbers **R**. This section discusses how to solve a system of linear equations using a method called LUP decomposition.

We start with a set of linear equations in *n* unknowns *x*_{1},* x*_{2}, . . . ,* x _{n}*:

a_{11}x_{1} + a_{12}x_{2} + · · · + a_{1n}x_{n} = b_{1},
a_{21}x_{1} + a_{22}x_{2} + · · · + a_{2n}x_{n} = b_{2},
⋮
a_{n1}x_{1} + a_{n2}x_{2} + · · · + a_{nn}x_{n} = b_{n}.

A set of values for *x*_{1}, *x*_{2}, . . . , *x_{n}* that satisfies all of the equations (31.17) simultaneously is said to be a *solution* to these equations.

We can conveniently rewrite equations (31.17) as the matrix-vector equation

or, equivalently, letting *A* = (*a _{ij}*),

Ax=b.

If *A* is nonsingular, it possesses an inverse *A*^{-1}, and

x = A^{-1}b

is the solution vector. Moreover, the solution is unique: if *x* and *x*' are both solutions, then *Ax* = *Ax*' = *b*, and

x = (A^{-1}A)x
  = A^{-1}(Ax)
  = A^{-1}(Ax')
  = (A^{-1}A)x'
  = x'.

In this section, we shall be concerned predominantly with the case in which *A* is nonsingular or, equivalently (by Theorem 31.1), the rank of *A* is equal to the number *n* of unknowns. There are other possibilities, however, which merit a brief discussion. If the number of equations is less than the number *n* of unknowns--or, more generally, if the rank of *A* is less than *n*--then the system is *underdetermined*. An underdetermined system typically has infinitely many solutions (see Exercise 31.4-9), although it may have no solutions at all if the equations are inconsistent. If the number of equations exceeds the number *n* of unknowns, the system is *overdetermined*, and there may be no solutions at all.

The idea behind LUP decomposition is to find three *n* × *n* matrices *L*, *U*, and *P* such that

PA = LU,

where

*L* is a unit lower-triangular matrix,

*U* is an upper-triangular matrix, and

*P* is a permutation matrix.

Multiplying both sides of *Ax* = *b* by *P* gives the equivalent equation *PAx* = *Pb*, which by the decomposition becomes

LUx = Pb.

Defining *y* = *Ux*, we can solve this system in two steps: first solve the lower-triangular system

Ly = Pb

for the unknown vector *y* by forward substitution, and then solve the upper-triangular system

Ux = y

for the unknown *x* by back substitution. The vector *x* so obtained is indeed the solution of *Ax* = *b*, since the permutation matrix *P* is invertible (Exercise 31.1-2):

Ax = P^{-1}LUx
   = P^{-1}Ly
   = P^{-1}Pb
   = b.

*Forward substitution* can solve the lower-triangular system (31.21) in Θ(*n*^{2}) time, given *L*, *P*, and *b*. For convenience, we represent the permutation *P* compactly by an array π[1 . . *n*]: for *i* = 1, 2, . . . , *n*, the entry π[*i*] indicates that *P*_{*i*,π[*i*]} = 1 and *P_{ij}* = 0 for *j* ≠ π[*i*]. Thus, *Pb* has *b*_{π[*i*]} as its *i*th element. Since *L* is unit lower-triangular, equation (31.21) can be rewritten as

y_{1} = b_{π[1]},
l_{21}y_{1} + y_{2} = b_{π[2]},
l_{31}y_{1} + l_{32}y_{2} + y_{3} = b_{π[3]},
⋮
l_{n1}y_{1} + l_{n2}y_{2} + l_{n3}y_{3} + · · · + y_{n} = b_{π[n]}.

We can solve for *y*_{1} directly, since the first equation tells us that *y*_{1} = *b*_{π[1]}. Substituting *y*_{1} into the second equation yields

y_{2} = b_{π[2]} - l_{21}y_{1}.

Now, we can substitute both *y*_{1} and *y*_{2} into the third equation, obtaining

y_{3} = b_{π[3]} - (l_{31}y_{1} + l_{32}y_{2}).

In general, we substitute *y*_{1}, *y*_{2}, . . . , *y*_{*i*-1} "forward" into the *i*th equation to solve for

y_{i} = b_{π[i]} - Σ_{j=1}^{i-1} l_{ij}y_{j}.
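Forward substitution as just described, sketched in Python (0-based indices; the permutation is stored as a list pi with P[i, pi[i]] = 1):

```python
def forward_subst(L, pi, b):
    """Solve Ly = Pb for y, where L is unit lower-triangular and the
    permutation P is given as the array pi."""
    n = len(L)
    y = []
    for i in range(n):
        # y_i = b_{pi[i]} - sum_{j < i} l_ij * y_j
        y.append(b[pi[i]] - sum(L[i][j] * y[j] for j in range(i)))
    return y

# L has 1's on its diagonal; entries above the diagonal are zero.
L = [[1, 0, 0],
     [0.5, 1, 0],
     [0.25, 0.5, 1]]
pi = [0, 1, 2]          # identity permutation
b = [2, 3, 4]
assert forward_subst(L, pi, b) == [2, 2.0, 2.5]
```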

*Back substitution* is similar to forward substitution. Given *U* and *y*, we solve the *n*th equation first and work backward to the first equation. Like forward substitution, this process runs in Θ(*n*^{2}) time. Since *U* is upper-triangular, we can rewrite the system (31.22) as

u_{11}x_{1} + u_{12}x_{2} + · · · + u_{1,n-2}x_{n-2} + u_{1,n-1}x_{n-1} + u_{1n}x_{n} = y_{1},
u_{22}x_{2} + · · · + u_{2,n-2}x_{n-2} + u_{2,n-1}x_{n-1} + u_{2n}x_{n} = y_{2},
⋮
u_{n-2,n-2}x_{n-2} + u_{n-2,n-1}x_{n-1} + u_{n-2,n}x_{n} = y_{n-2},
u_{n-1,n-1}x_{n-1} + u_{n-1,n}x_{n} = y_{n-1},
u_{nn}x_{n} = y_{n}.

Thus, we can solve for *x_{n}*, *x*_{*n*-1}, . . . , *x*_{1} successively as follows:

x_{n} = y_{n}/u_{nn},
x_{n-1} = (y_{n-1} - u_{n-1,n}x_{n})/u_{n-1,n-1},
x_{n-2} = (y_{n-2} - (u_{n-2,n-1}x_{n-1} + u_{n-2,n}x_{n}))/u_{n-2,n-2},

and so on.
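Back substitution, sketched in Python (0-based indices; assumes every diagonal entry *u_{ii}* is nonzero):

```python
def back_subst(U, y):
    """Solve Ux = y for x, where U is upper-triangular with nonzero
    diagonal, working from the last equation upward."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # x_i = (y_i - sum_{j > i} u_ij * x_j) / u_ii
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

U = [[2, 1, 1],
     [0, 1, 3],
     [0, 0, 4]]
y = [7, 10, 8]
assert back_subst(U, y) == [0.5, 4.0, 2.0]
```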

Given *P*, *L*, *U*, and *b*, the procedure LUP-SOLVE solves for *x* by combining forward and back substitution. The pseudocode assumes that the dimension *n* appears in the attribute *rows*[*L*] and that the permutation matrix *P* is represented by the array π.
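A Python rendering of LUP-SOLVE's two phases (a sketch with 0-based indices, π stored as a list pi):

```python
def lup_solve(L, U, pi, b):
    """Solve Ax = b given an LUP decomposition PA = LU.
    L: unit lower-triangular, U: upper-triangular,
    pi: permutation array with P[i, pi[i]] = 1 (0-based)."""
    n = len(L)
    # Forward substitution: solve Ly = Pb.
    y = []
    for i in range(n):
        y.append(b[pi[i]] - sum(L[i][j] * y[j] for j in range(i)))
    # Back substitution: solve Ux = y.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# A small check: with P = I and L = I, the system reduces to Ux = b.
I2 = [[1, 0], [0, 1]]
assert lup_solve(I2, [[2, 0], [0, 4]], [0, 1], [6, 8]) == [3.0, 2.0]
```

Both phases run in Θ(*n*^{2}) time, so the whole procedure does as well.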

As an example of these methods, consider the system of linear equations defined by

and we wish to solve for the unknown *x*. The LUP decomposition is

(The reader can verify that *PA* = *LU*.) Using forward substitution, we solve *Ly* = *Pb* for *y*:

thereby obtaining the desired answer

by computing first *x*_{3}, then *x*_{2}, and finally *x*_{1}*.*

We have now shown that if an LUP decomposition can be computed for a nonsingular matrix *A*, forward and back substitution can be used to solve the system *Ax* = *b* of linear equations. It remains to show how an LUP decomposition for *A* can be found efficiently. We start with the case in which *A* is an *n* × *n* nonsingular matrix and *P* is absent (or, equivalently, *P* = *I_{n}*). In this case, we must find a factorization *A* = *LU*; we call the two matrices *L* and *U* an *LU decomposition* of *A*.

The process by which we perform LU decomposition is called *Gaussian elimination*. We start by subtracting multiples of the first equation from the other equations so that the first variable is removed from those equations. Then, we subtract multiples of the second equation from the third and subsequent equations so that now the first and second variables are removed from them. We continue this process until the system that is left has an upper-triangular form--in fact, it is the matrix *U*.

The 0's in the first and second matrices of the factorization are row and column vectors, respectively, of size *n* - 1. The term *vw*^{T}/*a*_{11}, formed by taking the outer product of *v* and *w* and dividing each element of the result by *a*_{11}, is an (*n* - 1) × (*n* - 1) matrix, which conforms in size to the matrix *A*' from which it is subtracted. The resulting (*n* - 1) × (*n* - 1) matrix

A' - vw^{T}/a_{11}

is called the *Schur complement* of *A* with respect to *a*_{11}.

We now recursively find an LU decomposition of the Schur complement. Let us say that

A' - vw^{T}/a_{11} = L'U',

where *L*' is unit lower-triangular and *U*' is upper-triangular. Then, using matrix algebra, we have

Of course, if *a*_{11} = 0, this method doesn't work, because it divides by 0. It also doesn't work if the upper leftmost entry of the Schur complement *A*' - *vw*^{T}/*a*_{11} is 0, since we divide by it in the next step of the recursion. The elements by which we divide during LU decomposition are called *pivots*, and they occupy the diagonal elements of the matrix *U*.

Our code for LU decomposition of a matrix *A* follows the recursive strategy, except that an iteration loop replaces the recursion. (This transformation is a standard optimization for a "tail-recursive" procedure--one whose last operation is a recursive call to itself.) It assumes that the dimension of *A* is kept in the attribute *rows*[*A*]. Since we know that the output matrix *U* has 0's below the diagonal, and since LUP-SOLVE does not look at these entries, the code does not bother to fill them in. Likewise, because the output matrix *L* has 1's on its diagonal and 0's above the diagonal, these entries are not filled in either. Thus, the code computes only the "significant" entries of *L* and *U*.
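The iterative strategy just described can be sketched as follows (Python; no pivoting, so it assumes every pivot encountered is nonzero, as is guaranteed for, e.g., symmetric positive-definite matrices):

```python
def lu_decomposition(A):
    """LU decomposition without pivoting.  Returns (L, U) with L unit
    lower-triangular and U upper-triangular such that A = LU."""
    n = len(A)
    a = [row[:] for row in A]          # work on a copy of A
    for k in range(n):
        for i in range(k + 1, n):
            a[i][k] = a[i][k] / a[k][k]        # multiplier l_ik = a_ik / pivot
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]   # update the Schur complement
    # Unpack the "significant" entries into explicit L and U.
    L = [[a[i][j] if j < i else (1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    U = [[a[i][j] if j >= i else 0 for j in range(n)] for i in range(n)]
    return L, U

def mat_mul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[4, 3], [6, 3]]
L, U = lu_decomposition(A)
assert L == [[1, 0], [1.5, 1]]
assert U == [[4, 3], [0, -1.5]]
assert mat_mul(L, U) == A
```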

Generally, in solving a system of linear equations *Ax* = *b*, we must pivot on off-diagonal elements of *A* to avoid dividing by 0. Not only is division by 0 undesirable, so is division by any small value, even if *A* is nonsingular, because numerical instabilities can result in the computation. We therefore try to pivot on a large value.

P'(A' - vw^{T}/a_{k1}) = L'U'.

LUP-DECOMPOSITION(A)
 1  n ← rows[A]
 2  for i ← 1 to n
 3       do π[i] ← i
 4  for k ← 1 to n - 1
 5       do p ← 0
 6          for i ← k to n
 7               do if |a_{ik}| > p
 8                     then p ← |a_{ik}|
 9                          k' ← i
10          if p = 0
11             then error "singular matrix"
12          exchange π[k] ↔ π[k']
13          for i ← 1 to n
14               do exchange a_{ki} ↔ a_{k'i}
15          for i ← k + 1 to n
16               do a_{ik} ← a_{ik}/a_{kk}
17                  for j ← k + 1 to n
18                       do a_{ij} ← a_{ij} - a_{ik}a_{kj}

by using forward substitution.

Find an LU decomposition of the matrix

by using an LUP decomposition.

Describe the LUP decomposition of a diagonal matrix.

Describe the LUP decomposition of a permutation matrix *A*, and prove that it is unique.

Show that for all *n* 1, there exist singular *n* *n* matrices that have LU decompositions.

Although in practice we do not generally use matrix inverses to solve systems of linear equations, preferring instead to use more numerically stable techniques such as LUP decomposition, it is sometimes necessary to compute a matrix inverse. In this section, we show how LUP decomposition can be used to compute a matrix inverse. We also discuss the theoretically interesting question of whether the computation of a matrix inverse can be sped up using techniques such as Strassen's algorithm for matrix multiplication. Indeed, Strassen's original paper was motivated by the problem of showing that a set of linear equations could be solved more quickly than by the usual method.

Suppose that we have an LUP decomposition of a matrix *A* in the form of three matrices *L*, *U*, and *P* such that *PA* = *LU*. Using LUP-SOLVE, we can solve an equation of the form *Ax* = *b* in time Θ(*n*^{2}). Since the LUP decomposition depends on *A* but not *b*, we can solve a second set of equations of the form *Ax* = *b*' in additional time Θ(*n*^{2}). In general, once we have the LUP decomposition of *A*, we can solve, in time Θ(*kn*^{2}), *k* versions of the equation *Ax* = *b* that differ only in *b*.

We can also use LUP decomposition to compute a matrix inverse. The inverse *X* of *A* satisfies

AX = I_{n}.

If *x_{i}* denotes the *i*th column of *X* and *e_{i}* the *i*th unit vector, this equation is equivalent to the *n* separate systems

Ax_{i} = e_{i}

for *i* = 1, 2, . . . , *n*, each of which can be solved by LUP-SOLVE in Θ(*n*^{2}) time, for a total of Θ(*n*^{3}) time once the decomposition is known.

We now show that the theoretical speedups obtained for matrix multiplication translate to speedups for matrix inversion. In fact, we prove something stronger: matrix inversion is equivalent to matrix multiplication, in the following sense. If *M*(*n*) denotes the time to multiply two *n* × *n* matrices and *I*(*n*) denotes the time to invert a nonsingular *n* × *n* matrix, then *I*(*n*) = Θ(*M*(*n*)). We prove this result in two parts. First, we show that *M*(*n*) = *O*(*I*(*n*)), which is relatively easy, and then we prove that *I*(*n*) = *O*(*M*(*n*)).

The matrix *D* used in the reduction is the 3*n* × 3*n* matrix

D = (I_{n} A 0; 0 I_{n} B; 0 0 I_{n})

(rows separated by semicolons), whose inverse is

D^{-1} = (I_{n} -A AB; 0 I_{n} -B; 0 0 I_{n}),

and thus we can compute the product *AB* by taking the upper right *n* × *n* submatrix of *D*^{-1}. The matrix *D* can be constructed in Θ(*n*^{2}) time and inverted in *O*(*I*(3*n*)) = *O*(*I*(*n*)) time, assuming *I* satisfies the regularity condition *I*(3*n*) = *O*(*I*(*n*)). Thus,

M(n) =O(I(n)).
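The construction can be verified concretely. This sketch builds the 3*n* × 3*n* block matrix *D* = (*I* *A* 0; 0 *I* *B*; 0 0 *I*) and its claimed inverse for a small example (our own matrices *A* and *B*), checks *DD*^{-1} = *I*, and reads *AB* out of the upper right block:

```python
def mat_mul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

n = 2
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
AB = mat_mul(A, B)

def neg(M):
    return [[-x for x in row] for row in M]

I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
Z = [[0] * n for _ in range(n)]

def block3(M11, M12, M13, M21, M22, M23, M31, M32, M33):
    """Assemble a 3n x 3n matrix from nine n x n blocks."""
    rows = []
    for trip in ((M11, M12, M13), (M21, M22, M23), (M31, M32, M33)):
        for i in range(n):
            rows.append(trip[0][i] + trip[1][i] + trip[2][i])
    return rows

D = block3(I, A, Z, Z, I, B, Z, Z, I)
Dinv = block3(I, neg(A), AB, Z, I, neg(B), Z, Z, I)

I6 = [[1 if i == j else 0 for j in range(3 * n)] for i in range(3 * n)]
assert mat_mul(D, Dinv) == I6
# The upper right n x n block of D^{-1} is exactly AB.
assert [row[2 * n:] for row in Dinv[:n]] == AB
```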

The proof that matrix inversion is no harder than matrix multiplication relies on some properties of symmetric positive-definite matrices that will be proved in Section 31.6.

*Proof* We can assume that *n* is an exact power of 2 and, initially, that *A* is symmetric and positive-definite; the general nonsingular case reduces to this one by the observation below. We partition *A* into four *n*/2 × *n*/2 submatrices

A = (B C^{T}; C D)

(rows separated by semicolons). Then, letting

S = D - CB^{-1}C^{T}

be the Schur complement of *A* with respect to *B*, we have

an expression for *A*^{-1} in terms of *B*^{-1} and *S*^{-1}, which the algorithm evaluates using two recursive inversions (of *B* and of *S*) and the four matrix products

CB^{-1},
(CB^{-1})C^{T},
S^{-1}(CB^{-1}),
(CB^{-1})^{T}(S^{-1}CB^{-1}).

I(n) ≤ 2I(n/2) + 4M(n) + O(n^{2})
     = 2I(n/2) + O(M(n))
     = O(M(n)).

The reduction is based on the observation that when *A* is an *n* × *n* nonsingular matrix, we have

A^{-1} = (A^{T}A)^{-1}A^{T},

since ((*A*^{T}*A*)^{-1}*A*^{T})*A* = (*A*^{T}*A*)^{-1}(*A*^{T}*A*) = *I_{n}*, and the matrix *A*^{T}*A* is symmetric and positive-definite whenever *A* is nonsingular.

Let *M*(*n*) be the time to multiply *n* × *n* matrices, and let *L*(*n*) be the time to compute the LUP decomposition of an *n* × *n* matrix. Show that multiplying matrices and computing LUP decompositions of matrices have essentially the same difficulty: *L*(*n*) = Θ(*M*(*n*)).

Let *M*(*n*) be the time to multiply *n* × *n* matrices, and let *D*(*n*) denote the time required to find the determinant of an *n* × *n* matrix. Show that finding the determinant is no harder than multiplying matrices: *D*(*n*) = *O*(*M*(*n*)).

Let *M*(*n*) be the time to multiply *n* × *n* boolean matrices, and let *T*(*n*) be the time to find the transitive closure of *n* × *n* boolean matrices. Show that *M*(*n*) = *O*(*T*(*n*)) and *T*(*n*) = *O*(*M*(*n*) lg *n*).

Generalize the matrix-inversion algorithm of Theorem 31.12 to handle matrices of complex numbers, and prove that your generalization works correctly. (*Hint:* Instead of the transpose of *A*, use the *conjugate transpose* *A**, which is obtained from the transpose of *A* by replacing each entry with its complex conjugate.)

Symmetric positive-definite matrices have many interesting and desirable properties. For example, they are nonsingular, and LU decomposition can be performed on them without our having to worry about dividing by 0. In this section, we shall prove several other important properties of symmetric positive-definite matrices and show an interesting application to curve fitting by a least-squares approximation.

The first property we prove is perhaps the most basic.

Any symmetric positive-definite matrix is nonsingular.

The proof that we can perform LU decomposition on a symmetric positive-definite matrix *A* without dividing by 0 is more involved. We begin by proving properties about certain submatrices of *A*. Define the *k*th *leading submatrix* of *A* to be the *k* × *k* matrix *A_{k}* consisting of the intersection of the first *k* rows and first *k* columns of *A*.

since *A* is positive-definite, and hence *A _{k}* is also positive-definite.

We now turn to some essential properties of the Schur complement. Let *A* be a symmetric positive-definite matrix, and let *A_{k}* be a leading *k* × *k* submatrix of *A*. Partition *A* as

A = (A_{k} B^{T}; B C)

(rows separated by semicolons). Then, the *Schur complement* of *A* with respect to *A_{k}* is defined to be

S = C - BA_{k}^{-1}B^{T}.

LU decomposition of a symmetric positive-definite matrix never causes a division by 0.

Fitting curves to given sets of data points is an important application of symmetric positive-definite matrices. Suppose that we are given a set of *m* data points

(x_{1}, y_{1}), (x_{2}, y_{2}), . . . , (x_{m}, y_{m}),

y_{i} = F(x_{i}) + η_{i},

F(x) = c_{1} + c_{2}x + c_{3}x^{2} + · · · + c_{n}x^{n-1}

is a polynomial of degree *n *- 1 in *x*.

is the size-*m* vector of "predicted values" for *y*. Thus,

η = Ac - y

is the size-*m* vector of *approximation errors*.

The *n* equations (31.32) for *k* = 1, 2, . . . , *n* are equivalent to the single matrix equation

(Ac - y)^{T}A = 0

or, equivalently (using Exercise 31.1-3), to

A^{T}(Ac - y) = 0 ,

A^{T}Ac=A^{T}y.

In statistics, this is called the *normal equation*. The matrix *A*^{T}*A* is symmetric (by Exercise 31.1-3), and if *A* has full column rank, it is positive-definite as well (by Theorem 31.6). Hence, (*A*^{T}*A*)^{-1} exists, and the solution to the normal equation is

c = ((A^{T}A)^{-1}A^{T})y
  = A^{+}y,

where the matrix *A*^{+} = (*A*^{T}*A*)^{-1}*A*^{T} is called the *pseudoinverse* of the matrix *A*.

As an example of producing a least-squares fit, suppose that we have 5 data points

(-1,2),(1,1),(2,1),(3,0),(5,3),

shown as black dots in Figure 31.3. We wish to fit these points with a quadratic polynomial

F(x) =c_{1}+c_{2}x+c_{3}x^{2}.

We start with the matrix of basis-function values

Multiplying *y* by *A*^{+}, we obtain the coefficient vector

which corresponds to the quadratic polynomial

F(x) = 1.200 - 0.757x + 0.214x^{2}

as the closest-fitting quadratic to the given data, in a least-squares sense.
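The arithmetic of this example can be redone exactly. The sketch below forms the normal equations for the five data points and solves them in rational arithmetic (the Gaussian-elimination helper is ours); the exact coefficients 6/5, -53/70, and 3/14 round to the quoted 1.200, -0.757, and 0.214:

```python
from fractions import Fraction as F

def solve(M, v):
    """Solve M c = v by Gaussian elimination with partial pivoting,
    using exact rational arithmetic."""
    n = len(v)
    M = [[F(x) for x in row] + [F(y)] for row, y in zip(M, v)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    c = [F(0)] * n
    for i in range(n - 1, -1, -1):
        c[i] = (M[i][n] - sum(M[i][j] * c[j] for j in range(i + 1, n))) / M[i][i]
    return c

pts = [(-1, 2), (1, 1), (2, 1), (3, 0), (5, 3)]
m = len(pts)
# Basis functions 1, x, x^2: A is the 5 x 3 matrix of basis-function values.
A = [[1, x, x * x] for x, _ in pts]
y = [yi for _, yi in pts]

# Normal equations: (A^T A) c = A^T y.
AtA = [[sum(A[i][r] * A[i][c] for i in range(m)) for c in range(3)]
       for r in range(3)]
Aty = [sum(A[i][r] * y[i] for i in range(m)) for r in range(3)]
c = solve(AtA, Aty)
assert c == [F(6, 5), F(-53, 70), F(3, 14)]   # ~ 1.200, -0.757, 0.214
```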

Prove that every diagonal element of a symmetric positive-definite matrix is positive.

Prove that the maximum element in a symmetric positive-definite matrix lies on the diagonal.

Find the function of the form

F(x) = c_{1} + c_{2}x lg x + c_{3}e^{x}

that is the best least-squares fit to the data points

(1,1),(2,1),(3,3),(4,8) .

Show that the pseudoinverse *A*^{+} satisfies the following four equations:

AA^{+}A=A,

A^{+}AA^{+ }=A^{+ },

(AA^{+})^{T }=AA^{+ },

(A^{+}A)^{T }=A^{+}A.

31-1 Shamir's boolean matrix multiplication algorithm

In Section 31.3, we observed that Strassen's matrix-multiplication algorithm cannot be applied directly to boolean matrix multiplication because the boolean quasiring *Q* = ({0, 1}, ∨, ∧, 0, 1) is not a ring. Theorem 31.10 showed that if we use arithmetic operations on words of *O*(lg *n*) bits, we can nevertheless apply Strassen's method to multiply *n* × *n* boolean matrices in *O*(*n*^{lg 7}) time. In this problem, we investigate a probabilistic method that uses only bit operations to achieve nearly as good a bound, but with some small chance of error.

* a.* Show that

If *a _{ij}* = 0, then let

31-2 Tridiagonal systems of linear equations

Consider the tridiagonal matrix

* a.* Find an LU decomposition of

* b.* Solve the equation

A practical method for interpolating a set of points with a curve is to use *cubic splines*. We are given a set {(

To ensure continuity of *f*(*x*), we require that

f(x_{i}) = f_{i}(0) = y_{i},

f(x_{i+1}) = f_{i}(1) = y_{i+1}

*b.* Use the continuity constraints on the second derivative to show that for *i* = 1, 2, . . . , *n* - 1,

D_{i-1} + 4D_{i} + D_{i+1} = 3(y_{i+1} - y_{i-1}).

2D_{0} + D_{1} = 3(y_{1} - y_{0}),

D_{n-1} + 2D_{n} = 3(y_{n} - y_{n-1}).
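The resulting system in the unknowns *D*_{0}, . . . , *D_{n}* is tridiagonal, so it can be solved in *O*(*n*) time by the standard Thomas algorithm rather than by general LUP decomposition. A sketch (Python; the sample *y*-values below are hypothetical):

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm: solve a tridiagonal system in O(n) time.
    sub[i] is the entry left of diag[i] (sub[0] unused), sup[i] the
    entry right of it (sup[-1] unused)."""
    n = len(diag)
    d, r = list(diag), list(rhs)
    for i in range(1, n):
        w = sub[i] / d[i - 1]
        d[i] -= w * sup[i - 1]
        r[i] -= w * r[i - 1]
    x = [0.0] * n
    x[n - 1] = r[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

# Spline system for points y_0..y_n (a hypothetical n = 3 example):
y = [0.0, 1.0, 4.0, 9.0]
n = len(y) - 1
diag = [2.0] + [4.0] * (n - 1) + [2.0]
sub = [0.0] + [1.0] * n
sup = [1.0] * n + [0.0]
rhs = ([3 * (y[1] - y[0])]
       + [3 * (y[i + 1] - y[i - 1]) for i in range(1, n)]
       + [3 * (y[n] - y[n - 1])])
D = solve_tridiagonal(sub, diag, sup, rhs)
assert len(D) == n + 1
# Spot-check the interior equations D_{i-1} + 4 D_i + D_{i+1} = 3(y_{i+1} - y_{i-1}).
for i in range(1, n):
    assert abs(D[i - 1] + 4 * D[i] + D[i + 1] - 3 * (y[i + 1] - y[i - 1])) < 1e-9
```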