Fundamentals of Matrix Computations
John Wiley & Sons, Aug 27, 2004 - 640 pages

A significantly revised and improved introduction to a critical aspect of scientific computation. Matrix computations lie at the heart of most scientific computational tasks. For any scientist or engineer doing large-scale simulations, an understanding of the topic is essential. Fundamentals of Matrix Computations, Second Edition explains matrix computations and the accompanying theory clearly and in detail, along with useful insights.

This Second Edition of a popular text has been revised and improved to meet the needs of practicing scientists and of graduate and advanced undergraduate students. New to this edition is the use of MATLAB for many of the exercises and examples, although the Fortran exercises from the First Edition have been retained for those who want to use them. This new edition includes:

* Numerous examples and exercises on applications including electrical circuits, elasticity (mass-spring systems), and simple partial differential equations
* Early introduction of the singular value decomposition
* A new chapter on iterative methods, including the powerful preconditioned conjugate-gradient method for solving symmetric, positive definite systems
* An introduction to new methods for solving large, sparse eigenvalue problems, including the popular implicitly restarted Arnoldi and Jacobi-Davidson methods

With in-depth discussions of such other topics as modern componentwise error analysis, reorthogonalization, and rank-one updates of the QR decomposition, Fundamentals of Matrix Computations, Second Edition will prove to be a versatile companion to novice and practicing mathematicians who seek mastery of matrix computation.

Contents
1 | |
2 Sensitivity of Linear Systems | 111 |
3 The Least Squares Problem | 181 |
4 The Singular Value Decomposition | 261 |
5 Eigenvalues and Eigenvectors I | 289 |
6 Eigenvalues and Eigenvectors II | 413 |
7 Iterative Methods for Linear Systems | 521 |
Some Sources of Software for Matrix Computations | 603 |
References | 605 |
Index | 611 |
Index of MATLAB Terms | 617 |
Common terms and phrases
A₁ applied approximation arithmetic Arnoldi process back substitution block calculate Cholesky decomposition Cholesky factor Cholesky's method ℂⁿˣⁿ coefficient matrix column computed condition number convergence defined denote diagonal matrix eigenvalue problem example Exercise flop count forward substitution Gauss-Seidel Gaussian elimination Gram-Schmidt process Hessenberg matrix ill conditioned inner product inverse iteration iterative methods Jacobi least squares problem linear system linearly independent main diagonal main-diagonal entries MATLAB multiply nonsingular nonzero entries norm normal O(u²) obtain operations orthogonal orthogonal matrix orthonormal partial pivoting perform perturbation polynomial positive definite preconditioner proof Prove q₁ QR algorithm QR decomposition QR step Rayleigh quotient iteration reflectors residual result ℝⁿˣᵐ rotator roundoff errors satisfies Section sequence shift Show singular values solution span{v₁ sparse sparse matrix steepest descent stored Suppose symmetric symmetric matrix system Ax Theorem tridiagonal unique unitary upper triangular v₁ vector zero
Popular passages
Page 530 - One can easily see that if A is symmetric, then it is positive definite if and only if all of its eigenvalues are positive.
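The equivalence quoted above can be checked numerically. Below is a minimal pure-Python sketch for the symmetric 2×2 case, computing the eigenvalues directly from the characteristic polynomial; the function names are illustrative and not from the book.

```python
import math

def eig2_symmetric(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]], from the
    characteristic polynomial l^2 - (a + d) l + (a d - b^2) = 0."""
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)  # nonnegative for symmetric matrices
    return (tr - disc) / 2.0, (tr + disc) / 2.0

def is_positive_definite(a, b, d):
    """A symmetric matrix is positive definite iff all eigenvalues are > 0."""
    lo, hi = eig2_symmetric(a, b, d)
    return lo > 0 and hi > 0
```

For example, [[2, 1], [1, 2]] has eigenvalues 1 and 3 and is positive definite, while [[1, 2], [2, 1]] has a negative eigenvalue and is not.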
Page 305 - λ is an eigenvalue of A if and only if it is a solution of det(λI − A) = 0.
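For a 2×2 matrix this determinant expands to λ² − trace(A)·λ + det(A), which is zero exactly at the eigenvalues. A short illustrative check (not code from the book):

```python
def char_poly_2x2(A, lam):
    """Evaluate det(lam*I - A) for a 2x2 matrix A, using the expansion
    lam^2 - trace(A)*lam + det(A)."""
    (a, b), (c, d) = A
    return lam * lam - (a + d) * lam + (a * d - b * c)
```

For A = [[2, 1], [1, 2]] the polynomial vanishes at λ = 1 and λ = 3 (the eigenvalues) and is nonzero elsewhere.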
Page 290 - ... SELF-INDUCTION. The magnetic field associated with an electric current cuts the conductor carrying the current. When the current changes, so does the magnetic field, resulting in an induced EMF (See induction, electromagnetic.) This phenomenon is called self-induction. The induced EMF is proportional to the rate of change of the current, the constant of proportionality being called the coefficient of self-induction, or the self-inductance.
Page 193 - A (not necessarily square) produces an orthonormal matrix Q and an upper triangular matrix R such that A = QR.
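One classical way to construct such a factorization is the Gram-Schmidt process, which the book covers; the book's own examples use MATLAB, so the following is only a pure-Python sketch for a matrix given as a list of linearly independent columns.

```python
import math

def gram_schmidt_qr(cols):
    """Classical Gram-Schmidt QR of a matrix given as a list of columns.
    Returns (Q_cols, R) with Q's columns orthonormal and R upper triangular,
    so that A = QR. Assumes the columns are linearly independent."""
    n = len(cols)
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        v = list(a)
        for i, q in enumerate(Q):
            # r_ij = q_i^T a_j, then subtract the projection onto q_i
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))
        Q.append([vk / R[j][j] for vk in v])
    return Q, R
```

For the columns [3, 4] and [1, 2] this yields R = [[5, 2.2], [0, 0.4]] with orthonormal Q columns. In floating-point practice the book's later discussion of reorthogonalization matters: classical Gram-Schmidt can lose orthogonality, which is why reflector-based QR is usually preferred.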
Page 209 - The matrix
[0 1 1 0 1]
[0 0 0 1 0]
[0 0 0 0 0]
is in the row-reduced echelon form, but the matrix
[0 0 0]
[1 1 0]
[0 0 1]
is not. The column-reduced echelon form can be defined in an analogous way. It follows readily from the definition that the rank of a matrix is equal to the number of nonzero rows in its row-reduced echelon form. Example 36. Reduce the following matrix to the row-reduced echelon form:
[2 4 -2 2]
[1 2 -3 0]
[3 6 -4 3]
Solution.
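Example 36 can be worked mechanically by Gauss-Jordan elimination. A small sketch (illustrative, not the book's code) using exact rational arithmetic so no roundoff obscures the result:

```python
from fractions import Fraction

def rref(M):
    """Row-reduced echelon form via Gauss-Jordan elimination,
    computed in exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        A[r] = [x / A[r][c] for x in A[r]]        # scale pivot row to 1
        for i in range(rows):
            if i != r and A[i][c] != 0:           # clear the rest of column c
                A[i] = [x - A[i][c] * y for x, y in zip(A[i], A[r])]
        r += 1
    return A
```

Applied to the matrix of Example 36, this gives rows [1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]; three nonzero rows, so the rank is 3, consistent with the rank characterization quoted above.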
Page 213 - Recall that the rank of a matrix is the number of linearly independent columns or rows in that matrix.
Page 340 - A is unitarily similar to a diagonal matrix if and only if it is normal.
Page 6 - ... the (i, j) entry of C is the dot product of the ith row of A with the jth column of B.
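That entrywise definition of the product C = AB translates directly into code; a minimal sketch (illustrative, not from the book):

```python
def matmul(A, B):
    """C = A*B for matrices stored as lists of rows: entry C[i][j] is the
    dot product of row i of A with column j of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```

For example, matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) gives [[19, 22], [43, 50]]. The book's flop counts for this triple loop (roughly 2mnp operations for an m×n times n×p product) follow directly from the three nested loops.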
Page 79 - Standard Gaussian elimination is equivalent to factoring the matrix A as A = LU, where L is lower triangular and U is upper triangular. In actual computations, these factors are explicitly constructed. The main problem in sparse matrix computations is that the factors of A are often a good deal less sparse than A, which makes solving expensive.
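The factorization the passage refers to can be sketched as a dense Doolittle LU decomposition; this toy version ignores both the pivoting and the sparsity issues the book discusses, and is illustrative rather than the book's algorithm.

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting: A = L*U with L unit
    lower triangular and U upper triangular. Assumes no zero pivot arises
    (real codes use partial pivoting, as the book explains)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]          # multiplier for row i
            U[i] = [uij - L[i][k] * ukj for uij, ukj in zip(U[i], U[k])]
    return L, U
```

For A = [[2, 1], [6, 4]] this gives L = [[1, 0], [3, 1]] and U = [[2, 1], [0, 1]]. The fill-in problem mentioned above shows up in this loop: subtracting a multiple of row k can turn zero entries of a sparse row i into nonzeros.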