Fundamentals of Matrix Computations
John Wiley & Sons, Aug 27, 2004 - 640 pages

A significantly revised and improved introduction to a critical aspect of scientific computation.

Matrix computations lie at the heart of most scientific computational tasks. For any scientist or engineer doing large-scale simulations, an understanding of the topic is essential. Fundamentals of Matrix Computations, Second Edition explains matrix computations and the accompanying theory clearly and in detail, along with useful insights.

This Second Edition of a popular text has been revised and improved to appeal to the needs of practicing scientists and graduate and advanced undergraduate students. New to this edition is the use of MATLAB for many of the exercises and examples, although the Fortran exercises of the First Edition have been kept for those who want to use them. This new edition includes:

* Numerous examples and exercises on applications including electrical circuits, elasticity (mass-spring systems), and simple partial differential equations
* Early introduction of the singular value decomposition
* A new chapter on iterative methods, including the powerful preconditioned conjugate-gradient method for solving symmetric, positive definite systems
* An introduction to new methods for solving large, sparse eigenvalue problems, including the popular implicitly restarted Arnoldi and Jacobi-Davidson methods

With in-depth discussions of such other topics as modern componentwise error analysis, reorthogonalization, and rank-one updates of the QR decomposition, Fundamentals of Matrix Computations, Second Edition will prove to be a versatile companion to novice and practicing mathematicians who seek mastery of matrix computation.
From inside the book
Results 1-5 of 75
Page 10
... suppose the cache is big enough to hold two matrix columns or rows. Computation of entry bij requires the ith row of A and the jth column of X. The time required to move these into cache is proportional to 2n, the number of data ...
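The cache argument in this excerpt is about data reuse: moving a row of A and a column of X into cache costs about 2n transfers per entry, so fast algorithms are organized so that data already in cache gets reused. A minimal NumPy sketch of the blocked-multiplication idea behind this (illustrative only; the book's own examples use MATLAB and Fortran):

```python
import numpy as np

def blocked_matmul(A, X, b=64):
    """Form B = A X in b-by-b blocks, so each pair of blocks can stay
    in cache while it is reused (an illustration; NumPy's @ is faster)."""
    n, k = A.shape
    k2, m = X.shape
    assert k == k2
    B = np.zeros((n, m))
    for i in range(0, n, b):
        for j in range(0, m, b):
            for p in range(0, k, b):
                # accumulate the (i, j) block from block products
                B[i:i+b, j:j+b] += A[i:i+b, p:p+b] @ X[p:p+b, j:j+b]
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((7, 5))
X = rng.standard_normal((5, 6))
assert np.allclose(blocked_matmul(A, X, b=2), A @ X)
```

Slicing handles ragged edge blocks automatically, so n need not be a multiple of the block size.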
Page 13
... Suppose the circuit is in an equilibrium state; all of the voltages and currents are constant. The four unknown nodal voltages x1, ..., x4 can be determined as follows. At each of the four nodes, the sum of the currents away from ...
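The excerpt describes nodal analysis: Kirchhoff's current law at each node yields one linear equation, so the four equilibrium voltages solve a 4-by-4 linear system. A NumPy sketch with a made-up symmetric resistor network (the conductance values here are hypothetical, not the book's circuit):

```python
import numpy as np

# Hypothetical 4-node resistor network with unit conductances;
# Kirchhoff's current law at each node gives one row of G x = b.
G = np.array([[ 3., -1., -1.,  0.],
              [-1.,  3.,  0., -1.],
              [-1.,  0.,  3., -1.],
              [ 0., -1., -1.,  3.]])
b = np.array([1., 0., 0., 0.])   # current injected at node 1
x = np.linalg.solve(G, b)        # the four nodal voltages x1, ..., x4
assert np.allclose(G @ x, b)
```

Conductance matrices like this one are symmetric and positive definite, which is why such circuit problems recur in the book's Cholesky and conjugate-gradient chapters.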
Page 26
... suppose b1 = 0. Then obviously y1 = 0 as well, and we do not need to make the computer do the computation y1 = b1/g11. In addition, all subsequent computations involving y1 can be skipped. Now suppose that b2 = 0 also. Then y2 ...
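The observation in this passage — leading zeros of b propagate to leading zeros of y — can be sketched as a forward-substitution routine that skips the zero rows entirely (a NumPy illustration, not the book's code):

```python
import numpy as np

def forward_substitute(G, b):
    """Solve G y = b for lower-triangular G. If b1 = ... = bk = 0,
    then y1 = ... = yk = 0, and those rows cost no arithmetic."""
    n = len(b)
    y = np.zeros(n)
    nz = np.flatnonzero(b)
    start = nz[0] if nz.size else n   # first nonzero entry of b
    for i in range(start, n):
        # earlier y's are all zero, so the dot product starts at `start`
        y[i] = (b[i] - G[i, start:i] @ y[start:i]) / G[i, i]
    return y

G = np.array([[2., 0., 0., 0.],
              [1., 3., 0., 0.],
              [0., 1., 4., 0.],
              [2., 0., 1., 5.]])
b = np.array([0., 0., 8., 3.])        # b1 = b2 = 0, so y1 = y2 = 0 for free
y = forward_substitute(G, b)
assert np.allclose(G @ y, b)
```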
Page 43
... Suppose A is positive definite, and let R be the Cholesky factor of A. Then R has leading principal submatrices Rj, j = 1, ..., n, which are upper triangular and have positive entries on the main diagonal. Exercise 1.4.35 By ...
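The property quoted here is easy to check numerically: each leading principal submatrix Rj of the Cholesky factor is itself the Cholesky factor of the corresponding leading submatrix of A. A NumPy check (note that np.linalg.cholesky returns the lower-triangular factor, so the upper-triangular R of the text is its transpose):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # positive definite by construction
R = np.linalg.cholesky(A).T        # upper-triangular factor, A = R^T R
for j in range(1, 6):
    Rj = R[:j, :j]                 # leading principal submatrix of R
    # Rj is the Cholesky factor of the leading j-by-j submatrix of A,
    # and its main-diagonal entries are positive
    assert np.allclose(Rj.T @ Rj, A[:j, :j])
    assert np.all(np.diag(Rj) > 0)
```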
Page 48
... suppose A11 is j × j and A22 is k × k. By Proposition 1.4.53, A11 is positive definite. Let R11 be the Cholesky factor of A11, let R12 = R11^(-T) A12, and let Ã22 = A22 − R12^T R12. The matrix Ã22 is called the ...
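The block step in this excerpt can be sketched numerically: factor A11, solve a triangular system for R12, and form the Schur complement Ã22, which is again positive definite, so the factorization can recurse on it. A NumPy illustration (not the book's code):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)          # positive definite, partitioned 2 + 4
j = 2
A11, A12 = A[:j, :j], A[:j, j:]
A22 = A[j:, j:]

R11 = np.linalg.cholesky(A11).T      # A11 = R11^T R11
R12 = np.linalg.solve(R11.T, A12)    # triangular solve: R11^T R12 = A12
S = A22 - R12.T @ R12                # the Schur complement of A11 in A
assert np.all(np.linalg.eigvalsh(S) > 0)   # again positive definite

# the blocks reassemble the full Cholesky factor of A
R22 = np.linalg.cholesky(S).T
R = np.block([[R11, R12],
              [np.zeros((6 - j, j)), R22]])
assert np.allclose(R.T @ R, A)
```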
Contents
1 Gaussian Elimination and Its Variants | 1 |
2 Sensitivity of Linear Systems | 111 |
3 The Least Squares Problem | 181 |
4 The Singular Value Decomposition | 261 |
5 Eigenvalues and Eigenvectors I | 289 |
6 Eigenvalues and Eigenvectors II | 413 |
7 Iterative Methods for Linear Systems | 521 |
Some Sources of Software for Matrix Computations | 603 |
References | 605 |
Index | 611 |
Index of MATLAB Terms | 617 |
Other editions - View all
Common terms and phrases
A₁ applied approximation arithmetic Arnoldi process back substitution block calculate Cholesky decomposition Cholesky factor Cholesky's method C^{n×n} coefficient matrix column computed condition number convergence defined denote diagonal matrix eigenvalue problem example Exercise flop count forward substitution Gauss-Seidel Gaussian elimination Gram-Schmidt process Hessenberg matrix ill conditioned inner product inverse iteration iterative methods Jacobi least squares problem linear system linearly independent main diagonal main-diagonal entries MATLAB multiply nonsingular nonzero entries norm normal O(u²) obtain operations orthogonal orthogonal matrix orthonormal partial pivoting perform perturbation polynomial positive definite preconditioner proof Prove q₁ QR algorithm QR decomposition QR step Rayleigh quotient iteration reflectors residual result R^{n×m} rotator roundoff errors satisfies Section sequence shift Show singular values solution span{v1 sparse sparse matrix steepest descent stored Suppose symmetric symmetric matrix system Ax Theorem tridiagonal unique unitary upper triangular v₁ vector zero
Popular passages
Page 530 - One can easily see that if A is symmetric, then it is positive definite if and only if all of its eigenvalues are positive.
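This equivalence is easy to test numerically (a NumPy check, not from the book):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)     # symmetric positive definite by construction
eig = np.linalg.eigvalsh(A)     # eigenvalues of a symmetric matrix
assert np.all(eig > 0)          # positive definite <=> all eigenvalues > 0
x = rng.standard_normal(4)
assert x @ A @ x > 0            # consistent with the definition x^T A x > 0
```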
Page 305 - λ is an eigenvalue of A if and only if it is a solution of det(λI − A) = 0.
Page 290 - ... SELF-INDUCTION. The magnetic field associated with an electric current cuts the conductor carrying the current. When the current changes, so does the magnetic field, resulting in an induced EMF (see induction, electromagnetic). This phenomenon is called self-induction. The induced EMF is proportional to the rate of change of the current, the constant of proportionality being called the coefficient of self-induction, or the self-inductance.
Page 193 - A (not necessarily square) produces an orthonormal matrix Q and an upper triangular matrix R such that A = QR.
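NumPy's qr computes exactly such a factorization; for a tall matrix the default "reduced" form gives a Q with orthonormal columns and a square upper-triangular R (an illustration, not the passage's own code):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))            # tall, not square
Q, R = np.linalg.qr(A)                     # reduced QR: Q is 5x3, R is 3x3
assert np.allclose(Q.T @ Q, np.eye(3))     # columns of Q are orthonormal
assert np.allclose(np.triu(R), R)          # R is upper triangular
assert np.allclose(Q @ R, A)               # A = QR
```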
Page 209 - [0 1 1 0 1; 0 0 0 1 0; 0 0 0 0 0] is in the row-reduced echelon form, but the matrix [0 0 0; 1 1 0; 0 0 1] is not. The column-reduced echelon form can be defined in an analogous way. It follows readily from the definition that the rank of a matrix is equal to the number of nonzero rows in its row-reduced echelon form. Example 36. Reduce the following matrix to row-reduced echelon form: [2 4 -2 2; 1 2 -3 0; 3 6 -4 3] Solution.
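The reduction asked for in the example can be sketched as a small Gauss-Jordan routine (a teaching illustration applied to the example's matrix as reconstructed above; computing rank this way is not numerically robust, a point the book itself makes in favor of the SVD):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce A to reduced row echelon form by Gauss-Jordan
    elimination with partial pivoting (teaching sketch only)."""
    A = A.astype(float).copy()
    m, n = A.shape
    r = 0                                   # current pivot row
    for c in range(n):
        p = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[p, c]) < tol:
            continue                        # no pivot in this column
        A[[r, p]] = A[[p, r]]               # swap the pivot row up
        A[r] /= A[r, c]                     # scale the pivot to 1
        for i in range(m):
            if i != r:
                A[i] -= A[i, c] * A[r]      # clear the rest of the column
        r += 1
        if r == m:
            break
    return A

A = np.array([[2, 4, -2, 2], [1, 2, -3, 0], [3, 6, -4, 3]])
R = rref(A)
assert np.allclose(R, [[1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
```

The result has three nonzero rows, so the example's matrix has rank 3, matching the passage's characterization of rank.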
Page 213 - Recall that the rank of a matrix is the number of linearly independent columns or rows in that matrix.
Page 340 - A is unitarily similar to a diagonal matrix if and only if it is normal.
Page 6 - cij is the dot product of the ith row of A with the jth column of B.
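The entrywise definition quoted here translates directly into two nested loops (a NumPy illustration; in practice one simply writes A @ B):

```python
import numpy as np

def matmul_entrywise(A, B):
    """c_ij is the dot product of the ith row of A with the jth column of B."""
    n, m = A.shape[0], B.shape[1]
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = A[i, :] @ B[:, j]
    return C

rng = np.random.default_rng(5)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((3, 5))
assert np.allclose(matmul_entrywise(A, B), A @ B)
```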
Page 79 - Standard Gaussian elimination is equivalent to factoring the matrix A as A = LU, where L is lower triangular and U is upper triangular. In actual computations, these factors are explicitly constructed. The main problem in sparse matrix computations is that the factors of A are often a good deal less sparse than A, which makes solving expensive.
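The fill-in problem mentioned here is easy to demonstrate: an "arrowhead" matrix is very sparse, but eliminating its first column makes the factors completely dense. A NumPy sketch of LU without pivoting (the arrowhead matrix is a standard illustrative example, not the book's):

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization A = LU without pivoting (sketch;
    assumes all leading principal minors are nonzero)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        L[k+1:, k] = U[k+1:, k] / U[k, k]            # multipliers
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])  # eliminate below pivot
    return L, np.triu(U)

# Sparse arrowhead matrix: nonzeros only on the diagonal, first row,
# and first column -- yet its LU factors fill in completely.
n = 6
A = 4.0 * np.eye(n)
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = 4.0
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
# far more nonzeros in the factors than in A itself
assert np.count_nonzero(A) < np.count_nonzero(np.abs(L) + np.abs(U) > 1e-12)
```

Reordering the rows and columns (eliminating the dense row and column last) avoids the fill-in entirely, which is the kind of trick sparse solvers automate.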