


130 BOOK REVIEWS

encountered in practice. The choice of the family in (1) is rarely an easy one, i.e., "model selection" is a problem requiring study. Further, in the last paragraph, it is unusual to know all the variances σ_i^2. More commonly σ_i^2 = σ^2/w_i, where the w_i are known but σ^2 is not. Here σ^2 is called a "nuisance parameter." Hence (1) needs to be generalized. Further, the "link" function g in (2) has to be chosen. Finally, one needs sound numerical algorithms. If one adds, to all these general questions, the fact that many of the special cases occur so frequently that it is worthwhile working out all the formulae and details for them, the size of this literature is less surprising. McC&N try to cover all this field briefly and informally, so this necessarily leads to some vagueness and to some dense writing, both of which make the book tough reading for the outsider.

Thus Dobson nicely reorganizes material known to every statistician into the general linear model format, in a book readable by virtually anyone with some applied mathematical maturity. By contrast, McCullagh and Nelder have provided a unique book for specialists which knits together the mass of theory and detail in the literature related to these models, explicitly or implicitly. Their book would make an excellent text for a second-year graduate course for statistics students and will become the standard reference in this literature. Thus both books can be heartily recommended, but to different audiences.

G. S. WATSON
Princeton University

Sparse Matrix Technology. By SERGIO PISSANETSKY. Academic Press, London, 1984. xiv + 321 pp. $55.00. ISBN 0-12-557580-7.

This book represents a systematic and thorough presentation of sparse matrix technology. Except for a brief treatment of the Gauss-Seidel algorithm, the book concerns direct methods for solving equations and for calculating eigenvalues and eigenvectors for the standard eigenvalue problem as well as the generalized eigenvalue problem.

A common approach to data files for sparse matrices is to store the successive nonzeros as a single vector, in either row order or column order. This numerical vector is accompanied by a pair of integer vectors which fix the (i, j) for each nonzero A_ij. If A is positive definite symmetric, then often the diagonal elements are stored as a separate vector and only the strictly upper-triangular nonzeros are stored in the other numerical vector. Linked lists are discussed together with unordered lists.
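For concreteness, the coordinate scheme just described, and the diagonal/strict-upper split for the symmetric case, can be sketched in a few lines of Python. This is the reviewer-style illustration of the idea only, not code from the book:

```python
# Sketch of coordinate (i, j, value) storage: one vector of nonzero values
# plus a pair of integer vectors giving the row and column of each nonzero.
def to_coordinate(dense):
    """Return (values, rows, cols) for the nonzeros of a dense 2-D list."""
    values, rows, cols = [], [], []
    for i, row in enumerate(dense):
        for j, a in enumerate(row):
            if a != 0:
                values.append(a)
                rows.append(i)
                cols.append(j)
    return values, rows, cols

def split_symmetric(dense):
    """For a symmetric matrix: diagonal as its own vector, plus the
    strictly upper-triangular nonzeros as (i, j, value) triples."""
    n = len(dense)
    diag = [dense[i][i] for i in range(n)]
    upper = [(i, j, dense[i][j])
             for i in range(n) for j in range(i + 1, n)
             if dense[i][j] != 0]
    return diag, upper

A = [[4.0, 0.0, 2.0],
     [0.0, 3.0, 0.0],
     [2.0, 0.0, 5.0]]
values, rows, cols = to_coordinate(A)
diag, upper = split_symmetric(A)
```

The symmetric split stores each off-diagonal nonzero once instead of twice, which is the point of the scheme the text describes.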

In fields such as structural mechanics, band matrices play a big role, and algorithms have been developed which yield an ordering with minimal bandwidth. Here A_ij = 0 for |i - j| > β, and the outer diagonals contain at least one nonzero. A band matrix is normally stored by diagonals for |i - j| <= β. A generalization of the band matrix is a profile matrix, that is, A_ij = 0 for j < j(i). One stores and processes the zero elements inside the profile.
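The storage-by-diagonals idea can be made concrete in Python. In this sketch (an illustration under the definitions above, not the book's code) each diagonal d = j - i with |d| <= β is kept as its own vector:

```python
# Store an n x n band matrix with half-bandwidth beta by diagonals:
# diagonal d holds the entries dense[i][i + d] for all valid i.
def store_by_diagonals(dense, beta):
    n = len(dense)
    diags = {}
    for d in range(-beta, beta + 1):
        diags[d] = [dense[i][i + d] for i in range(n) if 0 <= i + d < n]
    return diags

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
diags = store_by_diagonals(A, beta=1)
```

Note that zeros inside the band are stored too, just as zeros inside a profile are stored in the profile scheme.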

There is a chapter on solving linear algebraic equations. This covers Gauss elimination, Gauss-Jordan elimination, and Cholesky factorization of a symmetric positive definite matrix. After this chapter is one on numerical errors in Gauss elimination. Threshold pivoting is discussed, and this type of pivoting is important for matrices which are neither positive definite symmetric nor diagonally dominant. With the threshold approach one can balance numerical stability and reasonable fill in the matrix factorization.
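The threshold test usually takes the following form (my paraphrase of the standard criterion, not a quotation of the book's): in column k, a candidate pivot in row r is acceptable if |A_rk| >= u * max_i |A_ik|, with a threshold parameter 0 < u <= 1. Small u leaves more candidate rows, so one can pick the candidate producing the least fill; u = 1 recovers ordinary partial pivoting.

```python
# Threshold-pivoting acceptance test for one column of the active submatrix.
# u = 1.0 gives partial pivoting (only the largest entry qualifies);
# smaller u admits more candidates, among which a low-fill row is chosen.
def acceptable_pivots(column, u):
    """Return indices of nonzero entries passing the threshold test."""
    biggest = max(abs(a) for a in column)
    return [i for i, a in enumerate(column)
            if a != 0 and abs(a) >= u * biggest]

col = [0.5, -4.0, 1.5, 0.0]
strict = acceptable_pivots(col, 1.0)   # partial pivoting: largest only
relaxed = acceptable_pivots(col, 0.25)  # stability/fill trade-off
```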

There are two major chapters devoted to ordering matrices for Gauss elimination.


The first chapter deals with the case where A is positive definite symmetric. The ordering strategies are analyzed in terms of undirected graphs. The same graph applies to all matrices PAP^T where P is a permutation matrix. Concepts such as pseudo-peripheral vertices, reachable sets, quotient graphs, and level structures are analyzed. Some of the ordering procedures are bandwidth or profile reduction, the minimum degree algorithm, the refined quotient tree algorithm, and nested dissection. A special discussion is devoted to finite element problems.
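The graph in question has an edge {i, j} for each off-diagonal nonzero A_ij of the symmetric pattern, and it is unchanged by any symmetric permutation PAP^T. A minimal sketch (my illustration, with a hypothetical `adjacency` helper, not the book's implementation) of building it and taking the first step of the minimum degree heuristic:

```python
# Build the undirected adjacency graph of a symmetric sparsity pattern.
# pattern: iterable of (i, j) nonzero positions; self-loops (diagonal) are
# ignored since they do not affect the elimination ordering.
def adjacency(pattern):
    adj = {}
    for i, j in pattern:
        if i != j:
            adj.setdefault(i, set()).add(j)
            adj.setdefault(j, set()).add(i)
    return adj

# The minimum degree heuristic eliminates a vertex of least degree first.
def min_degree_vertex(adj):
    return min(adj, key=lambda v: len(adj[v]))

pattern = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (0, 3), (3, 3)]
adj = adjacency(pattern)
```

A full minimum degree code would also add fill edges among the eliminated vertex's neighbors before repeating; this sketch shows only the degree-based selection.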

The second chapter on ordering concerns general matrices. A good deal of discussion concerns determining whether A is reducible. This is treated in two steps: (1) find a reordering which yields a nonzero diagonal, and (2) determine the strong components of the directed graph.
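Step (2) can be sketched as follows. Once the diagonal is nonzero, the matrix is reducible exactly when the directed graph with an edge i -> j for each off-diagonal nonzero A_ij has more than one strong component. This illustration tests mutual reachability directly, which is fine for a sketch; production codes use a linear-time algorithm such as Tarjan's:

```python
# Vertices reachable from `start` in a directed graph (adjacency-list dict).
def reachable(adj, start):
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj.get(v, ()))
    return seen

# Strong components: v and w are in the same component iff each reaches
# the other.  Quadratic in the number of vertices -- illustration only.
def strong_components(adj, n):
    reach = [reachable(adj, v) for v in range(n)]
    comps = []
    for v in range(n):
        comp = frozenset(w for w in range(n)
                         if w in reach[v] and v in reach[w])
        if comp not in comps:
            comps.append(comp)
    return comps

# 0 and 1 form a cycle; 2 is reachable from them but not back:
# more than one strong component, so the matrix is reducible.
adj = {0: [1], 1: [0, 2], 2: []}
comps = strong_components(adj, 3)
```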

The chapter on eigenvalue problems for sparse matrices covers the Rayleigh quotient, Sturm sequence calculations, reduction to tridiagonal and Hessenberg matrices, inverse iteration, invariant subspaces, simultaneous iteration, and the Lanczos algorithms.

There is a chapter on sparse matrix algebra which gives program cores for many of the algorithms.

The last two chapters are on connectivity and nodal assembly for finite element problems, and then general purpose algorithms.

RALPH A. WILLOUGHBY
IBM T. J. Watson Research Center

A Treatise on Generating Functions. By H. M. SRIVASTAVA and H. L. MANOCHA. Ellis Horwood Limited, Chichester (distributed by Halsted Press: a division of John Wiley & Sons, New York), 1984. 569 pp. $89.95. ISBN 0-85312-508-2, Ellis Horwood; ISBN 0-470-20010-3, Halsted Press. A volume in the Ellis Horwood Series "Mathematics and Its Applications."

Any aficionado of the subject will recognize that in the preface to this mammoth work the authors are running a bit of a scam:

Generating functions play an important role in the investigation of various useful properties of the sequences which they generate. They are used as Z-transforms in solving certain classes of difference equations which arise in a wide variety of problems in Operations Research (including, for example, queueing theory and related processes).... The existence of a generating function may be useful for [evaluating sums] by such summability methods as those due to Abel and Cesàro.

They then declare that the book is designed so that it could be used as a textbook in a beginning graduate course on special functions. Perhaps. I've had the idée fixe that a coherent graduate course in complex analysis could be organized entirely around the theory and applications of the Mellin transform; this year I even ran such a course. However, was it really for the benefit of the students that I took such an approach, or for mine? The reader knows. If the word scam is too strong, let us say that the authors have made a too generous appraisal of the usefulness of the book and of the significance of the field. Having myself written a book on summability theory, I greeted with good-natured scepticism their assessment of the importance of generating functions in that subject.

It seems that we who work in a given area often become partners in a conspiracy: we agree not to notice when overly enthusiastic claims are made of the field's significance in the wider scheme of things. After all, if the field is diminished, so are we. We may want to write a book one day. And mathematicians, who are inordinately sensitive
