
    Chapter 3

    MATLAB Implementations

For this exposition, the preceding methods were all implemented in MATLAB and tested with the same sets of data made publicly available by Netlib at ftp://netlib.att.com/netlib/master/readme.html. MATLAB was chosen deliberately for its use of matrices as the basic data structure. The many varied matrix operations these routines require become simple expressions in MATLAB, where they would otherwise be complicated, language-dependent routines.

Using MATLAB also has its limitations. MATLAB's scripting language is interpreted, often resulting in very high overhead, especially when looping structures circumvent MATLAB's optimization attempts. Comparing the running times of the different routines therefore becomes meaningless, so flops (floating-point operations, that is, scalar addition, subtraction, multiplication, division, and modulo) are employed instead. MATLAB maintains a count of the number of flops performed; by comparing the flop counts of these routines, a reasonable estimate of their true run-time complexity can be gained.

The various algorithms investigated were given a uniform interface to provide easy use and easy comparison (see Appendix). Each routine solves the following problem, beginning at the corner suggested by the basic feasible solution (BFS):

    minimize    c'x
    subject to  Ax = b,  x >= 0

Because linear programs can assume a variety of forms, the simplex.m routine was needed to translate the various possibilities into the form the routines expected. For instance, some linear programs require equality constraints to be met (Ax = b) while others ask only for inequality constraints (Ax <= b), and the two cases require different processing steps.

In the case of inequality constraints, slack variables -- variables adding no cost to the solution -- were used to create equality constraints, and the initial basic feasible solution consisted simply of these slack variables. In the case of equality constraints, however, an initial basic feasible solution needed to be found first. Following standard practice, the initial BFS was found by solving the same set of equations with the slack variables given positive costs and the original variables given zero cost. If this produced a solution with a minimized cost of zero, then all the slack variables had swapped out, and the solution became the initial basic feasible solution to the actual problem. If a nonzero minimum was found, then the original problem has no feasible solution.
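The phase-1 setup just described can be sketched as follows. This is an illustrative Python/NumPy fragment, not the thesis's simplex.m; the matrix A and vector b are made-up data rather than a Netlib problem:

```python
import numpy as np

# Toy system Ax = b, x >= 0 (made-up data, not a Netlib problem)
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([2.0, 0.0])
m, n = A.shape

# Augment with one artificial variable per row: the artificials get unit
# cost, the original variables zero cost.
A1 = np.hstack([A, np.eye(m)])
c1 = np.concatenate([np.zeros(n), np.ones(m)])

# The artificial columns alone form the initial basic feasible solution.
x1 = np.concatenate([np.zeros(n), b])
assert np.allclose(A1 @ x1, b)

# Here x = (1, 1) already satisfies Ax = b, so the phase-1 minimum is
# zero and an initial BFS for the original problem exists.
x_feas = np.concatenate([np.array([1.0, 1.0]), np.zeros(m)])
print(c1 @ x_feas)   # 0.0
```

A phase-1 cost of zero means every artificial variable has been driven out of the basis, leaving a BFS of the original problem; a nonzero minimum would certify infeasibility.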

    Revised Simplex Method

The MATLAB implementation of the Revised Simplex Method follows the exact steps listed in the previous table. MATLAB's matrix operations allow this code to be elegant and compact; the reduced-cost calculations hide behind the simple statement Binv = inv(B).

MATLAB's own inversion procedure is used, but it still remains as complex as before. Yet, since MATLAB has already been compiled and optimized for the particular platform, the actual running time of this routine ought to be very impressive even though the flop count may not be.
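A single iteration of the method can be sketched in Python/NumPy terms. The data below is a made-up example, not one of the Netlib problems, and the variable names are illustrative:

```python
import numpy as np

# One iteration of the revised simplex method on min c'x, Ax = b, x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

basis = [2, 3]                      # slack columns form the initial BFS
nonbasis = [0, 1]

B = A[:, basis]
Binv = np.linalg.inv(B)             # the step the text calls Binv = inv(B)

y = c[basis] @ Binv                 # simplex multipliers
reduced = c[nonbasis] - y @ A[:, nonbasis]   # reduced costs of nonbasic vars

enter = nonbasis[int(np.argmin(reduced))]    # most negative reduced cost
d = Binv @ A[:, enter]              # entering column in the current basis
xB = Binv @ b                       # current basic solution

# Ratio test: smallest xB[i]/d[i] over d[i] > 0 picks the leaving variable
ratios = np.where(d > 0, xB / d, np.inf)
leave = int(np.argmin(ratios))

print(enter, basis[leave], ratios[leave])   # -> 1 3 2.0
```

The entering variable, the leaving basic variable, and the step length are exactly the quantities the tabled steps produce each pass; only the representation of inv(B) differs among the methods that follow.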

    Bartels-Golub Method

The Bartels-Golub Method so closely parallels the Revised Simplex Method that the code looks nearly the same. Basis inversion is not used; the double backsolve is used instead. Algebraically, the results are identical. It should be noted, though, that the double backsolve is performed three times per iteration, whereas the matrix inversion was performed only once in the Revised Simplex Method. The extra calculations do not change the asymptotic behavior, but they do hamper the Bartels-Golub Method's slight time-complexity edge over the Revised Simplex Method.
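The double backsolve can be illustrated outside MATLAB. In this Python/NumPy sketch the factors L and U are hand-made for the example rather than produced by a factorization routine:

```python
import numpy as np

# With B = L U, solving B x = b costs one forward solve and one back
# solve (the "double backsolve") instead of forming inv(B).
L = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0]])
U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
B = L @ U
b = np.array([4.0, 11.0, 20.0])

def forward(L, b):
    # Solve L y = b for lower-triangular L
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward(U, y):
    # Solve U x = y for upper-triangular U
    n = len(y)
    x = np.zeros_like(y)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

x = backward(U, forward(L, b))
assert np.allclose(B @ x, b)   # same answer as inv(B) @ b, no inverse formed
```

Each triangular solve is O(n^2), so three double backsolves per iteration still beat recomputing a dense inverse, but not by as wide a margin as a single solve would.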

LU decomposition could be performed only once per iteration to determine the inverse of the basis; however, Bartels and Golub's original presumption was that the LU factors would be much less dense than the basis inverse, thus saving space in memory. A measurement of the actual space saved was attempted in order to justify this assumption.

    Sparse Bartels-Golub Method

The sparse Bartels-Golub Method contains an inner loop that provides a threshold for when the basis should be refactored. In this case, refactoring is based on the size of the eta_array, the list of eta-matrix factors constituting L.

The LU-factorization of the basis is made using the lu() routine built into MATLAB. First, the basis columns are permuted according to the MATLAB function colmmd() to preserve sparsity in the LU-factors. Following the factorization, the vectors Q and Qinv indicate the column permutation of the basis. Parray, which contains the row permutations of L throughout the process, has its first entry set to the permutation indicated by lu().

Because the lu() function permutes the rows of L to achieve numerical stability, the algorithm had to be slightly modified. Each time the upper-Hessenberg bump is factored, not only are there new L and U factors, but there is also a row permutation involved. Therefore, not only do the eta matrices need storing, but also the row permutations. In the algorithm presented, the eta_array flags where the next row permutation is supposed to occur with a row of zeros.

This last modification requires that control of the looping parameter be maintained by the algorithm rather than handled internally by MATLAB, so that the intermittent row permutations can be detected. Such a change may result in significant overhead in the MATLAB interpreter, so running time may prove to be longer.
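The eta-array mechanism can be sketched as follows. The Python/NumPy fragment below is an illustrative reconstruction, not the thesis code; the names eta_apply and eta_dense are invented for the example:

```python
import numpy as np

# An eta matrix is the identity with one column replaced.  Basis updates
# are stored as a list of such factors and applied to vectors in order,
# instead of refactoring L after every pivot.
def eta_apply(j, col, v):
    # Multiply the eta matrix (identity with column j set to `col`) by v
    w = v + col * v[j]
    w[j] = col[j] * v[j]
    return w

def eta_dense(j, col, n):
    # The same eta matrix in dense form, for checking
    E = np.eye(n)
    E[:, j] = col
    return E

eta_array = [(0, np.array([2.0, 1.0, 0.5])),
             (2, np.array([0.5, 1.0, 3.0]))]
v = np.array([1.0, 2.0, 3.0])

out = v
for j, col in eta_array:
    out = eta_apply(j, col, out)

dense = np.eye(3)
for j, col in eta_array:
    dense = eta_dense(j, col, 3) @ dense

# Applying the list one factor at a time matches the dense product; the
# sparse Bartels-Golub inner loop refactors the basis from scratch
# (emptying the list) once len(eta_array) crosses its threshold.
assert np.allclose(out, dense @ v)
```

Each stored factor is one sparse column, so the list stays cheap until it grows long enough that a fresh factorization is the better trade, which is exactly the threshold the inner loop enforces.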

    Forrest-Tomlin Method

The matrix U remains a row-permuted upper-triangular matrix throughout the algorithm. It is column-permuted to proper upper-triangular form when a new variable enters the basis, so that the new column lands at the far right, as shown in Figures 11, 12, and 13. This is the implementation suggested by Forrest and Tomlin in their original paper.


Figure 11: The corresponding row entries are removed. Figure 12: The matrix is kept as a row-permuted upper-triangular matrix. Figure 13: The proper upper-triangular form is used for calculations.

Calculation of the row factor in each iteration occurs during the update step. When the new column enters the basis, the row elements in the corresponding row need to be canceled, thus casting the diagonal entry of the incoming column as the bottom-most entry of the proper upper-triangular matrix. Canceling these row entries is accomplished by post-multiplying the non-diagonal entries by the inverse of U (rather, by forward-solving the row entries through U). The result, also a row as shown in Figure 14, has its entries negated to simulate the inverse of an eta matrix containing this row, and is then placed in R.

Figure 14: Formation of the Forrest-Tomlin elimination factor, where the row-eta factor is determined by forward-solving the leaving row's non-diagonal entries through U and negating the result.
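The cancellation performed by the row-eta factor can be checked numerically. In this Python/NumPy sketch, U is a made-up 4x4 example and row 0 plays the leaving row:

```python
import numpy as np

# Toy upper-triangular U; row 0 is the row whose entries must be canceled
U = np.array([[2.0, 1.0, 0.0, 3.0],
              [0.0, 4.0, 1.0, 0.0],
              [0.0, 0.0, 5.0, 2.0],
              [0.0, 0.0, 0.0, 6.0]])

u = U[0, 1:]                         # non-diagonal entries of the leaving row
w = np.linalg.solve(U[1:, 1:].T, u)  # forward solve: w such that w @ U_sub = u

# Row-eta matrix: identity with the negated solution placed in the leaving row
R = np.eye(4)
R[0, 1:] = -w

# Applying R cancels the off-diagonal entries in the leaving row of U
assert np.allclose((R @ U)[0, 1:], 0.0)
```

With the row entries zeroed, the diagonal entry of the incoming column can take its place at the bottom of the proper upper-triangular form, as the text describes.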

Reid's Method

Reid's Method is nearly identical to the Sparse Bartels-Golub Method. The extra step that Reid takes is simple in concept but complex to implement, so a separate routine was added: rotate. The difficulty lay in achieving an efficient implementation, because a linear search for each row and column singleton would take more time than it would otherwise save. Therefore, a linear search occurs only twice: once for column singletons and once for row singletons, both right at the beginning of the rotation step. As each singleton is eliminated, only the necessary rows and columns are checked for new singletons. So long as the matrices remain sparse, this process is far more efficient.

Implementation of Reid's Method becomes even more inefficient in MATLAB. The overhead necessary to interpret the code for each iteration of rotate is significant. The flop count should still indicate an improvement in calculation efficiency over the other methods, since the rotations involve integer manipulations rather than costly floating-point calculations. Only U and the permutation vectors are manipulated by this routine, so the Sparse Bartels-Golub code can continue without any extra modifications.
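The singleton search that rotate performs can be sketched as follows. This Python fragment is an illustrative reconstruction handling row singletons only, for brevity; it is not the thesis's rotate routine:

```python
import numpy as np

# Peel off rows with a single nonzero in the active submatrix, rechecking
# only the rows that touched a removed column (no repeated linear scans).
def singleton_order(pattern):
    n = pattern.shape[0]
    active_rows = set(range(n))
    active_cols = set(range(n))
    # One initial linear search for row singletons ...
    counts = {r: int(pattern[r].sum()) for r in active_rows}
    queue = [r for r in active_rows if counts[r] == 1]
    order = []
    while queue:
        r = queue.pop()
        if r not in active_rows:
            continue
        c = next(c for c in active_cols if pattern[r, c])
        order.append((r, c))
        active_rows.discard(r)
        active_cols.discard(c)
        # ... then only rows sharing the removed column are rechecked
        for r2 in list(active_rows):
            if pattern[r2, c]:
                counts[r2] -= 1
                if counts[r2] == 1:
                    queue.append(r2)
    return order

# A pattern that is a permuted triangle: every row eliminates in turn
P = np.array([[1, 1, 0],
              [0, 1, 0],
              [1, 1, 1]], dtype=bool)
print(len(singleton_order(P)))   # 3
```

Because each elimination touches only the rows sharing the removed column, the work stays proportional to the number of nonzeros, which is why the approach pays off only while the matrices remain sparse.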

Source: Chapter 3: MATLAB Implementations, http://www.cise.ufl.edu/research/sparse/Morgan/chap