The Design and Analysis of BucketSort for Bubble Memory



The Design and Analysis of BucketSort for Bubble Memory Secondary Storage


Abstract- BucketSort is a new external sorting algorithm for very large files that is a substantial improvement over merge sorting with disks. BucketSort requires an associative secondary storage device, which can be realized by large disk drives with logic-per-track capabilities or by magnetic bubble memory (MBM). This paper describes and analyzes a hypothetical BucketSort implementation that uses bubble memory. A new software marking technique is introduced that reduces the effective time for an associative search.

One of the important features of BucketSort is that the algorithm is randomized; there are no worst case input files. For purposes of analysis, no assumptions are made about the distribution of the key values or about the initial order of the keys. Two examples are studied for purposes of comparing BucketSort to conventional merge-based methods. The sorting times for the two examples using an optimized merge sort with disk storage are about 1 h and 10-15 h, respectively; BucketSort needs only about 5-10 min and 1-2 h. The sorting time can be reduced further by using faster I/O channels if the sort is I/O-bounded, or by using a vector processor if the sort is CPU-bounded. BucketSort can also be used effectively in database systems.

Index Terms - Analysis of algorithms, associative memory, database systems, external sorting, magnetic bubble memory, probabilistic algorithms, secondary storage, vector processing.


THE predominant computer application today is sorting. Ninety-seven percent of all IBM customers have installed a sort package. In his classic text on sorting and searching of 12 years ago, Knuth points out: "Computer manufacturers estimate that over 25 percent of the running time on their computers is currently being spent on sorting, when all their customers are taken into account. There are many installations in which sorting uses more than half of the computing time" [9]. Several studies made by IBM confirm that these observations have continued to be true despite the subsequent flurry of database research and development.

The main difference between the sorting jobs ten years ago and the ones today is that the sizes of the files sorted, the lengths of the records, and the lengths of the keys have all increased dramatically. This tendency is expected to continue

Manuscript received January 19, 1984; revised August 7, 1984.
E. E. Lindstrom is with the IBM Palo Alto Scientific Center, Palo Alto, CA 94304.
J. S. Vitter is with the Department of Computer Science, Brown University, Providence, RI 02912.

in the future. For example, major banks currently sort large files (on the order of several hundred Mbytes) once each night in order to process their demand deposit accounts, and by law they must complete that processing before the opening of the next business day. These sorts frequently take 2 h or more using conventional merge-based methods. Within this decade, the files to be sorted are expected to be 12 times larger; even with faster disks, an optimized merge sort would then take 10-15 h!

The BucketSort algorithm, which was introduced in [13], is a new external distribution sorting method for very large files that significantly outperforms the current methods. BucketSort requires an associative secondary storage device, which can be approximated by large disk drives with logic-per-track capabilities or by magnetic bubble memory (MBM). A disk implementation was considered in [13], and subsequent modifications appear in [8]. This paper gives the design and analysis of a hypothetical BucketSort implementation that uses MBM. An earlier version appears in [16].

For purposes of notation, we call each package of information a record; each record contains a special field called the key. The job of a sorting algorithm is to rearrange the original file so that the records are ordered by their key values. We assume that the file is much larger than the internal memory size and must be sorted externally.

The BucketSort algorithm is randomized: it takes random actions during the course of execution, and hence its running time for any given input file is a random variable. An important feature of BucketSort is that the distribution of the running time is independent of the initial order of the keys and of the distribution of the key values. There are no worst case input files. The algorithm consists of the following three phases.

1) Sample Phase: The key values are randomly sampled, and the sampled keys are sorted internally. The number of sampled keys is an important parameter in BucketSort.

2) Bucket Formation Phase: In one pass through the file, counts are taken of how many records belong to each range (minibucket) defined by the sorted sample. By use of these counts, contiguous minibucket ranges are combined to get larger bucket ranges, each of which contains roughly the number of records that can fit in internal memory.

3) Internal Sort Phase: For each bucket range (in the order of increasing key values), an associative search through the file is made in order to retrieve into internal memory all the records whose key values lie within the range; the records are then sorted internally and appended to the output file.

0018-9340/85/0300-0218$01.00 © 1985 IEEE

BucketSort is an example of an external distribution sorting algorithm. In the next section, we see why distribution sorting methods are inferior to merge sort when conventional magnetic disk storage is used. The success of BucketSort is due to three factors: associative secondary storage, randomization in the algorithm, and the marking technique discussed in Section VII.

A complete description of the BucketSort algorithm appears in Section III. Section IV explains how to set two important parameters in BucketSort: the size of the sample taken during the Sample Phase and the minimum internal memory size. Section V lists several modifications that can improve the time or space efficiency of BucketSort. In Section VI, we discuss how to implement associative secondary storage by use of magnetic bubble memory. The time per associative search is on the order of 1-2 s. Section VII describes a software marking technique that can reduce the time for each bucket retrieval during the Internal Sort Phase to only a fraction of a second.
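The three phases described above can be illustrated with a toy in-memory simulation. The sketch below (Python, not the authors' MBM implementation) replaces the associative search with a linear scan and uses made-up parameter names (`mem_capacity`, `sample_size`) standing in for the internal memory size and the Sample Phase parameter.

```python
import bisect
import random

def bucketsort(records, key, mem_capacity, sample_size):
    """Toy simulation of the three BucketSort phases."""
    # 1) Sample Phase: randomly sample keys; sort the sample internally.
    sample = sorted(key(r) for r in random.sample(records, sample_size))

    # 2) Bucket Formation Phase: one pass to count the records falling in
    # each minibucket (the ranges defined by consecutive sample keys)...
    counts = [0] * (sample_size + 1)
    for r in records:
        counts[bisect.bisect_right(sample, key(r))] += 1
    # ...then combine contiguous minibuckets into buckets holding at most
    # mem_capacity records each (a lone oversized minibucket stays whole).
    cuts, total = [], 0
    for i, c in enumerate(counts):
        if total + c > mem_capacity and total > 0:
            cuts.append(i)
            total = 0
        total += c
    cuts.append(len(counts))

    # 3) Internal Sort Phase: for each bucket range, in increasing key
    # order, retrieve its records (a linear scan stands in for the
    # associative search), sort them internally, append to the output.
    output, lo = [], 0
    for hi in cuts:
        batch = [r for r in records
                 if lo <= bisect.bisect_right(sample, key(r)) < hi]
        output.extend(sorted(batch, key=key))
        lo = hi
    return output
```

Because the buckets are processed in increasing key order, concatenating the internally sorted batches yields a fully sorted file without any merging.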

In Section VIII, we list the file sizes and internal memory requirements for two very large sorting examples. The first example is typical of the large sorts done once each night by major banks; the second represents the projected file sizes for the large sorts expected within this decade. In Section IX, we estimate the execution times of BucketSort for the two examples and compare them to an optimized merge sort. BucketSort is approximately 10 times faster. The running time of BucketSort for these examples is ~I/C, where I is the size of the input file and C is the I/O channel speed. If the input file does not initially reside in the MBM, the sorting time increases to ~2I/C. In both examples, the sorts are I/O bounded. The calculations in Sections VIII and IX are based upon the formulas that are derived in the Appendixes.

Since this sorting method is partition based, the external nature of the BucketSort algorithm is almost independent of the internal sorting of each bucket. If the input file size gets extremely large (much larger than those considered in Section VIII, for example), the bottleneck will be the CPU. The sorting time in this unlikely case is ~c(I + (I/(2R)) ln(IK/R²)) for key size K and some very small fraction c > 0. Section X discusses ways to speed up the Internal Sort Phase (when the CPU turns out to be the bottleneck) by using more internal memory and a vector (or array) processor. Section XI discusses the use of associative secondary storage MBM for storing relational databases. BucketSort can be viewed as one of the operations in such a database. In Section XII, we summarize our results and draw conclusions.
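The two running-time estimates above are easy to evaluate directly. The helpers below simply restate the formulas; the figures plugged in are hypothetical, chosen only to illustrate the magnitudes (the paper's own example figures are given in Sections VIII and IX).

```python
import math

def io_bound_time(I, C, resident_in_mbm=True):
    """Approximate sort time (s) in the I/O-bounded case: ~I/C when the
    input already resides in the MBM, ~2I/C when it must be loaded first,
    for input size I (bytes) and I/O channel speed C (bytes/s)."""
    return I / C if resident_in_mbm else 2 * I / C

def cpu_bound_time(I, R, K, c):
    """Approximate sort time (s) in the unlikely CPU-bounded case:
    ~c * (I + (I/(2R)) * ln(I*K/R^2)), for record size R (bytes),
    key size K (bytes), and a very small machine-dependent fraction c."""
    return c * (I + (I / (2 * R)) * math.log(I * K / R ** 2))

# Hypothetical figures for illustration (not the paper's examples):
# a 600-Mbyte file over a 3-Mbyte/s channel, already resident in the MBM.
t = io_bound_time(600e6, 3e6)   # 200 s, i.e. a few minutes
```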


We shall use the following notation to describe the parameters to an external sort:

N = number of records;
M = internal memory size (bytes);
R = record size (bytes);
K = size of the key (bytes).
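To get a feel for the magnitudes involved, these parameters combine directly into a few derived quantities. The values below are hypothetical, chosen for illustration only, not taken from the paper's examples.

```python
# Hypothetical parameter values (illustration only, not the paper's examples).
N = 5_000_000   # number of records
M = 8_000_000   # internal memory size (bytes)
R = 100         # record size (bytes)
K = 10          # key size (bytes)

records_per_load = M // R        # records that fit in internal memory at once
approx_buckets   = (N * R) // M  # each bucket holds roughly one memory load
file_size        = N * R         # total input size I (bytes)

# records_per_load = 80000, approx_buckets = 62, file_size = 500 Mbytes
```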

The best external sorting methods currently used for conventional disk drives are the merge-based methods, which operate by generating several initial sorted strings (or runs) using replacement selection and then repeatedly merging strings until only one string remains. Another class of methods, referred to as radix or distribution sorting, is shown in [9] to be essentially the "reverse" of the merge-based methods. That is, each distribution pattern corresponds to a merge pattern, and vice versa. However, the merge-based sorting methods are superior to the distribution sorting methods on current computer systems because the initial strings produced by replacement selection tend to contain about 2M/R records each, which is twice as many records as can fit in internal memory at one time. Distribution methods do not have an analogous shortcut that would permit internal sorts of more than one memory load at a time.
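The 2M/R figure comes from how replacement selection works: memory holds one load of records in a heap, and each incoming record whose key is not smaller than the last record output can still join the current run; only smaller records are deferred ("frozen") until the next run. A minimal sketch, assuming list-of-comparables input (illustrative, not tied to any particular sort package):

```python
import heapq

def replacement_selection(records, capacity):
    """Generate initial sorted runs using a heap of `capacity` records
    (one memory load). Records too small to join the current run are
    frozen until the next run, so memory always holds `capacity` records;
    on random input the runs average about 2 * capacity records."""
    it = iter(records)
    heap = [r for _, r in zip(range(capacity), it)]  # fill memory
    heapq.heapify(heap)
    runs, run, frozen = [], [], []
    for incoming in it:
        smallest = heapq.heappop(heap)
        run.append(smallest)
        if incoming >= smallest:
            heapq.heappush(heap, incoming)  # can join the current run
        else:
            frozen.append(incoming)         # deferred to the next run
        if not heap:                        # current run is exhausted:
            runs.append(run)                # start a new run from the
            run, heap, frozen = [], frozen, []  # frozen records
            heapq.heapify(heap)
    while heap:                             # flush the final partial run
        run.append(heapq.heappop(heap))
    if run:
        runs.append(run)
    if frozen:
        runs.append(sorted(frozen))
    return runs
```

Running this on random data and averaging the run lengths reproduces the roughly-2M/R behavior cited above.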

BucketSort is a distribution algorithm designed for associative secondary storage, not for conventional disk drives. In this section, we shall see why distribution sorting methods perform poorly with conventional disk drives. A typical example is the distribution sorting algorithm given in [8]. The range of key values is partitioned into m bucket ranges. The distribution phase consists of one pass through the file, during which the records in each bucket are stored in blocks that are linked