
SPECIAL INDUSTRY REPORT

AVOIDING A DATA CRUNCH

The technology of computer hard drives is fast approaching a physical barrier imposed by the superparamagnetic effect. Overcoming it will require tricky innovations

by Jon William Toigo

[Illustration: DENSITY OF DATA STORED on a magnetic hard disk increased 1.3-million-fold in the four decades after IBM's introduction of the first commercial disk drive in 1957. Improvements in miniaturization have been the primary catalyst for the spectacular growth. The chart plots areal density, in kilobits per square inch, on a logarithmic scale from 1 to 100,000,000 for the years 1957 through 2002. (Tom Draper Design; source: Magnetic Recording: The First 100 Years, edited by E. D. Daniel et al. IEEE Press, 1999)]

Many corporations find that the volume of data generated by their computers doubles every year. Gargantuan databases containing more than a terabyte—that is, one trillion bytes—are becoming the norm as companies begin to keep more and more of their data on-line, stored on hard-disk drives, where the information can be accessed readily. The benefits of doing so are numerous: with the right software tools to retrieve and analyze the data, companies can quickly identify market trends, provide better customer service, hone manufacturing processes, and so on. Meanwhile individual consumers are using modestly priced PCs to handle a data glut of their own, storing countless e-mails, household accounting spreadsheets, digitized photographs, and software games.

All this has been enabled by the availability of inexpensive, high-capacity magnetic hard-disk drives. Improvement in the technology has been nothing short of legendary: the capacity of hard-disk drives grew about 25 to 30 percent each year through the 1980s and accelerated to an average of 60 percent in the 1990s. By the end of last year the annual increase had reached 130 percent. Today disk capacities are doubling every nine months, fast outpacing advances in computer chips, which obey Moore's Law (doubling every 18 months).
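A quick compound-growth calculation connects these annual rates to the doubling times just quoted; the Python sketch below is a back-of-the-envelope illustration only, with the rate figures taken from the paragraph above.

```python
import math

# Capacity multiplies by (1 + rate) each year, so it doubles
# after ln(2) / ln(1 + rate) years.
def doubling_time_years(annual_growth_rate: float) -> float:
    return math.log(2) / math.log(1 + annual_growth_rate)

for label, rate in [("1980s drives (30% per year)", 0.30),
                    ("1990s drives (60% per year)", 0.60),
                    ("late-1999 drives (130% per year)", 1.30)]:
    months = 12 * doubling_time_years(rate)
    print(f"{label}: doubles every {months:.0f} months")

# 130 percent annual growth doubles capacity roughly every
# 10 months, in line with the nine-month figure above; an
# 18-month doubling (Moore's Law) works out to about
# 59 percent growth a year.
```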

At the same time, the cost of hard-disk drives has plummeted. Disk/Trend, a Mountain View, Calif.–based market research firm that tracks the industry, reports that the average price per megabyte for hard-disk drives plunged from $11.54 in 1988 to $0.04 in 1998, and the estimate for last year is $0.02. James N. Porter, president of Disk/Trend, predicts that by 2002 the price will have fallen to $0.003 per megabyte.

Not surprisingly, this remarkable combination of rising capacity and declining price has resulted in a thriving market. The industry shipped 145 million hard-disk drives in 1998 and nearly 170 million last year. That number is expected to surge to about 250 million in 2002, representing revenues of $50 billion, according to Disk/Trend projections.

[Illustration: SALES OF HARD-DISK DRIVES have soared as costs per megabyte have plummeted. Sales revenues are expected to grow to $50 billion in 2002. The chart plots the average price per megabyte (in dollars, on a logarithmic scale from 0.01 to 100) and worldwide disk-drive sales revenue (in billions of dollars, 20 to 60) for the years 1988 through 2002. (Bryan Christie; source: Disk/Trend)]

But whether the industry can maintain these fantastic economics is highly questionable. In the coming years the technology could reach a limit imposed by the superparamagnetic effect, or SPE. Simply described, SPE is a physical phenomenon that occurs in data storage when the energy that holds the magnetic spin in the atoms making up a bit (either a 0 or 1) becomes comparable to the ambient thermal energy. When that happens, bits become subject to random "flipping" between 0's and 1's, corrupting the information they represent.
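This energy balance is usually written as the Néel–Arrhenius relaxation law, a standard result from magnetics textbooks rather than a formula given in this article; it is a minimal sketch of why small grains become unstable.

```latex
% Mean time before a grain's magnetization flips spontaneously:
%   K_u V  = anisotropy energy barrier (K_u: anisotropy
%            constant of the medium, V: grain volume)
%   k_B T  = ambient thermal energy
%   tau_0  = attempt time, on the order of a nanosecond
\[
  \tau \;=\; \tau_0 \, \exp\!\left(\frac{K_u V}{k_B T}\right)
\]
% A common engineering rule of thumb requires K_u V / k_B T of
% roughly 40 or more for data to survive about a decade;
% shrinking the grain volume V below that point produces the
% random bit "flipping" described above.
```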

In the quest to deliver hard disks with ever increasing capacities, IBM, Seagate Technology, Quantum Corporation and other manufacturers have continually crammed smaller and smaller bits together, which has made the data more susceptible to SPE. With the current pace of miniaturization, some experts believe the industry could hit the SPE wall as early as 2005. But researchers have been busy devising various strategies for avoiding the SPE barrier. Implementing them in a marketplace characterized by fierce competition, frequent price wars and cost-conscious consumers will take a Herculean feat of engineering.

Magnetic Marvels

The hard-disk drive is a wonder of modern technology, consisting of a stack of disk platters, each one an aluminum alloy or glass substrate coated with a magnetic material and protective layers. Read-write heads, typically located on both sides of each platter, record and retrieve data from circumferential tracks on the magnetic medium. Servomechanical actuator arms position the heads precisely above the tracks, and a hydrodynamic air bearing is used to "fly" the heads above the surface at heights measured in fractions of microinches. A spindle motor rotates the stack at speeds of between 3,600 and 10,000 revolutions per minute.

This basic design traces its origins to the first hard-disk drive—the Random Access Method of Accounting and Control (RAMAC)—which IBM introduced in 1956. The RAMAC drive stored data on 50 aluminum platters, each of which was 24 inches in diameter and coated on both sides with magnetic iron oxide. (The coating was derived from the primer used to paint San Francisco's Golden Gate Bridge.) Capable of storing up to five million characters, RAMAC weighed nearly a ton and occupied the same floor space as two modern refrigerators.

In the more than four decades since then, various innovations have led to dramatic increases in storage capacity and equally amazing decreases in the physical dimensions of the drives themselves. Indeed, storage capacity has jumped multiple orders of magnitude during that time, with the result that some of today's desktop PCs have disk drives containing more than 70 gigabytes. Tom H. Porter, chief technology officer at California-based Seagate Technology's Minneapolis office, explains that the industry has achieved these improvements largely through straightforward miniaturization. "Smaller heads, thinner disks, smaller fly heights [the distance between head and platter]: everything has been about scaling," he notes.

HOW A HARD-DISK DRIVE WORKS

MAGNETICALLY COATED PLATTERS made of metal or glass spin at several thousand revolutions per minute, driven by an electric motor. The capacity of the drive depends on the number of platters (which may be as many as eight) and the type of magnetic coating.

FILES are stored as magnetically encoded areas on platters. A single file may be scattered among several areas on different platters.

HEAD ACTUATOR pushes and pulls the read-write head arms across the platters. It precisely aligns the heads with the concentric circles of tracks on the surface of the platters.

READ-WRITE HEADS, attached to the ends of moving arms, slide across both the top and bottom surfaces of the spinning platters. The heads write the data to the platters by aligning the magnetic fields of particles on the platters' surfaces; they read data by detecting the polarities of particles that have already been aligned.

PRINTED CIRCUIT BOARD receives commands from the drive's controller. The controller is managed by the operating system and the basic input-output system, low-level software that links the operating system to the hardware. The circuit board translates the commands into voltage fluctuations, which force the head actuator to move the read-write heads across the surfaces of the platters. The board also controls the spindle that turns the platters at a constant speed and tells the drive heads when to read from and when to write to the disk.

GAP between a read-write head and the platter surface is 5,000 times smaller than the diameter of a human hair: the gap measures 15 nanometers, whereas a hair is 75,000 nanometers across.

(Illustration by George Retseck; source: How Computers Work, by Ron White, fourth edition, Que Corporation, 1998)

Head Improvements

Many of the past improvements in disk-drive capacity have been a result of advances in the read-write head, which records data by altering the magnetic polarities of tiny areas, called domains (each domain representing one bit), in the storage medium. To retrieve that information, the head is positioned so that the magnetic states of the domains produce an electrical signal that can be interpreted as a string of 0's and 1's.

Early products used heads made of ferrite, but beginning in 1979 silicon chip–building technology enabled the precise fabrication of thin-film heads. This new type of head was able to read and write bits in smaller domains. In the early 1990s thin-film heads themselves were displaced with the introduction of a revolutionary technology from IBM. The innovation, based on the magnetoresistive effect (first observed by Lord Kelvin in 1857), led to a major breakthrough in storage density.

Rather than reading the varying magnetic field in a disk directly, a magnetoresistive head looks for minute changes in the electrical resistance of the overlying read element, which is influenced by that magnetic field. The greater sensitivity that results allows data-storing domains to be shrunk further. Although manufacturers continued to sell thin-film heads through 1996, magnetoresistive drives have come to dominate the market.

In 1997 IBM introduced another innovation—the giant magnetoresistive (GMR) head—in which magnetic and nonmagnetic materials are layered in the read head, roughly doubling or tripling its sensitivity. Layering materials with different quantum-mechanical properties enables developers to engineer a specific head with desired GMR capabilities. Currie Munce, director of storage systems and technology at the IBM Almaden Research Center in San Jose, Calif., says developments with this technology will enable disk drives to store data at a density exceeding 100 gigabits per square inch of platter space.

Interestingly, as recently as 1998 some experts thought that the SPE limit was 30 gigabits per square inch. Today no one seems to know for sure what the exact barrier is, but IBM's achievement has made some assert that the "density demon" lives somewhere past 150 gigabits per square inch.

A Bit about Bit Size

Of course, innovations in read-write heads would be meaningless if the disk platters could not store information more densely. To fit more data onto a disk, says Pat McGarrah, a director of strategic and technical marketing at Quantum Corporation in Milpitas, Calif., many companies are looking for media that will support shorter bits.

The problem, though, is SPE: as one shrinks the size of grains or crystals of magnetic material to make smaller bits, the grains can lose the ability to hold a magnetic field at a given temperature. "It really comes down to the thermal stability of the media," Munce explains. "You can make heads more sensitive, but you ultimately need to consider the properties of the media material, such as the coercivity, or magnetic stability, and how few grains you can use to obtain the desired resistance to thermal erasure."

ADDING OPTICAL TO MAGNETIC

One strategy for extending the life span of the workhorse magnetic-disk drive is to supplement it with optical technology. Such a hybrid approach could push storage densities to well beyond the current range of 10 to 30 gigabits per square inch. In fact, TeraStor in San Jose, Calif., claims that capacities could eventually top 200 gigabits per square inch, surpassing the anticipated limit imposed by the superparamagnetic effect [see main article].

The TeraStor disk drive is essentially a variation of magneto-optical technology, in which a laser heats a small spot on the disk so that information can then be written there magnetically. A crucial difference, however, is that TeraStor uses a solid-immersion lens, or SIL, which is a special type of truncated spherical lens.

Invented at Stanford University, SILs rely on the concept of liquid-immersion microscopy, in which both the lens and object being studied are placed in a liquid, typically oil, that greatly boosts the magnification. But SILs apply the technique in reverse to focus a laser beam on a spot with dimensions of less than a micron [see illustration below]. The TeraStor technology is called "near field" because the read-write head must be extremely close to the storage medium (the separation is less than the wavelength of the laser beam).

The recording medium consists of a layer of magnetic material similar to that used in magneto-optical systems. But rather than being a magnetic layer encased in plastic, the recording layer is placed on top of a plastic substrate, which reduces the production cost and permits data to be written directly onto the recording surface.

As on a conventional magnetic disk, data bits (domains) are laid down one after the other. But the near-field bits are written standing up, or perpendicular to the plane of the disk, and not horizontally along the disk surface. "The magnetic fields of the domains poke out of the media vertically, rather than being laid out longitudinally," explains Gordon R. Knight, chief technology officer for TeraStor. "This configuration means that the magnetic fields of the bits support each other, unlike the fields of horizontally recorded bits, and are not subject to the superparamagnetic effect."

Furthermore, the ultrasmall domains are written in overlapping sequences, creating a series of crescent-shaped bits. This recording method effectively doubles the number of bits that can be written linearly in a track, thus enabling the TeraStor technology to achieve a higher storage capacity. Information is read by exploiting the so-called Kerr effect. A beam of light is bounced off a domain on the disk. Depending on whether the crystals in the domain have been magnetized to represent a 0 or a 1, they will polarize the reflected light in different directions.

The TeraStor technology has been under development for more than five years, and Knight acknowledges that the delivery of products has been delayed several times as the company works out various technical kinks. TeraStor did, however, demonstrate several prototypes at an industry trade show late last year, and the company has already lined up various manufacturing partners, including Maxell and Toso for the storage medium, Olympus for the optical components, Texas Instruments for the supporting electronic chips, and Mitsumi for the drive assembly. Industry heavyweight Quantum Corporation in Milpitas, Calif., which has a financial stake in TeraStor, has provided additional technology and access to its research lab.

If all goes well, TeraStor will be shipping products by the end of 2000. But the drives will contain just 20 gigabytes of storage on a removable CD-size medium. (Current hard drives already boast more than 70 gigabytes.) Knight asserts that the initial products may replace tape and optical storage products in applications in which access speed is important, such as digital video editing. He contends that the technology will ultimately make possible disk drives with much higher capacity—greater than 300 gigabytes—which may enable it to compete more directly with magnetic-disk drives. —J.W.T.

[Illustration: LASER BEAM heats a tiny spot on a disk, which permits the flying head to alter the magnetic properties of the spot so that it stores a binary 1 or 0. Two lenses focus the beam to an extremely fine point, enabling the bits to be written onto the disk at very high density. An objective lens concentrates the beam on a solid-immersion lens—the cornerstone of the system—which in turn focuses the light to a spot smaller than a micron across. (George Retseck)]

Traditionally, Munce says, a minimum of about 500 to 1,000 grains of magnetic material was required to store a bit. (In March, however, IBM scientists announced a process for self-assembling magnetic particles into bits that could provide areal densities as high as 150 gigabits per square inch.) Currently researchers are actively looking for improved materials that can hold a detectable magnetic charge and resist SPE with fewer grains. Also, the industry has been developing better manufacturing processes to decrease the impurities in the storage medium and thereby enable smaller bits.

In lieu of improvements of this type, the limit of bits per inch will remain in the range of between 500,000 and 650,000, according to Karl A. Belser, a storage technologist for Seagate Technology's research division. But this parameter, which is for data stored in a particular track on a platter, is only one determinant of areal density, which is the number of bits per square inch.
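Multiplying these two parameters gives the basic bookkeeping for areal density; the short calculation below is an illustrative check using figures quoted in this article (the 20,000 and 150,000 tracks-per-inch values come from the next section).

```python
# Areal density (bits per square inch) is the product of the
# linear density along a track (bits per inch) and the track
# density (tracks per inch).
bits_per_inch = 650_000            # upper end of Belser's range

for tracks_per_inch in (20_000, 150_000):
    areal = bits_per_inch * tracks_per_inch
    print(f"{tracks_per_inch:,} tracks/inch -> "
          f"{areal / 1e9:.1f} Gbit per square inch")

# 20,000 tracks/inch  -> 13.0 Gbit/in^2, today's ballpark;
# 150,000 tracks/inch -> 97.5 Gbit/in^2, roughly the
# 100 Gbit/in^2 goal discussed below.
```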

Tracking the Tracks

Storage capacity also depends on the narrowness of the tracks, and so far manufacturers have been able to cram up to 20,000 tracks per inch. This number is limited by various factors, such as the ability of the recording head to resolve the different tracks and the accuracy of its position-sensing system. Squeezing in additional tracks will require significant improvements in several areas, including the design of the head and the actuator that controls that head. To achieve an overall density of 100 gigabits per square inch, the industry must somehow figure out a way to fit about 150,000 tracks or more per inch.

With the existing technology, tracks must be separated by gaps of 90 to 100 nanometers, according to Belser. "Most write heads look like a horseshoe that extends across the width of a track," he explains. "They write in a longitudinal direction [that is, along the circular track], but they also generate fringe fields that extend radially." If the tracks are spaced too closely, this effect can cause information on adjacent tracks to be overwritten and lost.

One solution is to fabricate the recording head more precisely to smaller dimensions. "You can use a focused ion beam to trim the write head and to narrow the width of the track that a writer writes," Belser says. But the read head, which is a complex sandwich of elements, poses a harder manufacturing problem. Furthermore, for 150,000 tracks or more per inch to be squeezed in, the tracks will have to be less than about 170 nanometers wide. Such microscopically narrow tracks will be difficult for the heads to follow, and thus each head will need a secondary actuator for precise positioning. (In current products, just one actuator controls the entire assembly of heads.)

Last, smaller bits in thinner tracks will generate weaker signals. To separate those signals from background noise, researchers need to develop new algorithms that can retrieve the information accurately. Today's software requires a signal-to-noise ratio of at least 20 decibels. Says Belser, "The industry is at least six decibels short of being able to work with the signal-to-noise ratio that would apply when dealing with the bit sizes entailed in disks with areal densities of 100 to 150 gigabits per square inch."
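Because decibels are logarithmic, the six-decibel shortfall Belser describes is a bigger gap than it may sound; the conversion below is standard signal-processing arithmetic rather than anything specific to disk drives.

```latex
% Signal-to-noise ratio expressed in decibels:
\[
  \mathrm{SNR}_{\mathrm{dB}}
    \;=\; 10 \log_{10}\!\left(\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}}\right)
\]
% The 20 dB today's software requires is a 100:1 power ratio.
% Being 6 dB short leaves about 10^{(20-6)/10}, roughly 25:1,
% a factor of four below the requirement, since every 3 dB
% represents a doubling of the power ratio.
```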

Nevertheless, such problems are well understood, many industry experts concur. In fact, Munce asserts that the improvements in materials, fabrication techniques and signal processing already being studied at IBM and elsewhere will, over the next few years, enable the manufacture of disk drives with areal densities in the range of 100 to 150 gigabits per square inch.

USING "HARD" MATERIALS

An obvious way to cram more information onto a disk is to shrink the size of the data bits by using fewer or smaller grains of a magnetic material for each bit. The problem, though, is that the tiny bits begin to interfere with one another (think of what happens when two bar magnets are brought close to each other). To prevent such corruption of data, which is caused by the superparamagnetic effect, researchers have been investigating the use of certain rare-earth and transition elements that are very magnetically stable. In the industry lingo, such metals have high coercivity and are called "hard."

But a hard material is difficult to write on, so it may first be "softened" by being heated with a laser. This process lowers the coercivity of the grains so that data can then be written to them. As the material cools, it hardens, protecting the stored information from the vicissitudes of superparamagnetism. The concept sounds simple enough, but it has been difficult to implement: the laser beam must avoid accidentally heating adjacent bits that contain previously stored data.

To that end, Seagate Technology, headquartered in Scotts Valley, Calif., is using a disk that has grooves between the circular tracks of bits (much as a vinyl record does). The grooves block the laser heat from flowing to neighboring tracks. To record information in those narrow tracks, Seagate has been developing a new type of write head that is controlled by a special actuator. Details of these components are being kept under wraps.

The read head also presents certain difficulties. Because even the current experimental devices are three tracks wide instead of one, they have the potential to pick up unwanted noise during the reading process, according to Karl A. Belser, a storage technologist with Seagate. Even if a narrower head were developed, the device would need to be positioned precisely to follow the extremely thin tracks. Solutions include the use of a laser-positioning system, but that would add complexity—and cost—to the overall drive.

An alternative is to make the medium easier to read. This can be done by using a two-layer medium with a permanent storage layer positioned below a readout layer. To read data on the medium, the readout layer would be magnetically erased, and then the appropriate track of the storage layer would be heated with a laser to bring its data to the readout layer through a magnetic coupling process similar to current magneto-optical disk processes. Once the track had been written to the readout layer, its bits could be read in isolation from other tracks. Without the noise of adjacent tracks, even a wide head could read the information in the readout track.

If such a system were workable, the technology could store 1,000 gigabits per square inch, according to Seagate. In contrast, conventional wisdom holds that the superparamagnetic effect limits the storage density of traditional disk drives to a range of 100 to 150 gigabits per square inch. But even Seagate admits that it is at least four years from commercializing its thermally assisted magnetic drives. —J.W.T.

[Illustration: MAGNETO-OPTICAL DRIVE heats a spot with a laser, loosening magnetic crystals so that they can be reoriented with a magnetic field. This basic concept has been difficult to miniaturize because the laser must avoid accidentally heating—and thus possibly destroying—previously stored data. One solution is to manufacture a disk with grooves between concentric tracks of data to block heat from flowing between the tracks. To read the information, Seagate Technology is considering the use of a two-tier system in which the data are stored in tracks on a lower level. When the data are to be read out, a laser heats a section of a track in the lower layer. The heating induces magnetic coupling that transfers the data to the upper level of the disk, where they can be read out in the absence of interfering fields from adjacent tracks. (George Retseck; source: How Computers Work, by Ron White, fourth edition, Que Corporation, 1998)]

The introduction of thin-film heads took nearly 10 years. The transition from that to magnetoresistive technology required six more years because of various technical demands, including separate read and write elements for the head, a manufacturing technique called sputter deposition and different servo controls. "Going from thin-film inductive heads to MR heads entailed a number of new processes," Munce remarks. "Delays were bound to happen."

But the switch to giant magnetoresistive drives is occurring much faster, taking just 12 to 18 months. In fact, IBM and Toshiba began shipping such products before the rest of the industry had fully converted to magnetoresistive heads.

The quick transition was possible because giant magnetoresistive heads have required relatively few changes in the surrounding disk-drive components. According to Munce, the progression to areal densities of 100 gigabits per square inch will likewise be evolutionary, not revolutionary, requiring only incremental steps.

The Issue of Speed

But storage capacity is not the only issue. Indeed, the rate with which data can be accessed is becoming an important factor that may also determine the useful life span of magnetic disk-drive technology. Although the capacity of hard-disk drives is surging by 130 percent annually, access rates are increasing by a comparatively tame 40 percent.

To improve on this, manufacturers have been working to increase the rotational speed of drives. But as a disk spins more quickly, air turbulence and vibration can cause misregistration of the tracks—a problem that could be corrected by the addition of a secondary actuator for every head. Other possible enhancements include the use of fluid bearings in the motor to replace steel and ceramic ball bearings, which wear and emit noticeably audible noise when platters spin at speeds greater than 10,000 revolutions per minute.

Many industry onlookers foresee a possible bifurcation in the marketplace, with some disk drives optimized for capacity and others for speed. The former might be used for mass storage, such as for backing up a company's historical files. The latter would be necessary for applications such as customer service, in which the fast retrieval of data is crucial.

PATTERNS OF BITS

Magnetic hard drives can store data at incredible densities—more than 10 gigabits per square inch of disk space. But as manufacturers cram information ever more tightly, the tiny bits begin to interfere with one another—the superparamagnetic effect [see main article]. One simple solution is to segregate the individual bits by erecting barriers between them. This approach, called patterned media, has been an ongoing area of research at most laboratories doing advanced work in storage technology.

One type of patterned media consists of "mesas" and "valleys" fabricated on the surface of a disk platter, with each mesa storing an individual bit. According to proponents of this approach, one bit of data (either a 0 or 1) could theoretically be stored in a single grain, or crystal, of a magnetic material. In contrast, conventional hard-disk technology requires a minimum of about 500 to 1,000 grains for each bit. Thus, with a grain size of seven to eight nanometers in diameter, this type of storage could achieve a density of more than 10,000 gigabits (or 10 terabits) per square inch.

To fabricate the mesas and valleys, companies have been investigating photolithographic processes used by the chip industry. "Electron beams or lasers would be needed to etch the pattern [onto the storage medium]. Mesas would then need to be grown on a substrate layer, one bit in diameter," explains Gordon R. Knight, chief technology officer at TeraStor. But this technique needs much refinement. One estimate is that the current lithographic processes can at best make mesas that are about 80 nanometers in diameter—an order of magnitude too large for what is needed.

Even if the industry could obtain sufficiently tiny mesas and valleys, it would still need a revolutionary new type of head to read the data, says Currie Munce, director of storage systems and technology at the IBM Almaden Research Center in San Jose, Calif. According to Munce, various signal-to-noise issues would necessitate a radical departure from current magnetic-disk systems. Consequently, most experts agree that patterned-media technology will take years to become practical. —J.W.T.

[Illustration: "MESAS AND VALLEYS" of a future magnetic disk could help manufacturers avoid the superparamagnetic effect, in which closely packed bits in the magnetic media interfere with one another. In this patterned approach, the problem is circumvented by segregating each bit in its own mesa. The difficulty is in making the mesas small enough: they would have to be around eight nanometers across or smaller in order to achieve the kind of densities that developers are seeking. IBM has been able to build such structures with feature sizes as small as 0.1 and 0.2 micron, or 100 and 200 nanometers. (George Retseck; IBM micrograph)]
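The density figure in the box above follows from simple geometry; this back-of-the-envelope check assumes, for illustration, one bit per eight-nanometer grain packed edge to edge on a square grid.

```python
# One bit per 8 nm grain, packed edge to edge on a square grid.
NM_PER_INCH = 25.4e6          # 1 inch = 25.4 million nanometers
grain_nm = 8                  # grain diameter from the sidebar

bits_per_sq_inch = (NM_PER_INCH / grain_nm) ** 2
print(f"{bits_per_sq_inch / 1e12:.0f} terabits per square inch")
# -> about 10 terabits (10,000 gigabits) per square inch,
#    matching the sidebar's figure.
```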

In the past, customers typically preferred a bigger drive at the lowest possible cost, even if the product had slower performance. "In our hypercompetitive industry, drives with 30 to 40 percent higher performance sell for only about a 20 percent higher price," Munce notes.

But new applications are demanding faster drives. With electronic commerce over the World Wide Web, for example, companies need to store and retrieve customer data on the fly. In addition, businesses are deploying an increasing number of dedicated file servers for information that needs to be shared and accessed quickly by a number of employees.

The capacity-versus-performance debate could become acute as the industry considers various ways to avoid the SPE barrier. Experts agree that moving beyond areal densities of 150 gigabits per square inch will require a significant departure from conventional magnetic hard disks.

Some of the alternatives boast impressive storage capabilities but mediocre speeds, which would limit their use for certain applications. At present, the main strategies include:

• Change the orientation of the bits on the disk from longitudinal (circumferential) to perpendicular, or vertical, to cram more of them together and to prevent them from flipping [see box "Adding Optical to Magnetic"].

• Use magnetic materials, such as alloys of iron/platinum or cobalt/samarium, that are more resistant to SPE. If the magnetic "hardness" of the material is a problem for recording data, heat the medium first to "soften" it magnetically before writing on it [see box "Using 'Hard' Materials"].

• Imprint a pattern lithographically onto the storage medium to build microscopic barriers between individual bits [see box "Patterns of Bits"].

• Use a radically different storage material, such as holographic crystals [see box "On the Horizon: Holographic Storage"], phase-change metals [see box "A Decade Away: Atomic Resolution Storage"], or plastic [see box "'Punch Cards' of the Future"].

ON THE HORIZON: HOLOGRAPHIC STORAGE

For nearly four decades, holographic memory has been the great white whale of technology research. Despite enormous expenditures, a complete, general-purpose system that could be sold commercially continues to elude industrial and academic researchers. Nevertheless, they continue to pursue the technology aggressively because of its staggering promise.

Theoretical projections suggest that it will eventually be possible to use holographic techniques to store trillions of bytes—an amount of information corresponding to the contents of millions of books—in a piece of crystalline material the size of a sugar cube or a standard CD platter. Moreover, holographic technologies permit retrieval of stored data at speeds not possible with magnetic methods. In short, no other storage technology under development can match holography's capacity and speed potential.

These facts have attracted name-brand players, including IBM, Rockwell, Lucent Technologies and Bayer Corporation. Working both independently and in some cases as part of research consortia organized and co-funded by the U.S. Defense Advanced Research Projects Agency (DARPA), the companies are striving to produce a practical commercial holographic storage system within a decade.

Since the mid-1990s, DARPA has contributed to two groups working on holographic memory technologies: the Holographic Data Storage System (HDSS) consortium and the PhotoRefractive Information Storage Materials (PRISM) consortium. Both bring together companies and academic researchers at such institutions as the California Institute of Technology, Stanford University, the University of Arizona and Carnegie Mellon University. Formed in 1995, HDSS was given a five-year mission to develop a practical holographic memory system, whereas PRISM, formed in 1994, was commissioned to produce advanced storage media for use in holographic memories by the end of this year.

With deadlines for the two projects looming, insiders report some significant recent advances. For example, late last year at Stanford, HDSS consortium members demonstrated a holographic memory from which data could be read out at a rate of a billion bits per second. At about the same time, an HDSS demonstration at Rockwell in Thousand Oaks, Calif., showed how a randomly chosen data element could be accessed in 100 microseconds or less, a figure the developers expect to reduce to tens of microseconds. That figure is superior by several orders of magnitude to the retrieval speed of magnetic-disk drives, which require milliseconds to access a randomly selected item of stored data. Such a fast access time is possible because the laser beams that are central to holographic technologies can be moved rapidly without inertia, unlike the actuators in a conventional disk drive.

Although the 1999 demonstrations differed significantly in terms of storage media and reading techniques, certain fundamental aspects underlie both demonstration systems. An important one is the storage and retrieval of entire pages of data at one time. These pages might contain thousands or even millions of bits.

Each of these pages of data is stored in the form of an optical-interference pattern within a photosensitive crystal or polymer material. The pages are written into the material, one after another, using two laser beams. One of them, known as the object or signal beam, is imprinted with the page of data to be stored when it shines through a liquid-crystal-like screen known as a spatial-light modulator. The screen displays the page of data as a pattern of clear and opaque squares that resembles a crossword puzzle.

A hologram of that page is created when the object beam meets the second beam, known as the reference beam, and the two beams interfere with each other inside the photosensitive recording material. Depending on what the recording material is made of, the optical-interference pattern is imprinted as the result of physical or chemical changes in the material. The pattern is imprinted throughout the material as variations in the refractive index, the light absorption properties or the thickness of the photosensitive material.

When this stored interference pattern is illuminated with either of the two original beams, it diffracts the light so as to reconstruct the other beam used to produce the pattern originally. Thus, illuminating the material with the reference beam re-creates the object beam, with its imprinted page of data. It is then a relatively simple matter to detect the data pattern with a solid-state camera chip, similar to those used in modern digital video cameras. The data from the chip are interpreted and forwarded to the computer as a stream of digital information.

Researchers put many different interference patterns, each corresponding to a different page of data, in the same material. They separate the pages either by varying the angle between the object and reference beams or by changing the laser wavelength.

Rockwell, which is interested in developing holographic memories for applications in defense and aerospace, optimized its demonstration system for fast data access, rather than for large storage capacities. Thus, its system utilized a unique, very high speed acousto-optical-positioning system to steer its laser through a lithium niobate crystal. By contrast, the demonstration at Stanford, including technologies contributed by IBM, Bayer and others, featured a high-capacity polymer disk medium about the size of a CD platter to store larger amounts of data. In addition, the Stanford system emphasized the use of components and materials that could be readily integrated into future commercial holographic storage products.

According to Hans Coufal, who manages IBM's participation in both HDSS and PRISM, the company's strategy is to make use of mass-produced components wherever possible. The lasers, Coufal points out, are similar to those that are found in CD players, and the spatial-light modulators resemble ordinary liquid-crystal displays.

Nevertheless, significant work remains before holographic memory can go commercial, Coufal says. He reports that the image of the data page on the camera chip must be as close to perfect as possible for holographic information storage and retrieval to work. Meeting the exacting requirements for aligning lasers, detectors and spatial-light modulators in a low-cost system presents a significant challenge.

Finding the right storage material is also a persistent challenge, according to Currie Munce, director of storage systems and technology at the IBM Almaden Research Center. IBM has worked with a variety of materials, including crystal cubes made of lithium niobate and other inorganic substances and photorefractive, photochromic and photochemical polymers, which are in development at Bayer and elsewhere. He notes that independent work by Lucent and by Imation Corporation in Oakdale, Minn., is also yielding promising media prospects. No materials that IBM has tested so far, however, have yielded the mix of performance, capacity and price that would support a mainstream commercial storage system.

Both Munce and Coufal say that IBM's long-standing interest in holographic storage intensified in the late 1990s as the associative retrieval properties of the medium became better understood. Coufal notes that past applications for holographic storage targeted the permanent storage of vast libraries of text, audio and video data in a small space. With the growing commercial interest in data mining—essentially, sifting through extremely large warehouses of data to find relationships or patterns that might guide corporate decision making and business process refinements—holographic memory's associative retrieval capabilities seem increasingly attractive.

After data are stored to a holographic medium, a single desired data page can be projected that will reconstruct all reference beams for similarly patterned data stored in the media. The intensity of each reference beam indicates the degree to which the corresponding stored data pattern matches the desired data page.

"Today we search for data on a disk by its sector address, not by the content of the data," Coufal explains. "We go to an address and bring information in and compare it with other patterns. With holographic storage, you could compare data optically without ever having to retrieve it. When searching large databases, you would be immediately directed to the best matches."

While the quest for the ideal storage medium continues, practical applications such as data mining increase the desirability of holographic memories. And with even one business opportunity clearly defined, the future of holographic storage systems is bright indeed. —J.W.T.

[Illustration: OPTICAL LAYOUT of a holographic memory system shows how a crystal can be imprinted with pages of data. An object beam takes on the data as it passes through a spatial-light modulator. This beam meets another—the reference beam—in the crystal, which records the resulting interference pattern. A mechanical scanner changes the angle of the reference beam, and then another page can be recorded. (George Retseck)]
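The associative search Coufal describes amounts, numerically, to correlating a query page against every stored page in parallel; the toy NumPy sketch below is a loose illustration of that idea with made-up pages, not a model of IBM's optics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stored "pages": binary patterns like those a spatial-light
# modulator imprints on the object beam (toy 32x32 pages).
pages = rng.integers(0, 2, size=(8, 32 * 32))

# Query: a noisy copy of stored page 5 (60 pixels corrupted).
query = pages[5].copy()
flips = rng.choice(query.size, size=60, replace=False)
query[flips] ^= 1

# Illuminating the crystal with the query reconstructs every
# reference beam at once; each beam's intensity grows with the
# overlap between the query and that stored page.
intensities = pages @ query
print(f"best match: page {int(np.argmax(intensities))}")  # -> page 5
```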

A DECADE AWAY: ATOMIC RESOLUTION STORAGE

Chuck Morehouse, director of Hewlett-Packard's Information Storage Technology Lab in Palo Alto, Calif., is quick to point out that atomic resolution storage (ARS) will probably never completely replace rotational magnetic storage. Existing hard-disk drives and drive arrays play well in desktops and data centers where device size is not a major issue. But what about the requirements for mass storage on a wristwatch or in a spacecraft, where form factor, mass and power consumption are overriding criteria?

The ARS program at Hewlett-Packard (HP) aims to provide a thumbnail-size device with storage densities greater than one terabit (1,000 gigabits) per square inch. The technology builds on advances in atomic probe microscopy, in which a probe tip as small as a single atom scans the surface of a material to produce images accurate within a few nanometers. Probe storage technology would employ an array of atom-size probe tips to read and write data to spots on the storage medium. A micromover would position the medium relative to the tips.

IBM and other companies are actively developing such probe storage technology, and Morehouse reports that the U.S. Department of Defense has a stake in the work. For example, the Defense Advanced Research Projects Agency (DARPA) is footing the bill for three HP researchers who are working on bringing a device from the test lab to the marketplace.

According to Morehouse, they face four primary challenges. First is the storage medium. The HP group has chosen one consisting of a material with two distinct physical states, or phases, that are stable at room temperature. One phase is amorphous, and the other is crystalline. Bits are set in this "phase-change medium" by heating data spots to change them from one phase to the other.

The second challenge is the probe tip, which must emit a well-directed beam of electrons when voltage is applied. A strong beam flowing from the tip to the medium heats a data spot as needed to write or erase a bit. A weak beam can be used to read data by detecting a spot's resistance or other phase-dependent electrical property. Optical reading techniques may also be possible. HP is looking at a "far-field" approach in which the tip is perhaps 1,000 nanometers from the medium, unlike most probe efforts in which the tip is in contact or almost in contact with the medium.

A third issue is the actuator or micromover that positions the media for reading and writing. HP is developing a micromotor with nanometer-level positioning capabilities.

The final step is packaging. Morehouse explains: "We need to get the ARS device together into a rugged package and develop the system electronics that will allow it to be integrated with other devices." An extra difficulty is that the working elements of the device will probably need to be in a vacuum or at least in a controlled atmosphere to reduce the scattering of electrons from the read-write beam and to reduce the flow of heat between data spots.

Morehouse sees the technology to create the ARS device becoming available within a decade but acknowledges that it may take considerably longer to bring the device to market. The magnetic-disk industry has a significant investment to protect, but he is confident that as applications demand the portability and performance that ARS offers, it will become a significant player in the storage market.

"My imagination for how it can be used is woefully inadequate," Morehouse says. Magnetic-disk drives are being scaled steadily downward in size—witness the example of IBM's Microdrive (340 megabytes in a one-inch form factor). Nevertheless, "ARS may be competitive for many applications," he notes. "One key advantage is low power consumption. When ARS is not being asked to perform an operation, it has no power consumption. Watchmakers may not want a Microdrive with a lot of batteries for a watch."

Morehouse says that the first ARS devices might have a one-gigabyte capacity but that capacities will increase over time: "The ultimate capacity will be determined by how small you can make a spot. Nobody knows the answer to that—100 atoms?" —J.W.T.

[Illustration: ELECTRON BEAMS from an array of probes with atom-size tips write data onto the storage medium by heating tiny data spots and altering their physical state or phase. Under the array, the medium is moved with nanometer precision. The 30-micron-wide Hewlett-Packard logo was written and imaged using a single tip on such a phase-change medium. (George Retseck; Hewlett-Packard micrograph)]

"PUNCH CARDS" OF THE FUTURE

Decades ago punch cards were a popular, if clunky, way to store information. Now their mechanical simplicity, shrunk to fantastically small dimensions, is back in vogue at IBM. The company's research group in Zurich has recently developed a prototype, dubbed Millipede, that can cram an amazing amount of information into a tiny space—500 gigabits per square inch—by storing the data as microscopic indentations on a flat polymer surface. In comparison, the workhorse magnetic-disk drive is currently limited to no more than 35 gigabits per square inch, even in the most advanced laboratory prototype. "We've reinvented punch cards by using plastics," boasts Peter Vettiger, manager of the project for IBM.

To understand how Millipede works, think of another bygone technology: the phonograph stylus. In IBM's version the tip of the stylus is incredibly sharp (its radius is just 20 nanometers) and rests gently on a smooth, moving plastic surface. To create an indentation, an electric current zaps through the stylus, briefly heating the tip to 400 degrees Celsius, which melts the polymer slightly. A series of such current spikes thus produces a string of indentations, the dips and flats becoming the 0's and 1's of the digital world.

To read this information, the stylus tip is heated to a constant 350 degrees C (below the melting point of the plastic) as it moves across the polymer surface. When the stylus drops into an indentation, heat from the tip dissipates. The resulting temperature drop can be detected by a change in the electrical resistance of the stylus.

IBM has recently turned this basic technology, derived from atomic-force microscopy, into a working prototype. To increase the rate at which data are both written and read, the device consists of 1,024 styluses (hence the name "Millipede") in an area just three by three millimeters. Arranged in a 32-by-32 array, the styluses are made of silicon and operate simultaneously, making their indentations on a thin layer of plastic coated onto a silicon substrate.

One advantage of Millipede is its anticipated low manufacturing cost. IBM asserts that the arrays can be built economically with processes similar to those used to fabricate computer chips. "You could build hundreds of these arrays on the same wafer," Vettiger predicts.

For increased storage, he says, a future improvement might consist of a larger array (in an earlier incarnation, Millipede had just 25 styluses arranged five by five) or multiple arrays, or a combination of both. "We are completely open at this time. It's a very general concept," Vettiger contends. Another innovation might be the use of carbon nanotubes as stylus tips, which would make them even smaller.

Still, IBM must resolve several core issues before Millipede can become a practical product. One that the company is currently investigating is the long-term durability of the technology. Mechanical wear could dull the sharp tips of the styluses, making their indentations on the plastic surface larger. Eventually the markings might begin to overlap, turning pristine data into digital mush.

To be sure, much work remains before Millipede could even hope to supplant the venerable magnetic-disk drive. A greater opportunity might be in small consumer products. IBM is confident that it can develop a tiny device (10 by 10 by 5 millimeters or considerably smaller) that can store 10 gigabits, making the technology appropriate for cell phones, digital cameras and possibly watches. "This is where we'll concentrate our efforts over the next one to two years," Vettiger says. But he is the first to temper his growing excitement. "This is still an exploratory research project," he admits. "It's not as if tomorrow these products will come on the market." —Alden M. Hayashi, staff writer

[Illustration: MILLIPEDE consists of 1,024 tiny styluses arranged in a 32-by-32 array. The styluses write and read data as tiny indentations on a plastic surface. The tip of each stylus has a radius of just 20 nanometers. (George Retseck; IBM micrograph)]
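Reading back Millipede's pits reduces to spotting dips in the stylus's electrical resistance; the toy sketch below, with invented sample values, shows the kind of thresholding such a read channel implies rather than IBM's actual signal chain.

```python
# Toy decode of a thermomechanical read signal: the heated
# stylus cools when it drops into a pit, and the cooling
# registers as a lower electrical resistance. One resistance
# sample per bit cell, in arbitrary units (invented data).
samples = [100, 96, 100, 95, 95, 100, 96, 100]
baseline = 100       # resistance over a flat region
threshold = 0.98     # relative dip that counts as a pit

bits = [1 if r < threshold * baseline else 0 for r in samples]
print("".join(map(str, bits)))   # -> "01011010"
```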

Although several of these approaches have attracted large investments from the leading manufacturers, most remain in the very early stages of testing. Some of the concepts await research breakthroughs or key advances in supporting technologies before work can begin in earnest on prototypes.

Taking a Breather

Until then, hard-disk manufacturers will continue wringing additional improvements from conventional magnetic technology. "This industry seems to be following an S curve," Porter of Seagate observes. He explains that in the near term, manufacturers will still be riding the wave of advances made in the fields of photolithography and semiconductor-chip manufacturing, which enabled the economical fabrication of magnetoresistive and giant magnetoresistive heads. Consequently, the cost per storage bit has been dropping at a rate of 1.5 percent per week.
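Compounded over a year, that weekly rate is dramatic; the one-line calculation below simply annualizes the figure quoted above.

```python
# Cost per bit falling 1.5 percent per week, compounded:
remaining = (1 - 0.015) ** 52
print(f"about {(1 - remaining) * 100:.0f}% cheaper per year")
# -> about 54% cheaper each year; at constant price, that is
#    roughly a doubling of affordable capacity annually.
```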

But experts agree that the industry will be hard-pressed to sustain that pace beyond the next few years. Gone will be the days when storage capacities grew by 130 percent annually. "Given the technological challenges," Munce says, "we will probably be back to a 60 percent annual growth rate within five years." Adds Porter, "Things are getting more difficult at a faster rate than they have historically. Perhaps a slowdown will give us time to evaluate options."

Interestingly, no one can be sure which of the alternative technologies will bear fruit. IBM itself is hedging its bets, investigating a number of approaches, including the use of patterned media, plastic media and holography. One thing is for certain, though. With the SPE specter looming and the industry confronting core design and materials issues, a window of opportunity might be opening.

But getting through that window will not be easy. "New technologies will need to show a profit quickly and be able to compete on a cost per gigabyte of storage with magnetic disks," Munce asserts. More likely, he adds, the new technologies might first need to carve out niches in the market. (Holographic crystals, for example, might be used for the permanent digitized storage of large libraries.) "The trend is definitely toward cost reduction," Munce points out, "which will make it very difficult for other technologies to get on the tracks."

Most observers agree that, regardless of the uncertainties of future data storage technologies, the Information Age will not come to a grinding halt as a result of the superparamagnetic effect. If delays are encountered in bringing higher-capacity products to market, the void is most likely to be filled in the interim by aggregating many drives into intelligent arrays that represent themselves as individual "virtual" drives to systems and to end users. This approach has already been seen in the case of Redundant Arrays of Inexpensive Disks (RAID) and Just a Bunch of Disks (JBOD) arrays that were deployed during the 1980s and 1990s to meet the storage capacity requirements of companies whose needs were not being satisfied by the single-drive capacities of the day.
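Aggregation schemes such as RAID present several physical drives as one large virtual drive; the minimal sketch below shows the simplest mapping of that kind, block striping (RAID 0), as an illustration rather than any particular product's layout.

```python
# Block striping across N drives: logical block i lives on
# drive (i mod N) at local offset (i // N). Redundant RAID
# levels add mirroring or parity on top of such a mapping.
def locate_block(logical_block: int, num_drives: int):
    return logical_block % num_drives, logical_block // num_drives

for block in range(8):
    drive, offset = locate_block(block, num_drives=4)
    print(f"logical block {block} -> drive {drive}, offset {offset}")
```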

Another approach that has garnered significant attention in the industry is to place drives and arrays into storage area networks (SANs) that can be scaled dynamically to meet growing storage appetites. The SAN approach further "virtualizes" storage and will eventually provide a "storage utility" that will automatically deliver to users' applications the size and kind of storage they require from a pool of interconnected storage devices.

These approaches to increasing storage capacity entail greater complexity and, by extension, greater management costs in comparison to single storage devices. But they do provide a means for addressing the needs of businesses and consumers for more elbow room for their data in the short term until new, higher-capacity technologies can be brought to market. Moreover, they will probably be facilitated by declining disk-drive prices, which will result from ongoing improvements in manufacturing processes.

The Author

JON WILLIAM TOIGO is an independent information technologies consultant. He is the author of eight books, including The Holy Grail of Data Storage Management (Prentice-Hall PTR, 2000), and has written numerous white papers and trade press articles on business automation and storage technology issues. Toigo maintains an information clearinghouse covering data storage and storage management at www.stormgt.org on the World Wide Web.

Further Information

Magneto-Resistive Heads: Fundamentals and Applications. Electromagnetism Series. John C. Mallinson, contributions by Isaak D. Mayergoyz. Academic Press, 1995.

Magnetic Disk Drive Technology: Heads, Media, Channel, Interfaces, and Integration. Kanu G. Ashar. IEEE Press, 1997.

Magnetic Information Storage Technology. Electromagnetism Series. Shan X. Wang and A. M. Taratorin. Academic Press, 1999.

Magnetic Recording: The First 100 Years. Edited by Eric D. Daniel, C. Denis Mee and Mark H. Clark. IEEE Press, 1999.

Holographic Data Storage. Edited by Hans J. Coufal and Demetri Psaltis. Springer-Verlag TELOS, 2000.


Copyright 2000 Scientific American, Inc.