
Davie 5/18/2010

Thursday, May 20th @ 5:30pm, Ursa Minor
Co-sponsored with CSS

Guest Speakers:
- Dr. Craig Rich – TBA
- James Schneider – Cal Poly “State of the Network” Address
- Sean Taylor – Reverse Engineering for Beginners

Free food!

04/18/23

Friday, May 21st @ 5:30PM, 98P 2-007 (You better know where this is!)

Games: Starcraft, TF2, FEAR, Bad Company 2, Linux (GotY Edition)
Consoles welcome
Music
Free food


Wednesday, May 19th @ 1:00pm – Sean McAllister – Structured Exception Handling


Redundant Array of Inexpensive | Independent Disks


- Combining multiple physical devices to achieve increased performance and/or reliability
- Added benefit of a single, large device


RAID is not a backup solution. End of story. Stop arguing.


RAID functions by combining three concepts to achieve desired results:
- Striping – splitting data across multiple disks to maximize I/O bandwidth
- Mirroring – storing a copy of the data across multiple disks to guard against drive failure
- Error-correction – parity calculations to find and repair bad data; also used to distribute data across drives
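The parity idea can be sketched in a few lines of Python. This is only an illustration of the XOR principle, not the kernel's actual implementation; the block contents are made up:

```python
# Illustrative sketch of RAID-style parity: the parity block is the
# byte-wise XOR of the data blocks, so any ONE missing block can be
# rebuilt by XOR-ing all the survivors together.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]     # three data "disks"
parity = xor_blocks(data)              # stored on a fourth disk

# Disk holding data[1] fails: rebuild it from the others plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

The same trick works no matter which single block is lost, including the parity block itself, which is why one-disk failures are survivable.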


- Array – collection of disks that operate as one
- Degraded array – array where a component disk is missing, but the array can still function
- Failed array – array where enough disks are missing to prevent all functionality
- Hot spare – extra disk that will allow a degraded array to repair itself (won't help failed arrays, though)
- Reshape – modify array size or level

- Levels 0-6
- Nested RAID – combines two levels
- Just a Bunch Of Disks (JBOD) & Spanning – concatenates one disk to the end of the other; no performance or reliability improvements


RAID 0
- Data is striped across multiple disks; minimum of two
- No redundancy – lose one disk, lose all data
- High read/write throughput – disks can read or write simultaneously without costly parity calculation
- Results in array of size n
- Difficult to reshape, and therefore expand
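A toy sketch of the striping layout (the chunk size and data are made up for illustration, and real chunk sizes are far larger):

```python
# Minimal sketch of RAID 0 striping: chunks go round-robin across the
# disks, so sequential I/O can hit all disks at once -- but losing any
# one disk destroys the interleaved byte stream.

CHUNK = 4  # bytes per chunk, tiny for demonstration only

def stripe(data, n_disks):
    """Distribute fixed-size chunks of data round-robin over n disks."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), CHUNK):
        disks[(i // CHUNK) % n_disks] += data[i:i + CHUNK]
    return disks

disks = stripe(b"0123456789abcdef", 2)
# Disk 0 holds chunks 0 and 2, disk 1 holds chunks 1 and 3:
print(disks)   # [bytearray(b'012389ab'), bytearray(b'4567cdef')]
```

Note that neither disk alone contains a usable prefix of the data, which is the "lose one disk, lose all data" bullet in miniature.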


RAID 1
- Data is mirrored across multiple disks; minimum of two
- Full redundancy – lose all but 1 disk, data still good
- High read, low write throughput – reads different data simultaneously; writes the same data multiple times
- Results in array of size 1
- Can be reshaped to RAID 5


RAID 2, 3, & 4
- I don’t bother with these; neither should you
- RAID 2: sounds like CS magic – http://en.wikipedia.org/wiki/RAID_2#RAID_2
- RAID 3 & 4: striped with a single disk for parity; use RAID 5 or 6 instead


RAID 5
- Data is striped across multiple disks; minimum of three
- Parity is calculated and distributed – lose any 1 disk, parity allows it to be regenerated
- Increased overhead – all reads and writes require calculations
- Results in array of size n-1
- Can be reshaped to RAID 6
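The "lose any 1 disk" claim can be demonstrated with XOR parity: in each stripe, any one missing chunk (data or parity alike) equals the XOR of the surviving chunks. A minimal sketch, not mdadm's real rotating chunk layout:

```python
# Sketch of one RAID 5 stripe on 4 disks: n-1 = 3 data chunks plus one
# parity chunk. Whichever single disk dies, its chunk is the XOR of
# the chunks on the surviving disks.

from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

stripe_data = [b"dd", b"ee", b"ff"]     # 3 data chunks (made-up contents)
parity = reduce(xor, stripe_data)       # the 4th disk's chunk
chunks = stripe_data + [parity]         # one chunk per disk

lost = 2                                # say disk 2 dies
survivors = [c for i, c in enumerate(chunks) if i != lost]
rebuilt = reduce(xor, survivors)
assert rebuilt == chunks[lost]
```

The per-write cost is also visible here: updating any data chunk means recomputing and rewriting parity, which is the "increased overhead" bullet above.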


RAID 6
- Data is striped across multiple disks; minimum of four
- Two parity blocks are calculated and distributed – lose any 2 disks, parity allows regeneration
- Increased overhead – all reads and writes require calculations
- Results in array of size n-2
- Can be reshaped to RAID 5


Nested RAID
- Combines striping and mirroring
- May tolerate multiple failures, but a specific combination of failures may ruin the array
- Extremely inefficient space usage
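Why only a *specific* combination of failures is fatal can be shown with a toy model of a stripe of mirrors (four disks in two mirrored pairs is an assumption chosen for illustration):

```python
# Toy RAID 1+0 model: disks (0,1) mirror each other, as do (2,3), and
# the two pairs are striped together. A two-disk failure is fatal only
# when BOTH failed disks are in the same mirror pair.

from itertools import combinations

mirror_pairs = [(0, 1), (2, 3)]

def array_survives(failed):
    # The array lives as long as every mirror pair keeps >= 1 disk.
    return all(any(d not in failed for d in pair) for pair in mirror_pairs)

fatal = [f for f in combinations(range(4), 2) if not array_survives(set(f))]
print(fatal)   # [(0, 1), (2, 3)] -- only same-pair double failures are fatal
```

So 4 of the 6 possible two-disk failures are survivable, yet half the raw capacity is spent on mirrors, which is the space-inefficiency bullet above.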


Hardware RAID
- Dedicated CPU & RAM; may include battery for cache
- High throughput for I/O
- Data on disk may be vendor-specific and not portable to other controllers (Controller died? Better have an exact replacement!)
- OS sees a single device from the BIOS, but may require an additional driver


Software RAID
- Relies on host CPU for all calculations; no battery for cache
- OS-level drivers allow for maximum portability (within OS families, of course)
- Native to the Linux kernel (Woooo!)
- Windows, BSD, Solaris, OSX all have support


Fake RAID
- Looks and acts like hardware RAID – OS sees a single BIOS device, but requires a vendor-specific driver
- Performs like software RAID – relies on host CPU & RAM, no cache battery


- Add filesystems, OR
- Use Logical Volume Management (LVM), then add filesystems
- Create a storage server: media, backups


- Physical Volumes (PV) – disks, partitions, arrays
- Volume Group (VG) – combines PVs into a single pool of space
- Logical Volumes (LV) – created inside the VG; act like partitions, don't need to be contiguous, can be added or resized at will
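The PV/VG/LV relationship can be sketched as a toy allocation model. This only mimics lvm2's extent-pooling idea; the class, sizes, and names are made up:

```python
# Toy model of LVM pooling (not the lvm2 tools): PVs contribute
# fixed-size extents to a VG, and LVs are carved out of that shared
# pool -- so an LV can be bigger than any single underlying disk, and
# can grow as long as the VG has free extents anywhere.

EXTENT_MB = 4  # lvm2's default extent size, used here for flavor

class VolumeGroup:
    def __init__(self, pv_sizes_mb):
        # Every PV's extents land in one shared free pool.
        self.free = sum(s // EXTENT_MB for s in pv_sizes_mb)
        self.lvs = {}

    def create_lv(self, name, size_mb):
        need = size_mb // EXTENT_MB
        if need > self.free:
            raise ValueError("VG out of space")
        self.free -= need
        self.lvs[name] = need

vg = VolumeGroup([500, 1000])           # two PVs pooled together
vg.create_lv("media", 800)              # bigger than the 500 MB PV alone
print(vg.free * EXTENT_MB)              # 700 (MB still free in the VG)
```

Resizing an LV in this model is just moving extents between the pool and the LV, which is why LVM can grow or shrink volumes at will.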
