I/O Management and Disk Scheduling
Chapter #11
Classification of I/O Devices (1)
Human-readable devices: suitable for communicating with the computer user
Printers
Video display terminals: display, keyboard, mouse
2Operating System
Classification of I/O Devices (2)
Machine-readable devices: used to communicate with electronic equipment
Disk and tape drives, sensors, controllers, actuators
Classification of I/O Devices (3)
Communication devices: suitable for communicating with remote devices
Digital line drivers, network interface devices, modems
Differences Among I/O Devices (1)
Data rate: there may be differences of several orders of magnitude between device data transfer rates
Application:
A disk used to store files requires file-management software
A disk used to store virtual-memory pages needs special hardware and software to support it
A terminal used by a system administrator may have a higher priority
Differences Among I/O Devices (2)
Complexity of control: the complexity of the control interface to the I/O device
Unit of transfer: data may be transferred as a stream of bytes for a terminal or in larger blocks for a disk
Data representation: encoding schemes differ across devices
Error conditions: devices respond to errors differently
Techniques for Performing I/O (1)
Programmed I/O
The process issues an I/O command to the I/O module
The process then busy-waits until the I/O operation is complete
Interrupt-driven I/O
The process issues an I/O command to the I/O module and is blocked
The processor continues executing other processes
The I/O module raises an interrupt when the operation completes
The process that requested the I/O is then made ready and resumed
Techniques for Performing I/O (2)
Direct Memory Access (DMA)
A DMA module controls the direct exchange of data between main memory and the I/O device
The processor asks the DMA module to transfer a block of data
The DMA module raises an interrupt when the requested block transfer is complete
Techniques for Performing I/O (3)
Evolution of the I/O Function (1)
Step 1. The processor directly controls the peripheral device
Step 2. An I/O controller or I/O module is added
The processor uses programmed I/O without interrupts
The processor does not need to handle the details of external devices
Step 3. I/O modules with interrupt support are introduced
The processor no longer spends time waiting for an I/O operation to be performed
Evolution of the I/O Function (2)
Step 4. DMA (Direct Memory Access) is introduced
Blocks of data are moved into memory without involving the processor
The processor is involved only at the beginning and end of the transfer
Step 5. The I/O channel, a separate processor, is introduced
Supports an instruction set dedicated to I/O
The CPU issues I/O instructions and receives an interrupt only after all of them have completed
Step 6. The independent I/O processor is introduced
The I/O module has its own local memory
It is a computer in its own right
DMA (Direct Memory Access) (1)
The DMA module takes over control of the system from the CPU to transfer data to and from memory over the system bus
Cycle stealing is commonly used to transfer data on the system bus
The processor is suspended just before it needs to use the bus
The DMA module transfers one word and returns control to the processor
The processor pauses for one bus cycle
DMA (Direct Memory Access) (2)
DMA operation
The processor sends the following information to the DMA module:
whether a read or a write is requested
the address of the I/O device involved
the starting address in memory
the number of words to be transferred
The DMA module transfers the entire block
The DMA module sends an interrupt signal when the transfer is done
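The handshake above can be sketched as a small simulation. All class and field names here are illustrative, not a real driver API: the processor hands over the operation, the device data, the memory start address, and the word count; the module moves the whole block and then raises an interrupt flag.

```python
# Minimal simulation of the DMA handshake described above.
# Class, method, and field names are illustrative, not a real driver API.

class DMAController:
    def __init__(self, memory):
        self.memory = memory          # simulated main memory (a list of words)
        self.interrupt_raised = False

    def request_transfer(self, op, device_data, start_addr, count):
        """Processor hands over: read/write, device buffer, start address, word count."""
        if op == "read":              # device -> memory
            self.memory[start_addr:start_addr + count] = device_data[:count]
        elif op == "write":           # memory -> device
            device_data[:count] = self.memory[start_addr:start_addr + count]
        self.interrupt_raised = True  # signal completion with an interrupt

memory = [0] * 16
dma = DMAController(memory)
dma.request_transfer("read", [7, 8, 9, 10], start_addr=4, count=4)
print(memory[4:8], dma.interrupt_raised)
```

The processor is involved only at the two ends of the transfer: issuing `request_transfer` and handling the interrupt.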
DMA (Direct Memory Access) (3)
DMA (Direct Memory Access) (4)
DMA Configurations (1)
Single bus, detached DMA
All modules share the same system bus
Inexpensive, but inefficient
Single bus, integrated DMA-I/O
There is a separate path between the DMA and I/O modules
I/O bus
Only one interface between the DMA module and the I/O modules
Provides an easily expandable configuration
DMA Configurations (2)
DMA Configurations (3)
Operating System Design Issues (1)
Efficiency
Most I/O devices are extremely slow compared to main memory and the processor
I/O operations can become a bottleneck
Solutions:
Multiprogramming allows some processes to wait on I/O while another process executes
Swapping brings in additional Ready processes to keep the processor busy
Efficient disk I/O support improves the overall efficiency of I/O
Operating System Design Issues (2)
Generality
For simplicity and freedom from error, it is desirable to handle all devices in a uniform manner
The diversity of device characteristics makes generality hard to achieve
Use a hierarchical, modular design for the I/O function
Hide most of the details of device I/O in lower-level routines so that processes and upper levels see devices in general terms such as read, write, open, close, lock, and unlock
Operating System Design Issues (3)
I/O Buffering (1)
Why buffering is needed
Processes must wait for I/O to complete before proceeding
Certain pages must remain in main memory during I/O
Buffering performs input transfers in advance of requests and performs output transfers some time after the request is made
Schemes
Single buffering
Double buffering
Circular buffering
I/O Buffering (2)
Types of I/O devices
Block-oriented devices
Information is stored in fixed-size blocks
Transfers are made one block at a time
Used for disks and tapes
Stream-oriented devices
Transfer information as a stream of bytes
Used for terminals, printers, communication ports, mice and other pointing devices, and most other devices that are not secondary storage
Single Buffer (1)
The operating system assigns a buffer in main memory for an I/O request
Block-oriented
Input transfers are made to the buffer
The block is moved to user space when needed
Another block is then moved into the buffer (read-ahead)
Single Buffer (2)
Block-oriented
The user process can process one block of data while the next block is read in
Swapping can occur, since input takes place in system memory rather than user memory
The operating system keeps track of the assignment of system buffers to user processes
Output is accomplished by the user process writing a block to the buffer; the block is actually written out later
Single Buffer (3)
Stream-oriented
Used one line at a time
User input from a terminal is one line at a time, with a carriage return signaling the end of the line
Output to the terminal is one line at a time
Single Buffer (4)
Double Buffer
Use two system buffers instead of one
A process can transfer data to or from one buffer while the operating system empties or fills the other buffer
Circular Buffer
More than two buffers are used
Each individual buffer is one unit in a circular buffer
Used when an I/O operation must keep up with the process
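A minimal sketch of the circular-buffer idea (the class and method names are made up for illustration): N slots are reused in round-robin order, with read and write indices wrapping modulo N, so a producer can keep filling freed slots while a consumer drains earlier ones.

```python
# Sketch of a bounded circular buffer; names are illustrative.
class CircularBuffer:
    def __init__(self, n):
        self.buf = [None] * n
        self.n = n
        self.head = 0    # index of the next slot to read
        self.tail = 0    # index of the next slot to write
        self.count = 0   # number of filled slots

    def put(self, item):
        if self.count == self.n:
            raise BufferError("buffer full")
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % self.n   # wrap around
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("buffer empty")
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.n   # wrap around
        self.count -= 1
        return item

ring = CircularBuffer(3)
for block in "ABC":
    ring.put(block)
first = ring.get()     # "A" comes out first
ring.put("D")          # the freed slot is reused in circular fashion
print(first, ring.get(), ring.get())
```

In a real kernel the `put`/`get` sides would be the device interrupt handler and the consuming process, synchronized with locks or semaphores, which this sketch omits.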
Disk Performance Parameters (1)
To read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector
Disk I/O time = queuing time (wait for the device) + channel waiting time (wait for the channel) + seek time + rotational delay (latency) + data transfer time
Timing of a Disk I/O Transfer
Disk Performance Parameters (2)
Seek time: the time it takes to position the head at the desired track
Ts = m × n + s
where Ts = estimated seek time, n = number of tracks traversed, m = a constant that depends on the disk drive, and s = startup time
Inexpensive disk: m = 0.3, s = 20 ms
Expensive disk: m = 0.1, s = 3 ms
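Plugging the slide's constants into Ts = m × n + s, for example for a seek that crosses 100 tracks:

```python
# Seek-time estimate Ts = m*n + s, using the constants from this slide (ms).
def seek_time(n_tracks, m, s):
    """Estimated seek time in ms for crossing n_tracks tracks."""
    return m * n_tracks + s

# Crossing 100 tracks:
cheap = seek_time(100, m=0.3, s=20)   # inexpensive disk: about 50 ms
fast = seek_time(100, m=0.1, s=3)     # expensive disk: about 13 ms
print(cheap, fast)
```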
Disk Performance Parameters (3)
Rotational delay (rotational latency): the time it takes for the desired sector to rotate into position under the head
Tr = 1 / (2r)
where Tr = average rotational delay and r = rotation speed in revolutions per second
Hard disk: 3600 rpm, 16.7 ms/rotation, Tr = 8.3 ms; floppy: 300-600 rpm, 100-200 ms/rotation, Tr = 50-100 ms
Disk Performance Parameters (4)
Transfer time: the time it takes while the desired sector moves under the head
Tt = b / (rN)
where Tt = transfer time, b = number of bytes to be transferred, N = number of bytes on a track, and r = rotation speed in revolutions per second
Disk Performance Parameters (5)
Access time: the sum of seek time and rotational delay
The time it takes to get into position to read or write
Data transfer then occurs as the sector moves under the head
Total access time: Ta = Ts + Tr + Tt = Ts + 1/(2r) + b/(rN)
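Putting the three components together for the 3600 rpm disk mentioned earlier. The seek time, sector size, and track capacity below are assumed example values, not figures from the slides:

```python
# Total access time Ta = Ts + 1/(2r) + b/(rN), all in seconds.
def access_time(Ts, r, b, N):
    Tr = 1 / (2 * r)   # average rotational delay
    Tt = b / (r * N)   # transfer time for b bytes
    return Ts + Tr + Tt

r = 3600 / 60          # 3600 rpm = 60 revolutions per second
# Assumed example values: 10 ms seek, one 512-byte sector, 32 KiB per track
Ta = access_time(Ts=0.010, r=r, b=512, N=32 * 1024)
print(round(Ta * 1000, 2), "ms")   # seek 10 ms + latency 8.33 ms + transfer ~0.26 ms
```

Note that for a single small sector the seek time and rotational latency dominate; the transfer itself is a fraction of a millisecond, which is why disk scheduling focuses on reducing seeks.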
Disk Scheduling Policies (1)
Seek time is the main reason for differences in performance, so we need to reduce the average seek time
The OS maintains a queue of requests for each I/O device
For a single disk there will be a number of pending I/O requests
If requests are selected randomly, we will get poor performance
Disk Scheduling Policies (2)
Example: assume a disk with 200 tracks
The head starts at track 100, moving in the direction of increasing track number
Requested tracks, in the order received: 55, 58, 39, 18, 90, 160, 150, 38, 184
Disk Scheduling Policies (3)
Random scheduling, first-in first-out (FIFO), priority, last-in first-out, Shortest Service Time First (SSTF), SCAN, C-SCAN, N-step-SCAN, FSCAN
Disk Scheduling Policies (4)
Random scheduling
Select requests from the queue in random order
The worst possible performance
Useful as a benchmark against which to evaluate other techniques
Disk Scheduling Policies (5)
First-in, first-out (FIFO)
Requests are processed sequentially
Fair to all processes
If only a few processes require access and many of the requests are clustered, good performance can be hoped for
Approaches random scheduling in performance if there are many processes
Disk Scheduling Policies (6)
Priority
The goal is not to optimize disk use but to meet other objectives
Short batch jobs may be given higher priority, providing good interactive response time
Longer jobs may have to wait a long time
Disk Scheduling Policies (7)
Last-in, first-out
Good for transaction-processing systems
The device is given to the most recent user, so there should be little arm movement
Can improve throughput and reduce queue length
Possibility of starvation, since a job may never regain the head of the line
Disk Scheduling Policies (8)
Shortest Service Time First (SSTF)
Select the disk I/O request that requires the least movement of the disk arm from its current position
Always choose the minimum seek time
Disk Scheduling Policies (9)
SCAN
The arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction
The direction is then reversed
Disk Scheduling Policies (10)
C-SCAN
Restricts scanning to one direction only
When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
Disk Scheduling Policies (11)
N-step-SCAN
Segments the disk request queue into subqueues of length N
Subqueues are processed one at a time, using SCAN
New requests are added to another queue while a queue is being processed
With a large value of N, this is similar to SCAN
With N = 1, it is the same as FIFO
Disk Scheduling Policies (12)
FSCAN
Uses two queues
When a scan begins, all of the requests are in one of the queues, with the other empty
During the scan, all new requests are put into the other queue
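The policies can be compared on the running example (200 tracks, head at track 100 moving upward, requests 55, 58, 39, 18, 90, 160, 150, 38, 184 in arrival order). This sketch computes the total arm movement in tracks for FIFO, SSTF, and SCAN:

```python
# Compare FIFO, SSTF, and SCAN total arm movement on the slide's example.
requests = [55, 58, 39, 18, 90, 160, 150, 38, 184]
start = 100   # head starts at track 100, moving toward higher track numbers

def total_movement(order, start):
    """Sum of track distances the arm travels when serving `order` in sequence."""
    pos, moved = start, 0
    for t in order:
        moved += abs(t - pos)
        pos = t
    return moved

# FIFO: serve in arrival order
fifo = list(requests)

# SSTF: always serve the pending request closest to the current head position
sstf, pos, pending = [], start, list(requests)
while pending:
    nxt = min(pending, key=lambda t: abs(t - pos))
    pending.remove(nxt)
    sstf.append(nxt)
    pos = nxt

# SCAN: sweep upward first, then reverse and sweep downward
up = sorted(t for t in requests if t >= start)
down = sorted((t for t in requests if t < start), reverse=True)
scan = up + down

print("FIFO:", total_movement(fifo, start))
print("SSTF:", total_movement(sstf, start))
print("SCAN:", total_movement(scan, start))
```

FIFO travels 498 tracks on this workload, while SSTF (248) and SCAN (250) cut the arm movement roughly in half, which matches the motivation on the earlier slides: the win comes almost entirely from reducing seeks.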
Disk Scheduling Algorithms (1)
Disk Scheduling Algorithms (2)
RAID (Redundant Array of Independent Disks) (1)
Why RAID?
There is a big speed gap between the CPU and the disk, so why not exploit parallelism across disks?
RAID (Redundant Array of Independent Disks)
Many SCSI disks with a RAID SCSI controller
Organizations defined by Patterson et al.: level 0 through level 6
Contrasted with the SLED (Single Large Expensive Disk) approach
RAID (Redundant Array of Independent Disks) (2)
An array of disks that operate independently and in parallel
Data are distributed across multiple disks, so a single I/O request can be executed in parallel
Replaces large-capacity disk drives with multiple smaller-capacity drives
Improves I/O performance and allows easier incremental increases in capacity
Characteristics of RAID
RAID is a set of physical disk drives viewed by the operating system as a single logical drive
Data are distributed across the physical drives of an array
Redundant disk capacity is used to store parity information
RAID Levels (1)
RAID Levels (2)
RAID 0 (non-redundant) (1)
A logical disk is divided into strips
Strips: physical blocks, sectors, or some other unit
Consecutive strips are written over the drives in round-robin fashion
A stripe: a set of logically consecutive strips that maps exactly one strip to each array member
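The round-robin strip mapping above reduces to one line of arithmetic. The 4-drive layout below is an assumed example, not a value from the slides:

```python
# Round-robin strip mapping for RAID 0: logical strip i lands on disk
# i mod n_disks, at strip i div n_disks on that disk.
def locate_strip(logical_strip, n_disks):
    """Map a logical strip number to (disk index, strip index on that disk)."""
    return logical_strip % n_disks, logical_strip // n_disks

# With 4 drives, logical strips 0..7 form two full stripes:
layout = [locate_strip(i, 4) for i in range(8)]
print(layout)
```

Each group of 4 consecutive logical strips (one stripe) touches every drive exactly once, which is what lets a large sequential request proceed on all drives in parallel.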
RAID 0 (non-redundant) (2)
RAID 1 (mirrored)
On a write, every strip is written twice On a read, either copy can be used
Read performance can be up to twice as good Fault-tolerance is excellent
RAID 2 (redundancy through Hamming code) (1)
Works on a word basis with a Hamming code
Each byte is split into a pair of 4-bit nibbles
3 parity bits are added, giving a 7-bit codeword
The 7 bits are spread over the 7 drives
Performance is good: in one sector time, it can write 4 sectors' worth of data
Losing one drive does not cause any problem
All the drives must be synchronized
On every write, all data disks and parity disks must be accessed
RAID 2 (redundancy through Hamming code) (2)
RAID 3 (bit-interleaved parity) (1)
Similar to RAID 2, but requires only a single redundant disk
A parity bit is generated as the exclusive-OR across corresponding bits on each data disk:
X4(i) = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i)
A failed disk is recovered the same way, e.g. X1(i) = X4(i) ⊕ X3(i) ⊕ X2(i) ⊕ X0(i)
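The parity relation above is easy to check in code. The data values below are arbitrary example nibbles, not from the slides:

```python
# XOR parity as on this slide: the parity word is the XOR of the data
# words, and any one lost disk can be rebuilt by XOR-ing the survivors.
from functools import reduce
from operator import xor

data = [0b1010, 0b0110, 0b1111, 0b0001]   # X0..X3 (arbitrary example values)
parity = reduce(xor, data)                # X4 = X3 ^ X2 ^ X1 ^ X0

# Suppose the disk holding X1 fails; rebuild its contents from the
# parity disk and the remaining data disks:
survivors = [data[0], data[2], data[3], parity]
rebuilt = reduce(xor, survivors)
print(rebuilt == data[1])
```

This works because XOR-ing a value with itself cancels it out: every term except the lost one appears twice across `survivors` and `parity`.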
RAID 3 (bit-interleaved parity) (2)
RAID 4 (block-level parity) (1)
Works with strips, with a strip-for-strip parity written onto an extra drive
All the strips are exclusive-ORed together
If a drive crashes, the lost bytes can be recomputed from the parity drive
Performs poorly for small updates, since the parity must be recalculated on every write, and the parity drive may become a bottleneck
Uses an independent-access technique
X4(i) = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i)
X4'(i) = X3(i) ⊕ X2(i) ⊕ X1'(i) ⊕ X0(i)
       = X3(i) ⊕ X2(i) ⊕ X1'(i) ⊕ X0(i) ⊕ X1(i) ⊕ X1(i)
       = X4(i) ⊕ X1(i) ⊕ X1'(i)
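The small-write derivation above (the new parity equals the old parity XOR the old data XOR the new data) can be verified numerically; all byte values here are arbitrary examples:

```python
# Verify the RAID 4 small-write rule: X4' = X4 ^ X1 ^ X1'.
old_data, new_data = 0b0110, 0b1011   # X1(i) before and after the update
other = [0b1010, 0b1111, 0b0001]      # X0(i), X2(i), X3(i), unchanged

old_parity = old_data
for b in other:
    old_parity ^= b                   # X4 = X3 ^ X2 ^ X1 ^ X0

# Update using only old parity, old data, and new data (two reads, two writes):
new_parity = old_parity ^ old_data ^ new_data

# Same result as recomputing the parity from scratch with the new data:
full_recompute = new_data
for b in other:
    full_recompute ^= b
print(new_parity == full_recompute)
```

This is why a small update still costs two reads (old data, old parity) and two writes (new data, new parity), and why the single parity drive becomes the bottleneck that RAID 5 removes by distributing parity.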
RAID 4 (block-level parity) (2)
RAID 5 (block-level distributed parity)
Like RAID 4, but the parity strips are distributed uniformly over all the drives
RAID 6 (dual redundancy)
Disk Cache
Buffer in main memory for disk sectors Contains a copy of some of the sectors on the
disk When an I/O request is made, a check is made to
determine if the sector is in the disk cache
Replacement Policies
When a new sector is brought into the disk cache, one of the existing blocks must be replaced
Least Recently Used (LRU)
Least Frequently Used (LFU)
Least Recently Used
The block that has been in the cache the longest with no reference to it is replaced
The cache consists of a stack of blocks
When a block is referenced or brought into the cache, it is placed on the top of the stack
The most recently referenced block is on the top of the stack
The block on the bottom of the stack is removed when a new block is brought in
Blocks do not actually move around in main memory; a stack of pointers is used
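The pointer-stack idea can be sketched with Python's `OrderedDict` playing the role of the stack (the class name, capacity, and reference string are illustrative):

```python
# LRU sketch: an OrderedDict keeps block ids in reference order, so the
# least recently used block sits at the front and is the eviction victim.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # front = bottom of stack (oldest)

    def reference(self, block_id, data=None):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)      # move to top of stack
        else:
            if len(self.blocks) == self.capacity:
                self.blocks.popitem(last=False)    # evict bottom of stack
            self.blocks[block_id] = data

cache = LRUCache(3)
for sector in [1, 2, 3, 1, 4]:   # referencing 4 evicts 2, the LRU block
    cache.reference(sector)
print(list(cache.blocks))
```

Only the dictionary entries (the "pointers") are reordered; the cached data itself never moves, mirroring the note above about blocks staying put in main memory.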
Least Frequently Used
The block that has experienced the fewest references is replaced
A counter is associated with each block and incremented each time the block is accessed
The block with the smallest count is selected for replacement
Some blocks may be referenced many times in a short period of time, making the reference count misleading; frequency-based replacement techniques address this
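The counter scheme can be sketched the same way (again with illustrative names and values). Note how block 1's early burst of references protects it from eviction, which is exactly the distortion the frequency-based variants try to correct:

```python
# LFU sketch: one reference counter per cached block; the block with the
# smallest count is evicted when the cache is full.
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}              # block id -> reference count

    def reference(self, block_id):
        if block_id in self.counts:
            self.counts[block_id] += 1
        else:
            if len(self.counts) == self.capacity:
                victim = min(self.counts, key=self.counts.get)
                del self.counts[victim]              # evict smallest count
            self.counts[block_id] = 1

cache = LFUCache(3)
# Block 1 gets a burst of three references, then 2, 3, 4 arrive once each;
# one of the count-1 blocks is evicted to admit 4, never the bursty block 1.
for b in [1, 1, 1, 2, 3, 4]:
    cache.reference(b)
print(sorted(cache.counts))
```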
UNIX SVR4 I/O
Each individual device is associated with a special file
Two types of I/O: buffered and unbuffered
Figure 11.13 UNIX Buffer Cache Organization (buffers indexed by device# and block# through a hash table; a free-list pointer links the free buffers)
Linux I/O
Elevator scheduler
Maintains a single queue for disk read and write requests
Keeps the list of requests sorted by block number
The drive moves in a single direction to satisfy each request
Linux I/O
Deadline scheduler
Uses three queues for incoming requests
Read requests go to the tail of a read FIFO queue
Write requests go to the tail of a write FIFO queue
Each request has an expiration time
Linux I/O
Linux I/O
Anticipatory I/O scheduler
Delays for a short period of time after satisfying a read request, to see if a new nearby request will be made
Windows I/O
Basic I/O modules: cache manager, file system drivers, network drivers, hardware device drivers
Windows I/O