Friday, June 16, 2006
"In order to maintain secrecy, this posting will self-destruct in five
seconds. Memorize it, then eat your computer."
- Anonymous
g++ -R/opt/sfw/lib somefile.c
adds /opt/sfw/lib to the runtime shared-library lookup path.
Alternatively, edit your .bash_profile and add /opt/sfw/lib to LD_LIBRARY_PATH.
Scheduling in Unix - other versions also possible
Designed to provide good response to interactive processes
Uses multiple queues
Each queue is associated with a range of non-overlapping priority values
Scheduling in Unix - other versions also possible
Processes executing in user mode have positive values
Processes executing in kernel mode (doing system calls) have negative values
Negative values have higher priority; large positive values have the lowest.
Scheduling in Unix
Only processes that are in memory and ready to run are located on queues.
The scheduler searches the queues starting at the highest priority.
The first process on that queue is chosen and started; it runs for one time quantum (say 100 ms) or until it blocks.
If the process uses up its quantum, it is placed back on its queue; processes within the same priority range share the CPU in round robin.
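The search step above can be sketched in C. This is a minimal illustration, not any real kernel's code: the number of priority bands and the `queue_len` array standing in for the run queues are assumptions.

```c
#define NQUEUES 8   /* number of priority bands (an assumption) */

/* Sketch of the scheduler's search: queues[0] is the highest
 * priority band.  Return the index of the first non-empty queue,
 * or -1 if no process in memory is ready to run. */
int pick_queue(const int queue_len[NQUEUES])
{
    for (int q = 0; q < NQUEUES; q++)
        if (queue_len[q] > 0)
            return q;
    return -1;
}
```

The first process on the chosen queue then runs for one quantum; round robin within a band falls out of always taking the head of that band's queue and re-appending a process when its quantum expires.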
Scheduling in Unix
Every second each process's priority is recalculated (usually based on CPU usage) and the process is attached to the appropriate queue.
The accumulated CPU usage decays over time, so recent usage counts most.
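A common textbook formulation of this recalculation halves the accumulated CPU usage each second and derives the new priority from it. The sketch below follows that idea; the constants, field names, and the simple halving decay are illustrative assumptions, not any specific Unix kernel's code.

```c
/* Hypothetical decay-based recalculation, run once per second:
 * recent CPU usage dominates because the accumulated count is
 * halved each time; larger priority values mean lower priority. */
struct proc {
    int cpu_usage;   /* clock ticks charged to this process */
    int nice;        /* user-settable adjustment             */
    int priority;    /* recomputed value; larger = worse     */
};

void recalc_priority(struct proc *p, int base)
{
    p->cpu_usage /= 2;                             /* exponential decay */
    p->priority = base + p->cpu_usage + p->nice;
}
```

A CPU-bound process accumulates ticks and drifts to a worse priority; once it stops running, the halving pulls it back toward the base value, which is how the scheduler favors interactive processes.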
Scheduling in Unix
A process might have to block before a system call is complete. While waiting, it is put in a queue with a negative number (determined by the event it is waiting for).
Reason: allow the process to run immediately after each request is completed, so that it can make the next one quickly. If it is waiting for terminal input, it is an interactive process.
CPU-bound processes get service when all I/O-bound and interactive processes are blocked.
The Unix scheduler is based on a multi-level queue structure.
top provides an ongoing look at processor activity in real time.
nice: the default value is zero in UNIX; the allowed range is -20 to 20.
Windows
Priority-based preemptive scheduling. The selected thread runs until:
it is preempted by a higher-priority thread, its time quantum ends, it calls a blocking system call, or it terminates.
Win32 API
SetPriorityClass sets the priority class of all threads in the caller's process: real time, high, above normal, normal, below normal, and idle.
SetThreadPriority sets the priority of a thread relative to other threads in its process: time critical, highest, above normal, normal, below normal, lowest, and idle.
Win32 API
How many combinations? Six classes × seven thread priorities give 42 combinations, but the system has only 32 priorities, 0 - 31.
RR for multiple threads at same priority
A thread is selected irrespective of which process it belongs to.
Priorities 16 - 31 are called real time, but they are not truly real time.
Priorities 16 - 31 are reserved for the system itself.
Ordinary users are not allowed those priorities. Why?
Users run at priorities 1-15
Windows
Priority lowering depends on a thread's time quantum.
Priority boosting: when a thread is released from a wait operation
• Thread waiting for keyboard input
• Thread waiting for a disk operation
Goal: good response time for interactive threads
Windows
Currently active window gets a boost to increase its response time
Also keeps track of when a ready thread ran last
Priority boosting does not go above priority 15.
Example: Multiprogramming
5 jobs, each with 80% I/O wait.
If one of these jobs enters the system and is the only process there, it uses 12 seconds of CPU time for each minute: the CPU is busy 20% of the time.
If that job needs 4 minutes of CPU time, it will require at least 20 minutes in all to get the job done.
Example: Multiprogramming
Too simplistic a model: if the average job computes for only 20% of the time, then with five such processes the CPU should be busy all the time.
BUT…
Example: Multiprogramming
But that assumes the five processes are never all waiting for I/O at the same time.
CPU utilization = 1 - p^n
n = number of processes
p = fraction of time a process waits for I/O, so p^n is the probability that all n are waiting at the same time.
Approximation only: there might be dependencies between the processes.
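The formula can be evaluated directly; a small sketch (the function name is ours, and the loop avoids needing the math library):

```c
/* CPU utilization under the slide's model: the CPU is idle only
 * when all n processes wait for I/O at once, probability p^n. */
double cpu_utilization(double p, int n)
{
    double idle = 1.0;
    for (int i = 0; i < n; i++)
        idle *= p;          /* accumulate p^n */
    return 1.0 - idle;
}
```

For the earlier example (p = 0.8), one job yields 20% utilization; five jobs yield about 67%, noticeably less than the naive "5 × 20% = busy all the time" estimate.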
CPU utilization as a function of number of processes in memory
Producer-Consumer Problem
Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
unbounded-buffer: places no practical limit on the size of the buffer.
bounded-buffer: assumes that there is a fixed buffer size.
Bounded-Buffer – Shared-Memory Solution
Shared data
var n;
type item = … ;
var buffer: array [0..n–1] of item;
    in, out: 0..n–1;
Bounded-Buffer – Shared-Memory Solution
Producer process
repeat
…
produce an item in nextp
…
while (in+1) mod n = out do no-op;
buffer[in] := nextp;
in := (in+1) mod n;
until false;
Bounded-Buffer (Cont.)
Consumer process
repeat
while in = out do no-op;
nextc := buffer[out];
out := (out+1) mod n;
…
consume the item in nextc
…
until false;
Bounded-Buffer (Cont.)
Solution is correct, but … it can only ever use n–1 of the n buffer slots.
Bounded-Buffer
Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
Bounded-Buffer
Producer process
item nextProduced;
while (1) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Bounded-Buffer
Consumer process
item nextConsumed;
while (1) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
Bounded Buffer
The statements
counter++;
counter--;
must be performed atomically.
Atomic operation means an operation that completes in its entirety without interruption.
Bounded Buffer
The statement “counter++” may be implemented in machine language as:
register1 = counter
register1 = register1 + 1
counter = register1
The statement “counter--” may be implemented as:
register2 = counter
register2 = register2 - 1
counter = register2
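The danger is that these two register sequences can interleave. The sketch below replays the unlucky interleaving deterministically, with plain local ints standing in for the two threads' registers; the function name is ours.

```c
/* Deterministic replay of the lost-update interleaving: both
 * "threads" load counter before either stores back, so one of
 * the two updates is silently overwritten. */
int lost_update_demo(int counter)
{
    int register1 = counter;       /* T1 loads, for counter++ */
    int register2 = counter;       /* T2 loads, for counter-- */
    register1 = register1 + 1;
    register2 = register2 - 1;
    counter = register1;           /* T1 stores counter+1      */
    counter = register2;           /* T2 overwrites with counter-1 */
    return counter;                /* the increment is lost    */
}
```

Starting from 5, one increment plus one decrement should leave 5, but this interleaving yields 4; a different interleaving could yield 6. That is exactly why the final value "depends upon which process finishes last."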
Process Synchronization
Cooperating processes executing in a system can affect each other
Concurrent access to shared data may result in data inconsistency
Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
Race Condition
Problem in the previous example: one process started using the shared variable before another process was finished with it.
Race condition: the situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last.
To prevent race conditions, concurrent processes must be synchronized.
Two processes want to access shared memory at the same time. Debugging is very difficult ...
Example ...
Solution #1
Solution #2
Solution #3
The Critical-Section Problem
n processes all competing to use some shared data
Each process has a code segment, called critical section, in which the shared data is accessed.
Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
No processes may be simultaneously inside their critical sections
No assumptions may be made about the speeds or number of CPUs
No process running outside the critical region may block other processes
No process waits forever to enter its critical section
Only 2 processes, P0 and P1
General structure of process Pi (other process Pj)
do {
entry section
critical section
exit section
remainder section
} while (1);
Processes may share some common variables to synchronize their actions.
Solution to Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
Solution to Critical-Section Problem
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical section, then only those processes that are not executing in the remainder section can participate in the decision on which will enter its critical section next.
Solution to Critical-Section Problem
3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes.
Assume that basic machine-language instructions such as load and store are executed atomically.
Algorithm 1
The two processes share an integer variable turn, which indicates whose turn it is to enter the critical section; turn is initialized to 0 (or 1).
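A runnable sketch of this strict-alternation idea, modeling the two processes as threads, is below. The function names are ours, and C11 atomics stand in for the slide's plain shared variable so the busy-wait is well defined.

```c
#include <pthread.h>
#include <stdatomic.h>

/* Algorithm 1 (strict alternation): process i busy-waits until
 * turn == i, runs its critical section, then passes the turn. */
atomic_int turn = 0;        /* initialized to 0: P0 may enter first */
int shared_counter = 0;     /* touched only inside the critical section */

static void process(int i, int iterations)
{
    for (int k = 0; k < iterations; k++) {
        while (atomic_load(&turn) != i)
            ;                         /* entry section: busy wait */
        shared_counter++;             /* critical section */
        atomic_store(&turn, 1 - i);   /* exit section: pass the turn */
    }
}

static void *p0(void *arg) { (void)arg; process(0, 1000); return NULL; }
static void *p1(void *arg) { (void)arg; process(1, 1000); return NULL; }
```

Mutual exclusion holds, but note the drawback: if P1 dawdles in its remainder section, P0 cannot enter twice in a row even though the critical section is free, so the progress requirement is violated.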