2-1
• The critical section
  – A piece of code that cannot be interrupted during execution
• Examples of critical sections
  – Modifying a block of memory shared by multiple kernel services
    • Process table
    • Ready queue, waiting queue, delay queue, etc.
  – Modifying global variables used by the kernel
• Entering a critical section – disable global interrupts: disable()
• Leaving a critical section – enable global interrupts: enable()
Chapter 2 Real-Time Systems Concepts
2-2
2-3
• Resource
  – e.g., I/O, CPU, memory, printer, …
• Shared resource
  – A resource that is shared among tasks
  – Each task must gain exclusive access to the shared resource to prevent data corruption (mutual exclusion)
• Task and thread (both terms mean the same thing here)
  – A simple program that thinks it has the CPU all to itself
  – Each task is an infinite loop
  – A task has five states: Dormant, Ready, Running, Waiting, and ISR (Interrupted)
2-4
[Figure: multitasking memory layout – each of TASK #1 … TASK #n has its own stack in memory and a Task Control Block holding the saved stack pointer (SP), priority, and status; the CPU registers hold the context of the currently running task.]
2-5
[Figure: task state transitions]
• DORMANT → READY: OSTaskCreate(), OSTaskCreateExt()
• READY → RUNNING: OSStart(), OSIntExit(), OS_TASK_SW()
• RUNNING → READY: task is preempted
• RUNNING → WAITING: OSMBoxPend(), OSQPend(), OSSemPend(), OSTaskSuspend(), OSTimeDly(), OSTimeDlyHMSM()
• WAITING → READY: OSMBoxPost(), OSQPost(), OSQPostFront(), OSSemPost(), OSTaskResume(), OSTimeDlyResume(), OSTimeTick()
• RUNNING → ISR: interrupt; ISR → RUNNING: OSIntExit()
• READY, RUNNING, WAITING → DORMANT: OSTaskDel()
2-6
Context Switch (or Task Switch)
• Each task has its own stack to store the associated information
• The purpose of a context switch
  – To make sure that the task, which is forced to give up the CPU, can resume operation later without loss of any vital information or data
• The procedure of an interrupt

    MP_addr(n)   : instruction(n)
    MP_addr(n+1) : instruction(n+1)
    MP_addr(n+2) : instruction(n+2)

    interrupt: push FLAGS, push return address MP_addr(n+1)
    ISR:       push CPU registers
               ; isr code
               pop CPU registers
               return            ; jump back to MP_addr(n+1)
2-7
• Foreground/background programming
  – Context (CPU registers and interrupted program address) is saved and restored using one stack
• Multitasking context switch
  – Uses each task's own stack

[Figure: before the switch – the stack of Task 1 (to be switched from) holds its local variables and optional return address, with the current CPU stack pointer at its top; the stack of Task 2 (to be switched to) holds the CPU registers, code address, local variables, and optional return address saved when Task 2 was switched out.]
2-8
• During the context switch

[Figure: the current CPU registers and current program address are pushed onto Task 1's stack, and the current CPU stack pointer is saved into the stack-pointer field of Task 1's TCB; Task 2's stack still holds the CPU registers, code address, and local variables saved when Task 2 was switched out, with its saved stack pointer in Task 2's TCB.]
2-9
• After the context switch

[Figure: the stack of Task 2 (current) now has the current CPU stack pointer at its top, above Task 2's local variables and optional return address; the stack of Task 1 (suspended) holds the CPU registers, code address, and local variables saved when Task 1 was switched out.]
2-10
The Operations of Context Switch
• Relies on an interrupt (hardware or software) to do the context switch
  – Push return address
  – Push FLAGS register
• ISR (context switch routine)
  – Push all registers
  – Store SP to the TCB (task control block)
  – Select the ready task with the highest priority (scheduler)
  – Restore SP from the TCB of the newly selected task
  – Pop all registers
  – iret (interrupt return, which pops FLAGS and the return address)
• Switch to the new task
2-11
• Basic components of a task
  – A set of dynamic properties
    • Task slices (optional)
    • Stack pointer
    • Task status (current, suspended, etc.)
    • Task priority
  – A portion of support memory
    • Task stack
• All basic components except the support memory are stored in a data structure called the Task Control Block (TCB)
2-12
• A task definition example

    /* Task class */
    class far Task : public _node {
    public:
        void far (*StartAdd)(void far *arg); /* starting address */
        void far *Arg;                       /* argument for initialization */
        unsigned SP, BP, SS;                 /* stack pointer */
        unsigned char far *Stack;            /* stack starting address */
        unsigned StackSize;                  /* stack size */
        unsigned tid;                        /* task ID */
        int slice, slice_left;               /* task slice */
        int status;                          /* task status */
        int type;                            /* task type, optional */
2-13
        void far setup(                      /* task initialization */
            void far (*_start_add)(void far *arg),
            void far *arg,
            void far *_stack,
            unsigned stack_size,
            unsigned prio,
            int _type,                       /* optional */
            unsigned rate,                   /* optional */
            void far (*ret_add)(void)        /* optional */
        );
    };
2-14
Task Creation
• Setting up static attributes in the TCB
  – Assigning a task ID
  – Assigning the task type (optional)
  – Assigning task slices (optional)
  – Assigning initialization arguments
• Setting up initial values of the dynamic properties
  – Assigning the initial task priority
  – Assigning initial task slices
  – Assigning "suspended" to the task status
2-15
• Task stack creation
  – Allocating a memory segment for the task stack according to the given stack length
  – Assigning the stack starting address to the bottom of the stack
• Task stack assignment
  – The task stack content is assigned such that resuming the task is like entering a subroutine
    • Load the CPU stack pointer (SP) with the stack starting address
    • Push the return address onto the stack (optional)
    • Push the task starting address onto the stack
    • Push the CPU registers onto the stack
    • Store the current CPU stack pointer to the stack pointer field in the TCB
2-16
[Figure: a typical initial task stack – from the stack starting address downward: task return address (optional), task starting address, then the CPU register images; the stack pointer stored in the TCB points at the last pushed register.]
2-17
Context Switch
• The purpose of a context switch
  – To make sure that the task, which is forced to give up the CPU, can resume operation later without loss of any vital information or data
• The procedure of an interrupt

    MP_addr(n)   : instruction(n)
    MP_addr(n+1) : instruction(n+1)
    MP_addr(n+2) : instruction(n+2)

    interrupt: push FLAGS, push return address MP_addr(n+1)
    ISR:       push CPU registers
               ; isr code
               pop CPU registers
               return            ; jump back to MP_addr(n+1)
2-18
• Foreground/background programming
  – Context (CPU registers and interrupted program address) is saved and restored using one stack
• Multitasking context switch
  – Uses each task's own stack

[Figure: before the switch – the stack of Task 1 (to be switched from) holds its local variables and optional return address, with the current CPU stack pointer at its top; the stack of Task 2 (to be switched to) holds the CPU registers, code address, local variables, and optional return address saved when Task 2 was switched out.]
2-19
• During the context switch

[Figure: the current CPU registers and current program address are pushed onto Task 1's stack, and the current CPU stack pointer is saved into the stack-pointer field of Task 1's TCB; Task 2's stack still holds the CPU registers, code address, and local variables saved when Task 2 was switched out, with its saved stack pointer in Task 2's TCB.]
2-20
• After the context switch

[Figure: the stack of Task 2 (current) now has the current CPU stack pointer at its top, above Task 2's local variables and optional return address; the stack of Task 1 (suspended) holds the CPU registers, code address, and local variables saved when Task 1 was switched out.]
2-21
Scheduler
• Also called the dispatcher
• Determines which task will run next
• In a priority-based kernel, control of the CPU is always given to the highest priority task ready to run
• Two types of priority-based kernels
  – Non-preemptive
  – Preemptive
2-22
Non-preemptive
• A task must explicitly give up control of the CPU
• Also called cooperative multitasking
  – Tasks cooperate with each other to share the CPU
• Advantages
  – Can use non-reentrant functions without fear of corruption by another task (less need to guard shared data with semaphores)
  – Interrupt latency is typically low
  – Task-level response time is much lower than in foreground/background systems
    • Worst case is the duration of the longest task
2-23
[Figure 2.4 Non-preemptive kernel: an ISR interrupts the low priority task and makes the high priority task ready, but the high priority task runs only after the low priority task voluntarily relinquishes the CPU.]
2-24
Preemptive Kernel
• Used when high system responsiveness is required
• The highest priority task ready to run is always given control of the CPU
  – When a task makes a higher priority task ready to run, the current task is preempted and the higher priority task is immediately given control of the CPU
  – If an ISR makes a higher priority task ready, then when the ISR completes, the interrupted task is suspended and the new higher priority task is resumed
• The use of non-reentrant functions requires protection with mutual-exclusion semaphores
2-25
[Figure 2.5 Preemptive kernel: an ISR interrupts the low priority task and makes the high priority task ready; when the ISR completes, the high priority task runs immediately, preempting the low priority task.]
2-26
Reentrancy
• Reentrant function
  – Can be used by more than one task concurrently without fear of data corruption
  – Can be interrupted at any time and resumed at a later time without loss of data
  – Uses local variables

    /* Listing 2.1 Reentrant function */
    void strcpy(char *dest, char *src)
    {
        while (*dest++ = *src++) {
            ;
        }
        *dest = NUL;
    }

    /* Listing 2.2 Non-reentrant function */
    int Temp;

    void swap(int *x, int *y)
    {
        Temp = *x;
        *x   = *y;
        *y   = Temp;
    }
2-27
[Figure 2.6 Non-reentrant function: the low priority task (x = 1, y = 2) calls swap(&x, &y) and loads Temp = 1; an ISR preempts it via OSIntExit() and the O.S. runs the high priority task, which calls swap(&z, &t) with z = 3, t = 4, overwriting Temp = 3; when the low priority task resumes its swap, it finishes with Temp == 3 instead of 1, corrupting its data.]
2-28
Task Priority
• Static priorities
  – The priority of each task does not change during the application's execution
• Dynamic priorities
  – The priority of a task can be changed during the application's execution
2-29
[Figure 2.7 Priority inversion problem: Task 3 (L) gets the semaphore; Task 1 (H) preempts Task 3 and then blocks trying to get the semaphore; Task 2 (M) preempts Task 3 and runs, delaying Task 1 even though Task 1 has higher priority (priority inversion); only after Task 3 resumes and releases the semaphore can Task 1 continue.]
2-30
[Figure 2.8 Kernel that supports priority inheritance: when Task 1 (H) tries to get the semaphore held by Task 3 (L), the priority of Task 3 is raised to Task 1's, so Task 2 (M) cannot preempt it; Task 3 releases the semaphore, Task 1 resumes and completes, and the inversion is bounded by the length of Task 3's critical section.]
2-31
Assigning Task Priorities
• Rate Monotonic Scheduling (RMS)
  – Tasks with the highest rate of execution are given the highest priority
• RMS makes a number of assumptions:
  – All tasks are periodic (they occur at regular intervals).
  – Tasks do not synchronize with one another, share resources, or exchange data.
  – The CPU must always execute the highest priority task that is ready to run. In other words, preemptive scheduling must be used.
• If the following inequality is met, all HARD real-time task deadlines will be met:

      Σ(i = 1..n) Ei / Ti ≤ n (2^(1/n) − 1)

  where Ei is the worst-case execution time of task i and Ti is its period
• CPU utilization of all time-critical tasks should be less than 70%; the other 30% can be used by non-time-critical tasks
2-32
Mutual Exclusion
• When multiple tasks access the same data (a critical section), each task must have exclusive access to it to avoid contention and data corruption
• Methods of gaining exclusive access
  – Disabling interrupts
  – Performing test-and-set operations
  – Disabling scheduling
  – Using semaphores
2-33
• Disabling and enabling interrupts

    Disable interrupts;
    Access the resource (read/write from/to variables);
    Reenable interrupts;

• uC/OS-II provides two macros to disable/enable interrupts

    void Function(void)
    {
        OS_ENTER_CRITICAL();
        .
        .    /* You can access shared data in here */
        .
        OS_EXIT_CRITICAL();
    }
2-34
• Test-and-Set

    Disable interrupts;
    if ('Access Variable' is 0) {
        Set variable to 1;
        Reenable interrupts;
        Access the resource;
        Disable interrupts;
        Set the 'Access Variable' back to 0;
        Reenable interrupts;
    } else {
        Reenable interrupts;
        /* You don't have access to the resource, try back later; */
    }
2-35
• Disabling and Enabling the Scheduler
  – If no variables or data structures are shared with an ISR, we can disable and enable scheduling instead of interrupts
  – Two or more tasks can then share data without contention
  – While the scheduler is locked, interrupts remain enabled
    • If an ISR makes a higher priority task ready, that task will run when OSSchedUnlock() is called

    void Function(void)
    {
        OSSchedLock();
        .
        .    /* You can access shared data in here */
        .
        OSSchedUnlock();
    }
2-36
Semaphores
• Semaphores are used to
  – Control access to a shared resource (mutual exclusion)
  – Signal the occurrence of an event
  – Allow two tasks to synchronize their activities
• Two types of semaphores
  – Binary semaphores
  – Counting semaphores
• Three operations on a semaphore
  – INITIALIZE (also called CREATE)
  – WAIT (also called PEND)
  – SIGNAL (also called POST): releases the semaphore; the task activated is either
    • The highest priority task waiting for the semaphore (uC/OS-II supports this option), or
    • The first task that requested the semaphore (FIFO)
2-37
• Accessing shared data by obtaining a semaphore

    OS_EVENT *SharedDataSem;

    void Function(void)
    {
        INT8U err;

        OSSemPend(SharedDataSem, 0, &err);
        .
        .    /* You can access shared data in here (interrupts are recognized) */
        .
        OSSemPost(SharedDataSem);
    }
2-38
• Control of shared resources (mutual exclusion)
  – e.g., a single display device shared by two tasks:

    task1( ... )
    {
        ...
        printf("This is task 1.");
        ...
    }

    task2( ... )
    {
        ...
        printf("This is task 2.");
        ...
    }

  – Without exclusive usage of certain resources (e.g., shared memory), the two messages may be interleaved character by character:

    ThiThsi siis ttasaks k12..
2-39
• Solution: use a semaphore initialized to 1

[Figure: TASK 1 and TASK 2 each acquire the printer semaphore before printing ("I am task #1!", "I am task #2!"), so only one task uses the printer at a time.]

• Each task must know about the existence of the semaphore in order to access the resource
  – In some situations it is better to encapsulate the semaphore
2-40
    INT8U CommSendCmd(char *cmd, char *response, INT16U timeout)
    {
        Acquire port's semaphore;
        Send command to device;
        Wait for response (with timeout);
        if (timed out) {
            Release semaphore;
            return (error code);
        } else {
            Release semaphore;
            return (no error);
        }
    }

[Figure 2.11 Hiding a semaphore from tasks: TASK1 and TASK2 both call CommSendCmd(), which internally acquires and releases the RS-232C driver's semaphore.]
2-41
Counting semaphore: buffer manager

    BUF *BufReq(void)
    {
        BUF *ptr;

        Acquire a semaphore;
        Disable interrupts;
        ptr         = BufFreeList;
        BufFreeList = ptr->BufNext;
        Enable interrupts;
        return (ptr);
    }

    void BufRel(BUF *ptr)
    {
        Disable interrupts;
        ptr->BufNext = BufFreeList;
        BufFreeList  = ptr;
        Enable interrupts;
        Release semaphore;
    }

[Figure: Task1 and Task2 call BufReq()/BufRel(); the counting semaphore is initialized to 10, the number of buffers in the linked free list BufFreeList.]
2-42
Deadlock
• To avoid a deadlock, tasks should
  – Acquire all resources before proceeding
  – Acquire the resources in the same order
  – Release the resources in the reverse order
• Use timeouts when acquiring a semaphore
  – When a timeout occurs, the returned error code prevents the task from thinking it has obtained the resource
• Deadlocks generally occur in large multitasking systems, not in embedded systems
2-43
Synchronization
• A task can be synchronized with an ISR or with another task
  – ISR → POST → semaphore → PEND → TASK
  – TASK → POST → semaphore → PEND → TASK
• Tasks synchronizing their activities with each other
  – Each task POSTs one semaphore and PENDs on the other
2-44
Event Flags (uC/OS-II does not support them)
• Used when a task needs to synchronize with the occurrence of multiple events
  – DISJUNCTIVE SYNCHRONIZATION (OR): the task is resumed when any one of the events has occurred
  – CONJUNCTIVE SYNCHRONIZATION (AND): the task is resumed only when all of the events have occurred

[Figure: a TASK and an ISR post events; another task pends on the events combined with OR (disjunctive) or AND (conjunctive) through a semaphore.]
2-45
• Common events can be used to signal multiple tasks

[Figure: a TASK and an ISR post common events (8, 16, or 32 bits); multiple tasks pend on them, each combining the events with OR or AND through a semaphore.]
2-46
Intertask Communication
• A task or an ISR may need to communicate information to another task
• There are two ways of intertask communication
  – Through global data (protected by disabling/enabling interrupts or by a semaphore)
    • A task can only communicate information to an ISR through global variables
    • A task is not aware that a global variable has been changed (unless it uses a semaphore or polls the variable periodically)
  – Sending messages
    • Message mailbox or message queue
2-47
Message Mailboxes
• A task desiring a message from an empty mailbox is suspended and placed on the waiting list until a message is received
• The kernel allows a task waiting for a message to specify a timeout
• When a message is deposited into the mailbox, the waiting task that receives it is chosen either
  – Priority based, or
  – FIFO

[Figure: a task POSTs a message into the mailbox; a waiting task PENDs on it, optionally with a timeout of 10 ticks.]
2-48
Message Queues
• Used to send one or more messages to a task
• Basically an array of mailboxes
• The first message inserted in the queue will be the first message extracted from the queue (FIFO); alternatively, Last-In-First-Out (LIFO) order may be used

[Figure: an ISR POSTs messages into the queue when an interrupt occurs; a task PENDs on the queue, optionally with a timeout of 10 ticks.]
2-49
Interrupts
• When an interrupt is recognized, the CPU saves
  – The return address (of the interrupted task)
  – The flags
• The CPU then jumps to the Interrupt Service Routine (ISR)
• Upon completion of the ISR, the program returns to
  – The background, for a foreground/background system
  – The interrupted task, for a non-preemptive kernel
  – The highest priority task ready to run, for a preemptive kernel

[Figure: interrupt nesting over time – Interrupt #1 suspends the TASK to run ISR #1; Interrupt #2 suspends ISR #1 to run ISR #2; Interrupt #3 suspends ISR #2 to run ISR #3; each ISR resumes the one it interrupted as it completes.]
2-50
Interrupt Latency, Response, and Recovery
• Interrupt latency
  – Maximum amount of time interrupts are disabled + time to start executing the first instruction in the ISR
• Interrupt response
  – Foreground/background and non-preemptive kernel: interrupt latency + time to save the CPU's context
  – Preemptive kernel: interrupt latency + time to save the CPU's context + execution time of the kernel ISR entry function
• Interrupt recovery
  – Foreground/background and non-preemptive kernel: time to restore the CPU's context + time to execute the return-from-interrupt instruction
  – Preemptive kernel: time to determine if a higher priority task is ready + time to restore the CPU's context of the highest priority task + time to execute the return-from-interrupt instruction
2-51
[Figure 2.20 Foreground/background: interrupt request → interrupt latency → CPU context saved → user ISR code → CPU context restored → background resumes; interrupt response spans the latency plus the context save, and interrupt recovery spans the context restore.]
2-52
[Figure 2.21 Non-preemptive kernel: same sequence as foreground/background, but the interrupted TASK is resumed after the user ISR code completes and the CPU context is restored.]
2-53
[Figure 2.22 Preemptive kernel: after the interrupt request, latency, and context save, the kernel's ISR entry function runs before the user ISR code; on exit, the kernel's ISR exit function decides whether to restore the context of the interrupted TASK (case A) or of a higher priority task made ready by the ISR (case B), so interrupt recovery differs between the two cases.]
2-54
Nonmaskable Interrupts (NMIs)
• An NMI cannot be disabled
  – Interrupt latency, response, and recovery are minimal
  – Interrupt latency: time to execute the longest instruction + time to start executing the NMI ISR
  – Interrupt response: interrupt latency + time to save the CPU's context
  – Interrupt recovery: time to restore the CPU's context + time to execute the return-from-interrupt instruction

[Figure: disabling nonmaskable interrupts externally – the NMI interrupt source is gated through an AND gate controlled by an output port before reaching the processor's NMI input.]

[Figure: signaling a task from a nonmaskable interrupt – the NMI ISR (every 150 us) issues a maskable interrupt by writing to an output port every 150 us × 40 = 6 ms; that ISR POSTs a semaphore on which the task PENDs.]
2-55
Clock Tick
• A special interrupt that occurs periodically
• Allows the kernel to delay tasks for an integral number of clock ticks
• Provides timeouts when tasks are waiting for events to occur
• The faster the tick rate, the higher the overhead imposed on the system
2-56
[Figure 2.25 Delaying a task for one tick (case 1): with a 20 ms tick, the delayed task resumes only after the tick ISR and all higher priority tasks have run, so the actual delays t1, t2, t3 measure 17 ms, 19 ms, and 27 ms rather than exactly 20 ms.]
2-57
[Figure 2.26 Delaying a task for one tick (case 2): because the delay request can come just before a tick interrupt, the actual delay can be much shorter than one period; here t1, t2, t3 measure 6 ms, 19 ms, and 27 ms with a 20 ms tick.]
2-58
[Figure 2.27 Delaying a task for one tick (case 3): when the tick ISR and higher priority tasks run longer than one tick period, the task that requested a one-tick (20 ms) delay may resume only after 26 ms or even 40 ms, missing a tick entirely.]
2-59
• Ways to reduce the execution jitter of a delayed task:
– Increase the time between tick interrupts.
– Rearrange task priorities.
– Avoid using floating-point math (if you must, use single precision).
– Get a compiler that performs better code optimization.
– Write time-critical code in assembly language.
– If possible, upgrade to a faster microprocessor in the same family, e.g., 8086 to 80186, 68000 to 68020, etc.