CPU Scheduling


CPU scheduling
• CPU scheduling allows one process to use the CPU while another process's execution is on hold (in the waiting state), for example because it is waiting for I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.
• Whenever the CPU becomes idle, the operating system must
select one of the processes in the ready queue to be executed.
• The selection process is carried out by the short-term scheduler
(or CPU scheduler).
• The scheduler selects from among the processes in memory
that are ready to execute, and allocates the CPU to one of them.
• Process execution consists of a cycle of CPU execution and I/O wait; a process alternates between these two states.
• CPU burst: the period spent executing on the CPU.
• I/O burst: the period spent waiting for I/O to complete.
Dispatcher
• The dispatcher is the module that gives control of the
CPU to the process selected by the short-term
scheduler. This function involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to
restart that program.
• The dispatcher should be as fast as possible, given that
it is invoked during every process switch. The time it
takes for the dispatcher to stop one process and start
another running is known as the dispatch latency.
• CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, an I/O request, or a wait for the termination of one of its child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.
• In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3.
• When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive; otherwise the scheduling scheme is preemptive.
SCHEDULING TYPES
• Preemptive:
In preemptive scheduling, the scheduler may preempt a running low-priority process at any time, for example when a higher-priority process enters the ready state.
• Non-preemptive:
In non-preemptive scheduling, once a process enters the running state it cannot be preempted; it keeps the CPU until it terminates or blocks (for example, for I/O).
Scheduling Criteria
• CPU Utilization:
Keep the CPU as busy as possible. Utilization ranges from 0 to 100%; in practice it typically ranges from 40% to 90%. Higher CPU utilization generally means better system performance.
• Throughput:
Throughput is the rate at which processes are completed per unit of time.
• Turnaround time:
This is how long a process takes from submission to completion. It is the difference between completion time and arrival time:
Turn Around Time = Completion Time - Arrival Time
• Waiting time:
Waiting time is the total time spent waiting in the ready queue. It is the difference between turnaround time and burst time:
Waiting Time = Turn Around Time - Burst Time
• Response time:
Response time is the time it takes to start
responding from submission time. It is
calculated as the amount of time it takes from
when a request was submitted until the first
response is produced.
• Fairness:
Each process should have a fair share of CPU.
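
The criteria above reduce to simple arithmetic once the completion, arrival, burst, and first-run times of a process are known. A minimal sketch in Python (the function name and the sample numbers are illustrative, not from the slides):

```python
# Minimal sketch: computing the per-process criteria defined above.

def metrics(arrival, burst, completion, first_run):
    turnaround = completion - arrival   # TAT = Completion Time - Arrival Time
    waiting = turnaround - burst        # WT  = TAT - Burst Time
    response = first_run - arrival      # RT  = time of first CPU allocation - AT
    return turnaround, waiting, response

# Example: a process arrives at t=2 with a 5-unit burst, first runs at t=4,
# and completes at t=11.
print(metrics(arrival=2, burst=5, completion=11, first_run=4))  # (9, 4, 2)
```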
Scheduling Algorithm Optimization Criteria

• Max CPU utilization


• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
SCHEDULING ALGORITHM
• First Come First Serve(FCFS) Scheduling
• Shortest-Job-First(SJF) Scheduling
• Priority Scheduling
• Round Robin(RR) Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling
FCFS SCHEDULING
first come first serve.
• Jobs are executed on first come, first serve basis.
• Easy to understand and implement.
• Poor in performance as average wait time is high.
• FCFS is non-preemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it terminates or makes an I/O request. This is particularly troublesome for time-sharing systems, where each user needs to get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.
DISADVANTAGE
Convoy effect: short processes get stuck behind a long process. All processes wait for one big process to get off the CPU, which results in lower CPU utilization.
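
The convoy effect can be seen directly in the numbers of the example that follows: the same three bursts give very different average waiting times depending on whether the long job runs first or last. A small illustrative sketch (assuming all jobs arrive at time 0):

```python
# FCFS average waiting time when jobs run in the given order (all arrive at t=0).

def avg_waiting_fcfs(bursts):
    waits, elapsed = [], 0
    for b in bursts:              # FCFS: run in submission order
        waits.append(elapsed)     # each job waits for all jobs before it
        elapsed += b
    return sum(waits) / len(waits)

print(avg_waiting_fcfs([24, 3, 3]))  # long job first -> 17.0
print(avg_waiting_fcfs([3, 3, 24]))  # long job last  -> 3.0
```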
First-Come, First-Served (FCFS) Scheduling

Process Burst Time


P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont)
Suppose that the processes arrive in the order
P2 , P3 , P1
• The Gantt chart for the schedule is:
P2 P3 P1

0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
First-Come, First-Served (FCFS)

• Example: Three processes arrive in order P1, P2, P3.
  P1 burst time: 24
  P2 burst time: 3
  P3 burst time: 3

• Waiting Time
  P1: 0
  P2: 24
  P3: 27

P1 P2 P3
0  24 27 30

• Turnaround Time / Completion Time:
  P1: 24
  P2: 27
  P3: 30

• Average Waiting Time: (0+24+27)/3 = 51/3 = 17 milliseconds
• Average Completion Time: (24+27+30)/3 = 81/3 = 27 milliseconds
Find out average waiting time and turn around time

Process Burst time


P1 21
P2 3
P3 6
P4 2
Example: First-Come, First-Served (FCFS)

Assuming all processes arrive at time 0:
Average turnaround time = (21+24+30+32)/4 = 107/4 = 26.75
Average waiting time = (0+21+24+30)/4 = 75/4 = 18.75


Find out average waiting time and turn
around time
Process Arrival time Burst time
P1 0 4
P2 1 3
P3 2 1
P4 3 2
P5 4 5
Process  Arrival time  Burst time  CT  TAT  WT
P1       0             4           4   4    0
P2       1             3           7   6    3
P3       2             1           8   6    5
P4       3             2           10  7    5
P5       4             5           15  11   6

P1 P2 P3 P4 P5
0  4  7  8  10 15
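
The table above can be reproduced with a short FCFS simulation. This is an illustrative sketch, not code from the slides; it assumes the CPU sits idle whenever the next process has not yet arrived:

```python
# Minimal FCFS simulation: serve processes in arrival order, non-preemptively.

def fcfs(processes):                  # processes: list of (name, arrival, burst)
    time, rows = 0, []
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at)          # idle until the process arrives, if needed
        time += bt                    # run the whole burst
        ct = time
        rows.append((name, ct, ct - at, ct - at - bt))   # CT, TAT, WT
    return rows

# The example above: completion times come out as 4, 7, 8, 10 and 15.
for row in fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1),
                 ("P4", 3, 2), ("P5", 4, 5)]):
    print(row)
```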
First Come First Serve

Process id AT BT
1 0 2
2 3 1
3 5 6

Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time
First Come First Serve

Process id AT BT CT TAT WT
1 0 2 2 2 0
2 3 1 4 1 0
3 5 6 11 6 0

Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time
Shortest Job First
• Processes with the least execution time are selected first.
• The CPU is assigned to the process with the smallest CPU burst time.
• SJF:
• Non-preemptive: the CPU is allocated to the process with the least burst time, and that process keeps the CPU until it completes.
• Preemptive: when a new process enters the queue, the scheduler compares its burst time with the remaining time of the running process. If the remaining time of the running process is longer, the CPU is taken from it and given to the new process.
Shortest Job First

ADVANTAGES
• It gives superior turnaround-time performance, because a short job is given preference over a longer running job.
• Throughput is high.

DISADVANTAGES
• Elapsed time (i.e., execution time completed so far) must be recorded, which results in additional overhead on the processor.
• Starvation is possible for longer processes.
Shortest Job First

SJF can be preemptive or non-preemptive.

A preemptive SJF algorithm will preempt the currently executing process if the next CPU burst of a newly arrived process is shorter than what is left of the currently executing process.
A non-preemptive SJF algorithm will allow the currently running process to finish. Preemptive SJF scheduling is sometimes called the Shortest Remaining Time First (SRTF) algorithm.
Practice: Shortest Job First (Non-Preemption)

• P1 burst time: 15
• P2 burst time: 8
• P3 burst time: 10
• P4 burst time: 3
Shortest Job First (Non-Preemption)
• What if their order had been P4, P2, P3 and P1?
Actual order:
• P1 burst time: 15
• P2 burst time: 8
• P3 burst time: 10
• P4 burst time: 3
• Waiting Time
  P4: 0
  P2: 3
  P3: 11
  P1: 21
• Completion Time:
  P4: 3
  P2: 3+8 = 11
  P3: 3+8+10 = 21
  P1: 3+8+10+15 = 36
• Average Waiting Time: (0+3+11+21)/4 = 35/4 = 8.75
• Average Turnaround Time: (3+11+21+36)/4 = 71/4 = 17.75
SHORTEST JOB FIRST (NON-PREEMPTIVE)

Process  Arrival time  Burst time
P1       1             7
P2       2             5
P3       3             1
P4       4             2
P5       5             8

Find out the completion time, waiting time and turnaround time.
Process  Arrival time  Burst time  CT  TAT  WT
P1       1             7           8   7    0
P2       2             5           16  14   9
P3       3             1           9   6    5
P4       4             2           11  7    5
P5       5             8           24  19   11

P1 P3 P4 P2 P5
0  1  8  9  11 16 24
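
The same completion times can be checked with a short non-preemptive SJF simulation. An illustrative sketch (not from the slides): at each scheduling point it picks the arrived process with the shortest burst and runs it to completion.

```python
# Minimal non-preemptive SJF simulation.

def sjf_nonpreemptive(processes):          # list of (name, arrival, burst)
    pending = list(processes)
    time, rows = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        name, at, bt = min(ready, key=lambda p: p[2])   # shortest burst first
        pending.remove((name, at, bt))
        time += bt
        rows.append((name, time, time - at, time - at - bt))  # CT, TAT, WT
    return rows

# The example above: completion times come out as P1=8, P3=9, P4=11, P2=16, P5=24.
print(sjf_nonpreemptive([("P1", 1, 7), ("P2", 2, 5), ("P3", 3, 1),
                         ("P4", 4, 2), ("P5", 5, 8)]))
```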
Shortest Job First (Preemptive)
Q1. Consider the following processes with A.T and B.T:
Process  A.T  B.T
P1       0    9
P2       1    3
P3       2    9
Calculate the completion time, turnaround time and average waiting time.
Shortest Job First (Preemptive)
Q2. Consider the following processes with A.T and B.T:
Process  A.T  B.T
P1       0    5
P2       1    3
P3       2    3
P4       3    1
Calculate the completion time, turnaround time and average waiting time.

Gantt chart: P1 (0-1), P2 (1-4), P4 (4-5), P3 (5-8), P1 (8-12)
Shortest Job First (Preemptive)

Process  Arrival time  Burst time
P1       0             7
P2       1             5
P3       2             3
P4       3             1
P5       4             2
P6       5             1

Calculate the completion time, turnaround time and average waiting time.
Process  Arrival time  Burst time  CT  TAT  WT
P1       0             7           19  19   12
P2       1             5           13  12   7
P3       2             3           6   4    1
P4       3             1           4   1    0
P5       4             2           9   5    3
P6       5             1           7   2    1

P1 P2 P3 P4 P3 P3 P6 P5 P2 P1
0  1  2  3  4  5  6  7  9  13 19
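
The preemptive variant (SRTF) can be simulated by stepping one time unit at a time and always running the process with the least remaining work. This is an illustrative sketch; ties are broken by the order in which processes are listed, which matches the worked example above.

```python
# Minimal SRTF (preemptive SJF) simulation, one time unit per step.

def srtf(processes):                         # list of (name, arrival, burst)
    remaining = {name: bt for name, at, bt in processes}
    arrival = {name: at for name, at, bt in processes}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                        # no process has arrived yet
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            completion[current] = time
    return completion

# The example above: completion times come out as
# P4=4, P3=6, P6=7, P5=9, P2=13, P1=19.
print(srtf([("P1", 0, 7), ("P2", 1, 5), ("P3", 2, 3),
            ("P4", 3, 1), ("P5", 4, 2), ("P6", 5, 1)]))
```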
Shortest Job First (Preemptive)

Process  Arrival time  Burst time
P1       8             1
P2       5             1
P3       2             7
P4       4             3
P5       2             8
P6       4             2
P7       3             5

Calculate the completion time, turnaround time and average waiting time.
Process  Arrival time  Burst time  CT  TAT  WT
P1       8             1           9   1    0
P2       5             1           7   2    1
P3       2             7           21  19   12
P4       4             3           11  7    4
P5       2             8           29  27   19
P6       4             2           6   2    0
P7       3             5           15  12   7

P3 P7 P6 P6 P2 P4 P1 P4 P7 P3 P5
0  2  3  4  5  6  7  8  9  11 15 21 29
PRIORITY SCHEDULING

SJF is a special case of the general priority scheduling algorithm.

A priority (an integer) is associated with each process. The CPU is allocated to the process with the highest priority. Often the smallest integer is treated as the highest priority, although in the examples below a larger number indicates a higher priority. Equal-priority processes are scheduled in first-come, first-served order. Priority scheduling can be preemptive or non-preemptive.

Non-preemptive Priority Scheduling

In this type of scheduling, the CPU is allocated to the process with the highest priority after the currently running process completes.

Advantage
Good response time for the highest-priority processes.
Disadvantage
Starvation is possible for the lowest-priority processes.
Non-preemptive priority scheduling

Process  Arrival time  Burst time  Priority
P1       0             4           2
P2       1             2           4
P3       2             3           6
P4       3             5           10
P5       4             1           8
P6       5             4           12
P7       6             6           9

Calculate the completion time, turnaround time and average waiting time.
Process  Arrival time  Burst time  Priority  CT  TAT  WT
P1       0             4           2         4   4    0
P2       1             2           4         25  24   22
P3       2             3           6         23  21   18
P4       3             5           10        9   6    1
P5       4             1           8         20  16   15
P6       5             4           12        13  8    4
P7       6             6           9         19  13   7

P1 P4 P6 P7 P5 P3 P2
0  4  9  13 19 20 23 25
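
Non-preemptive priority scheduling follows the same pattern as the earlier simulations, except that the arrived process with the highest priority (here, the largest priority number) is chosen. An illustrative sketch reproducing the completion order of the example above:

```python
# Minimal non-preemptive priority scheduling simulation
# (larger priority number = higher priority, as in the example above).

def priority_np(processes):           # list of (name, arrival, burst, priority)
    pending = list(processes)
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                  # idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = max(ready, key=lambda p: p[3])   # highest priority wins
        pending.remove(job)
        time += job[2]                 # run to completion
        order.append((job[0], time))   # (name, completion time)
    return order

# Expected completion order: P1=4, P4=9, P6=13, P7=19, P5=20, P3=23, P2=25.
print(priority_np([("P1", 0, 4, 2), ("P2", 1, 2, 4), ("P3", 2, 3, 6),
                   ("P4", 3, 5, 10), ("P5", 4, 1, 8), ("P6", 5, 4, 12),
                   ("P7", 6, 6, 9)]))
```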
Preemptive priority scheduling

Process  Arrival time  Burst time  Priority
P1       0             4           2
P2       1             2           4
P3       2             3           6
P4       3             5           10
P5       4             1           8
P6       5             4           12
P7       6             6           9

Calculate the completion time, turnaround time and average waiting time.
Process  Arrival time  Burst time  Priority  CT  TAT  WT
P1       0             4           2         25  25   21
P2       1             2           4         22  21   19
P3       2             3           6         21  19   16
P4       3             5           10        12  9    4
P5       4             1           8         19  15   14
P6       5             4           12        9   4    0
P7       6             6           9         18  12   6

P1 P2 P3 P4 P4 P6 P4 P7 P5 P3 P2 P1
0  1  2  3  4  5  9  12 18 19 21 22 25
Round Robin Scheduling
• Round Robin is a preemptive process scheduling algorithm.
• Each process is given a fixed time to execute, called a time quantum.
• Once a process has executed for the given time period, it is preempted and another process executes for its time period.
• Context switching is used to save the states of preempted processes.
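
A Round Robin schedule can be generated with a simple ready queue. The sketch below is illustrative; it uses one common convention, consistent with the worked examples that follow, in which processes arriving during a time slice are queued ahead of the process whose quantum just expired.

```python
# Minimal Round Robin simulation with a fixed time quantum.
from collections import deque

def round_robin(processes, quantum):         # list of (name, arrival, burst)
    jobs = sorted(processes, key=lambda p: p[1])
    remaining = {n: bt for n, at, bt in jobs}
    queue, time, i, completion = deque(), 0, 0, {}
    while remaining:
        while i < len(jobs) and jobs[i][1] <= time:    # admit arrivals
            queue.append(jobs[i][0])
            i += 1
        if not queue:
            time = jobs[i][1]                          # idle until next arrival
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(jobs) and jobs[i][1] <= time:    # arrivals during the slice
            queue.append(jobs[i][0])
            i += 1
        if remaining[name]:
            queue.append(name)                         # re-queue unfinished job
        else:
            del remaining[name]
            completion[name] = time
    return completion

# With quantum = 4 and the processes of the examples that follow, completion
# times come out as P1=4, P2=19, P3=10, P4=11, P5=21, P6=18.
print(round_robin([("P1", 0, 4), ("P2", 1, 5), ("P3", 2, 2),
                   ("P4", 3, 1), ("P5", 4, 6), ("P6", 5, 3)], quantum=4))
```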
Round Robin Scheduling, Time Quantum = 2

Process  Arrival time  Burst time  CT  TAT  WT
P1       0             4           8   8    4
P2       1             5           18  17   12
P3       2             2           6   4    2
P4       3             1           9   6    5
P5       4             6           21  17   11
P6       5             3           19  14   11

P1 P2 P3 P1 P4 P5 P2 P6 P5 P2 P6 P5
0  2  4  6  8  9  11 13 15 17 18 19 21
Round Robin Scheduling, Time Quantum = 4

Process  Arrival time  Burst time  CT  TAT  WT
P1       0             4           4   4    0
P2       1             5           19  18   13
P3       2             2           10  8    6
P4       3             1           11  8    7
P5       4             6           21  17   11
P6       5             3           18  13   10

P1 P2 P3 P4 P5 P6 P2 P5
0  4  8  10 11 15 18 19 21
Round Robin Scheduling, Time Quantum = 3

Process  Arrival time  Burst time
P1       5             5
P2       4             4
P3       3             7
P4       1             9
P5       2             2
P6       6             3

P4 P5 P3 P2 P4 P1 P6 P3 P2 P4 P1 P3
0  1  4  6  9  12 15 18 21 24 25 28 30 31
Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
In multilevel queue scheduling, processes are permanently assigned to one queue, based on some property of the process such as memory size, process priority, or process type.
The algorithm chooses the process from the highest-priority occupied queue and runs that process either preemptively or non-preemptively.
Each queue has its own scheduling algorithm or policy.
Possibility I
If each queue has absolute priority over lower-priority queues, then no process in a lower-priority queue can run unless all higher-priority queues are empty.
For example, no process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.
Possibility II
If there is a time slice between the queues, then each queue gets a certain share of the CPU time, which it can then schedule among the processes in its queue. For instance:
80% of the CPU time to the foreground queue using RR.
20% of the CPU time to the background queue using FCFS.
Since processes do not move between queues, this policy has the advantage of low scheduling overhead, but it is inflexible.
Multilevel feedback queue-scheduling algorithm

A multilevel feedback queue-scheduling algorithm allows a process to move between queues.

It uses several ready queues and associates a different priority with each queue.
The algorithm chooses the process with the highest priority from the occupied queues and runs that process either preemptively or non-preemptively. If a process uses too much CPU time, it is moved to a lower-priority queue.

Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
Example:
A process entering the ready queue is placed in queue 0.
If it does not finish within its 8 millisecond time quantum, it is moved to the tail of queue 1.
If it still does not complete there, it is preempted and placed into queue 2.
Processes in queue 2 run on an FCFS basis, but only when queue 0 and queue 1 are empty.
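
The demotion rule of the three-queue example described above can be sketched for a single CPU burst. The code below is illustrative only: the 8 millisecond quantum for queue 0 comes from the example, while the 16 millisecond quantum assumed for queue 1 is a common choice and not stated on the slide; it shows only which queues a burst passes through, not the interleaving of multiple processes.

```python
# Which queues does a single CPU burst visit under the three-queue example?
# Queue 0 and queue 1 are round robin; queue 2 is FCFS (no quantum).

def mlfq_levels(burst, quanta=(8, 16)):      # assumed quanta for queues 0 and 1
    levels, remaining = [], burst
    for level, q in enumerate(quanta):
        levels.append(level)
        used = min(q, remaining)
        remaining -= used
        if remaining == 0:
            return levels                    # finished within this queue's quantum
    levels.append(len(quanta))               # demoted to the FCFS queue 2
    return levels

print(mlfq_levels(5))    # [0]        finishes within queue 0's 8 ms quantum
print(mlfq_levels(20))   # [0, 1]     demoted once, finishes in queue 1
print(mlfq_levels(40))   # [0, 1, 2]  ends up in the FCFS queue 2
```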
Process Queue Burst Time Arrival Time
P1 Q1 4 0
P2 Q2 2 1
P3 Q2 3 2
P4 Q1 2 2
P5 Q1 5 8
Queue  Priority (preemptive)  Scheduling
Q1     1                      RR (TQ = 3)
Q2     2                      FCFS

P1 P4 P1 P2 P5 P3
0  3  5  6  8  13 16

• Waiting Time
P1: (0-0) + 2 = 2
P2: 6 - 1 = 5
P3: 13 - 2 = 11
P4: 3 - 2 = 1
P5: 8 - 8 = 0
Average Waiting Time = (2+5+11+1+0)/5 = 19/5 = 3.8
Process Queue Burst Time Arrival Time
P1 Q1 3 0
P2 Q2 5 2
P3 Q2 6 2
P4 Q1 2 4
P5 Q1 4 5

Queue Scheduling
Q1 FCFS
Q2 SJF(NP)

Queues are scheduled using RR (TQ = 5)

P1 P2 P4 P5 P3 P5 P3
0  3  8  10 13 18 19 20

• Waiting time
P1: 0 - 0 = 0
P2: 3 - 2 = 1
P3: (13 - 2) + 1 = 12
P4: 8 - 4 = 4
P5: (10 - 5) + 5 = 10
Average Waiting Time = 27/5 = 5.4
LOAD BALANCING
Multicore Processors
Recent computer hardware places multiple processor cores on the same physical chip, resulting in a multicore processor.
When a processor accesses memory, it spends a significant amount of time waiting for the data to become available. This situation is known as a memory stall.

To remedy this situation, many recent hardware designs implement multithreaded processor cores, in which two (or more) hardware threads are assigned to each core. That way, if one thread stalls while waiting for memory, the core can switch to another thread.

There are two ways to multithread a processing core: coarse-grained and fine-grained multithreading.
With coarse-grained multithreading, a thread executes on a processor until a long-latency event such as a memory stall occurs. Because of the delay caused by the long-latency event, the processor must switch to another thread to begin execution. However, the cost of switching between threads is high, since the instruction pipeline must be flushed before the other thread can begin execution on the processor core. Once this new thread begins execution, it begins filling the pipeline with its instructions.
Fine-grained (or interleaved) multithreading switches between threads at a much finer level of granularity, typically at the boundary of an instruction cycle. The architectural design of fine-grained systems includes logic for thread switching, so the cost of switching between threads is small.
Real-time operating systems

•Priority-based scheduler
•Guaranteed maximum time to service interrupts
•Ability to ensure that processes are fully loaded into memory and stay in memory
•Consistent and efficient memory allocation
•Preemptable system calls
Process types

Terminating processes: A terminating process may be considered as one that runs and then exits (terminates). We are interested in the amount of time that it takes to run to completion.
Its deadline is the time at which it should complete all its work.
Its compute time is the amount of CPU time it needs.

Nonterminating processes: For processes such as video and audio servers as well as editors, we are not interested in the terminating time of these processes but rather in the time between events. For example, we would like our audio server to fill a 4K byte audio buffer every 500 milliseconds, or we would like our video server to provide a new video frame every 33.3 milliseconds. For these processes, the compute time is the CPU time that the process needs to compute its periodic event and the deadline is the time at which it must have the results ready. Nonterminating processes may be divided into two classes:
Periodic: A periodic process has a fixed frequency at which it needs to run. For example, a video decompressor may have to run 30 times per second at 33.3 millisecond intervals.
Aperiodic: Aperiodic processes have no fixed, cyclical frequency between events. Event interrupts may occur sporadically and event computation times may vary dramatically. For purposes of scheduling, we use the shortest period and the longest computation time to play it safe.
This assures us that we will have enough CPU time to complete the task. Moreover, for periodic tasks, the deadline must be within the period. If the compute time is C, the deadline is D, and the period of the process is T, the following relation must hold:
C ≤ D ≤ T
Upon entering the system, each periodic task is assigned a priority inversely based on its period: the shorter the period, the higher the priority; the longer the period, the lower the priority.

Rate-monotonic analysis is a technique for assigning static priorities to periodic processes.

The rate-monotonic scheduling algorithm schedules periodic tasks using a static priority policy with preemption. If a lower-priority process is running and a higher-priority process becomes available to run, it will preempt the lower-priority process.
Here is an example of rate-monotonic priority assignment. Suppose we have the following processes:
A runs every 50 msec for 20 msec
B runs every 50 msec for 10 msec
C runs every 30 msec for 10 msec

With the priorities assigned in this order (A highest), the schedule does not satisfy the rate-monotonic approach: it is not able to meet the deadline for process C.
HOW CAN IT ACHIEVE THE DEADLINE FOR ALL THE PROCESSES?
This ordering does not give us the desired performance because, while processes A and B get scheduled acceptably, process C is late the second time it is scheduled and misses an entire period. Now let us reverse the priorities, as rate-monotonic assignment would dictate: the shortest-period process, C, gets the highest priority.
We measure the CPU utilization of a process Pi as the ratio of its burst to its period, ti/pi.
For example, the CPU utilization of process A is 20/50 = 0.4.
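
The same utilization figures can be combined into a quick schedulability check. The sketch below is illustrative; it compares the total utilization against the Liu-Layland bound n(2^(1/n) - 1), which is a sufficient (but not necessary) test for rate-monotonic scheduling, so exceeding the bound only means that a detailed timeline analysis, as on the slides, is needed.

```python
# CPU utilization of the example processes and the rate-monotonic bound.

tasks = {"A": (20, 50), "B": (10, 50), "C": (10, 30)}   # (compute time, period)

utilization = sum(c / t for c, t in tasks.values())     # sum of ti/pi
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)                           # Liu-Layland bound

print(f"total utilization = {utilization:.3f}")          # about 0.933
print(f"RM bound for n={n} tasks = {bound:.3f}")         # about 0.780
```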
