
PROCESS MANAGEMENT

Name: Imran Ahsanullah
Roll #: sp20mcs0023

Name: Rahoul
Roll #: sp20-bsse-1025
PROCESSES AND PROGRAMS
What is a Process?

 A program is a passive entity that does not perform any actions by itself; it has to
be executed if the actions it calls for are to take place.
 A process is an execution of a program. It actually performs the actions specified in
a program.

 The instructions, data and stack of the program constitute its address space.
 A process comprises six components:
 ID: a unique id assigned by the OS (recorded in the PCB/TCB)
 Code: the code of the program (also called the text of the program)
 Data: the data used in the execution of the program, including data from files
 Stack: contains parameters of functions and procedures called during execution
of the program, and their return addresses
 Resources: the set of resources allocated by the OS
 CPU State: composed of the contents of the PSW and the general-purpose registers
(GPRs) of the CPU (we assume that the stack pointer is maintained in a GPR)
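As an illustration only (not an actual kernel data structure), the six components can be pictured as fields of one record per process. The Python sketch below is hypothetical; the field names and sample values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Toy record mirroring the six process components listed above."""
    pid: int                                        # ID: unique id assigned by the OS
    code: str                                       # Code: reference to the program text
    data: dict = field(default_factory=dict)        # Data: program data, including data from files
    stack: list = field(default_factory=list)       # Stack: parameters and return addresses
    resources: set = field(default_factory=set)     # Resources: resources allocated by the OS
    cpu_state: dict = field(default_factory=dict)   # CPU state: PSW and GPR contents

# Hypothetical example values.
pcb = ProcessControlBlock(pid=23, code="a.out",
                          cpu_state={"psw": 0, "gprs": [0] * 8})
print(pcb.pid, pcb.cpu_state)
```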
Processes & Threads Overview

 Processes & Programs


 What is a Process?
 Relationships between Processes & Programs
 Child Processes
 Concurrency and Parallelism

 Implementing Processes
 Process States and State Transitions
 Process Context and the Process Control Block
 Context Save, Scheduling and Dispatching
 Event Handling
 Sharing, Communication and Synchronization between Processes
Processes and Programs
Relationship between Processes and Programs

 A program consists of a set of functions and procedures.
 The OS doesn’t know anything about the nature of a program, including functions
and procedures in its code.
 It knows only what it is told through system calls. The rest is under the control of
the program.
 The functions of a program may be separate processes, or they may constitute the code part of a
single process.

 Two kinds of relationships can exist between processes and programs.

IMPLEMENTING PROCESSES
 In the operating system’s view, a process is a unit of computational work. Hence the kernel’s
primary task is to control operation of processes to provide effective utilization of the
computer system.
 Accordingly, the kernel allocates resources to a process, protects the process and its
resources from interference by other processes, and ensures that the process gets to use
the CPU until it completes its operation.
 Fundamental functions of the kernel for controlling processes.
 Context Save - Saving CPU state and information concerning resources of
the process whose operation is interrupted.
 Event Handling - Analyzing the condition that led to an interrupt, or the request
by a process that led to a system call, and taking appropriate actions.
 Scheduling - Selecting the process to be executed next on the CPU.
 Dispatching - Setting up access to resources of the scheduled process and loading its
saved CPU state in the CPU to begin or resume its operation.
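The toy Python class below is a hypothetical sketch of these four control functions, not a real kernel; the method names, PCB fields, and event strings are assumptions made for illustration.

```python
class ToyKernel:
    """Hypothetical sketch of the four kernel control functions."""

    def __init__(self, ready_queue):
        self.ready_queue = ready_queue                  # PCB-like dicts of ready processes

    def save_context(self, process, cpu_state):
        # Context save: store the CPU state of the interrupted process in its PCB.
        process["cpu_state"] = cpu_state

    def handle_event(self, event):
        # Event handling: analyze the interrupt or system call and act on it.
        print("handling event:", event)

    def schedule(self):
        # Scheduling: select the process to execute next (here simply the queue head).
        return self.ready_queue.pop(0)

    def dispatch(self, process):
        # Dispatching: load the saved CPU state so the process begins or resumes.
        print("dispatching P%d with state %s" % (process["pid"], process["cpu_state"]))


kernel = ToyKernel(ready_queue=[{"pid": 2, "cpu_state": {"pc": 0x2000}}])
interrupted = {"pid": 1, "cpu_state": None}
kernel.save_context(interrupted, {"pc": 0x1000, "gprs": [0] * 8})
kernel.handle_event("I/O completion")
kernel.dispatch(kernel.schedule())
kernel.ready_queue.append(interrupted)    # interrupted process rejoins the ready queue
```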
Implementing Processes
Process States and State Transitions
 An operating system uses the notion of a process state to keep track of what a process is
doing at any moment, i.e., it describes the nature of the process's current activity.
 The kernel uses process states to simplify its own functioning, so the number of process
states and their names may vary across OSs.
 The four fundamental states of a process are Ready, Running, Blocked, and Terminated.
Implementing Processes
Process States and State Transitions Continue…

 A state transition for a process P is a change in its state. A state transition is caused by the
occurrence of some event such as the start or end of an I/O operation. When the event
occurs, the kernel determines its influence on activities in processes, and accordingly
changes the state of an affected process.
 The figure diagrams the fundamental state transitions for a process. A new process is put in
the ready state after resources required by it have been allocated. It may enter the running,
blocked, and ready states a number of times as a result of events.
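A minimal sketch of such state transitions, using the commonly named states (ready, running, blocked, terminated); the event names and the transition table are illustrative assumptions, not the full set an actual kernel would use.

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"
    TERMINATED = "terminated"

# Hypothetical transition table: (current state, event) -> new state.
TRANSITIONS = {
    (State.READY,   "dispatched"):  State.RUNNING,
    (State.RUNNING, "preempted"):   State.READY,
    (State.RUNNING, "io_request"):  State.BLOCKED,
    (State.BLOCKED, "io_complete"): State.READY,
    (State.RUNNING, "exit"):        State.TERMINATED,
}

def transition(state, event):
    # Events that do not affect a process leave its state unchanged.
    return TRANSITIONS.get((state, event), state)

assert transition(State.READY, "dispatched") is State.RUNNING
assert transition(State.RUNNING, "io_request") is State.BLOCKED
assert transition(State.BLOCKED, "io_complete") is State.READY
```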
Implementing Processes
Suspended Processes
 A kernel needs additional states to describe the nature of the activity of a process that is
not in one of the four fundamental states described earlier. Consider a process that was in
the ready or the blocked state when it got swapped out of memory: such a suspended process
is typically described by states such as ready swapped or blocked swapped until it is
swapped back into memory.
Concurrency and Parallelism

 Concurrency means executing multiple tasks at the same time but not necessarily
simultaneously.
 For example, two tasks may execute concurrently on a single-core CPU: the CPU decides to run one
task first and then the other, or to interleave parts of each task, and so on.
 Two tasks can start, run, and complete in overlapping time periods, i.e., Task-2 can start even before Task-1
gets completed. It all depends on the system architecture.
 Concurrency: Interruptability

 Parallelism is when tasks literally run at the same time, e.g., on a multicore processor
 Parallelism means that an application splits its tasks up into smaller subtasks which can be processed in
parallel, for instance on multiple CPUs at the exact same time
 Parallelism does not require two separate tasks to exist. It physically runs parts of a task OR multiple tasks at the
same time using the multiple cores of a CPU, by assigning one core to each task or sub-task
 Parallelism: Independentability
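A small Python sketch of the distinction (illustrative only; the task function and workload sizes are assumptions): the thread pool interleaves the two tasks within one interpreter, while the process pool can run them on separate cores at the same time.

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def task(n):
    # A small CPU-bound computation standing in for "a task".
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Concurrency: the two tasks overlap in time inside one process; the scheduler
    # interleaves them, so they do not necessarily run at the same instant.
    with ThreadPoolExecutor(max_workers=2) as pool:
        print(list(pool.map(task, [200_000, 200_000])))

    # Parallelism: two worker processes can run the tasks on different CPU cores
    # at literally the same time.
    with ProcessPoolExecutor(max_workers=2) as pool:
        print(list(pool.map(task, [200_000, 200_000])))
```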
Process Scheduling Goals

1. User-based scheduling goals


 Turnaround time – the time to execute a process from start to completion, i.e., the sum of:
 waiting time
 executing time
 time spent blocked waiting for an event, for example waiting for I/O
 Waiting time – time spent in the ready queue; directly impacted by the scheduling algorithm
 Response time – time from submission of a request to production of the first response
 Predictability – how well a user can predict the completion time of a process

2. System-based scheduling goals


 Throughput – number of processes that complete their execution per unit time
 CPU utilization – percentage of time the CPU is busy
 Fairness – all processes should be treated equally, unless priorities are assigned
 Balance – There should be a balance between I/O bound and CPU bound processes
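A tiny worked example of the user-based metrics above (the arrival, waiting, and burst values are hypothetical):

```python
# Hypothetical single process: arrives at t=0, waits 2 time units in the ready
# queue, then runs a 5-unit CPU burst with no further blocking.
arrival, waiting, burst = 0, 2, 5
completion = arrival + waiting + burst      # 7
turnaround = completion - arrival           # 7 = waiting time + executing time
print("turnaround =", turnaround, "waiting =", waiting)
```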
Process Scheduling Levels

1. Long-term scheduling
 Long-term scheduling is done when a new process is created. It initiates processes and so
controls the degree of multi-programming (number of processes in memory).

2. Medium-term scheduling
 Medium-term scheduling involves suspending or resuming processes by
swapping (rolling) them out of or into memory

3. Short-term scheduling
 Short-term (process or CPU) scheduling occurs most frequently and decides which process to
execute next
Scheduling Algorithms

1. First Come First Serve (FCFS)


 The arriving process is added onto the tail of the queue and the process at the head of the
queue is dispatched to the processor for execution
 A process that has been interrupted by any means does not maintain its previous order in
the ready queue
Example
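A minimal FCFS sketch in Python (the arrival times and burst lengths are hypothetical, chosen only for illustration):

```python
# Hypothetical workload: (pid, arrival time, CPU burst); the numbers are made up.
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

def fcfs(procs):
    time, results = 0, []
    for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):  # serve in arrival order
        time = max(time, arrival) + burst            # runs to completion once dispatched
        turnaround = time - arrival
        results.append((pid, turnaround, turnaround - burst))      # (pid, turnaround, waiting)
    return results

print(fcfs(processes))   # [('P1', 5, 0), ('P2', 7, 4), ('P3', 14, 6)]
```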
Scheduling Algorithms

2. Priority Scheduling

The processes are executed according to their priority.

 Priority-number-based scheduling
 Preference is given to a process based on the priority number assigned to it.

 Shortest Process Next (SPN)

 The process with the shortest execution time is executed first. It is a non-preemptive
scheduling algorithm.
Scheduling Algorithms
3. Shortest Job First (SJF)
Shortest Job First scheduling works on the process with the shortest burst time or duration
first.
 This is the best approach to minimize waiting time.
 This is used in batch systems.
 It is of two types:
 Non Pre-emptive
 Pre-emptive
 To successfully implement it, the burst time/duration time of the processes should
be known to the processor in advance, which is practically not feasible all the time.
 This scheduling algorithm is optimal if all the jobs/processes are available at the
same time. (either Arrival time is 0 for all, or Arrival time is same for all)
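A non-preemptive SJF sketch in Python (hypothetical workload; all processes arrive at t=0, the case in which SJF is optimal, as noted above):

```python
def sjf(procs):
    """Non-preemptive SJF sketch: whenever the CPU is free, run the shortest available burst."""
    pending, time, order = list(procs), 0, []
    while pending:
        arrived = [p for p in pending if p[1] <= time] or [min(pending, key=lambda p: p[1])]
        job = min(arrived, key=lambda p: p[2])        # shortest CPU burst among arrived processes
        pending.remove(job)
        pid, arrival, burst = job
        time = max(time, arrival) + burst
        order.append((pid, time - arrival))           # (pid, turnaround time)
    return order

# Hypothetical workload: (pid, arrival time, CPU burst).
print(sjf([("P1", 0, 6), ("P2", 0, 2), ("P3", 0, 8), ("P4", 0, 3)]))
# [('P2', 2), ('P4', 5), ('P1', 11), ('P3', 19)]
```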
Scheduling Algorithms
4. Preemptive (SJF)
Shortest Job First preemptive scheduling is also known as Shortest Remaining Time (SRT) or
Shortest Next Time (SNT).
 The choice of preemptive and non preemptive arises when a new process arrives at
the ready queue and a previous process is not finished and is being executed. If the
next CPU burst of new process is shorter than current executing process, then in
preemptive version, it will stop that process and will start executing the newly
arrived process.
 While, in the non-preemptive version of SJF, even if the arriving process is shorter than the
currently executing process, the current process is not stopped. After the current process
finishes, the new process gets its turn from the queue. This is the key difference between the
preemptive and non-preemptive versions of SJF.
 The current state of the process is saved by the context switch and the CPU is given
to another process.
 Note – if two processes have the same execution time, they are scheduled on a first-come,
first-served basis.
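A preemptive SJF (shortest remaining time) sketch in Python, simulated one time unit at a time; the workload values are hypothetical and chosen so the later, shorter process preempts the earlier one:

```python
def srtf(procs):
    """Preemptive SJF (shortest remaining time) sketch, simulated one time unit at a time."""
    remaining = {pid: burst for pid, _, burst in procs}
    arrival = {pid: arr for pid, arr, _ in procs}
    time, turnaround = 0, {}
    while remaining:
        ready = [pid for pid in remaining if arrival[pid] <= time]
        if not ready:                                  # CPU idle until the next arrival
            time += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])   # shortest remaining time wins
        remaining[pid] -= 1                            # run the chosen process for one unit
        time += 1
        if remaining[pid] == 0:                        # finished: record its turnaround time
            del remaining[pid]
            turnaround[pid] = time - arrival[pid]
    return turnaround

# Hypothetical workload: P2's shorter burst preempts P1 when P2 arrives at t=1.
print(srtf([("P1", 0, 7), ("P2", 1, 3)]))   # {'P2': 3, 'P1': 10}
```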
Scheduling Algorithms
4. Non-Preemptive (SJF)
In non-preemptive scheduling, once the CPU is allocated to a process, the process holds
it until it reaches a waiting state or terminates.

 Scheduling decisions are made only when a process terminates or switches from the running
to the waiting state. Once the resources (CPU cycles) are allocated to a process,
the process holds the CPU until it terminates or reaches a waiting state.
 Non-preemptive scheduling does not interrupt a process running on the CPU in the
middle of its execution. Instead, it waits until the process completes its CPU burst
and then allocates the CPU to another process.
Scheduling Algorithms

5. Round Robin Scheduling


 Time slices are assigned to each process in equal portions and in circular order,
handling all processes without priority
 The design is such that whenever a process finishes before its time slice expires, the
timer is stopped and an interrupt is raised, so that the next process can be
scheduled immediately
 Round robin is a pre-emptive algorithm
 The CPU is shifted to the next process after fixed interval time, which is called time
quantum/time slice.
 Round robin is a hybrid model which is clock-driven
 It is a real-time algorithm in the sense that it responds to an event within a specific time limit.
 Round robin is one of the oldest, fairest, and easiest algorithms.
 Widely used scheduling method in traditional OS
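A round-robin sketch in Python (hypothetical workload, all processes assumed to arrive at t=0, with an assumed time quantum of 2 units); note how a process that finishes before its slice expires releases the CPU early, as described above:

```python
from collections import deque

def round_robin(procs, quantum=2):
    """Round-robin sketch: each process gets an equal time slice in circular order."""
    queue = deque(procs)                      # (pid, remaining burst); assume all arrive at t=0
    time, completion = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)         # a process may finish before its slice expires
        time += run
        if remaining > run:
            queue.append((pid, remaining - run))   # slice expired: back to the tail of the queue
        else:
            completion[pid] = time                 # finished within (or exactly at) its slice
    return completion

# Hypothetical workload: (pid, CPU burst).
print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)]))   # {'P3': 5, 'P2': 8, 'P1': 9}
```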
Thank You
