
A process is an instance of a computer program that is being sequentially executed. Every program running on a computer, be it a background service or an application, is a process. A single CPU core can run only one process at a time, and process management is an operating system's way of dealing with running multiple processes on a single CPU. On a machine with one processor core, multitasking is done by switching between processes quickly (known as context switching). Process management involves computing and distributing CPU time as well as other resources.

Question: Explain Context Switching

Steps involved in a context switch:

1. The state of the first process must be saved somehow so that, when the scheduler gets back to the execution of the first process, it can restore this state and continue. This is accomplished as follows.
2. All the data required to define the state of the process - all the registers that the process may be using, especially the program counter, plus any other operating-system-specific data that may be necessary - is stored in one data structure, called a switch frame or a process control block (PCB).
3. The PCB of the first process is updated and saved.
4. The OS then loads the PCB and context of the second process. In doing so, the program counter from the PCB is loaded, and execution continues in the new process.

During a context switch, the next process is chosen from a queue or queues. Process and thread priority can influence which process continues execution, with processes of the highest priority checked first for ready threads to execute.
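
For illustration, the save-and-restore step can be sketched in C. The pcb structure below is a deliberately tiny, hypothetical one (a program counter plus a few registers), and the "CPU state" is faked with ordinary variables; a real kernel saves far more state and performs the switch in assembly.

    #include <stdio.h>

    /* Simplified, hypothetical PCB: a real kernel stores many more fields. */
    struct pcb {
        unsigned long pc;        /* saved program counter            */
        unsigned long regs[8];   /* saved general-purpose registers  */
        int           pid;       /* process identifier               */
    };

    /* Sketch of a context switch: save the outgoing process's state
     * into its PCB (steps 1-3), then restore the incoming process's
     * state from its PCB (step 4). */
    void context_switch(struct pcb *out, struct pcb *in,
                        unsigned long *cpu_pc, unsigned long cpu_regs[8]) {
        out->pc = *cpu_pc;                  /* save outgoing state   */
        for (int i = 0; i < 8; i++)
            out->regs[i] = cpu_regs[i];

        *cpu_pc = in->pc;                   /* load incoming state   */
        for (int i = 0; i < 8; i++)
            cpu_regs[i] = in->regs[i];
    }

    int main(void) {
        unsigned long pc = 0x1000, regs[8] = {0};
        struct pcb p1 = {.pid = 1}, p2 = {.pid = 2, .pc = 0x2000};

        context_switch(&p1, &p2, &pc, regs);
        printf("resumed pid %d at pc 0x%lx\n", p2.pid, pc); /* 0x2000 */
        return 0;
    }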

Question: Important Definitions

1. Time slice - The period of time for which a process is allowed to run uninterrupted in a pre-emptive multitasking operating system is called a time slice. A context switch happens at the end of each time slice.
2. Degree of multiprogramming - The number of processes that can reside in main memory and be in progress at the same time is known as the degree of multiprogramming.

Question: Explain different Process States

1. Created or New

When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. This admission is approved or delayed by the process scheduler. In most desktop computer systems, admission is approved automatically; in real-time operating systems, however, it may be delayed.

2. Ready

A "ready" or "waiting" process has been loaded into main memory and is
awaiting execution on a CPU (to be context switched onto the CPU). There may be
many "ready" processes at any one point of the systems execution - for example, in a
one processor system, only one process can be executing at any one time, and all
other "concurrently executing" processes will be waiting for execution.

A ready queue is used in CPU scheduling. Modern computers are capable of running many different programs or processes at the same time. However, the CPU can execute only one process at a time. Processes that are ready for the CPU are kept in a queue of "ready" processes. Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue.
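
A ready queue can be as simple as a FIFO of process IDs. The ring buffer below is a minimal sketch of that idea, not how any particular kernel implements it; the QMAX capacity and the rq_* names are made up for the example.

    #include <stdio.h>
    #include <stddef.h>

    #define QMAX 64

    /* Minimal FIFO ready queue of process IDs, as a ring buffer. */
    struct ready_queue {
        int    pids[QMAX];
        size_t head, tail, count;
    };

    /* Add a process that has become ready; returns 0 on success. */
    int rq_enqueue(struct ready_queue *q, int pid) {
        if (q->count == QMAX) return -1;      /* queue full */
        q->pids[q->tail] = pid;
        q->tail = (q->tail + 1) % QMAX;
        q->count++;
        return 0;
    }

    /* Pick the next process to dispatch; returns -1 if none is ready. */
    int rq_dequeue(struct ready_queue *q) {
        if (q->count == 0) return -1;         /* nothing ready */
        int pid = q->pids[q->head];
        q->head = (q->head + 1) % QMAX;
        q->count--;
        return pid;
    }

    int main(void) {
        struct ready_queue q = {0};
        rq_enqueue(&q, 42);
        rq_enqueue(&q, 43);
        printf("dispatch pid %d\n", rq_dequeue(&q));  /* 42: FIFO order */
        return 0;
    }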
3. Running or Executing

A "running", "executing" or "active" process is a process which is currently


executing on a CPU. From this state the process may exceed its allocated time slice
and be context switched out and back to "ready" by the operating system, it may
indicate that it has finished and be terminated or it may block on some needed
resource (such as an input / output resource) and be moved to a "blocked" state.

4. Blocked or Sleeping

Should a process "block" on a resource (such as a file or a device), it will be


removed from the CPU (as a blocked process cannot continue execution) and will be
in the blocked state. The process will remain "blocked" until its resource becomes
available, which can unfortunately lead to deadlock. From the blocked state, the
operating system may notify the process of the availability of the resource it is
blocking on (the operating system itself may be alerted to the resource availability by
an interrupt). Once the operating system is aware that a process is no longer blocking,
the process is again "ready" and can from there be dispatched to its "running" state,
and from there the process may make use of its newly available resource.

5. Terminated

A process may be terminated, either from the "running" state by completing its execution or by being explicitly killed. In either case, the process moves to the "terminated" state.

Question: Explain Process Scheduling

Scheduling is a key concept in the design of multitasking and multiprocessing operating systems. It refers to the way processes are assigned priorities in a priority queue and allocated CPU time. This assignment is carried out by software known as a scheduler.

Scheduling Objectives

The scheduler is mainly concerned with:

• CPU utilization - keeping the CPU as busy as possible.
• Throughput - the number of processes that complete their execution per time unit.
• Turnaround time - the amount of time to execute a particular process.
• Waiting time - the amount of time a process has been waiting in the ready queue.
• Response time - the amount of time from when a request was submitted until the first response is produced (the sketch below shows how these figures are computed).
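
These figures follow directly from completion times: turnaround time = completion time - arrival time, and waiting time = turnaround time - burst time. A minimal sketch with made-up numbers:

    #include <stdio.h>

    /* Per-process timing figures, all in the same time unit (e.g. ms). */
    struct times { int arrival, burst, completion; };

    int main(void) {
        /* Hypothetical example: three processes with known completions. */
        struct times p[] = { {0, 24, 24}, {0, 3, 27}, {0, 3, 30} };
        int n = 3, tat_sum = 0, wait_sum = 0;

        for (int i = 0; i < n; i++) {
            int turnaround = p[i].completion - p[i].arrival;
            int waiting    = turnaround - p[i].burst;
            tat_sum  += turnaround;
            wait_sum += waiting;
        }
        printf("avg turnaround = %.2f, avg waiting = %.2f\n",
               (double)tat_sum / n, (double)wait_sum / n);
        return 0;
    }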
Scheduling Algorithms

• Non-preemptive scheduling - In non-preemptive scheduling, a job runs to completion before another scheduling decision is made. Since the OS waits until a job is completed, one poorly designed program can cause the whole system to hang. Non-preemptive scheduling is used in some embedded and real-time environments, where run-to-completion behavior keeps timing predictable.
• Preemptive scheduling - In preemptive scheduling, a scheduling decision can be made while the current job is executing. The OS does not wait for a job to complete before scheduling another job, but provides each process with a fixed amount of processor time (known as a time slice). It is more reliable and makes the OS more responsive.

Process Priority

The concept of priority is important, as many processes compete with each other for CPU time and memory. Priority can be external, internal, or purchased.

• External priority is specified by the user at the time of initiating a process. This priority can be changed during execution. If the user does not assign a priority, the OS assigns a default priority.
• Internal priority is based on a calculation over the current state of the process. This can be based on the estimated time it would take for the process to complete. Based on this, the OS can decide which process it should execute first.
• Purchased priority is used in data centers. In this case each user pays for the CPU time used and for priority; higher-priority processing is charged at a premium.

Question: Process control block

A Process Control Block (PCB, also called a Task Control Block) contains the information needed to manage a particular process. The PCB is "the manifestation of a process in an operating system": a process is represented in the operating system by this data structure, also known as a process descriptor. The PCB contains important information about the specific process, including:

• The current state of the process, i.e., whether it is ready, running, waiting, and so on.
• Unique identification of the process, in order to track "which is which".
• A pointer to the parent process.
• Similarly, a pointer to the child process (if it exists).
• The priority of the process (part of the CPU scheduling information).
• Pointers to locate the memory of the process.
• A register save area.
• The processor it is running on.

Process state: The state may be new, ready, running, waiting, halted, and so on.

Program counter: The counter indicates the address of the next instruction to be executed for this process.

CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.

CPU scheduling information: This information includes the process priority, pointers to scheduling queues, and any other scheduling parameters.

Memory-management information: This may include the values of the base and limit registers, the page tables, or the segment table, depending on the memory system used by the operating system.

Accounting information: This may include the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

I/O status information: This may include the I/O devices allocated to the process, a list of open files, and so on.
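
Putting these fields together, a PCB might be declared roughly as below. The field names, types, and sizes are illustrative assumptions only; real kernels (for example, Linux's task_struct) are far larger and organized differently.

    #include <stdio.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Illustrative PCB layout, mirroring the fields listed above. */
    struct pcb {
        int   pid;                 /* unique process identifier         */
        enum proc_state state;     /* new, ready, running, waiting...   */
        unsigned long pc;          /* program counter                   */
        unsigned long regs[16];    /* CPU register save area            */
        int   priority;            /* CPU scheduling information        */
        struct pcb *parent;        /* pointer to parent process         */
        struct pcb *child;         /* pointer to first child, if any    */
        unsigned long base, limit; /* memory-management registers       */
        long  cpu_time_used;       /* accounting information            */
        int   open_files[16];      /* I/O status: open file descriptors */
        int   cpu;                 /* processor it last ran on          */
    };

    int main(void) {
        printf("PCB size: %zu bytes\n", sizeof(struct pcb));
        return 0;
    }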

Question: Explain Scheduling Algorithms

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. In this section, we describe several of the many CPU-scheduling algorithms that exist.

First-Come, First-Served Scheduling

By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The code for FCFS scheduling is simple to write and understand.

The average waiting time under the FCFS policy, however, is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial. Thus, the average waiting time under an FCFS policy is generally not minimal, and may vary substantially if the processes' CPU-burst times vary greatly.
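
The example can be checked with a few lines of C; the burst values are the ones from the table above, served in arrival order.

    #include <stdio.h>

    int main(void) {
        /* Burst times from the example above, in arrival order P1, P2, P3. */
        int burst[] = {24, 3, 3};
        int n = 3, clock = 0, wait_sum = 0;

        for (int i = 0; i < n; i++) {   /* FCFS: serve in arrival order */
            printf("P%d waits %d ms\n", i + 1, clock);
            wait_sum += clock;
            clock += burst[i];          /* next process starts when this ends */
        }
        printf("average waiting time = %.2f ms\n", (double)wait_sum / n);
        return 0;
    }

Reordering the array to {3, 3, 24} reproduces the second schedule and its 3 ms average.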

Shortest-Job-First Scheduling

A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm associates with each process the length of that process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie. Note that a more appropriate term would be shortest-next-CPU-burst algorithm, because the scheduling is done by examining the length of the next CPU burst of a process, rather than its total length. We use the term SJF because most people and textbooks refer to this type of scheduling discipline as SJF.
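
A minimal sketch of non-preemptive SJF selection, assuming all processes are ready at time 0 and using made-up burst values; the strict less-than comparison gives the FCFS tie-break described above.

    #include <stdio.h>

    /* Non-preemptive SJF: among unfinished processes, pick the one with
     * the smallest (predicted) next CPU burst. */
    int sjf_pick(const int burst[], const int done[], int n) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            if (best == -1 || burst[i] < burst[best])
                best = i;   /* strict '<' keeps FCFS order on ties */
        }
        return best;
    }

    int main(void) {
        int burst[] = {6, 8, 7, 3};
        int done[]  = {0, 0, 0, 0};
        int n = 4, clock = 0, wait_sum = 0;

        for (int k = 0; k < n; k++) {
            int i = sjf_pick(burst, done, n);
            wait_sum += clock;             /* process i waited until now */
            clock += burst[i];
            done[i] = 1;
        }
        printf("average waiting time = %.2f\n", (double)wait_sum / n);
        return 0;
    }

For these bursts the order is 3, 6, 7, 8, giving an average waiting time of 7.00; FCFS order would give 10.25.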

Priority Scheduling

The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.

An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.

Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095. However, there is no general agreement on whether 0 is the highest or lowest priority. Some systems use low numbers to represent low priority; others use low numbers for high priority. This difference can lead to confusion. In this text, we use low numbers to represent high priority.

Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority-scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A non-preemptive priority-scheduling algorithm will simply put the new process at the head of the ready queue.

A major problem with priority-scheduling algorithms is indefinite blocking (or starvation). A process that is ready to run but lacking the CPU can be considered blocked, waiting for the CPU. A priority-scheduling algorithm can leave some low-priority processes waiting indefinitely for the CPU. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
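
The usual remedy is aging: gradually raising the priority of processes that wait a long time. Below is a minimal sketch, assuming (as in the text) that low numbers mean high priority; the base priorities and the age-bonus rule are made up for the example.

    #include <stdio.h>

    #define NPROC 3

    int main(void) {
        int base[NPROC] = {0, 7, 7};   /* static priorities: P0 high  */
        int wait[NPROC] = {0, 0, 0};   /* ticks spent waiting         */

        for (int tick = 0; tick < 12; tick++) {
            /* effective priority = base minus age bonus (lower wins) */
            int run = 0, best = base[0] - wait[0];
            for (int i = 1; i < NPROC; i++) {
                int p = base[i] - wait[i];
                if (p < best) { best = p; run = i; }
            }
            printf("tick %2d: run P%d\n", tick, run);

            wait[run] = 0;                     /* it got the CPU      */
            for (int i = 0; i < NPROC; i++)
                if (i != run) wait[i]++;       /* everyone else ages  */
        }
        return 0;
    }

Without the wait[] bonus, P1 and P2 would never run; with it, they are eventually dispatched despite their low base priority.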

Round-Robin Scheduling

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum (or time slice), is defined. A time quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.

One of two things will then happen. The process may have a CPU burst of less than 1 time quantum, in which case the process itself will release the CPU voluntarily and the scheduler will proceed to the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.
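
The whole loop fits in a few lines. The sketch below uses a hypothetical workload and a 4 ms quantum; a simple array scan stands in for the circular ready queue.

    #include <stdio.h>

    #define QUANTUM 4

    int main(void) {
        /* Hypothetical workload: remaining burst time per process, in ms. */
        int remaining[] = {24, 3, 3};
        int n = 3, clock = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {        /* circular ready queue  */
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                printf("t=%2d: P%d runs %d ms\n", clock, i + 1, slice);
                clock += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) left--;   /* finished: leaves queue */
            }
        }
        printf("all done at t=%d\n", clock);
        return 0;
    }

Note how P2 and P3 finish within their first quantum and release the CPU voluntarily, while P1 is repeatedly preempted and requeued until t=30.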

Multilevel Queue Scheduling

Another class of scheduling algorithms has been created for situations in which processes are easily classified into different groups. For example, a common division is made between foreground (or interactive) processes and background (or batch) processes. These two types of processes have different response-time requirements, and so might have different scheduling needs. In addition, foreground processes may have priority (externally defined) over background processes.

A multilevel queue-scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes: the foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.
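
A sketch of the inter-queue rule, with made-up PIDs: the foreground queue is always drained first, and queue membership never changes. (Scheduling inside each queue, such as RR for the foreground, is omitted here.)

    #include <stdio.h>

    struct queue { int pids[8]; int head, count; };

    static int pop(struct queue *q) { q->count--; return q->pids[q->head++]; }

    /* Inter-queue rule: the foreground queue is always served first;
     * background processes run only when no foreground process is ready. */
    int pick_next(struct queue *fg, struct queue *bg) {
        if (fg->count > 0) return pop(fg);
        if (bg->count > 0) return pop(bg);
        return -1;                          /* nothing to run */
    }

    int main(void) {
        struct queue fg = {{101, 102}, 0, 2};   /* interactive processes */
        struct queue bg = {{201}, 0, 1};        /* batch process         */
        int pid;
        while ((pid = pick_next(&fg, &bg)) != -1)
            printf("dispatch pid %d\n", pid);
        return 0;
    }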

Multilevel Feedback Queue Scheduling

Normally, in a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the system. Processes do not move between queues. If there are separate queues for foreground and background processes, for example, processes do not move from one queue to the other, since processes do not change their foreground or background nature. This setup has the advantage of low scheduling overhead, but the disadvantage of being inflexible.

Multilevel feedback queue scheduling, however, allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

In general, a multilevel feedback queue scheduler is defined by the following parameters:

• The number of queues.
• The scheduling algorithm for each queue.
• The method used to determine when to upgrade a process to a higher-priority queue.
• The method used to determine when to demote a process to a lower-priority queue.
• The method used to determine which queue a process will enter when that process needs service (see the configuration sketch below).
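
These parameters map naturally onto a configuration structure. Everything below - the names, values, and three-queue shape - is an illustrative assumption, not a standard interface; the quanta grow toward the lower queues, as MLFQ schedulers commonly arrange.

    #include <stdio.h>

    #define NQUEUES 3

    /* Illustrative MLFQ configuration, mirroring the parameter list above. */
    struct mlfq_config {
        int nqueues;             /* number of queues                    */
        int quantum[NQUEUES];    /* time quantum per queue (RR inside)  */
        int demote_after;        /* full quanta used before demotion    */
        int promote_after;       /* wait (ticks) before aging upward    */
        int entry_queue;         /* queue a new process enters          */
    };

    /* A common shape: short quanta on top for interactive work, longer
     * below; new processes start in the highest-priority queue. */
    static const struct mlfq_config example = {
        .nqueues       = NQUEUES,
        .quantum       = {8, 16, 64},
        .demote_after  = 1,
        .promote_after = 100,
        .entry_queue   = 0,
    };

    int main(void) {
        for (int i = 0; i < example.nqueues; i++)
            printf("queue %d: quantum %d\n", i, example.quantum[i]);
        return 0;
    }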

Conclusion

Process management is how the operating system manages the processes of the computer system. Processes are handled according to their priority, their timing constraints, and scheduling policies such as FCFS (First Come, First Served), and they are organized with respect to CPU time, memory space, priority, time limits, scheduling queues, and so on.
