Question: Explain Context Switching
1. The state of the first process must be saved somehow, so that when the scheduler gets back to the execution of the first process, it can restore this state and continue. This is accomplished as follows:
2. All the data required to define the state of the process - all the registers that the process may be using, especially the program counter, plus any other operating-system-specific data that may be necessary - is stored in one data structure, called a switch frame or a process control block (PCB).
3. The PCB for the first process is created and saved.
4. The OS then loads the PCB and context of the second process. In doing so, the program counter from the PCB is loaded, and thus execution can continue in the new process.
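The save/restore steps above can be sketched in a few lines. This is a toy model, not real kernel code: the `PCB` class, the register names and the `cpu` dictionary are all illustrative assumptions.

```python
# Minimal sketch of a context switch: save the running process's state
# into its PCB, then load the next process's PCB onto the CPU.

class PCB:
    """Process control block: saved execution state for one process."""
    def __init__(self, pid):
        self.pid = pid
        self.program_counter = 0
        self.registers = {"r0": 0, "r1": 0}

def context_switch(current, cpu, next_pcb):
    # Steps 1-3: save the running process's state into its PCB.
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    # Step 4: load the next process's PCB; execution resumes at its saved PC.
    cpu["pc"] = next_pcb.program_counter
    cpu["regs"] = dict(next_pcb.registers)
    return next_pcb

cpu = {"pc": 104, "regs": {"r0": 7, "r1": 9}}
p1, p2 = PCB(1), PCB(2)
p2.program_counter = 200
running = context_switch(p1, cpu, p2)
print(running.pid, cpu["pc"])  # execution continues in process 2 at PC 200
```

Note that the whole point of the PCB is that `p1` can later be passed back to `context_switch` and resume exactly where it left off.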
In a context switch, new processes are chosen from a queue or queues. Process and thread priority can influence which process continues execution, with processes of the highest priority checked first for ready threads to execute.
1. Time Slice - The period of time for which a process is allowed to run uninterrupted in a pre-emptive multitasking operating system is called a time slice. A context switch happens at the end of each time slice.
2. Degree of multiprogramming - The number of processes that can be run simultaneously on a computer is known as the degree of multiprogramming.
Question: Explain different Process States
1. Created or New
When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. This admission is approved or delayed by the process scheduler. Typically, in most desktop computer systems, this admission is approved automatically; however, for real-time operating systems it may be delayed.
2. Ready
A "ready" or "waiting" process has been loaded into main memory and is awaiting execution on a CPU (to be context switched onto the CPU). There may be many "ready" processes at any one point of the system's execution - for example, in a one-processor system only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution.
3. Running
A "running" process has been chosen by the scheduler and its instructions are currently being executed on a CPU; on a single-processor system at most one process is in this state at any time.
4. Blocked or Sleeping
A "blocked" or "sleeping" process is waiting for some event, such as the completion of an I/O operation, and cannot run until that event occurs, after which it returns to the "ready" state.
5. Terminated
A "terminated" process has finished execution or has been killed; the OS may retain its PCB until the exit status has been collected.
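The states above form a small state machine. As a sketch, the allowed transitions can be written as a table; the state names follow the text, and the transition set assumed here is the usual textbook one.

```python
# Five-state process model as an allowed-transition table.

TRANSITIONS = {
    "new":        {"ready"},                          # admitted by the scheduler
    "ready":      {"running"},                        # dispatched onto the CPU
    "running":    {"ready", "blocked", "terminated"}, # preempted, waits, or exits
    "blocked":    {"ready"},                          # awaited event occurred
    "terminated": set(),                              # no way out
}

def move(state, target):
    """Perform one state transition, rejecting illegal ones."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "new"
for nxt in ("ready", "running", "blocked", "ready", "running", "terminated"):
    s = move(s, nxt)
print(s)  # terminated
```

A transition such as "blocked" directly to "running" would raise an error, which matches the model: a blocked process must first become ready and be dispatched again.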
Scheduling Objectives
Process Priority
The concept of priority is important as many processes are competing with each other for
the CPU time and memory. The priority can be external or internal.
• External priority is specified by the user at the time of initiating a process. This
priority can be changed during the time of execution. If the user does not assign a
priority then the OS assigns a default priority.
• Internal priority is based on a calculation over the current state of the process. This can be based on the estimated time it would take for the process to complete. Based on this, the OS can decide which process it should execute first.
• Purchased priority is used in data centers. In this case each user pays for the time used and the priority; higher-priority processes are charged a premium.
A Process Control Block (PCB), also called a Task Control Block, contains the information needed to manage a particular process. The PCB is "the manifestation of a process in an operating system".
A process in an operating system is represented by a data structure known as a process
control block (PCB) or process descriptor. The PCB contains important information
about the specific process including
• The current state of the process, i.e., whether it is ready, running, waiting, and so on.
• Unique identification of the process in order to track "which is which"
information.
• A pointer to parent process.
• Similarly, a pointer to child process (if it exists).
• The priority of process (a part of CPU scheduling information).
• Pointers to locate memory of processes.
• A register save area.
• The processor it is running on.
Process State: The state may be new, ready, running, waiting, halted, and so on.
Program Counter: The counter indicates the address of the next instruction to be executed for this process.
CPU Registers: The registers vary in number and type, depending upon the computer architecture. They include accumulators, index registers, stack pointers and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
I/O Status Information: This information may include the I/O devices allocated to the process, a list of open files, and so on.
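The PCB fields described above can be sketched as a single record. The field names below are illustrative assumptions; a real kernel packs this information into a fixed structure (for example, `task_struct` in Linux).

```python
# Sketch of a PCB holding the fields listed in the text.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProcessControlBlock:
    pid: int                                        # unique identification
    state: str = "new"                              # new, ready, running, waiting, ...
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # register save area
    priority: int = 0                               # CPU-scheduling information
    parent_pid: Optional[int] = None                # pointer to parent process
    children: list = field(default_factory=list)    # pointers to child processes
    memory_base: int = 0                            # pointer to the process's memory
    open_files: list = field(default_factory=list)  # I/O status information
    cpu: Optional[int] = None                       # processor it is running on

pcb = ProcessControlBlock(pid=42, priority=5, parent_pid=1)
print(pcb.pid, pcb.state)  # 42 new
```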
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. In this section, we describe several of the many CPU-scheduling algorithms that exist.
The average waiting time under the FCFS policy, however, is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process    Burst Time
P1         24
P2         3
P3         3

If the processes arrive in the order P1, P2, P3, the waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, however, the waiting times are 0, 3 and 6 milliseconds respectively, and the average waiting time drops to (0 + 3 + 6)/3 = 3 milliseconds.
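The arithmetic of this example can be reproduced directly: under FCFS, each process waits for the total burst time of everything queued ahead of it.

```python
# FCFS waiting times: each process waits for the sum of the bursts before it.

def fcfs_waits(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)  # waits for everything already queued
        elapsed += b
    return waits

w1 = fcfs_waits([24, 3, 3])  # arrival order P1, P2, P3
w2 = fcfs_waits([3, 3, 24])  # arrival order P2, P3, P1
print(w1, sum(w1) / 3)       # [0, 24, 27] 17.0
print(w2, sum(w2) / 3)       # [0, 3, 6] 3.0
```

This illustrates the convoy effect: putting the long burst first more than quintuples the average waiting time.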
Shortest-Job-First Scheduling
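Shortest-Job-First runs the process with the smallest next CPU burst first, which minimizes the average waiting time. A minimal non-preemptive sketch (the burst values are made up for illustration):

```python
# Non-preemptive SJF: order the ready processes by CPU-burst length.

def sjf_order(jobs):
    """jobs: {name: burst length}. Returns execution order, shortest first."""
    return sorted(jobs, key=jobs.get)

order = sjf_order({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(order)  # ['P4', 'P1', 'P3', 'P2']
```

In practice the next burst length is not known in advance, so real schedulers approximate it, typically by an exponential average of previous bursts.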
Priority Scheduling
Priority scheduling can be either preemptive or non preemptive. When a process arrives
at the ready queue, its priority is compared with the priority of the currently running
process. A preemptive priority-scheduling algorithm will preempt the CPU if the priority
of the newly arrived process is higher than the priority of the currently running process. A
non preemptive priority-scheduling algorithm will simply put the new process at the head
of the ready queue.
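The arrival-time decision described above can be captured in one function. As a sketch, this assumes a higher number means higher priority; many real systems (e.g. Unix nice values) use the opposite convention.

```python
# Preemptive vs. non-preemptive priority scheduling, at the moment a
# new process arrives at the ready queue.

def on_arrival(running_prio, new_prio, preemptive):
    """Decide what happens when a process arrives at the ready queue."""
    if preemptive and new_prio > running_prio:
        return "preempt"  # newcomer takes the CPU immediately
    return "queue"        # newcomer waits in the ready queue

print(on_arrival(3, 5, preemptive=True))   # preempt
print(on_arrival(3, 5, preemptive=False))  # queue
print(on_arrival(5, 3, preemptive=True))   # queue
```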
Round-Robin Scheduling
One of two things will then happen. The process may have a CPU burst of less than 1
time quantum. In this case, the process itself will release the CPU voluntarily. The
scheduler will then proceed to the next process in the ready queue. Otherwise, if the CPU
burst of the currently running process is longer than 1 time quantum, the timer will go off
and will cause an interrupt to the operating system. A context switch will be executed,
and the process will be put at the tail of the ready queue. The CPU scheduler will then
select the next process in the ready queue.
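Both outcomes above can be seen in a small simulation: each process runs for at most one time quantum, then either finishes (releasing the CPU voluntarily) or returns to the tail of the ready queue. The burst values are illustrative.

```python
# Round-robin: run each process for at most `quantum` units, then requeue.

from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: total burst}. Returns the order of CPU turns."""
    queue = deque(bursts.items())
    schedule = []
    while queue:
        name, left = queue.popleft()
        schedule.append(name)
        if left > quantum:                   # timer interrupt: context switch,
            queue.append((name, left - quantum))  # back to the tail of the queue
        # else: burst fits in the quantum; the process releases the CPU itself
    return schedule

turns = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
print(turns)  # ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```

Note how P3, whose burst is shorter than the quantum, gets exactly one turn, while P1 needs three.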
Another class of scheduling algorithms has been created for situations in which processes
are easily classified into different groups. For example, a common division is made
between foreground (or interactive) processes and background (or batch) processes.
These two types of processes have different response-time requirements, and so might
have different scheduling needs. In addition, foreground processes may have priority (or
externally defined) over background processes.
A multilevel queue-scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type. Each
queue has its own scheduling algorithm. For example, separate queues might be used for
foreground and background processes. The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.
In addition, such a scheduler must define the method used to determine which queue a process will enter when that process needs service.
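The foreground/background split described above can be sketched as follows. The process names and burst lengths are made up; the key assumption, taken from the text, is that the foreground (RR) queue has strict priority over the background (FCFS) queue.

```python
# Multilevel queue: RR foreground queue with strict priority over an
# FCFS background queue.

from collections import deque

foreground = deque([("vi", 3), ("shell", 1)])  # interactive: RR, quantum 2
background = deque([("backup", 4)])            # batch: FCFS

ran = []
while foreground or background:
    if foreground:                      # foreground always wins
        name, left = foreground.popleft()
        ran.append(name)
        if left > 2:                    # quantum expired: back to the tail
            foreground.append((name, left - 2))
    else:
        name, _ = background.popleft()  # FCFS: run to completion
        ran.append(name)

print(ran)  # ['vi', 'shell', 'vi', 'backup']
```

Strict priority like this can starve the background queue; real schedulers often time-slice between the queues instead (e.g. 80% of CPU time to foreground, 20% to background).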
Conclusion
Process management is the task of managing the processes of a computer system: processes are scheduled according to their priority, timing constraints, arrival order (First Come First Serve), CPU time, memory requirements, time limits and scheduling queues.