
SCHOOL OF ELECTRICAL AND COMMUNICATION

DEPARTMENT OF ECE
B.TECH – ECE (VTU R-15)
UNIT II-QUESTION BANK

1151EC217 – EMBEDDED OS AND DEVICE DRIVERS

OVERVIEW OF RTOS

RTOS Task and Task State, Preemptive Scheduler, Process Synchronization, Message
Queues, Mailboxes, Pipes, Critical Section, Semaphores, Classical Synchronization
Problem –Deadlocks.

PART A

1) Define task
• A task is an independent thread of execution that can compete with other concurrent tasks for
processor execution time.
• A task is schedulable
• A task is defined by its distinct set of parameters and supporting data structures. Specifically,
upon creation, each task has:
– an associated name,
– a unique ID,
– a priority (if part of a preemptive scheduling plan),
– a task control block (TCB),
– a stack, and a task routine
2. Mention various task states.
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
3. Draw task state diagram
4. If a task is currently in the running state, what are the possible next states during execution?
Wait / Ready / End
5. Write the system calls to create and delete a task.
os_create_task (TASK ID);
os_delete_task (TASK ID);
6. What is task synchronisation?
• The act of making processes aware of each other's access to shared resources, so as to avoid
conflicts, is known as 'Task/Process Synchronisation'.

• For Example:
– where two processes try to access display hardware connected to the system

7. What issues can be overcome by task synchronisation techniques?
• Racing

• Deadlock
• Livelock
• Starvation
8. List any two real-time task scheduling techniques.
1. Rate Monotonic
2. Earliest Deadline First
3. Round Robin
4. Evolutionary Algorithms
5. Multiobjective genetic algorithms
9. List the performance evaluation parameters of real-time task schedulers.
1. Schedulability
2. Average waiting time
3. Processor Utilisation
4. Power consumption
5. Computation Speed
10. What is preemption?
• In preemptive approaches a running task instance may be stopped (preempted) at any time and
restarted (resumed) at any time later.

11. What is critical section?

The critical section is the segment of code in which a process accesses a shared resource.
The critical section problem is to make sure that only one process is in its critical section at a
time: when a process is in the critical section, no other process is allowed to enter the critical
section.
12. What is race condition?
• It is the situation in which multiple processes compete (race) each other to access and manipulate
shared data concurrently.
• In a Race condition the final value of the shared data depends on the process which acted on the
data finally.
13. What is deadlock?
• In a deadlock, none of the processes involved is able to make any progress in its execution.
• Each process is waiting for a resource held by another process, which in turn is waiting for a
resource held by the first process.
14. What is semaphore?
• A semaphore is a system resource used to control access to shared resources.
• A process which wants to access the shared resource can:
– first acquire this system object, and
– thereby indicate to the other processes that want the shared resource that it is
currently acquired.
• A resource shared among processes may be either for exclusive use by one process or
for use by a number of processes at a time.
• The display device of an embedded system is a typical example of a shared resource which
needs exclusive access by a process.

16. List various methods used to avoid deadlock.


There are three principal methods for dealing with deadlocks:
• Use some protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlocked state.
• Allow the system to enter a deadlocked state, detect it, and then recover.
• Ignore the problem altogether and pretend that deadlocks never occur in the system.
17. What are pipes?
• Pipes provide a relatively simple way for processes to communicate with one another.
• Ordinary pipes allow communication between parent and child processes, while named pipes
permit unrelated processes to communicate.
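As a sketch of how an ordinary pipe is used (POSIX pipe() and fork(); the function name pipe_demo and the message passed are illustrative assumptions, not part of any standard API):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Pass one message from a child process to its parent over an
 * anonymous pipe. Returns the number of bytes the parent read,
 * or -1 on error. */
ssize_t pipe_demo(char *buf, size_t len) {
    int fd[2];
    if (pipe(fd) == -1)
        return -1;
    pid_t pid = fork();
    if (pid == -1)
        return -1;
    if (pid == 0) {              /* child: writer end */
        close(fd[0]);            /* close the unused read end */
        write(fd[1], "hello", 5);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                /* parent: close the unused write end */
    ssize_t n = read(fd[0], buf, len);
    close(fd[0]);
    waitpid(pid, NULL, 0);       /* reap the child */
    return n;
}
```

A named pipe (FIFO) would instead be created with mkfifo() and opened by path, which is what lets unrelated processes communicate.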

18. Mention the basic schemes used for task communication.


o Principally, communication is achieved through two schemes: shared memory and message
passing.
o The shared-memory method requires communicating processes to share some variables.
o The message-passing method allows the processes to exchange messages. The responsibility for
providing communication may rest with the operating system itself.
o These two schemes are not mutually exclusive and can be used simultaneously within a single
operating system.
19. What are the techniques used in Communication in client–server systems?
It may use (1) sockets, (2) remote procedure calls (RPCs), or (3) pipes.

20. When does an RPC occur?


An RPC occurs when a process (or thread) calls a procedure on a remote application.

21. What are the various classifications of semaphores?


• Semaphores are classified into two:
– 'Binary Semaphore' and
– 'Counting Semaphore'.
22. Mention the disadvantages of Remote Procedure Call

– The remote procedure call is a concept that can be implemented in different ways; it is
not a standard.
– There is no flexibility in RPC for the hardware architecture; it is purely interaction-based.
– There is an increase in cost because of the overhead of a remote procedure call.

PART B

1 Write the structure of task control block and its entity

2 Explain with an example the flow of task state diagram

3 Differentiate between OS and RTOS

4 Explain with an example RTOS Preemptive scheduler.

5 Discuss the method for handling the situation of critical section.

6 Discuss about the need of process synchronization and explain its operations

7 Explain in detail the bounded buffer problem in operating systems and provide a
solution with the help of semaphores.

8 Explain in detail message queue and mail box

9 Explain how deadlock occurs and write the mechanism to overcome it.

10 Calculate the completion time, waiting time and response time under preemptive and non-
preemptive CPU scheduling. Explain which one is best for real-time applications.
1. Use your DOB (Ex-12345) .
2. Each digit is a burst value and reverse of each digit is an arrival time value. The
below one is reference for your understanding.
Priority Task Burst Time Arrival time
1 T1 1 5
3 T2 2 4
5 T3 3 3
2 T4 4 2
4 T5 5 1

1. Write the structure of task control block and its entity

A task control block is implemented to provide more efficient user task access to task-specific
variables and context information. The task control block uses multiple portions located in both
protected system space and unprotected “user” space.

Task Control Block (TCB) is a data structure in the operating system kernel which contains Task
-specific information needed to manage it. The TCB is "the manifestation of a Task in an
operating system."

It enables the gathering of information on existing tasks within the system to facilitate task
naming, checking the status of given tasks, and providing options for task execution such as
specific memory models, use of co-processors, and task deletion (note that deletion needs
special precautions in regard to semaphores).
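A minimal sketch of what a TCB might contain, in C; the field names and sizes are illustrative assumptions rather than the layout of any particular kernel:

```c
#include <stdint.h>

typedef enum { TASK_DORMANT, TASK_READY, TASK_RUNNING, TASK_WAITING } task_state_t;

typedef struct tcb {
    uint32_t     task_id;               /* unique ID assigned at creation */
    char         name[16];              /* associated task name */
    uint8_t      priority;              /* used by a preemptive scheduler */
    task_state_t state;                 /* current scheduling state */
    uint32_t    *stack_ptr;             /* saved stack pointer (context) */
    uint32_t    *stack_base;            /* base of the task's private stack */
    void       (*task_routine)(void *); /* the task's entry point */
    struct tcb  *next;                  /* link in the kernel's ready queue */
} tcb_t;

/* Initialize a TCB for a newly created task; it starts dormant. */
tcb_t tcb_init(uint32_t id, uint8_t prio) {
    tcb_t t = {0};
    t.task_id = id;
    t.priority = prio;
    t.state = TASK_DORMANT;
    return t;
}
```

On a context switch the kernel saves the running task's registers, records its stack pointer in stack_ptr, and restores the same fields from the next task's TCB.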

2. Explain with an example the flow of task state diagram

A task is an independent thread of execution that can compete with other concurrent tasks
for processor execution time. Task states are classified primarily into five states: new, ready,
run, wait and dormant.
In FreeRTOS there are four distinct task states: Running, Ready, Blocked, and Suspended.
Running State

The task is currently being executed. When a task-independent portion is executing, except when
otherwise specified, the task that was executing prior to the start of task-independent portion execution
is said to be in RUNNING state.

Ready State

The task has completed preparations for running, but cannot run because a task with higher
precedence is running. In this state, the task is able to run whenever it becomes the task with the
highest precedence among the tasks in READY state.
Waiting states

The task cannot run because the conditions for running are not in place. In other words, the task
is waiting for the conditions for its execution to be met. While a task is in one of the Waiting states, the
program counter and register values, and the other information representing the program execution
state, are saved. When the task resumes running from this state, the program counter, registers and
other values revert to their values immediately prior to going to the Waiting state. This state is
subdivided into the following three states.
Waiting State

Execution is stopped because a system call was invoked that interrupts execution of the invoking
task until some condition is met.

Suspended State
Execution was forcibly interrupted by another task.

Waiting-Suspended State

The task is in both WAITING state and SUSPENDED state at the same time. WAITING-
SUSPENDED state results when another task requests suspension of a task already in WAITING state.

T-Kernel makes a clear distinction between WAITING state and SUSPENDED state. A task cannot go
to SUSPENDED state on its own.

Dormant State

The task has not yet been started or has completed execution. While a task is in DORMANT state,
information presenting its execution state is not saved. When a task is started from DORMANT state,
execution starts from the task start address. Except when otherwise specified, the register values are
not saved.
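The transitions described above can be summarised in code. This is a sketch of the rules as read from the text (T-Kernel style, with WAITING-SUSPENDED folded into SUSPENDED); the enum and function names are assumptions, not an RTOS API:

```c
typedef enum { ST_DORMANT, ST_READY, ST_RUNNING, ST_WAITING, ST_SUSPENDED } state_t;

/* Return nonzero if the state change is one of the legal transitions
 * described in the text. A task never enters SUSPENDED on its own:
 * suspension is always requested by another task. */
int transition_allowed(state_t from, state_t to) {
    switch (from) {
    case ST_DORMANT:   return to == ST_READY;       /* task is started */
    case ST_READY:     return to == ST_RUNNING      /* dispatched */
                           || to == ST_SUSPENDED;   /* forcibly suspended */
    case ST_RUNNING:   return to == ST_READY        /* preempted */
                           || to == ST_WAITING      /* blocks on a condition */
                           || to == ST_SUSPENDED    /* suspended by another task */
                           || to == ST_DORMANT;     /* completes execution */
    case ST_WAITING:   return to == ST_READY        /* condition is met */
                           || to == ST_SUSPENDED;   /* waiting-suspended */
    case ST_SUSPENDED: return to == ST_READY;       /* resumed by another task */
    }
    return 0;
}
```

Note that a waiting task never jumps straight back to RUNNING: once its condition is met it becomes READY and must again be selected by the scheduler.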

3. Differentiate between OS and RTOS

In general, an operating system (OS) is responsible for managing the hardware resources of a
computer and hosting applications that run on the computer. An RTOS performs these tasks, but is also
specially designed to run applications with very precise timing and a high degree of reliability.

A real-time operating system (RTOS) is a type of operating system. An OS is a program that
serves as a bridge between the system hardware and the user. Furthermore, it manages all interactions
between system software and hardware.

Real-time operating systems are employed in real-time systems when time limitations are fixed
and followed strictly. It means that there is extremely little time for processing and responding.

It is mainly used in control device apps like industrial control systems, automobile-engine fuel
injection systems, medical imaging systems, weapon systems, etc.

Advantages

1. The RTOS concentrates on one application at a time. Most often, this is the application that
is already running; all others in the queue are kept in the waiting state. As a result, crucial tasks can be
completed on time and within the specified timeframe to achieve the desired results.
2. A real-time operating system keeps the system's resources maximally utilized while active on
all devices. As a result, RTOS systems have relatively little downtime, and hosting companies get the
best outcomes when they use RTOS.

Disadvantages

1. There are only a few tasks running at the same time, and the system's focus is on a few
applications to avoid errors; thus, other tasks must wait. Sometimes there is no time restriction on how
long waiting tasks must wait.
2. An RTOS interface uses complex algorithms. These algorithms will be challenging to write for
the normal user. Only a skilled developer may write and understand them.

GPOS is designed to perform non-time-critical general tasks. GPOS is commonly used to create
documents in Microsoft Office, play music and watch videos, etc.

These systems' scheduling isn't always priority-based: a lower-priority process can be executed first.
The task scheduler uses a fairness policy, allowing high overall throughput but not ensuring that high-
priority jobs will be executed first.

It is used for systems and applications that are not time-critical. Some General Purpose operating
system examples are Windows, Linux, UNIX, etc.

Advantages

1. A general-purpose operating system provides a user-friendly graphical interface for all users,
with multiple menus, buttons, and icons for easy navigation.
2. A GPOS uses several techniques, including memory segmentation, paging, and
swapping, to manage its own memory.

Disadvantages

1. Some OS costs more than open-source platforms such as Linux. While free operating systems
are available to customers, they are frequently more difficult to use than others. Furthermore, GPOS
with GUI functionality and other built-in features is costly, like Microsoft Windows.
2. General-purpose operating systems are more prone to virus attacks and carry higher risks.
Several users have malicious software packages installed on their computers, causing the operating
system to slow down and stop working.

4. Explain with an example RTOS preemptive scheduler

The scheduler is the part of the kernel responsible for deciding which task should be executing
at any particular time. The kernel can suspend and later resume a task many times during the task
lifetime.

The scheduling policy is the algorithm used by the scheduler to decide which task to execute at
any point in time. The policy of a (non real time) multi user system will most likely allow each task a
"fair" proportion of processor time. The policy used in real time / embedded systems is described later.

In addition to being suspended involuntarily by the kernel a task can choose to suspend itself. It
will do this if it either wants to delay (sleep) for a fixed period, or wait (block) for a resource to
become available (eg a serial port) or an event to occur (eg a key press). A blocked or sleeping task is
not able to execute, and will not be allocated any processing time.

Referring to the numbers in the diagram above:

• At (1) task 1 is executing.
• At (2) the kernel suspends (swaps out) task 1 ...
• ... and at (3) resumes task 2.
• While task 2 is executing (4), it locks a processor peripheral for its own exclusive access.
• At (5) the kernel suspends task 2 ...
• ... and at (6) resumes task 3.
• Task 3 tries to access the same processor peripheral; finding it locked, task 3 cannot continue,
so it suspends itself at (7).
• At (8) the kernel resumes task 1.
• Etc.
• The next time task 2 is executing (9) it finishes with the processor peripheral and unlocks it.
• The next time task 3 is executing (10) it finds it can now access the processor peripheral and this
time executes until suspended by the kernel.

Preemptive scheduling is flexible, while non-preemptive scheduling is rigid. Algorithms backed
by preemptive scheduling include Shortest Remaining Time First (SRTF), Round Robin (RR), and
preemptive priority scheduling. Non-preemptive scheduling is a CPU scheduling technique in which
the process takes the resource (CPU time) and holds it until the process terminates or is pushed to the
waiting state; examples are First Come First Serve and Shortest Job First.

5. Discuss the method for handling the situation of critical section.

What is the Critical Section in OS?

Critical Section refers to the segment of code or the program which tries to access or modify the
value of the variables in a shared resource.

The section above the critical section is called the Entry Section. The process that is entering the
critical section must pass the entry section.

The section below the critical section is called the Exit Section.

The section below the exit section is called the Remainder Section, and this section has the
remaining code that is left after execution.

What is the Critical Section Problem in OS?

When there is more than one process accessing or modifying a shared resource at the same time, then
the value of that resource will be determined by the last process. This is called the race condition.
Consider an example of two processes, p1 and p2, and let value = 3 be a variable present in the shared
resource.

Let us consider the following actions done by the two processes:

value = value + 3 // process p1: value becomes 6
value = value - 3 // process p2: value becomes 3

After p1 runs, value should be 6, but because process p2 interrupts and subtracts 3, the value is
changed back to 3. The final result depends on the order in which the two processes acted on the
data. This is the problem of synchronization.

The critical section problem is to make sure that only one process should be in a critical section at a
time. When a process is in the critical section, no other processes are allowed to enter the critical
section. This solves the race condition.

Solutions to the Critical Section Problem

A solution developed for the critical section should have the following properties:

• If a process enters the critical section, then no other process should be allowed to enter the
critical section. This is called mutual exclusion.
• If a process is in the critical section and another process arrives, then the new process must wait
until the first process exits the critical section. In such cases, the process that is waiting to enter
the critical section should not wait for an unlimited period. This is called progress.
• If a process wants to enter the critical section, then there should be a specified time that the
process can be made to wait. This property is called bounded waiting.
• The solution should be independent of the system's architecture. This is called neutrality.

Some of the software-based solutions for critical section problems are Peterson's
solution, semaphores, monitors.

Some of the hardware-based solutions for the critical section problem involve atomic instructions
such as TestAndSet, compare and swap, Unlock and Lock.

Pseudocode of Critical Section Problem

The concept of a critical section problem can be illustrated with the following pseudocode

while (true) {
    // entry section: loop until process_entered is 0
    while (process_entered)
        ;                      // busy-wait while another process is inside

    process_entered = 1;

    // critical section

    // exit section
    process_entered = 0;

    // remainder section
}

Consider a process in the critical region while a new process wants to enter the critical section. In this
case, the new process is made to wait in the entry section by the while loop. The while loop continues
to stall the process until the variable process_entered is made 0 by the process exiting from the
critical section in the exit section.

Thus, this mechanism prevents two processes from entering the critical section at the same time.
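On real hardware the plain-flag loop is only illustrative: between one process testing process_entered and setting it, another process could slip through. The test and the set must happen as a single atomic operation; a minimal sketch using C11's atomic_flag (the names enter_critical and exit_critical are assumptions for illustration):

```c
#include <stdatomic.h>

atomic_flag in_critical = ATOMIC_FLAG_INIT;

void enter_critical(void) {
    /* Atomically set the flag and learn its previous value; spin
     * until the previous value was clear, i.e. nobody was inside. */
    while (atomic_flag_test_and_set(&in_critical))
        ;                       /* busy-wait: another task is inside */
}

void exit_critical(void) {
    atomic_flag_clear(&in_critical);    /* let a waiting task proceed */
}
```

This is exactly the TestAndSet hardware solution mentioned below: the atomic read-modify-write closes the window that makes the plain flag racy.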

Conclusion

• The critical section is used to solve the problem of race conditions that arise from
unsynchronized access to shared data.
• The critical section represents the segment of code that can access or modify a shared resource.
• There can be only one process in the critical section at a time.
• There are many solutions to critical section problems, such as Peterson's solution, semaphores, etc.

6. Discuss the need of process synchronization and explain its operations
7. Explain in detail the bounded buffer problem in operating systems and provide a solution
with the help of semaphores.

Process synchronization problem

Process synchronization is the way by which processes that share the same memory space
are managed in an operating system. It helps maintain the consistency of data by using
variables or hardware so that only one process can make changes to the shared memory at a
time.
Solution for the classical synchronization problems

Classical problems such as the bounded buffer (producer–consumer) problem are solved with
semaphores that count the free and filled slots in the shared buffer, together with a mutual-
exclusion semaphore guarding the buffer itself.

Conclusion
Synchronization is the effort of executing processes such that no two processes have
access to the same shared data at the same time.
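As an answer sketch for the bounded buffer problem using POSIX counting semaphores; the buffer capacity and the function names bb_init, produce and consume are assumptions for illustration:

```c
#include <semaphore.h>

#define N 4                     /* capacity of the bounded buffer */

int buffer[N];
int in_idx, out_idx;            /* producer and consumer positions */
sem_t empty_slots;              /* counts free slots, initialized to N */
sem_t full_slots;               /* counts filled slots, initialized to 0 */
sem_t buf_mutex;                /* binary semaphore guarding the buffer */

void bb_init(void) {
    in_idx = out_idx = 0;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&buf_mutex, 0, 1);
}

void produce(int item) {
    sem_wait(&empty_slots);     /* block while the buffer is full */
    sem_wait(&buf_mutex);       /* enter critical section */
    buffer[in_idx] = item;
    in_idx = (in_idx + 1) % N;
    sem_post(&buf_mutex);
    sem_post(&full_slots);      /* announce one more item */
}

int consume(void) {
    sem_wait(&full_slots);      /* block while the buffer is empty */
    sem_wait(&buf_mutex);
    int item = buffer[out_idx];
    out_idx = (out_idx + 1) % N;
    sem_post(&buf_mutex);
    sem_post(&empty_slots);     /* announce one more free slot */
    return item;
}
```

The counting semaphores make a producer sleep when the buffer is full and a consumer sleep when it is empty, while buf_mutex gives each one exclusive access during the actual copy.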
8. Explain in detail message queue and mail box
9. Explain how deadlock occurs and write the mechanism to overcome it.
Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Consider an example when two trains are coming toward each other on the same track and there
is only one track, none of the trains can move once they are in front of each other. A similar
situation occurs in operating systems when there are two or more processes that hold some
resources and wait for resources held by other(s). For example, in the below diagram, Process 1
is holding Resource 1 and waiting for resource 2 which is acquired by process 2, and process 2 is
waiting for resource 1.
Examples of Deadlock
System has 2 tape drives. P1 and P2 each hold one tape drive and each needs another one.
Semaphores A and B, both initialized to 1; P0 and P1 enter deadlock as follows:
P0 executes wait(A) and is then preempted.
P1 executes wait(B).
P0 now executes wait(B) and blocks, and P1 executes wait(A) and blocks.

Deadlock occurs because both processes progress to their second request.


Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
Mutual Exclusion: Two or more resources are non-shareable (only one process can use
a resource at a time).
Hold and Wait: A process is holding at least one resource and waiting for additional
resources held by other processes.
No Preemption: A resource cannot be taken from a process unless the process releases
the resource.
Circular Wait: A set of processes are waiting for each other in circular form.

Methods for handling deadlock


There are three ways to handle deadlock
1) Deadlock prevention or avoidance:
Prevention:
The idea is to not let the system enter a deadlock state: the system makes sure that the
four necessary conditions mentioned above never arise. These techniques are very costly, so we
use them in cases where our priority is making a system deadlock-free.
Prevention is done by negating one of the above-mentioned necessary conditions for
deadlock, and can be done in four different ways:
1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Break circular wait
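Breaking circular wait is commonly done by imposing a fixed global order on lock acquisition. A sketch with POSIX mutexes; the helper lock_pair and ordering by address are illustrative assumptions, not a standard API:

```c
#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in a fixed global order (here: by address),
 * so that no two tasks can ever hold them in opposite orders —
 * the circular-wait condition then never arises. */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a < (uintptr_t)b) {
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    } else {
        pthread_mutex_lock(b);
        pthread_mutex_lock(a);
    }
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);    /* release order does not matter */
    pthread_mutex_unlock(b);
}
```

In the P0/P1 example above, if both processes had called wait(A) before wait(B) in the same order, neither could ever hold B while waiting for A.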

Avoidance:
Avoidance is somewhat futuristic: to use the strategy of avoidance, we must make an
assumption, namely that all information about the resources the process will need is known to
us before the execution of the process. We use the Banker's algorithm (due to Dijkstra) to
avoid deadlock.
In prevention and avoidance, we get correctness of data but performance decreases.
2) Deadlock detection and recovery:
If deadlock prevention or avoidance is not applied to the software, then we can handle
deadlock by detection and recovery, which consists of two phases:
In the first phase, we examine the state of the process and check whether there is a
deadlock or not in the system.
If found deadlock in the first phase then we apply the algorithm for recovery of the
deadlock.
In Deadlock detection and recovery, we get the correctness of data but performance
decreases.
3) Deadlock ignorance:
If a deadlock is very rare, then let it happen and reboot the system. This is the approach that both
Windows and UNIX take; the Ostrich algorithm is used for deadlock ignorance.
With deadlock ignorance, performance is better than with the above two methods, but the
correctness of data is not guaranteed.
10. Answer
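A worked sketch for the reference table in question 10 (priorities 1–5, bursts 1–5, arrivals 5–1), assuming a lower priority number means higher priority and simulating preemptive priority scheduling one time unit at a time:

```c
#define NTASK 5

typedef struct {
    int prio, burst, arrival;   /* inputs, as in the reference table */
    int remaining, completion;  /* filled in by the simulation */
} task_t;

/* Run preemptive priority scheduling to completion: at every time
 * unit the highest-priority arrived task with work left is picked,
 * so a newly arrived higher-priority task preempts the current one. */
void simulate(task_t t[], int n) {
    int done = 0, time = 0;
    for (int i = 0; i < n; i++)
        t[i].remaining = t[i].burst;
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (t[i].arrival <= time && t[i].remaining > 0 &&
                (pick < 0 || t[i].prio < t[pick].prio))
                pick = i;
        time++;                           /* one time unit passes */
        if (pick < 0) continue;           /* CPU idle, nothing arrived yet */
        if (--t[pick].remaining == 0) {   /* run the chosen task */
            t[pick].completion = time;
            done++;
        }
    }
}
```

With the reference values, completion times come out as T1=6, T2=9, T3=16, T4=7, T5=13, so waiting time (completion − arrival − burst) is 0, 3, 10, 1 and 7 respectively, an average of 4.2 units. T1, the highest-priority task, does not wait at all: the guarantee that the most urgent task runs as soon as it arrives is why preemptive scheduling is preferred for real-time applications.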
