
UNIT IV RTOS BASED EMBEDDED SYSTEM DESIGN

4.1 Introduction to basic concepts of RTOS- Task, process & threads, Interrupt routines in RTOS
4.2 Multiprocessing and Multitasking
4.3 Preemptive and non-preemptive scheduling
4.4 Task communication shared memory
4.5 Message passing
4.6 Inter process Communication
4.7 Synchronization between processes-semaphores, Mailbox, pipes
4.8 Priority inversion, priority inheritance
4.9 Comparison of Real time Operating systems: VxWorks, µC/OS-II, RT Linux
4.1 Introduction to basic concepts of RTOS
• A variant of OS that operates in a constrained environment in which computer memory and processing
power are limited. Moreover, such systems often need to provide their services within a definite amount of time.
• Hard, Soft & Firm RTOS
• Example RTOS: VxWorks, pSOS, Nucleus, RTLinux…
4.1 Operating System
System software that performs:
• Task Management— Creation, block, run, delay, suspend, resume, deletion
• Memory Management— Allocation, freeing, de-allocation
• Device Management—Configure, Initiate, register with OS, read, listen, write, accept, deregister
• I/O Devices subsystems management—Display (LCD, Touch Screen), Printer, USB ports
• Network Devices subsystems management — Ethernet, Internet, WiFi
• Includes Middleware — TCP/IP stack for telecommunications
• Includes Key-applications — Clock, Mail, Internet Explorer, Search, Access to the Maps external
library
4.1 RTOS
• A real-time OS (RTOS) is an intermediate layer between the hardware and the application software
• “Real-time” means meeting deadlines, not raw speed
• Advantages of an RTOS in SoC design:
– Shorter development time
– Less porting effort
– Better reusability
• Disadvantages:
– More system resources needed
– Future development confined to the chosen RTOS
• A multitasking operating system with hard or soft real-time constraints
• An OS for systems that have time limits for the servicing of tasks and interrupts
• Enables the definition of time constraints
• Enables execution of concurrent tasks (or processes or threads)
• Enables setting of scheduling rules
• Assigns priorities
• Provides predictable latencies
4.1 Soft and Hard Real Time OS
• Soft real-time
– Tasks are performed by the system as fast as possible, but they do not have to finish by specific times
– Priority scheduling
– Example: multimedia streaming
• Hard real-time
– Tasks have to be performed correctly and on time
– Deadline scheduling
– Examples: aircraft controller, nuclear reactor controller
4.1 Structure of an RTOS
4.1 Components of RTOS
• The most important component of an RTOS is its kernel (monolithic or microkernel).
• The BSP (Board Support Package) makes an RTOS target-specific: it is the processor-specific code
for the particular processor on which the RTOS is to run.
4.1 RTOS KERNEL

4.1 Tasks
• A task is defined as an embedded program computational unit that runs on the CPU under the state
control of the kernel of an OS.
• It has a state, which at any instant is defined by its status (running, blocked, or finished) and its
structure: its data, objects, resources, and task control block.
• A task is an instance of a program
• A task thinks that it has the CPU all to itself
• A task is assigned a unique priority
• A task has its own stack
• A task has its own set of CPU registers (backed up in its stack)
• A task is the basic unit for scheduling
• Task status is stored in the Task Control Block (TCB)
Task structure (see the sketch below):
• An infinite loop
• A self-delete function
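As a concrete illustration, here is a minimal sketch of such a task, assuming a µC/OS-II-style API (OSTimeDly and OSTaskDel are µC/OS-II calls; DoWork is a hypothetical application function, and exact names vary between RTOSes):

#include "ucos_ii.h"                      /* µC/OS-II header assumed available */

static void DoWork(void) { /* hypothetical application work */ }

void MyTask(void *p_arg)
{
    (void)p_arg;                          /* argument supplied at task creation */
    for (;;) {                            /* the infinite loop ... */
        DoWork();
        OSTimeDly(10);                    /* ... blocks each cycle so other tasks can run */
    }
    /* a run-to-completion task would instead end with OSTaskDel(OS_PRIO_SELF); */
}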
Task States
Task Priority
 Unique priority (also used as task identifiers)
 64 priorities max (8 reserved)
 Always run the highest priority task that is READY
 Allow dynamically change priority
Task Control Block
PROCESS
A process is a program in execution...
• Starting a new process is a heavy job for the OS: memory has to be allocated, and lots of data
structures and code must be copied.
• Memory pages (in virtual memory and in physical RAM) are needed for code, data, stack, heap, and for
file and other descriptors; so are CPU registers, scheduling queues, signal and IPC structures, etc.
• A process consists of a sequentially executable program under state control by an OS.
• The state of a running process is represented by its process status, its structure (data, objects,
and resources), and its process control block.
• A process runs when scheduled by the OS, which gives it control of the CPU.
• A process executes instructions, and its state changes continuously as the program counter changes.
• A process is thus defined as a computation unit that executes on a CPU and whose state changes
under the control of the kernel of an OS.
• Process status: running, blocked, or finished
• Process structure: its data, objects, resources, and PCB
Process State
• A process changes state as it executes.
• New - The process is being created.
• Running - Instructions are being executed.
• Waiting - Waiting for some event to occur.
• Ready - Waiting to be assigned to a processor.
• Terminated - Process has finished execution.
Process Control Block
• Contains information associated with each process
• Process State - e.g. new, ready, running etc.
• Process Number – Process ID
• Program Counter - address of next instruction to be executed
• CPU registers - general purpose registers, stack pointer etc.
• CPU scheduling information - process priority, scheduling queue pointers
• Memory Management information - base/limit information
• Accounting information - time limits, process number
• I/O Status information - list of I/O devices allocated
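The fields above map naturally onto a structure; the following C sketch is purely illustrative (field names and sizes are invented, and real kernels differ):

#include <stddef.h>
#include <stdint.h>

typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED } proc_state_t;

typedef struct pcb {
    proc_state_t state;          /* process state: new, ready, running, ... */
    int          pid;            /* process number (process ID) */
    uintptr_t    pc;             /* saved program counter */
    uintptr_t    regs[16];       /* saved general-purpose registers, stack pointer, etc. */
    int          priority;       /* CPU scheduling information */
    uintptr_t    mem_base;       /* memory management: base/limit information */
    size_t       mem_limit;
    uint32_t     cpu_time_used;  /* accounting information */
    int          open_devs[8];   /* I/O status: devices/files allocated */
    struct pcb  *next;           /* queue link (ready/device queues are linked lists) */
} pcb_t;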
Process Scheduling Queues
• Job Queue - set of all processes in the system
• Ready Queue - set of all processes residing in main memory, ready and waiting to execute
• Device Queues - sets of processes waiting for an I/O device
• Processes migrate between the various queues
• Queue structures - typically linked lists, circular lists, etc.
4.1 Thread
A thread is a “lightweight” process, in the sense that different threads share the same address space.
• Threads share global and “static” variables, file descriptors, signal bookkeeping, the code area, and
the heap, but each has its own thread status, program counter, registers, and stack.
• Threads have shorter creation and context-switch times, and faster IPC.
• A context switch saves the state of the currently running task (registers, stack pointer, PC, etc.)
and restores that of the new task.
• A thread consists of sequentially executable program code under state control by an OS.
• The state information of a thread is represented by its thread state (started, running, blocked, or
finished) and its structure: its data, objects, a subset of the process resources, and the thread stack.
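A minimal POSIX threads sketch of this sharing (the counter name and values are illustrative; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;            /* global: visible to every thread */

static void *worker(void *arg)
{
    int increment = *(int *)arg;          /* local: lives on this thread's own stack */
    shared_counter += increment;          /* unsynchronized here; real code needs a mutex */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);               /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* prints 3 */
    return 0;
}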
Multithreading Operating System
• Multithreading operating systems have the ability to execute different parts of a program, called
threads, simultaneously.
• These threads are mutually exclusive parts of the program and can be executed simultaneously
without interfering with each other.

Advantages of Multithreaded Operating System


• Increases CPU utilization by reducing idle time
• Mutually exclusive threads of the same application can be executed simultaneously.
Disadvantages of Multithreaded Operating System
• If not properly programmed, multiple threads can interfere with each other when sharing
hardware resources such as caches.
• There is a chance that the computer might hang while handling multiple threads; even so,
multithreading remains one of the best features of an operating system.
Multiprocessing Operating System
• A multiprocessing system is a computer hardware configuration that includes more than one
independent processing unit. The term multiprocessing is generally used to refer to the large
computer hardware complexes found in major scientific and commercial applications.
• The user can view the operating system as a powerful uniprocessor system.

Advantages of Multiprocessing Operating System


• Due to the multiplicity of processors, multiprocessor systems have better performance (shorter
response times and higher throughput) than single-processor systems.
• In a properly designed multiprocessor system, if one of the processors breaks down, the other
processor(s) automatically take over the system workload until repairs are made. Hence, a
complete breakdown of such systems can be avoided.
Disadvantages of Multiprocessing Operating System
• Expensive to procure and maintain so these systems are not suited to daily use.
• Requires immense overhead to schedule, balance, and coordinate the input, output, and
processing activities of multiple processors.
Multitasking Operating System
• Multitasking operating systems allow more than one program to run at a time.
• An operating system that gives you the perception of 2 or more tasks/jobs/processes running at
the same time.
• It does this by dividing system resources amongst these tasks/jobs/processes and by switching
between them very rapidly, over and over again, while they execute.
• There are two basic types of multitasking: preemptive and cooperative.
• In preemptive multitasking, the operating system parcels out CPU time slices to each program.
• In cooperative multitasking, each program can control the CPU for as long as it needs it. If a
program is not using the CPU, however, it can allow another program to use it temporarily.
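Cooperative multitasking can be sketched in a few lines of C: a dispatcher simply calls each task in turn, and every task must return (i.e., yield) quickly so the others get the CPU (the task names and round count are illustrative):

#include <stdio.h>

typedef void (*task_fn)(void);    /* a task is a function that runs briefly and returns */

static void task_blink(void) { puts("blink: toggling LED"); }
static void task_poll(void)  { puts("poll: reading sensor"); }

int main(void)
{
    task_fn tasks[] = { task_blink, task_poll };
    const int n = sizeof tasks / sizeof tasks[0];

    for (int round = 0; round < 3; ++round)   /* a real system would loop forever */
        for (int i = 0; i < n; ++i)
            tasks[i]();                       /* each task yields by returning */
    return 0;
}

A task that loops internally without returning would starve every other task, which is exactly the responsiveness weakness of cooperative schemes noted later for non-preemptive kernels.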
Advantages of Multitasking Operating System
• Multitasking increases CPU utilization.
• Multiple tasks can be handled at a given time.
Disadvantages of Multitasking Operating System
• To perform multitasking, the speed of the processor must be very high.
• There is a chance that the computer might hang while handling multiple processes; even so,
multitasking remains one of the best features of an operating system.
Schedulers
• Also called “dispatchers”
• Schedulers are parts of the kernel responsible for determining which task runs next
• Most real-time kernels use priority-based scheduling
– Each task is assigned a priority based on its importance
– The priority is application-specific

Priority-Based Kernels
• There are two types
– Non-preemptive
– Preemptive

Non-Preemptive Kernels
• Perform “cooperative multitasking”
– Each task must explicitly give up control of the CPU
– This must be done frequently to maintain the illusion of concurrency
• Asynchronous events are still handled by ISRs
– ISRs can make a higher-priority task ready to run
– But ISRs always return to the interrupted task
Advantages of Non-Preemptive Kernels
• Interrupt latency is typically low
• Can use non-reentrant functions without fear of corruption by another task
– Because each task can run to completion before it relinquishes the CPU
– However, non-reentrant functions should not be allowed to give up control of the CPU
• Task-level response time is determined by the longest task, and is much lower than with foreground/background (F/B) systems
• Less need to guard shared data through the use of semaphores
– However, this rule is not absolute
– Shared I/O devices can still require the use of mutual exclusion semaphores
– A task might still need exclusive access to a printer
Disadvantages of Non-Preemptive Kernels
• Responsiveness
– A higher priority task might have to wait for a long time
– Response time is nondeterministic
• Very few commercial kernels are non-preemptive

Preemptive Kernels
• The highest-priority task ready to run is always given control of the CPU
– If an ISR makes a higher-priority task ready, the higher-priority task is resumed (instead
of the interrupted task)
• Most commercial real-time kernels are preemptive
Advantages of Preemptive Kernels
• Execution of the highest-priority task is deterministic
• Task-level response time is minimized
Disadvantages of Preemptive Kernels
• Should not use non-reentrant functions unless exclusive access to these functions is ensured
Non-Preemptive Scheduling
• Why non-preemptive?
Non-preemptive scheduling is more efficient than preemptive scheduling, since preemption incurs
context-switching overhead, which can be significant in fine-grained multithreading systems.
Basic Real-Time Scheduling
• First Come First Served (FCFS)
• Round Robin (RR)
• Shortest Job First (SJF)
First Come First Served (FCFS)
• Simple “first in first out” queue
• Long average waiting time
• Negative for I/O bound processes
• Non preemptive
Round-Robin Scheduling
Round Robin (RR)
• FCFS + preemption with time quantum
• Performance (average waiting time) depends heavily on the size of the time quantum.
Shortest Job First (SJF)
• Optimal with respect to average waiting time.
• Requires profiling of the execution times of tasks.
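A small worked comparison (the burst times 24, 3, 3 are the classic illustrative values) showing why SJF minimizes average waiting time while FCFS does not:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* Average waiting time when jobs run to completion in array order. */
static double avg_wait(const int *burst, int n)
{
    double wait = 0, t = 0;                   /* t = finish time of the previous job */
    for (int i = 0; i < n; ++i) { wait += t; t += burst[i]; }
    return wait / n;
}

int main(void)
{
    int fcfs[] = { 24, 3, 3 };                /* arrival order */
    int sjf[]  = { 24, 3, 3 };
    qsort(sjf, 3, sizeof sjf[0], cmp);        /* SJF: shortest burst first */
    printf("FCFS avg wait = %.1f\n", avg_wait(fcfs, 3));  /* (0+24+27)/3 = 17.0 */
    printf("SJF  avg wait = %.1f\n", avg_wait(sjf, 3));   /* (0+3+6)/3  =  3.0 */
    return 0;
}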
Shared Memory Communication
• Communication occurs by “simply” reading/writing a shared address page
– Really low overhead communication
– Introduces complex synchronization problems
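A POSIX shared-memory sketch (the segment name is illustrative; older Linux systems need -lrt when linking). No synchronization is shown, which is precisely the complexity the second bullet warns about; here the parent simply waits for the child to exit:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                      /* size the shared page */
    int *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                        /* child: writer */
        *p = 42;
        _exit(0);
    }
    wait(NULL);                               /* parent: reader, after child exits */
    printf("read %d from shared page\n", *p);
    shm_unlink("/demo_shm");
    return 0;
}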
Message Passing Communication
• Messages are collection of data objects and their structures
• Messages have a header containing system dependent control information and a message body
that can be fixed or variable size.
• When a process interacts with another, two requirements have to be satisfied: synchronization and
communication.
Fixed Length
• Easy to implement
• Minimizes processing and storage overhead.
Variable Length
• Requires dynamic memory allocation, so fragmentation could occur.
Basic Communication Primitives
• Two generic message-passing primitives exist for sending and receiving messages:
send (destination, message)
receive (source, message), where source or destination = {process name, link, mailbox, port}
Addressing - Direct and Indirect
1) Direct Send/ Receive communication primitives
Communication entities can be addressed by process names (global process identifiers)

Global Process Identifier can be made unique by concatenating the network host address with the locally
generated process id. This scheme implies that only one direct logical communication path exists
between any pair of sending and receiving processes.
Symmetric Addressing: both processes have to explicitly name each other in the communication primitives.
Asymmetric Addressing: only the sender needs to name the recipient.
2) Indirect Send/ Receive communication primitives
Messages are not sent directly from sender to receiver, but to a shared data structure.
Multiple clients might request services from one of multiple servers, so mailboxes are used: the
abstraction of a finite-size FIFO queue maintained by the kernel.
Synchronization and Buffering


• There are three typical combinations:
1) Blocking Send, Blocking Receive
Both sender and receiver are blocked until the message is delivered. (This provides tight synchronization
between processes.)
2) Non-Blocking Send, Blocking Receive
The sender can continue execution after sending a message; the receiver is blocked until the message
arrives. (This is the most useful combination.)
3) Non-Blocking Send, Non-Blocking Receive
Neither party waits.
Inter-Process Communication
Processes can communicate through shared areas of memory
– this raises the Mutual Exclusion problem and the need for Critical Sections
• Semaphores - a synchronization abstraction
• Monitors - a higher level abstraction
• Inter-Process Message Passing much more useful for information transfer
– can also be used just for synchronization
– can co-exist with shared memory communication
• Two basic operations : send(message) and receive(message)
– message contents can be anything mutually comprehensible
• data, remote procedure calls, executable code etc.
– usually contains standard fields
• destination process ID, sending process ID for any reply
• message length
• data type, data etc.
• Fixed-length messages:
– simple to implement - can have pool of standard-sized buffers
• low overheads and efficient for small lengths
• copying overheads if fixed length too long
– can be inconvenient for user processes with variable amount of data to pass
• may need a sequence of messages to pass all the data
• long messages may be better passed another way e.g. FTP
• copying is probably involved, sometimes multiple copies into and out of the
kernel
• Variable-length messages:
– more difficult to implement - may need a heap with garbage collection
• more overheads and less efficient, memory fragmentation
– more convenient for user processes
IPC – unicast and multicast
• In distributed computing, two or more processes engage in IPC using a protocol agreed upon by
the processes. A process may be a sender at some points during a protocol, a receiver at other
points.
• When communication is from one process to a single other process, the IPC is said to be a
unicast, e.g., Socket communication. When communication is from one process to a group of
processes, the IPC is said to be a multicast, e.g., Publish/Subscribe Message model, a topic that
we will explore in a later chapter.

Unicast vs. Multicast

[Figure: in unicast, process P1 sends a message m to a single process P2; in multicast, P1 sends m to a group of processes P2, P3, …, P4.]
Interprocess communication
• The OS provides interprocess communication mechanisms, which differ in efficiency and in
communication power.
• Interprocess communication (IPC): OS provides mechanisms so that processes can pass data.
• Two types of semantics:
– blocking: the sending process waits for a response;
– non-blocking: the sending process continues.
• Shared memory:
– processes have some memory in common;
– they must cooperate to avoid destroying/missing messages.
• Message passing:
– processes send messages along a communication channel---no common address space. A pipe,
sketched below, is the simplest such channel.
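A minimal pipe sketch between related processes (the message text is illustrative); the kernel carries the bytes, and no address space is shared:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                                     /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                            /* child: receiver */
        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof buf); /* blocks until data arrives */
        printf("child got: %.*s\n", (int)n, buf);
        _exit(0);
    }
    write(fd[1], "ping", 4);                      /* parent: sender */
    wait(NULL);
    return 0;
}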
Blocking, deadlock, and timeouts
• Blocking operations issued in the wrong sequence can cause deadlocks.
• Deadlocks should be avoided. Alternatively, timeout can be used to detect deadlocks.
Deadlocks and Timeouts
• Connect and receive operations can result in indefinite blocking
• For example, a blocking connect request can leave the requesting process suspended indefinitely
if the connection is unfulfilled or cannot be fulfilled, perhaps as a result of a breakdown in the
network.
• It is generally unacceptable for a requesting process to “hang” indefinitely. Indefinite blocking
can be avoided by using a timeout, as in the sketch below.
• Indefinite blocking may also be caused by a deadlock.
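One conventional way to add such a timeout on a socket is SO_RCVTIMEO; the helper below is a sketch (the 5-second value and function name are illustrative):

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Receive with a bounded wait: after 5 s, recv() fails instead of hanging forever. */
int recv_with_timeout(int sock, void *buf, size_t len)
{
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    ssize_t n = recv(sock, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        fprintf(stderr, "receive timed out, not blocked forever\n");
    return (int)n;
}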
Semaphores
• A semaphore is a key that your code acquires in order to continue execution
• If the key is already in use, the requesting task is suspended until the key is released
• There are two types
– Binary semaphores
• 0 or 1
– Counting semaphores
• >= 0
• Initialize (or create)
– Value must be provided
– Waiting list is initially empty
• Wait (or pend)
– Used for acquiring the semaphore
– If the semaphore is available (the semaphore value is positive), the value is
decremented, and the task is not blocked
– Otherwise, the task is blocked and placed in the waiting list
– Most kernels allow you to specify a timeout
– If the timeout occurs, the task will be unblocked and an error code will be returned to
the task
• Signal (or post)
– Used for releasing the semaphore
– If no task is waiting, the semaphore value is incremented
– Otherwise, one of the waiting tasks is made ready to run, and the value is not incremented
– Which waiting task to receive the key?
• Highest-priority waiting task
• First waiting task
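The pend/post operations map directly onto POSIX semaphores; a sketch including the timeout variant described above (the 2-second timeout is illustrative):

#include <semaphore.h>
#include <stdio.h>
#include <time.h>

sem_t key;                                /* the "key" guarding a shared resource */

void demo(void)
{
    sem_init(&key, 0, 1);                 /* binary semaphore: initial value 1 */

    sem_wait(&key);                       /* pend: value is positive, so 1 -> 0, no blocking */
    /* ... use the shared resource ... */
    sem_post(&key);                       /* post: release the key, 0 -> 1 */

    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    ts.tv_sec += 2;                       /* pend with a 2-second timeout */
    if (sem_timedwait(&key, &ts) == -1)
        perror("sem_timedwait");          /* on timeout: unblocked with an error code */
    else
        sem_post(&key);
}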
Semaphore
• A semaphore serves as a key to the resource
• A flag represents the status of the resource
• Prevents re-entering a critical region
• Can be extended to a counting semaphore

Priority inversion
• Typical characterization of priority inversion
– A medium-priority task preempts a lower-priority task which is using a shared resource
on which a higher priority task is blocked
– If the higher-priority task would be otherwise ready to run, but a medium-priority task
is currently running instead, a priority inversion is said to occur
Priority Inheritance
Basic protocol [Sha 1990]
1. A job J uses its assigned priority, unless it is in its critical section (CS) and blocks higher-priority
jobs, in which case J inherits PH, the highest priority of the jobs blocked by J. When J exits the CS,
it resumes the priority it had at the point of entry into the CS.
2. Priority inheritance is transitive.
Advantage
• Transparent to scheduler
Disadvantage
• Deadlock is possible if semaphores are used badly
• Chained blocking: if P accesses n resources locked by processes with lower priorities, P must wait
for n critical sections
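POSIX exposes this protocol directly on mutexes; the sketch below creates a priority-inheritance mutex (the mutex name is illustrative):

#include <pthread.h>

pthread_mutex_t resource_lock;            /* guards the shared resource */

void init_pi_mutex(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);  /* enable inheritance */
    pthread_mutex_init(&resource_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}

A low-priority task holding resource_lock temporarily runs at the priority of the highest-priority task blocked on it, which bounds the inversion described above.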
Chained Blocking
• A weakness of the priority inheritance protocol is that it does not prevent chained blocking.
• Suppose a medium priority thread attempts to take a mutex owned by a low priority thread, but
while the low priority thread's priority is elevated to medium by priority inheritance, a high
priority thread becomes runnable and attempts to take another mutex already owned by the
medium priority thread. The medium priority thread's priority is increased to high, but the high
priority thread now must wait for both the low priority thread and the medium priority thread to
complete before it can run again.
• The chain of blocking critical sections can extend to include the critical sections of any threads
that might access the same mutex. Not only does this make it much more difficult for the system
designer to compute overhead, but since the system designer must compute the worst case
overhead, the chained blocking phenomenon may result in a much less efficient system.
• These blocking factors are added into the computation time for tasks in the RMA (rate-monotonic
analysis), potentially rendering the system unschedulable.
