Chapter 8: Main Memory

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne


Chapter 8: Memory Management
 Background
 Swapping
 Contiguous Memory Allocation
 Paging
 Structure of the Page Table
 Segmentation
 Example: The Intel 32 and 64-bit Architectures
 Example: ARM Architecture

Background

 Program must be brought (from disk) into memory for it to be run
 Main memory and registers are the only storage the CPU can access directly
 Memory consists of a large array of bytes (or words), each with its own address
 Register access takes one CPU clock cycle (or less)
 Main memory access can take many cycles, causing a stall
 Cache sits between main memory and CPU registers
 Protection of memory is required to ensure correct operation

Base and Limit Registers
 A pair of base and limit registers define the logical
address space
 CPU must check every memory access generated in
user mode to be sure it is between base and limit for
that user
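As a minimal sketch of that check (the function and parameter names are illustrative, not from the slides): an access is legal only if it lies in [base, base + limit); anything else causes a trap to the operating system.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical illustration of the hardware base/limit check:
   a user-mode access to 'addr' is legal only if it falls in
   [base, base + limit). Otherwise the CPU traps to the OS. */
bool access_is_legal(uint32_t addr, uint32_t base, uint32_t limit)
{
    return addr >= base && addr < base + limit;
}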

Hardware Address Protection

Address Binding
 Programs on disk, ready to be brought into memory to execute, form an input queue
 Most systems allow a user process to reside in any part of the
physical memory.
 User program will go through several steps before being
executed (figure)
 Further, addresses represented in different ways at different
stages of a program’s life
 Source code addresses usually symbolic
 Compiler binds symbolic addresses to relocatable addresses
 i.e. “14 bytes from the beginning of this module”
 Linker or loader will bind relocatable addresses to absolute addresses
 i.e. 74014
 Each binding maps one address space to another

Binding of Instructions and Data to Memory

 Address binding of instructions and data to memory addresses can happen at three different stages
 Compile time: If memory location known a priori,
absolute code can be generated; must recompile
code if starting location changes
 Load time: Must generate relocatable code if
memory location is not known at compile time
 Execution time: Binding delayed until run time if
the process can be moved during its execution
from one memory segment to another
 Need hardware support for address maps (e.g.,
base and limit registers)

Multistep Processing of a User Program

Logical vs. Physical Address Space

 Logical address – generated by the CPU; also referred to as a virtual address
 Physical address – address seen by the memory unit
 Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme
 Logical address space is the set of all logical addresses generated by a program
 Physical address space is the set of all physical addresses corresponding to these logical addresses
 The run-time mapping from virtual (logical) to physical addresses is done by a hardware device called the memory-management unit (MMU).

Memory-Management Unit (MMU)
 Hardware device that at run time maps virtual to
physical address
 Many methods possible, covered in the rest of this
chapter
 To start, consider simple scheme where the value in
the relocation register is added to every address
generated by a user process at the time it is sent to
memory
 Base register now called relocation register
 The user program deals with logical addresses; it
never sees the real physical addresses
 Execution-time binding occurs when reference is
made to location in memory
 Logical address bound to physical addresses
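A minimal sketch of this simple scheme (the struct and function names are illustrative, not from the text): the value in the relocation register is added to every logical address before it reaches memory.

#include <stdint.h>

/* Illustrative model of the simple MMU described above: the value in
   the relocation (base) register is added to every logical address
   generated by the user process before it is sent to memory.
   For example, with relocation = 14000, logical address 346 maps to
   physical address 14346. */
typedef struct {
    uint32_t relocation;            /* where the process sits in physical memory */
} mmu_regs;

static uint32_t mmu_translate(mmu_regs r, uint32_t logical)
{
    return r.relocation + logical;  /* physical address */
}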

Memory-Management Unit (MMU)

Dynamic relocation using a relocation register

Dynamic Loading

 Routine is not loaded until it is called
 Better memory-space utilization; an unused routine is never loaded
 All routines are kept on disk in relocatable load format
 Useful when large amounts of code are needed to handle infrequently occurring cases
 No special support from the operating system is required
 Implemented through program design
 OS can help by providing libraries to implement dynamic loading

Dynamic Linking
 Static linking – system libraries and program code
combined by the loader into the binary program image
 Dynamic linking – linking postponed until execution time
 Small piece of code, stub, used to locate the appropriate memory-resident library routine
 Stub replaces itself with the address of the routine, and executes the routine
 Operating system checks if the routine is in the process’s memory address space
 If not in the address space, add it to the address space
 Dynamic linking is particularly useful for libraries
 System also known as shared libraries
 Consider applicability to patching system libraries
 Versioning may be needed
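As a concrete illustration of run-time linking on a POSIX system (the use of the math library here is just an example; on Linux, link the program with -ldl), a program can locate a memory-resident shared-library routine itself via dlopen/dlsym:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Open a shared library at run time (POSIX; the file name is
       platform-dependent -- "libm.so.6" is typical on Linux). */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up the address of cos() in the memory-resident library. */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}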

Swapping
 A process can be swapped temporarily out of
memory to a backing store, and then brought back
into memory for continued execution
 Backing store – fast disk large enough to
accommodate copies of all memory images for all
users; must provide direct access to these memory
images
 Roll out, roll in – swapping variant used for priority-
based scheduling algorithms; lower-priority process
is swapped out so higher-priority process can be
loaded and executed
 Major part of swap time is transfer time; total
transfer time is directly proportional to the amount
of memory swapped
 System maintains a ready queue of ready-to-run
processes which have memory images on disk

Swapping (Cont.)
 Does the swapped out process need to swap back in
to same physical addresses?
 Depends on address binding method
 Plus consider pending I/O to / from process
memory space
 Modified versions of swapping are found on many
systems (i.e., UNIX, Linux, and Windows)
 Swapping normally disabled
 Started if more than threshold amount of memory
allocated
 Disabled again once memory demand reduced
below threshold

Schematic View of Swapping

Context Switch Time including Swapping

 If the next process to be put on the CPU is not in memory, need to swap out a process and swap in the target process
 Context switch time can then be very high
 100MB process swapping to hard disk with
transfer rate of 50MB/sec
 Swap out time of 2000 ms
 Plus swap in of same sized process
 Total context switch swapping component time
of 4000ms (4 seconds)

Context Switch Time and Swapping (Cont.)

 Other constraints as well on swapping
 Pending I/O – can’t swap out as I/O would occur to
wrong process
 Or always transfer I/O to kernel space, then to
process space
 Known as double buffering, adds overhead
 Standard swapping not used in modern operating
systems
 But modified version common
 Swap only when free memory extremely low

Contiguous Allocation
 Main memory must support both OS and user
processes
 Limited resource, must allocate efficiently
 Contiguous allocation is one early method
 Main memory is usually divided into two partitions:
 Resident operating system, usually held in low
memory with interrupt vector
 User processes then held in high memory
 Each process contained in single contiguous
section of memory

Contiguous Allocation (Cont.)
 Relocation registers used to protect user processes
from each other, and from changing operating-system
code and data
 Base register contains value of smallest physical
address
 Limit register contains range of logical addresses
– each logical address must be less than the limit
register
 MMU maps logical address dynamically

Hardware Support for Relocation and Limit Registers

Memory Allocation – Fixed Partitions
 One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions.
 Each partition may contain exactly one process.
 Degree of multiprogramming is bound by the number of partitions.
 When a partition is free, a process is selected from the input queue and loaded into the free partition.
 When a process terminates, its partition becomes available for another process.

Variable-Partition Allocation
 Multiple-partition allocation
 Variable-partition sizes for efficiency (sized to a given
process’ needs)
 Hole – block of available memory; holes of various size are
scattered throughout memory
 When a process arrives, it is allocated memory from a hole
large enough to accommodate it
 Process exiting frees its partition, adjacent free partitions
combined
 Operating system maintains information about:
a) allocated partitions b) free partitions (hole)

Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?

 First-fit: Allocate the first hole that is big enough
 Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
 Produces the smallest leftover hole
 Worst-fit: Allocate the largest hole; must also search entire list
 Produces the largest leftover hole
First-fit and best-fit better than worst-fit in terms of speed
and storage utilization
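A minimal sketch of the three placement strategies over a list of hole sizes (the data layout and names are illustrative, not from the text); it returns the index of the chosen hole, or -1 if no hole is large enough:

#include <stddef.h>

typedef enum { FIRST_FIT, BEST_FIT, WORST_FIT } strategy;

/* Return the index of the hole chosen for a request of 'n' units,
   or -1 if the request cannot be satisfied. */
int choose_hole(const size_t holes[], int nholes, size_t n, strategy s)
{
    int chosen = -1;
    for (int i = 0; i < nholes; i++) {
        if (holes[i] < n)
            continue;                       /* hole too small        */
        if (s == FIRST_FIT)
            return i;                       /* first big-enough hole */
        if (chosen == -1 ||
            (s == BEST_FIT  && holes[i] < holes[chosen]) ||  /* smallest leftover */
            (s == WORST_FIT && holes[i] > holes[chosen]))    /* largest leftover  */
            chosen = i;
    }
    return chosen;
}

For instance, with the 100K, 500K, 200K, 300K, 600K partitions from the problem below, a 212K request goes to the 500K hole under first fit, the 300K hole under best fit, and the 600K hole under worst fit.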

Fragmentation
 External Fragmentation – total memory space
exists to satisfy a request, but it is not
contiguous
 Internal Fragmentation – allocated memory may
be slightly larger than requested memory; this
size difference is memory internal to a partition,
but not being used
 First-fit analysis reveals that given N allocated blocks, another 0.5 N blocks will be lost to fragmentation
 That is, one-third of memory may be unusable -> the 50-percent rule

Fragmentation (Cont.)
 Reduce external fragmentation by compaction
 Shuffle memory contents to place all free
memory together in one large block
 Compaction is possible only if relocation is
dynamic, and is done at execution time
 The simplest compaction algorithm is to move all processes toward one end of memory; all holes move in the other direction, producing one large hole of available memory.
 This scheme is expensive
 Another solution is – noncontiguous allocation

Problem

1. Given the memory partitions of 100K, 500K, 200K, 300K, and 600K, apply the first-fit, worst-fit, and best-fit algorithms to place 212K, 417K, 112K, and 426K.

Problem variable size
Process requests are given as:
P1-300
P2-25
P3-125
P4-50
Determine which algorithm can optimally satisfy these requests

Problem fixed size

Given the memory partitions of 200K, 400K, 600K, 500K, 300K, and 250K, apply the first-fit, worst-fit, and best-fit algorithms to place 357K, 210K, 468K, and 491K.

P1-357
P2-210
P3-468
P4-491

Paging
 Physical address space of a process can be noncontiguous;
 Avoids external fragmentation
 Avoids problem of varying sized memory chunks
 Divide physical memory into fixed-sized blocks called
frames
 Size is power of 2, between 512 bytes and 16 Mbytes
 Divide logical memory into blocks of same size called pages
 Keep track of all free frames
 To run a program of size N pages, need to find N free frames
and load program
 Set up a page table to translate logical to physical
addresses
 Backing store likewise split into pages
 Still have internal fragmentation

Address Translation Scheme
 Address generated by CPU is divided into:
 Page number (p) – used as an index into a page
table which contains base address of each page in
physical memory
 Page offset (d) – combined with base address to
define the physical memory address that is sent to
the memory unit

 For a given logical address space of size 2^m and page size 2^n, the page number is the high-order m - n bits of the logical address and the page offset is the low-order n bits
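
A minimal sketch of the translation described above, assuming a single-level page table stored as an array indexed by page number and a 4 KB page size (names and constants are illustrative):

#include <stdint.h>

#define PAGE_BITS 12                        /* 4 KB pages: offset is the low 12 bits */
#define PAGE_SIZE (1u << PAGE_BITS)

/* page_table[p] holds the frame number for page p. */
uint32_t paging_translate(const uint32_t page_table[], uint32_t logical)
{
    uint32_t p = logical >> PAGE_BITS;      /* page number: high-order bits */
    uint32_t d = logical & (PAGE_SIZE - 1); /* page offset: low-order bits  */
    uint32_t frame = page_table[p];         /* look up frame in page table  */
    return (frame << PAGE_BITS) | d;        /* physical address             */
}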

Paging Hardware

Paging Model of Logical and Physical Memory

Paging Example

Logical addresses have n = 2 and m = 4: a 32-byte memory and 4-byte pages
Paging (Cont.)
 Calculating internal fragmentation
 Page size = 2,048 bytes
 Process size = 72,766 bytes
 35 pages + 1,086 bytes
 Internal fragmentation of 2,048 - 1,086 = 962 bytes
 Worst case fragmentation = 1 frame – 1 byte
 On average fragmentation = 1 / 2 frame size
 So small frame sizes desirable?
 But each page table entry takes memory to track
 Page sizes growing over time
 Solaris supports two page sizes – 8 KB and 4 MB
 Process view and physical memory now very different
 By implementation process can only access its own memory

Free Frames

Before allocation After allocation

Implementation of Page Table
 Page table is kept in main memory
 Page-table base register (PTBR) points to the
page table
 Page-table length register (PTLR) indicates size
of the page table
 In this scheme every data/instruction access
requires two memory accesses
 One for the page table and one for the data /
instruction
 The two memory access problem can be solved
by the use of a special fast-lookup hardware
cache called associative memory or translation
look-aside buffers (TLBs)

Implementation of Page Table (Cont.)
 Some TLBs store address-space identifiers (ASIDs)
in each TLB entry – uniquely identifies each
process to provide address-space protection for
that process
 Otherwise need to flush at every context
switch
 TLBs typically small (64 to 1,024 entries)
 On a TLB miss, value is loaded into the TLB for
faster access next time
 Replacement policies must be considered
 Some entries can be wired down for permanent
fast access

Paging Hardware With TLB

Effective Access Time
 TLB lookup = ε time units
 Can be < 10% of memory access time
 Hit ratio = α
 Hit ratio – percentage of times that a page number is found in the associative registers; related to the number of associative registers
 Consider α = 80%, ε = 20 ns for TLB search, 100 ns for memory access
 EAT = 0.80 x 120 + 0.20 x 220 = 140 ns
 Consider a more realistic hit ratio -> α = 99%, ε = 20 ns for TLB search, 100 ns for memory access
 EAT = 0.99 x 120 + 0.01 x 220 = 121 ns
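
In general, with hit ratio α, TLB search time ε, and memory access time m, the figures above follow from the usual formula (assuming a TLB miss costs one extra memory access to read the page table):

EAT = α x (ε + m) + (1 - α) x (ε + 2m)
e.g. 0.80 x (20 + 100) + 0.20 x (20 + 200) = 0.80 x 120 + 0.20 x 220 = 140 ns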

Memory Protection
 Memory protection implemented by associating
protection bit with each frame to indicate if read-
only or read-write access is allowed
 Can also add more bits to indicate page execute-
only, and so on
 Valid-invalid bit attached to each entry in the page
table:
 “valid” indicates that the associated page is in
the process’ logical address space, and is thus a
legal page
 “invalid” indicates that the page is not in the
process’ logical address space
 Or use page-table length register (PTLR)
 Any violations result in a trap to the kernel

Valid (v) or Invalid (i) Bit In A Page Table

Shared Pages
 Paging systems can make it very easy to share blocks of memory, simply by having multiple page tables map to the same page frames. This may be done with either code or data.
 If code is reentrant, that means it does not write to or change itself in any way (it is non-self-modifying), and it is therefore safe to re-enter. More importantly, it means the code can be shared by multiple processes, so long as each has its own copy of the data and registers, including the instruction register.
 In the example given below, three different users are
running the editor simultaneously, but the code is only
loaded into memory ( in the page frames ) one time.
 Some systems also implement shared memory in this
fashion.

Shared Pages Example

Structure of the Page Table
 Memory structures for paging can get huge using
straight-forward methods
 Consider a 32-bit logical address space as on modern computers
 Page size of 4 KB (2^12)
 Page table would have 2^20 (about 1 million) entries (2^32 / 2^12)
 If each entry is 4 bytes -> 4 MB of physical address space / memory for the page table alone
 That amount of memory used to cost a lot
 Don’t want to allocate that contiguously in main
memory
 Hierarchical Paging
 Hashed Page Tables
 Inverted Page Tables

Hierarchical Page Tables
 A simple technique is a two-level page
table

Two-Level Paging Example
 A logical address (on 32-bit machine with 1K page size) is
divided into:
 a page number consisting of 22 bits
 a page offset consisting of 10 bits

 Since the page table is paged, the page number is further divided into:
 a 12-bit page number
 a 10-bit page offset
 Thus, a logical address is as follows:

page number p1 (12 bits) | page number p2 (10 bits) | page offset d (10 bits)

 where p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table
 Known as forward-mapped page table
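
A small sketch of how the three fields could be extracted from such an address, following the 12/10/10 split above (macro names are illustrative):

#include <stdint.h>

/* 32-bit logical address, 1K pages: | p1 (12 bits) | p2 (10 bits) | d (10 bits) | */
#define P1(addr) (((addr) >> 20) & 0xFFF)   /* index into the outer page table */
#define P2(addr) (((addr) >> 10) & 0x3FF)   /* index into the inner page table */
#define D(addr)  ((addr) & 0x3FF)           /* offset within the page          */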

Address-Translation Scheme

64-bit Logical Address Space
 Even two-level paging scheme not sufficient
 If page size is 4 KB (2^12)
 Then page table has 2^52 entries
 If two-level scheme, inner page tables could be 2^10 4-byte entries
 Address would look like:

outer page p1 (42 bits) | inner page p2 (10 bits) | offset d (12 bits)

 Outer page table has 2^42 entries or 2^44 bytes
 One solution is to add a 2nd outer page table
 But in the following example the 2nd outer page table is still 2^34 bytes in size
 And possibly 4 memory accesses to get to one physical memory location

Three-level Paging Scheme

Hashed Page Tables
 Common in address spaces > 32 bits
 The virtual page number is hashed into a page table
 This page table contains a chain of elements hashing
to the same location
 Each element contains (1) the virtual page number (2)
the value of the mapped page frame (3) a pointer to the
next element
 Virtual page numbers are compared in this chain
searching for a match
 If a match is found, the corresponding physical frame
is extracted
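
A minimal sketch of that chained lookup (the structure and function names are illustrative, not from the text):

#include <stdint.h>

typedef struct hpt_entry {
    uint64_t vpn;               /* (1) virtual page number       */
    uint64_t frame;             /* (2) mapped page frame         */
    struct hpt_entry *next;     /* (3) next element in the chain */
} hpt_entry;

/* Walk the chain at the hashed bucket, comparing virtual page numbers;
   return the frame on a match, or -1 if the page is not in the table. */
int64_t hpt_lookup(hpt_entry *table[], uint64_t nbuckets, uint64_t vpn)
{
    for (hpt_entry *e = table[vpn % nbuckets]; e != NULL; e = e->next)
        if (e->vpn == vpn)
            return (int64_t) e->frame;
    return -1;
}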

Hashed Page Table

Inverted Page Table
 Rather than each process having a page table and
keeping track of all possible logical pages, track all
physical pages
 One entry for each real page of memory
 Entry consists of the virtual address of the page
stored in that real memory location, with
information about the process that owns that page
 Decreases memory needed to store each page
table, but increases time needed to search the
table when a page reference occurs
 Use hash table to limit the search time
 But how to implement shared memory?
 One mapping of a virtual address to the shared
physical address

Inverted Page Table Architecture

Segmentation
 Basic Method
 Most users ( programmers ) do not think of their
programs as existing in one continuous linear
address space.
 Rather they tend to think of their memory in
multiple segments, each dedicated to a particular
use, such as code, data, the stack, the heap, etc.
 Memory segmentation supports this view by
providing addresses with a segment number
( mapped to a segment base address ) and an
offset from the beginning of that segment.
 For example, a C compiler might generate 5 segments for the user code, library code, global variables, the stack, and the heap, as shown in the following figure.

User’s View of a Program

Logical View of Segmentation

(figure: segments 1-4 of the user space mapped onto noncontiguous regions of physical memory)

Segmentation Hardware
 A segment table maps segment-offset addresses to physical
addresses, and simultaneously checks for invalid addresses,
using a system similar to the page tables and relocation base
registers discussed previously. ( Note that at this point in the
discussion of segmentation, each segment is kept in contiguous
memory and may be of different sizes, but that segmentation
can also be combined with paging )
 Logical address consists of a two tuple:
<segment-number, offset>
 Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
 base – contains the starting physical address where the segment resides in memory
 limit – specifies the length of the segment
 Segment-table base register (STBR) points to the segment table’s location in memory
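
A minimal sketch of this translation (types and names are illustrative): the offset is compared against the segment's limit, and only then added to the segment's base.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { uint32_t base; uint32_t limit; } segment;

/* Translate <segment-number, offset>: trap if the offset is not
   within the segment's limit, otherwise add the segment base. */
uint32_t seg_translate(const segment seg_table[], uint32_t s, uint32_t offset)
{
    if (offset >= seg_table[s].limit) {
        fprintf(stderr, "trap: offset %u beyond segment %u limit\n", offset, s);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + offset;
}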

Segmentation Hardware

Example of segmentation

Problem

1. Consider a logical address space of 64 pages of 1,024 words each, mapped onto a physical memory of 32 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?

2. Consider a logical address space of 32 pages of 1,024 words each, mapped onto a physical memory of 16 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?
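
A sketch of the reasoning for problem 1 (problem 2 follows the same pattern):
64 pages = 2^6 and 1,024 words per page = 2^10, so a logical address needs 6 + 10 = 16 bits.
32 frames = 2^5, each holding 2^10 words, so a physical address needs 5 + 10 = 15 bits.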

Problem
 Consider a paging system with the page table stored in memory.
(a) If a memory reference takes 200 nanoseconds, how long does a paged memory
reference take?
(b) If we add TLB, and 75% of all page-table references are found in the TLB,
what is the effective memory reference time? (Assume that finding a page-
table entry in the associative registers takes zero time, if the entry is there.)

Solution

• 400 nanoseconds. 200 ns to access the page table plus 200 ns to access the word in memory.
• 250 nanoseconds. 75% of the time it's 200 ns, and the other 25% of the time it's 400 ns, so the equation is: e.a. = (.75 * 200) + (.25 * 400)

Try this, too: What if the time to access the TLB is actually 2 ns -- how does your answer change?
e.a. = (.75 * 202) + (.25 * 402) = 252 ns

Problem

Solution

Solution

Solution

problem

Consider a computer system with a 32-bit logical address and 4-KB page
size. The system supports up to 512 MB of physical memory. How many
entries are there in each of the following?

a. A conventional single-level page table
b. An inverted page table
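
A sketch of the counting involved:
a. A conventional page table needs one entry per page of the logical address space: 2^32 / 2^12 = 2^20 entries (about one million).
b. An inverted page table needs one entry per physical frame: 512 MB / 4 KB = 2^29 / 2^12 = 2^17 entries (131,072).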

Problems

Solution

End of Chapter 8

