Virtual Memory - CH 10
Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Chapter 10: Virtual Memory
Background
Demand Paging
Copy-on-Write
Page Replacement
Allocation of Frames
Thrashing
Memory-Mapped Files
Allocating Kernel Memory
Other Considerations
Operating-System Examples
Objectives
Background
Code needs to be in memory to execute, but entire program rarely used
• Error code, unusual routines, large data structures
Even in those cases where the entire program is needed, it may not all be
needed at the same time
Consider ability to execute partially-loaded program
• Program no longer constrained by limits of physical memory
• Programs could be larger than physical memory
• Each program takes less memory while running -> more programs run
at the same time
Increased CPU utilization and throughput with no increase in
response time or turnaround time
Virtual memory
Virtual Memory That is Larger Than Physical Memory
Demand Paging
Pages are only loaded when they are demanded during
program execution
Pages that are never accessed are thus never loaded into physical
memory
A demand-paging system is similar to a paging system with
swapping
Rather than swapping the entire process into memory, however, we
use a lazy swapper
Lazy swapper: never swaps a page into memory unless that page
will be needed
Swapper that deals with pages is a pager
Demand Paging
Basic Concepts
Rather than swapping in a whole process, the pager guesses which pages will be
used before the process is swapped out again
The pager brings only those pages into memory
How to determine that set of pages?
• Need new MMU functionality to implement demand paging
If pages needed are already memory resident
• No difference from non demand-paging
If page needed and not memory resident
• Need to detect and load the page into memory from storage
Without changing program behavior
Without programmer needing to change code
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated
(v => the page is memory resident, i => the page is not in memory)
Initially, the valid–invalid bit is set to i on all entries
During MMU address translation, if the valid–invalid bit is i => page fault
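The valid–invalid bit mechanism can be sketched as a toy simulation (all names here are illustrative; in a real system the check happens in the MMU hardware and the handler runs in the kernel):

```python
# Toy demand-paging sketch: a page table whose entries carry a valid bit.
# An access to an invalid page raises a "page fault", which the handler
# services by loading the page from the backing store and setting the bit.

class PageTable:
    def __init__(self, num_pages):
        # Pure demand paging: initially no page is memory resident.
        self.valid = [False] * num_pages
        self.faults = 0

    def access(self, page):
        if not self.valid[page]:            # hardware would trap to the OS
            self.faults += 1
            self.load_from_backing_store(page)
        return page                         # translation would happen here

    def load_from_backing_store(self, page):
        # Pager reads the page from the swap device, then fixes the entry.
        self.valid[page] = True

pt = PageTable(num_pages=8)
for p in [0, 1, 0, 2, 1]:
    pt.access(p)
print(pt.faults)    # 3 faults: only the first touch of pages 0, 1, 2 faults
```

Repeated accesses to an already-resident page never fault, which is why locality of reference keeps the fault rate low in practice.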
Page Table When Some Pages Are Not
in Main Memory
Steps in Handling a Page Fault
Steps in Handling a Page Fault (Cont.)
Stages in Demand Paging – Worst Case
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the
page on the disk
5. Issue a read from the disk to a free frame:
a. Wait in a queue for this device until the read request is serviced
b. Wait for the device seek and/or latency time
c. Begin the transfer of the page to a free frame
Stages in Demand Paging (Cont.)
6. While waiting, allocate the CPU to some other user
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show page is now in memory
11. Wait for the CPU to be allocated to this process again
12. Restore the user registers, process state, and new page table, and then
resume the interrupted instruction
Pure Demand Paging
Start process with no pages in memory
• OS sets the instruction pointer to the first instruction of the process;
since that page is not memory resident, a page fault occurs
• Likewise, a page fault occurs on the first access to every other page of
the process
• Never bring a page into memory until it is required
Theoretically, a given instruction could access multiple pages, causing
multiple page faults
• Consider fetch and decode of an instruction that adds 2 numbers from
memory and stores the result back to memory
• Pain decreased because of locality of reference
Hardware support needed for demand paging
• Page table with valid / invalid bit
• Secondary memory (swap device with swap space)
• Instruction restart
Performance of Demand Paging
Page fault rate p, where 0 ≤ p ≤ 1
• If p = 0, no page faults; if p = 1, every reference causes a fault
Effective Access Time (EAT)
EAT = (1 – p) x memory access time + p x page-fault overhead
Page-fault overhead:
1. Service the page-fault interrupt
2. Read in the page from disk – lots of time
3. Restart the process
Demand Paging Example
Memory access time = 200 nanoseconds
Average page-fault service time = 8 milliseconds
EAT = (1 – p) x 200 + p x 8,000,000 ns
    = 200 + p x 7,999,800
If one access out of 1,000 causes a page fault (p = 1/1000), then
EAT = 8.2 microseconds
This is a slowdown by a factor of 40!! (memory access becomes 40 times slower)
If we want performance degradation < 10 percent, then EAT < 220 ns
• 220 > 200 + 7,999,800 x p
20 > 7,999,800 x p
• p < 0.0000025 (fewer than 25 page faults in any 10,000,000 memory accesses)
• i.e., less than one page fault in every 400,000 memory accesses
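The arithmetic above can be checked with a short sketch (values are the ones from the example: 200 ns memory access, 8 ms page-fault service time):

```python
# Effective access time for demand paging:
#   EAT = (1 - p) * memory access time + p * page-fault service time

def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT in nanoseconds for page-fault rate p."""
    return (1 - p) * mem_ns + p * fault_ns

eat = effective_access_time(1 / 1000)
print(eat)          # 8199.8 ns, i.e. about 8.2 microseconds
print(eat / 200)    # slowdown factor of about 41 (the slides round to 40)

# Bound on p for less than 10% degradation (EAT < 220 ns):
p_max = (220 - 200) / 7_999_800
print(p_max)        # about 2.5e-06, i.e. < 1 fault per 400,000 accesses
```

Even a tiny fault rate dominates EAT because the fault service time is about 40,000 times the memory access time.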
Copy-on-Write
Recall that the fork() system call creates a child process as a duplicate of
its parent
COW allows more efficient process creation as only modified pages are
copied
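The copy-on-write idea can be sketched as a toy simulation (the class and method names here are illustrative; real kernels implement COW per page-table entry, with hardware write protection triggering the copy):

```python
# Toy copy-on-write: after a fork, parent and child share the same
# frames. A write copies only the written page; unmodified pages
# remain shared between the two processes.

class Process:
    def __init__(self, pages):
        self.pages = pages              # list of frames (dicts stand in for frames)

    def fork(self):
        # Child's page table points at the parent's frames: no copying yet.
        return Process(list(self.pages))

    def write(self, i, value):
        # "COW fault": copy the frame on first write, then modify the copy.
        self.pages[i] = dict(self.pages[i])
        self.pages[i]['data'] = value

parent = Process([{'data': 'a'}, {'data': 'b'}])
child = parent.fork()
child.write(0, 'c')
print(parent.pages[0]['data'], child.pages[0]['data'])   # a c
print(parent.pages[1] is child.pages[1])                 # True: still shared
```

Only the one modified page was duplicated, which is why fork() followed by exec() is cheap under COW.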
Page Replacement
If no frame is free when a page fault occurs:
Find some page in memory that is not currently being used and page it out
We can free a frame by writing its contents to swap space and changing
the page table
Page Replacement
Page replacement increases the effective access time
Use modify (dirty) bit to reduce overhead of page transfers
Each page has a modify bit associated with it
The modify bit for a page is set by the hardware whenever any word
or byte in the page is written
Only modified pages are written to disk
Page replacement completes separation between logical memory and
physical memory
large virtual memory can be provided on a smaller physical memory
Demand Paging Algorithms
Frame-allocation algorithm
Determines how many frames to allocate to each process
Page-replacement algorithm
Selects the pages to be replaced when a new page must be brought in
Repeated access to the same page does not cause a page fault
Optimal Algorithm
Replace the page that will not be used for the longest period of time
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 (3 frames)

frame 1: 7 7 7 2 2 2 2 2 7
frame 2:   0 0 0 0 4 0 0 0
frame 3:     1 1 3 3 3 1 1

9 page faults (each column shows the frame contents after a fault)
First-In-First-Out (FIFO) Algorithm
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 (3 frames)

frame 1: 7 7 7 2 2 2 4 4 4 0 0 0 7 7 7
frame 2:   0 0 0 3 3 3 2 2 2 1 1 1 0 0
frame 3:     1 1 1 0 0 0 3 3 3 2 2 2 1

15 page faults (each column shows the frame contents after a fault)
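The FIFO trace can be checked with a short simulation (a sketch; which physical frame holds which page is arbitrary, only the eviction order matters):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement on a reference string."""
    frames = deque()                  # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:        # page fault
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the page brought in first
            frames.append(page)
        # note: a hit does NOT change FIFO order
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))    # 15 page faults, matching the trace above
```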
Graph of Page Faults Versus the Number of Frames
Belady’s Anomaly
For some page-replacement algorithms, the page-fault rate may
increase as the number of allocated frames increases
Example: reference string = 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Least Recently Used (LRU)
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 (3 frames)

frame 1: 7 7 7 2 2 4 4 4 0 1 1 1
frame 2:   0 0 0 0 0 0 3 3 3 0 0
frame 3:     1 1 3 3 2 2 2 2 2 7

12 page faults (each column shows the frame contents after a fault)
Least Recently Used (LRU)
Stack implementation
Keep a stack of page numbers in a doubly linked list
Page referenced: move it to the top
But each update more expensive
No search for replacement
Counter implementation
Every page entry has a counter; every time page is referenced through this entry,
copy the clock into the counter
When a page needs to be replaced, look at the counters to find the smallest value
Search through table needed
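The stack scheme can be sketched with an ordered dictionary (a sketch: `OrderedDict` plays the doubly linked list, `move_to_end()` is the "move referenced page to the top" step, so choosing a victim never requires a search):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults for LRU via the stack implementation."""
    stack = OrderedDict()                 # bottom = least recently used
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)       # referenced: move to the top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False) # evict the LRU page (bottom)
            stack[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))    # 12 page faults, matching the LRU trace above
```

Every reference pays the cost of a stack update, but replacement itself is O(1), which is the trade-off the slide describes.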
Least Recently Used (LRU)
Few computer systems provide sufficient hardware support for true LRU
Reference bits are associated with each entry in the page table
Additional-Reference-Bits Algorithm
Clock algorithm
An implementation of the second-chance algorithm using a
circular queue
Second-Chance Algorithm
Enhanced Second-Chance Algorithm
We can use reference bit and modify bit together
Then we have four cases
(0, 0) neither recently used nor modified - best page to replace
(0, 1) not recently used but modified - not quite as good; needs to be
written out
(1, 0) recently used but clean - probably will be used again soon
(1, 1) recently used and modified - probably will be used again soon,
and needs to be written out
We replace the first page encountered in the lowest nonempty
class
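The basic one-bit second-chance (clock) sweep can be sketched as follows (an illustrative structure: frames in a circular queue, a hand index, and one reference bit per frame):

```python
def clock_replace(frames, ref_bits, hand):
    """Pick a victim frame with the second-chance (clock) algorithm.

    Sweep the circular queue: a page whose reference bit is 1 gets a
    second chance (bit cleared, hand advances); the first page found
    with reference bit 0 is the victim.
    """
    while ref_bits[hand]:
        ref_bits[hand] = 0               # clear the bit: second chance used up
        hand = (hand + 1) % len(frames)  # advance the clock hand
    return hand                          # index of the victim frame

frames = ['A', 'B', 'C', 'D']
ref_bits = [1, 1, 0, 1]
victim = clock_replace(frames, ref_bits, hand=0)
print(frames[victim])    # C: the first frame found with reference bit 0
```

The enhanced version would rank candidates by the (reference, modify) classes above instead of looking at the reference bit alone.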
Combined Examples
Comparison
Counting Algorithms
Keep a counter of the number of references that have been made to each page
• LFU: replace the page with the smallest count
• MFU: replace the page with the largest count (argument: a page with a
small count was probably just brought in and has yet to be used)
Allocation of Frames
Fixed-allocation
Gives a process a fixed number of pages within which to execute
Variable-allocation
Number of pages allocated to a process varies over the lifetime
of the process
Fixed Allocation
Equal allocation
If there are 100 frames (after allocating frames for the OS) and 5
processes, give each process 20 frames
Proportional allocation
Allocate according to the size of process
m = total number of frames
s_i = size of process p_i
S = Σ s_i
a_i = allocation for p_i = (s_i / S) x m

Example: m = 64, s_1 = 10, s_2 = 127, so S = 137
a_1 = (10 / 137) x 64 ≈ 5
a_2 = (127 / 137) x 64 ≈ 59
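The formula can be sketched directly (rounding to the nearest integer is an assumption here; it reproduces the 5 and 59 of the example):

```python
def proportional_allocation(sizes, m):
    """a_i = (s_i / S) * m, where S is the total size of all processes."""
    S = sum(sizes)
    return [round(s / S * m) for s in sizes]

# Example from the slide: m = 64 frames, process sizes 10 and 127 pages.
print(proportional_allocation([10, 127], 64))   # [5, 59]
```

A process's share of frames thus scales with its size, unlike equal allocation, which would give both processes 32 frames each.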
Replacement Scope
Global replacement: a process selects a replacement frame from the set of
all frames; one process can take a frame from another
Local replacement: each process selects only from its own set of allocated
frames
Global replacement can lead to increased swapping
Variable Allocation, Global Scope
Easiest to implement
Adopted by many operating systems