Chapter 8: Main Memory

CS330 - Operating Systems

Memory Management, Address Binding, Swapping, Paging & Segmentation

8.1 Background

Memory management is crucial for multiprogramming. The OS must:

  • Keep track of which parts of memory are currently being used and by whom
  • Decide which processes to load when memory space becomes available
  • Allocate and deallocate memory space as needed

Basic Hardware

Main memory and registers are the only storage that the CPU can access directly.

Memory Access Times:

  • Register access: One CPU clock cycle or less
  • Main memory: Many cycles (causes stalls)
  • Cache: Sits between main memory and CPU registers

Memory Protection

Base and Limit Registers

The CPU generates a logical address, and the hardware compares it against two registers before the reference reaches memory:

  • Address ≥ Base
  • Address < Base + Limit

If both checks pass, the address is sent to memory as a physical address; if either check fails, the hardware traps to the OS (addressing error).
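The base/limit check can be sketched in a few lines of Python (function and register values here are illustrative, not from the chapter):

```python
# Sketch of the hardware base/limit protection check: a CPU-generated
# address is legal only if base <= address < base + limit; otherwise
# the hardware traps to the OS with an addressing error.

def check_address(address: int, base: int, limit: int) -> bool:
    """Return True if the address is legal for this process."""
    return base <= address < base + limit

# A process with base=300040 and limit=120900 may legally touch
# addresses 300040 through 420939 inclusive.
```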

8.2 Address Binding

Programs must be brought into memory and placed within a process for execution.

Binding of Instructions and Data to Memory

| Binding Time | Description | Address Type | Flexibility |
|---|---|---|---|
| Compile time | If the memory location is known a priori, absolute code can be generated | Absolute addresses | Must recompile if the starting location changes |
| Load time | Compiler generates relocatable code | Relocatable addresses | Final binding delayed until load time |
| Execution time | Binding delayed until run time | Dynamic addresses | Process can move during execution |

Logical vs Physical Address Space

Logical Address: Generated by CPU; also called virtual address

Physical Address: Address seen by memory unit

Logical and physical addresses are identical under compile-time and load-time binding schemes; they differ under the execution-time binding scheme.

Memory-Management Unit (MMU)

Address Translation

Physical Address = Logical Address + Relocation Register

The user program deals with logical addresses; it never sees real physical addresses
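A minimal sketch of this relocation scheme, using an assumed relocation-register value of 14000 for illustration:

```python
# Simple MMU model for execution-time binding: the relocation
# register is added to every logical address the CPU generates.
RELOCATION_REGISTER = 14000  # illustrative value

def translate(logical: int) -> int:
    """Map a logical address to its physical address."""
    return logical + RELOCATION_REGISTER

# Logical address 346 maps to physical address 14346; the user
# program only ever sees 346.
```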

8.3 Swapping

A process can be swapped temporarily out of memory to backing store and then brought back for continued execution.

[Diagram: main memory, holding the operating system, Process P1, and free space, connected to a backing store on disk via swap out / swap in]
Swapping Constraints:

  • Process must be completely idle before swapping
  • Never swap a process with pending I/O
  • Transfer time is proportional to amount of memory swapped

Swapping Time Calculation:

Process size: 100MB

Backing store transfer rate: 50MB/sec

Transfer time = 100MB / 50MB/sec = 2 seconds

Total swap time (out + in) = 4 seconds
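The calculation above can be captured in a small helper (a sketch; it ignores seek and latency costs, which the chapter does not model):

```python
def swap_time_seconds(process_mb: float, rate_mb_per_s: float) -> float:
    """Round-trip swap time: swap out plus swap in."""
    one_way = process_mb / rate_mb_per_s
    return 2 * one_way

# A 100 MB process at 50 MB/s takes 2 s each way, 4 s round trip.
```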

8.4 Contiguous Memory Allocation

Memory Allocation

Main memory is usually divided into two partitions: the resident operating system (commonly placed in low memory) and the user processes (placed in high memory).

Multiple-Partition Allocation

Example memory map (1024K total):

  • 0–400K: Operating system
  • 400–600K: Process 1
  • 600–900K: Free
  • 900–1024K: Process 2

Dynamic Storage-Allocation Strategies

First-Fit

Allocate the first hole that is big enough

✓ Fast

✗ May create small fragments

Best-Fit

Allocate the smallest hole that is big enough

✓ Leaves larger holes

✗ Slower, creates tiny fragments

Worst-Fit

Allocate the largest hole

✓ May reduce fragmentation

✗ Poor utilization

Note: First-fit and best-fit are generally better than worst-fit in terms of speed and storage utilization
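The three strategies can be sketched over a list of free hole sizes (helper names and the index-based interface are assumptions for illustration):

```python
# Each function returns the index of the chosen hole, or -1 if no
# hole is large enough to satisfy the request.

def first_fit(holes, request):
    """Allocate the first hole that is big enough."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return -1

def best_fit(holes, request):
    """Allocate the smallest hole that is big enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else -1

def worst_fit(holes, request):
    """Allocate the largest hole."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else -1
```

For holes of sizes [100, 500, 200, 300, 600] and a 212-unit request, first-fit picks the 500 hole, best-fit the 300 hole, and worst-fit the 600 hole.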

8.5 Fragmentation

External Fragmentation

Total memory space exists to satisfy request, but it is not contiguous

Example layout: | Process A | Free 50K | Process B | Free 100K | Process C |

Problem: a 150K request cannot be satisfied even though total free memory is 150K, because the free space is split into separate 50K and 100K holes.

Solution: Compaction or Paging

Internal Fragmentation

Allocated memory may be larger than requested memory

Example: a process that needs 18K is given a 20K block, so 2K inside the block is wasted.

Occurs in: Fixed partition schemes, Paging

50-Percent Rule (Statistical Analysis):

Given N allocated blocks, another 0.5N blocks will be lost to external fragmentation. That is 0.5N of every 1.5N blocks, so one-third of memory may be unusable!

8.6 Paging

Paging allows physical address space of a process to be noncontiguous

  • Divide physical memory into fixed-sized blocks called frames
  • Divide logical memory into blocks of same size called pages
  • Keep track of all free frames
  • Set up a page table to translate logical to physical addresses

Address Translation

Logical Address Structure

  • Page number (p): used as an index into the page table
  • Page offset (d): combined with the frame's base address to form the physical address

Page Table Example

Logical memory consists of pages 0–3.

Page table (page → frame):

| Page | Frame |
|---|---|
| 0 | 5 |
| 1 | 6 |
| 2 | 1 |
| 3 | 2 |

Physical memory (frame contents):

| Frame | Contents |
|---|---|
| 0 | Free |
| 1 | Page 2 |
| 2 | Page 3 |
| 3 | Free |
| 4 | Free |
| 5 | Page 0 |
| 6 | Page 1 |
| 7 | Free |
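The page-table lookup can be sketched in Python using the mapping above (the 1 KB page size is an assumption for illustration; the example itself does not state one):

```python
PAGE_SIZE = 1024  # assumed page size for illustration
PAGE_TABLE = {0: 5, 1: 6, 2: 1, 3: 2}  # page -> frame, from the example

def to_physical(logical: int) -> int:
    """Split a logical address into (page, offset) and relocate it."""
    page = logical // PAGE_SIZE
    offset = logical % PAGE_SIZE
    frame = PAGE_TABLE[page]
    return frame * PAGE_SIZE + offset

# Logical address 50 lies in page 0, which maps to frame 5,
# so the physical address is 5 * 1024 + 50 = 5170.
```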

Translation Look-aside Buffer (TLB)

TLB - Hardware Cache for Page Table

Small, fast-lookup hardware cache containing page-table entries

  • TLB hit: page-table entry found in the TLB → ~1 memory access
  • TLB miss: must consult the page table in memory → ~2 memory accesses
  • Hit ratio: typically 98–99%, and critical for performance

Effective Access Time Calculation

EAT with TLB

EAT = α × (TLB_access + Memory_access) + (1-α) × (TLB_access + 2×Memory_access)

Where α = TLB hit ratio

Example: Memory access = 100ns, TLB access = 20ns, Hit ratio = 80%

EAT = 0.80 × 120 + 0.20 × 220 = 96 + 44 = 140ns
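The EAT formula translates directly into a small function (a sketch; parameter names are illustrative):

```python
def effective_access_time(hit_ratio: float, tlb_ns: float, mem_ns: float) -> float:
    """EAT = a*(TLB + mem) + (1-a)*(TLB + 2*mem), where a is the hit ratio."""
    hit_cost = tlb_ns + mem_ns          # TLB hit: one memory access
    miss_cost = tlb_ns + 2 * mem_ns     # TLB miss: page table + data access
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

# 80% hit ratio, 20 ns TLB, 100 ns memory -> 140 ns, matching the example.
```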

Page Table Structure

| Structure Type | Description | Advantage |
|---|---|---|
| Hierarchical | Two or more levels of page tables | Reduces memory overhead for sparse address spaces |
| Hashed | Hash table with linked lists for collisions | Good for sparse address spaces larger than 32 bits |
| Inverted | One entry per physical frame | Decreases the memory needed for page tables |

Page Protection

Protection Bits in Page Table Entry:

R - Read W - Write X - Execute

Valid/Invalid Bit: Indicates if page is in process's logical address space

8.7 Segmentation

Memory-management scheme that supports user view of memory

A program is a collection of segments:

  • Main program
  • Procedures/Functions
  • Local variables, global variables
  • Common block
  • Stack
  • Symbol table

Logical Address in Segmentation

A logical address is a pair: segment number (s) and offset (d).

Segment Table

| Segment | Base | Limit | Description |
|---|---|---|---|
| 0 | 1400 | 1000 | Main program |
| 1 | 6300 | 400 | Subroutine |
| 2 | 4300 | 400 | Stack |
| 3 | 3200 | 1100 | Data segment |
| 4 | 4700 | 1000 | Symbol table |

Address Translation in Segmentation

Example: Logical Address (2, 53)

Step 1: Look up segment 2 in segment table
Step 2: Base = 4300, Limit = 400
Step 3: Check if offset < limit: 53 < 400 ✓
Step 4: Physical address = Base + Offset = 4300 + 53 = 4353
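These steps can be sketched in Python using the segment table above (the dictionary layout and exception choice are assumptions for illustration):

```python
# segment -> (base, limit), copied from the segment table above
SEGMENT_TABLE = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400),
                 3: (3200, 1100), 4: (4700, 1000)}

def translate(segment: int, offset: int) -> int:
    """Translate (s, d) to a physical address; trap on a limit violation."""
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        raise MemoryError("addressing error: offset exceeds segment limit")
    return base + offset

# (2, 53): offset 53 < limit 400, so physical = 4300 + 53 = 4353.
```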

Segmentation vs Paging

| Aspect | Paging | Segmentation |
|---|---|---|
| Division | Fixed-size pages | Variable-size segments |
| User view | Invisible to the programmer | Visible to the programmer |
| External fragmentation | No | Yes |
| Internal fragmentation | Yes (last page) | No |
| Sharing | Difficult | Easy (logical units) |
| Protection | Page level | Segment level |

8.8 Segmentation with Paging

Combines advantages of both schemes:

  • Segment the user view
  • Page the segments for memory management
  • Eliminates external fragmentation
  • Provides sharing and protection at segment level

Address Translation Steps

  1. The logical address is (s, p, d): segment number, page number, and offset.
  2. The segment table entry for segment s locates that segment's page table.
  3. The page table maps page p to a physical frame.
  4. Physical address = frame base + offset d.
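A hypothetical two-level lookup makes the flow concrete (all table contents and the 256-byte page size here are invented for illustration):

```python
# Segmentation with paging: the segment table points at a per-segment
# page table, which then maps page p to a frame.
PAGE_SIZE = 256  # assumed page size
SEG_TO_PAGE_TABLE = {
    0: {0: 3, 1: 7},   # segment 0's page table: page -> frame
    1: {0: 2},         # segment 1's page table
}

def translate(s: int, p: int, d: int) -> int:
    """Resolve (segment, page, offset) to a physical address."""
    page_table = SEG_TO_PAGE_TABLE[s]   # step 2: find the page table
    frame = page_table[p]               # step 3: find the frame
    return frame * PAGE_SIZE + d        # step 4: frame base + offset

# (0, 1, 10): segment 0, page 1 -> frame 7, so 7 * 256 + 10 = 1802.
```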

Intel x86 Architecture Example:

Supports both segmentation and paging

Logical address → Linear address (via segmentation) → Physical address (via paging)

8.9 Practice Problems and Calculations

Example 1: Page Number and Offset Calculation

Given: Page size = 1KB (1024 bytes), Logical address = 3085

Page number = ⌊3085 / 1024⌋ = 3
Offset = 3085 mod 1024 = 3085 - (3 × 1024) = 13
Answer: Page = 3, Offset = 13
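The decomposition is just integer division and remainder, as a quick check:

```python
def page_and_offset(logical: int, page_size: int) -> tuple[int, int]:
    """Split a logical address into (page number, offset)."""
    return logical // page_size, logical % page_size

# 3085 with 1 KB pages -> page 3, offset 13, matching the worked example.
```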

Example 2: Address Space Calculation

Given: 256 pages with 4KB page size, mapped to 64 frames

Logical address space = 256 × 4KB = 1MB = 2²⁰ bytes
Logical address bits = log₂(2²⁰) = 20 bits
Physical memory = 64 × 4KB = 256KB = 2¹⁸ bytes
Physical address bits = log₂(2¹⁸) = 18 bits

Example 3: Internal Fragmentation

Given: Process size = 72,766 bytes, Page size = 2048 bytes

Number of pages = ⌈72,766 / 2048⌉ = ⌈35.53⌉ = 36 pages
Memory allocated = 36 × 2048 = 73,728 bytes
Internal fragmentation = 73,728 - 72,766 = 962 bytes
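The same calculation, as a reusable sketch:

```python
import math

def internal_fragmentation(process_bytes: int, page_size: int) -> int:
    """Bytes wasted in the last page when a process is paged."""
    pages = math.ceil(process_bytes / page_size)
    return pages * page_size - process_bytes

# A 72,766-byte process with 2,048-byte pages needs 36 pages and
# wastes 962 bytes in its last page.
```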

8.10 CS330 Exam Practice

Multiple Choice Questions

Q1. Which memory allocation strategy is generally fastest?

Answer: First-fit (searches from beginning and allocates first suitable hole)

Q2. External fragmentation occurs in:

Answer: Variable-size contiguous allocation and pure segmentation

Q3. Internal fragmentation occurs in:

Answer: Fixed-size partitioning and pure paging

Q4. TLB stands for:

Answer: Translation Look-aside Buffer

Short Answer Questions

Q5. Compare paging and segmentation with respect to memory overhead.

Answer:

Paging: requires one page-table entry per page. For large address spaces, page tables can be huge.

Segmentation: requires only one segment-table entry (base and limit) per segment, so the per-process overhead is much smaller.

Q6. Why do mobile operating systems like iOS and Android not support swapping?

Answer:

  1. Flash memory has limited write cycles
  2. Limited storage capacity
  3. Poor memory throughput between flash and main memory
  4. Power consumption concerns

Q7. Calculate physical address for logical address (1, 500) in paging system:

Page size = 1024 bytes, Page 1 mapped to Frame 6

Physical address = Frame × Page_size + Offset
Physical address = 6 × 1024 + 500 = 6144 + 500 = 6644

8.11 Key Terms and Definitions

| Term | Definition |
|---|---|
| Logical address | Address generated by the CPU; also called a virtual address |
| Physical address | Address seen by the memory unit |
| MMU | Memory-management unit: hardware for address translation |
| Page | Fixed-size block of logical memory |
| Frame | Fixed-size block of physical memory |
| Page table | Data structure for page-to-frame mapping |
| TLB | Translation look-aside buffer: hardware cache of page-table entries |
| Segment | Logical unit in the user's view (code, data, stack) |
| External fragmentation | Free memory exists but is not contiguous |
| Internal fragmentation | Allocated memory is larger than the requested memory |
| Compaction | Shuffling memory contents to place all free memory together |
| Swapping | Moving a process between main memory and backing store |
| Base register | Holds the smallest legal physical address |
| Limit register | Specifies the range of legal logical addresses |

8.12 Diagrams from Slides

📊 Hardware Address Protection Diagram

[Insert base and limit register protection diagram]

📊 Paging Hardware with TLB

[Insert TLB and page table interaction diagram]

📊 Memory Allocation Examples

[Insert first-fit, best-fit, worst-fit visual comparison]

📊 Segmentation Example

[Insert logical to physical address mapping in segmentation]

Note: Please provide the actual diagrams from your course materials to replace these placeholders.