1 Introduction to Operating Systems
What is an Operating System?
Definition: A program that acts as an intermediary between users and computer hardware.
- Goals: Execute user programs, make system convenient, use hardware efficiently
- Kernel: The one program running at all times (core of OS)
Remember the 4 components: Hardware → OS → Applications → Users
Computer System Organization
| Component | Access Time | Volatility |
|---|---|---|
| Registers | ~1 ns | Volatile |
| Cache | ~2-10 ns | Volatile |
| Main Memory (RAM) | ~100 ns | Volatile |
| SSD | ~100 μs | Non-volatile |
| Magnetic Disk | ~8-50 ms | Non-volatile |
Mnemonic: "RegistersCacheRAM" → RCR → faster and more expensive toward the top of the hierarchy
Interrupts
Types:
- Hardware Interrupt: From I/O devices (keyboard, mouse, disk)
- Software Interrupt (Trap/Exception): From errors or system calls
Interrupt Handling Process:
- Save processor state (program counter and registers) in the PCB
- Determine interrupt type (polling or vectored)
- Execute Interrupt Service Routine (ISR)
- Restore saved information
- Resume execution
Interrupt Vector = Table of addresses pointing to ISRs
DMA (Direct Memory Access)
- Used for high-speed I/O devices
- Transfers data directly between the I/O device and memory, without CPU involvement
- Only ONE interrupt per block (not per byte)
Q: Which mechanism efficiently moves large amounts of data between I/O devices and main memory without CPU intervention?
Answer: DMA
Multiprocessor Systems
Symmetric (SMP)
- Each processor performs ALL tasks
- No master-slave relationship
- Equal processors
Asymmetric (AMP)
- Master processor controls slaves
- Boss-worker relationship
- Used in large systems
Advantages:
- Increased Throughput: More work in less time
- Economy of Scale: Cost less than multiple single-processors
- Increased Reliability: Graceful degradation/fault tolerance
Dual-Mode Operation
| Mode | Bit Value | Purpose |
|---|---|---|
| Kernel/System Mode | 0 | OS execution, privileged instructions |
| User Mode | 1 | User application execution |
A system call switches the mode bit to kernel; returning to the application resets it to user.
Q: Privileged instructions can only be executed in?
Answer: Kernel mode (mode bit = 0)
2 Operating System Structures
OS Services
For Users:
- User Interface: CLI, GUI, Touchscreen
- Program Execution: Load, run, end programs
- I/O Operations: Users can't do I/O directly
- File System Manipulation: Read, write, create, delete
- Communications: Shared memory or message passing
- Error Detection: Detect CPU, memory, I/O errors
For System Efficiency:
- Resource Allocation: Manage CPU, memory, files
- Accounting: Track usage statistics
- Protection & Security: Control access, defend against attacks
System Calls
Definition: Interface between running program and OS
- Accessed via API (Win32, POSIX, Java)
- Each system call has a number
- System-call interface maintains indexed table
Parameter Passing Methods:
- Registers: Simplest, limited by register count
- Block/Table in Memory: Pass address of block
- Stack: Push parameters, pop by OS
| Category | Examples |
|---|---|
| Process Control | fork(), exec(), exit(), wait() |
| File Management | open(), close(), read(), write() |
| Device Management | request(), release(), read(), write() |
| Information Maintenance | getpid(), time(), date() |
| Communications | send(), receive(), create/delete connection |
OS Structure Types
1. Monolithic (Simple Structure - MS-DOS)
- All functionality in one binary
- Fast but hard to maintain
- Changes affect entire system
2. Layered Approach
- Layer 0 = Hardware, Layer N = User Interface
- Each layer uses only lower layers
- Easy debugging but performance overhead
3. Microkernel
- Move non-essential components to user space
- Communication via message passing
- Advantages: Easy to extend, portable, reliable, secure
- Disadvantages: Communication overhead
4. Modules (Modern Approach)
- Core kernel + loadable modules
- Similar to layers but more flexible
- Used in Linux, macOS, Solaris, Windows
Q: Which allows direct communication among all OS services?
Answer: Modular approach (any module can call any other)
3 Processes
Process Concept
Process = Program in Execution
- Program: Passive entity (executable file on disk)
- Process: Active entity (program loaded in memory)
Process Memory Layout:
- Text Section: Program code (read-only)
- Data Section: Global variables
- Heap: Dynamically allocated memory (grows upward)
- Stack: Function parameters, return addresses, local variables (grows downward)
Remember: "Text Data Heap Stack" → TDHS → from low to high memory addresses
Process States
Process State Diagram:
NEW → [admitted] → READY → [scheduler dispatch] → RUNNING
RUNNING → [I/O or event wait] → WAITING
WAITING → [I/O or event completion] → READY
RUNNING → [interrupt] → READY
RUNNING → [exit] → TERMINATED
| State | Description |
|---|---|
| NEW | Process is being created, PCB is being initialized |
| READY | Waiting to be assigned to CPU |
| RUNNING | Instructions are being executed |
| WAITING | Waiting for I/O or event to occur |
| TERMINATED | Process has finished execution |
Q: A process in Running state can next go to which states?
Answer: Ready, Waiting, Terminated
Process Control Block (PCB)
PCB Contains:
- Process ID (PID)
- Process State (Ready, Running, etc.)
- Program Counter (PC)
- CPU Registers
- CPU Scheduling Information (priority, queues)
- Memory Management Information
- Accounting Information (CPU time, time limits)
- I/O Status Information (open files, I/O devices)
PCB is stored in KERNEL MEMORY, NOT in the text section of the process
Context Switch
Steps:
- Save PCB of current process (registers, PC, state)
- Load PCB of new process
- Resume execution of new process
Context Switch Time:
- Pure overhead (no useful work done)
- Depends on hardware support (some CPUs have multiple register sets)
- Typically 1-1000 microseconds
Process Scheduling
| Scheduler | Also Known As | Purpose | Frequency |
|---|---|---|---|
| Long-term | Job Scheduler | Select which jobs to bring to ready queue | Infrequent (seconds/minutes) |
| Short-term | CPU Scheduler | Select which ready process gets CPU | Very frequent (milliseconds) |
| Medium-term | Swapper | Swap processes in/out of memory | Intermediate |
Process Types:
- I/O Bound: More time doing I/O, many short CPU bursts
- CPU Bound: More time computing, few long CPU bursts
Long-term scheduler controls DEGREE OF MULTIPROGRAMMING
Process Creation - fork()
fork() System Call:
- Creates a child process (copy of parent)
- Returns 0 to child process
- Returns child PID to parent process
- Returns -1 on error
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {                /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        return 1;
    } else if (pid == 0) {        /* child process */
        printf("I am child, pid = %d\n", getpid());
    } else {                      /* parent process */
        printf("I am parent, my child's pid = %d\n", pid);
        wait(NULL);               /* wait for child to terminate */
    }
    return 0;
}
Q: Given actual PIDs are parent=3600, child=3608:
Line A (child): pid = 0
Line B (child): pid1 = 3608
Line C (parent): pid = 3608
Line D (parent): pid1 = 3600
Q: How many processes exist after 4 fork() calls in a loop?
Answer: 2^4 = 16 processes in total (the original parent plus 15 newly created)
Child gets COPY of parent's memory (data, heap, stack), NOT shared!
Interprocess Communication (IPC)
Shared Memory
- Faster - no system calls after setup
- Direct memory access
- Requires synchronization
- Good for large data
Message Passing
- Easier - OS handles synchronization
- send() and receive() system calls
- Slower due to system calls
- Good for small messages
4 Threads and Concurrency
Thread Basics
Thread = Lightweight Process
A thread is a basic unit of CPU utilization comprising:
- Thread ID
- Program Counter
- Register Set
- Stack
Threads SHARE within a process:
- Code Section
- Data Section (global variables)
- Heap
- Open Files
- OS Resources
Each thread has OWN:
- Stack
- Program Counter
- Registers
Q: Which is NOT shared by threads?
Answer: Program counter and stack
Benefits of Multithreading
- Responsiveness: Program continues even if part blocked (e.g., web browser loading image while user interacts)
- Resource Sharing: Threads share memory and resources (easier than IPC)
- Economy:
- Thread creation: 30x faster than process (Solaris)
- Context switch between threads: faster
- Scalability: Utilizes multiprocessor architectures
Q: Why threads instead of processes?
Answer: Faster creation, faster context switch, easier communication (shared memory)
Multithreading Models
1. Many-to-One
- Many user threads → one kernel thread
- Managed by thread library (efficient)
- ✗ Entire process blocks if one thread blocks
- ✗ Cannot run in parallel on multiprocessors
- Example: Green Threads
2. One-to-One
- Each user thread → one kernel thread
- ✓ More concurrency (if one thread blocks, others run)
- ✓ Can run in parallel on multiprocessors
- ✗ Overhead of creating kernel threads
- Example: Windows, Linux
3. Many-to-Many
- Many user threads → many kernel threads
- ✓ Best of both worlds
- ✓ Can run in parallel
- ✓ If one blocks, another can be scheduled
- Example: Solaris prior to version 9
Concurrency vs Parallelism
Concurrency
- Multiple tasks make progress
- Time-sharing on single core
- Interleaved execution
Parallelism
- Multiple tasks execute simultaneously
- Requires multiple cores
- True simultaneous execution
Types of Parallelism:
- Data Parallelism: Same operation on different data subsets (e.g., image processing)
- Task Parallelism: Different tasks on different cores
5 CPU Scheduling
Scheduling Criteria
| Criterion | Goal | Formula/Description |
|---|---|---|
| CPU Utilization | Maximize | Keep CPU busy (40%-90%) |
| Throughput | Maximize | # processes completed per time unit |
| Turnaround Time | Minimize | Finish Time - Arrival Time |
| Waiting Time | Minimize | Turnaround Time - Burst Time |
| Response Time | Minimize | First Response - Arrival Time |
Turnaround Time = Waiting Time + Burst Time
Waiting Time = Turnaround Time - Burst Time
Scheduling Algorithms
1. First-Come First-Served (FCFS)
- Type: Non-preemptive
- Order: Process arrival order
- ✗ Convoy effect (short processes wait for long ones)
- ✓ Simple to implement
2. Shortest Job First (SJF)
- Type: Can be preemptive or non-preemptive
- Order: Shortest burst time first
- ✓ Optimal (minimum average waiting time)
- ✗ Cannot know future burst times (must estimate)
- ✗ Starvation possible for long processes
Exponential Averaging for predicting the next CPU burst:
τ(n+1) = α × t(n) + (1-α) × τ(n)
where α typically = 0.5
3. Priority Scheduling
- Type: Can be preemptive or non-preemptive
- Order: Highest priority first (smaller number = higher priority)
- ✗ Starvation problem
- ✓ Solution: Aging (increase priority over time)
4. Round Robin (RR)
- Type: Preemptive
- Order: Each process gets time quantum (10-100ms)
- ✓ Good for time-sharing systems
- ✓ Fair allocation
- ⚠️ If quantum too large → becomes FCFS
- ⚠️ If quantum too small → too many context switches
Rule: 80% of CPU bursts should be shorter than the time quantum
PAST EXAM: Processes with different burst times and priorities
Given: P1(BT=4, Pri=3), P2(BT=3, Pri=1), P3(BT=8, Pri=4), P4(BT=7, Pri=2), P5(BT=5, Pri=3)
All arrive at time 0.
Priority Scheduling (smaller # = higher priority):
Gantt: P2(0-3) | P4(3-10) | P1(10-14) | P5(14-19) | P3(19-27)
Waiting Times: P1=10, P2=0, P3=19, P4=3, P5=14 (since all arrive at t=0 and scheduling is non-preemptive, each waiting time equals the process's start time in the Gantt chart)
Average Waiting Time = (10+0+19+3+14)/5 = 9.2
Preemptive vs Non-Preemptive
Non-Preemptive
- Process keeps CPU until terminates or waits
- Examples: FCFS, SJF (non-preemptive)
- No race conditions on shared data
Preemptive
- Process can be interrupted
- Examples: RR, SJF (preemptive/SRTF), Priority (preemptive)
- Can cause race conditions
Practice Problem
PRACTICE: Calculate waiting time and turnaround time
Processes: P1(BT=24), P2(BT=3), P3(BT=3)
Order: P1, P2, P3, all arrive at t=0
FCFS:
Gantt: P1(0-24) | P2(24-27) | P3(27-30)
WT: P1=0, P2=24, P3=27
Average WT = (0+24+27)/3 = 17ms
SJF:
Gantt: P2(0-3) | P3(3-6) | P1(6-30)
WT: P1=6, P2=0, P3=3
Average WT = (6+0+3)/3 = 3ms ✓ optimal!
6 Process Synchronization
Critical Section Problem
Race Condition: Multiple processes access shared data concurrently, outcome depends on execution order
Critical Section: Code segment accessing shared resources
Solution Must Satisfy 3 Requirements:
- Mutual Exclusion: Only ONE process in CS at a time
- Progress: Only processes NOT in remainder section can decide who enters CS next
- Bounded Waiting: Limit on the number of times others may enter the CS before a waiting process is admitted
Semaphores
Semaphore S: Integer variable accessed only by:
- wait(S): S--; if S<0, block
- signal(S): S++; wake up blocked process if any
// Traditional wait (busy waiting)
wait(S) {
    while (S <= 0)
        ;               // busy wait
    S--;
}

// Traditional signal
signal(S) {
    S++;
}

// Modern implementation (no busy waiting)
wait(S) {
    S--;
    if (S < 0) {
        add this process to S->list;
        block();
    }
}

signal(S) {
    S++;
    if (S <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Types:
- Binary Semaphore (Mutex): Value 0 or 1
- Counting Semaphore: Value can be any integer
Semaphore initialized to 1 = Mutex lock!
Classic Problems
1. Bounded Buffer (Producer-Consumer)
// Shared data
semaphore mutex = 1;   // mutual exclusion
semaphore empty = n;   // count empty slots
semaphore full  = 0;   // count full slots

// Producer
do {
    // produce item
    wait(empty);       // wait for an empty slot
    wait(mutex);       // enter CS
    // add item to buffer
    signal(mutex);     // exit CS
    signal(full);      // increment full count
} while (true);

// Consumer
do {
    wait(full);        // wait for a full slot
    wait(mutex);       // enter CS
    // remove item from buffer
    signal(mutex);     // exit CS
    signal(empty);     // increment empty count
    // consume item
} while (true);
2. Readers-Writers Problem
// Shared data
semaphore rw_mutex = 1;   // mutual exclusion for writers
semaphore mutex = 1;      // protects readcount
int readcount = 0;

// Writer
do {
    wait(rw_mutex);
    // writing performed
    signal(rw_mutex);
} while (true);

// Reader
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)    // first reader
        wait(rw_mutex);    // lock out writers
    signal(mutex);
    // reading performed
    wait(mutex);
    readcount--;
    if (readcount == 0)    // last reader
        signal(rw_mutex);  // allow writers
    signal(mutex);
} while (true);
3. Dining Philosophers
- 5 philosophers, 5 chopsticks
- Need 2 chopsticks to eat
- ✗ Deadlock if all pick up their left chopstick first
Deadlock-avoidance options:
- Allow only 4 philosophers at the table
- Pick up both chopsticks atomically
- Odd philosophers pick left first, even pick right first
Deadlock
Necessary Conditions (ALL must hold):
- Mutual Exclusion: At least one resource non-sharable
- Hold and Wait: Process holds resources while waiting for more
- No Preemption: Resources cannot be forcibly taken
- Circular Wait: P1 waits for P2, P2 for P3, ..., Pn for P1
Mnemonic: "My Husband Never Comes" → Mutual Exclusion, Hold and Wait, No Preemption, Circular Wait
8 Main Memory Management
Address Binding
Logical vs Physical Address:
- Logical (Virtual): Generated by CPU, seen by program
- Physical: Actual location in RAM, seen by memory unit
| Binding Time | When | Relocatable? |
|---|---|---|
| Compile Time | If memory location known in advance | No (absolute code) |
| Load Time | When program loaded into memory | Yes (relocatable code) |
| Execution Time | During program execution | Yes (requires MMU) |
Memory Management Unit (MMU)
MMU: Hardware device that maps logical to physical addresses
Physical Address = Base Register + Logical Address
Base and Limit Registers:
- Base: Starting physical address
- Limit: Size of logical address space
- Protection: If (logical address ≥ limit) → error!
Contiguous Memory Allocation
Fixed Partitioning:
- Memory divided into fixed-size partitions
- ✗ Internal fragmentation
- ✗ Limits degree of multiprogramming
Variable Partitioning:
- Dynamic allocation based on process size
- ✗ External fragmentation
- Solution: Compaction (requires dynamic relocation)
Allocation Algorithms:
- First-Fit: Allocate first hole big enough (fastest)
- Best-Fit: Allocate smallest hole big enough (smallest leftover)
- Worst-Fit: Allocate largest hole (largest leftover)
PAST EXAM: Memory partitions: 300KB, 600KB, 350KB, 200KB, 750KB, 125KB
Processes: 115KB, 500KB, 358KB, 200KB, 375KB
Best-Fit:
115KB → 125KB (10KB left)
500KB → 600KB (100KB left)
358KB → 750KB (392KB left)
200KB → 200KB (0KB left)
375KB → 392KB hole → All fit!
Paging
Concept:
- Divide logical memory into pages (fixed-size)
- Divide physical memory into frames (same size)
- Page size = Frame size (power of 2, e.g., 4KB)
- ✓ No external fragmentation
- ✗ Internal fragmentation (in the last page)
Logical Address Structure:
Logical Address = [Page Number | Page Offset]
If logical address space = 2^m, page size = 2^n
Page number = m - n bits
Page offset = n bits
Address Translation:
- Extract page number (p) from logical address
- Look up frame number (f) in page table[p]
- Physical address = (f × frame_size) + offset
PAST EXAM: Process has 4 pages, page size = 1KB
Page table: [0β5, 1β10, 2β13, 3β7]
Logical address = 1500
Solution:
Page size = 1024 bytes
Page number = 1500 / 1024 = 1
Offset = 1500 % 1024 = 476
Frame number = page_table[1] = 10
Physical address = 10 × 1024 + 476 = 10716
PAST EXAM: Logical address space = 256 pages, page size = 4KB
Physical memory = 64 frames
a) Bits in logical address?
256 × 4KB = 256 × 4096 = 2^8 × 2^12 = 2^20 bytes
Answer: 20 bits
b) Bits in physical address?
64 × 4KB = 64 × 4096 = 2^6 × 2^12 = 2^18 bytes
Answer: 18 bits
Translation Lookaside Buffer (TLB)
TLB: Fast associative cache for page table entries
- Small (64-1024 entries)
- Parallel search
- Contains recent page translations
Effective Access Time (EAT):
EAT = hit_ratio × (TLB_time + Memory_time)
      + (1 - hit_ratio) × (TLB_time + Page_Table_time + Memory_time)
PAST EXAM:
Memory access = 50ns
TLB access = 2ns
Hit ratio = 75%
EAT = 0.75 × (2 + 50) + 0.25 × (2 + 50 + 50)
EAT = 0.75 × 52 + 0.25 × 102
EAT = 39 + 25.5 = 64.5ns
Segmentation
Concept:
- Divide memory by logical units (function, array, stack)
- Variable-size segments
- Programmer's view of memory
Logical Address:
<segment number, offset>
Segment Table:
- Base: Starting physical address
- Limit: Length of segment
- Protection: If (offset ≥ limit) → error!
If offset < limit:
Physical Address = Base + Offset
Else:
Segmentation Fault!
EXAMPLE: Segment table
[Seg 0: base=1200, limit=1000]
[Seg 1: base=4000, limit=600]
[Seg 2: base=2000, limit=1400]
Logical address <1, 500>:
500 < 600 → Valid
Physical = 4000 + 500 = 4500
Logical address <1, 600>:
600 ≥ 600 → Illegal!
Paging vs Segmentation
Paging
- Fixed-size pages
- Physical view
- No external fragmentation
- Internal fragmentation
- Transparent to programmer
Segmentation
- Variable-size segments
- Logical view
- External fragmentation
- No internal fragmentation
- Visible to programmer
9 Virtual Memory
Virtual Memory Concept
Benefits:
- Logical address space > Physical memory
- More programs in memory simultaneously
- Less I/O for loading
- Faster program start
Demand Paging:
- Load pages only when needed
- Lazy swapper (never swaps unless needed)
- Uses valid-invalid bit in page table
Page Fault
Page Fault Occurs When:
- Process references a page not in memory
- Page table entry has invalid bit set
Page Fault Handling Steps:
- Check internal table (PCB) - valid or invalid reference?
- If invalid reference → terminate process
- If valid but not in memory → page it in:
- Get empty frame
- Swap page from disk to frame
- Update page table (set valid bit)
- Restart instruction
Effective Access Time (EAT):
EAT = (1 - p) × memory_access + p × page_fault_time
where p = page fault rate (0 ≤ p ≤ 1)
PAST EXAM:
Memory access = 200ns
Page fault service time = 8ms = 8,000,000ns
Page fault rate = 1/1000 = 0.001
EAT = (1 - 0.001) × 200 + 0.001 × 8,000,000
EAT = 0.999 × 200 + 8,000
EAT = 199.8 + 8,000 = 8,199.8ns ≈ 8.2μs
Page Replacement Algorithms
When to Use: when there are no free frames available
1. FIFO (First-In-First-Out)
- Replace the oldest page in memory
- Simple to implement
- ✗ Suffers from Belady's Anomaly (more frames can mean more faults)
2. Optimal (OPT)
- Replace the page that won't be used for the longest time
- ✓ Minimum page faults
- ✗ Impossible to implement (requires future knowledge)
- Used as a benchmark
3. LRU (Least Recently Used)
- Replace the page not used for the longest time
- ✓ Good approximation of optimal
- ✓ Does not suffer from Belady's Anomaly
- Implementation: stack or counters
PAST EXAM: Reference string: 7,2,3,1,2,5,3,4,6,7,7,1,0,5,4,6,2,3,0,1
3 frames available
LRU Solution:
Total Page Faults: 18
| Ref | 7 | 2 | 3 | 1 | 2 | 5 | 3 | 4 | 6 | 7 | 7 | 1 | 0 | 5 | 4 | 6 | 2 | 3 | 0 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 7 | 7 | 7 | 1 | 1 | 1 | 3 | 3 | 3 | 7 | 7 | 7 | 7 | 5 | 5 | 5 | 2 | 2 | 2 | 1 |
| F2 |   | 2 | 2 | 2 | 2 | 2 | 2 | 4 | 4 | 4 | 4 | 1 | 1 | 1 | 4 | 4 | 4 | 3 | 3 | 3 |
| F3 |   |   | 3 | 3 | 3 | 5 | 5 | 5 | 6 | 6 | 6 | 6 | 0 | 0 | 0 | 6 | 6 | 6 | 0 | 0 |
Quick comparison: FIFO simple but Belady's; OPT best but impossible; LRU practical and good
Thrashing
Thrashing: Process spends more time paging than executing
Causes:
- Process doesn't have enough frames
- High degree of multiprogramming
- Insufficient physical memory
Solutions:
- Decrease degree of multiprogramming
- Increase physical memory (RAM)
- Use local replacement algorithm
- Provide enough frames per process
PAST EXAM: System measurements:
CPU utilization: 10%
Paging disk: 97.7%
Diagnosis: System is thrashing!
Solutions: (1) Decrease multiprogramming (2) Increase RAM
11 File Systems
File Concept
File: Named collection of related information stored on secondary storage
File Attributes:
- Name: Human-readable form
- Identifier: Unique tag (inode number)
- Type: Text, binary, executable
- Location: Pointer to device and location
- Size: Current file size
- Protection: Read, write, execute permissions
- Time, date: Creation, last access, last modification
File Operations:
- Create: Find space, add directory entry
- Open: Load FCB to memory (open-file table)
- Read/Write: Use file pointer
- Seek: Reposition file pointer
- Close: Remove from open-file table
- Delete: Remove directory entry, free space
- Truncate: Delete contents, keep attributes
Directory Structure
1. Single-Level Directory
- All files in one directory
- ✓ Simple
- ✗ Naming problem (all names must be unique)
- ✗ No grouping
2. Two-Level Directory
- Master File Directory (MFD) + User File Directories (UFD)
- ✓ Isolates users
- ✓ Efficient searching
- ✗ No grouping capability
3. Tree-Structured Directory
- Hierarchical structure (most common)
- ✓ Efficient searching
- ✓ Grouping capability
- ✓ Current directory concept
- Absolute path: /home/user/file.txt
- Relative path: ../file.txt
4. Acyclic-Graph Directory
- Allows sharing of files/subdirectories
- Uses links (hard or symbolic)
- ⚠️ Deletion problem (dangling pointers)
File Protection
Access Control List (ACL): Specifies users and allowed operations
UNIX Protection (rwx):
| Category | Description |
|---|---|
| Owner | File creator |
| Group | Set of users who share access |
| Universe (Others) | All other users |
Permissions:
- r (read): Read file contents
- w (write): Write/modify file
- x (execute): Execute file
Access Methods
Sequential Access
- Records processed in order
- Like tape drive
- Used by editors, compilers
Direct Access (Random)
- Access any record directly
- Fixed-length records
- Used by databases
Indexed Access:
- Index file points to actual data
- Can have multi-level indexes
- Example: ISAM (Indexed Sequential Access Method)
12 File System Implementation
Disk Structure
Magnetic Disk:
- Platters: Circular disks
- Tracks: Concentric circles on platter
- Sectors: Subdivisions of tracks (smallest addressable unit)
- Cylinder: All tracks at same radius on all platters
Disk Access Time:
Access Time = Seek Time + Rotational Latency + Transfer Time
- Seek Time: Move head to correct cylinder (5-15ms)
- Rotational Latency: Wait for sector to rotate under head (4-8ms)
- Transfer Time: Read/write data (negligible)
Typical total access time: 8-50ms (much slower than RAM!)
Allocation Methods
1. Contiguous Allocation
- Each file occupies a set of contiguous blocks
- ✓ Simple, best performance for sequential access
- ✓ Direct access easy
- ✗ External fragmentation
- ✗ Files cannot grow easily
- ✗ Must know file size in advance
2. Linked Allocation
- Each block contains a pointer to the next block
- ✓ No external fragmentation
- ✓ Files can grow dynamically
- ✗ Slow sequential access (a seek per block)
- ✗ No direct access
- ✗ Pointers use space
- ✗ Reliability (one bad pointer breaks the chain)
3. Indexed Allocation
- Index block contains pointers to all data blocks
- ✓ Direct access supported
- ✓ No external fragmentation
- ✗ Overhead of index block
- ✗ Wasted space if file is small
PAST EXAM: File of 100 blocks. Insert a new block as the 15th block.
How many disk I/O operations?
Contiguous: blocks 15-100 must each be read and rewritten one position later (86 reads + 86 writes), plus 1 write of the new block = 173
Linked: read blocks 1-14 to reach the insertion point (14 reads), write the new block, rewrite block 14's pointer = 14 reads + 2 writes = 16
Indexed: write the new block + rewrite the index block = 2 (assuming the index block is cached in memory)
Free Space Management
1. Bit Vector (Bitmap)
- 1 bit per block (e.g., 0 = allocated, 1 = free)
- ✓ Simple, efficient to find free blocks
- ✗ Must keep entire bitmap in memory
Bitmap size = (disk_size / block_size) / 8 bytes
Example: 1TB disk, 4KB blocks → 32MB bitmap
2. Linked List
- Link all free blocks together
- ✓ No waste of space
- ✗ Cannot get contiguous space easily
- ✗ Traversal requires many I/O operations
3. Grouping
- First free block contains the addresses of n free blocks
- The last of these points to another block of n addresses
- ✓ Can find many free blocks quickly
4. Counting
- Store address of first free block + count of contiguous free blocks
- ✓ Efficient if space is allocated/freed in large chunks
Directory Implementation
Linear List:
- Simple to implement
- ✗ Time-consuming to search (O(n))
Hash Table:
- ✓ Fast lookup (O(1) average)
- ✗ Collisions must be handled
- ✗ Fixed size (or expensive to resize)
Quick Reference & Formulas
Essential Formulas
Paging:
Page Number = Logical Address / Page Size
Page Offset = Logical Address % Page Size
Physical Address = (Frame Number × Frame Size) + Offset
Logical Address Bits = log₂(Pages × Page Size)
Physical Address Bits = log₂(Frames × Frame Size)
TLB Effective Access Time:
EAT = h × (TLB_time + Memory_time) + (1-h) × (TLB_time + Page_Table_time + Memory_time)
where h = hit ratio
Page Fault EAT:
EAT = (1-p) × memory_access + p × page_fault_service_time
where p = page fault rate
Scheduling:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Response Time = First Response - Arrival Time
Disk Access:
Access Time = Seek Time + Rotational Latency + Transfer Time
Quick Tips for Exams
Process vs Thread: Process = heavyweight, separate memory; Thread = lightweight, shared memory
fork() Returns: 0 to child, child PID to parent, -1 on error
Mode Bit: 0 = Kernel mode, 1 = User mode
External Fragmentation: Variable partition, Segmentation
Internal Fragmentation: Fixed partition, Paging
Page Size = Frame Size (always equal!)
Belady's Anomaly: Only FIFO suffers (not LRU or Optimal)
Deadlock Conditions (ALL required): Mutual Exclusion, Hold & Wait, No Preemption, Circular Wait
Preemptive Algorithms: RR, SRTF, Preemptive Priority
Non-Preemptive: FCFS, SJF, Non-preemptive Priority
Common Conversions
| Unit | Value |
|---|---|
| 1 KB | 1024 bytes = 2^10 |
| 1 MB | 1024 KB = 2^20 bytes |
| 1 GB | 1024 MB = 2^30 bytes |
| 1 ms | 1,000 μs = 1,000,000 ns |
| 1 μs | 1,000 ns |
Common Exam Mistakes to Avoid
✗ Mistake: Thinking page size ≠ frame size
✓ Correct: Page size ALWAYS equals frame size
✗ Mistake: fork() returns child PID to child
✓ Correct: fork() returns 0 to child, child PID to parent
✗ Mistake: Threads share stack
✓ Correct: Each thread has its own stack; they share code, data, heap
✗ Mistake: Waiting time includes burst time
✓ Correct: Waiting time = Turnaround time - Burst time
✗ Mistake: Paging causes external fragmentation
✓ Correct: Paging eliminates external fragmentation, but causes internal fragmentation
Final Checklist
Before the exam, make sure you can:
- ✓ Calculate page number and offset from a logical address
- ✓ Convert logical to physical addresses using a page table
- ✓ Calculate effective access time with a TLB
- ✓ Draw Gantt charts for all scheduling algorithms
- ✓ Calculate waiting time and turnaround time
- ✓ Trace fork() execution and count processes created
- ✓ Implement producer-consumer with semaphores
- ✓ Perform page replacement (FIFO, LRU, Optimal)
- ✓ Calculate bits needed for logical/physical addresses
- ✓ Understand the difference between a process and a thread
- ✓ Know when to use each allocation algorithm (First-fit, Best-fit, Worst-fit)
- ✓ Identify the four deadlock conditions
- ✓ Understand virtual memory benefits and page faults