📚 Operating Systems CS330

Comprehensive Exam Cheatsheet

Prince Sultan University

1 Introduction to Operating Systems

🎯 What is an Operating System?

Definition: A program that acts as an intermediary between users and computer hardware.
  • Goals: Execute user programs, make system convenient, use hardware efficiently
  • Kernel: The one program running at all times (core of OS)
Remember the 4 components: Hardware → OS → Applications → Users

💾 Computer System Organization

  • Registers: ~1 ns, volatile
  • Cache: ~2-10 ns, volatile
  • Main Memory (RAM): ~100 ns, volatile
  • SSD: ~100 μs, non-volatile
  • Magnetic Disk: ~8-50 ms, non-volatile
Mnemonic: "RegistersCacheRAM" → RCR → faster and more expensive toward the top

⚡ Interrupts

Types:
  • Hardware Interrupt: From I/O devices (keyboard, mouse, disk)
  • Software Interrupt (Trap/Exception): From errors or system calls
Interrupt Handling Process:
  1. Save the context of the running process (program counter and registers) in its PCB
  2. Determine interrupt type (polling or vectored)
  3. Execute Interrupt Service Routine (ISR)
  4. Restore saved information
  5. Resume execution
Interrupt Vector = Table of addresses pointing to ISRs

🚀 DMA (Direct Memory Access)

  • Used for high-speed I/O devices
  • Transfers data directly between I/O and memory WITHOUT CPU
  • Only ONE interrupt per block (not per byte)
Q: Which mechanism is most efficient for moving large amounts of data between I/O devices and main memory without CPU involvement?
Answer: DMA

🖥️ Multiprocessor Systems

Symmetric (SMP)

  • Each processor performs ALL tasks
  • No master-slave relationship
  • Equal processors

Asymmetric (AMP)

  • Master processor controls slaves
  • Boss-worker relationship
  • Used in large systems
Advantages:
  1. Increased Throughput: More work in less time
  2. Economy of Scale: Cost less than multiple single-processors
  3. Increased Reliability: Graceful degradation/fault tolerance

πŸ” Dual-Mode Operation

Mode Bit Value Purpose
Kernel/System Mode 0 OS execution, privileged instructions
User Mode 1 User application execution
System call changes mode to kernel, return resets to user
Q: Privileged instructions can only be executed in?
Answer: Kernel mode (mode bit = 0)

2 Operating System Structures

πŸ› οΈ OS Services

For Users:
  • User Interface: CLI, GUI, Touchscreen
  • Program Execution: Load, run, end programs
  • I/O Operations: Users can't do I/O directly
  • File System Manipulation: Read, write, create, delete
  • Communications: Shared memory or message passing
  • Error Detection: Detect CPU, memory, I/O errors
For System Efficiency:
  • Resource Allocation: Manage CPU, memory, files
  • Accounting: Track usage statistics
  • Protection & Security: Control access, defend attacks

📞 System Calls

Definition: Interface between running program and OS
  • Accessed via API (Win32, POSIX, Java)
  • Each system call has a number
  • System-call interface maintains indexed table
Parameter Passing Methods:
  1. Registers: Simplest, limited by register count
  2. Block/Table in Memory: Pass address of block
  3. Stack: Push parameters, pop by OS
  • Process Control: fork(), exec(), exit(), wait()
  • File Management: open(), close(), read(), write()
  • Device Management: request(), release(), read(), write()
  • Information Maintenance: getpid(), time(), date()
  • Communications: send(), receive(), create/delete connection

πŸ—οΈ OS Structure Types

1. Monolithic (Simple Structure - MS-DOS)
  • All functionality in one binary
  • Fast but hard to maintain
  • Changes affect entire system
2. Layered Approach
  • Layer 0 = Hardware, Layer N = User Interface
  • Each layer uses only lower layers
  • Easy debugging but performance overhead
3. Microkernel
  • Move non-essential components to user space
  • Communication via message passing
  • Advantages: Easy to extend, portable, reliable, secure
  • Disadvantages: Communication overhead
4. Modules (Modern Approach)
  • Core kernel + loadable modules
  • Similar to layers but more flexible
  • Used in Linux, macOS, Solaris, Windows
Q: Which allows direct communication among all OS services?
Answer: Modular approach (any module can call any other)

3 Processes

📋 Process Concept

Process = Program in Execution
  • Program: Passive entity (executable file on disk)
  • Process: Active entity (program loaded in memory)
Process Memory Layout:
  • Text Section: Program code (read-only)
  • Data Section: Global variables
  • Heap: Dynamically allocated memory (grows upward)
  • Stack: Function parameters, return addresses, local variables (grows downward)
Remember: "Text Data Heap Stack" → TDHS → from low to high memory addresses

🔄 Process States

Process State Diagram:

NEW --admitted--> READY --scheduler dispatch--> RUNNING --exit--> TERMINATED
RUNNING --interrupt--> READY
RUNNING --I/O or event wait--> WAITING --I/O or event completion--> READY
  • NEW: process is being created, PCB is being initialized
  • READY: waiting to be assigned to the CPU
  • RUNNING: instructions are being executed
  • WAITING: waiting for I/O or an event to occur
  • TERMINATED: process has finished execution
Q: A process in Running state can next go to which states?
Answer: Ready, Waiting, Terminated

📦 Process Control Block (PCB)

PCB Contains:
  • Process ID (PID)
  • Process State (Ready, Running, etc.)
  • Program Counter (PC)
  • CPU Registers
  • CPU Scheduling Information (priority, queues)
  • Memory Management Information
  • Accounting Information (CPU time, time limits)
  • I/O Status Information (open files, I/O devices)
PCB is stored in KERNEL MEMORY, NOT in text section of process

🔀 Context Switch

Steps:
  1. Save PCB of current process (registers, PC, state)
  2. Load PCB of new process
  3. Resume execution of new process
Context Switch Time:
  • Pure overhead (no useful work done)
  • Depends on hardware support (some CPUs have multiple register sets)
  • Typically 1-1000 microseconds

📅 Process Scheduling

  • Long-term (Job Scheduler): selects which jobs are brought into the ready queue; runs infrequently (seconds/minutes)
  • Short-term (CPU Scheduler): selects which ready process gets the CPU; runs very frequently (milliseconds)
  • Medium-term (Swapper): swaps processes in and out of memory; intermediate frequency
Process Types:
  • I/O Bound: More time doing I/O, many short CPU bursts
  • CPU Bound: More time computing, few long CPU bursts
Long-term scheduler controls DEGREE OF MULTIPROGRAMMING

👶 Process Creation - fork()

fork() System Call:
  • Creates a child process (copy of parent)
  • Returns 0 to child process
  • Returns child PID to parent process
  • Returns -1 on error
pid_t pid = fork();

if (pid < 0) {            /* error occurred */
    fprintf(stderr, "Fork Failed");
} else if (pid == 0) {    /* child process */
    printf("I am child, pid = %d", getpid());
} else {                  /* parent process */
    printf("I am parent, my child's pid = %d", pid);
    wait(NULL);           /* wait for child to terminate */
}
Q: Given actual PIDs parent=3600 and child=3608 (pid holds fork()'s return value, pid1 holds getpid()):
Line A (child): pid = 0
Line B (child): pid1 = 3608
Line C (parent): pid = 3608
Line D (parent): pid1 = 3600
Q: How many processes created by 4 fork() calls in a loop?
Answer: 2^4 = 16 processes (including parent)
Child gets COPY of parent's memory (data, heap, stack), NOT shared!

💬 Interprocess Communication (IPC)

Shared Memory

  • Faster - no system calls after setup
  • Direct memory access
  • Requires synchronization
  • Good for large data

Message Passing

  • Easier - OS handles synchronization
  • send() and receive() system calls
  • Slower due to system calls
  • Good for small messages

4 Threads and Concurrency

🧡 Thread Basics

Thread = Lightweight Process

A thread is a basic unit of CPU utilization comprising:

  • Thread ID
  • Program Counter
  • Register Set
  • Stack
Threads SHARE within a process:
  • Code Section
  • Data Section (global variables)
  • Heap
  • Open Files
  • OS Resources
Each thread has OWN:
  • Stack
  • Program Counter
  • Registers
Q: Which is NOT shared by threads?
Answer: Program counter and stack

✅ Benefits of Multithreading

  1. Responsiveness: Program continues even if part blocked (e.g., web browser loading image while user interacts)
  2. Resource Sharing: Threads share memory and resources (easier than IPC)
  3. Economy:
    • Thread creation: 30x faster than process (Solaris)
    • Context switch between threads: faster
  4. Scalability: Utilizes multiprocessor architectures
Q: Why threads instead of processes?
Answer: Faster creation, faster context switch, easier communication (shared memory)

🔗 Multithreading Models

1. Many-to-One
  • Many user threads → one kernel thread
  • Managed by thread library (efficient)
  • ❌ Entire process blocks if one thread blocks
  • ❌ Cannot run in parallel on multiprocessors
  • Example: Green Threads
2. One-to-One
  • Each user thread → one kernel thread
  • ✅ More concurrency (one thread blocks, others run)
  • ✅ Can run in parallel on multiprocessors
  • ❌ Overhead of creating kernel threads
  • Example: Windows, Linux
3. Many-to-Many
  • Many user threads → many kernel threads
  • ✅ Best of both worlds
  • ✅ Can run in parallel
  • ✅ If one blocks, can schedule another
  • Example: Solaris prior to version 9

⚡ Concurrency vs Parallelism

Concurrency

  • Multiple tasks make progress
  • Time-sharing on single core
  • Interleaved execution

Parallelism

  • Multiple tasks execute simultaneously
  • Requires multiple cores
  • True simultaneous execution
Types of Parallelism:
  • Data Parallelism: Same operation on different data subsets (e.g., image processing)
  • Task Parallelism: Different tasks on different cores

5 CPU Scheduling

📊 Scheduling Criteria

  • CPU Utilization (maximize): keep the CPU busy (typically 40%-90%)
  • Throughput (maximize): number of processes completed per time unit
  • Turnaround Time (minimize): Completion Time - Arrival Time
  • Waiting Time (minimize): Turnaround Time - Burst Time
  • Response Time (minimize): Time of First Response - Arrival Time
Turnaround Time = Waiting Time + Burst Time
Waiting Time = Turnaround Time - Burst Time

🎯 Scheduling Algorithms

1. First-Come First-Served (FCFS)
  • Type: Non-preemptive
  • Order: Process arrival order
  • ❌ Convoy effect (short processes wait for long ones)
  • ✅ Simple to implement
2. Shortest Job First (SJF)
  • Type: Can be preemptive or non-preemptive
  • Order: Shortest burst time first
  • ✅ Optimal (minimum average waiting time)
  • ❌ Cannot know future burst times (must estimate)
  • ❌ Starvation possible for long processes
Exponential Averaging for predicting next CPU burst:
τ(n+1) = α × t(n) + (1-α) × τ(n)
where α typically = 0.5
3. Priority Scheduling
  • Type: Can be preemptive or non-preemptive
  • Order: Highest priority first (smaller number = higher priority)
  • ❌ Starvation problem
  • ✅ Solution: Aging (increase priority over time)
4. Round Robin (RR)
  • Type: Preemptive
  • Order: Each process gets time quantum (10-100ms)
  • ✅ Good for time-sharing systems
  • ✅ Fair allocation
  • ⚠️ If quantum too large → becomes FCFS
  • ⚠️ If quantum too small → too many context switches
Rule: 80% of CPU bursts should be shorter than the time quantum
PAST EXAM: Processes with different burst times and priorities
Given: P1(BT=4, Pri=3), P2(BT=3, Pri=1), P3(BT=8, Pri=4), P4(BT=7, Pri=2), P5(BT=5, Pri=3)
All arrive at time 0.

Priority Scheduling (smaller # = higher priority):
Gantt: P2(0-3) | P4(3-10) | P1(10-14) | P5(14-19) | P3(19-27)
Waiting Times: P1=10, P2=0, P3=19, P4=3, P5=14 (all arrive at 0, so each waiting time equals its start time)
Average WT = (10+0+19+3+14)/5 = 9.2

🔄 Preemptive vs Non-Preemptive

Non-Preemptive

  • Process keeps CPU until terminates or waits
  • Examples: FCFS, SJF (non-preemptive)
  • No race conditions on shared data

Preemptive

  • Process can be interrupted
  • Examples: RR, SJF (preemptive/SRTF), Priority (preemptive)
  • Can cause race conditions

πŸ“ Practice Problem

PRACTICE: Calculate waiting time and turnaround time

Processes: P1(BT=24), P2(BT=3), P3(BT=3)
Order: P1, P2, P3, all arrive at t=0

FCFS:
Gantt: P1(0-24) | P2(24-27) | P3(27-30)
WT: P1=0, P2=24, P3=27
Average WT = (0+24+27)/3 = 17ms

SJF:
Gantt: P2(0-3) | P3(3-6) | P1(6-30)
WT: P1=6, P2=0, P3=3
Average WT = (6+0+3)/3 = 3ms ✅ OPTIMAL!

6 Process Synchronization

⚠️ Critical Section Problem

Race Condition: Multiple processes access shared data concurrently, outcome depends on execution order
Critical Section: Code segment accessing shared resources
Solution Must Satisfy 3 Requirements:
  1. Mutual Exclusion: Only ONE process in CS at a time
  2. Progress: Only processes NOT in remainder section can decide who enters CS next
  3. Bounded Waiting: Limit on # times others enter CS before a waiting process

🔒 Semaphores

Semaphore S: Integer variable accessed only by:
  • wait(S): S--; if S<0, block
  • signal(S): S++; wake up blocked process if any
// Traditional wait (busy waiting)
wait(S) {
    while (S <= 0)
        ;          // busy wait
    S--;
}

// Traditional signal
signal(S) {
    S++;
}

// Modern implementation (no busy waiting)
wait(S) {
    S--;
    if (S < 0) {
        add this process to S->list;
        block();
    }
}

signal(S) {
    S++;
    if (S <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Types:
  • Binary Semaphore (Mutex): Value 0 or 1
  • Counting Semaphore: Value can be any integer
Semaphore initialized to 1 = Mutex lock!

🍝 Classic Problems

1. Bounded Buffer (Producer-Consumer)
// Shared data
semaphore mutex = 1;   // mutual exclusion
semaphore empty = n;   // count empty slots
semaphore full  = 0;   // count full slots

// Producer
do {
    // produce item
    wait(empty);       // wait for empty slot
    wait(mutex);       // enter CS
    // add item to buffer
    signal(mutex);     // exit CS
    signal(full);      // increment full count
} while (true);

// Consumer
do {
    wait(full);        // wait for full slot
    wait(mutex);       // enter CS
    // remove item from buffer
    signal(mutex);     // exit CS
    signal(empty);     // increment empty count
    // consume item
} while (true);
2. Readers-Writers Problem
// Shared data
semaphore rw_mutex = 1;   // mutual exclusion for writers
semaphore mutex = 1;      // protects readcount
int readcount = 0;

// Writer
do {
    wait(rw_mutex);
    // writing performed
    signal(rw_mutex);
} while (true);

// Reader
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)     // first reader
        wait(rw_mutex);     // lock out writers
    signal(mutex);

    // reading performed

    wait(mutex);
    readcount--;
    if (readcount == 0)     // last reader
        signal(rw_mutex);   // allow writers
    signal(mutex);
} while (true);
3. Dining Philosophers
  • 5 philosophers, 5 chopsticks
  • Need 2 chopsticks to eat
  • ❌ Deadlock if all pick up left chopstick
Solutions:
  • Allow only 4 philosophers at table
  • Pick up both chopsticks atomically
  • Odd philosophers pick left first, even pick right first

💀 Deadlock

Necessary Conditions (ALL must hold):
  1. Mutual Exclusion: At least one resource non-sharable
  2. Hold and Wait: Process holds resources while waiting for more
  3. No Preemption: Resources cannot be forcibly taken
  4. Circular Wait: P1 waits for P2, P2 for P3, ..., Pn for P1
Mnemonic: "My Husband Never Comes" → Mutual Exclusion, Hold and Wait, No Preemption, Circular Wait

8 Main Memory Management

πŸ“ Address Binding

Logical vs Physical Address:
  • Logical (Virtual): Generated by CPU, seen by program
  • Physical: Actual location in RAM, seen by memory unit
  • Compile time: memory location must be known in advance; not relocatable (absolute code)
  • Load time: binding done when the program is loaded; relocatable code
  • Execution time: binding delayed until run time; relocatable (requires MMU)

πŸ—ΊοΈ Memory Management Unit (MMU)

MMU: Hardware device that maps logical to physical addresses
Physical Address = Base Register + Logical Address
Base and Limit Registers:
  • Base: Starting physical address
  • Limit: Size of logical address space
  • Protection: If (logical address >= limit) → error!

🧩 Contiguous Memory Allocation

Fixed Partitioning:
  • Memory divided into fixed-size partitions
  • ❌ Internal fragmentation
  • ❌ Limits multiprogramming
Variable Partitioning:
  • Dynamic allocation based on process size
  • ❌ External fragmentation
  • Solution: Compaction (requires dynamic relocation)
Allocation Algorithms:
  • First-Fit: Allocate first hole big enough (fastest)
  • Best-Fit: Allocate smallest hole big enough (smallest leftover)
  • Worst-Fit: Allocate largest hole (largest leftover)
PAST EXAM: Memory partitions: 300KB, 600KB, 350KB, 200KB, 750KB, 125KB
Processes: 115KB, 500KB, 358KB, 200KB, 375KB

Best-Fit:
115KB → 125KB (10KB left)
500KB → 600KB (100KB left)
358KB → 750KB (392KB left)
200KB → 200KB (0KB left)
375KB → the 392KB hole left in the 750KB partition (17KB left) ✅ All fit!

📄 Paging

Concept:
  • Divide logical memory into pages (fixed-size)
  • Divide physical memory into frames (same size)
  • Page size = Frame size (power of 2, e.g., 4KB)
  • ✅ No external fragmentation
  • ❌ Internal fragmentation (last page)
Logical Address Structure:
Logical Address = [Page Number | Page Offset]
If logical address space = 2^m, page size = 2^n
Page number = m - n bits
Page offset = n bits
Address Translation:
  1. Extract page number (p) from logical address
  2. Look up frame number (f) in page table[p]
  3. Physical address = (f × frame_size) + offset
PAST EXAM: Process has 4 pages, page size = 1KB
Page table: [0→5, 1→10, 2→13, 3→7]
Logical address = 1500

Solution:
Page size = 1024 bytes
Page number = 1500 / 1024 = 1
Offset = 1500 % 1024 = 476
Frame number = page_table[1] = 10
Physical address = 10 × 1024 + 476 = 10716
PAST EXAM: Logical address space = 256 pages, page size = 4KB
Physical memory = 64 frames

a) Bits in logical address?
256 × 4KB = 256 × 4096 = 2^8 × 2^12 = 2^20
Answer: 20 bits

b) Bits in physical address?
64 × 4KB = 64 × 4096 = 2^6 × 2^12 = 2^18
Answer: 18 bits

⚡ Translation Lookaside Buffer (TLB)

TLB: Fast associative cache for page table entries
  • Small (64-1024 entries)
  • Parallel search
  • Contains recent page translations
Effective Access Time (EAT):
EAT = hit_ratio × (TLB_time + Memory_time)
      + (1 - hit_ratio) × (TLB_time + Page_Table_time + Memory_time)
PAST EXAM:
Memory access = 50ns
TLB access = 2ns
Hit ratio = 75%

EAT = 0.75 × (2 + 50) + 0.25 × (2 + 50 + 50)
EAT = 0.75 × 52 + 0.25 × 102
EAT = 39 + 25.5 = 64.5ns

πŸ—‚οΈ Segmentation

Concept:
  • Divide memory by logical units (function, array, stack)
  • Variable-size segments
  • Programmer's view of memory
Logical Address:
<segment number, offset>
Segment Table:
  • Base: Starting physical address
  • Limit: Length of segment
  • Protection: If (offset >= limit) → error!
If offset < limit:
  Physical Address = Base + Offset
Else:
  Segmentation Fault!
EXAMPLE: Segment table
[Seg 0: base=1200, limit=1000]
[Seg 1: base=4000, limit=600]
[Seg 2: base=2000, limit=1400]

Logical address <1, 500>:
500 < 600 ✅ Valid
Physical = 4000 + 500 = 4500

Logical address <1, 600>:
600 >= 600 ❌ Illegal!

βš–οΈ Paging vs Segmentation

Paging

  • Fixed-size pages
  • Physical view
  • No external fragmentation
  • Internal fragmentation
  • Transparent to programmer

Segmentation

  • Variable-size segments
  • Logical view
  • External fragmentation
  • No internal fragmentation
  • Visible to programmer

9 Virtual Memory

💭 Virtual Memory Concept

Benefits:
  • Logical address space > Physical memory
  • More programs in memory simultaneously
  • Less I/O for loading
  • Faster program start
Demand Paging:
  • Load pages only when needed
  • Lazy swapper (never swaps unless needed)
  • Uses valid-invalid bit in page table

💥 Page Fault

Page Fault Occurs When:
  • Process references a page not in memory
  • Page table entry has invalid bit set
Page Fault Handling Steps:
  1. Check internal table (PCB) - valid or invalid reference?
  2. If invalid → terminate process
  3. If valid but not in memory → page it in:
    • Get empty frame
    • Swap page from disk to frame
    • Update page table (set valid bit)
    • Restart instruction
Effective Access Time (EAT):
EAT = (1 - p) × memory_access + p × page_fault_time
where p = page fault rate (0 ≤ p ≤ 1)
PAST EXAM:
Memory access = 200ns
Page fault service time = 8ms = 8,000,000ns
Page fault rate = 1/1000 = 0.001

EAT = (1 - 0.001) × 200 + 0.001 × 8,000,000
EAT = 0.999 × 200 + 8,000
EAT = 199.8 + 8,000 = 8,199.8ns ≈ 8.2μs

🔄 Page Replacement Algorithms

When to Use: When there are no free frames available
1. FIFO (First-In-First-Out)
  • Replace oldest page in memory
  • Simple to implement
  • ❌ Suffers from Belady's Anomaly (more frames → more faults possible)
2. Optimal (OPT)
  • Replace page that won't be used for longest time
  • ✅ Minimum page faults
  • ❌ Impossible to implement (requires future knowledge)
  • Used as benchmark
3. LRU (Least Recently Used)
  • Replace page not used for longest time
  • ✅ Good approximation of optimal
  • ✅ Does not suffer from Belady's Anomaly
  • Implementation: Stack or counters
PAST EXAM: Reference string: 7,2,3,1,2,5,3,4,6,7,7,1,0,5,4,6,2,3,0,1
3 frames available

LRU Solution:
Ref:     7 2 3 1 2 5 3 4 6 7 7 1 0 5 4 6 2 3 0 1
Frame 1: 7 7 7 1 1 1 3 3 3 7 7 7 7 5 5 5 2 2 2 1
Frame 2: - 2 2 2 2 2 2 4 4 4 4 1 1 1 4 4 4 3 3 3
Frame 3: - - 3 3 3 5 5 5 6 6 6 6 0 0 0 6 6 6 0 0
Hits: the second reference to 2 (5th) and the second reference to 7 (11th)
Total Page Faults: 18
Quick comparison: FIFO is simple but suffers Belady's Anomaly; OPT is optimal but unimplementable; LRU is practical and close to optimal

🌀 Thrashing

Thrashing: Process spends more time paging than executing
Causes:
  • Process doesn't have enough frames
  • High degree of multiprogramming
  • Insufficient physical memory
Solutions:
  • Decrease degree of multiprogramming
  • Increase physical memory (RAM)
  • Use local replacement algorithm
  • Provide enough frames per process
PAST EXAM: System measurements:
CPU utilization: 10%
Paging disk: 97.7%

Diagnosis: System is thrashing!
Solutions: (1) Decrease multiprogramming (2) Increase RAM

11 File Systems

πŸ“ File Concept

File: Named collection of related information stored on secondary storage
File Attributes:
  • Name: Human-readable form
  • Identifier: Unique tag (inode number)
  • Type: Text, binary, executable
  • Location: Pointer to device and location
  • Size: Current file size
  • Protection: Read, write, execute permissions
  • Time, date: Creation, last access, last modification
File Operations:
  • Create: Find space, add directory entry
  • Open: Load FCB to memory (open-file table)
  • Read/Write: Use file pointer
  • Seek: Reposition file pointer
  • Close: Remove from open-file table
  • Delete: Remove directory entry, free space
  • Truncate: Delete contents, keep attributes

📂 Directory Structure

1. Single-Level Directory
  • All files in one directory
  • ✅ Simple
  • ❌ Naming problem (all names must be unique)
  • ❌ No grouping
2. Two-Level Directory
  • Master File Directory (MFD) + User File Directories (UFD)
  • ✅ Isolates users
  • ✅ Efficient searching
  • ❌ No grouping capability
3. Tree-Structured Directory
  • Hierarchical structure (most common)
  • ✅ Efficient searching
  • ✅ Grouping capability
  • ✅ Current directory concept
  • Absolute path: /home/user/file.txt
  • Relative path: ../file.txt
4. Acyclic-Graph Directory
  • Allows sharing of files/subdirectories
  • Uses links (hard or symbolic)
  • ⚠️ Deletion problem (dangling pointers)

πŸ” File Protection

Access Control List (ACL): Specifies users and allowed operations
UNIX Protection (rwx):
  • Owner: the file's creator
  • Group: set of users who share access
  • Universe (Others): all other users
Permissions:
  • r (read): Read file contents
  • w (write): Write/modify file
  • x (execute): Execute file
Example: rwxr-xr-- = Owner: rwx, Group: r-x, Others: r--

🎯 Access Methods

Sequential Access

  • Records processed in order
  • Like tape drive
  • Used by editors, compilers

Direct Access (Random)

  • Access any record directly
  • Fixed-length records
  • Used by databases
Indexed Access:
  • Index file points to actual data
  • Can have multi-level indexes
  • Example: ISAM (Indexed Sequential Access Method)

12 File System Implementation

💾 Disk Structure

Magnetic Disk:
  • Platters: Circular disks
  • Tracks: Concentric circles on platter
  • Sectors: Subdivisions of tracks (smallest addressable unit)
  • Cylinder: All tracks at same radius on all platters
Disk Access Time:
Access Time = Seek Time + Rotational Latency + Transfer Time
  • Seek Time: Move head to correct cylinder (5-15ms)
  • Rotational Latency: Wait for sector to rotate under head (4-8ms)
  • Transfer Time: Read/write data (negligible)
Typical total access time: 8-50ms (much slower than RAM!)

📦 Allocation Methods

1. Contiguous Allocation
  • Each file occupies contiguous blocks
  • ✅ Simple, best performance for sequential access
  • ✅ Direct access easy
  • ❌ External fragmentation
  • ❌ Files cannot grow
  • ❌ Must know file size in advance
2. Linked Allocation
  • Each block contains pointer to next block
  • ✅ No external fragmentation
  • ✅ Files can grow dynamically
  • ❌ Slow sequential access
  • ❌ No direct access
  • ❌ Pointers use space
  • ❌ Reliability (one bad pointer breaks chain)
3. Indexed Allocation
  • Index block contains pointers to all blocks
  • ✅ Direct access supported
  • ✅ No external fragmentation
  • ❌ Overhead of index block
  • ❌ Wasted space if file is small
PAST EXAM: File of 100 blocks. Insert a new block as the 15th block of the file.
How many disk I/O operations? (assume the FCB and any index block are already in memory)

Contiguous: shift blocks 15-100 down by one (86 reads + 86 writes), then write the new block = 173
Linked: read blocks 1-14 to reach the insertion point (14 reads), then write the new block and rewrite block 14's pointer (2 writes) = 16
Indexed: write the new block, then rewrite the index block = 2

🆓 Free Space Management

1. Bit Vector (Bitmap)
  • 1 bit per block (0=allocated, 1=free)
  • ✅ Simple, efficient to find free blocks
  • ❌ Must keep entire bitmap in memory
Bitmap size = (disk_size / block_size) / 8 bytes
Example: 1TB disk, 4KB blocks → 32MB bitmap
2. Linked List
  • Link all free blocks together
  • ✅ No waste of space
  • ❌ Cannot get contiguous space easily
  • ❌ Traversal requires many I/O
3. Grouping
  • First free block contains addresses of n free blocks
  • Last of these points to another block with n addresses
  • ✅ Can find many free blocks quickly
4. Counting
  • Store address of first free block + count of contiguous free blocks
  • ✅ Efficient if space allocated/freed in large chunks

πŸ—„οΈ Directory Implementation

Linear List:
  • Simple to implement
  • ❌ Time-consuming to search (O(n))
Hash Table:
  • Fast lookup (O(1) average)
  • ❌ Collisions must be handled
  • ❌ Fixed size (or expensive to resize)

⚡ Quick Reference & Formulas

πŸ“ Essential Formulas

Paging:
Page Number = Logical Address / Page Size
Page Offset = Logical Address % Page Size
Physical Address = (Frame Number × Frame Size) + Offset

Logical Address Bits = log₂(Pages × Page Size)
Physical Address Bits = log₂(Frames × Frame Size)
TLB Effective Access Time:
EAT = h × (TLB_time + Memory_time) + (1-h) × (TLB_time + Page_Table_time + Memory_time)
where h = hit ratio
Page Fault EAT:
EAT = (1-p) × memory_access + p × page_fault_service_time
where p = page fault rate
Scheduling:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Response Time = First Response - Arrival Time
Disk Access:
Access Time = Seek Time + Rotational Latency + Transfer Time

🎯 Quick Tips for Exams

Process vs Thread: Process = heavyweight, separate memory; Thread = lightweight, shared memory
fork() Returns: 0 to child, child PID to parent, -1 on error
Mode Bit: 0 = Kernel mode, 1 = User mode
External Fragmentation: Variable partition, Segmentation
Internal Fragmentation: Fixed partition, Paging
Page Size = Frame Size (always equal!)
Belady's Anomaly: Only FIFO suffers (not LRU or Optimal)
Deadlock Conditions (ALL required): Mutual Exclusion, Hold & Wait, No Preemption, Circular Wait
Preemptive Algorithms: RR, SRTF, Preemptive Priority
Non-Preemptive: FCFS, SJF, Non-preemptive Priority

🔒 Common Conversions

  • 1 KB = 1024 bytes = 2^10 bytes
  • 1 MB = 1024 KB = 2^20 bytes
  • 1 GB = 1024 MB = 2^30 bytes
  • 1 ms = 1,000 μs = 1,000,000 ns
  • 1 μs = 1,000 ns

✅ Common Exam Mistakes to Avoid

❌ Mistake: Thinking page size ≠ frame size
✅ Correct: Page size ALWAYS equals frame size
❌ Mistake: fork() returns child PID to child
✅ Correct: fork() returns 0 to child, child PID to parent
❌ Mistake: Threads share stack
✅ Correct: Each thread has its own stack; they share code, data, heap
❌ Mistake: Waiting time includes burst time
✅ Correct: Waiting time = Turnaround time - Burst time
❌ Mistake: Paging causes external fragmentation
✅ Correct: Paging eliminates external fragmentation, but causes internal fragmentation

πŸ“ Final Checklist

Before the exam, make sure you can:
  • ✅ Calculate page number and offset from logical address
  • ✅ Convert logical to physical address using page table
  • ✅ Calculate effective access time with TLB
  • ✅ Draw Gantt charts for all scheduling algorithms
  • ✅ Calculate waiting time and turnaround time
  • ✅ Trace fork() execution and count processes created
  • ✅ Implement producer-consumer with semaphores
  • ✅ Perform page replacement (FIFO, LRU, Optimal)
  • ✅ Calculate bits needed for logical/physical addresses
  • ✅ Understand difference between process and thread
  • ✅ Know when to use each allocation algorithm (First-fit, Best-fit, Worst-fit)
  • ✅ Identify deadlock conditions
  • ✅ Understand virtual memory benefits and page faults