📌 Arrays & Time Complexity (Always in MCQs)
Every final has at least a few questions asking for the time complexity of operations (insertion, deletion, search) on arrays, lists, stacks, queues, trees, sorting, etc.
Big-O Quick Reference Table
| Operation / Algorithm | Best | Average | Worst | Notes / Triggers in Questions |
|---|---|---|---|---|
| Access in array (A[i]) | O(1) | O(1) | O(1) | Direct index access → constant time |
| Search in unsorted array | O(1) | O(n) | O(n) | "Linear search", "no sorting", "scan all elements" |
| Binary Search (sorted array) | O(1) | O(log n) | O(log n) | "Sorted", "middle", "half each step" |
| Insert at end (enough capacity) | O(1) | O(1) | O(1) | Just place at A[size] |
| Insert at beginning / middle | O(n) | O(n) | O(n) | Need to shift elements |
| Delete from beginning / middle | O(n) | O(n) | O(n) | Shift to fill gap |
In several past finals, they give you a code snippet looping over an array:

for (int i = 0; i < n; i++) ...
→ Complexity = O(n)

If there are nested for loops:

for i in 0..n
    for j in 0..n
→ Complexity = O(n²) (very common MCQ)
How to Answer Complexity Questions
- Ignore constants: 2n + 5 → O(n)
- Pick the fastest-growing term: n² + 100n → O(n²)
- A loop that runs n times → O(n)
- Nested loops → multiply: outer n, inner n → O(n²)
- Divide & conquer (binary search, quicksort best/avg) → O(log n) or O(n log n)
If you see "cut the problem in half every step" → think log n.
If you see "double nested loops over the same range" → think n².
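To make the counting concrete, here is a small C++ sketch (the function names are my own, for illustration) that literally counts iterations: a single loop over n performs n steps, a double nested loop over the same range performs n * n steps.

```cpp
#include <cassert>

// A single loop over n elements performs n iterations -> O(n).
long long singleLoopSteps(int n) {
    long long count = 0;
    for (int i = 0; i < n; i++) count++;
    return count;
}

// Two nested loops over the same range perform n * n iterations -> O(n^2).
long long nestedLoopSteps(int n) {
    long long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;
    return count;
}
```

For n = 10 these count 10 and 100 steps; Big-O records how the count grows with n, not the constants.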
🔗 Linked Lists
Singly vs Doubly Linked List
Singly Linked List
- Each node: data, next
- Can only move forward
- Less memory
- Insert/delete at head → O(1)
Doubly Linked List
- Each node: data, next, prev
- Move forward and backward
- More memory
- Insert/delete at tail with a tail pointer → O(1)
Common Exam Operations
| Operation | Singly LL | Doubly LL | Complexity |
|---|---|---|---|
| Insert at head | Change head pointer | Change head and prev | O(1) |
| Insert at tail (no tail pointer) | Traverse entire list | Traverse entire list | O(n) |
| Insert at tail (with tail pointer) | O(1): update tail->next, then tail | Easy using tail | O(1) |
| Search by value | Traverse one direction | Traverse either direction | O(n) |
| Delete given key | Need prev pointer or handle head | Use prev and next | O(n) |
A very common question from past finals:
Q: Which data structure is better for inserting/deleting in the middle of a large list when you already have a pointer to the node?
→ Answer: Linked list (O(1) to adjust pointers) vs. array (O(n) to shift elements).
Typical Pointer Question
Node *p = head;
while (p != nullptr) {
    p = p->next;
}
🧠 This loop simply traverses the list and stops at nullptr. Complexity: O(n).
Always draw 3 boxes (prev, curr, next) when you're unsure about pointer updates in delete/insert questions.
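When unsure about those pointer updates, it can help to see a delete written out. Below is a minimal C++ sketch (the Node layout and the deleteKey name are mine, not from the course) of deleting the first node holding a key from a singly linked list, using exactly the prev/curr boxes described above.

```cpp
#include <cassert>

// Hypothetical minimal node type for illustration.
struct Node {
    int data;
    Node* next;
};

// Delete the first node holding `key`; returns the (possibly new) head.
// Handles the head case separately, then walks with prev one step
// behind curr. O(n) to find the node, O(1) to unlink it.
Node* deleteKey(Node* head, int key) {
    if (head == nullptr) return nullptr;
    if (head->data == key) {            // case 1: key is at the head
        Node* rest = head->next;
        delete head;
        return rest;
    }
    Node* prev = head;
    Node* curr = head->next;
    while (curr != nullptr && curr->data != key) {
        prev = curr;                    // keep prev one step behind curr
        curr = curr->next;
    }
    if (curr != nullptr) {              // found: bypass it, then free it
        prev->next = curr->next;
        delete curr;
    }
    return head;
}
```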
📚 Stacks & Queues
Concepts & Use Cases
Stack (LIFO)
- Last In, First Out
- Operations: push, pop, top
- Used for:
- Function calls (call stack)
- Expression evaluation (postfix)
- Undo operations
Queue (FIFO)
- First In, First Out
- Operations: enqueue, dequeue, front
- Used for:
- Breadth-First Search
- Task scheduling
- Buffering (printers, network)
Array vs Linked List Implementation
| Data Structure | Array Implementation | Linked List Implementation |
|---|---|---|
| Stack | Use a top index; push: A[++top] = x; pop: return A[top--]; overflow if top == size-1 | Push/pop at head of list; no fixed size limit; a bit more memory (pointers) |
| Queue | Use front and rear; circular queue to reuse space; common exam trick: "full" even if some cells are empty | Enqueue at tail, dequeue at head; efficient if you keep both pointers |
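The array column of the table can be sketched as a tiny fixed-capacity stack in C++ (struct name and capacity are illustrative, not from the course):

```cpp
#include <cassert>

// Minimal fixed-capacity array stack matching the table's idiom:
// push: A[++top] = x;  pop: return A[top--];  overflow when top == CAP-1.
struct ArrayStack {
    static const int CAP = 100;
    int A[CAP];
    int top = -1;                           // -1 means empty

    bool push(int x) {
        if (top == CAP - 1) return false;   // overflow
        A[++top] = x;
        return true;
    }
    bool empty() const { return top == -1; }
    int peek() const { return A[top]; }     // caller must check empty() first
    int pop() { return A[top--]; }          // caller must check empty() first
};
```

Every operation touches only the top cell, which is why stack operations are O(1).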
Final-style tracing question:
Given: Start with an empty stack. Perform:
push(1), push(2), push(3), pop(), push(4)
Stack state from top → bottom:
After push(1) → [1]
After push(2) → [2, 1]
After push(3) → [3, 2, 1]
After pop() → [2, 1]
After push(4) → [4, 2, 1]
For postfix expression evaluation, always read from left to right and use a stack. When you see an operator, pop the last 2 operands, compute, then push the result.
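A minimal C++ sketch of that rule, assuming single-digit operands and no spaces (the function name is mine): push digits, and on an operator pop the right operand first, then the left.

```cpp
#include <cassert>
#include <cctype>
#include <stack>
#include <string>

// Evaluate a postfix string of single-digit operands, e.g. "23*4+".
// Left to right: push digits; on an operator, pop right operand first,
// then left, compute, push the result back.
int evalPostfix(const std::string& expr) {
    std::stack<int> s;
    for (char c : expr) {
        if (std::isdigit(static_cast<unsigned char>(c))) {
            s.push(c - '0');
        } else {
            int right = s.top(); s.pop();   // popped first = right operand
            int left  = s.top(); s.pop();
            switch (c) {
                case '+': s.push(left + right); break;
                case '-': s.push(left - right); break;
                case '*': s.push(left * right); break;
                case '/': s.push(left / right); break;
            }
        }
    }
    return s.top();                         // final result
}
```

The order matters for - and /: "52-" means 5 - 2, so the first pop is the right operand.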
🌳 Trees & Binary Search Trees (BST)
Every recent final includes: building a BST from a sequence, finding height, counting nodes, or giving traversal orders (pre/in/post, level-order).
Basic Definitions
- Height of tree: number of edges on the longest path from root to a leaf (sometimes defined as the number of nodes; check your slides, PSU usually uses edges).
- Depth of node: Number of edges from root to that node.
- Leaf: Node with no children.
- Full Binary Tree: Every node has 0 or 2 children.
- Complete Binary Tree: All levels full except possibly last; last filled from left to right.
BST Property
For every node X: all keys in left subtree < X.key and all keys in right subtree > X.key
Traversal Orders
| Traversal | Order | Use |
|---|---|---|
| Pre-order | Root → Left → Right | Copying tree, prefix notation |
| In-order | Left → Root → Right | Gives sorted order in BST |
| Post-order | Left → Right → Root | Delete tree, postfix notation |
| Level-order | BFS by level using a queue | Shortest path in unweighted tree |
Build a BST from: 50, 30, 70, 20, 40, 60, 80
Inserting in that order gives:
        50
       /  \
     30    70
    / \    / \
  20  40  60  80
In-order: 20, 30, 40, 50, 60, 70, 80 (sorted)
Pre-order: 50, 30, 20, 40, 70, 60, 80
Post-order: 20, 40, 30, 60, 80, 70, 50
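A sketch of the insert and in-order walk behind that example (helper names are mine; this assumes no duplicate keys, sending equal keys right):

```cpp
#include <cassert>
#include <vector>

// Minimal BST node for illustration.
struct TreeNode {
    int key;
    TreeNode *left = nullptr, *right = nullptr;
    TreeNode(int k) : key(k) {}
};

// Standard BST insert: go left if smaller, right otherwise.
TreeNode* insert(TreeNode* root, int key) {
    if (root == nullptr) return new TreeNode(key);
    if (key < root->key) root->left = insert(root->left, key);
    else                 root->right = insert(root->right, key);
    return root;
}

// In-order walk (Left -> Root -> Right) emits keys in sorted order.
void inorder(TreeNode* root, std::vector<int>& out) {
    if (root == nullptr) return;
    inorder(root->left, out);
    out.push_back(root->key);
    inorder(root->right, out);
}
```

Inserting 50, 30, 70, 20, 40, 60, 80 and walking in-order reproduces the sorted sequence above.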
BST Operation Complexities
| Operation | Average Case | Worst Case (Skewed) | Note |
|---|---|---|---|
| Search | O(log n) | O(n) | Go left / right depending on key |
| Insert | O(log n) | O(n) | Same as search + attach new node |
| Delete | O(log n) | O(n) | Three cases: 0, 1, 2 children |
In exam questions asking for complexity, if they say "balanced BST" → use O(log n). If they show a skewed chain (like a linked list), the complexity becomes O(n).
⛰️ Heaps & Priority Queues
Heap Basics
- Complete binary tree stored as array.
- Max-heap: every node ≥ its children.
- Min-heap: every node ≤ its children.
- Priority Queue: an ADT often implemented with a heap.
Array Representation
If the root is at index 1:
left child of i → 2i
right child of i → 2i + 1
parent of i → ⌊i / 2⌋
Complexities
| Operation | Heap | Complexity | Why? |
|---|---|---|---|
| Insert (push) | Heap | O(log n) | Bubble up along height |
| Get max/min | Heap | O(1) | At root |
| Delete max/min (pop) | Heap | O(log n) | Swap with last, bubble down |
| Build heap from n elements | Heapify | O(n) | Trick question: not O(n log n) |
Common MCQ: "What is the complexity of building a heap from an unsorted array of n elements?"
Many students pick O(n log n), but the correct answer is O(n) using bottom-up heapify.
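A bottom-up build-heap sketch in C++ (0-indexed, so the children of i are 2i+1 and 2i+2 rather than the 1-indexed formulas above; names are mine): sift down every internal node, from the last one back to the root. The total work sums to O(n).

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Restore the max-heap property below index i by sinking a[i].
void siftDown(std::vector<int>& a, int i, int n) {
    while (true) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) return;       // heap property holds here
        std::swap(a[i], a[largest]);
        i = largest;                    // continue sinking
    }
}

// Bottom-up heapify: start at the last internal node (n/2 - 1). O(n) total.
void buildMaxHeap(std::vector<int>& a) {
    int n = (int)a.size();
    for (int i = n / 2 - 1; i >= 0; --i)
        siftDown(a, i, n);
}
```

Most nodes sit near the bottom where sifting is cheap, which is the intuition behind O(n) instead of O(n log n).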
#️⃣ Hash Tables
Key Ideas
- Use a hash function h(key) to map keys → indices.
- Goal: average O(1) for search/insert/delete.
- Collisions must be handled:
  - Chaining (linked lists)
  - Open addressing (linear / quadratic probing, double hashing)
Load Factor (α)
α = number of elements / table size
Larger α → more collisions → slower operations.
Collision Resolution
| Method | How it Works | Exam Clues |
|---|---|---|
| Chaining | Each table index stores a linked list of keys | O(1 + α) average search |
| Linear Probing | Try index, then index+1, index+2, ... | Primary clustering issue |
| Quadratic Probing | Try index + 1², index + 2², ... | Reduces clustering but harder to implement |
| Double Hashing | Use secondary hash to jump | Best distribution in open addressing |
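A minimal chaining sketch (toy modulo hash; the struct and method names are mine) showing why average search is O(1 + α) but degenerates to O(n) if every key lands in one bucket:

```cpp
#include <cassert>
#include <list>
#include <vector>

// Each bucket holds a chain (linked list) of the keys that hash there.
struct ChainedHashSet {
    std::vector<std::list<int>> table;
    explicit ChainedHashSet(int size) : table(size) {}

    int h(int key) const { return key % (int)table.size(); }  // toy hash

    void insert(int key) { table[h(key)].push_back(key); }

    // Search scans only one bucket's chain: O(1 + alpha) on average,
    // O(n) worst case when all keys collide into the same bucket.
    bool contains(int key) const {
        for (int k : table[h(key)])
            if (k == key) return true;
        return false;
    }
};
```

With table size 10, the keys 5 and 15 collide into bucket 5 and share a chain.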
Typical question: "For which data structure is search complexity approximately O(1) on average but O(n) in the worst case?"
→ Correct answer: Hash table.
🕸️ Graphs, BFS & DFS
Recent CS210 finals have focused heavily on BFS, DFS, gray/black node coloring, connected components, and shortest paths in unweighted graphs.
Graph Representations
| Representation | Space | When Best | Notes |
|---|---|---|---|
| Adjacency Matrix | O(V²) | Dense graph, small V | Fast edge check edge(u,v): O(1) |
| Adjacency List | O(V + E) | Sparse graphs | Better for BFS/DFS when E ≪ V² |
BFS vs DFS
BFS (Breadth-First Search)
- Uses queue
- Visits neighbors level by level
- Finds shortest path in unweighted graphs
- Good for:
- Shortest path
- Level-order traversal
DFS (Depth-First Search)
- Uses stack or recursion
- Goes deep before backtracking
- Good for:
- Detecting cycles
- Topological sorting (DAG)
- Connected components
BFS Pseudocode (Adjacency List)
BFS(G, s):
    for each vertex v in G:
        color[v] = WHITE
        dist[v] = ∞
        parent[v] = NIL
    color[s] = GRAY
    dist[s] = 0
    Q = empty queue
    ENQUEUE(Q, s)
    while Q not empty:
        u = DEQUEUE(Q)
        for each v in Adj[u]:
            if color[v] == WHITE:
                color[v] = GRAY
                dist[v] = dist[u] + 1
                parent[v] = u
                ENQUEUE(Q, v)
        color[u] = BLACK
In exam tracing, keep a small table of color, dist, parent for each vertex and update it step by step. They love asking "What is the BFS tree?" or "What is the distance from s to v?".
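The same algorithm compresses nicely into C++ on an adjacency list (a sketch; here dist itself plays the role of the colors, with -1 standing in for WHITE/∞):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// BFS over an unweighted graph given as an adjacency list.
// Returns dist[v] = number of edges on the shortest path from s to v,
// or -1 for unreachable vertices. O(V + E).
std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int s) {
    std::vector<int> dist(adj.size(), -1);  // -1 == WHITE / infinity
    std::queue<int> q;
    dist[s] = 0;
    q.push(s);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u]) {
            if (dist[v] == -1) {            // first time seen: discover v
                dist[v] = dist[u] + 1;
                q.push(v);
            }
        }
    }
    return dist;
}
```

Because the queue processes vertices level by level, dist is exactly the unweighted shortest-path distance.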
🔍 Sorting & Searching
Sorting Algorithms Cheat Table
| Algorithm | Best | Average | Worst | Stable? | Notes |
|---|---|---|---|---|---|
| Selection Sort | O(n²) | O(n²) | O(n²) | No | Find min, swap; few swaps |
| Bubble Sort | O(n) | O(n²) | O(n²) | Yes | Swap adjacent; can stop early |
| Insertion Sort | O(n) | O(n²) | O(n²) | Yes | Good for almost-sorted arrays |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | Yes | Divide & conquer, extra O(n) space |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | No (basic) | In-place, pivot choice matters |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | No | Uses heap, in-place, not stable |
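As one concrete row of the table, a sketch of insertion sort: the inner while never runs on already-sorted input (best case O(n)), and runs i times per element on reverse-sorted input (worst case O(n²)).

```cpp
#include <cassert>
#include <vector>

// Insertion sort: shift larger elements right, drop the key into place.
// Stable, in-place; great on almost-sorted arrays.
void insertionSort(std::vector<int>& a) {
    for (int i = 1; i < (int)a.size(); ++i) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {  // shift elements greater than key
            a[j + 1] = a[j];
            --j;
        }
        a[j + 1] = key;                 // key lands in its sorted position
    }
}
```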
Common CS210 final question:
"Which sorting algorithm has the same O(nΒ²) complexity in best, average and worst cases?"
→ Answer: Selection Sort.
Binary Search
- Requires sorted array.
- Repeatedly compare with middle element.
- Complexity: O(log n).
BinarySearch(A, key):
    low = 0
    high = n - 1
    while low <= high:
        mid = (low + high) / 2
        if A[mid] == key:
            return mid
        else if A[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1
If the question mentions "compare with the middle element" or "discard half of the search space each time" → the answer is almost always Binary Search with O(log n) time.
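A C++ version of the pseudocode (a sketch; note that mid = low + (high - low) / 2 avoids the integer-overflow pitfall of (low + high) / 2 on very large arrays):

```cpp
#include <cassert>
#include <vector>

// Binary search on a sorted vector. Returns the index of key, or -1.
// Each iteration discards half of the remaining range -> O(log n).
int binarySearch(const std::vector<int>& a, int key) {
    int low = 0, high = (int)a.size() - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // overflow-safe midpoint
        if (a[mid] == key) return mid;
        else if (a[mid] < key) low = mid + 1;   // discard left half
        else high = mid - 1;                    // discard right half
    }
    return -1;                                  // not found
}
```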
🪜 Recursion & Tracing
How to Trace Recursive Code (Exam Style)
- Write a small table of calls: f(4), f(3), etc.
- Evaluate the base case clearly.
- Unroll from the base case back up using return values.
Example:

int mystery(int n) {
    if (n == 0) return 1;
    return 2 * mystery(n - 1);
}

Trace for mystery(3):
- mystery(0) = 1 (base)
- mystery(1) = 2 * mystery(0) = 2
- mystery(2) = 2 * mystery(1) = 4
- mystery(3) = 2 * mystery(2) = 8

mystery(3) returns 8; the pattern is mystery(n) = 2ⁿ, so 2³ = 8.
Recurrence β Complexity (Intuition)
| Recurrence | Example | Complexity |
|---|---|---|
| T(n) = T(n-1) + O(1) | Linear recursion (countdown) | O(n) |
| T(n) = T(n/2) + O(1) | Binary search | O(log n) |
| T(n) = 2T(n/2) + O(n) | Merge sort | O(n log n) |
| T(n) = 2T(n-1) + O(1) | Very bad recursion | O(2ⁿ) |
🎯 CS210 Final Exam Patterns & Last-Minute Checklist
What Repeats in Almost Every Final
- ✔️ Time complexity MCQs (arrays, lists, BST, sorting, hash tables).
- ✔️ Trace a piece of code and give its output / complexity.
- ✔️ Build or traverse a BST (pre/in/post/level order).
- ✔️ BFS or DFS tracing with queue/stack + color/distance tables.
- ✔️ Sorting algorithm identification from a description (selection vs insertion vs merge vs quick).
- ✔️ Tricky linked-list insertion/deletion pointer questions.
- ✔️ Heap or priority queue conceptual questions.
Common Traps
- ❌ Saying Binary Search works on an unsorted array.
- ❌ Forgetting the worst case of Quick Sort is O(n²).
- ❌ Confusing BFS (queue) with DFS (stack).
- ❌ Thinking building a heap is O(n log n) instead of O(n).
- ❌ Forgetting that in-order traversal of a BST gives sorted order.
- ❌ Using O(log n) for an unbalanced BST instead of O(n).
Last-Minute Mental Map
Arrays
- Access O(1)
- Insert/delete in middle O(n)
Linked List
- Insert/delete at head O(1)
- Search O(n)
Stack & Queue
- LIFO & FIFO
- All operations O(1)
BST
- Balanced: O(log n)
- Skewed: O(n)
Hash Table
- Avg O(1)
- Worst O(n)
Sorting
- n²: selection, insertion, bubble
- n log n: merge, quick (avg), heap
If you are stuck in an MCQ, first eliminate options that obviously break definitions: e.g., "BFS uses a stack" is immediately wrong; "in-order traversal gives random order in BST" is wrong; "hash table worst-case O(1)" is wrong.