Introduction
CPU scheduling is the heart of any operating system. Every time you run a program, the scheduler decides when and how long it gets to run on the CPU. Understanding scheduling algorithms is essential for building performant systems, debugging latency issues, and designing applications that work well with modern operating systems.
In this guide, we’ll explore the full spectrum of scheduling algorithms from basic FIFO to the sophisticated Completely Fair Scheduler used in Linux.
Scheduling Fundamentals
What is CPU Scheduling?
The problem: multiple processes want to run on a limited number of CPU cores. The CPU scheduler sits between them and the hardware, deciding which process runs when, and for how long before being preempted.

Goals of a scheduler:
- Maximize CPU utilization
- Maximize throughput
- Minimize turnaround time
- Minimize waiting time
- Minimize response time
- Ensure fairness
Types of Scheduling
1. Long-term scheduler (job scheduler)
   - Decides which jobs enter the ready queue
   - Controls the degree of multiprogramming
   - Not present in modern desktop OSes; found in batch systems and HPC clusters

2. Short-term scheduler (CPU scheduler)
   - Decides which ready process runs next
   - Runs frequently (on the order of milliseconds)
   - The main scheduler discussed in this guide

3. Medium-term scheduler
   - Swaps processes in and out of memory
   - Part of virtual memory management
   - Helps manage memory pressure
Scheduling Criteria
"""Scheduling metrics and calculations."""
from dataclasses import dataclass
from typing import List
@dataclass
class Process:
pid: int
name: str
arrival_time: int
burst_time: int
priority: int = 0
@dataclass
class SchedulingResult:
"""Results of scheduling simulation."""
pid: int
name: str
arrival_time: int
burst_time: int
start_time: int
finish_time: int
turnaround_time: int
waiting_time: int
response_time: int
def calculate_metrics(processes: List[Process],
schedule: List[tuple]) -> List[SchedulingResult]:
"""Calculate scheduling metrics."""
results = []
for i, (pid, start, finish) in enumerate(schedule):
# Find the process
proc = next(p for p in processes if p.pid == pid)
# Calculate metrics
turnaround = finish - proc.arrival_time
waiting = turnaround - proc.burst_time
response = start - proc.arrival_time
results.append(SchedulingResult(
pid=pid,
name=proc.name,
arrival_time=proc.arrival_time,
burst_time=proc.burst_time,
start_time=start,
finish_time=finish,
turnaround_time=turnaround,
waiting_time=waiting,
response_time=response
))
return results
def print_results(results: List[SchedulingResult]):
"""Print scheduling results."""
print(f"{'Process':<10} {'AT':<5} {'BT':<5} {'ST':<5} {'FT':<5} "
f"{'TAT':<5} {'WT':<5} {'RT':<5}")
print("-" * 50)
total_turnaround = 0
total_waiting = 0
total_response = 0
for r in results:
print(f"{r.name:<10} {r.arrival_time:<5} {r.burst_time:<5} "
f"{r.start_time:<5} {r.finish_time:<5} "
f"{r.turnaround_time:<5} {r.waiting_time:<5} "
f"{r.response_time:<5}")
total_turnaround += r.turnaround_time
total_waiting += r.waiting_time
total_response += r.response_time
n = len(results)
print("-" * 50)
print(f"Average: - - - - "
f"{total_turnaround/n:<5.1f} {total_waiting/n:<5.1f} "
f"{total_response/n:<5.1f}")
Scheduling Algorithms
1. First-Come-First-Served (FCFS)
The simplest scheduling algorithm - processes are served in order of arrival.
"""FCFS (First-Come-First-Served) Scheduler."""
class FCFSScheduler:
"""
First-Come-First-Served Scheduling.
Characteristics:
- Non-preemptive
- Simple implementation
- Suffers from convoy effect
"""
def schedule(self, processes: List[Process]) -> List[tuple]:
"""
Schedule processes using FCFS.
Returns: List of (pid, start_time, finish_time)
"""
# Sort by arrival time
sorted_processes = sorted(processes, key=lambda p: p.arrival_time)
schedule = []
current_time = 0
for process in sorted_processes:
# Wait for process to arrive
if current_time < process.arrival_time:
current_time = process.arrival_time
start_time = current_time
current_time += process.burst_time
finish_time = current_time
schedule.append((process.pid, start_time, finish_time))
return schedule
# Example
if __name__ == "__main__":
processes = [
Process(1, "P1", 0, 10),
Process(2, "P2", 1, 5),
Process(3, "P3", 2, 8),
]
scheduler = FCFSScheduler()
schedule = scheduler.schedule(processes)
results = calculate_metrics(processes, schedule)
print("FCFS Scheduling:")
print_results(results)
FCFS Visualization
| Process | Arrival | Burst |
|---------|---------|-------|
| P1      | 0       | 10    |
| P2      | 1       | 5     |
| P3      | 2       | 8     |

Gantt chart: P1 (0-10) → P2 (10-15) → P3 (15-23)

| Process | Turnaround  | Waiting     | Response    |
|---------|-------------|-------------|-------------|
| P1      | 10 - 0 = 10 | 10 - 10 = 0 | 0 - 0 = 0   |
| P2      | 15 - 1 = 14 | 14 - 5 = 9  | 10 - 1 = 9  |
| P3      | 23 - 2 = 21 | 21 - 8 = 13 | 15 - 2 = 13 |
| Avg     | 15.0        | 7.3         | 7.3         |

Problem: the convoy effect — short processes wait behind long ones.
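The table above can be cross-checked with a few lines of Python. This is a standalone sketch (independent of the `Process` helpers defined earlier) that replays FCFS over `(arrival, burst)` pairs and returns `(turnaround, waiting, response)` per process:

```python
def fcfs_metrics(procs):
    """procs: list of (arrival, burst) pairs, sorted by arrival time."""
    t, rows = 0, []
    for arrival, burst in procs:
        start = max(t, arrival)       # CPU may idle until arrival
        finish = start + burst
        t = finish
        tat = finish - arrival        # turnaround time
        rows.append((tat, tat - burst, start - arrival))  # (TAT, WT, RT)
    return rows

rows = fcfs_metrics([(0, 10), (1, 5), (2, 8)])
# rows -> [(10, 0, 0), (14, 9, 9), (21, 13, 13)], avg TAT = 15.0
```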
2. Shortest Job First (SJF)
Selects the process with the shortest burst time.
"""SJF (Shortest Job First) Scheduler."""
class SJFScheduler:
"""
Shortest Job First Scheduling.
Characteristics:
- Non-preemptive (most common version)
- Optimal for average waiting time
- Requires knowledge of burst times
- Starves long processes (aging can help)
"""
def schedule(self, processes: List[Process]) -> List[tuple]:
"""Schedule processes using SJF."""
schedule = []
remaining = processes.copy()
current_time = 0
completed = 0
while completed < len(processes):
# Find all processes that have arrived
available = [p for p in remaining if p.arrival_time <= current_time]
if not available:
# Jump to next arrival
next_arrival = min(p.arrival_time for p in remaining)
current_time = next_arrival
continue
# Select shortest job
shortest = min(available, key=lambda p: p.burst_time)
remaining.remove(shortest)
# Execute
start_time = current_time
current_time += shortest.burst_time
finish_time = current_time
schedule.append((shortest.pid, start_time, finish_time))
completed += 1
return schedule
class SJFPreemptive:
"""
Shortest Remaining Time First (Preemptive SJF).
"""
def schedule(self, processes: List[Process]) -> List[tuple]:
"""Schedule using SRTF (preemptive SJF)."""
schedule = []
remaining = {p.pid: p for p in processes}
current_time = 0
# Track remaining time for each process
remaining_time = {p.pid: p.burst_time for p in processes}
while remaining:
# Find available processes
available = [pid for pid, p in remaining.items()
if p.arrival_time <= current_time]
if not available:
# Jump to next arrival
next_arrival = min(p.arrival_time for p in remaining.values())
current_time = next_arrival
continue
# Select process with shortest remaining time
current_pid = min(available, key=lambda pid: remaining_time[pid])
# Execute for 1 time unit (preemptive)
start_time = current_time
remaining_time[current_pid] -= 1
current_time += 1
if remaining_time[current_pid] == 0:
# Process completed
schedule.append((current_pid, start_time, current_time))
del remaining[current_pid]
del remaining_time[current_pid]
return schedule
SJF vs FCFS Comparison
Example:

| Process | Arrival | Burst |
|---------|---------|-------|
| P1      | 0       | 8     |
| P2      | 1       | 4     |
| P3      | 2       | 9     |
| P4      | 3       | 5     |

FCFS: P1 (0-8) → P2 (8-12) → P3 (12-21) → P4 (21-26)
Avg waiting: (0 + 7 + 10 + 18) / 4 = 8.75

SJF: P1 (0-8) → P2 (8-12) → P4 (12-17) → P3 (17-26)
Avg waiting: (0 + 7 + 9 + 15) / 4 = 7.75

SJF achieves a lower average waiting time, but requires knowing burst times in advance.
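The waiting-time arithmetic above is easy to get wrong by hand, so here is a small standalone helper (not part of the scheduler classes in this guide) that computes the average wait for any non-preemptive execution order:

```python
def avg_wait(order, procs):
    """order: execution order; procs: {name: (arrival, burst)}."""
    t, total = 0, 0
    for name in order:
        arrival, burst = procs[name]
        start = max(t, arrival)       # CPU idles if nothing has arrived
        total += start - arrival      # waiting = start - arrival (non-preemptive)
        t = start + burst
    return total / len(order)

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
fcfs = avg_wait(["P1", "P2", "P3", "P4"], procs)   # 8.75
sjf = avg_wait(["P1", "P2", "P4", "P3"], procs)    # 7.75
```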
3. Priority Scheduling
Processes are scheduled based on priority (higher priority = scheduled first).
"""Priority Scheduling."""
class PriorityScheduler:
"""
Priority Scheduling.
Characteristics:
- Can be preemptive or non-preemptive
- Starves low-priority processes
- Priority can be based on: memory, time, etc.
"""
def __init__(self, preemptive: bool = False):
self.preemptive = preemptive
def schedule(self, processes: List[Process]) -> List[tuple]:
"""Schedule using priority (lower number = higher priority)."""
if not self.preemptive:
return self._non_preemptive(processes)
else:
return self._preemptive(processes)
def _non_preemptive(self, processes: List[Process]) -> List[tuple]:
"""Non-preemptive priority scheduling."""
schedule = []
remaining = processes.copy()
current_time = 0
while remaining:
# Get available processes
available = [p for p in remaining if p.arrival_time <= current_time]
if not available:
current_time = min(p.arrival_time for p in remaining)
continue
# Select highest priority (lowest number)
highest = min(available, key=lambda p: p.priority)
remaining.remove(highest)
start_time = current_time
current_time += highest.burst_time
finish_time = current_time
schedule.append((highest.pid, start_time, finish_time))
return schedule
def _preemptive(self, processes: List[Process]) -> List[tuple]:
"""Preemptive priority scheduling."""
schedule = []
remaining = {p.pid: p for p in processes}
current_time = 0
remaining_time = {p.pid: p.burst_time for p in processes}
current_pid = None
while remaining:
# Find highest priority ready process
available = [(pid, p) for pid, p in remaining.items()
if p.arrival_time <= current_time]
if not available:
current_time = min(p.arrival_time for p in remaining.values())
continue
# Select highest priority
next_pid = min(available, key=lambda x: x[1].priority)[0]
# If different process, record context switch
if current_pid is not None and current_pid != next_pid:
schedule.append((current_pid, current_time, current_time))
current_pid = next_pid
# Execute for 1 unit
remaining_time[current_pid] -= 1
current_time += 1
if remaining_time[current_pid] == 0:
schedule.append((current_pid, current_time-1, current_time))
del remaining[current_pid]
del remaining_time[current_pid]
current_pid = None
return schedule
Priority Inversion Problem
Scenario:
- Process L (low priority) acquires a lock
- Process H (high priority) wants the lock and blocks
- Process M (medium priority) preempts L
- Result: H effectively waits for M to finish!

Without a fix, L cannot run while M is runnable (M has higher priority), so L never releases the lock and H stays blocked for as long as M keeps running.

Solution: priority inheritance. While L holds a lock that H is waiting on, L temporarily runs at H's priority, so M cannot preempt it and the lock is released as quickly as possible.

Linux uses priority inheritance (PI) for rt-mutexes and PI futexes.
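The cost of priority inversion comes down to simple arithmetic over the blocking terms. A toy calculation with hypothetical numbers (2 ms critical section, 10 ms of medium-priority work) shows why inheritance matters:

```python
# Toy numbers (hypothetical): L's critical section takes 2 ms,
# M runs for 10 ms, H blocks on the lock the moment L takes it.
crit_section = 2
m_runtime = 10

# Without priority inheritance: M preempts L, so H waits for
# all of M's work plus the remainder of L's critical section.
blocking_no_pi = m_runtime + crit_section   # 12 ms

# With priority inheritance: L runs at H's priority, M cannot
# preempt it, so H waits only for the critical section itself.
blocking_pi = crit_section                  # 2 ms
```

In the worst case (many medium-priority tasks), unbounded inversion is possible without inheritance; with it, H's blocking is bounded by the critical section length.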
4. Round Robin (RR)
Time-slicing approach where each process gets a fixed time quantum.
"""Round Robin Scheduler."""
class RoundRobinScheduler:
"""
Round Robin Scheduling.
Characteristics:
- Preemptive
- Fair - each process gets equal CPU time
- Good for time-sharing systems
- Performance depends on quantum size
"""
def __init__(self, quantum: int = 4):
self.quantum = quantum
def schedule(self, processes: List[Process]) -> List[tuple]:
"""Schedule using Round Robin."""
schedule = []
from collections import deque
# Sort by arrival and add to queue
ready_queue = deque(sorted(processes, key=lambda p: p.arrival_time))
remaining = {p.pid: p.burst_time for p in processes}
arrival_index = {p.pid: p.arrival_time for p in processes}
current_time = 0
completed = 0
# Track which processes have arrived
arrived = set()
for p in processes:
if p.arrival_time == 0:
arrived.add(p.pid)
while completed < len(processes):
# Add newly arrived processes
for pid, proc in list(remaining.items()):
if proc.arrival_time <= current_time and pid not in arrived:
ready_queue.append(proc)
arrived.add(pid)
if not ready_queue:
# Jump to next arrival
next_arrival = min(p.arrival_time for p in remaining.values()
if p.arrival_time > current_time)
current_time = next_arrival
continue
# Get next process
current = ready_queue.popleft()
# Execute for quantum or until done
execute_time = min(self.quantum, remaining[current.pid])
start_time = current_time
remaining[current.pid] -= execute_time
current_time += execute_time
# Check for new arrivals during execution
for pid, proc in list(remaining.items()):
if (proc.arrival_time > start_time and
proc.arrival_time <= current_time and
pid not in arrived):
ready_queue.append(proc)
arrived.add(pid)
if remaining[current.pid] > 0:
# Not done, requeue
ready_queue.append(current)
else:
# Done
schedule.append((current.pid, start_time, current_time))
completed += 1
return schedule
Quantum Size Impact
Process bursts (all arriving at t=0): P1=6, P2=3, P3=1

Quantum = 1:

| Slice | P1  | P2  | P3  | P1  | P2  | P1  | P2  | P1  | P1  | P1   |
|-------|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|
| Time  | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 | 7-8 | 8-9 | 9-10 |

- Context switches: 9
- Completion times: P3 at 3, P2 at 7, P1 at 10
- Avg waiting: (4 + 4 + 2) / 3 ≈ 3.33

Quantum = 4:

| Slice | P1 (4) | P2 (3) | P3 (1) | P1 (2) |
|-------|--------|--------|--------|--------|
| Time  | 0-4    | 4-7    | 7-8    | 8-10   |

- Context switches: 3
- Avg waiting: (4 + 4 + 7) / 3 = 5.0

Trade-off:
- Small quantum: better response time, more context-switch overhead
- Large quantum: less overhead, worse response time (approaches FCFS)
- Rule of thumb: about 80% of CPU bursts should finish within one quantum
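The quantum comparison above can be reproduced with a compact standalone round-robin simulation (separate from the `RoundRobinScheduler` class, and assuming all jobs arrive at t=0):

```python
from collections import deque

def rr_avg_wait(bursts, quantum):
    """bursts: {name: burst_time}, all arriving at t=0.
    Returns the average waiting time (turnaround - burst)."""
    queue = deque(bursts)          # names in submission order
    remaining = dict(bursts)
    finish, t = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = t       # job complete
        else:
            queue.append(name)     # preempted, back of the queue
    return sum(finish[n] - b for n, b in bursts.items()) / len(bursts)

rr_avg_wait({"P1": 6, "P2": 3, "P3": 1}, quantum=1)  # ≈ 3.33
rr_avg_wait({"P1": 6, "P2": 3, "P3": 1}, quantum=4)  # 5.0
```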
5. Multilevel Queue Scheduling
Different queues for different process types with separate scheduling.
- System queue (highest priority) — scheduled internally with RR or FCFS
- Interactive queue — RR with a small quantum
- Batch queue (lowest priority) — FCFS

Inter-queue scheduling: fixed priority (system > interactive > batch). A queue only runs when every higher-priority queue is empty, and processes are permanently assigned to one queue.
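The fixed-priority dispatch rule above fits in a few lines. This is a minimal sketch (the class name and task labels are illustrative) showing why lower queues can starve:

```python
from collections import deque

class MultilevelQueue:
    """Fixed-priority multilevel queue: index 0 is the highest priority."""
    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, task, level):
        self.queues[level].append(task)

    def dispatch(self):
        # Always serve the highest non-empty queue; lower queues
        # run only when every queue above them is empty.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

mlq = MultilevelQueue()
mlq.enqueue("batch-job", 2)
mlq.enqueue("editor", 1)
mlq.enqueue("kworker", 0)
order = [mlq.dispatch() for _ in range(3)]
# order -> ['kworker', 'editor', 'batch-job']
```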
6. Multilevel Feedback Queue (MLFQ)
Modern scheduler that allows processes to move between queues.
"""Multilevel Feedback Queue Scheduler."""
class MLFQScheduler:
"""
Multilevel Feedback Queue Scheduler.
Characteristics:
- Multiple queues with different priorities
- Processes can move between queues
- Aging prevents starvation
- Adapts to process behavior
"""
def __init__(self, num_queues: int = 3, base_quantum: int = 4):
self.num_queues = num_queues
self.queues = [[] for _ in range(num_queues)]
self.base_quantum = base_quantum
def schedule(self, processes: List[Process]) -> List[tuple]:
"""Schedule using MLFQ."""
schedule = []
from collections import deque
# Initialize with all processes
remaining = {p.pid: p for p in processes}
remaining_time = {p.pid: p.burst_time for p in processes}
# All start in highest priority queue
for p in processes:
self.queues[0].append(p.pid)
current_time = 0
current_queue = 0
current_pid = None
while remaining:
# Find highest non-empty queue
while current_queue < self.num_queues and not self.queues[current_queue]:
current_queue += 1
if current_queue >= self.num_queues:
break
# Get next process from current queue
pid = self.queues[current_queue].pop(0)
if pid not in remaining:
continue
# Calculate quantum for this level
quantum = self.base_quantum * (2 ** current_queue)
# Execute
execute_time = min(quantum, remaining_time[pid])
start_time = current_time
remaining_time[pid] -= execute_time
current_time += execute_time
if remaining_time[pid] > 0:
# Not done - move to lower priority (if not already lowest)
if current_queue < self.num_queues - 1:
self.queues[current_queue + 1].append(pid)
else:
# Done
schedule.append((pid, start_time, current_time))
del remaining[pid]
current_pid = None
return schedule
Modern Linux Scheduler: CFS
Completely Fair Scheduler Overview
Key concepts:

1. Virtual runtime (vruntime)
   - Measures the "fair time" each process has received
   - Lower vruntime = more entitled to the CPU

2. Red-black tree
   - All runnable tasks are stored here
   - Ordered by vruntime (leftmost node = most deserving)
   - O(log n) insertion/deletion

3. Fair share
   - Each task gets roughly 1/n of the CPU (n = runnable tasks)
   - "Completely fair" means vruntime advances equally for all tasks

CFS does NOT use:
- Fixed time slices
- Direct priority-based preemption
- A hard batch/interactive distinction

CFS DOES use:
- Dynamic time slices based on load
- Weight-based (nice value) vruntime scaling
- A target scheduling latency
CFS Implementation Concepts
"""CFS-like scheduler simulation."""
import heapq
from dataclasses import dataclass, field
from typing import List, Optional
@dataclass
class CFSProcess:
"""Represents a process in CFS."""
pid: int
name: str
weight: float = 1024 # Default weight (nice=0)
vruntime: float = 0.0
burst_remaining: int = 0
arrived: bool = True
# For priority comparison
def __lt__(self, other):
return self.vruntime < other.vruntime
class CFSScheduler:
"""
Simplified CFS (Completely Fair Scheduler) implementation.
"""
def __init__(self, latency_target: float = 20.0):
self.latency_target = latency_target # Target scheduling latency (ms)
self.min_granularity = 1.0 # Minimum time slice (ms)
self.rbtree = [] # Min-heap as simplified RB-tree
self.current_time = 0.0
def _calculate_time_slice(self, weight: float, total_weight: float,
num_tasks: int) -> float:
"""Calculate time slice based on weight and load."""
if num_tasks == 0:
return self.latency_target
# Time slice = (latency_target / n) * (weight / total_weight)
slice_time = (self.latency_target / num_tasks) * (weight / total_weight)
# Ensure minimum granularity
return max(slice_time, self.min_granularity)
def add_process(self, process: CFSProcess):
"""Add a process to the scheduler."""
heapq.heappush(self.rbtree, process)
def schedule(self, processes: List[CFSProcess]) -> List[tuple]:
"""Run CFS scheduling simulation."""
schedule = []
# Initialize
self.rbtree = processes.copy()
heapq.heapify(self.rbtree)
while self.rbtree:
# Get process with lowest vruntime
current = heapq.heappop(self.rbtree)
# Calculate weights
total_weight = sum(p.weight for p in self.rbtree) + current.weight
num_tasks = len(self.rbtree) + 1
# Get time slice
timeslice = self._calculate_time_slice(
current.weight, total_weight, num_tasks
)
# Execute
execute_time = min(timeslice, current.burst_remaining)
start_time = self.current_time
self.current_time += execute_time
# Update vruntime (normalized by weight)
# vruntime += actual_runtime * (1024 / weight)
current.vruntime += execute_time * (1024.0 / current.weight)
current.burst_remaining -= execute_time
schedule.append((current.pid, start_time, self.current_time))
# Re-add if not done
if current.burst_remaining > 0:
heapq.heappush(self.rbtree, current)
return schedule
# Example CFS behavior
if __name__ == "__main__":
# Two processes with different nice values
# Nice 0 = weight 1024, Nice -5 = weight 1597, Nice 5 = weight 335
p1 = CFSProcess(1, "Interactive", weight=1024, vruntime=0, burst_remaining=20)
p2 = CFSProcess(2, "Batch", weight=335, vruntime=0, burst_remaining=20) # Nice=5
cfs = CFSScheduler(latency_target=20.0)
schedule = cfs.schedule([p1, p2])
print("CFS Scheduling:")
print("Process weights: Interactive=1024, Batch=335")
print("Expected: Interactive gets ~75%, Batch gets ~25%\n")
for pid, start, finish in schedule:
name = "Interactive" if pid == 1 else "Batch"
print(f"{name}: {start:.1f} - {finish:.1f}")
Linux Scheduling Classes
From highest to lowest priority:

1. stop_sched_class (highest)
   - For CPU hotplug and task migration
   - Highest priority, no fairness

2. dl_sched_class (Deadline)
   - SCHED_DEADLINE (earliest deadline first)
   - Real-time tasks with deadlines
   - Uses CBS (Constant Bandwidth Server) for bandwidth control

3. rt_sched_class (Real-Time)
   - SCHED_FIFO: fixed priority, runs until it blocks or yields
   - SCHED_RR: fixed priority, round-robin within a priority level
   - Priorities: 1-99 (99 highest)

4. fair_sched_class (CFS)
   - SCHED_NORMAL: regular time-sharing
   - SCHED_BATCH: for batch jobs
   - SCHED_IDLE: for extremely low priority tasks
   - Nice values (-20 to +19) map to weights; each nice step changes
     the weight by roughly a factor of 1.25 (about 10% of CPU share)
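The nice-to-weight mapping can be approximated in one line. The kernel uses a precomputed table, but it follows a geometric rule of roughly 1.25 per nice step; `nice_to_weight` below is a hypothetical helper, not a kernel API:

```python
def nice_to_weight(nice):
    """Approximate the kernel's nice-to-weight table.

    Each nice step changes the weight by ~1.25x, so each step
    shifts relative CPU share by about 10%.
    """
    return 1024 / 1.25 ** nice

nice_to_weight(0)    # 1024.0
nice_to_weight(5)    # ~335  (kernel table: 335)
nice_to_weight(-5)   # ~3125 (kernel table: 3121)
```

Two tasks' CPU shares are proportional to their weights, e.g. nice 0 vs nice 5 split the CPU roughly 1024 : 335, about 75% / 25%, matching the CFS example above.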
Real-Time Scheduling
Real-Time Systems Overview
Hard real-time:
- Deadlines MUST be met, or the system fails
- Examples: medical devices, flight control
- Scheduling: Rate Monotonic, EDF

Soft real-time:
- Deadlines should be met, but occasional misses are tolerable
- Examples: video streaming, games
- Scheduling: weighted round robin, CFS

Firm real-time:
- A result delivered after its deadline is useless, but occasional misses do not cause system failure
- Examples: some multimedia applications
Rate Monotonic Scheduling (RMS)
"""Rate Monotonic Scheduling analysis."""
class RateMonotonicScheduler:
"""
Rate Monotonic Scheduling.
Characteristics:
- Fixed priority (period shorter = higher priority)
- Optimal for fixed-priority real-time scheduling
- Feasibility test: U โค n(2^(1/n) - 1)
"""
@staticmethod
def check_schedulability(tasks: List[dict]) -> tuple:
"""
Check if task set is schedulable using RMS.
Args:
tasks: List of dicts with 'period', 'execution', 'deadline'
Returns:
(is_schedulable, utilization, bound)
"""
n = len(tasks)
# Calculate total utilization
utilization = sum(t['execution'] / t['period'] for t in tasks)
# Calculate bound
bound = n * (2 ** (1/n) - 1)
is_schedulable = utilization <= bound
return is_schedulable, utilization, bound
@staticmethod
def response_time_analysis(tasks: List[dict]) -> dict:
"""
Response time analysis for RMS.
More accurate than utilization bound.
"""
# Sort by period (shorter = higher priority)
tasks = sorted(tasks, key=lambda t: t['period'])
results = []
for i, task in enumerate(tasks):
# Response time = execution + interference from higher priority
response_time = task['execution']
for j in range(i):
# Interference = ceiling of (response_time / period_j) * execution_j
interference = (
(response_time + tasks[j]['period'] - 1)
// tasks[j]['period']
) * tasks[j]['execution']
response_time += interference
results.append({
'task': task['name'],
'period': task['period'],
'execution': task['execution'],
'deadline': task['deadline'],
'response_time': response_time,
'missed': response_time > task['deadline']
})
return results
# Example
if __name__ == "__main__":
tasks = [
{'name': 'T1', 'period': 10, 'execution': 2, 'deadline': 10},
{'name': 'T2', 'period': 20, 'execution': 5, 'deadline': 20},
{'name': 'T3', 'period': 40, 'execution': 10, 'deadline': 40},
]
sched, util, bound = RateMonotonicScheduler.check_schedulability(tasks)
print(f"Utilization: {util:.3f}")
print(f"Bound: {bound:.3f}")
print(f"Schedulable: {sched}\n")
results = RateMonotonicScheduler.response_time_analysis(tasks)
print("Response Time Analysis:")
for r in results:
status = "โ" if not r['missed'] else "โ MISSED"
print(f" {r['task']}: response={r['response_time']}, deadline={r['deadline']} {status}")
Earliest Deadline First (EDF)
"""Earliest Deadline First Scheduler."""
class EDFScheduler:
"""
Earliest Deadline First Scheduling.
Characteristics:
- Dynamic priority (earliest deadline = highest priority)
- Optimal for uniprocessor real-time scheduling
- Schedulable if total utilization โค 1
"""
def schedule(self, tasks: List[dict]) -> List[tuple]:
"""
Schedule using EDF.
Args:
tasks: List of dicts with 'execution', 'deadline', 'arrival'
Returns:
Schedule as list of (task_id, start, finish)
"""
import heapq
schedule = []
current_time = 0
ready_heap = [] # (deadline, task_id, remaining_time, arrival)
task_index = {id(t): t.get('name', id(t)) for t in tasks}
# All tasks ready at time 0 for this example
for t in tasks:
heapq.heappush(ready_heap,
(t['deadline'], id(t), t['execution'], t.get('arrival', 0)))
while ready_heap:
# Get earliest deadline
deadline, tid, remaining, arrival = heapq.heappop(ready_heap)
# Can't meet deadline
if current_time > deadline:
print(f"Warning: Deadline missed at {current_time}")
continue
# Execute
start = current_time
current_time += remaining
finish = current_time
schedule.append((task_index[tid], start, finish))
return schedule
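EDF's schedulability condition is worth spelling out: for periodic tasks with deadlines equal to periods, a task set is feasible on one CPU exactly when total utilization does not exceed 1. A tiny standalone check (`edf_schedulable` is an illustrative helper, not part of the class above):

```python
def edf_schedulable(tasks):
    """tasks: list of (execution, period) pairs, deadline = period.
    EDF on a uniprocessor is feasible iff total utilization <= 1."""
    return sum(c / p for c, p in tasks) <= 1.0

tasks = [(2, 10), (5, 20), (10, 40)]  # U = 0.2 + 0.25 + 0.25 = 0.7
edf_schedulable(tasks)                # True
edf_schedulable([(6, 10), (5, 10)])   # False: U = 1.1
```

Compare this with RMS: the same task set passes EDF's bound trivially (0.7 <= 1.0), while RMS only guarantees it up to the tighter Liu & Layland bound.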
Multicore Scheduling
Challenges
1. Load balancing
   - Cores end up unevenly loaded (e.g. CPU 0 at 80%, CPU 1 at 50%, CPU 2 at 35%)
   - Work must migrate from busy cores to idle ones

2. CPU affinity
   - Each core has private caches (L1, L2)
   - Migrating a process throws away its warm cache state, which hurts performance
   - The scheduler must balance affinity against load

3. Cross-core synchronization
   - Lock contention increases with core count
   - NUMA effects in multi-socket systems
   - Cache coherence traffic

4. Power management
   - Idle cores can sleep to save power
   - Load balancing may need to wake sleeping cores
   - DVFS (Dynamic Voltage/Frequency Scaling) interacts with scheduling
Linux Multicore Scheduling
"""Linux multicore scheduling concepts."""
class LinuxMulticoreScheduler:
"""
Linux multicore scheduling overview.
"""
# Scheduler domains (from kernel/sched/sched.h)
SCHED_DOMAIN_LEVELS = {
'SD_LVL_CPU': 'Per-CPU (not in use)',
'SD_LVL_CACHE': 'Per-Core/L3 cache',
'SD_LVL_NODE': 'Per-NUMA node',
'SD_LVL_MC': 'Per-Multi-CPU package',
'SD_LVL_SIBLING': 'Hyperthread siblings',
}
# Load balancing happens at:
# 1. Periodic load balancing (scheduler tick)
# 2. New idle core discovery
# 3. Fork/exec balancing
# Key functions:
# - rebalance_domains(): Main load balancing
# - find_busiest_group(): Identify overloaded CPU group
# - move_tasks(): Migrate tasks between cores
@staticmethod
def describe_scheduling_domains():
"""Describe Linux scheduling domains."""
print("Linux Scheduling Domains:")
for level, desc in LinuxMulticoreScheduler.SCHED_DOMAIN_LEVELS.items():
print(f" {level}: {desc}")
print("\nLoad Balancing Strategy:")
print(" 1. Hierarchy: Try local, then domain-level")
print(" 2. Idle CPU preferred: Wake up idle cores first")
print(" 3. Overutilization detection: Skip if all cores busy")
print(" 4. Energy awareness: Consider power efficiency")
Algorithm Comparison
| Algorithm | Throughput | Avg Waiting | Response Time | Starvation |
|-----------|------------|-------------|---------------|------------|
| FCFS      | Medium     | High        | Poor          | No         |
| SJF       | High       | Low         | Medium        | Yes (long jobs) |
| Priority  | Medium     | Medium      | Medium        | Yes (low priority) |
| RR        | High       | Medium      | Good          | No         |
| MLFQ      | High       | Low         | Good          | No (with aging) |
| CFS       | High       | Low         | Good          | No         |
| EDF       | High       | Low         | Good          | No (unless overloaded) |

When to use what:
- FCFS: batch jobs, simple systems
- SJF: known burst times, minimizing average wait
- RR: time-sharing, general purpose
- Priority: real-time, differentiated service
- MLFQ: complex mixed workloads
- CFS: Linux production systems
- EDF: hard real-time systems
Conclusion
CPU scheduling is a fundamental OS concept that directly impacts system performance. Key takeaways:
- No single best algorithm - Each has trade-offs between fairness, throughput, and response time
- Modern OSes use hybrid approaches - Combining multiple algorithms (Linux: CFS + RT + Deadline)
- Real-time systems require different approaches - EDF and RMS provide guaranteed deadlines
- Multicore adds complexity - Load balancing, cache affinity, and NUMA all matter
- The scheduler adapts - Modern schedulers use feedback to adjust behavior
Linux’s CFS represents a mature implementation that balances fairness with performance, while real-time systems use EDF or fixed-priority approaches for deadline guarantees.
External Resources
Books
- “Operating System Concepts” - Silberschatz, Galvin, Gagne
- “Modern Operating Systems” - Andrew Tanenbaum
- “Linux Kernel Development” - Robert Love