4003-440-02 Operating Systems I
Module 7. CPU Scheduling -- Lecture Notes
Prof. Alan Kaminsky -- Winter Quarter 2012
Rochester Institute of Technology -- Department of Computer Science
CPU Scheduling Concepts
- Process/thread states and state transitions
- The CPU usage cycle
- CPU-bound processes vs. I/O-bound processes -- examples
- Circumstances under which the OS could do a context switch
- A process goes from "running" to "waiting"
- A process goes from "running" to "ready"
- A process goes from "waiting" to "ready"
- A process starts and goes to "ready"
- A process terminates
- Nonpreemptive scheduling
- The OS only does a context switch in cases (1) and (5)
- Once it starts running, a process will keep the CPU until it has to wait or it terminates
- Preemptive scheduling
- The OS can do a context switch in all five cases
- The OS can preempt the running process
- Scheduling algorithm
- The thing in the OS that decides whether to do a context switch and which process will run after a context switch
- Criteria for designing a scheduling algorithm (a sketch computing these metrics follows this list)
- CPU utilization -- (CPU-in-use time) / (Total time)
- Throughput -- Processes per second
- Turnaround time -- Time between when process is submitted and when process terminates
- Waiting time -- Total time a process spends sitting in the "ready" state
- Response time -- Time from when a user supplies some input until the process starts producing output
- Examples of scheduling algorithms
- First come first served
- Round robin
- Shortest job first
- Highest priority first
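
A minimal Java sketch computing the criteria above from a finished schedule (hypothetical: it is not part of the course's edu.rit.os1.cpu.test simulation package, and the job timestamps are made up). Waiting time is approximated here as turnaround minus CPU service time, which is exact only for jobs that do no I/O.

    // Hypothetical example: computing scheduling criteria for completed jobs.
    public class SchedulingMetrics {
        static class Job {
            double arrival;    // when the job entered the ready queue
            double completion; // when the job terminated
            double service;    // total CPU time the job used
            Job(double a, double c, double s) { arrival = a; completion = c; service = s; }
            double turnaround() { return completion - arrival; }
            double waiting()    { return turnaround() - service; } // ignores I/O wait
        }

        public static void main(String[] args) {
            Job[] jobs = { new Job(0, 10, 7), new Job(2, 14, 4), new Job(4, 15, 3) };
            double totalTime = 15;  // length of the whole schedule
            double busy = 0, turn = 0, wait = 0;
            for (Job j : jobs) {
                busy += j.service;
                turn += j.turnaround();
                wait += j.waiting();
            }
            System.out.printf("CPU utilization = %.2f%n", busy / totalTime);
            System.out.printf("Throughput      = %.3f jobs/unit time%n", jobs.length / totalTime);
            System.out.printf("Mean turnaround = %.2f%n", turn / jobs.length);
            System.out.printf("Mean waiting    = %.2f%n", wait / jobs.length);
        }
    }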
CPU Scheduling Algorithm Simulations
First-Come-First-Served Scheduling
- Each job goes at the end of the ready queue
- The next job to run is the first one in the ready queue
- Nonpreemptive: Once a job starts running, it stays running until the end of its CPU burst
- Package edu.rit.os1.cpu.test
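
A minimal FCFS sketch in Java (hypothetical: not the course's edu.rit.os1.cpu.test simulation, and the job names and burst lengths are made up). All jobs are assumed to be in the ready queue at time 0, and each runs its whole CPU burst without preemption.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical FCFS example: dispatch jobs in arrival order, run to completion.
    public class FcfsSketch {
        public static void main(String[] args) {
            String[] names  = { "A", "B", "C" };  // jobs in arrival order
            int[]    bursts = { 7, 4, 3 };        // CPU burst length of each job

            Queue<Integer> ready = new ArrayDeque<>();
            for (int i = 0; i < names.length; i++) ready.add(i);  // all arrive at time 0

            int clock = 0;
            while (!ready.isEmpty()) {
                int j = ready.remove();           // first job in the ready queue
                int start = clock;
                clock += bursts[j];               // nonpreemptive: run the whole burst
                System.out.printf("%s runs [%d, %d)%n", names[j], start, clock);
            }
        }
    }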
Round Robin Scheduling
- Each job goes at the end of the ready queue
- The next job to run is the first one in the ready queue
- Preemptive: Each job runs for at most a certain time quantum, then it gets preempted and goes to the back of the ready queue
- Package edu.rit.os1.cpu.test
- Comparison of FCFS versus RR with different time quanta
- RR tends to reduce the mean response time for I/O-bound jobs in the presence of CPU-bound jobs
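
A minimal round-robin sketch along the same lines (again hypothetical; the quantum of 2 and the burst lengths are made up). Each job runs for at most one quantum, then goes to the back of the ready queue if it still has burst time left.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical round-robin example with a fixed time quantum.
    public class RoundRobinSketch {
        static final int QUANTUM = 2;  // made-up quantum, in time units

        public static void main(String[] args) {
            String[] names     = { "A", "B", "C" };
            int[]    remaining = { 7, 4, 3 };   // CPU burst time left per job

            Queue<Integer> ready = new ArrayDeque<>();
            for (int i = 0; i < names.length; i++) ready.add(i);

            int clock = 0;
            while (!ready.isEmpty()) {
                int j = ready.remove();
                int slice = Math.min(QUANTUM, remaining[j]);
                System.out.printf("%s runs [%d, %d)%n", names[j], clock, clock + slice);
                clock += slice;
                remaining[j] -= slice;
                if (remaining[j] > 0) ready.add(j);  // preempted: back of the queue
            }
        }
    }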
Shortest Job First Scheduling
- Predict the length of each job's next CPU burst using an exponential average of the job's past CPU bursts: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the measured length of the most recent burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1 (see the predictor sketch after this list)
- Requires choosing the initial predicted CPU burst τ(0) somehow, e.g. a fixed constant or a system-wide average
- The next job to run is the one with the smallest time remaining in its predicted CPU burst
- Nonpreemptive SJF: Once a job starts running, it stays running until the end of its CPU burst
- Preemptive SJF: When a job enters the ready queue, the running job will be preempted if the time remaining in its predicted CPU burst is greater than the predicted CPU burst of the newly ready job
- Package edu.rit.os1.cpu.test
- Comparison of FCFS and RR versus SJF with and without preemption
- Comparison of FCFS and RR versus SJF with different initial predicted CPU bursts
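
A minimal sketch of the exponential-average burst predictor from the formula above (hypothetical; the α value, initial prediction, and measured bursts are all made up).

    // Hypothetical example of exponential averaging for SJF burst prediction:
    // tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
    public class BurstPredictor {
        private final double alpha;  // weight on the most recent measured burst
        private double tau;          // current prediction tau(n)

        public BurstPredictor(double alpha, double initialTau) {
            this.alpha = alpha;      // 0 <= alpha <= 1
            this.tau = initialTau;   // the initial predicted CPU burst tau(0)
        }

        // Record a measured burst t(n) and return the new prediction tau(n+1).
        public double update(double measuredBurst) {
            tau = alpha * measuredBurst + (1.0 - alpha) * tau;
            return tau;
        }

        public double prediction() { return tau; }

        public static void main(String[] args) {
            // alpha = 0.5, initial prediction = 10 time units (both made up).
            BurstPredictor p = new BurstPredictor(0.5, 10.0);
            double[] bursts = { 6.0, 4.0, 6.0, 13.0 };
            for (double t : bursts) {
                System.out.printf("measured %.1f -> predict %.2f%n", t, p.update(t));
            }
        }
    }

With α = 0.5 the predictor weights recent and older history equally; α = 0 would ignore measurements entirely, while α = 1 would use only the last measured burst.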
Highest Priority First Scheduling
- Each job has a priority (a number)
- Higher priority = lower number
- Jobs go in the ready queue in priority order
- Highest priority first = ascending order of numerical priority
- Rule: The highest-priority job is always running
- The scheduling algorithm must be preemptive
- The job at the front of the queue is always the next to run
- Package edu.rit.os1.cpu.test
- Comparison of FCFS, RR, and SJF versus HPF
- Possible problem: Starvation
- A low-priority job may never get the CPU if high-priority jobs always get the CPU
- Possible solution: Aging
- A job's priority is gradually increased as long as it sits in the ready queue without running
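
A minimal highest-priority-first sketch with aging (hypothetical; the priorities, burst lengths, and aging rate of one priority level per time unit are made up). Lower number means higher priority, matching the convention above; a job that sits in the ready queue has its priority number gradually decreased so it cannot starve forever.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical HPF example with aging. Lower number = higher priority.
    public class HpfAgingSketch {
        static class Job {
            final String name;
            int priority;   // aged downward (toward higher priority) while waiting
            int remaining;  // CPU burst time left
            Job(String n, int p, int r) { name = n; priority = p; remaining = r; }
        }

        public static void main(String[] args) {
            List<Job> ready = new ArrayList<>();
            ready.add(new Job("A", 3, 4));
            ready.add(new Job("B", 1, 6));
            ready.add(new Job("C", 5, 2));

            int clock = 0;
            while (!ready.isEmpty()) {
                // Pick the highest-priority (numerically smallest) ready job.
                Job best = ready.get(0);
                for (Job j : ready) if (j.priority < best.priority) best = j;

                // Run it for one time unit (a preemption point every unit).
                System.out.printf("t=%d: %s runs (priority %d)%n", clock, best.name, best.priority);
                best.remaining--;
                clock++;
                if (best.remaining == 0) ready.remove(best);

                // Aging: every other job's priority number drops by 1 per unit
                // waited, so a starved job eventually becomes highest priority.
                for (Job j : ready) if (j != best) j.priority = Math.max(0, j.priority - 1);
            }
        }
    }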
Other CPU Scheduling Algorithms
- Multilevel queues
- Multilevel queues with feedback
- Multicore processor scheduling
- Asymmetric multiprocessing -- One core makes the scheduling decisions for the whole machine
- Symmetric multiprocessing -- Each core does its own scheduling decisions
- Processor affinity -- A thread stays on the same core, rarely or never migrates to a different core
- Can improve the cache hit ratio
- Load balancing -- Ensure that the processing load is distributed as equally as possible among all the cores
- Push migration -- Periodically move threads from heavily loaded cores to lightly loaded cores
- Pull migration -- When a core goes idle, it pulls active threads off the non-idle cores (see the sketch below)
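
A minimal pull-migration sketch (hypothetical; the core count, thread names, and two-tick loop are made up). Under symmetric multiprocessing each core has its own ready queue; when a core finds its queue empty, it pulls a thread from another core's queue instead of idling.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical pull-migration example: idle cores pull work from busy cores.
    public class PullMigrationSketch {
        static final int CORES = 2;

        @SuppressWarnings("unchecked")
        public static void main(String[] args) {
            Deque<String>[] readyQueue = new ArrayDeque[CORES];
            for (int c = 0; c < CORES; c++) readyQueue[c] = new ArrayDeque<>();

            // All work starts on core 0; core 1 begins idle.
            for (String t : new String[] { "T1", "T2", "T3", "T4" })
                readyQueue[0].add(t);

            for (int tick = 0; tick < 2; tick++) {
                for (int c = 0; c < CORES; c++) {
                    String next = readyQueue[c].poll();          // own queue first
                    if (next == null) next = steal(readyQueue, c);  // pull migration
                    System.out.printf("tick %d, core %d runs %s%n", tick, c, next);
                }
            }
        }

        // Take a thread from the back of some other core's ready queue.
        static String steal(Deque<String>[] queues, int self) {
            for (int c = 0; c < queues.length; c++) {
                if (c != self && !queues[c].isEmpty()) return queues[c].pollLast();
            }
            return null;  // nothing to steal anywhere; the core stays idle
        }
    }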