
Affiliated with: VOCW. This module is included in Vietnam OpenCourseWare's Lens (by Vietnam OpenCourseWare).


# Synchronization, CPU Scheduling

Module by: Duong Anh Duc

Summary: Synchronization, CPU Scheduling

## Race Conditions

A race condition occurs when two (or more) processes are each about to perform some action and the outcome depends on the exact timing of which goes first. If one of the processes goes first, everything works; if the other goes first, an error, possibly a fatal one, occurs.

Imagine two processes both accessing x, which is initially 10.

• One process is to execute x ← x+1
• The other is to execute x ← x-1
• When both are finished, x should again be 10
• But we might get 9 and might get 11!
• Show how this can happen (x ← x+1 is not atomic)
• Tanenbaum shows how this can lead to disaster for a printer spooler
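The lost-update interleaving can be made concrete by splitting each update into its separate load, modify, and store steps. A minimal single-threaded sketch (the interleaving is chosen by hand, since a real race is timing-dependent):

```python
# Simulate two processes updating a shared x, with each update split into
# its non-atomic load / modify / store steps.
x = 10

# Both processes load x before either stores its result.
tmp1 = x          # P1 loads x (10)
tmp2 = x          # P2 loads x (10)
tmp1 = tmp1 + 1   # P1 computes x+1 (11)
tmp2 = tmp2 - 1   # P2 computes x-1 (9)
x = tmp1          # P1 stores 11
x = tmp2          # P2 stores 9, overwriting P1's update

lost_update_result = x   # 9, not the expected 10
```

Running the two stores in the other order gives 11 instead; either way one update is lost.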

## Critical sections

We must prevent interleaving sections of code that need to be atomic with respect to each other. That is, the conflicting sections need mutual exclusion. If process A is executing its critical section, it excludes process B from executing its critical section. Conversely, if process B is executing its critical section, it excludes process A from executing its critical section.

Requirements for a critical section implementation.

1. No two processes may be simultaneously inside their critical section.
2. No assumption may be made about the speeds or the number of CPUs.
3. No process outside its critical section (including the entry and exit code) may block other processes.
4. No process should have to wait forever to enter its critical section.
• I do NOT make this last requirement.
• I just require that the system as a whole make progress (so not all processes are blocked).
• I refer to solutions that do not satisfy Tanenbaum's last condition as unfair, but nonetheless correct, solutions.
• Stronger fairness conditions have also been defined.

## Mutual exclusion with busy waiting

The operating system can choose not to preempt itself. That is, we do not preempt system processes (if the OS is client server) or processes running in system mode (if the OS is self service). Forbidding preemption for system processes would prevent the problem above, where x ← x+1 not being atomic crashed the printer spooler, provided the spooler is part of the OS.

But simply forbidding preemption while in system mode is not sufficient.

• Does not work for user-mode programs. So the Unix print spooler would not be helped.
• Does not prevent conflicts between the main line OS and interrupt handlers.
• This conflict could be prevented by disabling interrupts while the main line is in its critical section.
• Indeed, disabling (a.k.a. temporarily preventing) interrupts is often done for exactly this reason.
• Do not want to block interrupts for too long or the system will seem unresponsive.

Does not work if the system has several processors.

• Both main lines can conflict.
• One processor cannot block interrupts on the other.

Software solutions for two processes

Initially P1wants=P2wants=false

Code for P1:

loop forever {
    P1wants = true        <== ENTRY
    while (P2wants) {}    <== ENTRY
    critical-section
    P1wants = false       <== EXIT
    non-critical-section
}

Code for P2 (symmetric):

loop forever {
    P2wants = true        <== ENTRY
    while (P1wants) {}    <== ENTRY
    critical-section
    P2wants = false       <== EXIT
    non-critical-section
}

Explain why this works.

But it is wrong! Why? (If both processes set their want flags before either tests the other's, then P1wants = P2wants = true and both spin forever: deadlock.)

Let's try again. The trouble was that setting want before the loop permitted us to get stuck. We had them in the wrong order!

Initially P1wants=P2wants=false

Code for P1:

loop forever {
    while (P2wants) {}    <== ENTRY
    P1wants = true        <== ENTRY
    critical-section
    P1wants = false       <== EXIT
    non-critical-section
}

Code for P2 (symmetric):

loop forever {
    while (P1wants) {}    <== ENTRY
    P2wants = true        <== ENTRY
    critical-section
    P2wants = false       <== EXIT
    non-critical-section
}

Explain why this works.

But it is wrong again! Why? (Both processes can pass the while test before either sets its flag, so both may enter the critical section.)

So let's be polite and really take turns. None of this wanting stuff.

Initially turn=1

Code for P1:

loop forever {
    while (turn = 2) {}
    critical-section
    turn = 2
    non-critical-section
}

Code for P2 (symmetric):

loop forever {
    while (turn = 1) {}
    critical-section
    turn = 1
    non-critical-section
}

This one forces alternation, so is not general enough. Specifically, it does not satisfy condition three, which requires that no process in its non-critical section can stop another process from entering its critical section. With alternation, if one process is in its non-critical section (NCS) then the other can enter the CS once but not again.

The first example violated rule 4 (the whole system blocked). The second example violated rule 1 (both processes were in their critical sections). The third example violated rule 3 (one process in its NCS stopped another from entering its CS).

In fact, it took years (way back when) to find a correct solution. Many earlier “solutions” were found and several were published, but all were wrong. The first correct solution was found by a mathematician named Dekker, who combined the ideas of turn and wants. The basic idea is that you take turns when there is contention, but when there is no contention, the requesting process can enter. It is very clever, but I am skipping it (I cover it when I teach distributed operating systems in V22.0480 or G22.2251). Subsequently, algorithms with better fairness properties were found (e.g., no task has to wait for another task to enter the CS twice).

What follows is Peterson's solution, which also combines turn and wants so that alternation is forced only when there is contention. When Peterson's solution was published, it was a surprise to see such a simple solution. In fact Peterson gave a solution for any number of processes. A proof that the algorithm satisfies our properties (including a strong fairness condition) for any number of processes can be found in Operating Systems Review, Jan 1990, pp. 18-22.

Initially P1wants=P2wants=false and turn=1

Code for P1:

loop forever {
    P1wants = true
    turn = 2
    while (P2wants and turn=2) {}
    critical-section
    P1wants = false
    non-critical-section
}

Code for P2 (symmetric):

loop forever {
    P2wants = true
    turn = 1
    while (P1wants and turn=1) {}
    critical-section
    P2wants = false
    non-critical-section
}
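Peterson's algorithm can be exercised with real threads. A sketch in Python (variable names are my own; it relies on CPython's global interpreter lock making simple loads and stores behave sequentially consistently, so this is a demonstration rather than portable synchronization):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits stay short

p_wants = [False, False]     # p_wants[0] plays the role of P1wants, etc.
turn = [1]
count = 0                    # updated non-atomically inside the critical section
N = 2000

def worker(me):              # me is 0 or 1
    global count
    other = 1 - me
    for _ in range(N):
        p_wants[me] = True
        turn[0] = other + 1                       # defer to the other process
        while p_wants[other] and turn[0] == other + 1:
            pass                                  # ENTRY: busy wait
        tmp = count                               # critical section:
        count = tmp + 1                           # deliberate read-then-write
        p_wants[me] = False                       # EXIT

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```

Without the entry/exit protocol, some of the 2N increments could be lost to the race shown earlier.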

Hardware assist (test and set)

TAS(b), where b is a binary variable, ATOMICALLY sets b = true and returns the OLD value of b.

Of course it would be silly to return the new value of b since we know the new value is true.

The word atomically means that the two actions performed by TAS(x) (testing, i.e., returning the old value of x and setting, i.e., assigning true to x) are inseparable. Specifically it is not possible for two concurrent TAS(x) operations to both return false (unless there is also another concurrent statement that sets x to false).

With TAS available, implementing a critical section for any number of processes is trivial. Initially s = false.

loop forever {
    while (TAS(s)) {}   <== ENTRY
    CS
    s = false           <== EXIT
    NCS
}

## Sleep and Wakeup

Remark: Tanenbaum does both busy waiting (as above) and blocking (process switching) solutions. We will only do busy waiting, which is easier. Sleep and Wakeup are the simplest blocking primitives. Sleep voluntarily blocks the process and wakeup unblocks a sleeping process. We will not cover these.

Question: Explain the difference between busy waiting and blocking process synchronization.

## Semaphores

Remark: Tanenbaum uses the term semaphore only for blocking solutions. I will use the term for our busy waiting solutions as well. Others call our solutions spin locks.

P and V and Semaphores

The entry code is often called P and the exit code V. Thus the critical section problem is to write P and V so that

loop forever
    P
    critical-section
    V
    non-critical-section

satisfies

1. Mutual exclusion.
2. No speed assumptions.
3. No blocking by processes in NCS.
4. Forward progress (my weakened version of Tanenbaum's last condition).

Note that I use indenting carefully and hence do not need (and sometimes omit) the braces {} used in languages like C or java.

A binary semaphore abstracts the TAS solution we gave for the critical section problem.

• A binary semaphore S takes on two possible values “open” and “closed”.
• Two operations are supported
• P(S) is
• while (S=closed) {}
• S<--closed <== This is NOT the body of the while

where finding S=open and setting S<--closed is atomic

• That is, wait until the gate is open, then run through and atomically close the gate
• Said another way, it is not possible for two processes doing P(S) simultaneously to both see S=open (unless a V(S) is also simultaneous with both of them).
• V(S) is simply S<--open

The above code is not real, i.e., it is not an implementation of P. It is, instead, a definition of the effect P is to have.

To repeat: for any number of processes, the critical section problem can be solved by

loop forever
    P(S)
    CS
    V(S)
    NCS

The only specific solution we have seen for an arbitrary number of processes is the one just above with P(S) implemented via test and set.

Remark: Peterson's solution requires each process to know its process number. The TAS solution does not. Moreover, the definition of P and V does not permit use of the process number. Thus, strictly speaking, Peterson did not provide an implementation of P and V. He did solve the critical section problem.

To solve other coordination problems we want to extend binary semaphores.

• With binary semaphores, two consecutive Vs do not permit two subsequent Ps to succeed (the gate cannot be doubly opened).
• We might want to limit the number of processes in the section to 3 or 4, not always just 1.

Both of these shortcomings can be overcome by not restricting ourselves to a binary variable, but instead defining a generalized or counting semaphore.

• A counting semaphore S takes on non-negative integer values
• Two operations are supported
• P(S) is
• while (S=0) {}
• S--

where finding S>0 and decrementing S is atomic

• That is, wait until the gate is open (positive), then run through and atomically close the gate one unit
• Another way to describe this atomicity is to say that it is not possible for the decrement to occur when S=0, and it is not possible for two processes executing P(S) simultaneously to both see the same (necessarily positive) value of S, unless a V(S) is also simultaneous.
• V(S) is simply S++

These counting semaphores can solve what I call the semi-critical-section problem, where you permit up to k processes in the section. When k=1 we have the original critical-section problem.

initially S = k

loop forever
    P(S)
    SCS   <== semi-critical-section
    V(S)
    NCS
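Python's threading.Semaphore is a counting semaphore (blocking rather than busy-waiting), so the semi-critical-section pattern can be checked directly. A sketch with bookkeeping counters of my own to observe how many threads are inside at once:

```python
import threading

k = 3
S = threading.Semaphore(k)      # initially S = k
meta = threading.Lock()         # protects the bookkeeping only
inside = 0
max_inside = 0

def worker():
    global inside, max_inside
    for _ in range(200):
        S.acquire()                     # P(S)
        with meta:                      # SCS: record occupancy
            inside += 1
            max_inside = max(max_inside, inside)
        with meta:                      # leave the SCS
            inside -= 1
        S.release()                     # V(S)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
```

Despite eight competing threads, occupancy never exceeds k.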

Producer-consumer problem

• Two classes of processes
• Producers, which produce items and insert them into a buffer.
• Consumers, which remove items and consume them.
• What if the producer encounters a full buffer? Answer: It waits for the buffer to become non-full.
• What if the consumer encounters an empty buffer? Answer: It waits for the buffer to become non-empty.
• Also called the bounded buffer problem.
• Another example of active entities being replaced by a data structure when viewed at a lower level (Finkel's level principle).

Initially e=k, f=0 (counting semaphores); b=open (binary semaphore)

Producer:

loop forever
    produce-item
    P(e)
    P(b); add item to buf; V(b)
    V(f)

Consumer:

loop forever
    P(f)
    P(b); take item from buf; V(b)
    V(e)
    consume-item

• k is the size of the buffer
• e represents the number of empty buffer slots
• f represents the number of full buffer slots
• We assume the buffer itself is only serially accessible. That is, only one operation at a time.
• This explains the P(b) V(b) around buffer operations
• I use ';' and put three statements on one line to suggest that a buffer insertion or removal is viewed as one atomic operation.
• Of course this writing style is only a convention, the enforcement of atomicity is done by the P/V.
• The P(e), V(f) motif is used to force “bounded alternation”. If k=1 it gives strict alternation.
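The e/f/b scheme translates directly into Python. A sketch using threading.Semaphore and Lock (the deque buffer and the integer items are my own choices):

```python
import threading
from collections import deque

k = 4
e = threading.Semaphore(k)      # counting: empty slots, initially k
f = threading.Semaphore(0)      # counting: full slots, initially 0
b = threading.Lock()            # binary: serializes buffer access
buf = deque()
ITEMS = 500
consumed = []

def producer():
    for i in range(ITEMS):
        item = i                # produce-item
        e.acquire()             # P(e): wait for an empty slot
        with b:                 # P(b); add item to buf; V(b)
            buf.append(item)
        f.release()             # V(f): announce a full slot

def consumer():
    for _ in range(ITEMS):
        f.acquire()             # P(f): wait for a full slot
        with b:                 # P(b); take item from buf; V(b)
            item = buf.popleft()
        e.release()             # V(e): announce an empty slot
        consumed.append(item)   # consume-item

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
```

With one producer and one consumer the items come out in FIFO order and the buffer drains completely.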

## Semaphore implementation

Unfortunately, it is rare to find hardware that implements P & V directly (or messages, or monitors). They all involve some sort of scheduling and it is not clear that scheduling stuff belongs in hardware (layering). Thus semaphores must be built up in software using some lower-level synchronization primitive provided by hardware.

Need a simple way of doing mutual exclusion in order to implement P's and V's. We could use atomic reads and writes, as in "too much milk" problem, but these are very clumsy.

Uniprocessor solution: Disable interrupts.

class semaphore {
private int count;

public semaphore (int init)
{
count = init;
}

public void P ()
{
    while (1) {
        Disable interrupts;
        if (count > 0) {
            count--;
            Enable interrupts;
            return;
        } else {
            Enable interrupts;
        }
    }
}

public void V ()
{
Disable interrupts;
count++;
Enable interrupts;
}
}


What is wrong with this code?

Multiprocessor solution:

Step 1: when P fails, put process to sleep; on V just wake up everybody, processes all try P again.

Step 2: label each process with semaphore it's waiting for, then just wakeup relevant processes.

Step 3: just wakeup a single process.

Step 4: add a queue of waiting processes to the semaphore. On failed P, add to queue. On V, remove from queue.

Why can we get away with only removing one process from queue at a time?

There are several tradeoffs implicit here: how many processes in the system, how much queuing on semaphores, storage requirements, etc. The most important thing is to avoid busy-waiting.

Is it "busy-waiting" if we use the solution in step 1 above?

What do we do in a multiprocessor to implement P's and V's? Cannot just turn off interrupts to get low-level mutual exclusion.

• Turn off all other processors?
• Use atomic “add item” and “take item”, as in "producer-consumer"?

In a multiprocessor, there must be busy-waiting at some level: cannot go to sleep if do not have mutual exclusion.

Most machines provide some sort of atomic read-modify-write instruction: read the existing value and store a new one back in one atomic operation.

• E.g. Atomic increment.
• E.g. Test and set (IBM solution). Set value to one, but return OLD value. Use ordinary write to set back to zero.
• Read-modify-writes may be implemented directly in memory hardware, or in the processor by refusing to release the memory bus.

Using test and set for mutual exclusion: It is like a binary semaphore in reverse, except that it does not include waiting. 1 means someone else is already using it, 0 means it is OK to proceed. Definition of test and set prevents two processes from getting a 0->1 transition simultaneously.

Test and set is tricky to use. Using test and set to implement semaphores: For each semaphore, keep a test-and-set integer in addition to the semaphore integer and the queue of waiting processes.

class semaphore {
private int t;
private int count;
private queue q;

public semaphore(int init)
{
t = 0;
count = init;
q = new queue();
}

public void P()
{
Disable interrupts;
while (TAS(t) != 0) { /* just spin */ };
if (count > 0) {
count--;
t = 0;
Enable interrupts;
return;
}
t = 0;
Enable interrupts;
Redispatch;
}

public void V()
{
Disable interrupts;
while (TAS(t) != 0) { /* just spin */ };
if (q == empty) {
count++;
} else {
Remove first process from q;
Wake it up;
}
t = 0;
Enable interrupts;
}
}


Why do we still have to disable interrupts in addition to using test and set?

Important point: implement some mechanism once, very carefully. Then always write programs that use that mechanism. Layering is very important.

## Mutexes

Remark: Whereas we use the term semaphore to mean binary semaphore and explicitly say generalized or counting semaphore for the positive integer version, Tanenbaum uses semaphore for the positive integer solution and mutex for the binary version. Also, as indicated above, for Tanenbaum semaphore/mutex implies a blocking primitive; whereas I use binary/counting semaphore for both busy-waiting and blocking implementations. Finally, remember that in this course we are studying only busy-waiting solutions.

## Monitors

Monitors are a high-level data abstraction tool combining three features:

• Shared data.
• Operations on the data.
• Synchronization, scheduling.

They are especially convenient for synchronization involving lots of state.

Existing implementations of monitors are embedded in programming languages. Best existing implementations are the Java programming language from Sun and the Mesa language from Xerox.

There is one binary semaphore associated with each monitor; mutual exclusion is implicit: P on entry to any routine, V on exit. This synchronization is done automatically by the compiler (which inserts the calls itself), and the programmer does not see them. They come for free when the programmer declares a module to be a monitor.

Monitors are a higher-level concept than P and V. They are easier and safer to use, but less flexible, at least in raw form as above.

Probably the best implementation is in the Mesa language, which extends the simple model above with several additions to increase the flexibility and efficiency.

Do an example: implement a producer/consumer pair.

The "classic" Hoare-style monitor (using C++ style syntax):

class QueueHandler {

private:
    static const int BUFFSIZE = 200;
    int first;
    int last;
    int buff[BUFFSIZE];
    condition full;
    condition empty;

    int ModIncr(int v) {
        return (v+1)%BUFFSIZE;
    }

public:
    QueueHandler (int);
    void AddToQueue (int);
    int RemoveFromQueue ();
};

QueueHandler::QueueHandler (int val)
{
    first = last = 0;
}

void
QueueHandler::AddToQueue (int val)
{
    while (ModIncr(last) == first) {
        full.wait();
    }
    buff[last] = val;
    last = ModIncr(last);
    empty.notify();
}

int
QueueHandler::RemoveFromQueue ()
{
    while (first == last) {
        empty.wait();
    }
    int ret = buff[first];
    first = ModIncr(first);
    full.notify();
    return ret;
}


Java only allows one condition variable (implicit) per object. Here is the same solution in Java:

class QueueHandler {

    final static int BUFFSIZE = 200;
    private int first;
    private int last;
    private int[] buff = new int[BUFFSIZE];

    private int ModIncr(int v) {
        return (v+1)%BUFFSIZE;
    }

    public QueueHandler (int val)
    {
        first = last = 0;
    }

    public synchronized void AddToQueue (int val)
    {
        while (ModIncr(last) == first) {
            try { wait(); }
            catch (InterruptedException e) {}
        }
        buff[last] = val;
        last = ModIncr(last);
        notify();
    }

    public synchronized int RemoveFromQueue ()
    {
        while (first == last) {
            try { wait(); }
            catch (InterruptedException e) {}
        }
        int ret = buff[first];
        first = ModIncr(first);
        notify();
        return ret;
    }
}


Condition variables: things to wait on. Two types: (1) classic Hoare/Mesa condition variables and (2) Java condition variables.

Hoare/Mesa condition variables:

• condition.wait(): release monitor lock, put process to sleep. When process wakes up again, re-acquire monitor lock immediately.
• condition.notify(): wake up one process waiting on the condition variable (FIFO). If nobody waiting, do nothing.
• condition.broadcast(): wake up all processes waiting on the condition variable. If nobody waiting, do nothing.

Java condition variables:

• wait(): release monitor lock on current object; put thread to sleep.
• notify(): wake up one process waiting on the condition; this process will try to reacquire the monitor lock.
• notifyAll(): wake up all processes waiting on the condition; each process will try to reacquire the monitor lock. (Of course, only one at a time will acquire the lock.)

Show how wait and notify solve the semaphore implementation problem. Mention that they can be used to implement any scheduling mechanism at all. How do wait and notify compare to P and V?

Do the readers' and writers' problem with monitors.

Summary:

• Not present in very many languages (yet), but extremely useful. Java is making monitors much more popular and well known.
• Semaphores use a single structure for both exclusion and scheduling; monitors use different structures for each.
• A mechanism similar to wait/notify is used internally to Unix for scheduling OS processes.
• Monitors are more than just a synchronization mechanism. Basing an operating system on them is an important decision about the structure of the entire system.

## Message Passing

Up until now, discussion has been about communication using shared data. Messages provide for communication without shared data. One process or the other owns the data, never two at the same time.

Message = a piece of information that is passed from one process to another.

Mailbox = a place where messages are stored between the time they are sent and the time they are received.

Operations:

• Send: place a message in a mailbox. If the mailbox is full, wait until there is enough space in the mailbox.
• Receive: remove a message from a mailbox. If the mailbox is empty, then wait until a message is placed in it.

There are two general styles of message communication:

• 1-way: messages flow in a single direction (Unix pipes, or producer/consumer):
• 2-way: messages flow in circles (remote procedure call, or client/server):

Producer & consumer example:

Producer:

int buffer1[1000];
while (1) {
    -- prepare buffer1 --
    mbox.send(&buffer1);
};

Consumer:

int buffer2[1000];
while (1) {
    mbox.receive(&buffer2);
    -- process buffer2 --
};

Note that buffer recycling is implicit, whereas it was explicit in the semaphore implementation.
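In Python the mailbox abstraction maps onto queue.Queue, whose put and get already implement the blocking Send and Receive operations. A sketch (the message contents are my own):

```python
import threading
import queue

mbox = queue.Queue(maxsize=8)   # bounded mailbox: Send blocks when full,
                                # Receive blocks when empty
N = 100
received = []

def producer():
    for i in range(N):
        mbox.put(("buffer", i))         # Send: the message, not memory, is shared

def consumer():
    for _ in range(N):
        received.append(mbox.get())     # Receive: waits for the next message

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
```

Neither side ever touches the other's variables; all communication flows through the mailbox.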

Client & Server example:

Note that this looks a lot like a procedure call and return. Explain the various analogs between procedure calls and message operations:

• Parameters:
• Result:
• Name of procedure:

Why use messages?

• Many kinds of applications fit into the model of processing a sequential flow of information, including all of the Unix filters.
• The component parties can be totally separate, except for the mailbox:
• Less error-prone, because no invisible side effects: no process has access to another's memory.
• They might not trust each other (OS vs. user).
• They might have been written at different times by different programmers who knew nothing about each other.
• They might be running on different processors on a network, so procedure calls are out of the question.

Which are more powerful, messages or monitors?

Message systems vary along several dimensions:

• Relationship between mailboxes and processes:
• One mailbox per process, use process name in send and receive (simple but restrictive) [RC4000].
• No strict mailbox-process association, use mailbox name (can have multiple mailboxes per process, can pass mailboxes from process to process, but trickier to implement) [Unix].
• Extent of buffering:
• Buffering (more efficient for large transfers when sender and receiver run at varying speeds).
• None -- rendezvous protocols (simple, OK for call-return type communication, know that message was received).
• Conditional vs. unconditional ops:
• Unconditional receive: return message if mailbox is not empty, otherwise wait until message arrives.
• Conditional receive: return message if mailbox is not empty, otherwise return special "empty" value.
• Unconditional send: wait until mailbox has space.
• Conditional send: return "full" if no space in mailbox (message is discarded).

What happens with rendezvous protocols and conditional operations?

• Almost all systems allow many processes to wait on the same mailbox at the same time. Messages get passed to processes in order.
• A few systems allow each process to wait on several mailboxes at once. The process gets the first message to arrive on any of the mailboxes. This is actually quite useful (give Caesar as an example).
• Constraints on what gets passed in messages:
• None: just a stream of bytes (Unix pipes).
• Enforce message boundaries (send and receive in same chunks).
• Protected objects (e.g. a token for a mailbox).

How would the following systems fall into the above classifications?

• Condition variables
• Unix pipes

## Classical IPC Problems

### The Dining Philosophers Problem

A classical problem from Dijkstra

• 5 philosophers sitting at a round table
• Each has a plate of spaghetti
• There is a fork between each pair of adjacent philosophers
• Need two forks to eat

What algorithm do you use for access to the shared resource (the forks)?

• The obvious solution (pick up right; pick up left) deadlocks.
• Big lock around everything serializes.
• Good code in the book.

The purpose of mentioning the Dining Philosophers problem without giving the solution is to give a feel for what coordination problems are like. The book gives others as well. We are skipping these (again this material would be covered in a sequel course).
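One standard deadlock-free approach (resource ordering, not necessarily the book's solution) is to pick up the lower-numbered fork first, which breaks the circular wait. A runnable sketch:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    # Ordering the forks breaks the cycle that "pick up right, then left"
    # creates: no cycle of waiting is possible, so no deadlock.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(50):
        with forks[first]:
            with forks[second]:
                meals[i] += 1       # eat with both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

Every philosopher finishes all meals; with the naive pick-up order the program can hang forever.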

### The Readers and Writers Problem

• Two classes of processes.
• Readers, which can work concurrently.
• Writers, which need exclusive access.
• Must prevent 2 writers from being concurrent.
• Must prevent a reader and a writer from being concurrent.
• Must permit readers to be concurrent when no writer is active.
• Perhaps want fairness (e.g., freedom from starvation).
• Variants

Quite useful in multiprocessor operating systems and database systems. The “easy way out” is to treat all processes as writers, in which case the problem reduces to mutual exclusion (P and V). The disadvantage of the easy way out is that you give up reader concurrency.
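A classic solution (the "first" readers-writers algorithm, which prefers readers) uses a counter of active readers plus a semaphore that writers, and only the first and last reader, touch. A sketch; the thread counts and the shared variable are my own:

```python
import threading

wrt = threading.Semaphore(1)    # held by a writer, or by the reader group
mutex = threading.Lock()        # protects readcount
readcount = 0
shared = 0
observed = []

def reader():
    global readcount
    for _ in range(100):
        with mutex:
            readcount += 1
            if readcount == 1:
                wrt.acquire()       # first reader locks out writers
        observed.append(shared)     # read, concurrently with other readers
        with mutex:
            readcount -= 1
            if readcount == 0:
                wrt.release()       # last reader readmits writers

def writer():
    global shared
    for _ in range(100):
        wrt.acquire()               # exclusive access
        tmp = shared
        shared = tmp + 1            # non-atomic update, safe only under wrt
        wrt.release()

threads = [threading.Thread(target=reader) for _ in range(3)]
threads += [threading.Thread(target=writer) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
```

This variant can starve writers, one of the fairness concerns mentioned above.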

### The barbershop problem

The original barbershop problem was proposed by Dijkstra. A variation of it appears in Silberschatz and Galvin’s Operating Systems Concepts. A barbershop consists of a waiting room with n chairs, and the barber room containing the barber chair. If there are no customers to be served, the barber goes to sleep. If a customer enters the barbershop and all chairs are occupied, then the customer leaves the shop. If the barber is busy, but chairs are available, then the customer sits in one of the free chairs. If the barber is asleep, the customer wakes up the barber. Write a program to coordinate the barber and the customers.
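A sketch of the coordination with semaphores. The counts and the oversized waiting room are my own choices, sized so that every customer is served and the program terminates deterministically:

```python
import threading

CHAIRS = 100                        # more chairs than customers: nobody leaves
customers = threading.Semaphore(0)  # counts waiting customers
barber = threading.Semaphore(0)     # signals "barber ready for you"
mutex = threading.Lock()            # protects the waiting counter
waiting = 0
served = 0
turned_away = 0
N = 20

def barber_thread():
    global waiting, served
    for _ in range(N):
        customers.acquire()         # sleep until a customer arrives
        with mutex:
            waiting -= 1
        barber.release()            # call the next customer to the chair
        served += 1                 # cut hair

def customer_thread():
    global waiting, turned_away
    with mutex:
        if waiting < CHAIRS:
            waiting += 1
            customers.release()     # wake the barber if he is asleep
            stay = True
        else:
            turned_away += 1        # all chairs taken: leave the shop
            stay = False
    if stay:
        barber.acquire()            # sit until the barber is ready

b = threading.Thread(target=barber_thread)
cust = [threading.Thread(target=customer_thread) for _ in range(N)]
b.start()
for c in cust: c.start()
for c in cust: c.join()
b.join()
```

With a smaller CHAIRS value, some customers would take the turned_away path instead.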

## Scheduling

Until now we have talked about processes; from now on we will talk about resources, the things operated upon by processes. Resources range from CPU time to disk space to channel I/O time.

Resources fall into two classes:

• Preemptible: processor or I/O channel. Can take resource away, use it for something else, then give it back later.
• Non-preemptible: once given, it cannot be reused until process gives it back. Examples are file space, terminal, and maybe memory.

OS makes two related kinds of decisions about resources:

• Allocation: who gets what. Given a set of requests for resources, which processes should be given which resources in order to make most efficient use of the resources? Implication is that resources are not easily preemptible.
• Scheduling: how long can they keep it. When more resources are requested than can be granted immediately, in which order should they be serviced? Examples are processor scheduling (one processor, many processes), memory scheduling in virtual memory systems. Implication is that resource is preemptible.

## CPU Scheduling

Processes may be in any one of three general scheduling states:

• Running.
• Ready. That is, waiting for CPU time. Scheduler and dispatcher determine transitions between this and running state.
• Blocked. Waiting for some other event: disk I/O, message, semaphore, etc. Transitions into and out of this state are caused by various processes.

There are two parts to CPU scheduling:

• The dispatcher provides the basic mechanism for running processes.
• The scheduler is a piece of OS code that decides the priorities of processes and how long each will run.

This is an example of policy/mechanism separation.

Goals for Scheduling Disciplines

• Efficiency of resource utilization (keep CPU and disks busy).
• Minimize response time. (Define response time.)
• Distribute cycles equitably. What does this mean?

### FCFS (also called FIFO)

Run until finished.

• In the simplest case this means uniprogramming.
• Usually, "finished" means "blocked". One process can use CPU while another waits on a semaphore. Go to back of run queue when ready.
• Problem: one process can monopolize CPU.

Solution: limit maximum amount of time that a process can run without a context switch. This time is called a time slice.

### Round Robin

Run process for one time slice, then move to back of queue. Each process gets equal share of the CPU. Most systems use some variant of this. What happens if the time slice is not chosen carefully?

Originally, Unix had 1 sec. time slices. Too long. Most timesharing systems today use time slices of 10,000 - 100,000 instructions.

Implementation of priorities: run highest priority processes first, use round-robin among processes of equal priority. Re-insert process in run queue behind all processes of greater or equal priority.

Even round-robin can produce bad results occasionally. Go through example of ten processes each requiring 100 time slices.
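The ten-process example can be worked out numerically. With ten processes that each need 100 time slices, FCFS completes them at times 100, 200, ..., 1000 (average 550), while round robin makes every process wait until nearly the end (average 995.5). A sketch of the arithmetic (helper names are mine):

```python
def fcfs_completion(bursts):
    """Completion time of each process under run-to-completion FCFS."""
    t, done = 0, []
    for b in bursts:
        t += b
        done.append(t)
    return done

def rr_completion(bursts, slice_len=1):
    """Completion time of each process under round robin."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    t = 0
    pending = list(range(len(bursts)))
    while pending:
        nxt = []
        for i in pending:
            run = min(slice_len, remaining[i])   # run one time slice
            t += run
            remaining[i] -= run
            if remaining[i] == 0:
                done[i] = t                      # finished this round
            else:
                nxt.append(i)                    # back of the queue
        pending = nxt
    return done

bursts = [100] * 10
fcfs = fcfs_completion(bursts)       # 100, 200, ..., 1000
rr = rr_completion(bursts)           # 991, 992, ..., 1000
avg_fcfs = sum(fcfs) / 10            # 550.0
avg_rr = sum(rr) / 10                # 995.5
```

Round robin is fair slice by slice, yet every process completes later on average than under FCFS when the jobs are identical.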

What is the best we can do?

### STCF

Shortest time to completion first with preemption. This minimizes the average response time.

As an example, show two processes, one doing 1 ms computation followed by 10 ms I/O, one doing all computation. Suppose we use 100 ms time slice: I/O process only runs at 1/10th speed, effective I/O time is 100 ms. Suppose we use 1 ms time slice: then compute-bound process gets interrupted 9 times unnecessarily for each valid interrupt. STCF works quite nicely.

Unfortunately, STCF requires knowledge of the future. Instead, we can use past performance to predict future performance.

### Exponential Queue (also called "multi-level feedback queues")

Attacks both efficiency and response time problems.

• Give newly runnable process a high priority and a very short time slice. If process uses up the time slice without blocking then decrease priority by 1 and double time slice for next time.
• Go through the above example, where the initial values are 1ms and priority 100.
• Techniques like this one are called adaptive. They are common in interactive systems.
• The CTSS system (MIT, early 1960's) was the first to use exponential queues.

Fair-share scheduling as implemented in Unix:

• Figure out each process' "share" of CPU, based on number of processes and priorities.
• Keep a history of recent CPU usage for each process: if it is getting less than its share, boost priority. If it is getting more than its share, reduce priority.
• Careful: could be unstable!

Summary:

• In principle, scheduling algorithms can be arbitrary, since the system should behave the same in any event.
• However, the algorithms have crucial effects on the behavior of the system:
• Overhead: number of context swaps.
• Efficiency: utilization of CPU and devices.
• Response time: how long it takes to do something.
• The best schemes are adaptive. To do absolutely best, we would have to be able to predict the future.

## Priority Inversion Problem

There are some curious interactions between scheduling and synchronization. A classic problem caused by this interaction was first observed in 1979 by Butler Lampson and David Redell at Xerox.

Suppose that you have three processes:

• P1: highest priority
• P2: medium priority
• P3: lowest priority

And suppose that you have the following critical section, S:

S:
    mutex.P()
    . . .
    mutex.V()

The three processes execute as follows:

1. P3 enters S, locking the critical section.
2. P3 is preempted by the scheduler and P2 starts running.
3. P2 is preempted by the scheduler and P1 starts running.
4. P1 tries to enter S and is blocked at the P operation.
5. P2 starts running again, preventing P1 from running.

So, what's going wrong here? To really understand this situation, you should try to work out the example for yourself, before continuing to read.

• As long as process P2 is running, process P3 cannot run.
• If P3 cannot run, then it cannot leave the critical section S.
• If P3 does not leave the critical section, then P1 cannot enter.

As a result, P2 running (at medium priority) is blocking P1 (at highest priority) from running. This example is not an academic one. Many designers of real-time systems, where priority can be crucial, have stumbled over this issue. You can read the original paper by Lampson and Redell to see their suggestion for handling the situation.
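The scenario can be replayed with a small hand-rolled simulation of a strict-priority scheduler (the step counts and the `simulate` helper are hypothetical, chosen only to make the schedule visible):

```python
def simulate(p2_bursts, p3_cs_left=3):
    """Step a strict-priority scheduler: always run the highest-priority
    runnable process.  P3 (low) already holds the mutex for S, P1 (high)
    is blocked on it, and P2 (medium) has p2_bursts steps of CPU work."""
    mutex_holder = "P3"
    trace = []
    while mutex_holder == "P3":       # P1 stays blocked on the mutex
        if p2_bursts > 0:             # P2 (medium) outranks P3 (low)...
            trace.append("P2")
            p2_bursts -= 1
        else:                         # ...so P3 runs only when P2 is idle
            trace.append("P3")
            p3_cs_left -= 1
            if p3_cs_left == 0:
                mutex_holder = "P1"   # P3 releases; P1 finally enters S
    trace.append("P1")                # highest priority runs at last
    return trace

trace = simulate(p2_bursts=5)
# P1 (highest priority) runs only after ALL of P2's medium-priority work:
# ['P2', 'P2', 'P2', 'P2', 'P2', 'P3', 'P3', 'P3', 'P1']
```

Giving P3 more critical-section steps or P2 more work only lengthens P1's wait, which is exactly the inversion: P1's effective priority has dropped below P2's.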
