
Using Temporal Logic to Specify Properties

Module by: Ian Barland, John Greiner, Moshe Vardi. Part of the collection ``Model Checking Concurrent Programs''.

So far, we've seen some of the built-in checks performed by SPIN: assertions, deadlock, non-progress, and one notion of fairness. But what about other properties we'd like to check for? We might be interested in properties which the implementors of our particular tool weren't interested in, or hadn't thought of. Fortunately, SPIN does include a general property-checking mechanism: If we can state our property in a particular formal temporal logic, then SPIN will allow us to check the property.

First, we'll introduce the syntax and semantics of one particular temporal logic, LTL. Then, we'll return to SPIN and how it uses temporal logic for verification.

First, we reflect that some properties — safety properties — can be phrased as ``some particular state is unreachable''. For example, a state where two processes are sending information to the printer simultaneously never occurs. Failed assertions can be viewed as a transition to a fail-state, and we require that this state is unreachable. Deadlock can be viewed as a state with no legal outgoing transition, where not all processes are sitting in an end-state.

Similarly, we reflect that other properties — liveness properties — are about entire traces, not just some state in a trace. For instance, a non-progress cycle is a cycle which does not contain any states labeled as progress states.

We want a richer way of talking about traces — one that is precise and not subject to English's vagaries. We'll work towards using logic — a temporal logic specifically designed to be conducive to expressing concepts which arise in concurrent programming. But before we can even do that, we need a clear model of concurrent programs, and more formal definitions of some ideas we've already seen.

Concurrent Programs as State Space Automata

The notions we've seen of states, traces, and state spaces aren't specific to Promela, but are useful in any formalization of concurrent systems. In fact, we are ready to give one precise definition of a ``concurrent system'' in terms of states and traces. It is similar to many conventional automata definitions, but includes a way to relate the current state to propositions, out of which we can later construct formulas.

Definition 1: State Space Automaton
A State Space Automaton A is a tuple ⟨S, s0, T, Prop, P⟩ where:
  • S is a set of states. As before, a state is an assignment to all program variables (both local and global), including the program counter (line number) for each process.
  • s0 ∈ S is the initial state or start state.
  • T ⊆ S × S is the transition relation. Equivalently, this is the edge relation of the state space.
  • Prop is a set of propositions.
  • P : S → 2^Prop is a mapping which, given a state, indicates all the propositions that are true in that state.

It is the set Prop which will allow us to talk about our program variables in our logic formulas. The elements of Prop are simple propositions involving program variables. For example,

  • philosopher_2_has_fork
  • (ctr > 3)
  • (in[].size() == max_buffer_size) (that is, the buffer in[] is full)
Each of these might be true in some states and false in other states; the function P gives a truth assignment to the propositions (a different truth assignment for each state).
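To make Definition 1 concrete, here is a toy automaton written as plain Python data. This is purely an illustrative sketch — the states, transitions, and proposition names are invented for this example, not taken from SPIN or any particular Promela model.

```python
# A toy state space automaton <S, s0, T, Prop, P> (Definition 1).
# In a real model, each state would be a full assignment to all program
# variables plus each process's program counter; here states are just labels.

S = [0, 1, 2, 3]
s0 = 0

# Transition relation T, a subset of S x S, as a set of pairs.
# Every state has a self-loop, which supports stutter-extension later on.
T = {(0, 1), (1, 2), (2, 3), (0, 0), (1, 1), (2, 2), (3, 3)}

# Prop: atomic propositions we may mention in formulas.
Prop = {"philosopher_2_has_fork", "ctr_gt_3"}

# P : S -> 2^Prop, the labeling function: which propositions hold in each state.
P = {
    0: set(),
    1: {"ctr_gt_3"},
    2: {"ctr_gt_3", "philosopher_2_has_fork"},
    3: {"philosopher_2_has_fork"},
}

# The same proposition gets a different truth value in different states:
print("ctr_gt_3" in P[1], "ctr_gt_3" in P[3])
```

Here the proposition `ctr_gt_3` plays the role of `(ctr > 3)` above: its truth is looked up per state via P, rather than evaluated globally.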

Definition 2: Trace
Given a state space automaton, a trace σ (sometimes called an ω-trace) is a (possibly infinite) sequence of states σ0, σ1, σ2, … which respects the automaton's transition relation T.

Thus a trace is a path through the state space (as we have already seen). Now that we have logic propositions associated with each state, we can see that during a trace, those propositions will change their truth value over time. We'll illustrate this with diagrams like the following.

Figure 1: A timing diagram for a trace σ, and propositions p and q. Each propositional variable's value during the trace is depicted as a line. When the line is high, the proposition is true; when low, false.
Figure 1 (always_if_p_q_timeline.png)

In this example, p and q are propositions from Prop. Since in the trace σ, we have p being true at (for example) index 2, we write σ2 ⊨ p. In this diagram, it is also the case that σ0 ⊨ (p → q).

Although a bit unintuitive at first, it will be convenient to convert all finite traces into infinite ones. To do the conversion, we simply envision that once the automaton reaches its ostensibly last (nth) state, it lingers in that state, repeating it over and over:

Definition 3: Stutter-Extend
The finite trace σ0, …, σn can be stutter-extended to the infinite trace σ0, …, σn, σn, σn, ….

To ensure this still satisfies the definition of a trace (where successive states must obey the automaton's transition relation), it is often assumed that every state has a transition to itself. Note that this convention precludes assuming that something must change between one state and the next, which is plausible since we are modeling asynchronous systems. This will be reflected in our formal logic below, which will have no built-in primitive for ``the-next-state''.
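A quick sketch of stutter-extension in Python (the function name is ours, not SPIN's): rather than building an infinite sequence, we can index past the end of a finite trace by clamping to the last state.

```python
# Stutter-extension (Definition 3): the finite trace's last state repeats
# forever. We model a state as the set of propositions true in it.

def stutter(trace, i):
    """State at position i of the stutter-extended version of `trace`."""
    return trace[min(i, len(trace) - 1)]

sigma = [{"p"}, {"p", "q"}, {"q"}]   # a finite trace of three states

print(sorted(stutter(sigma, 1)))     # inside the trace
print(sorted(stutter(sigma, 100)))   # past the end: the last state, forever
```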

State Formulas

Now that we have a formal model of what a trace is, we can start to make formal statements about what happens in a trace. The simplest statements are state formulas, i.e., propositional formulas built out of Prop. For a given trace σ and state formula ψ, we can immediately decide whether a particular state σi satisfies ψ (notationally, ``σi ⊨ ψ'').

Notation Abuse Alert:

Deciding whether some particular state of a trace satisfies a formula also depends implicitly on the automaton Σ. Technically we should write Σ, σi ⊨ ψ, though in practice Σ is clear from context.

Example 1

For example, given a trace σ, we might ask about what is happening at σ957, and whether σ957 ⊨ ¬Proc1_is_printing, or whether σ957 ⊨ ((ctr > 3) ∧ philosopher_2_has_fork). The answer would be found by taking the automaton's function P, and seeing what values the truth assignment P(σ957) assigns to our formula's individual propositions such as (ctr > 3).

Example 2

Consider the trace σ we saw previously, which is repeated here.

Figure 2: A timing diagram for a trace σ, and propositions p and q.
Figure 2 (always_if_p_q_timeline.png)

For which i does σi ⊨ (p ∧ q)? For which i does σi ⊨ (p ∨ q)? How about σi ⊨ (p → q)?

However, it doesn't make sense a priori to ask whether an entire trace satisfies some particular state formula; unlike regular propositional logic, the truth of state formulas changes over time, as the trace σ progresses.

Introducing Temporal Connectives

So, how do we talk about formulas holding (or not) over time? In addition to making formulas out of ∧, ∨, ¬ and propositions from the set Prop, we'll allow the use of temporal connectives.

  • □ — always, or henceforth: We say that □φ is true at a moment i, iff φ is true from moment i onwards.
  • ◇ — eventually: We say that ◇φ is true at moment i, iff φ will eventually be true at moment i or later.
  • U — strong until: We say that (φ U ψ) is true at moment i, iff ψ eventually becomes true, and until then φ is true.
  • W — weak until: Like strong until, but without the requirement that ψ eventually becomes true.

As a mnemonic for the symbols ``□'' and ``◇'', we can imagine a square block of wood sitting on a table. Orienting it like □, it will always sit there; orienting it like ◇, it will eventually teeter.

More formally, we define these connectives as follows.

Definition 4: Always
σi ⊨ □φ iff ∀j ≥ i. σj ⊨ φ


Figure 3: A generic trace σ satisfying σi ⊨ □p. p is true in state i and all later states.
Figure 3 (always_p_timeline.png)

Exercise 1

Decide whether each is true or false for the following trace σ (the same one seen earlier).

Figure 4: A timing diagram for a trace σ, and propositions p and q
Figure 4 (always_if_p_q_timeline.png)
  1. σ9 ⊨ □q
  2. σ9 ⊨ □¬p
  3. σ0 ⊨ □q
  4. σ0 ⊨ □(p → q)


  1. True; from σ9 onwards, q stays true.
  2. True. Even stronger, σ7 ⊨ □¬p.
  3. False, since σ3 ⊭ q.
  4. True, since at each individual index i (from 0 on up), σi ⊨ (p → q).
Definition 5: Eventually
σi ⊨ ◇φ iff ∃k ≥ i. σk ⊨ φ


Figure 5: A generic trace σ satisfying σi ⊨ ◇q. q is true in some state k after state i.
Figure 5 (eventually_q_timeline.png)
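Definitions 4 and 5 can be sketched directly in Python over finite traces read as stutter-extended (the trace below is illustrative, not the one from the figures). Because every state past the end equals the last state, ``from i onwards'' reduces to a finite check over the remaining states.

```python
# Always ([]) and Eventually (<>) on a finite trace read as stutter-extended.
# A trace is a list of states; a state is the set of propositions true in it.

def suffix(trace, i):
    # States from position i onward; past-the-end indices clamp to the last
    # state, which stands in for the entire stuttered tail.
    return trace[min(i, len(trace) - 1):]

def always(trace, i, p):
    """sigma_i |= []p : p holds in every state from i onwards."""
    return all(p in s for s in suffix(trace, i))

def eventually(trace, i, p):
    """sigma_i |= <>p : p holds in some state at i or later."""
    return any(p in s for s in suffix(trace, i))

sigma = [{"p"}, set(), {"p", "q"}, {"q"}, {"q"}]  # an illustrative trace

print(always(sigma, 3, "q"))      # q stays true from index 3 onwards
print(always(sigma, 0, "p"))      # fails: p is false at index 1
print(eventually(sigma, 0, "q"))  # q does show up, at index 2
```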

Exercise 2

Decide whether each is true or false for the trace σ seen earlier.

  1. σ9 ⊨ ◇p
  2. σ9 ⊨ ◇¬p
  3. σ0 ⊨ ◇q
  4. σ0 ⊨ ◇¬q
  5. σ9 ⊨ ◇(q ∧ p)
  6. σ0 ⊨ ◇¬(p → q)


  1. False; from σ9 onwards, p stays false.
  2. True. This is a much weaker statement than σ9 ⊨ □¬p, which we already saw was true.
  3. Extremely true, since q is already true in σ0 (and elsewhere, too).
  4. True, since ¬q is true in σ3 (and elsewhere too).
  5. False.
  6. False. We already mentioned that σ0 ⊨ □(p → q); if you think about it, this means that σ0 ⊨ ¬◇¬(p → q). (While we won't spend time discussing algebraic equivalences for LTL formulas, this is certainly reminiscent of DeMorgan's Law.)

Before continuing on with our two other connectives (strong- and weak-until), let's pull back a bit and look at our notation.

Notation Abuse Alert:

The nit-picky — er, the careful reader will have noticed that although we write σi ⊨ …, really, the truth value of the formula depends not on just a single state, but rather that state and the entire suffix of the trace after that. So writing σ, i ⊨ … would be more accurate. Moreover, we are using not just the trace but also the timing diagrams, which are just the graph of an automaton A's mapping function (``P''). Thus the correct formulation really needs to be written A, σ, i ⊨ … (and it is required that σ be a legal trace with respect to the transitions of A). However, now that we realize our abuse of notation, we'll revert and just write σi ⊨ ….
In practice, we are often considering an entire trace, and wondering whether some property holds always or eventually. It's extremely natural to extend our notation from particular indices to the trace itself:


Trace σ models formula φ (``σ ⊨ φ'') iff σ0 ⊨ φ.
For example, from the above exercises, since σ0 ⊨ □(p → q), we say simply σ ⊨ □(p → q).

Definition 6: Strong Until
σi ⊨ (φ U ψ) iff ∃k ≥ i. (σk ⊨ ψ ∧ ∀j. (i ≤ j < k → σj ⊨ φ)).


Figure 6: A generic trace σ satisfying σi ⊨ (r U s). r is true in states i through (k−1).
Figure 6 (suntil_r_s_timeline.png)

Exercise 3

Decide whether each is true or false for the following trace σ (the same trace seen earlier).

Figure 7: A timing diagram for a trace σ, and propositions p and q.
Figure 7 (always_if_p_q_timeline.png)
  1. σ0 ⊨ (q U p)
  2. σ2 ⊨ (q U p)
  3. σ5 ⊨ (q U p)
  4. σ9 ⊨ (q U p)


  1. True. q is true in σ0, and it doesn't take long before p becomes true in σ1.
  2. False. Starting at σ2, we see that p doesn't become true until σ5, yet q doesn't stay true that long.
  3. Very true, since p is already true in σ5. (In the definition, take k = 5; then there aren't any j to even worry about.)
  4. False. Since p is false from σ9 onward, we can't even find a k ≥ 9 such that σk ⊨ p. However, it is true that σ9 ⊨ (q W p).

Exercise 4

Give the formal semantics of Weak Until (in the same way we have given semantics for □, ◇, and Strong Until).


σi ⊨ (φ W ψ) iff ((∀j ≥ i. σj ⊨ φ) ∨ (∃k ≥ i. (σk ⊨ ψ ∧ ∀j. (i ≤ j < k → σj ⊨ φ))))

Exercise 5

Weak Until could be omitted from our core set of LTL connectives, since it can be defined in terms of the remaining connectives. Provide such a definition. That is, give an LTL formula equivalent to (φ W ψ).


Here are three equivalent definitions. The first follows directly from the above exercise.

  • (φ W ψ) ≡ (□φ ∨ (φ U ψ))
  • (φ W ψ) ≡ (◇¬φ → (φ U ψ))
  • (φ W ψ) ≡ (φ U (ψ ∨ □φ))
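Equivalences like these can be tested mechanically. The sketch below (our own construction, using the stutter-extension convention for finite traces) implements Strong Until from Definition 6 and Weak Until from Exercise 4, then brute-forces the third equivalence, (φ W ψ) ≡ (φ U (ψ ∨ □φ)), over every trace of length up to 4 on two propositions.

```python
from itertools import product

# Connective arguments are predicates f(trace, k) -> bool, so formulas nest.
# Finite traces are read as stutter-extended: the last state repeats forever.

def clamp(trace, i):
    return min(i, len(trace) - 1)

def always(trace, i, f):                 # sigma_i |= []f
    return all(f(trace, k) for k in range(clamp(trace, i), len(trace)))

def until(trace, i, f, g):               # sigma_i |= (f U g), Definition 6
    c = clamp(trace, i)
    for k in range(c, len(trace)):
        if g(trace, k):
            # It suffices to check the earliest witness: if f fails before
            # it, f also fails before any later witness.
            return all(f(trace, j) for j in range(c, k))
    return False

def weak_until(trace, i, f, g):          # Exercise 4's semantics, verbatim
    return always(trace, i, f) or until(trace, i, f, g)

p = lambda tr, k: "p" in tr[k]
q = lambda tr, k: "q" in tr[k]
rhs = lambda tr, k: q(tr, k) or always(tr, k, p)   # (psi or []phi)

states = [frozenset(s) for s in (set(), {"p"}, {"q"}, {"p", "q"})]
ok = all(
    weak_until(list(tr), i, p, q) == until(list(tr), i, p, rhs)
    for n in range(1, 5)
    for tr in product(states, repeat=n)
    for i in range(n)
)
print(ok)
```

Of course, exhaustive checking over short traces is evidence, not a proof; the equivalence itself holds over all infinite traces.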

Any logic that includes temporal connectives is a temporal logic. More specifically, this is Linear Temporal Logic (LTL). The ``linear'' part of this name means that we are looking only at individual traces, rather than making formulas that discuss all possible traces of a program. After all, a single program can result in many possible traces. An LTL formula does not allow us to say, for example, that a property holds in all possible executions of the program. However, SPIN overcomes some limitations by essentially testing an LTL formula against all possible traces. We will return to SPIN in the next section.

In summary, the syntax used in SPIN is given by the following grammar.

Table 1: SPIN's Grammar for LTL formulas
ltl ::=   atom  
    | (ltl binop ltl)  
    | unop ltl  
    | (ltl)  
unop ::=   [] ``Always''
    | <> ``Eventually''
    | ! ``Not''
binop ::=   U ``Strong Until''
    | W ``Weak Until''
    | && ``And''
    | || ``Or''
atom ::=   true  
    | false  
    | name #define'd identifiers


Some presentations of LTL use a box or ``G'' (Globally) for Always, and use a diamond or ``F'' for Eventually. Confusingly, some use ``U'' for Weak Until. Also, some include additional connectives like ``X'' (Next), which cannot be defined in terms of those given.

Do we really need a whole new logic?

We could use regular ol' first-order logic as our formal language for specifying temporal properties. We'd do this by adding an explicit time index to each proposition in Prop, turning it into a unary relation. For example, instead of considering Philosopher_B_has_fork in state 17, we'd instead write Philosopher_B_has_fork(17), and so on.

This approach, though, quickly becomes cumbersome. Consider the everyday concept ``Someday, X will happen''. We have to go through some contortions to shoehorn this concept into first-order logic: ``There exists a day dd, such that dd is today or later, and X will happen on dd''. Quite a mouthful for what is about the simplest temporal concept possible!

The awkwardness stems from the fact that in English, ``eventually'', ``always'', and ``until'' are primitives. When we design a formal language to capture these concepts, shouldn't we too include them as primitives? Having our formal representations reflect our natural conceptions of problems is the first law of programming!

This is why we have decided to incur the overhead of defining a new language, LTL. Still, you will have noticed that the semantics we've given are phrased in terms of ``∀j ≥ i.…'' and ``∃k ≥ i.…''. From a language designer's perspective, we might say that LTL is a high-level language, and we give an interpreter for it written in first-order logic (over the domain of trace-indices).

Translating between English and LTL: Examples

Let's look at how to turn a natural-English specification into temporal logic, and vice versa.

Example 3

How might we express the common client/server relationship ``if there is a request (r), then it will be serviced (s)''?

Hmm, that seems obvious: (r → s) should do it, no? Alas, no. The problem is, the truth of this formula — which has no temporal connectives at all — only depends upon the trace's initial state, σ0 — in this case, whether σ0 ⊨ (¬r ∨ s). This certainly doesn't match our intent, since we want the request and service to be able to happen at points in the future. Let's try again…

Problem 1

Does the formula □(r → s) capture the English notion ``if r is true, then s will be true''?


That's only a bit better: it now requires the implication (r → s) to hold in every state σi, not just σ0. But our intent is to allow the servicing s to happen after the request r. The formula as given only expresses ``Whenever r is true, then s is also true at that moment.'' If r and ¬s both hold at the same time (which wouldn't be surprising), then □(r → s) fails for that trace.

After more thought, we can get something which does arguably match our intent: □(r → ◇s). ``Whenever a request is made, a servicing will eventually happen.''

However, even this may not be what everybody wants. Some considerations:

  • Even if r is raised (and lowered) several times, a single moment of s can satisfy the entire trace. This matches an intuition of r indicating a single ring of a telephone (and s being ``pick up the phone''), but may not match the intuition of r being ``leave a voicemail'' (and s being ``respond to a voicemail'').
    Figure 8: A trace showing multiple requests followed by a single servicing.
    Figure 8 (request_service_timeline1.png)
  • A trace in which s is always true (regardless of r) certainly satisfies φ. Did you expect the English statement to encompass this?
  • Any trace in which requests are serviced instantaneously will satisfy φ. While this might be intended, it might also be a bug of leftover servicings from previous requests.
    Figure 9: A trace showing multiple requests and a single servicing. The servicing arguably corresponds to only the first request and might be an undesired behavior.
    Figure 9 (request_service_timeline2.png)
  • A trace in which a request is never made still satisfies this formula. While that is probably what is intended, it may not have occurred to you from the English statement alone.
One advantage to having a formal language is that because the semantics are precisely defined, it helps us think about corner cases such as the above (and gives an unambiguous meaning to the result, which is good for writing specs).
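These corner cases can be replayed mechanically. The sketch below (our own helper, using stutter-extension for finite traces) evaluates □(r → ◇s) on hand-built traces mirroring the considerations above.

```python
# Evaluating phi = [](r -> <>s) on finite traces read as stutter-extended.
# A state is the set of propositions (among "r", "s") true at that instant.

def eventually(trace, i, prop):
    return any(prop in s for s in trace[min(i, len(trace) - 1):])

def phi(trace):                      # trace |= [](r -> <>s)
    return all("r" not in trace[i] or eventually(trace, i, "s")
               for i in range(len(trace)))

ring_twice_answer_once = [{"r"}, set(), {"r"}, set(), {"s"}]
service_always_on      = [{"s"}, {"s"}, {"s"}]
instantaneous          = [{"r", "s"}, set()]
no_request_ever        = [set(), set()]
request_never_serviced = [{"r"}, {"r"}]

for t in (ring_twice_answer_once, service_always_on, instantaneous,
          no_request_ever, request_never_serviced):
    print(phi(t))
```

The first four traces all satisfy the formula, for exactly the reasons listed in the bullets; only the trace where a request is held forever without service fails.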

Example 4

As a more complicated variation, we generally want to match requests and their servicing. For example, what if there are two requests, r1 and r2, arriving in that order? In a particular context, do we expect that there are also exactly two servicings, s1 and s2? If so, do we also expect them in that order?

Problem 1

As a partial solution, how can we express ``whenever r1, s1 will happen before s2''?


If we are also requiring that s2 indeed actually happen, then we can use the following: □((r1 ∧ ◇s2) → (¬s2 U s1)).

Note that since LTL does not have quantifiers (like ∀ and ∃) we cannot use variable indexing in our formulas. In other words, we can't talk about ri and sj, but instead must always refer to specific variables like r1 and s2.

From these examples and the ones following, we can see that English is typically too ambiguous. To get the appropriate logic formula, we need to go back to the original problem and determine what was truly meant. Formalizing your specifications into temporal logic gives you the opportunity to closely examine what those specifications really should be.

Given a formula, try to understand it by translating it into English. Providing sample timelines that satisfy the formula is also quite helpful.

Exercise 6

Describe the meaning of □◇p in words and a timeline.


Literally, we can simply say ``always, eventually p''. While that phrase is fairly common among logicians, it's not very natural or meaningful to most people. A clearer, but more long-winded, equivalent is ``at any point in time, p will happen in the future''.

More concise, though, is ``p happens infinitely often''. To understand that, consider a timeline. In words, at any time t0, p will happen sometime in the future; call that time t1. And at moment t1+1, p will happen sometime in the future; call that t2. Repeat forever.

Figure 10: The beginning of a trace where p happens infinitely often.
Figure 10 (always_eventually_p_timeline.png)

Exercise 7

Describe the meaning of □(q → □¬p) in words and a timeline.


Again, a literal translation is ``always, if q then always not p''. Correct, but more idiomatic English would be ``whenever q, then forever not p'' or ``once q, p is forever false''.

Figure 11: The beginning of a trace satisfying the property.
Figure 11 (eventually_if_q_always_not_p_timeline.png)

Example 5

A railroad wants to make a new controller for single-track railroad crossings. Naturally, they don't want any accidents with cars at the crossing, so they want to verify their controller. Their propositions Prop include train_is_approaching, train_is_crossing, light_is_flashing, and gate_is_down.

Problem 1

Brainstorming: Using natural English, what are some properties we'd like to have true? Feel free to use words like ``always'', ``eventually'', ``while'', and ``until''. You may add new propositions to Prop, if you think it's appropriate.

For each, feel free to demonstrate that the one property is not sufficient: give a trace which satisfies the property but is either unacceptable or unrealistic. (For instance, ``the gate is always down'' is safe but unacceptable; ``a train is always crossing'' is unrealistic since there aren't infinite trains.)


We can think of lots of examples, such as these.

  • Whenever a train is passing, the gate is down.
  • If a train is approaching or passing, then the light is flashing.
  • If the gate is up and the light is not flashing, then no train is passing or approaching.
  • If a train is approaching, the gate will be down before the train passes.
  • If a train has finished passing, then later the gate will be up.
  • The gate will be up infinitely many times.
  • If a train is approaching, then it will be passing, and later it will be done passing with no train approaching. (Thus, trains aren't infinitely long, and there are gaps between the trains.)

To formalize such statements, we would start with the primitive propositions involved. These could be

  • a (``a train is approaching the crossing'')
  • p (``a train is passing the crossing'')
  • l (``the light is flashing'')
  • g (``the gate is down'')

This choice forces us not to refer to individual trains, and thus we must simplify some of our properties, e.g., ``If a train is approaching, the gate will be down before the next train passes.'' (Think about the consequences of not making this change.)

Some of these English descriptions are ambiguous, however. E.g., can a single train be approaching and passing simultaneously? When writing formal specifications, we'll be forced to think about what we mean to say, and provide an unambiguous answer, one way or the other. Continuing this example, we'll say that once a train is passing, it is no longer considered to be approaching.

Problem 2

Encode some of the properties from the previous exercise in temporal logic. It is often helpful to first turn the statement into a timeline (essentially a trace).

  1. ``Whenever a train is passing, the gate is down.''
  2. ``If a train is approaching or passing, then the light is flashing.''
  3. ``If the gate is up and the light is not flashing, then no train is passing or approaching.''
  4. ``If a train is approaching, the gate will be down before the next train passes.''
  5. ``If a train has finished passing, then later the gate will be up.''
  6. ``The gate will be up infinitely many times.''
  7. ``If a train is approaching, then it will be passing, and later it will be done passing with no train approaching.''

Keep in mind that for any concept, there are many different (but equivalent) formulas for expressing it.

  1. □(p → g)
  2. □((a ∨ p) → l)
  3. □((¬g ∧ ¬l) → (¬p ∧ ¬a))
  4. Figure 12: A trace satisfying this property.
    Figure 12 (train_timeline1.png)
    Observe in the trace shown that in one instance, the gate goes down at the same time the train passes. In the LTL version we're using, we can't express the idea ``… strictly before the next train passes.'' Instead, we'll allow this simultaneity as a possibility. □(a → (¬p W g)) I.e., if a train is approaching, then a train is not passing until the gate is down. However, this still allows that the gate could be back up by the time a train passes, and a train might never pass.
  5. Figure 13: A trace satisfying this property.
    Figure 13 (train_timeline2.png)
    □((p U ¬p) → ◇(¬p ∧ g)) Since our logic cannot talk about the past, we have to shift our perspective to start while the train is still passing. We've effectively reworded the statement to say that if a train is passing and will finish passing, then sometime after it is no longer passing, the gate will be up. However, the status of the gate at other times is not commented on. We're only saying there is one moment after the train passes that the gate is up.
  6. □◇¬g
  7. Figure 14: A trace satisfying this property.
    Figure 14 (train_timeline3.png)
    □(a → ◇(p ∧ ◇(¬p ∧ ¬a)))

This last condition is not something that a controller would enforce. Instead, we would expect the train scheduling software to enforce this. The controller would be allowed to assume this, and we'd want to verify a property ψ only for traces that also satisfy our scheduling assumptions φ. Of course, we can also do this by incorporating assumptions into the formulas: (φ → ψ).
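As a sanity check, here is a sketch (our own, with an invented timeline — not the book's figures) that evaluates formulas 1 and 4 on a plausible crossing scenario, again reading the finite trace as stutter-extended.

```python
# Checking [](p -> g) and [](a -> (!p W g)) on an invented crossing timeline.
# States are sets of the true propositions among a, p, l, g.

def holds(state, lit):               # a literal is "p" or "!p"
    return (lit[1:] not in state) if lit.startswith("!") else (lit in state)

def weak_until(trace, i, a, b):      # sigma_i |= (a W b)
    for k in range(min(i, len(trace) - 1), len(trace)):
        if holds(trace[k], b):
            return True              # b reached; a held at every prior step
        if not holds(trace[k], a):
            return False             # a failed before b ever held
    return True                      # a held to the end of the trace: []a

def always_implies(trace, prem, concl):
    # [](prem -> concl), where concl is a function of the position i.
    return all(not holds(trace[i], prem) or concl(i)
               for i in range(len(trace)))

timeline = [
    set(),            # quiet crossing
    {"a", "l"},       # train approaching, light flashing
    {"a", "l", "g"},  # gate goes down while the train approaches
    {"p", "l", "g"},  # train passing, gate down
    {"l", "g"},       # train gone, gate still down
    set(),            # quiet again
]

ok1 = always_implies(timeline, "p", lambda i: holds(timeline[i], "g"))
ok2 = always_implies(timeline, "a",
                     lambda i: weak_until(timeline, i, "!p", "g"))
print(ok1, ok2)
```

Both properties hold on this timeline; dropping the gate state from the passing instant would make the first one fail.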

Exercise 8

We've previously seen that SPIN knows some built-in concepts such as deadlock, livelock, weak- and strong-fairness. We'll see in the next section that SPIN can also be used to verify any LTL formula. But first, let's characterize some of those built-in concepts as LTL formulas using the context of a print server.

We will use the following propositions:

  • b : This is a bad state. (It corresponds to Promela assert(!b).)
  • e : This is a valid end state. (It corresponds to all Promela processes being at an end label.)
  • p1, p2 : Progress is being made (either of two types of progress); they correspond to a Promela process being at a certain progress label.
  • r : There is a pending request.
  • s : A request is being serviced.
Give a temporal formula expressing each of the following:

  1. No error occurs, i.e., we never reach a bad state.
  2. No deadlock occurs. (We'll take deadlock here to mean that all five of our variables stop changing; without having a different proposition for each state, this is close enough.)
  3. Progress is always made. That is, this trace doesn't go forever without making progress.
  4. Weak Fairness: if a request is made and held, it will get serviced. I.e., it's not the case that a request stays true forever but is never serviced.
  5. Strong Fairness: if a request is made infinitely often, then it will be serviced.


For each of these problems, there are many equivalent formulas, some of which are easier to verify as equivalent than others.

  1. □¬b
  2. ¬◇((□b ∨ □¬b) ∧ (□p ∨ □¬p) ∧ (□r ∨ □¬r) ∧ (□s ∨ □¬s) ∧ □¬e). Remember that if we're in an end-state, then it doesn't count as deadlock.
  3. □◇(p1 ∨ p2). In fact, SPIN provides a built-in variable np_, which corresponds to ``all processes are in a non-progress state''. When SPIN does its built-in check for non-progress cycles, it is actually just verifying !<>[]np_. You might recall that SPIN's idea of progress is state-based, not transition-based: SPIN (counterintuitively) considers it progress when a process does nothing but squat in a progress state. (This can only happen if weak fairness is not being enforced, or a guard is being labeled as progress.) How might we capture ``progress'' but still preclude those traces which just sit idly at a progress label? We could add a second progress label immediately following p1, call it q1. Similarly, we can introduce q2. Then we could verify that □◇((p1 ∧ ◇q1) ∨ (p2 ∧ ◇q2)). (This assumes that all labels are private to each process.)
  4. ``It's always not the case that r is always on, and yet s doesn't happen an infinite number of times'': □¬(□r ∧ ¬□◇s). One equivalent formula is ``if r stays on forever, then it's being serviced infinitely often'': (□r → □◇s).
  5. (□◇r → □◇s)

Exercise 9

For our simple producer/consumer model, what is the basic property we'd like to guarantee? Roughly, ``each product is consumed exactly once''. That is, Produce and Consume are called alternately, starting with Produce. (In particular, as stated this precludes modeling ``produce twice and queue the results''.)

Expressing this actually requires several clauses. Give colloquial English translations of the following LTL formulas. We can make the generic model a bit concrete by interpreting startProduce and finishProduce as starting and finishing the production of a fine, hand-crafted ale, with startConsume and finishConsume corresponding to enjoying such an ale. (The more health-conscious reader might prefer to instead interpret these as start/finish the training for a marathon, and start/finish running a marathon, respectively.)

  1. □(finishProduce → ◇startConsume)
  2. □(finishProduce → (¬startProduce U finishConsume))
  3. (¬startConsume W finishProduce)
  4. □(finishConsume → (¬startConsume W finishProduce))


  1. If an ale is produced, then an ale will be consumed (possibly immediately). Note that we don't talk about which ale is consumed; in the following clauses we will try to enforce that it really is the same ale that was recently produced.
  2. If an ale is produced, then no ale-production is started until after an ale has been consumed.
  3. No ale is consumed before (an initial) ale is produced. Note that a trace in which no ale is ever produced still satisfies this formula.
  4. If an ale is consumed, nothing else will be consumed until an(other) ale has been produced.
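Clause 2 can also be checked mechanically. Below is a sketch (our own, with invented event traces, reading finite traces as stutter-extended) that accepts a well-alternating run and rejects one that produces twice before any consumption.

```python
# Checking clause 2, [](finishProduce -> (!startProduce U finishConsume)),
# on event traces: each state is the set of events occurring at that instant.

def not_a_until_b(trace, i, a, b):   # sigma_i |= (!a U b), a strong until
    c = min(i, len(trace) - 1)
    for k in range(c, len(trace)):
        if b in trace[k]:
            return all(a not in trace[j] for j in range(c, k))
    return False                     # b never occurs: strong until fails

def clause2(trace):
    return all("finishProduce" not in trace[i]
               or not_a_until_b(trace, i, "startProduce", "finishConsume")
               for i in range(len(trace)))

good = [{"startProduce"}, {"finishProduce"}, {"startConsume"},
        {"finishConsume"}, {"startProduce"}, {"finishProduce"},
        {"startConsume"}, {"finishConsume"}]

bad = [{"startProduce"}, {"finishProduce"},      # produce...
       {"startProduce"}, {"finishProduce"},      # ...and produce again
       {"startConsume"}, {"finishConsume"}]

print(clause2(good), clause2(bad))
```

The bad trace fails because a second startProduce occurs before any finishConsume, exactly the queuing behavior the clause is meant to preclude.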

Exercise 10

In the previous exercise, the formulas entailed that production of an item could not even begin until the previous object had been entirely consumed. This requirement might make sense for the interpretation of marathon training/running, but it doesn't seem essential to the production of hand-crafted ale, even if we do want to preserve the policy of never queuing up products.

Suppose we have a computer network port whose buffer can store only one web page's data at a time. The producer could correspond to some server sending a web page to that computer, and the consumer might be a browser which processes the data being received.

  1. Do we want to allow the producer to be producing the next product before the consumer finishes the current one?
  2. What change would you make to the formulas if you did want to allow this produce-while-consuming behavior (as in the ale interpretation)?


  1. No. That would mean the buffer is being overwritten with a new web-page's data, even though the browser is still trying to process the current page in the buffer.
  2. We would replace □(finishProduce → (¬startProduce U finishConsume)) with □(finishProduce → (¬finishProduce U finishConsume)).

Note that we want to include the possibility that nothing is ever produced — we don't always want to enforce weak fairness. Consider an operating system's code for a print server — it is truly important that not owning a printer doesn't mean the OS always crashes!
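The difference between weak until (W) and strong until (U) is exactly what permits such never-produces traces. A small Python sketch over finite traces (our own encoding, for illustration only) makes the contrast concrete:

```python
def strong_until(a, b, trace):
    # a U b: b must eventually hold, with a holding at every step before it
    for s in trace:
        if b(s): return True
        if not a(s): return False
    return False          # b never happened: strong until fails

def weak_until(a, b, trace):
    # a W b: like U, except a holding forever (b never happening) also counts
    for s in trace:
        if b(s): return True
        if not a(s): return False
    return True

idle = [set(), set(), set()]                     # nothing produced, ever
no_consume = lambda s: "startConsume" not in s
produced = lambda s: "finishProduce" in s
print(weak_until(no_consume, produced, idle))    # True: the W formula is satisfied
print(strong_until(no_consume, produced, idle))  # False: a U version would reject it
```

Had we written the third formula with U instead of W, we would be demanding that production eventually happens, which is the fairness-like requirement we want to avoid here.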

Through this section, we've seen some of the common patterns that can arise in the formulas. Following upon the idea of Design Patterns as describing common programming idioms, ``Specification Patterns'' describe common logic specification idioms. For a list of LTL specification patterns, see Property Pattern Mappings for LTL.

In addition, there are also algebraic equivalences for LTL, just as there are for propositional logic. We have already seen a DeMorgan-like example; there are also less obvious equivalences, such as distributing ◇ over ∨: ◇(φ∨ψ) ≡ (◇φ)∨(◇ψ), and the redundancy of ◇-over-U: ◇(φ U ψ) ≡ ◇ψ. It can be fun to dabble in rewriting LTL formulas, but (alas) we won't visit that topic further here.
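Such an equivalence can be sanity-checked by brute force over all short finite traces. (This is again a finite-trace simplification of the real, infinite-trace semantics, not a proof, but it is a quick way to catch a bogus "equivalence".)

```python
from itertools import product

def eventually(pred, trace):            # <>pred on a finite trace
    return any(pred(s) for s in trace)

# Check <>(p || q)  ==  (<>p) || (<>q) on every trace of length 1..4
states = [frozenset(s) for s in (set(), {"p"}, {"q"}, {"p", "q"})]
checked = 0
for n in range(1, 5):
    for trace in product(states, repeat=n):
        lhs = eventually(lambda s: "p" in s or "q" in s, trace)
        rhs = eventually(lambda s: "p" in s, trace) or \
              eventually(lambda s: "q" in s, trace)
        assert lhs == rhs
        checked += 1
print("equivalence confirmed on", checked, "traces")  # 340 traces
```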

Using Temporal Logic in SPIN

So, we have a Promela program and an LTL formula to verify. How do we use the tool? In SPIN, in the "Run" menu, select "LTL Property Manager". This opens a new window. The top half contains several subparts for entering an LTL formula, while the bottom half is for using it.

In the "Formula" textbox, you can type an LTL formula. If you prefer, you can enter the operators using the buttons below, rather than typing. Also, you can use the "Load" button to restore a previously-saved LTL formula from a file. For a formula, the next checkboxes allow you to either verify that the formula always holds or never holds. The "Notes" textbox is simply descriptive text. If you plan on saving the formula, it is helpful to remind yourself what this formula means and is used for. The "Symbol Definitions" textbox is where the LTL variables (typically p, q, etc.) are related to the Promela code. This is done through #defines which are temporarily added to your Promela code.

Example 6

Let's consider another critical section example. The following code simply puts the critical section into a loop. Also included is our standard code to check that the mutual exclusion property is satisfied, although the assertion is commented out.

/* Number of copies of the process to run. */
#define NUM_PROCS 3

show bool locked = false;
show int  num_crit = 0;

active[NUM_PROCS] proctype loop()
{
  do
  :: true ->
     atomic {
       !locked ->
       locked = true;
     }

     /* Critical section of code. */
     /* Something interesting would go here. */

     num_crit++;
     /* assert(num_crit == 1); */
     num_crit--;

     locked = false;
     /* End critical section of code. */
  od;
}

Using temporal logic, we can accomplish the same check without the assertion. There are several similar ways of doing this.

One way is to verify that num_crit always has the value 0 or 1. To do this, we'll make the following definition in xspin's ``Symbol Definitions'' textbox:

#define p ((num_crit == 0) || (num_crit == 1))
We then want to verify that p always holds, so our formula is []p, and we mark the checkbox indicating that this formula should always hold.

Verifying with this formula is a two-step process. First, the "Generate" button generates what is known as a never claim, shown in the window. This is more Promela code that is temporarily added to your program. It creates a special process that checks the LTL formula and is guaranteed to be executed after every normal step of your code. You don't need to understand the generated never claim, although SPIN presents it for advanced users. The never claim uses a couple of syntactic features of Promela not covered in this module. Next, the "Run Verification" button opens a small version of the verification options window that we're already familiar with. Running the verification puts any output in the LTL window.

Using spin from the command-line:

To verify temporal formulas from the command-line, you need to do three things: First, generate a never claim using spin -f formula. For example,

prompt>  spin -f '[] (p ->  <> (q || !r))'
Be sure to quote the formula so the shell doesn't try to interpret special characters. Next, take the output (which is Promela code for a never claim), and paste it at the end of your Promela source file. Also add #defines to the top of your file, defining each propositional variable used in your LTL formula. Finally, verify the resulting code.

Example 7

Generate and verify the code from the previous example.

The automatically-generated never claim is shown below. It uses labels and gotos which we haven't discussed, but which will be familiar to many programmers. (This code represents a small automaton, in this case with two states, which gets incorporated into the larger state space automaton that we've described.) The use of the constant 1 here is equivalent to the more mnemonic true.

never {    /* !([]p) */
T0_init:
        if
        :: (! ((p))) -> goto accept_all
        :: (1) -> goto T0_init
        fi;
accept_all:
        skip
}
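To see what this monitor does, here is a rough Python rendering of one resolved run of the two-state automaton over the observed truth values of p. (This is our own illustration; in SPIN, the claim is interleaved nondeterministically with the program, one claim step per program step.)

```python
def never_claim_violates(p_values):
    # Mirror of the claim: loop in T0_init while p holds; the first state
    # where p is false takes the goto to accept_all, i.e. []p is refuted.
    for p in p_values:
        if not p:
            return True       # reached accept_all: violation found
    return False              # claim never accepted on this finite prefix

print(never_claim_violates([True, True, True]))   # False: []p not refuted
print(never_claim_violates([True, False, True]))  # True: []p is violated
```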

Running the verification reports no errors. However, note that verifying an LTL property automatically turns off deadlock checking:

Full statespace search for:
        invalid end states      - (disabled by never claim)
However, it still reports a warning that the process never exits, in case we care:

unreached in proctype loop
        line 27, "pan.___", state 11, "-end-"
        (1 of 11 states)

Exercise 11

We want to accomplish the same goal as the previous example, but with a formula that should never hold. What's an appropriate formula?


Using the same definition of p, we can simply use <>!p. I.e., we don't want it to ever happen that p is false.

Exercise 12

Now let's verify something that can't be done easily with just assertions. We also want to make sure that num_crit continually alternates between the values of 0 and 1, rather than maintaining the same value. How can we do this?


Use the following symbol definitions:

#define p (num_crit == 0)
#define q (num_crit == 1)
and verify that

([]<>p) && ([]<>q)
always holds.
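A SPIN counterexample to such a liveness formula is a lasso: a finite prefix followed by a cycle that repeats forever. On a lasso, []<>p depends only on the cycle, since the prefix is visited only finitely often. The following sketch (our own encoding, for illustration) exploits that fact:

```python
def infinitely_often(pred, prefix, cycle):
    # []<>pred on the infinite trace prefix + cycle + cycle + ...:
    # only the cycle matters, because the prefix occurs finitely often.
    return any(pred(s) for s in cycle)

# num_crit along the intended run: alternates 0,1,0,1,... forever
alternating = ([0], [1, 0])
print(infinitely_often(lambda n: n == 0, *alternating) and
      infinitely_often(lambda n: n == 1, *alternating))   # True

# a run where num_crit gets stuck at 0 violates []<>q
print(infinitely_often(lambda n: n == 1, [0], [0]))       # False
```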

Exercise 13

How could we use LTL to verify whether no process is starved? Of course, this verification should fail, generating a trail where at least one process is in fact starved.


In essence, we want to verify

([]<>(_pid == 0)) && ([]<>(_pid == 1)) && ([]<>(_pid == 2))
The problem is that _pid is a variable local to each process, and, unfortunately, a never claim is its own separate process, which can't access those variables. One solution is to create an equivalent global variable:

pid running;

Inside the critical section code add the line

running = _pid;
Use the following symbol definitions:

#define p (running == 0)
#define q (running == 1)
#define r (running == 2)
and verify whether

([]<>p) && ([]<>q) && ([]<>r)
always holds.

The never claim's process will be numbered higher than any started by an active proctype. So, in this case, its PID is 3.

As expected, the verification fails, and a trail file is created. In the author's sample verification, this trail has only processes 0 and 1 entering the critical section, although there are many possible counter-examples.

It turns out that SPIN has a special syntax to express whether a certain process is at a certain label. So instead of defining a new variable, we could alternatively go to the critical section, add a new label (say ``crit:''), and then define:

#define p (loop[0]@crit)   /* 0th 'loop' process is at crit. */
#define q (loop[1]@crit)   /* 1st 'loop' process is at crit. */
#define r (loop[2]@crit)   /* 2nd 'loop' process is at crit. */


  1. Technically, these diagrams are a bit too suggestive: a trace is only a sequence of discrete states, with no concept of time existing in between states. And the numbering of the x-axis is only an index to the sequence of states — it's not meant to imply that the system's states necessarily correspond to periodic samplings of our program. Yes, it would be more accurate to replace these continuous timelines with a textual list of true and false (to represent the mapping of each state to a truth value for P). However, the diagrams better communicate information to us humans, especially those familiar with such timelines as used by hardware circuit designers. We just need to be careful not to read in between the times.
  2. Promela includes a handy syntax for testing whether a particular process is at a particular label: ``philosopher[3]@eating'' is true when the fourth (0-indexed) philosopher process is at the label eating.
