The ancient Greeks loved to hang around on the stoa, sip some wine, and debate. But at the end of the day, they wanted to sit back and decide who had won the argument. When Socrates claims that one statement follows from another, is it actually so? Shouldn't there be some set of rules to officially determine when an argument is correct? Thus began the formal study of logic.
Aside:
These issues are of course still with us today. And while it might be difficult to codify real-world arguments about (say) gun-control laws, programs can be fully formalized, and correctness can be specified. We'll look at three examples where formal proofs are applicable:
- playing a simple game, WaterWorld;
- checking a program for type errors;
- circuit verification.
WaterWorld
Consider a game called WaterWorld, where each location is either empty sea or contains a pirate. When you enter a location, you must correctly anticipate whether or not it contains pirates.
- If you correctly anticipate open sea, you are able to enter and determine how many of the (up to 3) adjacent locations contain a pirate.
- If you correctly anticipate a pirate, the location is tagged as dangerous, and you gather no further information.
(You can freely download WaterWorld.)

For instance, in the first board, what assertions can we be sure of? What, exactly, is your reasoning? How about in the second board? You can certainly envision wanting a computer player that can deduce certain moves, and make those for you automatically.
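To make the idea of a deducing computer player concrete, here is a minimal sketch in Python. The board layout and function names are our own illustration, not part of WaterWorld itself; the two inference rules are Minesweeper-style deductions from a visited location's pirate count.

```python
# A minimal sketch of WaterWorld-style deduction (board layout hypothetical).
# Each visited sea location reports how many of its neighbors hold pirates.
# Two simple inference rules already settle many boards:
#   1. If a location's count equals its known pirate neighbors,
#      every remaining neighbor is safe.
#   2. If count minus known pirates equals the number of unknown neighbors,
#      every unknown neighbor must be a pirate.

def deduce(neighbors, counts, known_safe, known_pirate):
    """One pass of the two rules; returns newly deduced (safe, pirate) sets."""
    new_safe, new_pirate = set(), set()
    for loc, count in counts.items():
        unknown = [n for n in neighbors[loc]
                   if n not in known_safe and n not in known_pirate]
        pirates_seen = sum(1 for n in neighbors[loc] if n in known_pirate)
        if count == pirates_seen:                    # rule 1
            new_safe.update(unknown)
        elif count - pirates_seen == len(unknown):   # rule 2
            new_pirate.update(unknown)
    return new_safe, new_pirate

# A tiny 3-location strip: A - B - C.  We have already visited A and B.
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
counts = {"A": 0, "B": 1}      # A sees no pirates; B sees exactly one
safe, pirate = deduce(neighbors, counts, {"A", "B"}, set())
print(sorted(pirate))          # B's count of 1 pins the pirate at C
```

Each conclusion here is exactly the kind of assertion a formal proof would justify: "C contains a pirate" follows logically from the premises "B is sea" and "B reports one adjacent pirate."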
Type Checking
When writing a program, we'd like to simply look at the program and determine whether it has any bugs, without having to run it. We'll see in the future, however, that such a general problem cannot be solved. Instead, we focus on finding more limited kinds of errors. Type checking determines whether all functions are called with the correct type of inputs. E.g., the function + should be called with numbers, not Booleans, and a function which a programmer has declared to return an integer really should always return an integer.
Consider the following program:
// average:
// Simply divide sum by N, but guard against dividing by 0.
//
real-or-false average( real sum, natNum N ) {
    if (N != 0)
        return sum / N;
    else
        return false;
}
One reason programmers are required to declare the intended type of each variable is so that the computer (the compiler) can prove that certain errors won't occur. How can you or the compiler prove, in the above, that average returns a real number or false, but never returns (say) a string, and doesn't raise an exception? Deductions are made based on premises about the types that are passed in, along with axioms about the input and return types of the built-in functions if, !=, and /, as well as which exceptions those built-ins might raise.
Consider this variant:
// augment_average:
// Given an old sum and N, compute the average if one more
// datum were included.
//
real augment_average( real old_sum, natNum old_N, real new_datum ) {
    return average( old_sum + new_datum, old_N + 1 );
}
Most compilers will reject augment_average, claiming that it may actually return false. However, we're able to prove that it really will only return a real, by using some knowledge about natural numbers and adding 1, plus some knowledge of what average returns. (Note that our reasoning uses aspects of average's interface which aren't explicitly stated; most type systems aren't expressive enough to allow more detailed type contracts, for reasons we'll allude to later.) So we see that many compilers have overly conservative type-checkers, rejecting code which is perfectly safe, because they are reasoning with only a narrow set of type rules.
This example alludes to another use of logic: not only is it the foundation of writing proofs (ones that can be created or checked by computers), but logic can also be used as an unambiguous specification language. Observe that while a function's implementation is always specified formally and unambiguously, in a programming language, its interface is specified entirely in English, aside from a few type declarations. Many bugs stem from ambiguities in that English, which different humans interpret differently (or don't think about at all). Being able to use logic to specify an interface (one that stays fixed even if somebody later tunes the implementation) is an important skill for programmers, even when those logic formulas aren't used in proofs.
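As a hedged illustration of an interface specified in logic rather than English, here is a contract for average written as a precondition and a postcondition, and checked at the call boundary (the formulation and names are ours):

```python
# Interface as logic, not English:
#   Precondition:  N >= 0
#   Postcondition: (N != 0 and result = sum/N) or (N = 0 and result = false)
# These formulas pin down the behavior unambiguously; the asserts below
# simply check them at runtime.

def checked_average(sum_: float, n: int):
    assert n >= 0, "precondition: N is a natural number"
    result = (sum_ / n) if n != 0 else False
    assert (n != 0 and result == sum_ / n) or (n == 0 and result is False)
    return result

print(checked_average(10.0, 4))   # 2.5
```

The same formulas keep meaning exactly the same thing no matter how the body of the function is later rewritten, which is the point of a logical specification.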
Circuit Verification
Given a circuit's blueprints, will it work as advertised? In 1994, Intel had to recall five million of its Pentium processors, due to a bug in the arithmetic circuitry. This cost Intel nearly half a billion dollars, generated lots of bad publicity, and happened after intensive testing. Might it have been possible to have a program try to prove the chip's correctness, or uncover an error, before casting it in silicon?
Software and hardware companies are increasingly turning to the use of automated proofs, rather than semi-haphazard testing, to verify that (parts of) large products are correct. However, it is a formidable task, and how best to do so is still an active area of research.
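A toy version of circuit verification: for a circuit this small, checking every input combination against the arithmetic specification is itself a proof of correctness. (The gate-level full adder below is a standard construction, used here only as an example; real tools such as model checkers and SAT solvers scale the same idea to circuits far too large to enumerate by hand.)

```python
# Exhaustively prove a 1-bit full adder correct against its specification.
from itertools import product

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Gate-level full adder: returns (sum bit, carry-out bit)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Specification: the two output bits, read as a 2-bit number, equal a + b + cin.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin
print("full adder verified on all 8 input combinations")
```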
There are of course many more examples; one topical popular concern is verifying certain security properties of electronic voting machines (often provided by vendors who keep their source software a proprietary secret).
Having proofs of correctness is not just comforting; it allows us to save effort (less time spent testing, and the freedom to make better optimizations), and to prevent recalls of faulty products. But: who decides a proof is correct? The employee with the best SAT scores?!? Is there some trusted way to verify proofs, besides careful inspection by a skilled, yet still error-prone, professional?