Connexions

Logic: Looking Back

Summary: A recap of where we've been, and why we traveled there.

Why didn't we begin with quantifiers all along?

We saw three stages of logics:

• Propositional logic, with formulas like DickLikesJane ∧ ¬JaneLikesDick. While the propositions are named suggestively, nothing in the logic enforces a relation among these; it is equivalent to A ∧ ¬B.
• Predicate logic, where variables (and constants) can express a connection between different parts of the formula: likes(y, x) ∧ ¬likes(x, y). Predicate logic introduced the idea of variables, and required domains and interpretations to determine truth. But it can't bind variables, and thus requires an interpretation of x and y to evaluate.
• First-order logic, which included two quantifiers to bind variables: ∀y: ∃x: likes(y, x) ∧ ¬likes(x, y)
So why, you might ask, didn't we just start out with first-order logic in the first lecture? One reason, clearly, is to introduce concepts one at a time: everything you needed to know about one level was needed in the next, and then some. But there's more: by restricting our formalisms, we give up the ability to express all the concepts of the bigger formalism, but in exchange we gain automated ways of checking statements or finding proofs.
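This trade-off is easy to see concretely: because propositional logic is so restricted, checking a formula is fully mechanical. Just enumerate every truth assignment. A minimal sketch in Python (the helper name `is_satisfiable` and the lambda encoding of formulas are illustrative assumptions, not any particular library's API):

```python
from itertools import product

def is_satisfiable(formula, variables):
    """Brute-force check: does any truth assignment make the formula true?"""
    return any(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# A ∧ ¬B, the abstract form of DickLikesJane ∧ ¬JaneLikesDick.
formula = lambda v: v["A"] and not v["B"]

print(is_satisfiable(formula, ["A", "B"]))                     # True: A=True, B=False
print(is_satisfiable(lambda v: v["A"] and not v["A"], ["A"]))  # False: a contradiction
```

This exhaustive enumeration is exactly what we lose once quantifiers range over arbitrary (possibly infinite) domains.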

In general, this is a common theme in the theory of any subject: determining when and where you can (or, need to) trade off expressibility for predictive value. For example, …

• Linguistics: Having a set of precise rules for (say) Tagalog grammar allows you to determine what is and isn't a valid sentence; details of the formal grammar can reveal relations to other languages which aren't otherwise so apparent. On the other hand, a grammar for any natural language is unlikely to exactly capture all things which native speakers say and understand. If working with a formal grammar, one needs to know what is being lost and what is being gained.
• Dismissing a grammar as irrelevant because it doesn't entirely reflect usage is missing the point of the grammar;
• Conversely, condemning some real-life utterances as ungrammatical (and ignoring them) forgets that the grammar is a model which captures many (if not all) important properties.
Of course, any reasonable debate on this topic respects these two poles and is actually about where the best trade-off between them lies.
• Psychology: Piaget, say, might propose four stages of learning in children. Such a model trades off total accuracy for, say, clues about what to look for in brain development.
• Physics: Modern pedagogy must trade off quantum accuracy for Newtonian approximations. Researchers exploring fields like particle physics must trade off exact simulations for statistical (“stochastic”) approximations.
Understanding the theoretical foundations of a field is often critical for knowing how to apply various techniques in practice.

Logic and everyday reasoning

We've looked at the impreciseness and ambiguity of natural language statements, but these are not the only problems hidden in natural language arguments. The following illustrates a common form of hidden assumption: saying “the tenth reindeer of Santa Claus is …” implies the existence of some tenth reindeer. More subtly, humans use much more information than what is spoken in a conversation. Even aside from body language, consider a friend asking you “Hey, are you hungry?” While as a formal statement this doesn't convey any information, in real life it strongly suggests that your friend is hungry.

A much more blatant form of missing information is when the speaker simply chooses to omit it. When arguing for a cause it is standard practice to simply describe its advantages, without any of its disadvantages or alternatives.

Aside:

Economists measure things not by cost, but by opportunity cost: the price of something minus the benefit of what you'd get by using that price for something else. E.g., for $117 million the university can build a new research center. But what else could you do on campus with $117 million?
Historically, logic and rhetoric, the art of persuasion through language, are closely linked.

Other logics

You've now been introduced to two logics: propositional and first-order. But, the story does not have to end here. There are many other logics, each with their uses.

Limitations of first-order logic's expressiveness

We can write first-order sentences to express concepts such as “vertices a and b are connected by a path of length 2”, as well as “…by a path of length 3”, length 4, etc.

Note:

Write a couple of these sentences!
But trying to write “vertices a and b are connected [by a path of any length]” isn't obvious … in fact, it can be proven that no first-order sentence can express this property! Nor can it express the closely related property “the graph is connected” (without reference to two named vertices a and b).
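The contrast can be seen concretely in a sketch over a small directed graph (the function names and edge representation are illustrative assumptions): a fixed-length path check mirrors a single first-order sentence with a fixed number of quantifiers, while connectivity needs an unbounded search, modeled here by a loop that runs until a fixpoint.

```python
def path_of_length_2(edges, a, b):
    """Mirrors the first-order sentence: there exists z with edge(a,z) and edge(z,b)."""
    nodes = {x for e in edges for x in e}
    return any((a, z) in edges and (z, b) in edges for z in nodes)

def connected(edges, a, b):
    """No single first-order sentence expresses this; we need an
    unbounded fixpoint computation (here, a breadth-first search)."""
    frontier, seen = {a}, {a}
    while frontier:
        frontier = {y for (x, y) in edges if x in frontier} - seen
        seen |= frontier
    return b in seen

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(path_of_length_2(edges, "a", "c"))  # True (via b)
print(connected(edges, "a", "d"))         # True (path a-b-c-d)
```

The `while` loop is the giveaway: its number of iterations depends on the graph, whereas a first-order sentence is a fixed, finite formula.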

Hmm, what about second-order logic? It has a bigger name; whatever it means, perhaps it can express more properties?

What exactly is second-order logic? In first-order logic, quantifiers range over elements of the domain: “there exist numbers x and y, …”. In second-order logic, you can additionally quantify over sets of elements of the domain: “there is a set of numbers, such that …”.
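Over a finite domain, a second-order quantifier can be simulated by brute force: iterate over the powerset of the domain. A small sketch in Python (the example sentence itself is invented for illustration):

```python
from itertools import chain, combinations

def subsets(domain):
    """All subsets of a finite domain: the range of a second-order quantifier."""
    return chain.from_iterable(combinations(domain, r)
                               for r in range(len(domain) + 1))

# "There is a set S of numbers such that every element of S is even
#  and S contains 2."  (An illustrative second-order sentence.)
domain = [1, 2, 3, 4]
holds = any(all(x % 2 == 0 for x in s) and 2 in s
            for s in map(set, subsets(domain)))
print(holds)  # True (witnessed by, e.g., S = {2} or S = {2, 4})
```

Of course, the powerset has 2ⁿ members, which hints at why second-order properties can be much harder to check than first-order ones.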

Example 1

For instance, “for all vertices x and y, there exists a set of vertices (call the set “Red”) such that the Red vertices include a path from x to y”. More precisely, “every Red vertex has exactly two Red neighbors, or it is x or y (which each have exactly one Red neighbor)”. Is this sentence true exactly when the graph is connected? Why does this description of “Red vertices” not quite correspond to “just the vertices on a path from x to y”?

An interesting phenomenon: there are relations between how difficult it is to write down a property, and how difficult it is to compute it! How might you try to formalize the statement “there is a winning strategy for chess”?

A shortcoming of first-order logic is that it is impossible to express the concept “path”. (This can be proven, though we won't do so here.)

Thus, some other logics used to formalize certain systems include:

• As mentioned, second-order logic is like first-order logic, but it also allows quantification over entire relations. Thus, you can make formulas that state things like “For all relations R, if R is symmetric and transitive, then …”. While less common, we could continue with third-order, fourth-order, etc.
• Temporal logic is based on quantification over time. This is useful for describing how a program's state changes over time. In particular, it is used for describing concurrent program specifications and communication protocols: sequences of communication steps used in security or networking. See, for example, TeachLogic's Model-Checking module.
• Linear logic is a “resource-aware” logic. Every premise must be used, but it may be used only once. This models, for example, how keyboard input is usually handled: reading an input also removes it from the input stream, so that it can't be read again.
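The linear-logic bullet above can be sketched in Python: model keyboard input as a use-once resource, where each read consumes its premise. The class is a hypothetical illustration, not any real I/O API:

```python
class InputStream:
    """A use-once resource, in the spirit of linear logic: each
    keystroke can be read exactly once and is then gone."""
    def __init__(self, keys):
        self._keys = list(keys)

    def read(self):
        if not self._keys:
            raise EOFError("resource exhausted")
        return self._keys.pop(0)

stream = InputStream("hi")
print(stream.read())  # 'h'
print(stream.read())  # 'i'
# A third stream.read() would raise EOFError: all premises are used up.
```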

Logic in computer science

Logics provide us with a formal language useful for

• specifying properties unambiguously,
• proving that programs and systems do (or don't) have the claimed properties, and
• gaining greater insight into other languages such as database queries.

Programming language type systems are a great example of the first two points. The connectives allow us to talk about pairs and structures (x and y), unions (x or y), and functions (if you give the program an x, it produces a y). The “generics” in Java, C++, and C# are based upon universal quantification, while “wildcards” in Java are based upon existential quantification. One formalization of this strong link between logic and types is called the Curry-Howard isomorphism.
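As a sketch of universal quantification in generics, here is the same idea using Python's typing module rather than Java (purely for illustration): a generic function works uniformly “for all types T”.

```python
from typing import Callable, TypeVar

T = TypeVar("T")  # read: "for all types T, ..."

def apply_twice(f: Callable[[T], T], x: T) -> T:
    """Universally quantified over T: one definition works at every type."""
    return f(f(x))

print(apply_twice(lambda n: n + 1, 0))      # 2, with T = int
print(apply_twice(lambda s: s + "!", "hi")) # 'hi!!', with T = str
```

The type variable T plays the role of a universally quantified variable in the function's type, just as the generics mentioned above do in Java, C++, and C#.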

Compilers have very specific logics built into them. In order to optimize your code, analyses check what properties your code has: e.g., are variables b and c needed at the same time, or can they be stored in the same hardware register?
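A toy version of such a liveness analysis can be sketched in Python (the program encoding and function name are assumptions for illustration): a backward pass computes which variables are live before each instruction, and two variables that are ever live simultaneously cannot share a register.

```python
def live_ranges(program):
    """Backward liveness over straight-line code.
    Each instruction is (defined-variables, used-variables)."""
    live, live_at = set(), []
    for defs, uses in reversed(program):
        live = (live - set(defs)) | set(uses)  # kill definitions, add uses
        live_at.append(live.copy())
    return list(reversed(live_at))  # live-in set before each instruction

# b = ...; c = ...; use b; use c   (hypothetical three-address code)
program = [(["b"], []), (["c"], []), ([], ["b"]), ([], ["c"])]
sets = live_ranges(program)
# b and c are both live before the third instruction,
# so they cannot be stored in the same register.
print(any({"b", "c"} <= s for s in sets))  # True
```

Real compilers run this same fixpoint-style reasoning over arbitrary control flow, but the core logical question, "can both facts hold at the same program point?", is the one shown here.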

More generally, it would be great to be able to verify that our hardware and software designs were correct. First, specifying what “correct” means requires providing the appropriate logical formulas. With hardware, automated verification is now part of regular practice. However, it is so computationally expensive that it can only be done on pieces of a design, not, say, a whole microprocessor. With software, we also frequently work with smaller pieces of code, proving individual functions or algorithms correct. However, there are two big inter-related problems. Many of the properties we'd like to prove about our software are “undecidable”: it is impossible to check the property accurately for every input. Also, specifying full correctness typically requires extensions to first-order logic, most of which are incomplete.[1] As we've seen, that means that we cannot prove everything we want. While proving hardware and software correct has its limitations, logic provides us with tools that are still quite useful. For an introduction to one approach used in verification, see TeachLogic's Model-Checking module.

Footnotes

1. Even something as simple as first-order logic using the integers as our domain and addition and multiplication as relations is undecidable (Kurt Gödel, 1931).
