Conditional Probability

Module by: Paul E Pfeiffer

Summary: The probability P(A) of an event A is a measure of the likelihood that the event will occur on any trial. New, but partial, information determines a conditioning event C, which may call for reassessing the likelihood of event A. For a fixed conditioning event C, this new assignment to all events constitutes a new probability measure. In addition, because of the way it is derived from the original, or prior, probability, the conditional probability measure has a number of special properties which are important in applications. Determination of the conditioning event is key.

Introduction

The probability P(A) of an event A is a measure of the likelihood that the event will occur on any trial. Sometimes partial information determines that an event C has occurred. Given this information, it may be necessary to reassign the likelihood for each event A. This leads to the notion of conditional probability. For a fixed conditioning event C, this assignment to all events constitutes a new probability measure which has all the properties of the original probability measure. In addition, because of the way it is derived from the original, the conditional probability measure has a number of special properties which are important in applications.

Conditional probability

The original or prior probability measure utilizes all available information to make probability assignments P(A), P(B), etc., subject to the defining conditions (P1), (P2), and (P3). The probability P(A) indicates the likelihood that event A will occur on any trial.

Frequently, new information is received which leads to a reassessment of the likelihood of event A. For example

  • An applicant for a job as a manager of a service department is being interviewed. His résumé shows adequate experience and other qualifications. He conducts himself with ease and is quite articulate in his interview. He is considered a prospect highly likely to succeed. The interview is followed by an extensive background check. His credit rating, because of bad debts, is found to be quite low. With this information, the likelihood that he is a satisfactory candidate changes radically.
  • A young woman is seeking to purchase a used car. She finds one that appears to be an excellent buy. It looks “clean,” has reasonable mileage, and is a dependable model of a well known make. Before buying, she has a mechanic friend look at it. He finds evidence that the car has been wrecked with possible frame damage that has been repaired. The likelihood the car will be satisfactory is thus reduced considerably.
  • A physician is conducting a routine physical examination on a patient in her seventies. She is somewhat overweight. He suspects that she may be prone to heart problems. Then he discovers that she exercises regularly, eats a low fat, high fiber, variegated diet, and comes from a family in which survival well into the nineties is common. On the basis of this new information, he reassesses the likelihood of heart problems.

New, but partial, information determines a conditioning event C, which may call for reassessing the likelihood of event A. For one thing, this means that A occurs iff the event AC occurs. Effectively, this makes C a new basic space. The new unit of probability mass is P(C). How should the new probability assignments be made? One possibility is to make the new assignment to A proportional to the probability P(AC). These considerations and experience with the classical case suggest the following procedure for reassignment. Although such a reassignment is not logically necessary, subsequent developments give substantial evidence that this is the appropriate procedure.

Definition. If C is an event having positive probability, the conditional probability of A, given C, is

$$P(A|C) = \frac{P(AC)}{P(C)}$$
(1)

For a fixed conditioning event C, we have a new likelihood assignment to the event A. Now

$$P(A|C) \ge 0, \qquad P(\Omega|C) = 1, \qquad\text{and}\qquad P\Bigl(\bigvee_j A_j \,\Big|\, C\Bigr) = \frac{P\bigl(\bigvee_j A_j C\bigr)}{P(C)} = \sum_j P(A_jC)/P(C) = \sum_j P(A_j|C)$$
(2)

Thus, the new function P(⋅|C) satisfies the three defining properties (P1), (P2), and (P3) for probability, so that for fixed C, we have a new probability measure, with all the properties of an ordinary probability measure.

Remark. When we write P(A|C) we are evaluating the likelihood of event A when it is known that event C has occurred. This is not the probability of a conditional event A|C. Conditional events have no meaning in the model we are developing.

Example 1: Conditional probabilities from joint frequency data

A survey of student opinion on a proposed national health care program included 250 students, of whom 150 were undergraduates and 100 were graduate students. Their responses were categorized Y (affirmative), N (negative), and D (uncertain or no opinion). Results are tabulated below.

Table 1
      Y    N    D
 U   60   40   50
 G   70   20   10

Suppose the sample is representative, so the results can be taken as typical of the student body. A student is picked at random. Let Y be the event he or she is favorable to the plan, N be the event he or she is unfavorable, and D be the event of no opinion (or uncertain). Let U be the event the student is an undergraduate and G be the event he or she is a graduate student. The data may reasonably be interpreted as

$$P(G) = 100/250, \quad P(U) = 150/250, \quad P(Y) = (60 + 70)/250, \quad P(YU) = 60/250, \quad\text{etc.}$$
(3)

Then

$$P(Y|U) = \frac{P(YU)}{P(U)} = \frac{60/250}{150/250} = \frac{60}{150}$$
(4)

Similarly, we can calculate

$$P(N|U) = 40/150, \quad P(D|U) = 50/150, \quad P(Y|G) = 70/100, \quad P(N|G) = 20/100, \quad P(D|G) = 10/100$$
(5)

We may also calculate directly

$$P(U|Y) = 60/130, \quad P(G|N) = 20/60, \quad\text{etc.}$$
(6)
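For readers who want to check such table calculations numerically, the following minimal MATLAB sketch reproduces them (the variable names are ours, not part of the module's m-procedures):

% Joint frequencies from Table 1 (rows U, G; columns Y, N, D)
F = [60 40 50; 70 20 10];
n = sum(F(:));                 % 250 students
PU = sum(F(1,:))/n;            % P(U) = 150/250
PG = sum(F(2,:))/n;            % P(G) = 100/250
PY_U = F(1,1)/sum(F(1,:))      % P(Y|U) = 60/150 = 0.4000
PU_Y = F(1,1)/sum(F(:,1))      % P(U|Y) = 60/130 = 0.4615

Dividing a cell count by its row total conditions on school status; dividing by a column total conditions on the response.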

Conditional probability often provides a natural way to deal with compound trials carried out in several steps.

Example 2: Jet aircraft with two engines

An aircraft has two jet engines. It will fly with only one engine operating. Let F1 be the event one engine fails on a long distance flight, and F2 the event the second fails. Experience indicates that P(F1) = 0.0003. Once the first engine fails, added load is placed on the second, so that P(F2|F1) = 0.001. Now the second engine can fail only if the other has already failed. Thus F2 ⊂ F1, so that

$$P(F_2) = P(F_1F_2) = P(F_1)P(F_2|F_1) = 3 \times 10^{-7}$$
(7)

Thus the reliability of any one engine may be less than satisfactory, yet the overall reliability may be quite high.

The following example is taken from the UMAP Module 576, by Paul Mullenix, reprinted in UMAP Journal, vol 2, no. 4. More extensive treatment of the problem is given there.

Example 3: Responses to a sensitive question on a survey

In a survey, if answering “yes” to a question may tend to incriminate or otherwise embarrass the subject, the response given may be incorrect or misleading. Nonetheless, it may be desirable to obtain correct responses for purposes of social analysis. The following device for dealing with this problem is attributed to B. G. Greenberg. By a chance process, each subject is instructed to do one of three things:

  1. Respond with an honest answer to the question.
  2. Respond “yes” to the question, regardless of the truth in the matter.
  3. Respond “no” regardless of the true answer.

Let A be the event the subject is told to reply honestly, B be the event the subject is instructed to reply “yes,” and C be the event the answer is to be “no.” The probabilities P(A), P(B), and P(C) are determined by a chance mechanism (i.e., a fraction P(A) selected randomly are told to answer honestly, etc.). Let E be the event the reply is “yes.” We wish to calculate P(E|A), the probability the answer is “yes” given the response is honest.

SOLUTION

Since E = EA ⋁ B, we have

$$P(E) = P(EA) + P(B) = P(E|A)P(A) + P(B)$$
(8)

which may be solved algebraically to give

$$P(E|A) = \frac{P(E) - P(B)}{P(A)}$$
(9)

Suppose there are 250 subjects. The chance mechanism is such that P(A) = 0.7, P(B) = 0.14, and P(C) = 0.16. There are 62 responses “yes,” which we take to mean P(E) = 62/250. According to the pattern above,

$$P(E|A) = \frac{62/250 - 14/100}{70/100} = \frac{27}{175} \approx 0.154$$
(10)
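For those following along in MATLAB, the reversal in equation (9) is a one-line computation (the variable names are ours):

PA = 0.7;  PB = 0.14;          % chance mechanism
PE = 62/250;                   % observed fraction of "yes" replies
PE_A = (PE - PB)/PA            % P(E|A) = 27/175, about 0.1543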

The formulation of conditional probability assumes the conditioning event C is well defined. Sometimes there are subtle difficulties. It may not be entirely clear from the problem description what the conditioning event is. This is usually due to some ambiguity or misunderstanding of the information provided.

Example 4: What is the conditioning event?

Five equally qualified candidates for a job, Jim, Paul, Richard, Barry, and Evan, are identified on the basis of interviews and told that they are finalists. Three of these are to be selected at random, with results to be posted the next day. One of them, Jim, has a friend in the personnel office. Jim asks the friend to tell him the name of one of those selected (other than himself). The friend tells Jim that Richard has been selected. Jim analyzes the problem as follows.

ANALYSIS

Let Ai, 1 ≤ i ≤ 5, be the event the ith of these is hired (A1 is the event Jim is hired, A3 is the event Richard is hired, etc.). Now P(Ai) (for each i) is the probability that finalist i is in one of the combinations of three from five. Thus, Jim's probability of being hired, before receiving the information about Richard, is

$$P(A_1) = \frac{1 \times C(4,2)}{C(5,3)} = \frac{6}{10} = P(A_i), \quad 1 \le i \le 5$$
(11)

The information that Richard is one of those hired is information that the event A3 has occurred. Also, for any pair i ≠ j, the number of combinations of three from five including these two is just the number of ways of picking one from the remaining three. Hence,

$$P(A_1A_3) = \frac{C(3,1)}{C(5,3)} = \frac{3}{10} = P(A_iA_j), \quad i \ne j$$
(12)

The conditional probability

$$P(A_1|A_3) = \frac{P(A_1A_3)}{P(A_3)} = \frac{3/10}{6/10} = 1/2$$
(13)

This is consistent with the fact that if Jim knows that Richard is hired, then there are two to be selected from the four remaining finalists, so that

$$P(A_1|A_3) = \frac{1 \times C(3,1)}{C(4,2)} = \frac{3}{6} = 1/2$$
(14)
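Jim's counting is easy to verify with MATLAB's nchoosek function; a minimal check (ours):

PA1    = nchoosek(4,2)/nchoosek(5,3)   % P(A1)    = 6/10
PA1A3  = nchoosek(3,1)/nchoosek(5,3)   % P(A1 A3) = 3/10
PA1_A3 = PA1A3/PA1                     % P(A1|A3) = 1/2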

Discussion

Although this solution seems straightforward, it has been challenged as being incomplete. Many feel that there must be information about how the friend chose to name Richard. Many would make an assumption somewhat as follows. The friend took the three names selected: if Jim was one of them, Jim's name was removed and an equally likely choice among the other two was made; otherwise, the friend selected on an equally likely basis one of the three to be hired. Under this assumption, the information assumed is an event B3 which is not the same as A3. In fact, computation (see Example 7, below) shows

$$P(A_1|B_3) = \frac{6}{10} = P(A_1) \ne P(A_1|A_3)$$
(15)

Both results are mathematically correct. The difference is in the conditioning event, which corresponds to the difference in the information given (or assumed).

Some properties

In addition to its properties as a probability measure, conditional probability has special properties which are consequences of the way it is related to the original probability measure P(⋅). The following are easily derived from the definition of conditional probability and basic properties of the prior probability measure, and prove useful in a variety of problem situations.

(CP1) Product rule If P(ABCD) > 0, then P(ABCD) = P(A)P(B|A)P(C|AB)P(D|ABC).

Derivation

The defining expression may be written in product form: P(AB) = P(A)P(B|A). Likewise,

$$P(ABC) = P(A)\,\frac{P(AB)}{P(A)}\,\frac{P(ABC)}{P(AB)} = P(A)P(B|A)P(C|AB)$$
(16)

and

$$P(ABCD) = P(A)\,\frac{P(AB)}{P(A)}\,\frac{P(ABC)}{P(AB)}\,\frac{P(ABCD)}{P(ABC)} = P(A)P(B|A)P(C|AB)P(D|ABC)$$
(17)

This pattern may be extended to the intersection of any finite number of events. Also, the events may be taken in any order.

Example 5: Selection of items from a lot

An electronics store has ten items of a given type in stock. One is defective. Four successive customers purchase one of the items. Each time, the selection is on an equally likely basis from those remaining. What is the probability that all four customers get good items?

SOLUTION

Let Ei be the event the ith customer receives a good item. Then the first chooses one of the nine out of ten good ones, the second chooses one of the eight out of nine good ones, etc., so that

$$P(E_1E_2E_3E_4) = P(E_1)P(E_2|E_1)P(E_3|E_1E_2)P(E_4|E_1E_2E_3) = \frac{9}{10} \cdot \frac{8}{9} \cdot \frac{7}{8} \cdot \frac{6}{7} = \frac{6}{10}$$
(18)

Note that this result could be determined by a combinatorial argument: under the assumptions, each combination of four of ten is equally likely; the number of combinations of four good ones is the number of combinations of four of the nine. Hence

$$P(E_1E_2E_3E_4) = \frac{C(9,4)}{C(10,4)} = \frac{126}{210} = 3/5$$
(19)
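The telescoping product-rule chain and the combinatorial check each reduce to one MATLAB line; a minimal sketch (ours):

p  = prod((9:-1:6)./(10:-1:7))       % (9/10)(8/9)(7/8)(6/7) = 0.6000
pc = nchoosek(9,4)/nchoosek(10,4)    % check: 126/210 = 0.6000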

Example 6: A selection problem

Three items are to be selected (on an equally likely basis at each step) from ten, two of which are defective. Determine the probability that the first and third selected are good.

SOLUTION

Let Gi, 1 ≤ i ≤ 3, be the event the ith unit selected is good. Then G1G3 = G1G2G3 ⋁ G1G2^cG3. By the product rule,

$$P(G_1G_3) = P(G_1)P(G_2|G_1)P(G_3|G_1G_2) + P(G_1)P(G_2^c|G_1)P(G_3|G_1G_2^c) = \frac{8}{10} \cdot \frac{7}{9} \cdot \frac{6}{8} + \frac{8}{10} \cdot \frac{2}{9} \cdot \frac{7}{8} = \frac{28}{45} \approx 0.62$$
(20)
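Summing the two product-rule terms directly in MATLAB (a minimal check, ours):

p = (8/10)*(7/9)*(6/8) + (8/10)*(2/9)*(7/8)   % P(G1 G3) = 28/45, about 0.6222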

(CP2) Law of total probability Suppose the class {Ai : 1 ≤ i ≤ n} of events is mutually exclusive and every outcome in E is in one of these events. Thus, E = A1E ⋁ A2E ⋁ ⋯ ⋁ AnE, a disjoint union. Then

$$P(E) = P(E|A_1)P(A_1) + P(E|A_2)P(A_2) + \cdots + P(E|A_n)P(A_n)$$
(21)

Example 7: A compound experiment

Five cards are numbered one through five. A two-step selection procedure is carried out as follows.

  1. Three cards are selected without replacement, on an equally likely basis.
    • If card 1 is drawn, the other two are put in a box
    • If card 1 is not drawn, all three are put in a box
  2. One of the cards in the box is drawn on an equally likely basis (from either two or three)

Let Ai be the event the card numbered i is drawn on the first selection and let Bi be the event the card numbered i is drawn on the second selection (from the box). Determine P(B5), P(A1B5), and P(A1|B5).

SOLUTION

From Example 4, we have P(Ai) = 6/10 and P(AiAj) = 3/10. This implies

$$P(A_iA_j^c) = P(A_i) - P(A_iA_j) = 3/10$$
(22)

Now we can draw card five on the second selection only if it is selected on the first drawing, so that B5 ⊂ A5. Also A5 = A1A5 ⋁ A1^cA5. We therefore have B5 = B5A5 = B5A1A5 ⋁ B5A1^cA5. By the law of total probability (CP2),

$$P(B_5) = P(B_5|A_1A_5)P(A_1A_5) + P(B_5|A_1^cA_5)P(A_1^cA_5) = \frac{1}{2} \cdot \frac{3}{10} + \frac{1}{3} \cdot \frac{3}{10} = \frac{1}{4}$$
(23)

Also, since A1B5 = A1A5B5,

$$P(A_1B_5) = P(A_1A_5B_5) = P(A_1A_5)P(B_5|A_1A_5) = \frac{3}{10} \cdot \frac{1}{2} = \frac{3}{20}$$
(24)

We thus have

$$P(A_1|B_5) = \frac{3/20}{5/20} = \frac{6}{10} = P(A_1)$$
(25)

Occurrence of event B5 has no effect on the likelihood of the occurrence of A1. This condition is examined more thoroughly in the chapter on "Independence of Events".
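The two-step procedure is simple enough to simulate. The following Monte Carlo sketch (our own, not one of the module's m-procedures) approximates P(B5) = 1/4 and P(A1|B5) = 0.6:

rng(0);  N = 100000;
A1 = false(N,1);  B5 = false(N,1);
for k = 1:N
  first = randperm(5,3);            % three cards without replacement
  A1(k) = any(first == 1);
  if A1(k)
    box = first(first ~= 1);        % card 1 drawn: other two go in the box
  else
    box = first;                    % otherwise all three go in the box
  end
  B5(k) = box(randi(numel(box))) == 5;
end
[mean(B5)  mean(A1(B5))]            % approximately [0.25  0.60]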

Often in applications data lead to conditioning with respect to an event but the problem calls for “conditioning in the opposite direction.”

Example 8: Reversal of conditioning

Students in a freshman mathematics class come from three different high schools. Their mathematical preparation varies. In order to group them appropriately in class sections, they are given a diagnostic test. Let Hi be the event that a student tested is from high school i, 1 ≤ i ≤ 3. Let F be the event the student fails the test. Suppose data indicate

$$P(H_1) = 0.2, \quad P(H_2) = 0.5, \quad P(H_3) = 0.3, \quad P(F|H_1) = 0.10, \quad P(F|H_2) = 0.02, \quad P(F|H_3) = 0.06$$
(26)

A student passes the exam. Determine for each i the conditional probability P(Hi|F^c) that the student is from high school i.

SOLUTION

$$P(F^c) = P(F^c|H_1)P(H_1) + P(F^c|H_2)P(H_2) + P(F^c|H_3)P(H_3) = 0.90 \cdot 0.2 + 0.98 \cdot 0.5 + 0.94 \cdot 0.3 = 0.952$$
(27)

Then

$$P(H_1|F^c) = \frac{P(F^cH_1)}{P(F^c)} = \frac{P(F^c|H_1)P(H_1)}{P(F^c)} = \frac{180}{952} \approx 0.1891$$
(28)

Similarly,

$$P(H_2|F^c) = \frac{P(F^c|H_2)P(H_2)}{P(F^c)} = \frac{490}{952} \approx 0.5147 \qquad\text{and}\qquad P(H_3|F^c) = \frac{P(F^c|H_3)P(H_3)}{P(F^c)} = \frac{282}{952} \approx 0.2962$$
(29)

The basic pattern utilized in the reversal is the following.

(CP3) Bayes' rule If E ⊂ A1 ⋁ A2 ⋁ ⋯ ⋁ An (as in the law of total probability), then

$$P(A_i|E) = \frac{P(A_iE)}{P(E)} = \frac{P(E|A_i)P(A_i)}{P(E)}, \quad 1 \le i \le n$$
(30)
The law of total probability yields P(E).

Such reversals are desirable in a variety of practical situations.

Example 9: A compound selection and reversal

Begin with items in two lots:

  1. Three items, one defective.
  2. Four items, one defective.

One item is selected from lot 1 (on an equally likely basis); this item is added to lot 2; a selection is then made from lot 2 (also on an equally likely basis). This second item is good. What is the probability the item selected from lot 1 was good?

SOLUTION

Let G1 be the event the first item (from lot 1) was good, and G2 be the event the second item (from the augmented lot 2) is good. We want to determine P(G1|G2). Now the data are interpreted as

$$P(G_1) = 2/3, \quad P(G_2|G_1) = 4/5, \quad P(G_2|G_1^c) = 3/5$$
(31)

By the law of total probability (CP2),

$$P(G_2) = P(G_1)P(G_2|G_1) + P(G_1^c)P(G_2|G_1^c) = \frac{2}{3} \cdot \frac{4}{5} + \frac{1}{3} \cdot \frac{3}{5} = \frac{11}{15}$$
(32)

By Bayes' rule (CP3),

$$P(G_1|G_2) = \frac{P(G_2|G_1)P(G_1)}{P(G_2)} = \frac{(4/5)(2/3)}{11/15} = \frac{8}{11} \approx 0.73$$
(33)
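The same two steps carry over directly to MATLAB; a minimal sketch (ours):

PG1 = 2/3;  PG2_G1 = 4/5;  PG2_G1c = 3/5;     % data
PG2 = PG1*PG2_G1 + (1 - PG1)*PG2_G1c          % total probability: 11/15
PG1_G2 = PG2_G1*PG1/PG2                       % Bayes' rule: 8/11, about 0.7273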

Example 10: Additional problems requiring reversals

  • Medical tests. Suppose D is the event a patient has a certain disease and T is the event a test for the disease is positive. Data are usually of the form: prior probability P(D) (or prior odds P(D)/P(D^c)), probability P(T|D^c) of a false positive, and probability P(T^c|D) of a false negative. The desired probabilities are P(D|T) and P(D^c|T^c).
  • Safety alarm. If D is the event a dangerous condition exists (say a steam pressure is too high) and T is the event the safety alarm operates, then data are usually of the form P(D), P(T|D^c), and P(T^c|D), or equivalently (e.g., P(T^c|D^c) and P(T|D)). Again, the desired probabilities that the safety alarm signals correctly are P(D|T) and P(D^c|T^c).
  • Job success. If H is the event of success on a job, and E is the event that an individual interviewed has certain desirable characteristics, the data are usually prior P(H) and reliability of the characteristics as predictors in the form P(E|H) and P(E|H^c). The desired probability is P(H|E).
  • Presence of oil. If H is the event of the presence of oil at a proposed well site, and E is the event of a certain geological structure (salt dome or fault), the data are usually P(H) (or the odds), P(E|H), and P(E|H^c). The desired probability is P(H|E).
  • Market condition. Before launching a new product on the national market, a firm usually examines the condition of a test market as an indicator of the national market. If H is the event the national market is favorable and E is the event the test market is favorable, data are a prior estimate P(H) of the likelihood the national market is sound, and data P(E|H) and P(E|H^c) indicating the reliability of the test market. What is desired is P(H|E), the likelihood the national market is favorable, given the test market is favorable.

The calculations, as in Example 8, are simple but can be tedious. We have an m-procedure called bayes to perform the calculations easily. The probabilities P(Ai) are put into a matrix PA and the conditional probabilities P(E|Ai) are put into matrix PEA. The desired probabilities P(Ai|E) and P(Ai|Ec) are calculated and displayed.

Example 11: MATLAB calculations for Example 8

>> PEA = [0.10 0.02 0.06];
>> PA =  [0.2 0.5 0.3];
>> bayes
Requires input PEA = [P(E|A1) P(E|A2) ... P(E|An)]
and PA = [P(A1) P(A2) ... P(An)]
Determines PAE  = [P(A1|E) P(A2|E) ... P(An|E)]
       and PAEc = [P(A1|Ec) P(A2|Ec) ... P(An|Ec)]
Enter matrix PEA of conditional probabilities  PEA
Enter matrix  PA of probabilities  PA
P(E) = 0.048
P(E|Ai)   P(Ai)     P(Ai|E)   P(Ai|Ec)
0.1000    0.2000    0.4167    0.1891
0.0200    0.5000    0.2083    0.5147
0.0600    0.3000    0.3750    0.2962
Various quantities are in the matrices PEA, PA, PAE, PAEc, named above

The procedure displays the results in tabular form, as shown. In addition, the various quantities are in the workspace in the matrices named, so that they may be used in further calculations without recopying.
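The listing of bayes is not reproduced in this module, but the quantities it reports follow directly from (CP2) and (CP3). A minimal sketch of the underlying computation (ours, reusing the same matrix names and the data of Example 8):

PEA = [0.10 0.02 0.06];  PA = [0.2 0.5 0.3];
PE   = PEA*PA';                    % law of total probability: 0.048
PAE  = (PEA.*PA)/PE;               % Bayes' rule: P(Ai|E)
PAEc = ((1 - PEA).*PA)/(1 - PE);   % same reversal, conditioning on Ec
disp([PEA' PA' PAE' PAEc'])        % matches the table above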

The following variation of Bayes' rule is applicable in many practical situations.

(CP3*) Ratio form of Bayes' rule
$$\frac{P(A|C)}{P(B|C)} = \frac{P(AC)}{P(BC)} = \frac{P(C|A)}{P(C|B)} \cdot \frac{P(A)}{P(B)}$$

The left hand member is called the posterior odds, which is the odds after knowledge of the occurrence of the conditioning event. The second fraction in the right hand member is the prior odds, which is the odds before knowledge of the occurrence of the conditioning event C. The first fraction in the right hand member is known as the likelihood ratio. It is the ratio of the probabilities (or likelihoods) of C for the two different probability measures P(⋅|A) and P(⋅|B).

Example 12: A performance test

As a part of a routine maintenance procedure, a computer is given a performance test. The machine seems to be operating so well that the prior odds it is satisfactory are taken to be ten to one. The test has probability 0.05 of a false positive and 0.01 of a false negative. A test is performed. The result is positive. What are the posterior odds the device is operating properly?

SOLUTION

Let S be the event the computer is operating satisfactorily and let T be the event the test is favorable. The data are P(S)/P(S^c) = 10, P(T|S^c) = 0.05, and P(T^c|S) = 0.01. Then, by the ratio form of Bayes' rule,

$$\frac{P(S|T)}{P(S^c|T)} = \frac{P(T|S)}{P(T|S^c)} \cdot \frac{P(S)}{P(S^c)} = \frac{0.99}{0.05} \cdot 10 = 198, \qquad\text{so that}\qquad P(S|T) = \frac{198}{199} = 0.9950$$
(34)
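The odds computation is a short MATLAB exercise (a minimal sketch, ours):

prior_odds = 10;  PT_Sc = 0.05;  PTc_S = 0.01;   % data
L = (1 - PTc_S)/PT_Sc;                           % likelihood ratio: 0.99/0.05 = 19.8
post_odds = L*prior_odds                         % posterior odds: 198
PS_T = post_odds/(1 + post_odds)                 % P(S|T) = 198/199, about 0.9950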

The following property serves to establish, in the chapters on "Independence of Events" and "Conditional Independence", a number of important properties for the concepts of independence and conditional independence of events.

(CP4) Some equivalent conditions If 0 < P(A) < 1 and 0 < P(B) < 1, then

$$P(A|B) * P(A) \quad\text{iff}\quad P(B|A) * P(B) \quad\text{iff}\quad P(AB) * P(A)P(B)$$
(35)
and
$$P(AB) * P(A)P(B) \quad\text{iff}\quad P(A^cB^c) * P(A^c)P(B^c) \quad\text{iff}\quad P(AB^c) \diamond P(A)P(B^c)$$
(36)

where * is <, ≤, =, ≥, or >, and ⋄ is >, ≥, =, ≤, or <, respectively.

Because of the role of this property in the theory of independence and conditional independence, we examine the derivation of these results.

VERIFICATION of (CP4)

  1. P(AB) * P(A)P(B) iff P(A|B) * P(A) (divide by P(B); we may exchange A and A^c)
  2. P(AB) * P(A)P(B) iff P(B|A) * P(B) (divide by P(A); we may exchange B and B^c)
  3. P(AB) * P(A)P(B) iff [P(A) - P(AB^c)] * P(A)[1 - P(B^c)] iff -P(AB^c) * -P(A)P(B^c) iff P(AB^c) ⋄ P(A)P(B^c)
  4. We may use item 3 to get P(AB) * P(A)P(B) iff P(AB^c) ⋄ P(A)P(B^c) iff P(A^cB^c) * P(A^c)P(B^c)

A number of important and useful propositions may be derived from these.

  1. P(A|B) + P(A^c|B) = 1, but, in general, P(A|B) + P(A|B^c) ≠ 1.
  2. P(A|B) > P(A) iff P(A|B^c) < P(A).
  3. P(A^c|B) > P(A^c) iff P(A|B) < P(A).
  4. P(A|B) > P(A) iff P(A^c|B^c) > P(A^c).

VERIFICATION — Exercises (see problem set)
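Before working the exercises, the equivalences are easy to spot-check numerically. A minimal MATLAB sketch (ours), using an arbitrary joint assignment:

PAB = 0.30;  PABc = 0.20;  PAcB = 0.10;  PAcBc = 0.40;   % any assignment summing to one
PA = PAB + PABc;  PB = PAB + PAcB;
[PAB > PA*PB, PAB/PB > PA, PAB/PA > PB]                  % the three conditions agree
[PAcBc > (1-PA)*(1-PB), PABc < PA*(1-PB)]                % as do the complement forms

Changing the four joint probabilities (keeping them nonnegative with unit sum) changes all the displayed logical values together, as (CP4) asserts.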

Repeated conditioning

Suppose conditioning by the event C has occurred. Additional information is then received that event D has occurred. We have a new conditioning event CD. There are two possibilities:

  1. Reassign the conditional probabilities. PC(A) becomes
    $$P_C(A|D) = \frac{P_C(AD)}{P_C(D)} = \frac{P(ACD)}{P(CD)}$$
    (37)
  2. Reassign the total probabilities: P(A) becomes
    $$P_{CD}(A) = P(A|CD) = \frac{P(ACD)}{P(CD)}$$
    (38)

Basic result: PC(A|D) = P(A|CD) = PD(A|C). Thus repeated conditioning by two events may be done in any order, or may be done in one step. This result extends easily to repeated conditioning by any finite number of events. This result is important in extending the concept of "Independence of Events" to "Conditional Independence". These conditions are important for many problems of probable inference.
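The identity can be checked numerically with any hypothetical assignment of the intersection probabilities; the numbers below are ours, for illustration only:

PACD = 0.06;  PCD = 0.10;  PC = 0.40;  PD = 0.25;   % hypothetical, consistent values
PC_A_D = (PACD/PC)/(PCD/PC)     % condition by C, then by D: 0.6
PD_A_C = (PACD/PD)/(PCD/PD)     % condition by D, then by C: 0.6
PA_CD  = PACD/PCD               % condition by CD in one step: 0.6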
