Inside Collection: Applied Probability

Collection by: Paul E Pfeiffer

# Problems on Conditional Independence, Given a Random Vector

Module by: Paul E Pfeiffer

## Exercise 1

The pair {X, Y} ci |H. X is conditionally exponential (u/3), given H = u; Y is conditionally exponential (u/5), given H = u; and H is uniform on [1, 2]. Determine a general formula for P(X > r, Y > s), then evaluate for r = 3, s = 10.

### Solution

$$P(X > r, Y > s \mid H = u) = e^{-ur/3}\, e^{-us/5} = e^{-au}, \qquad a = \frac{r}{3} + \frac{s}{5}$$
(1)
$$P(X > r, Y > s) = \int e^{-au} f_H(u)\, du = \int_1^2 e^{-au}\, du = \frac{1}{a}\left[e^{-a} - e^{-2a}\right]$$
(2)
For $r = 3$, $s = 10$, we have $a = 3$, so
$$P(X > 3, Y > 10) = \frac{1}{3}\left(e^{-3} - e^{-6}\right) = 0.0158$$
(3)
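The closed form can be cross-checked against direct numerical integration over u (a Python sketch; the document's own code is MATLAB, and the helper names here are chosen for illustration):

```python
from math import exp

def p_joint(r, s):
    """Closed form for P(X > r, Y > s) with H uniform on [1, 2]."""
    a = r / 3 + s / 5
    return (exp(-a) - exp(-2 * a)) / a

def p_joint_numeric(r, s, steps=100000):
    """Midpoint-rule integration of e^{-au} over u in [1, 2]."""
    a = r / 3 + s / 5
    h = 1.0 / steps
    return sum(exp(-a * (1 + (i + 0.5) * h)) * h for i in range(steps))

print(round(p_joint(3, 10), 4))          # closed form, ≈ 0.0158
print(round(p_joint_numeric(3, 10), 4))  # numeric check, same value
```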

## Exercise 2

A small random sample of size n = 12 is taken to determine the proportion of the student body which favors a proposal to expand the student Honor Council by adding two additional members “at large.” Prior information indicates that this proportion is about 0.6 = 3/5. From a Bayesian point of view, the population proportion is taken to be the value of a random variable H. It seems reasonable to assume a prior distribution H ~ beta (4, 3), giving a maximum of the density at (4 − 1)/(4 + 3 − 2) = 3/5. Seven of the twelve interviewed favor the proposition. What is the best mean-square estimate of the proportion, given this result? What is the conditional distribution of H, given this result?

### Solution

H ~ Beta (r, s), with r = 4, s = 3, n = 12, k = 7.

$$E[H \mid S = k] = \frac{k + r}{n + r + s} = \frac{7 + 4}{12 + 4 + 3} = \frac{11}{19}$$
(4)

The conditional distribution of H, given S = k = 7, is Beta (r + k, s + n − k) = Beta (11, 8).
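The posterior mean can be verified by numerically integrating against the unnormalized posterior density $u^{k+r-1}(1-u)^{n-k+s-1}$ (a Python sketch):

```python
# Beta(4, 3) prior, k = 7 successes in n = 12 trials:
# posterior density proportional to u^10 * (1 - u)^7, i.e., Beta(11, 8)
r, s, n, k = 4, 3, 12, 7

def post(u):
    return u ** (k + r - 1) * (1 - u) ** (n - k + s - 1)

steps = 200000
h = 1.0 / steps
us = [(i + 0.5) * h for i in range(steps)]
num = sum(u * post(u) * h for u in us)   # first moment of posterior (unnormalized)
den = sum(post(u) * h for u in us)       # normalizing constant
print(num / den, 11 / 19)                # both ≈ 0.5789
```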

## Exercise 3

Let {X_i : 1 ≤ i ≤ n} be a random sample, given H. Set W = (X_1, X_2, …, X_n). Suppose X is conditionally geometric (u), given H = u; i.e., suppose P(X = k | H = u) = u(1 − u)^k for all k ≥ 0. If H ~ uniform on [0, 1], determine the best mean square estimator for H, given W.

### Solution

$$E[H \mid W = k] = \frac{E[H I_{\{k\}}(W)]}{E[I_{\{k\}}(W)]} = \frac{E\{H E[I_{\{k\}}(W) \mid H]\}}{E\{E[I_{\{k\}}(W) \mid H]\}}$$
(5)
$$= \frac{\int u\, P(W = k \mid H = u) f_H(u)\, du}{\int P(W = k \mid H = u) f_H(u)\, du}, \qquad k = (k_1, k_2, \ldots, k_n)$$
(6)
$$P(W = k \mid H = u) = \prod_{i=1}^{n} u(1 - u)^{k_i} = u^n (1 - u)^{k^*}, \qquad k^* = \sum_{i=1}^{n} k_i$$
(7)
$$E[H \mid W = k] = \frac{\int_0^1 u^{n+1}(1 - u)^{k^*}\, du}{\int_0^1 u^{n}(1 - u)^{k^*}\, du} = \frac{\Gamma(n + 2)\Gamma(k^* + 1)}{\Gamma(n + k^* + 3)} \cdot \frac{\Gamma(n + k^* + 2)}{\Gamma(n + 1)\Gamma(k^* + 1)}$$
(8)
$$= \frac{n + 1}{n + k^* + 2}$$
(9)
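The formula (n + 1)/(n + k* + 2) can be spot-checked numerically for a particular sample (a Python sketch; the sample values below are arbitrary):

```python
# Sample of size n = 3 with observed values (2, 0, 3), so k* = 5
ks = [2, 0, 3]
n, kstar = len(ks), sum(ks)

def integrand(u, power):
    return u ** power * (1 - u) ** kstar

steps = 200000
h = 1.0 / steps
us = [(i + 0.5) * h for i in range(steps)]
num = sum(integrand(u, n + 1) * h for u in us)
den = sum(integrand(u, n) * h for u in us)
print(num / den, (n + 1) / (n + kstar + 2))  # both = 4/10 = 0.4
```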

## Exercise 4

Let {X_i : 1 ≤ i ≤ n} be a random sample, given H. Set W = (X_1, X_2, …, X_n). Suppose X is conditionally Poisson (u), given H = u; i.e., suppose P(X = k | H = u) = e^{−u} u^k / k!. If H ~ gamma (m, λ), determine the best mean square estimator for H, given W.

### Solution

$$E[H \mid W = k] = \frac{\int u\, P(W = k \mid H = u) f_H(u)\, du}{\int P(W = k \mid H = u) f_H(u)\, du}$$
(10)
$$P(W = k \mid H = u) = \prod_{i=1}^{n} \frac{e^{-u} u^{k_i}}{k_i!} = A\, e^{-nu} u^{k^*}, \qquad A = \prod_{i=1}^{n} \frac{1}{k_i!}, \quad k^* = \sum_{i=1}^{n} k_i$$
(11)
$$f_H(u) = \frac{\lambda^m u^{m-1} e^{-\lambda u}}{\Gamma(m)}$$
(12)
$$E[H \mid W = k] = \frac{\int_0^\infty u^{k^* + m} e^{-(\lambda + n)u}\, du}{\int_0^\infty u^{k^* + m - 1} e^{-(\lambda + n)u}\, du} = \frac{\Gamma(m + k^* + 1)}{(\lambda + n)^{k^* + m + 1}} \cdot \frac{(\lambda + n)^{k^* + m}}{\Gamma(m + k^*)} = \frac{m + k^*}{\lambda + n}$$
(13)
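As in the previous exercise, the closed form (m + k*)/(λ + n) can be spot-checked by numerical integration (a Python sketch; the parameter values are arbitrary):

```python
from math import exp

# Gamma(m, lam) prior with m = 2, lam = 1; sample of size n = 3 with k* = 4
m, lam, n, kstar = 2, 1.0, 3, 4

def integrand(u, power):
    return u ** power * exp(-(lam + n) * u)

# Integrate over [0, 30]; the integrand is negligible beyond that for these parameters
steps, upper = 300000, 30.0
h = upper / steps
us = [(i + 0.5) * h for i in range(steps)]
num = sum(integrand(u, kstar + m) * h for u in us)
den = sum(integrand(u, kstar + m - 1) * h for u in us)
print(num / den, (m + kstar) / (lam + n))  # both = 6/4 = 1.5
```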

## Exercise 5

Suppose the pair {N, H} is independent and {N, Y} ci |H. Use properties of conditional expectation and conditional independence to show that

$$E[g(N)h(Y) \mid H] = E[g(N)]\, E[h(Y) \mid H] \quad \text{a.s.}$$
(14)

### Solution

$E[g(N)h(Y) \mid H] = E[g(N) \mid H]\, E[h(Y) \mid H]$ a.s. by (CI6), and

$E[g(N) \mid H] = E[g(N)]$ a.s. by (CE5).

## Exercise 6

Consider the composite demand D introduced in the section on Random Sums in "Random Selection":

$$D = \sum_{n=0}^{\infty} I_{\{n\}}(N)\, X_n, \qquad \text{where } X_n = \sum_{k=0}^{n} Y_k, \quad Y_0 = 0$$
(15)

Suppose the pair {N, H} is independent, {N, Y_i} ci |H for all i, and E[Y_i | H] = e(H), invariant with i. Show that E[D | H] = E[N] E[Y | H] a.s.

### Solution

$$E[D \mid H] = \sum_{n=1}^{\infty} E[I_{\{n\}}(N) X_n \mid H] \quad \text{a.s.}$$
(16)
$$E[I_{\{n\}}(N) X_n \mid H] = \sum_{k=1}^{n} E[I_{\{n\}}(N) Y_k \mid H] = \sum_{k=1}^{n} P(N = n) E[Y \mid H] = P(N = n)\, n\, E[Y \mid H] \quad \text{a.s.}$$
(17)
$$E[D \mid H] = \sum_{n=1}^{\infty} n P(N = n) E[Y \mid H] = E[N] E[Y \mid H] \quad \text{a.s.}$$
(18)
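The identity can be illustrated by simulation with H held fixed at a value u, so that E[D | H = u] = E[N] E[Y | H = u] (a Python sketch; the distributions chosen for N and the Y_i are arbitrary):

```python
import random
from math import exp

random.seed(17)

# Fix H = u; take N ~ Poisson(2) independent of the Y_i,
# and Y_i conditionally exponential with rate u (mean 1/u = 2), given H = u
u = 0.5
mean_N, mean_Y = 2.0, 1.0 / u

def poisson(lam):
    """Knuth's method for a Poisson variate (adequate for small lam)."""
    L, k, p = exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

trials = 200000
total = 0.0
for _ in range(trials):
    N = poisson(mean_N)
    total += sum(random.expovariate(u) for _ in range(N))

print(total / trials, mean_N * mean_Y)  # both ≈ 4.0
```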

## Exercise 7

The transition matrix P for a homogeneous Markov chain is as follows (in m-file npr16_07.m):

$$P = \begin{bmatrix}
0.23 & 0.32 & 0.02 & 0.22 & 0.21 \\
0.29 & 0.41 & 0.10 & 0.08 & 0.12 \\
0.22 & 0.07 & 0.31 & 0.14 & 0.26 \\
0.32 & 0.15 & 0.05 & 0.33 & 0.15 \\
0.08 & 0.23 & 0.31 & 0.09 & 0.29
\end{bmatrix}$$
(19)
1. Obtain the absolute values of the eigenvalues, then consider increasing powers of P to observe the convergence to the long run distribution.
2. Take an arbitrary initial distribution p0 (as a row matrix). The product p0*P^k is the distribution for stage k. Note what happens as k becomes large enough to give convergence to the long run transition matrix. Does the end result change with a change of initial distribution p0?

### Solution

ev = abs(eig(P))'
ev = 1.0000    0.0814    0.0814    0.3572    0.2429
a = ev(4).^[2 4 8 16 24]
a = 0.1276    0.0163    0.0003    0.0000    0.0000
% By P^16 the rows agree to four places
p0 = [0.5 0 0 0.3 0.2];     % An arbitrarily chosen p0
p4 = p0*P^4
p4 =    0.2297    0.2622    0.1444    0.1644    0.1992
p8 = p0*P^8
p8 =    0.2290    0.2611    0.1462    0.1638    0.2000
p16 = p0*P^16
p16 =   0.2289    0.2611    0.1462    0.1638    0.2000
p0a = [0 0 0 0 1];          % A second choice of p0
p16a = p0a*P^16
p16a =  0.2289    0.2611    0.1462    0.1638    0.2000
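The same computations can be reproduced outside MATLAB; the following is a Python/NumPy sketch of the checks above:

```python
import numpy as np

P = np.array([
    [0.23, 0.32, 0.02, 0.22, 0.21],
    [0.29, 0.41, 0.10, 0.08, 0.12],
    [0.22, 0.07, 0.31, 0.14, 0.26],
    [0.32, 0.15, 0.05, 0.33, 0.15],
    [0.08, 0.23, 0.31, 0.09, 0.29],
])

ev = np.abs(np.linalg.eigvals(P))   # one eigenvalue is 1; the rest have modulus < 1
P16 = np.linalg.matrix_power(P, 16)

# By P^16 the rows agree: the limit is independent of the initial distribution
p0 = np.array([0.5, 0.0, 0.0, 0.3, 0.2])
p0a = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
print(np.round(p0 @ P16, 4))
print(np.round(p0a @ P16, 4))       # same to four decimal places
```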


## Exercise 8

The transition matrix P for a homogeneous Markov chain is as follows (in m-file npr16_08.m):

$$P = \begin{bmatrix}
0.2 & 0.5 & 0.3 & 0 & 0 & 0 & 0 \\
0.6 & 0.1 & 0.3 & 0 & 0 & 0 & 0 \\
0.2 & 0.7 & 0.1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.6 & 0.4 & 0 & 0 \\
0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 \\
0.1 & 0.3 & 0 & 0.2 & 0.1 & 0.1 & 0.2 \\
0.1 & 0.2 & 0.1 & 0.2 & 0.2 & 0.2 & 0
\end{bmatrix}$$
(20)
1. Note that the chain has two subchains, with states {1, 2, 3} and {4, 5}. Draw a transition diagram to display the two separate chains. Can any state in one subchain be reached from any state in the other?
2. Check the convergence as in part (a) of Exercise 7. What happens to the state probabilities for states 6 and 7 in the long run? What does that signify for these states? Can these states be reached from any state in either of the subchains? How would you classify these states?

### Solution

Increasing powers P^n show that the probabilities of being in states 6 and 7 go to zero. These states cannot be reached from any of the other states; they are transient states.
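This can be confirmed numerically with the matrix from (20) (a Python/NumPy sketch):

```python
import numpy as np

P = np.array([
    [0.2, 0.5, 0.3, 0,   0,   0,   0  ],
    [0.6, 0.1, 0.3, 0,   0,   0,   0  ],
    [0.2, 0.7, 0.1, 0,   0,   0,   0  ],
    [0,   0,   0,   0.6, 0.4, 0,   0  ],
    [0,   0,   0,   0.5, 0.5, 0,   0  ],
    [0.1, 0.3, 0,   0.2, 0.1, 0.1, 0.2],
    [0.1, 0.2, 0.1, 0.2, 0.2, 0.2, 0  ],
])

P16 = np.linalg.matrix_power(P, 16)
# Columns for states 6 and 7 vanish: once the chain leaves {6, 7} it never returns
print(P16[:, 5:].max())   # ≈ 0
```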

## Exercise 9

The transition matrix P for a homogeneous Markov chain is as follows (in m-file npr16_09.m):

$$P = \begin{bmatrix}
0.1 & 0.2 & 0.1 & 0.3 & 0.2 & 0 & 0.1 \\
0 & 0.6 & 0 & 0 & 0 & 0 & 0.4 \\
0 & 0 & 0.2 & 0.5 & 0 & 0.3 & 0 \\
0 & 0 & 0.6 & 0.1 & 0 & 0.3 & 0 \\
0.2 & 0.2 & 0.1 & 0.2 & 0 & 0.1 & 0.2 \\
0 & 0 & 0.2 & 0.7 & 0 & 0.1 & 0 \\
0 & 0.5 & 0 & 0 & 0 & 0 & 0.5
\end{bmatrix}$$
(21)
1. Check the transition matrix P for convergence, as in part (a) of Exercise 7. How many steps does it take to reach convergence to four or more decimal places? Does this agree with the theoretical result?
2. Examine the long run transition matrix. Identify transient states.
3. The convergence does not make all rows the same. Note, however, that there are two subgroups of similar rows. Rearrange rows and columns in the long run matrix so that identical rows are grouped. This suggests subchains. Rearrange the rows and columns of the transition matrix P in the same way and note that this gives a pattern similar to that for the matrix in Exercise 8. Raise the rearranged transition matrix to the power required for convergence.

### Solution

Examination of P^16 suggests that the sets {2, 7} and {3, 4, 6} of states form subchains. Rearrangement of P may be done as follows:

PA = P([2 7 3 4 6 1 5], [2 7 3 4 6 1 5])
PA =
0.6000    0.4000         0         0         0         0         0
0.5000    0.5000         0         0         0         0         0
0         0    0.2000    0.5000    0.3000         0         0
0         0    0.6000    0.1000    0.3000         0         0
0         0    0.2000    0.7000    0.1000         0         0
0.2000    0.1000    0.1000    0.3000         0    0.1000    0.2000
0.2000    0.2000    0.1000    0.2000    0.1000    0.2000         0
PA16 = PA^16
PA16 =
0.5556    0.4444         0         0         0         0         0
0.5556    0.4444         0         0         0         0         0
0         0    0.3571    0.3929    0.2500         0         0
0         0    0.3571    0.3929    0.2500         0         0
0         0    0.3571    0.3929    0.2500         0         0
0.2455    0.1964    0.1993    0.2193    0.1395    0.0000    0.0000
0.2713    0.2171    0.1827    0.2010    0.1279    0.0000    0.0000


It is clear that original states 1 and 5 are transient.
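The rearrangement and its limit can be verified with the matrix from (21) (a Python/NumPy sketch; indices are shifted to 0-based):

```python
import numpy as np

P = np.array([
    [0.1, 0.2, 0.1, 0.3, 0.2, 0,   0.1],
    [0,   0.6, 0,   0,   0,   0,   0.4],
    [0,   0,   0.2, 0.5, 0,   0.3, 0  ],
    [0,   0,   0.6, 0.1, 0,   0.3, 0  ],
    [0.2, 0.2, 0.1, 0.2, 0,   0.1, 0.2],
    [0,   0,   0.2, 0.7, 0,   0.1, 0  ],
    [0,   0.5, 0,   0,   0,   0,   0.5],
])

# MATLAB's P([2 7 3 4 6 1 5], [2 7 3 4 6 1 5]) with 0-based indices
idx = np.array([1, 6, 2, 3, 5, 0, 4])
PA = P[np.ix_(idx, idx)]
PA16 = np.linalg.matrix_power(PA, 16)
print(np.round(PA16, 4))
# Subchain {2, 7} settles to (5/9, 4/9) ≈ (0.5556, 0.4444);
# original states 1 and 5 (the last two positions) have vanishing probability: transient
```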

## Exercise 10

Use the m-procedure inventory1 (in m-file inventory1.m) to obtain the transition matrix for maximum stock M = 8, reorder point m = 3, and demand D ~ Poisson (4).

1. Suppose initial stock is six. What will the distribution for Xn, n = 1, 3, 5, be (i.e., the stock at the end of periods 1, 3, 5, before restocking)?
2. What will the long run distribution be?

### Solution

inventory1
Enter value M of maximum stock  8
Enter value m of reorder point  3
Enter row vector of demand values  0:20
Enter demand probabilities  ipoisson(4,0:20)
Result is in matrix P
p0 = [0 0 0 0 0 0 1 0 0];
p1 = p0*P
p1 =
Columns 1 through 7
0.2149    0.1563    0.1954    0.1954    0.1465    0.0733    0.0183
Columns 8 through 9
0         0
p3 = p0*P^3
p3 =
Columns 1 through 7
0.2494    0.1115    0.1258    0.1338    0.1331    0.1165    0.0812
Columns 8 through 9
0.0391    0.0096
p5 = p0*P^5
p5 =
Columns 1 through 7
0.2598    0.1124    0.1246    0.1311    0.1300    0.1142    0.0799
Columns 8 through 9
0.0386    0.0095
a = abs(eig(P))'
a =
Columns 1 through 7
1.0000    0.4427    0.1979    0.0284    0.0058    0.0005    0.0000
Columns 8 through 9
0.0000    0.0000
a(2)^16
ans =
2.1759e-06       % Convergence to at least five decimals for P^16
pinf = p0*P^16      % Use arbitrary p0,  pinf approx p0*P^16
pinf =  Columns 1 through 7
0.2622    0.1132    0.1251    0.1310    0.1292    0.1130    0.0789
Columns 8 through 9
0.0380    0.0093

