Martingale Sequences: Examples and Further Patterns

Module by: Paul E Pfeiffer.

Examples and further patterns

Theorem 1: A4-1 Sums of Independent Random Variables

Suppose $Y_N$ is an independent, integrable sequence. Set $X_n = \sum_{k=0}^n Y_k$, $n \ge 0$.

If $E[Y_n] = (\ge)\, 0$ for all $n \ge 1$ (read $=$ for the MG case, $\ge$ for the SMG case), then $X_N$ is a (S)MG.

Theorem 2: A4-2 Products of nonnegative random variables

Suppose $Y_N \sim Z_N$, with $Y_n \ge 0$ a.s. for all $n$. Consider $X_N$: $X_n = c \prod_{k=0}^n Y_k$, $c > 0$.

If $E[Y_{n+1} \mid W_n] = (\ge)\, 1$ a.s. for all $n$, then $(X_N, Z_N)$ is a (S)MG.

Proof

$X_n$ is measurable with respect to $W_n$, and $X_{n+1} = Y_{n+1} X_n$. Hence, $E[X_{n+1} \mid W_n] = X_n E[Y_{n+1} \mid W_n] = (\ge)\, X_n$ a.s. for all $n$.

Theorem 3: A4-3 Discrete random walk

Consider $Y_0 = 0$ and $\{Y_n : 1 \le n\}$ iid. Set $X_n = \sum_{k=0}^n Y_k$, $n \ge 0$. Suppose $P(Y_n = k) = p_k$. Let

$g_Y(s) = E[s^{Y_n}] = \sum_k p_k s^k, \quad s > 0$
(1)

Now $g_Y(1) = 1$, $g_Y'(1) = E[Y_n]$, $g_Y''(s) = \sum_k k(k-1) p_k s^{k-2} > 0$ for $s > 0$. Hence, $g_Y(s) = 1$ has at most two roots, one of which is $s = 1$.

  1. $s = 1$ is a minimum point iff $E[Y_n] = 0$, in which case $X_N$ is a MG (see A4-1).
  2. If $g_Y(r) = 1$ for $0 < r < 1$, then $E[r^{Y_n}] = 1$ for all $n \ge 1$. Let $Z_0 = 1$, $Z_n = r^{X_n} = \prod_{k=1}^n r^{Y_k}$. By A4-2, $Z_N$ is a MG.
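As a numerical illustration of case 2, take the (assumed) distribution $P(Y_n = 1) = p$, $P(Y_n = -1) = q = 1 - p$ with $p > 1/2$. Then $g_Y(s) = ps + q/s$, and the second root of $g_Y(s) = 1$ is $r = q/p \in (0, 1)$. A minimal Python sketch, with all numbers hypothetical:

```python
# Assumed example: biased +/-1 random walk with p = 0.6.
p, q = 0.6, 0.4
r = q / p                       # second root of g_Y(s) = 1, in (0, 1)

def g(s):
    return p * s + q / s        # generating function g_Y(s) = E[s^{Y_n}]

assert abs(g(1.0) - 1.0) < 1e-12    # s = 1 is always a root
assert abs(g(r) - 1.0) < 1e-12      # so E[r^{Y_n}] = 1

# One-step martingale check for Z_n = r^{X_n}:
# E[r^{X_{n+1}} | X_n = x] = r^x * g(r) = r^x
x = 3
cond_exp = p * r ** (x + 1) + q * r ** (x - 1)
assert abs(cond_exp - r ** x) < 1e-12
```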

For the MG case in Theorem IXA3-6, the $Y_n$ are centered at conditional expectation; that is, $E[Y_{n+1} \mid W_n] = 0$ a.s. The following is an extension of that pattern.

Theorem 4: A4-4 More general sums

Consider integrable $Y_N \sim Z_N$ and bounded $H_N \sim Z_N$. Let $W_n$ be a constant for $n < 0$ and $H_n = 1$ for $n < 0$. Set

$X_n = \sum_{k=0}^n \{Y_k - E[Y_k \mid W_{k-1}]\} H_{k-1}, \quad n \ge 0$
(2)

Then $(X_N, Z_N)$ is a MG.

Proof

$X_n$ is measurable with respect to $W_n$ for $n \ge 0$, and $E[X_{n+1} \mid W_n] = X_n + H_n E\{Y_{n+1} - E[Y_{n+1} \mid W_n] \mid W_n\} = X_n + 0$ a.s.


Theorem 5: A4-5 Sums of products

Suppose $Y_N$ is absolutely fair relative to $Z_N$, with $E[|Y_n|^k] < \infty$ for all $n$, for fixed $k > 0$. For $n \ge k$, set

$X_n = \sum_{0 \le i_1 < i_2 < \cdots < i_k \le n} Y_{i_1} Y_{i_2} \cdots Y_{i_k}$
(3)

Then $(X_{N_k}, Z_{N_k})$, $N_k = \{k, k+1, k+2, \ldots\}$, is a MG.

Proof

$X_{n+1} = X_n + K_{n+1}$, where

$K_{n+1} = Y_{n+1} \sum_{0 \le i_1 < \cdots < i_{k-1} \le n} Y_{i_1} Y_{i_2} \cdots Y_{i_{k-1}} = Y_{n+1} K_n^*$, with $K_n^*$ measurable with respect to $W_n$
(4)
$E[K_{n+1} \mid W_n] = K_n^* E[Y_{n+1} \mid W_n] = 0 \ \text{a.s.}, \quad n \ge k$
(5)
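The proof turns on the algebraic identity $X_{n+1} = X_n + Y_{n+1} K_n^*$, where $X_n$ is the $k$-th elementary symmetric function of $Y_0, \ldots, Y_n$ and $K_n^*$ is the $(k-1)$-st. The identity can be checked directly; a Python sketch with assumed sample values:

```python
from itertools import combinations
from math import prod

def e(vals, k):
    # k-th elementary symmetric function: sum of all k-fold products
    return sum(prod(c) for c in combinations(vals, k))

y = [0.5, -1.0, 2.0, -0.5, 1.5]   # hypothetical sample values of Y_0..Y_4
k = 3
for n in range(k - 1, len(y) - 1):
    x_n = e(y[:n + 1], k)          # X_n
    k_star = e(y[:n + 1], k - 1)   # K_n^*
    x_next = e(y[:n + 2], k)       # X_{n+1}
    assert abs(x_next - (x_n + y[n + 1] * k_star)) < 1e-12
```

Since $E[Y_{n+1} \mid W_n] = 0$ by absolute fairness, taking conditional expectations in this identity gives (5).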

We consider, next, some relationships with homogeneous Markov sequences.

Suppose $(X_N, Z_N)$ is a homogeneous Markov sequence with finite state space $E = \{1, 2, \ldots, M\}$ and transition matrix $P = [p(i,j)]$. A function $f$ on $E$ is represented by a column matrix $f = [f(1), f(2), \ldots, f(M)]^T$. Then $f(X_n)$ has value $f(k)$ when $X_n = k$. $Pf$ is an $M \times 1$ column matrix and $Pf(j)$ is the $j$th element of that matrix. Consider $E[f(X_{n+1}) \mid W_n] = E[f(X_{n+1}) \mid X_n]$ a.s. Now

$E[f(X_{n+1}) \mid X_n = j] = \sum_{k \in E} f(k) p(j,k) = Pf(j)$, so that $E[f(X_{n+1}) \mid W_n] = Pf(X_n)$
(6)

A nonnegative function $f$ on $E$ is called (super)harmonic for $P$ iff $Pf = (\le)\, f$.

Theorem 6: A4-6 Positive supermartingales and superharmonic functions.

Suppose $(X_N, Z_N)$ is a homogeneous Markov sequence with finite state space $E = \{1, 2, \ldots, M\}$ and transition matrix $P = [p(i,j)]$. For nonnegative $f$ on $E$, let $Y_n = f(X_n)$ for all $n \in N$. Then $(Y_N, Z_N)$ is a positive (super)martingale P(SR)MG iff $f$ is (super)harmonic for $P$.

Proof

As noted above, $E[f(X_{n+1}) \mid W_n] = Pf(X_n)$.

  1. If $f$ is (super)harmonic, then $Pf(X_n) = (\le)\, f(X_n) = Y_n$, so that
     $E[Y_{n+1} \mid W_n] = (\le)\, Y_n$ a.s.
     (7)
  2. If $(Y_N, Z_N)$ is a P(SR)MG, then
     $Y_n = f(X_n) = (\ge)\, E[f(X_{n+1}) \mid W_n] = Pf(X_n)$ a.s., so that $f$ is (super)harmonic
     (8)


An eigenfunction $f$ and associated eigenvalue $\lambda$ for $P$ satisfy $Pf = \lambda f$ (i.e., $(\lambda I - P)f = 0$). In most cases, $|\lambda| < 1$. For real $\lambda$, $0 < \lambda < 1$, the eigenfunctions are superharmonic functions. We may use the construction of Theorem IXA3-12 to obtain the associated MG.

Theorem 7: A4-7 Martingales induced by eigenfunctions for homogeneous Markov sequences

Let $(Y_N, Z_N)$ be a homogeneous Markov sequence, and $f$ be an eigenfunction with eigenvalue $\lambda$. Put $X_n = \lambda^{-n} f(Y_n)$. Then, by Theorem IXA3-12, $(X_N, Z_N)$ is a MG.
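For a small (assumed) transition matrix, the construction can be verified numerically: any right eigenvector $f$ of $P$ with real eigenvalue $0 < \lambda < 1$ satisfies $E[\lambda^{-(n+1)} f(Y_{n+1}) \mid Y_n = j] = \lambda^{-(n+1)} (Pf)(j) = \lambda^{-n} f(j)$. A sketch:

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
w, V = np.linalg.eig(P)
# pick a real eigenvalue strictly between 0 and 1 (excluding lambda = 1)
i = next(k for k in range(3) if abs(w[k].imag) < 1e-9 and 0 < w[k].real < 0.99)
lam, f = w[i].real, V[:, i].real

assert np.allclose(P @ f, lam * f)   # f is an eigenfunction: P f = lambda f
n = 5
lhs = lam ** -(n + 1) * (P @ f)      # E[X_{n+1} | Y_n = j] for each state j
rhs = lam ** -n * f                  # X_n evaluated at Y_n = j
assert np.allclose(lhs, rhs)         # the MG property of A4-7
```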

Theorem 8: A4-8 A dynamic programming example.

We consider a horizon of $N$ stages and a finite state space $E = \{1, 2, \ldots, M\}$.

  • Observe the system at prescribed instants
  • Take action on the basis of previous states and actions.

Suppose the observed state is $j$ and the action is $a \in A$. Two results ensue:

  1. A return $r(j, a)$ is realized.
  2. The system moves to a new state.

Let:

$Y_n =$ state in $n$th period, $0 \le n \le N-1$

$A_n =$ action taken on the basis of $Y_0, A_0, \ldots, Y_{n-1}, A_{n-1}, Y_n$

[$A_0$ is the initial action, based on the initial state $Y_0$]

A policy $\pi$ is a set of functions $(\pi_0, \pi_1, \ldots, \pi_{N-1})$, such that

$A_n = \pi_n(Y_0, A_0, \ldots, Y_{n-1}, A_{n-1}, Y_n), \quad 0 \le n \le N-1$
(9)

The expected return under policy $\pi$, when $Y_0 = j_0$, is

$R(\pi, j_0) = E\left[\sum_{k=0}^{N-1} r(Y_k, A_k)\right]$
(10)

The goal is to determine $\pi$ to maximize $R(\pi, j_0)$.

Let $Z_k = (Y_k, A_k)$ and $W_n = (Z_0, Z_1, \ldots, Z_n)$. If $\{Y_k : 0 \le k \le N-1\}$ is Markov, then use of (CI9) and (CI11) shows that for any policy the $Z$-process is Markov. Hence

$E[I_M(Y_{n+1}) \mid W_n] = E[I_M(Y_{n+1}) \mid Z_n]$ a.s., for all $n$, $0 \le n \le N-1$, and all Borel sets $M$
(11)

We assume time homogeneity in the sense that

$P(Y_{n+1} = j \mid Y_n = i, A_n = a) = p(j \mid i, a)$, invariant with $n$; $i, j \in E$, $a \in A$
(12)

We take a dynamic programming approach.

Define recursively $f_N, f_{N-1}, \ldots, f_0$ as follows:

$f_N(j) = 0$ for all $j \in E$. For $n = N, N-1, \ldots, 2, 1$, set

$f_{n-1}(j) = \max\left\{r(j, a) + \sum_{k \in E} f_n(k)\, p(k \mid j, a) : a \in A\right\}$
(13)

Put

$X_n = \sum_{k=1}^n \{f_k(Y_k) - E[f_k(Y_k) \mid W_{k-1}]\}$
(14)

Then, by A4-4, $(X_N, Z_N)$ is a MG, with $E[X_n] = 0$, $0 \le n \le N$, and

$f_{n-1}(Y_{n-1}) \ge r(Y_{n-1}, A_{n-1}) + \sum_{k \in E} f_n(k)\, p(k \mid Z_{n-1}) = r(Y_{n-1}, A_{n-1}) + E[f_n(Y_n) \mid W_{n-1}]$
(15)


We may therefore assert

$0 = E[X_N] = E\left[\sum_{k=1}^N \{f_k(Y_k) - E[f_k(Y_k) \mid W_{k-1}]\}\right] \ge E\left[\sum_{k=1}^N \{f_k(Y_k) + r(Y_{k-1}, A_{k-1}) - f_{k-1}(Y_{k-1})\}\right]$
(16)
$= E\left[\sum_{k=0}^{N-1} r(Y_k, A_k)\right] + E[f_N(Y_N)] - E[f_0(Y_0)] = E\left[\sum_{k=0}^{N-1} r(Y_k, A_k)\right] - E[f_0(Y_0)]$
(17)

Hence, $R(\pi, Y_0) \le E[f_0(Y_0)]$. For $Y_0 = j_0$, $R(\pi, j_0) \le f_0(j_0)$. If a policy $\pi^*$ can be found which yields equality, then $\pi^*$ is an optimal policy.

The following procedure leads to such a policy.

  • For each $j \in E$, let $\pi_{n-1}^*(Y_0, A_0, Y_1, A_1, \ldots, A_{n-2}, j) = \pi_{n-1}^*(j)$ be the action which maximizes
    $r(j, a) + \sum_{k \in E} f_n(k)\, p(k \mid j, a) = r(j, a) + E[f_n(Y_n) \mid Y_{n-1} = j, A_{n-1} = a]$
    (18)
    Thus, $A_n^* = \pi_n^*(Y_n)$.
  • Now, $f_{n-1}(Y_{n-1}) = r(Y_{n-1}, A_{n-1}^*) + E[f_n(Y_n) \mid Z_{n-1}^*]$, which yields equality in the argument above. Thus, $R(\pi^*, j) = f_0(j)$ and $\pi^*$ is optimal.

Note that $\pi^*$ is a Markov policy, $A_n^* = \pi_n^*(Y_n)$. The functions $f_n$ depend on the future stages, but once determined, the policy is Markov.
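The backward recursion (13) and the extraction of the Markov policy $\pi^*$ can be sketched on a toy model. Everything here (states, actions, returns, transition probabilities) is an assumption for illustration, not from the text:

```python
import numpy as np

N = 4                                  # horizon
r = np.array([[1.0, 0.0],              # r[j, a]: return in state j under action a
              [0.0, 2.0]])
p = np.array([[[0.9, 0.1],             # p[a, j, k] = p(k | j, a)
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])

f = np.zeros((N + 1, 2))               # f[N] = 0, the recursion's base case
pi = np.zeros((N, 2), dtype=int)       # pi[n-1, j]: maximizing action
for n in range(N, 0, -1):
    Q = r.T + p @ f[n]                 # Q[a, j] = r(j, a) + sum_k f_n(k) p(k|j,a)
    f[n - 1] = Q.max(axis=0)           # equation (13)
    pi[n - 1] = Q.argmax(axis=0)       # the greedy (Markov) action pi*_{n-1}(j)

# Last stage: with f_N = 0, the recursion just maximizes the one-period return.
assert np.allclose(f[N - 1], r.max(axis=1))
```

Here $f_0(j)$ bounds the expected return $R(\pi, j)$ of every policy, and the greedy policy attains it.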

Theorem 9: A4-9 Doob's martingale

Let $X$ be an integrable random variable and $Z_N$ an arbitrary sequence of random vectors. For each $n$, let $X_n = E[X \mid W_n]$. Then $(X_N, Z_N)$ is a MG.

Proof

$E[|X_n|] = E\{|E[X \mid W_n]|\} \le E\{E[|X| \mid W_n]\} = E[|X|] < \infty$
(19)
$E[X_{n+1} \mid W_n] = E\{E[X \mid W_{n+1}] \mid W_n\} = E[X \mid W_n] = X_n$ a.s.
(20)
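A concrete (assumed) instance: let $X$ be the number of heads in $N$ fair coin flips and $W_n$ the first $n$ flips. Then $X_n = E[X \mid W_n] = S_n + (N - n)/2$, where $S_n$ counts heads so far, and the one-step MG property can be checked exactly:

```python
N = 10   # hypothetical number of fair coin flips

def X_n(s, n):
    # E[X | W_n] = heads so far + expected heads among the remaining flips
    return s + (N - n) / 2

# E[X_{n+1} | W_n] averages over the two equally likely next flips
for n in range(N):
    for s in range(n + 1):
        cond_exp = 0.5 * X_n(s + 1, n + 1) + 0.5 * X_n(s, n + 1)
        assert abs(cond_exp - X_n(s, n)) < 1e-12   # equals X_n: a MG
```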

Theorem 10: A4-9a Best mean-square estimators

If $X \in L^2$, then $X_n = E[X \mid W_n]$ is the best mean-square estimator of $X$, given $W_n = (Z_0, Z_1, \ldots, Z_n)$. $(X_N, Z_N)$ is a MG.

Theorem 11: A4-9b Futures pricing

Let $X_N$ be a sequence of "spot" prices for a commodity. Let $t_0$ be the present and $t_0 + T$ a fixed future time. The agent can be expected to know the past history $U_{t_0} = (X_0, X_1, \ldots, X_{t_0})$, and will update as $t$ increases beyond $t_0$. Put $Y_k = E[X_{t_0+T} \mid U_{t_0+k}]$, the expected futures price, given the history up to $t_0 + k$. Then $\{Y_k : 0 \le k \le T\}$ is a Doob's MG, with $Y = X_{t_0+T}$, relative to $\{Z_k : 0 \le k \le T\}$, where $Z_0 = U_{t_0}$ and $Z_k = X_{t_0+k}$ for $1 \le k \le T$.

Theorem 12: A4-9c Discounted futures

Assume the rate of return is $r$ per unit time, so that $\alpha = 1/(1+r)$ is the discount factor. Let

$V_k = E[\alpha^{T-k} X_{t_0+T} \mid U_{t_0+k}] = \alpha^{T-k} Y_k$
(21)

Then

$E[V_{k+1} \mid U_{t_0+k}] = \alpha^{T-k-1} E[Y_{k+1} \mid U_{t_0+k}] = \alpha^{T-k-1} Y_k > \alpha^{T-k} Y_k = V_k$ a.s.
(22)

Thus $\{V_k : 0 \le k \le T\}$ is a SMG relative to $\{Z_k : 0 \le k \le T\}$.

An implication of martingale theory is that all methods of determining profitable patterns of prediction from past history are doomed to failure.


Theorem 13: A4-10 Present discounted value of capital

If $\alpha = 1/(1+r)$ is the discount factor, $X_n$ is the dividend at time $n$, and $V_n$ is the present value, at time $n$, of all future returns, then

$V_n = \sum_{k=1}^{\infty} \alpha^k X_{n+k}$, so that $V_{n+1} = \sum_{k=1}^{\infty} \alpha^k X_{n+k+1} = \sum_{k=2}^{\infty} \alpha^{k-1} X_{n+k}$
(23)
$= \frac{1}{\alpha} \sum_{k=1}^{\infty} \alpha^k X_{n+k} - X_{n+1} = (1+r) V_n - X_{n+1}$
(24)
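The recursion (24) holds term by term and can be checked for any dividend stream that vanishes after some time, so the series reduces to a finite sum. A Python sketch with assumed dividends:

```python
r = 0.05                 # assumed rate of return
alpha = 1 / (1 + r)      # discount factor
X = [0.0, 1.0, 2.0, 1.5, 3.0, 0.0, 0.0]   # hypothetical dividends X_0..X_6

def V(n):
    # present value at time n of all future dividends (finite sum here)
    return sum(alpha ** k * X[n + k] for k in range(1, len(X) - n))

for n in range(4):
    # V_{n+1} = (1 + r) V_n - X_{n+1}, equation (24)
    assert abs(V(n + 1) - ((1 + r) * V(n) - X[n + 1])) < 1e-12
```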

Note that $V_{n+1} = (\ge)\, V_n$ iff $r = (\ge)\, X_{n+1}/V_n$. Set $Y_n = E[V_n \mid U_n]$. Then $Y_{n+1} = (1+r) E[V_n \mid U_{n+1}] - X_{n+1}$ a.s., so that

$E[Y_{n+1} \mid U_n] = (1+r) Y_n - E[X_{n+1} \mid U_n]$
(25)

Thus, $(Y_N, X_N)$ is a (S)MG iff

$r = (\ge)\ \dfrac{E[X_{n+1} \mid U_n]}{E[V_n \mid U_n]} = \dfrac{\text{expected return next period, given } U_n}{\text{expected present value, given } U_n}$
(26)

Summary: Convergence of Submartingales

The submartingale convergence theorem

Theorem 14

If $(X_N, Z_N)$ is a SMG with $\lim_n E[X_n^+] < \infty$, then there exists $X$, measurable with respect to $W_\infty$, such that $X_n \to X$ a.s.

Uniform integrability and some convergence conditions

Definition. The class $\{X_t : t \in T\}$ is uniformly integrable iff

$\sup\{E[I_{\{|X_t| > a\}}\, |X_t|] : t \in T\} \to 0 \ \text{as}\ a \to \infty$
(27)
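For a single integrable variable, the quantity in (27) can be computed in closed form. For $X \sim \mathrm{Exp}(1)$, $E[X I_{\{X > a\}}] = \int_a^\infty x e^{-x}\,dx = (a+1)e^{-a} \to 0$; a sketch (the distribution is an assumed example):

```python
import math

def tail_expectation(a):
    # For X ~ Exp(1): E[X 1_{X > a}] = (a + 1) e^{-a}
    return (a + 1) * math.exp(-a)

vals = [tail_expectation(a) for a in (1, 5, 10, 20)]
assert all(u > v for u, v in zip(vals, vals[1:]))   # decreasing in a
assert vals[-1] < 1e-7                              # -> 0 as a -> infinity
```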

Theorem 15

Any of the following conditions ensures uniform integrability:

  1. The class is dominated by an integrable random variable $Y$.
  2. The class is finite and integrable.
  3. There is a u.i. class $\{Y_t : t \in T\}$ such that $|X_t| \le |Y_t|$ a.s. for all $t \in T$.
  4. $X$ integrable implies the Doob's MG $\{X_n = E[X \mid W_n] : n \in N\}$ is u.i.

Definition. The class $\{X_t : t \in T\}$ is uniformly absolutely continuous iff for each $\epsilon > 0$ there is a $\delta > 0$ such that $P(A) < \delta$ implies $\sup\{E[I_A |X_t|] : t \in T\} < \epsilon$.

Theorem 16

$X_T = \{X_t : t \in T\}$ is u.i. iff both (i) $X_T$ is u.a.c., and (ii) $\sup\{E[|X_t|] : t \in T\} < \infty$.

Definition. $X_n \xrightarrow{P} X$ iff $P(|X_n - X| > \epsilon) \to 0$ as $n \to \infty$, for all $\epsilon > 0$.

$X_n \xrightarrow{L^p} X$ iff $E[|X_n - X|^p] \to 0$ as $n \to \infty$
(28)

Theorem 17

$X_n \to X$ a.s. implies $X_n \xrightarrow{P} X$.

Theorem 18

If (i) $X_n \xrightarrow{L^p} X$, (ii) $X_n \to X$ a.s., and (iii) $\lim_n E[X_n \mid Z]$ exists a.s., then

$\lim_n E[X_n \mid Z] = E[X \mid Z]$ a.s.
(29)

Theorem 19

Suppose $(X_N, Z_N)$ is a (S)MG. Consider the following:

  • (A): $\lim_n E[X_n^+] < \infty$ or, equivalently, $\sup_n E[|X_n|] < \infty$
  • (a): $X_N$ is uniformly integrable.
  • (a+): $X_N^+$ is uniformly integrable.
  • (b): $X_n \xrightarrow{L^1} X$.
  • (b+): $X_n^+ \xrightarrow{L^1} X$.
  • (c): There is an integrable $X$, measurable with respect to $W_\infty$, such that
    $X_n \to X$ a.s. and $E[X \mid W_n] = (\ge)\, X_n$ a.s. for all $n \in N$
    (30)
  • (c′): Condition (c) with $\ge$, even for a MG.
  • (d): There is an integrable $X$ with $E[X \mid W_n] = (\ge)\, X_n$ a.s. for all $n \in N$.
  • (d′): Condition (d) with $\ge$, even for a MG.

Then

  1. Each of the propositions (a) through (d′) implies (A), hence SMG convergence.
  2. (a) ⇒ (a+)
  3. (a) ⇔ (b) ⇒ (c) ⇒ (d)
  4. (a+) ⇔ (b+) ⇔ (c′) ⇔ (d′)
  5. For a MG, (d) ⇒ (a), so that (a) ⇔ (b) ⇔ (c) ⇔ (d)

The notion of regularity is characterized in terms of the conditions in the theorem.

Definition. A martingale $(X_N, Z_N)$ is said to be martingale regular iff the equivalent conditions (a), (b), (c), (d) in the theorem hold.

A submartingale $(X_N, Z_N)$ is said to be submartingale regular iff the equivalent conditions (a+), (b+), (c′), (d′) in the theorem hold.

Remarks

  1. Since a MG is a SMG, a martingale regular MG is also submartingale regular.
  2. It is not true, in general, that a submartingale regular SMG is martingale regular. We do have for a SMG: (a) ⇒ (b) ⇒ (c) ⇒ (d).
  3. Regularity may be viewed in terms of membership of $X$ in the (S)MG. The condition $E[X \mid W_n] = (\ge)\, X_n$ a.s. is indicated by saying $X$ belongs to the (S)MG, or by saying the (S)MG is closed (on the right) by $X$.

Summary

For a martingale $(X_N, Z_N)$

  1. If martingale regular, then $X_n \to X$ a.s., with $X$ measurable with respect to $W_\infty$, and $X_n = E[X \mid W_n]$ a.s. for all $n \in N$, so that $E[X_{n+k} \mid W_n] = E\{E[X \mid W_{n+k}] \mid W_n\} = E[X \mid W_n] = X_n$ a.s., and $E[X_0] = E[X_n] = E[X]$ for all $n \in N$.
  2. If submartingale regular, but not martingale regular, then $X_n \to X$ a.s., with $X$ measurable with respect to $W_\infty$, but $E[X \mid W_n] \ge X_n$ a.s. for all $n \in N$, and $E[X_0] = E[X_n] \le E[X] < \infty$ for all $n \in N$.

For a submartingale $(X_N, Z_N)$

Either martingale regularity or submartingale regularity implies

$X_n \to X$ a.s., with $X$ measurable with respect to $W_\infty$, and $X_n \le E[X_{n+1} \mid W_n] \le E[X \mid W_n]$ a.s. for all $n \in N$,

and $E[X_0] \le E[X_n] \le E[X] < \infty$ for all $n \in N$.

If $X_N$ is uniformly integrable, then $E[X_n] \to E[X]$.

Theorem 20

If $(X_N, Z_N)$ is a MG with $E[X_n^2] < K$ for all $n \in N$, then the process is MG regular, with

$X_n \to X$ a.s., $E[(X - X_n)^2] \to 0$, and $E[X_n] = E[X]$ for all $n \in N$
(31)
