Abstract
On jump Markov processes. This is just a place-holder at this stage.
Keywords: something or other
A jump Markov process proceeds by the Markov system performing finite, that is, not infinitesimal, jumps on moving from x to x + ξ. We will denote the jump by ξ. Although we will be able to demonstrate that the Markov propagator density function for such a process can be fully characterized by two functions, as was also the case with the continuous Markov processes, on account of this finiteness we will no longer be able to truncate the Kramers-Moyal equations. There is no Fokker-Planck equivalent for the jump Markov process.
Because the Kramers-Moyal equations don’t truncate, there is another machinery, the master equations, which are integro-differential equations, that is preferably used in this context. Although still analytically intractable in general, the master equations provide us with tools for numerical investigations of jump processes.
The jump Markov process propagator density looks as follows

    Π(ξ | dt; x, t) = a(x, t) dt w(ξ | x, t) + [1 − a(x, t) dt] δ(ξ),    (1)

where a(x, t) dt is the probability that the process at x at time t will jump in the next dt, and w(ξ | x, t) is the probability density of the jump displacement ξ, given that a jump occurs.
The same propagator density is sometimes expressed as

    Π(ξ | dt; x, t) = W(ξ | x, t) dt + [1 − a(x, t) dt] δ(ξ),    (2)

where W(ξ | x, t) = a(x, t) w(ξ | x, t) is called the consolidated characterizing function of a jump Markov process. W(ξ | x, t) dt dξ is the probability that the Markov process at x at time t will jump in the next dt by between ξ and ξ + dξ away from x. Functions a and w can be expressed in terms of W as follows

    a(x, t) = ∫ W(ξ | x, t) dξ,    w(ξ | x, t) = W(ξ | x, t) / a(x, t).    (3)
The master equations can be expressed in terms of the consolidated characterizing function, in which case they look as follows (forward and backward):

    ∂P(x, t | x₀, t₀)/∂t = ∫ [ W(ξ | x − ξ, t) P(x − ξ, t | x₀, t₀) − W(ξ | x, t) P(x, t | x₀, t₀) ] dξ,    (4)

    −∂P(x, t | x₀, t₀)/∂t₀ = ∫ W(ξ | x₀, t₀) [ P(x, t | x₀ + ξ, t₀) − P(x, t | x₀, t₀) ] dξ.    (5)
Unlike continuous Markov processes, the jump processes can be characterized by yet another function, called the next jump density function

    p(τ, ξ | x, t).    (6)

The function p(τ, ξ | x, t) is the probability density that a jump Markov process that is at x at time t will perform its next jump at about τ past t, that is, between t + τ and t + τ + dτ, landing between x + ξ and x + ξ + dξ, or, to put it in other words, that the next jump will happen within dτ after t + τ, landing the system within dξ of x + ξ. This function can be expressed in terms of a and w as follows

    p(τ, ξ | x, t) = exp(−∫₀^τ a(x, t + τ′) dτ′) a(x, t + τ) w(ξ | x, t + τ).    (7)
This will simplify for temporally homogeneous processes to

    p(τ, ξ | x) = a(x) e^(−a(x) τ) w(ξ | x).    (8)
Completely homogeneous jump Markov processes are particularly interesting, because they’re the simplest possible and so we can say something about them. The two characterizing functions, a and w, become

    a(x, t) = a,    w(ξ | x, t) = w(ξ),    (9)

with the next jump density function simplifying to

    p(τ, ξ) = a e^(−a τ) w(ξ).    (10)
An important thing to notice is that whereas we could characterize completely homogeneous continuous Markov processes (they are called Wiener processes) by two constants, here we have only one constant, a, and one irreducible function of the jump itself, w(ξ).
It is interesting to consider w(ξ) given by the exponential, Gaussian and Cauchy-Lorentz distributions. The systems so defined become quite tractable and, more importantly, applicable to a range of physical phenomena, for example, diffusion and Brownian motion.
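The next jump density (10) suggests a direct simulation recipe: draw an exponential pausing time with rate a, then draw a displacement from w, and repeat. The following is a minimal sketch; the rate a = 2 and the Gaussian choice of w are illustrative assumptions, not prescribed by the text.

```python
import random

# A minimal simulation sketch based on the next jump density (10):
# pausing times are exponential with an assumed rate a, and jump
# displacements are drawn from an assumed Gaussian w; both choices
# are illustrative.
def simulate_jump_process(a, sample_jump, x0, t0, t_end, rng):
    x, t = x0, t0
    path = [(t, x)]
    while True:
        tau = rng.expovariate(a)          # pausing time ~ a*exp(-a*tau)
        if t + tau > t_end:
            break
        t += tau
        x += sample_jump(rng)             # displacement xi ~ w(xi)
        path.append((t, x))
    return path

rng = random.Random(42)
path = simulate_jump_process(a=2.0, sample_jump=lambda r: r.gauss(0.0, 1.0),
                             x0=0.0, t0=0.0, t_end=1000.0, rng=rng)
n_jumps = len(path) - 1   # with rate a = 2 over 1000 time units, ~2000 jumps
```

Between jumps the trajectory is constant, which is what makes the process a pure jump process rather than a diffusion.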
In this module we are also going to take another look at quantum mechanics, asking if quantum mechanics can be simulated by jump Markov processes. There was a vigorous discussion about this in the literature.
It all started with a paper by Hardy, Home, Squires and Whitaker, “Realism and the quantum-mechanical two-state oscillator,” published on the 1st of April 1992 in Physical Review A, vol. 45, no. 7, pp. 4267–4270. Nearly two years later Gillespie referred to this paper in his article “Why quantum mechanics cannot be formulated as a Markov process,” published in March 1994 in Physical Review A, vol. 49, no. 3, pp. 1607–1612. A year after that, in May 1995, Garbaczewski and Olkiewicz responded with “Why quantum dynamics can be formulated as a Markov process,” published in Physical Review A, vol. 51, no. 5, pp. 3445–3453, to which Gillespie responded in June 1996 in “Comment on ‘Why quantum dynamics can be formulated as a Markov process’”, published in Physical Review A, vol. 53, no. 6, pp. 4602–4604. Not to be outdone, Garbaczewski and Olkiewicz published their riposte in August 1996, “Comment on ‘Why quantum mechanics cannot be formulated as a Markov Process’” in Physical Review A, vol. 54, no. 2, pp. 1733–1736. In the same issue of Physical Review A, following the article by Garbaczewski and Olkiewicz, Gillespie presented his final one-page argument, pp. 1737–1738. But the last word in this exchange belonged to Hardy, Home, Squires and Whitaker, who in their “Comment on ‘Why quantum mechanics cannot be formulated as a Markov process’”, published in October 1997 in Physical Review A, vol. 56, no. 4, pp. 3301–3303, pointed out that they did not claim their process described in the original 1992 paper to be a Markov process. They demonstrated a concrete model that satisfied their requirements (of 1992) and pointed out various problems with Gillespie’s own arguments.
The important moral of the story is that not every stochastic jump process must be a Markov process to begin with. Markovianism is quite a special requirement that may not always apply. Another moral is that some of the reasoning that Gillespie is so fond of in his papers and in his book (which here we faithfully follow, because it’s an excellent introduction to Markov processes) is not necessarily unquestionable. In particular, the trick of dividing dt into n sub-intervals of length dt/n does stretch things a bit. Shouldn’t we treat dt as indivisible instead? In effect Gillespie’s trick leads to smoothness conditions that may be stronger than necessary. Gillespie’s dt is not infinitesimal enough.
To understand this fascinating discussion, though, we must master the formalism of jump Markov processes first, and so we begin.
A jump process found at x at time t will most likely stay at x for a while, then it’ll suddenly jump away from x. We can’t normally say how long it’s going to stay at x, but we are interested in processes for which a probability exists that the jump will occur within the next dt. Not precisely at dt after t, mind you, but within the dt following t. And so

    q(x, t; dt) ≡ the probability that the process, being at x at time t, will jump away from x at some instant in [t, t + dt)    (11)
is this probability. The probability function q(x, t; dt) does not have to exist, but if it does not, then there’s not much else we can say about such processes. So we stick to this assumption. There’s one other thing we can obviously assume, namely that q(x, t; 0) = 0, meaning that the process is stuck at x at t. We will also assume that q is a smooth function of the times t and dt. These two assumptions, in combination with the assumption that q exists in the first place, go quite far already. Whether we assume too much by doing so will transpire later, when we attempt to apply the theory to known physical processes. But for the time being, the assumptions are purely operational: we need them so that we can develop a tractable theory. It will turn out eventually that the assumption of Markovianism demands a certain degree of smoothness of q, although it will also turn out that this is a sufficient, not a necessary, condition.
Now, once the process has jumped, it’ll land somewhere, and where it lands may be described by another probability,

    w(ξ | x, t) dξ.    (12)

This is the probability that once the process has jumped it’ll land between some finite ξ and ξ + dξ away from x, that is, between x + ξ and x + ξ + dξ. And here we assume also that w is a smooth function of time t, again so that the theory is tractable, but it’ll transpire down the road that the assumption of Markovianism demands a certain degree of smoothness of w, although this will be a sufficient, not a necessary, condition too.

Both functions q and w must be non-negative, because they are probabilities. Additionally, w, being a probability density, must integrate to 1 over ξ.
The proof of this equation is rooted in the Markov definition of the underlying process and in the Proportional Function lemma, which was proved in module m44258:
Lemma 3.1 (Proportional Function).
If f is a smooth function such that f(dt) = n f(dt/n) for every positive integer n,
then f(dt) = a dt, where a does not depend on dt (but it may depend on other parameters of the problem).
We prove equation (13), that is, q(x, t; dt) = a(x, t) dt, as follows.
Proof. We are going to demonstrate that

    q(x, t; dt) = n q(x, t; dt/n),    (14)

from which equation (13) follows by the use of the Proportional Function lemma. To get there we divide the infinitesimal dt into n portions of length dt/n each. If q(x, tᵢ; dt/n) is the probability that the system that’s in x at tᵢ will jump away from x in the next dt/n, then 1 − q(x, tᵢ; dt/n) is the probability that the system will not jump in the next dt/n. Since we have divided dt into n segments, the probability that the system will not jump in dt must be equal to the product of probabilities that the system will not jump in each time segment, that is

    1 − q(x, t; dt) = ∏ᵢ₌₁ⁿ [1 − q(x, tᵢ; dt/n)],  where tᵢ = t + (i − 1) dt/n.    (15)

Alas, because dt is supposed to be an infinitesimal, the times tᵢ are infinitesimally close to each other, so we can replace them all with just t, which yields

    1 − q(x, t; dt) = [1 − q(x, t; dt/n)]ⁿ.    (16)

Now we make yet another use of the fact that dt is infinitesimal. Let us recall that q(x, t; 0) = 0 and since q is smooth, we can readily assume that q(x, t; dt/n) is infinitesimal too. Therefore we can approximate

    [1 − q(x, t; dt/n)]ⁿ ≈ 1 − n q(x, t; dt/n),    (17)

wherefrom, in combination with equation (16), equation (14) follows, which ends the proof. □
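A quick numerical illustration of the role that equations (16) and (17) play: with a per-segment no-jump probability 1 − a dt/n, the n-segment product approaches the exponential e^(−a dt) that reappears in the finite-time result derived later; the rate and interval below are illustrative.

```python
import math

# Numerical illustration (not part of the proof): the n-segment
# no-jump product (1 - a*dt/n)**n approaches exp(-a*dt) as n grows,
# consistent with the finite-time exponential derived later.
a, dt = 0.7, 2.0
exact = math.exp(-a * dt)
for n in (10, 100, 1000, 10000):
    approx = (1.0 - a * dt / n) ** n
    print(n, approx, abs(approx - exact))
```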
In view of our comments on the discussion between Gillespie, Hardy and his collaborators, and Garbaczewski and Olkiewicz, it is important that we recognize that, in this case, we have assumed the smoothness of q from the beginning. Whether this assumption is more than is strictly required by Markovianism remains to be seen. The reasoning here is quite similar to how we demonstrated in m44258 that
| (18) |
for the continuous Markov processes, wherefrom we deduced that the propagator density function in this case had to be a Gaussian. The assumptions behind equation (13) preclude, for example, a sharp spike in the jump probability at the end of dt, or indeed, within a finite time interval thereafter.
Given the meaning of q, the quantity a(x, t) dt is the probability that the process in x at t will jump away from x in the next dt. Furthermore, because dt is infinitesimal, we expect only one jump to occur within dt, or none at all. The probability of two jumps occurring would be proportional to dt².
The jump Markov process propagator density function can now be written in the following form

    P(x + ξ, t + dt | x, t) dξ = a(x, t) dt w(ξ | x, t) dξ + [1 − a(x, t) dt] δ(ξ) dξ,    (19)

which we read as follows. On the left side we have the probability of the Markov process finding itself removed by between ξ and ξ + dξ from the x which it occupied at t, upon having advanced the time by dt. As ξ is finite, this is unmistakably a jump propagator. On the right side of the equation, we express the same as the probability that the process in x at t will jump in the next dt, times the probability that having jumped it will land a finite ξ, within dξ, away from x, or—and this is what the plus stands for—the probability that the process will not jump, in which case it will stay at x, which is to say, it will displace by ξ = 0, which is what is signified here by the delta function. So, the process will jump, or it will not.

Dividing both sides by dξ yields

    Π(ξ | dt; x, t) = a(x, t) dt w(ξ | x, t) + [1 − a(x, t) dt] δ(ξ).    (20)
In the following, we’re going to look more closely at two issues. First, we’ll derive a formula for the probability of a jump within a finite time interval in terms of a(x, t). Second, we’re going to see that equation (20) satisfies the Chapman-Kolmogorov equation to the first order in dt, wherefrom we’ll also obtain a more precise idea as to how smooth the functions a and w have to be.
| (21) |
It is more tricky, because we deal here with probabilities and they have their own special rules. Let p̄(x, t; τ) be the probability that the system that’s in x at t will not jump away from x in [t, t + τ). This probability, of course, is

    p̄(x, t; τ) = 1 − q(x, t; τ).    (22)

The probability that the system still will not jump in the next dτ is p̄(x, t; τ + dτ) and it can be decomposed into the product of the probability that the system will not jump in τ and the probability that it will not jump in the following dτ either:

    p̄(x, t; τ + dτ) = p̄(x, t; τ) [1 − a(x, t + τ) dτ],    (23)

wherefrom

    ∂p̄(x, t; τ)/∂τ = −a(x, t + τ) p̄(x, t; τ).    (24)

This then integrates to

    p̄(x, t; τ) = exp(−∫₀^τ a(x, t + τ′) dτ′).    (25)

Now we drop the bar, reverting to the jump probability q(x, t; τ) = 1 − p̄(x, t; τ) itself, which yields

    q(x, t; τ) = 1 − exp(−∫₀^τ a(x, t + τ′) dτ′).    (26)

For very small τ we should expect ∫₀^τ a(x, t + τ′) dτ′ to be small too. The exponential function then will be approximated by

    exp(−∫₀^τ a(x, t + τ′) dτ′) ≈ 1 − a(x, t) τ,    (27)

and equation (26) becomes (13), our starting point. We see here that the infinitesimal relationship given by equation (13) itself was not enough to reconstruct the correct relationship between q and a for finite times. Additional information, provided by equation (23), was needed to get it right.
Now let us turn to the second issue, the Chapman-Kolmogorov condition for the propagator,

    Π(ξ | dt; x, t) = ∫ Π(ξ − ξ′ | β dt; x + ξ′, t + α dt) Π(ξ′ | α dt; x, t) dξ′,  α + β = 1.    (28)

This condition assumes that the infinitesimal dt can be split into two sub-infinitesimals, α dt and β dt, and that the process itself must be Markov-divisible on top of these two intervals. This is a weighty assumption, especially given that α is an arbitrary real number in (0, 1). It will result in further continuity conditions down the road.
We begin by substituting equation (20) in equation (28). The first Π under the integral becomes
| (29) |
The second Π is
| (30) |
We have to multiply them now, but we only keep terms linear in dt, because dt is infinitesimal after all. The first component in each sum is proportional to dt, so we can neglect first times first right away. But the second component of each sum contains a term that is not proportional to dt, which is the delta term itself. In summary, we’ll end up with the first component of the first sum times the delta from the second sum, plus the first component of the second sum times the delta from the first sum, plus a term that’ll contain a product of the two deltas. This last term looks as follows
| (31) |
to which we add
| (32) |
This is to be integrated over ξ′. The deltas make the integration easy. δ(ξ′) simply converts ξ′ to zero and δ(ξ − ξ′) converts ξ′ to ξ, which yields
| (33) |
We want this to be equal to
| (34) |
The requirement imposes certain conditions on a and w. For example, comparing the terms proportional to the delta function yields
| (35) |
This must hold in the dt → 0 limit.
If we assume that
| (36) |
then
| (37) |
We add a similar assumption regarding w, that is,
| (38) |
to demonstrate similarly that
| (39) |
But by now we can see it even without the explicit computation. If both a and w are to be expanded as per equations (36) and (38), the first term, the one that does not involve dt, is simply the original propagator. The terms that are proportional to dt cancel out. All other terms have dt² in them, so they all vanish in the limit dt → 0.
Consequently, we see that assumptions given by equations (36) and (38) are sufficient to ensure that the propagator in the form of equation (20) satisfies the Chapman-Kolmogorov condition given by equation (28). However, we have not proven that they are necessary. There may exist less restrictive conditions that would still ensure the satisfaction of equation (28).
Functions a(x, t) and w(ξ | x, t) are called the characterizing functions of the jump Markov process. Whereas the continuous Markov process is also described by two functions (see module m44258, where we call them A, the drift function, and D, the diffusion function), those functions depend on two variables, x and t. Here, for jump processes, we have three variables. The third one is ξ, the finite jump length. The resulting description is going to be more elaborate. We will also show that in a certain limit jump processes become continuous processes.
Looking at equation (20) we see that the two functions enter the expression for the propagator in a special way, namely through the product of a and w, and then through a itself in the second summand, the one proportional to the Dirac delta. Because w is the probability density in ξ, it must integrate to 1. In turn, a does not depend on ξ. We can therefore define

    W(ξ | x, t) ≡ a(x, t) w(ξ | x, t),    (40)

for which we’ll find that

    ∫ W(ξ | x, t) dξ = a(x, t)    (41)

and

    w(ξ | x, t) = W(ξ | x, t) / a(x, t).    (42)

Function W is called the consolidated characterizing function of the jump Markov process and it can be used instead of a and w to encode the jump process propagator density as follows

    Π(ξ | dt; x, t) = W(ξ | x, t) dt + [1 − dt ∫ W(ξ′ | x, t) dξ′] δ(ξ).    (43)

The expression

    W(ξ | x, t) dt dξ    (44)

is the probability that the system in x at t will jump in the next dt by between ξ and ξ + dξ away from x.
Now, we switch to one dimension: x ∈ ℝ and ξ ∈ ℝ.
Once we have the jump propagator density function, we can calculate propagator moments, namely

    Πₙ(dt; x, t) ≡ ∫ ξⁿ Π(ξ | dt; x, t) dξ = a(x, t) dt ∫ ξⁿ w(ξ | x, t) dξ + [1 − a(x, t) dt] ∫ ξⁿ δ(ξ) dξ.    (45)

Here we have made use of

    ∫ ξⁿ δ(ξ) dξ = 0  for n ≥ 1    (46)

to kill the second term, the one with δ(ξ). From the above then

    Πₙ(dt; x, t) = a(x, t) dt ∫ ξⁿ w(ξ | x, t) dξ,  n ≥ 1.    (47)

Defining the moments of w and W, the consolidated characterizing function of the jump process, by

    wₙ(x, t) ≡ ∫ ξⁿ w(ξ | x, t) dξ    (48)

and

    Wₙ(x, t) ≡ ∫ ξⁿ W(ξ | x, t) dξ = a(x, t) wₙ(x, t),    (49)

we can rewrite equation (47) as

    Πₙ(dt; x, t) = Wₙ(x, t) dt = a(x, t) wₙ(x, t) dt.    (50)
For the above definitions to be meaningful, the corresponding integrals must be convergent: the moments Wₙ exist if the corresponding moments of w (or W) exist.
The Kramers-Moyal equations (forward and backward) were derived in module m44014 from the Kolmogorov equations. They allowed us to express the time derivatives of the Markov process probability density function in terms of the propagator density moments, thus providing us with some sort of evolutionary equations, not unlike the Schrödinger equation of quantum mechanics—though, in general, as we’ll see in the case of the jump processes, they may not be explicitly solvable. In the case of the continuous Markov processes, the forward Kramers-Moyal equation turned out to be the familiar Fokker-Planck equation that with some additional non-Markovian assumptions could be converted into the Schrödinger equation—the “non-Markovian” phrase here being important.
The forward Kramers-Moyal equation is

    ∂P(x, t | x₀, t₀)/∂t = Σₙ₌₁^∞ [(−1)ⁿ/n!] ∂ⁿ/∂xⁿ { [Πₙ(dt; x, t)/dt] P(x, t | x₀, t₀) }.    (51)

Substituting equation (50) yields

    ∂P(x, t | x₀, t₀)/∂t = Σₙ₌₁^∞ [(−1)ⁿ/n!] ∂ⁿ/∂xⁿ [ Wₙ(x, t) P(x, t | x₀, t₀) ].    (52)

The backward Kramers-Moyal equation is

    −∂P(x, t | x₀, t₀)/∂t₀ = Σₙ₌₁^∞ (1/n!) [Πₙ(dt; x₀, t₀)/dt] ∂ⁿP(x, t | x₀, t₀)/∂x₀ⁿ.    (53)

Substituting equation (50) yields

    −∂P(x, t | x₀, t₀)/∂t₀ = Σₙ₌₁^∞ (1/n!) Wₙ(x₀, t₀) ∂ⁿP(x, t | x₀, t₀)/∂x₀ⁿ.    (54)
The Kramers-Moyal equations for the jump Markov processes are partial differential equations of infinite order in the x or x₀ variable. Unlike in the continuous Markov process case, they cannot be truncated in general, so they are nothing but trouble. They can be truncated approximately when the jump Markov process can itself be considered approximately continuous, in which case they turn into the Fokker-Planck equations. We are going to demonstrate this in the following.
To see that the Kramers-Moyal equations do not truncate we make use of the convexity of xⁿ for x ≥ 0 and n ≥ 1. Convexity means that xⁿ curves away, upward, from a tangent line at any x₀ ≥ 0. In other words, any tangent provides us with a low estimate for xⁿ.
Let us then consider f(x) = xⁿ. The slope of the tangent at x₀ is

    s = f′(x₀) = n x₀ⁿ⁻¹.    (55)

The equation for the tangent line is

    y(x) = s x + c,    (56)

where s is as above and c is given by the requirement that y(x₀) = x₀ⁿ, that is

    c = x₀ⁿ − n x₀ⁿ⁻¹ x₀ = (1 − n) x₀ⁿ.    (57)

In summary,

    y(x) = n x₀ⁿ⁻¹ x + (1 − n) x₀ⁿ,    (58)

and we can state that

    xⁿ ≥ n x₀ⁿ⁻¹ x + (1 − n) x₀ⁿ  for all x ≥ 0.    (59)

Now, let us consider a random variable X ≥ 0, such that ⟨X⟩ exists. In this case

    ⟨Xⁿ⟩ ≥ n x₀ⁿ⁻¹ ⟨X⟩ + (1 − n) x₀ⁿ.    (60)

This holds for any x₀, including x₀ = ⟨X⟩. So, let us substitute the latter, which yields

    ⟨Xⁿ⟩ ≥ n ⟨X⟩ⁿ + (1 − n) ⟨X⟩ⁿ = ⟨X⟩ⁿ.    (61)

Now, this holds only for positive random variables, that is variables for which Prob(X < 0) = 0. But for an arbitrary random variable X, its even powers satisfy this requirement, therefore we can state that for an arbitrary random variable

    ⟨X²ⁿ⟩ ≥ ⟨X²⟩ⁿ.    (62)
Let us return now to the definition of w₂ₙ(x, t), which is a moment of the characterizing function w, that is

    w₂ₙ(x, t) = ∫ ξ²ⁿ w(ξ | x, t) dξ,    (63)

where w is the probability density of ξ parametrized by x and t. The above considerations therefore apply and we can state that

    w₂ₙ(x, t) ≥ [w₂(x, t)]ⁿ.    (64)

The strict inequality in this case holds whenever w₂ > 0. In turn, w₂ = 0 only if w(ξ | x, t) = δ(ξ), which corresponds to there being no jumps at all, which is not an interesting case. And so, excluding this case, we can state that

    w₂ₙ(x, t) > 0  for every n ≥ 1.    (65)
This tells us that the successive terms in the Kramers-Moyal equations, at least the even ones, do not become identically zero, implying that the equations do not truncate, and so cannot be used, well, not easily, to solve for P(x, t | x₀, t₀). Of course, we can still state that the infinite sums on the right sides of the equations must be convergent, by construction, because the left sides are finite and well defined.
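Inequality (62) can be illustrated numerically. Assuming, for the sake of the example, a zero-mean Gaussian density, the sample even moments indeed dominate the powers of the second moment:

```python
import random

# Numerical illustration (sampling from an assumed zero-mean Gaussian
# jump density): sample even moments obey <X^(2n)> >= <X^2>^n, which
# is inequality (62); it keeps the even Kramers-Moyal terms nonzero.
rng = random.Random(0)
xs = [rng.gauss(0.0, 1.5) for _ in range(200_000)]
moments = {n: sum(x ** (2 * n) for x in xs) / len(xs) for n in (1, 2, 3)}
m2 = moments[1]
for n in (2, 3):
    print(n, moments[n], m2 ** n)   # the left value dominates
```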
If the jumps are short and viewed from a long distance, they may not be discernible individually. In this case we may see the system progress continuously in some random fashion. It is in this limit that the Kramers-Moyal equations for the jump Markov process may turn into the Fokker-Planck equations, but only if certain conditions are met. We are now going to explore what these conditions may be.
If X is a random variable, then Z = X/L, where L is the size of the volume within which the Markov process unfolds, is also a random variable, with a probability density such that
| (66) |
where . From this we obtain
| (67) |
or
| (68) |
Also, because , we get that
| (69) |
For a large volume L, 1/L is small, so this is the parameter we are going to expand the Kramers-Moyal equation in. We begin with the forward form given by equation (52), and substitute the scaled variable z in place of x and the corresponding density in place of P. The left side becomes
| (70) |
And on the right side, using the form with moments of the consolidated characteristic function, , we obtain
| (71) |
Combining equations (70) and (71) yields
| (72) |
Whether the right side of this equation can be truncated approximately depends on how the moments wₙ scale with L. Let us look again at their definition:
| (73) |
Now, if, for example,
| (74) |
then
| (75) |
Therefore the right side of equation (72) becomes, in the limit of large L,
| (76) |
Let us compare this with the forward Fokker-Planck equation, which in this case would read
| (77) |
Comparing equations (76) and (77) tells us that if we were to identify
| (78) |
and assuming that the second-moment term was still finite for large L, whereas the higher terms vanished in the limit, the observed process would look like a continuous Markov process. But this is predicated on equation (74). In other words, it depends on how the moments wₙ scale with L.
Since the Kramers-Moyal equations, forward and backward, don’t truncate in general, neither exactly nor approximately, they are not of much use in the case of the jump Markov processes. The master equations, on the other hand, are integro-differential equations that at least can be written in a compact form and that are tractable numerically. Yet, surprisingly perhaps, both the Kramers-Moyal and the master equations derive from the same Chapman-Kolmogorov equations, which are in essence integral equations too. We arrive at the Kramers-Moyal equations by expanding the function under the Chapman-Kolmogorov integral in the Taylor series and replacing the resulting infinite series of integrals with an infinite series in which the integrals are merely encapsulated in the moments Wₙ:
| (79) |
This therefore is a renaming exercise, with the basic problem just swept under the carpet. Little wonder then that for the jump Markov processes it has crawled out to haunt us.
Let us return then to the starting point, to the Chapman-Kolmogorov equation. We commence with the forward one

    P(x, t | x₀, t₀) = ∫ P(x, t | y, t₁) P(y, t₁ | x₀, t₀) dy.    (80)

What this equation says is as follows. We have a system that transits from (x₀, t₀) to (x, t). We posit that at an intermediate time t₁ the system must pass through some y on its way to x, and that therefore its trajectory is

    (x₀, t₀) → (y, t₁) → (x, t).    (81)

For any given y, the probability of reaching x through y from x₀ is

    P(x, t | y, t₁) P(y, t₁ | x₀, t₀).    (82)

But as y may run all over the space, that is, the process may transfer through y₁ or through y₂ or through y₃ and so on, on its way to x, the or logic operators translate into the addition of the related probabilities, so we need to sum equation (82) over all possible ys, which is how we arrive at the integral in equation (80).
This is quite similar to how quantum mechanics is constructed by the means of Feynman path integrals, but here we sum over probabilities, whereas in the Feynman method we sum over probability amplitudes. Otherwise the resulting mathematics and methodology are really similar. Why it is the probability amplitudes that we sum in quantum physics instead of probabilities themselves, as the conventional logic might dictate, is the central, unexplained puzzle of the quantum world.
Looking at the forward Chapman-Kolmogorov equation we may think that it implies a certain continuity of the system’s evolution on its way from x₀ at t₀ to x at t, and we may ask how this is compatible with the idea of a jump Markov process. But jump Markov processes are still described by equation (80). If a jump is to occur from x₀ to x, then we may find that the density P(y, t₁ | x₀, t₀) is zero for most y, with the exception of a discrete set of locations where it spikes in the form of Dirac deltas, with different weights for each location. The Chapman-Kolmogorov equation then says that the probability of the system transiting from x₀ at t₀ to x at t is a sum of probabilities that correspond to jumps from x₀ to the intermediate locations times probabilities of jumps from those locations to x:
| (83) |
But what, we may ask next, if the system is such that it stays put between t₀ and t₁ and doesn’t jump at all, and then only it jumps directly to x at t? In this case, the probability density P(y, t₁ | x₀, t₀) would be δ(y − x₀). The point is that the system must be somewhere at t₁, whether the jump has occurred or not. The formalism of Markov processes assumes a continuous existence of the system in time, whereas it may jump in space or move in some other way amenable to stochastic analysis.
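In a discrete state space the Chapman-Kolmogorov integral (80) becomes a sum over intermediate states, which is easy to verify directly. A minimal sketch with a hypothetical three-state chain:

```python
# Sketch (hypothetical 3-state chain): in a discrete state space the
# Chapman-Kolmogorov integral becomes a matrix product, so the
# two-step transition matrix equals the product of one-step ones.
P = [[0.9, 0.1, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.4, 0.6]]   # P[i][j] = Prob(i -> j in one step)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)        # the sum over intermediate states y
# e.g. Prob(0 -> 2 in two steps) = 0.9*0.0 + 0.1*0.3 + 0.0*0.6 = 0.03
print(P2[0][2])
```

Note that the "stay put" possibility is handled automatically: the diagonal entries of P play the role of the delta spike at the starting state.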
Let us get back to the Chapman-Kolmogorov equation (80). We can reformulate it also as follows
| (84) |
This equation says something similar to equation (80). It says that on its way from x₀ at t₀ to x at t the system passes through some y at t₁, where t₁ can be any time in (t₀, t). The probability of getting from x₀ to x should therefore be a sum of probabilities of passing through y at t₁ on the way, which are
| (85) |
Now we are going to shrink the time intervals t − t₁ and t₁ − t₀ in the two equations to dt, and make use of the fact that in this limit the relevant probabilities shrink to the propagator density, namely
| (86) |
We apply this to equation (80) first, replacing the P(x, t | y, t₁) term with the propagator under the integral:
| (87) |
Similarly we replace the P(y, t₁ | x₀, t₀) term in equation (84):
| (88) |
This yields two equivalent though differently expressed equations

    P(x, t | x₀, t₀) = ∫ Π(x − y | dt; y, t − dt) P(y, t − dt | x₀, t₀) dy    (89)

and

    P(x, t | x₀, t₀) = ∫ P(x, t | y, t₀ + dt) Π(y − x₀ | dt; x₀, t₀) dy.    (90)
Now, we do not intend to subtract equation (90) from equation (89) to form the time derivative, because one has t − dt in it whereas the other one has t₀ + dt, so this wouldn’t work. Instead we’ll use equation (89) to form the forward master equation and we’ll use equation (90) later to form the backward master equation. We’ll do this by substituting the equation for the jump propagator in place of Π in both,

    Π(ξ | dt; x, t) = W(ξ | x, t) dt + [1 − a(x, t) dt] δ(ξ).    (91)
The substitution converts equation (89) to

    P(x, t | x₀, t₀) = dt ∫ W(x − y | y, t − dt) P(y, t − dt | x₀, t₀) dy + [1 − a(x, t − dt) dt] P(x, t − dt | x₀, t₀),    (92)

because the delta in the second summand has killed the integral.

Let us observe that there is a stand-alone P(x, t − dt | x₀, t₀) in equation (92). So, we simply subtract it from both sides of equation (92), which reduces the second summand to the −a(x, t − dt) dt P(x, t − dt | x₀, t₀) term only. Then we divide both sides by dt, take the limit dt → 0, and obtain the forward master equation for the jump Markov process,

    ∂P(x, t | x₀, t₀)/∂t = ∫ W(x − y | y, t) P(y, t | x₀, t₀) dy − a(x, t) P(x, t | x₀, t₀).    (93)
Remembering that W is the consolidated characterizing function of the jump Markov process, W(ξ | x, t) = a(x, t) w(ξ | x, t), and that

    a(x, t) = ∫ W(ξ | x, t) dξ    (94)

(it holds at (x₀, t₀) too, in which case it is customary to use a(x₀, t₀) instead) lets us rewrite equation (93) as

    ∂P(x, t | x₀, t₀)/∂t = ∫ [ W(ξ | x − ξ, t) P(x − ξ, t | x₀, t₀) − W(ξ | x, t) P(x, t | x₀, t₀) ] dξ.    (95)
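The forward master equation lends itself to direct numerical integration. A minimal sketch, assuming a completely homogeneous process with a Gaussian jump density and illustrative grid and time-step parameters:

```python
import math

# Minimal numerical sketch of the forward master equation,
# dP/dt = a*(w conv P) - a*P, assuming a completely homogeneous
# process with a Gaussian jump density; grid size, rate a and jump
# scale sigma are illustrative choices, not prescribed by the text.
N, L = 64, 20.0
dx = L / N
a, sigma = 1.0, 0.5

# jump density w(xi) sampled at xi = dx*(k - N//2), then normalized
w = [math.exp(-(dx * (k - N // 2)) ** 2 / (2 * sigma ** 2)) for k in range(N)]
norm = sum(w) * dx
w = [v / norm for v in w]

P = [0.0] * N
P[N // 2] = 1.0 / dx              # start as a spike at x = 0

dt, steps = 0.01, 200
for _ in range(steps):
    P = [P[i] + dt * (a * dx * sum(w[(i - j + N // 2) % N] * P[j]
                                   for j in range(N)) - a * P[i])
         for i in range(N)]

mass = sum(P) * dx                # probability should stay ~1
var = sum((dx * (i - N // 2)) ** 2 * P[i] for i in range(N)) * dx
# var should approach a * t * sigma**2 = 1.0 * 2.0 * 0.25 = 0.5
```

The gain term is a discrete convolution of w with P, the loss term is a·P; probability is conserved exactly by this pairing, which is a good internal check of any such integrator.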
Now we return to equation (90) and substitute equation (91) in place of Π:

    P(x, t | x₀, t₀) = dt ∫ W(ξ | x₀, t₀) P(x, t | x₀ + ξ, t₀ + dt) dξ + [1 − a(x₀, t₀) dt] P(x, t | x₀, t₀ + dt).    (96)

We notice there is a pure P(x, t | x₀, t₀ + dt) term in the second summand. We move this term to the left side of the equation, divide both sides by dt and take the limit dt → 0, which yields the backward master equation for the jump Markov process,

    −∂P(x, t | x₀, t₀)/∂t₀ = ∫ W(ξ | x₀, t₀) P(x, t | x₀ + ξ, t₀) dξ − a(x₀, t₀) P(x, t | x₀, t₀).    (97)

Again, making use of the normalization condition given by equation (94), but this time with a(x₀, t₀), we can rewrite the above equation in terms of the consolidated characterizing function of the jump Markov process as follows:

    −∂P(x, t | x₀, t₀)/∂t₀ = ∫ W(ξ | x₀, t₀) [ P(x, t | x₀ + ξ, t₀) − P(x, t | x₀, t₀) ] dξ.    (98)
Because the evolution equations for jump Markov processes, namely the Kramers-Moyal equations and the master equations, do not in general simplify, the evolutions of the moments remain as we discussed in module m44014, with one improvement. Let us recall that for the jump processes Πₙ(dt; x, t)/dt = Wₙ(x, t), see equation (50), where

    Wₙ(x, t) = ∫ ξⁿ W(ξ | x, t) dξ = a(x, t) wₙ(x, t).    (99)

The reason for this was that the no-jump term in the propagator integrated to zero when calculating the moments, on account of the Dirac delta function of ξ. Therefore we can substitute Wₙ(x, t) in every place in the moments evolution equations where the propagator moment ratio Πₙ(dt; x, t)/dt appears.
And so, the general equation for the evolution of the nth moment of a jump Markov process is

    d⟨Xⁿ(t)⟩/dt = Σₖ₌₁ⁿ C(n, k) ⟨ Xⁿ⁻ᵏ(t) Wₖ(X(t), t) ⟩,  where C(n, k) is the binomial coefficient.    (100)
Specifically, the equations for the evolution of the mean, variance and covariance are
The integral of a Markov process is not in itself a Markov process. It is a stochastic process that remembers its past, over which the integral is accumulated. But on account of its close relationship to the Markov process, of which it is the integral, the process is tractable. We discussed such processes and their general properties in module m44376. The observation that the propagator moments divided by dt yield Wₙ still applies, therefore we can write down the relevant equations as follows.
The equation for the evolution of the moments of the integral process
| (104) |
is
| (105) |
where the cross moments have to be evaluated by solving
| (106) |
We specify these for the two lowest moments, which yields
| (107) |
| (108) |
where the required cross-moment must be found by solving
| (109) |
| (110) |
where the required cross-moment must be found by solving
| (111) |
The next jump density function provides us with another way to characterize jump Markov processes. Whereas

    Π(ξ | dt; x, t)    (112)

is the probability density of an excursion from x at t by ξ after the passing of dt, the next jump density function

    p(τ, ξ | x, t)    (113)

is the probability density that the next jump will actually occur at τ past t and that the system will jump by ξ away from x.
We can express p(τ, ξ | x, t) in terms of the two characterizing functions of the jump process, a and w. Let us recall their physical meaning:

    a(x, t + τ) dτ = the probability that the process at x will jump away from x within [t + τ, t + τ + dτ),    (114)

see equation (26). Whereas

    exp(−∫₀^τ a(x, t + τ′) dτ′)    (115)

was the probability that the jump would not occur within τ past t, see equation (25).

The probability that the jump will happen within dτ after t + τ and that it will take the system by between ξ and ξ + dξ away from x, in other words, p(τ, ξ | x, t) dτ dξ, is equal to

    exp(−∫₀^τ a(x, t + τ′) dτ′)    (116)

times

    a(x, t + τ) dτ    (117)

times

    w(ξ | x, t + τ) dξ.    (118)

In summary

    p(τ, ξ | x, t) dτ dξ = exp(−∫₀^τ a(x, t + τ′) dτ′) a(x, t + τ) dτ w(ξ | x, t + τ) dξ,    (119)

which upon the division of both sides by dτ dξ and reordering of the multiplicands on the right side of the equation yields

    p(τ, ξ | x, t) = a(x, t + τ) w(ξ | x, t + τ) exp(−∫₀^τ a(x, t + τ′) dτ′).    (120)
As can be seen from above, the function p naturally splits into

    p₁(τ | x, t) = a(x, t + τ) exp(−∫₀^τ a(x, t + τ′) dτ′)    (121)

and

    p₂(ξ | τ; x, t) = w(ξ | x, t + τ),    (122)

so that

    p(τ, ξ | x, t) = p₁(τ | x, t) p₂(ξ | τ; x, t).    (123)

Because w integrates to 1 over ξ, it is easy to see that

    ∫ p(τ, ξ | x, t) dξ = p₁(τ | x, t).    (124)

Then

    ∫₀^∞ p₁(τ | x, t) dτ = 1.    (125)
It is useful to observe that for temporally homogeneous processes the term a(x, t + τ) becomes a(x) and the expressions for p₁ and p₂ simplify to

    p₁(τ | x) = a(x) e^(−a(x) τ)    (126)

and

    p₂(ξ | x) = w(ξ | x).    (127)

We can rewrite equation (126) as follows

    p₁(τ | x) = (1/τ̄(x)) e^(−τ/τ̄(x)),    (128)

where τ̄(x) = 1/a(x) is the average pausing time in x. Indeed, let us observe that equation (126) is an exponential distribution with a mean (and standard deviation) of 1/a(x), which avails us of the interpretation of a(x), in the case of temporally homogeneous jump processes, as the inverse of the average pausing time at x. We should also observe that in this case the probability density of the jump displacement from x, represented by w(ξ | x), is independent of the average pausing time τ̄(x).
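The interpretation of 1/a as the average pausing time is easy to confirm by sampling from the exponential density (126); the rate below is an illustrative choice:

```python
import random

# Quick check with an illustrative rate a = 4: pausing times drawn
# from the exponential density (126) should have mean and standard
# deviation both equal to tau_bar = 1/a = 0.25.
rng = random.Random(7)
a = 4.0
pauses = [rng.expovariate(a) for _ in range(100_000)]
mean = sum(pauses) / len(pauses)
std = (sum((p - mean) ** 2 for p in pauses) / len(pauses)) ** 0.5
print(mean, std)   # both close to 0.25
```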
For the remainder of this section we shall focus on completely homogeneous jump Markov processes. In this case the two characterizing functions, a(x, t) and w(ξ | x, t), reduce to a constant, a (or τ̄ = 1/a), and a function of ξ only, namely w(ξ). We observe the important difference when contrasted with completely homogeneous continuous Markov processes, in which case the Wiener process is fully characterized by two constants, the drift coefficient and the diffusion coefficient.

The probability density of the jump displacement from x,

    w(ξ),    (129)

is in principle arbitrary, although it must satisfy the smoothness conditions discussed in Section 3. But it is both interesting and highly applicable to consider three special cases: w(ξ) given by the exponential, the Gaussian and the Cauchy-Lorentz distributions.
First, however, we are going to specify the time evolution equations for the completely homogeneous case.
We recall that because of the complete homogeneity of the system, the probability density of transition from (x₀, t₀) to (x, t) depends on the differences x − x₀ and t − t₀ only,

    P(x, t | x₀, t₀) = P(x − x₀, t − t₀).    (133)
Looking at equation (50) we find that the propagator density function moments are constants,

    Πₙ(dt)/dt = Wₙ = a wₙ,    (134)

where

    wₙ = ∫ ξⁿ w(ξ) dξ.    (135)
The forward Kramers-Moyal equation (51) therefore is

    ∂P(x, t | x₀, t₀)/∂t = Σₙ₌₁^∞ [(−1)ⁿ/n!] a wₙ ∂ⁿP(x, t | x₀, t₀)/∂xⁿ    (136)

and the forward master equation (93) becomes

    ∂P(x, t | x₀, t₀)/∂t = a ∫ w(ξ) P(x − ξ, t | x₀, t₀) dξ − a P(x, t | x₀, t₀).    (137)
The backward equations are not in this case independent and can be obtained from the forward ones above by the following substitutions
| (138) |
Neither equation (136) nor equation (137) is in general tractable, but let us recall another way to find P that happens to work for completely homogeneous processes. In this case we have the formula

    P(x, t | x₀, t₀) = (1/2π) ∫ e^(−ik(x−x₀)) limₙ→∞ [Π̂(k; dt)]ⁿ dk,    (139)

where n is the number of time slices that divide the interval (t₀, t) such that

    dt = (t − t₀)/n    (140)

and

    Π̂(k; dt) = ∫ e^(ikξ) Π(ξ | dt) dξ    (141)

is the propagator density Fourier transform.
We normally take the limit n → ∞, which converts a polynomial into the exponential function, so that the final expression is tractable. And so it is in this case.
Given that for the completely homogeneous jump Markov process

    Π(ξ | dt) = a dt w(ξ) + (1 − a dt) δ(ξ),    (142)

we evaluate Π̂ as follows

    Π̂(k; dt) = a dt ŵ(k) + 1 − a dt = 1 + a dt [ŵ(k) − 1],    (143)

where

    ŵ(k) = ∫ e^(ikξ) w(ξ) dξ    (144)

is the Fourier transform of w.
The next step is to evaluate [Π̂(k; dt)]ⁿ. Here we make use of the compound interest formula by Bernoulli

    limₙ→∞ (1 + z/n)ⁿ = eᶻ.    (145)

Looking at equation (143) we can see that it can be rewritten in this form, namely

    [Π̂(k; dt)]ⁿ = { 1 + a (t − t₀) [ŵ(k) − 1] / n }ⁿ,    (146)

wherefrom

    limₙ→∞ [Π̂(k; dt)]ⁿ = exp{ a (t − t₀) [ŵ(k) − 1] },    (147)

and on substitution to equation (139)

    P(x, t | x₀, t₀) = (1/2π) ∫ e^(−ik(x−x₀)) exp{ a (t − t₀) [ŵ(k) − 1] } dk.    (148)
To progress further we must have an explicit expression for . For most functions of interest the integral in equation (148) is far from trivial, but it is tractable numerically.
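To illustrate this numerical tractability, here is a minimal sketch in Python. It assumes the standard form of the quadrature for a completely homogeneous jump process: the continuous part of the propagator is the inverse Fourier transform of exp(a t (Φ(k) − 1)), where a is the jump rate and Φ the Fourier transform of the jump density; the "no jump yet" atom of weight exp(−a t) at the starting point is subtracted so that the integrand decays. The names a, sigma, and phi below are illustrative assumptions, with a Gaussian jump density used as the example.

```python
import numpy as np

# Sketch: numerical evaluation of the quadrature (148), assuming the
# standard form for a completely homogeneous jump process.  The
# continuous part of the propagator is the inverse Fourier transform of
#     exp(a * t * (Phi(k) - 1)) - exp(-a * t),
# where a is the (assumed) jump rate, Phi the Fourier transform of the
# jump density, and exp(-a*t) the weight of the delta atom at the start.
a, t = 3.0, 1.0
sigma = 0.5                                      # illustrative Gaussian jump std
phi = lambda k: np.exp(-(sigma * k) ** 2 / 2.0)  # Fourier transform of the density

k = np.linspace(-20.0, 20.0, 40001)
dk = k[1] - k[0]
radial = np.exp(a * t * (phi(k) - 1.0)) - np.exp(-a * t)

x = np.linspace(-6.0, 6.0, 241)
dx = x[1] - x[0]
# inverse transform, one x at a time to keep memory modest
p = np.array([(np.exp(-1j * k * xi) * radial).sum().real * dk
              for xi in x]) / (2.0 * np.pi)

mass = p.sum() * dx   # continuous part only: should be close to 1 - exp(-a*t)
print(mass)
```

The printed mass approaches 1 − exp(−a t), the probability that at least one jump has occurred by time t.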
Evolution of the moments

Finally, let us write down equations for the evolution of the moments and sums for the completely homogeneous jump Markov processes.
And so, equation (100) becomes
| (149) |
| (150) |
| (151) |
| (152) |
These results are trivial.
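Trivial or not, the linear-in-time growth stated by equations (149)-(152) is easy to check by direct simulation. The sketch below assumes the standard result for a completely homogeneous jump process with jump rate a: the mean displacement grows as a ⟨u⟩ t and the variance as a ⟨u²⟩ t. For illustration an exponential jump density of mean lam is used, for which ⟨u⟩ = lam and ⟨u²⟩ = 2 lam²; all parameter names are illustrative.

```python
import random

# Monte Carlo check of the moment-evolution results (149)-(152), in
# assumed standard notation: for a completely homogeneous jump process,
#     <X(t)> - x0 = a <u> t,      Var[X(t)] = a <u^2> t.
# Illustrative choice: exponential jump density of mean lam.
random.seed(1)
a, lam, t, n_paths = 2.0, 0.5, 3.0, 100_000

totals = []
for _ in range(n_paths):
    x = 0.0
    s = random.expovariate(a)                # waiting time to the first jump
    while s < t:
        x += random.expovariate(1.0 / lam)   # exponential jump of mean lam
        s += random.expovariate(a)           # next pausing time, mean 1/a
    totals.append(x)

mean = sum(totals) / n_paths
var = sum((v - mean) ** 2 for v in totals) / n_paths
print(mean, a * lam * t)            # simulated vs predicted mean (~3.0)
print(var, 2 * a * lam ** 2 * t)    # simulated vs predicted variance (~3.0)
```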
Equations for the integral of the Markov process are more complicated. Equation (105) remains unchanged. Equation (106) for the cross-moments becomes
| (153) |
| (154) |
| (155) |
which solves to (we will need this intermediate result to evaluate the evolution of the covariance too; see equation (159))
| (156) |
with the initial condition set to zero. Now, equation (108) solves to
| (157) |
| (158) |
because is a constant. Therefore is defined by its initial condition
| (159) |
from equation (156) above. In this way we find, following equation (110), that
| (160) |
As we now turn to specific examples, our methodology will be as follows. Every formula in this section so far has been expressed in terms of , the moments of . But is a probability density function, and the functions of interest to us here have all been discussed in module m43336, where the corresponding moments were evaluated. So there won't be much computation involved.
The jump displacement probability density function is given by
| (161) |
Now we look up what we learned about this distribution in module m43336. There we found that
| (162) |
This lets us rewrite equations for the evolution of the moments right away. Let us note that we will require the first two moments only:
| (163) |
And so we find that equations (150)-(152) translate to
| (164) |
and equations (154), (157) and (160) for the integral of the jump Markov process translate to
| (165) |
The forward Kramers-Moyal equation becomes
| (166) |
because in equation (136) cancels with in equation (162).
Now we refer to Connexions module m44258 in which we discussed continuous Markov processes and, in particular, the Wiener process, which is a completely homogeneous continuous Markov process. Let us recall that in this case we had
| (167) |
where (drift) and (diffusion) are constants. Comparing equation (164) with equation (167) we notice the similarity upon having identified
| (168) |
It is easy to see, by comparison with the corresponding equations for the Wiener process in Connexions module m44376, that the same substitutions will also work for , and . The reason is as follows. Let us write down the first two terms of the forward Kramers-Moyal equation for the completely homogeneous jump Markov process with the exponential displacement density function:
| (169) |
and compare this with the forward Fokker-Planck equation for the Wiener process
| (170) |
The two equations become identical on substitution given by equation (168) and assuming that we can truncate higher terms in equation (169). This would be the case if is sufficiently small, so small that . Because is the average length of a jump,
| (171) |
we infer that for exponentially distributed jumps with a very short average displacement, the completely homogeneous jump Markov processes look like Wiener processes.
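This convergence to Wiener-like behaviour can be quantified without simulation. Assuming the standard result that the cumulants of a completely homogeneous jump process are κₙ(t) = a t ⟨uⁿ⟩, with ⟨uⁿ⟩ = n! λⁿ for an exponential density of mean λ, we can hold the drift a λ fixed while shrinking λ and watch the skewness (which vanishes for the Wiener process) go to zero. A sketch, with all names illustrative:

```python
from math import factorial

# Cumulants of a completely homogeneous jump process with exponential
# jump density (assumed standard result): kappa_n(t) = a * t * <u^n>,
# where <u^n> = n! * lam**n.  Fixing the drift A = a*lam and shrinking
# lam, the skewness kappa_3 / kappa_2**1.5 should vanish, i.e. the
# process turns Wiener-like.
def skewness(A, lam, t):
    a = A / lam                           # jump rate implied by the fixed drift
    k2 = a * t * factorial(2) * lam ** 2  # second cumulant (variance)
    k3 = a * t * factorial(3) * lam ** 3  # third cumulant
    return k3 / k2 ** 1.5

A, t = 1.0, 1.0
s_coarse = skewness(A, lam=0.5, t=t)    # long jumps: visibly non-Gaussian
s_fine = skewness(A, lam=0.005, t=t)    # short jumps: nearly Gaussian
print(s_coarse, s_fine)
```

The skewness scales as the square root of the mean jump length, so shorter jumps (at correspondingly higher rates) look ever more like diffusion.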
It is interesting to invert equation (168). This yields
| (172) |
Let us recall now that is the inverse of , the average pausing time in , but here, because does not depend on , it is simply the inverse of the average pausing time. And , as we stated above, is the average length of the jump. So, for to be small, we must have small diffusion, large drift, or both. This implies that must be large, which in turn means that the average pausing time is even smaller than . In summary, for a completely homogeneous jump Markov system to look like a continuous system, we must have very short jumps that happen very quickly: the pausing time must be small.
Of course, for the time being, these observations pertain to the jump displacement density function being exponential.
The forward master equation (137) becomes
| (173) |
because is zero for negative arguments. This provides us with a numerical procedure for finding , assuming that is a known function of at .
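Such a numerical procedure can be sketched as follows, assuming the standard form of the forward master equation for a completely homogeneous jump process with a one-sided exponential jump density: the rate of change of the density is a times the difference between a gain term (the convolution of the jump density with the current density) and a loss term. All parameter names below are illustrative.

```python
import numpy as np

# Euler time-stepping of the forward master equation (173), assuming the
# standard form for a completely homogeneous jump process:
#   dP(x,t)/dt = a * ( integral_0^inf w(u) P(x - u, t) du  -  P(x,t) ),
# with w(u) = exp(-u/lam)/lam for u >= 0 and w(u) = 0 for u < 0.
a, lam = 1.0, 0.5
x = np.linspace(-2.0, 12.0, 1401)
dx = x[1] - x[0]

u = np.arange(0.0, 6.0, dx)
w = np.exp(-u / lam) / lam
w /= w.sum() * dx                      # normalize on the grid

# narrow Gaussian standing in for a delta function at x0 = 0
p = np.exp(-(x / 0.05) ** 2 / 2.0)
p /= p.sum() * dx

t, dt = 2.0, 0.005
for _ in range(int(round(t / dt))):
    gain = np.convolve(p, w)[: len(x)] * dx   # integral of w(u) p(x-u) du
    p += dt * a * (gain - p)

mass = p.sum() * dx          # should stay ~ 1
mean = (x * p).sum() * dx    # should approach a * lam * t
print(mass, mean)
```

The alignment `np.convolve(p, w)[:len(x)]` works because the u-grid starts at zero, so the i-th element of the full convolution already sits at the i-th x-grid point.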
The quadrature solution for is given by equation (148), which we recall here for convenience
| (174) |
As we pointed out before, to progress, we must evaluate
| (175) |
in this case
| (176) |
We change to which yields
| (177) |
and so
| (178) |
It is usually desirable to remove the imaginary unit from the denominator. We do this by multiplying the numerator and the denominator by :
| (179) |
We substitute equation (179) into the term in the integral of equation (174), which yields
| (180) |
Now, let us recall that
| (181) |
Because of the asymmetry of , the corresponding integral vanishes, which leaves us with the real part only, the term. Furthermore, because is symmetric, we do not need to integrate from to . We can integrate from to and multiply by two, which gives us
| (182) |
At a glance, this does not look like an undoable integral. It is somewhat similar to
| (183) |
so there may be a way to crack it. We could expect to contain , perhaps in a limit and with some other complications.
Following our previous example, the first step is to retrieve the relevant formulae from module m43336.
This time our jump displacement density function is
| (184) |
Incidentally, the reason we write instead of is to draw the eye to the obvious variable substitution in the integral
| (185) |
The corresponding moments of the jump displacement density function are
| (186) |
The evolution of the mean, the variance and the covariance require the first two moments only. From the above
| (187) |
Therefore equations (150)-(152) translate to
| (188) |
That is just , that is, constant, may be inferred from the symmetry of : in this case the system may jump to the left of just as easily as to the right.
Equations (154), (157) and (160) for the integral of the jump Markov process yield
| (189) |
The right side of the Kramers-Moyal equation
| (190) |
is different from zero for even only. Therefore we replace with to the right of the sum and run itself from through . This yields
| (191) |
In the limit of small sigma we can truncate the higher-order terms, and the result is a diffusion equation with a diffusion coefficient of . Or, if we were to compare with the Fokker-Planck equation, we would have .
Why is there no drift? Because is non-zero for even values of only, so there is no term.
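To make the even-derivatives-only structure explicit, recall that a centered Gaussian jump density of standard deviation σ has moments ⟨u^{2m}⟩ = (2m−1)!! σ^{2m} and ⟨u^{2m+1}⟩ = 0. Assuming the standard form of the Kramers-Moyal series, and using (2m−1)!!/(2m)! = 1/(2^m m!), the forward equation then reads:

```latex
\frac{\partial P}{\partial t}
  = a \sum_{m=1}^{\infty} \frac{\sigma^{2m}}{2^{m}\, m!}
    \frac{\partial^{2m} P}{\partial x^{2m}}
  = \frac{a\sigma^{2}}{2}\,\frac{\partial^{2} P}{\partial x^{2}}
    + O\!\left(\sigma^{4}\right)
```

Only even derivatives appear, so there is no drift term, and the leading term is a diffusion equation, consistent with the truncation just discussed.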
Looking at the master equation (93), we replace with the exponential distribution in (this is where the comes from), while is a constant. So we obtain
| (192) |
The quadrature formula (148) calls for evaluation of
| (193) |
which here becomes
| (194) |
In Connexions module m43336, we demonstrated that
| (195) |
This result is directly applicable here upon the following substitutions:
| (196) |
Therefore
| (197) |
The latter cancels with in front of the integral in equation (194), which yields
| (198) |
Thus the quadrature formula becomes
| (199) |
where we have made use of the asymmetry of and the symmetry of with respect to , thus integrating from to only and multiplying the result by .
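The half-line cosine form of the quadrature lends itself to direct numerical evaluation, and in the Gaussian case an independent check is available: the process is (under the standard assumptions used here) a Poisson-weighted mixture of Gaussians, so the continuous part of the propagator can also be summed as a series. The sketch below assumes jump rate a, Gaussian jump standard deviation σ with Φ(k) = exp(−σ²k²/2), and splits off the atom exp(−a t) at the starting point so the integrand decays; all names are illustrative.

```python
import numpy as np
from math import exp, factorial, pi, sqrt

# Quadrature (199), assumed standard form: continuous part of the
# propagator for Gaussian jumps of std sigma,
#   P(x,t) = (1/pi) int_0^inf cos(k x) [exp(a t (Phi(k)-1)) - exp(-a t)] dk
a, sigma, t = 2.0, 1.0, 1.5

k = np.linspace(0.0, 10.0, 10001)
dk = k[1] - k[0]
phi = np.exp(-(sigma * k) ** 2 / 2.0)               # Fourier transform of w
radial = np.exp(a * t * (phi - 1.0)) - exp(-a * t)  # atom at x0 removed

def p_quad(x):
    f = np.cos(k * x) * radial
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dk / pi   # trapezoid rule

# independent check: compound-Poisson mixture of Gaussians,
#   P(x) = sum_{n>=1} e^{-at} (at)^n / n! * N(x; 0, n sigma^2)
def p_series(x, n_max=60):
    return sum(
        exp(-a * t) * (a * t) ** n / factorial(n)
        * exp(-x * x / (2 * n * sigma ** 2)) / sqrt(2 * pi * n * sigma ** 2)
        for n in range(1, n_max)
    )

x0 = 0.7
print(p_quad(x0), p_series(x0))   # the two evaluations should agree closely
```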