Autocorrelation of Random Processes

Module by: Michael Haag

Summary: This module explains the autocorrelation function and its properties. Examples are also provided to help you step through some of the more complicated statistical analysis.


Before diving into a more complex statistical analysis of random signals and processes, let us quickly review the idea of correlation. Recall that the correlation of two signals or variables is the expected value of the product of those two variables. Since our main focus is to discover more about random processes, which are collections of random signals, imagine that we are dealing with two samples of a random process, where each sample is taken at a different point in time. The expected value of the product of these two variables will now depend on how quickly they change with respect to time. For example, if the two variables are taken from almost the same time period, then we should expect them to have a high correlation. We will now look at a correlation function that relates a pair of random variables from the same process to the time separation between them, where the argument to this correlation function will be the time difference.
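
To make this concrete, here is a minimal Python sketch (the smoothed-noise process and the helper name `corr` are illustrative choices, not part of the original module) that estimates the correlation $E[X(t_1) X(t_2)]$ by averaging over many realizations of a slowly varying process; samples taken close together in time show a much higher correlation than samples taken far apart.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)

# A slowly varying process: white noise smoothed by a moving average,
# so nearby samples share most of their underlying noise terms.
n_real, n_samp, width = 5000, 300, 25
noise = rng.normal(size=(n_real, n_samp + width - 1))
proc = sliding_window_view(noise, width, axis=1).mean(axis=2)

def corr(t1, t2):
    """Estimate E[X(t1) X(t2)] by averaging over all realizations."""
    return np.mean(proc[:, t1] * proc[:, t2])

print(corr(100, 102))  # nearly the same time: close to the variance, 1/25
print(corr(100, 200))  # far apart in time: approximately zero
```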

Autocorrelation

The first of these correlation functions we will discuss is the autocorrelation, where each of the random variables we will deal with comes from the same random process.

Definition 1: Autocorrelation
the expected value of the product of a random variable or signal realization with a time-shifted version of itself
With a simple calculation and analysis of the autocorrelation function, we can discover a few important characteristics about our random process. These include:
  1. How quickly our random signal or process changes with respect to time
  2. Whether our process has a periodic component and what the expected frequency might be
As was mentioned above, the autocorrelation function is simply the expected value of a product. Assume we have a pair of random variables from the same process, $X_1 = X(t_1)$ and $X_2 = X(t_2)$; then the autocorrelation is often written as

$$R_{xx}(t_1, t_2) = E[X_1 X_2] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 \, f(x_1, x_2) \, d x_2 \, d x_1$$

(1)
For stationary processes, we can generalize this expression a little further. Given a wide-sense stationary process, it can be proven that the expected values from our random process will be independent of the origin of our time function. Therefore, we can say that our autocorrelation function will depend on the time difference and not on some absolute time. For this discussion, we will let $\tau = t_2 - t_1$, and thus we generalize our autocorrelation expression as

$$R_{xx}(t, t + \tau) = R_{xx}(\tau) = E[X(t) X(t + \tau)]$$

(2)

note:

The autocorrelation function above is expressed for continuous-time processes, but it can be just as easily written in terms of discrete-time processes.
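
As an illustration of the wide-sense stationary case, the following sketch (the random-phase sinusoid and the name `autocorr_ensemble` are assumptions made for this example) estimates $E[X(t_1) X(t_2)]$ for two different time origins with the same lag and shows that the estimates agree, as Equation 2 predicts.

```python
import numpy as np

def autocorr_ensemble(ensemble, t1, t2):
    """Estimate R_xx(t1, t2) = E[X(t1) X(t2)] by averaging the product
    of the samples at times t1 and t2 over all realizations (rows)."""
    return np.mean(ensemble[:, t1] * ensemble[:, t2])

# A sinusoid with a uniformly random phase is wide-sense stationary.
rng = np.random.default_rng(0)
n_real, n_samp, f = 20_000, 256, 0.05
t = np.arange(n_samp)
phase = rng.uniform(0, 2 * np.pi, size=(n_real, 1))
ensemble = np.cos(2 * np.pi * f * t + phase)

# Same lag (20 samples), different time origins: both estimates land
# near the theoretical value 0.5 * cos(2 * pi * f * 20) = 0.5.
print(autocorr_ensemble(ensemble, 10, 30))
print(autocorr_ensemble(ensemble, 150, 170))
```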

Properties of Autocorrelation

Below we will look at several properties of the autocorrelation function that hold for stationary random processes.

  • Autocorrelation is an even function for $\tau$: $R_{xx}(\tau) = R_{xx}(-\tau)$
  • The mean-square value can be found by evaluating the autocorrelation at $\tau = 0$, which gives us $R_{xx}(0) = \overline{X^2}$
  • The autocorrelation function will have its largest value when $\tau = 0$. This value can appear again, for example at the equivalent points of a periodic function, but it will never be exceeded: $R_{xx}(0) \geq |R_{xx}(\tau)|$
  • If we take the autocorrelation of a periodic function, then $R_{xx}(\tau)$ will also be periodic with the same frequency.
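
As a quick numerical sanity check of these properties, the sketch below (the test signal and lag range are arbitrary choices of mine) estimates the autocorrelation of a zero-mean signal with a periodic component and verifies the evenness, the mean-square value at $\tau = 0$, the maximum at $\tau = 0$, and the periodicity.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
n = np.arange(N)
# Zero-mean test signal: periodic component (period 50) plus white noise.
x = np.cos(2 * np.pi * n / 50) + rng.normal(0, 0.5, N)

# Biased autocorrelation estimate over lags -L..L via np.correlate.
L = 150
full = np.correlate(x, x, mode="full") / N
mid = N - 1
R = full[mid - L : mid + L + 1]            # R[L + m] estimates R_xx(m)

print(np.allclose(R, R[::-1]))             # even: R(-tau) = R(tau)
print(np.isclose(R[L], np.mean(x ** 2)))   # R(0) is the mean-square value
print(np.argmax(np.abs(R)) == L)           # the largest value sits at tau = 0
print(R[L + 50], R[L + 100])               # peaks recur at the signal's period
```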

Estimating the Autocorrelation with Time-Averaging

Sometimes the whole random process is not available to us. In these cases, we would still like to be able to find out some of the characteristics of the stationary random process, even if we just have part of one sample function. In order to do this, we can estimate the autocorrelation from a given interval, $0$ to $T$ seconds, of the sample function.

$$\check{R}_{xx}(\tau) = \frac{1}{T - \tau} \int_{0}^{T - \tau} x(t) \, x(t + \tau) \, d t$$

(3)
However, we often will not have sufficient information to build a complete continuous-time function of one of our random signals for the above analysis. If this is the case, we can treat the information we do know about the function as a discrete-time signal and use the discrete-time formula for estimating the autocorrelation.
$$\check{R}_{xx}(m) = \frac{1}{N - m} \sum_{n=0}^{N - m - 1} x[n] \, x[n + m]$$

(4)
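
A direct transcription of Equation 4 into Python might look like the following minimal sketch (the function name is my own; only NumPy is assumed):

```python
import numpy as np

def autocorr_estimate(x, m):
    """Time-average estimate of the autocorrelation at lag m >= 0,
    per Equation 4: (1 / (N - m)) * sum_{n=0}^{N-m-1} x[n] x[n+m]."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if not 0 <= m < N:
        raise ValueError("lag m must satisfy 0 <= m < N")
    return np.dot(x[:N - m], x[m:]) / (N - m)
```

Dividing by $N - m$ rather than $N$ keeps the estimate unbiased at each lag for a wide-sense stationary process, at the cost of higher variance at lags close to $N$.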

Examples

Below we will look at a variety of examples that use the autocorrelation function. We will begin with a simple example dealing with Gaussian White Noise (GWN) and a few basic statistical properties that will prove very useful in these and future calculations.

Example 1

We will let $x[n]$ represent our GWN. For this problem, it is important to remember the following fact about the mean of a GWN function:

$$E[x[n]] = 0$$

Figure 1: Gaussian density function. By examination, one can easily see that the above statement is true: the mean equals zero.

Along with being zero-mean, recall that GWN is always independent. With these two facts, we are now ready to do the short calculations required to find the autocorrelation.

$$R_{xx}(n, n + m) = E[x[n] \, x[n + m]]$$

Since the function $x[n]$ is independent, we can take the product of the individual expected values of both functions.

$$R_{xx}(n, n + m) = E[x[n]] \, E[x[n + m]]$$

Now, looking at the above equation, we see that we can break it up further into two conditions: one where the two samples coincide ($m = 0$) and one where they do not ($m \neq 0$). When they coincide, we can combine the expected values. We are left with the following piecewise function to solve:

$$R_{xx}(n, n + m) = \begin{cases} E[x[n]] \, E[x[n + m]] & \text{if } m \neq 0 \\ E[x^2[n]] & \text{if } m = 0 \end{cases}$$

We can now solve the two parts of the above equation. The first case is easy to solve, as we have already stated that the expected value of $x[n]$ is zero. For the second case, recall from statistics that the expected value of the square of a zero-mean function is equal to its variance. Thus we get the following results for the autocorrelation:

$$R_{xx}(n, n + m) = \begin{cases} 0 & \text{if } m \neq 0 \\ \sigma^2 & \text{if } m = 0 \end{cases}$$

Or, in a more concise way, we can represent the results as

$$R_{xx}(n, n + m) = \sigma^2 \, \delta[m]$$
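
To check this result numerically, we can generate zero-mean Gaussian white noise and estimate its autocorrelation at a few lags (a minimal sketch; the choice $\sigma = 2$ and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0
x = rng.normal(0.0, sigma, size=200_000)   # zero-mean GWN, variance sigma^2

for m in (0, 1, 5, 50):
    R = np.dot(x[:len(x) - m], x[m:]) / (len(x) - m)
    print(m, round(float(R), 3))
# Expect roughly sigma^2 = 4.0 at m = 0 and approximately 0 elsewhere,
# matching R_xx(n, n+m) = sigma^2 * delta[m].
```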
