
# The Cramer-Rao Lower Bound

Module by: Clayton Scott, Robert Nowak

The Cramer-Rao Lower Bound (CRLB) sets a lower bound on the variance of any unbiased estimator. This can be extremely useful in several ways:

1. If we find an estimator that achieves the CRLB, then we know that we have found a minimum variance unbiased (MVUB) estimator!
2. The CRLB can provide a benchmark against which we can compare the performance of any unbiased estimator. (We know we're doing very well if our estimator is "close" to the CRLB.)
3. The CRLB enables us to rule out impossible estimators. That is, we know that it is physically impossible to find an unbiased estimator that beats the CRLB. This is useful in feasibility studies.
4. The theory behind the CRLB can tell us if an estimator exists that achieves the bound.

## Estimator Accuracy

Consider the likelihood function $p(x|\theta)$, where $\theta$ is a scalar unknown (parameter). We can plot the likelihood as a function of the unknown, as shown in Figure 1.

The more "peaky" or "spiky" the likelihood function, the easier it is to determine the unknown parameter.

### Example 1

Suppose we observe $x = A + w$, where $w \sim \mathcal{N}(0, \sigma^2)$ and $A$ is an unknown parameter. The "smaller" the noise $w$ is, the easier it will be to estimate $A$ from the observation $x$.

Suppose $A = 3$ and $\sigma = 1/3$.

Given this density function, we can easily rule out estimates of $A$ greater than 4 or less than 2, since it is very unlikely that such an $A$ could give rise to our observation.

On the other hand, suppose $\sigma = 1$.

In this case, it is very difficult to estimate $A$. Since the noise power is larger, it is very difficult to distinguish $A$ from the noise.

The key thing to notice is that the estimation accuracy of $A$ depends on $\sigma$, which in effect determines the peakiness of the likelihood. The more peaky the likelihood, the better localized the data is about the true parameter.

To quantify this notion, note that the peakiness is effectively measured by the negative of the second derivative of the log-likelihood at its peak, as seen in Figure 4.
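As a quick numerical illustration (not from the original module), the peakiness can be checked with a finite-difference approximation of the second derivative; the observation value and the two noise levels below match the example above:

```python
import numpy as np

def log_likelihood(A, x, sigma):
    # Log-likelihood of a single observation x = A + w, w ~ N(0, sigma^2)
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - A) ** 2 / (2 * sigma**2)

def peak_curvature(x, sigma, h=1e-4):
    # Negative second difference of the log-likelihood at its peak (A = x)
    f = lambda A: log_likelihood(A, x, sigma)
    return -(f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 3.0                                # observation; true A = 3 as in the example
print(peak_curvature(x, sigma=1 / 3))  # ~9.0 = 1/(1/3)^2: sharply peaked
print(peak_curvature(x, sigma=1.0))    # ~1.0 = 1/1^2: broad
```

For a Gaussian likelihood the numerical curvature matches $1/\sigma^2$ almost exactly, since the log-likelihood is quadratic in $A$.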

### Example 2

$$x = A + w$$

$$\log p(x|A) = -\log\sqrt{2\pi\sigma^2} - \frac{1}{2\sigma^2}(x - A)^2$$
(1)
$$\frac{\partial \log p(x|A)}{\partial A} = \frac{1}{\sigma^2}(x - A)$$
$$\frac{\partial^2 \log p(x|A)}{\partial A^2} = -\frac{1}{\sigma^2}$$
(2)
The curvature $-\frac{\partial^2 \log p(x|A)}{\partial A^2} = \frac{1}{\sigma^2}$ increases as $\sigma^2$ decreases (curvature = peakiness).

In general, the curvature will depend on the observed data; $\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}$ is a function of $x$. Therefore, an average measure of curvature is more appropriate:

$$-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\right]$$
(3)
This averages out the randomness due to the data and is a function of $\theta$ alone.

We are now ready to state the CRLB theorem.

### Theorem 1: Cramer-Rao Lower Bound Theorem

Assume that the pdf $p(x|\theta)$ satisfies the "regularity" condition
$$E\left[\frac{\partial \log p(x|\theta)}{\partial\theta}\right] = 0 \quad \text{for all } \theta,$$
where the expectation is taken with respect to $p(x|\theta)$. Then the variance of any unbiased estimator $\hat\theta$ must satisfy

$$\sigma_{\hat\theta}^2 \ge \frac{1}{-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\right]}$$
(4)
where the derivative is evaluated at the true value of $\theta$ and the expectation is with respect to $p(x|\theta)$. Moreover, an unbiased estimator may be found that attains the bound for all $\theta$ if and only if
$$\frac{\partial \log p(x|\theta)}{\partial\theta} = I(\theta)\,(g(x) - \theta)$$
(5)
for some functions $g$ and $I$.

The corresponding estimator is MVUB and is given by $\hat\theta = g(x)$, and the minimum variance is $1/I(\theta)$.

#### Example

$x = A + w$ where $w \sim \mathcal{N}(0, \sigma^2)$ and $\theta = A$. The regularity condition holds:
$$E\left[\frac{\partial \log p}{\partial A}\right] = E\left[\frac{1}{\sigma^2}(x - A)\right] = 0 \quad \text{for all } A.$$
$$\mathrm{CRLB} = \frac{1}{-E\left[\frac{\partial^2 \log p}{\partial A^2}\right]} = \frac{1}{1/\sigma^2} = \sigma^2$$
Therefore, any unbiased estimator $\hat A$ has $\sigma_{\hat A}^2 \ge \sigma^2$. But we know that $\hat A = x$ has $\sigma_{\hat A}^2 = \sigma^2$. Therefore, $\hat A = x$ is the MVUB estimator.

##### note:
$\theta = A$, $I(\theta) = \frac{1}{\sigma^2}$, $g(x) = x$.
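A small Monte Carlo sketch (the values $A = 3$, $\sigma = 0.5$ are illustrative assumptions, not from the module) confirms that $\hat A = x$ is unbiased with variance equal to the CRLB $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma, trials = 3.0, 0.5, 100_000

x = A + sigma * rng.standard_normal(trials)  # one observation per trial
A_hat = x                                    # the estimator A_hat = x

print(A_hat.mean())  # ~3.0: unbiased
print(A_hat.var())   # ~0.25 = sigma^2, the CRLB
```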

#### Proof

First consider the regularity condition: $E\left[\frac{\partial \log p(x|\theta)}{\partial\theta}\right] = 0$.

##### note:
$$E\left[\frac{\partial \log p(x|\theta)}{\partial\theta}\right] = \int \frac{\partial \log p(x|\theta)}{\partial\theta}\, p(x|\theta)\, dx = \int \frac{\partial p(x|\theta)}{\partial\theta}\, dx$$
Now, assuming that we can interchange the order of differentiation and integration,
$$E\left[\frac{\partial \log p(x|\theta)}{\partial\theta}\right] = \frac{\partial}{\partial\theta}\int p(x|\theta)\, dx = \frac{\partial}{\partial\theta} 1 = 0.$$
So the regularity condition is satisfied whenever this interchange is possible[1]; i.e., when the derivative is well-defined. It fails, for example, for a uniform density whose support depends on $\theta$.

Now let's derive the CRLB for a scalar parameter $\alpha = g(\theta)$, where the pdf is $p(x|\theta)$. Consider any unbiased estimator $\hat\alpha$ of $\alpha$: $E[\hat\alpha] = \alpha = g(\theta)$. Note that this is equivalent to
$$\int \hat\alpha\, p(x|\theta)\, dx = g(\theta)$$
where $\hat\alpha$ is unbiased. Now differentiate both sides:
$$\int \hat\alpha\, \frac{\partial p(x|\theta)}{\partial\theta}\, dx = \frac{\partial g(\theta)}{\partial\theta}$$
or
$$\int \hat\alpha\, \frac{\partial \log p(x|\theta)}{\partial\theta}\, p(x|\theta)\, dx = \frac{\partial g(\theta)}{\partial\theta}.$$

Now, exploiting the regularity condition,

$$\int (\hat\alpha - \alpha)\, \frac{\partial \log p(x|\theta)}{\partial\theta}\, p(x|\theta)\, dx = \frac{\partial g(\theta)}{\partial\theta}$$
(6)
since
$$\int \alpha\, \frac{\partial \log p(x|\theta)}{\partial\theta}\, p(x|\theta)\, dx = \alpha\, E\left[\frac{\partial \log p(x|\theta)}{\partial\theta}\right] = 0.$$
Now apply the Cauchy-Schwarz inequality to the integral above:
$$\left(\frac{\partial g(\theta)}{\partial\theta}\right)^2 = \left(\int (\hat\alpha - \alpha)\, \frac{\partial \log p(x|\theta)}{\partial\theta}\, p(x|\theta)\, dx\right)^2$$
$$\left(\frac{\partial g(\theta)}{\partial\theta}\right)^2 \le \int (\hat\alpha - \alpha)^2\, p(x|\theta)\, dx \cdot \int \left(\frac{\partial \log p(x|\theta)}{\partial\theta}\right)^2 p(x|\theta)\, dx$$
The variance of $\hat\alpha$ is $\sigma_{\hat\alpha}^2 = \int (\hat\alpha - \alpha)^2\, p(x|\theta)\, dx$, so
$$\sigma_{\hat\alpha}^2 \ge \frac{\left(\frac{\partial g(\theta)}{\partial\theta}\right)^2}{E\left[\left(\frac{\partial \log p(x|\theta)}{\partial\theta}\right)^2\right]}$$
(7)
Now we note that
$$E\left[\left(\frac{\partial \log p(x|\theta)}{\partial\theta}\right)^2\right] = -E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\right].$$
Why? By the regularity condition,
$$E\left[\frac{\partial \log p(x|\theta)}{\partial\theta}\right] = \int \frac{\partial \log p(x|\theta)}{\partial\theta}\, p(x|\theta)\, dx = 0.$$
Thus,
$$\frac{\partial}{\partial\theta}\int \frac{\partial \log p(x|\theta)}{\partial\theta}\, p(x|\theta)\, dx = 0$$
or
$$\int \left(\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\, p(x|\theta) + \frac{\partial \log p(x|\theta)}{\partial\theta}\, \frac{\partial p(x|\theta)}{\partial\theta}\right) dx = 0.$$
Therefore,
$$-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\right] = \int \left(\frac{\partial \log p(x|\theta)}{\partial\theta}\right)^2 p(x|\theta)\, dx = E\left[\left(\frac{\partial \log p(x|\theta)}{\partial\theta}\right)^2\right].$$
Thus, Equation 7 becomes
$$\sigma_{\hat\alpha}^2 \ge \frac{\left(\frac{\partial g(\theta)}{\partial\theta}\right)^2}{-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\right]}.$$
##### note:
If $g(\theta) = \theta$, then the numerator is 1.
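The identity $E[(\partial \log p/\partial\theta)^2] = -E[\partial^2 \log p/\partial\theta^2]$ can be spot-checked by Monte Carlo for the Gaussian example, where the score is $(x - A)/\sigma^2$ and the curvature is the constant $-1/\sigma^2$ (parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
A, sigma, trials = 1.0, 2.0, 500_000

x = A + sigma * rng.standard_normal(trials)  # draws from p(x|A)
score = (x - A) / sigma**2                   # d/dA log p(x|A)

print(score.mean())       # ~0: the regularity condition
print((score**2).mean())  # ~0.25 = 1/sigma^2
print(1 / sigma**2)       # -E[d^2/dA^2 log p], computed exactly
```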

#### Example: DC Level in White Gaussian Noise

$x_n = A + w_n$, $n = 1, \ldots, N$, where $w_n \sim \mathcal{N}(0, \sigma^2)$ iid.
$$p(x|A) = \frac{1}{(2\pi\sigma^2)^{N/2}}\, e^{-\frac{1}{2\sigma^2}\sum_{n=1}^N (x_n - A)^2}$$
$$\frac{\partial \log p(x|A)}{\partial A} = \frac{\partial}{\partial A}\left(-\log (2\pi\sigma^2)^{N/2} - \frac{1}{2\sigma^2}\sum_{n=1}^N (x_n - A)^2\right) = \frac{1}{\sigma^2}\sum_{n=1}^N (x_n - A)$$
$$E\left[\frac{\partial \log p(x|A)}{\partial A}\right] = 0$$
$$\frac{\partial^2 \log p(x|A)}{\partial A^2} = -\frac{N}{\sigma^2}$$
Therefore, the variance of any unbiased estimator satisfies
$$\sigma_{\hat A}^2 \ge \frac{\sigma^2}{N}.$$
The sample-mean estimator $\hat A = \frac{1}{N}\sum_{n=1}^N x_n$ attains this bound and therefore is MVUB.
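A Monte Carlo sketch (the values $A$, $\sigma$, $N$ below are illustrative assumptions) shows the sample mean hitting the $\sigma^2/N$ bound:

```python
import numpy as np

rng = np.random.default_rng(3)
A, sigma, N, trials = 3.0, 1.0, 25, 50_000

x = A + sigma * rng.standard_normal((trials, N))  # N observations per trial
A_hat = x.mean(axis=1)                            # sample-mean estimator

print(A_hat.var())  # ~0.04 = sigma^2 / N: the CRLB is attained
```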

### Corollary 1

When the CRLB is attained,
$$\sigma_{\hat\theta}^2 = \frac{1}{I(\theta)}$$
where
$$I(\theta) = -E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\right].$$
The quantity $I(\theta)$ is called the Fisher information that $x$ contains about $\theta$.

#### Proof

By the CRLB theorem,
$$\sigma_{\hat\theta}^2 = \frac{1}{-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\right]}$$
and
$$\frac{\partial \log p(x|\theta)}{\partial\theta} = I(\theta)\,(\hat\theta - \theta).$$
This yields
$$\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2} = \frac{\partial I(\theta)}{\partial\theta}\,(\hat\theta - \theta) - I(\theta),$$
which in turn yields (since $E[\hat\theta - \theta] = 0$)
$$-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta^2}\right] = I(\theta).$$
So $\sigma_{\hat\theta}^2 = \frac{1}{I(\theta)}$.

The CRLB is not always attained.

### Example 3: Phase Estimation

$x_n = A\cos(2\pi f_0 n + \phi) + w_n$, $n = 1, \ldots, N$. The amplitude and frequency are assumed known, and $w_n \sim \mathcal{N}(0, \sigma^2)$ iid.
$$p(x|\phi) = \frac{1}{(2\pi\sigma^2)^{N/2}}\, e^{-\frac{1}{2\sigma^2}\sum_{n=1}^N \left(x_n - A\cos(2\pi f_0 n + \phi)\right)^2}$$
$$\frac{\partial \log p(x|\phi)}{\partial\phi} = -\frac{A}{\sigma^2}\sum_{n=1}^N \left( x_n \sin(2\pi f_0 n + \phi) - \frac{A}{2}\sin(4\pi f_0 n + 2\phi) \right)$$
$$\frac{\partial^2 \log p(x|\phi)}{\partial\phi^2} = -\frac{A}{\sigma^2}\sum_{n=1}^N \left( x_n \cos(2\pi f_0 n + \phi) - A\cos(4\pi f_0 n + 2\phi) \right)$$
Since $E[x_n] = A\cos(2\pi f_0 n + \phi)$,
$$-E\left[\frac{\partial^2 \log p(x|\phi)}{\partial\phi^2}\right] = \frac{A^2}{\sigma^2}\sum_{n=1}^N \left( \cos^2(2\pi f_0 n + \phi) - \cos(4\pi f_0 n + 2\phi) \right) = \frac{A^2}{\sigma^2}\sum_{n=1}^N \left( \frac{1}{2} - \frac{1}{2}\cos(4\pi f_0 n + 2\phi) \right)$$
Since $I(\phi) = -E\left[\frac{\partial^2 \log p(x|\phi)}{\partial\phi^2}\right]$,
$$I(\phi) \approx \frac{N A^2}{2\sigma^2}$$
because $\frac{1}{N}\sum_{n=1}^N \cos(4\pi f_0 n + 2\phi) \approx 0$ for $f_0$ not near $0$ or $1/2$. Therefore,
$$\sigma_{\hat\phi}^2 \ge \frac{2\sigma^2}{N A^2}.$$
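The approximation $I(\phi) \approx \frac{N A^2}{2\sigma^2}$ can be checked by summing the exact expression above; the parameter values here are illustrative assumptions:

```python
import numpy as np

A, sigma, N, f0, phi = 1.0, 1.0, 100, 0.08, 0.3
n = np.arange(1, N + 1)

# Exact Fisher information from the expectation derived above
I_exact = (A**2 / sigma**2) * np.sum(0.5 - 0.5 * np.cos(4 * np.pi * f0 * n + 2 * phi))
I_approx = N * A**2 / (2 * sigma**2)  # large-N approximation

print(I_exact, I_approx)          # close: the cosine sum nearly averages out
print(2 * sigma**2 / (N * A**2))  # resulting (approximate) CRLB on the phase variance
```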

In this case, it can be shown that there does not exist a $g$ such that
$$\frac{\partial \log p(x|\phi)}{\partial\phi} = I(\phi)\,(g(x) - \phi).$$
Therefore, an unbiased phase estimator that attains the CRLB does not exist.

However, an MVUB estimator may still exist; its variance will simply be larger than the CRLB.

## Efficiency

An estimator which is unbiased and attains the CRLB is said to be efficient.

### Example 4

The sample-mean estimator is efficient.

### Example 5

Suppose three unbiased estimators exist for a parameter $\theta$.

### Example 6: Sinusoidal Frequency Estimation

$$s_n(f_0) = A\cos(2\pi f_0 n + \phi), \quad 0 < f_0 < 1/2$$
$$x_n = s_n(f_0) + w_n, \quad n = 1, \ldots, N$$
$A$ and $\phi$ are known, while $f_0$ is unknown.
$$\sigma_{\hat f_0}^2 \ge \frac{\sigma^2}{A^2 \sum_{n=1}^N \left(2\pi n \sin(2\pi f_0 n + \phi)\right)^2}$$
Suppose $\frac{A^2}{\sigma^2} = 1$ (SNR), with $N = 10$ and $\phi = 0$.
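The bound can be evaluated directly for the stated values ($N = 10$, $\phi = 0$, $A^2/\sigma^2 = 1$); a minimal sketch:

```python
import numpy as np

snr, N, phi = 1.0, 10, 0.0  # A^2/sigma^2 = 1, as in the example
n = np.arange(1, N + 1)

def crlb_freq(f0):
    # CRLB on the variance of any unbiased estimator of f0
    s_deriv = 2 * np.pi * n * np.sin(2 * np.pi * f0 * n + phi)
    return 1.0 / (snr * np.sum(s_deriv**2))

for f0 in (0.1, 0.25, 0.4):
    print(f0, crlb_freq(f0))  # the bound varies strongly with f0
```

Unlike the DC-level case, the bound here depends on the unknown $f_0$ itself, so it is typically plotted over the interval $(0, 1/2)$.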

## CRLB for Vector Parameter

Let $\theta = (\theta_1, \theta_2, \ldots, \theta_p)^T$ and suppose $\hat\theta$ is unbiased, i.e., $E[\hat\theta_i] = \theta_i$ for $i = 1, \ldots, p$.

## CRLB

$$\sigma_{\hat\theta_i}^2 \ge \left[I(\theta)^{-1}\right]_{i,i}$$
where
$$\left[I(\theta)\right]_{i,j} = -E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta_i \partial\theta_j}\right].$$
$I(\theta)$ is the Fisher Information Matrix.

### Theorem 2: Cramer-Rao Lower Bound - Vector Parameter

Assume the pdf $p(x|\theta)$ satisfies the "regularity" condition
$$E\left[\frac{\partial \log p(x|\theta)}{\partial\theta}\right] = 0 \quad \text{for all } \theta.$$
Then the covariance matrix of any unbiased estimator $\hat\theta$ satisfies
$$C_{\hat\theta} - I(\theta)^{-1} \ge 0$$
(meaning $C_{\hat\theta} - I(\theta)^{-1}$ is positive semidefinite). The Fisher Information Matrix is
$$\left[I(\theta)\right]_{i,j} = -E\left[\frac{\partial^2 \log p(x|\theta)}{\partial\theta_i \partial\theta_j}\right].$$
Furthermore, $\hat\theta$ attains the CRLB ($C_{\hat\theta} = I(\theta)^{-1}$) iff
$$\frac{\partial \log p(x|\theta)}{\partial\theta} = I(\theta)\,(g(x) - \theta)$$
and $\hat\theta = g(x)$.

#### Example: DC Level in White Gaussian Noise

$x_n = A + w_n$, $n = 1, \ldots, N$, where $A$ is unknown and $w_n \sim \mathcal{N}(0, \sigma^2)$ with $\sigma^2$ also unknown. Let $\theta = (A, \sigma^2)^T$.
$$\log p(x|\theta) = -\frac{N}{2}\log(2\pi) - \frac{N}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{n=1}^N (x_n - A)^2$$
$$\frac{\partial \log p(x|\theta)}{\partial A} = \frac{1}{\sigma^2}\sum_{n=1}^N (x_n - A)$$
$$\frac{\partial \log p(x|\theta)}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{n=1}^N (x_n - A)^2$$
$$-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial A^2}\right] = \frac{N}{\sigma^2}$$
$$-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial A\, \partial\sigma^2}\right] = E\left[\frac{1}{\sigma^4}\sum_{n=1}^N (x_n - A)\right] = 0$$
$$-E\left[\frac{\partial^2 \log p(x|\theta)}{\partial (\sigma^2)^2}\right] = -\frac{N}{2\sigma^4} + \frac{1}{\sigma^6}\, E\left[\sum_{n=1}^N (x_n - A)^2\right] = \frac{N}{2\sigma^4}$$
This leads to
$$I(\theta) = \begin{pmatrix} \frac{N}{\sigma^2} & 0 \\ 0 & \frac{N}{2\sigma^4} \end{pmatrix}$$
$$\sigma_{\hat A}^2 \ge \frac{\sigma^2}{N}, \qquad \sigma_{\hat{\sigma^2}}^2 \ge \frac{2\sigma^4}{N}$$
Note that the CRLB for $\hat A$ is the same whether or not $\sigma^2$ is known. This happens here because of the diagonal nature of the Fisher Information Matrix.
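A sketch (with illustrative parameter values) forming the diagonal Fisher Information Matrix above and checking the sample-mean bound on $A$ by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(4)
A, sigma2, N, trials = 2.0, 1.5, 50, 50_000

# Fisher Information Matrix from the derivation above (diagonal)
I = np.diag([N / sigma2, N / (2 * sigma2**2)])
crlb = np.diag(np.linalg.inv(I))
print(crlb)  # [sigma^2/N, 2*sigma^4/N] = [0.03, 0.09]

# Monte Carlo: the sample mean attains the bound on A
x = A + np.sqrt(sigma2) * rng.standard_normal((trials, N))
print(x.mean(axis=1).var())  # ~0.03 = sigma^2 / N
```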

In general the Fisher Information Matrix is not diagonal and consequently the CRLBs will depend on other unknown parameters.

## Footnotes

1. This is simply the Fundamental Theorem of Calculus applied to $p(x|\theta)$. So long as $p(x|\theta)$ is absolutely continuous with respect to the Lebesgue measure, the interchange is possible.

## Glossary

iid:
independent and identically distributed
