
Course by: Clayton Scott.

# The Minimum Variance Unbiased Estimator

Module by: Clayton Scott, Robert Nowak.

Summary: This module motivates and introduces the minimum variance unbiased estimator (MVUE). This is the primary criterion in the classical (frequentist) approach to parameter estimation. We introduce the concepts of mean squared error (MSE), variance, bias, unbiased estimators, and the bias-variance decomposition of the MSE.

## In Search of a Useful Criterion

In parameter estimation, we observe an $N$-dimensional vector $X$ of measurements. The distribution of $X$ is governed by a density or probability mass function $f_\theta(x)$, which is parameterized by an unknown parameter $\theta$. We would like to establish a useful criterion for guiding the design and assessing the quality of an estimator $\hat{\theta}(x)$. We will adopt a classical (frequentist) view of the unknown parameter: it is not itself random, it is simply unknown.

One possibility is to try to design an estimator that minimizes the mean-squared error, that is, the expected squared deviation of the estimated parameter value from the true parameter value. For a scalar parameter, the MSE is defined by

$$\mathrm{MSE}(\hat{\theta}) = E\left[\left(\hat{\theta}(x) - \theta\right)^2\right] \qquad (1)$$

For a vector parameter $\theta$, this definition is generalized to

$$\mathrm{MSE}(\hat{\theta}) = E\left[\left(\hat{\theta}(x) - \theta\right)^T \left(\hat{\theta}(x) - \theta\right)\right] \qquad (2)$$

The expectation is with respect to the distribution of $X$. Note that for a given estimator, the MSE is a function of $\theta$.

While the MSE is a perfectly reasonable way to assess the quality of an estimator, it does not lead to a useful design criterion. Indeed, the estimator that minimizes the MSE is simply the estimator

$$\hat{\theta}(x) = \theta \qquad (3)$$

Unfortunately, this depends on the value of the unknown parameter, and is therefore not realizable! We need a criterion that leads to a realizable estimator.

### Note:

In the Bayesian Approach to Parameter Estimation, the MSE is a useful design rule.

## The Bias-Variance Decomposition of the MSE

It is possible to rewrite the MSE in such a way that a useful optimality criterion for estimation emerges. For a scalar parameter $\theta$,

$$\mathrm{MSE}(\hat{\theta}) = E\left[\left(\hat{\theta} - E[\hat{\theta}]\right)^2\right] + \left(E[\hat{\theta}] - \theta\right)^2$$

This expression is called the bias-variance decomposition of the mean-squared error. The first term on the right-hand side is the variance of the estimator, and the second term on the right-hand side is the square of the bias of the estimator. The formal definitions of these concepts for vector parameters are now given:

Let $\hat{\theta}$ be an estimator of the parameter $\theta$.

Definition 1: variance
The variance of $\hat{\theta}$ is
$$\mathrm{Var}(\hat{\theta}) = E\left[\left\|\hat{\theta} - E[\hat{\theta}]\right\|^2\right]$$
Definition 2: bias
The bias of $\hat{\theta}$ is
$$\mathrm{Bias}(\hat{\theta}) = E[\hat{\theta}] - \theta$$

The bias-variance decomposition also holds for vector parameters:

$$\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \left\|\mathrm{Bias}(\hat{\theta})\right\|^2$$

The proof is a straightforward generalization of the argument for the scalar parameter case.

### Exercise 1

Prove the bias-variance decomposition of the MSE for the vector parameter case.

The MSE decomposes into the sum of two non-negative terms, the squared bias and the variance. In general, for an arbitrary estimator, both of these terms will be nonzero. Furthermore, as an estimator is modified so that one term increases, typically the other term will decrease. This is the so-called bias-variance tradeoff. The following example illustrates this effect.
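The decomposition is easy to verify numerically. The sketch below, with arbitrarily chosen illustrative parameter values, estimates the MSE, variance, and squared bias of a deliberately biased estimator (a scaled sample mean) by Monte Carlo and confirms that they satisfy the decomposition:

```python
import numpy as np

# Monte Carlo check of MSE = Var + Bias^2 for a deliberately biased
# estimator: a scaled sample mean alpha * (1/N) * sum(x_n) of Gaussian data.
# A, sigma, N, trials, and alpha are arbitrary illustrative choices.
rng = np.random.default_rng(0)
A, sigma, N, trials = 2.0, 1.0, 10, 200_000
alpha = 0.8   # alpha != 1 introduces bias but reduces variance

x = A + sigma * rng.standard_normal((trials, N))
est = alpha * x.mean(axis=1)          # one estimate per trial

mse = np.mean((est - A) ** 2)         # empirical mean-squared error
var = np.var(est)                     # empirical variance of the estimator
bias_sq = (np.mean(est) - A) ** 2     # empirical squared bias

print(mse, var + bias_sq)             # the two agree, as the decomposition predicts
```

For sample moments the identity holds exactly, so the two printed numbers match to floating-point precision, and both are close to the theoretical value $\alpha^2 \sigma^2 / N + (\alpha - 1)^2 A^2$.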

### Example 1

Let $\tilde{A} = \alpha \frac{1}{N}\sum_{n=1}^{N} x_n$, where $x_n = A + w_n$, $w_n \sim \mathcal{N}(0, \sigma^2)$, and $\alpha$ is an arbitrary constant.

Let's find the value of $\alpha$ that minimizes the MSE.

$$\mathrm{MSE}(\tilde{A}) = E\left[\left(\tilde{A} - A\right)^2\right] \qquad (4)$$

#### note:

$\tilde{A} = \alpha S_N$, where $S_N = \frac{1}{N}\sum_{n=1}^{N} x_n \sim \mathcal{N}\left(A, \frac{\sigma^2}{N}\right)$
$$\begin{aligned}
\mathrm{MSE}(\tilde{A}) &= E[\tilde{A}^2] - 2E[\tilde{A}]A + A^2 \\
&= \alpha^2 E\left[\frac{1}{N^2}\sum_{i,j=1}^{N} x_i x_j\right] - 2\alpha E\left[\frac{1}{N}\sum_{n=1}^{N} x_n\right]A + A^2 \\
&= \alpha^2 \frac{1}{N^2}\sum_{i,j=1}^{N} E[x_i x_j] - 2\alpha \frac{1}{N}\sum_{n=1}^{N} E[x_n]\,A + A^2
\end{aligned} \qquad (5)$$

$$E[x_i x_j] = \begin{cases} A^2 + \sigma^2 & \text{if } i = j \\ A^2 & \text{if } i \neq j \end{cases}$$

$$\mathrm{MSE}(\tilde{A}) = \alpha^2\left(A^2 + \frac{\sigma^2}{N}\right) - 2\alpha A^2 + A^2 = \alpha^2 \frac{\sigma^2}{N} + (\alpha - 1)^2 A^2 \qquad (6)$$

The variance and squared bias of $\tilde{A}$ are

$$\mathrm{Var}(\tilde{A}) = \alpha^2 \frac{\sigma^2}{N}, \qquad \mathrm{Bias}^2(\tilde{A}) = (\alpha - 1)^2 A^2$$

Setting the derivative of the MSE with respect to $\alpha$ to zero,

$$\frac{\partial\,\mathrm{MSE}(\tilde{A})}{\partial \alpha} = 2\alpha \frac{\sigma^2}{N} + 2(\alpha - 1)A^2 = 0$$

$$\alpha^* = \frac{A^2}{A^2 + \frac{\sigma^2}{N}} \qquad (7)$$
The optimal value $\alpha^*$ depends on the unknown parameter $A$! Therefore the estimator is not realizable.

Note that the problematic dependence on the parameter enters through the Bias component of the MSE. Therefore, a reasonable alternative is to constrain the estimator to be unbiased, and then find the estimator that produces the minimum variance (and hence provides the minimum MSE among all unbiased estimators).

#### note:

Sometimes no unbiased estimator exists, and we cannot proceed at all in this direction.

In this example, note that as the value of $\alpha$ varies, one of the squared bias or variance terms increases, while the other one decreases. Furthermore, note that the dependence of the MSE on the unknown parameter is manifested in the bias.
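A quick numerical sketch of the tradeoff: plugging the closed-form MSE from Equation (6) into a grid search over $\alpha$ recovers the optimal value from Equation (7). The values of $A$, $\sigma$, and $N$ below are arbitrary illustrative choices.

```python
import numpy as np

# Grid search over alpha using the closed-form MSE of Equation (6).
# A, sigma, and N are arbitrary illustrative choices.
A, sigma, N = 1.5, 2.0, 20
alpha_star = A**2 / (A**2 + sigma**2 / N)   # Equation (7)

def mse(alpha):
    # Equation (6): variance term + squared-bias term
    return alpha**2 * sigma**2 / N + (alpha - 1)**2 * A**2

alphas = np.linspace(0.0, 1.2, 121)
best = alphas[np.argmin(mse(alphas))]
print(alpha_star, best)   # the grid minimizer lands next to alpha*
```

Note that the shrunken estimator at $\alpha^*$ beats both the unbiased choice $\alpha = 1$ (all variance, no bias) and the trivial choice $\alpha = 0$ (no variance, all bias).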

## Unbiased Estimators

Since the bias depends on the value of the unknown parameter, it seems that any estimation criterion that depends on the bias would lead to an unrealizable estimator, as the previous example suggests (although in certain cases realizable minimum MSE estimators can be found). As an alternative to minimizing the MSE, we could focus on estimators that have a bias of zero. In this case, the bias contributes zero to the MSE, and in particular, it does not involve the unknown parameter. By focusing on estimators with zero bias, we may hope to arrive at a design criterion that yields realizable estimators.

Definition 3: unbiased
An estimator $\hat{\theta}$ is called unbiased if its bias is zero for all values of the unknown parameter. Equivalently,
$$E[\hat{\theta}] = \theta \quad \text{for all } \theta$$
For an estimator to be unbiased we require that on average the estimator will yield the true value of the unknown parameter. We now give some examples.

The sample mean of a random sample is always an unbiased estimator for the mean.

### Example 2

Estimate the DC level in Gaussian white noise.

Suppose we have data $x_1, \ldots, x_N$ and model the data by $x_n = A + w_n$, $n \in \{1, \ldots, N\}$, where $A$ is the unknown DC level and $w_n \sim \mathcal{N}(0, \sigma^2)$.

The parameter satisfies $-\infty < A < \infty$.

Consider the sample-mean estimator $\hat{A} = \frac{1}{N}\sum_{n=1}^{N} x_n$. Is $\hat{A}$ unbiased? Yes.

Since $E[\cdot]$ is a linear operator,

$$E[\hat{A}] = \frac{1}{N}\sum_{n=1}^{N} E[x_n] = \frac{1}{N}\sum_{n=1}^{N} A = A$$

Therefore, $\hat{A}$ is unbiased!

What does the unbiased restriction really imply? Recall that $\hat{\theta} = g(x)$, a function of the data. Therefore,

$$E[\hat{\theta}] = \theta \quad \text{for all } \theta$$

and

$$E[\hat{\theta}] = \int g(x)\, p(x \mid \theta)\, dx = \theta \quad \text{for all } \theta$$

Hence, to be unbiased, the estimator $g(\cdot)$ must satisfy an integral equation involving the densities $p(x \mid \theta)$.
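The unbiasedness of the sample mean in Example 2 can be checked empirically: averaging the estimator over many independent realizations should reproduce the true DC level. The parameter values below are arbitrary illustrations.

```python
import numpy as np

# Monte Carlo check that the sample mean is unbiased for the DC level A.
# A, sigma, N, and trials are arbitrary illustrative choices.
rng = np.random.default_rng(1)
A, sigma, N, trials = -3.0, 1.5, 8, 500_000

x = A + sigma * rng.standard_normal((trials, N))   # each row: x_1, ..., x_N
A_hat = x.mean(axis=1)                             # sample-mean estimate per trial

print(A_hat.mean())   # close to A, the true DC level
```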

The bias of an estimator may be zero for some values of the unknown parameter but not others. In this case, the estimator is not an unbiased estimator.

### Example 3

$$\tilde{A} = \frac{1}{2N}\sum_{n=1}^{N} x_n$$

$$E[\tilde{A}] = \frac{1}{2}A = \begin{cases} 0 = A & \text{if } A = 0 \quad \text{(unbiased)} \\ \frac{1}{2}A \neq A & \text{if } A \neq 0 \quad \text{(biased)} \end{cases}$$

An unbiased estimator is not necessarily a good estimator.
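A simulation of the halved sample mean from Example 3 makes the parameter-dependent bias concrete: its expectation is $A/2$, so the bias vanishes only at $A = 0$. The parameter values below are arbitrary illustrations.

```python
import numpy as np

# The halved sample mean of Example 3 has E[A-tilde] = A/2, so its bias
# is zero only when A = 0.  Parameter values are arbitrary illustrations.
rng = np.random.default_rng(2)
sigma, N, trials = 1.0, 10, 400_000

def estimator_mean(A):
    x = A + sigma * rng.standard_normal((trials, N))
    return (x.sum(axis=1) / (2 * N)).mean()   # A-tilde = (1/(2N)) * sum(x_n)

m0 = estimator_mean(0.0)   # unbiased here: mean is near 0 = A
m4 = estimator_mean(4.0)   # biased here: mean is near A/2 = 2, not 4
print(m0, m4)
```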

Some unbiased estimators are more useful than others.

### Example 4

$$x_n = A + w_n, \qquad w_n \sim \mathcal{N}(0, \sigma^2)$$

$$\hat{A}_1 = x_1, \qquad E[\hat{A}_1] = A$$

$$\hat{A}_2 = \frac{1}{N}\sum_{n=1}^{N} x_n, \qquad E[\hat{A}_2] = A$$

$$\mathrm{Var}(\hat{A}_1) = \sigma^2, \qquad \mathrm{Var}(\hat{A}_2) = \frac{\sigma^2}{N}$$

Both estimators are unbiased, but $\hat{A}_2$ has a much lower variance and therefore is a better estimator.

#### note:

$\hat{A}_1$ is inconsistent: its variance stays at $\sigma^2$ no matter how large $N$ is. $\hat{A}_2$ is consistent: its variance $\frac{\sigma^2}{N} \to 0$ as $N \to \infty$.
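The variance gap between the two unbiased estimators of Example 4 is easy to see by simulation: the variance of $\hat{A}_1$ stays near $\sigma^2$ for every sample size, while the variance of $\hat{A}_2$ shrinks like $1/N$. Parameter values below are arbitrary illustrations.

```python
import numpy as np

# Both estimators of Example 4 are unbiased, but their variances differ:
# Var(A-hat-1) = sigma^2 regardless of N, while Var(A-hat-2) = sigma^2 / N.
# Parameter values are arbitrary illustrations.
rng = np.random.default_rng(3)
A, sigma, trials = 1.0, 2.0, 20_000

for N in (5, 50, 500):
    x = A + sigma * rng.standard_normal((trials, N))
    v1 = x[:, 0].var()          # A-hat-1 = x_1: ignores all but one sample
    v2 = x.mean(axis=1).var()   # A-hat-2 = sample mean
    print(N, v1, v2)            # v1 stays near sigma^2; v2 shrinks like 1/N
```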

## Minimum Variance Unbiased Estimators

Direct minimization of the MSE generally leads to non-realizable estimators. Since the dependence of an estimator on the unknown parameter appears to come from the bias term, we hope that constraining the bias to be zero will lead to a useful design criterion. But if the bias is zero, then the mean-squared error is just the variance. This gives rise to the minimum variance unbiased estimator (MVUE) for θθ.

Definition 4: MVUE
An estimator $\hat{\theta}$ is the minimum variance unbiased estimator if it is unbiased and has the smallest variance of any unbiased estimator for all values of the unknown parameter. In other words, the MVUE satisfies the following two properties:

1. $E[\hat{\theta}] = \theta$ for all $\theta$ (unbiasedness).
2. $\mathrm{Var}(\hat{\theta}) \leq \mathrm{Var}(\check{\theta})$ for all $\theta$, where $\check{\theta}$ is any other unbiased estimator.
The minimum variance unbiased criterion is the primary estimation criterion in the classical (non-Bayesian) approach to parameter estimation. Before delving into ways of finding the MVUE, let's first consider whether the MVUE always exists.

## Existence of the MVUE

The MVUE does not always exist. In fact, it may be that no unbiased estimators exist, as the following example demonstrates.

Place [Insert 7] here and make it an example (5).

Even if unbiased estimators exist, it may be that no single unbiased estimator has the minimum variance for all values of the unknown parameter.

Place [Insert 8] here and make it an example (6).

### Exercise 2

Compute the variances of the estimators in the previous examples. Using the Cramer-Rao lower bound, show that one of these two estimators has minimum variance among all unbiased estimators. Deduce that no single realizable estimator can have minimum variance among all unbiased estimators for all parameter values (i.e., the MVUE does not exist). When using the Cramer-Rao bound, note that the likelihood is not differentiable at $\theta = 0$.

## Methods for Finding the MVUE

Despite the fact that the MVUE doesn't always exist, in many cases of interest it does exist, and we need methods for finding it. Unfortunately, there is no "turn the crank" algorithm for finding MVUEs. There are, instead, a variety of techniques that can sometimes be applied to find the MVUE. These methods include:

1. Compute the Cramer-Rao Lower Bound, and check the condition for equality.
2. Find a complete sufficient statistic and apply the Rao-Blackwell Theorem.
3. If the data obeys a general linear model, restrict to the class of linear unbiased estimators, and find the minimum variance estimator within that class. This method is in general suboptimal, although when the noise is Gaussian, it produces the MVUE.
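As a sketch of the third method, consider the DC-level model with correlated Gaussian noise, $x = HA + w$ with $H$ a column of ones and $\mathrm{Cov}(w) = C$. The minimum-variance linear unbiased estimator weights the data by $C^{-1}$; the AR(1)-style covariance and all parameter values below are illustrative assumptions, not part of this module.

```python
import numpy as np

# Sketch of method 3: for the linear model x = H*A + w with noise covariance C,
# the minimum-variance *linear* unbiased estimator is
#   A-hat = inv(H' C^-1 H) H' C^-1 x.
# The AR(1)-style covariance and all parameters are illustrative assumptions.
rng = np.random.default_rng(4)
N, A, rho = 6, 3.0, 0.6
H = np.ones((N, 1))                             # DC-level model x = H*A + w

idx = np.arange(N)
C = rho ** np.abs(np.subtract.outer(idx, idx))  # correlated-noise covariance

Cinv = np.linalg.inv(C)
W = np.linalg.inv(H.T @ Cinv @ H) @ H.T @ Cinv  # optimal linear weights (1 x N)

trials = 100_000
L = np.linalg.cholesky(C)
x = A + L @ rng.standard_normal((N, trials))    # noise with covariance C

A_blue = (W @ x).ravel()   # weighted estimator
A_mean = x.mean(axis=0)    # plain sample mean, for comparison

print(A_blue.mean(), A_blue.var(), A_mean.var())
```

Since $WH = 1$, the weighted estimator is unbiased, and its Monte Carlo variance comes out below that of the plain sample mean; with white noise ($C$ proportional to the identity) the weights reduce to the ordinary sample mean, and when the noise is Gaussian this linear estimator is in fact the MVUE, as noted above.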
