
# Statistical terminology

Module by: Christopher Curran.

Summary: This module contains a set of definitions from statistics that might be useful for advanced undergraduates.

## Important definitions in statistics

It is not unusual for students to forget important concepts learned in an earlier course. This set of definitions is intended to stir memories of those wonderful times when you were learning statistics and econometrics. It is not intended to replace a statistics course but to provide you with a handy guide to the definitions of some important terms in the statistical tools used by economists.

## Random variables

### Random experiment

A random experiment is an experiment whose outcome is uncertain.

### Outcome space

The outcome space (also sometimes referred to as the sample space) is the list of all possible outcomes of a random experiment.

### Example 1: Single toss of a coin.

Consider the toss of a coin. Since the outcome is uncertain, tossing the coin is an example of a random experiment. The outcome space consists of heads and tails. If we let X equal 0 when the outcome is heads and 1 when the outcome is tails, then X is a random variable. Since X can take on only the integer values 0 or 1, it is a discrete random variable.

### Random variable

A random variable is a number that can be assigned to an outcome of a random experiment. A discrete random variable has a finite (or countably infinite) number of possible values, while a continuous random variable can take on any value in an interval.

### Non-stochastic variable

A non-stochastic variable is any variable that is not a random variable; i.e., does not represent the outcome of a random experiment.

### Example 2: Multiple tosses of a coin.

Let x equal the number of heads that occur when a coin is tossed n times. Tossing the coin n times is a random experiment. The outcome space of this random experiment consists of the integers between 0 and n. Since the value of x represents the outcome of a random experiment, it is a random variable.
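As a concrete illustration, the random experiment in Example 2 can be simulated in Python. This is a sketch of my own, not part of the original module; the function name `count_heads` is made up for the example:

```python
import random

def count_heads(n, seed=None):
    """Toss a fair coin n times and return the number of heads.

    Each call is one realization of the random experiment, so the
    returned count x is a draw of the discrete random variable.
    """
    rng = random.Random(seed)
    # Encode heads as 1 and tails as 0; the sum is the head count.
    return sum(rng.randint(0, 1) for _ in range(n))

x = count_heads(10, seed=42)
print(x)  # an integer between 0 and 10
```

Repeated calls with different seeds give different values of x, which is exactly what makes x a random variable.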

### Random sample

A random sample of size n out of a population of size N has the characteristic that every member of the population is equally likely to be chosen.
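For illustration, Python's standard library can draw such a sample without replacement; this is a minimal sketch of my own, assuming a hypothetical population of N = 100 labeled members:

```python
import random

population = list(range(1, 101))   # hypothetical population of N = 100 members
rng = random.Random(0)

# random.sample draws n members without replacement; every member of the
# population is equally likely to appear in the sample.
sample = rng.sample(population, 10)
print(sample)
```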

### Example 3: Height of college age women.

Consider a random sample of the population of college age women. The height, x, of any woman chosen from this population is a random variable with a value somewhere in the outcome space, where the outcome space is the set of numbers between (say) 24 and 96 inches. Since in theory we can have as accurate a measurement as we might like, x can be thought of as being a continuous random variable.

## Probability

### General terms

#### Example 4

##### Discrete distribution.

Figure 1 illustrates a discrete probability distribution where $x_i$ goes from 1 to 8. The areas in the shaded rectangles sum to 1.

### Mathematical expectation

#### The variance of a distribution.

The population variance, $\sigma^2$, of a distribution is $\sigma^2 = E\left[ (x - \mu)^2 \right]$. Example 9 shows a shortcut way to calculate the population variance.

#### Example 9: Calculation of the population variance using the expected value operator.

Define the variance operator, V, to be:

$$V(x) = E\left[ (x - \mu)^2 \right].$$

Then,

$$E\left[ (x - \mu)^2 \right] = \int (x - \mu)^2 f(x)\,dx.$$

Squaring the term in the integral gives:

$$\int (x^2 - 2\mu x + \mu^2) f(x)\,dx = E(x^2 - 2\mu x + \mu^2).$$

Expanding the left-hand side of this equality:

$$\int x^2 f(x)\,dx - 2\mu \int x f(x)\,dx + \mu^2 \int f(x)\,dx.$$

Thus, we have established that:

$$V(x) = E(x^2) - 2\mu E(x) + \mu^2 \int f(x)\,dx.$$

Evaluating the last two terms gives

$$E(x) = \mu$$

and

$$\int f(x)\,dx = 1.$$

Thus, the variance of the distribution is

$$\sigma^2 = E(x^2) - 2\mu^2 + \mu^2 = E(x^2) - \mu^2.$$
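The shortcut formula can be checked numerically. The sketch below is my own illustration (using a fair six-sided die as the random variable); it approximates the expectation operator by averaging over many simulated draws:

```python
import random

# Approximate E[.] by averaging over many simulated draws of a
# hypothetical random variable (here, a fair six-sided die).
rng = random.Random(0)
draws = [rng.randint(1, 6) for _ in range(200_000)]

mu = sum(draws) / len(draws)                                     # estimate of E(x)
var_direct = sum((x - mu) ** 2 for x in draws) / len(draws)      # E[(x - mu)^2]
var_shortcut = sum(x * x for x in draws) / len(draws) - mu ** 2  # E(x^2) - mu^2

print(round(var_direct, 4), round(var_shortcut, 4))
```

Both quantities agree (and both approximate the true variance of a fair die, 35/12).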

#### Expected value operation rules.

As shown in Example 9, the expected value operation allows several linear operations. Let a and b be non-stochastic variables and x and y be random variables. Then we have:

$$E(a) = a,$$

$$E(ax) = aE(x),$$

$$E(a + bx) = a + bE(x),$$

$$E(x + y) = E(x) + E(y).$$

These rules work both for discrete and continuous random variables.
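The linearity rules can be verified by simulation; below is a minimal sketch of my own (the constants a = 3 and b = 2 are arbitrary choices), in which averaging over many draws plays the role of the expectation operator:

```python
import random

rng = random.Random(1)
draws = [rng.uniform(0.0, 10.0) for _ in range(100_000)]

def expect(values):
    """Approximate the expected value by the sample average."""
    return sum(values) / len(values)

a, b = 3.0, 2.0                            # non-stochastic constants (illustrative)
lhs = expect([a + b * x for x in draws])   # E(a + bx)
rhs = a + b * expect(draws)                # a + bE(x)
print(lhs, rhs)
```

The two quantities agree up to floating-point rounding, since averaging is itself a linear operation.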

### Joint distributions

#### The joint pdf for two random variables.

A function $f(x, y) \geq 0$ such that $\sum_x \sum_y f(x, y) = 1$ (in the discrete case) or $\int\!\!\int f(x, y)\,dx\,dy = 1$ (in the continuous case) is a joint pdf. This definition can be extended easily to include more than two random variables.

#### Covariance between two random variables.

If x and y are random variables, then the covariance between the two variables, $\mathrm{Cov}(x, y)$ or $\sigma_{xy}$, is defined to be $\mathrm{Cov}(x, y) = E\left[ (x - \mu_x)(y - \mu_y) \right]$. Expansion gives the alternative definition that $\sigma_{xy} = E(xy) - \mu_x \mu_y$.

#### Correlation coefficient.

The correlation coefficient, $\rho$, is defined to be $\rho_{xy} = \dfrac{\sigma_{xy}}{\sigma_x \sigma_y}$. The correlation coefficient is a unitless number that varies between -1 and +1. If two random variables are stochastically independent, then $\rho_{xy} = 0$; the converse, however, does not hold in general.
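A numerical sketch of my own (the factor of 2 linking y to x is an arbitrary choice) that computes $\sigma_{xy}$ and $\rho_{xy}$ from simulated data using the definitions above:

```python
import random

rng = random.Random(2)
x = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
# Build y as a linear function of x plus noise, so the two are correlated.
y = [2.0 * xi + rng.gauss(0.0, 1.0) for xi in x]

n = len(x)
mu_x = sum(x) / n
mu_y = sum(y) / n

# sigma_xy = E(xy) - mu_x * mu_y  (the alternative definition above)
cov_xy = sum(xi * yi for xi, yi in zip(x, y)) / n - mu_x * mu_y
sd_x = (sum(xi * xi for xi in x) / n - mu_x ** 2) ** 0.5
sd_y = (sum(yi * yi for yi in y) / n - mu_y ** 2) ** 0.5

rho = cov_xy / (sd_x * sd_y)
print(round(rho, 3))
```

For this construction the theoretical correlation is $2/\sqrt{5} \approx 0.894$, and the simulated value lands close to it.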

### Discrete distributions

#### Poisson distribution.

The discrete random variable x has a Poisson distribution if

$$f(x) = \begin{cases} \dfrac{m^x e^{-m}}{x!}, & x = 0, 1, 2, \ldots \\ 0, & \text{elsewhere.} \end{cases}$$

For the Poisson distribution, $\mu = \sigma^2 = m$. The Poisson distribution is used quite often in queuing theory to, among other things, describe the arrival of customers at a cashier's station.
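The claim that $\mu = \sigma^2 = m$ can be checked directly from the pmf. A minimal sketch of my own with m = 3 (the infinite support is truncated at 100 terms, where the remaining tail mass is negligible):

```python
from math import exp, factorial

def poisson_pmf(x, m):
    """P(X = x) for a Poisson random variable with parameter m."""
    return m ** x * exp(-m) / factorial(x)

m = 3.0
xs = range(100)  # truncate the infinite support; the tail mass is negligible

total = sum(poisson_pmf(x, m) for x in xs)
mean = sum(x * poisson_pmf(x, m) for x in xs)
var = sum((x - mean) ** 2 * poisson_pmf(x, m) for x in xs)
print(round(total, 6), round(mean, 6), round(var, 6))
```

All three sums come out as expected: the probabilities total 1, and both the mean and the variance equal m = 3.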

## Characteristics of an estimator of a population parameter θ

### Finite estimators

#### Mean square error.

The mean square error (MSE) of an estimator is defined to be $MSE(\hat{\theta}) = E\left[ (\hat{\theta} - \theta)^2 \right]$. It is relatively easy to show that $MSE(\hat{\theta}) = V(\hat{\theta}) + \left( B(\hat{\theta}) \right)^2$, where $B(\hat{\theta})$ is the bias of the estimator. Often a biased estimator with a smaller MSE may be preferred to an unbiased estimator with a relatively larger MSE.
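The last point can be illustrated by simulation. For a normal population, the sample variance that divides by n is biased downward yet has a smaller MSE than the unbiased version that divides by n − 1. The sketch below is my own (the N(0, 2²) population and n = 10 are arbitrary choices):

```python
import random

rng = random.Random(3)
true_var = 4.0    # variance of the hypothetical population, N(0, 2^2)
n = 10

def sample_variance(data, ddof):
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / (len(data) - ddof)

biased, unbiased = [], []
for _ in range(20_000):
    data = [rng.gauss(0.0, 2.0) for _ in range(n)]
    biased.append(sample_variance(data, ddof=0))    # divide by n (biased)
    unbiased.append(sample_variance(data, ddof=1))  # divide by n - 1 (unbiased)

def mse(estimates):
    # MSE(theta_hat) = E[(theta_hat - theta)^2], approximated by averaging
    return sum((e - true_var) ** 2 for e in estimates) / len(estimates)

print(mse(biased), mse(unbiased))
```

The biased estimator's larger squared bias is more than offset by its smaller variance, so its MSE is lower.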

#### Efficiency.

An estimator $\hat{\theta}$ is relatively more efficient than $\tilde{\theta}$ if and only if $V(\hat{\theta}) < V(\tilde{\theta})$. Generally, we would prefer to use the most efficient estimator available (if it is unbiased).

### Asymptotic estimators

#### Example 10

Greene [2] offers this example of plim: Suppose $x_n$ equals 0 with probability $1 - \frac{1}{n}$ and n with probability $\frac{1}{n}$. As n increases, the second point becomes more remote from the first point. However, at the same time the probability of observing the second point becomes more and more unlikely. This effect is shown in Figure 5, where as n increases the probability distribution concentrates more and more of its mass at 0.

#### Consistency.

The estimator $\hat{\theta}$ is a consistent estimator of θ if and only if $\operatorname{plim} \hat{\theta} = \theta$.
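Consistency of the sample mean can be seen numerically. Here is a sketch of my own, using a hypothetical uniform(0, 10) population whose mean is 5: the fraction of sample means landing more than 0.5 away from μ shrinks toward zero as n grows.

```python
import random

rng = random.Random(4)
mu = 5.0   # true mean of a hypothetical uniform(0, 10) population

def sample_mean(n):
    return sum(rng.uniform(0.0, 10.0) for _ in range(n)) / n

# plim x_bar = mu means: for any tolerance, the probability that the
# sample mean lands farther than that tolerance from mu goes to zero.
results = {}
for n in (10, 100, 10_000):
    means = [sample_mean(n) for _ in range(500)]
    results[n] = sum(abs(m - mu) > 0.5 for m in means) / len(means)
    print(n, results[n])
```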

#### Asymptotically unbiased.

An estimator $\hat{\theta}$ is an asymptotically unbiased estimator of θ if $\lim_{n \to \infty} E[\hat{\theta}] = \theta$.
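The variance estimator that divides by n illustrates this: its expectation is $\frac{n-1}{n} \sigma^2$, which is biased for any finite n but converges to $\sigma^2$ as $n \to \infty$. A sketch of my own (with an arbitrary N(0, 2²) population) showing the expectation approaching the target:

```python
import random

rng = random.Random(5)
sigma2 = 4.0   # variance of the hypothetical N(0, 2^2) population

def biased_var(data):
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)   # divides by n

expectations = {}
for n in (5, 50, 500):
    reps = [biased_var([rng.gauss(0.0, 2.0) for _ in range(n)]) for _ in range(4_000)]
    # E[biased_var] = (n - 1) / n * sigma2, which -> sigma2 as n grows
    expectations[n] = sum(reps) / len(reps)
    print(n, round(expectations[n], 3))
```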

## Footnotes

2. Greene, William H. (1990). *Econometric Analysis* (New York: Macmillan Publishing Company): 103.
