Inside Collection (Course): Signal and Information Processing for Sonar. Course by: Laurence Riddle.

# The Likelihood Ratio Test

Module by: Don Johnson.

In statistics, hypothesis testing is sometimes known as decision theory or simply testing. The key result around which all decision theory revolves is the likelihood ratio test.

## The Likelihood Ratio Test

In a binary hypothesis testing problem, four possible outcomes can result. Model $\mathcal{M}_0$ did in fact represent the best model for the data and the decision rule said it was (a correct decision) or said it wasn't (an erroneous decision). The other two outcomes arise when model $\mathcal{M}_1$ was in fact true with either a correct or incorrect decision made. The decision process operates by segmenting the range of observation values into two disjoint decision regions $\Re_0$ and $\Re_1$. All values of $\mathbf{r}$ fall into either $\Re_0$ or $\Re_1$. If a given $\mathbf{r}$ lies in $\Re_0$, for example, we announce the decision "model $\mathcal{M}_0$ was true"; if in $\Re_1$, model $\mathcal{M}_1$ would be proclaimed. To derive a rational method of deciding which model best describes the observations, we need a criterion to assess the quality of the decision process. Optimizing this criterion will specify the decision regions.

The Bayes' decision criterion seeks to minimize a cost function associated with making a decision. Let $C_{ij}$ be the cost of mistaking model $j$ for model $i$ ($i \neq j$) and $C_{ii}$ the presumably smaller cost of correctly choosing model $i$: $C_{ij} > C_{ii}$, $i \neq j$. Let $\pi_i$ be the *a priori* probability of model $i$. The so-called Bayes' cost $\bar{C}$ is the average cost of making a decision.

$$\bar{C} = \sum_{i,j} C_{ij} \Pr[\text{say } \mathcal{M}_i \text{ when } \mathcal{M}_j \text{ true}] = \sum_{i,j} C_{ij} \pi_j \Pr[\text{say } \mathcal{M}_i \mid \mathcal{M}_j \text{ true}]$$
(1)
The Bayes' cost can be expressed as
$$\bar{C} = \sum_{i,j} C_{ij} \pi_j \Pr[\mathbf{r} \in \Re_i \mid \mathcal{M}_j \text{ true}] = \sum_{i,j} C_{ij} \pi_j \int_{\Re_i} p_{\mathbf{r}\mid\mathcal{M}_j}(\mathbf{r})\,d\mathbf{r}$$
$$= \int_{\Re_0} \left[ C_{00}\pi_0 p_{\mathbf{r}\mid\mathcal{M}_0}(\mathbf{r}) + C_{01}\pi_1 p_{\mathbf{r}\mid\mathcal{M}_1}(\mathbf{r}) \right] d\mathbf{r} + \int_{\Re_1} \left[ C_{10}\pi_0 p_{\mathbf{r}\mid\mathcal{M}_0}(\mathbf{r}) + C_{11}\pi_1 p_{\mathbf{r}\mid\mathcal{M}_1}(\mathbf{r}) \right] d\mathbf{r}$$
(2)
$p_{\mathbf{r}\mid\mathcal{M}_i}(\mathbf{r})$ is the conditional probability density function of the observed data $\mathbf{r}$ given that model $\mathcal{M}_i$ was true. To minimize this expression with respect to the decision regions $\Re_0$ and $\Re_1$, ponder which integral would yield the smallest value if its integration domain included a specific observation vector. This selection process defines the decision regions; for example, we choose $\Re_0$ for those values of $\mathbf{r}$ which yield a smaller value for the first integral.
$$\pi_0 C_{00} p_{\mathbf{r}\mid\mathcal{M}_0}(\mathbf{r}) + \pi_1 C_{01} p_{\mathbf{r}\mid\mathcal{M}_1}(\mathbf{r}) < \pi_0 C_{10} p_{\mathbf{r}\mid\mathcal{M}_0}(\mathbf{r}) + \pi_1 C_{11} p_{\mathbf{r}\mid\mathcal{M}_1}(\mathbf{r})$$
We choose $\Re_1$ when the inequality is reversed. This expression is easily manipulated to obtain the decision rule known as the likelihood ratio test.
$$\frac{p_{\mathbf{r}\mid\mathcal{M}_1}(\mathbf{r})}{p_{\mathbf{r}\mid\mathcal{M}_0}(\mathbf{r})} \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \frac{\pi_0 (C_{10} - C_{00})}{\pi_1 (C_{01} - C_{11})}$$
(3)
The comparison relation means selecting model $\mathcal{M}_1$ if the left-hand ratio exceeds the value on the right; otherwise, $\mathcal{M}_0$ is selected. Thus, the likelihood ratio $\frac{p_{\mathbf{r}\mid\mathcal{M}_1}(\mathbf{r})}{p_{\mathbf{r}\mid\mathcal{M}_0}(\mathbf{r})}$, symbolically represented by $\Lambda(\mathbf{r})$, is computed from the observed value of $\mathbf{r}$ and then compared with a threshold $\eta$ equaling $\frac{\pi_0 (C_{10} - C_{00})}{\pi_1 (C_{01} - C_{11})}$. Thus, when two models are hypothesized, the likelihood ratio test can be succinctly expressed as the comparison of the likelihood ratio with a threshold.
$$\Lambda(\mathbf{r}) \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \eta$$
(4)
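To make the test concrete, here is a minimal sketch of Equation 4 in code. The two Gaussian models below ($\mathcal{M}_0$: $r \sim \mathcal{N}(0,1)$, $\mathcal{M}_1$: $r \sim \mathcal{N}(1,1)$) are assumptions chosen purely for illustration; they are not specified by this module.

```python
import math

# Hedged sketch of the likelihood ratio test, assuming two hypothetical
# Gaussian models (NOT from the module): M0: r ~ N(0,1), M1: r ~ N(1,1).

def gaussian_pdf(r, mean, var):
    """Scalar Gaussian density with the given mean and variance."""
    return math.exp(-((r - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def likelihood_ratio(r):
    """Lambda(r) = p_{r|M1}(r) / p_{r|M0}(r)."""
    return gaussian_pdf(r, 1.0, 1.0) / gaussian_pdf(r, 0.0, 1.0)

def bayes_threshold(pi0, pi1, c00, c01, c10, c11):
    """eta = pi0 (C10 - C00) / (pi1 (C01 - C11)), as in Equation 3."""
    return pi0 * (c10 - c00) / (pi1 * (c01 - c11))

def decide(r, eta):
    """Announce model 1 when Lambda(r) exceeds eta, otherwise model 0."""
    return 1 if likelihood_ratio(r) > eta else 0
```

With equal priors and unit error costs, the threshold works out to $\eta = 1$ and the rule simply picks whichever density is larger at the observed value; for example, `decide(2.0, 1.0)` returns 1 and `decide(-1.0, 1.0)` returns 0.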

The data processing operations are captured entirely by the likelihood ratio $\frac{p_{\mathbf{r}\mid\mathcal{M}_1}(\mathbf{r})}{p_{\mathbf{r}\mid\mathcal{M}_0}(\mathbf{r})}$. Furthermore, note that only the value of the likelihood ratio relative to the threshold matters; to simplify the computation of the likelihood ratio, we can perform any positively monotonic operation simultaneously on the likelihood ratio and the threshold without affecting the comparison. We can multiply the ratio by a positive constant, add any constant, or apply a monotonically increasing function which simplifies the expressions. We single out one such function, the logarithm, because it simplifies likelihood ratios that commonly occur in signal processing applications. Known as the log-likelihood, we explicitly express the likelihood ratio test with it as

$$\ln \Lambda(\mathbf{r}) \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \ln \eta$$
(5)
Useful simplifying transformations are problem-dependent; by laying bare that aspect of the observations essential to the model testing problem, we reveal the sufficient statistic $\Upsilon(\mathbf{r})$: the scalar quantity which best summarizes the data (Lehmann, pp. 18-22). The likelihood ratio test is best expressed in terms of the sufficient statistic.
$$\Upsilon(\mathbf{r}) \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \gamma$$
(6)
We will denote the threshold value by $\gamma$ when the sufficient statistic is used or by $\eta$ when the likelihood ratio appears prior to its reduction to a sufficient statistic.
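The reduction to a sufficient statistic can be sketched in code. For the same hypothetical Gaussian pair used above ($\mathcal{M}_0$: $\mathcal{N}(0,1)$, $\mathcal{M}_1$: $\mathcal{N}(1,1)$, an assumption for illustration only), the log-likelihood ratio collapses to $\ln\Lambda(r) = r - \tfrac{1}{2}$, so $\Upsilon(r) = r$ and the threshold is carried through the same transformations.

```python
import math

# Hedged sketch: sufficient statistic for the hypothetical Gaussian pair
# M0: N(0,1) vs M1: N(1,1) (an illustration, not the module's models).
# ln Lambda(r) = -((r-1)**2)/2 + (r**2)/2 = r - 1/2, so Upsilon(r) = r
# and the threshold becomes gamma = ln(eta) + 1/2.

def log_likelihood_ratio(r):
    """ln Lambda(r), simplified algebraically to r - 1/2."""
    return r - 0.5

def sufficient_statistic_test(r, eta):
    """Compare Upsilon(r) = r against gamma = ln(eta) + 1/2."""
    gamma = math.log(eta) + 0.5
    return 1 if r > gamma else 0
```

With $\eta = 1$ the rule is simply $r \gtrless \tfrac{1}{2}$: all the data processing has been absorbed into one scalar comparison, which is the point of the sufficient statistic.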

As we shall see, if we use a criterion other than the Bayes' criterion, the decision rule often still involves the likelihood ratio. The likelihood ratio consists of the quantities $p_{\mathbf{r}\mid\mathcal{M}_i}(\mathbf{r})$, termed the likelihood function, which is also important in estimation theory. It is this conditional density that portrays the probabilistic model describing data generation. The likelihood function completely characterizes the kind of "world" assumed by each model; for each model, we must specify the likelihood function so that we can solve the hypothesis testing problem.

A complication, which arises in some cases, is that the sufficient statistic may not be monotonic. If monotonic, the decision regions $\Re_0$ and $\Re_1$ are simply connected (all portions of a region can be reached without crossing into the other region). If not, the regions are not simply connected and decision region islands are created (see this problem). Such regions usually complicate calculations of decision performance. Monotonic or not, the decision rule proceeds as described: the sufficient statistic is computed for each observation vector and compared to a threshold.
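A decision-region island is easy to exhibit numerically. The sketch below assumes two hypothetical zero-mean Gaussian models that differ only in variance (again, not models from this module): the likelihood ratio then depends on $r$ only through $r^2$, and the region where $\mathcal{M}_1$ is announced splits into two disjoint pieces.

```python
import math

# Hedged sketch of a non-simply-connected decision region, assuming
# M0: r ~ N(0, 1) and M1: r ~ N(0, 4) (hypothetical models).
# Lambda(r) = (1/2) exp(3 r**2 / 8) depends on r only through r**2,
# so with eta = 1 the "announce M1" region is |r| > sqrt((8/3) ln 2),
# i.e. two islands: r < -1.36... and r > 1.36...

def likelihood_ratio(r):
    p1 = math.exp(-(r ** 2) / 8) / math.sqrt(2 * math.pi * 4)
    p0 = math.exp(-(r ** 2) / 2) / math.sqrt(2 * math.pi * 1)
    return p1 / p0

eta = 1.0
grid = [i / 10 for i in range(-50, 51)]            # r from -5.0 to 5.0
region1 = [r for r in grid if likelihood_ratio(r) > eta]
# region1 contains only large |r|, in two disjoint pieces at each end of
# the grid, while the middle of the real line belongs to region 0.
```

The sufficient statistic $r^2$ is still compared against a single threshold; the islands appear only when the regions are mapped back to the original observation $r$, which is what complicates performance calculations.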

### Example 1

An instructor in a course in detection theory wants to determine if a particular student studied for his last test. The observed quantity is the student's grade, which we denote by $r$. Failure may not indicate studiousness: conscientious students may fail the test. Define the models as

• $\mathcal{M}_0$: did not study
• $\mathcal{M}_1$: did study
The conditional densities of the grade are shown in Figure 1. Based on knowledge of student behavior, the instructor assigns *a priori* probabilities of $\pi_0 = 1/4$ and $\pi_1 = 3/4$. The costs $C_{ij}$ are chosen to reflect the instructor's sensitivity to student feelings: $C_{01} = 1 = C_{10}$ (an erroneous decision either way is given the same cost) and $C_{00} = 0 = C_{11}$. The likelihood ratio is plotted in Figure 1 and the threshold value $\eta$, which is computed from the a priori probabilities and the costs to be $1/3$, is indicated. The calculations of this comparison can be simplified in an obvious way.
$$\frac{r}{50} \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \frac{1}{3} \quad\text{or}\quad r \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \frac{50}{3} = 16.7$$
The multiplication by the factor of 50 is a simple illustration of the reduction of the likelihood ratio to a sufficient statistic. Based on the assigned costs and a priori probabilities, the optimum decision rule says the instructor must assume that the student did not study if the student's grade is less than 16.7; if greater, the student is assumed to have studied despite receiving an abysmally low grade such as 20. Note that the densities given by each model overlap entirely: the possibility of making the wrong interpretation always haunts the instructor. However, no other procedure will be better!
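The example's decision rule can be sketched directly from the numbers stated in the text; the only quantity taken from the figure is the reduced likelihood ratio $\Lambda(r) = r/50$.

```python
# Sketch of the instructor's decision rule in Example 1, using the values
# given in the text: pi0 = 1/4, pi1 = 3/4, C01 = C10 = 1, C00 = C11 = 0,
# and the likelihood ratio Lambda(r) = r/50 read from the module's Figure 1.

def bayes_threshold(pi0, pi1, c00, c01, c10, c11):
    """eta = pi0 (C10 - C00) / (pi1 (C01 - C11))."""
    return pi0 * (c10 - c00) / (pi1 * (c01 - c11))

def did_study(grade):
    """Announce 'did study' (model 1) when r/50 exceeds eta = 1/3,
    equivalently when the grade exceeds 50/3 = 16.7."""
    eta = bayes_threshold(0.25, 0.75, 0, 1, 1, 0)
    return grade / 50 > eta

# did_study(20) -> True, did_study(10) -> False
```

Matching the text, a grade of 20 (however abysmal) leads to the decision "did study", while a grade of 10 leads to "did not study".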

## References

1. E.L. Lehmann. (1986). Testing Statistical Hypotheses. (second edition). New York: John Wiley and Sons.
