OpenStax_CNX » Statistical Learning Theory
Introduction to Classification and Regression

Module by: Robert Nowak

Pattern Classification

Recall that the goal of classification is to learn a mapping from the feature space, X, to a label space, Y. This mapping, f, is called a classifier. For example, we might have

$$ X = \mathbb{R}^d, \qquad Y = \{0, 1\}. $$
(1)

We can measure the loss of our classifier using 0-1 loss; i.e.,

$$ \ell(\hat{y}, y) = \mathbf{1}_{\{\hat{y} \neq y\}} = \begin{cases} 1, & \hat{y} \neq y \\ 0, & \hat{y} = y. \end{cases} $$
(2)

Recalling that risk is defined to be the expected value of the loss function, we have

$$ R(f) = E_{XY}\left[\ell(f(X), Y)\right] = E_{XY}\left[\mathbf{1}_{\{f(X) \neq Y\}}\right] = P_{XY}\left(f(X) \neq Y\right). $$
(3)

The performance of a given classifier can be evaluated in terms of how close its risk is to the Bayes' risk.

Definition 1: (Bayes' Risk)
The Bayes' risk is the infimum of the risk for all classifiers:
$$ R^* = \inf_f R(f). $$
(4)
We can prove that the Bayes risk is achieved by the Bayes classifier.
Definition 2: Bayes Classifier
The Bayes classifier is the following mapping:
$$ f^*(x) = \begin{cases} 1, & \eta(x) \geq 1/2 \\ 0, & \text{otherwise} \end{cases} $$
(5)
where
$$ \eta(x) \triangleq P_{Y|X}(Y = 1 \mid X = x). $$
(6)
Note that for any x, f*(x) is the value of $y \in \{0,1\}$ that maximizes $P_{Y|X}(Y = y \mid X = x)$.
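The thresholding rule in Equation 5 is easy to state in code. Below is a minimal Python sketch (the module's own code is MATLAB; the logistic η used here is a made-up posterior for illustration, not something from the text):

```python
import math

def eta(x):
    # Hypothetical posterior P(Y=1 | X=x); any function into [0,1] would do.
    return 1.0 / (1.0 + math.exp(-x))   # logistic, for illustration only

def bayes_classifier(x):
    # f*(x) = 1 when eta(x) >= 1/2, and 0 otherwise
    return 1 if eta(x) >= 0.5 else 0
```

With this η, inputs with x ≥ 0 are labeled 1 and inputs with x < 0 are labeled 0.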

Theorem 1: Risk of the Bayes Classifier

$$ R(f^*) = R^*. $$
(7)

Proof

Let g(x) be any classifier. We will show that

$$ P(g(X) \neq Y \mid X = x) \geq P(f^*(x) \neq Y \mid X = x). $$
(8)

For any g,

$$
\begin{aligned}
P(g(X) \neq Y \mid X = x) &= 1 - P(Y = g(X) \mid X = x) \\
&= 1 - \left[ P(Y = 1, g(X) = 1 \mid X = x) + P(Y = 0, g(X) = 0 \mid X = x) \right] \\
&= 1 - \left[ E\left[\mathbf{1}_{\{Y=1\}} \mathbf{1}_{\{g(X)=1\}} \mid X = x\right] + E\left[\mathbf{1}_{\{Y=0\}} \mathbf{1}_{\{g(X)=0\}} \mid X = x\right] \right] \\
&= 1 - \left[ \mathbf{1}_{\{g(x)=1\}} E\left[\mathbf{1}_{\{Y=1\}} \mid X = x\right] + \mathbf{1}_{\{g(x)=0\}} E\left[\mathbf{1}_{\{Y=0\}} \mid X = x\right] \right] \\
&= 1 - \left[ \mathbf{1}_{\{g(x)=1\}} P(Y = 1 \mid X = x) + \mathbf{1}_{\{g(x)=0\}} P(Y = 0 \mid X = x) \right] \\
&= 1 - \left[ \mathbf{1}_{\{g(x)=1\}} \eta(x) + \mathbf{1}_{\{g(x)=0\}} (1 - \eta(x)) \right].
\end{aligned}
$$
(9)

Next consider the difference

$$
\begin{aligned}
P(g(x) \neq Y \mid X = x) - P(f^*(x) \neq Y \mid X = x)
&= \eta(x) \left[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \right] + (1 - \eta(x)) \left[ \mathbf{1}_{\{f^*(x)=0\}} - \mathbf{1}_{\{g(x)=0\}} \right] \\
&= \eta(x) \left[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \right] - (1 - \eta(x)) \left[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \right] \\
&= \left( 2\eta(x) - 1 \right) \left[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \right],
\end{aligned}
$$
(10)

where the second equality follows by noting that $\mathbf{1}_{\{f^*(x)=0\}} = 1 - \mathbf{1}_{\{f^*(x)=1\}}$ and $\mathbf{1}_{\{g(x)=0\}} = 1 - \mathbf{1}_{\{g(x)=1\}}$. Next recall

$$ f^*(x) = \begin{cases} 1, & \eta(x) \geq 1/2 \\ 0, & \text{otherwise.} \end{cases} $$
(11)

For x such that $\eta(x) \geq 1/2$, we have

$$ \underbrace{(2\eta(x) - 1)}_{\geq 0} \, \Bigl[ \underbrace{\mathbf{1}_{\{f^*(x)=1\}}}_{=1} - \mathbf{1}_{\{g(x)=1\}} \Bigr] = \underbrace{1 - \mathbf{1}_{\{g(x)=1\}}}_{0 \text{ or } 1} \cdot (2\eta(x) - 1) \geq 0 $$
(12)

and for x such that $\eta(x) < 1/2$, we have

$$ \underbrace{(2\eta(x) - 1)}_{< 0} \, \Bigl[ \underbrace{\mathbf{1}_{\{f^*(x)=1\}}}_{=0} - \mathbf{1}_{\{g(x)=1\}} \Bigr] = \underbrace{-\mathbf{1}_{\{g(x)=1\}}}_{0 \text{ or } -1} \cdot (2\eta(x) - 1) \geq 0, $$
(13)

which implies

$$ \left( 2\eta(x) - 1 \right) \left[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \right] \geq 0 $$
(14)

or

$$ P(g(X) \neq Y \mid X = x) \geq P(f^*(x) \neq Y \mid X = x). $$
(15)

Since this holds for every x, taking expectations over X gives R(g) ≥ R(f*); as g was arbitrary, R(f*) = R*. Note that while the Bayes classifier achieves the Bayes risk, in practice it is not realizable: we do not know the distribution P_XY, and so we cannot construct η(x).
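As a numerical sanity check on the theorem, one can take a tiny made-up distribution with a binary feature, enumerate every possible classifier, and confirm that none has smaller risk than the Bayes classifier. A Python sketch (all the numbers below are invented for illustration):

```python
from itertools import product

# Toy distribution: X takes two values (made up for illustration).
p_x = {0: 0.6, 1: 0.4}          # marginal P(X=x)
eta = {0: 0.2, 1: 0.9}          # posterior P(Y=1 | X=x)

def risk(f):
    # R(f) = E_X[ P(f(X) != Y | X) ], where the conditional error probability
    # is eta(x) if f(x)=0 and 1-eta(x) if f(x)=1 (Equation 9).
    return sum(p_x[x] * (1 - eta[x] if f[x] == 1 else eta[x]) for x in p_x)

# Bayes classifier: f*(x) = 1{eta(x) >= 1/2}
bayes = {x: 1 if eta[x] >= 0.5 else 0 for x in eta}

# Enumerate all 2^2 classifiers on this feature space; none beats Bayes.
all_classifiers = [dict(zip(p_x, labels)) for labels in product([0, 1], repeat=len(p_x))]
assert all(risk(g) >= risk(bayes) for g in all_classifiers)
```

Here the Bayes risk works out to 0.6(0.2) + 0.4(0.1) = 0.16, and every other classifier does strictly worse.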

Regression

The goal of regression is to learn a mapping from the input space, X, to the output space, Y. This mapping, f, is called an estimator. For example, we might have

$$ X = \mathbb{R}^d, \qquad Y = \mathbb{R}. $$
(16)

We can measure the loss of our estimator using squared error loss; i.e.,

$$ \ell(\hat{y}, y) = (y - \hat{y})^2. $$
(17)

Recalling that risk is defined to be the expected value of the loss function, we have

$$ R(f) = E_{XY}\left[\ell(f(X), Y)\right] = E_{XY}\left[(f(X) - Y)^2\right]. $$
(18)

The performance of a given estimator can be evaluated in terms of how close its risk is to the infimum of the risk over all estimators under consideration:

$$ R^* = \inf_f R(f). $$
(19)

Theorem 2: Minimum Risk under Squared Error Loss (MSE)

Let $f^*(x) = E_{Y|X}[Y \mid X = x]$. Then

$$ R(f^*) = R^*. $$
(20)

Proof

$$
\begin{aligned}
R(f) &= E_{XY}\left[(f(X) - Y)^2\right] \\
&= E_X\left[ E_{Y|X}\left[ (f(X) - Y)^2 \mid X \right] \right] \\
&= E_X\left[ E_{Y|X}\left[ \left( f(X) - E_{Y|X}[Y|X] + E_{Y|X}[Y|X] - Y \right)^2 \mid X \right] \right] \\
&= E_X\Big[ E_{Y|X}\left[ (f(X) - E_{Y|X}[Y|X])^2 \mid X \right] + 2\, E_{Y|X}\left[ (f(X) - E_{Y|X}[Y|X]) (E_{Y|X}[Y|X] - Y) \mid X \right] \\
&\qquad\quad + E_{Y|X}\left[ (E_{Y|X}[Y|X] - Y)^2 \mid X \right] \Big] \\
&= E_X\Big[ E_{Y|X}\left[ (f(X) - E_{Y|X}[Y|X])^2 \mid X \right] + 2\, (f(X) - E_{Y|X}[Y|X]) \times 0 + E_{Y|X}\left[ (E_{Y|X}[Y|X] - Y)^2 \mid X \right] \Big] \\
&= E_{XY}\left[ (f(X) - E_{Y|X}[Y|X])^2 \right] + R(f^*).
\end{aligned}
$$
(21)

The first term is nonnegative and equals zero when f = f*. Thus if $f^*(x) = E_{Y|X}[Y \mid X = x]$, then R(f*) = R*, as desired.
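The heart of this proof is that, for each fixed x, the constant minimizing the expected squared error is the conditional mean. A quick numerical check in Python (the sample values are made up for illustration):

```python
# For a fixed x, the constant c minimizing the average of (c - Y)^2 is the mean of Y.
ys = [1.0, 2.0, 3.0, 6.0]
mean = sum(ys) / len(ys)

def mse(c):
    # empirical version of E[(c - Y)^2]
    return sum((c - y) ** 2 for y in ys) / len(ys)

# The mean beats every other candidate value on a grid around it.
candidates = [mean + d / 10.0 for d in range(-20, 21)]
best = min(candidates, key=mse)
assert abs(best - mean) < 1e-12
```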

Empirical Risk Minimization

Definition 3: Empirical Risk
Let $\{X_i, Y_i\}_{i=1}^n \overset{iid}{\sim} P_{XY}$ be a collection of training data. Then the empirical risk is defined as
$$ \hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^n \ell(f(X_i), Y_i). $$
(22)
Empirical risk minimization is the process of choosing a learning rule which minimizes the empirical risk; i.e.,
$$ \hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f). $$
(23)
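Equations 22 and 23 translate directly into code. Here is a minimal Python sketch of the empirical risk and of ERM over a small class of threshold classifiers (the data and the class F below are invented for illustration):

```python
# Empirical risk: average loss of f over the training pairs (Equation 22).
def empirical_risk(f, data, loss):
    return sum(loss(f(x), y) for x, y in data) / len(data)

# ERM: pick the classifier in F with the smallest empirical risk (Equation 23).
def erm(F, data, loss):
    return min(F, key=lambda f: empirical_risk(f, data, loss))

# Toy setup: threshold classifiers on the line, 0-1 loss.
data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
zero_one = lambda yhat, y: 1 if yhat != y else 0
F = [lambda x, t=t: 1 if x >= t else 0 for t in (0.0, 0.5, 1.0)]

f_hat = erm(F, data, zero_one)   # the threshold at 0.5 fits this data perfectly
```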

Example 1: Pattern Classification

Let the set of possible classifiers be

$$ \mathcal{F} = \left\{ x \mapsto \mathrm{sign}(w'x) : w \in \mathbb{R}^d \right\} $$
(24)

and let the feature space, X, be $[0,1]^d$ or $\mathbb{R}^d$. If we use the notation $f_w(x) \triangleq \mathrm{sign}(w'x)$, then the set of classifiers can alternatively be represented as

$$ \mathcal{F} = \left\{ f_w : w \in \mathbb{R}^d \right\}. $$
(25)

In this case, the classifier which minimizes the empirical risk is

$$ \hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f) = \arg\min_{w \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{\mathrm{sign}(w'X_i) \neq Y_i\}}. $$
(26)
Figure 1: Example linear classifier for two-class problem.
Figure 1 (LinearClassifier.png)
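A brute-force Python sketch of Equation 26: search a finite grid of weight vectors for the one with the smallest empirical 0-1 loss. (The grid, the data, and the use of labels in {-1, +1} rather than {0, 1} are illustrative choices, not from the text; the text's minimization is over all of $\mathbb{R}^d$.)

```python
def sign(z):
    # labels in {-1, +1} here for convenience
    return 1 if z >= 0 else -1

def emp_risk(w, data):
    # fraction of training points misclassified by x -> sign(w'x)
    return sum(1 for x, y in data if sign(w[0]*x[0] + w[1]*x[1]) != y) / len(data)

# Made-up linearly separable data in the plane.
data = [((1.0, 2.0), 1), ((2.0, 1.0), -1), ((0.5, 1.5), 1), ((1.5, 0.5), -1)]

# Coarse grid over weight vectors; a real ERM would search all of R^2.
grid = [(a / 4.0, b / 4.0) for a in range(-4, 5) for b in range(-4, 5)]
w_hat = min(grid, key=lambda w: emp_risk(w, data))
```

On this separable toy data the grid contains a perfect separator (e.g. w = (-1, 1)), so the minimizer achieves zero empirical risk.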

Example 2: Regression

Let the feature space be

$$ X = [0, 1] $$
(27)

and let the set of possible estimators be

$$ \mathcal{F} = \left\{ \text{degree } d \text{ polynomials on } [0,1] \right\}. $$
(28)

In this case, the estimator which minimizes the empirical risk is

$$ \hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f) = \arg\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n (f(X_i) - Y_i)^2. $$
(29)

Alternatively, this can be expressed as

$$ \hat{w} = \arg\min_{w \in \mathbb{R}^{d+1}} \frac{1}{n} \sum_{i=1}^n \left( w_0 + w_1 X_i + \cdots + w_d X_i^d - Y_i \right)^2 = \arg\min_{w \in \mathbb{R}^{d+1}} \| V w - Y \|^2 $$
(30)

where VV is the Vandermonde matrix

$$ V = \begin{bmatrix} 1 & X_1 & \cdots & X_1^d \\ 1 & X_2 & \cdots & X_2^d \\ \vdots & \vdots & & \vdots \\ 1 & X_n & \cdots & X_n^d \end{bmatrix}. $$
(31)

The pseudoinverse can be used to solve for w^:w^:

$$ \hat{w} = (V'V)^{-1} V' Y. $$
(32)
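Equation 32 can be carried out by forming V'V and V'Y and solving the resulting (d+1)-by-(d+1) linear system. A dependency-free Python sketch (the sample points below are made up; since they are generated by an exact quadratic, the fit recovers the coefficients):

```python
def vandermonde(xs, d):
    # n-by-(d+1) Vandermonde matrix of Equation 31
    return [[x ** j for j in range(d + 1)] for x in xs]

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small square system A w = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            m = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= m * M[col][c]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

# Made-up data from an exact quadratic y = 1 + 2x - 3x^2.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [1 + 2 * x - 3 * x ** 2 for x in xs]

# Normal equations: (V'V) w = V'Y, the linear system behind Equation 32.
V = vandermonde(xs, 2)
VtV = [[sum(V[k][i] * V[k][j] for k in range(len(xs))) for j in range(3)] for i in range(3)]
Vty = [sum(V[k][i] * ys[k] for k in range(len(xs))) for i in range(3)]
w_hat = solve(VtV, Vty)   # recovers [1, 2, -3] up to rounding
```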

A polynomial estimate is displayed in Figure 2.

Figure 2: Example polynomial estimator. Blue curve denotes f*, magenta curve is the polynomial fit to the data (denoted by dots).
Figure 2 (polyFitting3.png)

Overfitting

Suppose F, our collection of candidate functions, is very large. We can always make

$$ \min_{f \in \mathcal{F}} \hat{R}_n(f) $$
(33)

smaller by increasing the cardinality of F, thereby providing more possibilities to fit to the data.

Consider this extreme example: Let F be all measurable functions. Then every function f for which

$$ f(x) = \begin{cases} Y_i, & x = X_i \text{ for } i = 1, \ldots, n \\ \text{any value}, & \text{otherwise} \end{cases} $$
(34)

has zero empirical risk ($\hat{R}_n(f) = 0$). However, clearly this could be a very poor predictor of Y for a new input X.
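Equation 34's memorizing rule takes only a few lines of code: echo $Y_i$ on the training inputs and return an arbitrary default everywhere else. A Python sketch (the training pairs are invented for illustration):

```python
# A "memorizing" rule over made-up training pairs.
train = [(0.1, 1), (0.5, 0), (0.9, 1)]

def memorizer(x):
    for xi, yi in train:
        if x == xi:
            return yi       # f(x) = Y_i when x = X_i
    return 0                # "any value, otherwise" -- here arbitrarily 0

# Zero empirical risk on the training data...
emp_risk = sum(1 for x, y in train if memorizer(x) != y) / len(train)
assert emp_risk == 0.0
# ...but on an unseen input it returns only the arbitrary default.
assert memorizer(0.3) == 0
```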

Example 3: Classification Overfitting

Consider the classifier in Figure 3; this demonstrates overfitting in classification. If the data were in fact generated from two Gaussian distributions centered in the upper left and lower right quadrants of the feature space domain, then the optimal estimator would be the linear estimator in Figure 1; the overfitting would result in a higher probability of error for predicting classes of future observations.

Figure 3: Example of overfitting classifier. The classifier's decision boundary wiggles around in order to correctly label the training data, but the optimal Bayes classifier is a straight line.
Figure 3 (OverfitClassifier.png)

Example 4: Regression Overfitting

Below is an m-file that simulates the polynomial fitting. Feel free to play around with it to get an idea of the overfitting problem.

% poly fitting
% rob nowak  1/24/04
clear
close all
 
% generate "true" function on a fine grid
t = (0:.001:1)';
f = exp(-5*(t-.3).^2)+.5*exp(-100*(t-.5).^2)+.5*exp(-100*(t-.75).^2);
 
% generate n noisy training points at random locations in (0,1)
n = 10;
sig = 0.1;                % std of noise
x = .97*rand(n,1)+.01;
y = exp(-5*(x-.3).^2)+.5*exp(-100*(x-.5).^2)+.5*exp(-100*(x-.75).^2)+sig*randn(size(x));
 
% fit with polynomial of order k (polynomial degree up to k-1)
k = 3;
V = zeros(n,k);           % Vandermonde matrix of the training inputs
for i = 1:k
    V(:,i) = x.^(i-1);
end
p = V\y;                  % least-squares coefficients; same solution as
                          % inv(V'*V)*V'*y but numerically preferable
 
% evaluate the fitted polynomial on the fine grid
Vt = zeros(length(t),k);
for i = 1:k
    Vt(:,i) = t.^(i-1);
end
yh = Vt*p;
 
% plot true function, training data, and polynomial fit
figure(1)
clf
plot(t,f)
hold on
plot(x,y,'.')
plot(t,yh,'m')
 
Figure 4: Example polynomial fitting problem. Blue curve is f*, magenta curve is the polynomial fit to the data (dots). (a) Fitting a polynomial of degree d = 0: an example of underfitting. (b) d = 2. (c) d = 4. (d) d = 6: an example of overfitting. The empirical loss is zero, but clearly the estimator would not do a good job of predicting y when x is close to one.
(a)
Figure 4(a) (polyFitting1.png)
(b)
Figure 4(b) (polyFitting3.png)
(c)
Figure 4(c) (polyFitting5.png)
(d)
Figure 4(d) (polyFitting7.png)
