# Connexions

#### Affiliated with

• Rice University ELEC 301 Projects

This module is included in the lens "Rice University ELEC 301 Project Lens" (by Rice University ELEC 301), as part of the collection "ELEC 301 Projects Fall 2009".

#### Also in these lenses

• Lens for Engineering

This module is also included in the "Lens for Engineering" (by Sidney Burrus), as part of the collection "ELEC 301 Projects Fall 2009".


# Facial Recognition using Eigenfaces: Projection onto Face Space

Module by: Aron Yu, Catherine Elder, Jeff Yeh, Norman Pai

Summary: Methods

## Projection onto Face Space

### Compute Weight Matrix

Now that we have the eigenfaces, we can project the training images onto the face space. The K-dimensional face space is spanned by the top K eigenfaces. Note that each axis of the face space is weighted by its associated eigenvalue, so the first few axes carry more weight than the later ones.

To project the mean-subtracted training images V_i onto the face space, we take each image and compute its weight w_i along each axis as the dot product between the image and the corresponding eigenface. This process is repeated for every eigenface and every training image. The resulting weights form a weight matrix WM of dimension K x W, where W is the number of training images.

$$V_j = w_1 u_1 + w_2 u_2 + \dots + w_k u_k \tag{1}$$

$$WM = \begin{bmatrix} (w_1)_{V_1} & (w_1)_{V_2} & \dots & (w_1)_{V_W} \\ (w_2)_{V_1} & (w_2)_{V_2} & \dots & (w_2)_{V_W} \\ (w_3)_{V_1} & (w_3)_{V_2} & \dots & (w_3)_{V_W} \\ \vdots & \vdots & \ddots & \vdots \\ (w_k)_{V_1} & (w_k)_{V_2} & \dots & (w_k)_{V_W} \end{bmatrix} \tag{2}$$
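Since each weight is a dot product between an eigenface and an image, the whole weight matrix is a single matrix product. A minimal NumPy sketch (the array names and toy sizes are our own, not from the original module):

```python
import numpy as np

# Hypothetical shapes: each mean-subtracted training image V_j is a
# length-N pixel vector; U stores the top K eigenfaces as rows.
rng = np.random.default_rng(0)
N, K, W = 64, 4, 10                # pixels, eigenfaces, training images (toy sizes)
U = rng.standard_normal((K, N))    # eigenfaces u_1 .. u_K (one per row)
V = rng.standard_normal((N, W))    # mean-subtracted images V_1 .. V_W (one per column)

# Entry (i, j) of WM is the dot product of eigenface u_i with image V_j,
# so the full K x W weight matrix is computed in one matrix multiply.
WM = U @ V
print(WM.shape)  # (4, 10)
```

In a real pipeline, U would come from the eigendecomposition in the previous step and the rows would be unit-normalized before projecting.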

### Compute Threshold Values

When given a test image, it is first projected onto the face space using the same method as before and then categorized using threshold values. By graphing the minimum distance between each test image and the closest image in the training set, we were able to determine these thresholds experimentally. The following graph shows the result of a particular run on the HFH dataset.

This graph shows the minimum distance between each test image and the closest image in the training set. Images 0-91 are faces in the training set, images 92-160 are faces not in the training set, and images 161-225 are not faces. Because the training set is randomly selected from our databases, the thresholds vary each time the code is run: they are dynamic values that scale with the largest distance d between any two training images. A trend emerged in the data, and we set the thresholds at 10% (0.1d) of that maximum distance for deciding whether a test image matches a known face, and at 20% (0.2d) for deciding whether the image is a face at all. These thresholds are used when judging recognition success or failure for both the JAFFE and Rice University datasets.
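The two-threshold decision rule can be sketched as follows; the function name and array layout are our own assumptions, but the 0.1d and 0.2d cutoffs are the ones described above:

```python
import numpy as np

def classify(test_w, train_WM, d):
    """Categorize a projected test image by the distance to its nearest
    training projection. test_w is the test image's weight vector,
    train_WM holds the training weight vectors as columns, and d is the
    largest distance between any two training projections."""
    # Distance from the test weight vector to every column of train_WM.
    dists = np.linalg.norm(train_WM - test_w[:, None], axis=0)
    d_min = dists.min()
    if d_min < 0.1 * d:
        return "known face"     # matches a face in the training set
    if d_min < 0.2 * d:
        return "unknown face"   # a face, but not one in the training set
    return "not a face"

# Toy projections: two training images exactly d = 1 apart in face space.
WM = np.array([[0.0, 1.0],
               [0.0, 0.0]])
print(classify(np.array([0.05, 0.0]), WM, 1.0))  # known face
print(classify(np.array([0.15, 0.0]), WM, 1.0))  # unknown face
print(classify(np.array([0.50, 0.0]), WM, 1.0))  # not a face
```

Because the cutoffs scale with d, the same rule adapts automatically to each randomly drawn training set.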

As the figure shows, in this particular run our algorithm identified every face in the training set as a known face. Similarly, all but one of the unknown faces fell within the correct threshold. However, the algorithm had some trouble with images that were not faces: about a quarter of them were classified as unknown faces. We believe this occurred because some non-face images had the same round shape as a face (fruit, for example) or contained face-like features (animals). Despite this weakness, our algorithm was successful overall.
