
#### Affiliated with

This content is by members of, or about topics related to, the organizations listed.
• Rice University ELEC 301 Projects

This module is included in the lens "Rice University ELEC 301 Project Lens" (by Rice University ELEC 301) and in the lens "Lens for Engineering" (by Sidney Burrus), both as part of the collection "ELEC 301 Projects Fall 2009".


# Facial Recognition using Eigenfaces: Obtaining Eigenfaces

Module by: Aron Yu, Catherine Elder, Jeff Yeh, Norman Pai. Edited by: Aron Yu, Catherine Elder, Jeff Yeh, Norman Pai.


## Obtaining Eigenfaces

### Eigenface Concept

Each image is loaded into the computer as a matrix of intensity values. All of the images were converted to grayscale so that we only need to operate on a single layer (instead of the three layers of an RGB image). A vector whose direction is unchanged when multiplied by a matrix is called an eigenvector of that matrix. The eigenvectors of the covariance matrix associated with a large set of face images are called eigenfaces. The eigenfaces can be thought of as a basis for the set of faces: just as any vector in a vector space is a linear combination of the basis vectors, each face in the set can be expressed as a linear combination of the eigenfaces.
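As a small illustration (not from the original module, which used MATLAB), NumPy can verify the defining property of an eigenvector: multiplying it by its matrix only scales it, leaving its direction unchanged. The 2×2 matrix here is an arbitrary toy example.

```python
import numpy as np

# A small symmetric matrix (covariance matrices are symmetric too).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns eigenvalues (ascending) and orthonormal eigenvectors
# for a symmetric matrix.
eigenvalues, eigenvectors = np.linalg.eigh(A)

# The defining property: A v = lambda * v for each eigenpair,
# i.e. the direction of v is unchanged by A.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # the eigenvalues of [[2,1],[1,2]] are 1 and 3
```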

### Format Input Data

To compute the eigenfaces, a portion of a given dataset is first chosen randomly to be the training set. The images in the training set are used to construct the image matrix A. (Note: all images in the dataset must have the same dimensions.) The training set can be chosen by selecting a given percentage of the dataset or by selecting a given number of images per person from the database. Once the images are selected, each image Ii is vectorized into a column vector Pi whose length equals the total number of pixels in the image. This puts every image into a common vector form on which all subsequent computations operate.

$$I = N \times M \;\Rightarrow\; P_i = NM \times 1$$
(1)
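For concreteness, a sketch of this vectorization step in NumPy (the project itself used MATLAB; the toy dimensions and variable names here are illustrative):

```python
import numpy as np

N, M = 4, 3  # toy image dimensions; real face images are much larger

# A grayscale image is an N x M matrix of intensity values.
image = np.arange(N * M, dtype=float).reshape(N, M)

# Flatten it into a single NM x 1 column vector, as in Equation (1).
P = image.reshape(N * M, 1)

assert P.shape == (N * M, 1)
```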

The mean face of the training set is computed and subtracted from all the images within the training set (given W images in the training set).

$$\mu = \frac{1}{W} \sum_{i=1}^{W} P_i$$
(2)

$$V_i = P_i - \mu$$
(3)

Finally, the mean subtracted training images are put into a single matrix of dimension NM x W, forming the image matrix A.

$$A = \begin{bmatrix} V_1 & V_2 & V_3 & \cdots & V_W \end{bmatrix}$$
(4)
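Equations (2)–(4) can be sketched in a few lines of NumPy (illustrative only; the project used MATLAB, and the toy sizes and random data stand in for real vectorized face images):

```python
import numpy as np

NM, W = 12, 5  # NM pixels per image, W training images (toy sizes)
rng = np.random.default_rng(0)

# Stack the W vectorized training images P_i as the columns of an NM x W array.
P = rng.random((NM, W))

# Equation (2): the mean face of the training set.
mu = P.mean(axis=1, keepdims=True)

# Equation (3): subtract the mean face from every training image.
V = P - mu

# Equation (4): the mean-subtracted columns form the image matrix A.
A = V
assert A.shape == (NM, W)
assert np.allclose(A.mean(axis=1), 0.0)  # each pixel now averages to zero
```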

### Compute Eigenfaces

A typical PCA calculation first computes the covariance matrix C. Covariance measures how much two random variables vary together: it is positive when the two variables increase together and negative when one increases as the other decreases. The eigenfaces are then obtained by computing the eigenvectors of the covariance matrix C. This computation yields NM unique eigenvectors.

$$C = AA^T$$
(5)

$$\operatorname{cov}(X, Y) = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{n - 1}$$
(6)

In this project, however, the resulting matrix of dimension NM x NM was far too large for MATLAB to process. Even if MATLAB could hold such a large matrix, the eigendecomposition would still be too computationally intensive. Instead of computing the covariance matrix C directly, this project uses a smaller matrix S of dimension W x W from which the eigenfaces can still be computed efficiently. This simplification stems from the fact that the rank of the covariance matrix C is limited by the number of images in the training set: since there are at most W-1 non-trivial eigenfaces for C, there is no need to compute all NM of its eigenvectors. The simplification pays off whenever NM >> W.

$$S = A^T A$$
(7)

The smaller matrix makes later searches through large databases computationally tractable, since only W eigenvalues and eigenfaces are used.

Now, using some linear algebra, we can show that the eigenvalues of C and S are the same, and that the top W eigenvectors of C (the ui) can be obtained from the eigenvectors of S (the vi).

$$\begin{aligned}
S v_i &= \lambda_i v_i \\
A^T A v_i &= \lambda_i v_i \\
A A^T A v_i &= \lambda_i A v_i \\
C A v_i &= \lambda_i A v_i \\
C u_i &= \lambda_i u_i
\end{aligned}$$
(8)

In this manner, we can see that the eigenvectors of C can be derived from

$$A v_i = u_i$$
(9)

where computing the vi's is much less computationally expensive than computing the ui's directly.

These ui vectors constitute the columns of the eigenface matrix.

$$\text{Eigenfaces} = \begin{bmatrix} u_1 & u_2 & u_3 & \cdots & u_W \end{bmatrix}$$
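The whole trick of equations (7)–(9) can be checked numerically. The sketch below (illustrative Python/NumPy, standing in for the project's MATLAB code, with random toy data in place of real face images) computes the small W x W matrix S, takes its eigenvectors, lifts them to eigenvectors of C via Equation (9), and verifies that they satisfy the eigenvalue equation for C from derivation (8):

```python
import numpy as np

NM, W = 50, 6  # NM pixels >> W training images (toy sizes)
rng = np.random.default_rng(1)

# Mean-subtracted image matrix A, as in Equations (2)-(4).
A = rng.random((NM, W))
A -= A.mean(axis=1, keepdims=True)

S = A.T @ A   # Equation (7): the small W x W matrix
C = A @ A.T   # Equation (5): the full NM x NM covariance (built here only to check)

lam, v = np.linalg.eigh(S)  # eigenpairs of S, eigenvalues in ascending order
U = A @ v                   # Equation (9): u_i = A v_i

# Each lifted vector satisfies C u_i = lambda_i u_i, as in derivation (8),
# so C and S share their non-trivial eigenvalues.
for i in range(W):
    assert np.allclose(C @ U[:, i], lam[i] * U[:, i])

# Because the columns of A sum to zero, S has at most W-1 non-trivial
# eigenpairs; the smallest eigenvalue is numerically zero.
assert np.isclose(lam[0], 0.0)

# Keep the non-trivial eigenfaces and normalize each to unit length.
U = U[:, 1:]
U /= np.linalg.norm(U, axis=0)
```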

### Top K Eigenfaces

Even with this complexity reduction, it is redundant to use all W eigenfaces for the reconstruction process. We can reduce the number of eigenfaces used even further by identifying the eigenfaces that carry more information than the others. To determine this, we turn our attention to the eigenvalues corresponding to the individual eigenfaces: some eigenfaces have noticeably higher eigenvalues than the others.

After arranging the eigenvalues in descending order, the result becomes clearer. We conclude that the eigenfaces corresponding to high eigenvalues contain more content. In other words, the higher the eigenvalue, the more characteristic features of the face the particular eigenvector describes. Therefore, we simplify the reconstruction process by only using the top K eigenfaces. This completes the training process of our implementation.
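The top-K selection amounts to a sort. A minimal sketch (the eigenvalues and the identity stand-in for the eigenface matrix are made-up toy values, not from the original project):

```python
import numpy as np

# Toy eigenvalues as they might come out of the decomposition.
eigenvalues = np.array([0.5, 12.0, 3.2, 0.1, 8.7])
eigenfaces = np.eye(5)  # stand-in for the eigenface columns u_1..u_W

K = 3  # number of eigenfaces to keep

# Arrange the eigenvalues in descending order and keep the top-K eigenfaces.
order = np.argsort(eigenvalues)[::-1]
top_k = order[:K]

selected = eigenfaces[:, top_k]
print(eigenvalues[top_k])  # the three largest eigenvalues: 12.0, 8.7, 3.2
```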

In terms of the eigenfaces themselves, we found that the more important eigenfaces (those with higher eigenvalues) had lower spatial frequency than the less important eigenfaces (those with lower eigenvalues). This is apparent in the figure above, where the first eigenfaces look blurry and indistinct, and the later eigenfaces have sharp edges and look more like individual people. This suggests that faces can be identified based on their low-frequency components alone.

## Eigenface Recognition Face Datasets

### Test 1 (JAFFE database)

For the first test of this project’s eigenface generation algorithm, the Japanese Female Facial Expression (JAFFE) Database was used. The JAFFE database fit our ideal conditions of consistent lighting, solid white backgrounds, and normalized positioning of facial features such as the nose, eyes, and lips. The database is a set of 180 images of seven facial expressions (six basic facial expressions and one neutral).

### Test 2 (Rice University)

For the Rice database, we chose students of both genders and diverse ethnicities. Each subject was told to express each of six emotions: neutral, happy, sad, surprised, angry, and disgusted. We had sixteen subjects, each with six emotions, and we took two pictures per emotion, resulting in a database of 192 images. We also created a special database of two emotions: closed eyes and expression of choice, to be used for demonstration at the poster session.
