
Selective Listening: Drown Out the Noise - Results

Module by: Jose Hernandez, Jon Stanley, Ricky Barrera, Jake Poteet

Summary: The results of several trials under varying conditions.

Results

Note: all supporting code needed for the main MATLAB scripts to run is attached below. Media File: Supporting Code.zip

Ideal Case: MATLAB

As in last year's project, fastICA works well in MATLAB when the mixing is done entirely in software. For example, mixing and then separating a siren and a voice using MATLAB exclusively works quite well, as can be seen in the figure below. In this very ideal environment, fastICA separated the two mixed signals into their independent sources: a voice (lower-left spectrogram) and a siren (lower-right spectrogram).

Figure 1 (graphics1.jpg): spectrograms of the two mixed signals and of the separated voice (lower left) and siren (lower right).

For a better sense of our results, here are the sound files of the mixed signal, the isolated siren, and the isolated voice, respectively; the MATLAB code used for this trial is attached after the sound files. Media File: demo1_mix.wav Media File: demo1_sr1.wav Media File: demo1_sr2.wav Media File: demo1.m
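For readers who do not have the attached demo1.m handy, the following is a minimal sketch of this kind of purely software trial. It assumes the FastICA toolbox (with its fastica function) is on the MATLAB path; the source file names, mixing matrix, and spectrogram parameters are illustrative assumptions, not the values used in the attached code.

    % Minimal sketch of the ideal, instantaneous-mixing case (illustrative only).
    % Requires the FastICA toolbox on the path; file names below are placeholders.
    [voice, fs] = audioread('voice.wav');        % hypothetical voice recording
    [siren, ~]  = audioread('siren.wav');        % hypothetical siren recording
    n = min(length(voice), length(siren));
    s = [voice(1:n)'; siren(1:n)'];              % sources stacked as rows

    A = [0.6 0.4; 0.45 0.55];                    % example scalar mixing matrix
    x = A * s;                                   % two synthetic "microphone" mixtures

    ic = fastica(x);                             % estimated independent components

    figure;
    subplot(2,2,1); spectrogram(x(1,:), 256, 128, 256, fs, 'yaxis');  title('Mixture 1');
    subplot(2,2,2); spectrogram(x(2,:), 256, 128, 256, fs, 'yaxis');  title('Mixture 2');
    subplot(2,2,3); spectrogram(ic(1,:), 256, 128, 256, fs, 'yaxis'); title('Component 1');
    subplot(2,2,4); spectrogram(ic(2,:), 256, 128, 256, fs, 'yaxis'); title('Component 2');

Because the mixing here is a single scalar matrix applied sample by sample, it matches fastICA's model exactly, which is why the separation in Figure 1 is so clean.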

Real-Time Acoustic Mixing Case: fastICA used

Performing this same feat with actual microphones, however, fails. For example, we recorded two sources, a voice and a tone, simultaneously with two microphones, which produced the two mixed signals whose spectrograms are shown below. Once these mixed signals were passed through the fastICA algorithm, source isolation was poor: as the lower two spectrograms of the figure show, the estimated independent components look almost the same as the mixed signals we started with.

Figure 2 (graphics2.jpg): spectrograms of the two microphone mixtures (top) and the nearly unchanged fastICA outputs (bottom).

Here are the sound files for the two mixed signals and the two "separated sources"; the code used to carry out this trial is listed last. Media File: demo2_mix1.wav Media File: demo2_mix2.wav Media File: demo2_source1.wav Media File: demo2_source2.wav Media File: demo2 code.m
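As a rough outline of this trial (again only a sketch, assuming the FastICA toolbox is on the path), the two microphone recordings are simply stacked and handed to fastICA; the file names match the attachments above.

    % Sketch of the real-recording trial: each microphone captures a mixture of
    % both sources, and fastICA is run on the stacked recordings.
    [m1, fs] = audioread('demo2_mix1.wav');
    [m2, ~]  = audioread('demo2_mix2.wav');
    n = min(length(m1), length(m2));
    x = [m1(1:n)'; m2(1:n)'];

    ic = fastica(x);          % attempted separation; poor results in this case

    soundsc(ic(1,:), fs);     % listen to the first "separated" component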

Real-Time Acoustic Mixing: STFICA or fastICA used

We conjecture two reasons why fastICA is unsuccessful in this scenario.

First, atmospheric and room conditions alter the signals through convolutive operations rather than the simple scaling that fastICA assumes. Second, the characteristic response of each microphone both changes the signals and varies from microphone to microphone, introducing both inaccuracy and imprecision. The original ICA technique, fastICA, does not automatically account for these deviations. Also, although fastICA does implement a single stage of prewhitening, that may not be enough to transform the input mixtures so that they look independent of one another in time and space and thereby satisfy fastICA's assumption of independent inputs. We therefore decided to use the STFICA model to account for the convolutive mixing and to allow a user-specifiable number of prewhitening stages.
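The modelling gap can be made concrete with a small illustration. In the instantaneous model that fastICA assumes, the mixtures are x = A*s for a single scalar matrix A; in a real room each microphone instead receives a sum of filtered (convolved) versions of the sources. The signals and impulse responses below are made-up examples, not measured room responses.

    % Illustration of instantaneous vs. convolutive mixing (values are illustrative).
    fs = 8000; t = (0:fs-1)/fs;
    s1 = sin(2*pi*440*t);                        % example tone source
    s2 = 0.5*randn(1, fs);                       % example noise source

    % Hypothetical impulse responses from each source to each microphone
    h11 = [1 0.3 0.1];    h12 = [0.5 0.2 0.05];
    h21 = [0.4 0.25 0.1]; h22 = [1 0.35 0.15];

    x1 = filter(h11, 1, s1) + filter(h12, 1, s2);    % convolutive mixture at mic 1
    x2 = filter(h21, 1, s1) + filter(h22, 1, s2);    % convolutive mixture at mic 2

    % No single unmixing matrix can undo these convolutions, so a plain
    % fastica([x1; x2]) call tends to return components that still look mixed.

Short-time approaches such as STFICA can treat each frequency band as an approximately instantaneous mixture, which is one standard way of handling convolutive mixing.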

At this point we experimented with the number of prewhitening stages by setting an iteration count and then watching the output spectrograms for each setting. Our group could not find a clear relation between the number of prewhitening iterations and the effectiveness of the source isolation, but we did observe that more than one stage helps the separation. Sometimes one iteration count produced some separation, while the next few produced no separation at all.
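A sketch of that experiment follows. The routine name stfica and its signature are hypothetical stand-ins for the STFICA code we actually ran (its real interface may differ), and x and fs are the stacked microphone recordings and sample rate from the previous sketch.

    % Hypothetical sweep over the number of prewhitening stages (pseudocode-level
    % sketch; 'stfica' is a placeholder name, not the actual function signature).
    for numStages = 1:5
        est = stfica(x, numStages);              % hypothetical call
        figure('Name', sprintf('%d prewhitening stage(s)', numStages));
        subplot(2,1,1); spectrogram(est(1,:), 256, 128, 256, fs, 'yaxis');
        subplot(2,1,2); spectrogram(est(2,:), 256, 128, 256, fs, 'yaxis');
    end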

Using the STFICA algorithm in some real-world cases worked better than the original fastICA procedure. In one experiment, we produced a pure tone and recorded it with two microphones; the expected sources to be isolated were the tone and any ambient noise. The mixed signal from each microphone was passed through the fastICA code and, separately, through the STFICA code for comparison. Even in this very simple case, fastICA produced poor results, as can be seen in the middle two spectrograms of the figure: its output independent components look almost identical to the original mixed inputs. STFICA, on the other hand, separated the pure tone from the white noise exceptionally well. As the last row of the figure shows, the tone (bottom left) was well isolated from the ambient white noise (bottom right).

Figure 3 (graphics3.jpg): spectrograms of the two microphone mixtures (top row), the fastICA outputs (middle row), and the STFICA outputs (bottom row: tone at left, ambient noise at right).

Here are the sound files for the two mixed signals, the two "separated signals" produced by fastICA, and the two separated components produced by STFICA. Media File: demo3_mix1.wav Media File: demo3_mix2.wav Media File: demo3_source1 ICA.wav Media File: demo3_source2 ICA.wav Media File: demo3_source1 STFICA.wav Media File: demo3_source2 STFICA.wav Media File: demo3 code.m
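For anyone who wants to compare the attachments side by side, the short snippet below only assumes the four separated .wav files listed above have been downloaded into the current MATLAB folder.

    % Plot spectrograms of the fastICA and STFICA outputs for a quick comparison.
    files = {'demo3_source1 ICA.wav', 'demo3_source2 ICA.wav', ...
             'demo3_source1 STFICA.wav', 'demo3_source2 STFICA.wav'};
    figure;
    for k = 1:numel(files)
        [y, fs] = audioread(files{k});
        subplot(2, 2, k);
        spectrogram(y(:,1), 256, 128, 256, fs, 'yaxis');
        title(files{k}, 'Interpreter', 'none');
    end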

In more complicated situations, where the sources were multiple human speakers, a human speaker and a tone, or other combinations, we did not achieve the same success. The modified algorithm sometimes made one voice more prominent than the other, but it appeared to be filtering in a way that did not achieve the desired separation; the results were not as good as in the simple tone-plus-noise case.
