Signal Processing in Processing: Elementary Filters

Module by: Davide Rocchesso, Pietro Polotti

Based on: Signal Processing in Processing: Elementary Filters by Davide Rocchesso, Pietro Polotti

Summary: The fundamental filters for 1D and 2D signals are introduced and examples are given in Processing.

FIR filters

Finite Impulse Response (FIR) filters are those filters whose impulse response has a finite number of samples. They are realized by the operation of convolution: each sample of the convolution product is a weighted sum of a finite number of input samples.

Averaging filter

The simplest non-trivial FIR filter is the filter that computes the running average of two contiguous samples, and the corresponding convolution can be expressed as

y(n) = 0.5 x(n) + 0.5 x(n-1)
(1)

The impulse response has value 0.5 at instants 0 and 1, and is zero everywhere else.

If we feed a sinusoidal signal into the filter, the output is still a sinusoidal signal, scaled in amplitude and delayed in phase according to the frequency response, which is

H(ω) = cos(ω/2) e^(-iω/2)
(2)

Its magnitude and phase are represented in Figure 1.

Figure 1: Magnitude and phase response for the averaging filter (fir1.png)

The filter just presented is a first-order (or length-2) filter, because it uses only one sample of the past input sequence. From Figure 1 we see that the frequency response is of the low-pass kind, because the high frequencies are attenuated as compared to the low frequencies. Attenuating high frequencies means smoothing the rapid signal variations. If one wants a steeper frequency response from an FIR filter, the order must be increased or, in other words, more samples of the input signal have to be processed by the convolution operator to produce one sample of output.
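The averaging filter of Equation (1) can be sketched in a few lines. The example below is plain Java rather than Processing (the same loop translates directly into a Processing sketch), and the class and method names are illustrative, not taken from the module.

```java
public class AveragingFilter {
    // First-order averaging FIR filter: y(n) = 0.5*x(n) + 0.5*x(n-1),
    // taking the sample before the first one as 0 (zero initial state).
    static float[] average(float[] x) {
        float[] y = new float[x.length];
        float prev = 0;  // holds x(n-1)
        for (int n = 0; n < x.length; n++) {
            y[n] = 0.5f * x[n] + 0.5f * prev;
            prev = x[n];
        }
        return y;
    }

    public static void main(String[] args) {
        // Feeding a unit impulse recovers the impulse response:
        // 0.5 at n = 0 and n = 1, zero everywhere else.
        float[] h = average(new float[]{1, 0, 0, 0});
        System.out.println(java.util.Arrays.toString(h)); // [0.5, 0.5, 0.0, 0.0]
    }
}
```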

Symmetric second-order FIR filter

A symmetric second-order FIR filter has an impulse response of the form (a0, a1, a0), and its frequency response turns out to be H(ω) = (a1 + 2 a0 cos ω) e^(-iω). The convolution can be expressed as

y(n) = a0 x(n) + a1 x(n-1) + a0 x(n-2)
(3)

In the special case where a0 = 0.17654 and a1 = 0.64693, the frequency response (magnitude and phase) is represented in Figure 2.

Figure 2: Magnitude and phase response of a second-order FIR filter (fir2.png)

High-pass filters

Given the simple low-pass filters that we have just seen, it is sufficient to change the sign of one coefficient to obtain a high-pass kind of response, i.e. to emphasize high frequencies as compared to low frequencies. For example, Figure 3 displays the magnitudes of the frequency responses of high-pass FIR filters of the first and second order, whose impulse responses are, respectively, (0.5, -0.5) and (-0.17654, 0.64693, -0.17654).

Figure 3: Frequency response (magnitude) of first-order (left, fir1hp2.png) and second-order (right, fir2hp2.png) high-pass FIR filters.

To emphasize high frequencies means to make rapid signal variations more evident; these variations are time transients in the case of sounds, and contours in the case of images.
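The sign flip described above can be checked numerically. In the sketch below (plain Java, illustrative names), a sign-flipped version of the averaging kernel completely rejects a constant, zero-frequency signal after the initial transient sample, which is exactly the opposite of the low-pass behavior.

```java
public class HighPassDemo {
    // Generic FIR convolution: y(n) = sum_k h[k] * x(n-k),
    // with zero padding before the start of the signal.
    static float[] fir(float[] h, float[] x) {
        float[] y = new float[x.length];
        for (int n = 0; n < x.length; n++)
            for (int k = 0; k < h.length && k <= n; k++)
                y[n] += h[k] * x[n - k];
        return y;
    }

    public static void main(String[] args) {
        float[] highPass = {0.5f, -0.5f};  // averaging kernel with one sign flipped
        float[] constant = {1, 1, 1, 1};   // a pure zero-frequency signal
        // Only the first (transient) sample is nonzero:
        System.out.println(java.util.Arrays.toString(fir(highPass, constant))); // [0.5, 0.0, 0.0, 0.0]
    }
}
```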

FIR filters in 2D

In 2D, the impulse response of an FIR filter is a convolution mask with a finite number of elements, i.e. a matrix. In particular, the averaging filter can be represented, for example, by the convolution matrix

(1/9) | 1 1 1 |
      | 1 1 1 |
      | 1 1 1 |

Example 1: Noise cleaning

Low-pass filters (and, in particular, smoothing filters) perform some sort of smoothing of the input signal, in the sense that the resulting signal has a smoother shape, where abrupt discontinuities are less evident. This can serve the purpose of reducing the perceptual effect of noise added to audio signals or images. For example, the code reported below loads an image, corrupts its central strip with white noise, and then filters half of it with an averaging filter, thus obtaining Figure 4.

Figure 4: Smoothing (vetroRottoSporco.gif)

// smoothed_glass
// smoothing filter, adapted from REAS:
// http://processing.org/learning/topics/blur.html

size(210, 170); 
PImage a;  // Declare variable "a" of type PImage 
a = loadImage("vetro.jpg"); // Load the images into the program 
image(a, 0, 0); // Displays the image from point (0,0) 
 
// corrupt the central strip of the image with random noise 
float noiseAmp = 0.2;
loadPixels(); 
for(int i=0; i<height; i++) { 
  for(int j=width/4; j<width*3/4; j++) { 
    int rdm = constrain((int)(noiseAmp*random(-255, 255) + 
	      red(pixels[i*width + j])), 0, 255); 
    pixels[i*width + j] = color(rdm, rdm, rdm);
  } 
} 
updatePixels(); 
  
int n2 = 3/2;  // kernel half-height (integer division: 1) 
int m2 = 3/2;  // kernel half-width 
float  val = 1.0/9.0;
int[][] output = new int[width][height]; 
float[][] kernel = { {val, val, val}, 
                     {val, val, val}, 
                     {val, val, val} }; 
  
// Convolve the image 
for(int y=0; y<height; y++) { 
  for(int x=0; x<width/2; x++) { 
    float sum = 0; 
    for(int k=-n2; k<=n2; k++) { 
      for(int j=-m2; j<=m2; j++) { 
        // Wrap x-j around the horizontal boundary (periodic extension) 
        int xp = x-j; 
        int yp = y-k; 
        if (xp < 0) { 
          xp = xp + width; 
        } else if (xp >= width) { 
          xp = xp - width; 
        } 
        // Wrap y-k around the vertical boundary 
        if (yp < 0) { 
          yp = yp + height; 
        } else if (yp >= height) { 
          yp = yp - height; 
        } 
        sum = sum + kernel[j+m2][k+n2] * red(get(xp, yp)); 
      } 
    } 
    output[x][y] = int(sum);  
  } 
} 
 
// Display the result of the convolution 
// by copying new data into the pixel buffer 
loadPixels(); 
for(int i=0; i<height; i++) { 
  for(int j=0; j<width/2; j++) { 
    pixels[i*width + j] = 
	      color(output[j][i], output[j][i], output[j][i]);
  } 
} 
updatePixels(); 
         
	

For the purpose of smoothing, it is common to create a convolution mask by sampling a two-variable Gaussian bell. A property of Gaussian functions is that their Fourier transform is itself Gaussian; therefore, impulse response and frequency response have the same shape. However, the transform of a narrow bell is a wide bell, and vice versa. The wider the bell, the more evident the smoothing effect will be, with a consequent loss of detail. In visual terms, a Gaussian filter produces an effect similar to that of an opalescent glass superimposed over the image. An example of Gaussian mask is

(1/273) | 1  4  7  4 1 |
        | 4 16 26 16 4 |
        | 7 26 41 26 7 |
        | 4 16 26 16 4 |
        | 1  4  7  4 1 |

Conversely, if the purpose is to make the contours and salient features of an image more evident (edge crispening or sharpening), we have to perform a high-pass filtering. Similarly to what we saw for one-dimensional high-pass filters, this can be done with a convolution matrix whose central value has a sign opposite to that of the surrounding values. For instance, the convolution matrix

| -1 -1 -1 |
| -1  9 -1 |
| -1 -1 -1 |

produces the effect of Figure 5.

Figure 5: Edge crispening (vetroSharpened.gif)

Non-linear filtering: median filter

A filter whose convolution mask is signal-dependent loses its characteristic of linearity. Median filters use the mask to select a set of pixels of the input image, and replace the central pixel of the mask with the median value of the selected set. Given a set of N (odd) numbers, the median element of the set is the one that separates the (N-1)/2 smaller elements from the (N-1)/2 larger elements. A typical median filter mask is cross-shaped. For example, a 3x3 mask can cover, when applied at a certain image point, the pixels with values

| .  4  . |
| 7 99 12 |
| .  9  . |

thus replacing the value 99 with the median value 9.
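The median step itself is tiny. A minimal sketch in plain Java (names are illustrative), applied to the five cross-selected values of the example above:

```java
public class MedianOfCross {
    // Median of five values: sort a copy and take the middle element.
    static int median5(int[] v) {
        int[] s = v.clone();
        java.util.Arrays.sort(s);  // ascending order
        return s[2];               // middle of five
    }

    public static void main(String[] args) {
        int[] cross = {4, 7, 99, 12, 9};    // center row and column of the mask
        System.out.println(median5(cross)); // prints 9, replacing the outlier 99
    }
}
```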

Exercise 1

Rewrite the filtering operation filtra() of the Sound Chooser presented in the module Media Representation in Processing in such a way that it implements the FIR filter whose frequency response is represented in Figure 2. What happens if the filter is applied more than once?

Solution


          
// filtra = new function implementing the symmetric second-order FIR filter
void filtra(float[] DATAF, float[] DATA, float a0, float a1) {
  for(int i = 2; i < DATA.length; i++) {
    DATAF[i] = a0*DATA[i] + a1*DATA[i-1] + a0*DATA[i-2];
  }
}
 
 	      
	    

By writing a for loop that repeats the filtering operation a certain number of times, one can verify that the effect of the filtering is emphasized. This intuitive result is due to the fact that, as far as the signal is concerned, going through m filters of order N (in our case N = 2) is equivalent to going through a single filter of order mN.
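The equivalence can be verified on the impulse responses: cascading two filters convolves their kernels, so two passes of the second-order filter behave like a single fourth-order (five-tap) filter. A sketch in plain Java (illustrative names):

```java
public class Cascade {
    // Full convolution of two kernels; the result has length a+b-1.
    static float[] conv(float[] a, float[] b) {
        float[] c = new float[a.length + b.length - 1];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < b.length; j++)
                c[i + j] += a[i] * b[j];
        return c;
    }

    public static void main(String[] args) {
        float[] h = {0.17654f, 0.64693f, 0.17654f};  // second-order kernel (order N = 2)
        float[] hh = conv(h, h);                     // single equivalent kernel for m = 2 passes
        System.out.println(hh.length - 1);           // order mN = 4
    }
}
```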

Exercise 2

Consider the Processing code of the blurring example contained in the Processing examples, and modify it so that it performs a Gaussian filtering.

Solution

// smoothing Gaussian filter, adapted from REAS:
// http://processing.org/learning/topics/blur.html

size(200, 200); 
PImage a;  // Declare variable "a" of type PImage 
a = loadImage("vetro.jpg"); // Load the images into the program 
image(a, 0, 0); // Displays the image from point (0,0) 
 
int n2 = 5/2;  // kernel half-height (integer division: 2) 
int m2 = 5/2;  // kernel half-width 
int[][] output = new int[width][height]; 
float[][] kernel = { {1, 4, 7, 4, 1}, 
                     {4, 16, 26, 16, 4}, 
                     {7, 26, 41, 26, 7}, 
                     {4, 16, 26, 16, 4},
                     {1, 4, 7, 4, 1} }; 
 
// Normalize the kernel so that its entries sum to 1
for (int i=0; i<5; i++)
  for (int j=0; j< 5; j++)
    kernel[i][j] = kernel[i][j]/273; 
// Convolve the image 
for(int y=0; y<height; y++) { 
  for(int x=0; x<width/2; x++) { 
    float sum = 0; 
    for(int k=-n2; k<=n2; k++) { 
      for(int j=-m2; j<=m2; j++) { 
        // Wrap x-j around the horizontal boundary (periodic extension) 
        int xp = x-j; 
        int yp = y-k; 
        if (xp < 0) { 
          xp = xp + width; 
        } else if (xp >= width) { 
          xp = xp - width; 
        } 
        // Wrap y-k around the vertical boundary 
        if (yp < 0) { 
          yp = yp + height; 
        } else if (yp >= height) { 
          yp = yp - height; 
        } 
        sum = sum + kernel[j+m2][k+n2] * red(get(xp, yp)); 
      } 
    } 
    output[x][y] = int(sum);  
  } 
} 
 
// Display the result of the convolution 
// by copying new data into the pixel buffer 
loadPixels(); 
for(int i=0; i<height; i++) { 
  for(int j=0; j<width/2; j++) { 
    pixels[i*width + j] = color(output[j][i], output[j][i], output[j][i]); 
  } 
} 
updatePixels(); 
 	      
	    

Exercise 3

Modify the code of Example 1 so that the effects of the averaging filter mask and of the mask

(1/10) | 1 1 1 |
       | 1 2 1 |
       | 1 1 1 |

are compared. What happens if the central value of the convolution mask is increased further? Then, try to implement the median filter with a 3x3 cross-shaped mask.

Solution

Median filter:


          
// smoothed_glass
// smoothing filter, adapted from REAS:
// http://www.processing.org/learning/examples/blur.html

size(210, 170); 
PImage a;  // Declare variable "a" of type PImage 
a = loadImage("vetro.jpg"); // Load the images into the program 
image(a, 0, 0); // Displays the image from point (0,0) 
 
// corrupt the central strip of the image with random noise 
float noiseAmp = 0.1;
loadPixels(); 
for(int i=0; i<height; i++) { 
  for(int j=width/4; j<width*3/4; j++) { 
    int rdm = constrain((int)(noiseAmp*random(-255, 255) + 
      red(pixels[i*width + j])), 0, 255); 
    pixels[i*width + j] = color(rdm, rdm, rdm);
  } 
} 
updatePixels(); 
  
int[][] output = new int[width][height]; 
int[] sortedValues = {0, 0, 0, 0, 0};
int grayVal;
  
// Convolve the image 
for(int y=0; y<height; y++) { 
  for(int x=0; x<width/2; x++) { 
    int indSort = 0;
    for(int k=-1; k<=1; k++) { 
      for(int j=-1; j<=1; j++) { 
        // Wrap x-j around the horizontal boundary (periodic extension) 
        int xp = x-j; 
        int yp = y-k; 
        if (xp < 0) { 
          xp = xp + width; 
        } else if (xp >= width) { 
          xp = xp - width; 
        } 
        // Wrap y-k around the vertical boundary 
        if (yp < 0) { 
          yp = yp + height; 
        } else if (yp >= height) { 
          yp = yp - height; 
        } 
        if ((k == 0) || (j == 0)) { // cross-shaped selection: center row and column
          grayVal = (int)red(get(xp, yp)); 
          indSort = 0;
          while (grayVal < sortedValues[indSort]) {indSort++; }          
          for (int i=4; i>indSort; i--) sortedValues[i] = sortedValues[i-1];
          sortedValues[indSort] = grayVal;
        } 
      } 
    } 
    output[x][y] = int(sortedValues[2]);  
    for (int i=0; i< 5; i++) sortedValues[i] = 0;
  } 
} 
 
// Display the result of the convolution 
// by copying new data into the pixel buffer 
loadPixels(); 
for(int i=0; i<height; i++) { 
  for(int j=0; j<width/2; j++) { 
    pixels[i*width + j] = 
      color(output[j][i], output[j][i], output[j][i]);
  } 
} 
updatePixels(); 
          

IIR Filters

The filtering operation represented by Equation 1 is a particular case of difference equation, in which a sample of output is a function of input samples only. More generally, it is possible to construct recursive difference equations, where any sample of output is a function of one or more other output samples. For instance,

y(n) = 0.5 y(n-1) + 0.5 x(n)
(4)

allows one to compute (causally) each sample of the output by only knowing the output at the previous instant and the input at the same instant. It is easy to see that, by feeding the system represented by Equation 4 with an impulse, we obtain the infinite-length sequence y = 0.5, 0.25, 0.125, 0.0625, ... . For this reason, filters of this kind are called Infinite Impulse Response (IIR) filters. The order of an IIR filter is equal to the number of past output samples that it has to store for processing, as dictated by the difference equation. Therefore, the filter of Equation 4 is a first-order filter. For a given filter order, IIR filters allow frequency responses that are steeper than those of FIR filters, but they always introduce phase distortion; in other words, the different spectral components are delayed by different amounts of time. For example, Figure 6 shows the magnitude and phase responses for the first-order IIR filter represented by Equation 4. If a denotes the coefficient that weights the dependence on the previous output value (0.5 in Equation 4), the impulse response takes the form h(n) = a^n. The closer a is to 1, the longer the impulse response is sustained in time, and the steeper the frequency response becomes, thus emphasizing its low-pass character. Obviously, values of a larger than 1 give a divergent impulse response and, therefore, an unstable filter.

Figure 6: Magnitude and phase response of the first-order IIR filter (iir1resp.png)

IIR filters are widely used for one-dimensional signals, such as audio signals, especially for real-time sample-by-sample processing. Conversely, it does not make much sense to extend recursive processing to two dimensions; therefore, FIR filters are mostly used in image processing.
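The geometric decay of the impulse response is easy to verify. The sketch below (plain Java, illustrative names) implements the first-order recursion in the general form y(n) = a*y(n-1) + (1-a)*x(n), which reduces to Equation 4 for a = 0.5; the (1-a) input gain is an assumption chosen so that the response at zero frequency equals 1.

```java
public class Iir1 {
    // First-order IIR filter: y(n) = a*y(n-1) + (1-a)*x(n).
    static float[] filter(float a, float[] x) {
        float[] y = new float[x.length];
        float prev = 0;  // y(n-1), zero initial state
        for (int n = 0; n < x.length; n++) {
            y[n] = a * prev + (1 - a) * x[n];
            prev = y[n];
        }
        return y;
    }

    public static void main(String[] args) {
        // With a = 0.5, a unit impulse yields 0.5, 0.25, 0.125, 0.0625, ...
        float[] y = filter(0.5f, new float[]{1, 0, 0, 0});
        System.out.println(java.util.Arrays.toString(y)); // [0.5, 0.25, 0.125, 0.0625]
    }
}
```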

Resonant filter

In the audio field, second-order IIR filters are particularly important, because they realize an elementary resonator. Given the difference equation

y(n) = a1 y(n-1) + a2 y(n-2) + b0 x(n)
(5)

one can verify that it produces the frequency response of Figure 7. The coefficients that give the dependence on the past can be expressed as a1 = 2r cos(ω0) and a2 = -r^2, where ω0 is the frequency of the resonance peak, and the peak gets narrower as r approaches 1.

Figure 7: Magnitude and phase response of the second-order IIR filter (iir2.png)

Exercise 4

Verify that the filtering operation filtra() of the Sound Chooser presented in the module Media Representation in Processing implements an IIR resonant filter. What is the relation between r and the mouse position along the small bars?
