Finite Impulse Response (FIR) filters are filters whose impulse response has a finite number of samples. They are realized by the operation of convolution: each sample of the convolution product is a weighted sum of a finite number of input samples.
Averaging filter
The simplest nontrivial FIR filter is the filter that computes the running average of two contiguous samples. The corresponding convolution can be expressed as y(n) = (1/2)[x(n) + x(n-1)].
If we put a sinusoidal signal into the filter, the output will still be a sinusoidal signal, scaled in amplitude and delayed in phase according to the frequency response, which is H(ω) = cos(ω/2)·e^(-jω/2).
Figure 1: Magnitude and phase response of the averaging filter.

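As a quick check, here is a hypothetical sketch in plain Java (not part of the original module; the test frequency and signal length are arbitrary choices): a sinusoid of angular frequency w is passed through the two-sample running average, and the measured output amplitude matches the predicted magnitude response cos(w/2).

```java
// Feed a unit-amplitude sinusoid through y(n) = (x(n) + x(n-1))/2 and
// measure the output amplitude; it should match cos(w/2).
public class AveragingFilterDemo {
    public static double measuredGain(double w, int n) {
        double[] x = new double[n];
        double[] y = new double[n];
        for (int i = 0; i < n; i++) x[i] = Math.sin(w * i);
        // running average of two contiguous samples
        for (int i = 1; i < n; i++) y[i] = 0.5 * (x[i] + x[i - 1]);
        // the peak output value approximates the output amplitude
        double peak = 0;
        for (int i = 1; i < n; i++) peak = Math.max(peak, Math.abs(y[i]));
        return peak;
    }

    public static void main(String[] args) {
        double w = 0.3;  // test frequency in radians per sample
        System.out.printf("measured gain: %.4f, predicted cos(w/2): %.4f%n",
                          measuredGain(w, 10000), Math.cos(w / 2));
    }
}
```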
The filter just presented is a first-order (or length-2) FIR filter.
Symmetric second-order FIR filter
A symmetric second-order FIR filter has an impulse response of the form h(n) = a0·δ(n) + a1·δ(n-1) + a0·δ(n-2), i.e. its first and last coefficients are equal.
Figure 2: Magnitude and phase response of a second-order FIR filter.

Highpass filters
Given the simple lowpass filters that we have just seen, it is sufficient to change the sign of one coefficient to obtain a highpass kind of response, i.e. to emphasize high frequencies as compared to low frequencies. For example, Figure 3 displays the magnitude of the frequency responses of highpass FIR filters of the first and second order, whose impulse responses are, respectively, h1(n) = (1/2)δ(n) - (1/2)δ(n-1) and h2(n) = (1/4)δ(n) - (1/2)δ(n-1) + (1/4)δ(n-2).
Figure 3: Frequency response (magnitude) of first-order (left) and second-order (right) highpass FIR filters.


Emphasizing high frequencies means making rapid variations of the signal more evident; such variations are time transients in the case of sounds, and contours in the case of images.
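To make this concrete, here is a hypothetical sketch in plain Java (not from the original module): the first-order highpass filter with impulse response {1/2, -1/2} is applied to a step signal, and the output is zero wherever the signal is constant, responding only at the abrupt transition.

```java
// Apply the first-order highpass FIR filter h = {0.5, -0.5} to a step
// signal; flat regions give zero output, the discontinuity gives a spike.
public class HighpassStepDemo {
    public static double[] highpass(double[] x) {
        double[] y = new double[x.length];
        for (int i = 1; i < x.length; i++) {
            y[i] = 0.5 * (x[i] - x[i - 1]);
        }
        return y;
    }

    public static void main(String[] args) {
        double[] step = new double[10];
        for (int i = 5; i < 10; i++) step[i] = 1.0;  // step at n = 5
        double[] y = highpass(step);
        for (double v : y) System.out.printf("%.2f ", v);
        System.out.println();  // only y[5] is nonzero: the edge is marked
    }
}
```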
FIR filters in 2D
In 2D, the impulse response of an FIR filter is a convolution mask with a finite number of elements, i.e. a matrix. In particular, the averaging filter can be represented, for example, by the 3×3 convolution matrix whose nine entries are all equal to 1/9, so that each output pixel is the average of the 3×3 neighbourhood of the corresponding input pixel.
Example 1: Noise cleaning
Lowpass filters (and, in particular, smoothing filters) perform some sort of smoothing of the input signal, in the sense that the resulting signal has a smoother shape, where abrupt discontinuities are less evident. This can serve the purpose of reducing the perceptual effect of noise added to audio signals or images. For example, the code reported below loads an image, corrupts it with white noise, and then filters half of it with an averaging filter, thus obtaining Figure 4.
Figure 4: Smoothing.

// smoothed_glass
// smoothing filter, adapted from REAS:
// http://processing.org/learning/topics/blur.html
size(210, 170);
PImage a;  // Declare variable "a" of type PImage
a = loadImage("vetro.jpg");  // Load the image into the program
image(a, 0, 0);  // Displays the image from point (0,0)
// corrupt the central strip of the image with random noise
float noiseAmp = 0.2;
loadPixels();
for (int i = 0; i < height; i++) {
  for (int j = width/4; j < width*3/4; j++) {
    int rdm = constrain((int)(noiseAmp*random(-255, 255) +
              red(pixels[i*width + j])), 0, 255);
    pixels[i*width + j] = color(rdm, rdm, rdm);
  }
}
updatePixels();
int n2 = 3/2;
int m2 = 3/2;
float val = 1.0/9.0;
int[][] output = new int[width][height];
float[][] kernel = { {val, val, val},
                     {val, val, val},
                     {val, val, val} };
// Convolve the image
for (int y = 0; y < height; y++) {
  for (int x = 0; x < width/2; x++) {
    float sum = 0;
    for (int k = -n2; k <= n2; k++) {
      for (int j = -m2; j <= m2; j++) {
        int xp = x - j;
        int yp = y - k;
        // Reflect x-j to not exceed array boundary
        if (xp < 0) {
          xp = xp + width;
        } else if (xp >= width) {
          xp = xp - width;
        }
        // Reflect y-k to not exceed array boundary
        if (yp < 0) {
          yp = yp + height;
        } else if (yp >= height) {
          yp = yp - height;
        }
        sum = sum + kernel[j+m2][k+n2] * red(get(xp, yp));
      }
    }
    output[x][y] = int(sum);
  }
}
// Display the result of the convolution
// by copying new data into the pixel buffer
loadPixels();
for (int i = 0; i < height; i++) {
  for (int j = 0; j < width/2; j++) {
    pixels[i*width + j] =
      color(output[j][i], output[j][i], output[j][i]);
  }
}
updatePixels();
For the purpose of smoothing, it is common to create a convolution mask by reading the values of a Gaussian bell in two variables. A property of Gaussian functions is that their Fourier transform is itself Gaussian; therefore, impulse response and frequency response have the same shape. However, the transform of a narrow bell is a wide bell, and vice versa. The wider the bell in the spatial domain, the more evident the smoothing effect will be, with a consequent loss of detail. In visual terms, a Gaussian filter produces an effect similar to that of an opalescent glass superimposed over the image. An example of a Gaussian bell is g(x, y) = (1/(2πσ²))·e^(-(x² + y²)/(2σ²)).
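A mask of this kind can be built by sampling the bell and normalizing the samples. The following is a hypothetical sketch in plain Java (the radius and standard deviation are arbitrary choices, not values from the original module):

```java
// Build a (2r+1) x (2r+1) convolution mask by sampling a 2D Gaussian bell
// of standard deviation sigma, then normalize it so the coefficients sum
// to 1 (unit gain at zero frequency).
public class GaussianMask {
    public static double[][] gaussianKernel(int r, double sigma) {
        int size = 2 * r + 1;
        double[][] k = new double[size][size];
        double sum = 0;
        for (int i = -r; i <= r; i++) {
            for (int j = -r; j <= r; j++) {
                k[i + r][j + r] = Math.exp(-(i * i + j * j) / (2 * sigma * sigma));
                sum += k[i + r][j + r];
            }
        }
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++)
                k[i][j] /= sum;  // normalize to unit sum
        return k;
    }

    public static void main(String[] args) {
        double[][] k = gaussianKernel(2, 1.0);
        for (double[] row : k) {
            for (double v : row) System.out.printf("%.4f ", v);
            System.out.println();
        }
    }
}
```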
Conversely, if the purpose is to make the contours and salient features of an image more evident (edge crispening or sharpening), we have to perform a highpass filtering. Similarly to what we saw in Section 4, this can be done with a convolution matrix whose central value has the opposite sign of the surrounding values. For instance, one may use a 3×3 convolution matrix whose central value is 9 and whose surrounding values are all -1, producing the effect shown below.
Figure 5: Edge crispening.

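The behaviour of such a mask can be verified numerically. Below is a hypothetical sketch in plain Java, using a 3×3 crispening mask with central value 9 and surrounding values -1: since its coefficients sum to 1, constant regions pass unchanged, while a pixel that differs from its neighbourhood has its contrast amplified.

```java
// An edge-crispening mask: central value 9, surrounding values -1.
// Coefficients sum to 1, so flat areas are preserved and local
// differences are amplified.
public class SharpenDemo {
    static final int[][] MASK = { {-1, -1, -1},
                                  {-1,  9, -1},
                                  {-1, -1, -1} };

    // convolution at an interior point (no boundary handling, for brevity)
    public static int applyAt(int[][] img, int y, int x) {
        int sum = 0;
        for (int k = -1; k <= 1; k++)
            for (int j = -1; j <= 1; j++)
                sum += MASK[k + 1][j + 1] * img[y - k][x - j];
        return sum;
    }

    public static void main(String[] args) {
        int[][] flat = new int[5][5];
        for (int[] row : flat) java.util.Arrays.fill(row, 100);
        System.out.println(applyAt(flat, 2, 2));  // flat area: value kept
        flat[2][2] = 110;  // one pixel brighter than its context
        System.out.println(applyAt(flat, 2, 2));  // contrast amplified
    }
}
```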
Nonlinear filtering: median filter
A filter whose convolution mask is signal-dependent loses its characteristic of linearity. Median filters use the mask to select a set of pixels of the input image, and replace the central pixel of the mask with the median value of the selected set. Given a set of 2m + 1 values, the median is the element such that m of the values are smaller and m are larger.
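The key property of the median is that it discards isolated outliers (impulsive noise) entirely, instead of spreading them out as a weighted average would. A hypothetical sketch in plain Java, for the 5-element cross-shaped window used later in the solution of Exercise 3:

```java
import java.util.Arrays;

// Median of a 5-element window, as used by a median filter with a
// cross-shaped mask: an isolated outlier never reaches the output.
public class MedianDemo {
    public static int median5(int[] v) {
        int[] s = v.clone();
        Arrays.sort(s);
        return s[2];  // middle element of the sorted window
    }

    public static void main(String[] args) {
        int[] window = {12, 10, 255, 11, 13};  // 255 is an impulsive outlier
        System.out.println(median5(window));   // prints 12
    }
}
```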
Exercise 1
Rewrite the filtering operation filtra() of the Sound Chooser presented in the module Media Representation in Processing in such a way that it implements the FIR filter whose frequency response is represented in Figure 2. What happens if the filter is applied more than once?
Solution
// filtra = new function
void filtra(float[] DATAF, float[] DATA, float a0, float a1) {
  for (int i = 2; i < DATA.length; i++) {
    // Symmetric FIR filter of the second order
    DATAF[i] = a0*DATA[i] + a1*DATA[i-1] + a0*DATA[i-2];
  }
}
By writing a for loop that repeats the filtering operation a certain number of times, one can verify that the effect of filtering is emphasized. This intuitive result is due to the fact that, as far as the signal is concerned, going through the filter N times is equivalent to going through a single filter whose impulse response is the original one convolved with itself N - 1 times, and whose frequency response is therefore the original one raised to the N-th power.
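This equivalence can be checked directly, thanks to the associativity of convolution. A hypothetical sketch in plain Java (the filter coefficients and test signal are arbitrary choices): applying the symmetric second-order filter {1/4, 1/2, 1/4} twice gives the same result as applying once the filter whose impulse response is h convolved with itself.

```java
// Cascading an FIR filter with itself equals a single filter whose
// impulse response is conv(h, h), by associativity of convolution.
public class CascadeDemo {
    public static double[] conv(double[] h, double[] x) {
        double[] y = new double[h.length + x.length - 1];
        for (int n = 0; n < y.length; n++)
            for (int k = 0; k < h.length; k++)
                if (n - k >= 0 && n - k < x.length)
                    y[n] += h[k] * x[n - k];
        return y;
    }

    public static void main(String[] args) {
        double[] h = {0.25, 0.5, 0.25};  // symmetric second-order filter
        double[] x = {0, 0, 1, 0, 3, 2, 1, 0, 0};
        double[] twice = conv(h, conv(h, x));  // filter applied two times
        double[] once  = conv(conv(h, h), x);  // equivalent single filter
        for (int i = 0; i < twice.length; i++)
            System.out.printf("%.4f %.4f%n", twice[i], once[i]);
    }
}
```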
Exercise 2
Consider the Processing code of the blurring example contained in the Processing examples, and modify it so that it performs a Gaussian filtering.
Solution
// smoothing Gaussian filter, adapted from REAS:
// http://processing.org/learning/topics/blur.html
size(200, 200);
PImage a;  // Declare variable "a" of type PImage
a = loadImage("vetro.jpg");  // Load the image into the program
image(a, 0, 0);  // Displays the image from point (0,0)
int n2 = 5/2;
int m2 = 5/2;
int[][] output = new int[width][height];
float[][] kernel = { {1,  4,  7,  4, 1},
                     {4, 16, 26, 16, 4},
                     {7, 26, 41, 26, 7},
                     {4, 16, 26, 16, 4},
                     {1,  4,  7,  4, 1} };
// normalize the mask so that its coefficients sum to 1
for (int i = 0; i < 5; i++)
  for (int j = 0; j < 5; j++)
    kernel[i][j] = kernel[i][j]/273;
// Convolve the image
for (int y = 0; y < height; y++) {
  for (int x = 0; x < width/2; x++) {
    float sum = 0;
    for (int k = -n2; k <= n2; k++) {
      for (int j = -m2; j <= m2; j++) {
        int xp = x - j;
        int yp = y - k;
        // Reflect x-j to not exceed array boundary
        if (xp < 0) {
          xp = xp + width;
        } else if (xp >= width) {
          xp = xp - width;
        }
        // Reflect y-k to not exceed array boundary
        if (yp < 0) {
          yp = yp + height;
        } else if (yp >= height) {
          yp = yp - height;
        }
        sum = sum + kernel[j+m2][k+n2] * red(get(xp, yp));
      }
    }
    output[x][y] = int(sum);
  }
}
// Display the result of the convolution
// by copying new data into the pixel buffer
loadPixels();
for (int i = 0; i < height; i++) {
  for (int j = 0; j < width/2; j++) {
    pixels[i*width + j] = color(output[j][i], output[j][i], output[j][i]);
  }
}
updatePixels();
Exercise 3
Modify the code of Example 1 so that the effects of the averaging filter mask and of the median filter can be compared.
Solution
Median filter:
// smoothed_glass
// smoothing filter, adapted from REAS:
// http://www.processing.org/learning/examples/blur.html
size(210, 170);
PImage a;  // Declare variable "a" of type PImage
a = loadImage("vetro.jpg");  // Load the image into the program
image(a, 0, 0);  // Displays the image from point (0,0)
// corrupt the central strip of the image with random noise
float noiseAmp = 0.1;
loadPixels();
for (int i = 0; i < height; i++) {
  for (int j = width/4; j < width*3/4; j++) {
    int rdm = constrain((int)(noiseAmp*random(-255, 255) +
              red(pixels[i*width + j])), 0, 255);
    pixels[i*width + j] = color(rdm, rdm, rdm);
  }
}
updatePixels();
int[][] output = new int[width][height];
int[] sortedValues = {0, 0, 0, 0, 0};
int grayVal;
// Filter the image (note: median filtering is not a convolution)
for (int y = 0; y < height; y++) {
  for (int x = 0; x < width/2; x++) {
    int indSort = 0;
    for (int k = -1; k <= 1; k++) {
      for (int j = -1; j <= 1; j++) {
        int xp = x - j;
        int yp = y - k;
        // Reflect x-j to not exceed array boundary
        if (xp < 0) {
          xp = xp + width;
        } else if (xp >= width) {
          xp = xp - width;
        }
        // Reflect y-k to not exceed array boundary
        if (yp < 0) {
          yp = yp + height;
        } else if (yp >= height) {
          yp = yp - height;
        }
        if (((k != j) && (k != -j)) || (k == 0)) { // cross selection
          grayVal = (int)red(get(xp, yp));
          // insert grayVal into the descending sorted array
          indSort = 0;
          while (grayVal < sortedValues[indSort]) { indSort++; }
          for (int i = 4; i > indSort; i--) sortedValues[i] = sortedValues[i-1];
          sortedValues[indSort] = grayVal;
        }
      }
    }
    output[x][y] = sortedValues[2];  // median of the 5 selected values
    for (int i = 0; i < 5; i++) sortedValues[i] = 0;
  }
}
// Display the result of the filtering
// by copying new data into the pixel buffer
loadPixels();
for (int i = 0; i < height; i++) {
  for (int j = 0; j < width/2; j++) {
    pixels[i*width + j] =
      color(output[j][i], output[j][i], output[j][i]);
  }
}
updatePixels();