Purdue Digital Signal Processing Labs (ECE 438)
# Lab 10a - Image Processing (part 1)

Module by: Charles A. Bouman. E-mail the author

Questions or comments concerning this laboratory should be directed to Prof. Charles A. Bouman, School of Electrical and Computer Engineering, Purdue University, West Lafayette IN 47907; (765) 494-0340; bouman@ecn.purdue.edu

## Introduction

This is the first part of a two week experiment in image processing. During this week, we will cover the fundamentals of digital monochrome images, intensity histograms, pointwise transformations, gamma correction, and image enhancement based on filtering.

In the second week, we will cover some fundamental concepts of color images. This will include a brief description of how humans perceive color, followed by descriptions of two standard color spaces. The second week will also discuss an application known as image halftoning.

## Introduction to Monochrome Images

An image is the optical representation of objects illuminated by a light source. Since we want to process images using a computer, we represent them as functions of discrete spatial variables. For monochrome (black-and-white) images, a scalar function f(i,j) can be used to represent the light intensity at each spatial coordinate (i,j). Figure 1 illustrates the convention we will use for spatial coordinates to represent images.

If we assume the coordinates to be a set of positive integers, for example i=1,…,M and j=1,…,N, then an image can be conveniently represented by a matrix.

$$f(i,j) = \begin{bmatrix} f(1,1) & f(1,2) & \cdots & f(1,N) \\ f(2,1) & f(2,2) & \cdots & f(2,N) \\ \vdots & \vdots & & \vdots \\ f(M,1) & f(M,2) & \cdots & f(M,N) \end{bmatrix}$$
(1)

We call this an M×N image, and the elements of the matrix are known as pixels.

The pixels in digital images usually take on integer values in the finite range,

$$0 \le f(i,j) \le L_{max}$$
(2)

where 0 represents the minimum intensity level (black), and Lmax is the maximum intensity level (white) that the digital image can take on. The interval [0, Lmax] is known as a gray scale.

In this lab, we will concentrate on 8-bit images, meaning that each pixel is represented by a single byte. Since a byte can take on 256 distinct values, Lmax is 255 for an 8-bit image.

### Exercise

Download the file yacht.tif for the following section.

In order to process images within Matlab, we need to first understand their numerical representation. Download the image file yacht.tif . This is an 8-bit monochrome image. Read it into a matrix using

A = imread('yacht.tif');

Type whos to display your variables. Notice under the "Class" column that the A matrix elements are of type uint8 (unsigned integer, 8 bits). This means that Matlab is using a single byte to represent each pixel. Matlab cannot perform numerical computation on numbers of type uint8, so we usually need to convert the matrix to a floating point representation. Create a double precision representation of the image using B = double(A); . Again, type whos and notice the difference in the number of bytes between A and B. In future sections, we will be performing computations on our images, so we need to remember to convert them to type double before processing them.

Display yacht.tif using the following sequence of commands:

image(B);

colormap(gray(256));

axis('image');

The image command works for both type uint8 and double images. The colormap command specifies the range of displayed gray levels, assigning black to 0 and white to 255. It is important to note that if any pixel values are outside the range 0 to 255 (after processing), they will be clipped to 0 or 255 respectively in the displayed image. It is also important to note that a floating point pixel value will be rounded down ("floored") to an integer before it is displayed. Therefore the maximum number of gray levels that will be displayed on the monitor is 255, even if the image values take on a continuous range.

Now we will practice some simple operations on the yacht.tif image. Make a horizontally flipped version of the image by reversing the order of each column. Similarly, create a vertically flipped image. Print your results.

Now, create a "negative" of the image by subtracting each pixel from 255 (this is an example of where conversion to double is necessary). Print the result.

Finally, multiply each pixel of the original image by 1.5, and print the result.
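If you want to sanity-check these operations outside MATLAB, the same pointwise manipulations can be sketched in NumPy (the small array here is an illustrative stand-in for the yacht image, not real data):

```python
import numpy as np

# Tiny stand-in "image" for illustration (real data would come from yacht.tif).
A = np.array([[ 10,  50,  90],
              [130, 170, 210]], dtype=np.uint8)

B = A.astype(np.float64)    # convert to double before doing arithmetic

h_flip   = B[:, ::-1]       # horizontal flip: reverse the column order
v_flip   = B[::-1, :]       # vertical flip: reverse the row order
negative = 255.0 - B        # negative: subtract each pixel from 255
brighter = 1.5 * B          # scale every pixel by 1.5
```

Note that `brighter` can exceed 255; on display those values would be clipped to white, which is the effect the report question asks about.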

#### INLAB REPORT

1. Hand in two flipped images.
2. Hand in the negative image.
3. Hand in the image multiplied by factor of 1.5. What effect did this have?

## Pixel Distributions

Download the files house.tif and narrow.tif for the following sections.

### Histogram of an Image

The histogram of a digital image shows how its pixel intensities are distributed. The pixel intensities vary along the horizontal axis, and the number of pixels at each intensity is plotted vertically, usually as a bar graph. A typical histogram of an 8-bit image is shown in Figure 2.

Write a simple Matlab function Hist(A) which will plot the histogram of image matrix A. You may use Matlab's hist function; however, that function requires a vector as input. An example of using hist to plot a histogram of a matrix would be

x=reshape(A,1,M*N);

hist(x,0:255);

where A is an image, and M and N are the number of rows and columns in A. The reshape command creates a row vector out of the image matrix, and the hist command plots a histogram with bins centered at [0:255].
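The same counting can be sketched in NumPy, which makes the bin convention explicit (the array is illustrative, not the house.tif data):

```python
import numpy as np

# Illustrative 8-bit image matrix.
A = np.array([[0, 0, 255],
              [128, 128, 128]], dtype=np.uint8)

# Flatten to a vector, then count pixels at each of the 256 gray levels;
# this mirrors reshape(A,1,M*N) followed by hist(x,0:255) in MATLAB.
x = A.reshape(-1)
counts, _ = np.histogram(x, bins=np.arange(257))  # one bin per level 0..255
```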

Download the image file house.tif , and read it into Matlab. Test your Hist function on the image. Label the axes of the histogram and give it a title.

#### INLAB REPORT:

Hand in your labeled histogram. Comment on the distribution of the pixel intensities.

### Pointwise Transformations

A pointwise transformation is a function that maps pixels from one intensity to another. An example is shown in Figure 3. The horizontal axis shows all possible intensities of the original image, and the vertical axis shows the intensities of the transformed image. This particular transformation maps the "darker" pixels in the range [0,T1] to a level of zero (black), and similarly maps the "lighter" pixels in [T2,255] to white. Then the pixels in the range [T1,T2] are "stretched out" to use the full scale of [0,255]. This can have the effect of increasing the contrast in an image.

Pointwise transformations will obviously affect the pixel distribution, hence they will change the shape of the histogram. If a pixel transformation can be described by a one-to-one function, y=f(x), then it can be shown that the input and output histograms are approximately related by the following:

$$H_{out}(y) \approx \left. \frac{H_{in}(x)}{|f'(x)|} \right|_{x=f^{-1}(y)}$$
(3)

Since x and y need to be integers in Equation 3, the evaluation of x = f⁻¹(y) needs to be rounded to the nearest integer.

The pixel transformation shown in Figure 3 is not a one-to-one function. However, Equation 3 still may be used to give insight into the effect of the transformation. Since the regions [0,T1] and [T2,255] map to the single points 0 and 255, we might expect "spikes" at the points 0 and 255 in the output histogram. The region [1,254] of the output histogram will be directly related to the input histogram through Equation 3.

First, notice from x = f⁻¹(y) that the region [1,254] of the output is being mapped from the region [T1,T2] of the input. Then notice that f'(x) will be a constant scaling factor throughout the entire region of interest. Therefore, the output histogram should approximately be a stretched and rescaled version of the input histogram, with possible spikes at the endpoints.

Write a Matlab function that will perform the pixel transformation shown in Figure 3. It should have the syntax

output = pointTrans(input, T1, T2) .

#### Hints

• Determine an equation for the graph in Figure 3, and use this in your function. Notice you have three input regions to consider. You may want to create a separate function to apply this equation.
• If your function performs the transformation one pixel at a time, be sure to allocate the space for the output image at the beginning to speed things up.
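As a cross-check on your MATLAB implementation, the three-region mapping of Figure 3 can be sketched with boolean masks in NumPy (the function name and vectorized style are illustrative, not a required approach):

```python
import numpy as np

def point_trans(img, t1, t2):
    """Piecewise-linear stretch: [0,t1] -> 0, [t2,255] -> 255,
    and (t1,t2) mapped linearly onto the full range [0,255]."""
    out = np.zeros_like(img, dtype=np.float64)  # preallocate; dark region stays 0
    out[img >= t2] = 255.0                      # light region saturates to white
    mid = (img > t1) & (img < t2)
    out[mid] = 255.0 * (img[mid] - t1) / (t2 - t1)  # stretch the middle region
    return out
```

A vectorized version like this avoids the pixel-by-pixel loop entirely, which is also the fastest route in MATLAB.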

Download the image file narrow.tif and read it into Matlab. Display the image, and compute its histogram. The reason the image appears "washed out" is that it has a narrow histogram. Print out this picture and its histogram.

Now use your pointTrans function to spread out the histogram using T1=70 and T2=180. Display the new image and its histogram. (You can open another figure window using the figure command.) Do you notice a difference in the "quality" of the picture?

#### INLAB REPORT

1. Hand in your code for pointTrans.
2. Hand in the original image and its histogram.
3. Hand in the transformed image and its histogram.
4. What qualitative effect did the transformation have on the original image? Do you observe any negative effects of the transformation?
5. Compare the histograms of the original and transformed images. Why are there zeros in the output histogram?

## Gamma Correction

Download the file dark.tif for the following section.

The light intensity generated by a physical device is usually a nonlinear function of the original signal. For example, a pixel that has a gray level of 200 will not be twice as bright as a pixel with a level of 100. Almost all computer monitors have a power law response to their applied voltage. For a typical cathode ray tube (CRT), the brightness of the illuminated phosphors is approximately equal to the applied voltage raised to a power of 2.5. The numerical value of this exponent is known as the gamma (γ) of the CRT. Therefore the power law is expressed as

$$I = V^{\gamma}$$
(4)

where I is the pixel intensity and V is the voltage applied to the device.

If we relate Equation 4 to the pixel values for an 8-bit image, we get the following relationship,

$$y = 255 \left( \frac{x}{255} \right)^{\gamma}$$
(5)

where x is the original pixel value, and y is the pixel intensity as it appears on the display. This relationship is illustrated in Figure 4.

In order to achieve the correct reproduction of intensity, this nonlinearity must be compensated by a process known as γ correction. Images that are not properly corrected usually appear too light or too dark. If the value of γ is available, then the correction process consists of applying the inverse of Equation 5. This is a straightforward pixel transformation, as we discussed in the section "Pointwise Transformations".

Write a Matlab function that will γ correct an image by applying the inverse of Equation 5. The syntax should be

B = gammCorr(A,gamma)

where A is the uncorrected image, gamma is the γ of the device, and B is the corrected image. (See the hints in "Pointwise Transformations".)
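The inverse of Equation 5 is x = 255 (y/255)^(1/γ), so the correction itself is a one-line pointwise transformation. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def gamma_correct(img, gamma):
    # Apply the inverse of y = 255*(x/255)**gamma, pre-distorting the
    # image so that the display's power-law response cancels out.
    return 255.0 * (img / 255.0) ** (1.0 / gamma)
```

Note that the endpoints 0 and 255 are fixed points of the correction; only the mid-tones move.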

The file dark.tif is an image that has not been γ corrected for your monitor. Download this image, and read it into Matlab. Display it and observe the quality of the image.

Assume that the γ for your monitor is 2.2. Use your gammCorr function to correct the image for your monitor, and display the resultant image. Did it improve the quality of the picture?

### INLAB REPORT

1. Hand in your code for gammCorr.
2. Hand in the γ corrected image.
3. How did the correction affect the image? Does this appear to be the correct value for γ?

## Image Enhancement Based on Filtering

Sometimes, we need to process images to improve their appearance. In this section, we will discuss two fundamental image enhancement techniques: image smoothing and sharpening.

### Image Smoothing

Smoothing operations are used primarily for diminishing spurious effects that may be present in a digital image, possibly as a result of a poor sampling system or a noisy transmission channel. Lowpass filtering is a popular technique of image smoothing.

Some filters can be represented as a 2-D convolution of an image f(i,j) with the filter's impulse response h(i,j).

$$g(i,j) = f(i,j) ** h(i,j) = \sum_{k=-\infty}^{\infty} \sum_{l=-\infty}^{\infty} f(k,l)\, h(i-k, j-l)$$
(6)

Some typical lowpass filter impulse responses are shown in Figure 5, where the center element corresponds to h(0,0). Notice that the terms of each filter sum to one. This prevents amplification of the DC component of the original image. The frequency response of each of these filters is shown in Figure 6.

An example of image smoothing is shown in Figure 7, where the degraded image is processed by the filter shown in Figure 5(c). It can be seen that lowpass filtering clearly reduces the additive noise, but at the same time it blurs the image. Hence, blurring is a major limitation of lowpass filtering.

In addition to the above linear filtering techniques, images can be smoothed by nonlinear filtering, such as mathematical morphological processing. Median filtering is one of the simplest morphological techniques, and is useful in the reduction of impulsive noise. The main advantage of this type of filter is that it can reduce noise while preserving the detail of the original image. In a median filter, each input pixel is replaced by the median of the pixels contained in a surrounding window. This can be expressed by

$$g(i,j) = \mathrm{median}\{ f(i-k, j-l) \}, \quad (k,l) \in W$$
(7)

where W is a suitably chosen window. Figure 8 shows the performance of the median filter in reducing so-called "salt and pepper" noise.

### Smoothing Exercise

Download the files race.tif, noise1.tif and noise2.tif for this exercise.

Among the many spatial lowpass filters, the Gaussian filter is of particular importance. This is because it results in very good spatial and spectral localization characteristics. The Gaussian filter has the form

$$h(i,j) = C \exp\left( -\frac{i^2 + j^2}{2\sigma^2} \right)$$
(8)

where σ², known as the variance, determines the size of the passband. Usually the Gaussian filter is normalized by a scaling constant C such that the sum of the filter coefficient magnitudes is one, allowing the average intensity of the image to be preserved.

$$\sum_{i,j} h(i,j) = 1$$
(9)

Write a Matlab function that will create a normalized Gaussian filter that is centered around the origin (the center element of your matrix should be h(0,0)). Note that this filter is both separable and symmetric, meaning h(i,j)=h(i)h(j) and h(i)=h(-i). Use the syntax

h=gaussFilter(N, var)

where N determines the size of the filter, var is the variance, and h is the N×N filter. Notice that for this filter to be symmetrically centered around zero, N will need to be an odd number.
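Exploiting separability, the filter can be built as an outer product of a 1-D Gaussian with itself. An illustrative NumPy sketch of this construction (the function name mirrors the required MATLAB syntax but is not the deliverable):

```python
import numpy as np

def gauss_filter(n, var):
    """n x n Gaussian centered at the origin (n must be odd),
    normalized so the coefficients sum to one."""
    k = (n - 1) // 2
    i = np.arange(-k, k + 1)             # indices -k..k, so h(0,0) is central
    g1 = np.exp(-i**2 / (2.0 * var))     # 1-D profile; the filter is separable
    h = np.outer(g1, g1)                 # h(i,j) = h(i) * h(j)
    return h / h.sum()                   # normalize per Equation 9

h = gauss_filter(7, 1.0)
```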

Use Matlab to compute the frequency response of a 7×7 Gaussian filter with σ²=1. Use the command

H = fftshift(fft2(h,32,32));

to get a 32×32 DFT. Plot the magnitude of the frequency response of the Gaussian filter, |HGauss(ω1,ω2)|, using the mesh command. Plot it over the region [-π,π]×[-π,π], and label the axes.
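For reference, the corresponding NumPy call is below; np.fft.fft2 zero-pads the filter to the requested size, just as fft2(h,32,32) does, and fftshift moves DC to the center of the grid (the 7×7 averaging filter here is only a placeholder input):

```python
import numpy as np

# h: any 7x7 filter; zero-pad to 32x32, take the 2-D DFT, and center DC.
h = np.ones((7, 7)) / 49.0
H = np.fft.fftshift(np.fft.fft2(h, (32, 32)))
```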

Filter the image contained in the file race.tif with a 7×7 Gaussian filter, with σ²=1.

#### Hint:

You can filter the signal by using the Matlab command Y=filter2(h,X); , where X is the matrix containing the input image and h is the impulse response of the filter.
Display the original and the filtered images, and notice the blurring that the filter has caused.

Now write a Matlab function to implement a 3×3 median filter (without using the medfilt2 command). Use the syntax

Y = medianFilter(X);

where X and Y are the input and output image matrices, respectively. For convenience, you do not have to alter the pixels on the border of X.

#### Hint:

Use the Matlab command median to find the median value of a subarea of the image, i.e. a 3×3 window surrounding each pixel.
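A direct (unoptimized) sketch of the 3×3 median filter in NumPy, leaving the border untouched as the instructions allow (names are illustrative):

```python
import numpy as np

def median_filter(x):
    """3x3 median filter; border pixels are copied through unchanged."""
    y = x.astype(np.float64).copy()
    rows, cols = x.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            y[i, j] = np.median(x[i-1:i+2, j-1:j+2])  # median of the 3x3 window
    return y

# A single "salt" impulse is removed completely, while the constant
# background is left untouched -- the behavior Figure 8 illustrates.
x = np.zeros((5, 5))
x[2, 2] = 255.0
y = median_filter(x)
```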

Download the image files noise1.tif and noise2.tif . These images are versions of the previous race.tif image that have been degraded by additive white Gaussian noise and "salt and pepper" noise, respectively. Read them into Matlab, and display them using image. Filter each of the noisy images with both the 7×7 Gaussian filter (σ²=1) and the 3×3 median filter. Display the results of the filtering, and place a title on each figure. (You can open several figure windows using the figure command.) Compare the filtered images with the original noisy images. Print out the four filtered pictures.

#### INLAB REPORT

1. Hand in your code for gaussFilter and medianFilter.
2. Hand in the plot of |HGauss(ω1,ω2)|.
3. Hand in the results of filtering the noisy images (4 pictures).
4. Discuss the effectiveness of each filter for the case of additive white Gaussian noise. Discuss both positive and negative effects that you observe for each filter.
5. Discuss the effectiveness of each filter for the case of "salt and pepper" noise. Again, discuss both positive and negative effects that you observe for each filter.

### Image Sharpening

Image sharpening techniques are used primarily to enhance an image by highlighting details. Since fine details of an image are the main contributors to its high frequency content, highpass filtering often increases the local contrast and sharpens the image. Some typical highpass filter impulse responses used for contrast enhancement are shown in Figure 9. The frequency response of each of these filters is shown in Figure 10.

An example of highpass filtering is illustrated in Figure 11. It should be noted from this example that the processed image has enhanced contrast, however it appears more noisy than the original image. Since noise will usually contribute to the high frequency content of an image, highpass filtering has the undesirable effect of accentuating the noise.

### Sharpening Exercise

Download the file blur.tif for the following section.

In this section, we will introduce a sharpening filter known as an unsharp mask. This type of filter subtracts out the “unsharp” (low frequency) components of the image, and consequently produces an image with a sharper appearance. Thus, the unsharp mask is closely related to highpass filtering. The process of unsharp masking an image f(i,j)f(i,j) can be expressed by

$$g(i,j) = \alpha f(i,j) - \beta \left[ f(i,j) ** h(i,j) \right]$$
(10)

where h(i,j) is a lowpass filter, and α and β are positive constants such that α − β = 1.
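Equation 10 can be sketched directly in NumPy: lowpass the image with h, then form the weighted difference. In this illustrative version the helper name and the direct "same"-size convolution loop are my own choices, not part of the lab:

```python
import numpy as np

def unsharp_mask(f, h, alpha, beta):
    """g = alpha*f - beta*(f ** h), with alpha - beta = 1.
    The 'same'-size 2-D convolution is done by direct summation."""
    rows, cols = f.shape
    n = h.shape[0]                          # h is assumed n x n with n odd
    k = n // 2
    pad = np.pad(f.astype(np.float64), k)   # zero-pad so output size matches f
    low = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            low[i, j] = np.sum(pad[i:i+n, j:j+n] * h[::-1, ::-1])
    return alpha * f - beta * low
```

Because α − β = 1 and h sums to one, flat interior regions pass through unchanged; only the high-frequency detail is amplified.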

Analytically calculate the frequency response of the unsharp mask filter in terms of α, β, and h(i,j) by finding an expression for

$$\frac{G(\omega_1, \omega_2)}{F(\omega_1, \omega_2)}$$
(11)

Using your gaussFilter function from the "Smoothing Exercise" section, create a 5×5 Gaussian filter with σ²=1. Use Matlab to compute the frequency response of an unsharp mask filter (use your expression for Equation 11), using the Gaussian filter as h(i,j), α=5 and β=4. The size of the calculated frequency response should be 32×32. Plot the magnitude of this response in the range [-π,π]×[-π,π] using mesh, and label the axes. You can change the viewing angle of the mesh plot with the view command. Print out this response.

Download the image file blur.tif and read it into Matlab. Apply the unsharp mask filter with the parameters specified above to this image, using Equation 10. Use image to view the original and processed images. What effect did the filtering have on the image? Label the processed image and print it out.

Now try applying the filter to blur.tif, using α=10 and β=9. Compare this result to the previous one. Label the processed image and print it out.

#### INLAB REPORT

1. Hand in your derivation for the frequency response of the unsharp mask.
2. Hand in the labeled plot of the magnitude response. Compare this plot to the highpass responses of Figure 10. In what ways is it similar to these frequency responses?
3. Hand in the two processed images.
4. Describe any positive and negative effects of the filtering that you observe. Discuss the influence of the α and β parameters.
