Image Denoising via the Redundant Wavelet Transform

Module by: Stephen Kruzick, Colleen Kenney

Summary: This report summarizes work done as part of the Wavelet Based Image Analysis PFUG under Rice University's VIGRE program. VIGRE is a program of Vertically Integrated Grants for Research and Education in the Mathematical Sciences under the direction of the National Science Foundation. A PFUG is a group of Postdocs, Faculty, Undergraduates and Graduate students formed around the study of a common problem. This module introduces the redundant discrete wavelet transform as well as two level-dependent estimators that could potentially be used for image denoising, the Bishrink algorithm and the Bayesian Least Squares-Gaussian Scale Mixture algorithm. A simulation designed to evaluate the efficacy of each of these methods for denoising astronomical image data is described, and its results are presented and discussed. This Connexions module describes work conducted as part of Rice University's VIGRE program, supported by National Science Foundation grant DMS-0739420.

Introduction

The redundant wavelet transform (RWT) is widely used to denoise signals and images. Here, we consider two denoising methods from the literature and apply them to astronomical images, with the aim of obtaining images in which very faint objects can be distinguished from noise.

The paper is organized as follows. In "Redundant Wavelet Transform", we introduce algorithms used to compute the RWT. In "Denoising Algorithms based on the RWT", we discuss denoising methods based on the RWT. In "Denoising Simulation", we describe the simulation and present the results of the implemented methods, which are discussed further in "Conclusions".

Redundant Wavelet Transform

Undecimated algorithm

The redundant discrete wavelet transform, similar in nature to the discrete wavelet transform, decomposes data into low-pass scaling (trend) and high-pass wavelet (detail) coefficients to obtain a projective decomposition of the data into different scales. More specifically, at each level the transform uses the scaling coefficients to compute the next level of scaling and wavelet coefficients. The difference is that none of the coefficients are discarded through decimation, as they are in the discrete wavelet transform; all are retained, introducing a redundancy. This makes the transform well suited to denoising images, where the noise is usually spread over a small number of neighboring pixels. The Rice Wavelet Toolbox used to compute the transform in the simulation implements the redundant wavelet transform through the undecimated algorithm, which, as its name suggests, is similar to the discrete wavelet transform but omits downsampling (decimation) in computation of the transform and upsampling in computation of the inverse transform [1].
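
As a brief illustration, the following minimal sketch computes an undecimated decomposition using the stationary wavelet transform routines of the PyWavelets library, an assumed stand-in for the Rice Wavelet Toolbox named above; the image and parameter choices are placeholders.

    # Undecimated (stationary) 2-D wavelet transform: no downsampling, so
    # every subband keeps the full image size -- the redundancy described above.
    import numpy as np
    import pywt

    image = np.random.rand(256, 256)  # placeholder image, power-of-two dimensions

    # Two-level redundant decomposition with the length-8 Daubechies filter (db4).
    coeffs = pywt.swt2(image, wavelet="db4", level=2)

    # Each level yields an approximation image and (LH, HL, HH) detail images,
    # all of them 256x256.
    for approx, (lh, hl, hh) in coeffs:
        print(approx.shape, lh.shape, hl.shape, hh.shape)

    # The inverse transform (omitting upsampling) reconstructs the image exactly.
    reconstructed = pywt.iswt2(coeffs, wavelet="db4")
    assert np.allclose(image, reconstructed)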

À trous algorithm

Another method of computing the redundant wavelet transform, the à trous algorithm, differs from the undecimated algorithm by modifying the low-pass and high-pass filters at each consecutive level. The algorithm up-samples the low-pass filter at each level by inserting zeros between each of the filter's coefficients. The high-pass coefficients are then computed as the difference between the low-pass images at two consecutive levels. To compute the inverse transform, the detail coefficients from all levels are added to the final low-resolution image [1]. While inefficient in implementation, the à trous algorithm provides additional insight into the redundant discrete wavelet transform.
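
A one-dimensional sketch of the à trous construction follows; the B3-spline kernel is an assumed example filter (the text does not specify one), and the function name is ours. It shows the zero-insertion step, the difference-of-low-pass detail computation, and the additive inverse transform.

    # A trous algorithm: upsample the low-pass filter by inserting zeros at
    # each level; details are differences of consecutive low-pass outputs.
    import numpy as np
    from scipy.ndimage import convolve1d

    def atrous(signal, h, levels):
        smooth = signal.astype(float)
        details = []
        for j in range(levels):
            # Insert 2**j - 1 zeros between the taps of the base filter h.
            h_up = np.zeros((len(h) - 1) * 2**j + 1)
            h_up[:: 2**j] = h
            next_smooth = convolve1d(smooth, h_up, mode="wrap")
            details.append(smooth - next_smooth)  # high-pass by subtraction
            smooth = next_smooth
        return details, smooth

    sig = np.random.rand(128)
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # assumed B3-spline kernel
    details, final_smooth = atrous(sig, h, levels=3)
    # Inverse transform: add the details from all levels to the final
    # low-resolution signal.
    assert np.allclose(sig, sum(details) + final_smooth)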

Denoising Algorithms based on the RWT

Soft-Thresholding

In the traditional method of soft-thresholding with the universal threshold, coefficients whose magnitudes fall below the threshold $\lambda = \sigma\sqrt{2\log N}$ are shrunk to zero, while those above it are shrunk toward zero by $\lambda$ [2]. On orthogonal wavelet transforms, soft-thresholding has been shown to exhibit the following property:

Theorem 1 For a sequence of i.i.d. random variables $z_i \sim N(0,1)$, $P\left( \max_{1 \le i \le N} |z_i| \le \sqrt{2\log N} \right) \to 1$ as $N \to \infty$.
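
A minimal sketch of this shrinkage rule, assuming the noise level sigma is known, is given below; it is illustrative code rather than any toolbox's implementation. Applied to pure-noise coefficients, Theorem 1 says the threshold should remove nearly everything.

    # Soft-thresholding with the universal threshold sigma * sqrt(2 log N).
    import numpy as np

    def soft_threshold(coeffs, sigma):
        lam = sigma * np.sqrt(2.0 * np.log(coeffs.size))  # universal threshold
        # Below the threshold: shrunk to zero; above: shrunk toward zero by lam.
        return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

    noisy = np.random.randn(1024)  # i.i.d. N(0, 1) "pure noise" coefficients
    denoised = soft_threshold(noisy, sigma=1.0)
    print(np.count_nonzero(denoised))  # almost always 0, per Theorem 1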

Bivariate Shrinkage

Sendur and Selesnick [5] proposed a bivariate shrinkage estimator, estimating the marginal variance of the wavelet coefficients from small neighborhoods around each coefficient as well as from the corresponding neighborhoods of the parent coefficients. The resulting method maintains the simplicity and intuition of soft-thresholding.

We can write

\[ y_k = w_k + n_k , \tag{1} \]

where $w_k$ contains the parent and child wavelet coefficients of the true, noise-free image and $n_k$ is the noise. For the variances, we then have

\[ \sigma_y^2 = \sigma_k^2 + \sigma_n^2 . \tag{2} \]

Noting that we will always be working with one coefficient at a time, we suppress the index $k$ in what follows.

In [4], Sendur and Selesnick proposed a bivariate pdf for the wavelet coefficient $w_1$ and its parent $w_2$:

\[ p_w(\mathbf{w}) = \frac{3}{2\pi\sigma^2} \exp\left( -\frac{\sqrt{3}}{\sigma} \sqrt{w_1^2 + w_2^2} \right), \tag{3} \]

where the marginal variance $\sigma^2$ depends on the coefficient index $k$. They derived the corresponding MAP estimator to be

\[ \hat{w}_1 = \frac{\left( \sqrt{y_1^2 + y_2^2} - \frac{\sqrt{3}\,\sigma_n^2}{\sigma} \right)_+}{\sqrt{y_1^2 + y_2^2}}\, y_1 . \tag{4} \]

To estimate the noise variance $\sigma_n^2$ from the noisy wavelet coefficients, they used the median absolute deviation (MAD) estimator

\[ \hat{\sigma}_n^2 = \left( \frac{\operatorname{median}(|y_i|)}{0.6745} \right)^2, \qquad y_i \in \text{subband } HH, \tag{5} \]

where the estimator uses the wavelet coefficients from the finest scale.

The marginal variance $\sigma_y^2$ was estimated using neighborhoods around each wavelet coefficient as well as the corresponding neighborhood of the parent wavelet coefficient. For instance, for a 7x7 window, we take the neighborhood around $y_{1,(4,4)}$ to be the wavelet coefficients located in the square with corners (1, 1), (1, 7), (7, 7), (7, 1), as well as the coefficients in the second level located in the same square; this neighborhood is denoted $N(k)$. The estimate used for $\sigma_y^2$ is given by

\[ \hat{\sigma}_y^2 = \frac{1}{M} \sum_{y_i \in N(k)} y_i^2 , \tag{6} \]

where $M$ is the size of the neighborhood $N(k)$. We can then estimate the standard deviation of the true wavelet coefficients through Equation 2:

\[ \hat{\sigma} = \sqrt{ \left( \hat{\sigma}_y^2 - \hat{\sigma}_n^2 \right)_+ } . \tag{7} \]

We then have all the quantities needed to apply Equation 4.
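
The pieces above assemble into the following minimal sketch of the bivariate shrinkage estimator for one subband; the names (bishrink, child, parent, hh) are ours, the parent subband is assumed already upsampled to the child's resolution, and scipy's uniform_filter stands in for the windowed sum of Equation 6.

    # Bivariate shrinkage (Equations 4-7) for one subband.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def bishrink(child, parent, hh, window=7):
        # Equation 5: MAD estimate of the noise variance from the finest HH subband.
        sigma_n2 = (np.median(np.abs(hh)) / 0.6745) ** 2
        # Equation 6: local marginal variance over the window in both levels
        # (M = 2 * window**2 coefficients per neighborhood).
        sigma_y2 = uniform_filter(child**2 + parent**2, size=window) / 2.0
        # Equation 7: standard deviation of the true coefficients; floored at a
        # tiny positive value (the positive part in the paper) to avoid dividing
        # by zero below.
        sigma = np.sqrt(np.maximum(sigma_y2 - sigma_n2, 1e-12))
        # Equation 4: the bivariate MAP shrinkage rule.
        r = np.sqrt(child**2 + parent**2)
        gain = np.maximum(r - np.sqrt(3.0) * sigma_n2 / sigma, 0.0)
        return gain / np.maximum(r, 1e-12) * child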

BLS-GSM

Portilla et al. [3] propose the BLS-GSM method for denoising digital images, which may be used with orthogonal and redundant wavelet transforms as well as with pyramidal schemes. They model neighborhoods of coefficients at adjacent positions and scales as the product of a Gaussian vector and a hidden positive scalar multiplier, with the neighborhoods defined similarly to those in the Bishrink algorithm. The coefficients within each neighborhood around a reference coefficient of a subband are modeled with a Gaussian scale mixture (GSM). The chosen prior distribution is Jeffreys' prior, $p_z(z) \propto 1/z$.

They assume the image has additive white Gaussian noise, although the algorithm also allows for nonwhite Gaussian noise. For a vector $y$ corresponding to a neighborhood of $N$ observed coefficients, we have

\[ y = x + w = \sqrt{z}\, u + w . \tag{8} \]

The BLS-GSM algorithm is as follows:

  1. Decompose the image into subbands
  2. For the HH, HL, and LH subbands:
    1. Compute the noise covariance $C_w$ from the image-domain noise covariance
    2. Estimate $C_y$, the noisy neighborhood covariance
    3. Estimate $C_u$ using $C_u = C_y - C_w$
    4. Compute $\Lambda$ and $M$, where $Q$ and $\Lambda$ are given by the eigenvector/eigenvalue decomposition $Q \Lambda Q^T$ of the matrix $S^{-1} C_u S^{-T}$, $S$ is the symmetric square root of the positive definite matrix $C_w$, and $M = SQ$ (see the sketch following this list)
    5. For each neighborhood
      1. For each value $z$ in the integration range
        1. Compute $E[x_c \mid y, z] = \sum_{n=1}^{N} \frac{z\, m_{cn} \lambda_n v_n}{z \lambda_n + 1}$, where the $m_{ij}$ are the elements of $M$, $v = M^{-1} y$, the $\lambda_n$ are the diagonal elements of $\Lambda$, and $c$ is the index of the reference coefficient.
        2. Compute the conditional density $p(y \mid z)$
      2. Compute the posterior $p(z \mid y)$
      3. Compute $E[x_c \mid y]$
  3. Reconstruct the denoised image from the processed subbands and the lowpass residual
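
The covariance and eigendecomposition steps (2.1-2.4) can be made concrete with the following minimal sketch, which is our own illustration rather than the authors' implementation; it assumes the subband's neighborhoods have already been gathered into the rows of an array and that $C_w$ is known.

    # Steps 2.2-2.4 of BLS-GSM for one subband: estimate C_u = C_y - C_w and
    # form the matrices Lambda and M = SQ used in the expectation formula.
    import numpy as np

    def gsm_matrices(neighborhoods, C_w):
        """neighborhoods: (num_neighborhoods, N) array; C_w: N x N noise covariance."""
        C_y = np.cov(neighborhoods, rowvar=False)  # step 2.2
        # Step 2.3, with negative eigenvalues zeroed so C_u stays positive
        # semidefinite.
        evals, evecs = np.linalg.eigh(C_y - C_w)
        C_u = evecs @ np.diag(np.maximum(evals, 0.0)) @ evecs.T
        # Symmetric square root S of C_w.
        dw, Ew = np.linalg.eigh(C_w)
        S = Ew @ np.diag(np.sqrt(dw)) @ Ew.T
        S_inv = np.linalg.inv(S)
        # Step 2.4: eigendecomposition of S^{-1} C_u S^{-T}, then M = SQ.
        lam, Q = np.linalg.eigh(S_inv @ C_u @ S_inv.T)
        return lam, S @ Q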

Denoising Simulation

Simulation description

In order to compare and evaluate the efficacy of the Bishrink and BLS-GSM algorithms for denoising image data, a simulation was developed to quantitatively examine their performance after addition of random noise to otherwise approximately noiseless images with a variety of features representative of those found in astronomical images. Specifically, we considered the images encoded in the files Moon.tif, which primarily demonstrates smoothly curving features, and Cameraman.tif, which exhibits a range of both smooth and coarse features, both of which are widely available and distributed with the MATLAB Image Processing Toolbox.

In preparation for the simulation, the images were preprocessed so that each was represented as a grayscale pixel matrix taking values on the interval [0,1], with square dimensions equal to a convenient power of two. Noisy versions of each image were generated by superposing on the image matrix a random matrix with Gaussian distributed pixel elements, using noise variance values {0.01, 0.1, 1}. For each noise variance level and original image, 100 contaminated images were created in this way using a set of 100 different random generator seeds, the same set for each noise level and original image. A redundant discrete wavelet transform of each of these contaminated images was computed using the length-8 Daubechies filters, and the denoised wavelet coefficients were estimated using both the Bishrink and the BLS-GSM algorithms as previously described. Computing the inverse redundant discrete wavelet transform of the denoised wavelet coefficients then yielded 100 images denoised with the Bishrink algorithm and 100 images denoised with the BLS-GSM algorithm for each original image and noise variance level.
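
The contamination step might look like the following minimal sketch; the function name and the consecutive-integer seed scheme are illustrative assumptions, since the text does not specify how the 100 seeds were chosen.

    # Generate 100 noisy copies of an image at each noise variance level,
    # reusing the same seeds across levels as described above.
    import numpy as np

    def make_noisy_images(image, noise_variances=(0.01, 0.1, 1.0), n_trials=100):
        noisy = {}
        for var in noise_variances:
            trials = []
            for seed in range(n_trials):  # same seed set for every noise level
                rng = np.random.default_rng(seed)
                trials.append(image + rng.normal(0.0, np.sqrt(var), image.shape))
            noisy[var] = trials
        return noisy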

Using this simulated data, the performance of the two denoising methods on each image at each noise contamination level was evaluated using the five statistical measures described here. The first of these was the mean square error (MSE), calculated as the average of

\[ \frac{1}{n} \sum_{i=1}^{n} \left( f(x_i) - \hat{f}(x_i) \right)^2 \tag{9} \]

over all 100 denoisings. Related to this was the root mean square error (RMSE), computed as the square root of the mean square error. A third was the root mean square bias (RMSB), calculated by

\[ \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( f(x_i) - \bar{f}(x_i) \right)^2 } \tag{10} \]

where $\bar{f}(x_i)$ is the average of $\hat{f}(x_i)$ over all 100 denoisings. Two more, the maximum deviation (MXDV), calculated as the average of

\[ \max_{1 \le i \le n} \left| f(x_i) - \hat{f}(x_i) \right| \tag{11} \]

over all 100 denoisings, and the $L_1$ error, calculated as the average of

\[ \sum_{i=1}^{n} \left| f(x_i) - \hat{f}(x_i) \right| \tag{12} \]

over all 100 denoisings, were also examined. The results of this simulation now follow.
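
For reference, the five measures can be computed as in this minimal sketch, where truth is the clean image and estimates holds the 100 denoised images for one image and noise level; the names are ours.

    # The five error measures of Equations 9-12, averaged over denoisings
    # where the text calls for it.
    import numpy as np

    def measures(truth, estimates):
        errs = [est - truth for est in estimates]
        mse = np.mean([np.mean(e**2) for e in errs])       # Equation 9, averaged
        rmse = np.sqrt(mse)
        mean_est = np.mean(estimates, axis=0)              # f-bar in Equation 10
        rmsb = np.sqrt(np.mean((truth - mean_est) ** 2))   # Equation 10
        mxdv = np.mean([np.max(np.abs(e)) for e in errs])  # Equation 11, averaged
        l1 = np.mean([np.sum(np.abs(e)) for e in errs])    # Equation 12, averaged
        return {"MSE": mse, "RMSE": rmse, "RMSB": rmsb, "MXDV": mxdv, "L1": l1}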

Bishrink results

Table 1: Simulation measures for noise variance 0.01

Measure   Cameraman   Moon
MSE       0.0019      0.0004
RMSE      0.0442      0.0188
L1        2019.9      3160.4
RMSB      0.0274      0.0117
MXDV      0.3309      0.2634

Figure 1: Cameraman with noise variance 0.01 (bicam1.png)
Figure 2: Moon with noise variance 0.01 (bimoon1.png)

Table 2: Simulation measures for noise variance 0.1

Measure   Cameraman   Moon
MSE       0.0063      0.0012
RMSE      0.0296      0.0345
L1        3612.4      5880.7
RMSB      0.0568      0.0213
MXDV      0.6147      0.4116

Figure 3: Cameraman with noise variance 0.1 (bicam2.png)
Figure 4: Moon with noise variance 0.1 (bimoon2.png)

Table 3: Simulation measures for noise variance 1

Measure   Cameraman   Moon
MSE       0.0173      0.0052
RMSE      0.1315      0.0722
L1        6183.7      11839
RMSB      0.0934      0.0389
MXDV      0.8991      0.9774

Figure 5: Cameraman with noise variance 1 (bicam3.png)
Figure 6: Moon with noise variance 1 (bimoon3.png)

BLS-GSM results

Table 4: Simulation measures for noise variance 0.01

Measure   Cameraman   Moon
MSE       0.0015      0.0003
RMSE      0.0390      0.0165
L1        1711.0      2718.6
RMSB      0.0283      0.0141
MXDV      0.3192      0.2635

Figure 7: Cameraman with noise variance 0.01 (blscam1.png)
Figure 8: Moon with noise variance 0.01 (blsmoon1.png)

Table 5: Simulation measures for noise variance 0.1

Measure   Cameraman   Moon
MSE       0.0052      0.0008
RMSE      0.0718      0.0288
L1        3111.5      4786.5
RMSB      0.0583      0.0224
MXDV      0.5862      0.3337

Figure 9: Cameraman with noise variance 0.1 (blscam2.png)
Figure 10: Moon with noise variance 0.1 (blsmoon2.png)

Table 6: Simulation measures for noise variance 1

Measure   Cameraman   Moon
MSE       0.0136      0.0017
RMSE      0.1167      0.0410
L1        5283.5      1500.2
RMSB      0.0970      0.0346
MXDV      0.7750      0.4614

Figure 11: Cameraman with noise variance 1 (blscam3.png)
Figure 12: Moon with noise variance 1 (blsmoon3.png)

Conclusions

The results obtained from this simulation now allow us to evaluate and comment upon the suitability of each of the two methods for the analysis of astronomical image data. As the quantitative simulation results show, the BLS-GSM algorithm performed more accurately than the Bishrink algorithm in nearly every measure, consistently across images and noise levels. That does not, however, indicate that it would be the method of choice in all circumstances. While BLS-GSM outperformed the Bishrink algorithm in the denoising simulation, the measures calculated for the Bishrink algorithm indicate that it also produced a reasonably accurate image estimate. Moreover, the denoised images produced by the Bishrink simulation exhibit less qualitative smoothing of fine features, such as the craters of the moon and the grass of the field. The smoothing observed with the BLS-GSM algorithm could make classification of fine, dim objects difficult, as they are blended into the background. Thus, the Bishrink algorithm's success in preserving fine signal details while computing an accurate image estimate is likely to outweigh overall accuracy in applications searching for small, faint objects such as extrasolar planets, while the overall accuracy of the BLS-GSM algorithm recommends it for images with coarse, bright features.

References

  1. Gyaourova, A., Kamath, C., and Fodor, I. K. (2002). Undecimated wavelet transforms for image denoising. Technical Report UCRL-ID-150931, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory.
  2. Donoho, D. L. and Johnstone, I. M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3), 425-455.
  3. Portilla, J., Strela, V., Wainwright, M. J., and Simoncelli, E. P. (2003). Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing, 12(11), 1338-1351.
  4. Sendur, L. and Selesnick, I. W. (2002). A bivariate shrinkage function for wavelet-based denoising. In Proceedings of IEEE ICASSP.
  5. Sendur, L. and Selesnick, I. W. (2002). Bivariate shrinkage with local variance estimation. IEEE Signal Processing Letters, 9(12), 438-441.
