Summary: This module describes how we created our blurry images and applied the FTVd algorithm.

Background

What causes the image distortion? Using the given model for our blurry observation, there are two causes: *K* and omega.

*K* is a blurring kernel: a matrix convolved with our original image *u*, performing a linear operation that represents the effects of a particular kind of blur. Omega is a term representing the additive noise introduced into our imperfect observation by the camera and the environment.

To model and recover our image, we applied an algorithm known as Fast Total Variation deconvolution (FTVd).

Fast Total Variation deconvolution takes advantage of our problem structure and assumes several facts about the information in our image. Because of the additive noise in all of our observations, we cannot directly recover our desired image from the blurry observation by performing the inverse operation, deconvolving the observation with the blurring kernel. Instead, we first try to minimize the noise so as to approximate an ideal blurring. Then we can invert the problem to find *u*.

To do this we model our problem using the following equation:

min_u  Σ_i ||D_i u|| + (μ/2) ||K u − f||²

where D_i u is the discrete gradient of *u* at pixel i, *f* is the blurry observation, and μ weights the two terms.

In the equation above, we have two terms. The first is the total variation norm, a discretized gradient measured across the entire image; the second is the data fidelity term, which tries to make the difference between our blurry observation and an ideally blurred image very small. If that difference were zero, we could very easily perform the deconvolution to recover *u*. So, the minimization step takes us as close as possible to a problem with a closed-form solution. This model supposes a few facts about our problem. Primarily, it assumes that the majority of real-world scenes have flat, uniform surfaces. This means that our image should have very few nonzero gradients, while the additive noise introduces many random peaks, and thus nonzero gradients, to be minimized.
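As a concrete check on the two terms, here is a minimal numpy sketch of the objective. The project's own code is the MATLAB FTVd package; the function name `tv_objective`, the periodic boundary handling, and the use of Python here are illustrative assumptions.

```python
import numpy as np

def tv_objective(u, k, f, mu):
    """Evaluate TV(u) + (mu/2) * ||k * u - f||^2 for an image u,
    blur kernel k, observation f, and fidelity weight mu.
    Illustrative sketch; the project itself used the MATLAB FTVd code."""
    # Discrete gradient via forward differences with periodic wrap-around.
    dx = np.roll(u, -1, axis=1) - u
    dy = np.roll(u, -1, axis=0) - u
    tv = np.sum(np.sqrt(dx**2 + dy**2))          # total variation norm

    # Circular convolution k * u computed through the FFT.
    K = np.fft.fft2(k, s=u.shape)
    ku = np.real(np.fft.ifft2(K * np.fft.fft2(u)))
    fidelity = 0.5 * mu * np.sum((ku - f)**2)    # data fidelity term
    return tv + fidelity
```

A perfectly flat image observed through an identity kernel with no noise gives an objective of zero, matching the intuition that both terms vanish for an ideal, noise-free observation.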

The full form of the Total Variation Regularization transforms our first model into the following:

min_{w,u}  Σ_i ( ||w_i|| + (β/2) ||w_i − D_i u||² ) + (μ/2) ||K u − f||²

This equation adds another term to our model. Here we try to minimize the difference between the image gradients and an auxiliary term **w**, while simultaneously trying to make **w** as small as possible. When the beta parameter in the second term is very large, this penalty enforces agreement with the original model and helps establish convergence. For our purposes, we have used the convergence parameters given by the FTVd reference, which chooses an appropriate value of beta. We can group these terms together as the regularizing term, which constrains our model so that we have a well-conditioned problem despite the noisy observation.
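Minimizing over **w** with *u* fixed has a closed-form solution: a pixelwise two-dimensional soft shrinkage of the gradient by 1/β. The numpy sketch below illustrates this step; the function name `shrink_w` and the Python formulation are our assumptions, not the FTVd package's own code.

```python
import numpy as np

def shrink_w(dx, dy, beta):
    """Closed-form w-subproblem: min_w ||w|| + (beta/2) * ||w - Du||^2,
    solved pixelwise by shrinking the gradient magnitude by 1/beta.
    Illustrative sketch of the shrinkage step used in TV solvers."""
    mag = np.sqrt(dx**2 + dy**2)
    mag[mag == 0] = 1.0                            # avoid division by zero
    scale = np.maximum(mag - 1.0 / beta, 0) / mag  # shrink, never negative
    return scale * dx, scale * dy
```

Note that small gradients (magnitude below 1/β) are set exactly to zero, which is how the model suppresses the random noise peaks while keeping strong edges.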

There are many other possible forms for constrained minimization, and different constraints affect how well the model converges. The FTVd algorithm performs this minimization using two FFTs and one inverse FFT per iteration, giving a complexity of O(N log N). In particular, we note that FTVd converges quickly in relatively few iterations, but the method sacrifices some clarity on textured surfaces. This algorithm is ideal for quick noise removal.

It is also important to note that this algorithm cannot function without being given the blurring kernel. In many real-world situations, such as the random motion of a handshake, it is impossible to know the blurring kernel with complete accuracy. In such cases it would be necessary to use blind deconvolution, which estimates the point spread function of the blur kernel. In general, it is always necessary to know the point spread function: any signal received is the convolution of the input signal with the impulse response of the receiving system. So, to fully recover the input signal, we need the impulse response of our camera and of any other functions that have acted on our input, namely the blur and the noise.

The FTVd code has been attached to this module. It has been provided open source, with many thanks to Dr. Yin and his group [5].

Approach

In our project, we model four different types of real-world blur, each under two possible types of noise. We break these blurs into three categories and model each with its respective blurring kernel: handshake motion, motion of the observed object, and out-of-focus blur.

For handshake blur, we used data collected experimentally by our Microsoft reference to develop a kernel [1]. We chose a kernel that represents a small, defined shake at the point of observation and applied it uniformly to the whole image. This data is similar to the linear motion function, but represents a two-dimensional curve. We could have modeled this situation by piecing together several linear motion curves, but this kernel more accurately represents the random motion possible when holding the image-capturing device. Inherently, the motion of a handshake is unique to each observation, but here we have chosen a specific motion blur to simplify our recovery process.

Here is our handshake kernel, represented as a black and white image:

For the linear motion of an observed object, we have also chosen a fixed value for the motion to simplify our calculations. We assume that the observed object is large compared to the whole observation, so that we can blur the whole image uniformly. To develop this kernel we use the MATLAB function fspecial('motion', len, theta), which produces a kernel representing motion of the specified length at a specified angle [2].
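For readers without MATLAB, a rough numpy analogue of this kernel can be built by stamping samples along a line and normalizing. The function name `motion_kernel` and its simple nearest-pixel rasterization are our assumptions; MATLAB's fspecial uses a more careful anti-aliased construction.

```python
import numpy as np

def motion_kernel(length, theta_deg):
    """Rough analogue of MATLAB's fspecial('motion', len, theta):
    a normalized kernel supported on a straight line of the given
    length at angle theta (degrees, counter-clockwise)."""
    theta = np.deg2rad(theta_deg)
    n = int(np.ceil(length))
    half = (n - 1) / 2.0
    size = n if n % 2 == 1 else n + 1     # odd support so it is centered
    k = np.zeros((size, size))
    c = size // 2
    # Stamp evenly spaced samples along the line through the center.
    for t in np.linspace(-half, half, 4 * n):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        k[y, x] = 1.0
    return k / k.sum()
```

For example, a length-5 kernel at angle 0 is a normalized horizontal line through the center of a 5x5 matrix.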

Here is the linear motion kernel as a black and white image:

To create a kernel representing out-of-focus blur, we used a two-dimensional disc and applied this disc to two different situations. The first represents a situation in which the picture was completely out of focus. This could correspond to a focal point somewhere very far in the distance or infinitesimally close to the camera. This results in a totally blurred image.
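The disc (pillbox) kernel itself is easy to construct; the numpy sketch below is similar in spirit to MATLAB's fspecial('disk', r), though without that function's fractional edge weighting. The name `disc_kernel` is our own.

```python
import numpy as np

def disc_kernel(radius):
    """Normalized 2-D disc (pillbox) kernel modelling out-of-focus blur:
    uniform weight inside the given radius, zero outside."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]      # coordinate grid around 0
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()                       # weights sum to one
```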

The next approach approximates a focal point about 10 feet behind Willy’s statue. This means that the foreground is in focus and the background out of focus. To create this image, we had to blur our image in two steps. We found that, in general, there is no real distance information observable in the received signal, so we had to inspect the boundaries within the image to determine which areas could be considered foreground and which background. Our kernel could then be applied only to the background of the image. We applied the FTVd algorithm to the whole image uniformly to recover the image.
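The two-step blur can be sketched as blurring the whole image and then compositing with a hand-drawn foreground mask. This is an illustrative numpy version (function name `blur_background` and the circular-convolution boundary handling are our assumptions); as noted above, the mask must be supplied by hand since no depth information is recoverable from the image itself.

```python
import numpy as np

def blur_background(u, k, mask):
    """Blur the whole image with kernel k (circular convolution via
    the FFT), then keep the sharp pixels wherever mask == 1 (the
    foreground) and the blurred pixels elsewhere (the background)."""
    K = np.fft.fft2(k, s=u.shape)
    blurred = np.real(np.fft.ifft2(K * np.fft.fft2(u)))
    return mask * u + (1 - mask) * blurred
```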

This is our 2D disc representing an out of focus lens:

We applied each of these blur kernels under two different types of additive noise.

The first is Gaussian white noise, caused by thermal Johnson–Nyquist noise or shot noise in the transistors of our camera circuitry. To create this noise we followed the model in the FTVd package rather than using a predefined noise function: we created a matrix of random numbers drawn from a Gaussian distribution and added it to the convolution of *K* and *u*.
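The construction of the observation can be sketched as follows; this mirrors the structure just described (blur, then add Gaussian noise), with the function name `make_observation`, the noise level `sigma`, and the circular-convolution boundaries as our illustrative assumptions.

```python
import numpy as np

def make_observation(u, k, sigma, seed=0):
    """Form the blurry, noisy observation f = k * u + omega:
    circular convolution of the true image with the kernel, plus
    zero-mean Gaussian white noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    K = np.fft.fft2(k, s=u.shape)
    ku = np.real(np.fft.ifft2(K * np.fft.fft2(u)))   # k * u
    return ku + sigma * rng.standard_normal(u.shape)  # + omega
```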

Next, we applied our blur kernels under L1 noise. L1 noise can be caused by random failures of sensor pixels, and can be characterized mathematically as a scattering of Dirac delta functions: each pixel is either untouched or driven to an extreme value. Due to the visual character of the corruption it causes, L1 noise is often referred to as ‘salt and pepper’ noise. We added this L1 noise to our image using the MATLAB function imnoise.
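A rough numpy analogue of MATLAB's imnoise(u, 'salt & pepper', d) is shown below; the function name `salt_and_pepper` and the exact split of the density between the two extremes are our assumptions.

```python
import numpy as np

def salt_and_pepper(u, density, seed=0):
    """Impulse ('salt and pepper') noise: each pixel is independently
    forced to 0 or to the image maximum with probability density/2
    each, modelling randomly failed sensor pixels."""
    rng = np.random.default_rng(seed)
    out = u.copy()
    r = rng.random(u.shape)
    out[r < density / 2] = 0.0          # "pepper": dead pixels
    out[r > 1 - density / 2] = u.max()  # "salt": saturated pixels
    return out
```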

The following image shows our handshake kernel applied with L2 (Gaussian) additive noise, PSNR = 24.27 dB:

The next image shows our foreground/background distinction in the presence of L1 noise, PSNR = 14.94 dB: