
# A Class of Fast Algorithms for Total Variation Image Restoration

Summary: This report summarizes work done as part of the Imaging and Optimization PFUG under Rice University's VIGRE program. VIGRE is a program of Vertically Integrated Grants for Research and Education in the Mathematical Sciences under the direction of the National Science Foundation. A PFUG is a group of Postdocs, Faculty, Undergraduates and Graduate students formed around the study of a common problem. This module is based on the recent work of Junfeng Yang (jfyang2992@yahoo.com.cn) from Nanjing University and Wotao Yin, Yin Zhang, and Yilun Wang (wotao.yin, yzhang, yilun.wang@rice.edu) from Rice University. In image formation, the observed images are usually blurred by optical instruments and/or the transfer medium and contaminated by noise, which makes image restoration a classical problem in image processing. Among various variational deconvolution models, those based upon total variation (TV) are known to preserve edges while removing unwanted fine details in an image, and have thus attracted much research interest since the pioneering work by Rudin, Osher and Fatemi. However, TV-based models are difficult to solve due to the nondifferentiability and the universal coupling of variables. In this module, we present, analyze and test a class of alternating minimization algorithms for reconstructing images from blurry and noisy observations with TV-like regularization. This class of algorithms is applicable to both single- and multi-channel images with either Gaussian or impulsive noise, and permits cross-channel blurs when the underlying image has more than one channel. Numerical results are given to demonstrate the effectiveness of the proposed algorithms.

## Introduction

In electrical engineering and computer science, image processing refers to any form of signal processing in which the input is an image and the output is either an image or a set of parameters related to the image. Generally, image processing includes image enhancement, restoration and reconstruction, edge and boundary detection, classification and segmentation, object recognition and identification, compression and communication, etc. Among them, image restoration is a classical problem and is generally a preprocessing stage for higher-level processing. In many applications, the measured images are degraded by blurs; e.g. the optical system in a camera lens may be out of focus, so that the incoming light is smeared out, and in astronomical imaging the incoming light in the telescope has been slightly bent by turbulence in the atmosphere. In addition, images that occur in practical applications inevitably suffer from noise, which arises from numerous sources such as radiation scatter from the surface before the image is sensed, electrical noise in the sensor or camera, transmission errors, and bit errors as the image is digitized. In such situations, the image formation process is usually modeled by the following equation

$$f(x) = (k * \bar{u})(x) + \omega(x), \qquad x \in \Omega,$$
(1)

where $\bar{u}(x)$ is an unknown clean image over a region $\Omega \subset \mathbb{R}^2$, “$*$" denotes the convolution operation, and $k(x)$, $\omega(x)$ and $f(x)$ are real-valued functions from $\mathbb{R}^2$ to $\mathbb{R}$ representing, respectively, the convolution kernel, additive noise, and the blurry and noisy observation. Usually, the convolution process neither absorbs nor generates optical energy, i.e., $\int_\Omega k(x)\,dx = 1$, and the additive noise has zero mean.

Deblurring or deconvolution aims to recover the unknown image $\bar{u}(x)$ from $f(x)$ and $k(x)$ based on (Equation 1). When $k(x)$ is unknown or only an estimate of it is available, recovering $\bar{u}(x)$ from $f(x)$ is called blind deconvolution. Throughout this module, we assume that $k(x)$ is known and $\omega(x)$ is either Gaussian or impulsive noise. When $k(x)$ is equal to the Dirac delta, the recovery of $\bar{u}(x)$ becomes a pure denoising problem. In the rest of this section, we review the TV-based variational models for image restoration and introduce necessary notation for analysis.

### Total Variation for Image Restoration

The TV regularization was first proposed by Rudin, Osher and Fatemi in [12] for image denoising, and then extended to image deblurring in [11]. The TV of $u$ is defined as

$$\mathrm{TV}(u) = \int_\Omega |\nabla u(x)|\, dx.$$
(2)

When $\nabla u(x)$ does not exist, the TV is defined using a dual formulation [18], which is equivalent to (Equation 2) when $u$ is differentiable. We point out that, in practical computation, discrete forms of regularization are always used, where differential operators are replaced by certain finite difference operators. We refer to TV regularization and its variants as TV-like regularization. In comparison to Tikhonov-like regularization, the homogeneous penalty on image smoothness in TV-like regularization can better preserve sharp edges and object boundaries, which are usually the most important features to recover. Variational models with TV regularization and $\ell_2$ fidelity have been widely studied in image restoration; see e.g. [4], [3] and references therein. For $\ell_1$ fidelity with TV regularization, its geometric properties are analyzed in [2], [16], [17]. The superiority of TV over Tikhonov-like regularization was analyzed in [1], [5] for recovering images containing piecewise smooth objects.

Besides Tikhonov and TV-like regularization, there are other well studied regularizers in the literature, e.g. the Mumford-Shah regularization [9]. In this module, we concentrate on TV-like regularization. We derive fast algorithms, study their convergence, and examine their performance.

### Discretization and Notation

Throughout, we let $\|\cdot\|$ denote the 2-norm. In practice, we always discretize an image defined on $\Omega$, and vectorize the two-dimensional digitized image into a long one-dimensional vector. We assume that $\Omega$ is a square region in $\mathbb{R}^2$. Specifically, we first discretize $u(x)$ into a digital image represented by a matrix $U \in \mathbb{R}^{n \times n}$. Then we vectorize $U$ column by column into a vector $u \in \mathbb{R}^{n^2}$, i.e.

$$u_i = U_{pq}, \qquad i = 1, \ldots, n^2,$$
(3)

where $u_i$ denotes the $i$-th component of $u$, $U_{pq}$ is the component of $U$ at the $p$-th row and $q$-th column, and $p$ and $q$ are determined by $i = (q-1)n + p$ and $1 \le p, q \le n$. Other quantities such as the convolution kernel $k(x)$, additive noise $\omega(x)$, and the observation $f(x)$ are all discretized correspondingly. Now we present the discrete forms of the previously presented equations. The discrete form of (Equation 1) is

$$f = K\bar{u} + \omega,$$
(4)

where in this case $\bar{u}, \omega, f \in \mathbb{R}^{n^2}$ are all vectors representing, respectively, the discrete forms of the original image, the additive noise and the blurry and noisy observation, and $K \in \mathbb{R}^{n^2 \times n^2}$ is a convolution matrix representing the kernel $k(x)$. The gradient $\nabla u(x)$ is replaced by a first-order finite difference at pixel $i$. Let $D_i \in \mathbb{R}^{2 \times n^2}$ be a first-order local finite difference matrix at pixel $i$ in the horizontal and vertical directions. E.g., when the forward finite difference is used, we have

$$D_i u = \begin{pmatrix} u_{i+n} - u_i \\ u_{i+1} - u_i \end{pmatrix} \in \mathbb{R}^2,$$
(5)

for $i = 1, \ldots, n^2$ (with certain boundary conditions assumed for $i > n^2 - n$). Then the discrete form of TV defined in (Equation 2) is given by

$$\mathrm{TV}(u) = \sum_{i=1}^{n^2} \|D_i u\|.$$
(6)

We will refer to

$$\min_u \ \mathrm{TV}(u) + \frac{\mu}{2}\|Ku - f\|^2$$
(7)

with discretized TV regularization (Equation 6) as TV/L2. For impulsive noise, we replace the $\ell_2$ fidelity with $\ell_1$ fidelity and refer to the resulting problem as TV/L1.

Now we introduce some more notation. For simplicity, we let $\sum_i$ denote the summation taken over all pixels. The two first-order global finite difference operators in the horizontal and vertical directions are denoted by $D^{(1)}$ and $D^{(2)}$, respectively, which are $n^2$-by-$n^2$ matrices (boundary conditions are the same as those assumed on $D_i$). As such, the two-row matrix $D_i$ is formed by stacking the $i$-th row of $D^{(1)}$ on that of $D^{(2)}$. For vectors $v_1$ and $v_2$, we let $v = (v_1; v_2) \triangleq (v_1^\top, v_2^\top)^\top$, i.e. $v$ is the vector formed by stacking $v_1$ on top of $v_2$. Similarly, we let $D = (D^{(1)}; D^{(2)})$. Given a matrix $T$, we let $\mathrm{diag}(T)$ be the vector containing the diagonal elements of $T$, and $\mathcal{F}(T) = FTF^{-1}$, where $F \in \mathbb{C}^{n^2 \times n^2}$ is the 2D discrete Fourier transform matrix.
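To make the notation concrete, the short sketch below (Python/NumPy; all variable names are our own illustration, not the module's) builds the column-major vectorization of (Equation 3), the periodic forward-difference operators, the local difference $D_i u$ of (Equation 5), and the discrete TV of (Equation 6) on a tiny $3 \times 3$ image.

```python
import numpy as np

# A 3x3 toy image; all names here are ours, for illustration only.
n = 3
U = np.arange(1.0, n * n + 1).reshape(n, n)  # [[1,2,3],[4,5,6],[7,8,9]]

# Column-by-column vectorization: u_i = U_pq with i = (q-1)n + p (1-based).
u = U.flatten(order="F")

# Dense global forward-difference operators with periodic wrap-around:
# row i of D1 computes u_{i+n} - u_i, row i of D2 computes u_{i+1} - u_i.
I = np.eye(n * n)
D1 = np.roll(I, n, axis=1) - I
D2 = np.roll(I, 1, axis=1) - I

# D_i u stacks the i-th rows of D1 and D2; take pixel i = 1 (index 0).
Di_u = np.array([D1[0] @ u, D2[0] @ u])

# Discrete TV of Equation (6): sum over pixels of ||D_i u||.
tv = np.sum(np.sqrt((D1 @ u)**2 + (D2 @ u)**2))
```

Because the vectorization is column-major, $u_{i+n}$ is the pixel one column to the right of $u_i$, so $D_1$ really does take horizontal differences.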

### Existing Methods

Since TV is nonsmooth, quite a few algorithms are based on smoothing the TV term and solving an approximate problem. The TV of $u$ is usually replaced by

$$\mathrm{TV}_\epsilon(u) = \sum_i \sqrt{\|D_i u\|^2 + \epsilon},$$
(8)

where $\epsilon > 0$ is a small constant. The resulting approximate TV/L2 problem is smooth and many optimization methods are available. Among others, the simplest method is the gradient descent method, as was used in [12]. However, this method suffers from slow convergence, especially when the iterate is close to the solution. Another important method is the linearized gradient method, proposed in [14] for denoising and in [15] for deblurring. Both the gradient descent and the linearized gradient methods are globally and at best linearly convergent. To obtain superlinear convergence, a primal-dual Newton method was proposed in [13]. Both the linearized gradient method and this primal-dual method need to solve a large system of linear equations at each iteration. When $\epsilon$ is small and/or $K$ becomes more ill-conditioned, the linear system becomes increasingly difficult to solve. Another class of well-known methods for TV/L2 are the iterative shrinkage/thresholding (IST) based methods [8]. For IST-based methods, a TV denoising problem needs to be solved at each iteration. Also, in [7] the authors transformed the TV/L2 problem into a second-order cone program and solved it by an interior point method.
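A quick numeric sanity check (our own toy example, not from the module) that the smoothed $\mathrm{TV}_\epsilon$ of (Equation 8) approaches the exact discrete TV as $\epsilon \to 0$:

```python
import numpy as np

# Toy per-pixel gradients D_i u (two pixels); example values are ours.
d = np.array([[3.0, 4.0], [0.0, 0.0]])
tv_exact = np.sum(np.linalg.norm(d, axis=1))  # ||(3,4)|| + ||(0,0)|| = 5

def tv_eps(d, eps):
    # Smoothed TV of Equation (8): sum_i sqrt(||D_i u||^2 + eps).
    return np.sum(np.sqrt(np.sum(d**2, axis=1) + eps))

# The approximation gap shrinks monotonically as eps decreases.
gaps = [tv_eps(d, 10.0**(-k)) - tv_exact for k in (1, 3, 5)]
```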

## A New Alternating Minimization Algorithm

In this section, we derive a new algorithm for the TV/L2 problem

$$\min_u \ \sum_i \|D_i u\| + \frac{\mu}{2}\|Ku - f\|^2.$$
(9)

In (Equation 9), the fidelity term is quadratic in $u$. Moreover, $K$ is a convolution matrix and thus can be easily diagonalized by fast transforms (with proper boundary conditions assumed on $u$). Therefore, the main difficulty in solving (Equation 9) is caused by the nondifferentiability and the universal coupling of variables in the TV term. Our algorithm is derived from the well-known variable-splitting and penalty techniques in optimization. First, we introduce an auxiliary variable $w_i \in \mathbb{R}^2$ at pixel $i$ to transfer $D_i u$ out of the nondifferentiable term $\|\cdot\|$. Then we penalize the difference between $w_i$ and $D_i u$ quadratically. As such, the auxiliary variables $w_i$ are separable with respect to one another. For convenience, in the following we let $w \triangleq [w_1; \ldots; w_{n^2}]$. The approximation model to (Equation 9) is given by

$$\min_{w,u} \ \sum_i \|w_i\| + \frac{\beta}{2} \sum_i \|w_i - D_i u\|^2 + \frac{\mu}{2}\|Ku - f\|^2,$$
(10)

where $\beta > 0$ is a penalty parameter. It is well known that the solution of (Equation 10) converges to that of (Equation 9) as $\beta \to \infty$. In the following, we concentrate on problem (Equation 10).

### Basic Algorithm

The benefit of (Equation 10) is that while either of the two variables $u$ and $w$ is fixed, minimizing the objective function with respect to the other has a closed-form formula, which we specify below. First, for a fixed $u$, the first two terms in (Equation 10) are separable with respect to $w_i$, and thus the minimization over $w$ is equivalent to solving

$$\min_{w_i} \ \|w_i\| + \frac{\beta}{2}\|w_i - D_i u\|^2, \qquad i = 1, 2, \ldots, n^2.$$
(11)

It is easy to verify that the unique solutions of (Equation 11) are

$$w_i = \max\left\{\|D_i u\| - \frac{1}{\beta},\, 0\right\} \frac{D_i u}{\|D_i u\|}, \qquad i = 1, \ldots, n^2,$$
(12)

where the convention $0 \cdot (0/0) = 0$ is followed. On the other hand, for a fixed $w$, (Equation 10) is quadratic in $u$ and the minimizer $u$ is given by the normal equations

$$\left(\sum_i D_i^\top D_i + \frac{\mu}{\beta} K^\top K\right) u = \sum_i D_i^\top w_i + \frac{\mu}{\beta} K^\top f.$$
(13)

By noting the relation between $D$ and $D_i$ and reordering the variables, (Equation 13) can be rewritten as

$$\left(D^\top D + \frac{\mu}{\beta} K^\top K\right) u = D^\top \mathbf{w} + \frac{\mu}{\beta} K^\top f,$$
(14)

where

$$\mathbf{w} \triangleq \begin{pmatrix} w^1 \\ w^2 \end{pmatrix} \in \mathbb{R}^{2n^2} \quad \text{and} \quad w^j \triangleq \begin{pmatrix} (w_1)_j \\ \vdots \\ (w_{n^2})_j \end{pmatrix}, \qquad j = 1, 2.$$
(15)

The normal equations (Equation 14) can also be solved easily, provided that proper boundary conditions are assumed on $u$. Since neither the finite difference operations nor the convolution is well-defined on the boundary of $u$, certain boundary assumptions are needed when solving (Equation 14). Under periodic boundary conditions for $u$, i.e. when the 2D image $u$ is treated as a periodic function in both the horizontal and vertical directions, $D^{(1)}$, $D^{(2)}$ and $K$ are all block-circulant matrices with circulant blocks; see e.g. [10], [6]. Therefore, the Hessian matrix on the left-hand side of (Equation 14) has a block-circulant structure and can be diagonalized by the 2D discrete Fourier transform $F$; see e.g. [6]. Using the convolution theorem of Fourier transforms, the solution of (Equation 14) is given by

$$u = F^{-1}\left(\frac{F\left(D^\top \mathbf{w} + (\mu/\beta) K^\top f\right)}{\mathrm{diag}\left(\mathcal{F}\left(D^\top D + (\mu/\beta) K^\top K\right)\right)}\right),$$
(16)

where the division is implemented componentwise. Since all quantities but $\mathbf{w}$ are constant for given $\beta$, computing $u$ from (Equation 16) involves merely finite difference operations on $\mathbf{w}$ and two FFTs (including one inverse FFT), once the constant quantities are computed.
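The diagonalization argument can be checked numerically. The sketch below (our own illustration, assuming periodic boundary conditions; kernel placements and names are ours) forms the eigenvalues of the block-circulant operators from the 2D DFT of their first columns, applies the Hessian $D^\top D + (\mu/\beta) K^\top K$ to a known $u$ in the Fourier domain, and then recovers $u$ by the componentwise division of (Equation 16).

```python
import numpy as np

n, mu, beta = 8, 1000.0, 10.0
rng = np.random.default_rng(0)

# First columns of the BCCB operators, stored as n-by-n arrays
# (kernel placements here are our own illustrative choice).
k_psf = np.zeros((n, n)); k_psf[:3, :3] = 1.0 / 9.0       # 3x3 average blur
d1 = np.zeros((n, n)); d1[0, 0], d1[0, 1] = -1.0, 1.0     # one difference direction
d2 = np.zeros((n, n)); d2[0, 0], d2[1, 0] = -1.0, 1.0     # the other direction

# Eigenvalues of a BCCB matrix are the 2D DFT of its first column.
eigK = np.fft.fft2(k_psf)
eig1, eig2 = np.fft.fft2(d1), np.fft.fft2(d2)
denom = np.abs(eig1)**2 + np.abs(eig2)**2 + (mu / beta) * np.abs(eigK)**2

# Apply the Hessian to a known u in the Fourier domain, then invert it
# by the componentwise division of Equation (16).
u_true = rng.standard_normal((n, n))
rhs = np.fft.ifft2(denom * np.fft.fft2(u_true)).real
u_rec = np.fft.ifft2(np.fft.fft2(rhs) / denom).real
```

Note that the denominator is strictly positive everywhere: the difference operators vanish only at the zero frequency, where the normalized blur contributes $(\mu/\beta)\,|1|^2 > 0$.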

Since minimizing the objective function in (Equation 10) with respect to either $w$ or $u$ is computationally inexpensive, we solve (Equation 10) for a fixed $\beta$ by the alternating minimization scheme given below.

Algorithm (Basic Algorithm):

• Input $f$, $K$ and $\mu > 0$. Given $\beta > 0$, initialize $u = f$.
• While “not converged”, Do
1. Compute $w$ according to (Equation 12) for fixed $u$.
2. Compute $u$ according to (Equation 16) for fixed $w$ (or equivalently $\mathbf{w}$).
• End Do
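The two steps above can be sketched end to end as follows (a minimal implementation of our own, assuming periodic boundary conditions; all function and variable names are ours, not the module's):

```python
import numpy as np

def tvl2_alternating(f, k_psf, mu, beta, iters=50):
    """One fixed-beta run of the alternating scheme (sketch)."""
    # Eigenvalues of the BCCB operators: 2D DFT of their first columns.
    d1 = np.zeros_like(f); d1[0, 0], d1[0, 1] = -1.0, 1.0
    d2 = np.zeros_like(f); d2[0, 0], d2[1, 0] = -1.0, 1.0
    eig1, eig2, eigK = np.fft.fft2(d1), np.fft.fft2(d2), np.fft.fft2(k_psf)
    denom = np.abs(eig1)**2 + np.abs(eig2)**2 + (mu / beta) * np.abs(eigK)**2
    Ktf = np.fft.ifft2(np.conj(eigK) * np.fft.fft2(f)).real  # K'f

    u = f.copy()
    for _ in range(iters):
        # w-step: the 2D shrinkage of Equation (12), pixel by pixel.
        Fu = np.fft.fft2(u)
        g1 = np.fft.ifft2(eig1 * Fu).real
        g2 = np.fft.ifft2(eig2 * Fu).real
        nrm = np.maximum(np.sqrt(g1**2 + g2**2), 1e-12)  # avoid 0/0
        s = np.maximum(nrm - 1.0 / beta, 0.0) / nrm
        w1, w2 = s * g1, s * g2
        # u-step: the FFT solve of Equation (16).
        rhs = np.fft.ifft2(np.conj(eig1) * np.fft.fft2(w1)
                           + np.conj(eig2) * np.fft.fft2(w2)).real
        rhs += (mu / beta) * Ktf
        u = np.fft.ifft2(np.fft.fft2(rhs) / denom).real
    return u

# Tiny smoke run: a constant image under a normalized 3x3 average blur
# should be reproduced (its TV is zero and K preserves constants).
f = np.full((8, 8), 3.0)
k = np.zeros((8, 8)); k[:3, :3] = 1.0 / 9.0
u_out = tvl2_alternating(f, k, mu=100.0, beta=10.0, iters=5)
```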

### Optimality Conditions and Convergence Results

To present the convergence results of Algorithm "Basic Algorithm" for a fixed $\beta$, we make the following weak assumption.

Assumption 1: $\mathcal{N}(K) \cap \mathcal{N}(D) = \{0\}$, where $\mathcal{N}(\cdot)$ represents the null space of a matrix.

Define

$$M = D^\top D + \frac{\mu}{\beta} K^\top K \quad \text{and} \quad T = D M^{-1} D^\top.$$
(17)

Furthermore, we will make use of the following two index sets:

$$L = \left\{ i : \|D_i u^*\| < \frac{1}{\beta} \right\} \quad \text{and} \quad E = \{1, \ldots, n^2\} \setminus L.$$
(18)

Under Assumption 1, the proposed algorithm has the following convergence properties.

Theorem 1. For any fixed $\beta > 0$, the sequence $\{(w^k, u^k)\}$ generated by Algorithm "Basic Algorithm" from any starting point $(w^0, u^0)$ converges to a solution $(w^*, u^*)$ of (Equation 10). Furthermore, the sequence satisfies

1. $\|w_E^{k+1} - w_E^*\| \le \sqrt{\rho\big((T^2)_{EE}\big)}\; \|w_E^k - w_E^*\|;$
(19)
2. $\|u^{k+1} - u^*\|_M \le \sqrt{\rho(T_{EE})}\; \|u^k - u^*\|_M;$
(20)

for all $k$ sufficiently large, where $T_{EE} = [T_{ij}]_{i,j \in E \cup (n^2 + E)}$ is a minor of $T$, $\|v\|_M^2 = v^\top M v$, and $\rho(\cdot)$ is the spectral radius of its argument.

## Extensions to Multichannel Images and TV/L1

The alternating minimization algorithm given in "A New Alternating Minimization Algorithm" can be extended to solve the multichannel extension of (Equation 9) when the underlying image has more than one channel, and TV/L1 when the additive noise is impulsive.

### Multichannel image deconvolution

Let $\bar{u} = [\bar{u}^{(1)}; \ldots; \bar{u}^{(m)}] \in \mathbb{R}^{mn^2}$ be an $m$-channel image, where, for each $j$, $\bar{u}^{(j)} \in \mathbb{R}^{n^2}$ represents the $j$-th channel. An observation of $\bar{u}$ is modeled by (Equation 4), in which case $f = [f^{(1)}; \ldots; f^{(m)}]$ and $\omega = [\omega^{(1)}; \ldots; \omega^{(m)}]$ have the same size and number of channels as $\bar{u}$, and $K$ is a multichannel blurring operator of the form

$$K = \begin{pmatrix} K_{11} & K_{12} & \cdots & K_{1m} \\ K_{21} & K_{22} & \cdots & K_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ K_{m1} & K_{m2} & \cdots & K_{mm} \end{pmatrix} \in \mathbb{R}^{mn^2 \times mn^2},$$
(21)

where $K_{ij} \in \mathbb{R}^{n^2 \times n^2}$, each diagonal submatrix $K_{ii}$ defines the blurring operator within the $i$-th channel, and each off-diagonal matrix $K_{ij}$, $i \ne j$, defines how the $j$-th channel affects the $i$-th channel.

The multichannel extension of (Equation 9) is

$$\min_u \ \sum_i \|(I_m \otimes D_i) u\| + \frac{\mu}{2}\|Ku - f\|^2,$$
(22)

where $I_m$ is the identity matrix of order $m$, and “$\otimes$" is the Kronecker product. By introducing auxiliary variables $w_i \in \mathbb{R}^{2m}$, $i = 1, \ldots, n^2$, (Equation 22) is approximated by

$$\min_{w,u} \ \sum_i \|w_i\| + \frac{\beta}{2} \sum_i \|w_i - (I_m \otimes D_i) u\|^2 + \frac{\mu}{2}\|Ku - f\|^2.$$
(23)

For fixed $u$, the minimizer function for $w$ is given by (Equation 12), in which $D_i u$ should be replaced by $(I_m \otimes D_i) u$. On the other hand, for fixed $w$, the minimization with respect to $u$ is a least squares problem, which is equivalent to the normal equations

$$\left(I_m \otimes (D^\top D) + \frac{\mu}{\beta} K^\top K\right) u = (I_m \otimes D)^\top \mathbf{w} + \frac{\mu}{\beta} K^\top f,$$
(24)

where $\mathbf{w}$ is a reordering of the variables in a similar way as in (Equation 15). Under periodic boundary conditions, (Equation 24) can be block-diagonalized by FFTs and then solved by a low-complexity Gaussian elimination method.

### Deconvolution with Impulsive Noise

When the blurred image is corrupted by impulsive noise rather than Gaussian noise, we recover $\bar{u}$ as the minimizer of a TV/L1 problem. For simplicity, we again assume $\bar{u} \in \mathbb{R}^{n^2}$ is a single-channel image; the extension to the multichannel case can be done as in "Multichannel image deconvolution". The TV/L1 problem is

$$\min_u \ \sum_i \|D_i u\| + \mu \|Ku - f\|_1.$$
(25)

Since the data-fidelity term is also not differentiable, in addition to $w$ we introduce $z \in \mathbb{R}^{n^2}$ and add a quadratic penalty term. The approximation problem to (Equation 25) is

$$\min_{w,z,u} \ \sum_i \|w_i\| + \frac{\beta}{2} \sum_i \|w_i - D_i u\|^2 + \mu \left( \|z\|_1 + \frac{\gamma}{2} \|z - (Ku - f)\|^2 \right),$$
(26)

where $\beta, \gamma > 0$ are penalty parameters. For fixed $u$, the minimization with respect to $w$ is the same as before, while the minimizer function for $z$ is given by the famous one-dimensional shrinkage:

$$z = \max\left\{ |Ku - f| - \frac{1}{\gamma},\, 0 \right\} \cdot \mathrm{sgn}(Ku - f).$$
(27)
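The one-dimensional shrinkage of (Equation 27) is a one-liner; the sketch below (names are ours) applies it componentwise to a small residual vector $r$ standing for $Ku - f$:

```python
import numpy as np

def shrink1d(r, gamma):
    # z = max(|r| - 1/gamma, 0) * sgn(r), applied componentwise.
    return np.maximum(np.abs(r) - 1.0 / gamma, 0.0) * np.sign(r)

r = np.array([-2.0, -0.1, 0.0, 0.3, 5.0])  # stands for Ku - f
z = shrink1d(r, gamma=2.0)                 # threshold 1/gamma = 0.5
```

Entries with magnitude below the threshold $1/\gamma$ are set to zero; the rest are moved toward zero by exactly $1/\gamma$.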

On the other hand, for fixed $w$ and $z$, the minimization with respect to $u$ is a least squares problem, which is equivalent to the normal equations

$$\left(D^\top D + \frac{\mu\gamma}{\beta} K^\top K\right) u = D^\top \mathbf{w} + \frac{\mu\gamma}{\beta} K^\top (f + z).$$
(28)

Similar to previous arguments, (Equation 28) can be easily solved by FFTs.

## Experiments

In this section, we present the practical implementation and numerical results of the proposed algorithms. We used two images, Man (grayscale) and Lena (color), in our experiments; see Figure 1. The two images are widely used in the field of image processing because they contain a nice mixture of detail, flat regions, shading, and texture.

We tested several kinds of blurring kernels, including Gaussian, average, and motion blurs. The additive noise is Gaussian for TV/L2 problems and impulsive for TV/L1 problems. The quality of an image is measured by the signal-to-noise ratio (SNR), defined by

$$\mathrm{SNR} \triangleq 10 \log_{10} \frac{\|\bar{u} - E(\bar{u})\|^2}{\|\bar{u} - u\|^2},$$
(29)

where $\bar{u}$ is the original image and $E(\bar{u})$ is the mean intensity value of $\bar{u}$. All blurring effects were generated using the MATLAB function `imfilter` with periodic boundary conditions, and noise was added using `imnoise`. All experiments were run under Windows Vista Premium and MATLAB v7.6 (R2008a) on a Lenovo laptop with an Intel Core 2 Duo CPU at 2 GHz and 2 GB of memory.
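The SNR of (Equation 29) can be computed directly; a minimal sketch (the helper name and test values are ours):

```python
import numpy as np

def snr(u_bar, u):
    # SNR = 10 log10( ||u_bar - E(u_bar)||^2 / ||u_bar - u||^2 ).
    return 10.0 * np.log10(np.sum((u_bar - u_bar.mean())**2)
                           / np.sum((u_bar - u)**2))

# When the error energy equals the signal's energy about its mean,
# the SNR is exactly 0 dB.
u_bar = np.array([[0.0, 2.0], [0.0, 2.0]])
val = snr(u_bar, u_bar + 1.0)  # constant error of size 1 per pixel
```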

### Practical Implementation

Generally, the quality of the restored image is expected to increase as $\beta$ increases, because the approximation problems become closer to the original ones. However, the alternating algorithms converge slowly when $\beta$ is large, which is well known for the class of penalty methods. An effective remedy is to gradually increase $\beta$ from a small value to a pre-specified one. Figure 2 compares the convergence behavior of the proposed algorithm with and without continuation, where we used a Gaussian blur of size 11 and standard deviation 5, and added white Gaussian noise with mean zero and standard deviation $10^{-3}$.

In this continuation framework, we compute a solution of an approximation problem with a smaller $\beta$ and use it to warm-start the next approximation problem corresponding to a bigger $\beta$. As can be seen from Figure 2, continuation on $\beta$ greatly speeds up convergence. In our experiments, we implemented the alternating minimization algorithms with continuation on $\beta$; we call the resulting algorithm “Fast Total Variation de-convolution” or FTVd. For TV/L2, the framework is given below.

Algorithm (FTVd):

• Input $f$, $K$ and $\mu > 0$. Given $\beta_{\max} > \beta_0 > 0$.
• Initialize $u = f$, $u_p = 0$, $\beta = \beta_0$ and $\epsilon > 0$.
• While $\beta \le \beta_{\max}$, Do
1. Run Algorithm "Basic Algorithm" until an optimality condition is met.
2. $\beta \leftarrow 2\beta$.
• End Do
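The continuation loop itself is short; the sketch below (our own, with a stand-in `inner_solve` in place of Algorithm "Basic Algorithm") shows the warm-starting and the doubling of $\beta$:

```python
import numpy as np

def ftvd(f, inner_solve, mu, beta0=1.0, beta_max=2.0**7):
    # Warm-start each approximation problem with the previous solution,
    # doubling beta until it exceeds beta_max.
    u, beta = f.copy(), beta0
    while beta <= beta_max:
        u = inner_solve(u, f, mu, beta)  # stand-in for the basic algorithm
        beta *= 2.0
    return u

betas_seen = []

def toy_inner(u, f, mu, beta):
    # Records the continuation schedule; a real solver would iterate the
    # w- and u-subproblems here until an optimality condition holds.
    betas_seen.append(beta)
    return u

_ = ftvd(np.zeros((2, 2)), toy_inner, mu=100.0)
```

With $\beta_0 = 1$ and $\beta_{\max} = 2^7$, the inner solver is invoked eight times, at $\beta = 2^0, 2^1, \ldots, 2^7$.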

Generally, it is difficult to determine how large $\beta$ must be to generate a solution close to a solution of the original problems. In practice, we observed that the SNR values of images recovered from the approximation problems stabilize once $\beta$ reaches a reasonably large value. To see this, we plot the SNR values of restored images corresponding to $\beta = 2^0, 2^1, \ldots, 2^{18}$ in Figure 3. In this experiment, we used the same blur and noise as in the test of continuation. As can be seen from Figure 3, the SNR values on both images remain essentially constant for $\beta \ge 2^7$. This suggests that $\beta$ need not be excessively large from a practical point of view. In our experiments, we set $\beta_0 = 1$ and $\beta_{\max} = 2^7$ in Algorithm FTVd. For each $\beta$, the inner iteration was stopped once an optimality condition was satisfied. For TV/L1 problems, we also implemented continuation on $\gamma$, with settings similar to those used for TV/L2.

### Recovered Results

In this subsection, we present results recovered from the TV/L2 and TV/L1 problems (Equation 9), (Equation 25), and their multichannel extensions. We tested various blurs with different levels of Gaussian and impulsive noise. Here we merely present several test results. Figure 4 gives two examples of blurry and noisy images and the recovered ones, where the blurred images are corrupted by Gaussian noise, while Figure 5 gives recovered results where the blurred images are corrupted by random-valued noise. For TV/L1 problems, we set $\gamma = 2^{15}$ and $\beta = 2^{10}$ in the approximation model and implemented continuation on both $\beta$ and $\gamma$.

## Concluding Remarks

We proposed, analyzed and tested an alternating minimization algorithm, FTVd, for solving the TV/L2 problem. The algorithm was extended to the TV/L1 model and to multichannel images by incorporating an extension of TV. Cross-channel blurs are permitted when the underlying image has more than one channel. We established strong convergence results for the algorithms and validated a continuation scheme. Numerical results demonstrate the feasibility and efficiency of the proposed algorithms.

## Acknowledgements

This Connexions module describes work conducted as part of Rice University's VIGRE program, supported by National Science Foundation grant DMS-0739420.

## References

1. Acar, R. and Vogel, C. R. (1994). Analysis of total variation penalty methods. Inv. Probl., 10, 1217-1229.
2. Chan, T. F. and Esedoglu, S. (2005). Aspects of total variation regularized function approximation. SIAM Journal on Applied Mathematics, 65(5), 1817–1837.
3. Chan, T. F. and Esedoglu, S. and Park, F. and Yip, A. (2004). Recent Developments in Total Variation Image Restoration. (05-01). CAM Report. Department of Mathematics, UCLA.
4. Chambolle, A. and Lions, P. L. (1997). Image Recovery via Total Variation Minimization and Related Problems. Numer. Math., 76(2), 167-188.
5. Dobson, D. C. and Santosa, F. (1996). Recovery of blocky images from noisy and blurred data. SIAM J. Appl. Math., 56, 1181–1198.
6. Gonzalez, R. and Woods, R. (1992). Digital Image Processing. Addison-Wesley.
7. Goldfarb, D. and Yin, W. (2005). Second-Order Cone Programming Methods for Total Variation-based Image Restoration. SIAM J. Sci. Comput., 27(2), 622-645.
8. Daubechies, I. and Defrise, M. and De Mol, C. (2004). An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math., LVII, 1413-1457.
9. Mumford, D. and Shah, J. (1989). Optimal approximations by piecewise smooth functions and associated variational problems. Comm. Pure Appl. Math., 42, 577-685.
10. Ng, Michael K. and Chan, Raymond H. and Tang, Wuncheung. (1999). A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J. Sci. Comput., 21(3), 851–866.
11. Rudin, L. and Osher, S. (1994). Total Variation Based Image Restoration with Free Local Constraints. Proc. 1st IEEE ICIP, 1, 31-35.
12. Rudin, L. and Osher, S. and Fatemi, E. (1992). Nonlinear Total Variation Based Noise Removal Algorithms. Phys. D, 60, 259-268.
13. Chan, T. F. and Golub, G. H. and Mulet, P. (1999). A nonlinear primal dual method for total variation based image restoration. SIAM J. Sci. Comput., 20, 1964-1977.
14. Vogel, C. R. and Oman, M. E. (1996). Iterative Methods for Total Variation Denoising. SIAM J. Sci. Comput., 17(1), 227-238.
15. Vogel, C. and Oman, M. (1998). Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image processing, 7(6), 813–824.
16. Yin, W. and Goldfarb, D. and Osher, S. (2005). Image cartoon-texture decomposition and feature selection using the total variation regularized functional. In Lecture Notes in Computer Science: Vol. 3752. Variational, Geometric, and Level Set Methods in Computer Vision. (pp. 73-84). Springer.
17. Yin, W. and Goldfarb, D. and Osher, S. (2006). The total variation regularized model for multiscale decomposition. SIAM Journal on Multiscale Modeling and Simulation, 6(1), 190–211.
18. Ziemer, W. P. (1989). Graduate Texts in Mathematics: Weakly Differentiable Functions: Sobolev Spaces and Functions of Bounded Variation. Springer.
