
The problem of multiplicative noise removal has been widely studied in recent years. Many methods have been proposed, but their results are often unsatisfactory. Total variation regularization preserves edges well, but sometimes produces an undesirable staircasing effect. In this paper, we propose a variational model for removing multiplicative noise, and employ an alternating iterative algorithm to solve the resulting minimization problem. Experimental results show that the proposed model effectively removes not only Gamma noise but also Rayleigh noise, while significantly reducing the staircasing effect.

Image noise removal is one of the fundamental problems of image processing and computer vision. A real recorded image may be disturbed by random factors, which is unavoidable. Additive noise model [

The classical variational model for multiplicative noise removal is aimed at the Gaussian distribution [

and its fidelity term is expressed as

A series of variational models have applied a logarithmic transformation

and the fidelity term written as

To address the problem that the AA model is not strictly convex, Huang, Ng and Wen [

Numerical results show that the noise removal ability of HNW is better than that of AA, but it produces a staircasing effect. An alternating iterative algorithm ensures that the solution of the model is unique and that the iterative sequence converges to its optimal solution.

Subsequently, a number of variational models [

The rest of this paper is organized as follows. In Section 2, we describe how the proposed model is constructed. Section 3 gives a new numerical algorithm. The convergence proof of the model is presented in Section 4. In Section 5, we show the experiments and their analysis. Finally, concluding remarks are given.

The difference between additive noise and multiplicative noise is whether the noise signal is independent of the original image signal: additive noise is independent, whereas multiplicative noise is not. In paper [

where g is the observed image, u is the original image, and n is multiplicative noise following a Rayleigh distribution, whose probability density function is denoted as follows

where
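For reference, the standard Rayleigh probability density with scale parameter \(\sigma\) (the paper's notation may differ) is:

```latex
p(n) = \frac{n}{\sigma^{2}} \exp\!\left( -\frac{n^{2}}{2\sigma^{2}} \right), \qquad n \ge 0,
```

whose mean is \(\sigma\sqrt{\pi/2}\).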

To estimate the original image u, the estimate can be computed by

Applying Bayes’s rule, it becomes

Based on (2.4), the MAP method minimizes the posterior energy
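The MAP estimate follows the standard chain (a sketch under the usual assumptions; symbols follow common usage rather than the paper's exact numbering):

```latex
\hat{u} = \arg\max_{u} p(u \mid g)
        = \arg\max_{u} \frac{p(g \mid u)\, p(u)}{p(g)}
        = \arg\min_{u} \bigl[ -\log p(g \mid u) - \log p(u) \bigr],
```

since \(p(g)\) does not depend on u and the negative logarithm is monotone decreasing.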

Taking the logarithm gives the energy equation

It is known from reference [

Combining (2.2), (2.3), (2.5) and (2.6), we obtain

From (2.1), we can derive that

If

where D is a two-dimensional bounded open domain of R^{2} with Lipschitz boundary, then an image can be interpreted as a real-valued function defined on D.

Using a logarithmic transformation

The unconstrained optimization problem can be solved by means of a composite function
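The effect of the logarithmic transformation can be checked numerically: with w = log u, the multiplicative model g = u · n becomes the additive model log g = log u + log n. A minimal sketch, assuming a toy positive image and unit-mean Rayleigh noise (the choice of sigma = sqrt(2/pi), which gives E[n] = 1, is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy positive "image" (positivity is required for the logarithm).
u = rng.uniform(50.0, 200.0, size=(64, 64))

# Rayleigh multiplicative noise; sigma = sqrt(2/pi) gives mean E[n] = 1
# (an illustrative choice, not necessarily the paper's).
sigma = np.sqrt(2.0 / np.pi)
n = rng.rayleigh(scale=sigma, size=u.shape)

g = u * n  # multiplicative degradation model: g = u * n

# After the log transform the model is additive: log g = log u + log n.
residual = np.log(g) - (np.log(u) + np.log(n))
print(np.abs(residual).max())  # numerically zero, up to floating-point error
```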

Variable splitting [

which is clearly equivalent to formula (2.11); the Lagrange function can be written as follows

where

We denote

Minimizing it is equivalent to the following constrained optimization problem

Inspired by the iterative algorithm of reference [

Such that

To solve problem (3.1), we divide it into the following three steps.
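The three-step structure can be sketched as a generic alternating loop; here `solve_fidelity` and `solve_tv` are hypothetical placeholders standing in for the paper's step-1 and step-2 sub-solvers, not their actual implementations:

```python
import numpy as np

def alternating_minimization(g, solve_fidelity, solve_tv, tol=1e-4, max_iter=100):
    """Generic alternating scheme: fidelity step, TV step, stopping test.

    solve_fidelity and solve_tv are caller-supplied sub-solvers
    (placeholders for steps 1 and 2 of the algorithm)."""
    u = g.copy()
    for _ in range(max_iter):
        v = solve_fidelity(u, g)   # step 1: data-fidelity sub-problem
        u_new = solve_tv(v)        # step 2: TV denoising sub-problem
        # step 3: stop when the relative change of the iterate is small
        if np.linalg.norm(u_new - u) <= tol * max(np.linalg.norm(u), 1e-12):
            return u_new
        u = u_new
    return u
```

With identity-like placeholder sub-solvers, the loop terminates at the first iteration; real sub-solvers would plug in without changing the driver.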

The first step of the method is to solve a part of the optimization problem. The minimizer of this problem

Its discretization

Now, letting

Since f is continuous and differentiable in the specified range, minimizing this function is equivalent to solving the equation

We use CSM [

And then, we can get

The second step of the method is to apply a TV denoising scheme to the image generated by the previous multiplicative noise removal step. The minimizer of the optimization problem

Denoting

The Euler–Lagrange equation corresponding to the variational problem (3.5) is as follows

where

and

In this paper,

where

and

Using the gradient descent method, the numerical solution of the optimization problem (3.5) is obtained as follows:

where

and the iterative formula is

The third step is to analyze the condition for stopping the iteration.

In this section, we will discuss the convergence of the iterative algorithm. First, we know that

Theorem 1. For any given initial value

To prove this theorem, we first give the following lemmas and their proofs.

Lemma 1. Sequence

Proof. It follows from the alternating iterative process in algorithm that

It is obvious that sequence

Lemma 2. The function

Proof. Let

The matrix S is not full-rank. The discrete total variation regularization term of model (2.13) is as follows
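For reference, the discrete (isotropic) total variation commonly used in such models is (notation assumed; the paper's exact discretization may differ):

```latex
\mathrm{TV}(u) = \sum_{i,j} \sqrt{ (u_{i+1,j} - u_{i,j})^{2} + (u_{i,j+1} - u_{i,j})^{2} },
```

with forward differences and, for example, Neumann boundary conditions.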

Denote

Next we discuss two cases: (i)

For (i), we note that

to z. Therefore we obtain

By using the above inequality, we have

Considering

We can get that

For (ii), considering

So

Definition 3. Let

Proof of Theorem 1. Since the sequence

function and strictly convex, the set of fixed points consists exactly of the minimizers of

there is one and only one minimizer of

Moreover, we have, for any

Let us denote by

We can conclude that

So

In this section, we experiment on the Lena and Cameraman images. Gamma and Rayleigh noise of different strengths are added to the original images, and the denoising effects of the proposed model are compared with those of HNW. In our experiments,

In

than HNW for Lena with Gamma noise (L = 20), because the gray-level distribution is more reasonable and the denoised image fits the original more closely than with the HNW model. The denoising result for Rayleigh-distributed multiplicative noise

In

In order to better illustrate the effectiveness of the proposed model, we also report quantitative data: iteration time (T), signal-to-noise ratio (SNR), mean square error (MSE), peak signal-to-noise ratio (PSNR), and relative error (ReErr). T is the running time: the smaller it is, the better the model. For SNR and PSNR, a larger value indicates less remaining noise. For MSE and ReErr, a smaller value indicates a better denoising effect.
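These metrics can be computed as follows; the definitions below are the common ones (normalizations assumed, and they may differ from the paper's exact formulas):

```python
import numpy as np

def quality_metrics(u, u0, peak=255.0):
    """Common image-quality metrics: u is the denoised image, u0 the
    clean reference. Definitions are the standard ones; the paper's
    normalizations may differ."""
    u = np.asarray(u, dtype=float)
    u0 = np.asarray(u0, dtype=float)
    diff = u - u0
    mse = np.mean(diff**2)                                   # mean square error
    snr = 10.0 * np.log10(np.sum(u0**2) / np.sum(diff**2))   # signal-to-noise ratio (dB)
    psnr = 10.0 * np.log10(peak**2 / mse)                    # peak SNR (dB)
    re_err = np.linalg.norm(diff) / np.linalg.norm(u0)       # relative error
    return {"SNR": snr, "MSE": mse, "PSNR": psnr, "ReErr": re_err}
```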

| Noise | Model | T | SNR | MSE | PSNR | ReErr |
|---|---|---|---|---|---|---|
| L = 20 Gamma | HNW | 16.052 | 20.497 | 138.624 | 61.586 | 0.094 |
| | proposed | 6.474 | 22.373 | 95.575 | 65.304 | 0.076 |
| | HNW | 10.998 | 24.343 | 65.998 | 69.007 | 0.061 |
| | proposed | 4.540 | 24.385 | 65.500 | 69.084 | 0.060 |

| Noise | Model | T | SNR | MSE | PSNR | ReErr |
|---|---|---|---|---|---|---|
| L = 10 Gamma | HNW | 92.228 | 11.799 | 1880.960 | 35.508 | 0.257 |
| | proposed | 8.487 | 12.730 | 665.292 | 45.901 | 0.231 |
| | HNW | 11.404 | 21.290 | 121.872 | 62.878 | 0.088 |
| | proposed | 4.056 | 22.457 | 103.332 | 64.524 | 0.075 |

In this paper, we have proposed a variational method for removing multiple multiplicative noises and given a new numerical iterative algorithm. We proved that the obtained sequence converges to the optimal solution of the model. Experiments show that, for both Gamma and Rayleigh noise, the denoising and edge-preserving ability of the proposed model is stronger than that of the HNW model, while the staircasing effect (regions of constant gray level) is greatly suppressed. However, the proposed model only deals with two kinds of noise. In future work, we hope to find a model that can remove more kinds of multiplicative noise while guaranteeing a unique solution.

Xuegang Hu, Yan Hu (2015) A Variational Model for Removing Multiple Multiplicative Noises. Open Journal of Applied Sciences, 05, 783-796. doi: 10.4236/ojapps.2015.512075