This paper studies the problem of recovering low-rank tensors that are corrupted by both impulse and Gaussian noise. The problem is addressed by integrating the tensor nuclear norm and the *l*_{1}-norm in a unified convex relaxation framework. The nuclear norm is adopted to capture the low-rank component and the *l*_{1}-norm is used to model the impulse noise. The resulting optimization problem is then solved by several augmented-Lagrangian-based algorithms. Preliminary numerical experiments verify that the proposed method can recover the corrupted low-rank tensors well.

The problem of exploiting low-dimensional structures in high-dimensional data is taking on increasing importance in image, text and video processing, and web search, where the observed data lie in very high-dimensional spaces. Principal component analysis (PCA), proposed in [
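As background, the low-rank approximation underlying classical PCA can be obtained from a truncated SVD. A minimal NumPy sketch, where the data matrix `D` and its rank are illustrative assumptions, not data from the paper:

```python
import numpy as np

# A data matrix whose columns lie in a 5-dimensional subspace.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 200))  # rank-5 data

# PCA-style low-rank approximation: keep the top-k singular triplets.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 5
D_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # best rank-k approximation (Eckart-Young)

print(np.allclose(D, D_k, atol=1e-8))  # True, since rank(D) is exactly 5
```

When the data are only approximately low-rank, the discarded singular values measure the approximation error; robust variants such as the model in Equation (2) replace this least-squares fit with one that tolerates gross corruptions.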

where

In many real-world applications, we need to consider the model defined in Equation (1) under more complicated circumstances [

where

One shortcoming of the model defined in Equation (2) is that it can only handle matrix (two-way) data. However, real-world data are ubiquitously multi-way, also referred to as tensors. For example, a color image is a 3-way object with column, row and color modes; a greyscale video is indexed by two spatial variables and one temporal variable. If we use the model defined in Equation (2) to process tensor data, we have to unfold the multi-way data into a matrix. Such preprocessing usually leads to the loss of the inherent high-dimensional structure in the original observations. To avoid this drawback, a common approach is to manipulate the tensor data directly, taking advantage of its multi-dimensional structure. Tensor analysis has many applications in computer vision [
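The unfolding step mentioned above can be made concrete. A sketch of mode-n matricization of a 3-way tensor (one common layout convention; the function name is ours):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-`mode` unfolding: make mode `mode` the rows and flatten the rest.
    This matricization discards the multi-way structure of the tensor."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

X = np.arange(24).reshape(3, 4, 2)   # a small 3x4x2 tensor
print(unfold(X, 0).shape)  # (3, 8)
print(unfold(X, 1).shape)  # (4, 6)
print(unfold(X, 2).shape)  # (2, 12)
```

Each unfolding preserves the entries but scrambles their multi-way arrangement, which is exactly the information loss the tensor-based approach avoids.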

The goal of this paper is to study the Tensor Robust PCA which aims to accurately recover a low-rank tensor from impulse and Gaussian noise. The observations can also be incomplete. Tensors of low rank appear in a variety of applications such as video processing (d = 3) [

where

Although the recovery of low-rank matrices has been well studied, research on low-rank tensor recovery is still lacking. This is mainly because it is difficult to define a satisfactory tensor rank that enjoys the same good properties as the matrix rank. Several different definitions of tensor rank have been proposed, but each has its limitations. For example, the CP rank [

where

More recently, the work [

where

In this work, we go one step further, and consider recovering low-rank and sparse components of tensors from incomplete and noisy observations as defined in Equation (4).
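The tensor nuclear norm and the tubal rank used in this line of work are defined through the t-SVD. A minimal sketch of computing the tubal rank of a 3-way array (function name and the threshold `tol` are our assumptions):

```python
import numpy as np

def tubal_rank(T, tol=1e-8):
    """Tubal rank under the t-product: FFT along the third mode, then the
    maximum rank over the frontal slices in the Fourier domain."""
    T_hat = np.fft.fft(T, axis=2)
    return max(np.linalg.matrix_rank(T_hat[:, :, k], tol=tol)
               for k in range(T.shape[2]))

# A tensor formed as a t-product of two tubal-rank-r factors has tubal rank <= r.
rng = np.random.default_rng(0)
n, r, n3 = 20, 3, 5
A_hat = np.fft.fft(rng.standard_normal((n, r, n3)), axis=2)
B_hat = np.fft.fft(rng.standard_normal((r, n, n3)), axis=2)
# Slice-wise products in the Fourier domain implement the t-product.
L = np.fft.ifft(np.einsum('irk,rjk->ijk', A_hat, B_hat), axis=2).real
print(tubal_rank(L))  # 3
```

Unlike the CP rank, this quantity is computable via ordinary matrix SVDs of the Fourier-domain slices, which is what makes the associated tensor nuclear norm tractable in convex programs.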

The contributions of this work are two-fold:

・ A unified convex relaxation framework is proposed for the problem of recovering low-rank and sparse components of tensors from incomplete and noisy observations. Three augmented-Lagrangian-based algorithms are developed for the optimization problem.

・ Numerical experiments on synthetic data validate the efficacy of the proposed denoising approach.

The rest of the paper is organized as follows. In Section 2, some preliminaries that are useful for the subsequent analysis are provided. In Section 3, three augmented-Lagrangian-based methods are developed for the problem defined in Equation (4). In Section 4, numerical experiments validate the model defined in Equation (4) and demonstrate the efficiency of the proposed algorithms. Finally, in Section 5, we draw conclusions and discuss topics for future work.

In this section, we list some lemmas concerning the shrinkage operators, which will be used at each iteration of the proposed augmented Lagrangian type methods to solve the generated subproblems.

Lemma 1. For µ > 0 and y ∈ ℝ, the minimizer of

min_{x} µ|x| + (1/2)(x − y)^{2}

is given by the soft-thresholding (shrinkage) operator S_{µ}(y) = sgn(y)·max(|y| − µ, 0).

Lemma 2. Consider the singular value decomposition (SVD) of a matrix Y = UΣV^{T},

where U and V have orthonormal columns and Σ is the diagonal matrix of singular values. Then the minimizer of

min_{X} µ‖X‖_{*} + (1/2)‖X − Y‖_{F}^{2}

is given by the singular value thresholding (SVT) operator D_{µ}(Y) = U S_{µ}(Σ) V^{T},

where S_{µ}(Σ) applies the soft-thresholding operator of Lemma 1 to each diagonal entry of Σ.
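The two shrinkage operators are straightforward to implement. A minimal sketch (function names are ours):

```python
import numpy as np

def soft_threshold(Y, tau):
    """Entrywise soft-thresholding: prox of tau*||.||_1 (Lemma 1)."""
    return np.sign(Y) * np.maximum(np.abs(Y) - tau, 0.0)

def svt(Y, tau):
    """Singular value thresholding: prox of tau*||.||_* (Lemma 2).
    Soft-thresholds the singular values, then rebuilds the matrix."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

print(soft_threshold(np.array([-2.0, 0.5, 3.0]), 1.0))  # [-1.  0.  2.]
```

Both operators have closed forms, which is why each subproblem in the augmented Lagrangian iterations below can be solved exactly.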

An alternative model to study the problem defined in Equation (4) is the following nuclear-norm- and *l*_{1}-norm-regularized model:

Equation (12) can be reformulated into the following favourable form:

The Alternating Direction Method of Multipliers (ADMM), an extension of the ALM algorithm, can be used to solve the tensor recovery problem defined in Equation (13). With given

See Algorithm 1 for the optimization details.
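As an illustration of the iteration structure in Algorithm 1, here is a minimal ADMM sketch for the matrix analogue min ‖L‖_{*} + λ‖S‖_{1} s.t. L + S = M; the tensor version replaces the SVT step with its t-SVD counterpart. The defaults for `lam`, `beta`, and the stopping rule are assumed choices, not the paper's exact settings:

```python
import numpy as np

def soft(Y, tau):
    return np.sign(Y) * np.maximum(np.abs(Y) - tau, 0.0)

def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_admm(M, lam=None, beta=1.0, max_iter=500, tol=1e-6):
    """ADMM for min ||L||_* + lam*||S||_1  s.t.  L + S = M (matrix analogue)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))     # a common default choice
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / beta, 1.0 / beta)    # L-subproblem (Lemma 2)
        S = soft(M - L + Y / beta, lam / beta)   # S-subproblem (Lemma 1)
        R = M - L - S                            # primal residual
        Y = Y + beta * R                         # multiplier update
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On a synthetic low-rank-plus-sparse matrix, this loop separates the two components to high accuracy; each iteration costs one SVD plus elementwise operations.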

It can be easily verified that the iterates generated by the proposed ADMM algorithm can be characterized by

which is equivalent to

Algorithm 1. Optimization framework for problem defined in Equation (13).

Equation (16) shows that the distance of the iterates to the solution set is nonincreasing.

Here the stopping tolerance is set to 10^{−6}.

In this subsection, we mainly analyze the convergence of ADMM for solving problem defined in Equation (13). We denote

Definition 1. (Convex and Strongly Convex) Let

Cai et al. [

Assumption 1. In Equation (18),

Assumption 2. The optimal solution set for the problem defined in Equation (18) is nonempty, i.e., there exist

Theorem 1. Assume that Assumption 1 and Assumption 2 hold. Let

defined in Equation (18). If

is an optimal solution to Equation (18). Moreover, the objective function converges to the optimal value and the constraint violation converges to zero, i.e.,

and

where

Equation (18). In our specific application,

the convergence [

In our optimization framework given in Equation (13), there are three parameters

is limited to the range

algorithm (based on the analysis in Theorem 1). Thus, the value of

The solution for Equation (26) is equal to

Theorem 2. Suppose that the Gaussian noise term

Based on this conclusion, we derive the required conditions for convex program defined in Equation (13) to accurately recover the low-rank component

Main Result 1. Assume that the low-rank tensor

as

where ρ_{r} and ρ_{s} are positive constants.

The value of penalty parameter β should be within the range of

to ensure convergence.

In this section, we conduct experiments on synthetic and real data to corroborate our algorithm. We investigate the ability of the proposed Robust Low-Rank Tensor Approximation (RLRTA) algorithm to recover low-rank tensors of various tubal ranks from impulse noise of various sparsity levels and random Gaussian noise of different intensities.

We first verify the correct recovery performance of our algorithm for different sparsity of

where

The Gaussian noise

The variance values
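The synthetic setup described above can be sketched as follows: a low-tubal-rank component built as a t-product of Gaussian factor tensors, impulse noise with uniformly random support and random signs, and dense Gaussian noise. The variable names (`rho_s` for the sparsity, `sigma` for the noise level) and the scale choices are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, rho_s, sigma = 100, 10, 0.05, 0.01

# Low-tubal-rank component: slice-wise Fourier-domain product of two factors.
A_hat = np.fft.fft(rng.standard_normal((n, r, n)), axis=2)
B_hat = np.fft.fft(rng.standard_normal((r, n, n)), axis=2)
L0 = np.fft.ifft(np.einsum('irk,rjk->ijk', A_hat, B_hat), axis=2).real

# Impulse noise: support chosen uniformly at random, values with random signs.
S0 = np.zeros((n, n, n))
mask = rng.random((n, n, n)) < rho_s
S0[mask] = rng.choice([-1.0, 1.0], size=mask.sum())

# Dense Gaussian noise and the final observation.
N0 = sigma * rng.standard_normal((n, n, n))
M = L0 + S0 + N0
print(mask.sum())  # about rho_s * n**3 nonzero impulse entries
```

The tables below report, for each configuration, the recovered sparsity and error statistics of the algorithms on data generated in this manner.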

| n | r | m | | | |
|---|---|---|---|---|---|
| 100 | 10 | 1e5 | 132,399 | 1.1838e−04 | 0.3040 |
| 200 | 20 | 8e5 | 1,046,860 | 2.8331e−05 | 0.3026 |

| n | r | m | | | |
|---|---|---|---|---|---|
| 100 | 10 | 2e5 | 222,128 | 1.5001e−04 | 0.3072 |
| 200 | 20 | 16e5 | 1,797,586 | 3.8035e−05 | 0.3118 |

| n | r | m | | | |
|---|---|---|---|---|---|
| 100 | 10 | 1e5 | 575,485 | 0.0021 | 0.2805 |
| 200 | 20 | 8e5 | 4,594,860 | 5.4577e−04 | 0.2727 |

| n | r | m | | | |
|---|---|---|---|---|---|
| 100 | 10 | 2e5 | 576,448 | 0.0030 | 0.1597 |
| 200 | 20 | 16e5 | 4,609,591 | 8.3233e−04 | 0.1707 |

recovery results of the RLRTA and TRPCA algorithms. It is shown that RLRTA can better recover the low-rank component

Now we examine the recovery performance with Gaussian noise of varying variances. The generation of

where

Now we examine the recovery performance with varying rank of

| | r | m | | | |
|---|---|---|---|---|---|
| 0.02 | 10 | 1e5 | 100,438 | 1.1900e−04 | 0.2707 |
| 0.04 | 10 | 1e5 | 111,249 | 1.1886e−04 | 0.2915 |
| 0.06 | 10 | 1e5 | 134,908 | 1.2041e−04 | 0.3138 |
| 0.08 | 10 | 1e5 | 160,814 | 1.5044e−04 | 0.3451 |
| 0.10 | 10 | 1e5 | 184,006 | 2.2189e−04 | 0.3824 |

| | r | m | | | |
|---|---|---|---|---|---|
| 0.02 | 20 | 8e5 | 803,564 | 2.8135e−05 | 0.2709 |
| 0.04 | 20 | 8e5 | 889,969 | 2.8269e−05 | 0.2913 |
| 0.06 | 20 | 8e5 | 1,080,602 | 2.9120e−05 | 0.3145 |
| 0.08 | 20 | 8e5 | 1,287,048 | 3.6441e−05 | 0.3460 |
| 0.10 | 20 | 8e5 | 1,472,423 | 5.4040e−05 | 0.3821 |

| | r | m | | | |
|---|---|---|---|---|---|
| 0.02 | 10 | 1e5 | 571,972 | 0.0013 | 0.1036 |
| 0.04 | 10 | 1e5 | 571,943 | 0.0024 | 0.2041 |
| 0.06 | 10 | 1e5 | 571,944 | 0.0036 | 0.3039 |
| 0.08 | 10 | 1e5 | 571,290 | 0.0048 | 0.4037 |
| 0.10 | 10 | 1e5 | 571,880 | 0.0059 | 0.5039 |

| | r | m | | | |
|---|---|---|---|---|---|
| 0.02 | 20 | 8e5 | 4,573,505 | 3.0970e−04 | 0.1036 |
| 0.04 | 20 | 8e5 | 4,574,842 | 6.0571e−04 | 0.2043 |
| 0.06 | 20 | 8e5 | 4,573,573 | 9.0370e−04 | 0.3040 |
| 0.08 | 20 | 8e5 | 4,573,394 | 0.0012 | 0.4039 |
| 0.10 | 20 | 8e5 | 4,572,467 | 0.0015 | 0.5021 |

We set ρ_{s} in

the fraction of correct recovery for each pair (black = 0% and white = 100%). It can be seen that there is a large region in which the recovery is correct.

Now we examine the recovery performance with varying rank of

support and random signs as in Equation (27) and sparsity parameter ρ_{s} is fixed at 0.1. The generation of

We set

This work verifies the ability of convex optimization to recover low-rank tensors corrupted by both impulse and Gaussian noise. The problem is tackled by integrating the tensor nuclear norm,

The authors would like to thank Canyi Lu for providing the code for TRPCA algorithm.

Fan, H.Y. and Kuang, G.Y. (2017) Recovery of Corrupted Low-Rank Tensors. Applied Mathematics, 8, 229-244. https://doi.org/10.4236/am.2017.82019