Applied Mathematics
Vol. 08, No. 02 (2017), Article ID: 74421, 16 pages
10.4236/am.2017.82019

Recovery of Corrupted Low-Rank Tensors

Haiyan Fan, Gangyao Kuang

Department of Electronic Science and Engineering, National University of Defense Technology, Changsha, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 30, 2017; Accepted: February 25, 2017; Published: February 28, 2017

ABSTRACT

This paper studies the problem of recovering low-rank tensors corrupted by both impulse and Gaussian noise. The problem is addressed by integrating the tensor nuclear norm and the l1-norm in a unified convex relaxation framework. The nuclear norm is adopted to capture the low-rank component, while the l1-norm is used to model the impulse noise. The resulting optimization problem is then solved by augmented-Lagrangian-based algorithms. Preliminary numerical experiments verify that the proposed method can recover corrupted low-rank tensors well.

Keywords:

Low-Rank Tensor, Tensor Recovery, Augmented Lagrangian Method, Impulsive Noise, Mixed Noise

1. Introduction

The problem of exploiting low-dimensional structures in high-dimensional data is taking on increasing importance in image, text and video processing, and web search, where the observed data lie in very high dimensional spaces. Principal component analysis (PCA), proposed in [1], is the most widely used tool for low-rank component extraction and dimensionality reduction. It is easily implementable and efficient for data mildly corrupted by small noise. However, PCA is sensitive to data corrupted by heavy impulse noise or outlying observations. A number of robust PCA methods were subsequently proposed, but none of them yields a polynomial-time algorithm with strong performance guarantees under broad conditions. The Robust PCA of [2] is among the earliest polynomial-time algorithms with such guarantees. Assume that a data matrix $X \in \mathbb{R}^{n_1 \times n_2}$ consists of a low-rank matrix $A_0$ and a sparse matrix $E_0$. Then $A_0$ and $E_0$ can be recovered with high probability by solving the following convex relaxation problem (provided $A_0$ is low-rank and satisfies some incoherence conditions and $E_0$ is sufficiently sparse):

\min_{A, E} \|A\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad X = A + E \qquad (1)

where $\|A\|_*$ denotes the nuclear norm of $A$ and $\|E\|_1$ denotes the $l_1$-norm of $E$. The nuclear norm and the $l_1$-norm are used to induce low rank and sparsity, respectively. $\lambda > 0$ is a parameter balancing the two terms. Candes et al. [2] prove that $\lambda = 1/\sqrt{\max(n_1, n_2)}$ works with high probability for recovering any low-rank, incoherent matrix. Notably, Chandrasekaran et al. [3] also consider the problem of decomposing a given data matrix into sparse and low-rank components and give sufficient conditions for convex programming to succeed. Their work was motivated by applications in system identification and learning of graphical models, whereas Candes et al. [2] were motivated by robust principal component computations in high-dimensional settings with erroneous and missing entries; missing entries were not considered in [3]. In [3], the parameter $\lambda$ is data-dependent and may have to be selected by solving a number of convex programs, while Candes et al. [2] provide a universal value, namely $\lambda = 1/\sqrt{\max(n_1, n_2)}$. The distinction between the two results is a consequence of the different assumptions about the origin of the data matrix $X$.

In many real-world applications, the model defined in Equation (1) must be considered under more complicated circumstances [4] [5]. First, only a fraction of the entries of $X$ may be observed. This is the well-known matrix completion problem. If the unknown matrix has low or approximately low rank, then accurate and even exact recovery is possible by nuclear norm minimization [6] [7]. Second, the observed data may be corrupted by both impulse noise (sparse but large) and Gaussian noise (small but dense). Let $X$ be the superposition of a low-rank matrix $A$, an impulse noise matrix $E$ and Gaussian noise $F$, i.e., $X = A + E + F$. The Gaussian noise on the observed entries is assumed to be small in the sense that $\|P_\Omega(F)\|_F \le \delta$, where $\delta$ is the Gaussian noise level and $\|\cdot\|_F$ is the Frobenius norm. Then, to be broadly applicable, we consider the following extension of the model defined in Equation (1):

\min_{A, E} \|A\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad \|P_\Omega(X - A - E)\|_F \le \delta \qquad (2)

where $\Omega$ is a subset of the index set $\{1, 2, \dots, n_1\} \times \{1, 2, \dots, n_2\}$. It is assumed that only the entries $\{X_{ij}, (i,j) \in \Omega\}$ can be observed. The operator $P_\Omega: \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^{n_1 \times n_2}$ is the orthogonal projection onto the span of matrices vanishing outside of $\Omega$, so that the $(i,j)$-th entry of $P_\Omega(X)$ is $X_{ij}$ if $(i,j) \in \Omega$ and zero otherwise. The problem defined in Equation (2) can be solved by the classical Augmented Lagrangian Method (ALM). The separable structure in the objective function and the constraints suggests splitting the corresponding augmented Lagrangian function to derive more efficient numerical algorithms. Tao et al. [5] developed the Alternating Splitting Augmented Lagrangian Method (ASALM) and its variant (VASALM), as well as the Parallel Splitting Augmented Lagrangian Method (PSALM), for solving Equation (2).
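For illustration, $P_\Omega$ simply keeps the observed entries and zeroes out the rest; a minimal NumPy sketch (the function and variable names below are ours and purely illustrative):

import numpy as np

def P_Omega(X, mask):
    # Orthogonal projection onto matrices supported on Omega: mask is a boolean
    # array of the same shape as X, True on the observed entries (i, j) in Omega.
    return np.where(mask, X, 0.0)

# Example: observe roughly half of the entries of a random 4 x 5 matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))
mask = rng.random((4, 5)) < 0.5
X_obs = P_Omega(X, mask)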

One shortcoming of the model defined in Equation (2) is that it can only handle matrix (two-way) data. However, real-world data are ubiquitously multi-way, also referred to as tensors. For example, a color image is a 3-way object with column, row and color modes; a greyscale video is indexed by two spatial variables and one temporal variable. If we use the model defined in Equation (2) to process tensor data, we have to unfold the multi-way data into a matrix. Such preprocessing usually loses the inherent high-dimensional structure of the original observations. To avoid this, a common approach is to manipulate the tensor data directly, taking advantage of its multi-dimensional structure. Tensor analysis has many applications in computer vision [8], diffusion Magnetic Resonance Imaging (MRI) [9] [10] [11], the quantum entanglement problem [12], spectral hypergraph theory [13] and higher-order Markov chains [14].

The goal of this paper is to study Tensor Robust PCA, which aims to accurately recover a low-rank tensor from impulse and Gaussian noise. The observations may also be incomplete. Low-rank tensors appear in a variety of applications such as video processing ($d = 3$) [15], time-dependent 3D imaging ($d = 4$), ray tracing, where the material-dependent bidirectional reflection function is an order-four tensor that has to be determined from measurements [15], the numerical solution of the electronic Schrödinger equation ($d = 3N$, where $N$ is the number of particles) [16] [17] [18], machine learning [19] and more. More specifically, suppose we are given a data tensor $X \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ ($d$ is the number of ways) that can be decomposed as

X = A_0 + E_0 + F_0 \qquad (3)

where $A_0$ is low-rank, $E_0$ is sparse, and $F_0$ is Gaussian noise with noise level $\delta$. We then try to recover the low-rank component $A_0$ through the following convex relaxation problem:

\min_{A, E} \|A\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad \|P_\Omega(X - A - E)\|_F \le \delta \qquad (4)

1.1. Related Work

Although the recovery of low-rank matrices has been well studied, research on low-rank tensor recovery is still lacking. This is mainly because it is difficult to define a satisfactory tensor rank that enjoys properties as good as those in the matrix case. Several different definitions of tensor rank have been proposed, but each has its limitations. For example, the CP rank [20] is defined as the smallest number of rank-one tensors that sum up to $X$, where a rank-one tensor is of the form $u_1 \circ u_2 \circ \cdots \circ u_d$. As expected, the CP rank is NP-hard to compute, and its convex relaxation is also intractable. Another, more popular direction is to use the tractable Tucker rank [20] and its convex relaxation. For a $d$-way tensor $X$, the Tucker rank is a vector defined as

\text{rank}_{tc}(X) := \left( \text{rank}(X_{(1)}), \text{rank}(X_{(2)}), \dots, \text{rank}(X_{(d)}) \right),

where $X_{(i)}$ is the mode-$i$ matricization of $X$. Motivated by the fact that the nuclear norm is the convex envelope of the matrix rank within the unit ball of the spectral norm, the Sum of Nuclear Norms (SNN), defined as $\sum_i \|X_{(i)}\|_*$, is used as a convex surrogate of the Tucker rank. This approach is effective, but SNN is not a tight convex relaxation of the Tucker rank.
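For concreteness, the mode-$i$ matricization and the SNN surrogate can be computed as follows; a minimal NumPy sketch (the function names are ours, for illustration only):

import numpy as np

def unfold(X, mode):
    # Mode-i matricization: the mode-i fibers of X become the columns of a matrix.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def sum_of_nuclear_norms(X):
    # SNN: sum of the nuclear norms of all mode-i unfoldings of X.
    return sum(np.linalg.norm(unfold(X, i), ord='nuc') for i in range(X.ndim))

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 12, 8))
print(sum_of_nuclear_norms(X))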

More recently, the work [21] proposed the tensor tubal rank based on a new tensor decomposition scheme known as the t-SVD. The t-SVD is built on a tensor-tensor product that enjoys many properties analogous to the matrix case. Based on the computable t-SVD, the tensor nuclear norm [22] is used in place of the tubal rank for low-rank tensor recovery (from incomplete or corrupted tensors). The problem of recovering an unknown low-rank tensor $A_0$ from incomplete samples can be solved by the following convex program [21]:

\min_{A} \|A\|_* \quad \text{s.t.} \quad P_\Omega(A) = P_\Omega(A_0) \qquad (5)

where $\|A\|_*$ is the tensor nuclear norm of $A$, $\Omega$ is the index set of the known entries of the original tensor, and $P_\Omega$ is the corresponding orthogonal projection. Lu et al. [23] extended this work and studied the Tensor Robust Principal Component Analysis (TRPCA) problem, defined in the following equation:

\min_{A, E} \|A\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad X = A + E \qquad (6)

In this work, we go one step further, and consider recovering low-rank and sparse components of tensors from incomplete and noisy observations as defined in Equation (4).

1.2. Paper Contribution

The contributions of this work are two-fold:

・ A unified convex relaxation framework is proposed for the problem of recovering low-rank and sparse components of tensors from incomplete and noisy observations. Three augmented-Lagrangian-based algorithms are developed for the optimization problem.

・ Numerical experiments on synthetic data validate the efficacy of our proposed denoising approach.

1.3. Paper Organization

The rest of the paper is organized as follows. In Section 2, some preliminaries that are useful for the subsequent analysis are provided. In Section 3, three augmented-Lagrangian-based methods are developed for the problem defined in Equation (4). In Section 4, numerical experiments validate the model defined in Equation (4) and the efficiency of the proposed algorithms. Finally, in Section 5, we draw conclusions and discuss topics for future work.

2. Preliminaries

In this section, we list some lemmas concerning the shrinkage operators, which will be used at each iteration of the proposed augmented Lagrangian type methods to solve the generated subproblems.

Lemma 1. For $\tau > 0$ and $T \in \mathbb{R}^{n_1 \times n_2}$, the solution of the problem

\arg\min_{S} \left\{ \frac{1}{2}\|S - T\|_F^2 + \tau \|S\|_1 \right\} \qquad (7)

is given by $\text{shrink}(T, \tau)$, where $\text{shrink}(\cdot, \cdot)$ is the soft shrinkage operator applied entry-wise and defined as

\text{shrink}(a, \kappa) = \begin{cases} a - \kappa, & a > \kappa \\ 0, & |a| \le \kappa \\ a + \kappa, & a < -\kappa \end{cases} \qquad (8)

Lemma 2. Consider the singular value decomposition (SVD) of a matrix $A \in \mathbb{R}^{n_1 \times n_2}$ of rank $r$,

A = Q S V^{\top}, \quad S = \text{diag}\left( \{\sigma_i\}_{1 \le i \le r} \right) \qquad (9)

where $Q \in \mathbb{R}^{n_1 \times r}$ and $V \in \mathbb{R}^{n_2 \times r}$ have orthonormal columns, and the singular values $\sigma_i$ are real and positive. For $\tau > 0$, define the singular value soft-thresholding operator $D_\tau$ by

D_\tau(A) := Q \, D_\tau(S) \, V^{\top}, \quad D_\tau(S) = \text{diag}\left( \{(\sigma_i - \tau)_+\}_{1 \le i \le r} \right) \qquad (10)

where $x_+ = \max(0, x)$. Then, for each $\tau > 0$ and $A \in \mathbb{R}^{n_1 \times n_2}$, the singular value shrinkage operator (10) obeys

D_\tau(A) = \arg\min_{B} \left\{ \frac{1}{2}\|B - A\|_F^2 + \tau \|B\|_* \right\} \qquad (11)
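Both shrinkage operators translate directly into NumPy; a minimal sketch (the names shrink and svt are ours and are reused in the later sketches):

import numpy as np

def shrink(T, kappa):
    # Entry-wise soft shrinkage, Equation (8): the proximal operator of kappa * ||.||_1.
    return np.sign(T) * np.maximum(np.abs(T) - kappa, 0.0)

def svt(A, tau):
    # Singular value thresholding, Equation (10): the proximal operator of tau * ||.||_*.
    Q, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Q @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Example: thresholding a small random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
A_low = svt(A, tau=0.5)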

3. Algorithm

An alternative model for the problem defined in Equation (4) is the following nuclear-norm and $l_1$-norm regularized least squares problem:

\min_{A, E} \|A\|_* + \lambda_1 \|E\|_1 + \lambda_2 \|P_\Omega(X - A - E)\|_F^2 \qquad (12)

Equation (12) can be reformulated into the following favourable form:

\min_{A, E, F} \|A\|_* + \lambda_1 \|E\|_1 + \lambda_2 \|P_\Omega(F)\|_F^2 \quad \text{s.t.} \quad X = A + E + F \qquad (13)

The Alternating Direction Method of Multipliers (ADMM), an extension of the ALM algorithm, can be used to solve the tensor recovery problem defined in Equation (13). Given $(A^k, E^k, F^k)$, ADMM generates the new iterates via the following scheme:

\begin{cases}
A^{k+1} = \arg\min_{A} \left( \|A\|_* + \dfrac{\beta}{2}\left\| X - (A + E^k + F^k) + \dfrac{\Lambda_1^k}{\beta} \right\|_F^2 \right) \\
E^{k+1} = \arg\min_{E} \left( \lambda_1 \|E\|_1 + \dfrac{\beta}{2}\left\| X - (A^{k+1} + E + F^k) + \dfrac{\Lambda_1^k}{\beta} \right\|_F^2 \right) \\
F^{k+1} = \arg\min_{F} \left( \lambda_2 \|F\|_F^2 + \dfrac{\beta}{2}\left\| X - (A^{k+1} + E^{k+1} + F) + \dfrac{\Lambda_1^k}{\beta} \right\|_F^2 \right) \\
\Lambda_1^{k+1} = \Lambda_1^k + \beta\left( X - (A^{k+1} + E^{k+1} + F^{k+1}) \right)
\end{cases} \qquad (14)

See Algorithm 1 for the optimization details.
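As an illustration, a minimal NumPy sketch of the scheme in Equation (14) for the matrix special case ($n_3 = 1$) with all entries observed; it reuses the shrink and svt operators sketched in Section 2, and the function name and iteration count are ours, not the tuned setting of Section 3.3:

import numpy as np
# assumes shrink() and svt() from the Section 2 sketch are in scope

def rlrta_admm_matrix(X, lam1, lam2, beta, n_iter=200):
    # ADMM for: min ||A||_* + lam1*||E||_1 + lam2*||F||_F^2  s.t.  X = A + E + F.
    A = np.zeros_like(X); E = np.zeros_like(X)
    F = np.zeros_like(X); Lam = np.zeros_like(X)
    for _ in range(n_iter):
        # A-step: singular value thresholding with threshold 1/beta.
        A = svt(X - E - F + Lam / beta, 1.0 / beta)
        # E-step: entry-wise soft shrinkage with threshold lam1/beta.
        E = shrink(X - A - F + Lam / beta, lam1 / beta)
        # F-step: quadratic subproblem with a closed-form solution.
        F = beta * (X - A - E + Lam / beta) / (2.0 * lam2 + beta)
        # Dual update.
        Lam = Lam + beta * (X - A - E - F)
    return A, E, F

In the tensor case, the A-step would instead apply the t-SVD-based tensor singular value thresholding of [21] [23], i.e., matrix thresholding of the frontal slices in the Fourier domain along the third mode.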

3.1. Stopping Criterion

It can be easily verified that the iterates generated by the proposed ADMM algorithm can be characterized by

\begin{cases}
0 \in \partial \|A^{k+1}\|_* - \left[ \Lambda_1^k - \beta\left( A^{k+1} + E^k + F^k - X \right) \right] \\
0 \in \partial \left( \lambda_1 \|E^{k+1}\|_1 \right) - \left[ \Lambda_1^k - \beta\left( A^{k+1} + E^{k+1} + F^k - X \right) \right] \\
0 \in \partial \left( \lambda_2 \|F^{k+1}\|_F^2 \right) - \left[ \Lambda_1^k - \beta\left( A^{k+1} + E^{k+1} + F^{k+1} - X \right) \right] \\
\Lambda_1^{k+1} = \Lambda_1^k - \beta\left[ \left( A^{k+1} + E^{k+1} + F^{k+1} \right) - X \right]
\end{cases} \qquad (15)

which is equivalent to

\begin{cases}
0 \in \partial \|A^{k+1}\|_* - \Lambda_1^{k+1} + \beta\left( E^k - E^{k+1} \right) + \beta\left( F^k - F^{k+1} \right) \\
0 \in \partial \left( \lambda_1 \|E^{k+1}\|_1 \right) - \Lambda_1^{k+1} + \beta\left( F^k - F^{k+1} \right) \\
0 \in \partial \left( \lambda_2 \|F^{k+1}\|_F^2 \right) - \Lambda_1^{k+1} \\
\Lambda_1^{k+1} = \Lambda_1^k - \beta\left[ \left( A^{k+1} + E^{k+1} + F^{k+1} \right) - X \right]
\end{cases} \qquad (16)

Algorithm 1. Optimization framework for problem defined in Equation (13).

Equation (16) shows that the distance of the iterates $(A^{k+1}, E^{k+1}, F^{k+1})$ to the solution $(A^*, E^*, F^*, \Lambda^*)$ can be characterized by $\beta\left( \|E^k - E^{k+1}\| + \|F^k - F^{k+1}\| \right)$ and $\frac{1}{\beta}\|\Lambda_1^k - \Lambda_1^{k+1}\|$. Thus, a straightforward stopping criterion for Algorithm 1 is:

\min\left\{ \beta\left( \|E^k - E^{k+1}\| + \|F^k - F^{k+1}\| \right), \ \frac{1}{\beta}\|\Lambda_1^k - \Lambda_1^{k+1}\| \right\} \le \epsilon \qquad (17)

Here $\epsilon$ is a small tolerance, e.g., $10^{-6}$.
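A direct translation of criterion (17), assuming the Frobenius norm for the differences (the norm is not specified explicitly above) and placeholder argument names:

import numpy as np

def should_stop(E_prev, E, F_prev, F, Lam_prev, Lam, beta, eps=1e-6):
    # Stopping criterion (17) for Algorithm 1.
    primal_like = beta * (np.linalg.norm(E_prev - E) + np.linalg.norm(F_prev - F))
    dual_like = np.linalg.norm(Lam_prev - Lam) / beta
    return min(primal_like, dual_like) <= eps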

3.2. Convergence Analysis

In this subsection, we analyze the convergence of ADMM for solving the problem defined in Equation (13). We denote $f_1(\cdot) = \|\cdot\|_*$, $f_2(\cdot) = \|\cdot\|_F^2$ and $f_3(\cdot) = \|\cdot\|_1$. $f_2(\cdot)$ is strongly convex, while $f_1(\cdot)$ and $f_3(\cdot)$ are convex but not strongly convex. The problem defined in Equation (13) can then be reformulated as

\min_{A, E, F} f_1(A) + \lambda_2 f_2(F) + \lambda_1 f_3(E) \quad \text{s.t.} \quad \chi = A + E + F \qquad (18)

Definition 1. (Convex and Strongly Convex) Let $f: \mathbb{R}^n \to [-\infty, +\infty]$. If the domain of $f$, denoted by $\text{dom} f := \{x \in \mathbb{R}^n : f(x) < +\infty\}$, is not empty, then $f$ is said to be proper. If for any $x, y \in \mathbb{R}^n$ we have $f(tx + (1-t)y) \le t f(x) + (1-t) f(y)$ for all $t \in [0,1]$, then $f$ is said to be convex. Furthermore, $f$ is said to be strongly convex with modulus $\mu > 0$ if

f(tx + (1-t)y) \le t f(x) + (1-t) f(y) - \frac{1}{2}\mu t(1-t)\|x - y\|^2, \quad \forall t \in [0,1] \qquad (19)

Cai et al. [24] and Li et al. [25] have proved the convergence of Extended Alternating Direction Method of Multipliers (e-ADMM) with only one strongly convex function for the case m = 3.

Assumption 1. In Equation (18), f 1 and f 3 are convex, and f 2 is strongly convex with modulus μ 2 > 0 .

Assumption 2. The optimal solution set of the problem defined in Equation (18) is nonempty, i.e., there exists $(A^*, E^*, F^*, \Lambda_1^*) \in \Omega^*$ such that the following conditions are satisfied:

\partial f_1(A^*) - \Lambda_1^* \ni 0, \qquad (20)

\lambda_2 \partial f_2(F^*) - \Lambda_1^* \ni 0, \qquad (21)

\lambda_1 \partial f_3(E^*) - \Lambda_1^* \ni 0, \qquad (22)

A^* + E^* + F^* - \chi = 0. \qquad (23)

Theorem 1. Assume that Assumption 1 and Assumption 2 hold. Let $(A^k, E^k, F^k, \Lambda_1^k)$ be the sequence generated by Algorithm 1 for solving the problem defined in Equation (18). If $\beta \in \left(0, \frac{6\mu_2}{13}\right)$, then any limit point of $(A^k, E^k, F^k, \Lambda_1^k)$ is an optimal solution of Equation (18). Moreover, the objective function converges to the optimal value and the constraint violation converges to zero, i.e.,

\lim_{k \to \infty} \left[ f_1(A^k) + \lambda_2 f_2(F^k) + \lambda_1 f_3(E^k) - f^* \right] = 0 \qquad (24)

and

\lim_{k \to \infty} \left\| \chi - (A^k + E^k + F^k) \right\| = 0 \qquad (25)

where $f^*$ denotes the optimal objective value of the problem defined in Equation (18). In our specific application, $\beta \in \left(0, \frac{6 \cdot 2\lambda_2}{13}\right)$ suffices to ensure convergence [24].
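The modulus $\mu_2 = 2\lambda_2$ behind this bound follows from a direct computation; a short verification (our own, included for completeness): for $g(F) = \lambda_2 \|F\|_F^2$, expanding the squared norm gives, for all $t \in [0,1]$,

\lambda_2 \|t x + (1-t) y\|_F^2 = t \lambda_2 \|x\|_F^2 + (1-t) \lambda_2 \|y\|_F^2 - \lambda_2 t(1-t) \|x - y\|_F^2,

so inequality (19) holds (with equality) for $\frac{1}{2}\mu = \lambda_2$, i.e., $\mu_2 = 2\lambda_2$, which turns the range $\beta \in \left(0, \frac{6\mu_2}{13}\right)$ into $\beta \in \left(0, \frac{12\lambda_2}{13}\right)$.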

3.3. Parameter Choice

In the optimization framework given in Equation (13), there are three parameters: $\beta$, $\lambda_1$ and $\lambda_2$. As mentioned in Lu et al. [23], $\lambda_1$ does not need any tuning and can be set to $1/\sqrt{n_{(1)} n_3}$, where $n_{(1)} = \max\{n_1, n_2\}$. Besides, the value of $\beta$ is limited to the range $\beta \in \left(0, \frac{6 \cdot 2\lambda_2}{13}\right)$ to ensure the convergence of our algorithm (based on the analysis in Theorem 1). Thus, the value of $\lambda_2$ is important for the performance of the algorithm. For simplicity, we consider the case where $A$ is degraded only by the Gaussian noise $F$, without the sparse noise $E$, that is:

\min_{A, F} \frac{1}{2}\|F\|_F^2 + \frac{1}{2\lambda_2}\|A\|_* \quad \text{s.t.} \quad \chi = A + F \qquad (26)

The solution of Equation (26) equals $\chi$ with its singular values shifted towards zero by soft thresholding (with threshold $\frac{1}{2\lambda_2}$). The threshold should be large enough to remove the noise (i.e., to keep the variance low), yet small enough to avoid over-shrinking the original tensor $A$ (i.e., to keep the bias low). For the matrix case (i.e., $n_3 = 1$), Candes et al. [26] have deduced a proper value of $\lambda_2$, as shown in the following theorem.

Theorem 2. Suppose the Gaussian noise term $F \in \mathbb{R}^{n \times n}$ has entries $F_{ij}$ that are i.i.d. $N(0, \sigma^2)$. Then $\|F\|_F^2 \le \left(n + \sqrt{8n}\right)\sigma^2$ with high probability. Accordingly, we set $\frac{1}{2\lambda_2} = \sqrt{n + \sqrt{8n}}\,\sigma$, that is, $\lambda_2 = \frac{1}{2\sqrt{n + \sqrt{8n}}\,\sigma}$.

Based on this conclusion, we derive the conditions required for the convex program defined in Equation (13) to accurately recover the low-rank component $A$ from corrupted observations. Our derivation is summarized in the following main result.

Main Result 1. Assume that the low-rank tensor $A_0 \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ obeys the incoherence conditions [27] and that the support set $\Omega$ of $E_0$ is uniformly distributed among all sets of cardinality $m$. Then, there exist universal constants $c_1, c_2 > 0$ such that, with probability at least $1 - c_1 n^{-c_2}$, $(A_0, E_0)$ is the unique minimizer of the problem defined in Equation (13), with the parameters chosen as

\lambda_2 = \frac{1}{2\sqrt{n_{(1)} n_3 + \sqrt{8 n_{(1)} n_3}}\,\sigma} \quad \text{and} \quad \lambda_1 = \frac{1}{\sqrt{n_{(1)} n_3}},

provided that the tubal rank of $A_0$ and the number of non-zero entries of $E_0$ satisfy

\text{rank}_t(A_0) \le \frac{\rho_r n_{(2)}}{\mu_0 \left( \log(n_{(1)} n_3) \right)^2} \quad \text{and} \quad m \le \rho_s n_1 n_2 n_3,

where $n_{(1)} = \max\{n_1, n_2\}$, $n_{(2)} = \min\{n_1, n_2\}$, and $\rho_r$, $\rho_s$ are positive constants. The value of the penalty parameter $\beta$ should lie in the range $\left(0, \frac{6 \cdot 2\lambda_2}{13}\right)$ to ensure convergence.
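For reference, these choices can be collected in a small helper; a NumPy sketch in which the placement of the square roots is reconstructed from the formulas above and should be double-checked against [23] and [26]:

import numpy as np

def rlrta_parameters(n1, n2, n3, sigma):
    # Parameter choices from Section 3.3 / Main Result 1 (as reconstructed here).
    n_max = max(n1, n2)                                    # n_(1)
    lam1 = 1.0 / np.sqrt(n_max * n3)
    lam2 = 1.0 / (2.0 * np.sqrt(n_max * n3 + np.sqrt(8.0 * n_max * n3)) * sigma)
    beta_max = 12.0 * lam2 / 13.0                          # beta must lie in (0, 12*lambda_2/13)
    return lam1, lam2, beta_max

print(rlrta_parameters(100, 100, 50, 0.05))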

4. Experiments on Synthetic Data

In this section, we conduct experiments on synthetic data to corroborate our algorithm. We investigate the ability of the proposed Robust Low Rank Tensor Approximation (denoted as RLRTA) algorithm to recover low-rank tensors of various tubal ranks from sparse noise of varying density and random Gaussian noise of varying intensity.

4.1. Exact Recovery from Varying Sparsity of E

We first verify the recovery performance of our algorithm for different sparsity levels of $E$. Similar to [23], we consider tensors of size $n \times n \times n$, with varying dimension $n = 100, 200$. We generate a tensor $A = Q * V$ of tubal rank $r$, where the entries of $Q \in \mathbb{R}^{n \times r \times n}$ and $V \in \mathbb{R}^{r \times n \times n}$ are independently sampled from a uniform distribution on the interval $(0, 1/n)$. The support set $\Omega$ of the sparse component $E$, with size $m = \rho_s n^3$, is chosen uniformly at random. For all $(i,j,k) \in \Omega$, let $E_{ijk} = M_{ijk}$, where $M$ is a tensor with independent Bernoulli $\pm 1$ entries. Thus, $E$ can be expressed as

E_{ijk} = \begin{cases} 1, & \text{w.p. } \rho_s/2 \\ 0, & \text{w.p. } 1 - \rho_s \\ -1, & \text{w.p. } \rho_s/2 \end{cases} \qquad (27)

where w.p. abbreviates "with probability". We test two settings: the first with $r = \text{rank}_t(A) = 0.1n$ and $\rho_s = 0.1$; the second with $r = \text{rank}_t(A) = 0.1n$ and $\rho_s = 0.2$.

The Gaussian noise $F$ in each frontal slice is generated independently of the other slices, i.e.

F(:,:,i_3) \sim N(0, \sigma_{i_3}^2), \quad 1 \le i_3 \le n \qquad (28)

The variance $\sigma_{i_3}^2$ of each frontal slice is randomly selected from 0 to 0.1. In this subsection, the task is to recover $A$ from the noisy observation $\chi = A + E + F$ with $E$ of varying sparsity.
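A sketch of this synthetic-data generation follows; the t-product $Q * V$ is implemented via the FFT along the third mode, which is our reading of the t-SVD framework of [21] [23], and all function names are illustrative:

import numpy as np

def t_product(Q, V):
    # Tensor-tensor product Q * V: slice-wise matrix products in the Fourier
    # domain along the third mode, followed by an inverse FFT.
    Qf = np.fft.fft(Q, axis=2)
    Vf = np.fft.fft(V, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Qf, Vf)
    return np.real(np.fft.ifft(Cf, axis=2))

def make_data(n, r, rho_s, seed=0):
    rng = np.random.default_rng(seed)
    # Low-rank part A = Q * V with entries uniform on (0, 1/n).
    Q = rng.uniform(0.0, 1.0 / n, size=(n, r, n))
    V = rng.uniform(0.0, 1.0 / n, size=(r, n, n))
    A = t_product(Q, V)
    # Sparse part E: Bernoulli +/-1 entries on a random support, Equation (27).
    E = rng.choice([1.0, 0.0, -1.0], size=(n, n, n),
                   p=[rho_s / 2, 1.0 - rho_s, rho_s / 2])
    # Gaussian part F: one variance per frontal slice, Equation (28).
    sigma2 = rng.uniform(0.0, 0.1, size=n)
    F = rng.standard_normal((n, n, n)) * np.sqrt(sigma2)[None, None, :]
    return A + E + F, A, E, F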

Table 1. Correct recovery for random problems of varying sparsity using RLRTA.

Table 2. Correct recovery for random problems of varying sparsity using TRPCA [23] .

Table 1 and Table 2 show the recovery results of the RLRTA and TRPCA algorithms. They show that RLRTA better recovers the low-rank component $A$ under the different sparse components $E$.

4.2. Exact Recovery with F of Varying Intensity

Now we examine the recovery performance under Gaussian noise of varying variance. The generation of $A \in \mathbb{R}^{n \times n \times n}$ is the same as in Subsection 4.1, with $r = \text{rank}_t(A) = 0.1n$. The sparse component $E$ has sparsity $\rho_s = 0.1$. For simplicity, we assume that $F$ is white Gaussian noise, that is

F(i_1, i_2, i_3) \sim N(0, \sigma_w^2) \qquad (29)

where $1 \le i_1 \le n$, $1 \le i_2 \le n$, $1 \le i_3 \le n$. The noise variance $\sigma_w^2$ takes the values 0.02, 0.04, 0.06, 0.08 and 0.1, respectively. Table 3 and Table 4 show the recovery results of the RLRTA and TRPCA algorithms. They show that RLRTA better recovers the low-rank component $A$ under the different Gaussian noise levels.

4.3. Phase Transition in Rank and Sparsity

Now we examine the recovery performance with varying rank of $A$ and varying sparsity of $E$. Similar to [23], we consider two sizes of $A \in \mathbb{R}^{n \times n \times n_3}$.

Table 3. Correct recovery for random problems of varying intensity using RLRTA.

Table 4. Correct recovery for random problems of varying intensity using TRPCA [23] .

The two sizes are (1) $n = 100$, $n_3 = 50$ and (2) $n = 200$, $n_3 = 50$. We generate $A = Q * V$, where the entries of $Q \in \mathbb{R}^{n \times r \times n_3}$ and $V \in \mathbb{R}^{r \times n \times n_3}$ are independently sampled from a uniform distribution on the interval $(0, 1/n)$. For $E$, we again use the Bernoulli model for its support and random signs as in Equation (27). The variance $\sigma_{w, i_3}^2$ of each frontal slice $i_3$ ($1 \le i_3 \le n_3$) is randomly selected from 0 to 0.1, and the mean variance is set to 0.05 in both settings.

We set $r/n$ to each value in $[0.01:0.01:0.5]$ and $\rho_s$ to each value in $[0.01:0.01:0.5]$. For each $(r, \rho_s)$ pair, we simulate 10 test instances and declare a trial successful if the recovered $\hat{A}$ satisfies $\|\hat{A} - A\|_F / \|A\|_F \le 10^{-3}$.


Figure 1. Correct recovery for varying rank and sparsity for RLRTA and TRPCA [23]: fraction of correct recoveries as a function of $\text{rank}_t(A)$ (x-axis) and the sparsity of $E$ (y-axis).

Figure 1 plots the fraction of correct recoveries for each pair (black = 0% and white = 100%). It can be seen that there is a large region in which the recovery is correct.
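The success criterion and the sweep over $(r, \rho_s)$ pairs translate directly into a loop; a sketch that assumes the make_data generator above and a solver rlrta (a hypothetical name for an implementation of the proposed method) returning the recovered low-rank tensor:

import numpy as np

def is_success(A_hat, A, tol=1e-3):
    # A trial is successful if the relative recovery error is at most tol.
    return np.linalg.norm(A_hat - A) / np.linalg.norm(A) <= tol

def phase_transition(n, rank_ratios, sparsities, n_trials=10):
    # Fraction of successful recoveries over a grid of (rank ratio, sparsity) pairs.
    frac = np.zeros((len(rank_ratios), len(sparsities)))
    for i, r_ratio in enumerate(rank_ratios):
        for j, rho_s in enumerate(sparsities):
            r = max(1, int(round(r_ratio * n)))
            wins = 0
            for t in range(n_trials):
                X, A, E, F = make_data(n, r, rho_s, seed=t)
                A_hat = rlrta(X)  # hypothetical solver for Equation (13)
                wins += int(is_success(A_hat, A))
            frac[i, j] = wins / n_trials
    return frac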

4.4. Phase Transition in Rank and Entry-Wise Noise Intensity

Now we examine the recovery performance with varying rank of $A$ and varying intensity of the noise $F$. We still consider two sizes of $A \in \mathbb{R}^{n \times n \times n_3}$: (1) $n = 100$, $n_3 = 50$; (2) $n = 200$, $n_3 = 50$. We generate $A = Q * V$, where the entries of $Q \in \mathbb{R}^{n \times r \times n_3}$ and $V \in \mathbb{R}^{r \times n \times n_3}$ are independently sampled from a uniform distribution on the interval $(0, 1/n)$.


Figure 2. Correct recovery for varying rank and noise variance for RLRTA and TRPCA [23]: fraction of correct recoveries as a function of $\text{rank}_t(A)$ (x-axis) and the variance of $F$ (y-axis).

For $E$, we again use the Bernoulli model for its support and random signs as in Equation (27), with the sparsity parameter $\rho_s$ fixed at 0.1. The generation of $F$ follows Equation (29).

We set $r/n$ to each value in $[0.01:0.05:0.5]$ and the noise variance $\sigma_w^2$ to each value in $[0.01:0.01:0.1]$. For each $(r, \sigma_w^2)$ pair, we simulate 10 test instances and declare a trial successful if the recovered $\hat{A}$ satisfies $\|\hat{A} - A\|_F / \|A\|_F \le 10^{-3}$. Figure 2 plots the fraction of correct recoveries for each pair (black = 0% and white = 100%). It can be seen that there is a large region in which the recovery is correct.

5. Conclusion

This work verifies the ability of convex optimization to recover low-rank tensors corrupted by both impulse and Gaussian noise. The problem is tackled by integrating the tensor nuclear norm, the $l_1$-norm and a least-squares term in a unified convex relaxation framework. The parameters are selected to balance the low-rank component, the sparse component and the Gaussian-noise term. In addition, the convergence of the proposed algorithm is discussed. Numerical experiments demonstrate the efficacy of the proposed denoising approach.

Acknowledgements

The authors would like to thank Canyi Lu for providing the code for TRPCA algorithm.

Cite this paper

Fan, H.Y. and Kuang, G.Y. (2017) Recovery of Corrupted Low-Rank Tensors. Applied Mathematics, 8, 229-244. https://doi.org/10.4236/am.2017.82019

References

1. Jolliffe, I. (2002) Principal Component Analysis. Wiley Online Library.

2. Candes, E.J., Li, X.D., Ma, Y. and Wright, J. (2011) Robust Principal Component Analysis? Journal of the ACM, 58, Article No. 11. https://doi.org/10.1145/1970392.1970395

3. Chandrasekaran, V., Sanghavi, S., Parrilo, P.A. and Willsky, A.S. (2011) Rank-Sparsity Incoherence for Matrix Decomposition. SIAM Journal on Optimization, 21, 572-596. https://doi.org/10.1137/090761793

4. Wright, J., Ganesh, A., Rao, S., Peng, Y. and Ma, Y. (2009) Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization. Neural Information Processing Systems (NIPS).

5. Tao, M. and Yuan, X. (2011) Recovering Low-Rank and Sparse Components of Matrices from Incomplete and Noisy Observations. SIAM Journal on Optimization, 21, 57-81. https://doi.org/10.1137/100781894

6. Candes, E.J. and Recht, B. (2009) Exact Matrix Completion via Convex Optimization. Foundations of Computational Mathematics, 9, 717-772. https://doi.org/10.1007/s10208-009-9045-5

7. Candes, E.J. and Tao, T. (2010) The Power of Convex Relaxation: Near-Optimal Matrix Completion. IEEE Transactions on Information Theory, 56, 2053-2080. https://doi.org/10.1109/TIT.2010.2044061

8. Wang, H. and Ahuja, N. (2004) Compact Representation of Multidimensional Data Using Tensor Rank-One Decomposition.

9. Bloy, L. and Verma, R. (2008) On Computing the Underlying Fiber Directions from the Diffusion Orientation Distribution Function. 11th International Conference on Medical Image Computing and Computer-Assisted Intervention, New York, 6-10 September 2008, 1-8.

10. Ghosh, A., Tsigaridas, E., Descoteaux, M., Comon, P., Mourrain, B. and Deriche, R. (2008) A Polynomial Based Approach to Extract the Maxima of an Antipodally Symmetric Spherical Function and Its Application to Extract Fiber Directions from the Orientation Distribution Function in Diffusion MRI.

11. Qi, L., Yu, G. and Wu, E.X. (2010) Higher Order Positive Semi-Definite Diffusion Tensor Imaging. SIAM Journal on Imaging Sciences, 3, 416-433. https://doi.org/10.1137/090755138

12. Hilling, J.J. and Sudbery, A. (2010) The Geometric Measure of Multipartite Entanglement and the Singular Values of a Hypermatrix. Journal of Mathematical Physics, 51, Article ID: 072102. https://doi.org/10.1063/1.3451264

13. Hu, S. and Qi, L. (2012) Algebraic Connectivity of an Even Uniform Hypergraph. Journal of Combinatorial Optimization, 24, 564-579. https://doi.org/10.1007/s10878-011-9407-1

14. Li, W. and Ng, M. (2011) Existence and Uniqueness of Stationary Probability Vector of a Transition Probability Tensor. Technical Report, Department of Mathematics, Hong Kong Baptist University, Hong Kong.

15. Liu, J., Musialski, P., Wonka, P. and Ye, J. (2009) Tensor Completion for Estimating Missing Values in Visual Data. IEEE International Conference on Computer Vision (ICCV).

16. Lubich, C. (2008) From Quantum to Classical Molecular Dynamics: Reduced Models and Numerical Analysis. EMS, Zürich.

17. Beck, M.H., Jackle, A., Worth, G.A. and Meyer, H.D. (1999) The Multiconfiguration Time-Dependent Hartree (MCTDH) Method: A Highly Efficient Algorithm for Propagating Wavepackets. Physics Reports, 324, 1-105. https://doi.org/10.1016/S0370-1573(99)00047-2

18. Wang, H. and Thoss, M. (2009) Numerically Exact Quantum Dynamics for Indistinguishable Particles: The Multilayer Multiconfiguration Time-Dependent Hartree Theory in Second Quantization Representation. Journal of Chemical Physics, 131, Article ID: 024114. https://doi.org/10.1063/1.3173823

19. Paredes, B.R., Aung, H., Berthouze, N.B. and Pontil, M. (2013) Multilinear Multitask Learning. Journal of Machine Learning Research, 28, 1444-1452.

20. Kolda, T.G. and Bader, B.W. (2009) Tensor Decompositions and Applications. SIAM Review, 51, 455-500. https://doi.org/10.1137/07070111X

21. Zhang, Z., Ely, G., Aeron, S., Hao, N. and Kilmer, M.E. (2014) Novel Methods for Multilinear Data Completion and De-Noising Based on Tensor-SVD. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3842-3849. https://doi.org/10.1109/cvpr.2014.485

22. Semerci, O., Hao, N., Kilmer, M.E. and Miller, E.L. (2013) Tensor-Based Formulation and Nuclear Norm Regularization for Multienergy Computed Tomography. IEEE Transactions on Image Processing, 23, 1678-1693. https://doi.org/10.1109/TIP.2014.2305840

23. Lu, C., Feng, J., Chen, Y., Liu, W., Lin, Z. and Yan, S. (2016) Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Tensors via Convex Optimization.

24. Cai, X., Han, D. and Yuan, X. (2014) The Direct Extension of ADMM for Three-Block Separable Convex Minimization Models Is Convergent When One Function Is Strongly Convex.

25. Li, M., Sun, D. and Toh, K. (2014) A Convergent 3-Block Semi-Proximal ADMM for Convex Minimization Problems with One Strongly Convex Block. Asia Pacific Journal of Operational Research, 32, 1550024. https://doi.org/10.1142/S0217595915500244

26. Candes, E.J. and Plan, Y. (2010) Matrix Completion with Noise. Proceedings of the IEEE, 98, 925-936. https://doi.org/10.1109/JPROC.2009.2035722

27. Zhang, Z. and Aeron, S. (2015) Exact Tensor Completion Using t-SVD.