Advances in Computed Tomography, 2012, 1, 30-36 http://dx.doi.org/10.4236/act.2012.13007 Published Online December 2012 (http://www.SciRP.org/journal/act)

The Convergence of Two Algorithms for Compressed Sensing Based Tomography

Xiezhang Li*, Jiehua Zhu
Department of Mathematical Sciences, Georgia Southern University, Statesboro, USA
Email: *xli@georgiasouthern.edu, jzhu@georgiasouthern.edu
*Corresponding author.

Received November 10, 2012; revised December 12, 2012; accepted December 20, 2012

ABSTRACT

Constrained total variation minimization has been developed successfully for image reconstruction in computed tomography. In this paper, the block component averaging and diagonally-relaxed orthogonal projection methods are proposed to be incorporated with total variation minimization in the compressed sensing framework. The convergence of the algorithms under a certain condition is derived. Examples are given to illustrate their convergence behavior and noise performance.

Keywords: Compressed Sensing; Image Reconstruction; Total Variation Minimization; Block Iterative Methods

1. Introduction

The reconstruction of an image from projection data in tomography by algebraic approaches involves solving a linear system

    Ax = b,    (1)

where the coefficient matrix A \in R^{m \times n} is determined by the scanning geometry and directions, the vector b \in R^m is the projection data obtained from a computed tomography (CT) scan, and the unknown vector x \in R^n is the image to be reconstructed. It is assumed that system (1) is consistent and underdetermined (m < n), so it has infinitely many solutions. We seek a solution that recovers the original image as well as possible; this is an ill-posed problem. In general, the dimension of x is very large, so conventional direct methods are not appropriate. One classic iterative algorithm is the algebraic reconstruction technique (ART) [1], given by

    x^{k+1} = x^k + \lambda_k \frac{b_i - \langle a^i, x^k \rangle}{\|a^i\|^2} a^i,

where \lambda_k is a relaxation parameter, a^i is the ith column of A^T, and \langle a^i, x^k \rangle is the inner product of a^i and x^k.
The value of i is cyclically chosen as 1, ···, m. The sequence {x^k} converges to a solution of system (1) as long as 0 < \liminf_k \lambda_k \le \limsup_k \lambda_k < 2 [2]. Cimmino's method [3], given by

    x^{k+1} = x^k + \frac{\lambda_k}{m} \sum_{i=1}^{m} \frac{b_i - \langle a^i, x^k \rangle}{\|a^i\|^2} a^i,

is an iterative projection algorithm suitable for parallel computing. The slow convergence of Cimmino's method, caused by the large value of m, was improved by the component averaging (CAV) method [4], given by

    x_j^{k+1} = x_j^k + \lambda_k \sum_{i=1}^{m} \frac{b_i - \langle a^i, x^k \rangle}{\sum_{l=1}^{n} s_l (a_l^i)^2} a_j^i,    j = 1, ···, n,    (2)

where s_j is the number of nonzero entries in the jth column of A. The CAV method was further generalized as the diagonally-relaxed orthogonal projection (DROP) method [5], given by

    x_j^{k+1} = x_j^k + \frac{\lambda_k}{s_j} \sum_{i=1}^{m} w_i \frac{b_i - \langle a^i, x^k \rangle}{\|a^i\|^2} a_j^i,    j = 1, ···, n,    (3)

where the w_i are positive weights. The block versions of these iterative methods were investigated with regard to convergence and computation [5-7]. The theory of compressed sensing [8-10] has recently shown that signals and images that have sparse representations in some orthonormal basis can be reconstructed at high quality from much less data than the Nyquist sampling theory [11] requires. In many cases in tomography, a medical image can be approximately modeled as essentially piecewise constant, so its gradients are sparse. The image can then be reconstructed via total variation minimization [12,13]. In other words, a two-dimensional image can be reconstructed by solving the following minimization problem,

    \min \|x\|_{TV}    s.t.    Ax = b,    (4)

where the total variation \|f\|_{TV} of a two-dimensional array f_{i,j} is the l_1-norm of the magnitude of the discrete gradient,

    \|f\|_{TV} = \sum_{i,j} \sqrt{(f_{i+1,j} - f_{i,j})^2 + (f_{i,j+1} - f_{i,j})^2}.

Under the condition that the gradients are sparse enough, the solution of the l_1-norm minimization problem (4) is unique, and it gives an exact recovery of the image based on compressed sensing theory [8,10].

Copyright © 2012 SciRes. ACT
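The ART sweep (one cycle over the rows) and the simultaneous CAV and DROP iterations (2)-(3) can be sketched in NumPy. This is an illustrative sketch, not the authors' Matlab code; the function names and the small dense test matrix below are hypothetical:

```python
import numpy as np

def art_sweep(A, b, x, lam=1.0):
    """One ART cycle: sequential row projections
    x <- x + lam * (b_i - <a^i, x>) / ||a^i||^2 * a^i, i = 1, ..., m."""
    x = x.copy()
    for i in range(A.shape[0]):
        ai = A[i]
        x += lam * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

def cav_step(A, b, x, lam=1.0):
    """One CAV iteration (2): simultaneous projections with the
    sparsity-weighted denominators sum_l s_l (a^i_l)^2."""
    s = np.count_nonzero(A, axis=0)          # s_j = nonzeros in column j
    denom = (A**2) @ s                       # per-row sum_l s_l (a^i_l)^2
    return x + lam * A.T @ ((b - A @ x) / denom)

def drop_step(A, b, x, w=None, lam=1.0):
    """One DROP iteration (3): diagonal relaxation 1/s_j with row weights w_i."""
    m, n = A.shape
    w = np.ones(m) if w is None else w
    s = np.count_nonzero(A, axis=0)
    r = w * (b - A @ x) / np.einsum('ij,ij->i', A, A)   # weighted normalized residuals
    return x + lam * (A.T @ r) / s
```

For a matrix in which every column has the same number of nonzero entries (for instance a fully dense A, where s_j = m for all j), the CAV and DROP steps with w_i = 1 coincide, and each step of all three methods moves strictly closer to any solution of a consistent system.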
A block cyclic projection algorithm for compressed sensing based tomography (BCPCS) for solving (4) was proposed [14]. To describe the algorithm, the following notation is needed. Suppose that A and b in (1) are partitioned, respectively, as

    A = \begin{pmatrix} A_1 \\ \vdots \\ A_p \end{pmatrix}  and  b = \begin{pmatrix} b_1 \\ \vdots \\ b_p \end{pmatrix},    (5)

where A_j \in R^{r_j \times n} and b_j \in R^{r_j}, 1 \le j \le p. Assume that the row indices of A_j form a set B_j = \{t_1^j, ···, t_{r_j}^j\}, where B_i \cap B_j = \emptyset for i \neq j and \bigcup_{j=1}^{p} B_j = \{1, ···, m\}. In the following algorithm, \lambda_k is a relaxation parameter and \beta_k \ge 0 is a perturbation parameter:

Algorithm 1. BCPCS
1. for k = 0, 1, 2, ···
2.   for j = 1 to p
3.1.    for i = t_1^j, ···, t_{r_j}^j
3.2.      x^k \leftarrow x^k + \lambda_k \frac{b_i - \langle a^i, x^k \rangle}{\|a^i\|^2} a^i
3.3.    end
4.      x^k \leftarrow x^k + \beta_k d^k, where d^k = -\nabla\|x^k\|_{TV} / \|\nabla\|x^k\|_{TV}\|
5.   end
6.   x^{k+1} = x^k
7. end

The convergence of BCPCS with \lambda_k = 1 was shown by an application of the convergence theory of the amalgamated projection method [15].

Theorem 1. [14] The sequence {x^k} generated by Algorithm 1 with \lambda_k = 1 converges, and its limit is a solution of Ax = b.

In this paper, the block component averaging and diagonally-relaxed orthogonal projection methods are proposed to be incorporated with total variation minimization in the compressed sensing framework. The convergence of the algorithms, under a certain condition satisfied for example by the strip-based projection model [16], is derived. Examples are given to illustrate their convergence behavior and noise performance.

2. BCAVCS Algorithm and Its Convergence

The convergence of the CAV and block CAV methods was proved [4,5]. Several notations and concepts in [5] are adopted in this paper for consistency with the literature. For each i = 1, ···, m, denote by H_i = \{x \in R^n \mid \langle a^i, x \rangle = b_i\} a hyperplane in R^n and by G_i = diag(g_{i1}, ···, g_{in}) a nonnegative diagonal matrix, where g_{ij} = 1/s_j if a_j^i \neq 0 and g_{ij} = 0 otherwise, j = 1, ···, n. Then \sum_{i=1}^{m} G_i = I. The family \{G_i\}_{i=1}^{m} is called sparsity pattern oriented (SPO) with respect to A. The generalized oblique projection of a vector z \in R^n onto H_i with respect to
G_i, denoted by P_{H_i}^{G_i} z, is defined for all j = 1, ···, n, by

    (P_{H_i}^{G_i} z)_j = z_j + \frac{b_i - \langle a^i, z \rangle}{\sum_{l: g_{il} \neq 0} (a_l^i)^2 / g_{il}} \cdot \frac{a_j^i}{g_{ij}}  if g_{ij} \neq 0,  and  (P_{H_i}^{G_i} z)_j = z_j  if g_{ij} = 0.    (6)

A seminorm in R^n corresponding to any nonnegative diagonal matrix G is defined by \|z\|_G = \langle z, Gz \rangle^{1/2}, z \in R^n. It is known that

    P_{H_i}^{G_i} z = \arg\min \{ \|x - z\|_{G_i} : x \in H_i \},

which explains the meaning of the generalized oblique projection. For simplicity, we denote P_{H_i}^{G_i} z by Q_i z for any z \in R^n. Moreover, with the generalized inverse of G_i, G_i^\dagger = diag(g_{i1}^\dagger, ···, g_{in}^\dagger), where for a real number \alpha, \alpha^\dagger = 1/\alpha if \alpha \neq 0 and \alpha^\dagger = 0 otherwise, we rewrite (6) as

    Q_i z = P_{H_i}^{G_i} z = z + \frac{b_i - \langle a^i, z \rangle}{\langle a^i, G_i^\dagger a^i \rangle} G_i^\dagger a^i.

With the above notation, the CAV iteration (2) can be expressed as

    x^{k+1} = x^k + \lambda_k \sum_{i=1}^{m} G_i (Q_i x^k - x^k).    (7)

Algorithm 1 can be modified as a block CAV method for compressed sensing based tomography (BCAVCS) for image reconstruction, described as follows:

Algorithm 2. BCAVCS
1. for k = 0, 1, 2, ···
2.   for j = 1 to p
3.     x^k \leftarrow x^k + \lambda_k \sum_{i \in B_j} G_i (Q_i x^k - x^k)
4.     x^k \leftarrow x^k + \beta_k d^k, where d^k = -\nabla\|x^k\|_{TV} / \|\nabla\|x^k\|_{TV}\|
5.   end
6.   x^{k+1} = x^k
7. end

In order to derive the convergence of Algorithm 2 under a certain condition, we introduce operators T_i: R^n \to R^n, for i = 1, ···, m, by

    T_i z = z + \lambda_k G_i (Q_i - I) z,

where I is the identity operator. For an index vector t = (t_1, ···, t_r) in \{1, ···, m\}, we denote by T[t] the composition of the operators T_{t_1}, ···, T_{t_r}:

    T[t] = T_{t_r} \cdots T_{t_1}.    (8)

We have the following property of the operator T[t].

Lemma 2. Let \{G_i\}_{i=1}^{m} be SPO with respect to A \in R^{m \times n} and let t = (t_1, ···, t_r) be an index vector in \{1, ···, m\} such that G_k G_l = O for any distinct coordinates k, l in t. Then for z \in R^n,

    T[t] z = z + \sum_{i \in \{t_1, ···, t_r\}} (T_i - I) z.    (9)

Proof. Let w = T_h z = z + \lambda_k G_h (Q_h - I) z for a coordinate h of t, and let l be another coordinate of t. Since w - z is supported where G_h is nonzero while a^l is supported where G_l is nonzero, it follows from G_l G_h = O that G_l w = G_l z and \langle a^l, w \rangle = \langle a^l, z \rangle.
Consequently, we have

    G_l Q_l w = G_l w + \frac{b_l - \langle a^l, w \rangle}{\|a^l\|_{G_l^\dagger}^2} G_l G_l^\dagger a^l = G_l z + \frac{b_l - \langle a^l, z \rangle}{\|a^l\|_{G_l^\dagger}^2} G_l G_l^\dagger a^l = G_l Q_l z.

Therefore,

    T_l T_h z = T_l w = w + \lambda_k G_l (Q_l - I) w = z + \lambda_k G_h (Q_h - I) z + \lambda_k G_l (Q_l - I) z = z + (T_h - I) z + (T_l - I) z.

It follows by induction that

    T[t] z = T_{t_r} \cdots T_{t_1} z = z + \sum_{i \in \{t_1, ···, t_r\}} (T_i - I) z.

This completes the proof of the lemma.

The convergence of BCAVCS in a certain case will be shown below using the concepts of SPO matrices and the generalized oblique projection. Suppose that Ax = b is generated by the strip-based projection model in discrete tomography or CT [16,17]. The index vector t_j = (t_1^j, ···, t_{r_j}^j) is associated with the jth subsystem A_j x = b_j of (5), determined by one projection direction. In each 0-1 submatrix A_j there is exactly one 1 in each column, i.e., s_j = 1 within each block; therefore all rows of A_j are orthogonal, and G_k G_l = O for distinct coordinates k and l in t_j. In this case we will show the following.

Theorem 3. If each column of every block A_j, 1 \le j \le p, in (5) has exactly one 1, then the sequence {x^k} generated by Algorithm 2 with \lambda_k = 1 converges, and its limit is a solution of (1).

Proof. Recall that the inner loop of BCPCS for the jth block of A can be represented by an operator P[t_j] defined by

    P[t_j] x^k = P_{t_{r_j}^j} \cdots P_{t_1^j} x^k,

where

    P_i x^k = x^k + \lambda_k \frac{b_i - \langle a^i, x^k \rangle}{\|a^i\|^2} a^i.

Since each column of A_j contains exactly one 1, we have P_i = I + \lambda_k G_i (Q_i - I) = T_i for all i \in \{t_1^j, ···, t_{r_j}^j\}, and

    T[t_j] x^k = T_{t_{r_j}^j} \cdots T_{t_1^j} x^k = P_{t_{r_j}^j} \cdots P_{t_1^j} x^k = P[t_j] x^k.

It follows from (9) that the iteration scheme of the inner loop of BCAVCS for the jth block can be expressed as

    x^k + \lambda_k \sum_{i \in B_j} G_i (Q_i x^k - x^k) = x^k + \sum_{i \in B_j} (T_i - I) x^k = T[t_j] x^k = P[t_j] x^k.

This means that for the jth block of the matrix A the BCAVCS method is equivalent to the BCPCS algorithm. The convergence of BCAVCS with \lambda_k = 1 therefore follows from the convergence of the BCPCS algorithm with \lambda_k = 1 by Theorem 1.

It is remarked that the rate of convergence of BCAVCS is in general close to that of BCPCS in sequential computing.
Hence, with parallel computing, the BCAVCS algorithm will be significantly faster than the BCPCS algorithm.

3. BDROPCS Algorithm and Its Convergence

The convergence of the DROP and block DROP methods was proved [5]. In this section, we propose a block DROP algorithm modified for compressed sensing based tomography (BDROPCS) and show its convergence in a certain case. We denote by s_l the number of nonzero entries in the lth column within the jth block and define a DROP operator P[t_j] corresponding to the jth block, componentwise, as

    (P[t_j] x)_l = x_l + \frac{\lambda_k}{s_l} \sum_{i \in B_j} w_i \frac{b_i - \langle a^i, x \rangle}{\|a^i\|^2} a_l^i,    l = 1, ···, n.    (10)

Algorithm 3. BDROPCS
1. for k = 0, 1, 2, ···
2.   for j = 1 to p
3.     x^k \leftarrow P[t_j] x^k
4.     x^k \leftarrow x^k + \beta_k d^k, where d^k = -\nabla\|x^k\|_{TV} / \|\nabla\|x^k\|_{TV}\|
5.   end
6.   x^{k+1} = x^k
7. end

We first show that in a certain case the DROP iteration is identical to an ART cycle.

Theorem 4. Let A \in R^{m \times n} have exactly the same number s of nonzero entries in each column, and let the rows of A be orthogonal. Then x^{k+1} generated by DROP in (3) is the same as the vector generated by one cycle of ART with the constant parameter \lambda_k / s and weights w_i.

Proof. Under the assumption, the DROP method can be expressed as

    x^{k+1} = x^k + \frac{\lambda_k}{s} \sum_{i=1}^{m} w_i \frac{b_i - \langle a^i, x^k \rangle}{\|a^i\|^2} a^i.    (11)

For convenience, we rewrite one cycle of ART with the constant parameter \lambda_k / s and weights w_i, which yields x^{k+1} from x^k, as follows:

    x^{[1]} = x^k
    for i = 1 to m
        x^{[i+1]} = x^{[i]} + \frac{\lambda_k}{s} w_i \frac{b_i - \langle a^i, x^{[i]} \rangle}{\|a^i\|^2} a^i    (12)
    end
    x^{k+1} = x^{[m+1]}.

By the assumption on A, \langle a^i, a^l \rangle = 0 for i \neq l. It follows that x^{[i]} - x^k is a linear combination of a^1, ···, a^{i-1}, and it is easy to see by induction that

    \langle a^i, x^{[i]} \rangle = \langle a^i, x^k \rangle,    i = 1, ···, m.

Then (12) can be rewritten as

    x^{[i+1]} = x^{[i]} + \frac{\lambda_k}{s} w_i \frac{b_i - \langle a^i, x^k \rangle}{\|a^i\|^2} a^i,

from which the vector x^{k+1} is given by

    x^{k+1} = x^k + \frac{\lambda_k}{s} \sum_{i=1}^{m} w_i \frac{b_i - \langle a^i, x^k \rangle}{\|a^i\|^2} a^i,    (13)

which is the same as the vector yielded by the DROP method in one cycle (11).
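Theorem 4 can be checked numerically: for a matrix with mutually orthogonal rows and exactly s nonzero entries in every column, one simultaneous DROP cycle coincides with one sequential weighted ART cycle run with the constant parameter \lambda_k / s. Below is a small Python sketch under these assumptions; the 4 × 8 example matrix, weights, and function names are hypothetical illustrations, not data from the paper:

```python
import numpy as np

def drop_cycle(A, b, x, w, lam):
    """One full DROP iteration (3): simultaneous, with diagonal relaxation 1/s_j."""
    s = np.count_nonzero(A, axis=0)                    # here s_j = s for every column
    r = w * (b - A @ x) / np.einsum('ij,ij->i', A, A)  # weighted normalized residuals
    return x + lam * (A.T @ r) / s

def weighted_art_cycle(A, b, x, w, lam_over_s):
    """One sequential ART cycle (12) with constant parameter lam/s and weights w_i."""
    x = x.copy()
    for i in range(A.shape[0]):
        ai = A[i]
        x += lam_over_s * w[i] * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Hypothetical example: orthogonal rows, exactly s = 2 nonzeros in every column.
A = np.array([[1,  1, 1,  1, 0,  0, 0,  0],
              [1, -1, 1, -1, 0,  0, 0,  0],
              [0,  0, 0,  0, 1,  1, 1,  1],
              [0,  0, 0,  0, 1, -1, 1, -1]], dtype=float)
rng = np.random.default_rng(1)
b = A @ rng.random(8)                                  # a consistent right-hand side
x0 = rng.random(8)
w = np.array([1.0, 0.7, 1.3, 0.5])
lam, s = 0.9, 2
```

Because the rows are orthogonal, each sequential update leaves the inner products \langle a^i, x^{[i]} \rangle unchanged from \langle a^i, x^k \rangle, so the two cycles produce the same vector, exactly as in the proof.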
We now have the convergence of Algorithm 3 in the case where Ax = b is generated by the strip-based projection model in discrete tomography or CT [16,17], as a direct result of Theorems 1 and 4.

Theorem 5. If each column of every block A_j, 1 \le j \le p, in (5) has exactly one 1, then the sequence {x^k} generated by Algorithm 3 with \lambda_k = w_i = 1 converges, and its limit is a solution of Ax = b.

It is remarked that BCAVCS is a special case of BDROPCS with w_i = 1. Therefore, Theorem 5 can be considered an alternative proof of Theorem 3 that does not use the generalized oblique projection. However, it is believed that the concept of the generalized oblique projection and the idea in the proof of Theorem 3 will be important for further investigation of the convergence of BCAVCS in the general case.

4. Numerical Simulations

In order to test the performance of our algorithms in reconstructing images, we implemented the algorithms BCAVCS and BDROPCS in Matlab and compared their performance with some other algorithms. Algorithm BCAV is a non-CS iterative method formed by deleting line 4 in Algorithm BCAVCS, so that no perturbation for total variation minimization is performed. Algorithm CAVCS is a nonblock CS-based CAV iterative method obtained by exchanging lines 4 and 5 in Algorithm BCAVCS, so that a perturbation for total variation minimization is performed only after all the blocks are done. Note that in our case BDROPCS is the same as BCAVCS. We also tested the sensitivity of the algorithms to additive Gaussian noise. We performed the reconstruction of the 256 × 256 Shepp-Logan phantom from a set of 20 reasonably distributed projection directions of rational slopes [16,17]. The system Ax = b was generated by the strip-based projection model, where A \in R^{20918 \times 65536} is highly underdetermined. Moreover, A is a sparse 0-1 matrix, and there is one and only one 1 in each column within each of the 20 blocks [16]. The experimental data for the algorithms are summarized in Table 1.
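The structure of the tested CS algorithms, a sweep through each block followed by a normalized TV steepest-descent perturbation with decaying step sizes \beta_k, can be sketched in Python. The toy system below (row-sum and column-sum projections of a 4 × 4 image, each block a 0-1 matrix with exactly one 1 per column) and the smoothed TV gradient are illustrative assumptions, not the strip-based system or the Matlab code used in the paper:

```python
import numpy as np

def tv_grad(f, eps=1e-8):
    """Gradient of a smoothed isotropic TV norm sum_{i,j} sqrt(dx^2 + dy^2 + eps)."""
    dx = np.zeros_like(f); dx[:-1, :] = f[1:, :] - f[:-1, :]
    dy = np.zeros_like(f); dy[:, :-1] = f[:, 1:] - f[:, :-1]
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    g = np.zeros_like(f)
    g[:-1, :] -= px[:-1, :]; g[1:, :] += px[:-1, :]   # adjoint of the x-difference
    g[:, :-1] -= py[:, :-1]; g[:, 1:] += py[:, :-1]   # adjoint of the y-difference
    return g

def block_cs_reconstruct(blocks, shape, iters=150, beta0=0.2, q=0.9, perturb=True):
    """Block sweeps (ART within each block, lambda = 1), each followed by the TV
    perturbation x <- x + beta_k * d with d = -grad / ||grad||, as in Algorithms 2-3."""
    x = np.zeros(shape[0] * shape[1])
    for k in range(iters):
        beta = beta0 * q**k                     # summable perturbation sizes
        for A, b in blocks:
            for i in range(A.shape[0]):         # one ART sweep over the block's rows
                ai = A[i]
                x += (b[i] - ai @ x) / (ai @ ai) * ai
            if perturb:
                g = tv_grad(x.reshape(shape)).ravel()
                norm = np.linalg.norm(g)
                if norm > 1e-12:
                    x -= beta * g / norm        # steepest descent on the TV norm
    return x.reshape(shape)

# Hypothetical toy setup: a 4x4 piecewise-constant image measured by two blocks.
f = np.zeros((4, 4)); f[1:3, 1:3] = 1.0
A_rows = np.kron(np.eye(4), np.ones((1, 4)))    # row-sum projections
A_cols = np.kron(np.ones((1, 4)), np.eye(4))    # column-sum projections
blocks = [(A_rows, A_rows @ f.ravel()), (A_cols, A_cols @ f.ravel())]
```

Without the perturbation (perturb=False) the block sweeps alone converge to the minimum-norm solution of this toy system; the perturbed run remains consistent with the projection data while the TV-descent steps steer it along the solution set.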
We set a maximum of 500 iterations in the test, and the algorithms terminate if the relative error reaches 0.001. The relative error

    \|f - G\|_2 / \|f\|_2

and the mean-square error

    MSE = \frac{1}{MN} \sum_{m,n} (f(m,n) - G(m,n))^2,

for a reconstructed M × N image G of the original image f, are used to measure the error. The reconstructed images and relative errors for the different algorithms are shown in Figure 1. Our experimental results indicate that the algorithms BCAVCS and BDROPCS converge. Algorithm BCAVCS is demonstrated to be much better than BCAV and CAVCS and to have the same convergence rate as BDROPCS. It is remarked that the BCAVCS/BDROPCS algorithm is appropriate for parallel computing.

Table 1. Experiment data of Shepp-Logan phantom.

          Iteration Number   Time (s)   Error   MSE
BCAV      500                55         0.458   0.013
CAVCS     500                57         0.075   0.073
BCAVCS    404                75         0.001   0.000

We also tested the algorithms with a real cardiac CT image [18] of size 256 × 256 from projections in 32 directions. The CT image was preprocessed to have a certain gradient sparsity. The reconstructed images and corresponding convergence curves are shown in Figure 2. The experiment indicates that BCAVCS and BDROPCS are superior to the non-CS methods as well.

To evaluate the noise characteristics of the two new algorithms, we added Gaussian noise with mean 0 and standard deviation 0.05 to the synthetic projection data from the Shepp-Logan phantom. The reconstructed images and convergence curves after 250 iterations are given in Figure 3. The noise level was evaluated by the mean-square difference between reconstructions with and without added noise. The noise measures for the algorithms BCAV, CAVCS, and BCAVCS/BDROPCS are 0.0012, 0.0029, and 0.0023, respectively. The results indicate that the algorithms have similar sensitivity to data noise.

Figure 1. Reconstruction of the Shepp-Logan phantom with different algorithms; (a) Shepp-Logan phantom; (b) Reconstruction by BCAV; (c) Reconstruction by CAVCS; (d) Reconstruction by BCAVCS/BDROPCS.
Bottom: Convergence curves of the algorithms.

Figure 2. Reconstruction of a cardiac CT image with different algorithms; (a) Cardiac image; (b) Reconstruction by BCAV; (c) Reconstruction by CAVCS; (d) Reconstruction by BCAVCS/BDROPCS. Bottom: Convergence curves of the algorithms.

Figure 3. Reconstruction of the Shepp-Logan phantom with noise; (a) Reconstruction by BCAV; (b) Reconstruction by CAVCS; (c) Reconstruction by BCAVCS/BDROPCS. Bottom: Convergence curves of the algorithms.

5. Conclusions

Total variation minimization is a powerful method for reconstructing piecewise constant medical images based on compressed sensing theory. We considered the block component averaging and diagonally-relaxed orthogonal projection methods, in the case of the parameter \lambda_k = 1, with total variation minimization in the compressed sensing framework. Their convergence was derived for the strip-based projection model.

The experiments indicate that the proposed algorithms BCAVCS and BDROPCS converge faster than algorithms that do not use block iterations or the CS framework. Moreover, the algorithms BCAVCS and BDROPCS recover more details of the images. The convergence of the algorithms BCAVCS and BDROPCS for general \lambda_k will be studied further.

6. Acknowledgements

This study was supported by a Faculty Research Award from Georgia Southern University.

REFERENCES

[1] A. C. Kak and M. Slaney, "Principles of Computerized Tomographic Imaging," Society for Industrial and Applied Mathematics, Philadelphia, 2001. doi:10.1137/1.9780898719277
[2] P. P. B. Eggermont, G. T. Herman and A. Lent, "Iterative Algorithms for Large Partitioned Linear Systems, with Applications to Image Reconstruction," Linear Algebra and Its Applications, Vol. 40, 1981, pp. 37-67. doi:10.1016/0024-3795(81)90139-7
[3] G. Cimmino, "Calcolo Approssimato per le Soluzioni dei Sistemi di Equazioni Lineari," La Ricerca Scientifica, Series II, Vol.
9, 1938, pp. 326-333.
[4] Y. Censor, D. Gordon and R. Gordon, "Component Averaging: An Efficient Iterative Parallel Algorithm for Large and Sparse Unstructured Problems," Parallel Computing, Vol. 27, No. 6, 2001, pp. 777-808. doi:10.1016/S0167-8191(00)00100-9
[5] Y. Censor, T. Elfving, G. T. Herman and T. Nikazad, "On Diagonally-Relaxed Orthogonal Projection Methods," SIAM Journal on Scientific Computing, Vol. 30, No. 1, 2008, pp. 473-504. doi:10.1137/050639399
[6] R. Aharoni and Y. Censor, "Block-Iterative Projection Methods for Parallel Computation of Solutions to Convex Feasibility Problems," Linear Algebra and Its Applications, Vol. 120, 1989, pp. 165-175. doi:10.1016/0024-3795(89)90375-3
[7] Y. Censor and S. A. Zenios, "Parallel Optimization: Theory, Algorithms, and Applications," Oxford University Press, Oxford, 1997.
[8] E. Candes, J. Romberg and T. Tao, "Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information," IEEE Transactions on Information Theory, Vol. 52, No. 2, 2006, pp. 489-509. doi:10.1109/TIT.2005.862083
[9] E. Candes and M. Wakin, "An Introduction to Compressive Sampling," IEEE Signal Processing Magazine, Vol. 25, No. 2, 2008, pp. 21-30. doi:10.1109/MSP.2007.914731
[10] D. Donoho, "Compressed Sensing," IEEE Transactions on Information Theory, Vol. 52, No. 4, 2006, pp. 1289-1306. doi:10.1109/TIT.2006.871582
[11] C. E. Shannon, "Communication in the Presence of Noise," Proceedings of the IEEE, Vol. 86, No. 2, 1998, pp. 447-457. doi:10.1109/JPROC.1998.659497
[12] E. Candes, M. Wakin and S. Boyd, "Enhancing Sparsity by Reweighted l1 Minimization," Journal of Fourier Analysis and Applications, Vol. 14, No. 5-6, 2008, pp. 877-905. doi:10.1007/s00041-008-9045-x
[13] H. Yu and G. Wang, "Compressed Sensing Based Interior Tomography," Physics in Medicine and Biology, Vol. 54, No. 9, 2009, pp. 2791-2805. doi:10.1088/0031-9155/54/9/014
[14] X. Li and J.
Zhu, “Convergence of Block Cyclic Projec-tion and Cimmino Algorithms for Compressed Sensing Based Tomography,” Journal of X-Ray Science and Tech-nology, Vol. 18, No. 4, 2010, pp. 1-11.  D. Butnariu, R. Davidi, G. T. Herman and I. G. Kazantsev, “Stable Convergence Behavior under Summable Pertur-bation of a Class Projection Methods for Convex Feasibil-ity and Optimization Problems,” IEEE Journal of Selected Topics in Signal Processing, Vo. 1, No. 4, 2007, pp. 540- 547.  J. Zhu, X. Li, Y. Ye and G. Wang, “Analysis on the Strip- Based Projection for Discrete Tomography,” Discrete Ap- plied Mathematics, Vol. 156, No. 12, 2008, pp. 2359-2367. doi:10.1016/j.dam.2007.10.011  X. Li and J. Zhu, “A Note of Reconstruction Algorithm of the Strip-Based Projection Model for Discrete Tomogra-phy,” Journal of X-Ray Science and Technology, Vol. 16, No. 4, 2008, pp. 253-260.  TEAM RADS. http://teamrads.com/