Advances in Computed Tomography, 2012, 1, 30-36
http://dx.doi.org/10.4236/act.2012.13007 Published Online December 2012 (http://www.SciRP.org/journal/act)
The Convergence of Two Algorithms for Compressed
Sensing Based Tomography
Xiezhang Li*, Jiehua Zhu
Department of Mathematical Sciences, Georgia Southern University, Statesboro, USA
Email: *xli@georgiasouthern.edu, jzhu@georgiasouthern.edu
Received November 10, 2012; revised December 12, 2012; accepted December 20, 2012
ABSTRACT
Constrained total variation minimization has been developed successfully for image reconstruction in computed tomography. In this paper, the block component averaging and diagonally-relaxed orthogonal projection methods are combined with total variation minimization in the compressed sensing framework. The convergence of the algorithms under a certain condition is derived. Examples are given to illustrate their convergence behavior and noise performance.
Keywords: Compressed Sensing; Image Reconstruction; Total Variation Minimization; Block Iterative Methods
1. Introduction
The reconstruction of an image from the projection data in tomography by algebraic approaches involves solving a linear system

$$Ax = b, \qquad (1)$$

where the coefficient matrix $A \in R^{m\times n}$ is determined by the scanning geometry and directions, the vector $b \in R^m$ is the projection data obtained from a computed tomography (CT) scan, and the unknown vector $x \in R^n$ is the image to be reconstructed. It is assumed that system (1) is consistent and underdetermined (m < n), so it has infinitely many solutions. We seek a solution that recovers the original image as well as possible; this is an ill-posed problem. In general, the dimension of x is very large, so conventional direct methods are not appropriate. One of the classic iterative algorithms is the algebraic reconstruction technique (ART), given by [1]
$$x^{k+1} = x^k + \lambda_k\,\frac{b_i - \langle a^i, x^k\rangle}{\|a^i\|_2^2}\,a^i,$$

where $\lambda_k$ is a relaxation parameter, $a^i$ is the ith column of $A^T$, and $\langle a^i, x^k\rangle$ is the inner product of $a^i$ and $x^k$. The value of i is cyclically chosen as $1, \cdots, m$. The sequence $\{x^k\}$ converges to a solution of system (1) as long as $0 < \liminf_k \lambda_k \le \limsup_k \lambda_k < 2$ [2]. The Cimmino method [3], given by

$$x^{k+1} = x^k + \frac{\lambda_k}{m}\sum_{i=1}^{m}\frac{b_i - \langle a^i, x^k\rangle}{\|a^i\|_2^2}\,a^i,$$
is an iterative projection algorithm suitable for parallel computing. The slow convergence of the Cimmino method, due to the large value of m, was improved by the component averaging (CAV) method [4], given by

$$x_j^{k+1} = x_j^k + \lambda_k\sum_{i=1}^{m}\frac{b_i - \langle a^i, x^k\rangle}{\sum_{l=1}^{n} s_l\,(a_l^i)^2}\,a_j^i, \quad j = 1,\cdots,n, \qquad (2)$$
where sj is the number of nonzero entries in the jth col-
umn of A. The CAV method was further generalized as
the Diagonally-Relaxed Orthogonal Projection (DROP)
method [5] given by
$$x_j^{k+1} = x_j^k + \lambda_k\sum_{i=1}^{m} w_i\,\frac{b_i - \langle a^i, x^k\rangle}{s_j\,\|a^i\|_2^2}\,a_j^i, \quad j = 1,\cdots,n, \qquad (3)$$
where wi’s are positive weights. The block versions of
these iterative methods were investigated regarding con-
vergence and computations [5-7]. The theory of com-
pressed sensing [8-10] has recently shown that signals
and images that have sparse representations in some or-
thonormal basis can be reconstructed at high quality from
much less data than what the Nyquist sampling theory
[11] requires. In many cases in tomography, a medical
image can be approximately modeled to be essentially
piecewise constant so its gradients are sparse. The image
can then be reconstructed via total variation minimization [12,13]. In other words, a two-dimensional image F can be reconstructed by solving the following minimization problem,

$$\min \|x\|_{TV} \quad \text{s.t.} \quad Ax = b, \qquad (4)$$

where the total variation $\|f\|_{TV}$ of a two-dimensional array $f_{i,j}$ is the $l_1$-norm of the magnitude of the discrete gradient,

$$\|f\|_{TV} = \sum_{i,j}\sqrt{(f_{i+1,j} - f_{i,j})^2 + (f_{i,j+1} - f_{i,j})^2}.$$

*Corresponding author.
Copyright © 2012 SciRes. ACT
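As a concrete illustration, the discrete TV seminorm above can be computed with forward differences. The following NumPy sketch is ours (the function name is an assumption, not from the paper); it sums the gradient magnitudes over the interior where both differences are defined:

```python
import numpy as np

def tv_norm(f):
    """Total variation of a 2-D array f: the l1-norm of the
    magnitude of the discrete (forward-difference) gradient."""
    dx = np.diff(f, axis=0)  # f[i+1, j] - f[i, j]
    dy = np.diff(f, axis=1)  # f[i, j+1] - f[i, j]
    # restrict both difference arrays to the common (m-1, n-1) interior
    return np.sqrt(dx[:, :-1] ** 2 + dy[:-1, :] ** 2).sum()

# A piecewise constant image: TV counts only the jump boundary.
f = np.zeros((8, 8))
f[2:6, 2:6] = 1.0
```

A constant image has zero TV, while the piecewise constant `f` above has TV proportional to the length of its jump boundary.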
Under the condition that the gradients are sparse
enough, the solution of the l1-norm minimization prob-
lem (4) is unique and it gives an exact recovery of the
image based on compressed sensing theory [8,10]. A
block cyclic projection for compressed sensing based
tomography (BCPCS) for solving (4) was proposed [14].
To describe the algorithm, the following notation is needed. Suppose that A and b in (1) are partitioned, respectively, as

$$A = \begin{bmatrix} A_1 \\ \vdots \\ A_p \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} b_1 \\ \vdots \\ b_p \end{bmatrix}, \qquad (5)$$

where $A_j \in R^{r_j\times n}$ and $b_j \in R^{r_j}$, $1 \le j \le p$. Assuming that the row indices of $A_j$ form a set $B_j = \{t_1^j, \cdots, t_{r_j}^j\}$ for $j = 1,\cdots,p$, we have $\bigcup_{j=1}^{p} B_j = \{1,\cdots,m\}$ and $B_i \cap B_j = \emptyset$ for $i \ne j$. We assume that $\lambda_k > 0$ and $\beta_k > 0$ in the following algorithm:
Algorithm 1. BCPCS
1. for k = 0, 1, 2, ···
2. for j = 1 to p
3.1. for $i = t_1^j, \cdots, t_{r_j}^j$
3.2. $x^k \leftarrow x^k + \lambda_k\,\dfrac{b_i - \langle a^i, x^k\rangle}{\|a^i\|_2^2}\,a^i$
3.3. end
4. $x^k \leftarrow x^k - \beta_k d$, where $d = \nabla\|x^k\|_{TV}\,\big/\,\big\|\nabla\|x^k\|_{TV}\big\|$
5. end
6. $x^{k+1} \leftarrow x^k$
7. end
The convergence of BCPCS with $\lambda_k = 1$ was shown by an application of the convergence theory of the amalgamated projection method [15].
Theorem 1. [14] The sequence $\{x^k\}$ generated by Algorithm 1 with $\lambda_k = 1$ converges and its limit is a solution of Ax = b.
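As an illustration, Algorithm 1 can be sketched in Python/NumPy. This is our own minimal implementation, not the authors' Matlab code: `tv_grad` is an assumed smoothed-TV subgradient standing in for the normalized direction d, with $\lambda_k = 1$ and a small fixed $\beta_k$:

```python
import numpy as np

def tv_grad(x, shape, eps=1e-8):
    """(Sub)gradient of a smoothed TV seminorm of x viewed as an image."""
    f = x.reshape(shape)
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    dx[:-1, :] = f[1:, :] - f[:-1, :]
    dy[:, :-1] = f[:, 1:] - f[:, :-1]
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    g = -(dx + dy) / mag
    g[1:, :] += dx[:-1, :] / mag[:-1, :]
    g[:, 1:] += dy[:, :-1] / mag[:, :-1]
    return g.ravel()

def bcpcs(A, b, blocks, shape, n_outer=100, beta=1e-4):
    """Sketch of Algorithm 1 (BCPCS) with lambda_k = 1: an ART sweep
    over each block followed by a TV perturbation of fixed size beta."""
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        for block in blocks:            # inner loop (lines 3.1-3.3)
            for i in block:
                a = A[i]
                x = x + (b[i] - a @ x) / (a @ a) * a
            d = tv_grad(x, shape)       # TV perturbation (line 4)
            nd = np.linalg.norm(d)
            if nd > 0:
                x = x - beta * d / nd
    return x
```

With a consistent underdetermined system and a small `beta`, the residual settles near a floor proportional to the perturbation size.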
In this paper, the block component averaging and diagonally-relaxed orthogonal projection methods are combined with total variation minimization in the compressed sensing framework. The convergence of the algorithms is derived under a certain condition, for example in the strip-based projection model [16]. Examples are given to illustrate their convergence behavior and noise performance.
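For concreteness, the ART, CAV (2), and DROP (3) updates discussed above can be sketched in NumPy as follows. These are our own illustrative helpers; every column of A is assumed to have at least one nonzero entry:

```python
import numpy as np

def art_step(x, A, b, i, lam=1.0):
    """One ART update using row i."""
    a = A[i]
    return x + lam * (b[i] - a @ x) / (a @ a) * a

def cav_step(x, A, b, lam=1.0):
    """One CAV iteration (2): component averaging over all rows."""
    s = np.count_nonzero(A, axis=0)          # s_j per column
    denom = (A ** 2) @ s                     # sum_l s_l (a^i_l)^2, per row
    return x + lam * A.T @ ((b - A @ x) / denom)

def drop_step(x, A, b, w=None, lam=1.0):
    """One DROP iteration (3) with weights w_i."""
    m, n = A.shape
    w = np.ones(m) if w is None else w
    s = np.count_nonzero(A, axis=0)
    r = w * (b - A @ x) / np.einsum('ij,ij->i', A, A)  # w_i res_i / ||a^i||^2
    return x + lam * (A.T @ r) / s
```

With $\lambda_k = 1$, one ART step places the iterate exactly on the hyperplane of row i, while CAV and DROP reduce the overall residual simultaneously over all rows.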
2. BCAVCS Algorithm and Its Convergence
The convergence of the CAV and block CAV methods
was proved [4,5]. Several notations and concepts in [4]
are adopted in this paper for consistency with the literature. For each $i = 1,\cdots,m$, denote by

$$H_i = \{x \in R^n \mid \langle a^i, x\rangle = b_i\}$$

a hyperplane in $R^n$ and by $G_i = \mathrm{diag}(g_{i1},\cdots,g_{in})$ a nonnegative diagonal matrix, where $g_{ij} = 1/s_j$ if $a_j^i \ne 0$ and $g_{ij} = 0$ otherwise, $j = 1,\cdots,n$. Then $\sum_{i=1}^{m} G_i = I$.
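These diagonal matrices are easy to form from the sparsity pattern of A. The sketch below (our own helper name) stores only the diagonals and illustrates that they sum to the identity:

```python
import numpy as np

def spo_diagonals(A):
    """Diagonals of G_i = diag(g_i1, ..., g_in), where g_ij = 1/s_j if
    a^i_j != 0 and g_ij = 0 otherwise (SPO w.r.t. A)."""
    s = np.count_nonzero(A, axis=0).astype(float)   # s_j, assumed > 0
    return [(A[i] != 0) / s for i in range(A.shape[0])]

A = np.array([[1., 0., 2.],
              [0., 3., 1.],
              [1., 1., 0.]])
G = spo_diagonals(A)
# sum_i G_i = I: each column j collects 1/s_j exactly s_j times.
```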
The set $\{G_i\}_{i=1}^{m}$ is called sparsity pattern oriented (SPO) w.r.t. A. The generalized oblique projection of a vector $z \in R^n$ onto $H_i$ w.r.t. $G_i$, denoted $P_{H_i}^{G_i}(z)$, is defined, for all $j = 1,\cdots,n$, by

$$\big(P_{H_i}^{G_i} z\big)_j = \begin{cases} z_j + \dfrac{b_i - \langle a^i, z\rangle}{\sum_{l:\,g_{il}\ne 0} (a_l^i)^2 / g_{il}}\,\dfrac{a_j^i}{g_{ij}}, & \text{if } g_{ij} \ne 0, \\[1.5ex] z_j, & \text{if } g_{ij} = 0. \end{cases} \qquad (6)$$

A seminorm in $R^n$ corresponding to any nonnegative diagonal matrix G is defined by $\|z\|_G = \langle z, Gz\rangle^{1/2}$, $z \in R^n$.
It is known that

$$P_{H_i}^{G_i}(z) = \arg\min\big\{\|x - z\|_{G_i} : x \in H_i\big\},$$

which explains the meaning of the generalized oblique projection. For simplicity, we denote $P_{H_i}^{G_i}(z)$ by $Q_i z$ for any $z \in R^n$.
Moreover, with the generalized inverse of $G_i$,

$$G_i^\dagger = \mathrm{diag}\big(g_{i1}^\dagger, \cdots, g_{in}^\dagger\big),$$

where for a real number $\gamma$, $\gamma^\dagger = 1/\gamma$ for $\gamma \ne 0$ and $\gamma^\dagger = 0$ otherwise, we rewrite (6) as

$$Q_i z = P_{H_i}^{G_i}(z) = z + \frac{b_i - \langle a^i, z\rangle}{\langle a^i, G_i^\dagger a^i\rangle}\,G_i^\dagger a^i.$$
With the above notation, the CAV iteration (2) can be expressed as

$$x^{k+1} = x^k + \lambda_k\sum_{i=1}^{m} G_i\big(Q_i x^k - x^k\big). \qquad (7)$$
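Under our reading of (6) and (7), the generalized oblique projection $Q_i$ and the CAV step can be sketched as follows (helper names are ours). Note that $Q_i z$ lands exactly on the hyperplane $H_i$, and that form (7) reproduces the componentwise CAV formula (2):

```python
import numpy as np

def oblique_projection(z, a, b_i, g):
    """Q_i z = z + (b_i - <a^i, z>)/<a^i, G_i^+ a^i> * G_i^+ a^i, G_i = diag(g)."""
    g_dagger = np.divide(1.0, g, out=np.zeros_like(g), where=g != 0)
    Ga = g_dagger * a                       # G_i^+ a^i (a and g share support)
    return z + (b_i - a @ z) / (a @ Ga) * Ga

def cav_iteration(x, A, b, lam=1.0):
    """CAV in the form (7): x + lam * sum_i G_i (Q_i x - x)."""
    s = np.count_nonzero(A, axis=0).astype(float)
    step = np.zeros_like(x)
    for i in range(A.shape[0]):
        g = (A[i] != 0) / s                 # diagonal of G_i
        step += g * (oblique_projection(x, A[i], b[i], g) - x)
    return x + lam * step
```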
Algorithm 1 can be modified into a block CAV method for compressed sensing based tomography (BCAVCS) for image reconstruction, described as follows:
Algorithm 2. BCAVCS
1. for k = 0, 1, 2, ···
2. for j = 1 to p
3. $x^k \leftarrow x^k + \lambda_k\sum_{i\in B_j} G_i\big(Q_i x^k - x^k\big)$
4. $x^k \leftarrow x^k - \beta_k d$, where $d = \nabla\|x^k\|_{TV}\,\big/\,\big\|\nabla\|x^k\|_{TV}\big\|$
5. end
6. $x^{k+1} \leftarrow x^k$
7. end
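The update on line 3 restricts the CAV sum to the rows of block $B_j$; a minimal sketch (our own helper name, with $s_j$ computed from the full matrix):

```python
import numpy as np

def bcav_block_step(x, A, b, block, lam=1.0):
    """Line 3 of BCAVCS: x <- x + lam * sum_{i in B_j} G_i (Q_i x - x)."""
    s = np.count_nonzero(A, axis=0).astype(float)   # s_j over the full A
    Ab, bb = A[block], b[block]
    denom = (Ab ** 2) @ s                           # sum_l s_l (a^i_l)^2 per row
    return x + lam * Ab.T @ ((bb - Ab @ x) / denom)
```

For a 0-1 block with exactly one 1 per column and orthogonal rows, a single step with $\lambda_k = 1$ already satisfies the block's equations exactly, in line with Theorem 3 below.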
In order to derive the convergence of Algorithm 2 under a certain condition, we introduce an operator $T_i: R^n \to R^n$, for $i = 1,\cdots,m$, by

$$T_i z = z + \lambda_k G_i(Q_i - I)z,$$

where I is the identity operator. For an index vector $t = (t_1, \cdots, t_r)$ in $\{1,\cdots,m\}$, we denote by $T[t]$ the composition of the operators $T_{t_1},\cdots,T_{t_r}$,

$$T[t] = T_{t_r}\cdots T_{t_1}. \qquad (8)$$
We have the following property for the operator $T[t]$.
Lemma 2. Let $\{G_i\}_{i=1}^{m}$ be SPO w.r.t. $A \in R^{m\times n}$ and let $t = (t_1, \cdots, t_r)$ be an index vector in $\{1,\cdots,m\}$ such that $G_kG_l = O$ for any distinct coordinates k, l in t. Then for $z \in R^n$,

$$T[t]z = z + \sum_{i\in\{t_1,\cdots,t_r\}}(T_i - I)z. \qquad (9)$$
Proof. Let $w = T_h z = z + \lambda_k G_h(Q_h - I)z$. It follows from $G_lG_h = O$ that $G_l w = G_l z$ and $\langle a^l, w\rangle = \langle a^l, z\rangle$. Consequently, we have

$$G_l(Q_l w - w) = \frac{b_l - \langle a^l, w\rangle}{\langle a^l, G_l^\dagger a^l\rangle}\,G_l G_l^\dagger a^l = G_l(Q_l z - z).$$

Therefore,

$$T_l T_h z = T_l w = w + \lambda_k G_l(Q_l - I)w = z + \lambda_k G_h(Q_h - I)z + \lambda_k G_l(Q_l - I)z = z + (T_h z - z) + (T_l z - z).$$

It follows by induction that

$$T[t]z = T_{t_r}\cdots T_{t_1}z = z + \sum_{i\in\{t_1,\cdots,t_r\}}(T_i - I)z.$$

We complete the proof of the lemma.
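Lemma 2 can be checked numerically: for two rows with disjoint supports (so $G_1G_2 = O$), composing $T_2T_1$ coincides with summing the individual increments. A sketch with our own helper names:

```python
import numpy as np

def make_T(a, b_i, g, lam=1.0):
    """T_i z = z + lam * G_i (Q_i z - z) for G_i = diag(g)."""
    def T(z):
        denom = np.sum(a[g != 0] ** 2 / g[g != 0])   # <a, G^+ a>
        return z + lam * (b_i - a @ z) / denom * a   # G G^+ a = a on supp(a)
    return T

# Two rows with disjoint supports, hence G_1 G_2 = O.
A = np.array([[1., 2., 0., 0.],
              [0., 0., 3., 1.]])
b = np.array([4., 5.])
s = np.count_nonzero(A, axis=0).astype(float)
T1 = make_T(A[0], b[0], (A[0] != 0) / s)
T2 = make_T(A[1], b[1], (A[1] != 0) / s)

z = np.array([1., -1., 2., 0.5])
lhs = T2(T1(z))                          # T[t] z
rhs = z + (T1(z) - z) + (T2(z) - z)      # z + sum_i (T_i - I) z
assert np.allclose(lhs, rhs)             # identity (9)
```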
The convergence of BCAVCS in a certain case will be shown below using the concepts of SPO matrices and the generalized oblique projection. Suppose that Ax = b is generated by the strip-based projection model in discrete tomography or CT [16,17]. The index vector $t_j = (t_1^j, \cdots, t_{r_j}^j)$ is associated with the jth subsystem $A_jx = b_j$ of (5), determined by one projection direction. In each 0-1 submatrix $A_j$, there is exactly one 1 in each column, i.e., $s_j = 1$. Therefore all rows of $A_j$ are orthogonal, so $G_kG_l = O$ for distinct coordinates k and l in $t_j$. In this case we will show the following.
Theorem 3. If each column of every block $A_j$, $1 \le j \le p$, in (5) has exactly one 1, then the sequence $\{x^k\}$ generated by Algorithm 2 with $\lambda_k = 1$ converges and its limit is a solution of (1).
Proof. Recall that the inner loop of BCPCS for the jth block of A can be represented by an operator $P[t_j]$ defined by

$$P[t_j]x^k = P_{t_{r_j}^j}\cdots P_{t_1^j}x^k,$$

where

$$P_i x^k = x^k + \lambda_k\,\frac{b_i - \langle a^i, x^k\rangle}{\|a^i\|_2^2}\,a^i.$$

Then $P_i = I + \lambda_k G_i(Q_i - I) = T_i$ for all $i \in \{t_1^j, \cdots, t_{r_j}^j\}$, and

$$T[t_j]x^k = T_{t_{r_j}^j}\cdots T_{t_1^j}x^k = P_{t_{r_j}^j}\cdots P_{t_1^j}x^k = P[t_j]x^k.$$
It follows from (9) that the iteration scheme of the inner loop of BCAVCS for the jth block can be expressed as

$$x^k \leftarrow x^k + \lambda_k\sum_{i\in B_j}G_i\big(Q_i x^k - x^k\big) = \Big(\sum_{i\in B_j}(T_i - I) + I\Big)x^k = T[t_j]x^k = P[t_j]x^k.$$

This means that, for the jth block of the matrix A, the BCAVCS method is equivalent to the BCPCS algorithm. The convergence of BCAVCS with $\lambda_k = 1$ then follows from the convergence of BCPCS with $\lambda_k = 1$ by Theorem 1.
It is remarked that the rate of convergence of BCAVCS is, in general, close to that of BCPCS in sequential computing. Hence, the BCAVCS algorithm with parallel computing will be significantly faster than the BCPCS algorithm.
3. BDROPCS Algorithm and Its
Convergence
The convergence of the DROP and block DROP methods
was proved [5]. In this section, we propose a block
DROP algorithm modified for compressed sensing based
tomography (BDROPCS) and show its convergence in a
certain case. We denote by $s_j$ the number of nonzero entries in each column within the jth block and define a DROP operator $P[t_j]$ corresponding to the jth block as

$$P[t_j]x = x + \frac{\lambda_k}{s_j}\sum_{i\in B_j} w_i\,\frac{b_i - \langle a^i, x\rangle}{\|a^i\|_2^2}\,a^i. \qquad (10)$$
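The block DROP operator (10) can be sketched in NumPy as follows (our own helper name; $s_j$ is taken as the per-column nonzero count within the block, assumed equal across columns as in the strip model):

```python
import numpy as np

def drop_block_op(x, A, b, block, w=None, lam=1.0):
    """P[t_j] x = x + (lam/s_j) * sum_{i in B_j} w_i (b_i - <a^i, x>)/||a^i||^2 a^i."""
    Ab, bb = A[block], b[block]
    w = np.ones(len(block)) if w is None else w
    s_j = np.count_nonzero(Ab, axis=0).max()        # common column count in block
    r = w * (bb - Ab @ x) / np.einsum('ij,ij->i', Ab, Ab)
    return x + lam / s_j * Ab.T @ r
```

For a strip-model block (each column has exactly one 1), $s_j = 1$ and one application with $\lambda_k = w_i = 1$ solves the block's equations exactly.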
Algorithm 3. BDROPCS
1. for k = 0, 1, 2, ···
2. for j = 1 to p
3. $x^k \leftarrow P[t_j]x^k$ (a block DROP sweep)
4. $x^k \leftarrow x^k - \beta_k d$, where $d = \nabla\|x^k\|_{TV}\,\big/\,\big\|\nabla\|x^k\|_{TV}\big\|$
5. end
6. $x^{k+1} \leftarrow x^k$
7. end
We first show that in a certain case DROP is identical to ART.
Theorem 4. Let $A \in R^{m\times n}$ have exactly the same number s of nonzero entries in each column and let the rows of A be orthogonal. Then $x^{k+1}$ generated by DROP in (3) is the same as the vector generated by Algorithm ART with the constant parameter $\lambda_k/s$ and weights $w_i$.
Proof. Under the assumption, the DROP method can be expressed as

$$x^{k+1} = x^k + \frac{\lambda_k}{s}\sum_{i=1}^{m} w_i\,\frac{b_i - \langle a^i, x^k\rangle}{\|a^i\|_2^2}\,a^i. \qquad (11)$$

For convenience, we rewrite Algorithm ART with the constant parameter $\lambda_k/s$ and weights $w_i$ for one cycle, which yields $x^{k+1}$ from $x^k$, as follows:

$x^{[1]} = x^k$,
for $i = 1$ to m

$$x^{[i+1]} = x^{[i]} + \frac{\lambda_k}{s}\,w_i\,\frac{b_i - \langle a^i, x^{[i]}\rangle}{\|a^i\|_2^2}\,a^i, \qquad (12)$$

end
$x^{k+1} = x^{[m+1]}$.
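Theorem 4 can be verified numerically: for a matrix with orthogonal rows and the same nonzero count s in every column, one DROP sweep (11) matches one weighted ART cycle (12) with parameter $\lambda_k/s$. A sketch with our own variable names:

```python
import numpy as np

# Orthogonal rows; every column has exactly s = 1 nonzero entry.
A = np.array([[2., 0., 1., 0.],
              [0., 3., 0., 4.]])
b = np.array([5., 6.])
w = np.array([0.7, 1.2])
lam, s = 1.0, 1

x0 = np.array([1., 1., 1., 1.])

# DROP, one iteration (11).
norms = np.einsum('ij,ij->i', A, A)          # ||a^i||^2 per row
x_drop = x0 + lam / s * A.T @ (w * (b - A @ x0) / norms)

# Weighted ART, one cycle (12) with constant parameter lam/s.
x = x0.copy()
for i in range(A.shape[0]):
    a = A[i]
    x = x + lam / s * w[i] * (b[i] - a @ x) / (a @ a) * a

assert np.allclose(x_drop, x)
```

The orthogonality of the rows is what makes the sequential inner products in (12) equal to the fixed ones in (11), exactly as in the induction step of the proof.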
By the assumption on A, $\langle a^i, a^l\rangle = 0$ for $i \ne l$. It follows that, for $l < i$,

$$\langle a^i, x^{[l+1]}\rangle = \langle a^i, x^{[l]}\rangle.$$

It is easy to see by induction that

$$\langle a^i, x^{[i]}\rangle = \langle a^i, x^{[1]}\rangle = \langle a^i, x^k\rangle, \quad i = 1,\cdots,m.$$

Then (12) can be rewritten as

$$x^{[i+1]} = x^{[i]} + \frac{\lambda_k}{s}\,w_i\,\frac{b_i - \langle a^i, x^k\rangle}{\|a^i\|_2^2}\,a^i,$$

from which the vector $x^{k+1}$ is given by

$$x^{k+1} = x^k + \frac{\lambda_k}{s}\sum_{i=1}^{m} w_i\,\frac{b_i - \langle a^i, x^k\rangle}{\|a^i\|_2^2}\,a^i, \qquad (13)$$

which is the same as the vector yielded by the DROP method in one cycle (11).
We now have the convergence of Algorithm 3 in the case where Ax = b is generated by the strip-based projection model in DT or CT [16,17]; it is a direct consequence of Theorems 1 and 4.
Theorem 5. If each column of every block $A_j$, $1 \le j \le p$, in (5) has exactly one 1, then $\{x^k\}$ generated by Algorithm 3 with $\lambda_k = w_i = 1$ converges and its limit is a solution of Ax = b.
It is remarked that BCAVCS is a special case of BDROPCS with $w_i = 1$. Therefore, Theorem 5 can be considered an alternative proof of Theorem 3 that does not use the generalized oblique projection. However, it is believed that the concept of the generalized oblique projection and the idea in the proof of Theorem 3 will be important for our further investigation of the convergence of BCAVCS in the general case.
4. Numerical Simulations
In order to test the performance of our algorithms in reconstructing images, we implemented the algorithms BCAVCS and BDROPCS in Matlab. The performance of the algorithms was compared with some other algorithms. Algorithm BCAV is a non-CS iterative method formed by deleting line 4 in Algorithm BCAVCS, so that no perturbation for total variation minimization is performed. Algorithm CAVCS is a nonblock CAV CS-based iterative method obtained by exchanging lines 4 and 5 in Algorithm BCAVCS, so that a perturbation for total variation minimization is performed only after all the blocks are done. Note that in our case BDROPCS is the same as BCAVCS. We also tested the sensitivity of the algorithms to additive Gaussian noise. We performed the reconstruction of the 256 × 256 Shepp-Logan phantom from a set of 20 reasonably distributed projection directions of rational slopes [16,17]. The system Ax = b was generated by the strip-based projection model, where $A \in R^{20918\times 65536}$ is highly underdetermined. Moreover, A is a sparse 0-1 matrix, and there is one and only one 1 in each column within each of the 20 blocks [17].

The experiment data with the algorithms are summarized in Table 1. We set a maximum of 500 iterations in the
test, and the algorithms terminate if the relative error reaches 0.001. The relative error

$$\frac{\|f - G\|_2}{\|f\|_2}$$

and the mean-square error

$$\mathrm{MSE} = \frac{\sum_{m,n}\big(f(m,n) - G(m,n)\big)^2}{MN}$$

for a reconstructed image G are used to measure the error. The reconstructed images and relative errors obtained by the different algorithms are shown in Figure 1. Our experimental results indicate that the algorithms BCAVCS and BDROPCS converge. Algorithm BCAVCS is demonstrated to be much better than BCAV and CAVCS and to have the same convergence rate as BDROP. It is remarked that the BCAVCS/BDROPCS algorithm is appropriate for parallel computing.

We also tested the algorithms with a real cardiac CT image [18] of size 256 × 256 from projections in 32 directions. The CT image was preprocessed to have a certain gradient sparsity. The reconstructed images and corresponding convergence curves are shown in Figure 2. The experiment indicates that BCAVCS and BDROPCS are superior to the non-CS methods as well.

To evaluate the noise characteristics of the two new algorithms, we added Gaussian noise with mean 0 and standard deviation 0.05 to the synthetic projection data from the Shepp-Logan phantom. The reconstructed images and convergence curves after 250 iterations are given in Figure 3. The noise level was evaluated by the mean-square difference between the reconstructions with and without added noise. The noise measures for the algorithms BCAV, CAVCS, and BCAVCS/BDROPCS are 0.0012, 0.0029, and 0.0023, respectively. The results indicate that the four algorithms have similar sensitivity to data noise.

Table 1. Experiment data of Shepp-Logan phantom.

Algorithm   Iteration Number   Time (s)   Error   MSE
BCAV        500                55         0.458   0.013
CAVCS       500                57         0.075   0.073
BCAVCS      404                75         0.001   0.000
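The two error measures used in Table 1 can be computed as follows (a sketch; here f is the phantom and G the reconstruction):

```python
import numpy as np

def relative_error(f, G):
    """||f - G||_2 / ||f||_2 for images (Frobenius norm on 2-D arrays)."""
    return np.linalg.norm(f - G) / np.linalg.norm(f)

def mse(f, G):
    """Mean-square error: sum_{m,n} (f(m,n) - G(m,n))^2 / (M N)."""
    return np.mean((f - G) ** 2)

f = np.ones((4, 4))
G = f + 0.1   # uniform 0.1 offset: relative error 0.1, MSE 0.01
```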
Figure 1. Reconstruction of Shepp-Logan phantom with different algorithms; (a) Shepp-Logan phantom; (b) Reconstruction
by BCAV; (c) Reconstruction by CAVCS; (d) Reconstruction by BCAVCS/BDROPCS. Bottom: Convergence curves of algo-
rithms.
Figure 2. Reconstruction of a cardiac CT image with different algorithms; (a) Cardiac phantom; (b) Reconstruction by
BCAV; (c) Reconstruction by CAVCS; (d) Reconstruction by BCAVCS/BDROPCS. Bottom: Convergence curves of algo-
rithms.
Figure 3. Reconstruction of Shepp-Logan phantom with noise; (a) Reconstruction by BCAV; (b) Reconstruction by CAVCS;
(c) Reconstruction by BCAVCS/BDROPCS. Bottom: Convergence curves of algorithms.
5. Conclusions
Total variation minimization is a powerful method for reconstructing piecewise constant medical images based on compressed sensing theory. We combined the block component averaging and diagonally-relaxed orthogonal projection methods, in the case of the parameter $\lambda_k = 1$, with total variation minimization in the compressed sensing framework. Their convergence is derived in the strip-based projection model.

The experiments indicate that the proposed algorithms BCAVCS and BDROPCS converge faster than algorithms that do not use block iterations or the CS framework. Moreover, algorithms BCAVCS and BDROPCS recover more details of the images. The convergence of algorithms BCAVCS and BDROPCS in the general case of $\lambda_k$ will be studied further.
6. Acknowledgements
This study was supported by a Faculty Research Award
from Georgia Southern University.
REFERENCES
[1] A. C. Kak and M. Slaney, “Principles of Computerized
Tomographic Imaging,” Society of Industrial and Applied
Mathematics, Philadelphia, 2001.
doi:10.1137/1.9780898719277
[2] P. P. B. Eggermont, G. T. Herman and A. Lent, “Iterative
Algorithms for Large Partitioned Linear Systems, with
Applications to Image Reconstruction,” Linear Algebra
and Its Applications, Vol. 40, 1981, pp. 37-67.
doi:10.1016/0024-3795(81)90139-7
[3] G. Cimmino, “Calcolo Approssimato Per le Soluzioni dei
Sistemi di Equazioni Lineari,” La Ricerca Scientifica, Se-
ries II, Vol. 9, 1938, pp. 326-333.
[4] Y. Censor, D. Gordon and R. Gordon, “Component Aver-
aging: An Efficient Iterative Parallel Algorithm for Large
and Sparse Unstructured Problems,” Parallel Computing,
Vol. 27, No. 6, 2001, pp. 777-808.
doi:10.1016/S0167-8191(00)00100-9
[5] Y. Censor, T. Elfving, G. T. Herman and T. Nikazad. “On
Diagonally-Relaxed Orthogonal Projection Methods,”
SIAM Journal on Scientific Computing, Vol. 30, No. 1,
2008, pp. 473-504. doi:10.1137/050639399
[6] R. Aharoni and Y. Censor, “Block-Iterative Projection
Methods for Parallel Computation of Solutions to Convex
Feasibility Problems,” Linear Algebra and Its Applica-
tions, Vol. 120, 1989, pp. 65-175.
doi:10.1016/0024-3795(89)90375-3
[7] Y. Censor and S. A. Zenios, “Parallel Optimization: Theory,
Algorithms, and Applications,” Oxford University Press,
Oxford, 1997.
[8] E. Candes, J. Romberg and T. Tao, “Robust Uncertainty
Principles: Exact Signal Reconstruction from Highly In-
complete Frequency Information,” IEEE Transactions on
Information Theory, Vol. 52, No. 2, 2006, pp. 489-509.
doi:10.1109/TIT.2005.862083
[9] E. Candes and M. Wakin, “An Introduction to Compres-
sive Sampling,” IEEE Signal Processing Magazine, Vol.
25, No. 2, 2008, pp. 21-30.
doi:10.1109/MSP.2007.914731
[10] D. Donoho, “Compressed Sensing,” IEEE Transactions on
Information Theory, Vol. 52, No. 4, 2006, pp. 1289-1306.
doi:10.1109/TIT.2006.871582
[11] C. E. Shannon, “Communication in the Presence of Noise,”
Proceedings of the IEEE, Vol. 86, No. 2, 1998, pp. 447-457.
doi:10.1109/JPROC.1998.659497
[12] E. Candes and M. Wakin, “Enhancing Sparsity by Re-
weighted L1 Minimization,” Journal of Fourier Analysis
and Applications, Vol. 14, No. 5-6, 2008, pp. 877-905.
doi:10.1007/s00041-008-9045-x
[13] H. Yu and G. Wang, “Compressed Sensing Based Interior
Tomography,” Physics in Medicine and Biology, Vol. 54,
No. 9, 2009, pp. 2791-2805.
doi:10.1088/0031-9155/54/9/014
[14] X. Li and J. Zhu, “Convergence of Block Cyclic Projec-
tion and Cimmino Algorithms for Compressed Sensing
Based Tomography,” Journal of X-Ray Science and Tech-
nology, Vol. 18, No. 4, 2010, pp. 1-11.
[15] D. Butnariu, R. Davidi, G. T. Herman and I. G. Kazantsev,
“Stable Convergence Behavior under Summable Pertur-
bation of a Class of Projection Methods for Convex Feasibil-
ity and Optimization Problems,” IEEE Journal of Selected
Topics in Signal Processing, Vol. 1, No. 4, 2007, pp. 540-
547.
[16] J. Zhu, X. Li, Y. Ye and G. Wang, “Analysis on the Strip-
Based Projection for Discrete Tomography,” Discrete Ap-
plied Mathematics, Vol. 156, No. 12, 2008, pp. 2359-2367.
doi:10.1016/j.dam.2007.10.011
[17] X. Li and J. Zhu, “A Note of Reconstruction Algorithm of
the Strip-Based Projection Model for Discrete Tomogra-
phy,” Journal of X-Ray Science and Technology, Vol. 16,
No. 4, 2008, pp. 253-260.
[18] TEAM RADS. http://teamrads.com/