Journal of Signal and Information Processing, 2013, 4, 385-393
Published Online November 2013 (http://www.scirp.org/journal/jsip)
http://dx.doi.org/10.4236/jsip.2013.44049
FIR System Identification Using Feedback
T. J. Moir
School of Engineering, AUT University, Auckland, New Zealand.
Email: Tom.Moir@aut.ac.nz
Received September 20th, 2013; revised October 18th, 2013; accepted October 25th, 2013
Copyright © 2013 T. J. Moir. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ABSTRACT
This paper describes a new approach to finite-impulse-response (FIR) system identification. The method differs from the traditional stochastic-approximation method used in the traditional least-mean-squares (LMS) family of algorithms in that deconvolution is used as a means of separating the impulse response from the system input data. The technique can be used as a substitute for ordinary LMS, but it has the added advantages that it can be used for constant input data (i.e. data which are not persistently exciting) and that the stability limit is far simpler to calculate. Furthermore, the convergence is much faster than that of LMS under certain types of noise input.
Keywords: LMS; Feedback; System Identification
1. Introduction
Recursive parameter (or weight) estimation has been used since the early days of least-mean squares (LMS) [1]. Along with its variants such as Normalised LMS (NLMS) [2], it has become the adaptive algorithm of choice owing to its simplicity, its good tracking ability and the fact that it applies to finite-impulse-response (FIR) filters, which are always stable. There have been many attempts at adaptive infinite-impulse-response (IIR) filters [3,4] using algorithms such as recursive least-squares (RLS) [2,5], but they are less popular due to stability problems. Besides this, the tracking ability of RLS is rather ad-hoc in nature, relying on a forgetting factor, and the algorithm complexity for L weights is $O(L^2)$ operations rather than the $O(2L+1)$ of LMS [6]. Adaptive filters have proven successful in a wide range of signal-processing applications such as noise cancellation [7,8], adaptive arrays [9], time-delay estimation [10], echo cancellation [11] and channel equalization [12].
Despite the popularity of LMS it has a few drawbacks. To name two: LMS will only give unbiased estimates when the driving signal is rich in harmonics (a persistently exciting input signal [13]), and when this driving signal is not white the convergence of the algorithm suffers, depending on the eigen-value spread of the correlation matrix [2,14,15]. Many ad-hoc approaches employing a variable step-size have been used to try to overcome this problem [16]. Gradient-based algorithms such as LMS rely on steepest descent and do not converge as fast as RLS. The literature on many of these approaches is quite old, indicating that little has changed in many respects, though there have been some more recent approaches to the same problem [17]. This paper addresses these problems by introducing a new concept in weight estimation which is not based on minimisation of any form of least-squares criterion, recursive or otherwise. Instead the paper uses classical control theory to separate the convolution of the weights with the system input by the use of high gain and integrators. Although LMS also uses high gain and integrators with feedback, the LMS approach is always geared towards correlation and the solution of the Wiener-Hopf equation. This is in fact the reason for some of its limitations in special cases. The approach used here is entirely deterministic in nature and based purely on control theory. The control loop used results in deconvolution, or the separation of the two convolved signals (the impulse response and the driving signal). The novelty in the solution is the fact that a special lower-triangular convolution matrix is used in the feedback path of this control system.
2. Feedback Deconvolution Loop 1

Consider an unknown FIR system $W(z^{-1})$ driven by a known input signal $u_k$, which can be either random or deterministic:

$$s_k = W(q^{-1})\,u_k \qquad (1)$$

Although the system to be identified is single-input single-output, the method here works on a vector of outputs consisting of n + 1 consecutive outputs. We define $z^{-1}$ as the z-transform operator and $q^{-1}$ as the backwards-shift operator, defined for a scalar discrete signal at sample instant k as $q^{-1}y_k = y_{k-1}$, or for a vector $q^{-1}\mathbf{x}_k = \mathbf{x}_{k-1}$. A vector output $\mathbf{s}_k$ is obtained from the block convolution "*" of the system input and the impulse-response vector thus

$$\mathbf{s}_k = \mathbf{W} * \mathbf{u}_k \qquad (2)$$
where the weight and input vectors, each of order n + 1, are defined respectively as $\mathbf{W} = \left[w_0, w_1, \ldots, w_n\right]^{\mathrm{T}}$ and $\mathbf{u}_k = \left[u_k, u_{k-1}, \ldots, u_{k-n}\right]^{\mathrm{T}}$, giving

$$\mathbf{W} * \mathbf{u}_k = \sum_{j=0}^{n} w_j u_{k-j} \qquad (3)$$
From the convolution Equation (3) we consider only the first n + 1 terms, so that $\mathbf{s}_k$ becomes a vector of dimension n + 1 written in terms of past values as $\mathbf{s}_k = \left[s_k, s_{k-1}, \ldots, s_{k-n}\right]^{\mathrm{T}}$. A matrix format of the convolution now follows accordingly:

$$\mathbf{s}_k = T(\mathbf{u}_k)\,\mathbf{W} \qquad (4)$$
Here $T(\mathbf{u}_k)$ is an (n + 1)-square lower-triangular Toeplitz matrix given by

$$T(\mathbf{u}_k) = \begin{pmatrix}
u_k & 0 & 0 & \cdots & 0 \\
u_{k-1} & u_k & 0 & \cdots & 0 \\
u_{k-2} & u_{k-1} & u_k & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
u_{k-n} & u_{k-n+1} & \cdots & u_{k-1} & u_k
\end{pmatrix} \qquad (5)$$

This will be known as a convolution matrix. It is distinct from the correlation matrix met in least-squares problems.
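To make the construction concrete, the following is a minimal Python/NumPy sketch (the function name conv_matrix and the numerical values are illustrative, not from the paper) of the convolution matrix of Equation (5), with a check that Equation (4) reproduces the first n + 1 terms of the block convolution of Equation (2):

```python
import numpy as np

def conv_matrix(u_vec):
    """Lower-triangular Toeplitz convolution matrix T(u_k) of Equation (5).

    u_vec = [u_k, u_{k-1}, ..., u_{k-n}], most recent sample first.
    """
    m = len(u_vec)
    T = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1):
            T[i, j] = u_vec[i - j]   # constant along each diagonal
    return T

# Check Equation (4): T(u_k) @ W equals the first n+1 terms of the
# block convolution W * u_k of Equation (2).
W = np.array([1.0, 1.5, 0.5])        # example weight vector (n = 2)
u_vec = np.array([0.4, -1.1, 0.9])   # [u_k, u_{k-1}, u_{k-2}]
assert np.allclose(conv_matrix(u_vec) @ W, np.convolve(W, u_vec)[:3])
```

Equivalently, scipy.linalg.toeplitz(u_vec, np.zeros_like(u_vec)) builds the same matrix.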
Now consider a time-variant multivariable control system as shown in Figure 1. The control-loop forward path consists of a matrix of integrators with gain K. Its error vector is defined as

$$\mathbf{e}_k = \mathbf{s}_k - \mathbf{y}_k = \mathbf{s}_k - T(\mathbf{u}_k)\,\hat{\mathbf{W}}_k \qquad (6)$$
The vector $\hat{\mathbf{W}}_k$ represents the estimate of the true weight vector $\mathbf{W}$. Assume that the closed-loop multivariable system is stable. Then, via the usual action of negative feedback, the error for large gain K becomes $\mathbf{e}_k \to \mathbf{0}$, so that $T(\mathbf{u}_k)\,\hat{\mathbf{W}}_k \to \mathbf{s}_k$ and hence $\hat{\mathbf{W}}_k \to \mathbf{W}$ as $k \to \infty$. If we initially assume that the closed-loop system can always be made stable, then for a time-varying system the control-loop output must track any such changes.
Figure 1. Multivariable System 1. (The system output vector $\mathbf{s}_k$ is compared with $\mathbf{y}_k = T(\mathbf{u}_k)\,\hat{\mathbf{W}}_{k-1}$; the error vector $\mathbf{e}_k$ drives the integrator bank $\frac{Kz^{-1}}{1-z^{-1}}\mathbf{I}$, whose output is the weight-vector estimate, with $T(\mathbf{u}_k)$ in the feedback path.)
In algorithm form, the method is quite simple. Furthermore, if $\mathbf{s}_k$ and $\mathbf{u}_k$ are reversed as inputs to the control system, then the inverse of the system impulse response will be estimated instead, provided of course that enough weights are allocated. The above algorithm is not optimal in any least-squares sense, since no cost function is minimised; in fact it is an approach to deconvolution using feedback. Note that in Figure 1 integrators are chosen since they have infinite gain at dc (giving zero steady-state error to step inputs of the weight vector), but other types of controller are possible. Although not explored here, it is possible to apply loop-shaping to the control loop by adding extra integrators and phase-lead compensation. This was explored elsewhere for the ordinary LMS algorithm [18].
3. Stability and Convergence of the Loop

For a given batch of n + 1 samples it is fairly straightforward to determine the stability of the system of Section 2. Figure 1 is a time-variant multivariable system, but for a given constant vector $\mathbf{u}_k$ we can treat the closed-loop system as time-invariant. (A more complete explanation is given in the Appendix for the case when the input vector is time-varying.) Calculating an expression for the error vector in the z-domain,

$$\mathbf{e}(z^{-1}) = \mathbf{s}(z^{-1}) - T(\mathbf{u}_k)\,\hat{\mathbf{W}}(z^{-1}) \qquad (7)$$
and

$$\hat{\mathbf{W}}(z^{-1}) = \frac{Kz^{-1}}{1 - z^{-1}}\,\mathbf{e}(z^{-1}) \qquad (8)$$
Substituting (8) into (7) and re-arranging,

$$\left[\mathbf{I} + \frac{Kz^{-1}}{1 - z^{-1}}\,T(\mathbf{u}_k)\right]\mathbf{e}(z^{-1}) = \mathbf{s}(z^{-1}) \qquad (9)$$
Simplifying further,

$$\left[(1 - z^{-1})\,\mathbf{I} + Kz^{-1}\,T(\mathbf{u}_k)\right]\mathbf{e}(z^{-1}) = (1 - z^{-1})\,\mathbf{s}(z^{-1}) \qquad (10)$$
The roots for z of the determinant of the return-difference matrix are the closed-loop poles of the system, satisfying [19]

$$\det\left[z\mathbf{I} - \left(\mathbf{I} - K\,T(\mathbf{u}_k)\right)\right] = 0 \qquad (11)$$
That is, the closed-loop characteristic polynomial of the matrix $\mathbf{I} - K\,T(\mathbf{u}_k)$ must have roots (eigen-values) which lie within the unit circle on the z-plane. Now since $T(\mathbf{u}_k)$ is a lower-triangular Toeplitz matrix, it follows that $\mathbf{I} - K\,T(\mathbf{u}_k)$ is also a lower-triangular Toeplitz matrix. Furthermore, it is well established that the eigen-values of such a matrix are just its diagonal elements, which in this case give n + 1 repeated roots

$$z_i = 1 - K u_k, \quad i = 1, 2, \ldots, n + 1 \qquad (12)$$
For stability all roots must lie within the unit circle in the z-plane. This gives us

$$\left|1 - K u_k\right| < 1 \qquad (13)$$

Excluding the special case $u_k = 0$, the gain K must satisfy

$$0 < K u_k < 2 \qquad (14)$$

Equation (14) clearly poses no problem provided $u_k$ and K are always positive. Then

$$0 < K < \frac{2}{u_k} \qquad (15)$$
However, if $u_k$ is negative then, from (14), K must also be negative for stability. This will require a time-varying K whose sign tracks the sign of $u_k$. It is interesting to see that the stability depends only on the single input sample $u_k$ and is independent of the system order. This is different from the LMS case, where the step-size bound depends on the variance of the input signal and on the number of weights. Of course $u_k$ is renewed with every new buffer of data, so the stability does not rest on just one value.
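As a numerical sanity check of (12) (an illustrative sketch with made-up values; scipy.linalg.toeplitz with a zero first row reproduces Equation (5)):

```python
import numpy as np
from scipy.linalg import toeplitz

K = 0.2
u_vec = np.array([0.7, -1.2, 0.4])            # [u_k, u_{k-1}, u_{k-2}]
T = toeplitz(u_vec, np.zeros_like(u_vec))     # T(u_k) of Equation (5)
print(np.linalg.eigvals(np.eye(3) - K * T))   # -> [0.86 0.86 0.86]
print(1 - K * u_vec[0])                       # the repeated root of Eq (12)
```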
Now we assume that the true weight vector has a constant but unknown set of weights, say $\mathbf{W}_0$. This is modelled by the step z-transform $\mathbf{W}(z^{-1}) = \mathbf{W}_0\,\frac{1}{1 - z^{-1}}$. From Figure 1, the weight-vector estimate is found from the multivariable system according to

$$\hat{\mathbf{W}}(z^{-1}) = \frac{Kz^{-1}}{1 - z^{-1}}\,\mathbf{e}(z^{-1}) \qquad (16)$$
and the error vector is found from

$$\mathbf{e}(z^{-1}) = \mathbf{s}(z^{-1}) - T(\mathbf{u}_k)\,\hat{\mathbf{W}}(z^{-1}) \qquad (17)$$
Substituting (17) into (16) and re-arranging gives the multivariable transfer-function matrix relating the estimated weight vector to the signal vector:

$$\hat{\mathbf{W}}(z^{-1}) = K\left[z\mathbf{I} - \left(\mathbf{I} - K\,T(\mathbf{u}_k)\right)\right]^{-1}\mathbf{s}(z^{-1}) \qquad (18)$$

which, as previously, requires all the eigen-values of the matrix $\mathbf{I} - K\,T(\mathbf{u}_k)$ to lie within the unit circle on the z-plane. With stability satisfied via (15), we can now examine the convergence. We can write in (18) that $\mathbf{s}(z^{-1}) = T(\mathbf{u}_k)\,\mathbf{W}(z^{-1})$ and apply the step input vector in the z-domain:

$$\mathbf{W}(z^{-1}) = \mathbf{W}_0\,\frac{1}{1 - z^{-1}}$$
So that (18) becomes

$$\hat{\mathbf{W}}(z^{-1}) = K\left[z\mathbf{I} - \left(\mathbf{I} - K\,T(\mathbf{u}_k)\right)\right]^{-1} T(\mathbf{u}_k)\,\mathbf{W}_0\,\frac{1}{1 - z^{-1}} \qquad (19)$$
Applying the z-transform final-value theorem to (19), and noting that at z = 1 the bracketed matrix reduces to $K\,T(\mathbf{u}_k)$, we get

$$\lim_{k \to \infty} \hat{\mathbf{W}}_k = \lim_{z \to 1}\left(1 - z^{-1}\right)\hat{\mathbf{W}}(z^{-1}) = K\left[K\,T(\mathbf{u}_k)\right]^{-1} T(\mathbf{u}_k)\,\mathbf{W}_0 \qquad (20)$$

$$\hat{\mathbf{W}}_k \to \mathbf{W}_0 \qquad (21)$$

so that the weights converge to the true values provided the closed-loop system is stable.
Algorithm 3.1: Deconvolution Loop 1. Select the magnitude of the loop gain $K_0 > 0$.
Loop: {Fetch the most recent input and output data vectors $\mathbf{u}_k$, $\mathbf{s}_k$, using at least n + 1 samples. Monitor the first sample $u_k$ within the vector $\mathbf{u}_k$ and make $K = K_0\,\mathrm{sgn}(u_k)$.
1) Update the error vector: $\mathbf{e}_k = \mathbf{s}_k - T(\mathbf{u}_k)\,\hat{\mathbf{W}}_{k-1}$.
2) Update the weight vector: $\hat{\mathbf{W}}_k = \hat{\mathbf{W}}_{k-1} + K\,\mathbf{e}_k$.}
where in the above $T(\mathbf{u}_k)$ is formed by Equation (5). For L weights, the algorithm has $O(L^2)$ operations.
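A minimal simulation sketch of Algorithm 3.1 follows (illustrative Python; it assumes the output vectors are generated exactly by the vector model of Equation (4), with no measurement noise, and all names and values are this sketch's own):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
W_true = np.array([1.0, 1.5, 0.5])   # unknown weights, n + 1 = 3
n, K0, N = 2, 0.2, 400
u = rng.standard_normal(N)           # known driving signal

W_hat = np.zeros(n + 1)
for k in range(n, N):
    u_vec = u[k - n:k + 1][::-1]                  # [u_k, ..., u_{k-n}]
    T = toeplitz(u_vec, np.zeros_like(u_vec))     # Equation (5)
    s_vec = T @ W_true                            # output vector, Equation (4)
    K = K0 * np.sign(u_vec[0])                    # track the sign of u_k
    e = s_vec - T @ W_hat                         # 1) error vector
    W_hat = W_hat + K * e                         # 2) integrator update
print(W_hat)                                      # -> approx. [1.0 1.5 0.5]
```

Note that np.sign(0.0) = 0 effectively skips the update when $u_k = 0$, matching the excluded special case.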
4. Feedback Deconvolution Loop 2

The problem with the control loop discussed in Section 2 is that the stability condition (15) makes the loop gain dependent on the sign of $u_k$, the first sample of each vector of input data fed to the unknown system. This means that the input sign must be monitored and the sign of K switched. A slight modification can overcome this problem. Consider Figure 2.

Figure 2. Multivariable System 2. (As Figure 1, but with the convolution matrix $T(\mathbf{u}_k)$ in the forward path as well as the feedback path.)

It can be seen that the convolution matrix $T(\mathbf{u}_k)$ has now been added to the forward path as well as the feedback path. We now have

$$\hat{\mathbf{W}}(z^{-1}) = \frac{Kz^{-1}}{1 - z^{-1}}\,T(\mathbf{u}_k)\,\mathbf{e}(z^{-1}) \qquad (22)$$

and

$$\mathbf{e}(z^{-1}) = \mathbf{s}(z^{-1}) - T(\mathbf{u}_k)\,\hat{\mathbf{W}}(z^{-1}) \qquad (23)$$
Substituting the error vector (23) into (22) and following a similar approach to Section 2, we find the relationship

$$\hat{\mathbf{W}}(z^{-1}) = K\left[z\mathbf{I} - \left(\mathbf{I} - K\,T^2(\mathbf{u}_k)\right)\right]^{-1} T(\mathbf{u}_k)\,\mathbf{s}(z^{-1}) \qquad (24)$$
and, using the z-transform of (4),

$$\hat{\mathbf{W}}(z^{-1}) = K\left[z\mathbf{I} - \left(\mathbf{I} - K\,T^2(\mathbf{u}_k)\right)\right]^{-1} T^2(\mathbf{u}_k)\,\mathbf{W}(z^{-1}) \qquad (25)$$
The stability is governed by the roots of the polynomial found from the return-difference matrix, $\det\left[(z - 1)\,\mathbf{I} + K\,T^2(\mathbf{u}_k)\right] = 0$. Here, however, we have the product of two identical lower-triangular matrices, $T^2(\mathbf{u}_k)$. The product of two such matrices is of the form

$$T^2(\mathbf{u}_k) = \begin{pmatrix}
u_k^2 & 0 & 0 & \cdots & 0 \\
\times & u_k^2 & 0 & \cdots & 0 \\
\times & \times & u_k^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
\times & \times & \cdots & \times & u_k^2
\end{pmatrix} \qquad (26)$$
(where the × represent cross-terms), which is another lower-triangular matrix, with diagonal elements $u_k^2$. Following the same arguments as in Section 3, we can easily show that the condition for stability of the multivariable loop is now

$$0 < K < \frac{2}{u_k^2} \qquad (27)$$

which makes the gain always positive. Furthermore, from (25), for constant weights we can also show that the weights converge to their true values asymptotically, $\hat{\mathbf{W}}_k \to \mathbf{W}_0$. Compare this with the LMS algorithm, which has a condition for convergence in the mean square equivalent to

$$0 < \mu < \frac{2}{\sigma^2\,(n + 1)}$$

where $\sigma^2$ is the variance of the input driving noise and $\mu$ is the step-size. Clearly, when taken over a large number of batches of data with a random input driving signal, (27) becomes

$$0 < K < \frac{2}{\sigma^2}$$

which is not dependent on the system order n.
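A quick numerical confirmation of (26) and (27) (illustrative values): the diagonal of $T^2(\mathbf{u}_k)$ is $u_k^2$ whatever the sign of $u_k$, so a fixed positive gain suffices.

```python
import numpy as np
from scipy.linalg import toeplitz

u_vec = np.array([-0.8, 0.5, 1.3])               # note the negative u_k
T = toeplitz(u_vec, np.zeros_like(u_vec))        # Equation (5)
T2 = T @ T
print(np.diag(T2))                               # -> [0.64 0.64 0.64] = u_k**2
K = 0.2                                          # any 0 < K < 2/u_k**2 = 3.125
print(np.linalg.eigvals(np.eye(3) - K * T2))     # -> repeated 0.872, stable
```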
Algorithm 4.1: Deconvolution Loop 2. Select the magnitude of the loop gain K.
Loop: {Fetch the most recent input and output data vectors $\mathbf{u}_k$, $\mathbf{s}_k$, using at least n + 1 samples. Monitor the first sample $u_k$ within the vector $\mathbf{u}_k$ and make K satisfy $0 < K < 2/u_k^2$.
1) Update the error vector: $\mathbf{e}_k = \mathbf{s}_k - T(\mathbf{u}_k)\,\hat{\mathbf{W}}_{k-1}$.
2) Update the weight vector: $\hat{\mathbf{W}}_k = \hat{\mathbf{W}}_{k-1} + K\,T(\mathbf{u}_k)\,\mathbf{e}_k$.}
where in the above $T(\mathbf{u}_k)$ is formed by Equation (5). For L weights the algorithm also has $O(L^2)$ operations.
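A corresponding sketch of Algorithm 4.1 (same illustrative setup as the Algorithm 3.1 sketch above; data again generated by the vector model of Equation (4)) shows the only changes: a fixed positive gain and the extra multiplication by $T(\mathbf{u}_k)$ in the update.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
W_true = np.array([1.0, 1.5, 0.5])
n, K, N = 2, 0.2, 400       # K fixed and positive, below 2/u_k^2 for typical u_k
u = rng.standard_normal(N)

W_hat = np.zeros(n + 1)
for k in range(n, N):
    u_vec = u[k - n:k + 1][::-1]
    T = toeplitz(u_vec, np.zeros_like(u_vec))
    s_vec = T @ W_true                     # Equation (4)
    e = s_vec - T @ W_hat                  # 1) error vector
    W_hat = W_hat + K * (T @ e)            # 2) forward-path T(u_k), no sign switch
print(W_hat)                               # -> approx. [1.0 1.5 0.5]
```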
5. Illustrative Examples

Example 1: Consider an FIR system with three unknown weights:

$$W(z^{-1}) = b_0 + b_1 z^{-1} + b_2 z^{-2} = 1 + 1.5 z^{-1} + 0.5 z^{-2}$$

Let the system be driven by unity-variance white noise which is dc-free. Select a forward-path gain (and LMS step-size) of K = 0.2. Figures 3-5 show the weight convergence of Algorithms 3.1, 4.1 and LMS respectively for zero additive measurement noise.
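The comparisons with LMS in this example can be reproduced along the following lines (a sketch under the stated settings; the noise scaling for a given SNR and the scalar LMS regressor convention are this sketch's assumptions):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
W_true = np.array([1.0, 1.5, 0.5])
n, K, N = 2, 0.2, 2000
u = rng.standard_normal(N)                       # unity-variance white input
snr_db = 12.0                                    # set very large for noise-free
sig_pow = np.var(np.convolve(u, W_true)[:N])
sigma_v = np.sqrt(sig_pow / 10 ** (snr_db / 10))

W_fb = np.zeros(n + 1)                           # Algorithm 3.1
W_lms = np.zeros(n + 1)                          # ordinary LMS, same step-size
for k in range(n, N):
    u_vec = u[k - n:k + 1][::-1]
    T = toeplitz(u_vec, np.zeros_like(u_vec))
    v = sigma_v * rng.standard_normal(n + 1)     # measurement noise
    s_vec = T @ W_true + v                       # noisy output vector, Eq (4)
    W_fb += K * np.sign(u_vec[0]) * (s_vec - T @ W_fb)
    d = u_vec @ W_true + v[0]                    # scalar noisy system output
    W_lms += K * u_vec * (d - u_vec @ W_lms)     # LMS update
print(W_fb, W_lms)   # W_fb settles smoothly near [1.0 1.5 0.5]; W_lms is noisier
```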
It can be seen that the new algorithms exhibit overshoot but that the convergence time is very similar. Algorithm 4.1 has around 90% more overshoot than Algorithm 3.1. If we compare a norm of the weight error $\mathbf{W} - \hat{\mathbf{W}}_k$ for each case, we see the comparison in Figure 6.

Figure 3. Weight convergence for Algorithm 3.1. No measurement noise.
Figure 4. Weight convergence for Algorithm 4.1. No measurement noise.
Figure 5. Weight convergence for the LMS algorithm. No measurement noise.
Figure 6. Comparison of weight-error norm for zero measurement noise.

LMS gives the fastest performance of the three algorithms. The norm is taken rather than the mean-square error because the new algorithms have vector-based errors, as opposed to LMS, which has a scalar error.
If we add zero-mean uncorrelated white measurement noise for an SNR of 12 dB and repeat the above simulation, we get some interesting results. In Figures 7-9 it is seen that the LMS estimates are not as smooth as those of the new algorithms.

Figure 7. Weight convergence for Algorithm 3.1. 12 dB SNR.
Figure 8. Weight convergence for Algorithm 4.1. 12 dB SNR.
Figure 9. Weight convergence for the LMS algorithm. 12 dB SNR.
Figure 10. Comparison of error norm for 12 dB measurement noise and K = 0.2.

The smoother convergence is illustrated in Figure 10 by comparing the weight-error norms. LMS shows large error fluctuations compared with the feedback algorithms, which both give similar, much smoother performance. Of course the LMS case can be much improved by lowering the step-size, at the expense of a much slower convergence rate. Figure 11 shows a comparison of the weight-error norms for a gain K = 0.05.

Figure 11. Comparison of weight-error norm for 12 dB measurement noise and gain reduction K = 0.05.

LMS is the superior of the three if convergence rate is sacrificed, but it still has noisy fluctuations in the weight-error norm. We can conclude from this example that the new algorithms give smoother weight estimates than LMS, but that LMS can outperform the new algorithms when the measurement noise is zero, provided the loop gain is not too high.
Example 2: It is well established that ordinary LMS has problems when the eigen-value spread of the correlation matrix is large [2]. This leads to slow convergence, and no amount of increasing the step-size will help, since instability will result. Consider a system with zero measurement noise and with two weights to be estimated,

$$W(z^{-1}) = b_0 + b_1 z^{-1} = 1 + 0.5 z^{-1}$$

which is driven by filtered noise $u_k$, obtained by passing a white-noise signal $\varepsilon_k$ of variance 0.003025 through an autoregressive filter of order two:

$$u_k = a_1 u_{k-1} + a_2 u_{k-2} + \varepsilon_k$$

The parameters of the autoregressive model are chosen as $a_1 = 1.9114$ and $a_2 = -0.95$ to make the eigenvalue spread of the correlation matrix $\chi(\mathbf{R}_{uu}) = 100$ [2]. (For the previous example we can say that $\chi(\mathbf{R}_{uu}) = 1$.) The driving white noise $\varepsilon_k$ is suitably scaled so that the variance of the filtered signal $u_k$ is unity. The step-size (gain) of the loop was set initially to K = 0.2, which was the maximum step-size that the LMS algorithm could tolerate without becoming unstable. Figure 12 shows the LMS estimates when the step-size is at the maximum that LMS can tolerate without instability.
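The driving signal of this example can be generated as follows (a sketch; the AR sign convention and the final normalisation are assumptions consistent with the quoted eigenvalue spread):

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2 = 1.9114, -0.95                 # AR(2) parameters, spread ~ 100
N = 20000
eps = np.sqrt(0.003025) * rng.standard_normal(N)
u = np.zeros(N)
for k in range(2, N):
    u[k] = a1 * u[k - 1] + a2 * u[k - 2] + eps[k]
u /= u.std()                           # scale the filtered variance to unity

# Eigenvalue spread of the 2x2 correlation matrix R_uu
r0 = np.mean(u * u)
r1 = np.mean(u[1:] * u[:-1])
lam = np.linalg.eigvalsh(np.array([[r0, r1], [r1, r0]]))
print(lam.max() / lam.min())           # -> of the order of 100
```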
The LMS algorithm takes about 1000 steps to converge. If the step-size is increased further, the LMS algorithm becomes unstable, whereas Figure 13 shows that Algorithm 3.1 is perfectly stable with gain to spare.

Figure 12. Weight convergence for the LMS algorithm and $\chi(\mathbf{R}_{uu}) = 100$.
Figure 13. Weight convergence for Algorithm 3.1 and $\chi(\mathbf{R}_{uu}) = 100$, K = 0.2.
Figure 14. Comparison of weight-error norm for $\chi(\mathbf{R}_{uu}) = 100$, K = 0.2.

The weight-error norms are compared for the same step-size in Figure 14. In order to make a fair comparison with LMS, it should be pointed out that Algorithm 3.1 in Figure 14 does not use the maximum step-size (gain). A comparison of Algorithms 3.1 and 4.1 is shown in Figure 15 for this example when the gain (step-size) of the loop is increased to K = 10. The LMS case is not shown in Figure 15 since it becomes unstable. Although Algorithm 4.1 has a much slower convergence rate than Algorithm 3.1, it is still at least five times as fast as the fastest achievable LMS (by comparison with Figure 12). The reason for the significant increase in convergence rate is that the new methods do not use a correlation matrix at all, and hence there is no equivalent inversion of such a matrix as in ordinary least-squares problems. Algorithm 3.1 is around 100 times faster than LMS for the large eigenvalue-spread case.
It is worth comparing with the RLS algorithm for this case (Figure 16). An initial error-covariance matrix of diag{1000, 1000} (for two parameters) was set up in order to get fast convergence. Unlike LMS, RLS is known not to be sensitive to the correlation-matrix eigen-value spread [2]. Algorithm 4.1 and LMS (not shown) are much slower than RLS, but Algorithm 3.1 is about twice as fast as RLS, as shown in Figure 16. The gain of Algorithm 3.1 was adjusted in order to achieve the fastest convergence.
Example 3: The problem of an input signal which is not persistently exciting. Consider the non-minimum-phase system

$$W(z^{-1}) = \frac{1 + 2z^{-1}}{1 - 1.5z^{-1} + 0.5z^{-2}}$$

Figure 15. Weight-error norm for Algorithm 4.1 and Algorithm 3.1, K = 10 and $\chi(\mathbf{R}_{uu}) = 100$.
Figure 16. Weight-error norm for Algorithm 3.1 and RLS, K = 15 and $\chi(\mathbf{R}_{uu}) = 100$.

The system has an FIR equivalent obtained by a Taylor-series expansion. Hence we find (to six weights of approximation)

$$W(z^{-1}) \approx 1 + z^{-1} + 1.5z^{-2} + z^{-3} + 0.25z^{-4} + 0.25z^{-5}$$
A steady dc (unit-step) signal is fed to the input of the system, and in the steady state a comparison of various recursive estimation schemes was made. Note that the input here is essentially a step input and not dc per se, but nevertheless some algorithms have difficulty with this type of driving signal. It was found that the new methods of Algorithms 3.1 and 4.1 both converged to the exact values

$$\hat{\mathbf{W}}_{3.1} = \hat{\mathbf{W}}_{4.1} = \left\{1, 1, 1.5, 1, 0.25, 0.25\right\}$$

in around six iterations. Conversely, the LMS algorithm converged to

$$\hat{\mathbf{W}}_{\mathrm{LMS}} = \left\{0.378, 0.418, 1.416, 0.355, 0.261, 0.172\right\}$$

Clearly LMS gives a completely wrong answer. RLS, however, converges very close to the true values, and as fast as Algorithms 3.1 and 4.1:

$$\hat{\mathbf{W}}_{\mathrm{RLS}} = \left\{0.999, 0.999, 1.499, 0.999, 0.25, 0.249\right\}$$
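The constant-input behaviour is easy to reproduce (a sketch; the six-weight target below is the truncated FIR expansion quoted above, and the outputs are generated by the vector model of Equation (4)). Note that with a unit-step input and K = 1, the matrix $\mathbf{I} - K\,T(\mathbf{u}_k)$ is nilpotent, so the estimate is exact after n + 1 = 6 iterations, consistent with the roughly six iterations reported:

```python
import numpy as np
from scipy.linalg import toeplitz

W_true = np.array([1.0, 1.0, 1.5, 1.0, 0.25, 0.25])  # six-weight FIR target
n = 5
u_vec = np.ones(n + 1)                 # constant (step) input: u_k = 1
T = toeplitz(u_vec, np.zeros_like(u_vec))
s_vec = T @ W_true                     # steady-state output vector, Eq (4)

K = 1.0                                # within 0 < K < 2/u_k = 2; deadbeat here
W_hat = np.zeros(n + 1)
for _ in range(6):                     # (I - K*T) is nilpotent: exact in 6 steps
    W_hat += K * (s_vec - T @ W_hat)
print(W_hat)                           # -> [1. 1. 1.5 1. 0.25 0.25] exactly
```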
6. Conclusion

Two new algorithms have been demonstrated which use feedback instead of correlation methods to estimate the weights of an unknown FIR system. It has been shown that the new algorithms give much smoother estimates of the weights than ordinary LMS. Other than that, there is little difference until a driving signal is used whose correlation matrix has widely dispersed eigen-values. Under such conditions, Algorithm 4.1 converges at least five times faster, and Algorithm 3.1 around one hundred times faster, than ordinary LMS. The disadvantage of Algorithm 3.1 is that the gain needs to be switched in sign as the sign of the first input sample changes. Both of the new algorithms have the property that the gain bound does not depend on the number of estimated weights.
REFERENCES
[1] B. Widrow and M. E. Hoff, "Adaptive Switching Circuits," IRE Wescon Convention Record, Vol. 4, 1960, pp. 96-104.
[2] S. Haykin, "Adaptive Filter Theory," Prentice Hall, Englewood Cliffs, New Jersey, 1986.
[3] J. J. Shynk, "Adaptive IIR Filtering," IEEE ASSP Magazine, Vol. 6, No. 2, 1989, pp. 4-21. http://dx.doi.org/10.1109/53.29644
[4] P. Hagander and B. Wittenmark, "A Self-Tuning Filter for Fixed-Lag Smoothing," IEEE Transactions on Information Theory, Vol. 23, 1977, pp. 377-384. http://dx.doi.org/10.1109/TIT.1977.1055719
[5] L. Ljung and T. Söderström, "Theory and Practice of Recursive Identification," MIT Press, Cambridge, 1987.
[6] L. R. Vega and H. Rey, "A Rapid Introduction to Adaptive Filtering," Springer, New York, 2013. http://dx.doi.org/10.1007/978-3-642-30299-2
[7] J. V. Berghe and J. Wouters, "An Adaptive Noise Canceller for Hearing Aids Using Two Nearby Microphones," Journal of the Acoustical Society of America, Vol. 103, No. 6, 1998, pp. 3621-3626. http://dx.doi.org/10.1121/1.423066
[8] B. Widrow, "A Microphone Array for Hearing Aids," IEEE Circuits and Systems Magazine, Vol. 1, 2001, pp. 26-32. http://dx.doi.org/10.1109/7384.938976
[9] L. Horowitz and K. Senne, "Performance Advantage of Complex LMS for Controlling Narrow-Band Adaptive Arrays," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 29, 1981, pp. 722-736. http://dx.doi.org/10.1109/TASSP.1981.1163602
[10] F. Reed, P. L. Feintuch and N. J. Bershad, "Time Delay Estimation Using the LMS Adaptive Filter—Static Behavior," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 29, 1981, pp. 561-571. http://dx.doi.org/10.1109/TASSP.1981.1163614
[11] B. Farhang-Boroujeny, "Fast LMS/Newton Algorithms Based on Autoregressive Modeling and Their Application to Acoustic Echo Cancellation," IEEE Transactions on Signal Processing, Vol. 45, 1997, pp. 1987-2000. http://dx.doi.org/10.1109/78.611195
[12] S. U. H. Qureshi, "Adaptive Equalization," Proceedings of the IEEE, Vol. 73, 1985, pp. 1349-1387. http://dx.doi.org/10.1109/PROC.1985.13298
[13] P. C. Young, "Recursive Estimation and Time-Series Analysis," 2nd Edition, Springer-Verlag, Berlin, 2011. http://dx.doi.org/10.1007/978-3-642-21981-8
[14] S. Haykin and B. Widrow, "Least-Mean-Square Adaptive Filters (Adaptive and Learning Systems for Signal Processing, Communications and Control)," Wiley-Interscience, 2003.
[15] E. Eweda, "Comparison of RLS, LMS, and Sign Algorithms for Tracking Randomly Time-Varying Channels," IEEE Transactions on Signal Processing, Vol. 42, 1994, pp. 2937-2944. http://dx.doi.org/10.1109/78.330354
[16] T. Aboulnasr and K. Mayyas, "A Robust Variable Step-Size LMS-Type Algorithm: Analysis and Simulations," IEEE Transactions on Signal Processing, Vol. 45, 1997, pp. 631-639. http://dx.doi.org/10.1109/78.558478
[17] R. C. Bilcu, P. Kuosmanen and K. Egiazarian, "A Transform Domain LMS Adaptive Filter with Variable Step-Size," IEEE Signal Processing Letters, Vol. 9, 2002, pp. 51-53. http://dx.doi.org/10.1109/97.991136
[18] T. J. Moir, "Loop-Shaping Techniques Applied to the Least-Mean-Squares Algorithm," Signal, Image and Video Processing, Vol. 5, 2011, pp. 231-243. http://dx.doi.org/10.1007/s11760-010-0157-9
[19] A. G. J. MacFarlane, "Return-Difference and Return-Ratio Matrices and Their Use in Analysis and Design of Multivariable Feedback Control Systems," Proceedings of the Institution of Electrical Engineers, Vol. 117, 1970, pp. 2037-2049. http://dx.doi.org/10.1049/piee.1970.0367
Appendix. State-Space Description

We can look at the more general problem, when the input is time-varying but the weights are constant, by writing the algorithm in state-space format. For Algorithm 3.1 we have

$$\hat{\mathbf{W}}_{k+1} = \hat{\mathbf{W}}_k + K\left(\mathbf{s}_k - T(\mathbf{u}_k)\,\hat{\mathbf{W}}_k\right) \qquad (1)$$

which becomes

$$\hat{\mathbf{W}}_{k+1} = \left[\mathbf{I} - K\,T(\mathbf{u}_k)\right]\hat{\mathbf{W}}_k + K\,\mathbf{s}_k \qquad (2)$$

The time-varying matrix $\mathbf{I} - K\,T(\mathbf{u}_k)$ must have roots (eigen-values) which lie within the unit circle on the z-plane. This gives the same stability limit on the gain K as (13):

$$\left|1 - K u_k\right| < 1 \qquad (3)$$

Equation (3) clearly poses no problem provided $u_k$ and K are always positive. Then

$$0 < K < \frac{2}{u_k} \qquad (4)$$

but when $u_k$ becomes negative we must change the sign of K, making $K = K_0\,\mathrm{sgn}(u_k)$, where $K_0$ is always positive. If the true weights are constant, $\mathbf{W}_k = \mathbf{W}_0$, and there is zero measurement noise, then we can write (2) as

$$\hat{\mathbf{W}}_{k+1} = \left[\mathbf{I} - K\,T(\mathbf{u}_k)\right]\hat{\mathbf{W}}_k + K\,T(\mathbf{u}_k)\,\mathbf{W}_0 \qquad (5)$$

Now define the weight-error vectors $\tilde{\mathbf{W}}_k = \mathbf{W}_0 - \hat{\mathbf{W}}_k$ and $\tilde{\mathbf{W}}_{k+1} = \mathbf{W}_0 - \hat{\mathbf{W}}_{k+1}$, with $\mathbf{s}_k = T(\mathbf{u}_k)\,\mathbf{W}_0$. We can then write (1) in weight-error format as the homogeneous vector difference equation

$$\tilde{\mathbf{W}}_{k+1} = \left[\mathbf{I} - K\,T(\mathbf{u}_k)\right]\tilde{\mathbf{W}}_k \qquad (6)$$

Now, for some initial condition error $\tilde{\mathbf{W}}_0$, (6) has the solution

$$\tilde{\mathbf{W}}_k = \left[\mathbf{I} - K\,T(\mathbf{u}_k)\right]^k \tilde{\mathbf{W}}_0 \qquad (7)$$

Now write the lower-triangular matrix as

$$\mathbf{I} - K\,T(\mathbf{u}_k) = (1 - K u_k)\,\mathbf{I} - V(\mathbf{u}_k) \qquad (8)$$

where the diagonal elements $(1 - K u_k)\,\mathbf{I}$ of the lower-triangular matrix have been separated, leaving a square matrix $V(\mathbf{u}_k)$ of dimension n + 1 with zeros on its diagonal:

$$V(\mathbf{u}_k) = K\begin{pmatrix}
0 & 0 & 0 & \cdots & 0 \\
u_{k-1} & 0 & 0 & \cdots & 0 \\
u_{k-2} & u_{k-1} & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
u_{k-n} & u_{k-n+1} & \cdots & u_{k-1} & 0
\end{pmatrix}$$

Due to its sparsity, such a matrix raised to the power of its dimension is always zero, i.e. $V^{n+1}(\mathbf{u}_k) = \mathbf{0}$.
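The separation (8) and the nilpotency are easy to verify numerically (an illustrative sketch with made-up values):

```python
import numpy as np
from scipy.linalg import toeplitz

K = 0.2
u_vec = np.array([0.9, -0.3, 1.1])            # n + 1 = 3 weights
T = toeplitz(u_vec, np.zeros_like(u_vec))     # T(u_k) of Equation (5)
V = K * (T - u_vec[0] * np.eye(3))            # strictly lower triangular
assert np.allclose(np.eye(3) - K * T,
                   (1 - K * u_vec[0]) * np.eye(3) - V)   # Equation (8)
print(np.linalg.matrix_power(V, 3))           # -> the zero matrix: V^{n+1} = 0
```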
Therefore from (7), by using the binomial theorem,

$$\left[\mathbf{I} - K\,T(\mathbf{u}_k)\right]^k = \left[(1 - K u_k)\,\mathbf{I} - V(\mathbf{u}_k)\right]^k = \sum_{j=0}^{n}\binom{k}{j}\left(1 - K u_k\right)^{k-j}\left(-V(\mathbf{u}_k)\right)^j \qquad (9)$$

(the terms with j > n vanish since $V^{n+1}(\mathbf{u}_k) = \mathbf{0}$), which, provided (3) holds, dies out for large k, taking the weight error with it. Hence

$$\tilde{\mathbf{W}}_k \to \mathbf{0}, \quad \text{i.e.}\ \hat{\mathbf{W}}_k \to \mathbf{W}_0 \ \text{as}\ k \to \infty \qquad (10)$$

For example, for 3 weights at time k = 6,

$$\left[(1 - K u_k)\,\mathbf{I} - V(\mathbf{u}_k)\right]^6 = \left(1 - K u_k\right)^6\mathbf{I} - 6\left(1 - K u_k\right)^5 V(\mathbf{u}_k) + 15\left(1 - K u_k\right)^4 V^2(\mathbf{u}_k)$$

since $V^3(\mathbf{u}_k) = \mathbf{0}$. Algorithm 4.1 follows in a similar manner with $\left[\mathbf{I} - K\,T^2(\mathbf{u}_k)\right]$.