This paper investigates the problem of seeking the minimum of the API (Auxiliary Performance Index) in the parameters of the Data Model, instead of the parameters of the Adaptive Filter, in order to avoid the phenomenon of over-parameterization. This problem was stated by Semushin in [2]. The solution to the problem can be considered a development of the API approach to parameter identification in stochastic dynamic systems.

The recent papers [1,2] gave a survey of the field of adaptation in stochastic systems as it has developed over the last four decades. The author’s research in this field was summarized, and a novel solution for fitting an adaptive model in state space (instead of response space) was given.

In this paper, we further develop the Active Principle of Adaptation for linear time-invariant state-space stochastic MIMO filter systems, whether embedded in a feedback loop or considered independently.

We solve the problem of seeking the minimum of the Auxiliary Performance Index (API) in the parameters of the Data Source Model (DSM), instead of the parameters of the Adaptive Filter (AF), in order to avoid the difficulties known as the Phenomenon of Over-Parameterization (PhOP). The PhOP means that the number of parameters to be adjusted in the AF is usually much greater than that in the DSM. The solution of this problem will enable identification in a space of lower dimension and, at the same time, provide estimates of the given system state vector according to the Original Performance Index (OPI). We verify the obtained theoretical results by two numerical simulation examples.

Following the previous results of [1,2], we assume that all data models forming a set are parameterized by an -component vector. Each particular value of (which does not depend on time) specifies a. Hence

where is the compact subset of. A given physical data model (PhDM) is described by the following equations:

where denotes nonnegative integers, strictly positive integers, and all integers.
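Although the model equations themselves are not reproduced here, the kind of discrete-time linear stochastic data model under discussion can be sketched as follows. The matrix names `F`, `H` and the noise covariances `Q`, `R` are illustrative assumptions on our part, not the paper's notation.

```python
import numpy as np

def simulate_dsm(F, H, Q, R, x0, n_steps, rng):
    """Simulate a discrete-time linear stochastic data model of the form
        x[t+1] = F x[t] + w[t],   w ~ N(0, Q)   (state equation)
        z[t]   = H x[t] + v[t],   v ~ N(0, R)   (measurement equation)
    and return the state and measurement trajectories."""
    n = F.shape[0]
    m = H.shape[0]
    xs = np.zeros((n_steps, n))
    zs = np.zeros((n_steps, m))
    x = np.asarray(x0, dtype=float)
    for t in range(n_steps):
        xs[t] = x
        zs[t] = H @ x + rng.multivariate_normal(np.zeros(m), R)
        x = F @ x + rng.multivariate_normal(np.zeros(n), Q)
    return xs, zs
```

With time-invariant matrices and stationary noises, long runs of such a model satisfy the wide-sense stationarity assumed by the BTS below.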

Every model (2) is assumed to act between adjacent switches long enough for the basic theoretical statement (BTS) to be accepted as correct, namely that all related processes are wide-sense stationary. This statement amounts to the following assumptions. The random with is orthogonal [

when considering the closed-loop setup.

Stackable vectors of previous values

constitute the experimental condition (cf. Ljung [

By assumption, is generated by the completely observable PhDM (2), so we can pass from the physical state variables in (2) to another set through a similarity transformation. Such a transformation uniquely determines a new state representation

of the standard observable data model (SODM) (cf. Semushin [

For convenience, in what follows we shall omit the subscript for all the matrices describing the PhDM or SODM.

As before, the above data model of a time-invariant data source will be referred to as the conventional model, no matter whether it is the PhDM (2) or the SODM (5). Here we use another innovation model, which differs from the time-invariant (due to BTS) innovation model presented in [

with, the initial, and, which is the well-known (not necessarily steady-state) Kalman filter with the innovation process, the optimal state predictor, and the gain

and satisfying the discrete Riccati iterations [
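A minimal sketch of one discrete Riccati iteration for the one-step Kalman predictor, in assumed notation (`F` for the transition matrix, `H` for the output map, `Q` and `R` for the process- and measurement-noise covariances):

```python
import numpy as np

def kalman_predictor_step(F, H, Q, R, P):
    """One discrete Riccati iteration for the one-step state predictor.
    P is the current prediction-error covariance; the function returns
    the predictor gain K and the next prediction-error covariance."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = F @ P @ H.T @ np.linalg.inv(S)   # predictor gain
    P_next = F @ P @ F.T + Q - K @ S @ K.T
    return K, P_next
```

Iterating this recursion from any positive-definite initial covariance drives the gain toward its steady-state value, consistent with the "not necessarily steady-state" filter above.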

Concurrently, another form

with the initial, which is equivalent to (6), can be used, where is the optimal “filtered” estimator for based on the experimental condition (4). When ranges (or switches) over as in (1), we obtain the set of Kalman filters

We consider the mean-square criterion

defined for a one-step predictor through its error in the Kalman filter. Thus, in the basis forming the state space, (9) is the model minimizing the Original Performance Index (OPI) (11) at any instant large enough for the BTS to hold, so that writing or, as well as any other finitely shifted time, in (11) makes no difference.
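Under the BTS, the OPI can be approximated in practice by the sample mean-square one-step prediction error. A minimal sketch (the function name and array layout are our own, not the paper's):

```python
import numpy as np

def sample_opi(zs, z_preds):
    """Sample estimate of the mean-square one-step prediction error
    (the Original Performance Index) over a stationary data stretch.
    zs and z_preds are (T, m) arrays of outputs and their one-step
    predictions; the result averages the squared error norm over T."""
    errs = np.asarray(zs, dtype=float) - np.asarray(z_preds, dtype=float)
    return float(np.mean(np.sum(errs ** 2, axis=-1)))
```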

In contrast to our previous work [

Let us consider the set of adaptive models

Here we emphasize the fact that we construct adaptive models in the same class as belongs to, with the only difference that the unknown parameter in is replaced by to obtain. In so doing, each particular value of, an estimate of, leads to a fixed model. In accordance with the Active Principle of Adaptation (APA) [

Remark 1 Note in passing that the pace of may differ from that of. We shall need to discriminate between and later when developing a PIA.

Remark 2 If we work in the context of SODM, the set

instead of (12) should be used.

At this juncture, we identify the following tasks as pending:

1) Express or in an explicit form.

2) Build up APIs that can offer a vision of the goal.

3) Examine APIs’ capacity to visualize the goal.

4) Develop a PIA that can help pursue the goal.

We consider the first three points here in turn.

Reasoning from (6), (9), we set the adaptive model

or equivalently (due to) the model

as a member of (12). Here is the self-tuned parameter intended to estimate (in one-to-one correspondence) the parameter θ. In parallel, reasoning from, we build the adaptive model

or equivalently (due to) the model

where does not depend on. Matrices

and are evaluated according to (7), (8).

The adaptor using (14)-(15) (or, alternatively, (16)-(17)) is supposed to contain a PIA that offers the prospect of convergence. For convergence in parameter space, we anticipate almost-sure (a.s.) convergence, as is the case for MPE identification methods [3,4]. It brings about either or both of the two other types of convergence. The type of convergence in state space, as well as in response space, is induced by the type of Proximity Criterion, PC (cf. [

With the understanding that errors for PC

are fundamentally invisible to any measurement, we search for a function

of the difference of two terms: the outputs generated by the Data Source described in any appropriate form (2), (5)-(6), or (9), and their estimates generated by the adaptive model (or). For in

(20), we will also use the notations or, thus bringing them into correspondence with or (respectively, with or) from (19). Then

will be taken as the PC and determined with the key aim:

True (Unbiased) System Identifiability

Here, the equivalence symbol needs clarification. Its sense correlates with the above concept of convergence (18). The necessary refinements will be made in Theorem 1.

Let the auxiliary process (20) be built for the API (21) as

or, equivalently, as

where special matrix transformations are used (see the section “Ancillary Matrix Transformations” of [

Theorem 1 Let (20) be a vector-valued n-component function of. If is defined by

(22) or (equivalently) (23) in order to form the API (21), then the minimum in of the API fixed at any instant t is a necessary and sufficient condition for the adaptive model to be a consistent estimator of in mean square,

that is, True (Unbiased) m.s. System Identifiability

in the following three setups:

Setup 1 (Random Control Input): is a preassigned zero-mean orthogonal wide-sense stationary process, orthogonal to but, in contrast to and, known and serving as a testing signal;

Setup 2 (Pure Filtering), and Setup 3 (Closed-loop Control) with, which does not depend on.

The proof is similar to that of Theorem 2 in [

Remark 3 Our main goal is to identify the vector of unknown parameters. The minimization of the API by some PIA allows us to determine the optimal value. It must then be substituted into (14)-(15) (or (16)-(17)) to get the optimal model (or). At the same time, we obtain the optimal estimates (or) according to the OPI.

Seeking the minimum of the API in the parameters of the Data Model, instead of the parameters of the Adaptive Filter, is more profitable because:

• It takes into account the dynamics of the discrete Riccati equations, which positively affects the quality of parameter and state vector estimates.

• The number of unknown parameters can be substantially reduced (by an order of magnitude or more), thus helping to avoid the difficulties of the PhOP.

• The API gradient is calculated easily, without constructing a sensitivity model of the adaptive filter.

• It can be implemented in the case of non-stationary systems, which is critical, for example, for handling navigation data.

Thus, the proposed variant of the API method is new, thanks to the solution of the following important tasks:

1) Numerical construction of the API, which has the same minimizing argument as the OPI;

2) Numerical minimization of the API by conventional optimization methods such as the Newton-Raphson method; and

3) Combination of the parameter identification of the system with the adaptive estimation of its states.
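As an illustration of the second task, a Newton-Raphson minimization with finite-difference derivatives can be sketched as follows. The API itself is stood in for by an arbitrary scalar cost function, since its concrete computation depends on the adaptive model; the function name and step sizes are our own assumptions.

```python
import numpy as np

def newton_raphson(cost, theta0, eps=1e-4, tol=1e-8, max_iter=50):
    """Newton-Raphson minimization of a scalar cost over a parameter
    vector, using central finite differences for the gradient and
    Hessian (a stand-in for an analytically computed API gradient)."""
    theta = np.asarray(theta0, dtype=float).copy()
    n = theta.size
    for _ in range(max_iter):
        g = np.zeros(n)
        Hs = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n)
            ei[i] = eps
            g[i] = (cost(theta + ei) - cost(theta - ei)) / (2 * eps)
            for j in range(i, n):
                ej = np.zeros(n)
                ej[j] = eps
                Hs[i, j] = Hs[j, i] = (
                    cost(theta + ei + ej) - cost(theta + ei - ej)
                    - cost(theta - ei + ej) + cost(theta - ei - ej)
                ) / (4 * eps ** 2)
        step = np.linalg.solve(Hs, g)   # Newton step
        theta -= step
        if np.linalg.norm(step) < tol:
            break
    return theta
```

On a cost with a unique smooth minimum, the iteration converges to the minimizing argument; for the API this would be the identified parameter vector.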

We simulate two examples:

E1 A second-order open-loop system with unknown parameters is given by

The unknown parameters should be identified. The adaptive model parameter is the four-component vector

.

Its true value is

.

The covariances and of the noises and are equal to 0.04 and 0.06, respectively.
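A hedged sketch of a data-generation run in the spirit of E1: only the noise covariances 0.04 and 0.06 come from the text, while the second-order dynamics `F`, the output map `H`, and the companion-form structure are illustrative guesses, not the paper's actual system.

```python
import numpy as np

# Hypothetical second-order open-loop system (NOT the paper's matrices):
# the four adjustable parameters are taken to be the entries of F.
rng = np.random.default_rng(0)
F = np.array([[0.0, 1.0],
              [-0.5, 1.2]])    # assumed stable true dynamics
H = np.array([[1.0, 0.0]])     # assumed scalar output map
Q = 0.04 * np.eye(2)           # process-noise covariance (from the text)
R = np.array([[0.06]])         # measurement-noise covariance (from the text)

x = np.zeros(2)
zs = np.zeros((500, 1))
for t in range(500):
    zs[t] = H @ x + rng.multivariate_normal(np.zeros(1), R)
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
```

A record like `zs` would then be fed to the adaptive filter, and the API evaluated over a grid or by a descent method in the four-component parameter.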

E2 The same (but closed-loop) system as in E1. The system is designed to operate with a minimum expected control cost

The unknown parameters, , and should be identified. Their true values are the same as in E1.

The simulation results of Figures 1-4 and 5-7, obtained with Julia Tsyganova’s MATLAB programs, demonstrate the equimodality (coincidence of the minimizing arguments) of the auxiliary performance index with the original performance index. It is seen that the minima of the OPI and the API coincide. Thus, the obtained results confirm the applicability of the presented method.

The present paper gives a comprehensive solution to the problem of seeking the minimum of the API in the parameters of the Data Model instead of the parameters of the Adaptive Filter. The obtained results were verified by two numerical simulation examples.

Our further research is aimed at obtaining solutions to the following issues:

• Economical feasibility, numerical stability, and convergence reliability of each proposed parameter identification algorithm.

• Numerical testing of the approach and determination of the scope of its appropriate use in real-life problems, for example, in the health-care field [

This work was partly supported by the RFBR Grant No. 13-01-97035.