Volume 13 Number 4
August 2016
Mourad Elloumi and Samira Kamoun. Parametric Estimation of Interconnected Nonlinear Systems Described by Input-output Mathematical Models. International Journal of Automation and Computing, vol. 13, no. 4, pp. 364-381, 2016. doi: 10.1007/s11633-016-0956-8

Parametric Estimation of Interconnected Nonlinear Systems Described by Input-output Mathematical Models

Author Biography:
  • Samira Kamoun graduated from the University of Tunis, Tunisia. She received the M. Sc. degree from UT in 1989, the Ph. D. degree from the University of Sfax, Tunisia in 2003 and the Habilitation Universitaire degree from US in 2009. Since 1990, she has been with the Department of Electrical Engineering of the National School of Engineering of Sfax, US, Tunisia, where she is currently a professor of automatic control. She is also a member of the Laboratory of Sciences and Techniques of Automatic Control and Computer Engineering (Lab-STA) of Sfax. Her research interests include identification and adaptive control of complex systems (large-scale systems, nonlinear systems, time-varying systems, stochastic systems), with applications to automatic control of engineering systems. E-mail: kamounsamira@yahoo.fr

  • Corresponding author: Mourad Elloumi graduated from the University of Sfax, Tunisia in 2005. He received the B. Sc. degree in electrical engineering in 2010 and the M. Sc. degree in automatic control and industrial computing from the National School of Engineering of Sfax, US, Tunisia in 2011. He is currently a Ph. D. candidate at the Laboratory of Sciences and Techniques of Automatic Control and Computer Engineering (Lab-STA) of the National School of Engineering of Sfax and a contractual assistant at the Sciences Faculty of Sfax, Tunisia. He is also a member of the Tunisian Association of Digital Techniques and Automatic (ATTNA). His research interests include identification and adaptive control of nonlinear large-scale systems, with applications to automatic control of engineering systems. E-mail: mourad.elloumi@yahoo.fr ORCID iD: 0000-0001-7119-9422
  • Received: 2013-05-01
    Published Online: 2016-01-08

Figures (11)  / Tables (3)



Abstract: In this paper, two types of mathematical models are developed to describe the dynamics of large-scale nonlinear systems, which are composed of several interconnected nonlinear subsystems. Each subsystem can be described by an input-output nonlinear discrete-time mathematical model with unknown, constant or slowly time-varying parameters. Two recursive estimation methods are then used to solve the parametric estimation problem for the considered class of interconnected nonlinear systems. These methods are based on recursive least squares techniques and the prediction error method. Convergence analysis is provided using the hyperstability and positivity method and the differential equation approach. A numerical simulation example of the parametric estimation of a stochastic interconnected nonlinear hydraulic system is treated.

  • The study of dynamical large-scale systems has attracted the attention of many researchers and automation engineers worldwide. Every large-scale system can be viewed as a collection of interacting interconnected subsystems. Since such a system normally comprises several interconnected systems (a power network, a set of coupled tanks, etc.), the formulation of its parametric estimation or control problem is intricate. Several studies dealing with different themes (modeling, identification, control, stability, optimization, etc.) have been developed and published in the literature[1-7]. Most of these works concern large-scale systems that can be described by a linear mathematical model with constant or slowly time-varying parameters. However, few results have been published concerning large-scale nonlinear systems that must be described by nonlinear mathematical models.

    Let us note that most of the estimation or control schemes for large-scale nonlinear systems published in the engineering literature are based primarily on continuous state-space mathematical models[8-17]. However, the number of works on discrete-time mathematical models, and more particularly on input-output mathematical models, is small.

    The objective of this paper is to formulate the parametric estimation problem for large-scale nonlinear systems with time-varying parameters, which can be decomposed into several interconnected nonlinear single-input single-output (SISO) systems. We focus in particular on dynamic large-scale nonlinear systems that can be described by the developed discrete-time input-output mathematical models, with unknown and time-varying parameters, operating in a deterministic or stochastic environment. The convergence analysis of each estimator for the considered mathematical model is presented using the hyperstability and positivity method or the differential equation approach.

    The rest of this paper is organized as follows. The second section deals with the description of interconnected nonlinear systems operating in a deterministic or stochastic environment, each described by a discrete-time input-output mathematical model with unknown time-varying parameters. The third section details the parametric estimation problem of nonlinear large-scale systems operating in a deterministic or stochastic environment. We focus in particular on dynamic large-scale systems consisting of several interconnected nonlinear monovariable systems, which can be described by the considered class of input-output mathematical models, and we formulate the convergence properties of two recursive parametric estimation algorithms. An illustrative simulation example dealing with the parametric estimation of a hydraulic system, which consists of three interconnected nonlinear subsystems, is treated in the fourth section, before the conclusion in the final section of this paper.

  • Real large-scale nonlinear systems, which are composed of several interconnected nonlinear subsystems, have attracted the attention of many engineers worldwide, principally because of unmodelled dynamics, unknown parameters, unknown structure variables (orders, delays) and disturbances. Note that most published works on large-scale nonlinear systems in engineering are based primarily on continuous state-space mathematical models.

    In this work, we consider only the class of large-scale nonlinear systems which are composed of several interconnected nonlinear monovariable systems with time-varying parameters and known structure variables.

    In this context, we develop two types of discrete-time input-output mathematical models that can describe the dynamics of the considered systems. The first, the "interconnected nonlinear deterministic autoregressive moving average (INDARMA)" model, describes an interconnected nonlinear system operating in a deterministic environment. The second, the "interconnected nonlinear autoregressive moving average with exogenous input (INARMAX)" model, describes an interconnected nonlinear system operating in a stochastic environment, where the noise acting on the system is described by a disturbance mathematical model.

  • Let us consider a nonlinear large-scale time-varying system S, which can be decomposed into N interconnected nonlinear subsystems $S_1, \cdots, S_N $ . We assume that these interconnected subsystems operate in a deterministic environment and have time-varying parameters. Each subsystem $S_i$ can be described by the following developed INDARMA mathematical model:

    $\begin{align} A_i (q^{-1} &, k)\mbox{ }y_i (k)=q^{-d_i }\mbox{ }B_i (q^{-1}, k)\mbox{ }u_i (k)+\notag\\ & \sum\limits_{j=1, j\ne i}^N {q^{-d_{ij} }B_{ij} (q^{-1}, k)\mbox{ }u_j (k)} +\notag\\ & \sum\limits_{j=1, j\ne i}^N {q^{-t_{ij} }A_{ij} (q^{-1}, k)\mbox{ }y_j (k)}-\notag\\ & f_{f_{ii} }^y ({y_i (k-1), y_i (k-2), \cdots, y_i (k-n_i)}) +\notag\\ & f_{f_{ij} }^y ({y_i (k-1), y_i (k-2), \cdots, y_i (k-n_i), } \notag\\ & {\mbox{ }y_j (k-1), y_j (k-2), \cdots, y_j (k-n_i)}) +\notag\\ & f_g^u ({u_i (k-1), u_i (k-2), \cdots, u_i (k-n_i), } \notag\\ & {\mbox{ }u_j (k-1), u_j (k-2), \cdots, u_j (k-n_i)}) +\notag\\ & f_{h\ell }^{uy} ({y_i (k-1), y_i (k-2), \cdots, y_i (k-n_i), } \notag\\ & u_i (k-1), u_i (k-2), \cdots, u_i (k-n_i), \notag\\ & u_j (k-1), u_j (k-2), \cdots, u_j (k-n_i), \notag\\ & {\mbox{ }y_j (k-1), y_j (k-2), \cdots, y_j (k-n_i)}) \end{align}$

    (1)

    where $u_i (k)$ and $y_i (k)$ represent the input and the output of the interconnected nonlinear system $S_i $ at the discrete time $k$ , respectively, $u_j (k)$ and $y_j (k)$ are respectively the inputs and outputs of the other interconnected systems $S_j $ , $j=1, \cdots, N$ , $j\ne i$ , $d_i $ is an intrinsic delay of the interconnected system $S_i $ , $d_{ij} $ and $t_{ij} $ represent the delays of the interactions, which are related respectively to the inputs and the outputs of the other interconnected nonlinear systems $S_j $ , and $A_i (q^{-1}, k)$ , $B_i (q^{-1}, k)$ , $A_{ij} (q^{-1}, k)$ and $B_{ij} (q^{-1}, k)$ are polynomials with unknown time-varying parameters, which are defined as follows:

    $\label{eq2} A_i (q^{-1}, k)=1+a_{i, 1} (k)q^{-1}+\cdots+a_{i, n_{A_i } } (k)q^{-n_{A_i } }$

    (2)

    $\label{eq3} B_i (q^{-1}, k)=b_{i, 1} (k)q^{-1}+\cdots+b_{i, n_{B_i } } (k)q^{-n_{B_i } }$

    (3)

    $\label{eq4} A_{ij} (q^{-1}, k)=1+a_{ij, 1} (k)q^{-1}+\cdots+a_{ij, n_{A_{ij} } } (k)q^{-n_{A_{ij} } }$

    (4)

    and

    $\label{eq5} B_{ij} (q^{-1}, k)=b_{ij, 1} (k)q^{-1}+\cdots+b_{ij, n_{B_{ij} } } (k)q^{-n_{B_{ij} } }$

    (5)

    where $i, j=1, \cdots, N$ , $j\ne i$ , and $n_{A_i } $ , $n_{B_i } $ , $n_{A_{ij} } $ and $n_{B_{ij} } $ are the orders of the polynomials $A_i (q^{-1}, k)$ , $B_i (q^{-1}, k)$ , $A_{ij} (q^{-1}, k)$ and $B_{ij} (q^{-1}, k)$ , respectively.

    Let us note that each interconnected nonlinear system $S_i $ , $1\le i\le N$ , is coupled with the inputs and the outputs of other interconnected nonlinear systems $S_j $ , $j=1, \cdots, N$ , $j\ne i$ , via the two polynomials $A_{ij} (q^{-1}, k)$ and $B_{ij} (q^{-1}, k)$ .

    The term $f_g^u \left(\cdot\right)$ in (1) represents some nonlinear function with degree of nonlinearity $p$ , which depends on the sequences of inputs of the interconnected nonlinear system $S_i $ , $1\le i\le N$ , and other interconnected systems $S_j $ , $j=1, \cdots, N$ , $j\ne i$ , defined as follows:

    $\begin{align} \label{eq6} f_g^u & \left(\cdot \right)=\sum\limits_{r_1=1}^{n_{g_{ii, r_1 } } } {\sum\limits_{r_2=1}^{n_{g_{ii, r_1 r_2 } } } {g_{ii, r_1 r_2 } (k)\mbox{ }u_i (k-r_1)\; u_i (k-r_2)} } +\notag\\ & \sum\limits_{r_1=1}^{n_{g_{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{g_{ii, r_1 r_2 } } } \sum\limits_{r_3=1}^{n_{g_{ii, r_1 r_2 r_3 } } } g_{ii, r_1 r_2 r_3 } (k)\times \notag\\ & u_i (k-r_1)\; u_i (k-r_2)\; u_i (k-r_3) +\; \cdots \; + \notag\\ & \sum\limits_{r_1=1}^{n_{g_{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{g_{ii, r_1 r_2 } } } \cdots \sum\limits_{r_p=1}^{n_{g_{ii, r_1 r_2 \cdots r_p } } } \notag\\ & {g_{ii, r_1 \cdots r_p } (k)\mbox{ }u_i (k-r_1)\mbox{ }\cdots \mbox{ }u_i (k-r_p)} + \notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r_1=1}^{n_{g_{ij, r_1 } } } {\sum\limits_{r_2=1}^{n_{g_{ij, r_1 r_2 } } } {g_{ij, r_1 r_2 } (k)\mbox{ }u_i (k-r_1)\; u_j (k-r_2)} } } +\notag\\ & \sum\limits_{j=1, j\ne i}^N \sum\limits_{r_1=1}^{n_{g_{ij, r_1 } } } \sum\limits_{r_2=1}^{n_{g_{ij, r_1 r_2 } } } \sum\limits_{r_3=1}^{n_{g_{ij, r_1 r_2 r_3 } } } g_{ij, r_1 r_2 r_3 } (k) \times\notag\\ & {\mbox{ }u_i (k-r_1)\; u_i (k-r_2)\; u_j (k-r_3)} +\; \cdots \; + \notag\\ & \sum\limits_{j=1, j\ne i}^N \sum\limits_{r_1=1}^{n_{g_{ij, r_1 } } } \sum\limits_{r_2=1}^{n_{g_{ij, r_1 r_2 } } } \cdots \sum\limits_{r_p=1}^{n_{g_{ij, r_1 r_2 \cdots r_p } } }g_{ij, r_1 \cdots r_p } (k)\times\notag\\ & \mbox{ }u_i (k-r_1)\mbox{ }\cdots \mbox{ }u_j (k-r_p). \end{align}$

    (6)

    The terms $f_{f_{ii} }^y \left(\cdot \right)$ and $f_{f_{ij} }^y \left(\cdot \right)$ in (1) are some nonlinear functions with degree of nonlinearity $p$ , which depend on the sequences of outputs of the interconnected nonlinear system $S_i $ , $1\le i\le N$ , and other interconnected systems $S_j $ , $j=1, \cdots, N$ , $j\ne i$ , as given respectively by

    $\begin{align} \label{eq7} f_{f_{ii} }^y & \left(\cdot\right)=\sum\limits_{r_1=1}^{n_{f_{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{f_{ii, r_1 r_2 } } } {f_{ii, r_1 r_2 } (k)\mbox{ }y_i (k-r_1)\; y_i (k-r_2)} +\notag\\ & \sum\limits_{r_1=1}^{n_{f_{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{f_{ii, r_1 r_2 } } } \sum\limits_{r_3=1}^{n_{f_{ii, r_1 r_2 r_3 } } } f_{ii, r_1 r_2 r_3 } (k)\times \notag\\ & y_i (k-r_1)\; y_i (k-r_2)\; y_i (k-r_3) +\; \cdots \; + \notag\\ & \sum\limits_{r_1=1}^{n_{f_{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{f_{ii, r_1 r_2 } } } \cdots \sum\limits_{r_p=1}^{n_{f_{ii, r_1 r_2 \cdots r_p } } }f_{ii, r_1 \cdots r_p }(k)\times\notag\\ & { \mbox{ }y_i (k-r_1)\cdots y_i (k-r_p)} \end{align}$

    (7)

    and

    $\begin{align} \label{eq8} f_{f_{ij} }^y & \left(\cdot \right)=\sum\limits_{j=1, j\ne i}^N \sum\limits_{r_1=1}^{n_{f_{ij, r_1 } } } \sum\limits_{r_2=1}^{n_{f_{ij, r_1 r_2 } } } f_{ij, r_1 r_2 } (k)\mbox{ }\times \notag\\ & y_i (k-r_1)\; y_j (k-r_2) +\notag\\ & \sum\limits_{j=1, j\ne i}^N \sum\limits_{r_1=1}^{n_{f_{ij, r_1 } } } \sum\limits_{r_2=1}^{n_{f_{ij, r_1 r_2 } } } \sum\limits_{r_3=1}^{n_{f_{ij, r_1 r_2 r_3 } } } f_{ij, r_1 r_2 r_3 } (k)\mbox{ }\times \notag\\ & y_i (k-r_1)\; y_i (k-r_2)\; y_j (k-r_3) +\; \cdots \; + \notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r_1=1}^{n_{f_{ij, r_1 } } } {\sum\limits_{r_2=1}^{n_{f_{ij, r_1 r_2 } } } \cdots } } \sum\limits_{r_p=1}^{n_{f_{ij, r_1 r_2 \cdots r_p } } } f_{ij, r_1 \cdots r_p } (k)\mbox{ }\times \notag\\ & y_i (k-r_1)\; \cdots \; y_j (k-r_p). \end{align}$

    (8)

    The term $f_{h\ell }^{uy} \left(\cdot \right)$ in (1) denotes some nonlinear function with degree of nonlinearity $p$ , which depends on the input and output sequences of the interconnected nonlinear system $S_i $ , $1\le i\le N$ , and the other interconnected systems $S_j $ , $j=1, \cdots, N$ , $j\ne i$ , as described by the following expansion:

    $ \begin{align} \label{eq9} f_{h\ell }^{uy} & \left(\cdot\right)=\sum\limits_{j=1}^N \sum\limits_{r_1=1}^{n_{h_{ij, r_1 } } } \sum\limits_{r_2=1}^{n_{h_{ij, r_1 r_2 } } } h_{ij, r_1 r_2 } (k)\mbox{ }\times \notag\\ & u_i (k-r_1)\; y_j (k-r_2) +\notag\\ & \sum\limits_{j=1}^N \sum\limits_{r_1=1}^{n_{h_{ij, r_1}}} \sum\limits_{r_2=1}^{n_{h_{ij, r_1 r_2}}}\sum\limits_{r_3=1}^{n_{h_{ij, r_1 r_2 r_3}}} h_{ij, r_1 r_2 r_3}(k)\times \notag\\ & u_i(k-r_1)\; u_i (k-r_2)\; y_j (k-r_3) +\; \cdots \; + \notag\\ & \sum\limits_{j=1}^N \sum\limits_{r_1=1}^{n_{h_{ij, r_1 } } } \sum\limits_{r_2=1}^{n_{h_{ij, r_1 r_2 } } } \cdots \mbox{ }\sum\limits_{r_p=1}^{n_{h_{ij, r_1 r_2 \cdots r_p } } } h_{ij, r_1 \cdots r_p } (k)\mbox{ }\times \notag\\ & \quad u_i (k-r_1)\; \cdots \; y_j (k-r_p) +\notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r_1=1}^{n_{\ell _{ij, r_1 } } } {\sum\limits_{r_2=1}^{n_{\ell _{ij, r_1 r_2 } } } {\ell _{ij, r_1 r_2 } (k)\mbox{ }y_i (k-r_1)\; u_j (k-r_2)} } } +\notag\\ & \sum\limits_{j=1, j\ne i}^N \sum\limits_{r_1=1}^{n_{\ell _{ij, r_1 } } } \sum\limits_{r_2=1}^{n_{\ell _{ij, r_1 r_2 } } } \sum\limits_{r_3=1}^{n_{\ell _{ij, r_1 r_2 r_3 } } } \ell _{ij, r_1 r_2 r_3 } (k)\times \; \notag \end{align} $

    $\begin{align} & \quad \mbox{ }y_i (k-r_1)\; y_i (k-r_2)\; u_j (k-r_3)+\; \cdots \; + \notag\\ & \sum\limits_{j=1, j\ne i}^N \sum\limits_{r_1=1}^{n_{\ell _{ij, r_1 } } } \sum\limits_{r_2=1}^{n_{\ell _{ij, r_1 r_2 } } } \cdots \mbox{ }\sum\limits_{r_p=1}^{n_{\ell _{ij, r_1 r_2 \cdots r_p } } } \ell _{ij, r_1 \cdots r_p } (k)\mbox{ }\times \notag\\ & \quad y_i (k-r_1)\; \cdots \; u_j (k-r_p). \end{align}$

    (9)

    It can be remarked that the developed mathematical model becomes more complex as the degree of nonlinearity and the order of the considered system increase.
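To illustrate the structure captured by model (1), the following sketch simulates a single second-order INDARMA-type subsystem with one quadratic input term and one neighbouring subsystem; all coefficients, orders and signals are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def simulate_indarma_subsystem(u_i, u_j, y_j, a, b, b_ij, a_ij, g, steps):
    """Simulate a simplified second-order INDARMA-type subsystem:

    y_i(k) = -a1*y_i(k-1) - a2*y_i(k-2) + b1*u_i(k-1) + b2*u_i(k-2)
             + b_ij*u_j(k-1) + a_ij*y_j(k-1) + g*u_i(k-1)*u_i(k-2)

    All coefficients are illustrative, not taken from the paper.
    """
    y_i = np.zeros(steps)
    for k in range(2, steps):
        y_i[k] = (-a[0] * y_i[k-1] - a[1] * y_i[k-2]   # own linear dynamics
                  + b[0] * u_i[k-1] + b[1] * u_i[k-2]  # own input
                  + b_ij * u_j[k-1]                    # input interconnection
                  + a_ij * y_j[k-1]                    # output interconnection
                  + g * u_i[k-1] * u_i[k-2])           # quadratic nonlinearity
    return y_i

rng = np.random.default_rng(0)
steps = 200
u_i = rng.uniform(-1, 1, steps)          # bounded input of S_i
u_j = rng.uniform(-1, 1, steps)          # bounded input of neighbour S_j
y_j = np.sin(0.05 * np.arange(steps))    # neighbour output (toy signal)
y = simulate_indarma_subsystem(u_i, u_j, y_j,
                               a=(-0.6, 0.08), b=(0.5, 0.2),
                               b_ij=0.1, a_ij=0.05, g=0.05, steps=steps)
```

The chosen polynomial $1 - 0.6q^{-1} + 0.08q^{-2}$ has its roots (0.4 and 0.2) inside the unit circle, so the toy subsystem is open-loop stable, consistent with the assumptions made later in the paper.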

  • We consider a nonlinear large-scale system which consists of $N$ interconnected nonlinear systems $S_1, \cdots, S_N $ operating in a stochastic environment with unknown time-varying parameters. The noise sequences added to the system, which act on the output, are assumed to be independent and to follow a Gaussian distribution with zero mean and constant variance. Thereby, each interconnected nonlinear system $S_i $ , $1\le i\le N$ , can be described by a monovariable nonlinear input-output mathematical model with unknown time-varying parameters.

    The considered system $S_i $ , $1\le i\le N$ , is described by the following developed INARMAX mathematical model:

    $\begin{align} \label{eq10} A_i (q^{-1},&k)\mbox{ }y_i (k)=q^{-d_i }\mbox{ }B_i (q^{-1},k)\mbox{ }u_i (k)+\notag\\ &\sum\limits_{j=1,j\ne i}^N {q^{-d_{ij} }B_{ij} (q^{-1},k)\mbox{ }u_j (k)} +\notag\\ &\sum\limits_{j=1,j\ne i}^N {q^{-t_{ij} }A_{ij} (q^{-1},k)\mbox{ }y_j (k)} +C_i (q^{-1})\mbox{ }e_i (k) -\notag\\ &f_{f_{ii} }^y ( {y_i (k-1),y_i (k-2),\cdots,y_i (k-n_i )} ) +\notag\\ &f_{f_{ij} }^y ( {y_i (k-1),y_i (k-2),\cdots,y_i (k-n_i ),} \notag\\ & {\mbox{ }y_j (k-1),y_j (k-2),\cdots,y_j (k-n_i )} ) +\notag\\ &f_g^u ( {u_i (k-1),u_i (k-2),\cdots,u_i (k-n_i ),} \notag\\ & {\mbox{ }u_j (k-1),u_j (k-2),\cdots,u_j (k-n_i )} ) +\notag\\ &f_{hl }^{uy} ( {y_i (k-1),y_i (k-2),\cdots,y_i (k-n_i ),} \notag\\ &u_i (k-1),u_i (k-2),\cdots,u_i (k-n_i ), \notag\\ &u_j (k-1),u_j (k-2),\cdots,u_j (k-n_i ), \notag\\ & {\mbox{ }y_j (k-1),y_j (k-2),\cdots,y_j (k-n_i )} )+ \notag\\ &f_c^e ( {e_i (k-1),e_i (k-2),\cdots,e_i (k-n_i )} )+\notag \\ &f_\alpha ^{ye} ( {y_i (k-1),y_i (k-2),\cdots,y_i (k-n_i ),} \notag\\ & {\mbox{ }e_i (k-1),e_i (k-2),\cdots,e_i (k-n_i )} )+ \notag\\ &f_\beta ^{ue} ( {u_i (k-1),u_i (k-2),\cdots,u_i (k-n_i ),} \notag\\ & {\mbox{ }e_i (k-1),e_i (k-2),\cdots,e_i (k-n_i )} ) \end{align}$

    (10)

    where $\left\{ {e_i (k)} \right\}$ is a sequence of independent random variables with zero mean and constant variance $\sigma _i^2 $ , $A_i (q^{-1}, k)$ , $B_i (q^{-1}, k)$ , $A_{ij} (q^{-1}, k)$ and $B_{ij} (q^{-1}, k)$ are polynomials with unknown and time-varying parameters defined by (2), (3), (4) and (5), respectively, $f_g^u(\cdot)$ , $f_{f_{ii} }^y (\cdot)$ , $f_{f_{ij} }^y (\cdot)$ and $f_{h\ell }^{uy} (\cdot)$ are some nonlinear functions given by (6)-(9), respectively, and ${{C}_{i}}({{q}^{-1}})$ is a polynomial with unknown but constant parameters, given as follows:

    $\begin{align} \label{eq11} C_i (q^{-1})=1+c_{i,1} q^{-1}+\;\cdots \;+c_{i,n_{C_i } } q^{-n_{C_i } }. \end{align}$

    (11)

    The term $f_c^e (\cdot)$ involved in expression (10) is some nonlinear function with degree of nonlinearity $p$ , which depends on the noise sequence of the interconnected nonlinear system $S_i $ , $1\le i\le N$ , and is expressed by the following equation:

    $\begin{align} \label{eq12} f_c^e & \left(\cdot\right)=\sum\limits_{r_1=1}^{n_{c_{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{c_{ii, r_1 r_2 } } } c_{ii, r_1 r_2 } \mbox{ }\times \notag\\ & e_i (k-r_1)\; e_i (k-r_2) +\; \cdots \; + \notag\\ & \sum\limits_{r_1=1}^{n_{c_{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{c_{ii, r_1 r_2 } } } \cdots \sum\limits_{r_p=1}^{n_{c_{ii, r_1 r_2 \cdots r_p } } } c_{ii, r_1 \cdots r_p } \mbox{ }\times \notag\\ & e_i (k-r_1)\mbox{ }\cdots \mbox{ }e_i (k-r_p). \end{align}$

    (12)

    The term $f_\alpha ^{ye} (\cdot)$ in (10) represents some nonlinear function with degree of nonlinearity $p$ , which depends on the sequences of the noise and the output of the interconnected nonlinear system $S_i $ , $1\le i\le N$ , and is defined by

    $\begin{align} \label{eq13} f_\alpha ^{ye} & \left(\cdot \right)=\sum\limits_{r_1=1}^{n_{\alpha _{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{\alpha _{ii, r_1 r_2 } } } \alpha _{ii, r_1 r_2 } \mbox{ }\times \notag\\ & \qquad y_i (k-r_1)\; e_i (k-r_2) +\; \cdots \; +\notag \\ & \sum\limits_{r_1=1}^{n_{\alpha _{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{\alpha _{ii, r_1 r_2 } } } \cdots \mbox{ }\sum\limits_{r_p=1}^{n_{\alpha _{ii, r_1 r_2 \cdots r_p } } } \alpha _{ii, r_1 \cdots r_p } \mbox{ }\times \notag\\ & \qquad y_i (k-r_1)\; \cdots \; e_i (k-r_p). \end{align}$

    (13)

    The term $f_\beta ^{ue} \left(\cdot \right)$ in (10) represents some nonlinear function with degree of nonlinearity p, which depends on the sequences of the noise and the input of the interconnected nonlinear system $S_i $ , $1\le i\le N$ , and is given by the following expression:

    $\begin{align} \label{eq14} f_\beta ^{ue} & \left(\cdot \right)=\sum\limits_{r_1=1}^{n_{\beta _{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{\beta _{ii, r_1 r_2 } } } \beta _{ii, r_1 r_2 } \mbox{ }\times \notag\\ & \qquad u_i (k-r_1)\; e_i (k-r_2) +\; \cdots \; + \notag\\ & \mbox{ }\sum\limits_{r_1=1}^{n_{\beta _{ii, r_1 } } } \sum\limits_{r_2=1}^{n_{\beta _{ii, r_1 r_2 } } } \cdots \sum\limits_{r_p=1}^{n_{\beta _{ii, r_1 r_2 \cdots r_p } } } \beta _{ii, r_1 \cdots r_p } \mbox{ }\times \notag\\ & \qquad u_i (k-r_1)\mbox{ }\cdots \mbox{ }e_i (k-r_p). \end{align}$

    (14)

    Note that the developed mathematical model becomes more complex as the degree of nonlinearity and the order of the system increase.
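As an illustration of the stochastic terms in (10), the sketch below generates the moving-average noise contribution $C_i(q^{-1})\,e_i(k)$ together with one bilinear output-noise cross term of the type appearing in $f_\alpha^{ye}(\cdot)$; the coefficients and signals are hypothetical choices of ours.

```python
import numpy as np

def noise_contribution(e, y, c1, alpha11):
    """Moving-average noise C_i(q^-1) e_i(k) = e(k) + c1*e(k-1),
    plus a single bilinear cross term alpha11 * y(k-1) * e(k-1).
    Coefficients are illustrative, not from the paper."""
    n = len(e)
    v = np.zeros(n)
    for k in range(1, n):
        v[k] = e[k] + c1 * e[k-1] + alpha11 * y[k-1] * e[k-1]
    return v

rng = np.random.default_rng(1)
e = rng.normal(0.0, 0.1, 500)   # independent, zero mean, constant variance
y = np.ones(500)                # toy output sequence
v = noise_contribution(e, y, c1=0.3, alpha11=0.05)
```

Because $\{e_i(k)\}$ is zero-mean white noise, the sample mean of the generated contribution stays close to zero, as the model assumes.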

  • In this section, we formulate the parametric estimation problem of interconnected nonlinear dynamical systems operating in a deterministic or stochastic environment. We focus in particular on the large-scale dynamic system, which consists of several interconnected nonlinear systems described by the class of input-output mathematical models that are linear in the parameters but nonlinear in the observations (input, output, noise). This parametric estimation problem will be formulated on the basis of the prediction error method and least squares techniques.

  • We assume that each interconnected nonlinear system $S_i $ , $1\le i\le N$ , operates only in open loop. Note that this mode of operation allows a free choice of the input signal $u_i (k)$ .

    We also retain the following assumptions for each interconnected nonlinear system $S_i $ , $1\le i\le N$ , when formulating the parametric estimation problem of these systems, which can be described by the considered discrete-time input-output mathematical models:

    1) The system $S_i $ is observable;

    2) The system $S_i $ is stable in open loop (the roots (in $q$ ) of the polynomial $A_i (q^{-1}, k)$ lie inside the unit circle);

    3) The parameters intervening in the polynomials $A_i (q^{-1}, k)$ , $B_i (q^{-1}, k)$ , $A_{ij} (q^{-1}, k)$ and $B_{ij} (q^{-1}, k)$ are unknown time-varying parameters;

    4) The input signal $u_i (k)$ applied to the considered system $S_i $ is bounded and sufficiently exciting, i.e., able to excite all the modes of this system;

    5) The noise sequence $\left\{ {e_i (k)} \right\}$ consists of a sequence of independent random variables with zero mean and constant variance $\sigma _i^2 $ ;

    6) The number of measured values $M_k $ is sufficiently large to ensure good convergence of the recursive parametric estimation algorithm;

    7) The signal variables of the considered interconnected nonlinear systems are measurable at every discrete time $k$ . The measured values correspond to the various inputs and outputs of the interconnected system $S_i $ , $1\le i\le N$ , and of the other interconnected nonlinear systems $S_j $ , $j=1, \cdots, N$ , $j\ne i$ , and are collected in the following information sequence: $Q_i (k)=\{ u_i (k), y_i (k), u_j (k), y_j (k)$ , $i, j=1, \cdots, N$ , $j\ne i$ , $k=1, \cdots, M_k \}$ .
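Assumption 4 requires a bounded, sufficiently exciting input. One common practical choice (our illustration, not prescribed by the paper) is a pseudo-random binary sequence, which is bounded by construction and rich enough to excite the modes of a low-order model:

```python
import numpy as np

def prbs(length, low=-1.0, high=1.0, hold=3, seed=0):
    """Pseudo-random binary sequence: switches between two bounded
    levels, holding each level for 'hold' samples."""
    rng = np.random.default_rng(seed)
    n_bits = (length + hold - 1) // hold
    bits = rng.integers(0, 2, size=n_bits)
    # map each bit to a level and hold it, then trim to length
    signal = np.repeat(np.where(bits == 1, high, low), hold)[:length]
    return signal

u = prbs(100)  # bounded excitation signal for the input u_i(k)
```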

  • In this subsection, we treat the formulation of the parametric estimation problem for interconnected nonlinear systems operating in a deterministic environment, which can be described by the INDARMA input-output mathematical model defined by (1).

    For the sake of simplicity and without loss of generality, we limit ourselves to the formulation of the parametric estimation problem of an interconnected nonlinear dynamic system that can be described by a second-order INDARMA input-output mathematical model with a second degree of nonlinearity.

    The output $y_i (k)$ of the considered system $S_i $ can be expressed as follows:

    $\begin{align} \label{eq15} y_i (k) &=-\sum\limits_{r=1}^2 {a_{i, r} (k)y_i (k-r)}-\notag\\ & \sum\limits_{r_1=1}^2 \sum\limits_{r_2=1}^2 f_{ii, r_1 r_2 } (k)\mbox{ }y_i (k-r_1)y_i (k-r_2) +\notag\\ & \sum\limits_{r=1}^2 {b_{i, r} (k)u_i (k-d_i-r)} +\notag\\ & \sum\limits_{j=1, j\ne i}^2 \sum\limits_{r=1}^2 b_{ij, r} (k)\mbox{ }u_j (k-d_{ij}-r) + \notag\\ & \sum\limits_{j=1, j\ne i}^2 {\sum\limits_{r=1}^2 {a_{ij, r} (k)\mbox{ }y_j (k-t_{ij}-r)} } +\notag\\ & \sum\limits_{r_1=1}^2 \sum\limits_{r_2=1}^2 \mbox{ }g_{ii, r_1 r_2 } (k)u_i (k-r_1)u_i (k-r_2) +\notag\\ & \sum\limits_{j=1, j\ne i}^2 {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {g_{ij, r_1 r_2 } (k)\mbox{ }u_i (k-r_1)u_j (k-r_2)} } }+\notag \\ & \sum\limits_{j=1, j\ne i}^2 {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {f_{ij, r_1 r_2 } (k)\mbox{ }y_i (k-r_1)y_j (k-r_2)} } } +\notag\\ & \sum\limits_{j=1}^2 {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {h_{ij, r_1 r_2 } (k)\mbox{ }u_i (k-r_1)y_j (k-r_2)} } } +\notag\\ & \sum\limits_{j=1, j\ne i}^2 {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {l _{ij, r_1 r_2 } (k)\mbox{ }y_i (k-r_1)u_j (k-r_2)} } }. \end{align}$

    (15)

    We can rewrite the output $y_i (k)$ of the considered system $S_i $ defined by (15) in the following developed form:

    $ \begin{align} \label{eq16} y_i (k) &=-a_{i, 1} (k)\; y_i (k-1)-\notag\\ & a_{i, 2} (k)\; y_i (k-2)-f_{ii, 11} (k)\; y_i^2 (k-1)-\notag\\ & f_{ii, 220} (k)\; y_i (k-1)\; y_i (k-2)-f_{ii, 22} (k)\; y_i^2 (k-2) +\notag\\ & b_{i, 1} (k)\; u_i (k-d_i-1)+b_{i, 2} (k)\; u_i (k-d_i-2)+\notag\\ & b_{ij, 1} (k)\; u_j (k-d_{ij}-1)+ b_{ij, 2} (k)\; u_j (k-d_{ij}-2)+\notag\\ & a_{ij, 1} (k)\; y_j (k-t_{ij}-1)+a_{ij, 2} (k)\; y_j (k-t_{ij}-2)+ \notag\\ & g_{ii, 11} (k)\; u_i^2 (k-1)+g_{ii, 220} (k)\; u_i (k-1)\; u_i (k-2)+\notag\\ & g_{ii, 22} (k)\; u_i^2 (k-2)+g_{ij, 11} (k)\; u_i (k-1)\; u_j (k-1)\mbox{+}\notag \\ & g_{ij, 12} (k)\; u_i (k-1)\; u_j (k-2)+g_{ij, 21} (k)\; u_i (k-2)\times \notag \\ & \; u_j (k-1)\mbox{+}g_{ij, 22} (k)\; u_i (k-2)\; u_j (k-2)+\notag \\ & f_{ij, 11} (k)\; y_i (k-1)\; y_j (k-1)\mbox{+ }f_{ij, 12} (k)\; y_i (k-1)\times \; \notag \\ & y_j (k-2)+f_{ij, 21} (k)\; y_i (k-2)\; y_j (k-1)\mbox{+ }f_{ij, 22} (k)\times \notag \\ & y_i (k-2)\; y_j (k-2)+h_{ii, 11} (k)\mbox{ }u_i (k-1)y_i (k-1)\mbox{+ }\notag \\ & h_{ii, 12} (k)\mbox{ }u_i (k-1)y_i (k-2)+h_{ii, 21} (k)\mbox{ }\times \notag \\ & u_i (k-2)y_i (k-1)\mbox{+ }h_{ii, 22} (k)\mbox{ }u_i (k-2)y_i (k-2) +\notag\\ & h_{ij, 11} (k)\mbox{ }u_i (k-1)y_j (k-1)\mbox{+ }h_{ij, 12} (k)\mbox{ }\times \notag\\ & u_i (k-1)y_j (k-2)+h_{ij, 21} (k)\mbox{ }u_i (k-2)y_j (k-1)\mbox{+ } \notag\\ & h_{ij, 22} (k)\mbox{ }u_i (k-2)y_j (k-2) +\ell _{ij, 11} (k)\mbox{ }y_i (k-1)\times \notag \end{align} $

    $\begin{align} & u_j (k-1)+\ell _{ij, 12} (k)\mbox{ }y_i (k-1)u_j (k-2) +\ell _{ij, 21} (k)\mbox{ }\times \notag\\ & y_i (k-2)u_j (k-1)+\ell _{ij, 22} (k)\mbox{ }y_i (k-2)u_j (k-2) \end{align}$

    (16)

    with $f_{ii, 220} (k)=f_{ii, 12} (k)+f_{ii, 21} (k)$ , $g_{ii, 220} (k)=g_{ii, 12} (k)+g_{ii, 21} (k)$ and $j=1, \cdots, N$ , $j\ne i$ , or equivalently in the following matrix form:

    $\label{eq17} y_i (k)=\theta _i^{\rm T} (k)\, \psi _i (k)$

    (17)

    where the parameter vector $\theta _i (k)$ and the observation vector $\psi _i (k)$ are described as follows, respectively:

    $\begin{align} \label{eq18} \theta _i^{\rm T} & (k)=\Big[{a_{i, 1} (k)\, \, a_{i, 2} (k)\, \, f_{ii, 11} (k)\, \, f_{ii, 220} (k)\, \, f_{ii, 22} (k)\, \, } \notag\\ & \mbox{ }b_{i, 1} (k)\, \, b_{i, 2} (k)\, \, b_{ij, 1} (k)\, \, b_{ij, 2} (k)\, \, a_{ij, 1} (k)\, \, a_{ij, 2} (k)\, \notag\\ & \mbox{ g}_{ii, 11} (k)\, \, g_{ii, 220} (k)\, \, g_{ii, 22} (k)\, \, g_{ij, 11} (k)\, \, g_{ij, 12} (k)\, \notag\\ & \mbox{ }g_{ij, 21} (k)\, \, g_{ij, 22} (k)\, \, f_{ij, 11} (k)\, \, f_{ij, 12} (k)\, \, f_{ij, 21} (k)\, \notag\\ & \mbox{ }f_{ij, 22} (k)\, \, h_{ii, 11} (k)\, \, h_{ii, 12} (k)\, \, h_{ii, 21} (k)\, \, h_{ii, 22} (k)\, \notag\\ & \mbox{ }h_{ij, 11} (k)\, \, h_{ij, 12} (k)\, \, h_{ij, 21} (k)\, \, h_{ij, 22} (k)\, \, \ell _{ij, 11} (k)\, \notag\\ & {\mbox{ }\ell _{ij, 12} (k)\, \, \ell _{ij, 21} (k)\, \, \ell _{ij, 22} (k)} \Big] \end{align}$

    (18)

    and

    $\begin{align} \label{eq19} \psi _i^{\rm T} & (k)=\big[-y_i (k-1)\, \, -y_i (k-2)\, \, -y_i^2 (k-1)\, \notag\\ & -y_i (k-1)y_i (k-2)\, \, -y_i^2 (k-2)\, \, u_i (k-d_i-1)\, \notag\\ & u_i (k-d_i-2)\, \, u_j (k-d_{ij}-1)\, \, u_j (k-d_{ij}-2)\, \notag\\ & y_j (k-t_{ij}-1)\, \, y_j (k-t_{ij}-2)\, \, u_i^2 (k-1)\, \notag\\ & u_i (k-1)u_i (k-2)\, \, u_i^2 (k-2)\, \, u_i (k-1)u_j (k-1)\, \notag\\ & u_i (k-1)u_j (k-2)\, \, u_i (k-2)u_j (k-1)\, \, u_i (k-2)u_j (k-2)\, \notag\\ & y_i (k-1)y_j (k-1)\, \, y_i (k-1)y_j (k-2)\, \, y_i (k-2)y_j (k-1)\, \notag\\ & y_i (k-2)y_j (k-2)\, \, u_i (k-1)y_i (k-1)\, \, u_i (k-1)y_i (k-2)\, \notag\\ & u_i (k-2)y_i (k-1)\, \, u_i (k-2)y_i (k-2)\, \, u_i (k-1)y_j (k-1)\, \notag\\ & u_i (k-1)y_j (k-2)\, \, u_i (k-2)y_j (k-1)\, \, u_i (k-2)y_j (k-2)\, \notag\\ & y_i (k-1)u_j (k-1)\, \, y_i (k-1)u_j (k-2)\, \, y_i (k-2)u_j (k-1)\, \notag\\ & y_i (k-2)u_j (k-2)\big]. \end{align}$

    (19)
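For concreteness, here is a sketch of how the observation vector $\psi_i(k)$ of (19) could be assembled from past measurements, assuming zero delays ($d_i = d_{ij} = t_{ij} = 0$) and a single neighbouring subsystem $S_j$; both simplifications, and the toy data, are our own choices.

```python
import numpy as np

def observation_vector(yi, ui, yj, uj, k):
    """Build psi_i(k) for the second-order, second-degree model,
    assuming zero delays (d_i = d_ij = t_ij = 0) and one neighbour S_j."""
    yi1, yi2 = yi[k-1], yi[k-2]
    ui1, ui2 = ui[k-1], ui[k-2]
    yj1, yj2 = yj[k-1], yj[k-2]
    uj1, uj2 = uj[k-1], uj[k-2]
    return np.array([
        -yi1, -yi2, -yi1**2, -yi1*yi2, -yi2**2,   # own output terms
        ui1, ui2, uj1, uj2, yj1, yj2,             # linear input/coupling terms
        ui1**2, ui1*ui2, ui2**2,                  # quadratic own-input terms
        ui1*uj1, ui1*uj2, ui2*uj1, ui2*uj2,       # input cross terms
        yi1*yj1, yi1*yj2, yi2*yj1, yi2*yj2,       # output cross terms
        ui1*yi1, ui1*yi2, ui2*yi1, ui2*yi2,       # own input-output terms
        ui1*yj1, ui1*yj2, ui2*yj1, ui2*yj2,       # u_i with y_j terms
        yi1*uj1, yi1*uj2, yi2*uj1, yi2*uj2,       # y_i with u_j terms
    ])

# toy usage with three samples of each signal
yi = np.array([0.1, 0.2, 0.3])
ui = np.array([1.0, -1.0, 1.0])
yj = np.array([0.2, 0.1, 0.0])
uj = np.array([0.5, -0.5, 0.5])
psi = observation_vector(yi, ui, yj, uj, k=2)
```

The vector has 34 entries, matching the parameter vector $\theta_i(k)$ of (18) term by term.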

    The formulation of the parametric estimation problem is based on an adjustable model, using the prediction error method and recursive least squares techniques, starting from several measured values (inputs, outputs) of the considered system.

    The prediction error $\varepsilon _i (k)$ , which represents the difference between the output of the interconnected system $S_i $ and the predicted output of the adjustable model, is defined by the following expression:

    $\label{eq20} \varepsilon _i (k)=y_i (k)-\hat {\theta }_i^{\rm T} (k-1)\, \psi _i (k).$

    (20)

The parameter vector $\theta _i (k)$ , which is given by (18), can be estimated on the basis of the recursive least squares (RLS) algorithm by using the prediction error method. It can easily be shown that this algorithm, which estimates the parameters involved in the nonlinear INDARMA mathematical model, as given by (16), can be described as follows[7, 18]:

    $\begin{align} \label{eq21} & \hat {\theta }_i (k)=\hat {\theta }_i (k-1)+P_i (k)\, \psi _i (k)\, \varepsilon _i (k)\notag \\ & P_i (k)=\frac{1}{\lambda _i (k)}\times\notag\\ & \qquad\left[{P_i (k-1)-\frac{P_i (k-1)\, \psi _i (k)\, \psi _i^{\rm T} (k)\, P_i (k-1)}{\lambda _i (k)+\psi _i^{\rm T} (k)\, P_i (k-1)\, \psi _i (k)}} \right] \notag\\ & \varepsilon _i (k)=y_i (k)-\hat {\theta }_i^{\rm T} (k-1)\, \psi _i (k) \end{align}$

    (21)

where $P_i (k)$ is an adaptation gain matrix and $\lambda _i (k)$ is an exponential forgetting factor $\left({0 < \lambda _i (k) < 1} \right)$ , which can be calculated from the following recursive expression:

    $\label{eq22} \lambda _i (k)=\lambda_{^{\circ}i} \lambda _i (k-1)+\lambda _i^{\circ} (1-\lambda_{^{\circ}i})$

    (22)

    with $0 < \lambda _{^{\circ}i} < 1$ and $0 < \lambda _i^{\circ} < 1$ .
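As a numerical illustration of recursion (22), the short Python sketch below shows the forgetting factor converging geometrically toward its asymptotic value; the numerical values of $\lambda _i (0)$ , $\lambda _{^{\circ}i}$ and $\lambda _i^{\circ}$ are illustrative assumptions, not values taken from this paper.

```python
# Sketch of the forgetting-factor recursion (22):
#   lambda_i(k) = lam0i * lambda_i(k-1) + lam_inf * (1 - lam0i)
# lam0i, lam_inf and the initial value are illustrative assumptions.
lam0i = 0.99     # weighting coefficient, 0 < lam0i < 1
lam_inf = 0.995  # asymptotic value lambda_i^o, 0 < lam_inf < 1

lam = 0.95       # assumed initial value lambda_i(0)
history = [lam]
for k in range(1, 400):
    lam = lam0i * lam + lam_inf * (1.0 - lam0i)
    history.append(lam)

# lambda_i(k) increases monotonically here and converges
# geometrically (at rate lam0i) toward lam_inf.
print(round(history[-1], 3))
```

Starting below $\lambda _i^{\circ}$ , the factor rises toward it, so old measurements are forgotten quickly at first and more slowly as the estimation settles.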

The use of a forgetting factor $\lambda _i (k)$ in the adaptation gain matrix $P_i (k)$ of the recursive parametric estimation algorithm (21) improves the adaptation capacity of this gain matrix, thereby ensuring a better tracking of the time-varying parameters of the considered system. In particular, the forgetting factor $\lambda _i (k)$ prevents the entries of the adaptation gain matrix $P_i (k)$ from becoming too small, so that any new data (the measured input and output values) entering the observation vector $\psi _i (k)$ continue to affect the estimation quality. The forgetting factor progressively discounts the influence of the old measurements in favour of the new measured values.

It should be noted that the recursive parametric estimation algorithm is suitable for industrial systems that are only slightly noisy. However, the parametric estimation quality degrades markedly beyond a certain value of the variance of the noise acting on the system output.

    The practical implementation of the recursive least squares algorithm (21) will be carried out starting from the knowledge of several measured values of the inputs and the outputs resulting from all the interconnected nonlinear systems $S_i $ , $1\le i\le N$ , and the initial conditions $P_i (0)$ and $\theta _i (0)$ .
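As a minimal sketch of this practical implementation, the Python code below runs the RLS recursion (21) with a forgetting factor on an illustrative three-parameter bilinear model; the model, the regressor, the excitation and all numerical values are assumptions for demonstration, not the full INDARMA regressor (19).

```python
import numpy as np

def rls_step(theta, P, psi, y, lam=0.98):
    """One update of the RLS algorithm (21) with forgetting factor lam."""
    eps = y - theta @ psi                      # prediction error (20)
    denom = lam + psi @ P @ psi                # lam + psi^T P psi
    P = (P - np.outer(P @ psi, psi @ P) / denom) / lam
    theta = theta + P @ psi * eps              # parameter update
    return theta, P, eps

# Illustrative three-parameter bilinear model (assumed, for demonstration):
#   y(k) = -a*y(k-1) + b*u(k-1) + h*u(k-1)*y(k-1)
rng = np.random.default_rng(0)
true_theta = np.array([-0.5, 0.3, 0.2])        # [a, b, h]
theta = np.zeros(3)
P = 1000.0 * np.eye(3)                         # large initial gain P_i(0)
y_prev = u_prev = 0.0
for k in range(500):
    u = rng.choice([-2.0, 2.0])                # PRBS-like excitation
    psi = np.array([-y_prev, u_prev, u_prev * y_prev])
    y = true_theta @ psi                       # noise-free output
    theta, P, _ = rls_step(theta, P, psi, y)
    y_prev, u_prev = y, u

print(np.round(theta, 3))
```

With noise-free data and a sufficiently exciting input, the estimate recovers the assumed parameter vector; the order of operations matters, since $\varepsilon _i (k)$ uses $\hat{\theta }_i (k-1)$ while the parameter update uses the freshly computed $P_i (k)$ .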

• This subsection is devoted to the analysis of the recursive parametric estimation method, which can be applied to large-scale time-varying systems composed of several interconnected nonlinear monovariable systems, as described by the INDARMA mathematical model. The convergence analysis of the RLS algorithm can be conducted based on the hyperstability and positivity method[19]. The convergence properties of this estimator for linear interconnected systems were established by Kamoun[20]. It can readily be shown that similar properties hold for nonlinear interconnected systems described by INDARMA mathematical models. These properties can be stated in the following theorem:

    Theorem 1. Let us consider a nonlinear large-scale system operating in a deterministic environment, which consists of several interconnected nonlinear systems that can be described by the discrete-time INDARMA mathematical model, as given by (1). The parametric estimation involved in the considered mathematical model can be conducted using the recursive least squares algorithm RLS: If

1) the components of the vectors $\hat {\theta }_i (0)$ and $\psi _i (k)$ are finite

2) the adaptation gain matrix $P_i (k)$ is decreasing

then the convergence of this algorithm is ensured.

The global quality of the obtained estimate can be assessed by computing the following parametric distance $D_i \left(k \right)$ :

    $\begin{align} \label{eq23} D_i \left(k \right) &=\left[\sum\limits_{r=1}^2 {\left[{\frac{a_{i, r} (k)-\hat {a}_{i, r} (k)}{a_{i, r} (k)}} \right]^2} +\right.\notag\\ & \sum\limits_{r=1}^2 {\left[{\frac{b_{i, r} (k)-\hat {b}_{i, r} (k)}{b_{i, r} (k)}} \right]^2} +\notag\\ & \sum\limits_{j=1, j\ne i}^N \sum\limits_{r=1}^2 \left[ {\frac{b_{ij, r} (k)-\hat {b}_{ij, r} (k)}{b_{ij, r} (k)}} \right]^2+\notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r=1}^2 {\left[ {\frac{a_{ij, r} (k)-\hat {a}_{ij, r} (k)}{a_{ij, r} (k)}} \right]^2} } + \notag\\ & \sum\limits_{j=1}^N \sum\limits_{r_1=1}^2 \sum\limits_{r_2=1}^2 \left[{\frac{f_{ij, r_1 r_2 } (k)-\hat {f}_{ij, r_1 r_2 } (k)}{f_{ij, r_1 r_2 } (k)}} \right]^2\mbox{+}\notag\\ & \sum\limits_{j=1}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\left[{\frac{g_{ij, r_1 r_2 } (k)-\hat {g}_{ij, r_1 r_2 } (k)}{g_{ij, r_1 r_2 } (k)}} \right]^2} } } +\notag\\ & \sum\limits_{j=1}^N \sum\limits_{r_1=1}^2 \sum\limits_{r_2=1}^2 \left[{\frac{h_{ij, r_1 r_2 } (k)-\hat {h}_{ij, r_1 r_2 } (k)}{h_{ij, r_1 r_2 } (k)}} \right]^2+\notag\\ & \left. \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r_1=1}^2 \sum\limits_{r_2=1}^2 {\left[{\frac{l_{ij, r_1 r_2 } (k)-\hat {l }_{ij, r_1 r_2 } (k)}{l_{ij, r_1 r_2 } (k)}} \right]^2} } \right]^{0.5} \end{align}$

    (23)

    with $j=1, \cdots, N$ , $j\ne i$ .
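The parametric distance (23) is simply the square root of the sum of squared relative errors over all estimated coefficients. A generic sketch, with hypothetical coefficient values supplied as flat arrays, is:

```python
import numpy as np

def parametric_distance(true_params, est_params):
    """Parametric distance D_i(k) of (23): square root of the sum of
    squared relative errors over all (true, estimated) coefficient pairs,
    supplied here as flat arrays in a common order."""
    t = np.asarray(true_params, dtype=float)
    h = np.asarray(est_params, dtype=float)
    rel = (t - h) / t
    return float(np.sqrt(np.sum(rel ** 2)))

# Hypothetical coefficient values, for illustration only
print(parametric_distance([0.5, -0.3], [0.45, -0.33]))
```

The distance vanishes only for a perfect estimate, and each coefficient contributes relative to its own magnitude, so small and large parameters are weighted comparably.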

• This subsection is devoted to the parametric estimation of interconnected nonlinear systems operating in a stochastic environment, which can be described by the input-output INARMAX mathematical models, as given by (10), with unknown time-varying parameters.

For the sake of simplicity and without loss of generality, we consider an interconnected nonlinear system operating in a stochastic environment, which can be described by a second-order input-output INARMAX mathematical model with a second degree of nonlinearity. This assumption simplifies the formulation of the parametric estimation problem for the considered mathematical model.

    We can write the output $y_i (k)$ of the considered system $S_i $ by the following expression:

$\begin{align} \label{eq24} y_i & (k)=-\sum\limits_{r=1}^2 {a_{i, r} (k)y_i (k-r)}-\notag\\ & \sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {f_{ii, r_1 r_2 } (k)\mbox{ }y_i (k-r_1)y_i (k-r_2)} } +\notag\\ & \mbox{ }\sum\limits_{r=1}^2 {b_{i, r} (k)u_i (k-d_i-r)} +\notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r=1}^2 {b_{ij, r} (k)\mbox{ }u_j (k-d_{ij}-r)} }+\notag \\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r=1}^2 {a_{ij, r} (k)\mbox{ }y_j (k-t_{ij}-r)} } +\notag\\ & \sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {g_{ii, r_1 r_2 } (k)\mbox{ }u_i (k-r_1)u_i (k-r_2)} } +\notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {g_{ij, r_1 r_2 } (k)\mbox{ }u_i (k-r_1)u_j (k-r_2)} } } +\notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {f_{ij, r_1 r_2 } (k)\mbox{ }y_i (k-r_1)y_j (k-r_2)} } } +\notag\\ & \sum\limits_{j=1}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {h_{ij, r_1 r_2 } (k)\mbox{ }u_i (k-r_1)y_j (k-r_2)} } } +\notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\ell _{ij, r_1 r_2 } (k)\mbox{ }y_i (k-r_1)u_j (k-r_2)} } } +\notag\\ & \sum\limits_{r=1}^2 {c_{i, r} \mbox{ }e_i (k-r)} +\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {c_{ii, r_1 r_2 } \mbox{ }e_i (k-r_1)e_i (k-r_2)} } +\notag\\ & \sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\alpha _{ii, r_1 r_2 } \mbox{ }y_i (k-r_1)e_i (k-r_2)} } +\notag\\ & \sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\beta _{ii, r_1 r_2 } \mbox{ }u_i (k-r_1)e_i (k-r_2)} } +e_i (k). \end{align}$

    (24)

    The output $y_i (k)$ of the considered system $S_i $ can be expressed by the following developed form:

    $ \begin{align*} \label{eq25} y_i & (k)=-a_{i, 1} (k)\; y_i (k-1)-\notag\\ & a_{i, 2} (k)\; y_i (k-2)-f_{ii, 11} (k)\; y_i^2 (k-1)-\notag\\ & f_{ii, 220} (k)\; y_i (k-1)\; y_i (k-2)-f_{ii, 22} (k)\; y_i^2 (k-2)+\notag \\ & b_{i, 1} (k)\; u_i (k-d_i-1)+b_{i, 2} (k)\; u_i (k-d_i-2)+\notag\\ & b_{ij, 1} (k)\; u_j (k-d_{ij}-1) +b_{ij, 2} (k)\; u_j (k-d_{ij}-2)+\notag\\ & a_{ij, 1} (k)\; y_j (k-t_{ij}-1)+a_{ij, 2} (k)\; y_j (k-t_{ij}-2) +\notag\\ & g_{ii, 11} (k)\; u_i^2 (k-1)+g_{ii, 220} (k)\; u_i (k-1)\; u_i (k-2)+\notag\\ & g_{ii, 22} (k)\; u_i^2 (k-2) +\notag\\ & g_{ij, 11} (k)\; u_i (k-1)\; u_j (k-1)\mbox{+}g_{ij, 12} (k)\; u_i (k-1)\; u_j (k-2) +\notag\\ & g_{ij, 21} (k)\; u_i (k-2)\; u_j (k-1)\mbox{+}g_{ij, 22} (k)\; u_i (k-2)\; u_j (k-2) +\notag\\ & f_{ij, 11} (k)\; y_i (k-1)\; y_j (k-1)\mbox{+ }f_{ij, 12} (k)\; y_i (k-1)\; y_j (k-2) +\notag\\ & f_{ij, 21} (k)\; y_i (k-2)\; y_j (k-1)\mbox{+ }f_{ij, 22} (k)\; y_i (k-2)\; y_j (k-2) +\notag \end{align*} $

    $\begin{align} & h_{ii, 11} (k)\mbox{ }u_i (k-1)y_i (k-1)\mbox{+ }h_{ii, 12} (k)\mbox{ }u_i (k-1)y_i (k-2) +\notag\\ & h_{ii, 21} (k)\mbox{ }u_i (k-2)y_i (k-1)\mbox{+ }h_{ii, 22} (k)\mbox{ }u_i (k-2)y_i (k-2) +\notag\\ & h_{ij, 11} (k)\mbox{ }u_i (k-1)y_j (k-1)\mbox{+ }h_{ij, 12} (k)\mbox{ }u_i (k-1)y_j (k-2) +\notag\\ & h_{ij, 21} (k)\mbox{ }u_i (k-2)y_j (k-1)\mbox{+ }h_{ij, 22} (k)\mbox{ }u_i (k-2)y_j (k-2) +\notag\\ & \ell _{ij, 11} (k)\mbox{ }y_i (k-1)u_j (k-1)+\ell _{ij, 12} (k)\mbox{ }y_i (k-1)u_j (k-2) +\notag\\ & \ell _{ij, 21} (k)\mbox{ }y_i (k-2)u_j (k-1)+\ell _{ij, 22} (k)\mbox{ }y_i (k-2)u_j (k-2) +\notag\\ & c_{i, 1} \; e_i (k-1)+c_{i, 2} \; e_i (k-2)+c_{ii, 11} \; e_i^2 (k-1) +\notag\\ & c_{ii, 220} \; e_i (k-1)\; e_i (k-2)+c_{ii, 22} \; e_i^2 (k-2)+\notag \\ & \alpha _{ii, 11} \; y_i (k-1)\; e_i (k-1)\mbox{+ }\alpha _{ii, 12} \; y_i (k-1)\; e_i (k-2) +\notag\\ & \alpha _{ii, 21} \; y_i (k-2)\; e_i (k-1)\mbox{+ }\alpha _{ii, 22} \; y_i (k-2)\; e_i (k-2) +\notag\\ & \beta _{ii, 11} \; u_i (k-1)\; e_i (k-1)\mbox{+ }\beta _{ii, 12} \; u_i (k-1)\; e_i (k-2) +\notag\\ & \beta _{ii, 21} \; u_i (k-2)\; e_i (k-1)\mbox{+ }\beta _{ii, 22} \; u_i (k-2)\; e_i (k-2)+e_i (k) \end{align}$

    (25)

    with $c_{ii, 220}=c_{ii, 12} +c_{ii, 21} $ and $j=1, \cdots, N$ , $j\ne i$ .

    The output $y_i (k)$ of this system, as given by (25), can be written in the following compact form:

    $\begin{align} \label{eq26} y_i (k)=\theta _i^{\rm T} (k)\, \psi _i (k)+e_i (k) \end{align}$

    (26)

    where $\theta _i (k)$ and $\psi _i (k)$ are, respectively, the parameter and the observation vectors, which are defined as follows:

    $\begin{align} \label{eq27} \theta _i^{\rm T} & (k)=\left[{a_{i, 1} (k)\, \, a_{i, 2} (k)\, \, f_{ii, 11} (k)\, \, f_{ii, 220} (k)\, \, f_{ii, 22} (k)\, \, } \right. \notag\\ & \mbox{ }b_{i, 1} (k)\, \, b_{i, 2} (k)\, \, b_{ij, 1} (k)\, \, b_{ij, 2} (k)\, \, a_{ij, 1} (k)\, \, a_{ij, 2} (k)\, \notag\\ & \mbox{ g}_{ii, 11} (k)\, \, g_{ii, 220} (k)\, \, g_{ii, 22} (k)\, \, g_{ij, 11} (k)\, \, g_{ij, 12} (k)\, \notag\\ & \mbox{ }g_{ij, 21} (k)\, \, g_{ij, 22} (k)\, \, f_{ij, 11} (k)\, \, f_{ij, 12} (k)\, \, f_{ij, 21} (k)\, \notag\\ & \mbox{ }f_{ij, 22} (k)\, \, h_{ii, 11} (k)\, \, h_{ii, 12} (k)\, \, h_{ii, 21} (k)\, \, h_{ii, 22} (k)\, \notag\\ & \mbox{ }h_{ij, 11} (k)\, \, h_{ij, 12} (k)\, \, h_{ij, 21} (k)\, \, h_{ij, 22} (k)\, \, l _{ij, 11} (k)\, \notag\\ & \mbox{ }l _{ij, 12} (k)\, \, l _{ij, 21} (k)\, \, l _{ij, 22} (k)\, \, c_{i, 1} \, \, c_{i, 2} \, \, c_{ii, 11} \, \, c_{ii, 220} \, \, c_{ii, 22} \notag\\ & \left. {\mbox{ }\alpha _{ii, 11} \, \, \alpha _{ii, 12} \, \, \alpha _{ii, 21} \, \, \alpha _{ii, 22} \, \, \beta _{ii, 11} \, \, \beta _{ii, 12} \, \, \beta _{ii, 21} \, \, \beta _{ii, 22} } \right] \end{align}$

    (27)

    and

$\begin{align*} \label{eq28} \psi _i^{\rm T} & (k)=[-y_i (k-1)-y_i (k-2)-y_i^2 (k-1)-\notag\\ & y_i (k-1)y_i (k-2)-\notag\\ & y_i^2 (k-2)u_i (k-d_i-1)u_i (k-d_i-2)u_j (k-d_{ij}-1) \notag\\ & u_j (k-d_{ij}-2)y_j (k-t_{ij}-1)y_j (k-t_{ij}-2)u_i^2 (k-1)\notag\\ & u_i (k-1)u_i (k-2)u_i^2 (k-2)u_i (k-1)u_j (k-1)\notag\\ & u_i (k-1)u_j (k-2)u_i (k-2)u_j (k-1)u_i (k-2)u_j (k-2)\notag \\ & y_i (k-1)y_j (k-1)y_i (k-1)y_j (k-2)y_i (k-2)y_j (k-1)\notag \\ & y_i (k-2)y_j (k-2)u_i (k-1)y_i (k-1)u_i (k-1)y_i (k-2)\notag \\ & u_i (k-2)y_i (k-1)u_i (k-2)y_i (k-2)u_i (k-1)y_j (k-1)\notag\\ & u_i (k-1)y_j (k-2)u_i (k-2)y_j (k-1)u_i (k-2)y_j (k-2)\notag \\ & y_i (k-1)u_j (k-1)y_i (k-1)u_j (k-2)y_i (k-2)u_j (k-1) \notag\\ & y_i (k-2)u_j (k-2)e_i (k-1)e_i (k-2)e_i^2 (k-1)e_i (k-1) \notag\\ & e_i (k-2)e_i^2 (k-2)y_i (k-1)e_i (k-1)y_i (k-1)e_i (k-2) \notag\\ & y_i (k-2)e_i (k-1)y_i (k-2)e_i (k-2)u_i (k-1)e_i (k-1)\notag\\ & {u_i (k-1)e_i (k-2)u_i (k-2)e_i (k-1)u_i (k-2)e_i (k-2)} \big]. \end{align*}$

    (28)

The problem here consists in determining an optimal estimate $\hat {\theta }_i (k)$ of the parameter vector $\theta _i (k)$ from the knowledge of the quantities involved in the observation vector $\psi _i (k)$ . However, the observation vector $\psi _i (k)$ is not fully measurable, because it contains the noise terms $e_i (k-1)$ and $e_i (k-2)$ , which cannot be measured. To overcome this problem, we can approximate the observation vector $\psi _i (k)$ in such a way that

$\begin{align} \label{eq29} \hat{\psi}_i^{\rm T} & (k)=\left[-y_i (k-1)-\right.\notag\\ & y_i (k-2)-y_i^2 (k-1)-y_i (k-1)y_i (k-2)-\notag\\ & y_i^2 (k-2)u_i (k-d_i-1)u_i (k-d_i-2)u_j (k-d_{ij}-1)\times \notag\\ & u_j (k-d_{ij}-2)y_j (k-t_{ij}-1)y_j (k-t_{ij}-2)u_i^2 (k-1)\times \notag \\ & u_i (k-1)u_i (k-2)u_i^2 (k-2)u_i (k-1)u_j (k-1)\times \notag\\ & u_i (k-1)u_j (k-2)u_i (k-2)u_j (k-1)u_i (k-2)u_j (k-2)\times \notag\\ & y_i (k-1)y_j (k-1)y_i (k-1)y_j (k-2)y_i (k-2)y_j (k-1)\times \notag \\ & y_i (k-2)y_j (k-2)u_i (k-1)y_i (k-1)u_i (k-1)y_i (k-2)\times \notag \\ & u_i (k-2)y_i (k-1)u_i (k-2)y_i (k-2)u_i (k-1)y_j (k-1)\times \notag \\ & u_i (k-1)y_j (k-2)u_i (k-2)y_j (k-1)u_i (k-2)y_j (k-2)\times \notag \\ & y_i (k-1)u_j (k-1)y_i (k-1)u_j (k-2)y_i (k-2)u_j (k-1)\times \notag\\ & y_i (k-2)u_j (k-2)\varepsilon _i (k-1)\varepsilon _i (k-2)\varepsilon _i^2 (k-1)\times \notag\\ & \varepsilon _i (k-1)\varepsilon _i (k-2)\varepsilon _i^2 (k-2)y_i (k-1)\varepsilon _i (k-1)\times \notag\\ & y_i (k-1)\varepsilon _i (k-2)\times \notag\\ & y_i (k-2)\varepsilon _i (k-1)y_i (k-2)\varepsilon _i (k-2)u_i (k-1)\varepsilon _i (k-1)\times \notag\\ & {u_i (k-1)\varepsilon _i (k-2)u_i (k-2)\varepsilon _i (k-1)u_i (k-2)\varepsilon _i (k-2)} \big]. \end{align}$

    (29)

The recursive extended least squares (RELS) algorithm, which estimates the parameters involved in the parameter vector $\theta _i (k)$ , as defined by (27), can be described as follows[7, 18, 20]:

$\begin{align} \label{eq30} & \hat {\theta }_i (k)=\hat {\theta }_i (k-1)+P_i (k)\, {\hat{\psi} } _i (k)\, \varepsilon _i (k) \notag\\ & P_i (k)=\frac{1}{\lambda _i (k)}\times\notag\\ & \qquad\left[{P_i (k-1)-\frac{P_i (k-1)\, {\hat{\psi} } _i (k)\, {\hat\psi }^{\rm T} _i (k)\, P_i (k-1)}{\lambda _i (k)+ {\hat\psi } _i^{\rm T} (k)\, P_i (k-1)\, {\hat\psi } _i (k)}} \right]\notag \\ & \varepsilon _i (k)=y_i (k)-\hat {\theta }_i^{\rm T} (k-1)\, {\hat\psi } _i (k). \end{align}$

    (30)

The role of the forgetting factor $\lambda _i (k)$ is to improve the adaptation capacity of the gain matrix $P_i (k)$ , while ensuring a better tracking of the time-varying parameters of the considered system.
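A minimal sketch of the RELS recursion (30) is given below: the only structural difference from RLS is that past residuals $\varepsilon _i (k-r)$ replace the unmeasurable noise terms $e_i (k-r)$ in the regressor, as in (29). The scalar ARMAX-type model and all numerical values are illustrative assumptions, not the full INARMAX regressor.

```python
import numpy as np

def rels_step(theta, P, psi_hat, y, lam=0.999):
    """One update of the RELS algorithm (30); psi_hat is the regressor in
    which the unmeasurable noise terms are replaced by past residuals."""
    eps = y - theta @ psi_hat
    denom = lam + psi_hat @ P @ psi_hat
    P = (P - np.outer(P @ psi_hat, psi_hat @ P) / denom) / lam
    theta = theta + P @ psi_hat * eps
    return theta, P, eps

# Illustrative scalar ARMAX-type model (assumed):
#   y(k) = -a*y(k-1) + b*u(k-1) + c*e(k-1) + e(k)
rng = np.random.default_rng(1)
a, b, c = -0.5, 0.3, 0.2
theta = np.zeros(3)
P = 1000.0 * np.eye(3)
y1 = u1 = e1 = eps1 = 0.0
for k in range(5000):
    u = rng.choice([-2.0, 2.0])           # PRBS-like input
    e = rng.normal(0.0, 0.1)              # white noise
    y = -a * y1 + b * u1 + c * e1 + e
    psi_hat = np.array([-y1, u1, eps1])   # residual replaces e(k-1)
    theta, P, eps = rels_step(theta, P, psi_hat, y)
    y1, u1, e1, eps1 = y, u, e, eps

print(np.round(theta, 2))
```

As the parameter estimate improves, the residual $\varepsilon _i (k)$ approaches the true noise $e_i (k)$ , which is what makes the substitution in the regressor consistent.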

• This subsection is devoted to the convergence analysis of the RELS algorithm, which can be used to estimate the parameters of a nonlinear stochastic large-scale system that can be described by the INARMAX mathematical model. In this case, we assume that the noise model is linear. Thus, the mathematical model which describes the dynamics of the considered system is given as follows:

$\begin{gathered} {A_i}({q^{-1}}, k){\text{ }}{y_i}(k)={q^{-{d_i}}}{\text{ }}{B_i}({q^{-1}}, k){\text{ }}{u_i}(k) + \hfill \\ \sum\limits_{j=1, j \ne i}^N {{q^{-{d_{ij}}}}{B_{ij}}({q^{-1}}, k){u_j}(k)} + \hfill \\ \sum\limits_{j=1, j \ne i}^N {{q^{-{t_{ij}}}}{A_{ij}}({q^{-1}}, k){y_j}(k)} + {C_i}({q^{-1}}){\text{ }}{e_i}(k) + \hfill \\ {f_{ij}}[{y_i}(k-1), \cdots, {y_i}(k-{n_i}), {u_i}(k-1), \cdots, {u_i}(k-{n_i}), \hfill \\ {y_j}(k-1), \cdots, {y_j}(k-{n_i}), {u_j}(k-1), \cdots, {u_j}(k-{n_i})] \hfill \\ \end{gathered} $

    (31)

    where $f_{ij} \left(\cdot \right)$ is some nonlinear function with degree of nonlinearity $p$ , which depends on the sequences of the inputs and the outputs of the interconnected nonlinear system $S_i $ , $1\le i\le N$ , and other interconnected systems $S_j $ , $j=1, \cdots, N$ , $j\ne i$ .

The objective here is to determine the convergence properties of the estimator, i.e., whether the estimated parameters converge asymptotically (in the statistical-average sense) to the real parameters of the considered system.

    The formulation of this convergence problem for this class of INARMAX mathematical models with linear noise model is similar to those for the linear case (IARMAX), which was developed by Kamoun[20]. The convergence analysis of the RELS estimator can be conducted based on the ordinary differential equation method[21-22].

    The RELS algorithm, as defined by (30), can be written in the following form[23]:

    $\begin{align} \label{eq32} & \hat {\theta }_i (k)=\hat {\theta }_i (k-1)+\zeta _i (k)R_i^{-1} (k)\, {\hat\psi } _i (k)\, \varepsilon _i (k) \notag\\ & R_i (k)=R_i (k-1)+\zeta _i (k)\left[{\hat\psi } _i (k)\, {\hat\psi } _i^{\rm T} (k)-R_i (k-1) \right] \notag\\ & \varepsilon _i (k)=y_i (k)-\hat {\theta }_i^{\rm T} (k-1)\, {\hat\psi } _i (k) \end{align}$

    (32)

    where

    $P_i (k)=\zeta _i (k)R_i^{-1} (k)$

    (33)

    and

    $\lambda _i (k)=\frac{\zeta _i (k-1)}{\left({1-\zeta _i (k)} \right)\zeta _i (k)}.$

    (34)

We define the following stationary variables $\bar {\varepsilon }_i (k, \theta _i^c)$ and $\bar {\hat\psi } _i (k, \theta _i^c)$ , such that

    $\bar {\varepsilon }_i (k, \theta _i^c)=y_i (k)-\theta _i^{c^{\rm T}} \, \bar {\hat\psi }_i (k, \theta _i^c)$

    (35)

    and

$\begin{align} \label{eq36} \bar{\hat{\psi}}_i (k, & \theta _i^c)=\left[-y_i (k-1);\cdots; -y_i (k-n_i); -y_i^2 (k-1);\cdots; \notag\right.\\ &-y_i^2 (k-n_i); \cdots; \notag\\ &-y_i^p (k-1);\cdots; -y_i^p (k-n_i); u_i (k-1);\cdots; \notag\\ & u_i (k-n_i); u_i^2 (k-1);\cdots; \notag \\ & u_i^2 (k-n_i); \cdots; u_i^p (k-1);\cdots; \notag\\ & u_i^p (k-n_i); u_j (k-1);\cdots; \notag\\ & u_j (k-n_i); y_j (k-1);\cdots; \notag\\ & y_j (k-n_i); u_i (k-1)y_i (k-1);\cdots; \notag\\ & u_i (k-n_i)y_i (k-n_i); u_i (k-1)y_j (k-1);\cdots; \notag\\ & u_i (k-n_i)y_j (k-n_i); y_i (k-1)u_j (k-1);\cdots; \notag\\ & {y_i (k-n_i)u_j (k-n_i); \bar {\varepsilon }_i (k-1, \theta _i^c)\cdots; \bar {\varepsilon }_i (k-n_i, \theta _i^c)} \big] \end{align}$

    (36)

    where $\theta _i^c $ is a fixed vector in a domain for which these quantities can be defined ( $\hat {\theta }_i (k)=\theta _i^c)$ .

    The analysis of the RELS algorithm, as given by (30), is based on the following associated differential equation:

    $\begin{align} \label{eq37} & \frac{{\rm d}}{{\rm d}\tau }\theta _i^c (\tau)=R_i^{-1} (\tau)\; F_i (\theta _i^c (\tau)) \notag\\ & \frac{{\rm d}}{{\rm d}\tau }R_i (\tau)=G_i (\theta _i^c (\tau))-R_i (\tau) \end{align}$

    (37)

where $R_i (\tau)$ is a positive definite matrix, $\theta _i^c (\tau)$ is a parameter vector belonging to a certain domain for which the considered system is stable, and the vector $F_i (\theta _i^c (\tau))$ and the matrix $G_i (\theta _i^c (\tau))$ are defined as follows:

$F_i (\theta _i^c)={\rm E}\left[{\bar{\hat{\psi}}_i (k, \theta _i^c)\; \bar {\varepsilon }_i (k, \theta _i^c)} \right]$

    (38)

    and

$G_i (\theta _i^c)={\rm E}\left[{\bar{\hat{\psi}}_i (k, \theta _i^c)\; \bar{\hat{\psi}}_i^{\rm T} (k, \theta _i^c)} \right].$

    (39)

The prediction error $\bar {\varepsilon }_i (k, \theta _i^c)$ , as given by (35), can be expressed in the following form:

$\bar {\varepsilon }_i (k, \theta _i^c)=\theta _i^{\rm T} (k)\, \psi _i (k)+e_i (k)-\theta _i^{c^{\rm T}} \, \bar{\hat{\psi}}_i (k, \theta _i^c)$

    (40)

    where

    $y_i (k)=\theta _i^{\rm T} (k)\, \psi _i (k)+e_i (k)$

    (41)

It can easily be shown that expression (40) can be written in the following form:

    ${{\bar{\varepsilon }}_{i}}(k, \theta _{i}^{c})={{\left[{{H}_{i}}\left({{q}^{-1}} \right){{{\bar{\hat{\psi }}}}_{i}}(k, \theta _{i}^{c}) \right]}^{\text{T}}}\left[{{\theta }_{i}}(k)-\theta _{i}^{c} \right]+{{e}_{i}}(k)$

    (42)

    where

    ${{H}_{i}}({{q}^{-1}})=\frac{1}{{{C}_{i}}({{q}^{-1}})}.$

    (43)

Based on the convergence properties of the RELS algorithm for the IARMAX and NARMAX mathematical models[20, 23], the convergence properties of the RELS estimator for the considered INARMAX mathematical models can readily be established, as stated in the following theorem:

    Theorem 2. Let us consider a nonlinear large-scale system operating in a stochastic environment, which consists of several interconnected nonlinear systems that can be described by the discrete-time INARMAX mathematical model with pseudo linear regression, as defined by (30). The parametric estimation involved in the considered mathematical model can be conducted using the recursive extended least squares algorithm RELS. We assume:

1) The components of the vectors $\hat {\theta }_i (0)$ and $\psi _i (k)$ are finite.

    2) The adaptation gain matrix $P_i (k)$ is decreasing.

    3) The input signal $u_i (k)$ applied to the considered system $S_i $ is stationary and sufficiently exciting.

    4) The noise $\left\{ {e_i (k)} \right\}$ is a white sequence with zero mean and constant variance $\sigma _i^2 $ .

Under these assumptions, we can assert the following: if the transfer function $\frac{1}{{{C}_{i}}\left({{q}^{-1}} \right)}-\frac{1}{2}$ is strictly positive real, then the convergence of the RELS algorithm is ensured.
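This positivity condition can be checked numerically by evaluating ${\rm Re}\{1/C_i ({\rm e}^{-{\rm j}\omega })\}-1/2$ over a frequency grid. The sketch below returns the minimum margin over frequency; the coefficient values used in the two calls are illustrative, the first being the $c_{1, 1}=0.2$ of the simulation example.

```python
import numpy as np

def spr_margin(c_coeffs, n_freq=2048):
    """Minimum over [0, pi] of Re{1/C(e^{-jw})} - 1/2 for the polynomial
    C(q^{-1}) = 1 + c_1 q^{-1} + ... + c_n q^{-n}.
    A positive margin means the condition of Theorem 2 holds."""
    w = np.linspace(0.0, np.pi, n_freq)
    q = np.exp(-1j * w)
    C = 1.0 + sum(ci * q ** (r + 1) for r, ci in enumerate(c_coeffs))
    return float(np.min((1.0 / C).real - 0.5))

print(spr_margin([0.2]) > 0)       # c_{1,1} = 0.2: True
print(spr_margin([0.8, 0.5]) > 0)  # violates the condition: False
```

For a first-order noise polynomial $C_i (q^{-1})=1+c_{i, 1} q^{-1}$ with $\vert c_{i, 1}\vert < 1$ , the margin is always positive, since ${\rm Re}\{1/C_i\}-1/2=\frac{1}{2}(1-c_{i, 1}^2)/\vert C_i\vert ^2$ .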

Let us notice that the global quality of the obtained estimate can be assessed by computing the following parametric distance $D_i (k)$ :

    $\begin{align} \label{eq44} D_i \left(k \right) &=\left[\sum\limits_{r=1}^2 {\left[{\frac{a_{i, r} (k)-\hat {a}_{i, r} (k)}{a_{i, r} (k)}} \right]^2}+\notag\right.\\ & \sum\limits_{r=1}^2 \left[{\frac{b_{i, r} (k)-\hat {b}_{i, r} (k)}{b_{i, r} (k)}} \right]^2 +\sum\limits_{r=1}^2 {\left[{\frac{c_{i, r}-\hat {c}_{i, r} (k)}{c_{i, r} }} \right]^2} +\notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r=1}^2 {\left[ {\frac{b_{ij, r} (k)-\hat {b}_{ij, r} (k)}{b_{ij, r} (k)}} \right]^2} } +\notag\\ & \sum\limits_{j=1, j\ne i}^N {\sum\limits_{r=1}^2 {\left[ {\frac{a_{ij, r} (k)-\hat {a}_{ij, r} (k)}{a_{ij, r} (k)}} \right]^2} } \mbox{+}\notag\\ & \sum\limits_{j=1}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\left[{\frac{f_{ij, r_1 r_2 } (k)-\hat {f}_{ij, r_1 r_2 } (k)}{f_{ij, r_1 r_2 } (k)}} \right]^2} } }+\notag \\ & \sum\limits_{j=1}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\left[{\frac{g_{ij, r_1 r_2 } (k)-\hat {g}_{ij, r_1 r_2 } (k)}{g_{ij, r_1 r_2 } (k)}} \right]^2} } } \mbox{+}\notag\\ & \sum\limits_{j=1}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\left[{\frac{h_{ij, r_1 r_2 } (k)-\hat {h}_{ij, r_1 r_2 } (k)}{h_{ij, r_1 r_2 } (k)}} \right]^2} } } +\notag\\ & {\sum\limits_{j=1, j\ne i}^N {\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\left[{\frac{l _{ij, r_1 r_2 } (k)-\hat {l }_{ij, r_1 r_2 } (k)}{l _{ij, r_1 r_2 } (k)}} \right]^2} } } } \mbox{+}\notag\\ & \sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\left[ {\frac{c_{ii, r_1 r_2 }-\hat {c}_{ii, r_1 r_2 } (k)}{c_{ii, r_1 r_2 } }} \right]^2} }+\notag \\ & \sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\left[ {\frac{\alpha _{ii, r_1 r_2 }-\hat {\alpha }_{ii, r_1 r_2 } (k)}{\alpha _{ii, r_1 r_2 } }} \right]^2} } +\notag\\ & \left.\sum\limits_{r_1=1}^2 {\sum\limits_{r_2=1}^2 {\left[ {\frac{\beta _{ii, r_1 r_2 }-\hat {\beta }_{ii, r_1 r_2 } (k)}{\beta _{ii, r_1 r_2 } }} \right]^2} } \right]^\frac{1}{2} \end{align}$

    (44)

    with $j=1, \cdots, N$ , $j\ne i$ .

• In this section, we present a simulation example to test the performance and the efficiency of the RELS algorithm with forgetting factor, as given by (30).

In this case, we consider an interconnected nonlinear hydraulic system installed in a petroleum company in Tunisia, which consists of three separators (V130, V140 and V141), six shipping motor-pumps (P500, P505, P510, P515, P520 and P525) and three control valves (LCV131, LCV417 and LCV965). The role of these separators is to treat the reservoir fluid coming from the other separation stages, separating the three phases (water, oil and gas) while stabilizing the oil level in each separator.

    Fig. 1 shows the oil circuit diagram of the considered hydraulic system.

    Figure 1.  Oil circuit diagram of the considered interconnected nonlinear hydraulic system

    Fig. 2 represents the diagram of the interaction structure of the considered three interconnected systems $S_1 $ , $S_2 $ and $S_3 $ .

    Figure 2.  Interaction structure of the considered three interconnected nonlinear systems S1, S2 and S3

    These three interconnected nonlinear monovariable systems can be described by the following input-output INARMAX mathematical models with unknown time-varying parameters:

    $\begin{align} \label{eq45} y_1 (k) &=-a_{1, 1} (k)y_1 (k-1)-\notag\\ & f_{11, 11} (k)\mbox{ }y_1^2 (k-1)+b_{1, 1} (k)u_1 (k-1)+\notag \\ & a_{13, 1} (k)\mbox{ }y_3 (k-1)+f_{13, 11} (k)\mbox{ }y_1 (k-1)y_3 (k-1) +\notag\\ & g_{11, 11} (k)\mbox{ }u_1^2 (k-1)\mbox{+ }h_{11, 11} (k)\mbox{ }u_1 (k-1)y_1 (k-1) +\notag\\ & c_{1, 1} e_1 (k-1)+e_1 (k) \end{align}$

    (45)

    $\begin{align} \label{eq46} y_2 (k) &=-a_{2, 1} (k)y_2 (k-1)-\notag\\ & f_{22, 11} (k)\mbox{ }y_2^2 (k-1)+b_{2, 1} (k)u_2 (k-1)+\notag\\ & a_{21, 1} (k)\mbox{ }y_1 (k-1)+f_{21, 11} (k)\mbox{ }y_2 (k-1)y_1 (k-1)+\notag\\ & g_{22, 11} (k)\mbox{ }u_2^2 (k-1)\mbox{+ }h_{22, 11} (k)\mbox{ }u_2 (k-1)y_2 (k-1)+\notag \\ & c_{2, 1} e_2 (k-1)+e_2 (k) \end{align}$

    (46)

    and

    $\begin{align} \label{eq47} y_3 (k) &=-a_{3, 1} (k)y_3 (k-1)-\notag\\ & f_{33, 11} (k)\mbox{ }y_3^2 (k-1)+b_{3, 1} (k)u_3 (k-1)+\notag \\ & a_{32, 1} (k)\mbox{ }y_2 (k-1)+f_{32, 11} (k)\mbox{ }y_3 (k-1)y_2 (k-1)+\notag \\ & g_{33, 11} (k)\mbox{ }u_3^2 (k-1)\mbox{+ }h_{33, 11} (k)\mbox{ }u_3 (k-1)y_3 (k-1) +\notag\\ & c_{3, 1} e_3 (k-1)+e_3 (k) \end{align}$

    (47)

    where $u_1 (k)$ , $y_1 (k)$ and $e_1 (k)$ represent the input, the output and the noise of the interconnected nonlinear system $S_1 $ , respectively, $u_2 (k)$ , $y_2 (k)$ and $e_2 (k)$ denote the input, the output and the noise of the interconnected nonlinear system $S_2 $ , respectively, and $u_3 (k)$ , $y_3 (k)$ and $e_3 (k)$ represent the input, the output and the noise of the interconnected nonlinear system $S_3 $ , respectively.

    It should be noted that the noise sequences $\left\{ {e_1 (k)} \right\}$ , $\left\{ {e_2 (k)} \right\}$ and $\left\{ {e_3 (k)} \right\}$ are assumed to be independent and correspond to a Gaussian distribution with zero mean and constant variance.

    The output $y_1 (k)$ of the interconnected nonlinear system $S_1 $ can be rewritten as follows:

    $y_1 (k)=\theta _1^{\rm T} (k)\psi _1 (k)+e_1 (k)$

    (48)

    where the parameter vector $\theta _1 (k)$ and the observation vector $\psi _1 (k)$ are defined by

    $\begin{align} \label{eq49} \theta _1^{\rm T} & (k)=\left[{a_{1, 1} (k)\, \, f_{11, 11} (k)\, \, b_{1, 1} (k)\, \, a_{13, 1} (k)\, \, } \right. \notag\\ & \left. {\mbox{ }f_{13, 11} (k)\, \, g_{11, 11} (k)\, \, h_{11, 11} (k)\, \, c_{1, 1} } \right] \end{align}$

    (49)

    and

    $\begin{align} \label{eq50} \psi _1^{\rm T} & (k)=\left[{-y_1 (k-1)\; \;-y_1^2 (k-1)\; \;u_1 (k-1)\; \;y_3 (k-1)} \right. \notag\\ & \mbox{ }\; \;y_1 (k-1)y_3 (k-1)\; \;u_1^2 (k-1)\; \;u_1 (k-1)y_1 (k-1) \notag\\ & \left. {\mbox{ }\; \;e_1 (k-1)} \right]. \end{align}$

    (50)

We can also rewrite the output $y_2 (k)$ of the interconnected nonlinear system $S_2 $ in the following compact form:

    $\begin{align} \label{eq51} y_2 (k)=\theta _2^{\rm T} (k)\psi _2 (k)+e_2 (k) \end{align}$

    (51)

    where the parameter vector $\theta _2 (k)$ and the observation vector $\psi _2 (k)$ are described by

    $ \begin{align} \label{eq52} \theta _2^{\rm T} & (k)=\left[{a_{2, 1} (k)\, \, f_{22, 11} (k)\, \, b_{2, 1} (k)\, \, a_{21, 1} (k)\, \, } \right. \notag\\ & \left. {\mbox{ }f_{21, 11} (k)\, \, g_{22, 11} (k)\, \, h_{22, 11} (k)\, \, c_{2, 1} } \right] \end{align}$

    (52)

    and

    $\begin{align} \label{eq53} \psi _2^{\rm T} & (k)=\left[{-y_2 (k-1)\; \;-y_2^2 (k-1)\; \;u_2 (k-1)\; \;y_1 (k-1)} \right. \notag\\ & \mbox{ }\; \;y_2 (k-1)y_1 (k-1)\; \;u_2^2 (k-1)\; \;u_2 (k-1)y_2 (k-1) \notag\\ & \left. {\mbox{ }\; \;e_2 (k-1)} \right]. \end{align}$

    (53)

    The output $y_3 (k)$ of the interconnected nonlinear system $S_3 $ can be rewritten as follows:

    $\begin{align} \label{eq54} y_3 (k)=\theta _3^{\rm T} (k)\psi _3 (k)+e_3 (k) \end{align}$

    (54)

    where $\theta _3 (k)$ represents the parameter vector and $\psi _3 (k)$ indicates the observation vector, which are given by

    $\begin{align} \label{eq55} \theta _3^{\rm T} & (k)=\left[{a_{3, 1} (k)\, \, f_{33, 11} (k)\, \, b_{3, 1} (k)\, \, a_{32, 1} (k)\, \, } \right. \notag\\ & \left. {\mbox{ }f_{32, 11} (k)\, \, g_{33, 11} (k)\, \, h_{33, 11} (k)\, \, c_{3, 1} } \right] \end{align}$

    (55)

    and

    $\begin{align} \label{eq56} \psi _3^{\rm T} & (k)=\left[{-y_3 (k-1)\; \;-y_3^2 (k-1)\; \;u_3 (k-1)\; \;y_2 (k-1)} \right. \notag\\ & \mbox{ }\; \;y_3 (k-1)y_2 (k-1)\; \;u_3^2 (k-1)\; \;u_3 (k-1)y_3 (k-1) \notag\\ & \left. {\mbox{ }\; \;e_3 (k-1)} \right]. \end{align}$

    (56)

The objective is to estimate the parameter vectors $\theta _1 (k)$ , $\theta _2 (k)$ and $\theta _3 (k)$ using the RELS algorithm (30). The data used in this numerical simulation for the practical implementation of the RELS parametric estimation algorithm are given hereafter:

1) The parameter values of the INARMAX mathematical model (45) are selected as follows: $a_{1, 1} (k)=-0.58+0.02\, {\rm sin}(0.3k)$ , $b_{1, 1} (k)=0.3+0.02\, {\rm cos}(0.3k)$ , $a_{13, 1} (k)=-0.5+0.02\, {\rm cos}(0.3k)$ , $f_{11, 11} (k)=0.2+0.02\, {\rm sin}(0.3k)$ , $f_{13, 11} (k)=0.25+0.02\, {\rm sin}(0.3k)$ , $g_{11, 11} (k)=0.2+0.02\, {\rm cos}(0.3k)$ , $h_{11, 11} (k)=0.22+0.02\, {\rm cos}(0.3k)$ , $c_{1, 1}=0.2$ ;

2) The parameter values of the INARMAX mathematical model (46) are selected as follows: $a_{2, 1} (k)=0.4+0.02\, {\rm sin}(0.3k)$ , $b_{2, 1} (k)=0.28+0.02\, {\rm cos}(0.3k)$ , $a_{21, 1} (k)=-0.65+0.02\, {\rm sin}(0.3k)$ , $f_{22, 11} (k)=0.15+0.02\, {\rm sin}(0.3k)$ , $f_{21, 11} (k)=0.25+0.02\, {\rm sin}(0.3k)$ , $g_{22, 11} (k)=0.16+0.02\, {\rm sin}(0.3k)$ , $h_{22, 11} (k)=0.1+0.02\, {\rm sin}(0.3k)$ , $c_{2, 1}=0.22$ ;

3) The parameter values of the INARMAX mathematical model (47) are selected as follows: $a_{3, 1} (k)=-0.55+0.02\, {\rm sin}(0.3k)$ , $b_{3, 1} (k)=0.3+0.02\, {\rm cos}(0.3k)$ , $a_{32, 1} (k)=-0.5+0.02\, {\rm sin}(0.3k)$ , $f_{33, 11} (k)=0.13+0.02\, {\rm sin}(0.3k)$ , $f_{32, 11} (k)=0.3+0.02\, {\rm sin}(0.3k)$ , $g_{33, 11} (k)=0.12+0.02\, {\rm sin}(0.3k)$ , $h_{33, 11} (k)=0.1+0.02\, {\rm sin}(0.3k)$ , $c_{3, 1}=0.14$ ;

    4) The input $u_i (k)$ , $i=1, 2, 3$ , applied to the interconnected nonlinear system $S_{i}$ is a pseudo-random binary sequence with levels $-2$ and $+2$;

    5) The initial values of the three parameter vectors $\hat {\theta }_1 (0)$ , $\hat {\theta }_2 (0)$ and $\hat {\theta }_3 (0)$ , and the three adaptation gain matrices $P_1 (0)$ , $P_2 (0)$ and $P_3 (0)$ are chosen such that $\hat {\theta }_i (0)=0$ and $P_i (0)=1000I$ , where $I$ is the identity matrix, $i=1, 2, 3$ ;

    6) The forgetting factors $\lambda _1 (k)$ , $\lambda _2 (k)$ and $\lambda _3 (k)$ are chosen as: $\lambda _1 (k)=0.995$ , $\lambda _2 (k)=0.996$ and $\lambda _3 (k)=0.99$ ;

    7) The variances of the noise sequences $\left\{ {e_1 (k)} \right\}$ , $\left\{ {e_2 (k)} \right\}$ and $\left\{ {e_3 (k)} \right\}$ are $\sigma _1^2=0.0991$ , $\sigma _2^2=0.0923$ and $\sigma _3^2=0.0873$ ;

    8) The number of measurements $M_k $ is chosen as: $M_k=1, \cdots, 600$ .
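Items 1)-8) above can be sketched in code. The update below is the standard recursive extended least squares recursion with a forgetting factor; the exact form of algorithm (30) is given earlier in the paper, so this sketch should be read as an assumed illustrative form rather than a transcription of it:

```python
import numpy as np

def rels_step(theta, P, psi, y, lam):
    """One standard RELS/RLS recursion with forgetting factor lam:
    prediction error, adaptation gain, parameter and gain-matrix updates."""
    eps = y - psi @ theta                     # a priori prediction error
    denom = lam + psi @ P @ psi
    K = (P @ psi) / denom                     # adaptation gain vector
    theta = theta + K * eps                   # parameter update
    P = (P - np.outer(K, psi @ P)) / lam      # gain matrix update
    return theta, P, eps

# Initialization as in items 5) and 6): theta_i(0) = 0, P_i(0) = 1000 I
n = 8                       # regressor dimension, cf. (56)
theta = np.zeros(n)
P = 1000.0 * np.eye(n)
lam = 0.995                 # forgetting factor lambda_1

# Item 4): pseudo-random binary input with levels -2 and +2
rng = np.random.default_rng(0)
u = rng.choice([-2.0, 2.0], size=600)   # M_k = 1, ..., 600
```

At each instant $k$ the regressor `psi` would be rebuilt from the latest measurements and the previous prediction errors before calling `rels_step`.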

    The global quality of the estimates obtained in the numerical simulation can be assessed by computing the following parametric distance $D_i \left(k \right)$ :

    $\begin{align} \label{eq57} D_i \left(k \right) = & \left[\left[{\frac{a_{i, 1} (k)-\hat {a}_{i, 1} (k)}{a_{i, 1} (k)}} \right]^2+\left[{\frac{b_{i, 1} (k)-\hat {b}_{i, 1} (k)}{b_{i, 1} (k)}} \right]^2+\right.\notag\\ & \left[{\frac{c_{i, 1} (k)-\hat {c}_{i, 1} (k)}{c_{i, 1} (k)}} \right]^2+\sum\limits_{j=1, j\ne i}^N {\left[{\frac{a_{ij, 1} (k)-\hat {a}_{ij, 1} (k)}{a_{ij, 1} (k)}} \right]^2} +\notag\\ & \sum\limits_{j=1}^N {\left[{\frac{f_{ij, 11} (k)-\hat {f}_{ij, 11} (k)}{f_{ij, 11} (k)}} \right]^2} +\sum\limits_{j=1}^N {\left[ {\frac{g_{ij, 11} (k)-\hat {g}_{ij, 11} (k)}{g_{ij, 11} (k)}} \right]^2} +\notag\\ & \left. {\sum\limits_{j=1}^N {\left[{\frac{h_{ij, 11} (k)-\hat {h}_{ij, 11} (k)}{h_{ij, 11} (k)}} \right]^2} } \right]^{0.5} \end{align}$

    (57)

    with $i, j=1, 2, 3$ , $j\ne i$ .
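The parametric distance (57) is the square root of a sum of squared relative parameter errors. A minimal sketch, assuming the true and estimated parameter values at a given instant $k$ are stored in dictionaries keyed by parameter name (the names and values below are illustrative, not taken from the tables):

```python
import numpy as np

def parametric_distance(true_params, est_params):
    """Parametric distance D_i(k) of (57): root of the sum of squared
    relative errors over all parameters of subsystem S_i."""
    total = 0.0
    for name, true_val in true_params.items():
        total += ((true_val - est_params[name]) / true_val) ** 2
    return np.sqrt(total)

# Illustrative snapshot of a few subsystem S1 parameters
true_p = {"a11": -0.58, "b11": 0.30, "a131": -0.50, "f1111": 0.20}
est_p = {"a11": -0.57, "b11": 0.31, "a131": -0.49, "f1111": 0.21}
D1 = parametric_distance(true_p, est_p)
```

A perfect estimate gives $D_i(k)=0$, and each term contributes the squared relative error of one parameter, so small values indicate good tracking of the time-varying parameters.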

    Some results of this simulation example for the considered subsystem $S_1 $ are given below. The evolution curves of the estimated parameters $\hat {a}_{1, 1}(k)$ , $\hat {b}_{1, 1}(k)$ , $\hat {a}_{13, 1}(k)$ and $\hat {c}_{1, 1}(k)$ are given in Fig. 3. Fig. 4 represents the evolution curves of the estimated parameters $\hat {f}_{11, 11}(k)$ , $\hat {f}_{13, 11}(k)$ , $\hat {g}_{11, 11}(k)$ and $\hat {h}_{11, 11}(k)$ . Fig. 5 illustrates the prediction error $\varepsilon _1 (k)$ , the parametric distance $D_1 (k)$ and their overall variances $\sigma _{\varepsilon _1 }^2 (k)$ and $\sigma _{D_1 }^2 (k)$ .

    Figure 3.  Evolution curves of the estimated parameters $\hat {a}_{1, 1}(k)$ , $\hat {b}_{1, 1}(k)$ , $\hat {a}_{13, 1}(k)$ and $\hat {c}_{1, 1}(k)$

    Figure 4.  Evolution curves of the estimated parameters $\hat {f}_{11, 11}(k)$ , $\hat {f}_{13, 11}(k)$ , $\hat {g}_{11, 11}(k)$ and $\hat {h}_{11, 11}(k)$

    Figure 5.  Evolution curves of the prediction error $\varepsilon _1 (k)$ , the parametric distance $D_1 (k)$ and their overall variances $\sigma _{\varepsilon _1 }^2 (k)$ and $\sigma _{D_1 }^2 (k)$

    The following figures show some simulation results for the considered subsystem $S_2 $ . The evolution curves of the estimated parameters $\hat {a}_{2, 1}(k)$ , $\hat {b}_{2, 1}(k)$ , $\hat {a}_{21, 1}(k)$ and $\hat {c}_{2, 1}(k)$ are given in Fig. 6. Fig. 7 represents the evolution curves of the estimated parameters $\hat {f}_{22, 11}(k)$ , $\hat {f}_{21, 11}(k)$ , $\hat {g}_{22, 11}(k)$ and $\hat {h}_{22, 11}(k)$ . Fig. 8 represents the evolution curves of the prediction error $\varepsilon _2 (k)$ , the parametric distance $D_2 (k)$ and their overall variances $\sigma _{\varepsilon _2 }^2 (k)$ and $\sigma _{D_2 }^2 (k)$ .

    Figure 6.  Evolution curves of the estimated parameters $\hat {a}_{2, 1}(k)$ , $\hat {b}_{2, 1}(k)$ , $\hat {a}_{21, 1}(k)$ and $\hat {c}_{2, 1}(k)$

    Figure 7.  Evolution curves of the estimated parameters $\hat {f}_{22, 11}(k)$ , $\hat {f}_{21, 11}(k)$ , $\hat {g}_{22, 11}(k)$ and $\hat {h}_{22, 11}(k)$

    Figure 8.  Evolution curves of the prediction error $\varepsilon _2 (k)$ , the parametric distance $D_2 (k)$ and their overall variances $\sigma _{\varepsilon _2 }^2 (k)$ and $\sigma _{D_2 }^2 (k)$

    The obtained simulation results for the considered subsystem $S_3 $ are given in Figs. 9-11. Fig. 9 shows the evolution curves of the estimated parameters $\hat {a}_{3, 1}(k)$ , $\hat {b}_{3, 1}(k)$ , $\hat {a}_{32, 1}(k)$ and $\hat {c}_{3, 1}(k)$ . The evolution curves of the estimated parameters $\hat {f}_{33, 11}(k)$ , $\hat {f}_{32, 11}(k)$ , $\hat {g}_{33, 11}(k)$ and $\hat {h}_{33, 11}(k)$ are given in Fig. 10. Fig. 11 represents the evolution curves of the prediction error $\varepsilon _3 (k)$ , the parametric distance $D_3 (k)$ and their overall variances $\sigma _{\varepsilon _3 }^2 (k)$ and $\sigma _{D_3 }^2 (k)$ .

    Figure 9.  Evolution curves of the estimated parameters $\hat {a}_{3, 1}(k)$ , $\hat {b}_{3, 1}(k)$ , $\hat {a}_{32, 1}(k)$ and $\hat {c}_{3, 1}(k)$

    Figure 10.  Evolution curves of the estimated parameters $\hat {f}_{33, 11}(k)$ , $\hat {f}_{32, 11}(k)$ , $\hat {g}_{33, 11}(k)$ and $\hat {h}_{33, 11}(k)$

    Figure 11.  Evolution curves of the prediction error $\varepsilon _3 (k)$ , the parametric distance $D_3 (k)$ and their overall variances $\sigma _{\varepsilon _3 }^2 (k)$ and $\sigma _{D_3 }^2 (k)$

    The evolution curves shown in the preceding figures illustrate the quality of the estimation obtained by the RELS algorithm (30). It can be remarked that the parametric distances (statistical averages) take quite low values.

    The principal numerical simulation results are given in Tables 1, 2 and 3. Each table presents the statistical averages of the estimated parameters of each interconnected nonlinear system, as well as the statistical averages of the prediction error, the parametric distance and their overall variances. These values are computed for $k=401, \cdots, 600$ .

    where

    $\begin{align} \label{eq58} \bar {m}_{x(k)}=\frac{\sum\limits_{k=401}^{600} {x(k)} }{200} \end{align}$

    (58)

    and

    $\begin{align} \label{eq59} \sigma _x^2=\frac{\sum\limits_{k=401}^{600} {\left[ {x(k)-\bar {m}_{x(k)} } \right]^2} }{200}. \end{align}$

    (59)
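The statistics (58) and (59) can be computed directly over the window $k=401, \cdots, 600$. A minimal sketch, assuming the 1-indexed signal $x(k)$ is stored in a 0-indexed array:

```python
import numpy as np

def stats_over_window(x, start=401, end=600):
    """Statistical average (58) and variance (59) of a signal x(k)
    over the window k = start..end (inclusive, 1-indexed)."""
    w = np.asarray(x[start - 1:end])          # samples x(401), ..., x(600)
    m = w.sum() / len(w)                      # mean, eq. (58)
    var = ((w - m) ** 2).sum() / len(w)       # variance, eq. (59)
    return m, var
```

Applied to the recorded prediction errors and parametric distances, this yields the entries of Tables 1-3.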

    It can also be remarked that the prediction errors take relatively low values. Thus, we can affirm that the results obtained in this numerical simulation example are satisfactory and clearly demonstrate the good performance achieved by the recursive extended least squares algorithm RELS (30) in estimating the three interconnected nonlinear time-varying systems, which are described by the INARMAX mathematical models (45), (46) and (47), respectively.

  • In this paper, we detailed the description of nonlinear large-scale systems that can operate in deterministic or stochastic environments. We particularly focused on the class of large-scale time-varying systems that can be decomposed into several interconnected SISO nonlinear systems. The considered interconnected systems are described by discrete-time input-output mathematical models, which are SISO, nonlinear, with known structure and unknown time-varying parameters. We also discussed parametric estimation using the prediction error method and recursive least squares techniques in order to estimate the parameters of the considered systems.

    Furthermore, we have formulated the convergence properties of the recursive least squares (RLS) algorithm based on hyperstability and positivity methods. It has been shown that the convergence analysis for linear models, which describe the dynamics of interconnected linear systems, can be extended to INARMAX mathematical models with a linear noise model by applying the associated differential equation approach. For a nonlinear noise model, the situation is more complex; this will be investigated in a future study.

    We have tested the performance and efficiency of the recursive extended least squares algorithm RELS on a numerical simulation example. The obtained numerical simulation results show good tracking of the parametric variations of the interconnected nonlinear time-varying systems and good quality of the estimates, both ensured by the RELS algorithm.
