Volume 16 Number 3
June 2019
Article Contents
Cite as: Ya-Jun Li, Zhao-Wen Huang and Jing-Zhao Li. H∞ State Estimation for Stochastic Markovian Jumping Neural Network with Time-varying Delay and Leakage Delay. International Journal of Automation and Computing, vol. 16, no. 3, pp. 329-340, 2019. doi: 10.1007/s11633-016-0955-9

H∞ State Estimation for Stochastic Markovian Jumping Neural Network with Time-varying Delay and Leakage Delay

Author Biography:
  • Zhao-Wen Huang received the Ph. D. degree in microelectronics and solid state electronics from South China Normal University, China in 2013. Currently, he is a lecturer in the College of Electrical and Information Engineering, Shunde Polytechnic, China.
    His research interests include the stability analysis of electronic systems and new energy.
    E-mail: 382799732@qq.com

    Jing-Zhao Li received the M. Sc. degree in optoelectronic engineering from Jinan University, China in 2009. Currently, he is a lecturer in the College of Electrical and Information Engineering, Shunde Polytechnic, China.
    His research interests include laser stability analysis and nonlinear optics.
    E-mail: ljzhemail@qq.com

  • Corresponding author: Ya-Jun Li received the Ph. D. degree in control theory and control engineering from South China University of Technology, China in 2011. Currently, he is an associate professor in the College of Electrical and Information Engineering, Shunde Polytechnic, China.
    His research interests include stability analysis, filtering of stochastic time-delay systems and neural network systems.
    E-mail: lyjflrst@163.com
    ORCID iD: 0000-0001-7472-9185
  • Received: 2015-01-14
  • Accepted: 2015-05-11
  • Published Online: 2016-06-20



Abstract: The H∞ state estimation problem for a class of stochastic neural networks with Markovian jumping parameters and leakage delay is investigated in this paper. By employing a suitable Lyapunov functional and inequality techniques, sufficient conditions for the exponential stability, as well as a prescribed H∞ norm level, of the state estimation error system are proposed and verified, and all results are expressed in terms of strict linear matrix inequalities (LMIs). Examples and simulations are presented to show the effectiveness of the proposed methods; at the same time, the effects of the leakage delay on the stability of the neural network system and on the attenuation level of the state estimator are discussed.

  • Due to the effects of measurement errors, transmission and transport lags, time delays exist extensively in various dynamical systems such as nuclear reactor systems, chemical process systems, rolling mills and turbojet engine systems, and they are often regarded as the main source of chaos, divergence, instability and poor performance[1-3]. At the same time, great attention has been devoted to the study of neural network systems because of their great potential for application in pattern classification, reconstruction of moving images and combinatorial optimization. Many researchers have focused on the stability analysis, passivity analysis and state estimation of neural networks with different types of time delays, such as discrete delays and distributed delays, in both delay-dependent and delay-independent settings, and many exciting results have been reported in the past years[4-12].

    Recently, a special delay called leakage (or forgetting) delay has been investigated widely since its existence in real systems was discovered. As pointed out in [13], neural networks with leakage delay are a class of special network systems: the time delay in the leakage term has a big impact on the dynamics of neural networks, tending to cause instability, oscillation and even periodicity, and sometimes the leakage delay has a more significant effect on the dynamics than other kinds of delays. Many exciting research results on the existence, stability and state estimator design of systems with leakage delay have been reported very recently; see [14-19] for details. In [14], by means of mapping theorems and topological degree theory, the existence, uniqueness and global asymptotic stability of impulsive neural networks with leakage delay were studied. By using the Lyapunov functional method, the linear matrix inequality approach and a general convex combination technique, a class of mixed recurrent neural networks with time delay in the leakage term under impulsive perturbations was investigated in [15]; together with the free-weighting matrix technique, the state estimation for a class of bidirectional associative memory (BAM) neural networks with a constant delay in the leakage term was considered in [16]. In [17], by using the properties of M-matrices, the properties of the fuzzy logic operator, the spectral radius of nonnegative matrices and a delay differential inequality, a class of fuzzy cellular neural networks with time delay in the leakage term and impulsive perturbations was investigated. The leakage delays studied above were considered constant, so the research has been extended to the time-varying case[18-25]. For example, in [18], based on the Lyapunov method, triple Lyapunov-Krasovskii functional terms were employed to study the robust stability of discrete-time uncertain neural networks with time-varying leakage delay.

    As we know, stochastic models play a very important role in many branches of economics, industry and science, and a particular research branch is the stochastic system with Markovian jumping parameters. Markovian jumping systems are regarded as a special class of hybrid systems, which combine a finite-state Markovian jump process with continuous dynamics represented by differential equations. It has been shown that the state of a neural network can jump from one mode to another according to a Markovian chain when the structure of the network is subject to abrupt changes, such as component failures or repairs, or sudden external environmental disturbances. Because Markovian jumping systems have this advantage in modeling such dynamic systems, much progress has been made on the stability analysis, impulsive response and state estimation of stochastic neural networks with Markovian jumping parameters[26-30].

    On the other hand, it is often hard or even impossible to directly obtain the complete neuron states in practical neural network systems, so it is sometimes necessary to estimate the neuron states from the available measurements so that the neural network can be made full use of in practice. Recently, the state estimation of neural networks has been a hot topic and many important results have been reported[31-39]. For instance, a state estimator was designed for bidirectional associative memory neural networks with leakage delays in [39]. Most recently, the problem of $H_{\infty}$ state estimation has also been investigated for static neural networks with time-varying delays[40, 41]. But the systems studied in [40, 41] are deterministic neural networks only; stochastic perturbations and leakage delay were not taken into account. Up to now, to the best of our knowledge, research results on $H_{\infty}$ state estimation for stochastic neural networks with leakage delays have not been reported, which motivates the present study.

    In this paper, the $H_{\infty}$ state estimation problem for a class of stochastic neural networks with time-varying discrete delay and leakage delay is investigated. By employing a suitable Lyapunov functional and introducing a new inequality technique, a mode-dependent linear state estimator is designed such that the error system is not only exponentially stable but also satisfies a prescribed $H_{\infty}$ norm level; the sufficient conditions that render the error system exponentially stable are given and proved by solving a set of strict linear matrix inequalities. Finally, examples and simulations are presented to show the effectiveness of the proposed methods, and the mutual effects among the discrete delay, the leakage delay, the delay derivative and the attenuation level are discussed. The experimental analysis reveals that the effect of leakage delay on the stability of neural networks cannot be neglected.

    The rest of this paper is organized as follows: The $H_{\infty}$ state estimation problem for delayed neural networks with Markovian jump parameters and mixed delays is formulated in Section 2. Section 3 is dedicated to presenting delay-dependent criteria ensuring the existence of the state estimator. Two examples and simulations are provided to illustrate the effectiveness and performance of the proposed approaches in Section 4, together with some discussions and comparisons. Finally, we draw some conclusions in Section 5.

    Notation. Throughout this paper, if not explicitly stated, matrices are assumed to have compatible dimensions. The notation $ M>( \geq, <, \leq)\, 0 $ means that the symmetric matrix $M$ is positive-definite (positive-semidefinite, negative-definite, negative-semidefinite). $\lambda_{\rm min}(\cdot)$ and $\lambda_{\rm max}(\cdot)$ denote the minimum and maximum eigenvalues of the corresponding matrix; the superscript "T" stands for the transpose of a matrix; ${\rm E}\{\cdot\}$ is the expectation operator; the shorthand ${\rm diag}\{\cdots\}$ denotes a block diagonal matrix; $\|\cdot\|$ represents the Euclidean norm of a vector or the spectral norm of a matrix. $ I $ refers to an identity matrix of appropriate dimensions, and $\ast$ denotes the symmetric terms in a matrix. Sometimes, the arguments of a function will be omitted in the analysis when no confusion can arise.

  • Let $ r_{t}, ~t\geq0 $ be a right-continuous Markov chain defined on a complete probability space $(\Omega, \mathcal{ F} , P)$ and taking discrete values in a finite state space $S = \{ 1, 2, \cdots, N \}$ with generator $ \Pi=(\pi_{ij})_{N \times N}$ given by

    $ \begin{align} P\{r(t+\Delta)=j|r(t)=i\}= \left\{\begin{aligned} &\pi_{ij}\Delta+o(\Delta) , ~~i\neq j \\ &1+ \pi_{ij}\Delta+o(\Delta) , ~~ i= j \end{aligned}\right. \end{align} $

    (1)

    where $\Delta>0$, $\lim_{\Delta\to0}o(\Delta)/\Delta=0$, and $\pi_{ij}\geq0$ ($i\neq j$) is the transition rate from mode $i$ to mode $j$, while $\pi_{ii}=-\sum_{j\neq i}\pi_{ij}$.
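    For intuition, mode switching of this kind is easy to sample: the holding time in mode $i$ is exponential with rate $-\pi_{ii}$, and a jump lands on $j\neq i$ with probability $\pi_{ij}/(-\pi_{ii})$. The following minimal Python sketch (not from the paper; the generator shown is the one used later in Example 1, with modes indexed from 0) illustrates this.

    import numpy as np

    def simulate_markov_chain(Pi, r0, T, seed=0):
        """Sample a right-continuous chain r(t) on [0, T] with generator Pi.
        Returns the jump times and the mode held from each jump time onward."""
        rng = np.random.default_rng(seed)
        t, r = 0.0, r0
        times, modes = [t], [r]
        while True:
            rate = -Pi[r, r]                  # total exit rate of mode r
            if rate <= 0:                     # absorbing mode: no more jumps
                break
            t += rng.exponential(1.0 / rate)  # holding time ~ Exp(rate)
            if t >= T:
                break
            probs = np.maximum(Pi[r], 0.0)
            probs[r] = 0.0
            r = rng.choice(len(probs), p=probs / rate)  # jump distribution
            times.append(t); modes.append(r)
        return times, modes

    # Two-mode generator used later in Example 1
    Pi = np.array([[-0.3, 0.3], [0.7, -0.7]])
    print(simulate_markov_chain(Pi, 0, 10.0))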

    Let us consider the following neural networks with time delay and subject to noise disturbances:

    $ \begin{align} \begin{cases} \begin{aligned} &{\rm d}x (t)=[-A(r_{t})x(t-\delta(t))+W_{0}(r_{t})f(x(t)) +\\ &\quad W_{1}(r_{t})f(x(t-\tau(t)))+B_{1}(r_{t})v(t)]{\rm d}t+\\ &\quad [C(r_{t})x(t )+D(r_{t})x(t-\tau(t))+W_{2}(r_{t})f(x(t))+\\ &\quad W_{3}(r_{t})f(x(t-\tau(t)))]{\rm d}\omega(t)\\ &y(t)=E(r_{t})x(t-\delta(t))+\phi(x(t))+B_{2}(r_{t})v(t)\\ &z(t)=H(r_{t})x(t)\\ &x(t)=\zeta(t), \quad \forall t\in[-\rho, 0] \end{aligned} \end{cases} \end{align} $

    (2)

    where $x(t)=[ x_{1}(t) , x_{2}(t), \cdots , x_{n}(t)]^{\rm T}\in {\bf R}^{n}$ is the state vector of the neural network associated with $n$ neurons, $y(t)\in {\bf R}^{m}$ is the network measurement, $z(t)\in {\bf R} ^{p}$ is the signal to be estimated, which is a linear combination of the states, and $v (t)\in {\bf R} ^{q}$ is the noise input belonging to $L_{2}[0, \infty)$. $\omega(t)$ is an $m$-dimensional Brownian motion defined on a probability space $(\Omega, \mathcal{F}, P)$, which is assumed to satisfy ${\rm E}\{{\rm d}\omega(t)\}=0$, ${\rm E}\{{\rm d}\omega^{2}(t)\}={\rm d}t$. $A(r_{t}) ={\rm diag} \{a_{1}(r_{t}), a_{2}(r_{t}), \cdots, a_{n}(r_{t})\}$ is a diagonal matrix with positive entries $ a_{i}(r_{t})>0~(i=1, 2, \cdots, n)$. $f(x(t))=[f( x_{1}(t)), f(x_{2}(t)), \cdots , f(x_{n}(t))]^{\rm T}\in {\bf R}^{n}$ denotes the neuron activation function, and $\phi(x(t))$ is a nonlinear disturbance, which is bounded and satisfies a Lipschitz condition. $W_{0}(r_{t})$, $W_{1}(r_{t})$ and $W_{2}(r_{t})$, $W_{3}(r_{t})$ are the connection weight matrices and the delayed connection weight matrices, respectively. $B_{1}(r_{t}), ~B_{2}(r_{t}), ~C(r_{t}) $, $E(r_{t})$, $D(r_{t})$ and $H(r_{t})$ are known real constant matrices with compatible dimensions, and $\rho={\rm max}(\tau, \delta)$. The delays $\delta(t)$ and $\tau (t )$ denote the leakage delay and the transmission delay, respectively, and satisfy:

    $ \begin{align} 0\leq\dot{\delta}(t)\leq\rho_{\sigma}, ~~\delta (t)\leq\delta, ~~0\leq\tau(t)\leq\tau, ~~\dot{\tau}(t)\leq\mu<1 \end{align} $

    (3)

    where $\rho_{\sigma}, \delta $ and $\mu$ are positive scalars, and $\zeta(t)$ is a real-valued continuous initial condition on $[- \rho, 0 ]$. For simplicity, in the sequel, for each $r_{t} = i \in S $, $A(r_{t})$, $W_{0}(r_{t})$, $W_{1}(r_{t})$ are denoted by $A_{i}$, $W_{0i}$, $W_{1i}$, and so on. Throughout the paper, we assume that $\omega(t)$ and $r(t)$ are independent.

    In order to estimate the state of system (2), we construct the following full-order state estimator:

    $ \begin{align} \begin{cases} \begin{aligned} {\rm d} \hat{x} (t)&=[-A_{i}\hat{x}(t-\delta(t))+W_{0i}f(\hat{x}(t)) +\\ &\quad W_{1i} f(\hat{x}(t-\tau(t)))+\\ &\quad K_{i}(y(t)-E_{i}\hat{x}(t-\delta(t))-\phi(t, \hat{x}(t)))]{\rm d}t+\\ &\quad [C_{i}\hat{x}(t )+D_{i} \hat{x}(t-\tau(t))+W_{2i}f(\hat{x}(t))+\\ &\quad W_{3i}f(\hat{x}(t-\tau(t)))]{\rm d}\omega(t)\\ &\hat{z}(t)=H_{i}\hat{x}(t)\\ &\hat{x}(t)=\zeta(t), \quad \forall t\in[-\rho, 0] \end{aligned} \end{cases} \end{align} $

    (4)

    where $\hat{x}(t) \in {\bf R}^{n}$ is the estimated state, $\hat{z}(t)\in {\bf R} ^{p}$ is the estimated output, and $K_{i}\in {\bf R}^{n\times m}$ is the state estimator gain matrix to be determined.

    Assumption 1. (H1) For $ i\in\{1, 2 , \cdots, n\}$, $ \forall x, y \in {\bf R}, ~~ x\neq y, $ the neuron activation functions $f_{i}(\cdot)$ are continuous, bounded and satisfy:

    $ \begin{align} \mid f_{i}(x)-f_{i}(y)\mid\leq l_{i}\mid x-y \mid \end{align} $

    (5)

    with $L= {\rm diag} \{ l_{1}, l_{2}, \cdots, l_{n}\}$.

    (H2) The nonlinear disturbance $\phi(t, x(t))$ is bounded and satisfies the following Lipschitz condition:

    $ \begin{align} \mid \phi_{i}(x)-\phi_{i}(y)\mid\leq G_{i}\mid x-y \mid, ~~~ \forall x, y\in {\bf R}, ~~~ x\neq y \end{align} $

    (6)

    where $G={\rm diag} \{ G_{1}, G_{2}, \cdots, G_{n}\}$ is a known constant diagonal matrix.

    Lemma 1[29]. For any constant symmetric positive definite matrix $J\in {\bf R}^{m\times m}$, scalar $\eta>0$ and vector function $\nu: [0, \eta]\rightarrow {\bf R}^{m}$ such that the integrals below are well defined, the following inequality holds:

    $ \begin{align} \eta\int_{0}^{\eta}\nu^{\rm T}(s)J\nu(s)\textrm{d}s\geq(\int_{0}^{\eta}\nu(s)\textrm{d}s)^{\rm T} J(\int_{0}^{\eta}\nu(s)\textrm{d}s).\nonumber \end{align} $
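    As a quick numerical illustration (a sketch, not part of the paper), Lemma 1 can be checked on a grid for an arbitrary $\nu(\cdot)$:

    import numpy as np

    eta, N = 2.0, 4000
    s = np.linspace(0.0, eta, N, endpoint=False)
    ds = eta / N
    J = np.diag([1.0, 2.0, 3.0])                     # J = J^T > 0
    nu = np.stack([np.sin(s), np.cos(2 * s), s]).T   # arbitrary nu: [0, eta] -> R^3
    lhs = eta * (np.einsum('ti,ij,tj->t', nu, J, nu) * ds).sum()
    v = nu.sum(axis=0) * ds                          # Riemann approximation of int nu ds
    print(lhs >= v @ J @ v)                          # True: the Jensen-type bound holds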

    Lemma 2[29]. For given constant matrices $\Phi_{1}, \Phi_{2}$ and $\Phi_{3}$ with appropriate dimensions, where $\Phi_{1}^{\rm T}=\Phi_{1}$ and $\Phi_{2}^{\rm T}=\Phi_{2}>0$, we have $\Phi_{1}+\Phi_{3}^{\rm T}\Phi_{2}^{-1}\Phi_{3}<0$ if and only if the following linear matrix inequalities hold:

    $ \begin{align*} \left[ \begin{array}{cc} \Phi_{1} & \Phi_{3}^{\rm T} \\ \ast & -\Phi_{2} \\ \end{array} \right]<0, \quad {\rm or} \left[ \begin{array}{cc} -\Phi_{2} & \Phi_{3} \\ \ast & \Phi_{1} \\ \end{array} \right]<0. \end{align*} $
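    Likewise, the Schur-complement equivalence of Lemma 2 can be spot-checked numerically (a sketch with random data, not from the paper):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    Phi2 = 2.0 * np.eye(n)                                   # Phi2 = Phi2^T > 0
    Phi3 = rng.standard_normal((n, n))
    Phi1 = -Phi3.T @ np.linalg.inv(Phi2) @ Phi3 - np.eye(n)  # so Phi1 + Phi3^T Phi2^{-1} Phi3 = -I

    lhs = Phi1 + Phi3.T @ np.linalg.inv(Phi2) @ Phi3
    schur = np.block([[Phi1, Phi3.T], [Phi3, -Phi2]])
    print(np.linalg.eigvalsh(lhs).max() < 0,
          np.linalg.eigvalsh(schur).max() < 0)               # True True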

    Lemma 3[42]. Let the vector functions $x(t), ~ \varphi(t), ~g(t)$ satisfy the stochastic differential equation

    $ \begin{align} {\rm d}x(t)=\varphi(t){\rm d}t+ g(t){\rm d}\omega(t) \end{align} $

    (7)

    where $\omega(t)$ is a Brownian motion. For any constant matrix $Z\geq 0 $ and scalar $h>0$, the following inequality holds

    $ \begin{align} \begin{aligned} &-h\int^{t}_{t-h}\varphi^{\rm T}(s)Z\varphi(s){\rm d}s\leq\\ &\qquad \left[ \begin{array}{c} x(t) \\ x(t-h)\\ \end{array} \right]^{\rm T}\left[ \begin{array}{cc} -Z & Z \\ Z & -Z \\ \end{array} \right]\left[ \begin{array}{c} x(t) \\ x(t-h)\\ \end{array} \right]+\\ &\qquad 2\left[ \begin{array}{c} x(t) \\ x(t-h)\\ \end{array} \right]^{\rm T}\left[ \begin{array}{c} Z \\ -Z\\ \end{array} \right] \int^{t}_{t-h}g(s){\rm d}\omega(s). \end{aligned} \end{align} $

    (8)
  • In this section, we first consider the exponential mean-square stability of the filtering error system with $v(t)=0 $. In this case, system (2) becomes:

    $ \begin{align} \begin{cases} \begin{aligned} &{\rm d} x (t)=[-A_{i}x(t-\delta(t))+W_{0i}f(x(t)) +\\ &\quad W_{1i} f(x(t-\tau(t)))]{\rm d}t+\\ &\quad [C_{i}x(t )+D_{i}x(t-\tau(t))+W_{2i}f(x(t))+\\ &\quad W_{3i}f(x(t-\tau(t)))]{\rm d}\omega(t)\\ &z(t)=H_{i}x(t)\\ &x(t)=\zeta(t), ~~~\forall t\in[-\rho, 0]. \end{aligned} \end{cases} \end{align} $

    (9)

    Define the state estimation error $ e(t)= x(t) - \hat{x}(t)$ and the output error $\tilde{z}(t) = z(t)- \hat{z}(t)$; then we obtain

    $ \begin{align} \begin{aligned} \begin{cases} \!\!\!\!&\textrm{d}e(t)=[-(A_{i}+K_{i}E_{i}) e(t-\delta(t))+W_{0i} \sigma_{1}(t)+\\ \!\!\!\!&\quad W_{1i} \sigma_{1}(t-\tau(t)) -K_{i} \sigma_{2}(t)]{\rm d}t+\\ \!\!\!\!&\quad [C_{i}e(t)+D_{i} e(t-\tau(t))+W_{2i} \sigma_{1}(t)+\\ \!\!\!\!&\quad W_{3i} \sigma_{1}(t-\tau(t)) ]\textrm{d}\omega(t)\\ \!\!\!\!&\tilde{z}(t)=H_{i}e(t) \end{cases} \end{aligned} \end{align} $

    (10)

    where

    $ \sigma_{1}(t)=f(x(t))-f(\hat{x}(t)) $

    (11)

    $ \sigma_{2}(t)=\phi(t, x(t))-\phi(t, \hat{x}(t)). $

    (12)

    The $H_\infty$ state estimator problem considered in this paper is now formulated as follows. Given a prescribed level of noise attenuation $\gamma>0$, design a suitable state estimator such that the error system (10) has an $H_\infty$ performance $\gamma$ satisfying:

    1) the error system (10) is exponentially stable in the mean square sense, that is to say, there exist positive scalars $\alpha$, $\beta$ such that

    $ \begin{align} {\rm E}\{\|e(t, \psi)\|^{2}\}\leq \alpha {\rm e}^{-\beta t}{\rm E}\{\sup\limits_{-\tau\leq s\leq0}\|\psi(s)\|^{2}\}. \end{align} $

    (13)

    2) The $H_\infty$ performance ${\rm E}\{\parallel \tilde{z}(t)\parallel_{2}\}< \gamma\parallel v(t)\parallel_{2} $ is guaranteed under zero-initial conditions for all nonzero $v(t)\in L_{2}[0, \infty)$, where $\parallel \tilde{z}(t)\parallel_{2}=\sqrt{\int^{\infty}_{0}\tilde{z}^{\rm T}(t)\tilde{z}(t){\rm d}t}$, and $\parallel v(t)\parallel_{2}=\sqrt{\int^{\infty}_{0}v^{\rm T}(t)v(t){\rm d}t}$.

    First, we show that the error system (10) is exponentially mean-square stable, which yields the following Theorem 1.

    Theorem 1. For given scalars $\delta>0$, $\tau>0$, $\rho_{\sigma}>0$ and $\mu>0$, the estimator error system (10) is exponentially mean-square stable, if there exist matrices $P_{i}>0$, $Q>0$, $R_{j}>0 \ (j=1, ~2, ~3, ~4 )$, matrix $T_{i} $ and scalars $\lambda_{1i}>0 $, $\lambda_{2i}>0$, $\lambda_{3i} >0 (i\in S)$ such that the following LMI holds

    $ \begin{align} \Pi=\left[ \begin{array}{cccc} \Pi^{11} & \Pi^{12}& \Pi^{13}&\Pi^{14}\\ \ast & -P_{i} & 0 & 0\\ \ast & \ast & -2P_{i}+R_{1} & 0\\ \ast & \ast & \ast & -Q\\ \end{array} \right]<0 \end{align} $

    (14)

    where

    $ \begin{align*} &\Pi^{11}=\small{\left[ \begin{array}{ccccccc} \Pi_{11} & \Pi_{12} & R_{1}& \Pi_{14}& P_{i}W_{0i} & P_{i}W_{1i} & -T_{i} \\ \ast & \Pi_{22} & 0 & \Pi_{24} & 0 & 0 & 0 \\ \ast& \ast & \Pi_{33} & 0 & 0 & 0 & 0 \\ \ast & \ast & \ast& \Pi_{44} & \Pi_{45} &\Pi_{46} & A_{i}^{\rm T}T_{i} \\ \ast & \ast & \ast & \ast & -\lambda_{1i} & 0 & 0 \\ \ast & \ast& \ast & \ast & \ast & -\lambda_{2i}& 0 \\ \ast & \ast & \ast & \ast & \ast & \ast& -\lambda_{3i} \end{array} \right]\small}\nonumber\\ &\Pi^{12}=[P_{i}C_{i}~~0~~P_{i}D_{i}~~0~~P_{i}W_{2i}~~P_{i}W_{3i}~~0]^{\rm T}\\ &\Pi^{13}=[0~-\tau( R_{1}A_{i}+T_{i}E_{i})~0~0~\tau R_{1}W_{0i}~\tau R_{1}W_{1i}~~-\tau T_{i}]^{\rm T}\\ &P_{i}K_{i}=T_{i}\\ &\Pi^{14}= [\sqrt{\rho_{\sigma}}P_{i}A_{i} ~~0~~0~~~~0~~0~~~~0~~0~~ ]^{\rm T}\\ &\Pi_{11}=-P_{i}A_{i}-A^{\rm T}_{i}P_{i} +\sum\limits_{j=1}^{N}\pi_{ij}P_{j}+R_{3}+R_{4}-R_{1}+\\ &\qquad \lambda_{1i}L^{\rm T}L+ \lambda_{3i}G^{\rm T}G+\delta^{2}R_{2}\\ &\Pi_{12}=-T_{i}E_{i}+E^{\rm T}_{i}T^{\rm T}_{i}A_{i}\\ &\Pi_{14}=A_{i}^{\rm T}P_{i}A_{i}-\sum\limits_{j=1}^{N}\pi_{ij}P_{j}A_{i}\\ &\Pi_{22}=\rho_{\sigma}Q- (1- \rho_{\sigma})R_{4}\\ &\Pi_{33}=-(1-\mu)R_{3}-R_{1}+\lambda_{2i}L^{\rm T}L\\ &\Pi_{24}=\rho_{\sigma}A_{i}^{\rm T}P_{i}A_{i}+ E_{i}^{\rm T} T_{i}^{\rm T} A_{i}\\ &\Pi_{44}=-R_{2}+\sum\limits_{j=1}^{N}\pi_{ij}A^{\rm T}_{i}P_{j}A_{i}\\ &\Pi_{45}=-A_{i}^{\rm T}P_{i}W_{0i}\\ &\Pi_{46}=-A_{i}^{\rm T}P_{i}W_{1i}.\\ \end{align*} $

    Proof. For convenience, denote

    $ \begin{align*} &\varphi(t)=-(A_{i}+K_{i}E_{i} )e(t-\delta(t))+W_{0i} \sigma_{1}(t)+\\ &\quad \qquad W_{1i} \sigma_{1}(t-\tau(t)) -K_{i} \sigma_{2}(t)\\ &g(t)=C_{i}e(t)\!+\!D_{i} e(t-\tau(t))\!+\!W_{2i} \sigma_{1}(t)\! +\!W_{3i} \sigma_{1}(t-\tau(t)).\end{align*} $

    Then, the system (10) can be rewritten as

    $ \begin{align} \textrm{d}e(t) =\varphi(t)\textrm{d}t+g(t)\textrm{d}\omega(t). \end{align} $

    (15)

    Choose the following Lyapunov-Krasovskii functional candidate as

    $ \begin{align} V(e(t), ~t, ~i)=\sum\limits_{k=1}^{5}V_{k}(e(t), ~t, ~i) \end{align} $

    (16)

    where

    $ \begin{align*} &V_1(e(t), ~t, ~i)=[e(t)-A_{i}\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]^{\rm T}\times\\ &\qquad P_{i}[e(t)-A_{i}\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]\\ &V_2(e(t), ~t, ~i)=\tau\int^{0}_{-\tau}\int^{t}_{t+\theta}\varphi^{\rm T}(s)R_{1}\varphi(s)\textrm{d}s\textrm{d}\theta\\ &V_3(e(t), ~t, ~i)=\delta\int^{0}_{-\delta(t)}\int^{t}_{t+\theta}e^{\rm T}(s)R_{2}e(s)\textrm{d}s\textrm{d}\theta \\ &V_4(e(t), ~t, ~i)= \int^{t}_{t- \tau(t)}e^{\rm T}(s) R_{3}e(s)\textrm{d}s\\ &V_5(e(t), ~t, ~i)= \int^{t}_{t- \delta(t)}e^{\rm T}(s) R_{4}e(s)\textrm{d}s. \end{align*} $

    By Itô's differential formula, the stochastic differential of $V(e(t), ~t, ~i)$ along the error system (10) is given by

    $ \begin{align} \textrm{d}V(e(t), t, i)=\sum^{5}_{k=1} \mathcal{L}V_{k}(e(t), ~t, ~i){\rm d}t+2e^{\rm T}(t) P_{i}g(t)\textrm{d}\omega(t) \end{align} $

    (17)

    where

    $ \begin{align} &\mathcal{L}V_{1}(e(t), ~t, ~i)=\notag\\ &\quad 2[e(t)-A_{i}\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]^{\rm T}P_{i}\overline{\varphi}(t)+\notag\\ &\quad \sum\limits_{j=1}^{N}\pi_{ij}[e(t)-A_{i}\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]^{\rm T}\times\notag\\ &\quad P_{j}[e(t)-A_{i}\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]+g^{\rm T}(t)P_{i}g(t) \leq\notag\\ &\quad -2 e^{\rm T}(t )P_{i}A_{i}e(t) -2 e^{\rm T}(t )P_{i}K_{i}E_{i}e(t-\delta(t))+\notag\\ &\quad 2 e^{\rm T}(t )P_{i}W_{0i} \sigma_{1}(t)+2 e^{\rm T}(t )P_{i}W_{1i} \sigma_{1}(t-\tau(t))-\notag\\ &\quad 2 e^{\rm T}(t )P_{i}K_{i} \sigma_{2}(t)+2 [\int^{t}_{t-\delta(t)} e(s)\textrm{d}s]^{\rm T}A^{\rm T}_{i}P_{i}A_{i}e(t) +\notag\\ &\quad 2 [\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]^{\rm T}A^{\rm T}_{i}P_{i}K_{i}E_{i}e(t-\delta(t))-\notag\\ &\quad 2 [\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]^{\rm T}A^{\rm T}_{i}P_{i}W_{0i} \sigma_{1}(t)-\notag\\ &\quad 2 [\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]^{\rm T}A^{\rm T}_{i}P_{i}W_{1i} \sigma_{1}(t-\tau(t))+\notag\\ &\quad 2 [\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]^{\rm T}A^{\rm T}_{i}P_{i}K_{i} \sigma_{2}(t)+\notag\\ &\quad 2[\int^{t}_{t-\delta(t)}e(s)\textrm{d}s]^{\rm T}A^{\rm T}_{i}P_{i}A_{i}e(t-\delta(t))\rho_{\sigma}+\notag\\ &\quad e^{\rm T}(t )P_{i}A_{i}Q^{-1} A^{\rm T}_{i}P_{i}e(t)\rho_{\sigma} +\notag\\ &\quad e^{\rm T}(t-\delta(t))Q e(t-\delta(t))\rho_{\sigma}+g^{\rm T}(t)P_{i}g(t) \end{align} $

    (18)

    where

    $ \begin{align} &{\overline{\varphi}}(t)=-A_{i}e(t)-(K_{i}E_{i}+\dot{\delta}(t)A_{i})e(t-\delta(t))+\notag\\ &\quad W_{0i} \sigma_{1}(t)+W_{1i} \sigma_{1}(t-\tau(t)) -K_{i} \sigma_{2}(t). \end{align} $

    $ \begin{align} &\mathcal {L}V_{2}(e(t), ~t, ~i)\leq \tau^{2} \varphi^{\rm T}(t)R_{1}\varphi(t)-\notag\\ &\qquad \tau\int^{t}_{t-\tau} \varphi^{\rm T}(s)R_{1} \varphi(s)\textrm{d}s. \end{align} $

    (19)

    $ \begin{align} &\mathcal {L}V_{3}(e(t), ~t, ~i)\leq \delta^{2} e^{\rm T}(t)R_{2}e(t)-\notag\\ &\quad \delta\int^{t}_{t-\delta(t)} e^{\rm T}(s) R_{2} e(s)\textrm{d}s. \end{align} $

    (20)

    $ \begin{align} &\mathcal{L}V_{4}(e(t), t, i)=e^{\rm T}(t)R_{3}e(t)-\notag\\ &\quad (1- \dot{\tau}(t))e^{\rm T}(t- \tau (t))R_{3}e(t- \tau(t)). \end{align} $

    (21)

    $ \begin{align} &\mathcal{L}V_{5}(e(t), t, i)=e^{\rm T}(t)R_{4}e(t)-\notag\\ & (1- \dot{\delta}(t))e^{\rm T}(t- \delta (t))R_{4}e(t- \delta(t)). \end{align} $

    (22)

    By Lemma 1, we have

    $ \begin{align} \begin{aligned} &-\delta\int^{t}_{t-\delta(t)} e^{\rm T}(s) R_{2} e(s)\textrm{d}s\leq \\ &\qquad -\Big(\int^{t}_{t-\delta(t)} e(s)\textrm{d}s\Big)^{\rm T} R_{2}\int^{t}_{t-\delta(t)} e(s)\textrm{d}s \end{aligned} \end{align} $

    (23)

    and applying Lemma 3, the following inequality can be obtained

    $ \begin{align} \begin{aligned} &-\tau\int^{t}_{t-\tau} \varphi^{\rm T}(s)R_{1} \varphi(s)\textrm{d}s\leq\\ &\quad \left[ \begin{array}{c} e(t) \\ e(t-\tau(t))\\ \end{array} \right]^{\rm T}\left[ \begin{array}{cc} -R_{1} & R_{1} \\ R_{1}&-R_{1} \\ \end{array} \right]\left[ \begin{array}{c} e(t) \\ e(t-\tau(t))\\ \end{array} \right]+\\ &\quad 2\left[ \begin{array}{c} e(t) \\ e(t-\tau(t))\\ \end{array} \right]^{\rm T}\left[ \begin{array}{c} R_{1}\\ -R_{1} \\ \end{array} \right]\int^{t}_{t-\tau(t)}g(s){\rm d}\omega(s). \end{aligned} \end{align} $

    (24)

    From (11) and (12), we can have

    $ \begin{align*} &\sigma^{\rm T}_{1}(t)\sigma_{1}(t)=[f(x(t))-f(\hat{x}(t))]^{\rm T}[f(x(t))-f(\hat{x}(t))]\leq \nonumber\\ &\qquad e^{\rm T}(t)L^{\rm T}Le(t)\\ &\sigma^{\rm T}_{2}(t)\sigma_{2}(t)=[\phi(x(t))-\phi(\hat{x}(t))]^{\rm T}\times\\ &\qquad [\phi(x(t))-\phi(\hat{x}(t))] \leq\nonumber\\ & \qquad e^{\rm T}(t)G^{\rm T}Ge(t) \end{align*} $

    and

    $ \begin{align} \begin{aligned} &\sigma^{\rm T}_{1}(t-\tau(t))\sigma_{1}(t- \tau(t))\leq\nonumber\\ &\qquad\quad e^{\rm T}(t- \tau(t))L^{\rm T}Le(t- \tau(t)). \end{aligned} \end{align} $

    So there exist positive scalars $\lambda_{1i}$, $\lambda_{2i}$ and $\lambda_{3i}$ such that the following inequalities hold

    $ \begin{align} &-\lambda_{1i}[\sigma^{\rm T}_{1}(t)\sigma_{1}(t)-e^{\rm T}(t)L^{\rm T}Le(t)] \geq 0 \end{align} $

    (25)

    $ \begin{align} -\lambda_{2i}[\sigma^{\rm T}_{1}(t- \tau (t))\sigma_{1}(t- \tau (t))-\notag\\ e^{\rm T}(t- \tau (t))L^{\rm T}Le(t- \tau (t))] \geq 0 \end{align} $

    (26)

    $ \begin{align} -\lambda_{3i}[\sigma^{\rm T}_{2}(t)\sigma_{2}(t)-e^{\rm T}(t)G^{\rm T}Ge(t)] \geq 0. \end{align} $

    (27)

    Using inequalities (18)-(22) and (24), adding the terms on the left sides of (25)-(27) to the right side of (17), and considering (23), we can get

    $ \begin{align} \mathcal{L} V (e(t), t, i)\leq \xi^{\rm T}(t)\Gamma \xi (t)+\theta(t) \end{align} $

    (28)

    where

    $ \begin{align*} &\xi^{\rm T}(t)=[e^{\rm T}(t)~~~e^{\rm T}(t-\delta(t))~~~e^{\rm T}(t- \tau (t))\\ &\quad \int^{t}_{t-\delta(t)}e^{\rm T}(s){\rm d}s~~\sigma_{1}^{\rm T} (t)~~\sigma_{1}^{\rm T}(t- \tau (t))~~\sigma ^{\rm T}_{2}(t)]\\ & \Gamma=\Pi^{11}+[\Pi^{12}]^{\rm T}P^{-1}_{i}\Pi^{12}+ [\Pi ^{13}]^{\rm T}R^{-1}_{1} \Pi ^{13}+\\ &\qquad [\Pi ^{14}]^{\rm T}Q^{-1} \Pi ^{14} \end{align*} $

    and

    $ \begin{align} \theta(t)=2\left[ \begin{array}{c} e(t) \\ e(t-\tau(t))\\ \end{array} \right]\left[ \begin{array}{c} R_{1}\\ -R_{1} \\ \end{array} \right]\int^{t}_{(t-\tau(t))}g(s){\rm d}\omega(s). \end{align} $

    (29)

    By Lemma 2, $\Gamma<0$ is equivalent to

    $ \begin{align} \bar{ \Gamma}=\left[ \begin{array}{cccc} \Pi^{11} & \Pi^{12}& \Pi^{13}& \Pi ^{14} \\ \ast & -P_{i} & 0 & 0\\ \ast & \ast & -P_{i}R^{-1}_{1}P_{i} & 0\\ \ast & \ast & \ast &-Q \\ \end{array} \right]<0. \end{align} $

    (30)

    Then, pre- and post-multiply $\bar{\Gamma} $ by ${\rm diag}\{I, ~I, ~I, ~I, ~I, ~I, ~R_{1}P^{-1}_{i}\}$ and ${\rm diag}\{I, ~I, ~I, ~I, ~I, ~I, ~P^{-1}_{i}R_{1}\}$, respectively, and substitute $T_{i}=P_{i}K_{i}$. Noting that $P_{i}>0 $, $R_{1}>0$, and $P_{i}R^{-1}_{1}P_{i}-2P_{i}+R_{1}=(P_{i}-R_{1})^{\rm T}R^{-1}_{1}(P_{i}-R_{1})\geq0$, we get $-P_{i}R^{-1}_{1}P_{i}\leq-2P_{i}+R_{1}$. From Lemma 2, it then follows that $\Gamma<0$ is guaranteed by $\Pi<0$.

    By taking expectation on both sides of (17), we can obtain

    $ \begin{align} &{\rm E}\{V(e(T_{p}), T_{p}, r(T_{p}))\}-V(e(0), 0, r_{0})=\notag\\ &\qquad \qquad {\rm E}\Big\{ \int^{T_{p}}_{0} \mathcal{L}V(e(t), ~t, ~r(t)){\rm d}t\Big\}\leq\notag\\ &\qquad \qquad -\beta \int^{T_{p}}_{0} {\rm E}\{e^{\rm T}(t)e(t)\}{\rm d}t \end{align} $

    (31)

    where $\beta={\rm min}_{i\in S}{\lambda_{\rm min}(-\Gamma)}>0$.

    On the other hand, it follows from (16) that

    $ \begin{align*} {\rm E}\{V(e(t), t, i)\}\geq b{\rm E}\{e(t)^{\rm T}e(t)\} \end{align*} $

    where $ b={\rm min}_{i\in S}\big(\lambda_{\rm min}(P_{i})\big)>0$. So we can get

    $ \begin{align} &{\rm E}\{ e^{\rm T}(T_{p})e(T_{p})\} \leq b^{-1} V(e(0), 0, r_{0})-\notag\\ &\quad b^{-1}\beta \int^{T_{p}}_{0} {\rm E}\{e^{\rm T}(t)e(t)\}{\rm d}t. \end{align} $

    (32)

    Applying Gronwall-Bellman Lemma to the inequality (32), we can obtain

    $ \begin{align*} {\rm E}\{ e^{\rm T}(t)e(t)\} \leq b^{-1} V(e(0), 0, r_{0}){\rm e}^{-\beta b^{-1}t}. \end{align*} $
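    For completeness, the form of the Gronwall-Bellman lemma applied here is: if a nonnegative function $u(t)$ satisfies $u(t)\leq a-c\int^{t}_{0}u(s){\rm d}s$ for constants $a\geq0$ and $c>0$, then $u(t)\leq a{\rm e}^{-ct}$; it is applied with $u(t)={\rm E}\{e^{\rm T}(t)e(t)\}$, $a=b^{-1}V(e(0), ~0, ~r_{0})$ and $c=b^{-1}\beta$.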

    Noting that there exists a scalar $c>0 $ such that

    $ \begin{align} b^{-1}V(e(0), ~0, ~r_{0})\leq c \sup\limits_{-\rho\leq\theta\leq0}\mid e(\theta)\mid^{2}. \end{align} $

    (33)

    So, by (13), we can conclude that the error system (10) is exponentially mean-square stable.

    Remark 1. In order to convert the nonlinear matrix inequality into strict LMIs, the fact $- P_{i}R_{1}^{-1}P_{i}\leq -2 P_{i}+ R_{1}$ is used in the proof of Theorem 1; thus, it is very convenient to obtain a feasible solution through the Matlab LMI Toolbox.
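    As an illustration of how such strict LMIs are handled in practice, the following CVXPY sketch (an assumption-laden toy, not the full LMI (14): it keeps only a delay-free stability-type block) shows the linearizing substitution $T_{i}=P_{i}K_{i}$ and the recovery of the gain $K_{i}=P^{-1}_{i}T_{i}$; strict inequalities are imposed with small margins.

    import cvxpy as cp
    import numpy as np

    A = np.array([[1.0, 0.0], [0.0, 2.0]])   # A_1 from Example 1
    E = np.array([[0.2, 0.3], [0.2, 0.4]])   # E_1 from Example 1
    n = 2

    P = cp.Variable((n, n), symmetric=True)
    T = cp.Variable((n, n))                  # T = P K linearizes the product P K
    S = cp.Variable((n, n), symmetric=True)  # symmetric slack for the Lyapunov block

    X = P @ A + T @ E                        # P(A + K E) with T substituted
    constraints = [P >> 1e-3 * np.eye(n),
                   S == X + X.T,             # S = P(A+KE) + (A+KE)^T P
                   S >> 1e-3 * np.eye(n)]    # strict feasibility margin
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)

    if prob.status == cp.OPTIMAL:
        K = np.linalg.solve(P.value, T.value)  # K = P^{-1} T, as in Theorem 1
        print("feasible, K =\n", K)

    Since $A_{1}$ is diagonal and positive, even $K=0$ makes this toy condition feasible, so the solver mainly certifies feasibility; the full conditions (14) and (36) add the delay, leakage and stochastic blocks in the same linear fashion.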

    Next, we establish the $H_{\infty}$ performance of the error estimation system. When $v(t)\neq 0$, the following error estimation system is obtained:

    $ \begin{align} \begin{aligned} \begin{cases} \!\!\!\!&{\rm d}e(t)= [-(A_{i}+K_{i}E_{i}) e(t-\delta(t))+W_{0i} \sigma_{1}(t)+\\ \!\!\!\!&\quad W_{1i} \sigma_{1}(t-\tau(t))-K_{i} \sigma_{2}(t)+\\ \!\!\!\!&\quad (B_{1i}-K_{i}B_{2i})v(t) ]{\rm d}t+\\ \!\!\!\!&\quad [C_{i}e(t)+D_{i} e(t-\tau(t))+W_{2i} \sigma_{1}(t)+\\ \!\!\!\!&\quad W_{3i} \sigma_{1}(t-\tau(t)) ]\textrm{d}\omega(t)\\ \!\!\!\!&\tilde{z}(t)=H_{i}e(t). \end{cases} \end{aligned} \end{align} $

    (34)

    Define

    $ \begin{align} J(t)={\rm E}\{\int^{\infty}_{0}[\tilde{z}^{\rm T}(s)\tilde{z}(s)-\gamma^{2} v^{\rm T}(s)v(s)]{\rm d}s\} \end{align} $

    (35)

    so we can get the following Theorem 2.

    Theorem 2. For given scalars $\delta>0$, $\tau>0$, $\rho_{\sigma}>0$, and $\mu>0$, the error estimation system (34) is exponentially mean-square stable with a prescribed $H_{\infty}$ disturbance attenuation level $\gamma$, if there exist positive definite matrices $P_{i}$, $Q$, $R_{j}~(j=1, ~2, ~3, ~4) $, matrices $T_{i}$, and scalars $\lambda_{1i}>0$, $\lambda_{2i}>0$, $\lambda_{3i}>0~(i\in S)$ such that the following LMI holds

    $ \begin{align} \Gamma=\left[ \begin{array}{cccc} \overline{\Pi}^{11} & \Pi^{12}& \overline{\Pi}^{13} & \Pi^{14}\\ \ast & -P_{i} & 0 & 0\\ \ast & \ast & -2P_{i}+R_{1} & 0 \\ \ast & \ast & \ast & -Q\\ \end{array} \right]<0 \end{align} $

    (36)

    where

    $ \begin{align*} &\overline{\Pi}^{11}=\small{\left[ \begin{array}{cccccccc} \bar{\Pi}_{11} &\!\!\! \Pi_{12} &\!\!\! R_{1} &\!\!\! \Pi_{14}&\!\!\! P_{i}W_{0i} &\!\!\! P_{i}W_{1i} &\!\!\! -T_{i} &\!\!\!\Pi_{19}\\ \ast &\!\!\! \Pi_{22} &\!\!\! 0 &\!\!\! \Pi_{24} &\!\!\! 0 &\!\!\! 0 &\!\!\! 0 &\!\!\! 0\\ \ast&\!\!\! \ast &\!\!\! \Pi_{33} &\!\!\! 0 &\!\!\! 0 &\!\!\! 0 &\!\!\! 0&\!\!\! 0\\ \ast &\!\!\! \ast &\!\!\! \ast&\!\!\! \Pi_{44} &\!\!\! \Pi_{45} &\!\!\!\Pi_{46} &\!\!\! A_{i}^{\rm T}T_{i}&\!\!\!\Pi_{49}\\ \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! -\lambda_{1i} &\!\!\! 0 &\!\!\! 0 &\!\!\! 0\\ \ast &\!\!\! \ast&\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! -\lambda_{2i}&\!\!\! 0 &\!\!\! 0\\ \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast&\!\!\! -\lambda_{3i} &\!\!\! 0\\ \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast&\!\!\! \ast&\!\!\! -\gamma^{2}I\\ \end{array} \right]\small}\nonumber\\ &\bar{\Pi}_{11}=-P_{i}A_{i}-A^{\rm T}_{i}P_{i} +\sum\limits_{j=1}^{N}\pi_{ij}P_{j}+R_{3}+R_{4} -R_{1}+ \\ &\qquad \lambda_{1i}L^{\rm T}L+ \lambda_{3i}G^{\rm T}G+\delta^{2}R_{2}+H^{\rm T}_{i}H_{i}\\ &\overline{\Pi}^{13}=[0~~ -\tau( T_{i}E_{i}+ P_{i}A_{i}) ~~0~~0~~\tau P_{i}W_{0i}~~ \tau P_{i}W_{1i}~~\\ &\qquad -\tau T_{i}~~\tau (P_{i}B_{1i}- T_{i}B_{2i})]^{\rm T}\\ &\Pi_{19} =\tau P_{i}B_{1i}-\tau T_{i}B_{2i}\\ &\Pi_{49} =-A^{\rm T}_{i}P_{i}B_{1i}+A^{\rm T}_{i}T_{i}B_{2i}. \end{align*} $

    The other terms are the same as in Theorem 1.

    Proof. Taking the same Lyapunov functional and the same proof method as in Theorem 1, we can obtain

    $ \begin{align} \mathcal{L}V (e(t), ~t, ~i)\leq \eta^{\rm T}(t)\Pi\eta(t)+\theta(t) \end{align} $

    (37)

    where

    $ \begin{align*} &\eta(t)=[\xi^{\rm T}(t)~~~~ v^{\rm T}(t)]^{\rm T}.\\ &\Pi=\overline{\Pi}^{11}+[\Pi^{12}]^{\rm T}P^{-1}_{i}\Pi^{12}+ [\overline{\Pi} ^{13}]^{\rm T}R^{-1}_{1} \overline{\Pi} ^{13}+[\Pi^{14}]^{\rm T}Q^{-1}\Pi^{14}\end{align*} $

    therefore, we can get

    $ \begin{align*} &J(t)={\rm E}\Big\{\int^{\infty}_{0}[e^{\rm T}(s) H^{\rm T}_{i}H_{i}e(s)-\gamma^{2} v^{\rm T}(s)v(s)+\\ &\quad\mathcal{L} V (e(s), ~s, ~i)]{\rm d}s\Big\}-{\rm E}\{V (e(t), ~t, ~i)\}\leq \\ &\quad \int^{\infty}_{0}\eta^{\rm T}(s)\Pi\eta(s){\rm d}s. \end{align*} $

    Therefore, following the same lines as the proof of Theorem 1, the state estimation error system (34) is exponentially mean-square stable with the prescribed $H_{\infty }$ disturbance attenuation level $\gamma$.

    Under the zero-initial condition, we have $ V (t)\mid_{t=0}=0$ and $ V (t)\geq 0$; then, since $\Pi<0$, for any nonzero $v(t)\in L_{2}[0, \infty)$ the following inequality holds:

    $ \begin{align*} J(t)\!=\!{\rm E}\Big\{\int^{\infty}_{0}[\tilde{z}^{\rm T}(t)\tilde{z}(t)\!-\!\gamma^{2} v^{\rm T}(t)v(t)]{\rm d}t\Big\} \leq \int^{\infty}_{0}\eta^{\rm T}(t)\Pi\eta(t){\rm d}t<0 \end{align*} $

    that is, the $H_{\infty}$ performance in 2) is guaranteed.

    Remark 2. In Theorem 2, LMI-based conditions are presented to guarantee the exponential stability of the error system (34). The established criterion depends on the leakage delay $\delta$, the transmission delay $\tau$, $\mu$, $\rho_{\sigma}$ and $\gamma$. With the Matlab LMI Control Toolbox, the allowable delay bounds and attenuation level can be determined by solving the following optimization problems (a bisection sketch is given after the list):

    Case 1. Estimate the allowable maximum time delay $\delta$.

    Optimization 1. Maximize $\delta$, s.t. LMI (36) holds, with $\mu$, $\rho_{\sigma}$, $\tau$ and $\gamma$ fixed.

    Case 2. Estimate the allowable maximum time delay $\tau$.

    Optimization 2. Maximize $\tau$, s.t. LMI (36) holds, with $\mu$, $\rho_{\sigma}$, $\delta$ and $\gamma$ fixed.

    Case 3. Estimate the allowable minimum disturbance attenuation level $\gamma$.

    Optimization 3. Minimize $\gamma$, s.t. LMI (36) holds, with $\mu$, $\rho_{\sigma}$, $\delta$ and $\tau$ fixed.
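    Each of these is a one-dimensional search over a feasibility oracle, so simple bisection suffices. A sketch, assuming a hypothetical helper lmi_feasible(delta, tau, mu, rho_sigma, gamma) that assembles LMI (36) for fixed parameters (e.g., along the lines of the CVXPY fragment above) and returns whether it is feasible; bisection also assumes feasibility is monotone in the searched parameter:

    def max_feasible_delta(lmi_feasible, tau, mu, rho_sigma, gamma,
                           lo=0.0, hi=1.0, tol=1e-3):
        """Optimization 1: largest leakage-delay bound delta keeping LMI (36) feasible."""
        if not lmi_feasible(lo, tau, mu, rho_sigma, gamma):
            return None                  # infeasible even for delta close to 0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if lmi_feasible(mid, tau, mu, rho_sigma, gamma):
                lo = mid                 # still feasible: push delta upward
            else:
                hi = mid
        return lo

    Optimizations 2 and 3 are the same loop over $\tau$ (maximize) and $\gamma$ (minimize, with the two branches swapped).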

    In particular, when $\delta(t)=\delta$, the error system (10) becomes the following one:

    $ \begin{align} \begin{aligned} \begin{cases} &\textrm{d}e(t) =[-(A_{i}+K_{i}E_{i}) e(t-\delta )+W_{0i} \sigma_{1}(t)+\\ &\quad W_{1i} \sigma_{1}(t-\tau(t)) -K_{i} \sigma_{2}(t)]{\rm d}t+\\ &\quad [C_{i}e(t)+D_{i} e(t-\tau(t))+W_{2i} \sigma_{1}(t)+\\ &\quad W_{3i} \sigma_{1}(t-\tau(t)) ]\textrm{d}\omega(t)\\ &\tilde{z}(t)=H_{i}e(t). \end{cases} \end{aligned} \end{align} $

    (38)

    So for system (38), we can obtain the following Corollary 1.

    Corollary 1. For given scalars $\delta>0$, $\tau>0$, $\mu>0$, the estimator error system (38) is exponentially mean-square stable with a prescribed $H_{\infty}$ disturbance attenuation level $\gamma$, if there exist positive definite matrices $P_{i}>0$, $R_{1}>0$, $ R_{2}>0$, $ R_{3}>0$, $ R_{4}>0$, matrices $T_{i}$, and scalars $\lambda_{1i}>0$, $\lambda_{2i}>0$, $\lambda_{3i}>0~~(i\in S)$ such that the following LMI holds

    $ \begin{align} \Gamma=\left[ \begin{array}{ccc} \Pi^{11} & \Pi^{12}& \overline{\Pi}^{13} \\ \ast & -P_{i} & 0 \\ \ast & \ast & -2P_{i}+R_{1} \\ \end{array} \right]<0 \end{align} $

    (39)

    where

    $ \begin{align*} &\Pi^{11}=\small{\left[ \begin{array}{cccccccc} \bar{\Pi}_{11} &\!\!\! \Pi_{12} &\!\!\! R_{1} &\!\!\! \Pi_{14}&\!\!\! P_{i}W_{0i} &\!\!\! P_{i}W_{1i} &\!\!\! -T_{i} &\!\!\!\overline{\Pi}_{18}\\ \ast &\!\!\! \overline{\Pi}_{22} &\!\!\! 0 &\!\!\! \overline{\Pi}_{24} &\!\!\! 0 &\!\!\! 0 &\!\!\! 0 &\!\!\! 0\\ \ast&\!\!\! \ast &\!\!\! \Pi_{33} &\!\!\! 0 &\!\!\! 0 &\!\!\! 0 &\!\!\! 0&\!\!\! 0\\ \ast &\!\!\! \ast &\!\!\! \ast&\!\!\! \Pi_{44} &\!\!\! \Pi_{45} &\!\!\!\Pi_{46} &\!\!\! A_{i}^{\rm T}T_{i}&\!\!\! \overline{\Pi}_{48} \\ \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! -\lambda_{1i} &\!\!\! 0 &\!\!\! 0 &\!\!\! 0\\ \ast &\!\!\! \ast&\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! -\lambda_{2i}&\!\!\! 0 &\!\!\! 0\\ \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast&\!\!\! -\lambda_{3i}&\!\!\!0 \\ \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast &\!\!\! \ast&\!\!\! \ast&\!\!\!-\gamma^{2}I \\ \end{array} \right]\small}\nonumber\\[3mm] &\bar{\Pi}_{11}=-P_{i}A_{i}-A^{\rm T}_{i}P_{i} +\sum\limits_{j=1}^{N}\pi_{ij}P_{j}+R_{3}- R_{1}+ \lambda_{1i}L^{\rm T}L+\\ &\qquad \lambda_{3i}G^{\rm T}G+\delta^{2}R_{2}+H^{\rm T}_{i}H_{i}\\[3mm] &\overline{\Pi}^{13}=[-\tau R_{1}A_{i}~~ -\tau T_{i}E_{i} ~~0~~0~~\tau R_{1}W_{0i}~~ \tau R_{1}W_{1i}~~\\[3mm] &\qquad -\tau T_{i}~~\tau (R_{1}B_{1i}- T_{i}B_{2i})]^{\rm T}\\[3mm] &\overline{\Pi}_{18} =P_{i}B_{1i}-\tau T_{i}B_{2i}\\[3mm] &\overline{\Pi}_{22}=-R_{4}\\[3mm] &\overline{\Pi}_{24}=E_{i}^{\rm T} T_{i}^{\rm T} A_{i}, ~~ \overline{\Pi}_{48} =-A^{\rm T}_{i}P_{i}B_{1i}+A^{\rm T}_{i}T_{i}B_{2i}\end{align*} $

    the other terms are the same as in Theorem 1.

  • In this section, two numerical examples with simulation results are provided to demonstrate the effectiveness of the proposed $H_{\infty}$ state estimator design method.

    Example 1. Consider a two-neuron stochastic neural network with Markovian jump parameters and mixed time delays (2) with the following parameters:

    Mode 1:

    $ \begin{align} \begin{aligned} &A_{1}=\left[ \begin{array}{cc} 1 & 0 \\ 0 & 2 \\ \end{array} \right] , \quad W_{01}=\left[ \begin{array}{cc} 0.2 & 0.3 \\ 0.1 & 0.5 \\ \end{array} \right], \\[3mm] & W_{11}=\left[ \begin{array}{cc} 0.3 & 0.2 \\ -0.2 &-0.3 \\ \end{array} \right] , \quad W_{21}=\left[ \begin{array}{cc} 0.2 & -0.4 \\ 0.1 & 0.3 \\ \end{array} \right], \nonumber \\[3mm]\end{aligned}\end{align} $

    $ \begin{align} \begin{aligned} &W_{31}=\left[ \begin{array}{cc} 0.1 & -0.3 \\ -0.2 & 0.2 \\ \end{array} \right] , \quad E_{1}=\left[ \begin{array}{cc} 0.2 & 0.3 \\ 0.2 & 0.4 \\ \end{array} \right], \nonumber\\[3mm] & C_{1}=\left[ \begin{array}{cc} 0.4 & -0.1\\ 0.2 & -0.2\\ \end{array} \right], \quad D_{1}=\left[ \begin{array}{cc} 0.2 & -0.3\\ 0.2 & 0.2 \\ \end{array} \right], \\[3mm] &B_{11}=\left[ \begin{array}{cc} 0.2 & 0.1 \\ \end{array} \right] , \quad B_{21}=\left[ \begin{array}{cc} 0.1 & -0.2 \\ \end{array} \right], \\[3mm] &H_{1}=\left[ \begin{array}{cc} 0.1 & 0.2 \\ \end{array} \right]. \end{aligned} \end{align} $

    Mode 2:

    $ \begin{align} \begin{aligned} &A_{2}=\left[ \begin{array}{cc} 2 & 0 \\ 0 & 1.3 \\ \end{array} \right] , \quad W_{02}=\left[ \begin{array}{cc} 0.4 & 0.5 \\ 0.3& 0.2 \\ \end{array} \right], \\ & W_{12}=\left[ \begin{array}{cc} 0.1 & 0.3 \\ -0.2 &-0.2 \\ \end{array} \right] , \quad W_{22}=\left[ \begin{array}{cc} -0.2 & 0.2 \\ -0.3 & 0.1 \\ \end{array} \right], \\ &W_{32}=\left[ \begin{array}{cc} - 0.3 & 0.1 \\ -0.2 & 0.4 \\ \end{array} \right], \quad E_{2}=\left[ \begin{array}{cc} 0.1 & -0.2 \\ -0.2 & 0.3 \\ \end{array} \right], \nonumber\\ & C_{2}=\left[ \begin{array}{cc} 0.2 & -0.3\\ 0.2 & 0.2\\ \end{array} \right], \quad D_{2}=\left[ \begin{array}{cc} 0.3 & 0.1\\ 0.2 & -0.3 \\ \end{array} \right], \\ &B_{12}=\left[ \begin{array}{cc} 0.2 & -0.3 \\ \end{array} \right], \quad B_{22}=\left[ \begin{array}{cc} 0.1 & 0.3 \\ \end{array} \right], \\ &H_{2}=\left[ \begin{array}{cc} 0.2 & 0.1 \\ \end{array} \right]. \end{aligned} \end{align} $

    Let the Markov process governing the mode switching have the generator

    $ \begin{align} \Pi=\left[ \begin{array}{cc} -0.3 & 0.3 \\\nonumber 0.7 & -0.7 \\ \end{array} \right]. \end{align} $

    Take the activation functions as follows:

    $ \begin{align*} &f(x)=0.05(\mid x+1\mid-\mid x-1\mid) \nonumber\\ &\phi(x(t))={\rm tanh}(x).\nonumber \end{align*} $

    Then we can easily get $ L={\rm diag}\{0.1, ~0.1\}$ and $ G={\rm diag}\{1, ~1\}$. Taking $v(t)=0.01{\rm e}^{-t}{\rm sin}(0.02t), ~t>0$, and using the Matlab LMI Toolbox to solve LMI (36), the estimator gain matrices can be obtained as

    $ \begin{align} &K_{1}=\left[ \begin{array}{cc} 0.423\, 9 & -0.425\, 7\\\nonumber 0.042\, 7& -0.285\, 8\\ \end{array} \right]\\ &K_{2}=\left[ \begin{array}{cc} 0.424\, 0 & 0.408\, 3 \\ \nonumber -0.471\, 6 &-0.477\, 5 \\ \end{array} \right]. \end{align} $

    The upper bounds of the delays $\delta $ and $\tau$ and the minimum value of $\gamma$ that guarantee the stability of system (10) are listed in Tables 1-6, where "-" means that LMI (36) has no feasible solution. Table 1 shows the maximum allowable upper bound $\delta $ for different values of $\rho_{\sigma}$, which indicates that the bound on the derivative of the time-varying leakage delay plays an important role in obtaining feasible results.

    Table 1.  Allowable upper bounds of $\delta $ with different values of $\rho_{\sigma}$, $\mu=0.5$, $\tau=0.15$ and $\gamma=0.1$

    Table 2.  Allowable upper bounds of $\tau$ for different values of $\delta $, $\gamma= 0.1$, $\rho_{\sigma}$=0.01, $\mu=0.5$

    Table 3.  Minimum allowable bounds of $\gamma$ for different values of $\tau $, $\rho_{\sigma}=0.02$, $\delta=0.12$ and $\mu=0.5$

    Table 4.  Minimum allowable bounds of $\gamma$ for different values of $ \delta$, $\rho_{\sigma}=0.02$, $\tau=0.2$ and $\mu=0.5$

    Table 5.  Allowable upper bounds of $\tau$ for different values of $\mu$, $\gamma=0.1 $, $\rho_{\sigma}=0.1$ and $\delta=0.15$

    Table 6.  Allowable upper bounds of $ \tau$ with different values of $\mu$, $\rho_{\sigma}=0 $, $\delta=0.15 $ and $\gamma=0.1$

    From Table 2, we can see that when the values of $\mu$, $\rho_{\sigma}$ and $\gamma$ are fixed, the allowable upper bound of $\tau$ is affected by $\delta$; in particular, when $\delta=0.2$, no feasible solution can be obtained.

    Since the main aim of $H_{\infty}$ state estimation is to design an estimator for a given system such that the $L_{2} $ gain from the noise to the state estimation error is less than a prescribed level, Tables 3 and 4 show the effects of different $ \delta$ and $\tau$ on the minimum allowable bound of the prescribed level $\gamma$.

    When $\rho_{\sigma} $ is a non-zero constant, the allowable upper bounds of $\tau$ for different values of $ \mu$ are listed in Table 5.

    In particular, when the leakage delay is constant, i.e., $\delta(t)=\delta$, the studied system becomes the error system (38); from Corollary 1, we obtain the results listed in Table 6, which shows the effect of different $\mu$ on $\tau $.

    Remark 3. Comparing Tables 5 and 6, when the leakage delay is constant ($\delta(t)=\delta$), the maximum allowable upper bound of $\tau$ is enlarged, so the conservatism of the conditions is reduced.

    When leakage delay does not exist in the system, Table 7 shows the effect of different $\mu$ on $\tau$.

    Table 7.  Allowable upper bounds of $ \tau$ with different values of $\mu$, $\rho_{\sigma}=0 $, $\delta=0 $ and $\gamma=0.1$

    Remark 4. Comparing Tables 6 and 7, when $\delta(t)=0$, namely, the leakage delay does not exist, the maximum allowable upper bound of $\tau$ improves further, and the conservatism is reduced accordingly.

    Remark 5. By comparing Tables 5-7, we can see that, whether the leakage delay is time-varying, constant, or absent, the allowable upper bound of $\tau$ changes with $\delta(t)$ and $\rho_{\sigma} $ accordingly, which shows that the stability of systems (2) and (10) is affected by the leakage delay.

    At the same time, choosing the same parameters as in Example 1, the initial state values are chosen as $x(0)=[ 1.5 ~ -1]^{\rm T}$, $ \widehat{x} (0)=[-1 ~ 1]^{\rm T}$, the initial value of the leakage delay state is chosen as $x_\delta(0 )=[0.2 ~ -0.5]^{\rm T} $, and $\delta=0.1$, $\gamma= 0.3$, $\mu=0.1$, $\rho_{\sigma}=0.01$, $\tau=0.2$. Using Matlab, the state estimation and filtering error simulation results are obtained and shown in Figs. 1-5: Figs. 1 and 2 show the true states $x_{1}$, $x_{2}$ and their estimates, respectively, Fig. 3 shows the response of the filtering error $e(t)$, Fig. 4 shows the signal $z(t)$ to be estimated, and Fig. 5 shows the switching modes over time. The simulation results further confirm the effectiveness of Theorem 2 for the $H_{\infty}$ state estimator design for systems (2) and (10).
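    Simulations of this kind can be reproduced with a standard Euler-Maruyama discretization. A minimal Python sketch (not the authors' code; it simulates one realization of the unforced system (9), with a constant-history initial condition and the per-mode matrices passed in by the caller):

    import numpy as np

    def simulate_sde(mats, modes_of, x0, T=10.0, dt=1e-3,
                     delta=0.1, tau=0.2, seed=0):
        """mats: dict with keys 'A','W0','W1','C','D','W2','W3', each a list of
        per-mode matrices; modes_of(t) -> current mode index."""
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        d_lag, t_lag = int(delta / dt), int(tau / dt)
        hist = np.tile(np.asarray(x0, float), (max(d_lag, t_lag) + 1, 1))
        f = lambda x: 0.05 * (np.abs(x + 1.0) - np.abs(x - 1.0))  # activation of Example 1
        out = np.empty((n_steps, len(x0)))
        for k in range(n_steps):
            i = modes_of(k * dt)
            x, x_d, x_t = hist[-1], hist[-1 - d_lag], hist[-1 - t_lag]
            drift = -mats['A'][i] @ x_d + mats['W0'][i] @ f(x) + mats['W1'][i] @ f(x_t)
            diff = (mats['C'][i] @ x + mats['D'][i] @ x_t +
                    mats['W2'][i] @ f(x) + mats['W3'][i] @ f(x_t))
            x = x + drift * dt + diff * rng.normal(0.0, np.sqrt(dt))  # scalar dw per step
            hist = np.vstack([hist[1:], x])
            out[k] = x
        return np.arange(1, n_steps + 1) * dt, out

    Here mats['A'] = [A1, A2], etc., would be filled with the Example 1 matrices, and modes_of can be built from the Markov-chain sampler sketched earlier; averaging $e^{\rm T}(t)e(t)$ over many seeds gives Monte Carlo estimates of the mean-square curves in Figs. 6-8.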

    Figure 1.  State trajectories of $x_{1}(t)$ and $\hat{x}_{1}(t)$

    Figure 2.  State trajectories of $x_{2}(t)$ and $\hat{x}_{2}(t)$

    Figure 3.  Response of the filtering error state $e(t)$ of $H_{\infty}$

    Figure 4.  The signal to be estimated $z(t)$

    Figure 5.  The simulation of the system mode $r(t) = 1, 2$ in Example 1

    Then, choosing the same initial values, we can get the simulation curves of the mean square of $x_{1}(t)$, $x_{2}(t)$ and $e(t)$, which are plotted in Figs. 6-8. From Figs. 6-8, we can see that systems (9) and (10) are mean-square stable.

    Figure 6.  State trajectories of the mean square of $x_{1}(t)$

    Figure 7.  State trajectories of the mean square of $x_{2}(t)$

    Figure 8.  Mean square of the estimation error $e(t)$

    Remark 6. In [17], a class of mixed recurrent neural networks with time delay in the leakage term under impulsive perturbations was investigated, but stochastic disturbances were not included; our results extend such work to the state estimation of stochastic systems. In [40, 41], $H_{\infty}$ state estimation for neural networks with mode-dependent time-varying delays was studied, but the leakage term was not involved.

    Example 2. Consider a three-neuron two-mode stochastic neural network with Markovian jump parameters and mixed time delays (2) with parameters as follows:

    Mode 1:

    $ \begin{align*} &A_{1}=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2.2 & 0 \\ 0 & 0 & 2.3 \\ \end{array} \right], \quad W_{01}=\left[ \begin{array}{ccc} 0.2 &0.3 &0.2 \\ 0.1 &0.5& 0.4 \\ 0.2 &0.1 &0.3\\ \end{array} \right], \end{align*} $

    $ \begin{align*} &W_{11}=\left[ \begin{array}{ccc} 0.1& -0.1&\!\! 0.4 \\ 0.2 &0.2 &\!\!0.3 \\ 0.5 &0.4 &\!\!-0.3 \\ \end{array} \!\!\right], E_{1}=\left[ \begin{array}{ccc} 0.2 &-0.2& 0.1 \\ 0.4 &0.1 &0.3 \\ 0.1 &-0.2 &0.1 \\ \end{array} \!\!\right], \nonumber\\ &W_{21}=\left[ \begin{array}{ccc} 0.1 &\!\! 0.5 & \!\!0.2 \\ 0.3& \!\!0.25 & \!\!0.12 \\ 0.2 &\!\! 0.1 &\!\! 0.22 \\ \end{array} \!\!\!\!\right], \quad W_{31}=\left[ \begin{array}{ccc} 0.3 &0.22 &\!\!0.12 \\ 0.2 &0.5&\!\! 0.4 \\ 0.1 &0.4 &\!\!0.3\\ \end{array} \!\!\right], \nonumber\\ &C_{1}=\left[ \begin{array}{ccc} 0.2& 0.3 &0.1 \\ 0.2& 0.4 &0.5 \\ 0.5& 0.3 &0.1 \\ \end{array} \right], D_{1}=\left[ \begin{array}{ccc} 0.4 &0.2 &0.1 \\ 0.2& 0.2& 0.8 \\ 0.1 &0.3 &0.2 \\ \end{array} \right], \nonumber\\ &B_{11}=\left[ \begin{array}{c} 0.2 \\ 0.1 \\ 0.3 \\ \end{array} \right], B_{12}=\left[ \begin{array}{c} 0.2 \\ 0.1 \\ -0.2 \\ \end{array} \right], H_{1}= \left[ \begin{array}{ccc} 0.1 &\!\!0.2 &\!\!0.3 \\ \end{array} \!\!\right]. \end{align*} $

    Mode 2:

    $ \begin{align} \begin{aligned} & A_{2}=\left[ \begin{array}{ccc} 0.5 & 0 & 0 \\ 0 & 0.2 & 0 \\ 0 & 0 & 2.2 \\ \end{array} \right], \quad W_{02}=\left[ \begin{array}{ccc} -0.4 &0.2 &0.2 \\ -0.3 &0.5 &0.2 \\ -0.3& 0.2& 0.2 \\ \end{array} \right], \nonumber\\ &W_{12}=\left[ \begin{array}{ccc} 3 & 0 & 0 \\ 0 & 1.2 & 0 \\ 0 & 0 & 3.2 \\ \end{array} \right], \quad W_{22}=\left[ \begin{array}{ccc} 0.4 &0.5 &0.1 \\ 0.3 &0.2 &0.1 \\ 0.3& 0.17& 0.1 \\ \end{array} \right], \nonumber\\ &W_{32}=\left[ \begin{array}{ccc} \!\!0.4 &-0.2 &0.3 \\ \!\!0.3 &0.2 &-0.3 \\ \!\!-0.3& 0.2& -0.1 \\ \end{array} \right], E_{2}=\left[ \begin{array}{ccc} 0.2 & 0.2&\!\! 0.1 \\ 0.3 &0.2 &\!\!-0.2 \\ 0.1 & 0.2 &\!\!-0.2 \\ \end{array} \right], \nonumber \\ &C_{2}=\left[ \begin{array}{ccc} 0.1& 0.2 &-0.1 \\ -0.2& 0.1 &0.3 \\ 0.4& 0.2 &0.1 \\ \end{array} \right], D_{2}=\left[ \begin{array}{ccc} \!\! 0.3 &-0.2 &0.1 \\ \!\! 0.2& 0.5& -0.3 \\ \!\!-0.2 &0.3 &0.6 \\ \end{array} \!\!\right], \nonumber\\ &B_{21}=\left[ \begin{array}{c} \!\!0.1 \\ \!\!-0.2 \\ \!\!0.3 \\ \end{array} \!\!\right], B_{22}=\left[ \begin{array}{c} \!\!0.2 \\ \!\! 0.2 \\ \!\!0.1 \\ \end{array} \right], H_{2}= \left[ \begin{array}{ccc} 0.2 &0.3 &0.1 \\ \end{array} \!\!\right]. \end{aligned} \end{align} $

    Let the Markov process governing the mode switching have the generator

    $ \begin{align} \Pi=\left[ \begin{array}{cc} -0.4 & 0.4 \\ 0.6 & -0.6 \\ \end{array} \nonumber \right]. \end{align} $

    The other parameters are the same as in Example 1. By solving LMI (36), the following estimator gain matrices can be obtained:

    $ \begin{align*} &K'_{1}=\left[ \begin{array}{cc} 0.697\, 0 & 0.249\, 7 \\ -0.507\, 8 & 0.693\, 4 \\ \nonumber 0.435\, 0 & -0.139\, 2\\ \end{array} \right]\\ & K'_{2}=\left[ \begin{array}{cc} 0.418\, 6 & 0.523\, 0\\ 3.687\, 4 & -1.266\, 2\\ \nonumber 0.912\, 2 & 0.180\, 9\\ \end{array} \right]. \end{align*} $

    Substituting $K'_{1} $ and $K'_{2}$ into (2) and (4), with the initial state values $x_1(0) =[3~~ 1.5~~ -1 ]^{\rm T} $, the initial state estimation values $\widehat{x}_1(0) =[-1 ~~ -2~~ 1 ]^{\rm T} $, and hence the initial estimation error $e(0)=[4 ~~3.5 ~~ -2 ]^{\rm T} $, the state responses of the plant and the estimator are given in Figs. 9-12. The simulation results further confirm that the proposed state estimator design methods are effective.

    Figure 9.  State trajectories of $x_{1}(t)$ and $\hat{x}_{1}(t)$

    Figure 10.  State trajectories of $x_{2}(t)$ and $\hat{x}_{2}(t)$

    Figure 11.  State trajectories of $x_{3}(t)$ and $\hat{x}_{3}(t)$

    Figure 12.  Response of the state estimator error $e(t)$ of $H_{\infty}$

    Remark 7. In [36], $H_{\infty} $ filter design for stochastic Markovian jump Hopfield neural networks with mode-dependent time-varying delays was studied, but the leakage term was not involved.

  • In this paper, the $H_{\infty}$ state estimation problem for a class of stochastic neural networks with Markovian jumping parameters and leakage delay has been investigated. By employing a suitable Lyapunov functional and inequality techniques, sufficient conditions are provided under which the error system is not only exponentially stable but also satisfies a prescribed $H_{\infty}$ norm level. All results are expressed in terms of strict LMIs. Finally, examples and simulations are presented to show the effectiveness of the proposed methods, together with discussions of the effect of the leakage delay on stability. The numerical analysis reveals that the leakage delay has a clear influence on the conservatism of the conditions and on the noise attenuation level of the $H_{\infty}$ state estimator.

  • This work was supported by the Research Fund for the Doctoral Program of Guangdong Province of China (No. 2015A030310336).
