Adaptive Neural Network Dynamic Surface Control for a Class of Nonlinear Systems with Uncertain Time Delays

Xiao-Jing Wu, Xue-Li Wu, Xiao-Yuan Luo

Citation: Xiao-Jing Wu, Xue-Li Wu and Xiao-Yuan Luo. Adaptive Neural Network Dynamic Surface Control for a Class of Nonlinear Systems with Uncertain Time Delays. International Journal of Automation and Computing, vol. 13, no. 4, pp. 409-416, 2016. doi: 10.1007/s11633-015-0945-3
Funds: Natural Science Foundation of Hebei Province (F2014208119); Doctoral Foundation of Hebei University of Science and Technology (010075)

Corresponding author: Xiao-Jing Wu received the M.Sc. and Ph.D. degrees from Yanshan University, China, in 2009 and 2012, respectively. She is currently a lecturer at Hebei University of Science and Technology, China. Her research interests include adaptive control of nonlinear systems and fault-tolerant control. E-mail: wuxiaojing013@163.com. ORCID iD: 0000-0003-2784-2278
Publication history
  • Received: 2014-07-17
  • Accepted: 2014-12-11
  • Published online: 2016-01-11
  • Issue date: 2016-08-01

1. Introduction

Time delays unavoidably exist in many engineering systems, such as chemical reactor systems, cardiovascular-respiratory control systems, network systems and circuit systems. Over the past years, two approaches have commonly been used to prove stability: Lyapunov-Razumikhin functions[1-3] and Lyapunov-Krasovskii functionals[4-6]. Recently, many studies[7-10], including our previous works[11, 12], have treated various nonlinear systems involving uncertain time delays. Because of the existence of uncertain time delays, an accurate mathematical model of the system is not easy to obtain in complex dynamic processes. However, most studies[7-9, 11, 12] are carried out under the assumption that the uncertain time delay satisfies certain conditions, which makes the results conservative. Alternatively, as an efficient inductive approach, the neural network (NN) can provide a new solution to overcome these drawbacks in dealing with uncertain time delays.

  On the other hand, it is well known that the adaptive backstepping design method is a constructive tool, which is often used for nonlinear systems that do not satisfy the matching condition, as well as for systems with uncertain functions[13-16]. However, the traditional backstepping algorithm requires repeated differentiation of the nonlinear components of the model, which significantly increases the complexity of implementation owing to the ``explosion of terms''. To overcome this drawback, the dynamic surface control (DSC) technique was developed in [11, 12, 17-20] by introducing a first-order filter of the synthetic input at each step of the traditional backstepping approach. Despite these efforts, the DSC technique has not yet been applied to nonlinear systems with unknown parameters and uncertain time delays by effectively integrating the DSC method with indirect neural networks.

  Motivated by the above observations, this paper investigates a general adaptive control procedure for a class of nonlinear systems with uncertain time delays and unknown parameters by incorporating an indirect adaptive NN into the DSC framework. The major contributions are: 1) Under the DSC framework, a novel indirect adaptive NN algorithm is introduced for the first time to approximate the uncertain time-varying delay functions. 2) Dynamic compensation terms are introduced to facilitate the controller design. 3) The designed controller is independent of the time delays. 4) A practical circuit system is simulated to verify the effectiveness of the proposed control method.

  The rest of this paper is organised as follows. Section 2 presents the system description and radial basis function (RBF) NN preliminaries. Section 3 details the indirect adaptive neural network DSC design procedure for a class of $n$-order nonlinear systems with unknown parameters and uncertain time delays. Section 4 provides a chaotic circuit system example that illustrates the performance. In addition, a brief analysis of the weight estimation algorithm of the indirect adaptive NN is provided in the Appendix.

2. System description and RBF NN preliminaries

Consider a class of nonlinear systems with uncertain time-varying delays in the following form:

      $\left\{ \begin{matrix} {{{\dot{x}}}_{i}}={{x}_{i+1}}+\theta _{i}^{\text{T}}{{g}_{i}}({{{\bar{x}}}_{i}})+\vartriangle {{f}_{i}}({{{\bar{x}}}_{i}}(t-{{\tau }_{i}}(t))), \\ i=1, \cdots, n-1 \\ {{{\dot{x}}}_{n}}=u+\theta _{n}^{\text{T}}{{g}_{n}}({{{\bar{x}}}_{n}})+\vartriangle {{f}_{n}}({{{\bar{x}}}_{n}}(t-{{\tau }_{n}}(t))) \\ \end{matrix} \right.$

      (1)

      where $x_{i}, i=1, \cdots, n$, are the system states, ${\bar x_i}={[{x_1}, \cdots, {x_i}]^{\rm{T}}}$ , $u$ represents the control input of the system, $\theta _{i}$ is an unknown constant vector, $g_{i}(\cdot)$ is a known nonlinear function, $\triangle f_{i}(\cdot)$ denotes an uncertain nonlinear function caused by the time-varying delay, and $\tau _{i}(t)$ is the time-varying delay.

      Remark 1. Compared with previous works[7-12], this study takes unknown parameters and uncertain time-varying delays into account simultaneously. Moreover, no condition is imposed on the uncertain time-varying delay functions.

      The following RBF NN is widely used as a tool for modelling nonlinear functions because of its good capability in function approximation. Structurally, the RBF NN takes the form $W^{\rm T}\psi (x)$ , where $x\in \Omega _{x}\subset {\bf R}^{q}$ is the NN input, $q$ is the NN input dimension, and $W=[w_{1}, \cdots, w_{\iota }]^{\rm T}$ is the weight vector with node number $\iota >1$. The regressor $\psi (x)=[\varphi _{1}, \cdots, \varphi _{\iota }]^{\rm T}$ has elements $\varphi _{i}$ chosen as the commonly used Gaussian function $\varphi _{i}={\rm e} ^{\frac{-(x-\mu _{i})^{\rm T}(x-\mu _{i})}{\chi _{i}^{2}}}, i=1, \cdots, \iota $ , where $\mu _{i}=[\mu _{i1}, \cdots, \mu _{iq}]^{\rm T}$ is the center of the receptive field and $\chi _{i}$ is the width of the Gaussian function.

      It has been proved that the NN can approximate any continuous function $F(x)$ over a compact set $\Omega _{x}\subset \boldsymbol{R}^{q}$ to arbitrary accuracy as $F(x)=W^{\ast {\rm T}}\psi (x)+\varepsilon$ , where $W^{\ast }$ is the ideal NN weight and $\varepsilon $ is the NN approximation error, with $W^{\ast }=\arg \underset{W\in \boldsymbol{R}^{\iota }}{\min }\{\sup \left\vert F(x)-W^{\rm T}\psi (x)\right\vert \}$ .
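      As a minimal illustration of this approximation property (a sketch added for clarity, not part of the original paper), the following Python snippet builds Gaussian RBF features and fits the weight vector $W$ to a scalar function by least squares; the node number, centers and widths are arbitrary choices for the example.

```python
import numpy as np

def rbf_features(x, centers, widths):
    """Gaussian RBF regressor psi(x) = [phi_1(x), ..., phi_l(x)]^T."""
    d2 = np.sum((centers - x) ** 2, axis=1)      # squared distances to the centers
    return np.exp(-d2 / widths ** 2)

# Example: approximate F(x) = x*sin(x) on [-2, 2] with l = 15 nodes.
l = 15
centers = np.linspace(-2.0, 2.0, l).reshape(-1, 1)   # receptive-field centers mu_i
widths = 0.5 * np.ones(l)                            # Gaussian widths chi_i

X = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
F = X[:, 0] * np.sin(X[:, 0])

Psi = np.array([rbf_features(x, centers, widths) for x in X])  # (200, l) regressor matrix
W, *_ = np.linalg.lstsq(Psi, F, rcond=None)                    # least-squares weight fit

print("max approximation error on the grid:", np.max(np.abs(Psi @ W - F)))
```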

      In addition, for system (1), choose the following state transformation:

      $\left\{ {\begin{array}{*{20}{l}} {{z_1}={x_1}-{x_d}}\\ {{z_i}={x_i}-{\alpha _{i-1}}, i=2, \cdots, n-1}\\ {{z_n}={x_n}-{\alpha _{n-1}}} \end{array}} \right.$

      (2)

      where $\alpha _{i-1}, i=2, \cdots, n$ , will be given by the first-order filter and $x_{d}$ is the given continuous reference tracking signal.

      Then, in order to design the dynamic surface controller, the boundary layer errors are defined as

      $\left\{ {\begin{array}{*{20}{l}} {{y_i}={\alpha _{i-1}}-\alpha _{i-1}^ *, \quad i=2, \cdots, n-1}\\ {{y_n}={\alpha _{n-1}}-\alpha _{n-1}^ * } \end{array}} \right.$

      (3)

      where the virtual control input $\alpha _{i-1}^{\ast }$ , $i=2, \cdots, n$ , will be designed later.

      In this paper, the control objective is to ensure that the system state $x_{1}$ tracks a desired trajectory $x_{d}$ , i.e., to force the tracking error $z_{1}$ to be asymptotically stable in the sense of uniform ultimate boundedness.

3. Indirect adaptive NN dynamic surface controller design

In the first step, consider system (1) and the tracking error $z_{1}=x_{1}-x_{d}$ , whose derivative is

      ${{\dot{z}}_{1}}={{x}_{2}}+\theta _{1}^{\text{T}}{{g}_{1}}+\vartriangle {{f}_{1}}-{{\dot{x}}_{d}}$

      (4)

      Choose the NN to approximate the uncertain time-varying delay function $ \triangle f_{1}$ , then

      ${\dot z_1}={x_2} + \theta _1^{\rm{T}}{g_1} + W_1^{ * {\rm{T}}}{\psi _1} + {\varepsilon _1}-{\dot x_d}$

      (5)

      where $W_{1}^{\ast }$ is the ideal NN weight vector, $\varepsilon _{1}$ is the NN approximation error, $\psi _{1}=[\varphi _{11}, \cdots, \varphi _{1l}]^{\rm T}$ , and $\varphi _{1j}$ is the Gaussian function.

      Therefore, the tracking error $z_{1}$ can be estimated by

      ${\dot h_1}={x_2} + \hat \theta _1^{\rm{T}}{g_1} + \hat W_1^{\rm{T}}{\psi _1}-{\dot x_d}-{l_1}{e_1}$

      (6)

      where $h_{1}$ is the estimate of the state $z_{1}$ , $\hat{W}_{1}$ is the estimate of the NN weight, and $l_{1}>\frac{1}{2}$ is a gain introduced to accelerate the convergence of the estimator.

      The estimation error is defined as

      ${e_1}={h_1}-{z_1}.$

      (7)

      Define the Lyapunov function candidate as

      ${V_1}=\frac{1}{2}z_1^2 + \frac{1}{2}\tilde \theta _1^{\rm{T}}\Lambda _1^{-1}{\tilde \theta _1} + \frac{1}{2}\tilde W_1^{\rm{T}}\Gamma _1^{-1}{\tilde W_1}$

      (8)

      where ${\tilde \theta _1}={\hat \theta _1} + {\hat \eta _1}-{\theta _1}, {\widetilde W_1}={\hat W_1} + {\hat M_1}-W_1^ * $ . $\Lambda _{1}>0$ and $\Gamma _{1}>0$ are diagonal matrices. $\hat{\eta}_{1}$ and $\hat{M}_{1}$ are the dynamic compensation terms, which will be designed later on.

      Then, the derivative of $V_{1}$ is given by

      $\begin{align} & {{{\dot{V}}}_{1}}={{z}_{1}}{{{\dot{z}}}_{1}}+\tilde{\theta }_{1}^{\text{T}}\Lambda _{1}^{-1}({{\overset{.}{\mathop{{\hat{\theta }}}}\, }_{1}}+{{{\dot{\hat{\eta }}}}_{1}})+\tilde{W}_{1}^{\text{T}}\Gamma _{1}^{-1}({{{\dot{\hat{W}}}}_{1}}+{{{\dot{\hat{M}}}}_{1}})=\\ & \ \ \ \ \ \ {{z}_{1}}({{z}_{2}}+{{y}_{2}}+\alpha _{1}^{*}+\theta _{1}^{\text{T}}{{g}_{1}}+W_{1}^{*\text{T}}{{\psi }_{1}}+{{\varepsilon }_{1}}-{{{\dot{x}}}_{d}})+ \\ & \ \ \ \ \ \ \tilde{\theta }_{1}^{\text{T}}\Lambda _{1}^{-1}({{{\dot{\hat{\theta }}}}_{1}}+{{\overset{.}{\mathop{\widehat{\eta }}}\, }_{1}})+\tilde{W}_{1}^{\text{T}}\Gamma _{1}^{-1}({{{\dot{\hat{W}}}}_{1}}+{{{\dot{\hat{M}}}}_{1}}). \\ \end{align}$

      (9)

      Furthermore, because of

      ${{z}_{1}}{{y}_{2}}\le \frac{1}{2}z_{1}^{2}+\frac{1}{2}y_{2}^{2}, \quad {{z}_{1}}{{\varepsilon }_{1}}\le \frac{1}{2}z_{1}^{2}+\frac{1}{2}\varepsilon _{1}^{2}$

      (10)

      the derivative of $V_{1}$ can be bounded as

      $\begin{align} & {{{\dot{V}}}_{1}}\le {{z}_{1}}({{z}_{2}}+\alpha _{1}^{*}+\theta _{1}^{\text{T}}{{g}_{1}}+W_{1}^{*\text{T}}{{\psi }_{1}}-{{{\dot{x}}}_{d}})+ \\ & \ \ \ \ \ \ z_{1}^{2}+\frac{1}{2}y_{2}^{2}+\frac{1}{2}\varepsilon _{1}^{2}+ \\ & \ \ \ \ \ \ \tilde{\theta }_{1}^{\text{T}}\Lambda _{1}^{-1}({{{\dot{\hat{\theta }}}}_{1}}+{{\overset{.}{\mathop{\widehat{\eta }}}\, }_{1}})+\tilde{W}_{1}^{\text{T}}\Gamma _{1}^{-1}({{{\dot{\hat{W}}}}_{1}}+{{{\dot{\hat{M}}}}_{1}}). \\ \end{align}$

      (11)

      Choose

      $\alpha _{1}^{*}=-{{k}_{1}}{{z}_{1}}-\hat{\theta }_{1}^{\text{T}}{{g}_{1}}-\hat{W}_{1}^{\text{T}}{{\psi }_{1}}+{{\dot{x}}_{d}}-\hat{\eta }_{1}^{\text{T}}{{g}_{1}}-\hat{M}_{1}^{\text{T}}{{\psi }_{1}}$

      (12)

      where the design parameter $k_{1}>1$ . And the adaptive laws are designed as

      ${{\dot{\hat{\theta }}}_{1}}={{\Lambda }_{1}}(-{{e}_{1}}{{g}_{1}}-{{\delta }_{1}}{{\hat{\theta }}_{1}})$

      (13)

      ${{\dot{\hat{W}}}_{1}}={{\Gamma }_{1}}(-{{e}_{1}}{{\psi }_{1}}-{{\sigma }_{1}}{{\hat{W}}_{1}}).$

      (14)

      Moreover, the dynamic compensation terms are designed as

      ${{\dot{\hat{\eta }}}_{1}}={{\Lambda }_{1}}({{h}_{1}}{{g}_{1}}-{{\delta }_{1}}{{\hat{\eta }}_{1}})$

      (15)

      ${{\dot{\hat{M}}}_{1}}={{\Gamma }_{1}}({{h}_{1}}{{\psi }_{1}}-{{\sigma }_{1}}{{\hat{M}}_{1}})$

      (16)

      where $\delta _{1}$ and $\sigma _{1}$ are positive design parameters.
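      To make the first design step concrete, the following Python sketch collects the estimator (6), the virtual control (12), the adaptive laws (13) and (14) and the compensation terms (15) and (16) into one forward-Euler update. It is an illustrative skeleton only: the regressors g1 and psi1, all gains and the dictionary layout are assumptions made for the example, not the authors' simulation code.

```python
import numpy as np

def step1_update(x1, x2, xd, xd_dot, state, params, dt):
    """One forward-Euler update of the step-1 design: estimator (6), virtual
    control (12), adaptive laws (13)-(14) and compensation terms (15)-(16).
    `state` holds h1, theta1_hat, W1_hat, eta1_hat, M1_hat; `params` holds the
    gains k1, l1, delta1, sigma1, the matrices Lambda1, Gamma1 and the
    regressor functions g1(.) and psi1(.). All names are illustrative."""
    g1 = params["g1"](x1)              # known regressor g1(x1)
    psi1 = params["psi1"](x1)          # Gaussian RBF regressor psi1(x1)
    z1 = x1 - xd                       # tracking error, cf. (2)
    e1 = state["h1"] - z1              # estimation error (7)

    # Virtual control (12)
    alpha1_star = (-params["k1"] * z1
                   - (state["theta1_hat"] + state["eta1_hat"]) @ g1
                   - (state["W1_hat"] + state["M1_hat"]) @ psi1
                   + xd_dot)

    # Estimator (6), adaptive laws (13)-(14), compensation terms (15)-(16)
    h1_dot = (x2 + state["theta1_hat"] @ g1 + state["W1_hat"] @ psi1
              - xd_dot - params["l1"] * e1)
    theta1_hat_dot = params["Lambda1"] @ (-e1 * g1 - params["delta1"] * state["theta1_hat"])
    W1_hat_dot = params["Gamma1"] @ (-e1 * psi1 - params["sigma1"] * state["W1_hat"])
    eta1_hat_dot = params["Lambda1"] @ (state["h1"] * g1 - params["delta1"] * state["eta1_hat"])
    M1_hat_dot = params["Gamma1"] @ (state["h1"] * psi1 - params["sigma1"] * state["M1_hat"])

    # Forward-Euler integration of the estimator and adaptation states
    state["h1"] += dt * h1_dot
    state["theta1_hat"] += dt * theta1_hat_dot
    state["W1_hat"] += dt * W1_hat_dot
    state["eta1_hat"] += dt * eta1_hat_dot
    state["M1_hat"] += dt * M1_hat_dot
    return alpha1_star

# Illustrative usage with a 2-dimensional g1 and a 5-node RBF regressor.
params = {"k1": 2.0, "l1": 1.0, "delta1": 0.1, "sigma1": 0.1,
          "Lambda1": 5.0 * np.eye(2), "Gamma1": 5.0 * np.eye(5),
          "g1": lambda x1: np.array([x1, x1 ** 3]),
          "psi1": lambda x1: np.exp(-(x1 - np.linspace(-1, 1, 5)) ** 2 / 0.25)}
state = {"h1": 0.0, "theta1_hat": np.zeros(2), "W1_hat": np.zeros(5),
         "eta1_hat": np.zeros(2), "M1_hat": np.zeros(5)}
alpha1_star = step1_update(x1=0.1, x2=0.0, xd=0.0, xd_dot=0.0,
                           state=state, params=params, dt=1e-3)
```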

      According to (12)-(16) and the following inequalities:

      $-{{\delta }_{1}}\tilde{\theta }_{1}^{\text{T}}({{\tilde{\theta }}_{1}}+{{\theta }_{1}})\le-\frac{{{\delta }_{1}}}{2}\tilde{\theta }_{1}^{\text{T}}{{\tilde{\theta }}_{1}}+\frac{{{\delta }_{1}}}{2}\theta _{1}^{\text{T}}{{\theta }_{1}}$

      (17)

      $-{{\sigma }_{1}}\tilde{W}_{1}^{\text{T}}({{\tilde{W}}_{1}}+W_{1}^{*})\le-\frac{{{\sigma }_{1}}}{2}\tilde{W}_{1}^{\text{T}}{{\tilde{W}}_{1}}+\frac{{{\sigma }_{1}}}{2}W_{1}^{*\text{T}}W_{1}^{*}$

      (18)

      the derivative of $V_{1}$ can be given by

      $\begin{align} & {{{\dot{V}}}_{1}}\le -({{k}_{1}}-1)z_{1}^{2}+{{z}_{1}}{{z}_{2}}-\tilde{\theta }_{1}^{\text{T}}{{g}_{1}}{{z}_{1}}-\tilde{W}_{1}^{\text{T}}{{\psi }_{1}}{{z}_{1}}+\frac{1}{2}y_{2}^{2}+ \\ & \ \ \ \ \frac{1}{2}\varepsilon _{1}^{2}+\tilde{\theta }_{1}^{\text{T}}\Lambda _{1}^{-1}({{{\dot{\hat{\theta }}}}_{1}}+{{{\dot{\hat{\eta }}}}_{1}})+\tilde{W}_{1}^{\text{T}}\Gamma _{1}^{-1}({{{\dot{\hat{W}}}}_{1}}+{{{\dot{\hat{M}}}}_{1}})= \\ & \ \ \ \ \ -\ ({{k}_{1}}-1)z_{1}^{2}+{{z}_{1}}{{z}_{2}}+\frac{1}{2}y_{2}^{2}+\frac{1}{2}\varepsilon _{1}^{2}- \\ & \ \ \ \ \ {{\delta }_{1}}\tilde{\theta }_{1}^{\text{T}}({{{\tilde{\theta }}}_{1}}+{{\theta }_{1}})-{{\sigma }_{1}}\tilde{W}_{1}^{\text{T}}({{{\tilde{W}}}_{1}}+W_{1}^{*})\le \\ & \ \ \ \ \ -({{k}_{1}}-1)z_{1}^{2}+{{z}_{1}}{{z}_{2}}+\frac{1}{2}y_{2}^{2}-\frac{{{\delta }_{1}}}{2}\tilde{\theta }_{1}^{\text{T}}{{{\tilde{\theta }}}_{1}}- \\ & \ \ \ \ \frac{{{\sigma }_{1}}}{2}\tilde{W}_{1}^{\text{T}}{{{\tilde{W}}}_{1}}+{{N}_{1}} \\ \end{align}$

      (19)

      where $N_{1}=\frac{\delta _{1}}{2}\theta _{1}^{\rm T}\theta _{1}+\frac{\sigma _{1}}{2}W_{1}^{\ast{\rm T}}W_{1}^{\ast }+\frac{1}{2}\varepsilon _{1}^{2}$ .

      In the second step, let $\alpha _{1}^{\ast }$ be passed through a first-order filter as

      ${{\lambda }_{2}}{{\dot{\alpha }}_{1}}+{{\alpha }_{1}}=\alpha _{1}^{*}$

      (20)

      where $\lambda _{2}$ is a time constant.
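      For implementation, a first-order filter such as (20) can be realized in discrete time by a simple Euler step. The short Python sketch below is illustrative only; the time constant and step size are arbitrary.

```python
def first_order_filter(alpha, alpha_star, lam, dt):
    """Euler step of lam * d(alpha)/dt + alpha = alpha_star, cf. (20)."""
    return alpha + dt * (alpha_star - alpha) / lam

# Usage: alpha1 tracks the synthetic input alpha1_star with time constant lam = 0.05.
alpha1, lam, dt = 0.0, 0.05, 0.001
for _ in range(100):
    alpha1 = first_order_filter(alpha1, alpha_star=1.0, lam=lam, dt=dt)
print(alpha1)  # about 0.87 after 100 steps; converges to 1.0 as the filter settles
```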

      Using system (1) and state transformation (2), we obtain

      ${{\dot{z}}_{2}}={{\dot{x}}_{2}}-{{\dot{\alpha }}_{1}}={{x}_{3}}+\theta _{2}^{\text{T}}{{g}_{2}}+\vartriangle {{f}_{2}}-{{\dot{\alpha }}_{1}}.$

      (21)

      Choose the NN to approximate the uncertain time-varying delay function $ \Delta f_{2}$ , then

      ${{\dot{z}}_{2}}={{x}_{3}}+\theta _{2}^{\text{T}}{{g}_{2}}+W_{2}^{*\text{T}}{{\psi }_{2}}+{{\varepsilon }_{2}}-{{\dot{\alpha }}_{1}}$

      (22)

      where $W_{2}^{\ast }$ is the ideal NN weight vector, $\varepsilon _{2}$ is the NN approximation error, $\psi _{2}=[\varphi _{21}, \cdots, \varphi _{2l}]^{\rm T}$ , and $\varphi _{2j}$ is the Gaussian function.

      Therefore, the tracking error $z_{2}$ can be estimated by

      ${{\dot{h}}_{2}}={{x}_{3}}-{{\dot{\alpha }}_{1}}+\hat{\theta }_{2}^{\text{T}}{{g}_{2}}+\hat{W}_{2}^{\text{T}}{{\psi }_{2}}-{{l}_{2}}{{e}_{2}}$

      (23)

      where $h_{2}$ is the estimate of $z_{2}$ , $\hat{W}_{2}$ is the estimate of the NN weight, and $l_{2}>\frac{1}{2}$ is a gain introduced to accelerate the convergence of the estimator.

      The estimation error is defined as

      ${{e}_{2}}={{h}_{2}}-{{z}_{2}}.$

      (24)

      Define the Lyapunov function candidate as

      ${{V}_{2}}={{V}_{1}}+\frac{1}{2}z_{2}^{2}+\frac{1}{2}y_{2}^{2}+\frac{1}{2}\widetilde{\theta }_{2}^{\text{T}}\Lambda _{2}^{-1}{{\tilde{\theta }}_{2}}+\frac{1}{2}\tilde{W}_{2}^{\text{T}}\Gamma _{2}^{-1}{{\tilde{W}}_{2}}$

      (25)

      where ${{\tilde{\theta }}_{2}}={{\hat{\theta }}_{2}}+{{\hat{\eta }}_{2}}-{{\theta }_{2}}, {{\widetilde{W}}_{2}}={{\hat{W}}_{2}}+{{\hat{M}}_{2}}-W_{2}^{*}$ . $\Lambda _{2}>0$ and $\Gamma _{2}>0$ are diagonal matrices. $\hat{\eta}_{2}$ and $\hat{M}_{2}$ are the dynamic compensation terms, which will be designed later on.

      Then, the derivative of $V_{2}$ is given by

      $\begin{align} & {{{\dot{V}}}_{2}}={{{\dot{V}}}_{1}}+{{z}_{2}}{{{\dot{z}}}_{2}}+{{y}_{2}}{{{\dot{y}}}_{2}}+\tilde{\theta }_{2}^{\text{T}}\Lambda _{2}^{-1}({{{\dot{\hat{\theta }}}}_{2}}+{{{\dot{\hat{\eta }}}}_{2}})+ \\ & \ \ \ \ \ \ \tilde{W}_{2}^{\text{T}}\Gamma _{2}^{-1}({{{\dot{\hat{W}}}}_{2}}+{{{\dot{\hat{M}}}}_{2}})=\\ & \ \ \ \ \ \ {{{\dot{V}}}_{1}}+{{z}_{2}}({{z}_{3}}+{{y}_{3}}+\alpha _{2}^{*}+\theta _{2}^{\text{T}}{{g}_{2}}+W_{2}^{*\text{T}}{{\psi }_{2}}+{{\varepsilon }_{2}}-{{{\dot{\alpha }}}_{1}})+ \\ & \ \ \ \ \ {{y}_{2}}{{{\dot{y}}}_{2}}+\tilde{\theta }_{2}^{\text{T}}\Lambda _{2}^{-1}({{\overset{.}{\mathop{{\hat{\theta }}}}\, }_{2}}+{{{\dot{\hat{\eta }}}}_{2}})+\tilde{W}_{2}^{\text{T}}\Gamma _{2}^{-1}({{{\dot{\hat{W}}}}_{2}}+{{{\dot{\hat{M}}}}_{2}}). \\ \end{align}$

      (26)

      Furthermore, because of

      ${{z}_{2}}{{y}_{3}}\le \frac{1}{2}z_{2}^{2}+\frac{1}{2}y_{3}^{2}, \quad {{z}_{2}}{{\varepsilon }_{2}}\le \frac{1}{2}z_{2}^{2}+\frac{1}{2}\varepsilon _{2}^{2}$

      (27)

      the derivative of $V_{2}$ can be given by

      $\begin{align} & {{{\dot{V}}}_{2}}\le {{{\dot{V}}}_{1}}+{{z}_{2}}({{z}_{3}}+\alpha _{2}^{*}+\theta _{2}^{\text{T}}{{g}_{2}}+W_{2}^{*\text{T}}{{\psi }_{2}}-{{{\dot{\alpha }}}_{1}})+ \\ & \ \ \ \ \ \ z_{2}^{2}+\frac{1}{2}y_{3}^{2}+\frac{1}{2}\varepsilon _{2}^{2}+{{y}_{2}}{{\overset{.}{\mathop{y}}\, }_{2}}+ \\ & \ \ \ \ \ \ \tilde{\theta }_{2}^{\text{T}}\Lambda _{2}^{-1}({{{\dot{\hat{\theta }}}}_{2}}+{{\overset{.}{\mathop{\widehat{\eta }}}\, }_{2}})+\tilde{W}_{2}^{\text{T}}\Gamma _{2}^{-1}({{{\dot{\hat{W}}}}_{2}}+{{{\dot{\hat{M}}}}_{2}}). \\ \end{align}$

      (28)

      Choose

      $\begin{align} & \alpha _{2}^{*}=-{{k}_{2}}{{z}_{2}}-{{z}_{1}}-\hat{\theta }_{2}^{\text{T}}{{g}_{2}}-\hat{W}_{2}^{\text{T}}{{\psi }_{2}}+{{{\dot{\alpha }}}_{1}}-\\ & \ \ \ \ \ \ \ \hat{\eta }_{2}^{\text{T}}{{g}_{2}}-\hat{M}_{2}^{\text{T}}{{\psi }_{2}} \\ \end{align}$

      (29)

      where the design parameter $k_{2}>1.$ And the adaptive laws are designed as

      ${{\dot{\hat{\theta }}}_{2}}={{\Lambda }_{2}}(-{{e}_{2}}{{g}_{2}}-{{\delta }_{2}}{{\hat{\theta }}_{2}})$

      (30)

      ${{\dot{\hat{W}}}_{2}}={{\Gamma }_{2}}(-{{e}_{2}}{{\psi }_{2}}-{{\sigma }_{2}}{{\hat{W}}_{2}}).$

      (31)

      Moreover, the dynamic compensation terms are designed as

      ${{\dot{\hat{\eta }}}_{2}}={{\Lambda }_{2}}({{h}_{2}}{{g}_{2}}-{{\delta }_{2}}{{\hat{\eta }}_{2}})$

      (32)

      ${{\dot{\hat{M}}}_{2}}={{\Gamma }_{2}}({{h}_{2}}{{\psi }_{2}}-{{\sigma }_{2}}{{\hat{M}}_{2}})$

      (33)

      where $\delta _{2}$ and $\sigma _{2}$ are positive design parameters.

      Therefore, with the following inequalities:

      $-{{\delta }_{2}}\tilde{\theta }_{2}^{\text{T}}({{\tilde{\theta }}_{2}}+{{\theta }_{2}})\le-\frac{{{\delta }_{2}}}{2}\tilde{\theta }_{2}^{\text{T}}{{\tilde{\theta }}_{2}}+\frac{{{\delta }_{2}}}{2}\theta _{2}^{\text{T}}{{\theta }_{2}}$

      (34)

      $-{{\sigma }_{2}}\tilde{W}_{2}^{\text{T}}({{\tilde{W}}_{2}}+W_{2}^{*})\le \frac{{{\sigma }_{2}}}{2}W_{2}^{*\text{T}}W_{2}^{*}-\frac{{{\sigma }_{2}}}{2}\tilde{W}_{2}^{\text{T}}{{\tilde{W}}_{2}}$

      (35)

      the derivative of $V_{2}$ can be given by

      $\begin{align} & {{{\dot{V}}}_{2}}\le {{{\dot{V}}}_{1}}+{{z}_{2}}{{z}_{3}}-{{z}_{1}}{{z}_{2}}-({{k}_{2}}-1)z_{2}^{2}+\frac{1}{2}y_{3}^{2}+\frac{1}{2}\varepsilon _{2}^{2}+ \\ & \ \ \ \ {{y}_{2}}{{{\dot{y}}}_{2}}-\frac{{{\delta }_{2}}}{2}\tilde{\theta }_{2}^{\text{T}}{{{\tilde{\theta }}}_{2}}+\frac{{{\delta }_{2}}}{2}\theta _{2}^{\text{T}}{{\theta }_{2}}-\frac{{{\sigma }_{2}}}{2}\tilde{W}_{2}^{\text{T}}{{{\tilde{W}}}_{2}}+ \\ & \ \ \ \ \frac{{{\sigma }_{2}}}{2}W_{2}^{*\text{T}}W_{2}^{*}\le \\ & \ \ -({{k}_{1}}-1)z_{1}^{2}-({{k}_{2}}-1)z_{2}^{2}+{{z}_{2}}{{z}_{3}}+\frac{1}{2}y_{3}^{2}+{{N}_{1}}+{{N}_{2}}+ \\ & \ \ \ \ \ (\frac{1}{2}-\frac{1}{{{\lambda }_{2}}}+\frac{\mu _{2}^{2}}{\gamma _{2}^{2}})y_{2}^{2}-\sum\limits_{j=1}^{2}{(\frac{{{\delta }_{j}}}{2}\tilde{\theta }_{j}^{\text{T}}{{{\tilde{\theta }}}_{j}}+\frac{{{\sigma }_{j}}}{2}\tilde{W}_{j}^{\text{T}}{{{\tilde{W}}}_{j}})} \\ \end{align}$

      (36)

      where $\mu _{2}\geq \left\vert \dot{\alpha}_{1}^{\ast }\right\vert $ , the design parameter $\gamma _{2}>0, $ and $N_{2}=\frac{\delta _{2}}{2}\theta _{2}^{\rm T}\theta _{2}+\frac{\sigma _{2}}{2}W_{2}^{\ast{\rm T}}W_{2}^{\ast }+\frac{1}{2}\varepsilon _{2}^{2}+\frac{1}{4}\gamma _{2}^{2}.$

      In the $i$ -th step (for $i=3, \cdots, n-1$ ), let $\alpha _{i-1}^{\ast }$ be passed through the following first-order filter

      ${{\lambda }_{i}}{{\dot{\alpha }}_{i-1}}+{{\alpha }_{i-1}}=\alpha _{i-1}^{*}$

      (37)

      where $\lambda _{i}$ is a time constant.

      Using system (1) and state transformation (2), we have

      ${{\dot{z}}_{i}}={{\dot{x}}_{i}}-{{\dot{\alpha }}_{i-1}}={{x}_{i+1}}+\theta _{i}^{\text{T}}{{g}_{i}}+\vartriangle {{f}_{i}}-{{\dot{\alpha }}_{i-1}}.$

      (38)

      Choose the NN to approximate the uncertain time-varying delay function $ \triangle f_{i}$ , then

      ${{\dot{z}}_{i}}={{x}_{i+1}}+\theta _{i}^{\text{T}}{{g}_{i}}+W_{i}^{*\text{T}}{{\psi }_{i}}+{{\varepsilon }_{i}}-{{\dot{\alpha }}_{i-1}}$

      (39)

      where $W_{i}^{\ast }$ is the ideal NN weight vector, $\varepsilon _{i}$ is the NN approximation error, $\psi _{i}=[\varphi _{i1}, \cdots, \varphi _{il}]^{\rm T}$ , and $\varphi _{ij}$ is the Gaussian function.

      Therefore, the tracking error $z_{i}$ can be estimated by

      ${{\dot{h}}_{i}}={{x}_{i+1}}-{{\dot{\alpha }}_{i-1}}+\hat{\theta }_{i}^{\text{T}}{{g}_{i}}+\hat{W}_{i}^{\text{T}}{{\psi }_{i}}-{{l}_{i}}{{e}_{i}}$

      (40)

      where $h_{i}$ is the estimate of $z_{i}$ , $\hat{W}_{i}$ is the estimate of the NN weight, and $l_{i}>\frac{1}{2}$ is a gain introduced to accelerate the convergence of the estimator.

      The estimation error is defined as

      ${{e}_{i}}={{h}_{i}}-{{z}_{i}}.$

      (41)

      Define the Lyapunov function candidate as

      ${{V}_{i}}={{V}_{i-1}}+\frac{1}{2}z_{i}^{2}+\frac{1}{2}y_{i}^{2}+\frac{1}{2}\widetilde{\theta }_{i}^{\text{T}}\Lambda _{i}^{-1}{{\tilde{\theta }}_{i}}+\frac{1}{2}\tilde{W}_{i}^{\text{T}}\Gamma _{i}^{-1}{{\tilde{W}}_{i}}$

      (42)

      where ${{\tilde{\theta }}_{i}}={{\hat{\theta }}_{i}}+{{\hat{\eta }}_{i}}-{{\theta }_{i}}, {{\widetilde{W}}_{i}}={{\hat{W}}_{i}}+{{\hat{M}}_{i}}-W_{i}^{*}$ . $\Lambda _{i}>0$ and $\Gamma _{i}>0$ are diagonal matrices. $\hat{\eta}_{i}$ and $\hat{M}_{i}$ are the dynamic compensation terms, which will be designed later on.

      Then, the derivative of $V_{i}$ is given by

      $\begin{align} & {{{\dot{V}}}_{i}}\text{=}{{{\dot{V}}}_{i-1}}+{{z}_{i}}{{{\dot{z}}}_{i}}+{{y}_{i}}{{{\dot{y}}}_{i}}+\tilde{\theta }_{i}^{\text{T}}\Lambda _{i}^{-1}({{{\dot{\hat{\theta }}}}_{i}}+{{{\dot{\hat{\eta }}}}_{i}})+ \\ & \ \ \ \ \tilde{W}_{i}^{\text{T}}\Gamma _{i}^{-1}({{{\dot{\hat{W}}}}_{i}}+{{{\dot{\hat{M}}}}_{i}})=\\ & \ \ \ \ {{{\dot{V}}}_{i-1}}+{{y}_{i}}{{{\dot{y}}}_{i}}+{{z}_{i}}({{z}_{i+1}}+{{y}_{i+1}}+\alpha _{i}^{*}+ \\ & \ \ \ \ \theta _{i}^{\text{T}}{{g}_{i}}+W_{i}^{*\text{T}}{{\psi }_{i}}+{{\varepsilon }_{i}}-{{{\dot{\alpha }}}_{i-1}})+ \\ & \ \ \ \ \tilde{\theta }_{i}^{\text{T}}\Lambda _{i}^{-1}({{{\dot{\hat{\theta }}}}_{i}}+{{\overset{.}{\mathop{\widehat{\eta }}}\, }_{i}})+\tilde{W}_{i}^{\text{T}}\Gamma _{i}^{-1}({{{\dot{\hat{W}}}}_{i}}+{{{\dot{\hat{M}}}}_{i}}). \\ \end{align}$

      (43)

      Furthermore, because of

      ${{z}_{i}}{{y}_{i+1}}\le \frac{1}{2}z_{i}^{2}+\frac{1}{2}y_{i+1}^{2}, \quad {{z}_{i}}{{\varepsilon }_{i}}\le \frac{1}{2}z_{i}^{2}+\frac{1}{2}\varepsilon _{i}^{2}$

      (44)

      the derivative of $V_{i}$ can be given by

      $\begin{align} & {{{\dot{V}}}_{i}}\le {{{\dot{V}}}_{i-1}}+{{z}_{i}}({{z}_{i+1}}+\alpha _{i}^{*}+\theta _{i}^{\text{T}}{{g}_{i}}+W_{i}^{*\text{T}}{{\psi }_{i}}-{{{\dot{\alpha }}}_{i-1}})+ \\ & \ \ \ \ z_{i}^{2}+\frac{1}{2}y_{i+1}^{2}+\frac{1}{2}\varepsilon _{i}^{2}+{{y}_{i}}{{{\dot{y}}}_{i}}+ \\ & \ \ \ \ \tilde{\theta }_{i}^{\text{T}}\Lambda _{i}^{-1}({{{\dot{\hat{\theta }}}}_{i}}+{{\overset{.}{\mathop{\widehat{\eta }}}\, }_{i}})+\tilde{W}_{i}^{\text{T}}\Gamma _{i}^{-1}({{{\dot{\hat{W}}}}_{i}}+{{{\dot{\hat{M}}}}_{i}}). \\ \end{align}$

      (45)

      Choose

      $\begin{align} & \alpha _{i}^{*}=-{{k}_{i}}{{z}_{i}}-{{z}_{i-1}}-\hat{\theta }_{i}^{\text{T}}{{g}_{i}}-\hat{W}_{i}^{\text{T}}{{\psi }_{i}}+{{{\dot{\alpha }}}_{i-1}}-\\ & \ \ \ \ \ \ \ \hat{\eta }_{i}^{\text{T}}{{g}_{i}}-\hat{M}_{i}^{\text{T}}{{\psi }_{i}} \\ \end{align}$

      (46)

      where the design parameter $k_{i}>1$ . And the adaptive laws are designed as

      ${{\dot{\hat{\theta }}}_{i}}={{\Lambda }_{i}}(-{{e}_{i}}{{g}_{i}}-{{\delta }_{i}}{{\hat{\theta }}_{i}})$

      (47)

      ${{\dot{\hat{W}}}_{i}}={{\Gamma }_{i}}(-{{e}_{i}}{{\psi }_{i}}-{{\sigma }_{i}}{{\hat{W}}_{i}}).$

      (48)

      Moreover, the dynamic compensation terms are designed as

      ${{\dot{\hat{\eta }}}_{i}}={{\Lambda }_{i}}({{h}_{i}}{{g}_{i}}-{{\delta }_{i}}{{\hat{\eta }}_{i}})$

      (49)

      ${{\dot{\hat{M}}}_{i}}={{\Gamma }_{i}}({{h}_{i}}{{\psi }_{i}}-{{\sigma }_{i}}{{\hat{M}}_{i}})$

      (50)

      where $\delta _{i}$ and $\sigma _{i}$ are positive design parameters.

      Therefore, with the following inequalities:

      $-{{\delta }_{i}}\tilde{\theta }_{i}^{\text{T}}({{\tilde{\theta }}_{i}}+{{\theta }_{i}})\le-\frac{{{\delta }_{i}}}{2}\tilde{\theta }_{i}^{\text{T}}{{\tilde{\theta }}_{i}}+\frac{{{\delta }_{i}}}{2}\theta _{i}^{\text{T}}{{\theta }_{i}}$

      (51)

      $-{{\sigma }_{i}}\tilde{W}_{i}^{\text{T}}({{\tilde{W}}_{i}}+W_{i}^{*})\le-\frac{{{\sigma }_{i}}}{2}\tilde{W}_{i}^{\text{T}}{{\tilde{W}}_{i}}+\frac{{{\sigma }_{i}}}{2}W_{i}^{*\text{T}}W_{i}^{*}$

      (52)

      we obtain

      $\begin{align} & {{{\dot{V}}}_{i}}\le {{{\dot{V}}}_{i-1}}+{{z}_{i}}{{z}_{i+1}}-{{z}_{i-1}}{{z}_{i}}-({{k}_{i}}-1)z_{i}^{2}+\frac{1}{2}y_{i+1}^{2}+ \\ & \ \ \ \ \frac{1}{2}\varepsilon _{i}^{2}+{{y}_{i}}{{{\dot{y}}}_{i}}-\frac{{{\delta }_{i}}}{2}\tilde{\theta }_{i}^{\text{T}}{{{\tilde{\theta }}}_{i}}+\frac{{{\delta }_{i}}}{2}\theta _{i}^{\text{T}}{{\theta }_{i}}-\\ & \ \ \ \ \frac{{{\sigma }_{i}}}{2}\tilde{W}_{i}^{\text{T}}{{{\tilde{W}}}_{i}}+\frac{{{\sigma }_{i}}}{2}W_{i}^{*\text{T}}W_{i}^{*}. \\ \end{align}$

      (53)

      Furthermore, with the following inequality

      $\begin{align} & {{y}_{i}}{{{\dot{y}}}_{i}}={{y}_{i}}({{{\dot{\alpha }}}_{i-1}}-\dot{\alpha }_{i-1}^{*})={{y}_{i}}(-\frac{1}{{{\lambda }_{i}}}{{y}_{i}}-\dot{\alpha }_{i-1}^{*})\le \\ & \ \ \ \ \ \ \ -\frac{1}{{{\lambda }_{i}}}y_{i}^{2}+{{\mu }_{i}}\left| {{y}_{i}} \right|\le -\frac{1}{{{\lambda }_{i}}}y_{i}^{2}+\frac{\mu _{i}^{2}}{\gamma _{i}^{2}}y_{i}^{2}+\frac{1}{4}\gamma _{i}^{2} \\ \end{align}$

      (54)

      where $\mu _{i}\geq \left\vert \dot{\alpha}_{i-1}^{\ast }\right\vert, \gamma _{i}>0$ is the design parameter, the derivative of $V_{i}$ can be given by

      $\begin{align} & {{{\dot{V}}}_{i}}\le-\overset{i}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, ({{k}_{j}}-1)z_{j}^{2}+{{z}_{i}}{{z}_{i+1}}+\frac{1}{2}y_{i+1}^{2}+ \\ & \ \ \ \ \ \ \overset{i}{\mathop{\underset{j=2}{\mathop{\sum }}\, }}\, (\frac{1}{2}-\frac{1}{{{\lambda }_{j}}}+\frac{\mu _{j}^{2}}{\gamma _{j}^{2}})y_{j}^{2}-\\ & \ \ \ \ \ \ \overset{i}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, (\frac{{{\delta }_{j}}}{2}\tilde{\theta }_{j}^{\text{T}}{{{\tilde{\theta }}}_{j}}+\frac{{{\sigma }_{j}}}{2}\tilde{W}_{j}^{\text{T}}{{{\tilde{W}}}_{j}})+\overset{i}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, {{N}_{j}} \\ \end{align}$

      (55)

      where ${{N}_{1}}=\frac{{{\delta }_{1}}}{2}\theta _{1}^{\text{T}}{{\theta }_{1}}+\frac{{{\sigma }_{1}}}{2}W_{1}^{*\text{T}}W_{1}^{*}+\frac{1}{2}\varepsilon _{1}^{2}$ and ${{N}_{j}}=\frac{{{\delta }_{j}}}{2}\theta _{j}^{\text{T}}{{\theta }_{j}}+\frac{{{\sigma }_{j}}}{2}W_{j}^{*\text{T}}W_{j}^{*}+\frac{1}{2}\varepsilon _{j}^{2}+\frac{1}{4}\gamma _{j}^{2}$ , $j=2, \cdots, n-1$ .

      In the $n$ -th step, let $\alpha _{n-1}^{\ast }$ be passed through the following first-order filter

      ${{\lambda }_{n}}{{\dot{\alpha }}_{n-1}}+{{\alpha }_{n-1}}=\alpha _{n-1}^{*}$

      (56)

      where $\lambda _{n}$ is a time constant.

      Using system (1) and state transformation (2), we have

      ${{\dot{z}}_{n}}={{\dot{x}}_{n}}-{{\dot{\alpha }}_{n-1}}=u+\theta _{n}^{\text{T}}{{g}_{n}}+\vartriangle {{f}_{n}}-{{\dot{\alpha }}_{n-1}}.$

      (57)

      Choose the NN to approximate the uncertain time-varying delay function $ \triangle f_{n}$ , then

      ${{\dot{z}}_{n}}=u+\theta _{n}^{\text{T}}{{g}_{n}}+W_{n}^{*\text{T}}{{\psi }_{n}}+{{\varepsilon }_{n}}-{{\dot{\alpha }}_{n-1}}$

      (58)

      where $W_{n}^{\ast }$ is the ideal NN weight vector, $\varepsilon _{n}$ is the NN approximation error, $\psi _{n}=[\varphi _{n1}, \cdots, \varphi _{nl}]^{\rm T}$ , and $\varphi _{nj}$ is the Gaussian function.

      Therefore, the tracking error $z_{n}$ can be estimated by

      ${{\dot{h}}_{n}}=u-{{\dot{\alpha }}_{n-1}}+\hat{\theta }_{n}^{\text{T}}{{g}_{n}}+\hat{W}_{n}^{\text{T}}{{\psi }_{n}}-{{l}_{n}}{{e}_{n}}$

      (59)

      where $h_{n}$ is the estimate of $z_{n}$ , $\hat{W}_{n}$ is the estimate of the NN weight, and $l_{n}>\frac{1}{2}$ is a gain introduced to accelerate the convergence of the estimator.

      The estimation error is defined as

      ${{e}_{n}}={{h}_{n}}-{{z}_{n}}.$

      (60)

      Define the Lyapunov function candidate as

      ${{V}_{n}}={{V}_{n-1}}+\frac{1}{2}z_{n}^{2}+\frac{1}{2}y_{n}^{2}+\frac{1}{2}\widetilde{\theta }_{n}^{\text{T}}\Lambda _{n}^{-1}{{\tilde{\theta }}_{n}}+\frac{1}{2}\tilde{W}_{n}^{\text{T}}\Gamma _{n}^{-1}{{\tilde{W}}_{n}}$

      (61)

      where ${{\tilde{\theta }}_{n}}={{\hat{\theta }}_{n}}+{{\hat{\eta }}_{n}}-{{\theta }_{n}}, {{\widetilde{W}}_{n}}={{\hat{W}}_{n}}+{{\hat{M}}_{n}}-W_{n}^{*}$ . $\Lambda _{n}>0$ and $\Gamma _{n}>0$ are diagonal matrices. $\hat{\eta}_{n}$ and $\hat{M}_{n}$ are the dynamic compensation terms, which will be designed later on.

      Then, the derivative of $V_{n}$ is given by

      $\begin{align} & {{{\dot{V}}}_{n}}={{{\dot{V}}}_{n-1}}+{{z}_{n}}{{{\dot{z}}}_{n}}+{{y}_{n}}{{{\dot{y}}}_{n}}+\tilde{\theta }_{n}^{\text{T}}\Lambda _{n}^{-1}({{{\dot{\hat{\theta }}}}_{n}}+{{{\dot{\hat{\eta }}}}_{n}})+ \\ & \ \ \ \ \ \ \tilde{W}_{n}^{\text{T}}\Gamma _{n}^{-1}({{{\dot{\hat{W}}}}_{n}}+{{{\dot{\hat{M}}}}_{n}})=\\ & \ \ \ \ \ \ {{{\dot{V}}}_{n-1}}+{{z}_{n}}(u+\theta _{n}^{\text{T}}{{g}_{n}}+W_{n}^{*\text{T}}{{\psi }_{n}}+{{\varepsilon }_{n}}-{{{\dot{\alpha }}}_{n-1}})+ \\ & \ \ \ \ \ \ {{y}_{n}}{{{\dot{y}}}_{n}}+\tilde{\theta }_{n}^{\text{T}}\Lambda _{n}^{-1}({{{\dot{\hat{\theta }}}}_{n}}+{{{\dot{\hat{\eta }}}}_{n}})+\tilde{W}_{n}^{\text{T}}\Gamma _{n}^{-1}({{{\dot{\hat{W}}}}_{n}}+{{{\dot{\hat{M}}}}_{n}}). \\ \end{align}$

      (62)

      Choose

      $\begin{align} & u=-{{k}_{n}}{{z}_{n}}-{{z}_{n-1}}-\hat{\theta }_{n}^{\text{T}}{{g}_{n}}-\hat{W}_{n}^{\text{T}}{{\psi }_{n}}+{{{\dot{\alpha }}}_{n-1}}-\\ & \ \ \ \ \ \ \ \hat{\eta }_{n}^{\text{T}}{{g}_{n}}-\hat{M}_{n}^{\text{T}}{{\psi }_{n}} \\ \end{align}$

      (63)

      where the design parameter $k_{n}>\frac{1}{2}.$

      And, the adaptive laws are designed as

      ${{\dot{\hat{\theta }}}_{n}}={{\Lambda }_{n}}(-{{e}_{n}}{{g}_{n}}-{{\delta }_{n}}{{\hat{\theta }}_{n}})$

      (64)

      ${{\dot{\hat{W}}}_{n}}={{\Gamma }_{n}}(-{{e}_{n}}{{\psi }_{n}}-{{\sigma }_{n}}{{\hat{W}}_{n}}).$

      (65)

      Moreover, the dynamic compensation terms are designed as

      ${{\dot{\hat{\eta }}}_{n}}={{\Lambda }_{n}}({{h}_{n}}{{g}_{n}}-{{\delta }_{n}}{{\hat{\eta }}_{n}})$

      (66)

      ${{\dot{\hat{M}}}_{n}}={{\Gamma }_{n}}({{h}_{n}}{{\psi }_{n}}-{{\sigma }_{n}}{{\hat{M}}_{n}})$

      (67)

      where $\delta _{n}$ and $\sigma _{n}$ are positive design parameters.
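      As with the earlier steps, the final control input (63) can be written compactly in code. The sketch below is illustrative only; the regressors, gains and dictionary layout are assumptions made for the example, not the authors' implementation.

```python
import numpy as np

def control_input(zn, zn_minus1, alpha_nm1_dot, gn, psin, est, gains):
    """Final DSC control law (63): u = -kn*zn - z_{n-1}
    - (theta_n_hat + eta_n_hat)^T gn - (Wn_hat + Mn_hat)^T psin + d(alpha_{n-1})/dt."""
    return (-gains["kn"] * zn
            - zn_minus1
            - (est["theta_hat"] + est["eta_hat"]) @ gn
            - (est["W_hat"] + est["M_hat"]) @ psin
            + alpha_nm1_dot)

# Illustrative call with two-dimensional gn, a 5-node RBF regressor and arbitrary numbers.
est = {"theta_hat": np.zeros(2), "eta_hat": np.zeros(2),
       "W_hat": np.zeros(5), "M_hat": np.zeros(5)}
u = control_input(zn=0.1, zn_minus1=0.05, alpha_nm1_dot=0.0,
                  gn=np.ones(2), psin=np.ones(5), est=est, gains={"kn": 2.0})
print(u)  # -0.25 for these placeholder values
```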

      Therefore, with the following inequalities:

      $-{{\delta }_{n}}\tilde{\theta }_{n}^{\text{T}}({{\tilde{\theta }}_{n}}+{{\theta }_{n}})\le-\frac{{{\delta }_{n}}}{2}\tilde{\theta }_{n}^{\text{T}}{{\tilde{\theta }}_{n}}+\frac{{{\delta }_{n}}}{2}\theta _{n}^{\text{T}}{{\theta }_{n}}$

      (68)

      $-{{\sigma }_{n}}\tilde{W}_{n}^{\text{T}}({{\tilde{W}}_{n}}+W_{n}^{*})\le \frac{{{\sigma }_{n}}}{2}W_{n}^{*\text{T}}W_{n}^{*}-\frac{{{\sigma }_{n}}}{2}\tilde{W}_{n}^{\text{T}}{{\tilde{W}}_{n}}$

      (69)

      $\begin{align} & {{y}_{n}}{{{\dot{y}}}_{n}}={{y}_{n}}({{{\dot{\alpha }}}_{n-1}}-\dot{\alpha }_{n-1}^{*})= \\ & \ \ \ \ \ \ \ \ {{y}_{n}}(-\frac{1}{{{\lambda }_{n}}}{{y}_{n}}-\dot{\alpha }_{n-1}^{*})\le -\frac{1}{{{\lambda }_{n}}}y_{n}^{2}+{{\mu }_{n}}\left| {{y}_{n}} \right|\le \\ & \ \ \ \ \ \ \ -\frac{1}{{{\lambda }_{n}}}y_{n}^{2}+\frac{\mu _{n}^{2}}{\gamma _{n}^{2}}y_{n}^{2}+\frac{1}{4}\gamma _{n}^{2} \\ \end{align}$

      (70)

      where $\mu _{n}\geq \left\vert \dot{\alpha}_{n-1}^{\ast }\right\vert$ and $\gamma _{n}>0$ is a design parameter, we have

      $\begin{align} & {{{\dot{V}}}_{n}}\le-\overset{n-1}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, ({{k}_{j}}-1)z_{j}^{2}-({{k}_{n}}-\frac{1}{2})z_{n}^{2}+ \\ & \ \ \ \ \ \ \overset{n}{\mathop{\underset{j=2}{\mathop{\sum }}\, }}\, (\frac{1}{2}-\frac{1}{{{\lambda }_{j}}}+\frac{\mu _{j}^{2}}{\gamma _{j}^{2}})y_{j}^{2}-\\ & \ \ \ \ \ \ \overset{n}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, (\frac{{{\delta }_{j}}}{2}\tilde{\theta }_{j}^{\text{T}}{{{\tilde{\theta }}}_{j}}+\frac{{{\sigma }_{j}}}{2}\tilde{W}_{j}^{\text{T}}{{{\tilde{W}}}_{j}})+\overset{n}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, {{N}_{j}} \\ \end{align}$

      (71)

      where ${{N}_{1}}=\frac{{{\delta }_{1}}}{2}\theta _{1}^{\text{T}}{{\theta }_{1}}+\frac{{{\sigma }_{1}}}{2}W_{1}^{*\text{T}}W_{1}^{*}+\frac{1}{2}\varepsilon _{1}^{2}$ and ${{N}_{j}}=\frac{{{\delta }_{j}}}{2}\theta _{j}^{\text{T}}{{\theta }_{j}}+\frac{{{\sigma }_{j}}}{2}W_{j}^{*\text{T}}W_{j}^{*}+\frac{1}{2}\varepsilon _{j}^{2}+\frac{1}{4}\gamma _{j}^{2}$ , $j=2, \cdots, n$ .

      If we select appropriate design parameters $\lambda _{j}$ , $\mu _{j}$ and ${{\gamma }_{j}}, (j=2, \cdots, n)$ to ensure that

      $\rho _{j}^{*}=-\frac{1}{2}+\frac{1}{{{\lambda }_{j}}}-\frac{\mu _{j}^{2}}{\gamma _{j}^{2}}\ge 0$

      then

      $\begin{align} & {{{\dot{V}}}_{n}}\le-\overset{n-1}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, ({{k}_{j}}-1)z_{j}^{2}-({{k}_{n}}-\frac{1}{2})z_{n}^{2}-\overset{n}{\mathop{\underset{j=2}{\mathop{\sum }}\, }}\, \rho _{j}^{*}y_{j}^{2}-\\ & \ \ \ \ \ \ \overset{n}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, (\frac{{{\delta }_{j}}}{2}\tilde{\theta }_{j}^{\text{T}}{{{\tilde{\theta }}}_{j}}+\frac{{{\sigma }_{j}}}{2}\tilde{W}_{j}^{\text{T}}{{{\tilde{W}}}_{j}})+\overset{n}{\mathop{\underset{j=1}{\mathop{\sum }}\, }}\, {{N}_{j}}. \\ \end{align}$

      (72)
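      Before running the controller, one can verify this condition numerically for a candidate parameter set. The small Python check below is an illustrative aid with arbitrary values, not part of the original design.

```python
def rho_star(lam, mu, gamma):
    """rho_j^* = -1/2 + 1/lam_j - mu_j^2 / gamma_j^2; the design requires rho_j^* >= 0."""
    return -0.5 + 1.0 / lam - mu ** 2 / gamma ** 2

# Example: lam_j = 0.05, a bound mu_j = 2 on |d(alpha*_{j-1})/dt| and gamma_j = 1.
print(rho_star(0.05, 2.0, 1.0))   # 15.5 >= 0, so the condition holds
```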

      Based on the above derivations, we are ready to present the main result in Theorem 1.

      Theorem 1. For the $n$-th order nonlinear system with unknown parameters and uncertain time-varying delays described by (1), the tracking error $z_{1}=x_{1}-x_{d}$ is asymptotically stable in the sense of uniform ultimate boundedness under the indirect adaptive NN dynamic surface controller (63), the adaptive laws (64) and (65), and the dynamic compensation terms (66) and (67).

      Remark 2. In the dynamic surface control design procedure, an indirect adaptive NN is integrated into each step for the first time, and the weight estimation algorithm of the indirect adaptive neural networks is proved to be effective in the Appendix.

4. Simulation example

In this section, a simple chaotic circuit system is simulated with the indirect adaptive NN dynamic surface controller (63) and the adaptive laws (64)-(67).

      The dynamic equations of the chaotic circuit system with a control input[21], shown in Fig. 1, are described by

      Figure 1.  A chaotic Chua's circuit

      $\left\{ \begin{array}{*{35}{l}} {{{\dot{v}}}_{c1}}(t)=\frac{1}{{{C}_{1}}}(\frac{1}{R}({{v}_{c2}}(t)-{{v}_{c1}}(t))-g({{v}_{c1}}(t))) \\ {{{\dot{v}}}_{c2}}(t)=\frac{1}{{{C}_{2}}}(\frac{1}{R}({{v}_{c1}}(t)-{{v}_{c2}}(t))+{{i}_{L}}(t)) \\ {{{\dot{i}}}_{L}}(t)=\frac{1}{L}(-{{v}_{c2}}(t)+u) \\ \end{array} \right.$

      (73)

      where $i_{L}(t)$ is the inductor current, $v_{c1}(t)$ and $v_{c2}(t)$ are the capacitor voltages, $C_{1}$ and $C_{2}$ are capacitance parameters, $L$ and $R$ represent the self-inductance coefficient and resistance value, and $g(v_{c1}(t))=-v_{c1}(t)+(0.02+0.005\sin \left(\frac{t}{5}\right))v_{c1}^{3}(t)$ denotes the current through the nonlinear resistor.

      Let $x_{1}(t)=LC_{1}C_{2}Rv_{c1}(t), $ $x_{2}(t)=LC_{2}v_{c2}(t), $ $x_{3}(t)=Li_{L}(t), $ then the dynamic chaotic circuit system can be rewritten as

      $\left\{ \begin{align} & {{{\dot{x}}}_{1}}(t)={{x}_{2}}(t)+\theta _{1}^{\text{T}}{{g}_{1}}({{{\bar{x}}}_{1}}) \\ & {{{\dot{x}}}_{2}}(t)={{x}_{3}}(t)+\theta _{2}^{\text{T}}{{g}_{2}}({{{\bar{x}}}_{2}}) \\ & {{{\dot{x}}}_{3}}(t)=u+\theta _{3}^{\text{T}}{{g}_{3}}({{{\bar{x}}}_{3}}) \\ \end{align} \right.$

      (74)

      where $\theta _{1}^{\rm T}=\left[-\frac{1}{C_{1}R}, -LC_{2}R\right], $ $\theta _{2}^{\rm T}=\left[\frac{1}{C_{1}C_{2}R^{2}}, -\frac{1}{C_{2}R}\right], $ $\theta _{3}^{\rm T}=\left[-\frac{1}{LC_{2}}\right], $ $g_{1}(\bar{x}_{1})=[x_{1}(t), g(x_{1})]^{\rm T}, $ ${{g}_{2}}({{\bar{x}}_{2}})={{[{{x}_{1}}(t), {{x}_{2}}(t)]}^{\text{T}}}, $ $g_{3}(\bar{x}_{3})=x_{2}(t)$ and $g({{x}_{1}})=-\left(\frac{1}{L{{C}_{1}}{{C}_{2}}R} \right){{x}_{1}}(t)+\left(0.02+0.005\sin \left(\frac{t}{5} \right)\right){{\left(\frac{1}{L{{C}_{1}}{{C}_{2}}R} \right)}^{3}}x_{1}^{3}(t)$ .

      In addition, taking the uncertain time delays into account[11], the dynamic system can be described by

      $\left\{ \begin{align} & {{{\dot{x}}}_{1}}(t)={{x}_{2}}(t)+\theta _{1}^{\text{T}}{{g}_{1}}({{{\bar{x}}}_{1}})+\Delta {{f}_{1}}({{x}_{1}}(t-{{\tau }_{1}})) \\ & {{{\dot{x}}}_{2}}(t)={{x}_{3}}(t)+\theta _{2}^{\text{T}}{{g}_{2}}({{{\bar{x}}}_{2}})+\Delta {{f}_{2}}({{x}_{2}}(t-{{\tau }_{2}})) \\ & {{{\dot{x}}}_{3}}(t)=u+\theta _{3}^{\text{T}}{{g}_{3}}({{{\bar{x}}}_{3}})+\Delta {{f}_{3}}({{x}_{3}}(t-{{\tau }_{3}})) \\ \end{align} \right.$

      (75)

      where $\Delta f_{1}(x_{1}(t-\tau _{1}))=\sin (x_{1}(t-\tau _{1})), $ $\Delta f_{2}(x_{2}(t-\tau _{2}))=x_{1}(t-\tau _{2})x_{2}(t-\tau _{2}), $ $\Delta f_{3}(x_{3}(t-\tau _{3}))=\sin (x_{3}(t-\tau _{3})).$ The system parameters are given as $C_{1}=0.5, C_{2}=0.5, R=0.5, L=15, \tau _{1}=0.1, \tau _{2}=0.2, \tau _{3}=0.3.$ If the reference signal is $x_{d}=0.2\sin (2t), $ the simulation results are shown in Fig. 2. If the reference signal is $x_{d}=0.2(\sin (2t)+\sin (t)), $ the simulation results are shown in Fig. 3.

      Figure 2.  Curves of states $x_{1}(t), x_{d}(t)$ and $z_{1}(t)$ with constant time delays and $x_{d}=0.2{\rm sin}(t)$

      Figure 3.  Curves of states $x_{1}(t), x_{d}(t)$ and $ z_{1}(t)$ with constant time delays and $x_{d}=0.2(\sin(2t)+\sin(t))$

      Furthermore, by taking the uncertain time-varying delays into account, such as $\tau _{1}(t)=\tau _{2}(t)=\tau _{3}(t)=1+0.5\sin (t), $ the simulations are run. The result is shown in Fig. 4.

      Figure 4.  Curves of states $x_{1}(t), x_{d}(t)$ and $ z_{1}(t)$ with time varying delays and $x_{d}=0.2{\rm sin}(t)$

      By inspecting Figs. 2-4, it is easy to see that the closed-loop system has excellent tracking performance, no matter which smooth continuous reference signal is given. Moreover, with appropriate design parameters, the controller is equally effective whether constant or time-varying delays are considered.
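      For readers who wish to reproduce a baseline, the following Python sketch integrates the delayed model (75) with the parameters listed above, handling the delayed states with a stored history buffer. It is an open-loop skeleton written for illustration (the controller (63) and the NN updates would replace the placeholder input u = 0); it is not the authors' simulation code, and the initial history and step size are arbitrary.

```python
import numpy as np

# Circuit parameters used in the example and the constant delays.
C1, C2, R, L = 0.5, 0.5, 0.5, 15.0
tau = [0.1, 0.2, 0.3]
c = 1.0 / (L * C1 * C2 * R)                      # recurring factor 1/(L*C1*C2*R)

def g_nl(x1, t):
    """Nonlinear-resistor term g(x1) in the transformed coordinates."""
    return -c * x1 + (0.02 + 0.005 * np.sin(t / 5.0)) * c ** 3 * x1 ** 3

def rhs(t, x, xd1, xd2, xd3, u):
    """Right-hand side of the delayed model (75); xdk is the state at t - tau_k."""
    x1, x2, x3 = x
    dx1 = x2 - x1 / (C1 * R) - L * C2 * R * g_nl(x1, t) + np.sin(xd1[0])
    dx2 = x3 + x1 / (C1 * C2 * R ** 2) - x2 / (C2 * R) + xd2[0] * xd2[1]
    dx3 = u - x2 / (L * C2) + np.sin(xd3[2])
    return np.array([dx1, dx2, dx3])

dt, T = 1e-3, 2.0
delay_steps = [int(round(tk / dt)) for tk in tau]
history = [np.array([0.1, 0.0, 0.0])] * (max(delay_steps) + 1)   # constant initial history

x = history[-1].copy()
for k in range(int(T / dt)):
    t = k * dt
    delayed = [history[-1 - m] for m in delay_steps]   # x(t - tau_k), zero-order hold
    u = 0.0                                            # open loop; replace with (63)
    x = x + dt * rhs(t, x, *delayed, u)
    history.append(x.copy())

print("state at t = 2 s:", history[-1])
```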

5. Conclusions

A novel indirect adaptive NN DSC scheme has been constructed for a class of nonlinear systems with uncertain time-varying delays and unknown parameters. Compared with conventional approaches, its main advantage is simplicity, since the problem of ``explosion of complexity'' is avoided. Moreover, by incorporating the indirect adaptive NN into the DSC framework and selecting appropriate design parameters, the closed-loop system achieves excellent tracking performance and remains in a desired region. Finally, the simulation results have shown the effectiveness of the proposed method.

      In brief, this initial study has presented some promising results, but many open problems remain for follow-up research. Potential studies for the control of more complex systems could cover new neural network structures and new adaptive algorithms to adjust the weights.

Appendix

In the backstepping procedure, the weight estimation algorithm of the indirect adaptive NN can be analyzed at each step as follows.

      In the first step, the estimation error dynamics are

      ${{\dot{e}}_{1}}={{({{\hat{\theta }}_{1}}-{{\theta }_{1}})}^{\text{T}}}{{g}_{1}}+{{({{\hat{W}}_{1}}-W_{1}^{*})}^{\text{T}}}{{\psi }_{1}}-{{\varepsilon }_{1}}-{{l}_{1}}{{e}_{1}}.$

      (A1)

      Choose the Lyapunov function

      $\begin{align} & {{V}_{1}}=\frac{1}{2}e_{1}^{2}+\frac{1}{2}{{({{{\hat{\theta }}}_{1}}-{{\theta }_{1}})}^{\text{T}}}\Lambda _{1}^{-1}({{{\hat{\theta }}}_{1}}-{{\theta }_{1}})+ \\ & \ \ \ \ \ \ \ \frac{1}{2}{{({{{\hat{W}}}_{1}}-W_{1}^{*})}^{\text{T}}}\Gamma _{1}^{-1}({{{\hat{W}}}_{1}}-W_{1}^{*}). \\ \end{align}$

      (A2)

      With the adaptive law (13) and NN weight vector adaptive law (14), the derivative of $V_{1}$ is given by

      $\begin{align} & {{{\dot{V}}}_{1}}={{e}_{1}}{{{\dot{e}}}_{1}}+{{({{{\hat{\theta }}}_{1}}-{{\theta }_{1}})}^{\text{T}}}\Lambda _{1}^{-1}{{{\dot{\hat{\theta }}}}_{1}}+{{({{{\hat{W}}}_{1}}-W_{1}^{*})}^{\text{T}}}\Gamma _{1}^{-1}{{{\dot{\hat{W}}}}_{1}}\le \\ & \ \ \ \ \ -({{l}_{1}}-\frac{1}{2})e_{1}^{2}+\frac{1}{2}\varepsilon _{1}^{2}-\\ & \ \ \ \ \ \ {{\delta }_{1}}{{({{{\hat{\theta }}}_{1}}-{{\theta }_{1}})}^{\text{T}}}{{{\hat{\theta }}}_{1}}-{{\sigma }_{1}}{{({{{\hat{W}}}_{1}}-W_{1}^{*})}^{\text{T}}}{{{\hat{W}}}_{1}}\le \\ & \ \ \ \ \ -({{l}_{1}}-\frac{1}{2})e_{1}^{2}-\frac{{{\delta }_{1}}}{2}{{({{{\hat{\theta }}}_{1}}-{{\theta }_{1}})}^{\text{T}}}({{{\hat{\theta }}}_{1}}-{{\theta }_{1}})-\\ & \ \ \ \ \ \ \frac{{{\sigma }_{1}}}{2}{{({{{\hat{W}}}_{1}}-W_{1}^{*})}^{\text{T}}}({{{\hat{W}}}_{1}}-W_{1}^{*})+{{\Pi }_{1}} \\ \end{align}$

      (A3)

      where ${{l}_{1}}>\frac{1}{2}, {{\Pi }_{1}}=\frac{1}{2}\varepsilon _{1}^{2}+\frac{{{\delta }_{1}}}{2}\theta _{1}^{\text{T}}{{\theta }_{1}}+\frac{{{\sigma }_{1}}}{2}W_{1}^{*\text{T}}W_{1}^{*}.$

      In the $i$-th step, $i=2, \cdots, n-1$, the estimation error dynamics are

      ${{\dot{e}}_{i}}={{({{\hat{\theta }}_{i}}-{{\theta }_{i}})}^{\text{T}}}{{g}_{i}}+{{({{\hat{W}}_{i}}-W_{i}^{*})}^{\text{T}}}{{\psi }_{i}}-{{\varepsilon }_{i}}-{{l}_{i}}{{e}_{i}}.$

      (A4)

      Choose the Lyapunov function

      $\begin{align} & {{V}_{i}}=\frac{1}{2}e_{i}^{2}+\frac{1}{2}{{({{{\hat{\theta }}}_{i}}-{{\theta }_{i}})}^{\text{T}}}\Lambda _{i}^{-1}({{{\hat{\theta }}}_{i}}-{{\theta }_{i}})+ \\ & \ \ \ \ \ \ \frac{1}{2}{{({{{\hat{W}}}_{i}}-W_{i}^{*})}^{\text{T}}}\Gamma _{i}^{-1}({{{\hat{W}}}_{i}}-W_{i}^{*}). \\ \end{align}$

      (A5)

      With the adaptive law (47) and NN weight vector adaptive law (48), the derivative of $V_{i}$ is given by

      $\begin{align} & {{{\dot{V}}}_{i}}={{e}_{i}}{{{\dot{e}}}_{i}}+{{({{{\hat{\theta }}}_{i}}-{{\theta }_{i}})}^{\text{T}}}\Lambda _{i}^{-1}{{{\dot{\hat{\theta }}}}_{i}}+{{({{{\hat{W}}}_{i}}-W_{i}^{*})}^{\text{T}}}\Gamma _{i}^{-1}{{{\dot{\hat{W}}}}_{i}}\le \\ & \ \ \ \ \ -({{l}_{i}}-\frac{1}{2})e_{i}^{2}+\frac{1}{2}\varepsilon _{i}^{2}-\\ & \ \ \ \ \ {{\delta }_{i}}{{({{{\hat{\theta }}}_{i}}-{{\theta }_{i}})}^{\text{T}}}{{{\hat{\theta }}}_{i}}-{{\sigma }_{i}}{{({{{\hat{W}}}_{i}}-W_{i}^{*})}^{\text{T}}}{{{\hat{W}}}_{i}}\le \\ & \ \ \ \ -({{l}_{i}}-\frac{1}{2})e_{i}^{2}-\frac{{{\delta }_{i}}}{2}{{({{{\hat{\theta }}}_{i}}-{{\theta }_{i}})}^{\text{T}}}({{{\hat{\theta }}}_{i}}-{{\theta }_{i}})-\\ & \ \ \ \ \ \frac{{{\sigma }_{i}}}{2}{{({{{\hat{W}}}_{i}}-W_{i}^{*})}^{\text{T}}}({{{\hat{W}}}_{i}}-W_{i}^{*})+{{\Pi }_{i}}\text{ } \\ \end{align}$

      (A6)

      where ${{l}_{i}}>\frac{1}{2}, {{\Pi }_{i}}=\frac{1}{2}\varepsilon _{i}^{2}+\frac{{{\delta }_{i}}}{2}\theta _{i}^{\text{T}}{{\theta }_{i}}+\frac{{{\sigma }_{i}}}{2}W_{i}^{*\text{T}}W_{i}^{*}.$

      In the $n$-th step, the estimation error dynamics are

      ${{\dot{e}}_{n}}={{({{\hat{\theta }}_{n}}-{{\theta }_{n}})}^{\text{T}}}{{g}_{n}}+{{({{\hat{W}}_{n}}-W_{n}^{*})}^{\text{T}}}{{\psi }_{n}}-{{\varepsilon }_{n}}-{{l}_{n}}{{e}_{n}}.$

      (A7)

      Choose the Lyapunov function

      $\begin{align} & {{V}_{n}}=\frac{1}{2}e_{n}^{2}+\frac{1}{2}{{({{{\hat{\theta }}}_{n}}-{{\theta }_{n}})}^{\text{T}}}\Lambda _{n}^{-1}({{{\hat{\theta }}}_{n}}-{{\theta }_{n}})+ \\ & \ \ \ \ \ \ \frac{1}{2}{{({{{\hat{W}}}_{n}}-W_{n}^{*})}^{\text{T}}}\Gamma _{n}^{-1}({{{\hat{W}}}_{n}}-W_{n}^{*}). \\ \end{align}$

      (A8)

      With the adaptive law (64) and NN weight vector adaptive law (65), the derivative of $V_{n}$ is given by

      $\begin{align} & {{{\dot{V}}}_{n}}={{e}_{n}}{{{\dot{e}}}_{n}}+{{({{{\hat{\theta }}}_{n}}-{{\theta }_{n}})}^{\text{T}}}\Lambda _{n}^{-1}{{{\dot{\hat{\theta }}}}_{n}}+{{({{{\hat{W}}}_{n}}-W_{n}^{*})}^{\text{T}}}\Gamma _{n}^{-1}{{{\dot{\hat{W}}}}_{n}}\le \\ & \ \ \ \ \ -({{l}_{n}}-\frac{1}{2})e_{n}^{2}+\frac{1}{2}\varepsilon _{n}^{2}-\\ & \ \ \ \ \ \ {{\delta }_{n}}{{({{{\hat{\theta }}}_{n}}-{{\theta }_{n}})}^{\text{T}}}{{{\hat{\theta }}}_{n}}-{{\sigma }_{n}}{{({{{\hat{W}}}_{n}}-W_{n}^{*})}^{\text{T}}}{{{\hat{W}}}_{n}}\le \\ & \ \ \ \ \ -({{l}_{n}}-\frac{1}{2})e_{n}^{2}-\frac{{{\delta }_{n}}}{2}{{({{{\hat{\theta }}}_{n}}-{{\theta }_{n}})}^{\text{T}}}({{{\hat{\theta }}}_{n}}-{{\theta }_{n}})-\\ & \ \ \ \ \ \ \frac{{{\sigma }_{n}}}{2}{{({{{\hat{W}}}_{n}}-W_{n}^{*})}^{\text{T}}}({{{\hat{W}}}_{n}}-W_{n}^{*})+{{\Pi }_{n}} \\ \end{align}$

      (A9)

      where ${{l}_{n}}>\frac{1}{2}, {{\Pi }_{n}}=\frac{1}{2}\varepsilon _{n}^{2}+\frac{{{\delta }_{n}}}{2}\theta _{n}^{\text{T}}{{\theta }_{n}}+\frac{{{\sigma }_{n}}}{2}W_{n}^{*\text{T}}W_{n}^{*}.$

      Hence, from the above derivations, if $l_{i}>\frac{1}{2}$, $i=1, \cdots, n$, the estimation error of the indirect adaptive NN at each step is bounded.
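      As an illustrative numerical check of this boundedness claim (a sketch with arbitrary bounded signals and gains, not a proof and not from the paper), one can simulate a scalar version of the error dynamics (A1) together with the adaptive laws (13) and (14):

```python
import numpy as np

# Scalar illustration of the error dynamics (A1) with the adaptive laws (13)-(14).
# Gains, "true" parameters and the bounded signals below are arbitrary choices.
l1, delta1, sigma1, Lam1, Gam1 = 2.0, 0.1, 0.1, 5.0, 5.0
theta1, W1_star = 1.5, -0.8                      # unknown constants

e1, theta1_hat, W1_hat = 0.5, 0.0, 0.0
dt = 1e-3
e_trace = []
for k in range(20000):                           # 20 s of simulation
    t = k * dt
    g1 = np.sin(t)                               # bounded known regressor
    psi1 = np.exp(-np.sin(2.0 * t) ** 2)         # bounded RBF-like regressor
    eps1 = 0.05 * np.cos(3.0 * t)                # bounded approximation error
    e1_dot = (theta1_hat - theta1) * g1 + (W1_hat - W1_star) * psi1 - eps1 - l1 * e1
    theta1_hat += dt * Lam1 * (-e1 * g1 - delta1 * theta1_hat)   # law (13)
    W1_hat += dt * Gam1 * (-e1 * psi1 - sigma1 * W1_hat)         # law (14)
    e1 += dt * e1_dot
    e_trace.append(abs(e1))

print("max |e1| over the last second:", max(e_trace[-1000:]))
```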
