A New Approach to State Estimation for Uncertain Linear Systems in a Moving Horizon Estimation Setting

J. Garcia-Tirado H. Botero F. Angulo

Citation: J. Garcia-Tirado, H. Botero and F. Angulo. A New Approach to State Estimation for Uncertain Linear Systems in a Moving Horizon Estimation Setting. International Journal of Automation and Computing, vol. 13, no. 6, pp. 653-664, 2016. doi: 10.1007/s11633-016-1015-1

doi: 10.1007/s11633-016-1015-1

Funds: 

This work was supported by the European Community's Seventh Framework Programme FP7/2007-2013 under Grant 223854.

    Author Bio:

    H. Botero received the B. Sc. degree in electrical engineering and the specialist degree in industrial automation from the University of Antioquia, Colombia, and the M. Sc. degree in engineering from the University of Valle, Colombia. He received the Ph. D. degree from the National University of Colombia at the Medellin Campus. He is currently with the Department of Electrical Energy and Automatics, National University of Colombia, Medellín, Colombia.
    His research interests include state estimation, identification of generation control systems, and education in engineering.
    E-mail: habotero@unal.edu.co

    F. Angulo received the B. Sc. degree in electrical engineering with honors, the M. Sc. degree in automatics, and the Ph. D. degree in automatics and robotics from the National University of Colombia, Colombia in 1989, the National University of Colombia, Colombia in 2000, and the Polytechnic University of Catalonia, Spain in 2004, respectively. She is currently an associate professor in the Department of Electrical Engineering, Electronics, and Computer Science, National University of Colombia, Colombia. She is a member of the Research Group Perception and Intelligent Control-PCI.
    Her research interests include nonlinear control, nonlinear dynamics of nonsmooth systems, and applications to DC/DC converters.
    E-mail: fangulog@unal.edu.co

    Corresponding author: J. Garcia-Tirado received the B. Sc. degree in control engineering from the National University of Colombia in 2006, and the M. Sc. degree from CINVESTAV at Guadalajara, Mexico in 2009. He received the Ph. D. degree with honors from the School of Processes and Energy at the National University of Colombia in early 2014. He is currently an associate professor in the Department of Quality and Production at the Metropolitan Institute of Technology, Medellin, Colombia.
    His research interests include robust estimation theory, receding-horizon control and estimation, model-based control, and control of biological processes.
    E-mail: josegarcia@itm.edu.co
    ORCID iD: 0000-0002-9970-2162
Publication History
  • Received: 2014-12-10
  • Accepted: 2016-01-11
  • Published online: 2016-09-02
  • Published in issue: 2016-12-01


    • The problem of estimating the state of uncertain linear systems has been well known since the beginning of the 1980s[1]. This problem attracted the interest of many researchers due to the inability of classic filters to address uncertainties in the statistics of the inputs and/or in the parameters of the model. As is well known, the Kalman filter (KF) has long been widely used to solve the ${H}_{2}$-optimal state estimation problem subject to the complete knowledge of the stochastic properties of the disturbing noises[2]. However, if the statistics of the noises are not known or are not properly identified, the Kalman filter may not work adequately. Therefore, suitable estimation strategies are needed[3-5].

      In most cases, additional available information about the system dynamics exists, which can improve the estimation procedure. Although this information is not part of the model itself, it does give heuristic knowledge about the real system. Realistic bounds on the states, allowable paths, optimal trajectories, and expected disturbances are some examples of a priori knowledge that can be useful to improve estimations. Normally, this additional information is mathematically characterized by means of equalities and inequalities involving the variables in the model. However, classic approaches like the Kalman-based filters and robust filters in general do not take into account the use of a priori information in the form of constraints[6,7]. Filters able to handle both constraints and uncertainties are nowadays needed to feed robust controllers like those presented in [8-10].

      An optimization-based estimation procedure is the key to obtaining both of the features mentioned above, i.e., robust estimates fulfilling constraints on the system variables. In this way, the moving horizon estimator (MHE) philosophy appears to be useful to deal with the aforementioned challenge. The classic MHE rewrites the optimal estimation problem in an optimization-based procedure, allowing the natural addition of useful insight in the form of constraints[6,7,11,12]. The basic strategy of MHE in the linear framework reformulates the estimation problem as a quadratic program using a moving, fixed-size estimation window. The fixed-size window is needed to bound the computational effort to solve the optimization problem and is the principal difference between MHE and the full information estimator (FIE)[6,11,12].

      To the best of the authors' knowledge, the existing robust versions of MHE can be mainly attributed to the efforts of Alessandri et al.[13-17]. Generally speaking, they have proposed receding horizon estimation strategies for uncertain linear systems that mainly account for the uncertainty in the system matrices, assuming low variability of the disturbing noises. In these contributions, the authors tackle the problem through the formulation of a minimax optimization problem using the cost function provided in [12], which is, in fact, the cost function of the classic formulation of the MHE. However, a general concern about these contributions is the difficulty of providing a solution even in the unconstrained case; the given solutions are posed over successive reformulations of the original statement. In fact, the authors were explicit about the high nonlinearity of the derived solutions and the complexity of obtaining both the numerical solutions and theoretical results.

      The present work provides an understandable and clear path to achieve an optimization-based estimation strategy for uncertain discrete-time linear systems. Our contribution differs from that presented by Alessandri et al. because we do not consider the uncertainty related to the matrices of the system but instead the uncertainty in the statistics of the disturbing inputs. Moreover, we reformulate the problem by using an alternative statement of the MHE to address the uncertainty with ${H}_{\infty}$ theory. As in [6], we provide a relationship between the full information estimation problem in an ${H}_\infty$ setting and its moving horizon approximation, namely the ${H}_\infty$-MHE. Sufficient conditions for the stability of the ${H}_\infty$-FIE and ${H}_\infty$-MHE are provided.

      The paper is organized as follows. In Section 2, the problem of estimating the state of an uncertain discrete-time linear system using an optimization-based framework is stated. Section 3 presents the ${H}_{\infty}$-FIE. Section 4 is divided into two parts. First, the ${H}_{\infty}$-MHE is defined as an approximation of the ${H}_{\infty}$-FIE by suitably defining the arrival cost. Then, the recursion for the weighting matrix of the estimation error is presented as a key element for the definition of the above arrival cost. A stability analysis for both the ${H}_{\infty}$-FIE and the ${H}_{\infty}$-MHE is provided in Section 5. Section 6 shows the potential and benefits of the proposed scheme with a numerical example. Finally, conclusions and future work are given in Section 7.

    • Consider a dynamic system modeled by the following uncertain discrete-time linear system

      $\begin{align} & {{x}_{k+1}}=A{{x}_{k}}+G{{w}_{k}} \\ & {{y}_{k}}=C{{x}_{k}}+{{\nu }_{k}} \\ \end{align}$

      (1)

      where $x_{k} \in {\bf R}^{n}$ is the system state. Vectors $w_{k} \in {\bf R}^{m}$ and $\nu_{k} \in {\bf R}^{p}$ are the unknown model and measurement uncertainties, respectively, where only bounds on $w_{k}$ and $\nu_{k}$ are known. Let $x\left(t;x_{0},\left\lbrace w_{k} \right\rbrace_{k=0}^{t-1}\right)$ denote the solution of (1) at time step $t$ subject to the initial condition $x_{0}$ and the disturbance sequence $\left\lbrace w_{k}\right\rbrace_{k=0}^{t-1} = \left\lbrace w_{0},w_{1},\cdots,w_{t-1} \right\rbrace$

      $x\left( t;x_{0},\left\{ w_{k} \right\}_{k=0}^{t-1} \right)=A^{t}x_{0}+\sum\limits_{k=0}^{t-1}A^{t-k-1}Gw_{k}.$

      (2)

      For the sake of simplicity, let the solution of the state equation and the modeling disturbance sequence be written as $x_{t}=x\left( t;x_{0},\left\{ w_{k} \right\}_{k=0}^{t-1} \right)$ and $\left\{ w_{k} \right\}=\left\{ w_{k} \right\}_{k=0}^{t-1}$, respectively. The analogous notation applies for $\left\{ \nu_{k} \right\}$.
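
      To make the notation concrete, the following minimal Python sketch simulates (1) step by step and checks the result against the closed-form solution (2); the matrices, bounds, and disturbance sequences are hypothetical placeholders and not data from the paper.

      import numpy as np

      # Hypothetical second-order system used only to illustrate (1) and (2).
      A = np.array([[0.97, 0.10], [-0.61, 0.95]])
      G = np.array([[0.0], [0.12]])
      C = np.array([[1.0, 0.0]])

      rng = np.random.default_rng(0)
      t = 20
      x0 = np.array([1.0, 0.0])
      w = rng.uniform(-0.1, 0.1, size=(t, 1))    # bounded model disturbance
      v = rng.uniform(-0.01, 0.01, size=(t, 1))  # bounded measurement uncertainty

      # Step-by-step simulation of (1).
      x = x0.copy()
      for k in range(t):
          y_k = C @ x + v[k]                     # measurement at time k
          x = A @ x + G @ w[k]

      # Closed-form solution (2): x_t = A^t x0 + sum_k A^(t-k-1) G w_k.
      x_closed = np.linalg.matrix_power(A, t) @ x0
      for k in range(t):
          x_closed += np.linalg.matrix_power(A, t - k - 1) @ G @ w[k]

      assert np.allclose(x, x_closed)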

      The unconstrained estimation for uncertain discrete-time linear systems can be formulated as the solution of the following minimax problem

      $\bar{\psi}_{t}^{*}=\min_{x_{0}}\max_{\left\{ w_{k} \right\},\left\{ \nu_{k} \right\}}\bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\},\left\{ \nu_{k} \right\} \right)$

      (3)

      with the objective function $\bar{\psi}_{t}$ not as in the classic MHE[6,12], but as an objective function in the form of a disturbance attenuation function to obtain a robust setting[3,18]. Using the game-theoretic approximation of ${H}_{\infty}$ filtering, we propose an objective function from the following performance criterion

      $\psi_{t}\left( x_{0},\left\{ w_{k} \right\},\left\{ \nu_{k} \right\} \right)=\frac{\|x_{0}-\bar{x}_{0}\|_{\Pi_{0}^{-1}}^{2}}{\sum\limits_{k=0}^{t-1}\left( \|w_{k}\|_{Q^{-1}}^{2}+\|\nu_{k}\|_{R^{-1}}^{2} \right)}$

      (4)

      where $\bar{x}_{0}$ is an a priori guess of the initial state as in the classic FIE approach and $x_{0}$, $w_{k}$, and $\nu_{k}$ with $k=0,\cdots,t-1$ are, in principle, the free parameters to be found. $\Pi_{0}^{-1}$, $Q^{-1}$, and $R^{-1}$ are symmetric positive definite matrices weighting the guess on the initial condition and the uncertain inputs, respectively. A performance criterion like (4) helps to obtain a worst-case estimate, i.e., an estimate of $x_{0}$ considering the presence of the worst system disturbances. In fact, an objective function derived from (4) prevents the maximizing player from using arbitrarily large disturbances to increase the cost[3,18]. Instead, the optimizer must find a clever choice of the uncertainties to maximize (4). Because the direct minimization of (4) is not tractable, a common practice is to force the disturbance attenuation function (4) to fulfill a performance bound[3,18]. In this way, assuming $\left(A,G \right)$ controllable and $\left(C,A \right)$ detectable, the optimal estimate of the initial state $x_{0}^{*}$ among all possible $x_{0}$ should satisfy

      $\underset{\left\{ {{w}_{k}} \right\},\left\{ {{\nu }_{k}} \right\}}{\mathop{\sup }}\,{{\psi }_{t}}\left( {{x}_{0}},\left\{ {{w}_{k}} \right\},\left\{ {{\nu }_{k}} \right\} \right)\le \frac{1}{\gamma }$

      (5)

      $\forall \left( \left\{ w_{k} \right\},\left\{ \nu_{k} \right\} \right)\in \left( L_{2},L_{2} \right)\ne 0$, where sup stands for supremum, $L_{2}$ is the space of square-summable sequences, and $\gamma > 0$ is the robust performance bound. If the disturbance attenuation function is bounded above by $\frac{1}{\gamma}$, then the ${H}_{\infty}$ norm of the transfer function matrix from the disturbances to the estimation error, $T_{ed}$, is also bounded from above[18,19], i.e.

      ${{\psi }_{t}}\left( {{x}_{0}},\left\{ {{w}_{k}} \right\},\left\{ {{\nu }_{k}} \right\} \right)\le \frac{1}{\gamma }\Rightarrow \|{{T}_{ed}}{{\|}_{\infty }}\le \theta $

      where $e = x_{0}-\bar{x}_{0}$, $d$ represents the uncertain sequences $\left\lbrace w_{k} \right\rbrace$ and $\left\lbrace \nu_{k} \right\rbrace$, and $\theta$ is a scalar. The solution of the preceding estimation problem in a moving horizon setting is obtained by rewriting the performance criterion (4) as a performance index for a zero-sum differential game

      $\begin{align} & \bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\},\left\{ \nu_{k} \right\} \right)=\frac{1}{2}\|x_{0}-\bar{x}_{0}\|_{\Pi_{0}^{-1}}^{2}- \\ & \frac{1}{2\gamma }\sum\limits_{k=0}^{t-1}\left( \|w_{k}\|_{Q^{-1}}^{2}+\|\nu_{k}\|_{R^{-1}}^{2} \right)\le 0. \\ \end{align}$

      (6)

      Remark 1. The assumption in (5) refers to the performance bound the designer imposes on the filter. If $\gamma \rightarrow \infty$ then $\frac{1}{\gamma} \rightarrow 0$, implying that the designer is requiring the filter to guarantee a lower maximum overshoot in the frequency domain. Conversely, if $\gamma \rightarrow 0$ then $\frac{1}{\gamma} \rightarrow \infty$ and the filter gradually loses robustness since a larger maximum overshoot in the frequency domain is allowed.

      Remark 2. Classic ${H}_{\infty}$ filters are special cases of the full information estimator[3,18]. However, the implementation of the classic ${H}_{\infty}$ filter differs considerably from the implementation of the full information estimator since the former is based on a recursive solution and the latter depends on the solution of an optimization problem at each sample time. It is also known that classic ${H}_{\infty}$ filters are not suitable for handling constraints as a consequence of their particular recursive solution.

    • In this section, the full information estimator in an ${H}_{\infty}$ setting, henceforth the ${H}_{\infty}$-FIE, is defined. Then an analytical solution is provided for the unconstrained case.

      Unconstrained ${H_{\infty}}$-FIE. Consider a system that is dynamically described by (1), where $w_{k}$ and $\nu_{k}$ are unknown but bounded. Given a sequence of $t-1$ output data, the estimate of $x_{t}$, denoted as $\hat{x}_{t\vert t-1}$, is computed by means of (2) as

      $\hat{x}_{t|t-1}=A^{t}\hat{x}_{0|t-1}+\sum\limits_{k=0}^{t-1}A^{t-k-1}Gw_{k}$

      with $\hat{x}_{0|t-1} = x_{0}^{*}$ and $\left\lbrace w_{0},w_{1},\cdots,w_{t-1} \right\rbrace= \left\lbrace w_{0}^{*},w_{1}^{*},\cdots,w_{t-1}^{*} \right\rbrace$ from the solution of the minimax problem (3) with $\bar{\psi}_{t}$ as in (6).

      The unconstrained ${H}_{\infty}$-FIE problem has an analytic solution since constraints are not initially assumed. To proceed with the explicit solution of the unconstrained ${H}_{\infty}$-FIE, the cost (6) is rewritten using the output equation

      $\begin{align} & \bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\} \right)=\frac{1}{2}\|x_{0}-\bar{x}_{0}\|_{\Pi_{0}^{-1}}^{2}- \\ & \frac{1}{2\gamma }\sum\limits_{k=0}^{t-1}\left( \|w_{k}\|_{Q^{-1}}^{2}+\|y_{k}-Cx_{k}\|_{R^{-1}}^{2} \right). \\ \end{align}$

      (7)

      Note that $\nu_{k}$ is a function of the observations $y_{k}$ and the system state $x_{k}$. Hence, the optimization problem (3) becomes

      $\bar{\psi}_{t}^{*}=\min_{x_{0}}\max_{\left\{ w_{k} \right\}}\bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\} \right).$

      (8)

      Now, let us rewrite the cost (7) without sums, i.e., in matrix form. For this purpose, the cost related to the measurement uncertainty is rewritten as a function of the initial condition by using (13), where $Y_{t} \in {\bf R}^{tp \times 1}$, $\Gamma_{t} \in {\bf R}^{tp \times n}$, $\Xi_{t} \in {\bf R}^{tp \times tm}$, $\tilde{w}_{t} \in {\bf R}^{tm \times 1}$, and $\tilde{\nu}_{t} \in {\bf R}^{tp \times 1}$. Thus, the cost related to the measurement can be rewritten as

      $\begin{align} & \sum\limits_{k=0}^{t-1}\|y_{k}-Cx_{k}\|_{R^{-1}}^{2}= \\ & \sum\limits_{k=0}^{t-1}\left( y_{k}-Cx_{k} \right)^{\rm T}R^{-1}\left( y_{k}-Cx_{k} \right)= \\ & \left( Y-\Gamma x_{0}-\Xi \tilde{w} \right)^{\rm T}\bar{R}\left( Y-\Gamma x_{0}-\Xi \tilde{w} \right) \\ \end{align}$

      (9)

      where the indexes are omitted for simplicity. With the above equations in mind, the cost (7) is rewritten in a more compact form as

      $\begin{align} & \bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\} \right)=\frac{1}{2}\left( x_{0}-\bar{x}_{0} \right)^{\rm T}\Pi_{0}^{-1}\left( x_{0}-\bar{x}_{0} \right)-\frac{1}{2\gamma }[\tilde{w}^{\rm T}\bar{Q}\tilde{w}+ \\ & \left( Y-\Gamma x_{0}-\Xi \tilde{w} \right)^{\rm T}\bar{R}\left( Y-\Gamma x_{0}-\Xi \tilde{w} \right)] \\ \end{align}$

      (10)

      where $\bar{R}=\bigoplus_{j=1}^{t}R_{j}^{-1}$ and $\bar{Q} = \bigoplus_{j=1}^{t} Q_{j}^{-1}$. The $\bigoplus$ operator denotes a block-diagonal matrix. Expanding the products, collecting terms, and neglecting the terms that do not depend on the optimization variables, (10) is rewritten as

      $\begin{align} & \bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\} \right)=\frac{1}{2}x_{0}^{\rm T}\left( \Pi_{0}^{-1}-\gamma^{-1}\Gamma^{\rm T}\bar{R}\Gamma \right)x_{0}- \\ & \frac{1}{2\gamma }\tilde{w}^{\rm T}\left( \Xi^{\rm T}\bar{R}\Xi +\bar{Q} \right)\tilde{w}+ \\ & \frac{1}{\gamma }\left( Y^{\rm T}\bar{R}\Gamma x_{0}+Y^{\rm T}\bar{R}\Xi \tilde{w}-\tilde{w}^{\rm T}\Xi^{\rm T}\bar{R}\Gamma x_{0} \right)- \\ & \bar{x}_{0}^{\rm T}\Pi_{0}^{-1}x_{0}. \\ \end{align}$

      (11)

      Note that (11) is written as a quadratic function of $x_{0}$ and $\tilde{w}$, the parameters to be optimized. Because there are no constraints, the critical point is found by taking the derivatives of $\bar{\psi}_{t}\left(x_{0},\left\lbrace w_{k} \right\rbrace\right)$ with respect to each optimization parameter

      $\begin{align} & \frac{\partial \bar{\psi}_{t}}{\partial x_{0}}=x_{0}^{\rm T}\left( \Pi_{0}^{-1}-\gamma^{-1}\Gamma^{\rm T}\bar{R}\Gamma \right)+ \\ & \frac{1}{\gamma }\left( Y^{\rm T}\bar{R}\Gamma -\tilde{w}^{\rm T}\Xi^{\rm T}\bar{R}\Gamma \right)-\bar{x}_{0}^{\rm T}\Pi_{0}^{-1} \\ \end{align}$

      (12a)

      $\begin{align} & \frac{\partial \bar{\psi}_{t}}{\partial \tilde{w}}=-\frac{1}{\gamma }\tilde{w}^{\rm T}\left( \Xi^{\rm T}\bar{R}\Xi +\bar{Q} \right)+ \\ & \frac{1}{\gamma }\left( Y^{\rm T}\bar{R}\Xi -x_{0}^{\rm T}\Gamma^{\rm T}\bar{R}\Xi \right) \\ \end{align}$

      (12b)

      $\underbrace{\left[ \begin{matrix} y_{1\mid t-1} \\ y_{2\mid t-1} \\ \vdots \\ y_{t\mid t-1} \\ \end{matrix} \right]}_{Y_{t}}=\underbrace{\left[ \begin{matrix} CA \\ CA^{2} \\ \vdots \\ CA^{t} \\ \end{matrix} \right]}_{\Gamma_{t}}x_{0}+\underbrace{\left[ \begin{matrix} CG & 0 & \cdots & 0 \\ CAG & CG & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ CA^{t-1}G & CA^{t-2}G & \cdots & CG \\ \end{matrix} \right]}_{\Xi_{t}}\underbrace{\left[ \begin{matrix} w_{0} \\ w_{1} \\ \vdots \\ w_{t-1} \\ \end{matrix} \right]}_{\tilde{w}_{t}}+\underbrace{\left[ \begin{matrix} \nu_{1} \\ \nu_{2} \\ \vdots \\ \nu_{t} \\ \end{matrix} \right]}_{\tilde{\nu}_{t}}.$

      (13)
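
      For concreteness, the Python sketch below assembles the stacked matrices $\Gamma_t$ and $\Xi_t$ of (13) from $A$, $C$, and $G$; the function name and loop layout are illustrative choices, not the authors' implementation.

      import numpy as np

      def stack_matrices(A, C, G, t):
          """Build Gamma_t and Xi_t of (13) for a window of t measurements.

          Gamma_t stacks C*A^i, i = 1..t; Xi_t is block lower-triangular with
          blocks C*A^(i-j-1)*G below the diagonal (illustrative construction).
          """
          p, m = C.shape[0], G.shape[1]
          Gamma = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, t + 1)])
          Xi = np.zeros((t * p, t * m))
          for i in range(1, t + 1):          # block row (measurement y_i)
              for j in range(i):             # block column (disturbance w_j)
                  blk = C @ np.linalg.matrix_power(A, i - j - 1) @ G
                  Xi[(i - 1) * p:i * p, j * m:(j + 1) * m] = blk
          return Gamma, Xi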

      If each of the derivatives above is set equal to zero, a two-equation system is obtained

      $\begin{align} & \frac{\partial \bar{\psi}_{t}}{\partial x_{0}}=0\Rightarrow x_{0}^{\rm T}=\left[ \bar{x}_{0}^{\rm T}\Pi_{0}^{-1}+\frac{1}{\gamma }\left( \tilde{w}^{\rm T}\Xi^{\rm T}\bar{R}\Gamma -Y^{\rm T}\bar{R}\Gamma \right) \right]\times \\ & \left( \Pi_{0}^{-1}-\gamma^{-1}\Gamma^{\rm T}\bar{R}\Gamma \right)^{-1} \\ \end{align}$

      (14a)

      $\frac{\partial \bar{\psi}_{t}}{\partial \tilde{w}}=0\Rightarrow \tilde{w}^{\rm T}=\left( Y^{\rm T}\bar{R}\Xi -x_{0}^{\rm T}\Gamma^{\rm T}\bar{R}\Xi \right)\left( \Xi^{\rm T}\bar{R}\Xi +\bar{Q} \right)^{-1}.$

      (14b)

      Solving (14a) and (14b), the critical point is then obtained as follows

      $\begin{align} & \left( x_{0}^{\rm T} \right)^{*}=\left( \gamma \bar{x}_{0}^{\rm T}\Pi_{0}^{-1}-Y^{\rm T}\bar{R}\Gamma +Y^{\rm T}\bar{R}\Xi M\Xi^{\rm T}\bar{R}\Gamma \right)\times \\ & \left( \gamma \Pi_{0}^{-1}-\Gamma^{\rm T}\bar{R}\Gamma +\Gamma^{\rm T}\bar{R}\Xi M\Xi^{\rm T}\bar{R}\Gamma \right)^{-1} \\ \end{align}$

      (15)

      $\begin{align} & \left( \tilde{w}^{\rm T} \right)^{*}=\left\{ Y^{\rm T}\bar{R}\Xi -\left[ \left( \gamma \bar{x}_{0}^{\rm T}\Pi_{0}^{-1}-Y^{\rm T}\bar{R}\Gamma +Y^{\rm T}\bar{R}\Xi M\Xi^{\rm T}\bar{R}\Gamma \right) \right. \right.\times \\ & \left. \left. \left( \gamma \Pi_{0}^{-1}-\Gamma^{\rm T}\bar{R}\Gamma +\Gamma^{\rm T}\bar{R}\Xi M\Xi^{\rm T}\bar{R}\Gamma \right)^{-1} \right]\Gamma^{\rm T}\bar{R}\Xi \right\}M \\ \end{align}$

      (16)

      with $M = \left(\Xi^{\rm T}\bar{R}\Xi + \bar{Q} \right)^{-1}$. Therefore, the optimal values of $x_{0}^{\rm T}$ and $\tilde{w}^{\rm T}$ are analytically determined. To verify the nature of the pair $\left(x_{0},\left\lbrace w \right\rbrace \right)$, the second derivatives of $\bar{\psi}_{t}$ with respect to $x_{0}^{\rm T}$ and $\tilde{w}^{\rm T}$ are computed.

      $\frac{\partial^{2}\bar{\psi}_{t}}{\partial x_{0}^{2}}=\Pi_{0}^{-1}-\gamma^{-1}\Gamma^{\rm T}\bar{R}\Gamma$

      (17a)

      $\frac{\partial^{2}\bar{\psi}_{t}}{\partial \tilde{w}^{2}}=-\gamma^{-1}\left( \Xi^{\rm T}\bar{R}\Xi +\bar{Q} \right).$

      (17b)

      Thus, in order to obtain a saddle-point solution, the positive and negative definiteness of the above second-order derivatives should be guaranteed

      $\Pi_{0}^{-1}-\gamma^{-1}\left( \Gamma^{\rm T}\bar{R}\Gamma \right)\succ 0$

      (18a)

      $\gamma^{-1}\left( \Xi^{\rm T}\bar{R}\Xi +\bar{Q} \right)\succ 0.$

      (18b)
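
      Under the assumption that the stacked quantities of (13) are available, the following Python sketch evaluates the critical point (15)-(16) and checks the saddle-point conditions (18a)-(18b); the function name, argument layout, and the use of np.kron for the block-diagonal weights are illustrative choices, not prescribed by the paper.

      import numpy as np

      def hinf_fie_unconstrained(xbar0, Pi0, Q, R, Y, Gamma, Xi, gamma):
          """Critical point (15)-(16) of the unconstrained H-infinity FIE (sketch).

          Gamma and Xi are the stacked matrices of (13); Rbar and Qbar are the
          block-diagonal weights of (10), built here for time-invariant Q and R.
          """
          t = Y.shape[0] // R.shape[0]
          Rbar = np.kron(np.eye(t), np.linalg.inv(R))
          Qbar = np.kron(np.eye(t), np.linalg.inv(Q))
          M = np.linalg.inv(Xi.T @ Rbar @ Xi + Qbar)

          # Saddle-point condition (18a); (18b) holds whenever Q and R are positive definite.
          S = np.linalg.inv(Pi0) - (Gamma.T @ Rbar @ Gamma) / gamma
          assert np.all(np.linalg.eigvalsh(S) > 0), "condition (18a) violated"

          lhs = gamma * np.linalg.inv(Pi0) - Gamma.T @ Rbar @ Gamma \
              + Gamma.T @ Rbar @ Xi @ M @ Xi.T @ Rbar @ Gamma
          rhs = gamma * np.linalg.inv(Pi0) @ xbar0 - Gamma.T @ Rbar @ Y \
              + Gamma.T @ Rbar @ Xi @ M @ Xi.T @ Rbar @ Y
          x0_opt = np.linalg.solve(lhs, rhs)                    # transpose of (15)
          w_opt = M @ (Xi.T @ Rbar @ (Y - Gamma @ x0_opt))      # transpose of (16)
          return x0_opt, w_opt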
    • The classic MHE is mainly formulated to deal with known models disturbed by zero-mean white noises with known statistics[6,11,12]. Under this setting, optimal designs are guaranteed in the Kalman sense only if no constraints are imposed over the system variables. However, most known strategies fail when constrained uncertain discrete-time linear systems arise.

      An analytical solution for the ${H}_{\infty}$-FIE was presented. However, the main drawback of this filter is the same as that of the classic moving horizon estimator, i.e., the number of optimization parameters grows at least linearly with time; thus, a realistic implementation of this filter becomes infeasible.

      This section presents a moving horizon approximation to the ${H}_{\infty}$-FIE described earlier. Instead of seeking the best initial state and worst disturbance sequence from the initial to the current time, the new scheme uses a fixed-size estimation window to bound the number of optimization parameters. The latter is accomplished by defining two key elements of the moving horizon approximation, namely, the arrival cost and the recursion of the matrix weighting the estimation error.

    • The arrival cost is an important concept in estimation theory[6]. It helps to approximate the effect of the old data on the state at the beginning of the estimation window. The arrival cost is an analogue of the cost-to-go, which is widely used in the model predictive control formulation[20]. In order to make the approximation, the arrival cost must be redefined to fit the statement of the estimation problem. To this end, consider (8), the optimization problem associated with the ${H}_{\infty}$-FIE. The objective function (7) can be rearranged by dividing it into two parts

      $\begin{align} & \bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\}_{k=0}^{t-1} \right)=\bar{\psi}_{t-N}\left( x_{0},\left\{ w_{k} \right\}_{k=0}^{t-N-1} \right)- \\ & \frac{1}{2\gamma }\sum\limits_{k=t-N}^{t-1}\left( \|w_{k}\|_{Q^{-1}}^{2}+\|y_{k}-Cx_{k}\|_{R^{-1}}^{2} \right) \\ \end{align}$

      (19)

      where $N$ is the estimation horizon. This parameter helps the estimation procedure to bound the size of the optimization problem to be solved at each sample time. The cost $\bar{\psi}_{t-N}$ must be approximated by the so-called arrival cost. The arrival cost summarizes the effect of the old data $\left\lbrace y_{k}\right\rbrace_{k=0}^{t-N-1}$ on the state $x_{t-N}$ to obtain an estimation procedure based on a fixed-size optimization problem[6]. Therefore, the redefinition of the optimization program (8) with cost (7) using the arrival cost is

      $\begin{align} & \underset{x_{0}}{\mathop{\min }}\,\underset{\left\{ w_{k} \right\}}{\mathop{\max }}\,\bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\} \right)\equiv \underset{z}{\mathop{\min }}\,\underset{\left\{ w_{k} \right\}}{\mathop{\max }}\,\Theta_{t-N}(z)- \\ & \frac{1}{2\gamma }\sum\limits_{k=t-N}^{t-1}\left( \|w_{k}\|_{Q^{-1}}^{2}+\|y_{k}-Cx_{k}\|_{R^{-1}}^{2} \right) \\ \end{align}$

      (20)

      with $\Theta_{t-N}(z)$, the arrival cost, defined as

      $\begin{align} & \Theta_{t}(z)=\underset{x_{0}}{\mathop{\min }}\,\underset{\left\{ w_{k} \right\}_{k=0}^{t-1}}{\mathop{\max }}\,\left\{ \bar{\psi}_{t}\left( x_{0},\left\{ w_{k} \right\}_{k=0}^{t-1} \right) \right.: \\ & \left. x\left( t;x_{0},\left\{ w_{k} \right\}_{k=0}^{t-1} \right)=z \right\} \\ \end{align}$

      (21)

      where the indexes on $w_{k}$ are recovered for the sake of clarity. One way to approximate (21) is by means of

      $\Theta_{t}(z)=\left( z-\bar{x}_{t} \right)^{\rm T}\Pi_{t}^{-1}\left( z-\bar{x}_{t} \right)+\bar{\psi}_{t}^{*}$

      (22)

      where $z$ is the unknown parameter, $\bar{x}_{t}$ is an a priori guess on $x_{t\vert t-1}$, and $\Pi_{k}$, $k=0,\cdots,t$, is a matrix weighting the confidence we have on the a priori estimation at time $t$. The importance of this weighting matrix sequence is emphasized by Jazwinski in the following statement: "The knowledge of $\Pi_{k}$ is just as important as knowing the estimate of $x_{k}$ itself. An estimate is meaningless unless one knows how good it is"[21]. At time $t$, the arrival cost can be used to rewrite (20) as (23), where $\hat{\psi}_{t}^{*}$ is used instead of $\bar{\psi}_{t}^{*}$ because of the approximation made by means of the arrival cost. $\bar{x}_{t-N}$ is the moving horizon estimate of the state at time $t-N$, i.e., $\bar{x}_{t-N} = \hat{x}_{t-N\vert t-N-1}$, instead of a user-defined guess as in an FIE-based scheme. The pair $\left(\bar{x}_{t-N},\Pi_{t-N} \right)$ summarizes the prior information at time $t-N$. For $t \leq N$, the ${H}_{\infty}$-MHE is equivalent to the ${H}_{\infty}$-FIE. With the above elements in mind, the unconstrained ${H}_{\infty}$-MHE is defined as follows.

      $\begin{align} & \hat{\psi}_{t}^{*}(x_{t-N},\left\{ w_{k} \right\})=\underset{x_{t-N}}{\mathop{\min }}\,\underset{\left\{ w_{k} \right\}_{k=t-N}^{t-1}}{\mathop{\max }}\,-\frac{1}{2\gamma }\sum\limits_{k=t-N}^{t-1}\left( \|w_{k}\|_{Q^{-1}}^{2}+\|y_{k}-Cx_{k}\|_{R^{-1}}^{2} \right)+ \\ & \frac{1}{2}\left( x_{t-N}-\bar{x}_{t-N} \right)^{\rm T}\Pi_{t-N}^{-1}\left( x_{t-N}-\bar{x}_{t-N} \right)+\bar{\psi}_{t-N}^{*} \\ \end{align}$

      (23)

      Unconstrained ${H_{\infty}}$-MHE. Consider a system that is dynamically described by (1), where $w_{k}$ and $\nu_{k}$ are unknown but bounded. Given a sequence of $t-N$ output data, the estimate of $x_{t}$, denoted by $\hat{x}_{t \mid t-1}$, is computed by means of (2) modified as

      $\hat{x}_{t|t-1}=A^{N}\hat{x}_{t-N|t-1}+\sum\limits_{k=t-N}^{t-1}A^{t-k-1}Gw_{k}$

      (24)

      where $\hat{x}_{t-N|t-1}=x_{t-N}^{*}$ and $\left\lbrace w_{t-N},w_{t-N+1},\cdots,w_{t-1} \right\rbrace= \left\lbrace w_{t-N}^{*},w_{t-N+1}^{*},\cdots,w_{t-1}^{*} \right\rbrace$ are obtained from the solution of the following minimax problem

      $\hat{\psi}_{t}^{*}=\underset{x_{t-N}}{\mathop{\min }}\,\underset{\left\{ w_{k} \right\}}{\mathop{\max }}\,\hat{\psi}_{t}\left( x_{t-N},\left\{ w_{k} \right\} \right)$

      (25)

      with

      $\begin{align} & \hat{\psi}_{t}\left( x_{t-N},\left\{ w_{k} \right\} \right)=\frac{1}{2}\|x_{t-N}-\bar{x}_{t-N}\|_{\Pi_{t-N}^{-1}}^{2}+\frac{1}{2}\hat{\psi}_{t-N}^{*}- \\ & \frac{1}{2\gamma }\sum\limits_{k=t-N}^{t-1}\left( \|w_{k}\|_{Q^{-1}}^{2}+\|y_{k}-C_{k}x_{k}\|_{R^{-1}}^{2} \right). \\ \end{align}$

      (26)

      As in the unconstrained ${H}_{\infty}$-FIE, an explicit solution is obtained as

      $\begin{align} & \left( x_{t-N}^{\rm T} \right)^{*}=\left( \gamma \bar{x}_{t-N}^{\rm T}\Pi_{t-N}^{-1}-Y^{\rm T}\bar{R}\Gamma +Y^{\rm T}\bar{R}\Xi M\Xi^{\rm T}\bar{R}\Gamma \right)\times \\ & \left( \gamma \Pi_{t-N}^{-1}-\Gamma^{\rm T}\bar{R}\Gamma +\Gamma^{\rm T}\bar{R}\Xi M\Xi^{\rm T}\bar{R}\Gamma \right)^{-1} \\ \end{align}$

      (27)

      $\begin{align} & \left( \tilde{w}^{\rm T} \right)^{*}=\left\{ Y^{\rm T}\bar{R}\Xi -\left[ \left( \gamma \bar{x}_{t-N}^{\rm T}\Pi_{t-N}^{-1}-Y^{\rm T}\bar{R}\Gamma +Y^{\rm T}\bar{R}\Xi M\Xi^{\rm T}\bar{R}\Gamma \right) \right. \right.\times \\ & \left. \left. \left( \gamma \Pi_{t-N}^{-1}-\Gamma^{\rm T}\bar{R}\Gamma +\Gamma^{\rm T}\bar{R}\Xi M\Xi^{\rm T}\bar{R}\Gamma \right)^{-1} \right]\Gamma^{\rm T}\bar{R}\Xi \right\}M \\ \end{align}$

      (28)

      where $\Pi_{t-N}$, the matrix weighting the confidence over $\bar{x}_{t-N}$, is presented as follows.

    • According to the analysis above, it is clear that an MHE approximation to the ${H}_{\infty}$-FIE is possible if a suitable matrix is chosen to weight the estimation error at time $t-N$. To obtain a recursive solution for $\Pi_{t}$, a variational approach to the minimax problem is used[22].

      Consider (6) rewritten as

      $\begin{align} & \bar{\psi}_{t}=\|x_{0}-\bar{x}_{0}\|_{\Pi_{0}^{-1}}^{2}-\frac{1}{\gamma }\sum\limits_{k=0}^{t-1}(\|w_{k}\|_{Q_{k}^{-1}}^{2}+\|y_{k}- \\ & C_{k}x_{k}\|_{R_{k}^{-1}}^{2})=\varphi (x_{0})+\sum\limits_{k=0}^{t-1}\mathcal{L}_{k} \\ \end{align}$

      (29)

      where $\varphi (x_{0})=\|x_{0}-\bar{x}_{0}\|_{\Pi_{0}^{-1}}^{2}$ and $\mathcal{L}_{k}=-\frac{1}{\gamma }(\|w_{k}\|_{Q_{k}^{-1}}^{2}+\|y_{k}-C_{k}x_{k}\|_{R_{k}^{-1}}^{2}).$

      The form of (29) allows the direct use of the Hamiltonian formulation to find the properties of stationarity of $\bar{\psi}_{t}$ with respect to $x_{0}$ and $w_{k}$[22-24].

      Consider the augmented version of $\bar{\psi}_{t}$ in (29), namely, $\bar{\psi}_{t}^{a}$, where the dynamics of the system are included as a set of soft constraints

      $\begin{align} & \bar{\psi}_{t}^{a}=\varphi (x_{0})+ \\ & \sum\limits_{k=0}^{t-1}\left[ \mathcal{L}_{k}+2\lambda_{k+1}^{\rm T}\left( A_{k}x_{k}+G_{k}w_{k}-x_{k+1} \right) \right] \\ \end{align}$

      (30)

      with $\lambda_{1},\cdots,\lambda_{t}$ the Lagrange multipliers associated with the aforementioned constraints. Using some algebra, (30) is rewritten in terms of the Hamiltonian as

      $\bar{\psi}_{t}^{a}=\varphi (x_{0})+\sum\limits_{k=0}^{t-1}\left( H_{k}-2\lambda_{k}^{\rm T}x_{k} \right)-2\lambda_{t}^{\rm T}x_{t}+2\lambda_{0}^{\rm T}x_{0}$

      (31)

      where $H_{k}$, the Hamiltonian, is defined as

      $H_{k}=\mathcal{L}_{k}+2\lambda_{k+1}^{\rm T}\left( A_{k}x_{k}+G_{k}w_{k} \right).$

      (32)

      A constrained stationary point of $\bar{\psi}_{t}^{a}$ must satisfy the following necessary conditions[22]

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial x_{k}}=0,\quad k=0,\cdots ,t$

      (33a)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial w_{k}}=0,\quad k=0,\cdots ,t-1$

      (33b)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial \lambda_{k}}=0,\quad k=0,\cdots ,t.$

      (33c)

      The above equations can be written as

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial x_{0}}=0$

      (34a)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial x_{t}}=0$

      (34b)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial x_{k}}=0,\quad k=1,\cdots ,t-1$

      (34c)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial w_{k}}=0,\quad k=0,\cdots ,t-1$

      (34d)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial \lambda_{k}}=0,\quad k=0,\cdots ,t.$

      (34e)

      Condition (34e) ensures the fulfilment of the soft constraints, i.e., the plant dynamics. The first four conditions imply, respectively

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial x_{0}}=0\Rightarrow x_{0}=\bar{x}_{0}+\Pi_{0}\lambda_{0}$

      (35a)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial w_{k}}=0\Rightarrow w_{k}=\gamma Q_{k}G_{k}^{\rm T}\lambda_{k+1}$

      (35b)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial x_{t}}=0\Rightarrow \lambda_{t}=0$

      (35c)

      $\frac{\partial \bar{\psi}_{t}^{a}}{\partial x_{k}}=0\Rightarrow \frac{\partial H_{k}}{\partial x_{k}}-2\lambda_{k}^{\rm T}=0,\quad k=1,\cdots ,t-1.$

      (35d)

      Equation (35d) can again be rewritten as

      $\lambda_{k}=\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}\left( y_{k}-C_{k}x_{k} \right)+A_{k}^{\rm T}\lambda_{k+1}.$

      (36)

      Replacing (35b) in the state equation (1) gives

      $x_{k+1}=A_{k}x_{k}+\gamma G_{k}Q_{k}G_{k}^{\rm T}\lambda_{k+1}.$

      (37)

      Consider the conditions (35a) and (35c). These equations are the boundary conditions of the two-point boundary value problem given by (36) and (37)

      $\begin{align} & \left[ \begin{matrix} x_{k+1} \\ \lambda_{k} \\ \end{matrix} \right]=\left[ \begin{matrix} A_{k} & \gamma G_{k}Q_{k}G_{k}^{\rm T} \\ -\frac{1}{\gamma}C_{k}^{\rm T}R_{k}^{-1}C_{k} & A_{k}^{\rm T} \\ \end{matrix} \right]\left[ \begin{matrix} x_{k} \\ \lambda_{k+1} \\ \end{matrix} \right]+ \\ & \left[ \begin{matrix} 0 \\ \frac{1}{\gamma}C_{k}^{\rm T}R_{k}^{-1}y_{k} \\ \end{matrix} \right]. \\ \end{align}$

      Since the two-point boundary value problem is linear, the solution is assumed to be of the form

      ${{x}_{k}}={{\mu }_{k}}+{{\Pi }_{k}}{{\lambda }_{k}}$

      (38)

      where $\mu_{k}$ is a guess of the state at time step $k$. The evolution of the state is assumed to be a function of the unknowns $\mu_{k}$ and $\Pi_{k}$. Substituting (38) into (37) gives

      $\begin{align} & \mu_{k+1}+\Pi_{k+1}\lambda_{k+1}=A_{k}\mu_{k}+A_{k}\Pi_{k}\lambda_{k}+ \\ & \gamma G_{k}Q_{k}G_{k}^{\rm T}\lambda_{k+1} \\ \end{align}$

      (39)

      Substituting (38) into (36) produces

      $\begin{align} & \lambda_{k}=\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}\left( y_{k}-C_{k}\mu_{k} \right)-\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}C_{k}\Pi_{k}\lambda_{k}+ \\ & A_{k}^{\rm T}\lambda_{k+1}. \\ \end{align}$

      (40)

      Rearranging (40) yields

      $\begin{align} & \lambda_{k}=\left[ I+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}C_{k}\Pi_{k} \right]^{-1}\times \\ & \left[ A_{k}^{\rm T}\lambda_{k+1}+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}\left( y_{k}-C_{k}\mu_{k} \right) \right]. \\ \end{align}$

      (41)

      Substituting this expression for $\lambda_{k}$ into (39) gives

      $\begin{align} & \mu_{k+1}+\Pi_{k+1}\lambda_{k+1}=A_{k}\mu_{k}+\gamma G_{k}Q_{k}G_{k}^{\rm T}\lambda_{k+1}+ \\ & A_{k}\Pi_{k}\left[ I+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}C_{k}\Pi_{k} \right]^{-1}\times \\ & \left[ A_{k}^{\rm T}\lambda_{k+1}+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}\left( y_{k}-C_{k}\mu_{k} \right) \right] \\ \end{align}$

      (42)

      which can finally be rearranged as (43). For (43) to hold for arbitrary $\lambda_{k+1}$, each side is set equal to the zero matrix of corresponding order, resulting in (44), where (44b) yields the variation of the estimation error weight in a recursive way.

    • Algorithm 1 summarizes the ${H}_{\infty}$-MHE procedure. After defining the algorithm input data, i.e., $\bar{x}_0$, $\Pi_0$, $N$, $Q$, and $R$ (the initial guess, the initial condition of the estimation error weight, the estimation window size, and the matrices weighting the modeling and measurement uncertainties, respectively), the ${H}_\infty$-FIE is solved during the first $N$ steps. Then, using the recursion for $\Pi_k$, the ${H}_\infty$-MHE approximation is computed as explained.

    • Stability is a very important property to be checked when controllers and estimators are proposed. Almost every well-known controller and/or estimator must guarantee stability, even in a very restricted scenario, which gives reliability to the design. Regarding the present contribution, it has been shown that an optimization-based estimator for uncertain linear systems works well under the proposed scenario. However, a stability proof is needed in order to generalize the behavior of the proposed filter.

      $\begin{align} & \mu_{k+1}-A_{k}\mu_{k}-A_{k}\Pi_{k}\left[ I+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}C_{k}\Pi_{k} \right]^{-1}\left[ \frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}\left( y_{k}-C_{k}\mu_{k} \right) \right]= \\ & \left\{ -\Pi_{k+1}+A_{k}\Pi_{k}\left[ I+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}C_{k}\Pi_{k} \right]^{-1}A_{k}^{\rm T}+\gamma G_{k}Q_{k}G_{k}^{\rm T} \right\}\lambda_{k+1} \\ \end{align}$

      (43)

      $\mu_{k+1}=A_{k}\mu_{k}+A_{k}\Pi_{k}\left[ I+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}C_{k}\Pi_{k} \right]^{-1}\times \left[ \frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}\left( y_{k}-C_{k}\mu_{k} \right) \right]$

      (44a)

      $\Pi_{k+1}=A_{k}\Pi_{k}\left[ I+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}C_{k}\Pi_{k} \right]^{-1}A_{k}^{\rm T}+\gamma G_{k}Q_{k}G_{k}^{\rm T}.$

      (44b)
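
      For reference, a minimal Python sketch of one step of the recursions (44a) and (44b) is given below; the function name and argument order are illustrative and assume time-invariant matrices.

      import numpy as np

      def propagate_weight(mu, Pi, A, C, G, Q, R, y, gamma):
          """One step of (44a)-(44b): update the guess mu_k and the weight Pi_k (sketch)."""
          n = A.shape[0]
          bracket = np.linalg.inv(np.eye(n) + (C.T @ np.linalg.inv(R) @ C @ Pi) / gamma)
          gain = A @ Pi @ bracket
          mu_next = A @ mu + gain @ (C.T @ np.linalg.inv(R) @ (y - C @ mu)) / gamma   # (44a)
          Pi_next = gain @ A.T + gamma * G @ Q @ G.T                                  # (44b)
          return mu_next, Pi_next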

      Algorithm 1 (Unconstrained ${H_{\infty}}$-MHE algorithm).

      Require: $\bar{x}_{0}$, $\Pi_{0}$, $N$, $Q$, and $R$.

      1) if $t \leq N$ then

      2) Compute $\hat{x}_{0\vert t-1}$ and $\left\lbrace w_{k} \right\rbrace_{k=0}^{t-1}$ using (15) and (16), respectively, i.e., under the ${H}_{\infty}$-FIE setting.

      3) Compute $\hat{x}_{t\vert t-1}$ by means of (2).

      4) Compute $\Pi_{t}$ from (44b).

      5) else

      6) Compute $\hat{x}_{t-N\vert t-1}$ and $\left\lbrace w_{k} \right\rbrace_{k=t-N}^{t-1}$ from (27) and (28), respectively, using $\Pi_{t-N}$ saved from previous iterations, under the ${H}_{\infty}$-MHE setting. $\bar{x}_{t-N}$ is the a priori guess of $x_{t-N}$, which is given by its ${H}_{\infty}$-MHE estimation.

      7) Compute $\hat{x}_{t\vert t-1}$ by means of (24).

      8) Compute $\Pi_{t}$ from (44b).

      9) end if
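
      The loop below is a Python sketch of Algorithm 1 that reuses the helper functions stack_matrices() and hinf_fie_unconstrained() sketched earlier; the data layout (a list of measurement vectors) and all names are illustrative assumptions, not the authors' code.

      import numpy as np

      def hinf_mhe(y_seq, xbar0, Pi0, N, Q, R, A, C, G, gamma):
          """Sketch of Algorithm 1 (unconstrained H-infinity MHE)."""
          m = G.shape[1]
          n = A.shape[0]
          Pis = [Pi0]          # Pi_k advanced recursively via (44b)
          estimates = []       # x_hat_{t|t-1} for t = 1, 2, ...
          for t in range(1, len(y_seq) + 1):
              if t <= N:       # steps 1)-4): H-infinity FIE over all available data
                  L, x_prior, Pi_prior = t, xbar0, Pi0
              else:            # steps 5)-8): moving-horizon approximation
                  L, x_prior, Pi_prior = N, estimates[t - N - 1], Pis[t - N]
              Gamma, Xi = stack_matrices(A, C, G, L)
              Y = np.concatenate(y_seq[t - L:t])
              x_start, w = hinf_fie_unconstrained(x_prior, Pi_prior, Q, R, Y, Gamma, Xi, gamma)
              # Propagate the window-start estimate to time t, cf. (2) and (24).
              x_hat = np.linalg.matrix_power(A, L) @ x_start
              for k in range(L):
                  x_hat += np.linalg.matrix_power(A, L - k - 1) @ G @ w[k * m:(k + 1) * m]
              estimates.append(x_hat)
              # Steps 4)/8): advance the error-weight recursion (44b).
              bracket = np.linalg.inv(np.eye(n) + (C.T @ np.linalg.inv(R) @ C @ Pis[-1]) / gamma)
              Pis.append(A @ Pis[-1] @ bracket @ A.T + gamma * G @ Q @ G.T)
          return estimates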

      Consider a zero-sum dynamic game described by

      $\begin{align} & x_{k+1}=f_{k}(x_{0},x_{k},w_{k}),\quad k=0,\cdots ,N-1 \\ & y_{k}=h_{k}(x_{k},w_{k}),\quad k=1,\cdots ,N \\ \end{align}$

      (45)

      where $x_{k}\in {\bf R}^{n}$, $y_{k}\in {\bf R}^{p}$, $w_{k}\in \mathit{W}\subset {\bf R}^{m}$, and a finite-horizon cost given by

      $L(x_{0},w)=\sum\limits_{k=0}^{N-1}g_{k}(x_{k+1},x_{k})$

      (46)

      which needs to be minimized by Player 1 and maximized by Player 2, using $x_{0}$ and $\left\lbrace w\right\rbrace_{k=0}^{N-1}$, respectively. Given the sets $\mathcal{X}$ and $\mathcal{W}$ to which the policies $x_{0}$ and $\left\lbrace w\right\rbrace_{k=0}^{N-1}$ belong, the triple $\left\lbrace J;{X},{W}\right\rbrace$ denotes the normal form of the zero-sum dynamic game. In this context, the saddle-point equilibrium is defined as follows.

      Definition 1. Given a zero-sum dynamic game $\{J;X,W\}$ in normal form, a pair of policies $\left(x_{0}^{*},\left\lbrace w\right\rbrace^{*} \right)\in {X}\times{W}$ constitutes a saddle-point solution if, for all $\left(x_{0},\left\lbrace w\right\rbrace\right) \in {X}\times{W}$

      $L(x_{0}^{*},w)\le L^{*}=L(x_{0}^{*},w^{*})\le L(x_{0},w^{*}),\quad \forall x_{0}\in X,\ \forall w\in W.$

      The quantity $L^{*}$ is the value of the dynamic game.

      In the case of a zero-sum dynamic game, the optimization problem is attached to the system dynamics. For the sake of completeness, consider (1) with (6), which is to be minimized by $x_0$ and maximized by $\left\lbrace w_k \right\rbrace$, respectively, where the initial state takes values in ${X}$ and $\left\lbrace w_k\right\rbrace$ takes values in ${W}$, with ${W}$ being the cartesian product of the spaces where each disturbance vector $w_k$ belongs. Therefore, if there exists a pair $x_{0}^{*} \in {X}$, $\left\lbrace w_{k}\right\rbrace^{*} \in {W}$ such that

      $\begin{align} & \underset{x_{0}\in \mathcal{X}}{\mathop{\min }}\,\bar{\psi }(x_{0},\left\{ w_{k} \right\}^{*})=\underset{\left\{ w_{k} \right\}\in W}{\mathop{\max }}\,\bar{\psi }(x_{0}^{*},\left\{ w_{k} \right\})= \\ & \bar{\psi }(x_{0}^{*},\left\{ w_{k} \right\}^{*})=\bar{\psi }^{*} \\ \end{align}$

      (47)

      then the pair $(x_{0}^{*},\left\lbrace w_{k}\right\rbrace^{*})$ is called a pure-strategy saddle-point solution for the dynamic game. Therefore, the saddle-point solution will also satisfy

      $\begin{align} & \bar{\psi }(x_{0}^{*},\left\{ w_{k} \right\})\le \bar{\psi }(x_{0}^{*},\left\{ w_{k} \right\}^{*})\le \bar{\psi }(x_{0},\left\{ w_{k} \right\}^{*}) \\ & \forall x_{0},\left\{ w_{k} \right\}\in X\times W. \\ \end{align}$

      (48)

      The existence of such a saddle-point solution for the ${H}_{\infty}$-FIE then reduces to guaranteeing that the cost $\bar{\psi}_{t}$ in (7) is concave-convex. In this way, assume $\Pi_{0}$, $\bar{Q}$ and $\bar{R}$ positive definite. Then, $\bar{\psi}_{t}(x_{0},\left\lbrace w_{k}\right\rbrace )$ in (6) is strictly convex in $x_{0}$ and strictly concave in $\left\lbrace w_{k}\right\rbrace_{k=0}^{t-1}$ if conditions (18a) and (18b) are fulfilled. A similar argument follows for the ${H}_{\infty}$-MHE if the cost function (23) is considered. The following lemma provides a necessary and sufficient condition for (23) to be strictly concave in $\left\lbrace w_{k}\right\rbrace_{k=t-N}^{t-1}$ and strictly convex in $x_{t-N}$.

      Lemma 1. Assume $\Pi_{0}$, $Q$ and $R$ are positive definite. For the quadratic two-person zero-sum dynamic game described by (1) and (23), the functional $\hat{\psi}_{t}$ is strictly concave in $\left\lbrace w_{k}\right\rbrace_{k=t-N}^{t-1}$ for all $x_{t-N} \in {X}$ and is strictly convex in $x_{t-N}$ for all $\left\lbrace w_{k}\right\rbrace_{k=t-N}^{t-1} \in{W}$ if, and only if

      $\Pi_{k}^{-1}-\gamma^{-1}(\Gamma^{\rm T}\bar{R}\Gamma )\succ 0$

      (49)

      where $\Pi_{k}$ is given by (44b)

      $\Pi_{k+1}=A_{k}\Pi_{k}\left[ I+\frac{1}{\gamma }C_{k}^{\rm T}R_{k}^{-1}C_{k}\Pi_{k} \right]^{-1}A_{k}^{\rm T}+\gamma G_{k}Q_{k}G_{k}^{\rm T}.$

      Proof. Concavity of $\hat{\psi}_{t}$ is guaranteed by (18b)

      $\gamma^{-1}(\Xi^{\rm T}\bar{R}\Xi +\bar{Q})\succ 0$

      provided $\bar{R}$ and $\bar{Q}$ are positive definite. Since $\hat{\psi}_{t}$ is a quadratic functional of $x_{t-N}$, the requirement of strict convexity is equivalent to the existence of a unique solution to the optimal control problem

      $\underset{x_{t-N}\in X}{\mathop{\min }}\,\hat{\psi}_{t}\left( x_{t-N},\left\{ w_{k}^{*} \right\} \right)$

      (50)

      subject to the dynamics and for each sequence $\left\lbrace w_{k}\right\rbrace_{k=t-N}^{t-1} \in {W}$. Furthermore, since the Hessian matrix of $\hat{\psi}_{t}$ with respect to $x_{t-N}$ is independent of $\left\lbrace w_{k}\right\rbrace_{k=t-N}^{t-1}$, the positive definiteness in (49) guarantees the claim.
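
      To make the lemma operational, the short Python sketch below advances the recursion (44b) and evaluates inequality (49) at each step, reusing the illustrative stack_matrices() helper from before; it is a numerical check under assumed time-invariant matrices, not part of the paper.

      import numpy as np

      def check_condition_49(A, C, G, Q, R, Pi0, gamma, N, steps):
          """Advance (44b) and test the concavity-convexity condition (49) at each step."""
          n = A.shape[0]
          Gamma, Xi = stack_matrices(A, C, G, N)          # stacked quantities of (13)
          Rbar = np.kron(np.eye(N), np.linalg.inv(R))
          Pi = Pi0
          for k in range(steps):
              S = np.linalg.inv(Pi) - (Gamma.T @ Rbar @ Gamma) / gamma   # left side of (49)
              if np.any(np.linalg.eigvalsh(S) <= 0):
                  return False, k
              bracket = np.linalg.inv(np.eye(n) + (C.T @ np.linalg.inv(R) @ C @ Pi) / gamma)
              Pi = A @ Pi @ bracket @ A.T + gamma * G @ Q @ G.T          # (44b)
          return True, steps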

      We showed that both the ${H}_{\infty}$-FIE and the ${H}_{\infty}$-MHE problems admit a saddle-point solution at each sample time if conditions (18a)-(18b) and (49), (18b) are fulfilled, respectively. Then, a sequence of saddle points is expected as time goes on. To prove stability of the filter, Lyapunov stability theory is used. The following definition gives the guidelines for the stability proofs.

      Definition 2[12]. The estimator is an asymptotically stable observer for the system

      $x_{k+1}=Ax_{k},\quad y_{k}=Cx_{k}$

      (51)

      if for any $\epsilon > 0$ there corresponds a number $\delta > 0$ and a positive integer $\bar{t}$ such that if $\Vert x_{0} - \bar{x}_{0}\Vert \leq \delta$ and $\bar{x}_{0} \in {X}$, then $\Vert \bar{x}_{t} - A^{t}x_{0}\Vert \leq \epsilon$ for all $t \geq \bar{t}$ and $\bar{x}_{t}\to A^{t}x_{0}$ as $t\to \infty$.

      Assumption 1. Suppose the system (51) with initial condition $x_{0}$ generates the data $y_{k} = CA^{k}x_{0}$. The existence of $x_{0\vert \infty}$, $\left\lbrace w_{k}\right\rbrace_{k=0}^{\infty}$, and $\rho > 0$ is assumed, such that

      $\begin{align} & \left( x_{0|\infty }-\bar{x}_{0} \right)^{\rm T}\Pi_{0}^{-1}\left( x_{0|\infty }-\bar{x}_{0} \right)- \\ & \frac{1}{\gamma }\left( \sum\limits_{k=0}^{\infty }w_{k|\infty }^{\rm T}Q^{-1}w_{k|\infty }+\nu_{k|\infty }^{\rm T}R^{-1}\nu_{k|\infty } \right)\ge \\ & -\rho \|x_{0}-\bar{x}_{0}\|^{2}. \\ \end{align}$

      Assumption 1 states the existence of a feasible state and disturbance sequence yielding bounded cost if an infinite amount of data is considered. Assumption 1 is also a sufficient condition for the existence of a solution to the minimax problems (8) and (25). To establish asymptotic stability for both the ${H}_{\infty}$-FIE and the ${H}_{\infty}$-MHE, we require the following lemma.

      Lemma 2. Suppose $(C,A)$ is observable and $N\geq n$. If

      $\begin{align} & \sum\limits_{k=t-N}^{t-1}\|w_{k|t-1}\|_{Q^{-1}}^{2}+\|\nu_{k|t-1}\|_{R^{-1}}^{2}\to 0 \\ & \text{then}\ \|\hat{x}_{t}-x_{t}\|\to 0. \\ \end{align}$

      Proof. The proof is omitted for brevity. See [25].

      Proposition 1. Assume $Q$, $R$, and $\Pi_{0}$ positive definite, $(C,A)$ observable, and Assumption 1 holds. Then, the ${H}_{\infty}$-full information estimator is an asymptotically stable observer for the system (51).

      Proof. The proof is omitted for brevity. See [25].

      Similarly to the classic MHE, the positive definiteness of the matrix $\Pi_{t}$ must be guaranteed. As $\Pi_{t}$ is computed from a Riccati recursion, the positive definiteness of the unique solution is established by the following technical theorem.

      Theorem 1. Subject to $\Pi_{0} > 0$, the detectability of $(C,A)$ and the nonexistence of unreachable modes of $(A,GQ^{\frac{1}{2}})$ on the unit circle are necessary and sufficient conditions for

      $\underset{t\to \infty }{\mathop{\lim }}\,{{\Pi }_{t}}={{\Pi }_{\infty }}$

      where $\Pi_{\infty}$, with initial condition $\Pi_{0}$, is the unique stabilizing solution of the Riccati equation (44b).

      Proof. The proof is provided in [26]. $\square$

      If $\Pi_{0}$ is chosen such that $\Pi_{0} \geq \Pi_{\infty}$, then $\Pi_{k}$ is positive definite $\forall k\geq0$. As an alternative, if $G$ is nonsingular ($GQG^{\rm T}$ positive definite), then $\Pi_{k}$ is also positive definite $\forall k\geq0$. Before proceeding with the stability of the ${H}_{\infty}$-MHE, the following assumption is posed.

      Assumption 2. The error weighting matrix $\Pi_{t}$, defined by (44b), satisfies the following inequality for all $p \in {R}_{t}$:

      $\begin{align} & (p-\hat{x}_{t})^{\rm T}\Pi_{t}^{-1}(p-\hat{x}_{t})+\hat{\psi}_{t}^{*}\ge \\ & \underset{x_{t-N}}{\mathop{\min }}\,\underset{\{w_{k}\}}{\mathop{\max }}\,\{\hat{\psi}_{t}(x_{t-N},\{w_{k}\}): \\ & x(N;x_{t-N},\{w_{k}\})=p\}=\hat{\Theta}_{t}(p) \\ \end{align}$

      where ${R}_{t}$ is the reachable set of states at time $t$ generated by a feasible initial condition $x_0$ and disturbance sequence $\left\lbrace w_k \right\rbrace_{k=0}^{t-1}$.

      The following proposition guarantees the stability of the ${H}_{\infty}$-MHE.

      Proposition 2. Suppose the matrices $Q$, $R$, and $\Pi_{0}$ are positive definite, $(C,A)$ is observable, Assumption 1 holds, $N\geq0$, and either 1) the matrix $G$ is nonsingular, or 2) $(A,GQ^{1/2})$ is controllable and $\Pi_{0} \geq \Pi_{\infty}$.

      Then the constrained ${H}_{\infty}$-MHE is an asymptotically stable observer for the system (51).

      Proof. As in the ${H}_{\infty}$-FIE stability proof, convergence of the cost is demonstrated first. Then, the stability of the filter is shown in the sense of Definition 2. An optimal solution to the constrained ${H}_{\infty}$-MHE problem exists as stated by Lemma 1 and Assumption 1. By definition

      $\begin{align} & \bar{\psi}_{t}^{*}-\bar{\psi}_{t-N}^{*}\le \\ & -\frac{1}{\gamma }\left( \sum\limits_{k=t-N}^{t-1}\|\hat{w}_{k|t-1}\|_{Q^{-1}}^{2}+\|\hat{\nu}_{k|t-1}\|_{R^{-1}}^{2} \right)\to 0. \\ \end{align}$

      (52)

      As the aim is to demonstrate that $-\rho\Vert x_{0} - \bar{x}_{0}\Vert^{2}$ is a uniform bound, let an induction argument be considered. The case when $t\leq N$ is equivalent to the ${H}_{\infty}$-FIE. It was already shown that $-\rho\Vert x_{0} - \bar{x}_{0}\Vert^{2}$ is indeed a lower bound on the associated objective function. By using Assumption 2

      $\hat{\Theta}_{t}(x_{t|\infty })\le (x_{t|\infty }-\hat{x}_{t})^{\rm T}\Pi_{t}^{-1}(x_{t|\infty }-\hat{x}_{t})+\hat{\psi}_{t}^{*}.$

      For the induction argument,assume that

      $\begin{align} & \hat{\Theta}_{t-N}(x_{t-N|\infty })\le (x_{t-N|\infty }-\hat{x}_{t-N})^{\rm T}\times \\ & \Pi_{t-N}^{-1}(x_{t-N|\infty }-\hat{x}_{t-N})+\hat{\psi}_{t-N}^{*}. \\ \end{align}$

      From Assumption 1, feasibility of the solution to the estimation problem is guaranteed by using an infinite set of data. Moreover, by optimality, the induction assumption, and the properties related to the arrival cost, the following inequality holds

      $\begin{align} & \underset{x_{t-N}}{\mathop{\min }}\,\underset{\{w_{k}\}}{\mathop{\max }}\,\left\{ -\frac{1}{\gamma }\left( \sum\limits_{k=t-N}^{t-1}\|w_{k}\|_{Q^{-1}}^{2}+\|\nu_{k}\|_{R^{-1}}^{2} \right) \right.+ \\ & \left. \hat{\Theta}_{t-N}(x_{t-N}):x(N;x_{t-N},\{w_{k}\})=x_{t|\infty } \right\}\ge \\ & -\rho \|x_{0}-\hat{x}_{0}\|^{2} \\ \end{align}$

      for all $t \geq N$.

      Using the induction argument, the following inequality also holds

      $\begin{align} & \underset{x_{t-N}}{\mathop{\min }}\,\underset{\{w_{k}\}}{\mathop{\max }}\,\left\{ -\frac{1}{\gamma }\left( \sum\limits_{k=t-N}^{t-1}\|w_{k}\|_{Q^{-1}}^{2}+\|\nu_{k}\|_{R^{-1}}^{2} \right) \right.+ \\ & (x_{t-N}-\hat{x}_{t-N})^{\rm T}\Pi_{t-N}^{-1}(x_{t-N}-\hat{x}_{t-N})+ \\ & \left. \hat{\psi}_{t-N}^{*}:x(N;x_{t-N},\{w_{k}\})=x_{t|\infty } \right\}\ge \\ & \underset{x_{t-N}}{\mathop{\min }}\,\underset{\{w_{k}\}}{\mathop{\max }}\,\left\{ -\frac{1}{\gamma }\left( \sum\limits_{k=t-N}^{t-1}\|w_{k}\|_{Q^{-1}}^{2}+\|\nu_{k}\|_{R^{-1}}^{2} \right) \right.+ \\ & \left. \hat{\Theta}_{t-N}(x_{t-N}):x(N;x_{t-N},\{w_{k}\})=x_{t|\infty } \right\}\ge \\ & -\rho \|x_{0}-\hat{x}_{0}\|^{2}. \\ \end{align}$

      Finally, by Assumption 2

      $\begin{align} & (x_{t}-\hat{x}_{t})^{\rm T}\Pi_{t}^{-1}(x_{t}-\hat{x}_{t})+\hat{\psi}_{t}^{*}\ge \\ & \underset{x_{t-N}}{\mathop{\min }}\,\underset{\{w_{k}\}}{\mathop{\max }}\,\left\{ -\frac{1}{\gamma }\left( \sum\limits_{k=t-N}^{t-1}\|w_{k}\|_{Q^{-1}}^{2}+\|\nu_{k}\|_{R^{-1}}^{2} \right) \right.+ \\ & \left. \hat{\Theta}_{t-N}(x_{t-N}):x(N;x_{t-N},\{w_{k}\})=x_{t|\infty } \right\}\ge \\ & -\rho \|x_{0}-\hat{x}_{0}\|^{2} \\ \end{align}$

      where it is verified that

      $\hat{\psi}_{t}^{*}\ge -\rho \|x_{0}-\hat{x}_{0}\|^{2}.$

      Hence, the sequence $\{\hat{\psi}_{t}^{*}\}$ is monotone nonincreasing and bounded below by $-\rho \Vert x_{0}-\hat{x}_{0}\Vert^{2}$, and therefore convergent. As verified before, convergence implies that the sum in (52) vanishes as $t \rightarrow \infty$. By Lemma 2, the estimation error $\Vert \hat{x}_{t} - A^{t}x_{0}\Vert \rightarrow 0$ as $t\rightarrow \infty$. Now, the stability proof follows a procedure similar to that of the ${H}_{\infty}$-FIE. Let $\epsilon>0$ and choose $\zeta>0$ sufficiently small for $t=N$ as specified in Lemma 2. Choose $\delta>0$ such that $-\rho\delta^{2}>-\zeta$. Then the following inequality holds for all $t\geq N$

$\begin{align} & -\rho\delta^{2}\le \|\hat{x}_{t-N|t-1}-\hat{x}_{t-N}\|_{\Pi_{t-N}^{-1}}^{2}- \\ & \frac{1}{\gamma}\left( \sum\limits_{k=t-N}^{t-1}\|\hat{w}_{k|t-1}\|_{Q^{-1}}^{2}+\|\hat{\nu}_{k|t-1}\|_{R^{-1}}^{2} \right)+\hat{\psi}_{t-N}^{*}\le \\ & \|\hat{x}_{t-N|t-1}-\hat{x}_{t-N}\|_{\Pi_{t-N}^{-1}}^{2}- \\ & \frac{1}{\gamma}\left( \sum\limits_{k=t-N}^{t-1}\|\hat{w}_{k|t-1}\|_{Q^{-1}}^{2}+\|\hat{\nu}_{k|t-1}\|_{R^{-1}}^{2} \right) \end{align}$

      since $\hat{\psi}_{t-N}^{*} < 0$. Then, using an argument similar to that used for the ${H}_{\infty}$-FIE,

$-\zeta <-\rho\delta^{2}\le -\frac{1}{\gamma}\left( \sum\limits_{k=t-N}^{t-1}\|\hat{w}_{k|t-1}\|_{Q^{-1}}^{2}+\|\hat{\nu}_{k|t-1}\|_{R^{-1}}^{2} \right).$

      Therefore, using Lemma 2, if the initial estimation error satisfies $\Vert x_{0} -\hat{x}_{0} \Vert\leq \delta$, then the estimation error $\Vert \hat{x}_{t-N} - A^{t-N}x_{0} \Vert\leq \epsilon$ for all $t\geq N$, as claimed.\hfill $\square$

    • In this section, we give an illustrative example to show the performance and benefits of the proposed filter.

      Consider the spring-mass-damper system from [27]. The dynamics of the system are modeled by means of the following uncertain continuous-time linear system

$\dot{x}=\left[ \begin{matrix} 0 & 1 \\ -\frac{k_{0}}{m_{0}} & -\frac{b_{0}}{m_{0}} \\ \end{matrix} \right]\left[ \begin{matrix} x_{1} \\ x_{2} \\ \end{matrix} \right]+\left[ \begin{matrix} 0 \\ \frac{1}{m_{0}} \\ \end{matrix} \right]w$

      (53a)

$y=\left[ \begin{matrix} 1 & 0 \\ \end{matrix} \right]\left[ \begin{matrix} x_{1} \\ x_{2} \\ \end{matrix} \right]+\nu $

      (53b)

      where $x = [\begin{array}{cc} x_{1} & x_{2}\end{array}]^{\rm T}$ is the state of the system, with $x_{1}$ the position of the mass and $x_{2}$ its velocity, $w$ is the disturbance force, $\nu$ is the measurement uncertainty, $m_{0}$ is the nominal mass, $b_{0}$ is the nominal viscous damping coefficient, and $k_{0}$ is the nominal spring constant. The nominal parameters are known to be $\dfrac{1}{m_{0}} = 1.25$, $b_{0} = 0.15$, and $k_{0} = 5$. The objective is to estimate the state by measuring $x_{1}$, taking the unknown inputs into account.

      Because we are considering discrete-time designs, the system is discretized with a sampling time of $T_{s}=0.1$ s. The matrices of the discrete-time model are

$\begin{align} & A_{d} = \left[ \begin{matrix} 0.9691 & 0.0980 \\ -0.6127 & 0.9507 \\ \end{matrix} \right],\quad B_{d} = \left[ \begin{matrix} 0.0062 \\ 0.1225 \\ \end{matrix} \right] \\ & C_{d} = \left[ \begin{matrix} 1 & 0 \\ \end{matrix} \right],\quad D_{d} = 0. \end{align}$
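      The discrete-time matrices can be checked with the following minimal Python sketch (not part of the original paper): it builds the nominal model (53) from the parameters given above and applies a zero-order-hold discretization with $T_{s}=0.1$ s. The use of SciPy and the variable names are illustrative choices only.

```python
# Illustrative sketch: zero-order-hold discretization of the nominal model (53)
# with Ts = 0.1 s, to reproduce the matrices Ad and Bd quoted above.
import numpy as np
from scipy.signal import cont2discrete

m0_inv, b0, k0 = 1.25, 0.15, 5.0           # nominal 1/m0, b0, k0 from the text

A = np.array([[0.0, 1.0],
              [-k0 * m0_inv, -b0 * m0_inv]])
B = np.array([[0.0], [m0_inv]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt=0.1, method='zoh')
print(np.round(Ad, 4))   # approx [[0.9691, 0.0980], [-0.6127, 0.9507]]
print(np.round(Bd, 4))   # approx [[0.0062], [0.1225]]
```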

      Using the discrete-time model, four filters are designed: the ${H}_{\infty}$ filters presented in [3, 4], the classic Kalman filter, and the proposed ${H}_{\infty}$-MHE. For the sake of simplicity, let us refer to the filters presented in [3, 4] as RF1 and RF2 (robust filters one and two, respectively). These filters are used for comparison since they also provide a robust design using ${H}_{\infty}$ theory by means of a game-theoretical approach.

      The uncertainty $\nu_{k}$ is assumed to be zero-mean white noise with unknown covariance. The modeling uncertainty $w_{k}$ is not assumed to be Gaussian; instead, it is generated as zero-mean, unit-covariance white noise passed through the following low-pass filter

      $F(s)=\frac{0.5}{s+0.1}.$

      At time $k = 20$ s, an impulse signal with a gain of $10$ is added to $w_{k}$.
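      As a rough illustration of how such a disturbance can be generated in simulation, the sketch below passes unit-covariance white noise through a zero-order-hold discretization of $F(s)$ and then adds the impulse. The simulation length, the random seed, and the impulse sample index (assumed here to correspond to $t = 20$ s with $T_{s}=0.1$ s) are assumptions made for illustration only, not taken from the paper.

```python
# Illustrative sketch: generating the disturbance w_k as filtered white noise
# plus an impulse of amplitude 10.  Horizon length, seed and the impulse
# instant (assumed t = 20 s, i.e., sample 200) are illustrative assumptions.
import numpy as np
from scipy.signal import cont2discrete

Ts, n_steps = 0.1, 400
rng = np.random.default_rng(0)

# State-space form of F(s) = 0.5/(s + 0.1), discretized with a zero-order hold.
Af, Bf, Cf, Df, _ = cont2discrete((np.array([[-0.1]]), np.array([[1.0]]),
                                   np.array([[0.5]]), np.array([[0.0]])),
                                  dt=Ts, method='zoh')

e = rng.standard_normal(n_steps)           # zero-mean, unit-covariance white noise
xf = np.zeros((1, 1))                      # low-pass filter state
w = np.zeros(n_steps)
for k in range(n_steps):
    w[k] = (Cf @ xf).item()                # filtered disturbance sample
    xf = Af @ xf + Bf * e[k]

w[int(20.0 / Ts)] += 10.0                  # impulse of amplitude 10 (assumed instant)
```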

      Because there are no explicit strategies to tune the filters, they were tuned by a trial-and-error procedure as follows. For RF1, the tuning parameters are $M$, $P_{0}$ and $\gamma$. $\gamma$ is found by an iterative procedure once $M$ and $P_{0}$ are defined [4]. $M$ and $P_{0}$ are sought such that the mean square estimation error is minimized. Then, fixing $M$ to be the identity matrix, the best value of $P_{0}$ is found to be $\alpha I_{2}$ with $\alpha = 2$. Decreasing $\alpha$ from this value decreases $\gamma$ and vice versa. In this scenario, the variation of $M$ did not affect the performance of the filter significantly; hence, it is chosen to be the identity matrix. For RF2, the tuning parameters are $S_{k}$, $L_{k}$, $Q_{k}$, $R_{k}$, $P_{0}$, and $\gamma=\frac{1}{\gamma_{s}}$. Analogously to RF1, after fixing each parameter to a certain value, $\gamma_{s}$ is found by applying an iterative solution [3]. Then, the remaining parameters were modified in the following order: $Q_{k}$, $P_{0}$, $S_{k}$ and $L_{k}$. Finally, for the ${H}_{\infty}$-MHE filter, the parameters are $Q_{k}$, $R_{k}$, $P_{0}$, $N = 50$, and $\gamma$. The weighting matrix penalizing the estimation error term is found from the solution of the Riccati equation (44b). The tuning order considered is $P_{0}$, $Q_{k}$, $R_{k}$, $\gamma$ and $N$. It is worth mentioning that a clear procedure for assigning a numeric value to $\gamma$ is currently not available for the proposed approach.
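      To make the kind of iterative $\gamma$ search mentioned above concrete, the following sketch bisects for the smallest $\gamma$ that keeps a textbook game-theoretic ${H}_{\infty}$ Riccati recursion well posed over a simulation horizon. The recursion used here, $P_{k+1}=A\big(P_{k}^{-1}-\tfrac{1}{\gamma}S+C^{\rm T}R^{-1}C\big)^{-1}A^{\rm T}+Q$, is an assumption of this sketch; it need not coincide with (44b) or with the exact procedures of [3, 4].

```python
# Illustrative sketch only: bisection on gamma using a standard game-theoretic
# H-infinity Riccati recursion.  This is NOT necessarily recursion (44b) nor
# the exact procedures of [3, 4]; it only shows the style of iterative gamma
# search referred to in the text.
import numpy as np

def riccati_feasible(gamma, A, C, Q, R, S, P0, n_steps=500):
    """True if P_k^{-1} - S/gamma + C' R^{-1} C stays positive definite."""
    P = P0.copy()
    for _ in range(n_steps):
        M = np.linalg.inv(P) - S / gamma + C.T @ np.linalg.inv(R) @ C
        if np.any(np.linalg.eigvalsh((M + M.T) / 2) <= 0):
            return False                    # existence condition violated
        P = A @ np.linalg.inv(M) @ A.T + Q  # propagate the recursion
    return True

def smallest_gamma(A, C, Q, R, S, P0, lo=1e-2, hi=1e3, tol=1e-3):
    """Bisect for (approximately) the smallest feasible gamma in [lo, hi]."""
    if not riccati_feasible(hi, A, C, Q, R, S, P0):
        raise ValueError("no feasible gamma found in the search interval")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if riccati_feasible(mid, A, C, Q, R, S, P0) else (mid, hi)
    return hi
```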

      The parameters of the estimators are presented in Table 1, where $D(\cdot,\cdot)$ stands for a diagonal matrix with the arguments as the elements of the main diagonal, NA means "not applicable", and $I_{2}\in \mathbf{R}^{2\times 2}$ denotes the identity matrix.

      Table 1.  Tuning parameters of KF, RF1, RF2 and $\mathcal{H}_{\infty}$-MHE

      Fig. 1 shows the time response of the filters for states $x_{1}$ and $x_{2}$ at the top and bottom, respectively. The plant and the estimators are initialized at $x_{0,1} = 2$, $x_{0,2} = 4$ and $\bar{x}_{0,1} = 0$, $\bar{x}_{0,2} = 0$, respectively. The simulation is shown from time $0$ to time $5$ so that the initial conditions of the filters and the plant can be appreciated. Fig. 2 shows the performance improvement of the ${H}_{\infty}$-MHE when compared to the other filters once the impulse is added to the uncertain input $w_{k}$.

      Figure 1.  Time response of the four filters for state $x_{1}$ (top) and state $x_{2}$ (bottom). The initial conditions of the filters are $(\hat{x}_{0,1} = 0$, $\hat{x}_{0,2} = 0)$

      Figure 2.  Time response of the four filters for state $x_{1}$ (top) and state $x_2$ (bottom). Zoom from time step 20 to time step 30. The initial conditions of the filters are $(\hat{x}_{0,1} = 0$, $\hat{x}_{0,2} = 0)$

      Similar results are obtained in Figs. 3 and 4, where a different initial condition for the filters is used, i.e., $\hat{x}_{0,1} = 10$, $\hat{x}_{0,2} = 20$. The initial condition of the plant was set as before. From the figures, it is clearly seen that the KF has the worst performance, since its original formulation does not directly account for noises of unknown type.

      Figure 3.  Time response of the four filters for state $x_{1}$ (top) and state $x_2$ (bottom). Initial conditions of the filters: $(\hat{x}_{0,1} = 10$, $\hat{x}_{0,2} = 20)$

      Figure 4.  Time response of the four filters for state $x_{1}$ (top) and state $x_2$ (bottom). Zoom from time step 20 to time step 30. Initial conditions of the filters: $(\hat{x}_{0,1} = 10$, $\hat{x}_{0,2} = 20)$

      In total, five sets of different initial conditions were tested. The mean square errors of all simulations are gathered in Table 2, where IC stands for initial condition. The Kalman filter and RF1 show a lack of robustness against the applied disturbances. RF1 gives poor performance due to its lack of tuning knobs compared with RF2 and the ${H}_{\infty}$-MHE. Based on the previous analysis, the performances of RF2 and the proposed ${H}_{\infty}$-MHE were significantly better. Of these two filters, the ${H}_{\infty}$-MHE behaves better than RF2 in this scenario, as shown in Table 2.

      Table 2.  Mean square error (MSE) of the filters. The initial condition of the plant was set as $x_{0,1} = 2$, $x_{0,2} = 4$
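      For reference, the MSE entries of Table 2 correspond to the following simple computation over a simulation run. The function below is an illustrative sketch only; the filter implementations, the true trajectory and the container of estimates are assumed to be available from the simulation and are hypothetical names, not code from the paper.

```python
# Illustrative sketch: the mean square estimation error reported in Table 2,
# computed per state component for each filter over one simulation run.
import numpy as np

def mse(x_true, x_hat):
    """Mean square estimation error per state component."""
    err = np.asarray(x_true) - np.asarray(x_hat)
    return np.mean(err ** 2, axis=0)

# Hypothetical usage: 'estimates' maps filter names to (n_steps x 2) arrays.
# table2_row = {name: mse(x_true, xhat) for name, xhat in estimates.items()}
```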

    • A novel and promising estimation strategy for uncertain linear systems, based on both a moving horizon setting and the game-theoretic approach to ${H}_{\infty}$ filtering, was developed, namely the ${H}_{\infty}$-MHE. For the sake of completeness, the ${H}_{\infty}$-FIE was defined first. Then, its moving horizon approximation was derived through a suitable definition of the arrival cost. Since no constraints were assumed, the analytical nature of the solution of the estimation problem was presented for both the ${H}_{\infty}$-FIE and the ${H}_{\infty}$-MHE. A stability analysis was provided to demonstrate the feasibility of the proposed estimation scheme in a practical scenario. A performance comparison among the proposed filter, the Kalman filter, and two filters in the ${H}_{\infty}$ setting was also provided, showing promising results for the proposed strategy.

      Although it was not discussed in detail in this contribution, the true potential of an optimization-based estimation approach of this type lies in the ability to include constraints on the variables in order to provide realistic estimates. Therefore, future work will focus on solving the minimax optimization problem for quadratic costs with constraints. The extension to uncertain model parameters will also be considered.
