Volume 13 Number 5
October 2016
Shamik Misra, Rajasekhara Reddy and Prabirkumar Saha. Model Predictive Control of Resonant Systems Using Kautz Model. International Journal of Automation and Computing, vol. 13, no. 5, pp. 501-515, 2016. doi: 10.1007/s11633-016-0954-x

Model Predictive Control of Resonant Systems Using Kautz Model

Author Biography:
  • ORCID iD: 0000-0002-1684-4174. E-mail: rajasekhara@iitg.ernet.in
  • Corresponding author: ORCID iD: 0000-0002-1121-1829
  • Received: 2013-12-09
  • Accepted: 2015-02-15
  • Published Online: 2016-04-27




Abstract: The scope of this paper spans two areas: system identification of resonant systems and the design of an efficient control scheme suitable for such systems. The use of filters based on orthogonal basis functions (OBF) has been advocated for modelling resonant processes, and the Kautz filter has been identified as the OBF best suited for this purpose. A state space system identification technique using Kautz filters, viz. the Kautz model, is demonstrated. Model based controllers are believed to be more efficient than classical controllers because these techniques make explicit use of the process model. An extensive literature search shows that very few reports explore model based control of resonant systems. Two such model based controllers are considered in this work, viz. the model predictive controller and the internal model controller. A model predictive control algorithm has been developed using the Kautz model. The efficacy of the model and the controller is verified through two case studies, viz. a linear second order underdamped process and a mildly nonlinear magnetic ball suspension system, and a comparative assessment of the performances of the controllers in these case studies is carried out.

  • There are several mechanical and electrical systems that show resonating characteristics. The existence of one or more pairs of complex poles in these systems yields oscillatory behaviour in their output profiles. Such systems require an efficient and adequate control strategy that can offer tight and stable closed loop control. Examples of resonating systems can be found in robotics, power electronics, and mechanical systems such as cranes. Even in large scale chemical processes, oscillatory behaviour may be observed in the process outputs, especially where multiple recycle loops exist in the process network or in the case of a cascade control with the primary loop cut off[1].

    The PID controller, the most widely used controller over the decades due to its robustness and simplicity, is however not well understood for plants with a resonating response. Some tuning techniques based on heuristic knowledge have been proposed by Åström and Hägglund[2]; nevertheless, it has been observed that such controllers seldom work well without human intervention[3], and one may have to tune the PID controller manually through trial and error while implementing it in practice. One reason for the failure of PID controllers on resonating systems may be attributed to the classical design procedure, which leaves no scope for explicit use of an exact model of the process. Model based control strategies, such as internal model control (IMC) and model predictive control (MPC), might be good alternatives to classical control strategies, as these control systems embed the process model in the control algorithm. They may not always substitute for conventional control schemes; rather, they act as an aid to improve traditional control strategies[4]. In fact, Rivera et al.[5] proved that the IMC design procedure, in certain situations, leads to a traditional PID controller with a feedback loop. The design method of the IMC based PID controller is well established and is available even in textbooks[6]. On the other hand, though originally developed to meet the specialized control needs of power plants and petroleum refineries, MPC technology can now be found in a wide variety of application areas, including chemicals, food processing, automotive and aerospace applications.

    MPC uses an explicit dynamic model of the plant to predict the future output by simulating input manipulations, and it optimizes an appropriate objective function to calculate the best control action (i.e., the best set of input manipulations) for the actual process. The process/model mismatch, arising from the actual implementation of the control action, is fed back to the MPC algorithm before the next set of best control actions is calculated. Owing to its immense potential, researchers in both academia and industry have shown great interest in MPC, which has resulted in various developments of its techniques over the years[7]. Nevertheless, there hardly exists any research report that explores the applicability of MPC to resonant systems. As the name suggests, modelling is a very important part of a model predictive control scheme, and it is perhaps the absence of appropriate modelling techniques for resonant systems that has kept researchers from studying MPC for such systems. This work is mainly focussed on the application of MPC to resonant systems.
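    The receding-horizon mechanism described above can be illustrated with a minimal sketch: a hypothetical first-order model and a deliberately mismatched plant, with a single control move held over the prediction horizon. All names and numbers here are illustrative; this is not the Kautz-based formulation developed later in the paper.

```python
import numpy as np

# Hypothetical first-order model y(k+1) = a*y(k) + b*u(k) and a slightly
# different "true" plant, to illustrate the process/model mismatch feedback.
a_m, b_m = 0.90, 0.10          # model parameters
a_p, b_p = 0.85, 0.12          # plant parameters (mismatched on purpose)
P, lam, sp = 10, 0.01, 1.0     # prediction horizon, move penalty, setpoint

# Contribution of a constant future input u to the i-step model prediction
g = np.array([b_m * (1 - a_m**i) / (1 - a_m) for i in range(1, P + 1)])

y_p = y_m = 0.0                # model and plant outputs
for k in range(200):
    d = y_m - y_p              # process/model mismatch, held constant over horizon
    # Unconstrained least-squares solution for one move u held over P steps:
    #   minimize sum_i (sp - (a_m**i * y_p + g_i * u + d))**2 + lam * u**2
    free = np.array([a_m**i for i in range(1, P + 1)]) * y_p + d
    u = g @ (sp - free) / (g @ g + lam)
    y_m = a_p * y_m + b_p * u  # apply only the first move to the plant
    y_p = a_m * y_p + b_m * u  # propagate the model in parallel
```

Because the mismatch d is fed back at every step, the closed loop settles near the setpoint despite the model error.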

    Modelling of a process consists of formulating a set of mathematical equations which describe the dynamic input/output behaviour of the process[8]. Modelling can either be knowledge based (mechanistic modelling) or experience based (black-box modelling). A mechanistic model needs physical insight into the system, including differential equations for the balance of states (mass, energy or momentum) and algebraic equations for the thermodynamic and/or chemical equilibrium of a process. However, in most real cases it is difficult to obtain complete knowledge of the system, so one resorts to a black-box model, which consists of a recursive filter whose present output values are expressed as functions of past values of outputs and inputs. These functions (linear, polynomial functions, etc.) are associated with a set of parameters. The input/output model structure (i.e., the filter function) is fixed a priori and the parameters of the model are extracted through optimization using available input/output data. Simplicity of the model structure and parsimony of the model (a small number of parameters to be evaluated) are the keys to efficient black-box modelling of a process.

    In recent years, the use of orthogonal basis functions (OBF) in the system identification of dynamic processes has increased appreciably[9]. The main reason for using OBF in such areas is that the corresponding models are usually parsimonious in nature and thereby have simpler solutions. A model based on OBFs incorporates approximate knowledge of the dominant dynamics of the process into the system identification procedure. With the help of this knowledge, the number of free design parameters can be set, which reduces the variance of their estimates and increases the robustness and accuracy of the model. The simplest, and also the most popular, structure based upon OBF is the finite impulse response (FIR) model. FIR modelling corresponds to estimating the coefficients of a partial expansion in terms of the standard OBFs $z^{-k}$. The main advantage of FIR modelling is that its parameters appear linearly in the model structure, so the system identification problem simplifies to a linear regression problem. However, when FIR is used to approximate a system with a long impulse response, the minimum number of delays required to provide an acceptable approximation is quite high; in other words, the parsimony of this OBF is lost in such cases. This is due to the fact that the time domain equivalents of the basis functions $z^{-k}$ are the pulse functions $\delta \left(t-k\right) $, whereas, in general, the impulse response of a system shows exponential decay. To overcome this problem of non-parsimony, a special type of model structure, viz. the Takenaka-Malmquist basis[10], was introduced, which consists of sums of orthonormal basis functions that also exhibit exponential decay. Nevertheless, Takenaka-Malmquist constructions are not very popular due to their complex structure. Much research has instead been devoted to a specific case of this generalized structure, best known as Laguerre functions.
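    As a concrete illustration of the point that FIR identification reduces to linear regression, the following sketch recovers a short impulse response from noise-free input/output data by least squares. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
g_true = np.array([0.0, 0.5, 0.3, 0.15, 0.05])   # short, decaying impulse response
u = rng.standard_normal(200)                      # exciting input signal
y = np.convolve(u, g_true)[:200]                  # noise-free output

# Regressor matrix: column k holds u delayed by k samples (the basis z^{-k})
K = len(g_true)
Phi = np.column_stack(
    [np.concatenate([np.zeros(k), u[:200 - k]]) for k in range(K)]
)

# The FIR coefficients appear linearly, so ordinary least squares recovers them
g_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With noise-free data and a persistently exciting input, the estimate matches the true coefficients to numerical precision; with noisy data the same regression gives the least-squares estimate.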
Classical orthonormal Laguerre functions, as explained in [11], were originally introduced by the French mathematician E. Laguerre back in 1879. The recursive nature of the Laguerre construction makes it easy to compute, and the popularity of the Laguerre filter owes to its simplicity, as it can be parameterized by a single real-valued pole. The Laguerre basis is preferable for representing well damped dynamic systems. Systems with poorly damped dynamics, however, cannot be accurately described by Laguerre functions, i.e., these functions are not appropriate for approximating signals with strong oscillatory behaviour. This drawback has led to an increasing interest in the Kautz functions.

    Kautz filters are a more generalized structure than Laguerre filters, and a model developed with these filters deals with complex poles, thus facilitating efficient modelling of resonant systems. Though Kautz functions were introduced by Kautz in 1954[13], very few works have used them because of their complexity. The aim of this paper is twofold: firstly, to establish the efficacy of Kautz modelling for resonant systems having linear and mildly nonlinear characteristics, and secondly, to develop an MPC based on the Kautz model that can successfully be used with those resonant systems. This paper proposes a suitable state space form of the Kautz filter along the lines of [12] and thereby develops an MPC algorithm using this state space model. To demonstrate the efficacy of this control scheme, two case studies, viz. a linear second order underdamped process and a mildly nonlinear magnetic ball suspension system, have been carried out. Results are compared with conventional control schemes, viz. IMC or the IMC based PID controller.

  • The problem of orthogonalizing a set of discrete time exponential functions can be summarized[14] as follows:

    Theorem 1. The sequence of functions $\Psi _{j}\left(z\right) $

    $\begin{align} \Psi _{2n-1}\left( z\right) =&C_{1}^{\left( n\right) }\left\{ 1-a_{1}^{\left( n\right) }z\right\} \Gamma ^{\left( n\right) }\left( z\right) \end{align}$

    (1)

    $\begin{align} \Psi _{2n}\left( z\right) =&C_{2}^{\left( n\right) }\left\{ 1-a_{2}^{\left( n\right) }z\right\} \Gamma ^{\left( n\right) }\left( z\right) \end{align}$

    (2)

    for $\forall n=1, 2, \cdots$, where

    $\begin{align} \Gamma ^{\left( n\right) }\left( z\right) =&\dfrac{\prod\limits_{j=1}^{n-1} \left( 1-\beta _{j}z\right) \left( 1-\beta _{j}^{\ast }z\right) }{% \prod\limits_{j=1}^{n}\left( z-\beta _{j}\right) \left( z-\beta _{j}^{\ast }\right) } \end{align}$

    (3)

    $\begin{align} 0 =&\left\{ 1+a_{1}^{\left( n\right) }a_{2}^{\left( n\right) }\right\} \left\{ 1+\beta _{n}\beta _{n}^{\ast }\right\}- \nonumber \\ &\left\{ a_{1}^{\left( n\right) }+a_{2}^{\left( n\right) }\right\} \left\{ \beta _{n}+\beta _{n}^{\ast }\right\} \end{align}$

    (4)

    $\begin{align} C_{1}^{\left( n\right) } =&\left[\dfrac{\left( 1-\beta _{n}^{2}\right) \left( 1-\beta _{n}^{\ast 2}\right) \left( 1-\beta _{n}\beta _{n}^{\ast }\right) }{% \begin{array}{c} \left\{ 1+\left( a_{1}^{\left( n\right) }\right) ^{2}\right\} \left\{ 1+\beta _{n}\beta _{n}^{\ast }\right\}-\\ 2a_{1}^{\left( n\right) }\left\{ \beta _{n}+\beta _{n}^{\ast }\right\} \end{array} }\right] ^{\frac{1}{2}} \end{align}$

    (5)

    $\begin{align} C_{2}^{\left( n\right) } =&\left[\dfrac{\left( 1-\beta _{n}^{2}\right) \left( 1-\beta _{n}^{\ast 2}\right) \left( 1-\beta _{n}\beta _{n}^{\ast }\right) }{% \begin{array}{c} \left\{ 1+\left( a_{2}^{\left( n\right) }\right) ^{2}\right\} \left\{ 1+\beta _{n}\beta _{n}^{\ast }\right\}-\\ 2a_{2}^{\left( n\right) }\left\{ \beta _{n}+\beta _{n}^{\ast }\right\} \end{array} }\right] ^{\frac{1}{2}} \end{align}$

    (6)

    form an orthonormal set, i.e.,

    $\begin{align} \delta _{jl}=\dfrac{1}{2\pi i}\oint \Psi _{j}\left( z\right) \Psi _{l}\left( z^{-1}\right) \dfrac{{\rm d}z}{z} \end{align}$

    (7)

    where $\delta _{jl}$ is the Kronecker delta function and $\left\{ \beta _{n}, \beta _{n}^{\ast }\right\} $ are pairs of complex numbers in the region $\left\vert \beta _{n}\right\vert < 1$. The functions $\left\{ \Psi _{j}\left(z\right), \forall j=1, 2, \cdots\right\} $ are called discrete Kautz functions.
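    The orthonormality in (7) can be checked numerically by sampling impulse responses. The sketch below uses the common two-parameter form of the discrete Kautz functions (parameters b and c with |b| < 1, |c| < 1, equivalent to choosing the complex pole pair {β_n, β_n*} through z² + b(c−1)z − c = 0) rather than the C/a parameterization of Theorem 1, and verifies that the Gram matrix of the first four functions is the identity. Parameter values are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def kautz_impulse_bank(b, c, N, n_samp=400):
    """Impulse responses of the first 2N discrete Kautz functions
    (two-parameter form; the pole pair solves z^2 + b(c-1)z - c = 0)."""
    s1 = np.sqrt((1 - c**2) * (1 - b**2))      # gain of the constant-numerator branch
    s2 = np.sqrt(1 - c**2)                     # gain of the (z - b)-numerator branch
    den = np.array([1.0, b * (c - 1), -c])     # 1 + b(c-1) z^-1 - c z^-2
    ap = np.array([-c, b * (c - 1), 1.0])      # all-pass numerator (reversed den)
    delta = np.zeros(n_samp)
    delta[0] = 1.0
    bank, num_ap, d = [], np.array([1.0]), np.array([1.0])
    for _ in range(N):
        d = np.convolve(d, den)                # denominator gains one more pole pair
        bank.append(lfilter(np.convolve([0.0, 0.0, s1], num_ap), d, delta))
        bank.append(lfilter(np.convolve([0.0, s2, -s2 * b], num_ap), d, delta))
        num_ap = np.convolve(num_ap, ap)       # cascade the all-pass factor
    return np.array(bank)

H = kautz_impulse_bank(b=0.3, c=-0.4, N=2)     # complex pole pair -> resonant basis
G = H @ H.T                                    # Gram matrix of the 4 functions
```

Because the pole modulus is below one, truncating the impulse responses at a few hundred samples leaves the inner products accurate to machine precision.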

  • A stable transfer function based on the linear combination of discrete Kautz functions can be expressed as[14]

    $\begin{equation*} G\left( z\right) =\sum\limits_{n=1}^{2N}\theta _{n}\Psi _{n}\left( z\right) \end{equation*}$

    where input-output relations can be written as

    $ \begin{align} y\left( k\right) =\Theta \times \Phi \left( k\right) \label{eq0} \end{align}$

    (8)

    where

    $\begin{align} \Phi \left( k\right) =&\left[ \begin{array}{l} \varphi _{1}\left( k\right) \\ \varphi _{3}\left( k\right) \\ \cdots \\ \varphi _{2N-1}\left( k\right) \\ \varphi _{2}\left( k\right) \\ \varphi _{4}\left( k\right) \\ \cdots \\ \varphi _{2N}\left( k\right) \end{array} \right] = \notag\\ &\left[ \begin{array}{l} \Psi _{1}\left( q\right) \\ \Psi _{3}\left( q\right) \\ \cdots \\ \Psi _{2N-1}\left( q\right) \\ \Psi _{2}\left( q\right) \\ \Psi _{4}\left( q\right) \\ \cdots \\ \Psi _{2N}\left( q\right) \end{array} \right] \times u\left( k\right) \label{eq01a} \\ \end{align}$

    (9)

    $\begin{align} \Theta =&\left[ \begin{array}{l} \theta _{1} \\ \theta _{3} \\ \cdots \\ \theta _{2N-1} \\ \theta _{2} \\ \theta _{4} \\ \cdots \\ \theta _{2N} \end{array} \right] ^{\rm T} \end{align}$

    (10)

    and q is the shift operator. The parameter vector $\Theta $ can be estimated using a least squares approach. In this paper, we develop the model only for an SISO process; nevertheless, the technique can be extended to MIMO processes too. The state space representation of the Kautz model is given as follows.

    Define the following states

    $\begin{align} x_{2n-1}\left( k\right) =&\left\{ \begin{array}{ll} x_{odd, 1}\times u\left( k\right), & \text{for }n=1 \\ x_{odd, n}\times x_{2n-3}\left( k\right), & \text{for }n\geq 2% \end{array} \right. \label{eq9} \end{align}$

    (11)

    $\begin{align} x_{2n}\left( k\right) =&% \begin{array}{ll} x_{2n-1}\left( k-1\right) ; & \text{for }n\geq 1% \end{array} \label{eq10} \\ x_{odd, 1} =&\dfrac{q}{q^{2}+h_{1}^{\left( 1\right) }q+h_{2}^{\left( 1\right) }} \notag \\ x_{odd, n} =&\dfrac{h_{2}^{\left( n-1\right) }q^{2}+h_{1}^{\left( n-1\right) }q+1}{q^{2}+h_{1}^{\left( n\right) }q+h_{2}^{\left( n\right) }} \notag \end{align}$

    (12)

    where

    $\begin{align} h_{1}^{\left( n\right) } =&-\left( \beta _{n}+\beta _{n}^{\ast }\right) \end{align}$

    (13)

    $\begin{align} h_{2}^{\left( n\right) } =&\beta _{n}\beta _{n}^{\ast }. \end{align}$

    (14)

    The components of regression vector in (9) are expressed as

    $\begin{align} \varphi _{2n-1}\left( k\right) =&C_{1}^{\left( n\right) }\left[ x_{2n}\left( k\right)-a_{1}^{\left( n\right) }x_{2n-1}\left( k\right) % \right] \label{eq13} \end{align}$

    (15)

    $\begin{align} \varphi _{2n}\left( k\right) =&C_{2}^{\left( n\right) }\left[ x_{2n}\left( k\right)-a_{2}^{\left( n\right) }x_{2n-1}\left( k\right) \right] \label{eq14} \end{align}$

    (16)

    where

    $\begin{align} C_{1}^{\left( n\right) } =&\sqrt{\dfrac{\left\{ 1-h_{1}^{\left( n\right) 2}+2h_{2}^{\left( n\right) }+h_{2}^{\left( n\right) 2}\right\} \left( 1-h_{2}^{\left( n\right) }\right) }{\left( 1+a_{1}^{\left( n\right) 2}\right) \left( 1+h_{2}^{\left( n\right) }\right) +2a_{1}^{\left( n\right) }h_{1}^{\left( n\right) }}} \label{eq14a} \end{align}$

    (17)

    $\begin{align} C_{2}^{\left( n\right) } =&\sqrt{\dfrac{\left\{ 1-h_{1}^{\left( n\right) 2}+2h_{2}^{\left( n\right) }+h_{2}^{\left( n\right) 2}\right\} \left( 1-h_{2}^{\left( n\right) }\right) }{\left( 1+a_{2}^{\left( n\right) 2}\right) \left( 1+h_{2}^{\left( n\right) }\right) +2a_{2}^{\left( n\right) }h_{1}^{\left( n\right) }}}. \label{eq14b} \end{align}$

    (18)

    The states in (11) and (12) can further be arranged in a vector form as

    $\begin{align} x_{odd}\left( k\right) =&\left[ \begin{array}{c} x_{1}\left( k\right) \\ x_{3}\left( k\right) \\ x_{5}\left( k\right) \\ \vdots \\ x_{2n-1}\left( k\right) \end{array} \right] \label{eq17a} \end{align}$

    (19)

    $\begin{align} x_{even}\left( k\right) =&\left[ \begin{array}{c} x_{2}\left( k\right) \\ x_{4}\left( k\right) \\ x_{6}\left( k\right) \\ \vdots \\ x_{2n}\left( k\right) \end{array} \right] =\left[ \begin{array}{c} x_{1}\left( k-1\right) \\ x_{3}\left( k-1\right) \\ x_{5}\left( k-1\right) \\ \vdots \\ x_{2n-1}\left( k-1\right) \end{array} \right] = \notag \\ &x_{odd}\left( k-1\right) \label{eq17b} \end{align}$

    (20)

    which would yield the complete state space representation as

    $\begin{align} x_{odd}\left( k\right) =A_{1}x_{odd}\left( k-1\right) +A_{2}x_{odd}\left( k-2\right) +Bu\left( k-1\right) \label{eq17} \end{align}$

    (21)

    where

    $\begin{align} A_{1} =&\left[ \begin{array}{lllll} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \end{array} \right] \label{eq171} \end{align}$

    (22)

    $\begin{align} A_{2} =&\left[ \begin{array}{lllll} a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \end{array} \right] \label{eq172} \end{align}$

    (23)

    $ \begin{align} a_{11} =&\left[ \begin{array}{c} -h_{1}^{\left( 1\right) } \\ h_{1}^{\left( 1\right) }\left( 1-h_{2}^{\left( 1\right) }\right) \\ h_{1}^{\left( 1\right) }h_{2}^{\left( 2\right) }\left( 1-h_{2}^{\left( 1\right) }\right) \\ \vdots \\ h_{1}^{\left( 1\right) }h_{2}^{\left( 2\right) }h_{2}^{\left( 3\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{\left( 1\right) }\right) \end{array} \right] \notag \\ a_{12} =&\left[ \begin{array}{c} 0 \\ -h_{1}^{\left( 2\right) } \\ h_{1}^{\left( 2\right) }\left( 1-h_{2}^{\left( 2\right) }\right) \\ \vdots \\ h_{1}^{\left( 2\right) }h_{2}^{\left( 3\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{\left( 2\right) }\right) \end{array} \right] \notag \\ a_{13} =&\left[ \begin{array}{c} 0 \\ 0 \\ -h_{1}^{\left( 3\right) } \\ \vdots \\ h_{1}^{\left( 3\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{\left( 3\right) }\right) \end{array} \right] \notag \\ a_{1n} =&\left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ \vdots \\ -h_{1}^{\left( n\right) } \end{array} \right] \notag \\ a_{21} =&\left[ \begin{array}{c} -h_{2}^{\left( 1\right) } \\ \left( 1-h_{2}^{2\left( 1\right) }\right) \\ h_{2}^{\left( 2\right) }\left( 1-h_{2}^{2\left( 1\right) }\right) \\ \vdots \\ h_{2}^{\left( 2\right) }h_{2}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{2\left( 1\right) }\right) \end{array} \right] \notag \\ a_{22} =&\left[ \begin{array}{c} 0 \\ -h_{2}^{\left( 2\right) } \\ \left( 1-h_{2}^{2\left( 2\right) }\right) \\ \vdots \\ h_{2}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{2\left( 2\right) }\right) \end{array} \right] \notag \\ a_{23} =&\left[ \begin{array}{c} 0 \\ 0 \\ -h_{2}^{\left( 3\right) } \\ \vdots \\ h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{2\left( 3\right) }\right) \end{array} \right] \notag \\ a_{2n} =&\left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ \vdots \\ -h_{2}^{\left( n\right) } \end{array} 
\right] \notag \\ B =&\left[ \begin{array}{c} 1 \\ h_{2}^{\left( 1\right) } \\ h_{2}^{\left( 1\right) }h_{2}^{\left( 2\right) } \\ \vdots \\ h_{2}^{\left( 1\right) }h_{2}^{\left( 2\right) }h_{2}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) } \end{array} \right].\label{eq173} \end{align}$

    (24)

    The regressors in (15) and (16) can be written as

    $\begin{align} \varphi _{odd}\left( k\right) =&\overline{C}_{1}\circ \left[ x_{even}\left( k\right)-\overline{a}_{1}\circ x_{odd}\left( k\right) \right] \label{eq18} \end{align}$

    (25)

    $\begin{align} \varphi _{even}\left( k\right) =&\overline{C}_{2}\circ \left[ x_{even}\left( k\right)-\overline{a}_{2}\circ x_{odd}\left( k\right) \right] \label{eq19} \end{align}$

    (26)

    where $\circ $ denotes the Schur product and

    $\begin{align} &\varphi _{odd}\left( k\right) =\left[ \begin{array}{c} \varphi _{1}\left( k\right) \\ \varphi _{3}\left( k\right) \\ \varphi _{5}\left( k\right) \\ \cdots \\ \varphi _{\left( 2n-1\right) }\left( k\right) \end{array} \right] \end{align}$

    (27)

    $\begin{align} &\varphi _{even}\left( k\right) =\left[ \begin{array}{c} \varphi _{2}\left( k\right) \\ \varphi _{4}\left( k\right) \\ \varphi _{6}\left( k\right) \\ \cdots \\ \varphi _{\left( 2n\right) }\left( k\right) \end{array} \right] \end{align}$

    (28)

    $\begin{align} &\overline{C}_{1} =\left[ \begin{array}{lllll} C_{1}^{\left( 1\right) } & C_{1}^{\left( 2\right) } & C_{1}^{\left( 3\right) } & \cdots & C_{1}^{\left( n\right) } \end{array} \right] ^{\rm T} \end{align}$

    (29)

    $\begin{align} &\overline{C}_{2} =\left[ \begin{array}{lllll} C_{2}^{\left( 1\right) } & C_{2}^{\left( 2\right) } & C_{2}^{\left( 3\right) } & \cdots & C_{2}^{\left( n\right) } \end{array} \right] ^{\rm T} \end{align}$

    (30)

    $\begin{align} &\overline{a}_{1} =\left[ \begin{array}{lllll} a_{1}^{\left( 1\right) } & a_{1}^{\left( 2\right) } & a_{1}^{\left( 3\right) } & \cdots & a_{1}^{\left( n\right) } \end{array} \right] ^{\rm T} \end{align}$

    (31)

    $\begin{align} &\overline{a}_{2} =\left[ \begin{array}{lllll} a_{2}^{\left( 1\right) } & a_{2}^{\left( 2\right) } & a_{2}^{\left( 3\right) } & \cdots & a_{2}^{\left( n\right) } \end{array} \right] ^{\rm T}. \label{eq19c} \end{align}$

    (32)

    Detailed derivations of (11)-(32) are given in Appendix A. Equations (8), (9), (20), (21), (25) and (26) can be written in incremental form as

    $\begin{align} \delta y\left( k\right) =&\delta \Phi \left( k\right) \times \Theta ^{\rm T} \label{eq20c} \end{align}$

    (33)

    $\begin{align} \delta \Phi \left( k\right) =&\left[ \begin{array}{c} \delta \varphi _{1}\left( k\right) \\ \delta \varphi _{3}\left( k\right) \\ \cdots \\ \delta \varphi _{2n-1}\left( k\right) \\ \delta \varphi _{2}\left( k\right) \\ \delta \varphi _{4}\left( k\right) \\ \cdots \\ \delta \varphi _{2n}\left( k\right) \end{array} \right] ^{\rm T} \label{eq20d} \end{align}$

    (34)

    $\begin{align} \delta \varphi _{odd}\left( k\right) =&\overline{C}_{1}\circ \left[\delta x_{even}\left( k\right)-\overline{a}_{1}\circ \delta x_{odd}\left( k\right) % \right] \label{eq19a} \end{align}$

    (35)

    $\begin{align} \delta \varphi _{even}\left( k\right) =&\overline{C}_{2}\circ \left[\delta x_{even}\left( k\right)-\overline{a}_{2}\circ \delta x_{odd}\left( k\right) % \right] \label{eq19b} \end{align}$

    (36)

    $\begin{align} \delta x_{odd}\left( k\right) =&A_{1}\delta x_{odd}\left( k-1\right) +A_{2}\delta x_{odd}\left( k-2\right)+ \notag \\ &B\delta u\left( k-1\right) \label{eq20} \end{align}$

    (37)

    $\begin{align} \delta x_{even}\left( k\right) =&\delta x_{odd}\left( k-1\right) \end{align}$

    (38)

    where

    $\begin{align} \delta y\left( k\right) =&y\left( k\right) -y\left( k-1\right) \label{eq19d} \end{align}$

    (39)

    $\begin{align} \delta x_{i}\left( k\right) =&x_{i}\left( k\right) -x_{i}\left( k-1\right), \quad i\text{ is an integer} \end{align}$

    (40)

    $\begin{align} \delta u\left( k\right) =&u\left( k\right) -u\left( k-1\right). \label{eq20b} \end{align}$

    (41)

    Now, (33)-(41) may be used to develop a Kautz model of a resonant system and thereby used in the formulation of an MPC algorithm.
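    For N = 2, the state space construction in (21)-(24) can be cross-checked against the filter definitions (11) and (12): the state recursion must reproduce x₁ and x₃ obtained by direct cascaded filtering. A minimal sketch (the pole pairs and input are illustrative):

```python
import numpy as np
from scipy.signal import lfilter

# Two illustrative complex pole pairs beta_1, beta_2 (|beta| < 1)
beta = [0.5 + 0.6j, 0.4 + 0.5j]
h1 = [-(b + np.conj(b)).real for b in beta]   # h1^(n) = -(beta_n + beta_n*)
h2 = [(b * np.conj(b)).real for b in beta]    # h2^(n) = beta_n beta_n*

rng = np.random.default_rng(1)
u = rng.standard_normal(150)

# Direct filtering per (11)-(12):
#   x1 = q/(q^2 + h1^(1) q + h2^(1)) u
#   x3 = (h2^(1) q^2 + h1^(1) q + 1)/(q^2 + h1^(2) q + h2^(2)) x1
x1 = lfilter([0.0, 1.0], [1.0, h1[0], h2[0]], u)
x3 = lfilter([h2[0], h1[0], 1.0], [1.0, h1[1], h2[1]], x1)

# State space recursion per (21)-(24) with N = 2
A1 = np.array([[-h1[0], 0.0],
               [h1[0] * (1 - h2[0]), -h1[1]]])
A2 = np.array([[-h2[0], 0.0],
               [1 - h2[0]**2, -h2[1]]])
B = np.array([1.0, h2[0]])

x = np.zeros((150, 2))                        # rows hold x_odd(k) = [x1(k), x3(k)]
for k in range(150):
    xm1 = x[k - 1] if k >= 1 else np.zeros(2)
    xm2 = x[k - 2] if k >= 2 else np.zeros(2)
    um1 = u[k - 1] if k >= 1 else 0.0
    x[k] = A1 @ xm1 + A2 @ xm2 + B * um1
```

The two state trajectories coincide to numerical precision, confirming that (21)-(24) realizes the cascaded Kautz filters.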

  • Let us consider the following equalities which will be useful for derivation of predictors.

    $\begin{align} \left. \begin{array}{lll} Q_{a1} & = & A_{1} \\ Q_{a2} & = & A_{1}Q_{a1}+A_{2} \\ Q_{a3} & = & A_{1}Q_{a2}+A_{2}Q_{a1} \\ \vdots & \vdots & \vdots \\ Q_{ai} & = & A_{1}Q_{a\left( i-1\right) }+A_{2}Q_{a\left( i-2\right) } \\ \vdots & \vdots & \vdots \end{array} \right\} \label{equ} \end{align}$

    (42)

    and

    $\begin{align} \left. \begin{array}{lll} Q_{b1} & = & A_{2} \\ Q_{b2} & = & A_{1}Q_{b1} \\ Q_{b3} & = & A_{1}Q_{b2}+A_{2}Q_{b1} \\ \vdots & \vdots & \vdots \\ Q_{bi} & = & A_{1}Q_{b\left( i-1\right) }+A_{2}Q_{b\left( i-2\right) } \\ \vdots & \vdots & \vdots \end{array} \right\} . \label{equ1} \end{align}$

    (43)
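    The recursions (42) and (43) can be sanity-checked numerically: for the autonomous part of (21), the i-step state must equal Q_ai x_odd(k) + Q_bi x_odd(k−1). The sketch below seeds the two sequences with Q_a0 = I and Q_b0 = 0 (so that Q_b2 = A_1 Q_b1) and compares against direct simulation; the matrices are illustrative stand-ins for the blocks in (22)-(24).

```python
import numpy as np

# Illustrative stable 2x2 blocks playing the role of A1, A2 in (21)
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[-0.2, 0.0], [0.1, -0.1]])

# Recursions (42)-(43), seeded with Q_a0 = I and Q_b0 = 0
Qa = [np.eye(2), A1]
Qb = [np.zeros((2, 2)), A2]
for i in range(2, 11):
    Qa.append(A1 @ Qa[i - 1] + A2 @ Qa[i - 2])
    Qb.append(A1 @ Qb[i - 1] + A2 @ Qb[i - 2])

# Direct simulation of x(k+1) = A1 x(k) + A2 x(k-1), zero input
x_prev = np.array([1.0, -1.0])    # x_odd(k-1)
x_curr = np.array([0.5, 2.0])     # x_odd(k)
xs = [x_curr]
a, b = x_curr, x_prev
for i in range(1, 11):
    a, b = A1 @ a + A2 @ b, a
    xs.append(a)

# The autonomous part of the i-step prediction (44) matches the simulation
pred = [Qa[i] @ x_curr + Qb[i] @ x_prev for i in range(11)]
```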

    Then it is easy to derive the i-step ahead prediction of the states in (21), over a prediction horizon P and a control horizon M, as

    $\begin{align} \delta x_{odd}\left( k+i\right) =Q_{ai}\delta x_{odd}\left( k\right) +Q_{bi}\delta x_{odd}\left( k-1\right) +{Q}_{i}\delta U_{i} \label{eqdelx} \end{align}$

    (44)

    where

    $\begin{align} {Q}_{i} =&\left[ \begin{array}{c} Q_{a\left( i-1\right) } \\ Q_{a\left( i-2\right) } \\ Q_{a\left( i-3\right) } \\ \cdots \\ Q_{a1} \\ I% \end{array} \right] \times B,\text{ for }i\leq M \label{Imat} \end{align}$

    (45)

    $\begin{align} &\left[ \begin{array}{c} Q_{a\left( i-1\right) } \\ Q_{a\left( i-2\right) } \\ Q_{a\left( i-3\right) } \\ \cdots \\ Q_{a\left( i-M+1\right) } \\ Q_{a\left( i-M\right) } \end{array} \right] \times B, \text{ for }i>M \label{Mp1} \end{align}$

    (46)

    $\begin{align} \delta U_{i} =&\left[ \begin{array}{c} \delta u\left( k\right) \\ \delta u\left( k+1\right) \\ \delta u\left( k+2\right) \\ \cdots \\ \delta u\left( k+i-1\right) \end{array} \right], \text{ for }i\leq M\label{dUi} \end{align}$

    (47)

    $\begin{align} &\left[ \begin{array}{c} \delta u\left( k\right) \\ \delta u\left( k+1\right) \\ \delta u\left( k+2\right) \\ \cdots \\ \delta u\left( k+M-1\right) \end{array} \right], \text{ for }i>M \label{delUi} \end{align}$

    (48)

    and I in (45) is an identity matrix. The reader may refer to Appendix B for the complete derivation of (44)-(52). Equation (44) can be used in (34)-(36) to obtain the following predictions of the regressors

    $\begin{align} \delta \varphi _{odd}\left( k+i\right) =&\overline{C}_{1}\circ \left[ \begin{array}{c} \delta x_{even}\left( k+i\right)-\\ \overline{a}_{1}\circ \delta x_{odd}\left( k+i\right) \end{array} \right] \end{align}$

    (49)

    $\begin{align} \delta \varphi _{even}\left( k+i\right) =&\overline{C}_{2}\circ \left[ \begin{array}{c} \delta x_{even}\left( k+i\right)-\\ \overline{a}_{2}\circ \delta x_{odd}\left( k+i\right) \end{array} \right] \end{align}$

    (50)

    $\begin{align} \delta \Phi \left( k+i\right) =&\left[ \begin{array}{c} \delta \varphi _{1}\left( k+i\right) \\ \delta \varphi _{3}\left( k+i\right) \\ \cdots \\ \delta \varphi _{2n-1}\left( k+i\right) \\ \delta \varphi _{2}\left( k+i\right) \\ \delta \varphi _{4}\left( k+i\right) \\ \cdots \\ \delta \varphi _{2n}\left( k+i\right) \end{array} \right] \end{align}$

    (51)

    the algebraic manipulation of which would lead to

    $\begin{align} \delta \Phi \left( k+i\right) =&\mu _{1i}\delta x_{odd}\left( k\right) +\mu _{2i}\delta x_{odd}\left( k-1\right) + \notag \\ &\mu _{3i}\delta U_{\left( i-1\right) }+\mu _{4i}\delta U_{i} \label{eq23a} \end{align}$

    (52)

    where $\mu _{1i}, \mu _{2i}, \mu _{3i}$ and $\mu _{4i}$ are $2N\times N, 2N\times N, 2N\times \left(i-1\right) $ and $2N\times i$ matrices.

    $\begin{align} \mu _{1i} =&\left[ \begin{array}{l} \overline{C}_{1}\circ \left\{ Q_{a\left( i-1\right) }-\overline{a}_{1}\circ Q_{ai}\right\} \\ \overline{C}_{2}\circ \left\{ Q_{a\left( i-1\right) }-\overline{a}_{2}\circ Q_{ai}\right\} \end{array} \right] \end{align}$

    (53)

    $\begin{align} \mu _{2i} =&\left[ \begin{array}{l} \overline{C}_{1}\circ \left\{ Q_{b\left( i-1\right) }-\overline{a}_{1}\circ Q_{bi}\right\} \\ \overline{C}_{2}\circ \left\{ Q_{b\left( i-1\right) }-\overline{a}_{2}\circ Q_{bi}\right\} \end{array} \right] \end{align}$

    (54)

    $\begin{align} \mu _{3i} =&\left[ \begin{array}{l} \overline{C}_{1}\circ {Q}_{\left( i-1\right) } \\ \overline{C}_{2}\circ {Q}_{\left( i-1\right) } \end{array} \right] \end{align}$

    (55)

    $\begin{align} \mu _{4i} =&\left[ \begin{array}{l} \overline{C}_{1}\circ \overline{a}_{1}\circ {Q}_{i} \\ \overline{C}_{2}\circ \overline{a}_{2}\circ {Q}_{i} \end{array} \right] \end{align}$

    (56)

    and (52) can be re-written as

    $\begin{align} \delta \Phi \left( k+i\right) =&\mu _{1i}\delta x_{odd}\left( k\right)+\mu _{2i}\delta x_{odd}\left( k-1\right) + \notag \\ &\mu _{ui}\delta U_{M} =\notag \\ &\mu _{xi}+\mu _{ui}\delta U_{M} \label{eq24} \end{align}$

    (57)

    where $\mu _{xi}$ is a $2N\times 1$ matrix

    $\begin{align} \mu _{xi}=\mu _{1i}\delta x_{odd}\left( k\right) +\mu _{2i}\delta x_{odd}\left( k-1\right) \end{align}$

    (58)

    and $\mu _{ui}$ is a $2N\times M$ matrix, composed of $\mu _{3i}$ and $\mu _{4i}$ in such a manner that (57) holds. From (8) and (39) the $\left(k+i\right) $-{th} prediction of output is

    $\begin{align} y_{p}\left( k+i\right) =&y_{p}\left( k\right) +\sum_{j=1}^{i}\delta y_{p}\left( k+j\right) = \notag \\ &y_{p}\left( k\right) +\Theta \sum_{j=1}^{i}\left( \mu _{xj}+\mu _{uj}\delta U_{M}\right). \end{align}$

    (59)

    The predictive control law is in general obtained by minimization of the following criterion:

    $\begin{align} J =&\sum_{i=1}^{P}\left[y_{sp}\left( k+i\right)-\left\{ y_{p}\left( k+i\right) +d\left( k+i\right) \right\} \right] ^{2}+ \notag \\ &\sum_{i=0}^{M-1}\lambda _{i+1}\left[\delta u\left( k+i\right) \right] ^{2} \label{eq25} \end{align}$

    (60)

    where $d\left(k+i\right) $ is the process/model mismatch at the $\left(k+i\right)$-th prediction. It is customary to assume

    $\begin{align} d\left( k+i\right) =d\left( k\right) =y_{m}\left( k\right) -y_{p}\left( k\right) \end{align}$

    (61)

    where $y_{m}\left(k\right) $ is the process measurement at k-th instant. Equation (60) can further be simplified as

    $\begin{align} J =&\sum_{i=1}^{P}\left[ \begin{array}{c} y_{sp}\left( k+i\right)- \\ \left\{ \begin{array}{c} y_{p}\left( k\right) + \\ \Theta \sum_{j=1}^{i}\left( \begin{array}{c} \mu _{xj}+ \\ \mu _{uj}\delta U_{M} \end{array} \right) +\\ \left[y_{m}\left( k\right)-y_{p}\left( k\right) \right] \end{array} \right\} \end{array} \right] ^{2} + \notag \\ &\delta U_{M}^{\rm T}R\delta U_{M}= \notag \\ &\sum_{i=1}^{P}\left[ \begin{array}{c} \left\{ \begin{array}{c} y_{sp}\left( k+i\right)-y_{m}\left( k\right)-\\ \Theta \sum_{j=1}^{i}\mu _{xj} \end{array} \right\} \\ -\left( \Theta \sum_{j=1}^{i}\mu _{uj}\right) \delta U_{M} \end{array} \right] ^{2}+ \notag \\ &\delta U_{M}^{\rm T}R\delta U_{M} = \notag \\ &\sum_{i=1}^{P}\left[j_{i}-\mu _{Ui}\delta U_{M}\right] ^{2}+\delta U_{M}^{\rm T}R\delta U_{M} \label{eq26} \end{align}$

    (62)

    where

    $\begin{align} j_{i} =y_{sp}\left( k+i\right) -y_{m}\left( k\right) -\Theta \sum_{j=1}^{i}\mu _{xj} \end{align}$

    (63)

    $\begin{align} \mu _{Ui} =\Theta \sum_{j=1}^{i}\mu _{uj}\qquad\qquad\qquad\qquad\quad \end{align}$

    (64)

    $\begin{align} R =\left[ \begin{array}{ccccc} \lambda _{1} & 0 & 0 & \cdots & 0 \\ 0 & \lambda _{2} & 0 & \cdots & 0 \\ 0 & 0 & \lambda _{3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda _{M} \end{array} \right].\quad \end{align}$

    (65)

For an SISO process, $j_{i}$ is a scalar quantity, $\mu _{Ui}$ is a row vector with M elements and ${\bf R}$ is an $M\times M$ matrix. Continuing from (62)

    $\begin{align} J =&\sum_{i=1}^{P}\left[j_{i}^{2}+\delta U_{M}^{\rm T}\mu _{Ui}^{\rm T}\mu _{Ui}\delta U_{M}-2j_{i}\mu _{Ui}\delta U_{M}\right] + \notag \\ &\delta U_{M}^{\rm T}R\delta U_{M} =\notag \\ &\left( \sum_{i=1}^{P}j_{i}^{2}\right) +\delta U_{M}^{\rm T}\left( R+\sum_{i=1}^{P}\mu _{Ui}^{\rm T}\mu _{Ui}\right) \delta U_{M} - \notag \\ &2\left( \sum_{i=1}^{P}j_{i}\mu _{Ui}\right) \delta U_{M}. \label{eq27} \end{align}$

    (66)

    Without constraints, the optimal solution of the cost function (66) is given by

    $\begin{align} \dfrac{\partial J}{\partial \left( \delta U_{M}\right) } =\, &0 =\notag \\ &\left( {\bf R}+\sum_{i=1}^{P}\mu _{Ui}^{\rm T}\mu _{Ui}\right) \delta U_{M} - \notag \\ &\left( \sum_{i=1}^{P}\mu _{Ui}^{\rm T}j_{i}^{\rm T}\right) \end{align}$

    (67)

    or

    $\begin{align} \delta U_{M}=\left[{\bf R}+\sum_{i=1}^{P}\mu _{Ui}^{\rm T}\mu _{Ui}\right] ^{-1}\left[\sum_{i=1}^{P}j_{i}\mu _{Ui}\right] ^{\rm T}. \end{align}$

    (68)

    Equation (68) can be solved to compute the incremental control move which can then be used to compute the actual control move using (41).
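The unconstrained solution (68) is a regularized least squares step. A minimal numerical sketch follows (in Python/NumPy rather than the Matlab used for the paper's simulations; the function name and toy dimensions are illustrative):

```python
import numpy as np

def mpc_increment(j, mu_U, R):
    """Unconstrained MPC move per Eq. (68).

    j    : (P,)   vector of j_i from Eq. (63)
    mu_U : (P, M) matrix whose i-th row is mu_Ui from Eq. (64)
    R    : (M, M) diagonal move-suppression matrix from Eq. (65)
    """
    H = R + mu_U.T @ mu_U          # R + sum_i mu_Ui^T mu_Ui
    g = mu_U.T @ j                 # [sum_i j_i mu_Ui]^T
    return np.linalg.solve(H, g)   # solve the linear system instead of inverting

# toy sizes: prediction horizon P = 4, control horizon M = 2
P, M = 4, 2
rng = np.random.default_rng(1)
j = rng.standard_normal(P)
mu_U = rng.standard_normal((P, M))
R = 0.5 * np.eye(M)                # lambda_i = 0.5 for all i
dU = mpc_increment(j, mu_U, R)
```

With control horizon $M=1$, as used in the case studies, $\delta U_{M}$ reduces to a scalar; only the first increment is applied and the optimization is repeated at the next sampling instant.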

• The most desirable features of tuning a controller are its simplicity and optimality. In most cases, a simple control structure is first fixed and then optimality is extracted out of it[5]. In other words, the structure of the model (along with all the poles and zeros of the process) is not explicitly used while designing a classical controller; the controller parameters are rather tuned as a function of a simplified approximation of the "more accurate" model of the process. Moreover, classical PID controllers are not designed to handle constraints. The IMC strategy, on the other hand, is based on the concept of perfect control. For an open loop process

    $\begin{align} y\left( s\right) =G_{p}\left( s\right) G_{c}\left( s\right) y_{sp}\left( s\right) \label{imc} \end{align}$

    (69)

where $G_{p}\left(s\right) $ and $G_{c}\left(s\right) $ represent the process and controller transfer functions and $y\left(s\right) $ and $y_{sp}\left(s\right) $ are the output and setpoint trajectories respectively. Now from (69) it is evident that if $G_{c}\left(s\right)=\dfrac{1}{G_{p}\left(s\right) }$ then the output trajectory will be the same as the reference (setpoint) one. This is the key idea of IMC control, i.e., the output of the system resembles the reference signal if the controller transfer function is the reciprocal of the system's transfer function. The control scheme is given elaborately in [6]. At first, the process model should be divided into invertible $\left(\widetilde{G}_{p-}\right) $ and non-invertible $\left(\widetilde{G}_{p+}\right) $ parts. The non-invertible part contains right half plane (RHP) zeros and time delay. The controller constitutes the reciprocal of the invertible part. To make the controller proper, a filter $f\left(s\right) $ is added.

    $\begin{align} G_{IMC}=\widetilde{G}_{p-}^{-1}\left( s\right) f\left( s\right). \end{align}$

    (70)

The controller calculates the desired control move; the process and the model are excited by the same control move (manipulation of input). The difference in their outputs, i.e., the process/model mismatch, is fed back to the controller. The controller then recalculates the future control move and the procedure repeats. By structural re-formatting of the control loop, IMC can also be written in terms of a regular feedback control structure as

    $\begin{align} G_{feedback}=\dfrac{G_{IMC}}{1-G_{p}\left( s\right) G_{IMC}}. \end{align}$

    (71)

By an appropriate choice of the filter transfer function it is possible, at least in some cases, to obtain $G_{feedback}$ in the form of a regular PID controller. Unlike other tuning techniques, IMC based tuning can handle underdamped systems. IMC works perfectly if the model is perfect, but a perfect model is a utopian concept. The order of the filter is chosen to make the controller proper. The filter parameter is the main tuning parameter of IMC. A large filter parameter value ensures robustness whereas a smaller value yields a faster response. There is always a tradeoff between robustness and speed of response[6, 15].
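The perfect control idea behind (69)-(71) can be checked numerically. A small sketch, assuming a perfect model of an illustrative fully invertible underdamped process with unit gain, for which the servo transfer function collapses to the filter $f(s)=1/(\lambda s+1)^{2}$:

```python
import numpy as np

lam = 0.9                                             # filter tuning parameter
Gp   = lambda s: 1.0 / (s**2 + 0.2*s + 1)             # illustrative process (fully invertible)
Gimc = lambda s: (s**2 + 0.2*s + 1) / (lam*s + 1)**2  # inverse of Gp times a 2nd order filter
f    = lambda s: 1.0 / (lam*s + 1)**2                 # the filter alone

# with a perfect model, y(s)/ysp(s) = Gp(s)*Gimc(s), which equals f(s) exactly
for s in (0.1 + 0.2j, 1.0 + 0j, 2.0 - 1.5j):
    assert np.isclose(Gp(s) * Gimc(s), f(s))
```

A 2nd order filter is used here because the process has relative degree two; with a lower order filter $G_{IMC}$ would be improper.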

• The efficacy of the model predictive controller based on Kautz model has been tested on the following two case studies. In the first case, a linear second order underdamped system, originally proposed in [14], has been considered. A time series analysis has been done for modelling as opposed to the frequency domain analysis in [14]. In the second case, a magnetic suspension system has been considered which is mildly nonlinear as well as resonating in nature. A comparative study of performance of the two controllers, viz. Kautz-MPC and PID-IMC, has been presented subsequently. All simulations were carried out using Matlab (Version 7, 64-bit) under the Windows 7 (64-bit) operating system on a PC with an Intel Core 2 Duo processor at 2.4 GHz.

  • Consider this continuous time transfer function of a linear second order underdamped system

    $\begin{align} G\left( s\right) =\dfrac{1}{s^{2}+0.2s+1} \label{Gs} \end{align}$

    (72)

with a resonant frequency $\omega=1$ and damping coefficient $\xi=0.1$. This system is sampled using a zero order hold circuit with sampling period $T=0.5$. Reference [14] studied the efficacy of the Kautz model by analyzing its steady state characteristics. A relevant Bode diagram was generated and the validity of the Kautz model was established within a certain frequency range.
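The zero order hold discretization can be reproduced with standard tools; a sketch using SciPy (the paper's own computations were done in Matlab):

```python
import numpy as np
from scipy.signal import cont2discrete

num, den = [1.0], [1.0, 0.2, 1.0]     # G(s) of Eq. (72)
T = 0.5                               # sampling period
num_d, den_d, _ = cont2discrete((num, den), T, method='zoh')

# the discrete poles are exp(pT) for the continuous poles p = -0.1 +/- 0.995j,
# so their magnitude is exp(-0.1*T) ~ 0.951: lightly damped, as expected
z = np.roots(np.atleast_1d(den_d).ravel())
```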

• The system has been subjected to an input perturbation with mean $1.921$ and variance $0.072$. A total of 1000 data points were collected for both input and output. $75\%$ of the data set was used to train the Kautz model while $25\%$ data were used to test it. A Kautz model of order 3 has been developed using (33)-(41) that yields a very good match with the process. The results are shown in Fig. 1. In fact the process/model mismatch is so low that the profiles almost overlap with each other. The root mean square (RMS) value of the process/model mismatch is 0.0183.

    Figure 1.  System identification of a linear second order process. (a) Series of random perturbation introduced to the input of the process; (b) Profile of output of the process as a result of random perturbation in its input

• An unconstrained MPC law has been formulated using (68). The prediction and control horizons are chosen as 1000 and 1 respectively. Since the process is highly oscillatory, a longer prediction horizon is needed for the MPC law to calculate an appropriate control action. The closed loop response using the above MPC-Kautz controller is shown in Fig. 2.

Figure 2.  The performance of two controllers, viz. MPC using Kautz model and PID with IMC tuning, in controlling the linear second order process. (a) Profile of output; (b) Profile of input

  • The tuning parameters for PID controller based on IMC techniques for a second order underdamped system having a transfer function

    $\begin{align} G_{p}\left( s\right) =\dfrac{K_{p}}{s^{2}+2\xi \omega s+\omega ^{2}} \label{Gps} \end{align}$

    (73)

    are as follows[6]:

    $\begin{align} K_{c} =&\dfrac{2\xi \omega }{\lambda K_{p}} \label{Kc} \end{align}$

    (74)

    $\begin{align} \tau _{I} =&\dfrac{2\xi }{\omega } \end{align}$

    (75)

    $\begin{align} \tau _{D} =&\dfrac{1}{2\xi \omega }. \label{tauD} \end{align}$

    (76)

The parameter $\lambda $ is a user specified filter tuning parameter which helps to adjust the speed of the closed loop response. In the present case, the value of the filter tuning parameter has been taken as $\lambda=0.9$. Comparing (73) with (72) one obtains $K_{p}=1$, $\omega=1$, $\xi=0.1$ and hence using these values in (74)-(76) one obtains $K_{c}=0.222$, $\tau _{I}=0.2$, $\tau _{D}=5$. The closed loop response using the above IMC-PID controller is shown in Fig. 2.
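The tuning computation in (74)-(76) is direct arithmetic; a short sketch with the values quoted in the text:

```python
# parameters obtained by comparing Eq. (73) with Eq. (72)
Kp, omega, xi = 1.0, 1.0, 0.1
lam = 0.9                        # user specified filter tuning parameter

Kc    = 2*xi*omega / (lam*Kp)    # Eq. (74): proportional gain, ~0.222
tau_I = 2*xi / omega             # Eq. (75): integral time, 0.2
tau_D = 1 / (2*xi*omega)         # Eq. (76): derivative time, 5
```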

    Figure 3.  The schematic of a magnetic ball suspension system

• It is observed in Fig. 2 that for a $40\%$ change in the setpoint of the process, the MPC-Kautz is able to guide the process to its new setpoint in 18 time units whereas the IMC-PID controller fails to achieve it even after 125 time units. Although minor oscillation is observed in the output profile under the MPC-Kautz controlled process, the output remains within $\pm 5\%$ of the final setpoint after 18 time units of the simulation run. An overshoot of $31.4\%$ is observed in the output profile; however, an initial decay ratio of $16.69\%$ is good enough to arrest the oscillation in the controlled output. On the other hand, the IMC-PID controller yields a higher overshoot in the controlled output and its insufficient decay ratio fails to keep the output within an acceptable limit of oscillation near the setpoint. This justifies the superiority of the MPC-Kautz controller over the IMC-PID controller in controlling a linear second order process.

• Fig. 3 shows a schematic of magnetic ball suspension system (MBSS). It consists of an electromagnet firmly placed at the ceiling of an enclosure while an iron ball is suspended over the floor by means of a spring. The electric coil that winds the electromagnet has a resistor $(R)$ and an inductor $(L)$ in series. The voltage, e, supplied to the electromagnet yields a current, i, which in turn generates the magnetic field sufficient to pull the iron ball upwards. The mass of the ball is M and the spring constant is k. The distance between the ball and the electromagnet is denoted by y. The objective of the system is to control the position of the ball $(y)$ by adjusting the input voltage $(e)$.

    The differential equations of the system are given by

    $\begin{align} &M\dfrac{{\rm d}^{2}y\left( t\right) }{{\rm d}t^{2}}+k\dfrac{{\rm d}y\left( t\right) }{{\rm d}t}+Mg =\dfrac{i^{2}\left( t\right) }{y\left( t\right) } \label{mag1} \end{align}$

    (77)

    $\begin{align} &L\dfrac{{\rm d}i\left( t\right) }{{\rm d}t}+Ri\left( t\right) =e\left( t\right) \label{magss2} \end{align}$

    (78)

where g is the acceleration due to gravity. For all simulation studies in this paper, the following numerical values have been considered: $g=9.8$; $M=0.01$; $L=10$; $R=100$; $k=0.01$. The nominal steady state position of the ball is $2.551$ (i.e., $y_{s}=2.551$) away from the magnet, and the corresponding nominal values of input voltage and current are $e_{s}=50$ and $i_{s}=0.5$ respectively. The form of (77) indicates that the process is nonlinear in nature. It is worth examining how efficiently a linear Kautz model can approximate this process.
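The quoted nominal operating point follows from setting the time derivatives in (77) and (78) to zero; a quick check:

```python
# parameter values used in the simulation studies
g, M, L, R, k = 9.8, 0.01, 10.0, 100.0, 0.01
i_s = 0.5                    # nominal coil current

# Eq. (77) at steady state: M*g = i_s**2 / y_s  =>  y_s = i_s**2 / (M*g)
y_s = i_s**2 / (M * g)       # ~2.551
# Eq. (78) at steady state: R*i_s = e_s
e_s = R * i_s                # 50
```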

• For modelling purpose, the system is subjected to a series of random changes in the input voltage. The change in the voltage causes a change in the current passing through the system and subsequently the steel ball's position changes. The process has been subjected to an input perturbation with mean $49.98$ and variance $0.084$. Similar to Case Study I, a total of 1000 data points were collected for both input and output; $75\%$ of the data set was used to train the Kautz model while $25\%$ data were used to test it. A Kautz model of order 5 has been developed using (33)-(41) that yields a very good match with the process. The results are shown in Fig. 4.

    Figure 4.  System identification of a magnetic ball suspension system. (a) Series of random perturbation introduced to the input of the process; (b) Profile of output of the process as a result of random perturbation in its input.

The process/model mismatch is quite low and the profiles of process and model overlap in most places. The root mean square (RMS) value of the process/model mismatch is 0.0157. The above observation indicates that the linear Kautz model is capable of capturing the process dynamics quite well despite the fact that the process is inherently nonlinear in nature. This is perhaps because the quadratic nonlinearity evident in the mechanistic model is mild enough in the region of the nominal steady state for a linear Kautz model to capture it.

  • An unconstrained MPC law has been formulated using (68). The prediction and control horizons are chosen as 10 and 1 respectively. The closed loop response using the above MPC-Kautz controller is shown in Fig. 5.

    Figure 5.  The performance of three controllers, viz. MPC using Kautz model, MPC using transfer function model and PID with IMC tuning, in controlling the magnetic ball suspension system. (a) Profile of distance of steel ball from ground; (b) Profile of input voltage to the electromagnet

• In order to design an IMC for the MBSS system one needs to linearize the nonlinear system. The only nonlinear term in the process is $\dfrac{i^{2}\left(t\right) }{y\left(t\right) }$ in (77). Applying Taylor's series expansion (up to first order only) to the nonlinear term, one obtains

$\begin{align} \dfrac{i^{2}\left( t\right) }{y\left( t\right) } =&\left( \dfrac{i_{s}^{2}}{y_{s}}\right) +\left( \dfrac{2i_{s}}{y_{s}}\right) \left\{ i\left( t\right) -i_{s}\right\} - \notag \\ &\left( \dfrac{i_{s}^{2}}{y_{s}^{2}}\right) \left\{ y\left( t\right) -y_{s}\right\} . \label{linmag1} \end{align}$

    (79)

    Using (79) in (77), one obtains

    $\begin{align} M\dfrac{{\rm d}^{2}y\left( t\right) }{{\rm d}t^{2}}+k\dfrac{{\rm d}y\left( t\right) }{{\rm d}t}+Mg =&\left( \dfrac{i_{s}^{2}}{y_{s}}\right) +\left( \dfrac{2i_{s}}{y_{s}} \right) \left\{ i\left( t\right) -i_{s}\right\} - \notag \\ &\left( \dfrac{i_{s}^{2}}{y_{s}^{2}}\right) \left\{ y\left( t\right) -y_{s}\right\}. \label{magss1} \end{align}$

    (80)

Taking the deviation form of the variables and subsequently converting into the Laplace domain, (80) and (78) take the form

    $\begin{align} Ms^{2}y\left( s\right) +ksy\left( s\right) =&\left( \dfrac{2i_{s}}{y_{s}} \right) i\left( s\right) -\left( \dfrac{i_{s}^{2}}{y_{s}^{2}}\right) y\left( s\right) \label{le1} \end{align}$

    (81)

    $\begin{align} Lsi\left( s\right) +Ri\left( s\right) =&e\left( s\right). \label{le2} \end{align}$

    (82)

Algebraic rearrangement of (81) and (82) yields

$\begin{align} &y\left( s\right) =\notag\\ &\quad \dfrac{\left( \dfrac{2i_{s}}{LMy_{s}}\right) e\left( s\right) }{s^{3}+\left( \dfrac{k}{M}+\dfrac{R}{L}\right) s^{2}+\left( \dfrac{i_{s}^{2}}{My_{s}^{2}}+\dfrac{kR}{LM}\right) s+\left( \dfrac{Ri_{s}^{2}}{LMy_{s}^{2}}\right) }. \end{align}$

    (83)

    And putting the values of the coefficient terms, one obtains the linearized transfer function model of the MBSS process

$\begin{align} G\left( s\right) =\dfrac{y\left( s\right) }{e\left( s\right) }=\dfrac{3.92}{s^{3}+11s^{2}+13.84s+38.42}. \end{align}$

    (84)
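The numerical coefficients of (84) follow from substituting the parameter values into (83); a verification sketch:

```python
# parameter values and nominal operating point from the simulation studies
M, L, R, k = 0.01, 10.0, 100.0, 0.01
i_s, y_s = 0.5, 2.551

gain = 2*i_s / (L*M*y_s)                  # numerator of Eq. (83), ~3.92
a2   = k/M + R/L                          # s^2 coefficient, 11
a1   = i_s**2/(M*y_s**2) + k*R/(L*M)      # s coefficient, ~13.84
a0   = R*i_s**2 / (L*M*y_s**2)            # constant coefficient, ~38.42
```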

This transfer function may be used to construct the IMC. As it does not contain any right half plane zeros or dead time element, the entire transfer function is invertible. A low pass filter of 3rd order is used to make the controller proper. So the controller transfer function takes the form[6]

    $\begin{align} G_{IMC}\left( s\right) =\dfrac{s^{3}+11s^{2}+13.84s+38.42}{3.92\left( \lambda s+1\right) ^{3}}. \end{align}$

    (85)

    The parameter $\lambda $ is a user specified filter tuning parameter which helps to adjust the speed of the closed loop response. In the present case, the value of the filter tuning parameter has been taken as $\lambda=0.9.$ The closed loop response using the above IMC controller is shown in Fig. 5.

• It is observed in Fig. 5 that for a $40\%$ change in the setpoint of the process, the MPC-Kautz is able to guide the process to its new setpoint in 120 time units while the IMC controller is also able to achieve it in the same period of time. Although minor oscillation is observed in the output profile under the MPC-Kautz controlled process, the output remains within $\pm 5\%$ of the final setpoint after 40 time units of the simulation run. An overshoot of $16\%$ is observed in the output profile; however, an initial decay ratio of $34.72\%$ is good enough to arrest the oscillation in the controlled output. It is also observed that the MPC based on Kautz model performs better than the MPC based on transfer function model. The ISE value of MPC-Kautz is 12.02 whereas that of MPC-TF is 14.57. On the other hand, the IMC controller yields a higher overshoot in the controlled output. While the performances of IMC and MPC-Kautz are comparable for smaller changes in setpoint, they grossly differ for a large change in setpoint. Interestingly, a large offset $\left(24\%\right) $ is observed for the IMC controller when it is used for large setpoint changes. This further justifies the superiority of the MPC-Kautz controller over the IMC controller in controlling the MBSS process.

• The Kautz model has been proved to be an efficient modelling technique for resonant systems. The case studies with both linear and mildly nonlinear processes support this fact. The process/model mismatch has been extremely low in both cases. The MPC developed on the basis of the Kautz model turns out to be a better controller than IMC and/or IMC based PID controller. The extent of overshoot and the duration of unacceptable oscillation are less for the MPC controlled processes. The decay ratio of the MPC controlled output is stronger than that of the IMC controlled output. The IMC generates offset in the case of the mildly nonlinear process whereas the MPC yields an offset free response. For all practical purposes, MPC with Kautz model stands out to be a far better option for modelling and controlling a resonant system.

  • Using Theorem 1 the Kautz model in (9) can further be represented as the following state space realization (with shift operator q):

    $\begin{align} \varphi _{1}\left( k\right) =&\Psi _{1}\left( q\right) u\left( k\right) =\notag \\ &C_{1}^{\left( 1\right) }\left( 1-a_{1}^{\left( 1\right) }q\right) \Gamma ^{\left( 1\right) }\left( q\right) u\left( k\right) =\notag \\ &C_{1}^{\left( 1\right) }\left( 1-a_{1}^{\left( 1\right) }q\right) \times\notag \\ & \dfrac{1}{\left( q-\beta _{1}\right) \left( q-\beta _{1}^{\ast }\right) }u\left( k\right) = \notag \\ &C_{1}^{\left( 1\right) }\left( q^{-1}-a_{1}^{\left( 1\right) }\right) \times \notag \\ &\dfrac{q}{q^{2}+h_{1}^{\left( 1\right) }q+h_{2}^{\left( 1\right) }}u\left( k\right) \label{eq1} \end{align}$

    (A1)

    where

    $\begin{align} h_{1}^{\left( 1\right) } =&-\left( \beta _{1}+\beta _{1}^{\ast }\right) \end{align}$

    (A2)

    $\begin{align} h_{2}^{\left( 1\right) } =&\beta _{1}\beta _{1}^{\ast }. \end{align}$

    (A3)

    Define the following states

    $\begin{align} x_{1}\left( k\right) =&\dfrac{q}{q^{2}+h_{1}^{\left( 1\right) }q+h_{2}^{\left( 1\right) }}\times u\left( k\right) \label{eq2} \end{align}$

    (A4)

    $\begin{align} x_{2}\left( k\right) =&x_{1}\left( k-1\right). \label{eq3} \end{align}$

    (A5)

    Using (A4) and (A5) in (A1) one obtains

    $\begin{align} \varphi _{1}\left( k\right) =&C_{1}^{\left( 1\right) }\left( q^{-1}-a_{1}^{\left( 1\right) }\right) x_{1}\left( k\right)= \notag \\ &C_{1}^{\left( 1\right) }\left[x_{1}\left( k-1\right) -a_{1}^{\left( 1\right) }x_{1}\left( k\right) \right] = \notag \\ &C_{1}^{\left( 1\right) }\left[x_{2}\left( k\right) -a_{1}^{\left( 1\right) }x_{1}\left( k\right) \right] . \label{eq7} \end{align}$

    (A6)

    Similarly,

    $\begin{align} \varphi _{2}\left( k\right) =&\Psi _{2}\left( q\right) u\left( k\right) =\notag \\ &C_{2}^{\left( 1\right) }\left( 1-a_{2}^{\left( 1\right) }q\right) \Gamma ^{\left( 1\right) }\left( q\right) u\left( k\right) = \notag \\ &C_{2}^{\left( 1\right) }\left( q^{-1}-a_{2}^{\left( 1\right) }\right)\times \notag \\ & \dfrac{q}{q^{2}+h_{1}^{\left( 1\right) }q+h_{2}^{\left( 1\right) }} \times u\left( k\right)= \notag \\ &C_{2}^{\left( 1\right) }\left[x_{2}\left( k\right) -a_{2}^{\left( 1\right) }x_{1}\left( k\right) \right] . \label{eq8} \end{align}$

    (A7)

    Further,

$\begin{align} \varphi _{3}\left( k\right) =&\Psi _{3}\left( q\right) u\left( k\right)= \notag \\ &C_{1}^{\left( 2\right) }\left( 1-a_{1}^{\left( 2\right) }q\right) \Gamma ^{\left( 2\right) }\left( q\right) u\left( k\right) = \notag \\ &C_{1}^{\left( 2\right) }\left( 1-a_{1}^{\left( 2\right) }q\right) \times \notag \\ &\dfrac{\left( 1-\beta _{1}q\right) \left( 1-\beta _{1}^{\ast }q\right) u\left( k\right) }{\left( q-\beta _{1}\right) \left( q-\beta _{1}^{\ast }\right) \left( q-\beta _{2}\right) \left( q-\beta _{2}^{\ast }\right) } =\notag \\ &C_{1}^{\left( 2\right) }\left( q^{-1}-a_{1}^{\left( 2\right) }\right) \times \notag \\ &\dfrac{h_{2}^{\left( 1\right) }q^{2}+h_{1}^{\left( 1\right) }q+1}{q^{2}+h_{1}^{\left( 2\right) }q+h_{2}^{\left( 2\right) }}\times \notag \\ &\dfrac{q}{q^{2}+h_{1}^{\left( 1\right) }q+h_{2}^{\left( 1\right) }}u\left( k\right)= \notag \\ &C_{1}^{\left( 2\right) }\left( q^{-1}-a_{1}^{\left( 2\right) }\right) \times \notag \\ &\dfrac{h_{2}^{\left( 1\right) }q^{2}+h_{1}^{\left( 1\right) }q+1}{q^{2}+h_{1}^{\left( 2\right) }q+h_{2}^{\left( 2\right) }}\times x_{1}\left( k\right) \label{eq4} \end{align}$

    (A8)

    where

    $\begin{align} h_{1}^{\left( 2\right) } =&-\left( \beta _{2}+\beta _{2}^{\ast }\right) \end{align}$

    (A9)

    $\begin{align} h_{2}^{\left( 2\right) } =&\beta _{2}\beta _{2}^{\ast }. \end{align}$

    (A10)

    Define the following states

    $\begin{align} x_{3}\left( k\right) =&\dfrac{h_{2}^{\left( 1\right) }q^{2}+h_{1}^{\left( 1\right) }q+1}{q^{2}+h_{1}^{\left( 2\right) }q+h_{2}^{\left( 2\right) }} \times x_{1}\left( k\right) \label{eq5} \end{align}$

    (A11)

    $\begin{align} x_{4}\left( k\right) =&x_{3}\left( k-1\right). \label{eq6} \end{align}$

    (A12)

    Using (A11) and (A12) in (A8), one obtains the regressors (similar to A6 and A7) as

    $\begin{align} \varphi _{3}\left( k\right) =&C_{1}^{\left( 2\right) }\left[ x_{4}\left( k\right)-a_{1}^{\left( 2\right) }x_{3}\left( k\right) \right] \end{align}$

    (A13)

    $\begin{align} \varphi _{4}\left( k\right) =&C_{2}^{\left( 2\right) }\left[ x_{4}\left( k\right)-a_{2}^{\left( 2\right) }x_{3}\left( k\right) \right]. \end{align}$

    (A14)

    The derivation of regressors can further be continued and a generalized expression[14] can be given as in (11)-(16).

    For $n=1$, in (11)

    $\begin{align} x_{1}\left( k\right) =&\dfrac{q}{q^{2}+h_{1}^{\left( 1\right) }q+h_{2}^{\left( 1\right) }}\times u\left( k\right)= \notag \\ &\dfrac{q^{-1}}{1+h_{1}^{\left( 1\right) }q^{-1}+h_{2}^{\left( 1\right) }q^{-2}}\times u\left( k\right) = \notag \\ &-h_{1}^{\left( 1\right) }x_{1}\left( k-1\right) -h_{2}^{\left( 1\right) }x_{1}\left( k-2\right) + \notag \\ &u\left( k-1\right). \label{eq15} \end{align}$

    (A15)

    For $n=2$, in (11)

$\begin{align} x_{3}\left( k\right) =&\dfrac{h_{2}^{\left( 1\right) }q^{2}+h_{1}^{\left( 1\right) }q+1}{q^{2}+h_{1}^{\left( 2\right) }q+h_{2}^{\left( 2\right) }} \times x_{1}\left( k\right) = \notag \\ &\dfrac{h_{2}^{\left( 1\right) }+h_{1}^{\left( 1\right) }q^{-1}+q^{-2}}{1+h_{1}^{\left( 2\right) }q^{-1}+h_{2}^{\left( 2\right) }q^{-2}}\times x_{1}\left( k\right)= \notag \\ &-h_{1}^{\left( 2\right) }x_{3}\left( k-1\right) -h_{2}^{\left( 2\right) }x_{3}\left( k-2\right) + \notag \\ &h_{2}^{\left( 1\right) }x_{1}\left( k\right) +h_{1}^{\left( 1\right) }x_{1}\left( k-1\right) + \notag \\ &x_{1}\left( k-2\right). \label{eq16} \end{align}$

    (A16)

    Using (A15) in (A16) and simplifying one gets

    $\begin{align} x_{3}\left( k\right) =&h_{1}^{\left( 1\right) }\left( 1-h_{2}^{\left( 1\right) }\right) x_{1}\left( k-1\right) -h_{1}^{\left( 2\right) }x_{3}\left( k-1\right) + \notag \\ &\left( 1-h_{2}^{2\left( 1\right) }\right) x_{1}\left( k-2\right) -h_{2}^{\left( 2\right) }x_{3}\left( k-2\right)+ \notag \\ &h_{2}^{\left( 1\right) }u\left( k-1\right). \end{align}$

    (A17)
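The substitution leading to (A17) can be verified numerically: driving (A16) with $x_{1}$ generated by (A15) must give the same trajectory as the direct recursion (A17). A sketch with illustrative (stable) coefficient values:

```python
import numpy as np

rng = np.random.default_rng(0)
h11, h21 = 0.3, 0.5      # h_1^(1), h_2^(1): illustrative stable values
h12, h22 = -0.2, 0.4     # h_1^(2), h_2^(2)
u = rng.standard_normal(50)

# Eq. (A15): x1(k) = -h11*x1(k-1) - h21*x1(k-2) + u(k-1)
x1 = np.zeros(50)
for kk in range(2, 50):
    x1[kk] = -h11*x1[kk-1] - h21*x1[kk-2] + u[kk-1]

# Eq. (A16): x3 driven by x1
x3a = np.zeros(50)
for kk in range(2, 50):
    x3a[kk] = (-h12*x3a[kk-1] - h22*x3a[kk-2]
               + h21*x1[kk] + h11*x1[kk-1] + x1[kk-2])

# Eq. (A17): x3 driven by lagged x1 and u directly
x3b = np.zeros(50)
for kk in range(2, 50):
    x3b[kk] = (h11*(1 - h21)*x1[kk-1] - h12*x3b[kk-1]
               + (1 - h21**2)*x1[kk-2] - h22*x3b[kk-2]
               + h21*u[kk-1])
```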

    Similarly continuing with $n=3$ and 4 one obtains the following state space representation

    $\begin{align} \left[ \begin{array}{l} x_{1}\left( k\right) \\ x_{3}\left( k\right) \\ x_{5}\left( k\right) \\ x_{7}\left( k\right) \end{array} \right] =&A_{1}\left[ \begin{array}{l} x_{1}\left( k-1\right) \\ x_{3}\left( k-1\right) \\ x_{5}\left( k-1\right) \\ x_{7}\left( k-1\right) \end{array} \right] + \notag \\ &A_{2}\left[ \begin{array}{l} x_{1}\left( k-2\right) \\ x_{3}\left( k-2\right) \\ x_{5}\left( k-2\right) \\ x_{7}\left( k-2\right) \end{array} \right] + \notag \\ &Bu\left( k-1\right) \end{align}$

    (A18)

    where $A_{1}$, $A_{2}$ and B are obtained from (22), (23) and (24). The complete state space representation can be given as (21) whose $(2n-1)$-{th} row will be

    $\begin{align} x_{2n-1}\left( k\right) =&\left[ \begin{array}{c} h_{1}^{\left( 1\right) }h_{2}^{\left( 2\right) }h_{2}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{\left( 1\right) }\right) \\ h_{1}^{\left( 2\right) }h_{2}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{\left( 2\right) }\right) \\ h_{1}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{\left( 3\right) }\right) \\ \vdots \\ h_{1}^{\left( n-2\right) }h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{\left( n-2\right) }\right) \\ h_{1}^{\left( n-1\right) }\left( 1-h_{2}^{\left( n-1\right) }\right) \\ -h_{1}^{\left( n\right) } \end{array} \right] ^{\rm T} \times \notag \\ & \left[ \begin{array}{c} x_{1}\left( k-1\right) \\ x_{3}\left( k-1\right) \\ x_{5}\left( k-1\right) \\ \vdots \\ x_{2n-1}\left( k-1\right) \end{array} \right] +\notag \\ &\left[ \begin{array}{c} h_{2}^{\left( 2\right) }h_{2}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{2\left( 1\right) }\right) \\ h_{2}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{2\left( 2\right) }\right) \\ h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{2\left( 3\right) }\right) \\ \vdots \\ h_{2}^{\left( n-1\right) }\left( 1-h_{2}^{2\left( n-2\right) }\right) \\ \left( 1-h_{2}^{2\left( n-1\right) }\right) \\ -h_{2}^{\left( n\right) } \end{array} \right] ^{\rm T} \times \notag \\ & \left[ \begin{array}{c} x_{1}\left( k-2\right) \\ x_{3}\left( k-2\right) \\ x_{5}\left( k-2\right) \\ \vdots \\ x_{2n-1}\left( k-2\right) \end{array} \right] + \notag \\ &\left\{ h_{2}^{\left( 1\right) }h_{2}^{\left( 2\right) }h_{2}^{\left( 3\right) }h_{2}^{\left( 4\right) }\cdots h_{2}^{\left( n-1\right) }\right\} u\left( k-1\right). \end{align}$

    (A19)

The value of $C_{1}^{\left(k\right) }$ in (\ref{eq14b}) can be simplified as

$\begin{align} C_{1}^{\left( k\right) } =&\sqrt{\dfrac{\left( 1-\beta _{k}^{2}\right) \left( 1-\beta _{k}^{\ast 2}\right) \left( 1-\beta _{k}\beta _{k}^{\ast }\right) }{\left( 1+a_{1}^{\left( k\right) 2}\right) \left( 1+\beta _{k}\beta _{k}^{\ast }\right) -2a_{1}^{\left( k\right) }\left( \beta _{k}+\beta _{k}^{\ast }\right) }}= \notag \\ &\sqrt{\dfrac{\left( 1-\beta _{k}^{2}-\beta _{k}^{\ast 2}+\beta _{k}^{2}\beta _{k}^{\ast 2}\right) \left( 1-\beta _{k}\beta _{k}^{\ast }\right) }{\left( 1+a_{1}^{\left( k\right) 2}\right) \left( 1+\beta _{k}\beta _{k}^{\ast }\right) -2a_{1}^{\left( k\right) }\left( \beta _{k}+\beta _{k}^{\ast }\right) }}= \notag \\ &\sqrt{\dfrac{\left( 1-\beta _{k}^{2}-\beta _{k}^{\ast 2}-2\beta _{k}\beta _{k}^{\ast }+2\beta _{k}\beta _{k}^{\ast }+\beta _{k}^{2}\beta _{k}^{\ast 2}\right) \left( 1-\beta _{k}\beta _{k}^{\ast }\right) }{\left( 1+a_{1}^{\left( k\right) 2}\right) \left( 1+\beta _{k}\beta _{k}^{\ast }\right) -2a_{1}^{\left( k\right) }\left( \beta _{k}+\beta _{k}^{\ast }\right) }} = \notag \\ &\sqrt{\dfrac{\left\{ 1-\left( \beta _{k}+\beta _{k}^{\ast }\right) ^{2}+2\beta _{k}\beta _{k}^{\ast }+\beta _{k}^{2}\beta _{k}^{\ast 2}\right\} \left( 1-\beta _{k}\beta _{k}^{\ast }\right) }{\left( 1+a_{1}^{\left( k\right) 2}\right) \left( 1+\beta _{k}\beta _{k}^{\ast }\right) -2a_{1}^{\left( k\right) }\left( \beta _{k}+\beta _{k}^{\ast }\right) }}= \notag \\ &\sqrt{\dfrac{\left\{ 1-h_{1}^{\left( k\right) 2}+2h_{2}^{\left( k\right) }+h_{2}^{\left( k\right) 2}\right\} \left( 1-h_{2}^{\left( k\right) }\right) }{\left( 1+a_{1}^{\left( k\right) 2}\right) \left( 1+h_{2}^{\left( k\right) }\right) +2a_{1}^{\left( k\right) }h_{1}^{\left( k\right) }}}. \end{align}$

    (A20)

The value of $C_{2}^{\left(k\right) }$ in terms of $a_{2}^{\left(k\right) } $, $h_{1}^{\left(k\right) }$ and $h_{2}^{\left(k\right) }$ can be derived in a similar fashion. Hence (17) and (18) can be obtained.

  • Using equalities in (42), (37) can be written as

    $\begin{align} &\delta x_{odd}\left( k+1\right) =\notag\\ &\quad Q_{a1}\delta x_{odd}\left( k\right) +Q_{b1}\delta x_{odd}\left( k-1\right) +B\delta u\left( k\right). \end{align}$

    (B1)

    Further, the i-step ahead incremental predictor can be written as

    $\begin{align} \delta x_{odd}\left( k+2\right) =\;&A_{1}\delta x_{odd}\left( k+1\right) +A_{2}\delta x_{odd}\left( k\right)+ \notag \\ &B\delta u\left( k+1\right) = \notag \\ &A_{1}\left\{ \begin{array}{c} Q_{a1}\delta x_{odd}\left( k\right) + \\ Q_{b1}\delta x_{odd}\left( k-1\right) + \\ B\delta u\left( k\right) \end{array} \right\} + \notag \\ &A_{2}\delta x_{odd}\left( k\right) +B\delta u\left( k+1\right) = \notag \\ &\left( A_{1}Q_{a1}+A_{2}\right) \delta x_{odd}\left( k\right) + \notag \\ &A_{1}Q_{b1}\delta x_{odd}\left( k-1\right)+ \notag \\ &A_{1}B\delta u\left( k\right) +B\delta u\left( k+1\right)= \notag \\ &Q_{a2}\delta x_{odd}\left( k\right) +Q_{b2}\delta x_{odd}\left( k-1\right) +\notag \\ &Q_{a1}B\delta u\left( k\right) +B\delta u\left( k+1\right) \end{align}$

    (B2)

    $\begin{align} \delta x_{odd}\left( k+3\right) =\;&A_{1}\delta x_{odd}\left( k+2\right) +A_{2}\delta x_{odd}\left( k+1\right) + \notag \\ &B\delta u\left( k+2\right)= \notag \\ &A_{1}\left\{ \begin{array}{c} Q_{a2}\delta x_{odd}\left( k\right) + \\ Q_{b2}\delta x_{odd}\left( k-1\right) + \\ Q_{a1}B\delta u\left( k\right) + \\ B\delta u\left( k+1\right) \end{array} \right\} + \notag \\ &A_{2}\left\{ \begin{array}{c} Q_{a1}\delta x_{odd}\left( k\right) + \\ Q_{b1}\delta x_{odd}\left( k-1\right) +\\ B\delta u\left( k\right) \end{array} \right\} + \notag \\ &B\delta u\left( k+2\right) = \notag \\ &\left\{ A_{1}Q_{a2}+A_{2}Q_{a1}\right\} \delta x_{odd}\left( k\right) + \notag \\ &\left\{ A_{1}Q_{b2}+A_{2}Q_{b1}\right\} \delta x_{odd}\left( k-1\right) +\notag \\ &\left\{ A_{1}Q_{a1}+A_{2}\right\} B\delta u\left( k\right) + \notag \\ &Q_{a1}B\delta u\left( k+1\right) +B\delta u\left( k+2\right) =\notag \\ &Q_{a3}\delta x_{odd}\left( k\right) +Q_{b3}\delta x_{odd}\left( k-1\right) +\notag \\ &Q_{a2}B\delta u\left( k\right) +Q_{a1}B\delta u\left( k+1\right) +\notag \\ &B\delta u\left( k+2\right) \end{align}$

    (B3)

    $\begin{align} &\vdots \notag \\ \delta x_{odd}\left( k+i\right) =\;&Q_{ai}\delta x_{odd}\left( k\right) +Q_{bi}\delta x_{odd}\left( k-1\right) + \notag \\ &{Q}_{i}\delta U_{i} \end{align}$

    (B4)

where ${Q}_{i}$ and $\delta U_{i}$ are given in (45) and (47), respectively.
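The pattern in (B1)-(B4) implies the recursions $Q_{a0}=I$, $Q_{a1}=A_{1}$, $Q_{b1}=A_{2}$ and $Q_{ai}=A_{1}Q_{a\left(i-1\right)}+A_{2}Q_{a\left(i-2\right)}$ (likewise for $Q_{bi}$ with $Q_{b0}=0$). These can be checked against a direct simulation of the incremental state recursion; a sketch with random matrices standing in for $A_{1}$, $A_{2}$ and B:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A1 = 0.4 * rng.standard_normal((n, n))
A2 = 0.2 * rng.standard_normal((n, n))
B  = rng.standard_normal((n, 1))
du = rng.standard_normal(3)
x_prev, x_now = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))

# direct simulation: dx(k+i) = A1 dx(k+i-1) + A2 dx(k+i-2) + B du(k+i-1)
xs = [x_prev, x_now]
for i in range(3):
    xs.append(A1 @ xs[-1] + A2 @ xs[-2] + B * du[i])

# Q recursions implied by (B1)-(B3)
Qa = [np.eye(n), A1]
Qb = [np.zeros((n, n)), A2]
for i in range(2, 4):
    Qa.append(A1 @ Qa[i-1] + A2 @ Qa[i-2])
    Qb.append(A1 @ Qb[i-1] + A2 @ Qb[i-2])

# prediction via (B3): dx(k+3) = Qa3 dx(k) + Qb3 dx(k-1) + sum_j Qa_{2-j} B du(k+j)
x3 = Qa[3] @ x_now + Qb[3] @ x_prev + sum(Qa[2-j] @ B * du[j] for j in range(3))
```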

    Again the M-{th} prediction is

    $\begin{align} &\delta x_{odd}\left( k+M\right) =\notag\\ &\quad Q_{aM}\delta x_{odd}\left( k\right) +Q_{bM}\delta x_{odd}\left( k-1\right) +{Q}_{M}\delta U_{M} \end{align}$

    (B5)

    and the $\left(M+1\right)$-{th} prediction is

    $\begin{align} \delta x_{odd}\left( k+M+1\right) =\;&A_{1}\delta x_{odd}\left( k+M\right) + \notag \\ &A_{2}\delta x_{odd}\left( k+M-1\right) = \notag \\ &A_{1}\left\{ \begin{array}{c} Q_{aM}\delta x_{odd}\left( k\right)+ \\ Q_{bM}\delta x_{odd}\left( k-1\right)+ \\ {Q}_{M}\delta U_{M} \end{array} \right\} + \notag \\ &A_{2}\left\{ \begin{array}{c} Q_{a\left( M-1\right) }\delta x_{odd}\left( k\right) + \\ Q_{b\left( M-1\right) }\delta x_{odd}\left( k-1\right)+ \\ {Q}_{M-1}\delta U_{M-1} \end{array} \right\} = \notag \\ &\left\{ \begin{array}{c} A_{1}Q_{aM}+ \\ A_{2}Q_{a\left( M-1\right) } \end{array} \right\} \delta x_{odd}\left( k\right) + \notag \\ &\left\{ \begin{array}{c} A_{1}Q_{bM}+ \\ A_{2}Q_{b\left( M-1\right) } \end{array} \right\} \delta x_{odd}\left( k-1\right) + \notag \\ &A_{1}{Q}_{M}\delta U_{M}+A_{2}{Q}_{M-1}\delta U_{M-1} =\notag \\ &Q_{a\left( M+1\right) }\delta x_{odd}\left( k\right) + \notag \\ &Q_{b\left( M+1\right) }\delta x_{odd}\left( k-1\right) + \notag \\ &A_{1}{Q}_{M}\delta U_{M} + \notag \\ &A_{2}{Q}_{M-1}\delta U_{M-1} . \label{eq20a} \end{align}$

    (B6)

    Now,

    $\begin{align} A_{1}{Q}_{M}\delta U_{M} =\;&A_{1}Q_{a\left( M-1\right) }B\delta u\left( k\right) + \notag \\ &A_{1}Q_{a\left( M-2\right) }B\delta u\left( k+1\right) + \notag \\ &A_{1}Q_{a\left( M-3\right) }B\delta u\left( k+2\right) +\cdots + \notag \\ &A_{1}Q_{a1}B\delta u\left( k+M-2\right) + \notag \\ &A_{1}B\delta u\left( k+M-1\right) \label{eq21} \end{align}$

    (B7)

    $\begin{align} A_{2}{Q}_{M-1}\delta U_{M-1} =\;&A_{2}Q_{a\left( M-2\right) }B\delta u\left( k\right) + \notag \\ &A_{2}Q_{a\left( M-3\right) }B\delta u\left( k+1\right) + \notag \\ &A_{2}Q_{a\left( M-4\right) }B\delta u\left( k+2\right) +\cdots + \notag \\ &A_{2}B\delta u\left( k+M-2\right). \label{eq22} \end{align}$

    (B8)

    Adding (B7) and (B8), one obtains

    $\begin{align} SUM =&A_{1}{Q}_{M}\delta U_{M}+A_{2}{Q}_{M-1}\delta U_{M-1}= \notag \\ &\left\{ A_{1}Q_{a\left( M-1\right) }+A_{2}Q_{a\left( M-2\right) }\right\} B\delta u\left( k\right) + \notag \\ &\left\{ A_{1}Q_{a\left( M-2\right) }+A_{2}Q_{a\left( M-3\right) }\right\} B\delta u\left( k+1\right) + \notag \\ &\left\{ A_{1}Q_{a\left( M-3\right) }+A_{2}Q_{a\left( M-4\right) }\right\} B\delta u\left( k+2\right) + \cdots + \notag \\ &\left\{ A_{1}Q_{a1}+A_{2}\right\} B\delta u\left( k+M-2\right) + \notag \\ &A_{1}B\delta u\left( k+M-1\right) = \notag \\ &Q_{aM}B\delta u\left( k\right) +Q_{a\left( M-1\right) }B\delta u\left( k+1\right) + \notag \\ &Q_{a\left( M-2\right) }B\delta u\left( k+2\right) +\cdots+ \notag \\ &Q_{a2}B\delta u\left( k+M-2\right) + \notag \\ &Q_{a1}B\delta u\left( k+M-1\right)= \notag \\ &{Q}_{M+1}\delta U_{M} \end{align}$

    (B9)

    where ${Q}_{M+1}$ can be denoted by (46). Hence, (B6) takes the form of

    $\begin{align} \delta x_{odd}\left( k+M+1\right) =\;&Q_{a\left( M+1\right) }\delta x_{odd}\left( k\right) + \notag \\ &Q_{b\left( M+1\right) }\delta x_{odd}\left( k-1\right) + \notag \\ &{Q}_{M+1}\delta U_{M}. \end{align}$

    (B10)

    Again,

    $\begin{align} \delta x_{odd}\left( k+M+2\right) =\;&A_{1}\delta x_{odd}\left( k+M+1\right)+ \notag \\ &A_{2}\delta x_{odd}\left( k+M\right) = \notag \\ &A_{1}\left\{ \begin{array}{c} Q_{a\left( M+1\right) }\delta x_{odd}\left( k\right) + \\ Q_{b\left( M+1\right) }\delta x_{odd}\left( k-1\right)+ \\ {Q}_{M+1}\delta U_{M} \end{array} \right\} + \notag \\ &A_{2}\left\{ \begin{array}{c} Q_{aM}\delta x_{odd}\left( k\right) + \\ Q_{bM}\delta x_{odd}\left( k-1\right)+ \\ {Q}_{M}\delta U_{M} \end{array} \right\} = \notag \\ &\left\{ \begin{array}{c} A_{1}Q_{a\left( M+1\right) }+ \\ A_{2}Q_{aM} \end{array} \right\} \delta x_{odd}\left( k\right) + \notag \\ &\left\{ \begin{array}{c} A_{1}Q_{b\left( M+1\right) }+ \\ A_{2}Q_{bM} \end{array} \right\} \delta x_{odd}\left( k-1\right) + \notag \\ &\left\{ A_{1}{Q}_{M+1}+A_{2}{Q}_{M}\right\} \delta U_{M} =\notag \\ &Q_{a\left( M+2\right) }\delta x_{odd}\left( k\right) + \notag \\ &Q_{b\left( M+2\right) }\delta x_{odd}\left( k-1\right) + \notag \\ &\left\{ \begin{array}{c} A_{1}{Q}_{M+1}+ \\ A_{2}{Q}_{M} \end{array} \right\} \delta U_{M} .\label{eq23} \end{align}$

    (B11)

    Now,

    $\begin{align} A_{1}{Q}_{M+1}+A_{2}{Q}_{M} =\;&A_{1}Q_{aM}B+A_{1}Q_{a\left( M-1\right) }B + \notag \\ &A_{1}Q_{a\left( M-2\right) }B+\cdots + \notag \\ &A_{1}Q_{a2}B+A_{1}Q_{a1}B + \notag \\ &A_{2}Q_{a\left( M-1\right) }B+ \notag \\ &A_{2}Q_{a\left( M-2\right) }B+ \notag \\ &A_{2}Q_{a\left( M-3\right) }B+\cdots + \notag \\ &A_{2}Q_{a1}B+A_{2}B = \notag \\ &Q_{a\left( M+1\right) }B+Q_{aM}B + \notag \\ &Q_{a\left( M-1\right) }B+\cdots + \notag \\ &Q_{a3}B+Q_{a2}B = \notag \\ &{Q}_{M+2} \end{align}$

    (B12)

    where ${Q}_{M+2}$ can also be denoted by (46). Hence, (B11) can be rewritten as

    $\begin{align} \delta x_{odd}\left( k+M+2\right) =&Q_{a\left( M+2\right) }\delta x_{odd}\left( k\right) + \notag \\ &Q_{b\left( M+2\right) }\delta x_{odd}\left( k-1\right) + \notag \\ &{Q}_{M+2}\delta U_{M}. \end{align}$

    (B13)

    Similarly, it can be proved that

    $\begin{align} \delta x_{odd}\left( k+M+3\right) =\;&Q_{a\left( M+3\right) }\delta x_{odd}\left( k\right) + \notag \\ &Q_{b\left( M+3\right) }\delta x_{odd}\left( k-1\right) + \notag \\ &{Q}_{M+3}\delta U_{M} \end{align}$

    (B14)

    $\begin{align} \delta x_{odd}\left( k+M+4\right) =\;&Q_{a\left( M+4\right) }\delta x_{odd}\left( k\right) + \notag \\ &Q_{b\left( M+4\right) }\delta x_{odd}\left( k-1\right) + \notag \\ &{Q}_{M+4}\delta U_{M} \end{align}$

    (B15)

    $\begin{align} &\vdots \notag \\ \delta x_{odd}\left( k+M+i\right) =\;&Q_{a\left( M+i\right) }\delta x_{odd}\left( k\right) + \notag \\ &Q_{b\left( M+i\right) }\delta x_{odd}\left( k-1\right) + \notag \\ &{Q}_{M+i}\delta U_{M} \end{align}$

    (B16)

    $\begin{align} &\vdots \notag \\ \delta x_{odd}\left( k+P\right) =\;&Q_{aP}\delta x_{odd}\left( k\right) + \notag \\ &Q_{bP}\delta x_{odd}\left( k-1\right) + \notag \\ &{Q}_{P}\delta U_{M} \end{align}$

    (B17)

    where ${Q}_{M+3}, {Q}_{M+4}, \cdots, {Q}_{M+i}$ can be denoted by (46) and ${Q}_{P}$ can be represented by

    $\begin{align} {Q}_{P}=\left[ \begin{array}{c} Q_{a\left( P-1\right) } \\ Q_{a\left( P-2\right) } \\ Q_{a\left( P-3\right) } \\ \vdots \\ Q_{a\left( P-M+1\right) } \\ Q_{a\left( P-M\right) } \end{array} \right] ^{\rm T}\times B. \end{align}$

    (B18)

    Hence the above derivation leads to (44)-(48).
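The end result (B17)-(B18) can likewise be verified numerically. The sketch below assumes the same one-step recursion $\delta x_{odd}(k+1)=A_{1}\delta x_{odd}(k)+A_{2}\delta x_{odd}(k-1)+B\delta u(k)$ with $\delta u$ frozen at zero beyond the control horizon $M$, which is how the $\delta U_{M}$ dependence beyond $k+M$ arises; all matrices and horizons (`M = 3`, `P = 7`) are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative test data (assumed shapes, random values)
rng = np.random.default_rng(1)
n, M, P = 2, 3, 7
A1 = 0.4 * rng.standard_normal((n, n))
A2 = 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))

# Q_{a.} and Q_{b.} recursions up to the prediction horizon P
Qa = [np.eye(n), A1]
Qb = [np.zeros((n, n)), A2]
for _ in range(2, P + 1):
    Qa.append(A1 @ Qa[-1] + A2 @ Qa[-2])
    Qb.append(A1 @ Qb[-1] + A2 @ Qb[-2])

# Q_P as in (B18): the row block [Q_{a(P-1)} ... Q_{a(P-M)}] B acting on dU_M
QP = np.hstack([Qa[P - 1 - j] @ B for j in range(M)])

x0, xm1 = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
dU = rng.standard_normal((M, 1))

# Direct simulation: du(k+j) = dU[j] for j < M, zero afterwards
prev, cur = xm1, x0
for j in range(P):
    u = dU[j, 0] if j < M else 0.0
    prev, cur = cur, A1 @ cur + A2 @ prev + B * u
print(np.allclose(cur, Qa[P] @ x0 + Qb[P] @ xm1 + QP @ dU))  # → True
```

Since (B17) is an exact algebraic identity, the two sides agree to machine precision regardless of the random data chosen.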

    The $i$-th prediction of the regressors is

    $\begin{align} \delta \varphi _{odd}\left( k+i\right) =\;&\overline{C}_{1}\cdot \left[ \begin{array}{c} \delta x_{even}\left( k+i\right)-\\ \overline{a}_{1}\cdot \delta x_{odd}\left( k+i\right) \end{array} \right] = \notag \\ &\overline{C}_{1}\cdot \left[ \begin{array}{c} \delta x_{odd}\left( k+i-1\right)-\\ \overline{a}_{1}\cdot \delta x_{odd}\left( k+i\right) \end{array} \right] = \notag \\ &\left[\overline{C}_{1}\cdot \left\{ \begin{array}{c} Q_{a\left( i-1\right) }-\\ \overline{a}_{1}\cdot Q_{ai} \end{array} \right\} \right] \delta x_{odd}\left( k\right) + \notag \\ &\left[\overline{C}_{1}\cdot \left\{ \begin{array}{c} Q_{b\left( i-1\right) }-\\ \overline{a}_{1}\cdot Q_{bi} \end{array} \right\} \right] \delta x_{odd}\left( k-1\right) + \notag \\ &\left[\overline{C}_{1}\cdot {Q}_{\left( i-1\right) }\right] \delta U_{\left( i-1\right) }- \notag \\ &\left[\overline{C}_{1}\cdot \overline{a}_{1}\cdot {Q}_{i}\right] \delta U_{i}. \end{align}$

    (B19)

    Similarly

    $\begin{align} \delta \varphi _{even}\left( k+i\right) =&\left[ \overline{C}_{2}\cdot \left\{ \begin{array}{c} Q_{a\left( i-1\right) }-\\ \overline{a}_{2}\cdot Q_{ai} \end{array} \right\} \right] \delta x_{odd}\left( k\right) + \notag \\ &\left[\overline{C}_{2}\cdot \left\{ \begin{array}{c} Q_{b\left( i-1\right) }-\\ \overline{a}_{2}\cdot Q_{bi} \end{array} \right\} \right] \delta x_{odd}\left( k-1\right) + \notag \\ &\left[\overline{C}_{2}\cdot {Q}_{\left( i-1\right) }\right] \delta U_{\left( i-1\right) } - \notag \\ &\left[\overline{C}_{2}\cdot \overline{a}_{2}\cdot {Q}_{i}\right] \delta U_{i}. \end{align}$

    (B20)

    Hence, from (9),

    $\begin{align} \delta \Phi \left( k+i\right) =&\left[ \begin{array}{l} \overline{C}_{1}\cdot \left\{ \begin{array}{c} Q_{a\left( i-1\right) }-\\ \overline{a}_{1}\cdot Q_{ai} \end{array} \right\} \\ \overline{C}_{2}\cdot \left\{ \begin{array}{c} Q_{a\left( i-1\right) }-\\ \overline{a}_{2}\cdot Q_{ai} \end{array} \right\} \end{array} \right] \delta x_{odd}\left( k\right) + \notag \\ &\left[ \begin{array}{l} \overline{C}_{1}\cdot \left\{ \begin{array}{c} Q_{b\left( i-1\right) }-\\ \overline{a}_{1}\cdot Q_{bi} \end{array} \right\} \\ \overline{C}_{2}\cdot \left\{ \begin{array}{c} Q_{b\left( i-1\right) }-\\ \overline{a}_{2}\cdot Q_{bi} \end{array} \right\} \end{array} \right] \delta x_{odd}\left( k-1\right)+ \notag \\ &\left[ \begin{array}{l} \overline{C}_{1}\cdot {Q}_{\left( i-1\right) } \\ \overline{C}_{2}\cdot {Q}_{\left( i-1\right) } \end{array} \right] \delta U_{\left( i-1\right) } - \notag \\ &\left[ \begin{array}{l} \overline{C}_{1}\cdot \overline{a}_{1}\cdot {Q}_{i} \\ \overline{C}_{2}\cdot \overline{a}_{2}\cdot {Q}_{i} \end{array} \right] \delta U_{i} \end{align}$

    (B21)

    which is the same as (52).
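The regressor prediction (B19) can also be checked numerically. The sketch below assumes the same one-step recursion as above together with the shift $\delta x_{even}(k+i)=\delta x_{odd}(k+i-1)$ used in (B19); treating $\overline{C}_{1}$ as a matrix and $\overline{a}_{1}$ as a scalar is an assumption made here for illustration, as the excerpt does not fix their shapes.

```python
import numpy as np

# Illustrative test data (assumed shapes, random values)
rng = np.random.default_rng(2)
n, i = 2, 4
A1 = 0.4 * rng.standard_normal((n, n))
A2 = 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C1 = rng.standard_normal((n, n))  # assumed matrix C1-bar
a1 = 0.5                          # assumed scalar a1-bar

# Q_{a.} and Q_{b.} recursions up to step i
Qa = [np.eye(n), A1]
Qb = [np.zeros((n, n)), A2]
for _ in range(2, i + 1):
    Qa.append(A1 @ Qa[-1] + A2 @ Qa[-2])
    Qb.append(A1 @ Qb[-1] + A2 @ Qb[-2])

x0, xm1 = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
du = rng.standard_normal(i)

# Direct recursion: prev = dx_odd(k+i-1), cur = dx_odd(k+i)
prev, cur = xm1, x0
for j in range(i):
    prev, cur = cur, A1 @ cur + A2 @ prev + B * du[j]
direct = C1 @ (prev - a1 * cur)

# Coefficient form of (B19)
forced_im1 = sum(Qa[i - 2 - j] @ B * du[j] for j in range(i - 1))
forced_i = sum(Qa[i - 1 - j] @ B * du[j] for j in range(i))
coeff = (C1 @ (Qa[i - 1] - a1 * Qa[i]) @ x0
         + C1 @ (Qb[i - 1] - a1 * Qb[i]) @ xm1
         + C1 @ forced_im1 - a1 * C1 @ forced_i)
print(np.allclose(direct, coeff))  # → True
```

The minus sign on the final $\delta U_{i}$ term is essential: it carries through from the $-\overline{a}_{1}\cdot\delta x_{odd}(k+i)$ part of the regressor definition.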
