Advances in Industrial Control Springer London Berlin Heidelberg New York Barcelona Hong Kong Milan Paris Santa Clara Singapore Tokyo

Adaptive Control with Recurrent High-order Neural Networks: Theory and Industrial Applications


Advances in Industrial Control
Springer London Berlin Heidelberg New York Barcelona Hong Kong Milan Paris Santa Clara Singapore Tokyo
Other titles published in this Series:
Control of Modern Integrated Power Systems E. Mariani and S.S. Murthy
Advanced Load Dispatch for Power Systems: Principles, Practices and Economies E. Mariani and S.S. Murthy
Supervision and Control for Industrial Processes Bjorn Sohlberg
Modelling and Simulation of Human Behaviour in System Control Pietro Carlo Cacciabue
Modelling and Identification in Robotics Krzysztof Kozlowski
Spacecraft Navigation and Guidance Maxwell Noton
Robust Estimation and Failure Detection Rami Mangoubi
Adaptive Internal Model Control Aniruddha Datta
Price-Based Commitment Decisions in the Electricity Market Eric Allen and Marija Ilić
Compressor Surge and Rotating Stall Jan Tommy Gravdahl and Olav Egeland
Radiotherapy Treatment Planning Oliver Haas
Feedback Control Theory For Dynamic Traffic Assignment Pushkin Kachroo and Kaan Özbay
Control Instrumentation for Wastewater Treatment Plants Reza Katebi, Michael A. Johnson and Jacqueline Wilkie
Autotuning of PID Controllers Cheng-Ching Yu
Robust Aeroservoelastic Stability Analysis Rick Lind & Marty Brenner
Performance Assessment of Control Loops:Theory and Applications Biao Huang & Sirish L. Shah
Data Mining and Knowledge Discovery for Process Monitoring and Control Xue Z. Wang
Advances in PID Control Tan Kok Kiong, Wang Qing-Guo & Hang Chang Chieh with Tore J. Hägglund
George A. Rovithakis and Manolis A. Christodoulou
Adaptive Control with Recurrent High-order Neural Networks Theory and Industrial Applications
With 30 Figures
Manolis A. Christodoulou, PhD Department of Electronic and Computer Engineering, Technical University of Crete, GR-73100 Chania, Crete, Greece.
British Library Cataloguing in Publication Data Rovithakis, George A.
Adaptive control with recurrent high-order neural networks : theory and industrial applications. - (Advances in industrial control) 1. Adaptive control systems 2. Neural networks (Computer science) I. Title II. Christodoulou, Manolis A. 629.8'36
ISBN-13: 978-1-4471-1201-3 DOI: 10.1007/978-1-4471-0785-9
e-ISBN-13: 978-1-4471-0785-9
A catalog record for this book is available from the Library of Congress
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.
© Springer-Verlag London Limited 2000
Softcover reprint of the hardcover 1st edition 2000
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.
Typesetting: Camera ready by authors
69/3830-543210 Printed on acid-free paper SPIN 10728731
Advances in Industrial Control
Industrial Control Centre Department of Electronic and Electrical Engineering University of Strathclyde Graham Hills Building 50 George Street Glasgow G1 1QE United Kingdom
Series Advisory Board
Professor Dr-Ing J. Ackermann DLR Institut für Robotik und Systemdynamik Postfach 1116 D-82230 Weßling Germany
Professor I.D. Landau Laboratoire d'Automatique de Grenoble ENSIEG, BP 46 38402 Saint Martin d'Heres France
Dr D.C. McFarlane Department of Engineering University of Cambridge Cambridge CB2 1QJ United Kingdom
Professor B. Wittenmark Department of Automatic Control Lund Institute of Technology PO Box 118 S-221 00 Lund Sweden
Professor D.W. Clarke Department of Engineering Science University of Oxford Parks Road Oxford OX1 3PJ United Kingdom
Professor Dr-Ing M. Thoma Institut für Regelungstechnik Universität Hannover Appelstr. 11 30167 Hannover Germany
Professor H. Kimura Department of Mathematical Engineering and Information Physics Faculty of Engineering The University of Tokyo 7-3-1 Hongo Bunkyo Ku Tokyo 113 Japan
Professor A.J. Laub College of Engineering - Dean's Office University of California One Shields Avenue Davis California 95616-5294 United States of America
Professor J.B. Moore Department of Systems Engineering The Australian National University Research School of Physical Sciences GPO Box 4 Canberra ACT 2601 Australia
Dr M.K. Masten Texas Instruments 2309 Northcrest Plano TX 75075 United States of America
Professor Ton Backx AspenTech Europe B.V. De Waal 32 NL-5684 PH Best The Netherlands
SERIES EDITORS' FOREWORD
The series Advances in Industrial Control aims to report and encourage technology transfer in control engineering. The rapid development of control technology has an impact on all areas of the control discipline. New theory, new controllers, actuators, sensors, new industrial processes, computer methods, new applications, new philosophies..., new challenges. Much of this development work resides in industrial reports, feasibility study papers and the reports of advanced collaborative projects. The series offers an opportunity for researchers to present an extended exposition of such new work in all aspects of industrial control for wider and rapid dissemination.
Neural networks is one of those areas in which an initial burst of enthusiasm and optimism led to an explosion of papers in the journals and many presentations at conferences; it is only in the last decade, however, that significant theoretical work on stability, convergence and robustness for the use of neural networks in control systems has been tackled. George Rovithakis and Manolis Christodoulou have been interested in these theoretical problems and in the practical aspects of neural network applications to industrial problems. This very welcome addition to the Advances in Industrial Control series provides a succinct report of their research.
The neural network model at the core of their work is the Recurrent High Order Neural Network (RHONN) and a complete theoretical and simulation development is presented. Different readers will find different aspects of the development of interest. The last chapter of the monograph discusses the problem of manufacturing or production process scheduling. Based on the outcomes of a European Union ESPRIT funded project, a full presentation of the application of the RHONN network model to the scheduling problem is given. Ultimately, the cost implication of reduced inventory holdings arising from the RHONN solution is discussed. Clearly, with such an excellent mix of theoretical development and practical application, this monograph will appeal to a wide range of researchers and readers from the control and production domains.
M.J. Grimble and M.A. Johnson Industrial Control Centre
Glasgow, Scotland, UK
PREFACE
Recent technological developments have forced control engineers to deal with extremely complex systems that include uncertain, and possibly unknown, nonlinearities, operating in highly uncertain environments. The above, together with continuously demanding performance requirements, place control engineering as one of the most challenging technological fields. In this perspective, many "conventional" control schemes fail to provide solid design procedures, since they mainly require known mathematical models of the system and/or make assumptions that are often violated in real-world applications. This is the reason why a lot of research activity has recently been concentrated on "intelligent" techniques.
One of the most significant tools that serve in this direction is the so-called artificial neural networks (ANN). Inspired by biological neuronal systems, ANNs have presented superb learning, adaptation, classification and function-approximation properties, making their use in on-line system identification and closed-loop control promising.
The early enrolment of ANNs in control produced a vast number of papers proposing different topologies and solving various application problems. Unfortunately, only computer simulations were provided at that time, indicating good performance. Before hitting real-world applications, certain properties like stability, convergence and robustness of the ANN-based control architectures must be established; such theoretical investigations, though, started to appear no earlier than 1992.
The primary purpose of this book is to present a set of techniques, which would allow the design of
• controllers able to guarantee stability, convergence and robustness for dynamical systems with unknown nonlinearities
• real-time schedulers for manufacturing systems.
To compensate for the significant amount of uncertainty in system structure, a recently developed neural network model, named Recurrent High-Order Neural Network (RHONN), is employed. This is the major novelty of this book, when compared with others in the field. The relation between neural and adaptive control is also clearly revealed.
It is assumed that the reader is familiar with a standard undergraduate background in control theory, as well as with stability and robustness concepts. The book is the outcome of the recent research efforts of its authors. Although it is intended to be a research monograph, the book is also useful for an industrial audience, where the interest is mainly on implementation rather than analyzing the stability and robustness of the control algorithms. Tables are used to summarize the control schemes presented herein.
Organization of the book. The book is divided into six chapters. Chapter 1 is used to introduce neural networks as a method for controlling unknown nonlinear dynamical plants. A brief history is also provided. Chapter 2 presents a review of the recurrent high-order neural network model and analyzes its approximation capabilities, based on which all subsequent control and scheduling algorithms are developed. An indirect adaptive control scheme is proposed in Chapter 3. Its robustness to unmodeled dynamics is analyzed using singular perturbation theory. Chapter 4 deals with the design of direct adaptive controllers, whose robustness is analyzed for various cases including unmodeled dynamics and additive and multiplicative external disturbances. The problem of manufacturing systems scheduling is formulated in Chapter 5. A real-time scheduler is developed to guarantee the fulfillment of production demand, avoiding the buffer overflow phenomenon. Finally, its implementation on an existing manufacturing system and comparison with various conventional scheduling policies is discussed in Chapter 6.
The book can be used in various ways. The reader who is interested in studying the RHONN's approximation properties and its usage in on-line system identification may read only Chapter 2. Those interested in neuroadaptive control architectures should cover Chapters 2, 3 and 4, while for those wishing to elaborate on industrial scheduling issues, Chapters 2, 5 and 6 are required. A higher-level course intended for graduate students who are interested in a deeper understanding of the application of RHONNs in adaptive control systems could cover all chapters, with emphasis on the design and stability proofs. A course for an industrial audience should cover all chapters, with emphasis on the RHONN-based adaptive control algorithms rather than stability and robustness.
Chania, Crete, Greece August 1999
George A. Rovithakis Manolis A. Christodoulou
CONTENTS
1. Introduction ... 1
   1.1 General Overview ... 1
   1.2 Book Goals & Outline ... 7
   1.3 Notation ... 8

2. Identification of Dynamical Systems Using Recurrent High-order Neural Networks ... 9
   2.1 The RHONN Model ... 10
       2.1.1 Approximation Properties ... 13
   2.2 Learning Algorithms ... 15
       2.2.1 Filtered Regressor RHONN ... 16
       2.2.2 Filtered Error RHONN ... 19
   2.3 Robust Learning Algorithms ... 20
   2.4 Simulation Results ... 25
   Summary ... 27

3. Indirect Adaptive Control ... 29
   3.1 Identification ... 29
       3.1.1 Robustness of the RHONN Identifier Owing to Unmodeled Dynamics ... 31
   3.2 Indirect Control ... 35
       3.2.1 Parametric Uncertainty ... 36
       3.2.2 Parametric plus Dynamic Uncertainties ... 39
   3.3 Test Case: Speed Control of DC Motors ... 43
       3.3.1 The Algorithm ... 44
       3.3.2 Simulation Results ... 46
   Summary ... 48

4. Direct Adaptive Control ... 53
   4.1 Adaptive Regulation - Complete Matching ... 53
   4.2 Robustness Analysis ... 61
       4.2.1 Modeling Error Effects ... 62
       4.2.2 Model Order Problems ... 71
       4.2.3 Simulations ... 80
   4.3 Modeling Errors with Unknown Coefficients ... 83
       4.3.1 Complete Model Matching at |x| = 0 ... 93
       4.3.2 Simulation Results ... 95
   4.4 Tracking Problems ... 95
       4.4.1 Complete Matching Case ... 97
       4.4.2 Modeling Error Effects ... 102
   4.5 Extension to General Affine Systems ... 108
       4.5.1 Adaptive Regulation ... 110
       4.5.2 Disturbance Effects ... 123
       4.5.3 Simulation Results ... 130
   Summary ... 134

5. Manufacturing Systems Scheduling
       5.1.1 Continuous Control Input Definition ... 146
       5.1.2 The Manufacturing Cell Dynamic Model ... 147
   5.2 Continuous-time Control Law ... 151
       5.2.1 The Ideal Case ... 152
       5.2.2 The Modeling Error Case ... 153
   5.3 Real-time Scheduling ... 155
       5.3.1 Determining the Actual Discrete Dispatching Decision ... 155
       5.3.2 Discretization Effects ... 157
   5.4 Simulation Results ... 159
   Summary ... 163

6. Scheduling using RHONNs: A Test Case ... 165
   6.1 Test Case Description ... 166
       6.1.1 General Description ... 166
       6.1.2 Production Planning & Layout in SHW ... 166
       6.1.3 Problem Definition ... 168
       6.1.4 Manufacturing Cell Topology ... 169
       6.1.5 RHONN Model Derivation ... 171
       6.1.6 Other Scheduling Policies ... 173
   6.2 Results & Comparisons ... 174
   Summary ... 183

References ... 184

Index ... 191
CHAPTER 1

INTRODUCTION
1.1 General Overview
Man has two principal objectives in the scientific study of his environment: he wants to understand and to control. The two goals reinforce each other, since deeper understanding permits firmer control, and, on the other hand, systematic application of scientific theories inevitably generates new problems which require further investigation, and so on.
It might be assumed that a fine-grained descriptive theory of terrestrial phenomena would be required before an adequate theory of control could be constructed. In actuality this is not the case, and indeed, circumstances themselves force us into situations where we must exert regulatory and corrective influences without complete knowledge of basic causes and effects. In connection with the design of experiments, space travel, economics, and the study of cancer, we encounter processes which are not fully understood. Yet design and control decisions are required. It is easy to see that in the treatment of complex processes, attempts at complete understanding at a basic level may consume so much time and so large a quantity of resources as to impede us in more immediate goals of control.
Artificial Neural Networks have been studied for many years with the hope of achieving human-like performance in solving certain problems in speech and image processing. There has been a recent resurgence in the field of neural networks owing to the introduction of new network topologies, training algorithms and VLSI implementation techniques. The potential benefits of neural networks such as parallel distributed processing, high computation rates, fault tolerance and adaptive capability, have lured researchers from other fields such as controls, robotics etc. to seek solutions to their complicated problems.
Several types of neural networks appear to offer promise for use in control systems. These include the multilayer neural network trained with the backpropagation algorithm, commonly attributed to Rumelhart et al., [97], the recurrent neural networks such as the feedback network of Hopfield, [38], the cerebellar model articulation controller (CMAC) model of Albus, [2], the content-addressable memory of Kohonen, [55], and the Gaussian node network of Moody and Darken, [69]. The choice of which neural network to use and which training procedure to
invoke is an important decision and varies depending on the intended application.
The type of neural network most commonly used in control systems is the feedforward multilayer neural network, where no information is fed back during operation. There is, however, feedback information available during training. Typically, supervised learning methods, where the neural network is trained to learn input-output patterns presented to it, are used. Most often, versions of the backpropagation (BP) algorithm are used to adjust the neural network weights during training. This is generally a slow and very time-consuming process, because the algorithm usually takes a long time to converge. However, other optimization methods such as conjugate directions and quasi-Newton have also been implemented; see [36]. Most often, the individual neuron-activation functions are sigmoidal, but signum or radial-basis Gaussian functions are also used. Note that there are additional systems and control results involving recurrent networks, as discussed later.
Theoretical studies by several research groups [16],[24],[35],[40] demonstrated that multilayer neural networks with just one hidden layer can approximate any continuous function uniformly over a compact domain, by simply adjusting the synaptic weights, such that a functional of the error between the neural network output and the output of the unknown map is minimized.
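This approximation result can be illustrated numerically. The sketch below (all sizes, seeds and learning rates are illustrative choices, not taken from the cited works) fits a single-hidden-layer sigmoid network to a continuous function on a compact domain by gradient descent on the squared error:

```python
import numpy as np

# A one-hidden-layer sigmoid network approximating a continuous map on a
# compact domain, trained by gradient descent on the mean-squared error.

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)   # compact domain [-1, 1]
y = np.sin(np.pi * x)                            # continuous "unknown" map

H = 20                                           # hidden units
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1)); b2 = np.zeros(1)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

losses = []
lr = 0.1
for _ in range(10000):
    h = sigmoid(x @ W1 + b1)                     # hidden layer
    yhat = h @ W2 + b2                           # network output
    e = yhat - y                                 # approximation error
    losses.append(float(np.mean(e ** 2)))
    # gradients of the squared-error functional w.r.t. the synaptic weights
    gW2 = h.T @ e / len(x); gb2 = e.mean(axis=0)
    dh = (e @ W2.T) * h * (1.0 - h)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("initial/final loss:", losses[0], losses[-1])
```

The existence theorems guarantee that some weight setting achieves arbitrary accuracy; the gradient iteration above is merely one way of searching for it.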
The procedure of training a neural network to represent the forward dynamics of a plant is called forward modeling. The neural network model is placed in parallel with the plant and the error between the plant and the network outputs - the prediction error - is used as the network training signal.
At this point, we should mention that the plant can be single-input single-output or multi-input multi-output, continuous or discrete, linear or nonlinear. For the neural network training, discrete samples of the plant inputs and outputs are often used. We assume that the plant is described by the nonlinear difference equation:
yP(k + 1) = f(yP(k), ..., yP(k - n + 1); u(k), ..., u(k - m + 1)).
Thus, the system output yP at time k + 1 depends on the past n output values and the past m values of the input u. An obvious approach for system modeling is to choose the input-output structure of the neural network to be the same as that of the system. Denoting the output of the network as ym, we then have:
ym(k + 1) = fapr(yP(k), ..., yP(k - n + 1); u(k), ..., u(k - m + 1)).
Here, fapr represents the nonlinear input-output map of the network, that is, the approximation of f. We can readily see that the input to the network includes the past values of the real system output; hence, the network has no feedback. If we assume that after a certain training period the network gives a good representation of the plant, that is ym ~ yP, then for subsequent post-training purposes the network output together with its delay values can
1.1 General Overview 3
be fed back and used as part of the network input. In this way, the network can be used independently of the plant. Such a network model is described by
ym(k + 1) = fapr(ym(k), ..., ym(k - n + 1); u(k), ..., u(k - m + 1)).
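As a toy illustration of forward modeling in the first (series-parallel) form above, the following sketch identifies a hypothetical plant with n = m = 2. A linear-in-the-regressor least-squares fit stands in for the multilayer network to keep the example short; the plant, its coefficients and all sizes are invented for the example:

```python
import numpy as np

# Forward modeling: predict y(k+1) from past plant outputs and inputs.
rng = np.random.default_rng(1)

# Training signal: random input uniformly distributed over [-1, 1],
# as the text suggests.
N = 500
u = rng.uniform(-1.0, 1.0, N)

# Hypothetical plant:
# y(k) = 0.5 y(k-1) - 0.2 y(k-2) + 0.3 u(k-1) + 0.1 u(k-2) + 0.1 y(k-1)^2
y = np.zeros(N)
for k in range(2, N):
    y[k] = (0.5 * y[k-1] - 0.2 * y[k-2]
            + 0.3 * u[k-1] + 0.1 * u[k-2] + 0.1 * y[k-1] ** 2)

# Regressor rows [y(k), y(k-1), u(k), u(k-1), y(k)^2]; target y(k+1).
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2], y[1:-1] ** 2])
t = y[2:]

w, *_ = np.linalg.lstsq(X, t, rcond=None)
mse = np.mean((X @ w - t) ** 2)
print("one-step prediction MSE:", mse)
```

Because the chosen regressors happen to span the plant's nonlinearity exactly, the prediction error here is essentially zero; with a neural approximator and an unknown plant, a small residual modeling error would remain.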
Fig. 1.1. Plant identification with a multilayer neural network
Suppose now that the information we have about the plant is in the form of an input-output table, which makes the problem of identification look like a typical pattern recognition problem; then, for the training of the plant model the current and previous inputs to the plant, as well as the previous outputs of the plant should be used again. Other possibilities for the training include the plant states and derivatives of the plant states.
For this reason, if a feedforward multilayer neural network is used and the training is done with the BP algorithm, then we realize that since we need discrete outputs of the plant model, a discrete or discretized continuous plant has to be considered, as discussed before. This is illustrated in Figure 1.1. The arrow that passes through the neural model is indicative of the fact that the output error is used to train the neural network. As mentioned before, we see that the discrete inputs of the plant, as well as the discrete outputs of the plant, are used for the training. The number of delays of previous inputs and outputs is unknown; since we have no information about the structure of the plant, this number has to be determined experimentally. As far as the training signal is concerned, it has been suggested, [41],[74], that a random signal uniformly distributed over certain ranges should be used.
Instead of training a neural network to identify the forward dynamics of the plant, a neural network can be trained to identify the inverse dynamics of the plant. The neural network's input is the plant's output, and the desired neural network output is the plant's input. The error difference between the actual input of the plant and the output of the neural network is to be
minimized and can be used to train the neural network. The desired output of the neural network is the current input to the plant. When modeling the inverse dynamics of the plant with a neural network, the assumption is being made, either implicitly or explicitly, that the neural network can approximate the inverse of the plant well. This, of course, means that the inverse exists and it is unique; if not unique then care should be taken with the ranges of the inputs to the network. It also means that the inverse is stable.
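A minimal sketch of inverse modeling under these assumptions (a hypothetical strictly increasing static plant map, so the inverse exists and is unique; a polynomial least-squares fit standing in for the neural network):

```python
import numpy as np

# Inverse modeling: the model's input is the plant OUTPUT and its training
# target is the plant INPUT.
rng = np.random.default_rng(3)

u = rng.uniform(-1.0, 1.0, 400)        # plant inputs (training signal)
y = u + 0.3 * u ** 3                   # plant: strictly monotone, invertible

# Inverse model u_hat = a*y + b*y^3 + c*y^5; odd features match the odd map.
Y = np.column_stack([y, y ** 3, y ** 5])
coef, *_ = np.linalg.lstsq(Y, u, rcond=None)

u_hat = Y @ coef
mse = np.mean((u_hat - u) ** 2)        # error between plant input and model output
print("inverse-model MSE:", mse)
```

If the plant map were not one-to-one, several inputs would produce the same output and this fit would average them, which is exactly why the text warns about restricting the input ranges when the inverse is not unique.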
Fig. 1.2. Training of a neural network as an open-loop controller (block diagram: reference model, neural controller, plant and delay elements)
Once an identification neural model of the plant is available, this model can be used for the design of the controller, as shown below. A neural network can be used as a conventional controller in both open and closed loop
configurations. The training of a neural network as an open loop controller is shown in Figure 1.2. The error e = y - yd is used to train the neural network. Since we do not have a desired output for the neural controller, the error at the output of the plant is backpropagated through the plant to account for this. The backpropagation of the error can be done by several methods, as stated in [36]. The most convenient way appears to be by using a neural model of the plant. A neural network is first trained to provide a model of the nonlinear plant in question, as discussed before. This can be used in parallel with the plant, with errors at the plant output backpropagated through its neural model. The computed error at the input of the plant is the error at the output of the controller. Finally, the BP algorithm is used on this error to train the neural controller. As we can see in Figure 1.3, the inputs to the neural controller include the current and previous reference inputs, previous outputs of the neural controller, as well as previous outputs of the reference model. In this figure, the existence of a reference model has been assumed, so that the task of the controller is to force the plant to the output designated by the reference model.
At this point, we should mention that for the construction of the neural model of the controller there exist further possibilities besides the mean squared error between the output of the reference model and the output of the actual plant. Other terms that can be included are the mean squared error between the reference input and the real output, r - yp, as well as the input u to the plant. The inclusion of u in the cost function is desirable, in order to preserve control energy. In the same way, the rate of u can also be included, so that the transition from one extreme value of u to another can be avoided. On the other hand, each of the terms that participate in the cost function can be assigned a weight, so that their contribution to the minimized functional varies, depending on the specific application.
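A composite cost of the kind described might be sketched as follows (the function name and all weight values are illustrative, not from the book):

```python
import numpy as np

def control_cost(y_ref, y_p, r, u, w_track=1.0, w_ref=0.5, w_u=0.1, w_du=0.05):
    """Hypothetical weighted training cost for the neural controller."""
    track = np.sum((y_ref - y_p) ** 2)   # reference-model tracking error
    ref = np.sum((r - y_p) ** 2)         # reference-input tracking error r - yp
    energy = np.sum(u ** 2)              # control-energy term
    rate = np.sum(np.diff(u) ** 2)       # penalizes jumps between extremes of u
    return w_track * track + w_ref * ref + w_u * energy + w_du * rate

# example: constant unit control while the plant output sits at zero
J = control_cost(np.ones(4), np.zeros(4), np.ones(4), np.ones(4))
print(J)   # -> 6.4
```

Tuning the four weights shifts the emphasis between tracking accuracy, control effort and control smoothness, which is the trade-off the paragraph above describes.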
In order that a neural network architecture be able to approximate the behavior of a dynamical system in some sense, it is clear that it should contain some form of dynamics or, stated differently, feedback connections. In the neural network literature, such networks are known as recurrent neural networks. They were originally designed for pattern recognition applications. A static neural network can also be made a dynamic one, by simply connecting the past neural outputs as inputs to the neural network, thus making the neural network a very complicated and highly nonlinear dynamical system. A more efficient way to introduce dynamics with the aid of feedforward multilayer neural networks was proposed in [74]. They connect stable linear dynamical systems with static multilayer networks. The connections need not be only serial; parallel and feedback connections, and combinations of the three types, are also permitted. Similar to the static multilayer networks, the synaptic weights are adjusted according to a gradient descent rule.
The main problem with the dynamic neural networks that are based on static multilayer networks is that the synaptic weights appear nonlinearly in
the mathematical representation that governs their evolution. This leads to a number of significant drawbacks. First, the learning laws that are used require a high amount of computational time. Second, since the synaptic weights are adjusted to minimize a functional of the approximation error and the weights appear nonlinearly, the functional possesses many local minima, so there is no way to ensure the convergence of the weights to the global minimum. Moreover, due to the highly nonlinear nature of the neural network architecture, basic properties like stability, convergence and robustness are very difficult to verify. The fact that even for linear systems such adaptation methods can lead to instability was also shown in [3],[50],[78]. On the other hand, the recurrent networks possessing a linear-in-the-weights property make the issues of proving stability and convergence feasible and their incorporation into a control loop promising.
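The linear-in-the-weights point can be made concrete with a hedged sketch of an identifier of the general form developed in Chapter 2: each state obeys xh_dot = -a*xh + W z(x, u), where z collects high-order products of sigmoids of the states and inputs. All dimensions, gains and the particular high-order terms below are illustrative choices, not the book's:

```python
import numpy as np

# Because the unknown weights W enter linearly, the identification error is
# quadratic in them and a gradient learning law faces no local minima.

def s(v):
    # sigmoid activation
    return 1.0 / (1.0 + np.exp(-v))

def z(x, u):
    # illustrative high-order terms: s(x1), s(x2), s(x1)s(x2), s(x1)^2 u
    return np.array([s(x[0]), s(x[1]), s(x[0]) * s(x[1]), s(x[0]) ** 2 * u])

rng = np.random.default_rng(2)
W_true = rng.normal(0.0, 1.0, (2, 4))   # "unknown" plant weights
W = np.zeros((2, 4))                    # identifier weights, to be adapted
a = np.array([1.0, 1.5])                # stable diagonal dynamics
dt, steps, gain = 0.01, 4000, 2.0

x = np.zeros(2)                         # plant state (here itself of this form)
xh = np.zeros(2)                        # identifier state
for k in range(steps):
    u = np.sin(0.01 * k)                      # exciting input signal
    zz = z(x, u)
    x = x + dt * (-a * x + W_true @ zz)       # plant, Euler step
    xh = xh + dt * (-a * xh + W @ zz)         # series-parallel identifier
    e = xh - x                                # state estimation error
    W = W - dt * gain * np.outer(e, zz)       # gradient learning law

err = np.linalg.norm(xh - x)
print("final state error:", err)
```

The error dynamics e_dot = -a e + (W - W_true) z are driven only by the weight mismatch, which is what makes Lyapunov-based stability arguments for this class of networks tractable.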
The most significant problem in generalizing the application of neural networks in control is the fact that the very interesting simulation results that are provided lack theoretical verification. Crucial properties like stability, convergence and robustness of the overall system must be developed and/or verified. The main reason for the existence of the above-mentioned problem is the mathematical difficulties associated with nonlinear systems controlled by highly nonlinear neural network controllers. In view of the mathematical difficulties encountered in the past in the adaptive control of linear systems (which remained an active problem until the early 1980's [22],[68],[71],[30]), it is hardly surprising that the analytical study of nonlinear adaptive control using neural networks is a difficult problem indeed, but progress has been made in this area and certain important results have begun to emerge, aiming to bridge the gap between theory and applications.
The problem of controlling an unknown nonlinear dynamical system has been attacked from various angles, using both direct and indirect adaptive control structures and employing different neural network models. A beautiful survey of the above-mentioned techniques can be found in a paper by Hunt et al. [42], in which links between the fields of control science and neural networks were explored and key areas for future research were proposed. All works share the key idea that, since neural networks can approximate static and dynamic, highly nonlinear systems arbitrarily well, the unknown system is substituted by a neural network model, which is of known structure but contains a number of unknown parameters (synaptic weights), plus a modeling error term. The unknown parameters may appear either linearly or nonlinearly with respect to the network nonlinearities, thus transforming the original problem into a nonlinear robust adaptive control problem.
Recent advances in nonlinear control theory and, in particular, feedback linearization techniques, [47],[76], created a new and challenging problem, which came to be known as adaptive nonlinear control. It was formulated to deal with the control of systems containing both unknown parameters and known nonlinearities. Several answers to this problem have been proposed in
the literature, with typical examples [70],[105],[102],[52],[53],[54],[6],[83],[64]. A common assumption made in the above works is that of linear parameterization. Although sometimes it is quite realistic, it considerably constrains the application field. An attempt to relax this assumption and provide global adaptive output feedback control for a class of nonlinear systems, determined by specific geometric conditions, is given by Marino and Tomei in their recent paper [65].
The above discussion makes apparent that adaptive control research, thus far, has been directed towards systems with special classes of parametric uncertainties. The need to deal with increasingly complex systems, to accomplish increasingly demanding design requirements, and the need to attain these requirements with less precise advance knowledge of the plant and its environment, inspired much work that came mostly from the area of neural networks but with an obvious and strong relation to the adaptive control field [9], [10], [81], [85]-[96], [98]-[100], [61], [62].
1.2 Book Goals & Outline
As the first results in neural control started to appear, it became increasingly clear that, in order to achieve broad acceptance within the control systems community, and before thinking of real-world applications, much more was needed than merely presenting some simulation results.
The purpose of this book is to present a rigorous mathematical framework for analyzing and designing closed-loop control systems based on neural networks, especially those of the specific structure termed recurrent high-order neural networks (RHONNs). The proposed neurocontrol schemes will be applied to nonlinear systems possessing highly uncertain and possibly unknown nonlinearities. Owing to the great amount of uncertainty allowed, the controller should be able to handle various robustness issues such as modeling errors, unmodeled dynamics and external disturbances acting both additively and multiplicatively. Since the scope of the book series is strongly related to industrial applications, the presented theory will be extended to cover issues of scheduling manufacturing cells.
To accomplish the aforementioned goals, the presentation of this book proceeds as follows:
• Chapter 2 introduces the RHONN structure and analyzes its approximation capabilities. It is shown that the proposed neural network scheme may approximate arbitrarily well general nonlinear systems whose vector fields satisfy a local Lipschitz condition. We go beyond the existence theorem and present stable learning algorithms for tuning the RHONN weights, using Lyapunov theory. Simulations performed on a robotic manipulator conclude the chapter.
• Chapter 3 deals with the problem of controlling affine-in-the-control nonlinear dynamical systems, attacking it from an indirect adaptive control point of view. Modified accordingly, the learning algorithms developed in Chapter 2 are employed for on-line system identification. Subsequently, the acquired RHONN model is used for control. The scheme is tested for both parametric and dynamic uncertainties, treated within a singular perturbation framework. Simulations performed on a nonlinearly operated DC motor highlight certain performance issues.
• Chapter 4 introduces the problem of controlling nonlinear dynamical systems through direct adaptive control techniques. The algorithms developed may handle various destabilizing mechanisms such as modeling errors, external disturbances and unmodeled dynamics, without the need for singular perturbation theory. Both regulation and tracking issues are examined. The results are also extended to cover the case where the number of measured states differs from the number of control inputs.
• Chapter 5 discusses the issues of manufacturing systems modeling and control, using recurrent high-order neural networks. Appropriately designed RHONN-based controllers are used to output the required schedule, guaranteeing achievement of the production demand while keeping all system buffers bounded.
• Finally, Chapter 6 applies the theoretical framework developed in Chapter 5 to solve a real test case. Calculation of various performance indices indicates near-optimal operation.
1.3 Notation
The following notation and definitions will be used extensively throughout the book. I denotes the identity matrix. |·| denotes the usual Euclidean norm of a vector. In the case where y is a scalar, |y| denotes its absolute value. If A is a matrix, then ‖A‖ denotes the Frobenius matrix norm [29], defined as
‖A‖² = ∑_{i,j} |aᵢⱼ|² = tr{AᵀA} ,

where tr{·} denotes the trace of a matrix. Now let d(t) be a vector function of time. Then
‖d‖₂ := ( ∫₀^∞ |d(τ)|² dτ )^{1/2} ,

‖d‖_∞ := sup_{t≥0} |d(t)| .
We will say that d ∈ ℒ₂ when ‖d‖₂ is finite. Similarly, we will say that d ∈ ℒ∞ when ‖d‖_∞ is finite.
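These definitions are easy to sanity-check numerically. The sketch below (a minimal illustration, assuming NumPy is available; the matrix and the signal d(t) = e^{-t} are arbitrary choices) verifies the Frobenius identity ‖A‖² = tr{AᵀA} and approximates the ℒ₂ and ℒ∞ norms by discretization:

```python
import numpy as np

# Frobenius norm identity: ||A||^2 = sum_ij |a_ij|^2 = tr{A^T A}
A = np.array([[1.0, -2.0], [3.0, 0.5]])
fro_sq = np.sum(np.abs(A) ** 2)
assert np.isclose(fro_sq, np.trace(A.T @ A))
assert np.isclose(np.sqrt(fro_sq), np.linalg.norm(A, "fro"))

# L2 and Linf norms of d(t) = e^{-t}, t >= 0:
# ||d||_2 = (int_0^inf e^{-2t} dt)^{1/2} = 1/sqrt(2) and ||d||_inf = 1,
# so this d belongs to both L2 and Linf.
dt = 1e-4
t = np.arange(0.0, 40.0, dt)          # tail beyond t = 40 is negligible
d = np.exp(-t)
l2 = np.sqrt(np.sum(d ** 2) * dt)     # Riemann-sum approximation
linf = np.max(np.abs(d))
assert abs(l2 - 1.0 / np.sqrt(2.0)) < 1e-3
assert linf == 1.0
```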
CHAPTER 2
IDENTIFICATION OF DYNAMICAL SYSTEMS USING RECURRENT HIGH-ORDER NEURAL NETWORKS
The use of multilayer neural networks for pattern recognition and for modeling of "static" systems is by now well known (see, for example, [1]). Given pairs of input-output data (which may be related by an unknown algebraic relation, a so-called "static" function), the network is trained to learn the particular input-output map. Theoretical work by several researchers, including Cybenko [16] and Funahashi [24], has proven that, even with one hidden layer, neural networks can approximate any continuous function uniformly over a compact domain, provided the network has a sufficient number of units, or neurons. Recently, interest has been increasing in the use of neural networks for modeling and identification of dynamical systems. Such networks, which naturally involve dynamic elements in the form of feedback connections, are known as recurrent neural networks.
Several training methods for recurrent networks have been proposed in the literature. Most of these methods rely on the gradient methodology and involve the computation of partial derivatives, or sensitivity functions. In this respect, they are extensions of the backpropagation algorithm for feedforward neural networks [97]. Examples of such learning algorithms include recurrent backpropagation [80], the backpropagation-through-time algorithms [106], the real-time recurrent learning algorithm [107], and the dynamic backpropagation [75] algorithms. The last approach is based on the computation of sensitivity models for generalized neural networks. These generalized neural networks, which were originally proposed in [74], combine feedforward neural networks and dynamical components in the form of stable rational transfer functions.
Although the training methods mentioned above have been used successfully in many empirical studies, they share some fundamental drawbacks. One drawback is the fact that, in general, they rely on some type of approximation for computing the partial derivatives. Furthermore, these training methods require a great deal of computational time. A third disadvantage is the inability to obtain analytical results concerning the convergence and stability of these schemes.
Recently, there has been a concentrated effort towards the design and analysis of learning algorithms that are based on Lyapunov stability theory [81],[100],[10],[9],[61],[98],[99],[85],[57], targeted at providing stability,
G. A. Rovithakis et al., Adaptive Control with Recurrent High-order Neural Networks © Springer-Verlag London Limited 2000
convergence and robustness proofs, in this way bridging the existing gap between theory and applications.
In this chapter we discuss the identification problem, which consists of choosing an appropriate identification model and adjusting its parameters according to some adaptive law, such that the response of the model to an input signal (or a class of input signals) approximates the response of the real system to the same input. Since a mathematical characterization of a system is often a prerequisite to analysis and controller design, system identification is important not only for understanding and predicting the behavior of the system, but also for obtaining an effective control law. As identification models we use recurrent high-order neural networks. High-order networks are expansions of the first-order Hopfield [39] and Cohen-Grossberg [12] models that allow higher-order interactions between neurons. The superior storage capacity of high-order networks has been demonstrated in [77, 4], while the stability properties of these models for fixed weight values have been studied in [18, 51]. Furthermore, several authors have demonstrated the feasibility of using these architectures in applications such as grammatical inference [28] and target detection [63].
The idea of recurrent neural networks with dynamical components distributed throughout the network in the form of dynamical neurons, and their application to the identification of dynamical systems, was proposed in [57]. In this chapter, we combine distributed recurrent networks with high-order connections between neurons. In Section 1 we show that recurrent high-order neural networks are capable of modeling a large class of dynamical systems. In particular, it is shown that if enough higher-order connections are allowed in the network, then there exist weight values such that the input-output behavior of the RHONN model approximates that of an arbitrary dynamical system whose state trajectory remains in a compact set. In Section 2, we develop weight adjustment laws for system identification under the assumption that the system to be identified can be modeled exactly by the RHONN model. It is shown that these adjustment laws guarantee boundedness of all the signals and weights and, furthermore, that the output error converges to zero. In Section 3, this analysis is extended to the case where there is a nonzero mismatch between the system and the RHONN model with optimal weight values. In Section 4, we apply this methodology to the identification of a simple robotic manipulator system, and in Section 5 some final conclusions are drawn.
2.1 The RHONN Model
Recurrent neural network (RNN) models are characterized by a two-way connectivity between units (i.e., neurons). This distinguishes them from feedforward neural networks, where the output of one unit is connected only to units
of the next layer. In the simplest case, the state history of each neuron is governed by a differential equation of the form:

ẋᵢ = -aᵢxᵢ + bᵢ ∑ⱼ wᵢⱼ yⱼ , (2.1)

where xᵢ is the state of the i-th neuron, aᵢ, bᵢ are constants, wᵢⱼ is the synaptic weight connecting the j-th input to the i-th neuron, and yⱼ is the j-th input to the neuron. Each yⱼ is either an external input or the state of a neuron passed through a sigmoid function (i.e., yⱼ = s(xⱼ)), where s(·) is the sigmoid nonlinearity.
The dynamic behavior and the stability properties of neural network models of the form (2.1) have been studied extensively by various researchers [39], [12], [51], [18]. These studies exhibited encouraging results in application areas such as associative memories, but they also revealed the limitations inherent in such a simple model.
In a recurrent second-order neural network, the input to the neuron is not only a linear combination of the components yⱼ, but also of their products yⱼyₖ. One can pursue this line further to include higher-order interactions represented by triplets yⱼyₖyₗ, quadruplets, etc., forming the recurrent high-order neural networks (RHONNs).
Let us now consider a RHONN consisting of n neurons and m inputs. The state of each neuron is governed by a differential equation of the form:
ẋᵢ = -aᵢxᵢ + bᵢ [ ∑_{k=1}^{L} wᵢₖ ∏_{j∈Iₖ} yⱼ^{dⱼ(k)} ] , (2.2)
where {I₁, I₂, ..., I_L} is a collection of L not-ordered subsets of {1, 2, ..., m+n}, aᵢ, bᵢ are real coefficients, wᵢₖ are the (adjustable) synaptic weights of the neural network, and dⱼ(k) are non-negative integers. The state of the i-th neuron is again represented by xᵢ, and y = [y₁, y₂, ..., yₘ₊ₙ]ᵀ is the input vector to each neuron, defined by:
y = [y₁, ..., yₙ, yₙ₊₁, ..., yₙ₊ₘ]ᵀ = [s(x₁), ..., s(xₙ), u₁, ..., uₘ]ᵀ , (2.3)
where u = [u₁, u₂, ..., uₘ]ᵀ is the external input vector to the network. The function s(·) is monotone increasing, differentiable, and is usually represented by sigmoids of the form:
12 2. RHONNs for Identification of Dynamical Systems
s(x) = α / (1 + e^{-βx}) - γ , (2.4)
where the parameters α, β represent the bound and slope of the sigmoid's curvature and γ is a bias constant. In the special case where α = β = 1, γ = 0, we obtain the logistic function, and by setting α = β = 2, γ = 1, we obtain the hyperbolic tangent function; these are the sigmoid activation functions most commonly used in neural network applications.
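These two special cases of (2.4) are easy to verify numerically; a minimal sketch (the function name `s` simply mirrors the notation of (2.4)):

```python
import math

def s(x, alpha, beta, gamma):
    """General sigmoid s(x) = alpha / (1 + e^{-beta x}) - gamma, as in (2.4)."""
    return alpha / (1.0 + math.exp(-beta * x)) - gamma

# alpha = beta = 1, gamma = 0  ->  the logistic function
assert abs(s(0.0, 1, 1, 0) - 0.5) < 1e-12

# alpha = beta = 2, gamma = 1  ->  the hyperbolic tangent:
# 2/(1 + e^{-2x}) - 1 = (1 - e^{-2x})/(1 + e^{-2x}) = tanh(x)
for x in [-2.0, -0.3, 0.0, 0.7, 3.0]:
    assert abs(s(x, 2, 2, 1) - math.tanh(x)) < 1e-12
```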
We now introduce the L-dimensional vector z, which is defined as

z = [z₁, z₂, ..., z_L]ᵀ = [ ∏_{j∈I₁} yⱼ^{dⱼ(1)}, ∏_{j∈I₂} yⱼ^{dⱼ(2)}, ..., ∏_{j∈I_L} yⱼ^{dⱼ(L)} ]ᵀ . (2.5)
With this definition, the RHONN model (2.2) is rewritten as

ẋᵢ = -aᵢxᵢ + bᵢ [ ∑_{k=1}^{L} wᵢₖ zₖ ] . (2.6)
If we further define the adjustable parameter vector as wᵢ := bᵢ[wᵢ₁ wᵢ₂ ... wᵢ_L]ᵀ, then (2.6) becomes

ẋᵢ = -aᵢxᵢ + wᵢᵀz . (2.7)
The vectors {wᵢ : i = 1, 2, ..., n} represent the adjustable weights of the network, while the coefficients {aᵢ : i = 1, 2, ..., n} are part of the underlying network architecture and are fixed during training.
In order to guarantee that each neuron xᵢ is bounded-input bounded-output (BIBO) stable, we shall assume that aᵢ > 0 for all i = 1, 2, ..., n. In the special case of a continuous-time Hopfield model [39], we have aᵢ = 1/(RᵢCᵢ), where Rᵢ > 0 and Cᵢ > 0 are the resistance and capacitance connected at the i-th node of the network, respectively.
The dynamic behavior of the overall network is described by expressing (2.7) in vector notation as:

ẋ = Ax + Wᵀz , (2.8)

where x = [x₁, x₂, ..., xₙ]ᵀ ∈ ℝⁿ, W = [w₁ w₂ ... wₙ] ∈ ℝ^{L×n} and A = diag{-a₁, -a₂, ..., -aₙ} is an n × n diagonal matrix. Since aᵢ > 0 for all i = 1, 2, ..., n, A is a stability matrix. Although it is not written explicitly, the vector z is a function of both the neural network state x and the external input u.
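To make the model concrete, the following sketch builds the high-order vector z of (2.5) from index sets Iₖ and powers dⱼ(k), and integrates the state equation (2.7) with a simple forward-Euler scheme. All dimensions, index sets and weight values below are illustrative choices, not taken from the book:

```python
import math

def sigmoid(x):                      # logistic s(.) applied to neuron states
    return 1.0 / (1.0 + math.exp(-x))

def z_vector(x, u, terms):
    """High-order terms z_k = prod_{j in I_k} y_j^{d_j(k)}, where
    y = [s(x_1),...,s(x_n), u_1,...,u_m] as in (2.3).
    `terms` is a list of {index: power} dicts, one per z_k."""
    y = [sigmoid(xi) for xi in x] + list(u)
    return [math.prod(y[j] ** d for j, d in Ik.items()) for Ik in terms]

def rhonn_step(x, u, a, W, terms, dt):
    """One forward-Euler step of x_i' = -a_i x_i + w_i^T z  (eq. 2.7)."""
    z = z_vector(x, u, terms)
    return [xi + dt * (-ai * xi + sum(wk * zk for wk, zk in zip(wi, z)))
            for xi, ai, wi in zip(x, a, W)]

# Toy network: n = 2 neurons, m = 1 input, L = 3 high-order terms:
# z_1 = y_1, z_2 = y_1*y_2, z_3 = y_2^2 * y_3   (here y_3 = u_1)
terms = [{0: 1}, {0: 1, 1: 1}, {1: 2, 2: 1}]
a = [1.0, 2.0]                       # a_i > 0, so each neuron is BIBO stable
W = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.6]]
x = [0.0, 0.0]
for k in range(2000):                # bounded input -> bounded state
    x = rhonn_step(x, [math.sin(0.01 * k)], a, W, terms, dt=0.01)
assert all(abs(xi) < 10.0 for xi in x)
```

Since every yⱼ is bounded (sigmoid outputs and a bounded input) and each aᵢ > 0, the simulated state remains bounded, consistent with the BIBO discussion above.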
2.1.1 Approximation Properties
Consider now the problem of approximating a general nonlinear dynamical system whose input-output behavior is given by
χ̇ = F(χ, u) , (2.9)
where χ ∈ ℝⁿ is the system state, u ∈ ℝᵐ is the system input and F : ℝ^{n+m} → ℝⁿ is a smooth vector field defined on a compact set Y ⊂ ℝ^{n+m}.
The approximation problem consists of determining whether, by allowing enough higher-order connections, there exist weights W such that the RHONN model approximates the input-output behavior of an arbitrary dynamical system of the form (2.9).
In order to have a well-posed problem, we assume that F is continuous and satisfies a local Lipschitz condition such that (2.9) has a unique solution (in the sense of Carathéodory [34]) and (χ(t), u(t)) ∈ Y for all t in some time interval J_T = {t : 0 ≤ t ≤ T}. The interval J_T represents the time period over which the approximation is to be performed. Based on the above assumptions we obtain the following result.
Theorem 2.1.1. Suppose that the system (2.9) and the model (2.8) are initially at the same state, x(0) = χ(0); then for any ε > 0 and any finite T > 0, there exist an integer L and a matrix W* ∈ ℝ^{L×n} such that the state x(t) of the RHONN model (2.8) with L high-order connections and weight values W = W* satisfies

sup_{0≤t≤T} |x(t) - χ(t)| ≤ ε .
Proof: From (2.8), the dynamic behavior of the RHONN model is described by

ẋ = Ax + Wᵀz(x, u) . (2.10)

Adding and subtracting Aχ, (2.9) is rewritten as

χ̇ = Aχ + G(χ, u) , (2.11)
where G(χ, u) := F(χ, u) - Aχ. Since x(0) = χ(0), the state error e = x - χ satisfies the differential equation
ė = Ae + Wᵀz(x, u) - G(χ, u) , e(0) = 0 . (2.12)
By assumption, (χ(t), u(t)) ∈ Y for all t ∈ [0, T], where Y is a compact subset of ℝ^{n+m}. Let

Y_ε = { (χ, u) ∈ ℝ^{n+m} : |(χ, u) - (χ_y, u_y)| ≤ ε, (χ_y, u_y) ∈ Y } . (2.13)
It can readily be seen that Y_ε is also a compact subset of ℝ^{n+m} and Y ⊂ Y_ε. In simple words, Y_ε is ε larger than Y, where ε is the required degree of approximation. Since z is a continuous function, it satisfies a Lipschitz condition in Y_ε, i.e., there exists a constant l such that for all (x₁, u), (x₂, u) ∈ Y_ε

|z(x₁, u) - z(x₂, u)| ≤ l |x₁ - x₂| . (2.14)
In what follows, we show that the function Wᵀz(x, u) satisfies the conditions of the Stone-Weierstrass Theorem and can therefore approximate any continuous function over a compact domain.
From (2.2), (2.3) it is clear that z(x, u) is of the standard polynomial expansion form, with the exception that each component of the vector x is preprocessed by a sigmoid function s(·). As shown in [14], preprocessing of the input via a continuous invertible function does not affect the ability of a network to approximate continuous functions; therefore, it can readily be shown that if L is sufficiently large, then there exist weight values W = W* such that W*ᵀz(x, u) can approximate G(x, u) to any degree of accuracy, for all (x, u) in a compact domain. Hence, there exists W = W* such that
sup_{(χ,u)∈Y_ε} |W*ᵀz(χ, u) - G(χ, u)| ≤ δ , (2.15)

where δ is a constant to be chosen in the sequel. The solution of (2.12) is

e(t) = ∫₀ᵗ e^{A(t-τ)} [ W*ᵀz(x(τ), u(τ)) - G(χ(τ), u(τ)) ] dτ
     = ∫₀ᵗ e^{A(t-τ)} [ W*ᵀz(x(τ), u(τ)) - W*ᵀz(χ(τ), u(τ)) ] dτ
     + ∫₀ᵗ e^{A(t-τ)} [ W*ᵀz(χ(τ), u(τ)) - G(χ(τ), u(τ)) ] dτ . (2.16)
Since A is a diagonal stability matrix, there exists a positive constant a such that ‖e^{At}‖ ≤ e^{-at} for all t ≥ 0. Moreover, by (2.14), |W*ᵀz(x, u) - W*ᵀz(χ, u)| ≤ l‖W*‖ |x - χ|. Based on the constants a, l, ‖W*‖ and ε, let δ in (2.15) be chosen as

δ = (εa/2) e^{-l‖W*‖/a} > 0 . (2.17)
First consider the case where (x(t), u(t)) ∈ Y_ε for all t ∈ [0, T]. Starting from (2.16), taking norms on both sides and using (2.14), (2.15) and (2.17), the following inequalities hold for all t ∈ [0, T]:

|e(t)| ≤ ∫₀ᵗ ‖e^{A(t-τ)}‖ |W*ᵀz(x(τ), u(τ)) - W*ᵀz(χ(τ), u(τ))| dτ
       + ∫₀ᵗ ‖e^{A(t-τ)}‖ |W*ᵀz(χ(τ), u(τ)) - G(χ(τ), u(τ))| dτ
     ≤ ∫₀ᵗ e^{-a(t-τ)} l‖W*‖ |e(τ)| dτ + ∫₀ᵗ δ e^{-a(t-τ)} dτ
     ≤ (ε/2) e^{-l‖W*‖/a} + l‖W*‖ ∫₀ᵗ e^{-a(t-τ)} |e(τ)| dτ .
Applying the Bellman-Gronwall Lemma now yields

|e(t)| ≤ (ε/2) e^{-l‖W*‖/a} exp{ l‖W*‖ ∫₀ᵗ e^{-a(t-τ)} dτ } ≤ (ε/2) e^{-l‖W*‖/a} e^{l‖W*‖/a} = ε/2 . (2.18)
Now suppose, for the sake of contradiction, that (x, u) does not belong to Y_ε for all t ∈ [0, T]; then, by the continuity of x(t), there exists a T*, with 0 < T* < T, such that (x(T*), u(T*)) ∈ ∂Y_ε, where ∂Y_ε denotes the boundary of Y_ε. If we carry out the same analysis for t ∈ [0, T*], we obtain that in this interval |x(t) - χ(t)| ≤ ε/2, which is clearly a contradiction. Hence, (2.18) holds for all t ∈ [0, T]. ∎
The aforementioned theorem proves that if a sufficiently large number of connections is allowed in the RHONN model, then it is possible to approximate any dynamical system to any degree of accuracy. This is strictly an existence result; it does not provide any constructive method for obtaining the optimal weights W*. In what follows, we consider the learning problem of adjusting the weights adaptively, such that the RHONN model identifies general dynamical systems.
2.2 Learning Algorithms
In this section we develop weight adjustment laws under the assumption that the unknown system is modeled exactly by a RHONN architecture of the form (2.8). This analysis is extended in the next section to cover the case where there exists a nonzero mismatch between the system and the RHONN model with optimal weight values. This mismatch is referred to as modeling error.
Although the assumption of no modeling error is not very realistic, the identification procedure of this section is useful for two reasons:
• the analysis is more straightforward and thus easier to understand,
• the techniques developed for the case of no modeling error are also very important in the design of weight adaptive laws in the presence of modeling errors.
Based on the assumption of no modeling error, there exist unknown weight vectors wᵢ*, i = 1, 2, ..., n, such that each state χᵢ of the unknown dynamical system (2.9) satisfies:

χ̇ᵢ = -aᵢχᵢ + wᵢ*ᵀz(χ, u) , χᵢ(0) = χᵢ⁰ , (2.19)

where χᵢ⁰ is the initial i-th state of the system. In the following, unless there is confusion, the arguments of the vector z will be omitted.
As is standard in system identification procedures, we will assume that the input u(t) and the state χ(t) remain bounded for all t ≥ 0. Based on the definition of z(χ, u), as given by (2.5), this implies that z(χ, u) is also bounded. In the subsections that follow we present different approaches for estimating the unknown parameters wᵢ* of the RHONN model.
2.2.1 Filtered Regressor RHONN
The following lemma is useful in the development of the adaptive identification scheme presented in this subsection.
Lemma 2.2.1. The system described by

χ̇ᵢ = -aᵢχᵢ + wᵢ*ᵀz(χ, u) , χᵢ(0) = χᵢ⁰ , (2.20)

can be expressed as

χᵢ = wᵢ*ᵀζᵢ + e^{-aᵢt}χᵢ⁰ , (2.21)

where the filtered regressor ζᵢ is generated by

ζ̇ᵢ = -aᵢζᵢ + z(χ, u) , ζᵢ(0) = 0 . (2.22)

Proof: The solution of (2.22) is

ζᵢ(t) = ∫₀ᵗ e^{-aᵢ(t-τ)} z(χ(τ), u(τ)) dτ ;

therefore,

wᵢ*ᵀζᵢ + e^{-aᵢt}χᵢ⁰ = e^{-aᵢt}χᵢ⁰ + ∫₀ᵗ e^{-aᵢ(t-τ)} wᵢ*ᵀz(χ(τ), u(τ)) dτ . (2.23)

By the variation-of-constants formula, the right-hand side of (2.23) is the solution of (2.20), i.e., it equals χᵢ(t), and this concludes the proof. ∎
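The lemma is easy to verify numerically in the scalar case: integrating the state equation and the regressor filter side by side, the identity χ(t) = w*ζ(t) + e^{-at}χ(0) holds up to discretization error. A minimal sketch (the regressor signal z(t) = cos t and all constants are arbitrary illustrative choices):

```python
import math

# Scalar check of Lemma 2.2.1 with a fixed regressor signal z(t):
# integrate  chi' = -a*chi + w*z(t)  and the filter  zeta' = -a*zeta + z(t);
# then chi(t) should equal  w*zeta(t) + exp(-a t)*chi0.
a, w, chi0, dt = 2.0, 1.5, 0.7, 1e-4
T = 3.0
chi, zeta = chi0, 0.0
for k in range(int(T / dt)):
    zt = math.cos(k * dt)            # any bounded continuous z(t) will do
    chi += dt * (-a * chi + w * zt)
    zeta += dt * (-a * zeta + zt)
predicted = w * zeta + math.exp(-a * T) * chi0
assert abs(chi - predicted) < 1e-3
```

Both sides satisfy the same linear differential equation with the same initial value, so they coincide; the small residual here is purely forward-Euler discretization error.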
Using Lemma 2.2.1, the dynamical system described by (2.9) is rewritten as
χᵢ = wᵢ*ᵀζᵢ + εᵢ , i = 1, 2, ..., n , (2.24)

where ζᵢ is a filtered version of the regressor vector z (as described by (2.5)) and εᵢ := e^{-aᵢt}χᵢ⁰ is an exponentially decaying term which appears if the system is in a nonzero initial state. By replacing the unknown weight vector wᵢ* in (2.24) by its estimate wᵢ, and ignoring the exponentially decaying term εᵢ, we obtain the RHONN model

xᵢ = wᵢᵀζᵢ , i = 1, 2, ..., n . (2.25)
The exponentially decaying term εᵢ(t) can be omitted in (2.25) since, as we shall see later, it does not affect the convergence properties of the scheme. The state error eᵢ = xᵢ - χᵢ between the system and the model satisfies

eᵢ = φᵢᵀζᵢ - εᵢ , (2.26)

where φᵢ = wᵢ - wᵢ* is the weight estimation error. The problem now is to derive suitable adaptive laws for adjusting the weights wᵢ for i = 1, ..., n.
This can be achieved by using well-known optimization techniques for the minimization of the quadratic cost functional

J(w₁, ..., wₙ) = (1/2) ∑_{i=1}^{n} eᵢ² = (1/2) ∑_{i=1}^{n} [ (wᵢ - wᵢ*)ᵀζᵢ - εᵢ ]² . (2.27)
Depending on the optimization method that is employed, different weight adjustment laws can be derived. Here we consider the gradient and the least-squares methods [45]. The gradient method yields

ẇᵢ = -Γᵢζᵢeᵢ , i = 1, 2, ..., n , (2.28)

where Γᵢ is a positive definite matrix referred to as the adaptive gain or learning rate. With the least-squares method we obtain

ẇᵢ = -Pᵢζᵢeᵢ ,
Ṗᵢ = -PᵢζᵢζᵢᵀPᵢ ,  i = 1, 2, ..., n , (2.29)

where Pᵢ(0) is a symmetric positive definite matrix. In the above formulation, the least-squares algorithm can be thought of as a gradient algorithm with a time-varying learning rate.
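The gradient law (2.28) can be sketched in a few lines. The toy problem below (a scalar system with L = 2 regressor components, zero initial state so that εᵢ = 0, and an illustrative persistently exciting regressor — none of these choices come from the book) discretizes the filter (2.22), the error (2.26) and the update (2.28) with forward Euler; the weight estimates converge to the true values:

```python
import math

# Gradient law (2.28) on the filtered-regressor model, scalar state,
# L = 2 regressor filters, zero initial state so that epsilon_i = 0.
a, dt = 1.0, 1e-3
w_star = [0.8, -0.5]                 # unknown "true" weights (illustrative)
w = [0.0, 0.0]                       # adjustable estimates
gamma = 5.0                          # adaptive gain Gamma = gamma * I
zeta = [0.0, 0.0]
for k in range(200_000):             # 200 s of simulated time
    t = k * dt
    zt = [math.sin(t), math.cos(0.5 * t)]        # persistently exciting z
    zeta = [zi + dt * (-a * zi + zj) for zi, zj in zip(zeta, zt)]
    chi = sum(ws * zi for ws, zi in zip(w_star, zeta))   # system (2.24)
    e = sum(wi * zi for wi, zi in zip(w, zeta)) - chi    # error (2.26)
    w = [wi - dt * gamma * zi * e for wi, zi in zip(w, zeta)]   # law (2.28)
assert all(abs(wi - ws) < 1e-2 for wi, ws in zip(w, w_star))
```

Note that weight convergence here relies on the regressor being persistently exciting (see Remark 2.2.2 below); the error eᵢ alone would converge to zero even without it.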
The stability and convergence properties of the weight adjustment laws given by (2.28) and (2.29) are well-known in the adaptive control literature (see, for example, [31, 73]). For tutorial purposes and for completeness we present the stability proof for the gradient method here.
Theorem 2.2.1. Consider the RHONN model given by (2.25), whose parameters are adjusted according to (2.28). Then, for i = 1, 2, ..., n:

(a) eᵢ, φᵢ ∈ ℒ∞ (eᵢ and φᵢ are uniformly bounded);
(b) lim_{t→∞} eᵢ(t) = 0.
Proof: Consider the Lyapunov function candidate

V = (1/2) ∑_{i=1}^{n} ( φᵢᵀΓᵢ⁻¹φᵢ + ∫_t^∞ εᵢ²(τ) dτ ) . (2.30)

Using (2.28) and (2.26), the time derivative of V in (2.30) is expressed as

V̇ = ∑_{i=1}^{n} ( -eᵢφᵢᵀζᵢ - (1/2)εᵢ² ) = -∑_{i=1}^{n} ( (1/2)eᵢ² + (1/2)(eᵢ + εᵢ)² ) ≤ -(1/2) ∑_{i=1}^{n} eᵢ² . (2.31)
Since V̇ ≤ 0, we obtain that φᵢ ∈ ℒ∞. Moreover, using (2.26) and the boundedness of ζᵢ, we have that eᵢ is also bounded. To show that eᵢ(t) converges to
zero, we first note that since V is a non-increasing function of time and also bounded from below, the limit lim_{t→∞} V(t) = V∞ exists; therefore, by integrating both sides of (2.31) from t = 0 to ∞ and taking bounds, we obtain

∫₀^∞ ∑_{i=1}^{n} eᵢ²(τ) dτ ≤ 2 ( V(0) - V∞ ) ,
so, for i = 1, ..., n, eᵢ(t) is square integrable. Furthermore, using (2.26),

ėᵢ(t) = φ̇ᵢᵀζᵢ + φᵢᵀζ̇ᵢ - ε̇ᵢ = -eᵢζᵢᵀΓᵢζᵢ - aᵢφᵢᵀζᵢ + φᵢᵀz - ε̇ᵢ .

Since eᵢ, ζᵢ, φᵢ, z, ε̇ᵢ are all bounded, ėᵢ ∈ ℒ∞. Hence, by applying Barbalat's Lemma [73], we obtain that lim_{t→∞} eᵢ(t) = 0. ∎
Remark 2.2.1. The stability proof for the least-squares algorithm (2.29) proceeds along the same lines as the proof of Theorem 2.2.1, by considering the Lyapunov function

V = (1/2) ∑_{i=1}^{n} ( φᵢᵀPᵢ⁻¹φᵢ + ∫_t^∞ εᵢ²(τ) dτ ) .
A problem that may be encountered in the application of the least-squares algorithm is that Pᵢ may become arbitrarily small and thus slow down adaptation in some directions [45, 31]. This problem can be prevented by using one of various modifications which prevent Pᵢ(t) from going to zero. One such modification is the so-called covariance resetting, where, if the smallest eigenvalue of Pᵢ(t) becomes smaller than ρ₁, then Pᵢ(t) is reset to Pᵢ(t) = ρ₀I, where ρ₀ ≥ ρ₁ > 0 are design constants.
Remark 2.2.2. The above theorem does not imply that the weight estimation error φᵢ = wᵢ - wᵢ* converges to zero. In order to achieve convergence of the weights to their correct values, the additional assumption of persistent excitation needs to be imposed on the regressor vector ζᵢ. In particular, ζᵢ(t) ∈ ℝ^L is said to be persistently exciting if there exist positive scalars c, d and T such that for all t ≥ 0

c I ≤ ∫_t^{t+T} ζᵢ(τ)ζᵢᵀ(τ) dτ ≤ d I , (2.32)

where I is the L × L identity matrix.
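Persistent excitation is a property of the signal itself, and condition (2.32) can be checked numerically by computing the windowed Gram matrix for several window positions. A sketch for an illustrative two-dimensional regressor (the signal, window length and lower bound c = 0.5 are all arbitrary choices for the example):

```python
import math

# Numerical check of persistent excitation (2.32) for a 2-dim regressor:
# the windowed Gram matrix must be uniformly positive definite.
dt, T = 1e-3, 10.0
n = int(T / dt)

def gram(t0):
    """Approximate int_{t0}^{t0+T} zeta(tau) zeta(tau)^T dtau."""
    g = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(n):
        t = t0 + k * dt
        z = [math.sin(t), math.cos(0.5 * t)]     # two distinct frequencies
        for i in range(2):
            for j in range(2):
                g[i][j] += z[i] * z[j] * dt
    return g

for t0 in [0.0, 3.7, 25.0]:                      # several window positions
    g = gram(t0)
    # smallest eigenvalue of a symmetric 2x2 matrix, in closed form
    tr = g[0][0] + g[1][1]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    lam_min = tr / 2 - math.sqrt(tr * tr / 4 - det)
    assert lam_min > 0.5             # c*I <= Gram holds with c = 0.5 here
```

A regressor built from a single frequency would fail this test for L = 2, since the Gram matrix becomes nearly singular; two distinct frequencies suffice here.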
Remark 2.2.3. The learning algorithms developed above can be extended to the case where the underlying neuron structure is governed by the higher-order Cohen-Grossberg model [12, 18]
ẋᵢ = -aᵢ(xᵢ) [ bᵢ(xᵢ) + ∑_{k=1}^{L} wᵢₖ ∏_{j∈Iₖ} yⱼ^{dⱼ(k)} ] , (2.33)
where aᵢ(·), bᵢ(·) satisfy certain conditions required for the boundedness of the state variables [18]. It can readily be seen that in (2.33) the differential equation is still linear in the weights, and hence a similar parameter estimation procedure can be applied.
The filtered-regressor RHONN model considered in this subsection relies on filtering the vector z, which is sometimes referred to as the regressor vector. By using this filtering technique, it is possible to obtain a very simple algebraic expression for the error (as given by (2.26)), which allows the application of well-known optimization procedures for designing and analyzing weight adjustment laws. There is, however, an important drawback to this method, namely the complex configuration and heavy computational demands required in the filtering of the regressor. Since, in general, the dimension of the regressor is larger than the dimension of the system, i.e., L > n, it might be computationally very expensive to employ so many filters. In the next subsection we consider a simpler structure that requires only n filters and hence fewer computations.
2.2.2 Filtered Error RHONN
In developing this identification scheme we start again from the differential equation that describes the unknown system, i.e.,

χ̇ᵢ = -aᵢχᵢ + wᵢ*ᵀz(χ, u) , i = 1, 2, ..., n . (2.34)

In this case the RHONN identifier model is chosen as

ẋᵢ = -aᵢxᵢ + wᵢᵀz(χ, u) , i = 1, 2, ..., n , (2.35)

where wᵢ is again the estimate of the unknown vector wᵢ*. The state error eᵢ := xᵢ - χᵢ now satisfies

ėᵢ = -aᵢeᵢ + φᵢᵀz , i = 1, 2, ..., n , (2.36)
where φᵢ = wᵢ - wᵢ*. The weights wᵢ, for i = 1, 2, ..., n, are adjusted according to the learning law

ẇᵢ = -Γᵢ z eᵢ , (2.37)

where the adaptive gain Γᵢ is a positive definite L × L matrix. In the special case where Γᵢ = γᵢI, with γᵢ > 0 a scalar, Γᵢ in (2.37) can be replaced by γᵢ.
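A minimal simulation of the filtered-error scheme (2.35)-(2.37) can be written as follows. The plant here is itself a scalar RHONN with illustrative "true" weights (so the no-modeling-error assumption holds by construction), and the regressor z = [s(χ), u] is evaluated at the plant state, as in (2.35):

```python
import math

# Filtered-error identifier (2.35)/(2.37) on a plant that is exactly
# a scalar RHONN:  chi' = -a*chi + w*^T z(chi, u),  with z = [s(chi), u].
def s(x):
    return 1.0 / (1.0 + math.exp(-x))

a, dt, gamma = 1.0, 1e-3, 10.0
w_star = [1.2, -0.7]                 # unknown "true" weights (illustrative)
w = [0.0, 0.0]
chi, xhat = 0.5, 0.0                 # plant and identifier states
for k in range(100_000):             # 100 s of simulated time
    u = math.sin(0.3 * k * dt)
    z = [s(chi), u]
    e = xhat - chi                                        # state error
    chi += dt * (-a * chi + sum(ws * zi for ws, zi in zip(w_star, z)))
    xhat += dt * (-a * xhat + sum(wi * zi for wi, zi in zip(w, z)))
    w = [wi - dt * gamma * zi * e for wi, zi in zip(w, z)]    # law (2.37)
assert abs(xhat - chi) < 0.05        # e_i -> 0, as Theorem 2.2.2 predicts
```

Only n = 1 state filter is simulated here, illustrating the computational advantage over the filtered-regressor scheme, which would require L filters.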
The next theorem shows that this identification scheme has convergence properties similar to those of the filtered-regressor RHONN model with the gradient method for adjusting the weights.
Theorem 2.2.2. Consider the filtered-error RHONN model given by (2.35), whose weights are adjusted according to (2.37). Then, for i = 1, 2, ..., n:

(a) eᵢ, φᵢ ∈ ℒ∞;
(b) lim_{t→∞} eᵢ(t) = 0.
Proof: Consider the Lyapunov function candidate

V = (1/2) ∑_{i=1}^{n} ( eᵢ² + φᵢᵀΓᵢ⁻¹φᵢ ) . (2.38)

Then, using (2.36), (2.37), and the fact that φ̇ᵢ = ẇᵢ, the time derivative of V in (2.38) satisfies

V̇ = ∑_{i=1}^{n} ( eᵢėᵢ + φᵢᵀΓᵢ⁻¹φ̇ᵢ ) = -∑_{i=1}^{n} aᵢeᵢ² ≤ 0 . (2.39)

Since V̇ ≤ 0, from (2.38) we obtain that eᵢ, φᵢ ∈ ℒ∞ for i = 1, ..., n. Using this result in (2.36), we also have that ėᵢ ∈ ℒ∞. Now, by employing the same techniques as in the proof of Theorem 2.2.1, it can readily be shown that eᵢ ∈ ℒ₂, i.e., eᵢ(t) is square integrable; therefore, by applying Barbalat's Lemma, we obtain that lim_{t→∞} eᵢ(t) = 0. ∎
Table 2.1. Filtered-regressor RHONN identifier

System Model:               χ̇ = F(χ, u), χ ∈ ℝⁿ, u ∈ ℝᵐ
Regressor Filter:           ζ̇ᵢ = -aᵢζᵢ + z, ζᵢ(0) = 0,  i = 1, 2, ..., n
Parametric Model:           χᵢ = wᵢ*ᵀζᵢ + εᵢ,  i = 1, 2, ..., n
RHONN Identifier Model:     xᵢ = wᵢᵀζᵢ,  i = 1, 2, ..., n
Identification Error:       eᵢ = xᵢ - χᵢ,  i = 1, 2, ..., n
Weight Estimation Error:    φᵢ = wᵢ - wᵢ*,  i = 1, 2, ..., n
Gradient Learning Law:      ẇᵢ = -Γᵢζᵢeᵢ,  i = 1, 2, ..., n
Least-squares Learning Law: ẇᵢ = -Pᵢζᵢeᵢ, Ṗᵢ = -PᵢζᵢζᵢᵀPᵢ,  i = 1, 2, ..., n
2.3 Robust Learning Algorithms

The derivation of the learning algorithms developed in the previous section made the crucial assumption of no modeling error. Equivalently, it was assumed that there exist weight vectors wᵢ*, for i = 1, ..., n, such that each state of the unknown dynamical system (2.9) satisfies
Table 2.2. Filtered-error RHONN identifier

System Model:            χ̇ = F(χ, u), χ ∈ ℝⁿ, u ∈ ℝᵐ
Parametric Model:        χ̇ᵢ = -aᵢχᵢ + wᵢ*ᵀz,  i = 1, 2, ..., n
RHONN Identifier Model:  ẋᵢ = -aᵢxᵢ + wᵢᵀz,  i = 1, 2, ..., n
Identification Error:    eᵢ = xᵢ - χᵢ,  i = 1, 2, ..., n
Weight Estimation Error: φᵢ = wᵢ - wᵢ*,  i = 1, 2, ..., n
Learning Law:            ẇᵢ = -Γᵢzeᵢ,  i = 1, 2, ..., n
χ̇ᵢ = -aᵢχᵢ + wᵢ*ᵀz(χ, u) . (2.40)
In many cases this assumption will be violated, mainly owing to an insufficient number of higher-order terms in the RHONN model. In such cases, if standard adaptive laws are used for updating the weights, then the presence of the modeling error, in problems related to learning in dynamic environments, may cause the adjusted weight values (and, consequently, the error eᵢ = xᵢ - χᵢ) to drift to infinity. Examples of such behavior, which is usually referred to as parameter drift, can be found in the adaptive control literature of linear systems [73, 45].
In this section we shall modify the standard weight adjustment laws in order to avoid the parameter drift phenomenon. These modified weight adjustment laws will be referred to as robust learning algorithms.
In formulating the problem, it is noted that, by adding and subtracting -aᵢχᵢ + wᵢ*ᵀz(χ, u), the dynamic behavior of each state of the system (2.9) can be expressed by a differential equation of the form

χ̇ᵢ = -aᵢχᵢ + wᵢ*ᵀz(χ, u) + νᵢ(t) , (2.41)

where the modeling error νᵢ(t) is given by

νᵢ(t) := Fᵢ(χ(t), u(t)) + aᵢχᵢ(t) - wᵢ*ᵀz(χ(t), u(t)) . (2.42)
The function Fᵢ(χ, u) denotes the i-th component of the vector field F(χ, u), while the unknown optimal weight vector wᵢ* is defined as the value of the weight vector wᵢ that minimizes the ℒ∞-norm difference between Fᵢ(χ, u) + aᵢχᵢ and wᵢᵀz(χ, u) for all (χ, u) ∈ Y ⊂ ℝ^{n+m}, subject to the constraint that |wᵢ| ≤ Mᵢ, where Mᵢ is a large design constant. The region Y denotes the smallest compact subset of ℝ^{n+m} that includes all the values that (χ, u) can take, i.e., (χ(t), u(t)) ∈ Y for all t ≥ 0. Since, by assumption, u(t) is uniformly bounded and the dynamical system to be identified is BIBO stable, the existence of such a Y is assured. It is pointed out that in our analysis we require knowledge neither of the region Y nor of upper bounds for the modeling error νᵢ(t).
In summary, for i = 1, 2, ..., n, the optimal weight vector wᵢ* is defined as

wᵢ* := arg min_{|wᵢ|≤Mᵢ} { sup_{(χ,u)∈Y} |Fᵢ(χ, u) + aᵢχᵢ - wᵢᵀz(χ, u)| } . (2.43)
The reason for restricting wᵢ* to a ball of radius Mᵢ is twofold: firstly, to avoid any numerical problems that may arise owing to weight values that are too large; and secondly, to allow the use of the σ-modification [45], which will be developed below to handle the parameter drift problem.
The formulation developed above closely follows the methodology of [81]. Using this formulation, we now have a system of the form (2.41) instead of (2.40). It is noted that, since χ(t) and u(t) are bounded, the modeling error νᵢ(t) is also bounded, i.e., sup_{t≥0} |νᵢ(t)| ≤ ν̄ᵢ for some finite constant ν̄ᵢ.
In what follows we develop robust learning algorithms based on the filtered-error RHONN identifier; however, the same underlying idea can readily be extended to the filtered-regressor RHONN. Hence, the identifier is chosen as in (2.35), i.e.,

ẋᵢ = -aᵢxᵢ + wᵢᵀz(χ, u) , i = 1, 2, ..., n , (2.44)
where wᵢ is the estimate of the unknown optimal weight vector wᵢ*. Using (2.41) and (2.44), the state error eᵢ = xᵢ - χᵢ satisfies

ėᵢ = -aᵢeᵢ + φᵢᵀz - νᵢ , (2.45)

where φᵢ = wᵢ - wᵢ*. Owing to the presence of the modeling error νᵢ, the learning laws given by (2.37) are modified as follows:
ẇᵢ = { -Γᵢzeᵢ ,             if |wᵢ| ≤ Mᵢ ,
     { -Γᵢzeᵢ - σᵢΓᵢwᵢ ,    if |wᵢ| > Mᵢ ,     (2.46)
where σ_i is a positive constant chosen by the designer. The above weight-adjustment law coincides with (2.37) as long as w_i belongs to a ball of radius M_i. In the case that the weights leave this ball, the weight-adjustment law is modified by the addition of the leakage term -σ_i Γ_i w_i, whose objective is to prevent the weight values from drifting to infinity. This modification is known as the switching σ-modification [45].
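To see the leakage mechanism at work, the following minimal sketch integrates a scalar version of the error equation (2.45) together with the switching law (2.46) by forward Euler. The plant weight w_star, the regressor z(t), the modeling error ν(t) and all gains are hypothetical values chosen only to exercise the update law, not taken from the book.

```python
import math

# Scalar sketch of the switching sigma-modification (2.46).
a = 1.0        # filter pole a_i > 0
gamma = 5.0    # adaptation gain Gamma_i > 0
sigma = 0.1    # leakage constant sigma_i > 0
M = 2.0        # weight bound M_i
w_star = 1.5   # unknown optimal weight w_i^* (hypothetical)
dt = 1e-3
w, e = 0.0, 0.0
w_hist = []
for k in range(20000):
    t = k * dt
    z = math.sin(t)                 # bounded regressor
    nu = 0.3 * math.sin(5.0 * t)    # bounded modeling error nu_i(t)
    # error dynamics (2.45):  e_dot = -a e + (w - w_star) z - nu
    e += dt * (-a * e + (w - w_star) * z - nu)
    # switching sigma-modification (2.46)
    w_dot = -gamma * z * e
    if abs(w) > M:
        w_dot -= sigma * gamma * w  # leakage active only outside the ball
    w += dt * w_dot
    w_hist.append(w)
```

Despite the persistent modeling error, the weight trajectory remains bounded, which is exactly the role of the leakage term.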
In the following theorem we use the vector notation ν := [ν_1 ... ν_n]^T and e := [e_1 ... e_n]^T.
Theorem 2.3.1. Consider the filtered-error RHONN model given by (2.44), whose weights are adjusted according to (2.46). Then, for i = 1, ..., n:
(a) e_i, φ_i ∈ L∞;
(b) there exist constants λ, μ such that

\int_0^t |e(\tau)|^2 \, d\tau \le \lambda + \mu \int_0^t |\nu(\tau)|^2 \, d\tau .
Proof: Consider the Lyapunov function candidate
2.3 Robust Learning Algorithms 23
V = \frac{1}{2} \sum_{i=1}^{n} \left( e_i^2 + \phi_i^T \Gamma_i^{-1} \phi_i \right) .   (2.47)

Differentiating (2.47) along the solutions of (2.45), (2.46) yields

\dot{V} = \sum_{i=1}^{n} \left( -a_i e_i^2 - e_i \nu_i - I_{w_i} \sigma_i \phi_i^T w_i \right) ,   (2.48)
where I_{w_i} is the indicator function defined as I_{w_i} = 1 if |w_i| > M_i and I_{w_i} = 0 if |w_i| ≤ M_i. Since φ_i = w_i - w_i^*, we have that
\phi_i^T w_i = \frac{1}{2}\phi_i^T \phi_i + \frac{1}{2}\left( \phi_i^T \phi_i + 2 \phi_i^T w_i^* \right) = \frac{1}{2}|\phi_i|^2 + \frac{1}{2}|w_i|^2 - \frac{1}{2}|w_i^*|^2 .
Since, by definition, |w_i^*| ≤ M_i and |w_i| > M_i whenever I_{w_i} = 1, we have that

I_{w_i} \frac{\sigma_i}{2} \left( |w_i|^2 - |w_i^*|^2 \right) \ge 0 ;
therefore, (2.48) becomes

\dot{V} \le \sum_{i=1}^{n} \left( -a_i e_i^2 - I_{w_i} \frac{\sigma_i}{2} |\phi_i|^2 - e_i \nu_i \right) .   (2.49)
Completing the squares in (2.49) and using the bound sup_{t≥0} |ν_i(t)| ≤ ν̄_i, we obtain

\dot{V} \le -\alpha V + \sum_{i=1}^{n} \left( (1 - I_{w_i}) \frac{\sigma_i}{2} |\phi_i|^2 + \frac{\bar{\nu}_i^2}{2 a_i} \right) ,   (2.50)

where

\alpha := \min \left\{ a_i , \ \frac{\sigma_i}{\lambda_{\max}(\Gamma_i^{-1})} \ ; \ i = 1, 2, ..., n \right\} ,

and λ_max(Γ_i^{-1}) > 0 denotes the maximum eigenvalue of Γ_i^{-1}. Since

(1 - I_{w_i}) \frac{\sigma_i}{2} |\phi_i|^2 = \begin{cases} \frac{\sigma_i}{2} |\phi_i|^2 & \text{if } |w_i| \le M_i \\ 0 & \text{otherwise} \end{cases}

and |φ_i| = |w_i - w_i^*| ≤ 2M_i when |w_i| ≤ M_i, we obtain (1 - I_{w_i}) (σ_i/2)|φ_i|² ≤ 2σ_i M_i². Hence (2.50) can be written in the form
\dot{V} \le -\alpha V + K ,
where K := \sum_{i=1}^{n} \left( 2 \sigma_i M_i^2 + \bar{\nu}_i^2 / (2 a_i) \right) and ν̄_i is an upper bound for |ν_i(t)|; therefore, for V ≥ V_0 := K/α we have \dot{V} ≤ 0, which implies that V ∈ L∞. Hence e_i, φ_i ∈ L∞.
To prove the second part, we note that by completing the square in (2.49) we obtain
\dot{V} \le \sum_{i=1}^{n} \left( -a_i e_i^2 - e_i \nu_i \right) \le \sum_{i=1}^{n} \left( -\frac{a_i}{2} e_i^2 + \frac{\nu_i^2}{2 a_i} \right) .   (2.51)
Integrating both sides of (2.51) yields
V(t) - V(0) \le \sum_{i=1}^{n} \left( -\frac{a_i}{2} \int_0^t e_i^2(\tau)\, d\tau + \frac{1}{2 a_i} \int_0^t \nu_i^2(\tau)\, d\tau \right)
\le -\frac{a_{\min}}{2} \int_0^t |e(\tau)|^2 d\tau + \frac{1}{2 a_{\min}} \int_0^t |\nu(\tau)|^2 d\tau ,

where a_min := min{a_i ; i = 1, ..., n}; therefore,

\int_0^t |e(\tau)|^2 d\tau \le \frac{2}{a_{\min}} \left[ V(0) - V(t) \right] + \frac{1}{a_{\min}^2} \int_0^t |\nu(\tau)|^2 d\tau \le \lambda + \mu \int_0^t |\nu(\tau)|^2 d\tau ,
where λ := (2/a_min) sup_{t≥0} [V(0) - V(t)] and μ := 1/a_min². This proves part (b) and concludes the proof of Theorem 2.3.1. ∎
In simple words, the above theorem states that the weight-adaptive law (2.46) guarantees that e_i and φ_i remain bounded for all i = 1, ..., n and, furthermore, that the "energy" of the state error e(t) is proportional to the "energy" of the modeling error ν(t). In the special case that the modeling error is square integrable, i.e., ν ∈ L₂, e(t) converges to zero asymptotically.
Remark 2.3.1. It is noted that the σ-modification causes the adaptive law (2.46) to be discontinuous; therefore standard existence and uniqueness results for solutions of differential equations are in general not applicable. In order to overcome this problem, the trajectory behavior of w_i(t) can be made "smooth" on the discontinuity hypersurface {w_i : |w_i| = M_i} by modifying the adaptive law (2.46) to
\dot{w}_i = \begin{cases}
-\Gamma_i z e_i & \text{if } |w_i| < M_i , \text{ or } |w_i| = M_i \text{ and } w_i^T \Gamma_i z e_i \ge 0 \\[4pt]
-\Gamma_i z e_i + \dfrac{w_i^T \Gamma_i z e_i}{w_i^T \Gamma_i w_i} \Gamma_i w_i & \text{if } |w_i| = M_i \text{ and } -\sigma_i w_i^T \Gamma_i w_i \le w_i^T \Gamma_i z e_i < 0 \\[4pt]
-\Gamma_i z e_i - \sigma_i \Gamma_i w_i & \text{if } |w_i| > M_i , \text{ or } |w_i| = M_i \text{ and } w_i^T \Gamma_i z e_i < -\sigma_i w_i^T \Gamma_i w_i
\end{cases}   (2.52)
As shown in [82], the adaptive law (2.52) retains all the properties of (2.46) and, in addition, guarantees the existence of a unique solution in the sense of Carathéodory [34]. The issue of existence and uniqueness of solutions in adaptive systems is treated in detail in [82].
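The smoothing idea can be sketched in code. The following hedged implementation follows the three-region structure of (2.52); the projection-style middle branch (which cancels the outward radial component exactly on the boundary |w_i| = M_i) is a standard reconstruction in the spirit of [82], not necessarily the book's exact formula, and all arguments are assumed numeric vectors and a positive definite matrix Γ.

```python
import numpy as np

# Hedged sketch of a smoothed switching sigma-modification on the boundary
# |w| = M.  Gamma must be positive definite and w nonzero near the boundary,
# so the denominator w^T Gamma w is strictly positive.
def w_dot_smooth(w, z, e, Gamma, sigma, M, tol=1e-9):
    g = -(Gamma @ z) * e                 # nominal law  -Gamma z e
    nw = np.linalg.norm(w)
    radial = float(w @ Gamma @ z) * e    # w^T Gamma z e
    leak = float(w @ Gamma @ w)          # w^T Gamma w > 0
    if nw < M - tol or (abs(nw - M) <= tol and radial >= 0.0):
        return g                         # strictly inside, or boundary moving inward
    if abs(nw - M) <= tol and -sigma * leak <= radial < 0.0:
        # projection branch: removes the outward radial component, d|w|/dt = 0
        return g + (radial / leak) * (Gamma @ w)
    return g - sigma * (Gamma @ w)       # full leakage outside the ball
```

On the boundary the projection branch yields w^T ẇ = 0, so the trajectory slides along the hypersurface instead of chattering across it.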
Table 2.3. Robust learning algorithms

System Model:
\dot{x}_i = -a_i x_i + w_i^{*T} z + \nu_i(t)

RHONN Identifier Model:
\dot{\hat{x}}_i = -a_i \hat{x}_i + w_i^T z

Identification Error:
e_i = \hat{x}_i - x_i

Weight Estimation Error:
\phi_i = w_i - w_i^*

Modeling Error:
\nu_i(t)

Robust Learning Algorithms:

a) Switching σ-modification:
\dot{w}_i = \begin{cases} -\Gamma_i z e_i & \text{if } |w_i| \le M_i \\ -\Gamma_i z e_i - \sigma_i \Gamma_i w_i & \text{if } |w_i| > M_i \end{cases}

b) Smooth switching σ-modification:
\dot{w}_i = \begin{cases}
-\Gamma_i z e_i & \text{if } |w_i| < M_i , \text{ or } |w_i| = M_i \text{ and } w_i^T \Gamma_i z e_i \ge 0 \\
-\Gamma_i z e_i + \dfrac{w_i^T \Gamma_i z e_i}{w_i^T \Gamma_i w_i} \Gamma_i w_i & \text{if } |w_i| = M_i \text{ and } -\sigma_i w_i^T \Gamma_i w_i \le w_i^T \Gamma_i z e_i < 0 \\
-\Gamma_i z e_i - \sigma_i \Gamma_i w_i & \text{if } |w_i| > M_i , \text{ or } |w_i| = M_i \text{ and } w_i^T \Gamma_i z e_i < -\sigma_i w_i^T \Gamma_i w_i
\end{cases}

2.4 Simulation Results
In this section we present simulation results for nonlinear system identification. The efficiency of an identification procedure depends mainly on the following:
a) the error convergence and the speed of convergence;
b) stability in cases of abrupt input changes;
c) the performance of the identification model after training stops.
All three factors are checked during our simulations. We have used a recurrent second-order neural network based on the filtered-error scheme described by (2.36), with the weight-adjustment laws given by (2.37). The particular
sigmoidal nonlinearity employed is the function (2.4) with α = 4, β = 0.1, γ = 2.
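Equation (2.4) itself is not reproduced in this excerpt; the sketch below assumes a common shifted-logistic parameterization, s(x) = α/(1 + e^{-βx}) - γ, which with the constants quoted above is odd and bounded in (-2, 2). Treat the functional form as an assumption consistent with, but not verified against, (2.4).

```python
import math

# Assumed sigmoid form:  s(x) = alpha / (1 + exp(-beta x)) - gamma.
def s(x, alpha=4.0, beta=0.1, gamma=2.0):
    return alpha / (1.0 + math.exp(-beta * x)) - gamma

def s_prime(x, alpha=4.0, beta=0.1):
    # derivative of the assumed sigmoid, as used in the high-order terms s'(x) u
    sig = 1.0 / (1.0 + math.exp(-beta * x))
    return alpha * beta * sig * (1.0 - sig)
```

With α = 4 and γ = 2 the function passes through the origin, and β = 0.1 gives the gentle slope s'(0) = 0.1.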
Now, consider an n-degree-of-freedom robotic manipulator described by the following nonlinear vector differential equation:

\tau(t) = M(w(t), p)\,\ddot{w}(t) + C(w(t), \dot{w}(t), p)\,\dot{w}(t) + G(w(t)) ,   (2.53)
where
• τ(t) is an n × 1 vector of joint torques;
• w(t) is an n × 1 vector containing the joint variables;
• M(w(t), p) represents the contribution of the inertial forces to the dynamical equation; hence the matrix M is the inertia matrix of the manipulator;
• C(w(t), ẇ(t), p) represents the Coriolis forces;
• G(w(t)) represents the gravitational forces;
• p is a parameter vector whose elements are functions of the geometric and inertial characteristics of the manipulator links and the payload, i.e., p depends on the lengths and moments of inertia of each individual link and of the payload.
It is noted that the parameter vector p can be constant in time (for example, in the case of a constant payload) or time-varying, p = p(t), as in the case of a changing payload. An introduction to the derivation of the dynamical model of a robotic manipulator can be found in [15].
For simplicity, we assume here that the manipulator has n = 2 degrees of freedom and, more specifically, consists of two revolute joints whose axes are parallel. In this case the parameter vector is chosen as

p_1 = I_1 + I_2 + I_{3c} + L_2^2 M_2 + L_1^2 (M_3 + M_4 + M_p) + p_2 ,
p_2 = I_3 + I_4 + I_p + L_3^2 M_4 + L_4^2 M_p ,
p_3 = L_1 L_4 M_4 + L_1 L_2 M_p ,
where the geometric and inertial parameter values are given in the following table:

I_1 = 0.2675, motor 1 rotor inertia     I_2 = 0.360, arm 1 inertia about c.g.
I_3 = 0.0077, motor 2 rotor inertia     I_{3c} = 0.040, motor 2 stator inertia
I_4 = 0.051, arm 2 inertia about c.g.   I_p = 0.046, payload inertia
M_1 = 73.0, motor 1 mass                M_2 = 10.6, arm 1 mass
M_3 = 12.0, motor 2 mass                M_4 = 4.85, arm 2 mass
M_p = 6.81, payload mass                L_1 = 0.36, arm 1 length
L_2 = 0.24, arm 1 radius of gyration    L_3 = 0.139, arm 2 radius of gyration
L_4 = 0.099

The system matrices M and C can be written as:
M(w(t), p) = \begin{pmatrix} (1, 0, 2\cos w_2)p & (0, 1, \cos w_2)p \\ (0, 1, \cos w_2)p & (0, 1, 0)p \end{pmatrix} ,

C(w(t), \dot{w}(t), p) = \begin{pmatrix} (0, 0, -\dot{w}_2 \sin w_2)p & (0, 0, -(\dot{w}_1 + \dot{w}_2)\sin w_2)p \\ (0, 0, -\dot{w}_1 \sin w_2)p & (0, 0, 0)p \end{pmatrix} ,

where (a, b, c)p denotes the scalar a p_1 + b p_2 + c p_3.
The above mathematical model and the particular numerical values of the robot parameters have been taken from [8]. It is noted that in this robot model there are no gravitational forces affecting the robot dynamics.
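The parameter expressions and the (a, b, c)p shorthand can be checked numerically. In the sketch below, the reading of the squared lengths in p_1, p_2 follows the reconstruction printed above, which is ambiguous in this excerpt, so the numbers are illustrative rather than authoritative.

```python
import math

# Numerical sketch of the two-link model.  (a, b, c)p is read as
# a*p1 + b*p2 + c*p3; the exact p1, p2, p3 expressions are assumptions.
I1, I2, I3, I3c, I4, Ip = 0.2675, 0.360, 0.0077, 0.040, 0.051, 0.046
M1, M2, M3, M4, Mp = 73.0, 10.6, 12.0, 4.85, 6.81
L1, L2, L3, L4 = 0.36, 0.24, 0.139, 0.099

p2 = I3 + I4 + Ip + L3**2 * M4 + L4**2 * Mp
p1 = I1 + I2 + I3c + L2**2 * M2 + L1**2 * (M3 + M4 + Mp) + p2
p3 = L1 * L4 * M4 + L1 * L2 * Mp
p = (p1, p2, p3)

def dot3(coeffs, p):
    return sum(c * pi for c, pi in zip(coeffs, p))

def inertia(w2, p):
    # M(w, p): symmetric inertia matrix of the two-link arm
    c = math.cos(w2)
    return [[dot3((1, 0, 2 * c), p), dot3((0, 1, c), p)],
            [dot3((0, 1, c), p),     dot3((0, 1, 0), p)]]
```

A quick sanity check is that the resulting inertia matrix is symmetric and positive definite at w_2 = 0, as any physical inertia matrix must be.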
The RHONN identifier consists of four dynamic neurons: two for the angular positions w_1 and w_2, and two for the angular velocities ẇ_1 and ẇ_2. The objective here is to train the network to identify the robot model.
The training starts at time t = 0. The learning rate is set to 0.05 for all parameters and the sampling takes place every 0.001 seconds. The training proceeds as follows: for the first 2 seconds, or 2000 steps, the input torques for joints 1 and 2 are generated as random data in the range [-1, 1]. For the next two seconds, or steps 2001 to 4000, the input torques are sin(0.5t³) for joint 1 and cos(0.5t³) for joint 2, and for the next two seconds, or steps 4001 to 6000, the input torques are sin(0.001t²) and cos(0.002t²) for joints 1 and 2, respectively. The above training input waveforms were applied repeatedly every 6000 steps until the 96000th step, i.e., the 96th second. After this instant the training ended and the same inputs as for the first 6000 steps were applied to the neural network and the robot model. Figure 2.1 shows w_1 and w_2, the outputs of the first and second joint, respectively. The solid line corresponds to the robot model and the dashed line to the RHONN model. It is seen that after the 96th second, when the training stops, there is a small error; nevertheless, the network outputs follow closely the outputs of the robot model.
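The three-segment training schedule can be written compactly as an input generator. One detail the text leaves open is whether the time argument t resets at the start of each 6-second period; the sketch below assumes it does, and the random segment uses a seeded generator for reproducibility.

```python
import math
import random

# Sketch of the 6000-step (6 s) training input pattern sampled at 1 ms:
# 2 s of uniform random torques in [-1, 1], then 2 s of sin/cos(0.5 t^3),
# then 2 s of sin(0.001 t^2) / cos(0.002 t^2).  t resets each period
# (an assumption; the text does not say).
def torque_inputs(step, dt=1e-3, rng=random.Random(0)):
    t = (step % 6000) * dt            # pattern repeats every 6000 steps
    if step % 6000 < 2000:
        return rng.uniform(-1, 1), rng.uniform(-1, 1)
    if step % 6000 < 4000:
        return math.sin(0.5 * t**3), math.cos(0.5 * t**3)
    return math.sin(0.001 * t**2), math.cos(0.002 * t**2)
```

The generator returns the pair of joint torques for any step index, and the deterministic segments repeat exactly every 6000 steps, matching the description of the training waveforms.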
Summary
In this chapter we have studied the stability, convergence and approximation properties of recurrent high-order neural networks (RHONNs) as models of nonlinear dynamical systems. The overall structure of the RHONN consists of dynamical elements distributed throughout the network in the form of dynamical neurons, which are interconnected by high-order connections. We have shown that if a sufficiently large number of high-order connections between neurons is allowed, then the RHONN model is capable of approximating the input-output behavior of general dynamical systems to any degree of accuracy.
Fig. 2.1. Identification performance of the first and second joint outputs, w_1 and w_2 respectively. The solid line corresponds to the robot model and the dashed line corresponds to the RHONN model. The training stops at the 96th second, after which the adjustable weights are kept fixed.
Based on the linear-in-the-weights property of the RHONN model, we have developed identification schemes and derived adaptive laws for the adjustment of the weights. The convergence and stability properties of these weight-adaptive laws have been analyzed. We showed that in the case of no modeling error, the state error between the system and the RHONN model converges to zero asymptotically. In the case that modeling errors are present, we proposed the σ-modification as a method of guaranteeing the stability of the overall scheme. Using the σ-modification we showed that the state error and the weight estimation error remain bounded, and that the residual state error is proportional to the magnitude of the modeling error. The feasibility of applying these techniques has been demonstrated by considering the identification of a simple rigid robotic system.
CHAPTER 3
INDIRECT ADAPTIVE CONTROL
This chapter is devoted to the development of indirect adaptive control techniques (based on RHONNs) for controlling nonlinear dynamical systems with highly uncertain and possibly unknown nonlinearities.
The approach comprises an identification model, whose parameters are updated on-line in such a way that the error between the actual system output and the model output is approximately zero. The controller receives information from the identifier and outputs the signal necessary to force the plant to perform a prespecified task.
The learning laws developed in the previous chapter can also be used here in building up the identification part of the architecture. These algorithms are further enriched, however, to increase robustness, especially in the case of model-order mismatch.
The contents of this chapter are based on [85]. Since the actual system is assumed to be completely unknown, we propose a two-phase algorithm. In phase one, a RHONN is employed to perform "black box" identification around a known operating point. Many cases that lead to modeling errors (i.e., parametric and dynamic uncertainties) are taken into consideration. Stability of the identification scheme, plus convergence of the identification error to within a small neighborhood of zero, is guaranteed with the aid of Lyapunov and singular perturbation theories.
The successful completion of phase one implies that a model of the originally unknown nonlinear dynamical system has been obtained. Thus, we are free to proceed to the control phase of our algorithm, in which an appropriate state feedback is constructed to achieve asymptotic regulation of the output, while keeping all signals in the closed loop bounded. A block diagram of the indirect adaptive control architecture is pictured in Figure 3.1. The algorithm has also been applied successfully to control the speed of a DC motor operated in a nonlinear fashion, thereby providing an application insight.
3.1 Identification
We consider affine in the control, nonlinear dynamical systems of the form
\dot{x} = f(x) + G(x) u ,   (3.1)
G. A. Rovithakis et al., Adaptive Control with Recurrent High-order Neural Networks © Springer-Verlag London Limited 2000
Fig. 3.1. The two-stage control algorithm architecture
where the state x ∈ ℝⁿ is assumed to be completely measured, the control u is in ℝⁿ, f is an unknown smooth vectorfield called the drift term, and G is a matrix whose columns are the unknown smooth controlled vectorfields g_i, i = 1, 2, ..., n, i.e., G = [g_1 g_2 ... g_n]. The above class of continuous-time nonlinear systems is called affine, because in (3.1) the control input appears linearly with respect to G. The main reason for considering this class of nonlinear systems, rather than the general one treated in Chapter 2, is that most of the systems encountered in engineering are, by nature or by design, affine. Furthermore, we note that non-affine systems of the form
\dot{x} = f(x, u) ,
can be converted into affine systems by passing the input through integrators [76], a procedure known as dynamic extension. The following mild assumptions are also imposed on (3.1) to guarantee the existence and uniqueness of solutions for any finite initial condition and u ∈ U.
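Dynamic extension is easy to illustrate. For a hypothetical scalar non-affine system ẋ = sin(u) + x, appending the integrator u̇ = v gives the extended state X = (x, u) with dynamics Ẋ = F(X) + G(X)v, which is affine in the new input v:

```python
import math

# Sketch of dynamic extension on a hypothetical non-affine system
#     x_dot = f(x, u) = sin(u) + x.
# Extended state X = (x, u), new input v:
#     F(X) = (sin(u) + x, 0),  G(X) = (0, 1).
def extended_dynamics(X, v):
    x, u = X
    F = (math.sin(u) + x, 0.0)   # drift term of the extended system
    G = (0.0, 1.0)               # the new input v enters linearly through G
    return (F[0] + G[0] * v, F[1] + G[1] * v)
```

The original input u has become a state, and the right-hand side is now linear in v, so the machinery for affine systems applies to the extended system.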
Assumption 3.1.1. Given a class U of admissible inputs, then for any u ∈ U and any finite initial condition, the state trajectories are uniformly bounded for any finite T > 0. Hence, |x(T)| < ∞.
Assumption 3.1.2. The vectorfields f, g_i, i = 1, 2, ..., n, are continuous with respect to their arguments and satisfy a local Lipschitz condition, so that the solution x(t) of (3.1) is unique for any finite initial condition and u ∈ U.
Following the discussion and analysis of Chapter 2, an affine RHONN model of the form
\dot{\hat{x}} = A\hat{x} + BWS(x) + BW_1 S'(x) u ,   (3.2)
can be used to describe (3.1). In (3.2), x̂ ∈ ℝⁿ, the inputs u ∈ U ⊂ ℝⁿ, W is an n × n matrix of synaptic weights, and A is an n × n stable matrix which, for
simplicity, can be taken to be diagonal. B is an n × n matrix with elements the scalars b_i, i = 1, 2, ..., n, and W_1 is an n × n diagonal matrix of synaptic weights of the form W_1 = diag[w_1^1, w_2^1, ..., w_n^1]. Finally, S(x) is an n-dimensional vector and S'(x) is an n × n diagonal matrix whose elements are combinations of sigmoid functions. For more details concerning the RHONN structure and its approximation capabilities, the reader is referred to Chapter 2.
In the case where only parametric uncertainties are present, we can prove, using techniques analogous to those presented in Chapter 2, the following theorem:
Theorem 3.1.1. Consider the identification scheme

\dot{e} = Ae + B\tilde{W}S(x) + B\tilde{W}_1 S'(x) u ,

where W̃ := W - W^* and W̃_1 := W_1 - W_1^*. Then the learning laws

\dot{w}_{ij} = -b_i p_i s(x_j) e_i ,
\dot{w}_i^1 = -b_i s'(x_i) p_i u_i e_i ,

for all i, j = 1, 2, ..., n, guarantee the following properties:
• e, x̂, W̃, W̃_1 ∈ L∞, e ∈ L₂;
• lim_{t→∞} e(t) = 0, lim_{t→∞} \dot{W}(t) = 0, lim_{t→∞} \dot{W}_1(t) = 0.
The robust learning algorithms developed in Chapter 2 can also be used here to account for the presence of modeling errors.
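The identifier structure (3.2) together with the gradient laws above can be sketched as a simulation. Everything numerical below is an assumption: the "plant" is itself a RHONN with fixed weights W_star, W1_star so that an exact model exists, the sigmoid is a stand-in tanh, A and B are taken diagonal, and p holds the diagonal of P solving PA + AᵀP = -I. Only the model structure and the form of the learning laws come from the text.

```python
import numpy as np

# Hedged sketch of the identifier of Theorem 3.1.1 for n = 2.
n, dt = 2, 1e-3
a = np.array([-2.0, -2.0])          # A = diag(a), stable
b = np.array([1.0, 1.0])            # B = diag(b)
p = -1.0 / (2.0 * a)                # P = diag(p) solves PA + A^T P = -I
s = np.tanh                         # stand-in sigmoid vector S(x)
ds = lambda x: 1.0 - np.tanh(x)**2  # diagonal of S'(x)

W_star = np.array([[0.5, 0.2], [0.1, 0.3]])   # hypothetical plant weights
W1_star = np.array([0.4, 0.6])
x = np.zeros(n)                     # plant state
x_hat = np.zeros(n)                 # identifier state
W = np.zeros((n, n))
W1 = np.zeros(n)
for k in range(30000):
    t = k * dt
    u = np.array([np.sin(t), np.cos(t)])
    # plant:      x_dot     = A x     + B W* S(x) + B W1* S'(x) u
    x = x + dt * (a * x + b * (W_star @ s(x)) + b * W1_star * ds(x) * u)
    e = x_hat - x
    # identifier: x_hat_dot = A x_hat + B W  S(x) + B W1  S'(x) u   (cf. (3.2))
    x_hat = x_hat + dt * (a * x_hat + b * (W @ s(x)) + b * W1 * ds(x) * u)
    # gradient laws: w_ij_dot = -b_i p_i s(x_j) e_i,  w1_i_dot = -b_i s'(x_i) p_i u_i e_i
    W = W - dt * np.outer(b * p * e, s(x))
    W1 = W1 - dt * b * ds(x) * p * u * e
```

Because the plant here lies exactly in the model class, the identification error remains small and decays, consistent with the boundedness and convergence claims of the theorem.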
3.1.1 Robustness of the RHONN Identifier Owing to Unmodeled Dynamics
In the previous section we assumed that there exist weight values W^* and W_1^* such that a nonlinear dynamical system can be completely described by a neural network of the form
\dot{x} = Ax + BW^* S(x) + BW_1^* S'(x) u ,
where all matrices are as defined previously. It is well known, however, that the model can be of lower order than the plant, owing to the presence of unmodeled dynamics. In the following, we extend our theory within the framework of singular perturbations to include the case where dynamic uncertainties are present. For more details concerning singular perturbation theory, the interested reader is referred to [56]. We now assume that the unknown plant can be completely described by
\dot{x} = Ax + BW^* S(x) + BW_1^* S'(x) u + F(x, W, W_1) A_0^{-1} B_0 W_0 u + F(x, W, W_1) z ,
\mu \dot{z} = A_0 z + B_0 W_0 u , \quad z \in \mathbb{R}^r ,   (3.3)

where z is the state of the unmodeled dynamics and μ > 0 is a small singular perturbation scalar. If we define the error between the identifier states and
the real system states as e = x̂ - x, then from (3.2) and (3.3) we obtain the error equation

\dot{e} = Ae + B\tilde{W} S(x) + B\tilde{W}_1 S'(x) u - F(x, W, W_1) A_0^{-1} B_0 W_0 u - F(x, W, W_1) z ,
\mu \dot{z} = A_0 z + B_0 W_0 u , \quad z \in \mathbb{R}^r ,   (3.4)
where F(x, W, W_1), B_0 W_0 u, BW̃S(x) and BW̃_1 S'(x)u are bounded and differentiable with respect to their arguments for every W ∈ B_W, a ball in ℝ^{n×n}, every W_1 ∈ B_{W_1}, a ball in ℝⁿ, and all x ∈ B_x, a ball in ℝⁿ. Further, we assume that the unmodeled dynamics are asymptotically stable for all x ∈ B_x. In other words, we assume that there exists a constant ν > 0 such that

Re λ{A_0} ≤ -ν < 0 .

Note that \dot{z} is large since μ is small; hence, the unmodeled dynamics are fast. For a singular perturbation from μ > 0 to μ = 0 we obtain
z = -A_0^{-1} B_0 W_0 u .
Since the unmodeled dynamics are asymptotically stable, the existence of A_0^{-1} is assured. As is well known from singular perturbation theory, we express the state z as
z = h(x, u) + \eta ,   (3.5)

where h(x, u) is defined as the quasi-steady-state of z and η as its fast transient. In our case,

h(x, u) = -A_0^{-1} B_0 W_0 u .
Substituting (3.5) into (3.4), we obtain the singularly perturbed model

\dot{e} = Ae + B\tilde{W} S(x) + B\tilde{W}_1 S'(x) u - F(x, W, W_1)\eta ,
\mu \dot{\eta} = A_0 \eta - \mu \dot{h}(e, W, W_1, \eta, u) ,   (3.6)

where we define

\dot{h}(e, W, W_1, \eta, u) = \frac{\partial h}{\partial e}\dot{e} + \frac{\partial h}{\partial W}\dot{W} + \frac{\partial h}{\partial W_1}\dot{W}_1 + \frac{\partial h}{\partial u}\dot{u} .

Notice, however, that in the control case u is a function of e, W, W_1, so that \dot{h}(e, W, W_1, \eta, u) becomes

\dot{h}(e, W, W_1, \eta, u) = \frac{\partial h}{\partial e}\dot{e} + \frac{\partial h}{\partial W}\dot{W} + \frac{\partial h}{\partial W_1}\dot{W}_1 .
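The quasi-steady-state decomposition can be checked numerically: for a stable fast subsystem μż = A₀z + B₀W₀u with small μ and a frozen input, z rapidly approaches h = -A₀⁻¹B₀W₀u while the transient η decays. All matrices below are illustrative, not taken from the text.

```python
import numpy as np

# Numeric sketch of z = h + eta for a stable fast subsystem.
mu = 1e-3
A0 = np.array([[-5.0, 1.0], [0.0, -4.0]])   # Re(lambda{A0}) < 0
B0W0 = np.array([[1.0], [2.0]])
u = np.array([0.7])                          # frozen input
h = -np.linalg.solve(A0, B0W0 @ u)           # quasi-steady-state

z = np.zeros(2)
dt = 1e-5                                    # step well below mu for stability
for _ in range(2000):                        # 0.02 s of the fast dynamics
    z = z + dt / mu * (A0 @ z + B0W0 @ u)
```

Because the fast time scale is t/μ, even 0.02 seconds of real time lets the transient η = z - h decay essentially to zero for μ = 10⁻³.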
Remark 3.1.1. The terms F(x, W, W_1) A_0^{-1} B_0 W_0 u and F(x, W, W_1) z in (3.3) can be viewed as correction terms in the input vectorfields and in the drift term of

\dot{x} = Ax + BW^* S(x) + BW_1^* S'(x) u ,

in the sense that the unknown system can now be described by a neural network plus the correction terms.
Before proceeding any further we need to prove the following lemma.
Lemma 3.1.1. The function \dot{h}(e, W, W_1, \eta, u) is bounded by

\|\dot{h}(e, W, W_1, \eta, u)\| \le \rho_1 \|e\| + \rho_2 \|\eta\| ,

provided that the following inequalities hold:

\|h_W \dot{W}\| \le k_0 \|e\| , \quad \|h_{W_1} \dot{W}_1\| \le k_1 \|e\| ,
\|h_e B\tilde{W}_1 S'(x) u\| \le k_2 \|e\| , \quad \|h_e B\tilde{W} S(x)\| \le k_3 \|e\| ,
\|h_e F(x, W, W_1)\| \le \rho_2 ,
\|h_e A e\| \le k_4 \|e\| , \quad \|h_u \dot{u}\| \le k_5 \|e\| ,

where

\rho_1 = k_0 + k_1 + k_2 + k_3 + k_4 + k_5 .
Proof: Differentiating h(e, W, W_1, \eta, u) we obtain

\dot{h}(e, W, W_1, \eta, u) = h_e \dot{e} + h_W \dot{W} + h_{W_1} \dot{W}_1 + h_u \dot{u} ;

therefore, substituting \dot{e} from (3.6),

\|\dot{h}\| \le \|h_e A e\| + \|h_e B\tilde{W} S(x)\| + \|h_e B\tilde{W}_1 S'(x) u\| + \|h_e F(x, W, W_1)\eta\| + \|h_W \dot{W}\| + \|h_{W_1} \dot{W}_1\| + \|h_u \dot{u}\|
\le k_4\|e\| + k_3\|e\| + k_2\|e\| + \rho_2\|\eta\| + k_0\|e\| + k_1\|e\| + k_5\|e\|
= \rho_1 \|e\| + \rho_2 \|\eta\| ,

which concludes the proof. ∎

We are now able to prove the following theorem.
34 3. Indirect Adaptive Control
Theorem 3.1.2. The equilibrium of the singularly perturbed model (3.6) is asymptotically stable for all

\mu \in \left( 0, \ \frac{1}{c_1 c_2 + 2 c_3} \right)

and for every trajectory starting in the set

S = \{ e, \tilde{W}, \tilde{W}_1, \eta : V(e, \tilde{W}, \tilde{W}_1, \eta) \le c \} ,

where c is the largest constant such that the set \{ e, \tilde{W}, \tilde{W}_1 : V(e, \tilde{W}, \tilde{W}_1, 0) \le c \} is contained in B_e × B_W × B_{W_1}. Furthermore, the following properties are guaranteed:
• e, x̂, η, W̃, W̃_1 ∈ L∞, and e, η ∈ L₂;
• lim_{t→∞} e(t) = 0, lim_{t→∞} η(t) = 0;
• lim_{t→∞} \dot{W}(t) = 0, lim_{t→∞} \dot{W}_1(t) = 0.
Proof: Let us take the Lyapunov function candidate

V(e, \tilde{W}, \tilde{W}_1, \eta) = \frac{c_1}{2} e^T P e + \frac{c_2}{2} \eta^T P_0 \eta + \frac{c_1}{2} \mathrm{tr}\{\tilde{W}^T \tilde{W}\} + \frac{c_1}{2} \mathrm{tr}\{\tilde{W}_1^T \tilde{W}_1\} ,   (3.7)

where P, P_0 > 0 are chosen to satisfy the Lyapunov equations

P A + A^T P = -I , \quad P_0 A_0 + A_0^T P_0 = -I .
Observe that (3.7) is a weighted sum composed of a slow and a fast part. Taking the time derivative of (3.7) and using the learning laws

\dot{w}_{ij} = -b_i p_i s(x_j) e_i ,
\dot{w}_i^1 = -b_i s'(x_i) p_i u_i e_i ,

for all i, j = 1, 2, ..., n, we obtain, as in the previous subsection,

\dot{V} = -\frac{c_1}{2}\|e\|^2 - \frac{c_2}{2\mu}\|\eta\|^2 - c_1 e^T P F(x, W, W_1)\eta - c_2 \eta^T P_0 \dot{h}(e, W, W_1, \eta, u)
\le -\frac{c_1}{2}\|e\|^2 - \frac{c_2}{2\mu}\|\eta\|^2 + \|c_1 e^T P F(x, W, W_1)\eta\| + \|c_2 \eta^T P_0 \dot{h}(e, W, W_1, \eta, u)\| .

Employing Lemma 3.1.1 we obtain

\dot{V} \le -\frac{c_1}{2}\|e\|^2 - \frac{c_2}{2\mu}\|\eta\|^2 + c_1 \|e^T P F(x, W, W_1)\| \|\eta\| + c_2 \|\eta^T P_0\| \left( \rho_1 \|e\| + \rho_2 \|\eta\| \right) ,

which finally takes the form
3.2 Indirect Control 35
\dot{V} \le -\frac{c_1}{2}\|e\|^2 - c_2\left(\frac{1}{2\mu} - c_3\right)\|\eta\|^2 + c_1 c_2 \|e\|\|\eta\| ,   (3.8)

provided that the following inequalities hold:

\|P F(x, W, W_1)\| \le c_2 , \quad \|P_0\|\rho_1 \le c_1 , \quad \|P_0\|\rho_2 \le c_3 .

Hence,

\dot{V} \le - \left[ \|e\| \ \ \|\eta\| \right] \begin{bmatrix} \frac{c_1}{2} & -\frac{c_1 c_2}{2} \\[4pt] -\frac{c_1 c_2}{2} & c_2\left(\frac{1}{2\mu} - c_3\right) \end{bmatrix} \begin{bmatrix} \|e\| \\ \|\eta\| \end{bmatrix} .   (3.9)

The 2 × 2 matrix in (3.9) is positive definite when

\mu < \frac{1}{c_1 c_2 + 2 c_3} .
Then \dot{V} is negative semidefinite. Since \dot{V} \le 0, we conclude that V ∈ L∞, which implies e, η, W̃, W̃_1 ∈ L∞. Furthermore, e, x̂ = e + x, W = W̃ + W^* and W_1 = W̃_1 + W_1^* are also bounded. Since V is a non-increasing function of time and bounded from below, lim_{t→∞} V = V_∞ exists; so, integrating \dot{V} from 0 to ∞, we have

\frac{c_1}{2}\int_0^\infty \|e\|^2 dt + c_2\left(\frac{1}{2\mu} - c_3\right)\int_0^\infty \|\eta\|^2 dt - c_1 c_2 \int_0^\infty \|e\|\|\eta\| \, dt \le \left[ V(0) - V_\infty \right] < \infty ,

which implies that e, η ∈ L₂. Furthermore,

\dot{e} = Ae + B\tilde{W} S(x) + B\tilde{W}_1 S'(x) u - F(x, W, W_1)\eta ,
\mu \dot{\eta} = A_0 \eta - \mu \dot{h}(e, W, W_1, \eta, u) .

Since u, A_0 and \dot{h}(e, W, W_1, \eta, u) are bounded, \dot{e} ∈ L∞ and \dot{\eta} ∈ L∞. Since e ∈ L₂ ∩ L∞ and η ∈ L₂ ∩ L∞, using Barbalat's Lemma we conclude that lim_{t→∞} e(t) = 0,