

Discrete Control Systems

TAEYOUNG LEE1, MELVIN LEOK2, HARRIS MCCLAMROCH1

1 Department of Aerospace Engineering, University of Michigan, Ann Arbor, USA

2 Department of Mathematics, Purdue University, West Lafayette, USA


Article Outline

Glossary
Definition of the Subject
Introduction
Discrete Lagrangian and Hamiltonian Mechanics
Optimal Control of Discrete Lagrangian and Hamiltonian Systems
Controlled Lagrangian Method for Discrete Lagrangian Systems
Future Directions
Acknowledgments
Bibliography

Glossary

Discrete variational mechanics A formulation of mechanics in discrete time that is based on a discrete analogue of Hamilton's principle, which states that the system takes a trajectory for which the action integral is stationary.

Geometric integrator A numerical method for obtaining numerical solutions of differential equations that preserves geometric properties of the continuous flow, such as symplecticity, momentum preservation, and the structure of the configuration space.

Lie group A differentiable manifold with a group structure where the composition is differentiable. The corresponding Lie algebra is the tangent space to the Lie group based at the identity element.

Symplectic A map is said to be symplectic if, given any initial volume in phase space, the sum of the signed projected volumes onto each position-momentum subspace is invariant under the map. One consequence of symplecticity is that the map is volume-preserving as well.

Definition of the Subject

Discrete control systems, as considered here, refer to the control theory of discrete-time Lagrangian or Hamiltonian systems. These discrete-time models are based on a discrete variational principle, and are part of the broader field of geometric integration. Geometric integrators are numerical integration methods that preserve geometric properties of continuous systems, such as conservation of the symplectic form, momentum, and energy. They also guarantee that the discrete flow remains on the manifold on which the continuous system evolves, an important property in the case of rigid-body dynamics.

In nonlinear control, one typically relies on differential geometric and dynamical systems techniques to prove properties such as stability, controllability, and optimality. More generally, the geometric structure of such systems plays a critical role in the nonlinear analysis of the corresponding control problems. Despite the critical role of geometry and mechanics in the analysis of nonlinear control systems, nonlinear control algorithms have typically been implemented using numerical schemes that ignore the underlying geometry.

The field of discrete control systems aims to address this deficiency by restricting the approximation to the choice of a discrete-time model, and developing an associated control theory that does not introduce any additional approximation. In particular, this involves the construction of a control theory for discrete-time models based on geometric integrators that yields numerical implementations of nonlinear and geometric control algorithms that preserve the crucial underlying geometric structure.

Introduction

The dynamics of Lagrangian and Hamiltonian systems have unique geometric properties; the Hamiltonian flow is symplectic, the total energy is conserved in the absence of non-conservative forces, and the momentum maps associated with symmetries of the system are preserved. Many interesting dynamics evolve on a non-Euclidean space. For example, the configuration space of a spherical pendulum is the two-sphere, and the configuration space of rigid body attitude dynamics has a Lie group structure, namely the special orthogonal group. These geometric features determine the qualitative behavior of the system, and serve as a basis for theoretical study.

Geometric numerical integrators are numerical integration algorithms that preserve structures of the continuous dynamics such as invariants, symplecticity, and the configuration manifold (see [14]). The exact geometric properties of the discrete flow not only generate improved qualitative behavior, but also provide accurate and efficient numerical techniques. In this article, we view a geometric integrator as an intrinsically discrete dynamical system, instead of concentrating on the numerical approximation of a continuous trajectory.

Numerical integration methods that preserve the symplecticity of a Hamiltonian system have been studied (see [28,36]). Coefficients of a Runge-Kutta method are carefully chosen to satisfy a symplecticity criterion and order conditions to obtain a symplectic Runge-Kutta method. However, it can be difficult to construct such integrators, and it is not guaranteed that other invariants of the system, such as a momentum map, are preserved. Alternatively, variational integrators are constructed by discretizing Hamilton's principle, rather than discretizing the continuous Euler-Lagrange equation (see [31,34]). The resulting integrators have the desirable property that they are symplectic and momentum preserving, and they exhibit good energy behavior for exponentially long times (see [2]). Lie group methods are numerical integrators that preserve the Lie group structure of the configuration space (see [18]). Recently, these two approaches have been unified to obtain Lie group variational integrators that preserve the geometric properties of the dynamics as well as the Lie group structure of the configuration space without the use of local charts, reprojection, or constraints (see [26,29,32]).

Optimal control problems involve finding a control input such that a certain optimality objective is achieved under prescribed constraints. An optimal control problem that minimizes a performance index is described by a set of differential equations, which can be derived using Pontryagin's maximum principle. Discrete optimal control problems involve finding a control input for a discrete dynamic system such that an optimality objective is achieved with prescribed constraints. Optimality conditions are derived from the discrete equations of motion, described by a set of discrete equations. This approach is in contrast to traditional techniques where a discretization appears at the last stage to solve the optimality condition numerically. Discrete mechanics and optimal control approaches determine optimal control inputs and trajectories more accurately with less computational load (see [19]). Combined with an indirect optimization technique, they are substantially more efficient (see [17,23,25]).

The geometric approach to mechanics can provide a theoretical basis for innovative control methodologies in geometric control theory. For example, these techniques allow the attitude of a satellite to be controlled using changes in its shape, as opposed to chemical propulsion. While the geometric structure of mechanical systems plays a critical role in the construction of geometric control algorithms, these algorithms have typically been implemented using numerical schemes that ignore the underlying geometry. By applying geometric control algorithms to discrete mechanics that preserve geometric properties, we obtain an exact numerical implementation of the geometric control theory. In particular, the method of controlled Lagrangian systems is based on the idea of adopting a feedback control to realize a modification of either the potential energy or the kinetic energy, referred to as potential shaping or kinetic shaping, respectively. These ideas are applied to construct a real-time digital feedback controller that stabilizes the inverted equilibrium of the cart-pendulum (see [10,11]).

In this article, we will survey discrete Lagrangian and Hamiltonian mechanics, and their applications to optimal control and feedback control theory.

Discrete Lagrangian and Hamiltonian Mechanics

Mechanics studies the dynamics of physical bodies acting under forces and potential fields. In Lagrangian mechanics, the trajectory of the object is derived by finding the path that extremizes the integral of a Lagrangian over time, called the action integral. In many classical problems, the Lagrangian is chosen as the difference between kinetic energy and potential energy. The Legendre transformation provides an alternative description of mechanical systems, referred to as Hamiltonian mechanics.

Discrete Lagrangian and Hamiltonian mechanics has been developed by reformulating the theorems and the procedures of Lagrangian and Hamiltonian mechanics in a discrete time setting (see, for example, [31]). Therefore, discrete mechanics has a parallel structure with the mechanics described in continuous time, as summarized in Fig. 1 for Lagrangian mechanics. In this section, we describe discrete Lagrangian mechanics in more detail, and we derive discrete Euler-Lagrange equations for several mechanical systems.

Consider a mechanical system on a configuration space Q, which is the space of possible positions. The Lagrangian depends on the position and velocity, which are elements of the tangent bundle to Q, denoted by TQ. Let L : TQ → R be the Lagrangian of the system. The discrete Lagrangian L_d : Q × Q → R is an approximation to the exact discrete Lagrangian,

  L_d^{exact}(q_0, q_1) = \int_0^h L(q_{01}(t), \dot{q}_{01}(t)) \, dt ,   (1)

where q_{01}(0) = q_0, q_{01}(h) = q_1, and q_{01}(t) satisfies the Euler-Lagrange equation on the time interval (0, h). A discrete action sum G_d : Q^{N+1} → R, analogous to the action integral, is given by

  G_d(q_0, q_1, ..., q_N) = \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}) .   (2)

The discrete Hamilton's principle states that

  \delta G_d = 0

for any \delta q_k, which yields the discrete Euler-Lagrange (DEL) equation,

  D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0 .   (3)


Discrete Control Systems, Figure 1: Procedures to derive continuous and discrete equations of motion

This yields a discrete Lagrangian flow map (q_{k-1}, q_k) → (q_k, q_{k+1}). The discrete Legendre transformation, which from a pair of positions (q_0, q_1) gives a position-momentum pair (q_0, p_0) = (q_0, -D_1 L_d(q_0, q_1)), provides a discrete Hamiltonian flow map in terms of momenta.
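For concreteness, the following sketch (not part of the original article) advances the discrete Euler-Lagrange equation (3) numerically: given (q_{k-1}, q_k), it solves for q_{k+1} with a root finder, using finite-difference slot derivatives of a user-supplied discrete Lagrangian. The midpoint discrete Lagrangian, the potential V, and the step size h at the end are illustrative assumptions.

import numpy as np
from scipy.optimize import fsolve

def slot_derivative(Ld, i, q0, q1, eps=1e-7):
    # Finite-difference approximation of D_i Ld(q0, q1) for i = 1 or 2.
    q0, q1 = np.atleast_1d(q0).astype(float), np.atleast_1d(q1).astype(float)
    base = q0 if i == 1 else q1
    grad = np.zeros_like(base)
    for j in range(base.size):
        d = np.zeros_like(base); d[j] = eps
        if i == 1:
            grad[j] = (Ld(q0 + d, q1) - Ld(q0 - d, q1)) / (2 * eps)
        else:
            grad[j] = (Ld(q0, q1 + d) - Ld(q0, q1 - d)) / (2 * eps)
    return grad

def del_step(Ld, q_prev, q_curr):
    # One step of the discrete Lagrangian flow map (q_{k-1}, q_k) -> (q_k, q_{k+1}),
    # obtained by solving the DEL equation (3) for q_{k+1}.
    residual = lambda q_next: slot_derivative(Ld, 2, q_prev, q_curr) + slot_derivative(Ld, 1, q_curr, q_next)
    q_next = fsolve(residual, 2 * np.atleast_1d(q_curr) - np.atleast_1d(q_prev))
    return q_curr, q_next

# Illustrative midpoint discrete Lagrangian for a unit-mass particle in a potential V.
h = 0.01
V = lambda q: 0.5 * np.dot(q, q)
Ld = lambda q0, q1: h * (0.5 * np.dot((q1 - q0) / h, (q1 - q0) / h) - V((q0 + q1) / 2))
q_k, q_kp1 = del_step(Ld, np.array([1.0, 0.0]), np.array([0.9999, 0.0]))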

The discrete equations of motion, referred to as variational integrators, inherit the geometric properties of the continuous system. Many interesting Lagrangian and Hamiltonian systems, such as rigid bodies, evolve on a Lie group. Lie group variational integrators preserve the nonlinear structure of the Lie group configurations as well as geometric properties of the continuous dynamics (see [29,32]). The basic idea for all Lie group methods is to express the update map for the group elements in terms of the group operation,

  g_1 = g_0 f_0 ,   (4)

where g_0, g_1 ∈ G are configuration variables in a Lie group G, and f_0 ∈ G is the discrete update represented by a right group operation on g_0. Since the group element is updated by a group operation, the group structure is preserved automatically without need of parametrizations, constraints, or re-projection. In the Lie group variational integrator, the expression for the flow map is obtained from the discrete variational principle on a Lie group, the same procedure presented in Fig. 1. But the infinitesimal variation of a Lie group element must be carefully expressed to respect the structure of the Lie group. For example, it can be expressed in terms of the exponential map as

  \delta g = \frac{d}{d\epsilon}\Big|_{\epsilon=0} g \exp(\epsilon\eta) = g \eta ,

for a Lie algebra element \eta ∈ g. This approach has been applied to the rotation group SO(3) and to the special Euclidean group SE(3) for dynamics of rigid bodies (see [24,26,27]). Generalizations to arbitrary Lie groups give the generalized discrete Euler-Poincare (DEP) equation,

  T*_e L_{f_0} \cdot D_2 L_d(g_0, f_0) - Ad*_{f_1^{-1}} \cdot (T*_e L_{f_1} \cdot D_2 L_d(g_1, f_1)) + T*_e L_{g_1} \cdot D_1 L_d(g_1, f_1) = 0 ,   (5)

for a discrete Lagrangian on a Lie group, L_d : G × G → R. Here L_f : G → G denotes the left translation map given by L_f g = f g for f, g ∈ G, T_g L_f : T_g G → T_{fg} G is the tangential map for the left translation, and Ad_g : g → g is the adjoint map. A dual map is denoted by a superscript * (see [30] for detailed definitions). We illustrate the properties of discrete mechanics using several mechanical systems, namely a mass-spring system, a planar pendulum, a spherical pendulum, and a rigid body.


Example 1 (Mass-spring System) Consider a mass-spring system, defined by a rigid body that moves along a straight frictionless slot, and is attached to a linear spring.

Continuous equation of motion: The configuration space is Q = R, and the Lagrangian L : R × R → R is given by

  L(q, \dot{q}) = \frac{1}{2} m \dot{q}^2 - \frac{1}{2} \kappa q^2 ,   (6)

where q ∈ R is the displacement of the body measured from the point where the spring exerts no force. The mass of the body and the spring constant are denoted by m, κ ∈ R, respectively. The Euler-Lagrange equation yields the continuous equation of motion,

  m \ddot{q} + \kappa q = 0 .   (7)

Discrete equation of motion: Let h > 0 be a discrete time step, and let a subscript k denote the kth discrete variable at t = kh. The discrete Lagrangian L_d : R × R → R is an approximation of the integral of the continuous Lagrangian (6) along the solution of (7) over a time step. Here, we choose the following discrete Lagrangian:

  L_d(q_k, q_{k+1}) = h L\Big( \frac{q_k + q_{k+1}}{2}, \frac{q_{k+1} - q_k}{h} \Big) = \frac{m}{2h} (q_{k+1} - q_k)^2 - \frac{h\kappa}{8} (q_k + q_{k+1})^2 .   (8)

Direct application of the discrete Euler-Lagrange equation to this discrete Lagrangian yields the discrete equations of motion. We develop the discrete equation of motion using the discrete Hamilton's principle to illustrate the principles more explicitly. Let G_d : R^{N+1} → R be the discrete action sum defined as G_d = \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}), which approximates the action integral. The infinitesimal variation of the action sum can be written as

  \delta G_d = \sum_{k=0}^{N-1} \delta q_{k+1} \Big[ \frac{m}{h}(q_{k+1} - q_k) - \frac{h\kappa}{4}(q_k + q_{k+1}) \Big] + \delta q_k \Big[ -\frac{m}{h}(q_{k+1} - q_k) - \frac{h\kappa}{4}(q_k + q_{k+1}) \Big] .

Since \delta q_0 = \delta q_N = 0, the summation index can be rewritten as

  \delta G_d = \sum_{k=1}^{N-1} \delta q_k \Big\{ -\frac{m}{h}(q_{k+1} - 2 q_k + q_{k-1}) - \frac{h\kappa}{4}(q_{k+1} + 2 q_k + q_{k-1}) \Big\} .

From the discrete Hamilton's principle, \delta G_d = 0 for any \delta q_k. Thus, the discrete equation of motion is given by

  \frac{m}{h}(q_{k+1} - 2 q_k + q_{k-1}) + \frac{h\kappa}{4}(q_{k+1} + 2 q_k + q_{k-1}) = 0 .   (9)

For a given (q_{k-1}, q_k), we solve the above equation to obtain q_{k+1}. This yields a discrete flow map (q_{k-1}, q_k) → (q_k, q_{k+1}), and this process is repeated. The discrete Legendre transformation provides the discrete equation of motion in terms of the velocity as

  \Big( 1 + \frac{h^2 \kappa}{4m} \Big) q_{k+1} = h \dot{q}_k + \Big( 1 - \frac{h^2 \kappa}{4m} \Big) q_k ,   (10)

  \dot{q}_{k+1} = \dot{q}_k - \frac{h\kappa}{2m} q_k - \frac{h\kappa}{2m} q_{k+1} .   (11)

For a given (q_k, \dot{q}_k), we compute q_{k+1} and \dot{q}_{k+1} by (10) and (11), respectively. This yields a discrete flow map (q_k, \dot{q}_k) → (q_{k+1}, \dot{q}_{k+1}). It can be shown that this variational integrator has second-order accuracy, which follows from the fact that the discrete action sum is a second-order approximation of the action integral.
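As an illustration (not from the original article), the explicit update (10)-(11) can be coded directly; the parameter values below are the ones used in the numerical example that follows.

m, kappa, h = 1.0, 1.0, 0.035      # mass, spring constant, step size (from the example)

def step(q, qdot):
    # (10): configuration update obtained from the discrete Legendre transformation
    q_next = (h * qdot + (1.0 - h**2 * kappa / (4.0 * m)) * q) / (1.0 + h**2 * kappa / (4.0 * m))
    # (11): velocity update
    qdot_next = qdot - h * kappa / (2.0 * m) * (q + q_next)
    return q_next, qdot_next

q, qdot = 2.0 ** 0.5, 0.0          # q_0 = sqrt(2) m, qdot_0 = 0
for _ in range(10000):
    q, qdot = step(q, qdot)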

Numerical example: We compare computational properties of the discrete equations of motion given by (10) and (11) with a 4(5)th order variable step size Runge-Kutta method. We choose m = 1 kg, κ = 1 kg/s² so that the natural frequency is 1 rad/s. The initial conditions are q_0 = √2 m, \dot{q}_0 = 0, and the total energy is E = 1 Nm. The simulation time is 200π sec, and the step-size h = 0.035 of the discrete equations of motion is chosen such that the CPU times are the same for both methods. Figure 2a shows the computed total energy. The variational integrator preserves the total energy well. The mean variation is 2.7327×10⁻¹³ Nm. But there is a notable dissipation of the computed total energy for the Runge-Kutta method.

Example 2 (Planar Pendulum) A planar pendulum is a mass particle connected to a frictionless, one degree-of-freedom pivot by a rigid massless link under a uniform gravitational potential. The configuration space is the one-sphere S¹ = {q ∈ R² : ‖q‖ = 1}. While it is common to parametrize the one-sphere by an angle, we develop parameter-free equations of motion in the special orthogonal group SO(2), which is a group of 2×2 orthogonal matrices with determinant 1, i.e. SO(2) = {R ∈ R^{2×2} : R^T R = I_{2×2}, det[R] = 1}. SO(2) is diffeomorphic to the one-sphere. It is also possible to develop global equations of motion on the one-sphere directly, as shown in the next example, but here we focus on the special orthogonal group in order to illustrate the key steps to develop a Lie group variational integrator.

Discrete Control Systems, Figure 2: Computed total energy (RK45: blue, dotted; VI: red, solid)

We first exploit the basic structures of the Lie group SO(2). Define the hat map, which maps a scalar Ω to a 2×2 skew-symmetric matrix \hat{\Omega} as

  \hat{\Omega} = \begin{bmatrix} 0 & -\Omega \\ \Omega & 0 \end{bmatrix} .

The set of 2×2 skew-symmetric matrices forms the Lie algebra so(2). Using the hat map, we identify so(2) with R. An inner product on so(2) can be induced from the inner product on R as ⟨\hat{\Omega}_1, \hat{\Omega}_2⟩ = \frac{1}{2} tr[\hat{\Omega}_1^T \hat{\Omega}_2] = \Omega_1 \Omega_2 for any Ω_1, Ω_2 ∈ R. The matrix exponential is a local diffeomorphism from so(2) to SO(2) given by

  \exp \hat{\Omega} = \begin{bmatrix} \cos\Omega & -\sin\Omega \\ \sin\Omega & \cos\Omega \end{bmatrix} .

The kinematics equation for R ∈ SO(2) can be written in terms of a Lie algebra element as

  \dot{R} = R \hat{\Omega} .   (12)

Continuous equations of motion: The Lagrangian for a planar pendulum, L : SO(2) × so(2) → R, can be written as

  L(R, \hat{\Omega}) = \frac{1}{2} m l^2 \Omega^2 + m g l \, e_2^T R e_2 = \frac{1}{2} m l^2 ⟨\hat{\Omega}, \hat{\Omega}⟩ + m g l \, e_2^T R e_2 ,   (13)

where the constant g ∈ R is the gravitational acceleration. The mass and the length of the pendulum are denoted by m, l ∈ R, respectively. The second expression is used to define a discrete Lagrangian later. We choose the bases of the inertial frame and the body-fixed frame such that the unit vector along the gravity direction in the inertial frame, and the unit vector along the pendulum axis in the body-fixed frame, are represented by the same vector e_2 = [0, 1] ∈ R². Thus, for example, the hanging attitude is represented by R = I_{2×2}. Here, the rotation matrix R ∈ SO(2) represents the linear transformation from a representation of a vector in the body-fixed frame to the inertial frame.

Since the special orthogonal group is not a linear vector space, the expression for the variation should be carefully chosen. The infinitesimal variation of a rotation matrix R ∈ SO(2) can be written in terms of its Lie algebra element as

  \delta R = \frac{d}{d\epsilon}\Big|_{\epsilon=0} R \exp(\epsilon \hat{\eta}) = R \hat{\eta} \exp(\epsilon \hat{\eta}) \Big|_{\epsilon=0} = R \hat{\eta} ,   (14)

where η ∈ R so that \hat{\eta} ∈ so(2). The infinitesimal variation of the angular velocity is induced from this expression and (12) as

  \delta\hat{\Omega} = \delta R^T \dot{R} + R^T \delta\dot{R} = (R\hat{\eta})^T \dot{R} + R^T (\dot{R}\hat{\eta} + R \dot{\hat{\eta}}) = -\hat{\eta}\hat{\Omega} + \hat{\Omega}\hat{\eta} + \dot{\hat{\eta}} = \dot{\hat{\eta}} ,   (15)

where we used the equality of mixed partials to compute \delta\dot{R} as d/dt(\delta R).

Define the action integral to be G = \int_0^T L(R, \hat{\Omega}) \, dt. The infinitesimal variation of the action integral is obtained by using (14) and (15). Hamilton's principle yields the following continuous equations of motion:

  \dot{\Omega} + \frac{g}{l} e_2^T R e_1 = 0 ,   (16)

  \dot{R} = R \hat{\Omega} .   (17)


If we parametrize the rotation matrix as

  R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} ,

these equations are equivalent to

  \ddot{\theta} + \frac{g}{l} \sin\theta = 0 .   (18)

Discrete equations of motion: We develop a Lie group variational integrator on SO(2). Similar to (4), define F_k ∈ SO(2) such that

  R_{k+1} = R_k F_k .   (19)

Thus, F_k = R_k^T R_{k+1} represents the relative update between two integration steps. If we find F_k ∈ SO(2), the orthogonal structure is preserved through (19), since the product of orthogonal matrices is also orthogonal. This is a key idea of Lie group variational integrators.

Define the discrete Lagrangian L_d : SO(2) × SO(2) → R to be

  L_d(R_k, F_k) = \frac{1}{2h} m l^2 ⟨F_k - I_{2×2}, F_k - I_{2×2}⟩ + \frac{h}{2} m g l \, e_2^T R_k e_2 + \frac{h}{2} m g l \, e_2^T R_{k+1} e_2
                = \frac{1}{2h} m l^2 \, tr[I_{2×2} - F_k] + \frac{h}{2} m g l \, e_2^T R_k e_2 + \frac{h}{2} m g l \, e_2^T R_{k+1} e_2 ,   (20)

which is obtained by the approximation h \hat{\Omega}_k ≈ R_k^T (R_{k+1} - R_k) = F_k - I_{2×2}, applied to the continuous Lagrangian given by (13).

As for the continuous time case, the expressions for the infinitesimal variations should be carefully chosen. The infinitesimal variation of a rotation matrix is the same as (14), namely

  \delta R_k = R_k \hat{\eta}_k ,   (21)

for η_k ∈ R, and the constrained variation of F_k is obtained from (19) as

  \delta F_k = \delta R_k^T R_{k+1} + R_k^T \delta R_{k+1} = -\hat{\eta}_k F_k + F_k \hat{\eta}_{k+1} = F_k (\hat{\eta}_{k+1} - F_k^T \hat{\eta}_k F_k) = F_k (\hat{\eta}_{k+1} - \hat{\eta}_k) ,   (22)

where we use the fact that F \hat{\eta} F^T = \hat{\eta} for any F ∈ SO(2) and \hat{\eta} ∈ so(2).

Define an action sum G_d : SO(2)^{N+1} → R as G_d = \sum_{k=0}^{N-1} L_d(R_k, F_k). Using (21) and (22), the variation of the action sum is written as

  \delta G_d = \sum_{k=1}^{N-1} \Big⟨ \frac{1}{2h} m l^2 (F_{k-1} - F_{k-1}^T) - \frac{1}{2h} m l^2 (F_k - F_k^T) - h m g l \, \widehat{e_2^T R_k e_1}, \, \hat{\eta}_k \Big⟩ .

From the discrete Hamilton's principle, \delta G_d = 0 for any η_k. Thus, we obtain the Lie group variational integrator on SO(2) as

  (F_k - F_k^T) - (F_{k+1} - F_{k+1}^T) - \frac{2 h^2 g}{l} \widehat{e_2^T R_{k+1} e_1} = 0 ,   (23)

  R_{k+1} = R_k F_k .   (24)

For a given (R_k, F_k) and R_{k+1} = R_k F_k, (23) is solved to find F_{k+1}. This yields a discrete map (R_k, F_k) → (R_{k+1}, F_{k+1}). If we parametrize the rotation matrices R and F with θ and Δθ, and if we assume that Δθ ≪ 1, these equations are equivalent to

  \frac{1}{h}(\theta_{k+1} - 2\theta_k + \theta_{k-1}) + \frac{h g}{l} \sin\theta_k = 0 .

The discrete version of the Legendre transformation provides the discrete Hamiltonian map as follows:

  F_k - F_k^T = 2 h \hat{\Omega}_k - \frac{h^2 g}{l} \widehat{e_2^T R_k e_1} ,   (25)

  R_{k+1} = R_k F_k ,   (26)

  \Omega_{k+1} = \Omega_k - \frac{h g}{2 l} e_2^T R_k e_1 - \frac{h g}{2 l} e_2^T R_{k+1} e_1 .   (27)

For a given (R_k, Ω_k), we solve (25) to obtain F_k. Using this, (R_{k+1}, Ω_{k+1}) is obtained from (26) and (27). This yields a discrete map (R_k, Ω_k) → (R_{k+1}, Ω_{k+1}).
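In SO(2) the implicit relation (25) reduces to a scalar equation for the relative rotation angle, so the map can be coded explicitly. The following sketch (not from the original article) illustrates one step; the parameter values are those of the numerical example below.

import numpy as np

g, l, h = 9.81, 9.81, 0.03
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rot = lambda a: np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def step(R, Omega):
    s = e2 @ R @ e1                                      # e_2^T R_k e_1 = sin(theta_k)
    # (25): F_k - F_k^T = 2 sin(dtheta) J, so sin(dtheta) = h*Omega - (h^2 g / 2l) * s
    dtheta = np.arcsin(h * Omega - 0.5 * h**2 * g / l * s)
    R_next = R @ rot(dtheta)                             # (26)
    s_next = e2 @ R_next @ e1
    Omega_next = Omega - 0.5 * h * g / l * (s + s_next)  # (27)
    return R_next, Omega_next

R, Omega = rot(np.pi / 2), 0.0                           # theta_0 = pi/2 rad, Omega_0 = 0
for _ in range(1000):
    R, Omega = step(R, Omega)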

Numerical example: We compare the computational properties of the discrete equations of motion given by (25)-(27) with a 4(5)th order variable step size Runge-Kutta method. We choose m = 1 kg, l = 9.81 m. The initial conditions are θ_0 = π/2 rad, Ω_0 = 0, and the total energy is E = 0 Nm. The simulation time is 1000 sec, and the step-size h = 0.03 of the discrete equations of motion is chosen such that the CPU times are identical. Figure 2b shows the computed total energy for both methods. The variational integrator preserves the total energy well. There is no drift in the computed total energy, and the mean variation is 1.0835×10⁻² Nm. But there is a notable dissipation of the computed total energy for the Runge-Kutta method. Note that the computed total energy would further decrease as the simulation time increases.

Example 3 (Spherical Pendulum) A spherical pendulum is a mass particle connected to a frictionless, two degree-of-freedom pivot by a rigid massless link. The mass particle acts under a uniform gravitational potential. The configuration space is the two-sphere S² = {q ∈ R³ : ‖q‖ = 1}. It is common to parametrize the two-sphere by two angles, but this description of the spherical pendulum has a singularity. Any trajectory near the singularity encounters numerical ill-conditioning. Furthermore, this leads to complicated expressions involving trigonometric functions.

Here we develop equations of motion for a spherical pendulum using the global structure of the two-sphere without parametrization. In the previous example, we developed equations of motion for a planar pendulum using the fact that the one-sphere S¹ is diffeomorphic to the special orthogonal group SO(2). But the two-sphere is not diffeomorphic to a Lie group. Instead, there exists a natural Lie group action on the two-sphere, namely that of the 3-dimensional special orthogonal group SO(3), the group of 3×3 orthogonal matrices with determinant 1, i.e. SO(3) = {R ∈ R^{3×3} : R^T R = I_{3×3}, det[R] = 1}. The special orthogonal group SO(3) acts on the two-sphere in a transitive way: for any q_1, q_2 ∈ S², there exists an R ∈ SO(3) such that q_2 = R q_1. Thus, the discrete update for the two-sphere can be expressed in terms of a rotation matrix as in (19). This is the key idea used to develop discrete equations of motion for a spherical pendulum.

Continuous equations of motion: Let q ∈ S² be a unit vector from the pivot point to the point mass. The Lagrangian for a spherical pendulum can be written as

  L(q, \dot{q}) = \frac{1}{2} m l^2 \dot{q} \cdot \dot{q} + m g l \, e_3 \cdot q ,   (28)

where the gravity direction is assumed to be e_3 = [0, 0, 1] ∈ R³. The mass and the length of the pendulum are denoted by m, l ∈ R, respectively. The infinitesimal variation of the unit vector q can be written in terms of the vector cross product as

  \delta q = \xi \times q ,   (29)

where ξ ∈ R³ is constrained to be orthogonal to the unit vector, i.e. ξ · q = 0. Using this expression for the infinitesimal variation, Hamilton's principle yields the following continuous equation of motion:

  \ddot{q} + (\dot{q} \cdot \dot{q}) q + \frac{g}{l} \big( q \times (q \times e_3) \big) = 0 .   (30)

Since \dot{q} = ω × q for some angular velocity ω ∈ R³ satisfying ω · q = 0, this can also be equivalently written as

  \dot{\omega} - \frac{g}{l} q \times e_3 = 0 ,   (31)

  \dot{q} = \omega \times q .   (32)

These are global equations of motion for a spherical pendulum; they are much simpler than the equations expressed in terms of angles, and they have no singularity.

Discrete equations of motion: We develop a variational integrator for the spherical pendulum defined on S². Since the special orthogonal group acts on the two-sphere transitively, we can define the discrete update map for the unit vector as

  q_{k+1} = F_k q_k   (33)

for F_k ∈ SO(3). The rotation matrix F_k is not uniquely defined by this condition, since exp(\hat{q}_k), which corresponds to a rotation about the q_k direction, fixes the vector q_k. Consequently, if F_k satisfies (33), then F_k exp(\hat{q}_k) does as well. We avoid this ambiguity by requiring that F_k does not rotate about q_k, which can be achieved by letting F_k = exp(\hat{f}_k), where f_k · q_k = 0.

Define a discrete Lagrangian L_d : S² × S² → R to be

  L_d(q_k, q_{k+1}) = \frac{1}{2h} m l^2 (q_{k+1} - q_k) \cdot (q_{k+1} - q_k) + \frac{h}{2} m g l \, e_3 \cdot q_k + \frac{h}{2} m g l \, e_3 \cdot q_{k+1} .

The variation of q_k is the same as (29), namely

  \delta q_k = \xi_k \times q_k   (34)

for ξ_k ∈ R³ with the constraint ξ_k · q_k = 0. Using this discrete Lagrangian and the expression for the variation, the discrete Hamilton's principle yields the following discrete equations of motion for a spherical pendulum:

  q_{k+1} = \Big( h \omega_k + \frac{h^2 g}{2 l} q_k \times e_3 \Big) \times q_k + \Big( 1 - \Big\| h \omega_k + \frac{h^2 g}{2 l} q_k \times e_3 \Big\|^2 \Big)^{1/2} q_k ,   (35)

  \omega_{k+1} = \omega_k + \frac{h g}{2 l} q_k \times e_3 + \frac{h g}{2 l} q_{k+1} \times e_3 .   (36)

Discrete Control Systems, Figure 3: Numerical simulation of a spherical pendulum (RK45: blue, dotted; VI: red, solid)

Since an explicit solution for F_k ∈ SO(3) can be obtained in this case, the rotation matrix F_k does not appear in the equations of motion. This variational integrator on S² exactly preserves the unit length of q_k, the constraint q_k · ω_k = 0, and the third component of the angular velocity ω_k · e_3, which is conserved since gravity exerts no moment along the gravity direction e_3.
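A direct transcription of (35)-(36) (a sketch, not the authors' code; the initial conditions are those of the numerical example below) could look as follows.

import numpy as np

g, l, h = 9.81, 9.81, 0.05
e3 = np.array([0.0, 0.0, 1.0])

def step(q, w):
    a = h * w + 0.5 * h**2 * g / l * np.cross(q, e3)     # vector multiplying q_k in (35)
    q_next = np.cross(a, q) + np.sqrt(1.0 - a @ a) * q   # (35): q_{k+1} stays exactly on S^2
    w_next = w + 0.5 * h * g / l * (np.cross(q, e3) + np.cross(q_next, e3))   # (36)
    return q_next, w_next

q = np.array([np.sqrt(3.0) / 2.0, 0.0, 0.5])             # q_0
w = 0.1 * np.array([np.sqrt(3.0), 0.0, 3.0])             # omega_0 in rad/sec
for _ in range(4000):
    q, w = step(q, w)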

Numerical example: We compare the computational properties of the discrete equations of motion given by (35) and (36) with a 4(5)th order variable step size Runge-Kutta method for (31) and (32). We choose m = 1 kg, l = 9.81 m. The initial conditions are q_0 = [√3/2, 0, 1/2], ω_0 = 0.1[√3, 0, 3] rad/sec, and the total energy is E = -0.44 Nm. The simulation time is 200 sec, and the step-size h = 0.05 of the discrete equations of motion is chosen such that the CPU times are identical. Figure 3 shows the computed total energy and the unit length errors. The variational integrator preserves the total energy and the structure of the two-sphere well. The mean total energy deviation is 1.5460×10⁻⁴ Nm, and the mean unit length error is 3.2476×10⁻¹⁵. But there is a notable dissipation of the computed total energy for the Runge-Kutta method. The Runge-Kutta method also fails to preserve the structure of the two-sphere; its mean unit length error is 1.0164×10⁻².

Example 4 (Rigid Body in a Potential Field) Consider a rigid body under a potential field that depends on the position and the attitude of the body. The configuration space is the special Euclidean group, which is a semi-direct product of the special orthogonal group and Euclidean space, i.e. SE(3) = SO(3) ⓢ R³.

Continuous equations of motion: The equations of motion for a rigid body can be developed either from Hamilton's principle (see [26]), in a similar way as in Example 2, or directly from the generalized discrete Euler-Poincare equation given in (5). Here, we summarize the results. Let m ∈ R and J ∈ R^{3×3} be the mass and the moment of inertia matrix of the rigid body. For (R, x) ∈ SE(3), the linear transformation from the body-fixed frame to the inertial frame is denoted by the rotation matrix R ∈ SO(3), and the position of the mass center in the inertial frame is denoted by a vector x ∈ R³. The vectors Ω, v ∈ R³ are the angular velocity in the body-fixed frame, and the translational velocity in the inertial frame, respectively. Suppose that the rigid body acts under a configuration-dependent potential U : SE(3) → R. The continuous equations of motion for the rigid body can be written as

  \dot{R} = R \hat{\Omega} ,   (37)

  \dot{x} = v ,   (38)

  J \dot{\Omega} + \Omega \times J \Omega = M ,   (39)

  m \dot{v} = -\frac{\partial U}{\partial x} ,   (40)

where the hat map is an isomorphism from R³ to the 3×3 skew-symmetric matrices so(3), defined such that \hat{x} y = x × y for any x, y ∈ R³. The moment due to the potential, M ∈ R³, is obtained by the following relationship:

  \hat{M} = \frac{\partial U}{\partial R}^T R - R^T \frac{\partial U}{\partial R} .   (41)

The matrix ∂U/∂R ∈ R^{3×3} is defined such that [∂U/∂R]_{ij} = ∂U/∂[R]_{ij} for i, j ∈ {1, 2, 3}, where the i, jth element of a matrix is denoted by [·]_{ij}.

Discrete equations of motion: The corresponding discrete equations of motion are given by

  h \widehat{J \Omega_k} + \frac{h^2}{2} \hat{M}_k = F_k J_d - J_d F_k^T ,   (42)

  R_{k+1} = R_k F_k ,   (43)

  x_{k+1} = x_k + h v_k - \frac{h^2}{2m} \frac{\partial U_k}{\partial x_k} ,   (44)

  J \Omega_{k+1} = F_k^T J \Omega_k + \frac{h}{2} F_k^T M_k + \frac{h}{2} M_{k+1} ,   (45)

  m v_{k+1} = m v_k - \frac{h}{2} \frac{\partial U_k}{\partial x_k} - \frac{h}{2} \frac{\partial U_{k+1}}{\partial x_{k+1}} ,   (46)

where J_d ∈ R^{3×3} is a non-standard moment of inertia matrix defined as J_d = \frac{1}{2} tr[J] I_{3×3} - J. For a given (R_k, x_k, Ω_k, v_k), we solve the implicit Eq. (42) to find F_k ∈ SO(3). Then, the configuration at the next step, (R_{k+1}, x_{k+1}), is obtained by (43) and (44), and the moment and force M_{k+1}, ∂U_{k+1}/∂x_{k+1} can be computed. The velocities Ω_{k+1}, v_{k+1} are obtained from (45) and (46). This defines a discrete flow map, (R_k, x_k, Ω_k, v_k) → (R_{k+1}, x_{k+1}, Ω_{k+1}, v_{k+1}), and this process can be repeated. This Lie group variational integrator on SE(3) can be generalized to multiple rigid bodies acting under their mutual gravitational potential (see [26]).
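One way to realize a single step of (42)-(46) numerically is sketched below (an illustration, not the authors' implementation): the implicit equation (42) is solved for F_k = expm(\hat{f}) with a generic root finder, and the potential enters through user-supplied callables dUdx(R, x) and moment(R, x), which are assumptions of this sketch.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import fsolve

def hat(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def lgvi_step(R, x, Omega, v, J, m, h, dUdx, moment):
    Jd = 0.5 * np.trace(J) * np.eye(3) - J                 # non-standard moment of inertia
    Mk = moment(R, x)
    lhs = h * hat(J @ Omega) + 0.5 * h**2 * hat(Mk)
    # (42): solve for F_k = expm(hat(f)); the residual is the vee of a skew-symmetric matrix
    def residual(f):
        F = expm(hat(f))
        S = F @ Jd - Jd @ F.T - lhs
        return np.array([S[2, 1], S[0, 2], S[1, 0]])
    f = fsolve(residual, h * Omega)                        # free-rotation initial guess
    F = expm(hat(f))
    R_next = R @ F                                         # (43)
    x_next = x + h * v - 0.5 * h**2 / m * dUdx(R, x)       # (44)
    M_next = moment(R_next, x_next)
    Omega_next = np.linalg.solve(J, F.T @ (J @ Omega) + 0.5 * h * F.T @ Mk + 0.5 * h * M_next)  # (45)
    v_next = v - 0.5 * h / m * (dUdx(R, x) + dUdx(R_next, x_next))                              # (46)
    return R_next, x_next, Omega_next, v_next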

Numerical example: We compare the computational properties of the discrete equations of motion given by (42)-(46) with a 4(5)th order variable step size Runge-Kutta method for (37)-(40). In addition, we compute the attitude dynamics using quaternions on the unit three-sphere S³. The attitude kinematics equation (37) is rewritten in terms of quaternions, and the corresponding equations are integrated by the same Runge-Kutta method.

We choose a dumbbell spacecraft, that is, two spheres connected by a rigid massless rod, acting under a central gravitational potential. The resulting system is a restricted full two body problem. The dumbbell spacecraft model has an analytic expression for the gravitational potential, resulting in a nontrivial coupling between the attitude dynamics and the orbital dynamics.

As shown in Fig. 4a, the initial conditions are chosen such that the resulting motion is a near-circular orbit combined with a rotational motion. Figures 4b and c show the computed total energy and the orthogonality errors of the rotation matrix. The Lie group variational integrator preserves the total energy and the Lie group structure of SO(3). The mean total energy deviation is 2.5983×10⁻⁴, and the mean orthogonality error is 1.8553×10⁻¹³. But there is a notable dissipation of the computed total energy and a notable orthogonality error for the Runge-Kutta method. The mean orthogonality errors for the Runge-Kutta method are 0.0287 and 0.0753, respectively, using the kinematics equation with rotation matrices and using the kinematics equation with quaternions. Thus, the attitude of the rigid body is not accurately computed by the Runge-Kutta methods. It is interesting to see that the Runge-Kutta method with quaternions, which is generally assumed to have better computational properties than the kinematics equation with rotation matrices, has larger total energy and orthogonality errors. Since the unit length of the quaternion vector is not preserved in the numerical computations, orthogonality errors arise when it is converted to a rotation matrix. This suggests that it is critical to preserve the structure of SO(3) in order to study the global characteristics of the rigid body dynamics.

Discrete Control Systems, Figure 4: Numerical simulation of a dumbbell rigid body (LGVI: red, solid; RK45 with rotation matrices: blue, dash-dotted; RK45 with quaternions: black, dotted)

The importance of simultaneously preserving the symplectic structure and the Lie group structure of the configuration space in rigid body dynamics can be observed numerically. Lie group variational integrators, which preserve both of these properties, are compared to methods that only preserve one, or neither, of these properties (see [27]). It is shown that the Lie group variational integrator exhibits greater numerical accuracy and efficiency. Due to these computational advantages, the Lie group variational integrator has been used to study the dynamics of the binary near-Earth asteroid 66391 (1999 KW4) in joint work between the University of Michigan and the Jet Propulsion Laboratory, NASA (see [37]).


Optimal Control of Discrete Lagrangian and Hamiltonian Systems

Optimal control problems involve finding a control input such that a certain optimality objective is achieved under prescribed constraints. An optimal control problem that minimizes a performance index is described by a set of differential equations, which can be derived using Pontryagin's maximum principle. The equations of motion for a system are constrained by Lagrange multipliers, and necessary conditions for optimality are obtained by the calculus of variations. The solution of the corresponding two-point boundary value problem provides the optimal control input. Alternatively, a sub-optimal control law is obtained by approximating the control input history with finite data points.

Discrete optimal control problems involve finding a control input for a given system described by discrete Lagrangian and Hamiltonian mechanics. The control inputs are parametrized by their values at each discrete time step, and the discrete equations of motion are derived from the discrete Lagrange-d'Alembert principle [21],

  \delta \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}) = \sum_{k=0}^{N-1} \big[ F_d^-(q_k, q_{k+1}) \cdot \delta q_k + F_d^+(q_k, q_{k+1}) \cdot \delta q_{k+1} \big] ,

which modifies the discrete Hamilton's principle by taking into account the virtual work of the external forces. Discrete optimal control is in contrast to traditional techniques such as collocation, wherein the continuous equations of motion are imposed as constraints at a set of collocation points, since this approach induces constraints on the configuration at each discrete timestep.
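For reference (this step is not spelled out in the text above), carrying out the variation as in (3), with \delta q_0 = \delta q_N = 0 and with the sign convention of the displayed principle, gives the forced discrete Euler-Lagrange equations

  D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = F_d^+(q_{k-1}, q_k) + F_d^-(q_k, q_{k+1}) , \qquad k = 1, \dots, N-1 ,

which replace (3) when external control forces act on the system.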

Any optimal control algorithm can be applied to the discrete Lagrangian or Hamiltonian system. For an indirect method, our approach to a discrete optimal control problem can be considered as a multiple stage variational problem. The discrete equations of motion are derived by the discrete variational principle. The corresponding variational integrators are imposed as dynamic constraints to be satisfied by using Lagrange multipliers, and necessary conditions for optimality, expressed as discrete equations on the multipliers, are obtained from a variational principle. For a direct method, control inputs can be optimized by using parameter optimization tools such as sequential quadratic programming. The discrete optimal control approach can thus be characterized by discretizing the optimal control problem from the problem formulation stage.

This method has substantial computational advantages when used to find an optimal control law. As discussed in the previous section, the discrete dynamics are more faithful to the continuous equations of motion, and consequently more accurate solutions to the optimal control problem are obtained. The external control inputs break the Lagrangian and Hamiltonian system structure. For example, the total energy is not conserved for a controlled mechanical system. But the computational superiority of the discrete mechanics still holds for controlled systems. It has been shown that the discrete dynamics are more reliable even for controlled systems, as they compute the energy dissipation rate of controlled systems more accurately (see [31]). For example, this feature is extremely important in computing accurate optimal trajectories for long term spacecraft attitude maneuvers using low energy control inputs.

The discrete dynamics not only provide an accurate optimal control input, but also enable us to find it efficiently. For the indirect optimal control approach, optimal solutions are usually sensitive to a small variation of the multipliers. This causes difficulties, such as numerical ill-conditioning, when solving the necessary conditions for optimality expressed as a two point boundary value problem. Sensitivity derivatives along the discrete necessary conditions do not have the numerical dissipation introduced by conventional numerical integration schemes. Thus, they are numerically more robust, and the necessary conditions can be solved computationally efficiently. For the direct optimal control approach, optimal control inputs can be obtained by using a larger discrete step size, which requires less computational load.

We illustrate the basic properties of the discrete optimal control using optimal control problems for the spherical pendulum and the rigid body model presented in the previous section.

Discrete Control Systems, Figure 5: Optimal control of a spherical pendulum

Example 5 (Optimal Control of a Spherical Pendulum) We study an optimal control problem for the spherical pendulum described in Example 3. We assume that an external control moment u ∈ R³ acts on the pendulum. Control inputs are parametrized by their values at each time step, and the discrete equations of motion are modified to include the effects of the external control inputs by using the discrete Lagrange-d'Alembert principle. Since the component of the control moment that is parallel to the direction along the pendulum has no effect, we parametrize the control input as u_k = q_k × w_k for w_k ∈ R³.

The objective is to transfer the pendulum from a given initial configuration (q_0, ω_0) to a desired configuration (q_d, ω_d) during a fixed maneuver time N, while minimizing the square of the weighted l_2 norm of the control moments,

  \min_{w_k} \; J = \sum_{k=0}^{N} \frac{h}{2} u_k^T u_k = \sum_{k=0}^{N} \frac{h}{2} (q_k \times w_k)^T (q_k \times w_k) .

We solve this optimal control problem by using a direct numerical optimization method. The terminal boundary condition is imposed as an equality constraint, and the 3(N + 1) control input parameters {w_k}_{k=0}^{N} are numerically optimized using sequential quadratic programming. This method is referred to as the DMOC (Discrete Mechanics and Optimal Control) approach (see [19]).

Figure 5 shows an optimal solution transferring the spherical pendulum from the hanging configuration given by (q_0, ω_0) = (e_3, 0_{3×1}) to the inverted configuration (q_d, ω_d) = (-e_3, 0_{3×1}) during 1 second. The time step size is h = 0.033. Experiments have shown that the DMOC approach can compute optimal solutions using larger step sizes than typical Runge-Kutta methods, and consequently, it requires less computational load. In this case, using a second-order accurate Runge-Kutta method, the same optimization code fails while giving error messages of inaccurate and singular gradient computations. It is presumed that the unit length errors of the Runge-Kutta method, shown in Example 3, cause numerical instabilities in the finite-difference gradient computations required by the sequential quadratic programming algorithm.
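The structure of such a direct transcription can be sketched as follows (an illustration only, not the authors' implementation): the decision variables are w_0, ..., w_{N-1}, the state is propagated with the variational integrator (35)-(36), and SLSQP enforces the terminal condition. How the control moment enters the discrete momentum update below (an added h u_k / (m l^2) term) is an assumption of this sketch, not the forced discrete equations used in the article.

import numpy as np
from scipy.optimize import minimize

m, l, g, h, N = 1.0, 9.81, 9.81, 0.033, 30
e3 = np.array([0.0, 0.0, 1.0])
q0, w0 = e3.copy(), np.zeros(3)        # hanging configuration
qd, wd = -e3, np.zeros(3)              # inverted configuration

def rollout(x):
    W = x.reshape(N, 3)
    q, w, cost = q0.copy(), w0.copy(), 0.0
    for k in range(N):
        u = np.cross(q, W[k])                            # u_k = q_k x w_k
        cost += 0.5 * h * u @ u
        a = h * w + 0.5 * h**2 * g / l * np.cross(q, e3)
        q_next = np.cross(a, q) + np.sqrt(max(1.0 - a @ a, 0.0)) * q          # (35)
        w = w + 0.5 * h * g / l * (np.cross(q, e3) + np.cross(q_next, e3)) \
            + h / (m * l**2) * u                         # (36) plus assumed control forcing
        q = q_next
    return cost, q, w

objective = lambda x: rollout(x)[0]
terminal = lambda x: np.concatenate([rollout(x)[1] - qd, rollout(x)[2] - wd])
result = minimize(objective, 1e-2 * np.ones(3 * N), method="SLSQP",
                  constraints={"type": "eq", "fun": terminal},
                  options={"maxiter": 200})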

Example 6 (Optimal Control of a Rigid Body in a Potential Field) We study an optimal control problem for a rigid body using the dumbbell spacecraft model described in Example 4 (see [25] for details). We assume that an external control force u^f ∈ R³ and a control moment u^m ∈ R³ act on the dumbbell spacecraft. Control inputs are parametrized by their values at each time step, and the Lie group variational integrators are modified to include the effects of the external control inputs by using the discrete Lagrange-d'Alembert principle.

The objective is to transfer the dumbbell from a given initial configuration (R_0, x_0, Ω_0, v_0) to a desired configuration (R_d, x_d, Ω_d, v_d) during a fixed maneuver time N, while minimizing the square of the l_2 norm of the control inputs,

  \min_{u_{k+1}} \; J = \sum_{k=0}^{N-1} \frac{h}{2} (u^f_{k+1})^T W_f u^f_{k+1} + \frac{h}{2} (u^m_{k+1})^T W_m u^m_{k+1} ,

where W_f, W_m ∈ R^{3×3} are symmetric positive definite matrices. Here we use a modified version of the discrete equations of motion with first order accuracy, as it yields a compact form for the necessary conditions.

Necessary conditions for optimality: We solve this optimal control problem by using an indirect optimization method, where necessary conditions for optimality are derived using variational arguments, and a solution of the corresponding two-point boundary value problem provides the optimal control. This approach is common in the optimal control literature; here the optimal control problem is discretized at the problem formulation level using the Lie group variational integrator presented in Sect. "Discrete Lagrangian and Hamiltonian Mechanics". The augmented performance index is

  J_a = \sum_{k=0}^{N-1} \frac{h}{2} (u^f_{k+1})^T W_f u^f_{k+1} + \frac{h}{2} (u^m_{k+1})^T W_m u^m_{k+1}
        + \lambda_k^{1,T} \{ -x_{k+1} + x_k + h v_k \}
        + \lambda_k^{2,T} \Big\{ -m v_{k+1} + m v_k - h \frac{\partial U_{k+1}}{\partial x_{k+1}} + h u^f_{k+1} \Big\}
        + \lambda_k^{3,T} \big( \mathrm{logm}(F_k^T R_k^T R_{k+1}) \big)^\vee
        + \lambda_k^{4,T} \big\{ -J \Omega_{k+1} + F_k^T J \Omega_k + h ( M_{k+1} + u^m_{k+1} ) \big\} ,


where \lambda_k^1, \lambda_k^2, \lambda_k^3, \lambda_k^4 ∈ R³ are Lagrange multipliers. The matrix logarithm is denoted by logm : SO(3) → so(3), and the vee map ∨ : so(3) → R³ is the inverse of the hat map introduced in Example 4. The logarithm form of (43) is used, and the constraint (42) is considered implicitly using constrained variations. Using expressions for the variation of the rotation matrix and the angular velocity similar to those given in (14) and (15), the infinitesimal variation can be written as

  \delta J_a = \sum_{k=1}^{N-1} h \, \delta u_k^{f,T} \{ W_f u^f_k + \lambda^2_{k-1} \} + h \, \delta u_k^{m,T} \{ W_m u^m_k + \lambda^4_{k-1} \} + z_k^T \{ -\lambda_{k-1} + A_k^T \lambda_k \} ,

where \lambda_k = [\lambda^1_k; \lambda^2_k; \lambda^3_k; \lambda^4_k] ∈ R^{12}, and z_k ∈ R^{12} represents the infinitesimal variation of (R_k, x_k, Ω_k, v_k), given by z_k = [\mathrm{logm}(R_k^T \delta R_k)^\vee; \delta x_k; \delta\Omega_k; \delta v_k]. The matrix A_k ∈ R^{12×12} is defined in terms of (R_k, x_k, Ω_k, v_k). Thus, necessary conditions for optimality are given by

  u^f_{k+1} = -W_f^{-1} \lambda^2_k ,   (47)

  u^m_{k+1} = -W_m^{-1} \lambda^4_k ,   (48)

  \lambda_k = A_{k+1}^T \lambda_{k+1} ,   (49)

together with the discrete equations of motion and the boundary conditions.

Computational approach: Necessary conditions for optimality are expressed in terms of a two-point boundary value problem. The problem is to find the optimal discrete flow, multipliers, and control inputs that satisfy the equations of motion, optimality conditions, multiplier equations, and boundary conditions simultaneously. We use a neighboring extremal method (see [12]). A nominal solution satisfying all of the necessary conditions except the boundary conditions is chosen. The unspecified initial multiplier is updated by successive linearization so as to satisfy the specified terminal boundary conditions in the limit. This is also referred to as the shooting method. The main advantage of the neighboring extremal method is that the number of iteration variables is small.

Discrete Control Systems, Figure 6: Optimal control of a rigid body

The difficulty is that the extremal solutions are sensitive to small changes in the unspecified initial multiplier values. The nonlinearities also make it hard to construct an accurate estimate of the sensitivity, resulting in numerical ill-conditioning. It is therefore important to compute the sensitivities accurately when applying the neighboring extremal method. Here the optimality conditions (47) and (48) are substituted into the equations of motion and the multiplier equations, which are linearized to obtain sensitivity derivatives of an optimal solution with respect to the boundary conditions. Using this sensitivity, an initial guess of the unspecified initial conditions is iterated to satisfy the specified terminal conditions in the limit. Any type of Newton iteration can be applied. We use a line search with a backtracking algorithm, referred to as Newton–Armijo iteration (see [22]).
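A generic version of this iteration is sketched below. It is a simplified stand-in, not the article's implementation: the sensitivity is approximated here by finite differences of a user-supplied residual function, whereas the article obtains it by linearizing the discrete equations of motion and multiplier equations. The residual function and all names are hypothetical.

import numpy as np

def shoot(residual, lam0, tol=1e-12, max_outer=50):
    # residual(lam0): violation of the specified terminal boundary conditions
    # obtained by propagating the discrete equations forward from the guess lam0
    # of the unspecified initial multiplier.
    for _ in range(max_outer):
        r = residual(lam0)
        if np.linalg.norm(r) < tol:
            break
        # sensitivity of the terminal error with respect to lam0 (finite differences here)
        eps = 1e-7
        S = np.column_stack([(residual(lam0 + eps * e) - r) / eps
                             for e in np.eye(lam0.size)])
        step = np.linalg.solve(S, -r)
        # Armijo backtracking: accept the largest step that sufficiently reduces the error
        t, r_norm = 1.0, np.linalg.norm(r)
        while t > 1e-10 and np.linalg.norm(residual(lam0 + t * step)) > (1.0 - 1e-4 * t) * r_norm:
            t *= 0.5
        lam0 = lam0 + t * step
    return lam0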

Figure 6 shows optimized maneuvers, where a dumbbell spacecraft on a reference circular orbit is transferred to another circular orbit with a different orbital radius and inclination angle. Figure 6a shows the violation of the terminal boundary condition versus the number of iterations on a logarithmic scale. Red circles denote outer iterations in the Newton–Armijo iteration to compute the sensitivity derivatives. The error in satisfying the terminal boundary condition converges quickly to machine precision once the solution is close to the local minimum, at around the 20th iteration. These convergence results are consistent with the quadratic convergence rates expected of Newton methods with accurately computed gradients.


The neighboring extremal method, also referred to as the shooting method, is numerically efficient in the sense that the number of optimization parameters is minimized. But, in general, this approach may be prone to numerical ill-conditioning (see [3]). A small change in the initial multiplier can cause highly nonlinear behavior of the terminal attitude and angular momentum. It is difficult to compute the gradient for Newton iterations accurately, and the numerical error may not converge. However, the numerical examples presented in this article show excellent numerical convergence properties. The dynamics of a rigid body arises from Hamiltonian mechanics, which has neutral stability, and its adjoint system is also neutrally stable. The proposed Lie group variational integrator and the discrete multiplier equations, obtained from variations expressed in the Lie algebra, preserve the neutral stability property numerically. Therefore the sensitivity derivatives are computed accurately.

Controlled Lagrangian Method for Discrete Lagrangian Systems

The method of controlled Lagrangians is a procedure for constructing feedback controllers for the stabilization of relative equilibria. It relies on choosing a parametrized family of controlled Lagrangians whose corresponding Euler–Lagrange flows are equivalent to the closed loop behavior of a Lagrangian system with external control forces. The condition that these two systems are equivalent results in matching conditions. Since the controlled system can now be viewed as a Lagrangian system with a modified Lagrangian, the global stability of the controlled system can be determined directly using Lyapunov stability analysis.

This approach originated in Bloch et al. [5] and was then developed in Auckly et al. [1]; Bloch et al. [6,7,8,9]; Hamberg [15,16]. A similar approach for Hamiltonian controlled systems was introduced and further studied in the work of Blankenstein, Ortega, van der Schaft, Maschke, Spong, and their collaborators (see, for example, [33,35] and related references). The two methods were shown to be equivalent in Chang et al. [13], and a nonholonomic version was developed in Zenkov et al. [40,41] and Bloch [4].

Since the method of controlled Lagrangians relies on relating the closed-loop dynamics of a controlled system with the Euler–Lagrange flow associated with a modified Lagrangian, it is natural to discretize this approach through the use of variational integrators. In Bloch et al. [10,11], a discrete theory of controlled Lagrangians was developed for variational integrators, and applied to the feedback stabilization of the unstable inverted equilibrium of the pendulum on a cart.

The pendulum on a cart is an example of an underactuated control problem, which has two degrees of freedom, given by the pendulum angle and the cart position. Only the cart position has control forces acting on it, and the stabilization of the pendulum has to be achieved indirectly through the coupling between the pendulum and the cart. The controlled Lagrangian is obtained by modifying the kinetic energy term, a process referred to as kinetic shaping. Similarly, it is possible to modify the potential energy term using potential shaping.

Since the pendulum on a cart model involves both a planar pendulum and a cart that translates in one dimension, the configuration space is the cylinder $S^1 \times \mathbb{R}$.

Continuous kinetic shaping: The Lagrangian has the form kinetic minus potential energy,

$$L(q, \dot q) = \frac{1}{2}\left[\alpha\,\dot\theta^2 + 2\beta(\theta)\,\dot\theta\dot s + \gamma\,\dot s^2\right] - V(q)\,, \qquad (50)$$

and the corresponding controlled Euler–Lagrange dynamics is

$$\frac{d}{dt}\frac{\partial L}{\partial \dot\theta} - \frac{\partial L}{\partial\theta} = 0\,, \qquad (51)$$

$$\frac{d}{dt}\frac{\partial L}{\partial \dot s} = u\,, \qquad (52)$$

where $u$ is the control input. Since the potential energy is translation invariant, i.e., $V(q) = V(\theta)$, the relative equilibria $\theta = \theta_e$, $\dot s = \mathrm{const}$ are unstable and given by non-degenerate critical points of $V(\theta)$. To stabilize the relative equilibria $\theta = \theta_e$, $\dot s = \mathrm{const}$ with respect to $\theta$, kinetic shaping is used. The controlled Lagrangian in this case is defined by

$$L_{\tau,\sigma}(q, \dot q) = L\big(\theta, \dot\theta, \dot s + \tau(\theta)\dot\theta\big) + \frac{1}{2}\,\sigma\gamma\,\big(\tau(\theta)\dot\theta\big)^2\,, \qquad (53)$$

where $\tau(\theta) = \kappa\beta(\theta)$. This velocity shift corresponds to a new choice of the horizontal space (see [8] for details). The dynamics is just the Euler–Lagrange dynamics for the controlled Lagrangian (53),

$$\frac{d}{dt}\frac{\partial L_{\tau,\sigma}}{\partial\dot\theta} - \frac{\partial L_{\tau,\sigma}}{\partial\theta} = 0\,, \qquad (54)$$

$$\frac{d}{dt}\frac{\partial L_{\tau,\sigma}}{\partial\dot s} = 0\,. \qquad (55)$$

The Lagrangian (53) satisfies the simplified matching conditions of Bloch et al. [9] when the kinetic energy metric coefficient $\gamma$ in (50) is constant.

Setting $u = -\frac{d}{dt}\big(\gamma\kappa\,\beta(\theta)\dot\theta\big)$ defines the control input, makes Eqs. (52) and (55) identical, and results in controlled momentum conservation by dynamics (51) and (52). Setting $\sigma = -1/(\kappa\gamma)$ makes Eqs. (51) and (54) identical when restricted to a level set of the controlled momentum.
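For concreteness, one standard cart-pendulum realization of the coefficients in (50) is $\alpha = m l^2$, $\beta(\theta) = m l \cos\theta$, $\gamma = M + m$, and $V(\theta) = m g l \cos\theta$, with $\theta$ measured from the upright vertical; this particular choice is an assumption made here for illustration and is not spelled out in the text. Under that assumption, the Lagrangian (50) and the controlled Lagrangian (53) can be written down directly (the numerical values of m, M, l are taken from the example later in this section, g is a standard value):

import numpy as np

m, M, l, g = 0.14, 0.44, 0.215, 9.81   # pendulum mass, cart mass, length, gravity

alpha = m * l**2
gamma_ = M + m
beta = lambda th: m * l * np.cos(th)
V = lambda th: m * g * l * np.cos(th)   # maximum at the upright (unstable) equilibrium

def L(th, s, thdot, sdot):
    # Lagrangian (50): kinetic minus potential energy
    return 0.5 * (alpha * thdot**2 + 2.0 * beta(th) * thdot * sdot
                  + gamma_ * sdot**2) - V(th)

def L_tau_sigma(th, s, thdot, sdot, kappa, sigma):
    # controlled Lagrangian (53), with tau(theta) = kappa * beta(theta)
    tau_thdot = kappa * beta(th) * thdot
    return L(th, s, thdot, sdot + tau_thdot) + 0.5 * sigma * gamma_ * tau_thdot**2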

Discrete kinetic shaping: Here, we adopt the following notation:

$$q_{k+1/2} = \frac{q_k + q_{k+1}}{2}\,, \qquad \Delta q_k = q_{k+1} - q_k\,, \qquad q_k = (\theta_k, s_k)\,.$$

Then, a second-order accurate discrete Lagrangian is given by

$$L_d(q_k, q_{k+1}) = h\, L\big(q_{k+1/2},\, \Delta q_k / h\big)\,.$$

The discrete dynamics is governed by the equations

$$\frac{\partial L_d(q_k, q_{k+1})}{\partial \theta_k} + \frac{\partial L_d(q_{k-1}, q_k)}{\partial \theta_k} = 0\,, \qquad (56)$$

$$\frac{\partial L_d(q_k, q_{k+1})}{\partial s_k} + \frac{\partial L_d(q_{k-1}, q_k)}{\partial s_k} = -u_k\,, \qquad (57)$$

where $u_k$ is the control input. Similarly, the discrete controlled Lagrangian is

$$L_{\tau,\sigma,d}(q_k, q_{k+1}) = h\, L_{\tau,\sigma}\big(q_{k+1/2},\, \Delta q_k / h\big)\,,$$

with discrete dynamics given by

$$\frac{\partial L_{\tau,\sigma,d}(q_k, q_{k+1})}{\partial \theta_k} + \frac{\partial L_{\tau,\sigma,d}(q_{k-1}, q_k)}{\partial \theta_k} = 0\,, \qquad (58)$$

$$\frac{\partial L_{\tau,\sigma,d}(q_k, q_{k+1})}{\partial s_k} + \frac{\partial L_{\tau,\sigma,d}(q_{k-1}, q_k)}{\partial s_k} = 0\,. \qquad (59)$$

Equation (59) is equivalent to the discrete controlled momentum conservation

$$p_k = \lambda\,,$$

where

$$p_k = -\frac{\partial L_{\tau,\sigma,d}(q_k, q_{k+1})}{\partial s_k} = \frac{(1 + \kappa\gamma)\,\beta(\theta_{k+1/2})\,\Delta\theta_k + \gamma\,\Delta s_k}{h}\,.$$

Setting

$$u_k = -\gamma\kappa\,\frac{\Delta\theta_k\,\beta(\theta_{k+1/2}) - \Delta\theta_{k-1}\,\beta(\theta_{k-1/2})}{h}$$

makes Eqs. (57) and (59) identical and allows one to represent the discrete momentum equation (57) as the discrete momentum conservation law $p_k = p$.
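A direct transcription of the discrete momentum and control law as reconstructed above might look as follows; the helper names are hypothetical, beta is the function $\beta(\theta)$ from (50), and theta, s are arrays holding the discrete trajectory.

def discrete_momentum(theta, s, k, beta, gamma_, kappa, h):
    # p_k = [(1 + kappa*gamma) * beta(theta_{k+1/2}) * Delta theta_k + gamma * Delta s_k] / h
    th_mid = 0.5 * (theta[k] + theta[k + 1])
    return ((1.0 + kappa * gamma_) * beta(th_mid) * (theta[k + 1] - theta[k])
            + gamma_ * (s[k + 1] - s[k])) / h

def control_input(theta, k, beta, gamma_, kappa, h):
    # u_k = -gamma*kappa*[Delta theta_k * beta(theta_{k+1/2}) - Delta theta_{k-1} * beta(theta_{k-1/2})] / h
    d_now = (theta[k + 1] - theta[k]) * beta(0.5 * (theta[k] + theta[k + 1]))
    d_prev = (theta[k] - theta[k - 1]) * beta(0.5 * (theta[k - 1] + theta[k]))
    return -gamma_ * kappa * (d_now - d_prev) / h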

The condition that (56)–(57) are equivalent to (58)–(59) yields the discrete matching conditions. The dynamics determined by Eqs. (56)–(57) restricted to the momentum level $p_k = p$ is equivalent to the dynamics of Eqs. (58)–(59) restricted to the momentum level $p_k = \lambda$ if and only if the matching conditions

$$\sigma = -\frac{1}{\kappa\gamma}\,, \qquad \lambda = \frac{p}{1 + \kappa\gamma}$$

hold.

Numerical example: Simulating the behavior of the discrete controlled Lagrangian system involves viewing Eqs. (58)–(59) as an implicit update map $(q_{k-2}, q_{k-1}) \mapsto (q_{k-1}, q_k)$. This presupposes that the initial conditions are given in the form $(q_0, q_1)$; however, it is generally preferable to specify the initial conditions as $(q_0, \dot q_0)$. This is achieved by solving the boundary condition

$$\frac{\partial L}{\partial \dot q}(q_0, \dot q_0) + D_1 L_d(q_0, q_1) + F^-_d(q_0, q_1) = 0$$

for $q_1$. Once the initial conditions are expressed in the form $(q_0, q_1)$, the discrete evolution can be obtained using the implicit update map.
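A minimal sketch of this procedure is given below, assuming a generic discrete Lagrangian Ld(q_prev, q_next) supplied as a Python callable on vector arguments and using finite-difference partial derivatives; all function names are hypothetical. Substituting the discrete controlled Lagrangian $L_{\tau,\sigma,d}$ for Ld gives the unforced update (58)–(59), in which case the forcing term $F^-_d$ in the boundary condition is absent (as assumed in first_step).

import numpy as np
from scipy.optimize import fsolve

def grad(f, x, eps=1e-7):
    # forward-difference gradient of a scalar function of a vector argument
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        dx = np.zeros_like(x, dtype=float); dx[i] = eps
        g[i] = (f(x + dx) - f(x)) / eps
    return g

def first_step(L, Ld, q0, qdot0, h):
    # solve dL/dqdot(q0, qdot0) + D1 Ld(q0, q1) = 0 for q1 (unforced case)
    p0 = grad(lambda v: L(q0, v), qdot0)
    return fsolve(lambda q1: p0 + grad(lambda q: Ld(q, q1), q0), q0 + h * qdot0)

def step(Ld, q_prev, q_curr):
    # implicit update (q_{k-1}, q_k) -> q_{k+1}:
    # D1 Ld(q_k, q_{k+1}) + D2 Ld(q_{k-1}, q_k) = 0
    D2_prev = grad(lambda q: Ld(q_prev, q), q_curr)
    return fsolve(lambda q_next: grad(lambda q: Ld(q, q_next), q_curr) + D2_prev,
                  2.0 * q_curr - q_prev)   # initial guess: linear extrapolation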

We first consider the case of kinetic shaping on a level surface, when $\kappa$ is twice the critical value, and without dissipation. Here, $h = 0.05\ \mathrm{s}$, $m = 0.14\ \mathrm{kg}$, $M = 0.44\ \mathrm{kg}$, and $l = 0.215\ \mathrm{m}$. As shown in Fig. 7, the $\theta$ dynamics is stabilized, but since there is no dissipation, the oscillations are sustained. The $s$ dynamics exhibits both a drift and oscillations, as potential shaping is necessary to stabilize the translational dynamics.

Discrete Control Systems, Figure 7: Discrete controlled dynamics with kinetic shaping and without dissipation. The discrete controlled system stabilizes the $\theta$ motion about the equilibrium, but the $s$ dynamics is not stabilized; since there is no dissipation, the oscillations are sustained

Future Directions

Discrete Receding Horizon Optimal Control: The existing work on discrete optimal control has been primarily focused on constructing the optimal trajectory in an open-loop sense. In practice, model uncertainty and actuation errors necessitate the use of feedback control, and it would be interesting to extend the existing work on optimal control of discrete systems to the feedback setting by adopting a receding horizon approach.

Discrete State Estimation: In feedback control, one typically assumes complete knowledge regarding the state of the system, an assumption that is often unrealistic in practice. The general problem of state estimation in the context of discrete mechanics would rely on good numerical methods for quantifying the propagation of uncertainty by solving the Liouville equation, which describes the evolution of a phase space distribution function advected by a prescribed vector field. In the setting of Hamiltonian systems, the Liouville equation can be solved by the method of characteristics (Scheeres et al. [38]). This implies that a collocational approach (Xiu [39]) combined with Lie group variational integrators, and interpolation based on noncommutative harmonic analysis on Lie groups, could yield an efficient means of propagating uncertainty, and serve as the basis of a discrete state estimation algorithm.

Forced Symplectic–Energy-Momentum Variational Integrators: One of the motivations for studying the control of Lagrangian systems using the method of controlled Lagrangians is that the method provides a natural candidate Lyapunov function to study the global stability properties of the controlled system. In the discrete theory, this approach is complicated by the fact that the energy of a discrete Lagrangian system is not exactly conserved, but rather oscillates in a bounded fashion.

This can be addressed by considering the symplectic-energy-momentum [20] analogue of the discrete Lagrange–d'Alembert principle,

$$\delta \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}, h_k) = \sum_{k=0}^{N-1} \left[ F^-_d(q_k, q_{k+1}, h_k)\cdot\delta q_k + F^+_d(q_k, q_{k+1}, h_k)\cdot\delta q_{k+1} \right],$$

where the timestep $h_k$ is allowed to vary, and is chosen to satisfy the variational principle. The variations in $h_k$ yield an Euler–Lagrange equation that reduces to the conservation of discrete energy in the absence of external forces. By developing a theory of controlled Lagrangians around a geometric integrator based on the symplectic-energy-momentum version of the Lagrange–d'Alembert principle, one would potentially be able to use Lyapunov techniques to study the global stability of the resulting numerical control algorithms.

Acknowledgments

TL and ML have been supported in part by NSF Grants DMS-0504747 and DMS-0726263. TL and NHM have been supported in part by NSF Grants ECS-0244977 and CMS-0555797.

Bibliography

Primary Literature

1. Auckly D, Kapitanski L, White W (2000) Control of nonlinear underactuated systems. Commun Pure Appl Math 53:354–369

2. Benettin G, Giorgilli A (1994) On the Hamiltonian interpolation of near to the identity symplectic mappings with application to symplectic integration algorithms. J Stat Phys 74:1117–1143

3. Betts JT (2001) Practical Methods for Optimal Control Using Nonlinear Programming. SIAM, Philadelphia, PA

4. Bloch AM (2003) Nonholonomic Mechanics and Control. In: Interdisciplinary Applied Mathematics, vol 24. Springer, New York


5. Bloch AM, Leonard N, Marsden JE (1997) Matching and stabilization using controlled Lagrangians. In: Proceedings of the IEEE Conference on Decision and Control. Hyatt Regency San Diego, San Diego, CA, 10–12 December 1997, pp 2356–2361

6. Bloch AM, Leonard N, Marsden JE (1998) Matching and stabilization by the method of controlled Lagrangians. In: Proceedings of the IEEE Conference on Decision and Control. Hyatt Regency Westshore, Tampa, FL, 16–18 December 1998, pp 1446–1451

7. Bloch AM, Leonard N, Marsden JE (1999) Potential shaping and the method of controlled Lagrangians. In: Proceedings of the IEEE Conference on Decision and Control. Crowne Plaza Hotel and Resort, Phoenix, AZ, 7–10 December 1999, pp 1652–1657

8. Bloch AM, Leonard NE, Marsden JE (2000) Controlled Lagrangians and the stabilization of mechanical systems I: The first matching theorem. IEEE Trans Autom Contr 45:2253–2270

9. Bloch AM, Chang DE, Leonard NE, Marsden JE (2001) Controlled Lagrangians and the stabilization of mechanical systems II: Potential shaping. IEEE Trans Autom Contr 46:1556–1571

10. Bloch AM, Leok M, Marsden JE, Zenkov DV (2005) Controlled Lagrangians and stabilization of the discrete cart-pendulum system. In: Proceedings of the IEEE Conference on Decision and Control. Melia Seville, Seville, Spain, 12–15 December 2005, pp 6579–6584

11. Bloch AM, Leok M, Marsden JE, Zenkov DV (2006) Controlled Lagrangians and potential shaping for stabilization of discrete mechanical systems. In: Proceedings of the IEEE Conference on Decision and Control. Manchester Grand Hyatt, San Diego, CA, 13–15 December 2006, pp 3333–3338

12. Bryson AE, Ho Y (1975) Applied Optimal Control. Hemisphere, Washington, D.C.

13. Chang D-E, Bloch AM, Leonard NE, Marsden JE, Woolsey C (2002) The equivalence of controlled Lagrangian and controlled Hamiltonian systems. Control Calc Var (special issue dedicated to Lions JL) 8:393–422

14. Hairer E, Lubich C, Wanner G (2006) Geometric Numerical Integration, 2nd edn. Springer Series in Computational Mathematics, vol 31. Springer, Berlin

15. Hamberg J (1999) General matching conditions in the theory of controlled Lagrangians. In: Proceedings of the IEEE Conference on Decision and Control. Crowne Plaza Hotel and Resort, Phoenix, AZ, 7–10 December 1999, pp 2519–2523

16. Hamberg J (2000) Controlled Lagrangians, symmetries and conditions for strong matching. In: Lagrangian and Hamiltonian Methods for Nonlinear Control. Elsevier, Oxford

17. Hussein II, Leok M, Sanyal AK, Bloch AM (2006) A discrete variational integrator for optimal control problems in SO(3). In: Proceedings of the IEEE Conference on Decision and Control. Manchester Grand Hyatt, San Diego, CA, 13–15 December 2006, pp 6636–6641

18. Iserles A, Munthe-Kaas H, Nørsett SP, Zanna A (2000) Lie-group methods. In: Acta Numerica, vol 9. Cambridge University Press, Cambridge, pp 215–365

19. Junge O, Marsden JE, Ober-Blöbaum S (2005) Discrete mechanics and optimal control. In: IFAC Congress, Prague, 3–8 July 2005

20. Kane C, Marsden JE, Ortiz M (1999) Symplectic-energy-momentum preserving variational integrators. J Math Phys 40(7):3353–3371

21. Kane C, Marsden JE, Ortiz M, West M (2000) Variational integrators and the Newmark algorithm for conservative and dissipative mechanical systems. Int J Numer Meth Eng 49(10):1295–1325

22. Kelley CT (1995) Iterative Methods for Linear and Nonlinear Equations. SIAM, Philadelphia, PA

23. Lee T, Leok M, McClamroch NH (2005) Attitude maneuvers of a rigid spacecraft in a circular orbit. In: Proceedings of the American Control Conference. Portland Hilton, Portland, OR, 8–10 June 2005, pp 1742–1747

24. Lee T, Leok M, McClamroch NH (2005) A Lie group variational integrator for the attitude dynamics of a rigid body with applications to the 3D pendulum. In: Proceedings of the IEEE Conference on Control Applications. Toronto, Canada, 28–31 August 2005, pp 962–967

25. Lee T, Leok M, McClamroch NH (2006) Optimal control of a rigid body using geometrically exact computations on SE(3). In: Proceedings of the IEEE Conference on Decision and Control. Manchester Grand Hyatt, San Diego, CA, 13–15 December 2006, pp 2710–2715

26. Lee T, Leok M, McClamroch NH (2007) Lie group variational integrators for the full body problem. Comput Method Appl Mech Eng 196:2907–2924

27. Lee T, Leok M, McClamroch NH (2007) Lie group variational integrators for the full body problem in orbital mechanics. Celest Mech Dyn Astron 98(2):121–144

28. Leimkuhler B, Reich S (2004) Simulating Hamiltonian Dynamics. Cambridge Monographs on Applied and Computational Mathematics, vol 14. Cambridge University Press, Cambridge

29. Leok M (2004) Foundations of Computational Geometric Mechanics. PhD thesis, California Institute of Technology

30. Marsden JE, Ratiu TS (1999) Introduction to Mechanics and Symmetry, 2nd edn. Texts in Applied Mathematics, vol 17. Springer, New York

31. Marsden JE, West M (2001) Discrete mechanics and variational integrators. In: Acta Numerica, vol 10. Cambridge University Press, Cambridge, pp 317–514

32. Marsden JE, Pekarsky S, Shkoller S (1999) Discrete Euler–Poincaré and Lie–Poisson equations. Nonlinearity 12(6):1647–1662

33. Maschke B, Ortega R, van der Schaft A (2001) Energy-based Lyapunov functions for forced Hamiltonian systems with dissipation. IEEE Trans Autom Contr 45:1498–1502

34. Moser J, Veselov AP (1991) Discrete versions of some classical integrable systems and factorization of matrix polynomials. Commun Math Phys 139:217–243

35. Ortega R, Spong MW, Gómez-Estern F, Blankenstein G (2002) Stabilization of a class of underactuated mechanical systems via interconnection and damping assignment. IEEE Trans Autom Contr 47:1218–1233

36. Sanz-Serna JM (1992) Symplectic integrators for Hamiltonian problems: an overview. In: Acta Numerica, vol 1. Cambridge University Press, Cambridge, pp 243–286

37. Scheeres DJ, Fahnestock EG, Ostro SJ, Margot JL, Benner LAM, Broschart SB, Bellerose J, Giorgini JD, Nolan MC, Magri C, Pravec P, Scheirich P, Rose R, Jurgens RF, De Jong EM, Suzuki S (2006) Dynamical configuration of binary near-Earth asteroid (66391) 1999 KW4. Science 314:1280–1283

38. Scheeres DJ, Hsiao F-Y, Park RS, Villac BF, Maruskin JM (2006) Fundamental limits on spacecraft orbit uncertainty and distribution propagation. J Astronaut Sci 54:505–523


39. Xiu D (2007) Efficient collocational approach for parametric uncertainty analysis. Comm Comput Phys 2:293–309

40. Zenkov DV, Bloch AM, Leonard NE, Marsden JE (2000) Matching and stabilization of low-dimensional nonholonomic systems. In: Proceedings of the IEEE Conference on Decision and Control. Sydney Convention and Exhibition Centre, Sydney, NSW, Australia, 12–15 December 2000, pp 1289–1295

41. Zenkov DV, Bloch AM, Leonard NE, Marsden JE (2002) Flat nonholonomic matching. In: Proceedings of the American Control Conference. Hilton Anchorage, Anchorage, AK, 8–10 May 2002, pp 2812–2817

Books and Reviews

Bloch AM (2003) Nonholonomic Mechanics and Control. Interdisciplinary Applied Mathematics, vol 24. Springer

Bullo F, Lewis AD (2005) Geometric control of mechanical systems. Texts in Applied Mathematics, vol 49. Springer, New York

Hairer E, Lubich C, Wanner G (2006) Geometric Numerical Integration, 2nd edn. Springer Series in Computational Mathematics, vol 31. Springer, Berlin

Iserles A, Munthe-Kaas H, Nørsett SP, Zanna A (2000) Lie-group methods. In: Acta Numerica, vol 9. Cambridge University Press, Cambridge, pp 215–365

Leimkuhler B, Reich S (2004) Simulating Hamiltonian Dynamics. Cambridge Monographs on Applied and Computational Mathematics, vol 14. Cambridge University Press, Cambridge

Marsden JE, Ratiu TS (1999) Introduction to Mechanics and Symmetry, 2nd edn. Texts in Applied Mathematics, vol 17. Springer

Marsden JE, West M (2001) Discrete mechanics and variational integrators. In: Acta Numerica, vol 10. Cambridge University Press, Cambridge, pp 317–514

Sanz-Serna JM (1992) Symplectic integrators for Hamiltonian problems: an overview. In: Acta Numerica, vol 1. Cambridge University Press, Cambridge, pp 243–286

Disordered Elastic Media

THIERRY GIAMARCHI
DPMC-MaNEP, University of Geneva, Geneva, Switzerland

Article Outline

Glossary
Definition of the Subject
Introduction
Static Properties of Disordered Elastic Media
Pinning and Dynamics
Future Directions
Bibliography

Glossary

Pinning An action exerted by impurities on an object. The object has a preferential position in space, and will move in response to an external force only if this force is large enough.

Scaling The fact that each of two quantities varies in a power-law relationship to the other.

Random manifold A single elastic structure (line, sheet) embedded in a random environment.

Bragg glass A periodic elastic structure embedded in a weakly disordered environment, nearly as ordered as a solid but exhibiting some characteristics normally associated with glasses.

Creep Very slow motion, at finite temperature, of a pinned structure in response to an external force.

Definition of the Subject

Many seemingly different systems, with extremely different microscopic physics, ranging from magnets to superconductors, share the same essential ingredients and can be described under the unifying concept of disordered elastic media. In all these systems, an internal elastic structure, such as an interface between regions of opposite magnetization in magnetic systems, is subject to the effects of disorder existing in the material. An especially interesting feature of all these systems is that the disordered elastic structures can be set in motion by applying an external force to them (e.g. a magnetic field sets a magnetic interface in motion), and that motion is drastically affected by the presence of the disorder. What properties result from this competition between elasticity and disorder is a complicated problem which constitutes the essence of the physics of disordered elastic media. The resulting physics presents characteristics similar to those of glasses. This poses extremely challenging fundamental questions in determining the static and dynamic properties of these systems. Understanding both the static and dynamic properties of these objects is not only an important question from a fundamental point of view, but also has strong practical applications. Indeed, being able to write an interface between two regions of magnetization or polarization, and the speed of writing and stability of such regions, is what conditions, in particular, our ability to store information in such systems, as (for example) recordings on a magnetic hard drive. The physics pertaining to disordered elastic media directly conditions how we can use these systems for practical applications.

Introduction

Understanding the statics and dynamics of elastic systems in a random environment is a long-standing problem with important applications for a host of experimental systems. Such problems can be split into two broad categories: (a)