The Fundamental Reliance of Iterative Learning Control on Stability Robustness

Authors: Richard W. Longman

Abstract:

Iterative learning control (ILC) aims to achieve zero tracking error of a specific command. It does so by iteratively adjusting the command given to a feedback control system, based on the tracking error observed in the previous iteration. One would like the iterations to converge to zero tracking error in spite of any error in the model used to design the learning law. This need for stability robustness is discussed first, followed by the need for robustness of the property that the transients are well behaved. Methods of producing the needed robustness to parameter variations and to singular perturbations are presented. A method involving reverse-time runs is then given that lets real-world behavior produce the ILC gains, eliminating the need for a mathematical model. Since the real world produces the gains, there is no issue of model error. Provided the world behaves linearly, the approach yields an ILC law with both stability robustness and good transient robustness, without the need to generate a model.

Keywords: Iterative learning control, stability robustness, monotonic convergence.
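The iterative update described in the abstract can be illustrated with a minimal sketch: a P-type ILC law that adds a gain times the previous repetition's one-step-ahead error to the command. The plant, gain, and trajectory below are illustrative assumptions, not taken from the paper, and the gain is computed from the assumed model, so this sketch is not the model-free reverse-time-run method.

```python
import numpy as np

# Hypothetical first-order discrete-time plant: y[k+1] = a*y[k] + b*u[k].
# All numbers here are illustrative.
a, b = 0.3, 0.5
N = 50
y_d = np.sin(np.linspace(0.0, 2.0 * np.pi, N))  # desired trajectory

def run_plant(u):
    """One repetition of the plant, restarted from rest each run."""
    y = np.zeros(N)
    for k in range(N - 1):
        y[k + 1] = a * y[k] + b * u[k]
    return y

u = np.zeros(N)          # command applied in the current iteration
g = 0.8 / b              # learning gain, chosen using the (assumed) model
errors = []
for _ in range(30):
    e = y_d - run_plant(u)           # tracking error of this repetition
    errors.append(np.max(np.abs(e)))
    u[:-1] += g * e[1:]              # u_{j+1}[k] = u_j[k] + g * e_j[k+1]

print(f"first-iteration error {errors[0]:.3f}, final error {errors[-1]:.2e}")
```

For these particular numbers the maximum error contracts monotonically from one repetition to the next; if the model used to pick `g` is wrong, convergence can be lost or the transients can grow large before decaying, which is precisely the robustness issue the paper addresses.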

DOI: https://doi.org/10.5281/zenodo.1333018

