The fields of adaptive control and machine learning have evolved in parallel over the past few decades, with significant overlap in goals, problem statements, and tools. Machine learning as a field has focused on computer-based systems that improve and learn through experience. Often the process of learning is encapsulated in a parameterized model, such as a neural network, whose weights are trained in order to approximate a function. The field of adaptive control, on the other hand, has focused on controlling engineering systems so as to accomplish regulation and tracking of critical variables of interest. Learning is embedded in this process via online estimation of the underlying parameters. In comparison to machine learning, adaptive control often focuses on limited-data problems where fast, online performance is critical. In both fields, learning occurs through the use of input-output data, and the approach used for updating the parameters is often based on gradient-descent-like and other iterative algorithms. The tools of analysis for convergence and robustness in the two fields likewise have a tremendous amount of similarity. As the scope of problems in both areas grows, the associated complexity and challenges grow as well. In order to address learning and decision-making in real time, it is essential to understand these similarities and connections so as to develop new methods, tools, and algorithms.
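To make the shared structure concrete, the sketch below shows a gradient-descent-like parameter update driven by input-output data: online estimation of an unknown linear map, the simplest setting where both the machine-learning and adaptive-control viewpoints apply. All names and gains are our own illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])  # "true" parameters, unknown to the estimator
theta = np.zeros(2)                 # parameter estimate
gamma = 0.1                         # adaptation gain (learning rate)

for _ in range(2000):
    x = rng.normal(size=2)          # regressor (input data), persistently exciting
    y = theta_true @ x              # measured output
    e = theta @ x - y               # prediction error
    theta -= gamma * e * x          # gradient step on the instantaneous loss (1/2) e**2
```

Read as machine learning, this is stochastic gradient descent on a squared-error loss; read as adaptive control, it is a classical gradient adaptive law, and with persistently exciting regressors the estimate converges to the true parameters.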
This talk will examine the similarities and interconnections between adaptive control and the optimization methods commonly employed in machine learning. Concepts in stability, performance, and learning that are common to both fields will be discussed. Building on the similarities in update laws and shared concepts, new intersections and opportunities for improved algorithm analysis will be explored. High-order tuners and time-varying learning rates have been employed in adaptive control, leading to very interesting results for dynamic systems with delays. We will explore how these methods can be leveraged to yield provably correct methods for learning in real time with guaranteed fast convergence. Examples will be drawn from a range of engineering applications.
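As a rough illustration of the ingredients named above, the sketch below combines a time-varying (normalized) learning rate with a higher-order update: an auxiliary variable takes normalized gradient steps while the parameter estimate is low-pass filtered toward it. This is a simplified sketch in the spirit of high-order tuners, under our own choice of gains and structure, not the exact algorithm discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1.5, -0.5])      # unknown parameters of a linear model
theta = np.zeros(2)                     # parameter estimate
nu = np.zeros(2)                        # auxiliary (filtered) variable
gamma, beta, mu = 0.5, 0.5, 1.0         # illustrative gains (assumed values)

for _ in range(3000):
    x = rng.normal(size=2)              # regressor
    y = theta_true @ x                  # measured output
    Nt = 1.0 + mu * (x @ x)             # time-varying normalization signal
    grad = (theta @ x - y) * x          # gradient of the instantaneous loss (1/2) e**2
    nu = nu - (gamma / Nt) * grad       # normalized gradient step on the auxiliary variable
    theta = theta + beta * (nu - theta) # filter theta toward nu (the "high-order" dynamics)
```

The normalization `Nt` caps the effective step size when the regressor is large, which is one mechanism by which such schemes retain stability guarantees; the extra filtering state gives the momentum-like behavior associated with accelerated convergence.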
Dr. Anuradha Annaswamy is Founder and Director of the Active-Adaptive Control Laboratory in the Department of Mechanical Engineering at MIT. Her research interests span adaptive control theory and its applications to aerospace, automotive, and propulsion systems, as well as cyber-physical systems such as Smart Grids, Smart Cities, and Smart Infrastructures. Her current research team of 15 students and post-docs is supported by the US Air Force Research Laboratory, the US Department of Energy, Boeing, the Ford-MIT Alliance, and NSF. She has received best-paper awards (Axelby; CSM), Distinguished Member and Distinguished Lecturer awards from the IEEE Control Systems Society (CSS), and a Presidential Young Investigator award from NSF. She is the author of a graduate textbook on adaptive control, co-editor of two vision documents on smart grids as well as two editions of the Impact of Control Technology report, and a member of the National Academy of Sciences Committee on the Future of Electric Power in the United States. She is a Fellow of IEEE and IFAC and served as President of CSS in 2020.