Dr. Anuradha Annaswamy
Lessons from Adaptive Control: Towards Real-time Machine Learning
9-10 am, Tuesday, May 25, 2021
The fields of adaptive control and machine learning have evolved in parallel over the past few decades, with significant overlap in goals, problem statements, and tools. Machine learning as a field has focused on computer-based systems that improve and learn through experience. Oftentimes the process of learning is encapsulated in a parameterized model, such as a neural network, whose weights are trained to approximate a function. The field of adaptive control, on the other hand, has focused on controlling engineering systems to accomplish regulation and tracking of critical variables of interest. Learning is embedded in this process via online estimation of the underlying parameters. In comparison to machine learning, adaptive control often focuses on limited-data problems where fast online performance is critical. Whether in machine learning or adaptive control, this learning occurs through the use of input-output data, and in both cases the approach used for updating the parameters is often based on gradient-descent-like and other iterative algorithms. The related tools of analysis, convergence, and robustness in the two fields bear a tremendous amount of similarity. As the scope of problems in both topics increases, the associated complexity and challenges increase as well. To address learning and decision-making in real time, it is essential to understand these similarities and connections in order to develop new methods, tools, and algorithms.
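The shared update law mentioned above can be made concrete with a minimal sketch. Both fields adjust a parameter estimate from input-output data with a gradient-descent-like step; here the unknown parameters of a linear-in-parameters model y = theta^T phi are estimated online. All names and gains are illustrative assumptions, not from the talk.

```python
import numpy as np

# Minimal sketch (illustrative, not the speaker's algorithm): estimate the
# unknown parameters of y = theta^T phi online from streaming data.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])  # unknown "plant" parameters
theta_hat = np.zeros(3)                  # online estimate
gamma = 0.05                             # learning rate / adaptation gain

for t in range(4000):
    phi = rng.normal(size=3)             # regressor (input features)
    y = theta_true @ phi                 # measured output
    e = theta_hat @ phi - y              # prediction error
    theta_hat -= gamma * e * phi         # gradient step on 0.5 * e**2
```

Read one way, this is an online parameter-adaptation law driven by the estimation error; read the other way, it is stochastic gradient descent on a squared loss — the same iteration viewed from the two fields.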
This talk will examine the similarities and interconnections between adaptive control and the optimization methods commonly employed in machine learning. Concepts in stability, performance, and learning common to both fields will be discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis will be explored. High-order tuners and time-varying learning rates have been employed in adaptive control, leading to very interesting results in dynamic systems with delays. We will explore how these methods can be leveraged to yield provably correct methods for real-time learning with guaranteed fast convergence. Examples will be drawn from a range of engineering applications.
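To give a flavor of the two ingredients named above, here is a generic sketch — not the speaker's specific algorithm — in which a "high-order" tuner carries an extra momentum state alongside the parameter estimate, and the learning rate is made time-varying by normalizing with the regressor norm, as in classical normalized gradient laws. All names and gains are illustrative assumptions.

```python
import numpy as np

# Generic sketch (illustrative, not the talk's algorithm): momentum state v
# makes the tuner "high-order"; normalization makes the rate time-varying.
rng = np.random.default_rng(1)
theta_true = np.array([1.0, -2.0])   # unknown parameters of y = theta^T phi
theta_hat = np.zeros(2)              # online estimate
v = np.zeros(2)                      # momentum: the extra higher-order state
gamma, beta = 0.1, 0.9               # base gain and momentum coefficient

for t in range(4000):
    phi = rng.normal(size=2)              # regressor
    e = (theta_hat - theta_true) @ phi    # prediction error
    step = gamma / (1.0 + phi @ phi)      # time-varying, normalized rate
    v = beta * v + e * phi                # filtered gradient of 0.5 * e**2
    theta_hat -= step * v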
Dr. Anuradha Annaswamy is Founder and Director of the Active-Adaptive Control Laboratory in the Department of Mechanical Engineering at MIT. Her research interests span adaptive control theory and its applications to aerospace, automotive, and propulsion systems as well as cyber physical systems such as Smart Grids, Smart Cities, and Smart Infrastructures. Her current research team of 15 students and post-docs is supported at present by the US Air-Force Research Laboratory, US Department of Energy, Boeing, Ford-MIT Alliance, and NSF. She has received best paper awards (Axelby; CSM), Distinguished Member and Distinguished Lecturer awards from the IEEE Control Systems Society (CSS) and a Presidential Young Investigator award from NSF. She is the author of a graduate textbook on adaptive control, co-editor of two vision documents on smart grids as well as two editions of the Impact of Control Technology report, and a member of the National Academy of Sciences Committee on the Future of Electric Power in the United States. She is a Fellow of IEEE and IFAC. She was the President of CSS in 2020.
Dr. Julia Badger
Autonomous Control and the Future of Human Spaceflight
9-10 am, Friday, May 28, 2021
As humans look to explore the solar system beyond low Earth orbit, the technology advancements required point heavily towards autonomy. The operation of complex human spacecraft has thus far been solved with heavy human involvement- full ground control rooms and nearly constantly inhabited spacecraft. As the goal of space exploration moves to beyond the International Space Station, the physical and budgetary constraints of business as usual become overwhelming. A new paradigm of delivering spacecraft and other assets capable of self-maintenance and self-operation prior to launching crew solves many problems- and at the same time, it opens up an array of interesting control problems. This talk will focus on robotic and autonomous vehicle system control development efforts that support the new concepts of human exploration of the solar system.
Dr. Julia Badger is the Autonomy and Vehicle Systems Manager (VSM) system manager for the Gateway program at NASA-Johnson Space Center. She also serves as the Autonomous Systems Technical Discipline Lead for JSC. She is responsible for the research and development of autonomous system capabilities, on the Earth, the International Space Station, the Gateway, and for future exploration, that include dexterous manipulation, autonomous spacecraft control and caretaking, and human-robot interfaces. Julia has a BS from Purdue University, and an MS and PhD from the California Institute of Technology, all in Mechanical Engineering. Her work has been honored with several awards, including NASA Software of the Year, Early Career, Director’s Commendation, and Exceptional Achievement Awards.
Dr. Sam Coogan
Formal Assurances for Autonomous Systems from Fast Reachability
9-10 am, Wednesday, May 26, 2021
Reachability analysis, which considers computing or approximating the set of future states attainable by a dynamical system over a time horizon, is receiving increased attention motivated by new challenges in, e.g., learning-enabled systems, assured and safe autonomy, and formal methods in control systems. Such challenges require new approaches that scale well with system size, accommodate uncertainties, and can be computed efficiently for in-the-loop or frequent computation. In this talk, we present and demonstrate a suite of tools for efficiently over-approximating reachable sets of nonlinear systems based on the theory of mixed monotone dynamical systems. A system is mixed monotone if its vector field or update map is decomposable into an increasing component and a decreasing component. This decomposition allows for constructing an embedding system with twice the states such that a single trajectory of the embedding system provides hyperrectangular over-approximations of reachable sets for the original dynamics. This efficiency can be harnessed, for example, to compute finite abstractions for tractable formal control verification and synthesis or to embed reachable set computations in the control loop for runtime safety assurance. We demonstrate these ideas on several examples, including an application to safe quadrotor flight that combines runtime reachable set computations with control barrier functions implemented on embedded hardware.
Dr. Sam Coogan is an assistant professor at Georgia Tech with a joint appointment in the School of Electrical and Computer Engineering and the School of Civil and Environmental Engineering. Prior to joining Georgia Tech in 2017, he was an assistant professor in the Electrical Engineering Department at UCLA from 2015-2017. He received the B.S. degree in Electrical Engineering from Georgia Tech and the M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Berkeley. His research is in the area of dynamical systems and control and focuses on developing scalable tools for verification and control of networked, cyber-physical systems with an emphasis on transportation systems. He received the Outstanding Paper Award for the IEEE Transactions on Control of Network Systems in 2017, a CAREER award from the National Science Foundation in 2018, a Young Investigator Award from the Air Force Office of Scientific Research in 2018, and the Donald P Eckman Award from the American Automatic Control Council in 2020.
Dr. David Woods
The Discovery of Graceful Extensibility Reframes the Pursuit of Autonomy and Addresses the Brittleness Problem
9-10 am, Thursday, May 27, 2021
Since 1987 I have highlighted how attempts to deploy autonomous capabilities into complex, risky worlds of practice have been hampered by brittleness — descriptively, a sudden collapse in performance when events challenge system boundaries. This constraint has been downplayed on the grounds that the next advance in AI, algorithms, or control theory will lead to the deployment of systems that escape from brittle limits. However, the world keeps providing examples of brittle collapse such as the 2003 Columbia Space Shuttle accident or this years’ Texas energy collapse. Resilience Engineering, drawing on multiple sources including safety of complex systems, biological systems, & joint human-autonomy systems, discovered that (a) brittleness is a fundamental risk and (b) all adaptive systems develop means to mitigate that risk through sources for resilient performance.
The fundamental discovery, covering biological, cognitive, and human systems, is that all adaptive systems at all scales have to possess the capacity for graceful extensibility. Viability of a system, in the long run, requires the ability to gracefully extend or stretch at the boundaries as challenges occur. To put the constraint simply, viability requires extensibility, because all systems have limits and regularly experience surprise at those boundaries due to finite resources and continuous change (Woods, 2015; 2018; 2019).
The problem is that development of automata consistently ignores this constraint. As a result, we see repeated demonstrations of the empirical finding: systems-as-designed are more brittle than stakeholders realize, but fail less often as people in various roles adapt to fill shortfalls and stretch system performance in the face of smaller & larger surprises. (Some) people in some roles are the ad hoc source of the necessary graceful extensibility.
The promise comes from the science behind Resilience Engineering which highlights paths to build systems with graceful extensibility, especially systems that utilize new autonomous capabilities. Even better, designing systems with graceful extensibility draws on basic concepts in control engineering, though these are reframed substantially when combined with findings on adaptive systems from biology, cognitive work, organized complexity, and sociology.
Dr. David Woods is a Professor in the Department of Integrated Systems Engineering at the Ohio State University (PhD, Purdue University) has worked to improve systems safety in high risk complex settings for 40 years. These include studies of human coordination with automated and intelligent systems (see: https://youtu.be/b8xEpjW0Sqk and https://youtu.be/as0LipGTm5s) and accident investigations in aviation, nuclear power, critical care medicine, crisis response, military operations, and space operations. He developed Resilience Engineering on the dangers of brittle systems and the need to invest in sustaining sources of resilience beginning in 2000-2003 as part of the response to several NASA accidents. His results on proactive safety and resilience are in the book Resilience Engineering (2006) — see https://www.youtube.com/watch?v=GnVXfgC-5Jw&t=12s. The results of this work on how complex human-machine systems succeed and sometimes fail has been cited over 35,000 times (H-index > 91)
He developed the first comprehensive theory on how systems can build the potential for resilient performance despite complexity. Recently, he started the SNAFU Catchers Consortium, an industry-university partnership to build resilience in critical digital services (see https://snafucatchers.github.io).
He is Past-President of the Resilience Engineering Association and Past-President of the Human Factors and Ergonomics Society. He has received many awards including the Laurels Award from Aviation Week and Space Technology (1995), IBM Faculty Award, Google Faculty Award, Ely Best Paper Award and Kraft Innovator Award from the Human Factors and Ergonomic Society, the Jimmy Doolittle Fellow Award from the Air Force Association (2012).
He provides advice to many government agencies, companies in the US and internationally such as, US National Research Council on Dependable Software (2006), US National Research Council on Autonomy in Civil Aviation (2014), the FAA Human Factors and Cockpit Automation Team (1996; and its reprise in 2013), the Defense Science Board Task Force on Autonomy (2012), and he was an advisor to the Columbia Accident Investigation Board.