Workshop

W2: Cognition and Learning in Control Theory (0433)

Pramod P. Khargonekar, Jemin George and He Bai, Nikolai Matni, Kyriakos Vamvoudakis, Aranya Chakrabortty

Date & Time

-

Description

Organizer: Aranya Chakrabortty, North Carolina State University, Raleigh, NC

The objective of this workshop is to start a dialogue on how cognitive science and learning theory can benefit control theory, and vice versa. We intend to develop a clear understanding of why the value of data has traditionally been under-utilized and under-emphasized in the controls community, what new dimensions control theory can gain from data science, machine learning, neural networks, cognition, and psychology, and what primary analytical and experimental tools are needed to make this marriage successful. A group of distinguished control theorists working on various aspects and applications of data-driven algorithms has been invited to give talks on their recent research findings on this subject. The discussions will span the underlying fundamentals of control, optimization, state estimation, system identification, inference, and learning, with applications ranging from power grids to transportation to smart cities. The importance of the security and privacy of data in each of these domains, and their impact on public benefit, will be emphasized.

Neuro-Cognitive Science Inspired Directions in Learning for Control

Pramod P. Khargonekar

Abstract: We will discuss selected topics from neuroscience and cognitive science that hold potential for future research directions in learning for control. The main motivation is that the human brain has very impressive abilities to perceive, learn, make decisions, deal with uncertainty and environmental changes, and achieve goals. In recent decades, a great deal of progress has been made in both neuroscience and cognitive science in these areas, and this progress has had a large influence on modern machine learning. The first half of the presentation will begin with a very brief summary of the connectionist versus symbolic approaches to human cognition. Next, we will highlight key elements of cognition: perception, attention, memory, problem solving, and knowledge representation, drawing connections to recent breakthrough advances in deep learning and reinforcement learning. We will also highlight the predictive brain hypothesis, which bears very strong connections to estimation and filtering in systems theory. Where applicable, we will speculate on the possible implications for control architectures and algorithms. In the second half of the presentation, we will discuss some recent results from our work with Deepan Muthirayan on (external) memory-augmented neural adaptive controllers and meta-learning for control. These results point the way toward new control architectures and algorithms for enabling new capabilities.
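The connection the abstract draws between the predictive brain hypothesis and estimation in systems theory can be made concrete with a standard Kalman filter, whose predict/correct cycle mirrors the predictive-coding view of perception. The sketch below is a minimal illustration of that analogy only, not material from the talk; all matrices are illustrative placeholders.

```python
import numpy as np

# Minimal Kalman filter: the "predict" step plays the role of the brain's
# forward model; the innovation (prediction error) drives the "update" step,
# mirroring the predictive-coding view of perception.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # internal dynamics model (placeholder)
C = np.array([[1.0, 0.0]])               # observation (sensory) model
Q = 0.01 * np.eye(2)                     # process-noise covariance
R = np.array([[0.1]])                    # sensory-noise covariance

def kalman_step(x, P, y):
    """One predict/correct cycle given prior (x, P) and measurement y."""
    # Predict: the forward model anticipates the next state and sensation.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Prediction error ("surprise") between actual and predicted sensation.
    innovation = y - C @ x_pred
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)  # gain weighs the prediction error
    # Correct: belief is revised in proportion to the prediction error.
    return x_pred + K @ innovation, (np.eye(2) - K @ C) @ P_pred

# Example: track a noisy ramp signal.
rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
for t in range(50):
    y = np.array([0.1 * t + rng.normal(scale=0.3)])
    x, P = kalman_step(x, P, y)
print("final state estimate:", x)
```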

Fast Reinforcement Learning Control using Decomposition and Hierarchical Approximation

Jemin George and He Bai

Abstract: Designing the optimal linear quadratic regulator (LQR) for a large-scale multi-agent system (MAS) is time-consuming, since it involves solving a large-size matrix Riccati equation. The situation is further exacerbated when the design must be done in a model-free way using schemes such as reinforcement learning (RL). To reduce this computational complexity, in this talk we will present a way to decompose the large-scale LQR design problem into multiple sets of smaller-size LQR design problems. Considering an objective function specified over an undirected graph, we cast the decomposition as a graph clustering problem. The graph is decomposed into two parts: one consisting of multiple decoupled subgroups of connected components, and the other containing the edges that connect the different subgroups. Accordingly, the resulting controller has a hierarchical structure consisting of two components. The first component optimizes the performance of each decoupled subgroup by solving the smaller-size LQR design problems in a model-free way using an RL algorithm. The second component accounts for the objective coupling the different subgroups, which is achieved by solving a least-squares problem in one shot. Although suboptimal, the hierarchical controller adheres to a particular structure, as specified by the inter-agent coupling in the objective function and by the decomposition strategy. Mathematical formulations will be established to find a decomposition that minimizes the required communication links or reduces the optimality gap. Numerical simulations, including a video demo of multi-agent target tracking, will be provided to highlight the pros and cons of the proposed designs.
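As a rough, hypothetical sketch of the decomposition idea, the snippet below solves one small LQR problem per decoupled subgroup and assembles a block-diagonal gain. Model-based Riccati solves stand in for the talk's model-free RL step, and the second (coupling) component is only indicated in comments; the subgroup dynamics are placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, block_diag

def subgroup_lqr(A, B, Q, R):
    """Solve one small LQR problem (continuous-time ARE) for a subgroup.
    In the talk's setting this solve would be replaced by a model-free
    RL algorithm; the ARE is used here only for illustration."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)   # K = R^{-1} B^T P

# Hypothetical example: three decoupled subgroups of double integrators.
subgroups = []
for _ in range(3):
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    subgroups.append((A, B, np.eye(2), np.eye(1)))

# First hierarchical component: block-diagonal gain, one block per subgroup.
K_blocks = [subgroup_lqr(*sg) for sg in subgroups]
K_local = block_diag(*K_blocks)
print("hierarchical local gain shape:", K_local.shape)  # (3, 6)

# The second component, accounting for the edges that couple the subgroups,
# would be fit in one shot by least squares (e.g., np.linalg.lstsq) and
# added on top of K_local; it is omitted in this sketch.
```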

Robust Guarantees for Perception-Based Control

Nikolai Matni

Abstract: Motivated by vision-based control of autonomous systems, we consider the problem of controlling a known linear dynamical system for which partial state information, such as vehicle position, can only be extracted from high-dimensional data, such as an image. Our approach is to learn a perception map from high-dimensional data to a partial-state observation, along with its corresponding error profile, and to then design a robust controller and a corresponding safe set over which this error profile is provably valid. We show that this can be accomplished by integrating the learned perception map and error model into a novel robust control synthesis procedure that results in a perception-based control loop with favorable invariance and generalization properties. Throughout, we emphasize the importance of integrating robust learning (to characterize uncertainty in the learned perception map) and robust control (to mitigate the effects of this uncertainty) when designing learning-enabled control systems. Finally, we illustrate the usefulness of our approach through experimental validation on simulation and hardware platforms.
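A minimal, hypothetical sketch of the pipeline the abstract describes: fit a perception map from high-dimensional observations to position, profile its error on held-out data, and treat that bound as measurement uncertainty the controller must tolerate. The linear regression, the synthetic "images," and the simple safe-set tightening are illustrative stand-ins, not the talk's synthesis procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for "images": high-dimensional observations generated
# from a 2-D state by an unknown linear map plus noise.
n_train, n_holdout, dim_obs = 200, 100, 50
true_decoder = rng.normal(size=(dim_obs, 2))
states = rng.uniform(-1, 1, size=(n_train + n_holdout, 2))
obs = states @ true_decoder.T + 0.05 * rng.normal(size=(n_train + n_holdout, dim_obs))

# Step 1: learn a perception map (least squares from observation to state).
X_tr, y_tr = obs[:n_train], states[:n_train]
W, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Step 2: profile the perception error on held-out data; the max residual
# serves as a crude worst-case bound over the sampled region.
X_ho, y_ho = obs[n_train:], states[n_train:]
eps = np.linalg.norm(X_ho @ W - y_ho, axis=1).max()
print(f"estimated worst-case perception error: {eps:.3f}")

# Step 3: a robust design would treat eps as bounded measurement noise,
# e.g., shrink the safe set by eps so guarantees hold despite perception error.
safe_radius_robust = 1.0 - eps
print(f"robust safe-set radius after tightening: {safe_radius_robust:.3f}")
```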

Closed-loop On-off Adversarially Robust Reinforcement Learning

Kyriakos Vamvoudakis

Abstract: With the advent of efficient integration between communications, computation, and control, there is no doubt that autonomous technologies with reinforcement learning (RL) mechanisms will dominate future systems, providing a new source of revenue. But the increasing potential for threats exposes the need for new design principles to increase the resilience of autonomous systems that use RL-based agents for control. Given this projected growth, it is a rare and critical opportunity for the RL industry and defense manufacturers to stay ahead of adversaries in understanding how AI/machine-learning models can be exploited, in order to develop safer next-generation technologies. In this presentation, we will show a closed-loop "on-off" adversarial RL framework that maintains nominal performance in the absence of attacks and is robust to physically plausible attacks, i.e., white-box attacks, as well as black-box attacks where access to gradients is not feasible, in terms of data-driven controllability and observability, while guaranteeing closed-loop system stability, robustness, and safety. Current black-box and white-box attack strategies do not apply to RL-based control systems, require pre-training, and can be easily defended by existing defense mechanisms. We will then extend the adversarial surface of the RL mechanism so that, by the time an adversary discovers a vulnerability, the closed-loop RL responsible for safe, efficient, and effective control operation will have changed its surface area, rendering another exploit against that vulnerability ineffective. We will show cases with unavailable training data and with two threat models: action manipulation and observation manipulation.
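To make the two threat models named in the abstract concrete, the toy sketch below runs a fixed linear feedback loop under an observation-manipulation attack (corrupting what the controller sees) and an action-manipulation attack (corrupting what the actuator applies). The system, gain, and epsilon-bounded sign perturbations are illustrative placeholders, not the talk's attack or defense mechanisms.

```python
import numpy as np

# Toy closed loop: x_{t+1} = A x_t + B u_t with linear feedback u = -K x.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[5.0, 3.0]])               # stabilizing gain (placeholder)

def rollout(obs_attack=None, act_attack=None, T=100):
    """Simulate the loop under the two threat models from the talk:
    observation manipulation perturbs what the controller sees,
    action manipulation perturbs what the actuator applies."""
    x = np.array([1.0, 0.0])
    cost = 0.0
    for _ in range(T):
        y = x.copy()
        if obs_attack is not None:
            y = y + obs_attack(y)        # attacker corrupts the observation
        u = -K @ y
        if act_attack is not None:
            u = u + act_attack(u)        # attacker corrupts the action
        x = A @ x + (B @ u).ravel()
        cost += x @ x + float(u @ u)     # quadratic running cost
    return cost

# Bounded black-box perturbations (no gradient access assumed).
obs_atk = lambda y: 0.1 * np.sign(y)     # epsilon-bounded sensor spoofing
act_atk = lambda u: 0.05 * np.sign(u)    # epsilon-bounded actuator bias

print("nominal cost:      ", rollout())
print("observation attack:", rollout(obs_attack=obs_atk))
print("action attack:     ", rollout(act_attack=act_atk))
```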

Psychology-Driven Bayesian Models for Distributed Cognitive Control

Aranya Chakrabortty

Abstract: Response inhibition is an important act of control in many domains of psychology and neuroscience. The driving goal of response inhibition is to enable subjects to inhibit an ongoing action in response to a stop signal. The performance of subjects in this stop-signal task is understood as a race between a "go process" that underlies the action and a "stop process" that inhibits the action. Responses are inhibited if the stop process finishes before the go process. This talk will cover the fundamental concepts of these human-cognition models in the context of distributed control, and discuss how response inhibition can be used to develop efficient control laws by modeling the distributed controllers as proxies of human subjects. We will start with the classical response inhibition model introduced by Logan and Cowan (1984), followed by the more recent models of Logan and Van Zandt (2014) and Bayesian models that assume each controller is a stochastic accumulator governed by a diffusion process. The talk will conclude by highlighting the applications of these cognitive models for distributed control of large, complex, and spatially distributed human-in-the-loop cyber-physical systems.
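A minimal Monte Carlo sketch of the independent race model described above: the go and stop processes are simulated as drift-diffusion accumulators racing to a threshold, and a response is inhibited whenever the stop process finishes first. Parameter values are illustrative, not taken from the cited models.

```python
import numpy as np

rng = np.random.default_rng(3)

def first_passage(drift, noise, threshold, dt=0.001, max_t=2.0):
    """Finishing time of one diffusion accumulator: evidence drifts toward
    a threshold and the first crossing time is returned (np.inf if none)."""
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
        if x >= threshold:
            return t
    return np.inf

def stop_signal_trial(ssd, go_drift=1.5, stop_drift=3.0, noise=0.5, thr=1.0):
    """One stop-signal trial: the stop accumulator starts ssd seconds after
    the go accumulator; the response is inhibited if stop finishes first."""
    t_go = first_passage(go_drift, noise, thr)
    t_stop = ssd + first_passage(stop_drift, noise, thr)
    return t_stop < t_go   # True = response inhibited

# The probability of inhibition falls as the stop-signal delay (SSD) grows,
# the hallmark prediction of the race model.
for ssd in (0.1, 0.2, 0.3, 0.4):
    p = np.mean([stop_signal_trial(ssd) for _ in range(500)])
    print(f"SSD = {ssd:.1f} s -> P(inhibit) ~ {p:.2f}")
```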

A panel session will be held at the end of the workshop.

