Plenary

The Discovery of Graceful Extensibility Reframes the Pursuit of Autonomy and Addresses the Brittleness Problem

Dr. David Woods

Date & Time

-

Abstract

Since 1987 I have highlighted how attempts to deploy autonomous capabilities into complex, risky worlds of practice have been hampered by brittleness: descriptively, a sudden collapse in performance when events challenge a system's boundaries. This constraint has been downplayed on the grounds that the next advance in AI, algorithms, or control theory will lead to the deployment of systems that escape brittle limits. However, the world keeps providing examples of brittle collapse, such as the 2003 Columbia Space Shuttle accident or this year's Texas energy collapse. Resilience Engineering, drawing on multiple sources including the safety of complex systems, biological systems, and joint human-autonomy systems, discovered that (a) brittleness is a fundamental risk and (b) all adaptive systems develop means to mitigate that risk through sources of resilient performance.

The fundamental discovery, spanning biological, cognitive, and human systems, is that all adaptive systems at all scales must possess the capacity for graceful extensibility. Viability of a system in the long run requires the ability to gracefully extend or stretch at the boundaries as challenges occur. To put the constraint simply: viability requires extensibility, because all systems have limits and regularly experience surprise at those boundaries due to finite resources and continuous change (Woods, 2015; 2018; 2019).

The problem is that the development of automata consistently ignores this constraint. As a result, we see repeated demonstrations of the empirical finding: systems-as-designed are more brittle than stakeholders realize, but fail less often because people in various roles adapt to fill shortfalls and stretch system performance in the face of smaller and larger surprises. Some people, in some roles, are the ad hoc source of the necessary graceful extensibility.

The promise comes from the science behind Resilience Engineering, which highlights paths to build systems with graceful extensibility, especially systems that utilize new autonomous capabilities. Even better, designing systems with graceful extensibility draws on basic concepts in control engineering, though these are reframed substantially when combined with findings on adaptive systems from biology, cognitive work, organized complexity, and sociology.


Biography

Woods headshot

Dr. David Woods is a Professor in the Department of Integrated Systems Engineering at The Ohio State University (PhD, Purdue University) and has worked to improve systems safety in high-risk, complex settings for 40 years. His work includes studies of human coordination with automated and intelligent systems (see: https://youtu.be/b8xEpjW0Sqk and https://youtu.be/as0LipGTm5s) and accident investigations in aviation, nuclear power, critical care medicine, crisis response, military operations, and space operations. Beginning in 2000-2003, as part of the response to several NASA accidents, he developed Resilience Engineering, which addresses the dangers of brittle systems and the need to invest in sustaining sources of resilience. His results on proactive safety and resilience appear in the book Resilience Engineering (2006); see https://www.youtube.com/watch?v=GnVXfgC-5Jw&t=12s. The results of this work on how complex human-machine systems succeed and sometimes fail have been cited over 35,000 times (H-index > 91).

He developed the first comprehensive theory on how systems can build the potential for resilient performance despite complexity.  Recently, he started the SNAFU Catchers Consortium, an industry-university partnership to build resilience in critical digital services (see https://snafucatchers.github.io).

He is Past-President of the Resilience Engineering Association and Past-President of the Human Factors and Ergonomics Society. He has received many awards, including the Laurels Award from Aviation Week and Space Technology (1995), an IBM Faculty Award, a Google Faculty Award, the Ely Best Paper Award and the Kraft Innovator Award from the Human Factors and Ergonomics Society, and the Jimmy Doolittle Fellow Award from the Air Force Association (2012).

He advises many government agencies and companies in the US and internationally, including the US National Research Council on Dependable Software (2006), the US National Research Council on Autonomy in Civil Aviation (2014), the FAA Human Factors and Cockpit Automation Team (1996, and its reprise in 2013), and the Defense Science Board Task Force on Autonomy (2012); he was also an advisor to the Columbia Accident Investigation Board.
