Safety is not just an engineering challenge
IEEE Transportation Systems Magazine's Spring 2017 issue had a theme of safety and security for autonomous vehicles, and co-founders Mike Wagner and Phil Koopman were asked to weigh in.
The short, short version of the article:
- Understanding what "safe" even means in the context of an autonomous system is harder than it sounds, and requires cross-domain expertise.
- Validating inductive reasoning (which is what machine learning systems do) is inherently difficult, and has been recognized as such at least since David Hume's time.
- Safe autonomy can be accomplished and deployed, but its safety certification will have to meaningfully incorporate input from safety engineers, security experts, software validation experts, HCI experts, a viable legal framework, and many others.
Ensuring the safety of fully autonomous vehicles requires a multi-disciplinary approach across all levels of the functional hierarchy: from hardware fault tolerance, to resilient machine learning, to cooperating with humans driving conventional vehicles, to validating systems for operation in highly unstructured environments, to appropriate regulatory approaches. Significant open technical challenges include validating inductive learning in the face of novel environmental inputs and achieving the very high levels of dependability required for full-scale fleet deployment. However, the biggest challenge may be creating an end-to-end design and deployment process that integrates the safety concerns of a myriad of technical specialties into a unified approach.
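One common way to contain the risk of inductive learning on novel inputs is a runtime monitor ("doer/checker") architecture: an ML component proposes an action, and a simple, deterministically verifiable checker enforces a safety envelope around it. The sketch below is purely illustrative and not from the article; the function names, the linear stand-in for a learned planner, and the speed bound are all assumptions made for the example.

```python
# Hypothetical "doer/checker" runtime monitor sketch (illustrative only).
# The "doer" is an ML-based planner; the "checker" is a small deterministic
# guard that can be validated with conventional safety-engineering methods.

def ml_planner_speed(sensor_reading: float) -> float:
    # Stand-in for an inductively trained model: reasonable on inputs
    # resembling its training data, but it may extrapolate badly on
    # novel inputs it has never seen.
    return 10.0 + 2.0 * sensor_reading

def checker(proposed_speed: float, max_safe_speed: float = 25.0) -> float:
    # Deterministic safety envelope: clamp the proposal into a range
    # specified and verified independently of the learned component.
    return min(max(proposed_speed, 0.0), max_safe_speed)

def commanded_speed(sensor_reading: float) -> float:
    # The vehicle only ever executes the checked command.
    return checker(ml_planner_speed(sensor_reading))

# In-distribution input: the planner's proposal passes through unchanged.
print(commanded_speed(2.0))   # 14.0
# Novel input: the planner extrapolates to an unsafe value; the checker
# caps it at the envelope bound.
print(commanded_speed(50.0))  # 25.0
```

The point of the pattern is that the hard-to-validate inductive component never has the last word: the argument for safety rests on the small checker, which is amenable to traditional verification, rather than on the model itself.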