Dissertation Talk: Explainable and Advisable Learning for Self-driving Vehicles

Seminar: Dissertation Talk: CS | December 5 | 2-3 p.m. | 405 Soda Hall

 Jinkyu Kim, UC Berkeley

 Electrical Engineering and Computer Sciences (EECS)

Whereas classical AI systems relied on carefully crafted features, one of the strengths of deep learning methods is their ability to learn effective latent representations directly from data. Unfortunately, while human-designed features are often easy to interpret, deep representations may not be. Although there have been some successes in visualizing deep models on image data, many models remain cryptic.

Our work has focused on the challenge of generating "introspective" explanations of deep models for self-driving vehicles, exploring both visual and textual explanations. We first developed an explanation model that uses visual attention in the controller: the attention model weights different areas of the input image differently and effectively ignores some areas entirely. We then moved to textual explanations. Using the BDD-X dataset, we explore generating textual explanations (e.g., "the car slows down because the road is wet") that are grounded in the model's behavior via attention alignment.
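
As a rough illustration of the visual-attention idea, here is a minimal PyTorch sketch; the class name, feature dimensions, and the two control outputs are assumptions for illustration, not the exact architecture from the talk.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionController(nn.Module):
    """Weights each spatial cell of a CNN feature map, then predicts controls."""
    def __init__(self, feat_channels=64, hidden=128):
        super().__init__()
        self.attn = nn.Conv2d(feat_channels, 1, kernel_size=1)  # one attention logit per cell
        self.control = nn.Sequential(
            nn.Linear(feat_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # e.g., steering angle and acceleration (assumed outputs)
        )

    def forward(self, feats):                          # feats: (B, C, H, W) CNN features
        logits = self.attn(feats)                      # (B, 1, H, W)
        alpha = F.softmax(logits.flatten(2), dim=-1)   # weights sum to 1 over the H*W cells
        context = (feats.flatten(2) * alpha).sum(-1)   # attention-weighted feature vector (B, C)
        return self.control(context), alpha.view_as(logits)  # controls + attention map

Because the returned attention map is the same weighting the controller actually used, it can be overlaid on the input frame as an explanation grounded in the model's computation rather than produced after the fact.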

These explainable systems represent an externalization of tacit knowledge: the network's opaque reasoning is reduced to a situation-specific dependence on visible objects in the image. This simplification makes such systems brittle and potentially unsafe in situations that do not match the training data. We propose to address this issue by augmenting the training data with natural-language advice from a human, including guidance about what to do and where to attend. We present the first step toward advice-giving, in which we train an end-to-end vehicle controller that accepts advice and adapts both how it attends to the scene (visual attention) and its control outputs (steering and speed).
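
A hedged sketch of how such advice might be injected, assuming the advice arrives as a tokenized natural-language string and is fused with the image features through attention; the fusion scheme and all names below are illustrative rather than the exact model presented in the talk.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdvisableController(nn.Module):
    """Conditions both visual attention and control outputs on encoded advice."""
    def __init__(self, vocab_size, feat_channels=64, emb=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.advice_rnn = nn.GRU(emb, emb, batch_first=True)
        self.keys = nn.Conv2d(feat_channels, emb, kernel_size=1)  # image cells as attention keys
        self.control = nn.Linear(feat_channels + emb, 2)          # steering, speed (assumed)

    def forward(self, feats, advice_tokens):           # feats: (B, C, H, W); tokens: (B, T)
        _, h = self.advice_rnn(self.embed(advice_tokens))
        h = h.squeeze(0)                                # (B, emb) summary of the advice
        keys = self.keys(feats).flatten(2)              # (B, emb, H*W)
        scores = torch.einsum('be,ben->bn', h, keys)    # advice-conditioned attention logits
        alpha = F.softmax(scores, dim=-1).unsqueeze(1)  # (B, 1, H*W)
        context = (feats.flatten(2) * alpha).sum(-1)    # (B, C) attended visual features
        return self.control(torch.cat([context, h], dim=-1))  # advice also conditions control

Here the same advice vector shifts where the controller looks and what it does, mirroring the two forms of guidance described above (what to do and where to attend).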

 jinkyu.kim@berkeley.edu