What if we succeed?

Colloquium | October 18 | 11 a.m.-12:30 p.m. | Berkeley Way West, 2121 Berkeley Way, Room 1102

Stuart Russell, Professor of Computer Science and Cognitive Science, UC Berkeley

Helen Wills Neuroscience Institute

It is reasonable to expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? While some in the mainstream AI community dismiss the issue, I will argue instead that a fundamental reorientation of the field is required. The "standard model" in AI and related disciplines aims to create systems that optimize arbitrary fixed objectives. As we shift to a regime in which systems are more capable than human beings, this model fails catastrophically. Instead, we need to learn how to build systems that will, in fact, be beneficial for us. I will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help. This uncertainty causes machine and human behavior to be inextricably (and game-theoretically) linked, while opening up many new avenues for research.
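
The point about explicit objective uncertainty can be made concrete with a toy calculation in the spirit of the off-switch game studied by Russell's group (Hadfield-Menell et al.). The sketch below is illustrative and not taken from the talk; the numbers and function names are assumptions chosen only to show the effect: a robot that is unsure whether the human actually wants a proposed action prefers to defer to a human veto, whereas a robot with a fixed, certain objective gains nothing by deferring.

    # Minimal sketch (illustrative, not from the talk) of why objective
    # uncertainty makes deferring to the human attractive.
    import numpy as np

    rng = np.random.default_rng(0)

    def value_act_directly(utility_samples):
        """Robot commits to the action; it receives whatever the human's
        true utility for that action turns out to be, on average."""
        return np.mean(utility_samples)

    def value_defer_to_human(utility_samples):
        """Robot proposes the action and lets the human veto it. A rational
        human allows it only when their utility is positive, so the robot
        receives max(U, 0) in expectation."""
        return np.mean(np.maximum(utility_samples, 0.0))

    # Case 1: the robot is uncertain about the human's objective.
    uncertain_U = rng.normal(loc=0.2, scale=1.0, size=100_000)
    print("uncertain robot, act directly:", value_act_directly(uncertain_U))
    print("uncertain robot, defer       :", value_defer_to_human(uncertain_U))

    # Case 2: the robot is certain (a fixed, hard-coded objective).
    certain_U = np.full(100_000, 0.2)
    print("certain robot, act directly  :", value_act_directly(certain_U))
    print("certain robot, defer         :", value_defer_to_human(certain_U))

    # With uncertainty, deferring strictly beats acting directly, because the
    # human's veto filters out the cases the robot misjudged; the robot
    # therefore has a positive incentive to remain switch-off-able. With a
    # fixed objective, deferring adds nothing, so the standard-model robot
    # has no reason to accept human oversight.

In this toy setup the coupling between machine and human behavior is exactly the game-theoretic link described above: the value of the robot's choice depends on how a rational human would respond to it.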

nrterranova@berkeley.edu