Lecture | March 14 | 4:10-5:30 p.m. | 202 South Hall
From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today, however, our systems fall short of these visions because this range of behaviors is too large for designers or programmers to capture manually. In this talk I will present two systems that mine and operationalize an understanding of human life from large text corpora. The first system, Augur, focuses on what people do in daily life: capturing many thousands of relationships between human activities (e.g., taking a phone call, using a computer, going to a meeting) and the scene context that surrounds them. The second system, Empath, focuses on what people say: capturing hundreds of linguistic signals through a set of pre-generated lexicons, and allowing computational social scientists to create new lexicons on demand. Across these two projects, I will demonstrate how unsupervised learning can enable many new applications and analyses for interactive systems.
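To make the lexicon idea concrete, the sketch below shows one simple way a system in the spirit of Empath might score a piece of text against named categories of terms. The category names, seed terms, and function signature here are illustrative assumptions, not Empath's actual lexicons or API:

```python
# Minimal sketch of lexicon-based text analysis, in the spirit of Empath.
# NOTE: categories, terms, and the analyze() signature are hypothetical,
# chosen only to illustrate the technique.
from collections import Counter

LEXICONS = {
    "social": {"friend", "party", "talk", "meeting"},
    "work": {"computer", "meeting", "email", "deadline"},
}

def analyze(text, normalize=True):
    """Score text against each category by counting lexicon term matches."""
    tokens = text.lower().split()
    counts = Counter()
    for token in tokens:
        for category, terms in LEXICONS.items():
            if token in terms:
                counts[category] += 1
    if normalize and tokens:
        # Express each count as a fraction of total tokens.
        return {c: counts[c] / len(tokens) for c in LEXICONS}
    return {c: counts[c] for c in LEXICONS}

scores = analyze("a quick talk before the meeting")
```

In this sketch, "talk" and "meeting" both hit the social category, so `scores["social"]` is 2/6 of the tokens; the real system additionally generates and validates lexicons from large corpora rather than hand-listing terms.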