Friday, November 16, 2012



Valkyrie Savage and Peggy Chi: VCL Lunch Talk

Seminar: Departmental | November 16 | 12-1 p.m. | Soda Hall, VCL - 510 Soda


Valkyrie Savage, UC Berkeley; Peggy Chi

Electrical Engineering and Computer Sciences (EECS)


Speaker: Valkyrie Savage
Midas: Fabricating Custom Capacitive Touch Sensors to Prototype Interactive Objects

An increasing number of consumer products include user interfaces that rely on touch input. While digital fabrication techniques such as 3D printing make it easier to prototype the shape of custom devices, adding interactivity to such prototypes remains a challenge for many designers. We introduce Midas, a software and hardware toolkit to support the design, fabrication, and programming of flexible capacitive touch sensors for interactive objects. With Midas, designers first define the desired shape, layout, and type of touch-sensitive areas, as well as routing obstacles, in a sensor editor. From this high-level specification, Midas automatically generates layout files with appropriate sensor pads and routed connections. These files are then used to fabricate sensors using digital fabrication processes, e.g., vinyl cutters and conductive ink printers. Using step-by-step assembly instructions generated by Midas, designers connect these sensors to the Midas microcontroller, which detects touch events. Once the prototype is assembled, designers can define interactivity for their sensors: Midas supports both record-and-replay actions for controlling existing local applications and WebSocket-based event output for controlling novel or remote applications. In a first-use study, three participants successfully prototyped media players. We also demonstrate how Midas can be used to create a number of touch-sensitive interfaces.
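
The abstract mentions WebSocket-based event output for controlling novel or remote applications. As a rough illustration only (not the authors' code), the sketch below shows how a custom application might subscribe to such a touch-event stream in Python; the endpoint URL, JSON message format, and pad names are assumptions, and the third-party "websockets" package stands in for whatever client the toolkit actually expects.

    # Minimal sketch, assuming a hypothetical Midas-style WebSocket event stream.
    # Endpoint URL, message shape, and pad names below are invented for illustration.
    import asyncio
    import json

    import websockets  # third-party package: pip install websockets

    MIDAS_WS_URL = "ws://localhost:8080/events"  # hypothetical endpoint

    def handle_touch(pad_name):
        # Application-specific behavior, e.g. a media-player prototype
        # mapping sensor pads to playback actions.
        if pad_name == "play":
            print("play/pause toggled")
        elif pad_name == "next":
            print("next track")

    async def listen_for_touches():
        # Connect once and react to each incoming touch event.
        async with websockets.connect(MIDAS_WS_URL) as ws:
            async for message in ws:
                event = json.loads(message)  # assumed shape: {"pad": "play", "state": "down"}
                if event.get("state") == "down":
                    handle_touch(event.get("pad"))

    if __name__ == "__main__":
        asyncio.run(listen_for_touches())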


Speaker: Peggy Chi, EECS, UC Berkeley
MixT: Automatic Generation of Step-by-Step Mixed Media Tutorials

Users of complex software applications often learn concepts and skills through step-by-step tutorials. Today, these tutorials are published in two dominant forms: static tutorials composed of images and text that are easy to scan, but cannot effectively describe dynamic interactions; and video tutorials that show all manipulations in detail, but are hard to navigate. We hypothesize that a mixed tutorial with static instructions and per-step videos can combine the benefits of both formats. We present MixT, a system that automatically generates step-by-step mixed media tutorials from user demonstrations. MixT segments screen-capture video into steps using logs of application commands and input events, applies video compositing techniques to focus on salient information, and highlights interactions through mouse trails. We'll show how we apply this concept beyond software applications to narrated physical task demonstrations with a mixed-initiative video-editing system.
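
To make the segmentation step concrete, here is a minimal sketch (not the MixT implementation) of splitting a screen-capture recording into per-step time ranges from a timestamped command log; the log format, field names, and padding value are assumptions for illustration only.

    # Minimal sketch, assuming a command log of (timestamp_seconds, command_name)
    # pairs sorted by time. Field names and the padding value are invented.

    def segment_by_commands(command_log, video_duration, padding=0.5):
        """Return one step per logged command, bounded by the neighboring commands."""
        steps = []
        for i, (t, command) in enumerate(command_log):
            start = max(0.0, t - padding)
            # A step ends when the next command fires, or at the end of the video.
            end = command_log[i + 1][0] if i + 1 < len(command_log) else video_duration
            steps.append({"command": command, "start": start, "end": end})
        return steps

    # Example: three logged commands in a 30-second recording.
    log = [(2.0, "Select Brush"), (8.5, "Apply Filter"), (20.0, "Save File")]
    for step in segment_by_commands(log, video_duration=30.0):
        print(step)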


510-643-2614