Dissertation Talk: Visual Appliance Identification and Control with Selective Mobile-Cloud Offloading
Seminar: Dissertation Talk: CS | May 4 | 1-2 p.m. | 380 Soda Hall
Kaifei Chen, UC Berkeley
Appliances in commercial buildings are increasingly connected to the Internet and becoming programmatically controllable. However, as the number of smart appliances grows, identifying and controlling a single appliance among thousands in a building becomes challenging. Existing methods face various problems when deployed in large commercial buildings: proprietary remote controllers and per-appliance smartphone apps become unmanageable; voice or gesture command assistants require users to memorize many control commands in advance; and attaching visual markers (e.g., QR codes) to appliances introduces considerable deployment overhead and does not work at a distance.
In this talk, we first investigate the pros and cons of using indoor localization techniques, such as Wi-Fi received-signal-strength fingerprinting and room-level acoustic background-noise features, to narrow down the set of appliances displayed to the user. We then introduce SnapLink, a system that identifies an appliance for interaction from a single image: its smartphone app offloads images to the cloud, where they are localized against pre-built point clouds. SnapLink achieves 94% appliance identification accuracy on 1,526 test images of 179 appliances across 39 rooms. On top of SnapLink, we build MARVEL, which supports continuous, video-based appliance identification and interaction. To avoid excessive computation offloading, MARVEL uses local inertial sensors and optical flow for localization and offloads images to the cloud only when necessary. In this work, we investigate how to minimize the overhead of both local computation and offloading. Our evaluation of MARVEL reveals that efficient use of a mobile device's capabilities significantly lowers latency without sacrificing accuracy or consuming more energy.
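The selective-offloading idea behind MARVEL can be sketched as a simple policy: track the device pose locally with cheap sensors (IMU and optical flow), and send a frame to the cloud only when local tracking degrades. The sketch below is illustrative only; the function names, confidence measure, and thresholds are assumptions, not details from the talk.

```python
# Minimal sketch of a MARVEL-style selective-offloading policy.
# Assumptions (not from the talk): local tracking yields a confidence
# score per frame, IMU drift accumulates until a cloud re-localization
# resets it, and the thresholds below are arbitrary illustrative values.

TRACK_THRESHOLD = 0.5   # assumed cutoff for optical-flow tracking confidence
DRIFT_BUDGET = 0.2      # assumed bound on accumulated inertial drift

def should_offload(tracking_confidence, drift_estimate):
    """Offload when local tracking is unreliable or drift exceeds budget."""
    return tracking_confidence < TRACK_THRESHOLD or drift_estimate > DRIFT_BUDGET

def process_stream(frames):
    """Simulate a frame stream: each frame is a (confidence, drift_increment)
    pair; return how many frames would be offloaded to the cloud."""
    offloaded = 0
    drift = 0.0
    for confidence, drift_increment in frames:
        drift += drift_increment
        if should_offload(confidence, drift):
            offloaded += 1   # cloud re-localization resets local drift
            drift = 0.0
    return offloaded

# Mostly good local tracking; one low-confidence frame forces an offload.
frames = [(0.9, 0.05), (0.8, 0.05), (0.3, 0.05), (0.9, 0.05)]
print(process_stream(frames))  # → 1
```

The point of the policy is that most frames are handled entirely on the phone; only the occasional low-confidence or high-drift frame pays the latency and energy cost of a cloud round trip.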