Gesto: Mapping UI Events to Gestures and Voice Commands

Chang Min Park, Taeyeon Ki, Ali Ben Ali and Nikhil Pawar

Paper presentation at EICS 2019, which received the Best Paper Honorable Mention award.

Graduate Student Project

Introduction

How amazing would it be if we could automate tasks in mobile applications using gestures and voice commands?

I'm Chang Min Park, a third-year computer science PhD student at the University at Buffalo, and I have worked under Dr. Ko as my advisor since I was an undergraduate student. Our research mainly focuses on mobile systems challenges. For the past two years, our group of four graduate students and three professors has worked on enabling task automation using gestures and voice commands in mobile applications.

Today's mobile applications require users to perform multiple user interface (UI) actions (e.g., typing text, clicking buttons) to finish a task, and they demand users' full attention, since users must look at their device screens while interacting with the UIs. Currently, only a few applications provide task automation through gestures or voice commands, and even those are limited to tasks their developers have predefined. To address this issue, our research has focused on enabling task automation for existing apps without any help from their developers.

Abstract

Gesto is a system that enables task automation for Android apps using gestures and voice commands. Using Gesto, a user can record a UI action sequence for an app, choose a gesture or a voice command to activate the UI action sequence, and later trigger the UI action sequence by the corresponding gesture/voice command. Gesto enables this for existing Android apps without requiring their source code or any help from their developers. In order to make such a capability possible, Gesto combines bytecode instrumentation and UI action record-and-replay. To show the applicability of Gesto, we develop four use cases using real apps downloaded from Google Play. For each of these apps, we map a gesture or a voice command to a sequence of UI actions. According to our measurement, Gesto incurs modest overhead for these apps in terms of memory usage, energy usage, and code size increase.
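To make the record-and-replay idea concrete, here is a minimal, self-contained Java sketch of how a trigger (a gesture name or a voice command) could be bound to a recorded UI action sequence and replayed later. All names here (GestoSketch, UiAction, recordSequence, onTrigger) are hypothetical illustrations, not Gesto's actual API; the real system injects its record/replay logic into existing apps through bytecode instrumentation rather than requiring app code like this.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the record-and-replay mapping behind Gesto.
// Hypothetical names throughout; not Gesto's real implementation.
public class GestoSketch {

    // One recorded UI action, e.g., a click or a text entry on a widget.
    record UiAction(String widgetId, String kind, String payload) {
        void replay() {
            // A real system would dispatch this event to the app's UI thread.
            System.out.printf("replaying %s on %s (%s)%n", kind, widgetId, payload);
        }
    }

    // Maps a trigger (gesture name or voice command) to a recorded sequence.
    private final Map<String, List<UiAction>> bindings = new HashMap<>();

    // Record phase: the user performs the actions once and names a trigger.
    void recordSequence(String trigger, List<UiAction> sequence) {
        bindings.put(trigger, new ArrayList<>(sequence));
    }

    // Replay phase: called when the gesture/voice recognizer fires a trigger.
    void onTrigger(String trigger) {
        List<UiAction> sequence = bindings.get(trigger);
        if (sequence == null) return; // no mapping recorded for this trigger
        for (UiAction action : sequence) action.replay();
    }

    public static void main(String[] args) {
        GestoSketch gesto = new GestoSketch();
        // Record a sequence once: type a query, then press the search button.
        gesto.recordSequence("search news", List.of(
                new UiAction("search_box", "type", "today's headlines"),
                new UiAction("search_btn", "click", "")));
        // Later, the voice command alone replays the whole sequence.
        gesto.onTrigger("search news");
    }
}
```

The key design point the sketch captures is that the trigger-to-sequence binding is created by the user at run time, not by the app developer, which is why Gesto can add automation to unmodified apps.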


The University at Buffalo is committed to ensuring digital accessibility for people with disabilities. We are continually improving the user experience for everyone, and applying the relevant accessibility standards to ensure we provide equal access to all users. If you experience any difficulty in accessing the content or services on this website, or if you have suggestions about improving the user experience, please contact the Experiential Learning Network via email (ubeln@buffalo.edu) or phone (716-645-8177).