Cost: Free for students. If you enjoy these presentations, please consider joining; membership is $20/year.
Each year the prestigious CHI conference rejects hundreds of high-quality submissions in favor of the chosen few. For over 40 years, the Dynamic Graphics Project (DGP) has been recognized worldwide as one of the oldest and most respected labs in Human-Computer Interaction research. ToRCHI has the rare opportunity to preview four chosen CHI'13 presentations by DGP researchers. Please join us in exploring these new research frontiers.
(1) Direct Space-Time Trajectory Control for Visual Media Editing, by Stephanie Santosa
We explore the design space for using object motion trajectories to create and edit visual elements in various media across space and time. We introduce a suite of pen-based techniques that facilitate fluid stylization, annotation and editing of space-time content such as video, slide presentations and 2D animation, utilizing pressure and multi-touch input. We implemented and evaluated these techniques in DirectPaint, a system for creating free-hand painting and annotation over video.
(2) TrailMap: Facilitating Information Seeking in a Multi-Scale Digital Map via Implicit Bookmarking, by Jian Zhao
Web applications designed for map exploration in local neighborhoods have become increasingly popular and important in everyday life. During the information-seeking process, users often revisit previously viewed locations, repeat earlier searches, or need to memorize or manually mark areas of interest. To facilitate rapid returns to earlier views during map exploration, we propose a novel algorithm that automatically generates map bookmarks based on a user's interaction. We developed TrailMap, a web application based on this algorithm, to provide a fluid and effective neighborhood exploration experience. We conducted a one-week study to evaluate TrailMap in users' everyday web browsing activities. Results show that TrailMap's implicit bookmarking mechanism is efficient for map exploration and that the interactive, visual nature of the tool is intuitive to users.
(3) SeeSay and HearSay CAPTCHAs for Mobile Interaction, by Sajad Shirali-Shahreza
Speech has clear advantages as an input modality for smartphone applications: in scenarios where touch or keyboard entry is difficult, on increasingly miniaturized devices where usable keyboards are hard to accommodate, or when only small amounts of text need to be input, such as entering SMS texts or responding to a CAPTCHA challenge. In this paper, we propose two new alternative ways to design CAPTCHAs in which the user says the answer instead of typing it, with output stimuli provided either (a) visually (SeeSay) or (b) auditorily (HearSay). Our user study results show that the SeeSay CAPTCHA takes less time to solve, and that users prefer it over current text-based CAPTCHA methods.
(4) How Fast is Fast Enough? A Study of the Effects of Latency in Direct-Touch Pointing Tasks, by Ricardo Jota
Although advances in touchscreen technology have provided us with more precise devices, touchscreens are still laden with latency issues. Common commercial devices exhibit latency of up to 125 ms. Although these levels have been shown to affect users' perception of the responsiveness of the system, relatively little is known about the impact of latency on the performance of tasks common to direct-touch interfaces, such as direct physical manipulation.
In this paper, we study the effect of the latency of a direct-touch pointing device on dragging tasks. Our tests show that user performance decreases as latency increases, and that performance is more severely affected by latency when targets are smaller or farther away. We present a detailed analysis of users' coping mechanisms for latency, along with the results of a follow-up study demonstrating user perception of latency in the land-on phase of the dragging task.