ACM ISS 2024
Sun 27 - Wed 30 October 2024

This program is tentative and subject to change.

Tue 29 Oct 2024 09:00 - 09:18 at Theatre C300 - Paper Session 3: Gestures

Existing gesture interfaces work only with a fixed set of gestures defined either by interface designers or by users themselves, which imposes learning or demonstration effort that diminishes their naturalness. Humans, on the other hand, understand free-form gestures by synthesizing the gesture, context, experience, and common sense, so there is no need to learn, demonstrate, or pre-associate gestures. We introduce GestureGPT, a free-form hand gesture understanding framework that mimics the human gesture-understanding process to enable a natural free-form gestural interface. Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information, then infers the interaction intent by associating the gesture with an interface function. More specifically, our triple-agent framework includes a Gesture Description Agent that automatically segments and formulates natural-language descriptions of hand poses and movements based on hand landmark coordinates. The description is deciphered by a Gesture Inference Agent through self-reasoning and querying about the interaction context (e.g., interaction history, gaze data), which is managed by a Context Management Agent. Following iterative exchanges, the Gesture Inference Agent discerns the user’s intent by grounding the gesture to an interface function. We validated our framework offline under two real-world scenarios: smart home control and online video streaming. The average zero-shot Top-1/Top-5 grounding accuracies are 44.79%/83.59% for smart home tasks and 37.50%/73.44% for video streaming tasks. We also provide an extensive discussion of model selection rationale, generalizability, and future research directions toward a practical system.
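
The triple-agent architecture described above maps naturally onto a short orchestration loop. The sketch below is a hypothetical Python reconstruction based only on the abstract: all class and function names are invented for illustration, and call_llm is a stub standing in for any chat-completion API; it is not the authors' implementation.

```python
# Hypothetical sketch of the triple-agent loop described in the abstract.
# Every name here is illustrative; `call_llm` must be wired to a real LLM.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stub: replace with a real chat-completion API call."""
    raise NotImplementedError

class GestureDescriptionAgent:
    def describe(self, hand_landmarks: list[tuple[float, float, float]]) -> str:
        # Turn raw hand landmark coordinates into a natural-language
        # description of the pose and movement, as the abstract describes.
        coords = "; ".join(f"({x:.2f},{y:.2f},{z:.2f})" for x, y, z in hand_landmarks)
        return call_llm(
            "Describe the hand pose and movement in plain language.",
            f"Hand landmark coordinates over time: {coords}",
        )

class ContextManagementAgent:
    def __init__(self, interaction_history: list[str], gaze_target: str):
        self.history = interaction_history
        self.gaze_target = gaze_target

    def answer(self, query: str) -> str:
        # Answer the inference agent's questions from the managed context
        # (interaction history, gaze data).
        return call_llm(
            "Answer using only the provided interaction context.",
            f"History: {self.history}\nGaze target: {self.gaze_target}\n"
            f"Question: {query}",
        )

class GestureInferenceAgent:
    def ground(self, description: str, context: ContextManagementAgent,
               functions: list[str], max_rounds: int = 3) -> str:
        notes: list[str] = []
        for _ in range(max_rounds):
            # Self-reason, then decide whether more context is needed.
            query = call_llm(
                "You map a gesture to one interface function. Reply with a "
                "context question, or 'DONE' if ready to decide.",
                f"Gesture: {description}\nFunctions: {functions}\nNotes: {notes}",
            )
            if query.strip() == "DONE":
                break
            notes.append(f"Q: {query} A: {context.answer(query)}")
        # Final grounding: choose the most likely interface function.
        return call_llm(
            "Choose exactly one function name from the list.",
            f"Gesture: {description}\nFunctions: {functions}\nNotes: {notes}",
        )
```

In the smart-home scenario, `functions` would hold the candidate device controls (e.g., hypothetical names like "lights_on" or "volume_up"), and the loop's final choice corresponds to the Top-1 grounding prediction reported above.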


Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

09:00 - 10:15
Paper Session 3: Gestures (Papers) at Theatre C300
09:00
18m
Talk
GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents
Xin Zeng (Chinese Academy of Sciences), Xiaoyu Wang (The Hong Kong University of Science and Technology), Tengxiang Zhang (Chinese Academy of Sciences), Chun Yu (Tsinghua University), Shengdong Zhao (City University of Hong Kong), Yiqiang Chen (Chinese Academy of Sciences)
09:18
18m
Talk
Hapstick-Figure: Investigating the Design of a Haptic Representation of Human Gestures from Theater Performances for Blind and Visually-Impaired People
Leyla Benhamida (USTHB University), Slimane Larabi (USTHB University), Oussama Metatla (University of Bristol)
09:37
18m
Talk
VisConductor: Affect-Varying Widgets for Animated Data Storytelling in Gesture-Aware Augmented Video Presentation
Temiloluwa Paul Femi-Gege (University of Waterloo), Matthew Brehmer (Tableau Research), Jian Zhao (University of Waterloo)
09:56
18m
Talk
Gaze, Wall, and Racket: Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality
Uta Wagner (Aarhus University), Matthias Albrecht (University of Konstanz), Andreas Asferg Jacobsen (Aarhus University), Haopeng Wang (Lancaster University), Hans Gellersen (Lancaster University; Aarhus University), Ken Pfeuffer (Aarhus University)