PoseVEC: Authoring Adaptive Pose-aware Effects using Visual Programming and Demonstrations
October 28, 2023
Yongqi Zhang
Cuong Nguyen
Rubaiat Habib Kazi
Lap-Fai Yu
Abstract
Pose-aware visual effects, where graphics assets and animations are rendered reactively to the human pose, have become increasingly popular, appearing on mobile devices, the web, and even head-mounted displays such as AR glasses. Yet creating such effects remains difficult for novices. In a traditional video editing workflow, a creator can use keyframes to produce expressive but non-adaptive results that cannot be reused for other videos. Alternatively, programming-based approaches allow users to develop interactive effects, but are too cumbersome for quickly expressing creative intent. In this work, we propose a lightweight visual programming workflow for authoring adaptive and expressive pose effects. By combining a programming-by-demonstration paradigm with visual programming, we simplify three key tasks in the authoring process: creating pose triggers, designing animation parameters, and rendering. We evaluated our system with a qualitative user study and a replicated-example study, finding that all participants could create effects efficiently.
Type
Publication
The 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23)

Authors
Yongqi Zhang
(she/her)
HCI + AI Research Scientist
She is an HCI and AI researcher who recently earned a PhD in Computer Science from George Mason University. As a member of the Design Computing and eXtended Reality (DCXR) group under the advisement of Prof. Craig Yu, her research focused on the intersection of virtual reality (VR), computational design, and human-computer interaction. She specializes in leveraging AI and computational techniques to develop personalized virtual experiences and automated scene generation. She is currently seeking new professional opportunities in research and development.