ACM CHI 2015: ACM SIGCHI Conference on Human Factors in Computing Systems

Cyclops: Wearable and Single-Piece Full-Body Gesture Input Devices

Liwei Chan, Chi-Hao Hsieh, Yi-Ling Chen, Shuo Yang, Da-Yuan Huang, Rong-Hao Liang, Bing-Yu Chen

National Taiwan University

ACM Digital Library

Video

Abstract

This paper presents Cyclops, a single-piece wearable device that sees its user's whole-body postures through an ego-centric view obtained from a fisheye lens worn at the center of the body. Because this view captures only the user's limbs, the device can interpret body postures effectively. Unlike currently available body-gesture input systems, which depend on external cameras or on motion sensors distributed across the user's body, Cyclops is a single-piece device worn as a pendant or a badge. The main idea proposed in this paper is the observation of the limbs from a central location on the body: owing to the ego-centric view, Cyclops turns posture recognition into a highly controllable computer vision problem. The paper demonstrates a proof-of-concept device and an algorithm for recognizing static and moving body gestures based on motion history images (MHI) and a random decision forest (RDF). Four example applications are presented: an interactive body workout, a mobile racing game that involves hands and feet, a full-body virtual reality system, and interaction with a tangible toy. In the body-workout experiment, on a database of 20 workout gestures collected from 20 participants, Cyclops achieved a recognition rate of 79% using MHI with simple template matching, which increased to 92% with the more advanced machine-learning approach of RDF.
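The pipeline named in the abstract is straightforward to prototype. The sketch below is not the authors' implementation; it illustrates the two stages under simple assumptions: a classic motion history image (after Bobick and Davis) folded into a per-clip feature vector, a simple nearest-template baseline, and a random decision forest via scikit-learn's RandomForestClassifier. Parameter values such as the decay horizon TAU and the frame-difference threshold are illustrative, not taken from the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

TAU = 15             # MHI decay horizon in frames (illustrative, not from the paper)
DIFF_THRESHOLD = 30  # frame-difference motion threshold (illustrative)

def update_mhi(mhi, motion_mask):
    # Classic motion-history-image update: moving pixels are set to TAU,
    # all other pixels decay by one frame (floored at zero).
    return np.where(motion_mask, TAU, np.maximum(mhi - 1, 0))

def mhi_feature(frames):
    # Fold a grayscale clip (a list of 2-D uint8 arrays) into one MHI,
    # then flatten it into a normalized feature vector for the classifier.
    mhi = np.zeros_like(frames[0], dtype=np.int32)
    for prev, curr in zip(frames, frames[1:]):
        motion = np.abs(curr.astype(np.int32) - prev.astype(np.int32)) > DIFF_THRESHOLD
        mhi = update_mhi(mhi, motion)
    return mhi.astype(np.float32).ravel() / TAU

def match_template(feature, templates, labels):
    # Simple-template-matching baseline: nearest stored MHI by cosine similarity.
    t = templates / (np.linalg.norm(templates, axis=1, keepdims=True) + 1e-8)
    f = feature / (np.linalg.norm(feature) + 1e-8)
    return labels[int(np.argmax(t @ f))]

def train_rdf(features, labels):
    # Random decision forest over MHI features; a stand-in for the paper's RDF.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features, labels)
    return clf

With a gesture database like the 20-gesture, 20-participant set described above, the template baseline plays the role of the 79% method and the forest the 92% method, though the paper's actual feature design and forest may differ.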

Keywords

single-point wearable devices, posture recognition, full-body gesture input, ego-centric view

Cite this work (ACM)

Liwei Chan, Chi-Hao Hsieh, Yi-Ling Chen, Shuo Yang, Da-Yuan Huang, Rong-Hao Liang, and Bing-Yu Chen. 2015. Cyclops: Wearable and Single-Piece Full-Body Gesture Input Devices. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). Association for Computing Machinery, New York, NY, USA, 3001–3009. DOI: https://doi.org/10.1145/2702123.2702464

Cite this work (BibTeX)

@inproceedings{10.1145/2702123.2702464,
author = {Chan, Liwei and Hsieh, Chi-Hao and Chen, Yi-Ling and Yang, Shuo and Huang, Da-Yuan and Liang, Rong-Hao and Chen, Bing-Yu},
title = {Cyclops: Wearable and Single-Piece Full-Body Gesture Input Devices},
year = {2015},
isbn = {9781450331456},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2702123.2702464},
doi = {10.1145/2702123.2702464},
abstract = {This paper presents Cyclops, a single-piece wearable device that sees its user's whole body postures through an ego-centric view of the user that is obtained through a fisheye lens at the center of the user's body, allowing it to see only the user's limbs and interpret body postures effectively. Unlike currently available body gesture input systems that depend on external cameras or distributed motion sensors across the user's body, Cyclops is a single-piece wearable device that is worn as a pendant or a badge. The main idea proposed in this paper is the observation of limbs from a central location of the body. Owing to the ego-centric view, Cyclops turns posture recognition into a highly controllable computer vision problem. This paper demonstrates a proof-of-concept device, and an algorithm for recognizing static and moving bodily gestures based on motion history images (MHI) and a random decision forest (RDF). Four example applications of interactive bodily workout, a mobile racing game that involves hands and feet, a full-body virtual reality system, and interaction with a tangible toy are presented. The experiment on the bodily workout demonstrates that, from a database of 20 body workout gestures that were collected from 20 participants, Cyclops achieved a recognition rate of 79% using MHI and simple template matching, which increased to 92% with the more advanced machine learning approach of RDF.},
booktitle = {Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems},
pages = {3001--3009},
numpages = {9},
keywords = {single-point wearable devices, posture recognition, full-body gesture input, ego-centric view},
location = {Seoul, Republic of Korea},
series = {CHI '15}
}