DigitSpace: Designing Thumb-to-Fingers Touch Interfaces for One-Handed and
Eyes-Free Interactions
Tracking Fingers
/
Huang, Da-Yuan
/
Chan, Liwei
/
Yang, Shuo
/
Wang, Fan
/
Liang, Rong-Hao
/
Yang, De-Nian
/
Hung, Yi-Ping
/
Chen, Bing-Yu
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.1526-1537
© Copyright 2016 ACM
Summary: Thumb-to-fingers interfaces augment touch widgets on fingers, which are
manipulated by the thumb. Such interfaces are ideal for one-handed eyes-free
input since touch widgets on the fingers enable easy access by the stylus
thumb. This study presents DigitSpace, a thumb-to-fingers interface that
addresses two ergonomic factors: hand anatomy and touch precision. Hand anatomy
restricts possible movements of a thumb, which further influences the physical
comfort during the interactions. Touch precision is a human factor that
determines how precisely users can manipulate touch widgets set on fingers,
which in turn constrains effective layouts of the widgets. Buttons and touchpads were
considered in our studies to enable discrete and continuous input in an
eyes-free manner. The first study explores the regions of fingers where the
interactions can be comfortably performed. According to the comfort regions,
the second and third studies explore effective layouts for button and touchpad
widgets. The experimental results indicate that participants could discriminate
at least 16 buttons on their fingers. For touchpads, participants were asked to
perform unistrokes. Our results revealed that, since each participant
exhibited consistent writing behavior, personalized $1 recognizers could offer
92% accuracy on a cross-finger touchpad. A series of design guidelines are
proposed for designers, and a DigitSpace prototype that uses magnetic-tracking
methods is demonstrated.
GaussMarbles: Spherical Magnetic Tangibles for Interacting with Portable
Physical Constraints
Everyday Objects as Interaction Surfaces
/
Kuo, Han-Chih
/
Liang, Rong-Hao
/
Lin, Long-Fei
/
Chen, Bing-Yu
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.4228-4232
© Copyright 2016 ACM
Summary: This work develops a system of spherical magnetic tangibles, GaussMarbles,
that exploits the unique affordances of spherical tangibles for interacting
with portable physical constraints. The proposed design of each magnetic sphere
includes a magnetic polyhedron in the center. The magnetic polyhedron provides
bi-polar magnetic fields, which are expanded in equal dihedral angles as robust
features for tracking, allowing an analog Hall-sensor grid to resolve the
near-surface 3D position accurately in real-time. Possible interactions between
the magnetic spheres and portable physical constraints in various levels of
embodiment were explored using several example applications.
GaussRFID: Reinventing Physical Toys Using Magnetic RFID Development Kits
Everyday Objects as Interaction Surfaces
/
Liang, Rong-Hao
/
Kuo, Han-Chih
/
Chen, Bing-Yu
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.4233-4237
© Copyright 2016 ACM
Summary: We present GaussRFID, a hybrid RFID and magnetic-field tag sensing system
that supports interactivity when embedded in retrofitted or new physical
objects. The system consists of two major components -- GaussTag, a
magnetic-RFID tag that combines a magnetic unit and an RFID tag, and
GaussStage, a tag reader that combines an analog Hall-sensor
grid and an RFID reader. A GaussStage recognizes the ID, 3D position, and
partial 3D orientation of a GaussTag near the sensing platform, and provides
simple interfaces for involving physical constraints, displays and actuators in
tangible interaction designs. The results of a two-day toy-hacking workshop
reveal that all six groups of 31 participants successfully modified physical
toys to interact with computers using the GaussRFID system.
Empath: Understanding Topic Signals in Large-Scale Text
Search and Discovery
/
Fast, Ethan
/
Chen, Binbin
/
Bernstein, Michael S.
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.4647-4657
© Copyright 2016 ACM
Summary: Human language is colored by a broad range of topics, but existing text
analysis tools only focus on a small number of them. We present Empath, a tool
that can generate and validate new lexical categories on demand from a small
set of seed terms (like "bleed" and "punch" to generate the category violence).
Empath draws connotations between words and phrases by deep learning a neural
embedding across more than 1.8 billion words of modern fiction. Given a small
set of seed words that characterize a category, Empath uses its neural
embedding to discover new related terms, then validates the category with a
crowd-powered filter. Empath also analyzes text across 200 built-in,
pre-validated categories we have generated from common topics in our web
dataset, like neglect, government, and social media. We show that Empath's
data-driven, human validated categories are highly correlated (r=0.906) with
similar categories in LIWC.
GaussRFID: Reinventing Physical Toys Using Magnetic RFID Development Kits
Video Showcase Presentations
/
Liang, Rong-Hao
/
Kuo, Han-Chih
/
Chen, Bing-Yu
Extended Abstracts of the ACM CHI'16 Conference on Human Factors in
Computing Systems
2016-05-07
v.2
p.8
© Copyright 2016 ACM
Summary: We present GaussRFID, a hybrid RFID and magnetic-field tag sensing system
that supports interactivity when embedded in retrofitted or new physical
objects. The system consists of two major components -- GaussTag, a
magnetic-RFID tag that combines a magnetic unit and an RFID tag, and
GaussStage, a tag reader that combines an analog Hall-sensor
grid and an RFID reader. A GaussStage recognizes the ID, 3D position, and
partial 3D orientation of a GaussTag near the sensing platform, and provides
simple interfaces for involving physical constraints, displays and actuators in
tangible interaction designs. The results of a two-day toy-hacking workshop
reveal that all six groups of 31 participants successfully modified physical
toys to interact with computers using the GaussRFID system.
GaussStudio: Designing Seamless Tangible Interactions on Portable Displays
Studio-Workshops
/
Liang, Rong-Hao
/
Kuo, Han-Chih
/
Alonso, Miguel Bruns
/
Chen, Bing-Yu
Proceedings of the 2016 International Conference on Tangible and Embedded
Interaction
2016-02-14
p.786-789
© Copyright 2016 ACM
Summary: The analog Hall-sensor grid, GaussSense, is a thin-form magnetic-field
camera technology for designing expressive occlusion-free, near-surface
tangible interactions on conventional portable displays. The studio will
provide hands-on experiences that combine physical designs and the GaussSense
technology. Through a series of brainstorming and making exercises,
participants will learn how to exploit natural hand and micro interactions
through designing the expressions and affordances of physical objects, and know
how to utilize physical constraints to provide additional kinesthetic awareness
and haptic feedback. The exercises will include form-giving, electronic
prototyping, and hacking physical toys that are prepared by either the
organizers or participants.
CyclopsRing: Enabling Whole-Hand and Context-Aware Interactions Through a
Fisheye Ring
Session 8A: Hands and Fingers
/
Chan, Liwei
/
Chen, Yi-Ling
/
Hsieh, Chi-Hao
/
Liang, Rong-Hao
/
Chen, Bing-Yu
Proceedings of the 2015 ACM Symposium on User Interface Software and
Technology
2015-11-05
v.1
p.549-556
© Copyright 2015 ACM
Summary: This paper presents CyclopsRing, a ring-style fisheye imaging wearable
device that can be worn on hand webbings to enable whole-hand and context-aware
interactions. Observing from a central position of the hand through a fisheye
perspective, CyclopsRing sees not only the operating hand, but also the
environmental contexts involved in the hand-based interactions. Since
CyclopsRing is a finger-worn device, it also allows users to fully preserve
skin feedback of the hands. This paper demonstrates a proof-of-concept device,
reports the performance in hand-gesture recognition using random decision
forest (RDF) method, and, upon the gesture recognizer, presents a set of
interaction techniques including on-finger pinch-and-slide input, in-air
pinch-and-motion input, palm-writing input, and their interactions with the
environmental contexts. The experiment obtained an 84.75% recognition rate of
hand gesture input from a database of seven hand gestures collected from 15
participants. To our knowledge, CyclopsRing is the first ring-wearable device
that supports whole-hand and context-aware interactions.
FlexiBend: Enabling Interactivity of Multi-Part, Deformable Fabrications
Using Single Shape-Sensing Strip
Session 9B: Pens, Mice and Sensor Strips
/
Chien, Chin-yu
/
Liang, Rong-Hao
/
Lin, Long-Fei
/
Chan, Liwei
/
Chen, Bing-Yu
Proceedings of the 2015 ACM Symposium on User Interface Software and
Technology
2015-11-05
v.1
p.659-663
© Copyright 2015 ACM
Summary: This paper presents FlexiBend, an easily installable shape-sensing strip
that enables interactivity of multi-part, deformable fabrications. The flexible
sensor strip is composed of a dense linear array of strain gauges, giving it
shape-sensing capability. After installation, FlexiBend can simultaneously
sense user inputs in different parts of a fabrication or even capture the
geometry of a deformable fabrication.
The Influence of Individual Affective Factors on the Continuous Use of
Mobile Apps
Social Media for Business
/
Yeh, Yi-Hsuan
/
Chen, Belinda
/
Wu, Nien-Chu
HCIB 2015: 2nd International Conference on HCI in Business
2015-08-02
p.197-206
Keywords: Mobile apps; Task-Technology Fit; Value-Technology Fit; Subjective norm
© Copyright 2015 Springer International Publishing Switzerland
Summary: Mobile apps have attracted a substantial amount of attention in mobile
commerce. Usage behavior of consumers is always an important issue in this
research area. The objective of this study is to explore what factors will
affect an individual's continuance intention to use mobile apps. We propose a
research model that integrates the Task-Technology Fit (TTF) and Theory of
Reasoned Action (TRA), which are augmented with concepts of affective factors.
We conducted an online survey, and the results show that higher degrees of TTF
and VTF (Value-Technology Fit) resulted in a more positive attitude towards
using the mobile app. Subjective norm (SN) and attitude had strong, significant
impacts on users'
continuance intention to use the app. However, TTF and VTF had no significant
effect on the continuance intention to use the app.
WonderLens: Optical Lenses and Mirrors for Tangible Interactions on Printed
Paper
Tangible Interactions
/
Liang, Rong-Hao
/
Shen, Chao
/
Chan, Yu-Chien
/
Chou, Guan-Ting
/
Chan, Liwei
/
Yang, De-Nian
/
Chen, Mike Y.
/
Chen, Bing-Yu
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.1281-1284
© Copyright 2015 ACM
Summary: This work presents WonderLens, a system of optical lenses and mirrors for
enabling tangible interactions on printed paper. When users perform spatial
operations on the optical components, they deform the visual content that is
printed on paper, and thereby provide dynamic visual feedback on user
interactions without any display devices. The magnetic unit that is embedded in
each lens and mirror allows the unit to be identified and tracked using an
analog Hall-sensor grid that is placed behind the paper, so the system provides
additional auditory and visual feedback through different levels of embodiment,
further enhancing the interactivity with the printed content on the physical
paper.
Cyclops: Wearable and Single-Piece Full-Body Gesture Input Devices
Using Random Body Parts for Input
/
Chan, Liwei
/
Hsieh, Chi-Hao
/
Chen, Yi-Ling
/
Yang, Shuo
/
Huang, Da-Yuan
/
Liang, Rong-Hao
/
Chen, Bing-Yu
Proceedings of the ACM CHI'15 Conference on Human Factors in Computing
Systems
2015-04-18
v.1
p.3001-3009
© Copyright 2015 ACM
Summary: This paper presents Cyclops, a single-piece wearable device that sees its
user's whole body postures through an ego-centric view that is
obtained from a fisheye lens at the center of the user's body, allowing it
to see only the user's limbs and interpret body postures effectively. Unlike
currently available body gesture input systems that depend on external cameras
or distributed motion sensors across the user's body, Cyclops is a single-piece
wearable device that is worn as a pendant or a badge. The main idea proposed in
this paper is the observation of limbs from a central location of the body.
Owing to the ego-centric view, Cyclops turns posture recognition into a highly
controllable computer vision problem. This paper demonstrates a
proof-of-concept device, and an algorithm for recognizing static and moving
bodily gestures based on motion history images (MHI) and a random decision
forest (RDF). Four example applications of interactive bodily workout, a mobile
racing game that involves hands and feet, a full-body virtual reality system,
and interaction with a tangible toy are presented. The experiment on the bodily
workout demonstrates that, from a database of 20 body workout gestures that
were collected from 20 participants, Cyclops achieved a recognition rate of 79%
using MHI and simple template matching, which increased to 92% with the more
advanced machine learning approach of RDF.
Cyclops: Wearable and Single-Piece Full-Body Gesture Input Devices
Video Showcase Presentations
/
Chan, Liwei
/
Hsieh, Chi-Hao
/
Chen, Yi-Ling
/
Yang, Shuo
/
Huang, Da-Yuan
/
Liang, Rong-Hao
/
Chen, Bing-Yu
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.159
© Copyright 2015 ACM
Summary: This work presents Cyclops, a single-piece wearable device that sees its
user's whole body postures through an ego-centric view that is
obtained from a fisheye lens at the center of the user's body, allowing it
to see only the user's limbs and interpret body postures effectively. Unlike
currently available body gesture input systems that depend on external cameras
or distributed motion sensors across the user's body, Cyclops is a single-piece
wearable device that is worn as a pendant or a badge. Owing to the ego-centric
view, Cyclops turns posture recognition into a highly controllable computer
vision problem. We demonstrate a proof-of-concept device and an algorithm for
recognizing static and moving bodily gestures based on motion history images
(MHI) and a random decision forest (RDF). Four example applications of
interactive bodily workout, a mobile racing game that involves hands and feet,
a full-body virtual reality system, and interaction with a tangible toy are
presented.
Motorcycle Ride Care Using Android Phone
WIP Theme: Mobile Interactions
/
Chen, Bo-Han
/
Wong, Sai-Keung
/
Chang, Wei-Che
Extended Abstracts of the ACM CHI'15 Conference on Human Factors in
Computing Systems
2015-04-18
v.2
p.1525-1530
© Copyright 2015 ACM
Summary: We propose an Automatic Motorcycle Turn Signal (AMTS) system to assist
motorcyclists to automatically signal turns. The AMTS system utilizes the
gyroscope sensor of the Android phone to detect the turning direction of a
motorcycle and then the system turns on the light and sound accordingly. We
have evaluated the proposed AMTS system in indoor and outdoor environments. The
evaluation results indicate that the AMTS system could be useful to
automatically signal turns. We also conducted a questionnaire survey, whose
results show that the AMTS system is highly appreciated by motorcyclists.
"Twitter Archeology" of learning analytics and knowledge conferences
Curricula, network and discourse analysis
/
Chen, Bodong
/
Chen, Xin
/
Xing, Wanli
LAK'15: 2015 International Conference on Learning Analytics and Knowledge
2015-03-16
p.340-349
© Copyright 2015 ACM
Summary: The goal of the present study was to uncover new insights about the learning
analytics community by analyzing Twitter archives from the past four Learning
Analytics and Knowledge (LAK) conferences. Through descriptive analysis,
interaction network analysis, hashtag analysis, and topic modeling, we found:
extended coverage of the community over the years; increasing interactions
among its members regardless of peripheral and non-persistent participation;
increasingly dense, connected and balanced social networks; and more and more
diverse research topics. Detailed inspection of semantic topics uncovered
insights complementary to the analysis of LAK publications in previous
research.
It's about time: 4th international workshop on temporal analyses of learning
data
Workshop
/
Knight, Simon
/
Wise, Alyssa F.
/
Chen, Bodong
/
Cheng, Britte Haugan
LAK'15: 2015 International Conference on Learning Analytics and Knowledge
2015-03-16
p.388-389
© Copyright 2015 ACM
Summary: Interest in analyses that probe the temporal aspects of learning continues
to grow. The study of common and consequential sequences of events (such as
learners accessing resources, interacting with other learners and engaging in
self-regulatory activities) and how these are associated with learning
outcomes, as well as the ways in which knowledge and skills grow or evolve over
time are both core areas of interest. Learning analytics datasets are replete
with fine-grained temporal data: click streams; chat logs; document edit
histories (e.g. wikis, etherpads); motion tracking (e.g. eye-tracking,
Microsoft Kinect), and so on. However, the emerging area of temporal analysis
presents both technical and theoretical challenges in appropriating suitable
techniques and interpreting results in the context of learning. The learning
analytics community offers a productive focal ground for exploring and
furthering efforts to address these challenges. This workshop, the fourth in a
series on temporal analysis of learning, provides a focal point for analytics
researchers to consider issues around and approaches to temporality in learning
analytics.
Using point-light movement as peripheral visual guidance for scooter
navigation
Posters & Demonstrations
/
Tseng, Hung-Yu
/
Liang, Rong-Hao
/
Chan, Liwei
/
Chen, Bing-Yu
Proceedings of the 2015 Augmented Human International Conference
2015-03-09
p.177-178
© Copyright 2015 ACM
Summary: This work presents a preliminary study of utilizing point-light movement in
scooter drivers' peripheral vision for turn-by-turn navigation. We examine six
types of basic 1D point-light movement. The results suggest that several of
them can be easily picked up and comprehended by peripheral vision in parallel
with the ongoing foveal vision task, and can be used to provide effective and
distraction-free route-guiding experiences for scooter driving.
Discovering the City by Mining Diverse and Multimodal Data Streams
Multimedia Grand Challenge
/
Kuo, Yin-Hsi
/
Chen, Yan-Ying
/
Chen, Bor-Chun
/
Lee, Wen-Yu
/
Wu, Chun-Che
/
Lin, Chia-Hung
/
Hou, Yu-Lin
/
Cheng, Wen-Feng
/
Tsai, Yi-Chih
/
Hung, Chung-Yen
/
Hsieh, Liang-Chi
/
Hsu, Winston
Proceedings of the 2014 ACM International Conference on Multimedia
2014-11-03
p.201-204
© Copyright 2014 ACM
Summary: This work attempts to tackle the IBM grand challenge -- seeing the daily
life of New York City (NYC) in various perspectives by exploring rich and
diverse social media content. Most existing works address this problem by
relying on a single media source and cover only limited life aspects. Because different
social media are usually chosen for specific purposes, multiple social media
mining and integration are essential to understand a city comprehensively. In
this work, we first discover the similar and unique natures (e.g., attractions,
topics) across social media in terms of visual and semantic perceptions. For
example, Instagram users share more food and travel photos while Twitter users
discuss more about sports and news. Based on these characteristics, we analyze
a broad spectrum of life aspects -- trends, events, food, wearing and
transportation in NYC by mining a huge amount of diverse and freely available
media (e.g., 1.6M Instagram photos, 5.3M Twitter posts). Because transportation
logs are hardly available in social media, the NYC Open Data (e.g., 6.5B subway
station transactions) is leveraged to visualize temporal traffic patterns.
Furthermore, the experiments demonstrate that our approaches can effectively
overview urban life with considerable technical improvements, e.g., a 16%
relative gain in food recognition accuracy from a hierarchical cross-media
learning strategy, and a 10-fold reduction in the feature dimensions of
sentiment analysis without sacrificing precision.
Automatic Facial Image Annotation and Retrieval by Integrating Voice Label
and Visual Appearance
Posters 2
/
Jheng, Hong-Wun
/
Chen, Bor-Chun
/
Chen, Yan-Ying
/
Hsu, Winston
Proceedings of the 2014 ACM International Conference on Multimedia
2014-11-03
p.1001-1004
© Copyright 2014 ACM
Summary: Annotation is important for managing and retrieving a large amount of
photos, but it is generally labor-intensive and time-consuming. However,
speaking while taking photos is straightforward and effortless, and using voice
for annotation is faster than typing words. To best reduce the manual cost of
annotating photos, we propose a novel framework which utilizes the scarce
spoken annotations recorded while capturing as voice labels and automatically
label every facial image in the photo collection. To accomplish this goal, we
employ a probabilistic graphical model which integrates voice labels and visual
appearances for inference. Combined with group prior estimation and gender
attribute association, we can achieve an outstanding performance on the
proposed synthesized group photo collections.
Facial Attribute Space Compression by Latent Human Topic Discovery
Posters 3
/
Lin, Chia-Hung
/
Chen, Yan-Ying
/
Chen, Bor-Chun
/
Hou, Yu-Lin
/
Hsu, Winston
Proceedings of the 2014 ACM International Conference on Multimedia
2014-11-03
p.1157-1160
© Copyright 2014 ACM
Summary: Facial attribute is important information for a variety of machine vision
tasks including recognition, classification, and retrieval. Detecting many
facial attributes, such as gender and age, consumes considerable computation
and storage resources. Therefore, we
propose a compression framework to find fewer significant Latent Human Topics
(LHT) to approximate more facial attributes. LHT is a combination of attribute
correlation by transferring facial attribute space to compressional space with
Singular Value Decomposition (SVD). Using the proposed scheme, we can easily
detect the facial attributes from a face image via fast reconstructing the
compressed labels automatically detected by a few LHT classifiers. Experimental
results show that our system can achieve similar performance with substantially
fewer dimensions compared to the original number of facial attributes, and it
even shows slight improvements because LHTs carry informative attribute
correlations learned from data.
GaussStones: shielded magnetic tangibles for multi-token interactions on
portable displays
Novel hardware II
/
Liang, Rong-Hao
/
Kuo, Han-Chih
/
Chan, Liwei
/
Yang, De-Nian
/
Chen, Bing-Yu
Proceedings of the 2014 ACM Symposium on User Interface Software and
Technology
2014-10-05
v.1
p.365-372
© Copyright 2014 ACM
Summary: This work presents GaussStones, a system of shielded magnetic tangibles
designed to support multi-token interactions on portable displays. Unlike
prior works in sensing magnetic tangibles on portable displays, the proposed
tangible design applies magnetic shielding by using an inexpensive galvanized
steel case, which eliminates interference between magnetic tangibles. An analog
Hall-sensor grid can recognize the identity of each shielded magnetic unit
since each unit generates a magnetic field with a specific intensity
distribution and/or polarization. Combining multiple units as a knob further
allows for resolving additional identities and their orientations. Enabling
these features improves support for applications involving multiple tokens.
Thus, using prevalent portable displays provides generic platforms for tangible
interaction design.
Demo hour
Demo hour
/
Liang, Rong-Hao
/
Chan, Liwei
/
Tseng, Hung-Yu
/
Kuo, Han-Chih
/
Huang, Da-Yuan
/
Yang, De-Nian
/
Chen, Bing-Yu
/
Grosse-Puppendahl, Tobias
/
Beck, Sebastian
/
Wilbers, Daniel
/
Kuijper, Arjan
/
Heo, Heejeong
/
Park, Hyungkun
/
Kim, Seungki
/
Chung, Jeeyong
/
Lee, Geehyuk
/
Lee, Woohun
/
Unander-Scharin, Carl
/
Unander-Scharin, Åsa
/
Höök, Kristina
/
Elblaus, Ludvig
interactions
2014-09
v.21
n.5
p.6-9
© Copyright 2014 ACM
Summary: Interactivity is a unique forum of the ACM CHI Conference that showcases
hands-on demonstrations, novel interactive technologies, and artistic
installations. At CHI 2014, we aimed to create a "one of a CHInd" Interactivity
experience with more than 60 interactive exhibits to highlight the diverse
group of computer scientists, sociologists, designers, psychologists, artists,
and many more that make up the CHI community. Julie Rico Williamson and Steven
Benford, CHI Interactivity Chairs
Crowd Target Positioning under Multiple Cameras Based on Block
Correspondence
Developing Distributed, Pervasive and Intelligent Environments
/
Zhu, Qiuyu
/
Yuan, Sai
/
Chen, Bo
/
Wang, Guowei
/
Xu, Jianzhong
/
Zhang, Lijun
DAPI 2014: 2nd International Conference on Distributed, Ambient, and
Pervasive Interactions
2014-06-22
p.509-518
Keywords: multiple cameras; constraint of line-of-sight; target positioning; blocks
correspondence
© Copyright 2014 Springer International Publishing
Summary: In the research of crowd analysis in a multi-camera environment, the key
problem is how to obtain target correspondence between cameras. The two most
popular methods are the epipolar geometric constraint and the homography matrix
constraint. For large view angles and wide baselines, both methods have obvious
disadvantages and low performance. This paper utilizes a new correspondence
algorithm based on the line-of-sight constraint for crowd positioning. Since
the target area is discrete, the paper proposes a blocking policy: dividing the
target regions into blocks of a certain size. This approach provides
appropriate redundant information for each object and decreases the risk of
missing objects caused by the large view angles and wide baselines between
different perspective images. The experimental results
show that the method has a high accuracy and a lower computational complexity.
GaussBricks: magnetic building blocks for constructive tangible interactions
on portable displays
Tangible interactions and technologies
/
Liang, Rong-Hao
/
Chan, Liwei
/
Tseng, Hung-Yu
/
Kuo, Han-Chih
/
Huang, Da-Yuan
/
Yang, De-Nian
/
Chen, Bing-Yu
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.1
p.3153-3162
© Copyright 2014 ACM
Summary: This work describes a novel building block system for tangible interaction
design, GaussBricks, which enables real-time constructive tangible interactions
on portable displays. Given its simplicity, the mechanical design of the
magnetic building blocks facilitates the construction of configurable forms.
The form constructed by the magnetic building blocks, which are connected by
the magnetic joints, allows users to manipulate it stably with various elastic
force-feedback mechanisms. With an analog Hall-sensor grid mounted to its back,
a portable display determines the geometrical configuration and detects various
user interactions in real time. This work also introduces several methods to
enable shape changing, multi-touch input, and display capabilities in the
construction. The proposed building block system enriches how individuals
interact with the portable displays physically.
GaussBricks: magnetic building blocks for constructive tangible interactions
on portable displays
Video showcase presentations
/
Liang, Rong-Hao
/
Chan, Liwei
/
Tseng, Hung-Yu
/
Kuo, Han-Chih
/
Huang, Da-Yuan
/
Yang, De-Nian
/
Chen, Bing-Yu
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.2
p.181-182
© Copyright 2014 ACM
Summary: This work describes a novel building block system for tangible interaction
design, GaussBricks, which enables real-time constructive tangible interactions
on portable displays. Given its simplicity, the mechanical design of the
magnetic building blocks facilitates the construction of configurable forms.
The form constructed by the magnetic building blocks, which are connected by
the magnetic joints, allows users to manipulate it stably with various elastic
force-feedback mechanisms. With an analog Hall-sensor grid mounted to its back,
a portable display determines the geometrical configuration and detects various
user interactions in real time. This work also introduces several methods to
enable shape changing, multi-touch input, and display capabilities in the
construction. The proposed building block system enriches how individuals
interact with the portable displays physically.
GaussBricks: magnetic building blocks for constructive tangible interactions
on portable displays
Interactivity
/
Liang, Rong-Hao
/
Chan, Liwei
/
Tseng, Hung-Yu
/
Kuo, Han-Chih
/
Huang, Da-Yuan
/
Yang, De-Nian
/
Chen, Bing-Yu
Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems
2014-04-26
v.2
p.587-590
© Copyright 2014 ACM
Summary: This work describes a novel building block system for tangible interaction
design, GaussBricks, which enables real-time constructive tangible interactions
on portable displays. Given its simplicity, the mechanical design of the
magnetic building blocks facilitates the construction of configurable forms.
The form constructed by the magnetic building blocks, which are connected by
the magnetic joints, allows users to manipulate it stably with various elastic
force-feedback mechanisms. With an analog Hall-sensor grid mounted to its back,
a portable display determines the geometrical configuration and detects various
user interactions in real time. This work also introduces several methods to
enable shape changing, multi-touch input, and display capabilities in the
construction. The proposed building block system enriches how individuals
interact with the portable displays physically.