| When Does a Difference Make a Difference? A Snapshot on Global Icon Comprehensibility | | BIBAK | Full-Text | 3-12 | |
| Sonja Auer; Ester Dick | |||
| Global markets require global solutions, especially in user interface
design. There are differences between cultures -- but do those differences call
for different icon designs? This paper provides a snapshot on icon
comprehensibility in China, the US and Germany. The icon set was derived from an
actual product to enable valid results. A web-based study with 135 participants
from China, the US and Germany was conducted. Icon recognition rates among the
Chinese participants were significantly lower than among US and German
participants. Still, the mean rating for all three countries was above 69% and
thus far removed from guesswork. Practical implications for global icon design
are discussed based on these findings. Keywords: Internationalization/Localization; Icon Evaluation; Icon Design; User
Interface Design; Visual Design; Quantitative Empirical User Studies | |||
| Interface and Visualization Metaphors | | BIBAK | Full-Text | 13-22 | |
| Vladimir L. Averbukh; Mikhail Bakhterev; Aleksandr Baydalin; Damir Ismagilov; Polina Trushenkova | |||
| The paper is devoted to problems of computer metaphors, namely the interface
metaphor and the visualization metaphor. The interface metaphor is considered to be
the basic idea of likening interactive objects to model objects of the
application domain. A visualization metaphor is defined as a map establishing
the correspondence between concepts and objects of the application domain under
modeling and a system of similarities and analogies. This map generates a
set of views and a set of methods for communication with visual objects. Some
positions of metaphor theory are discussed. The concept of metaphor action is
suggested, a "formula" of metaphor is constructed, and a set of metaphor examples
is analyzed. A priori quality criteria for interface and visualization
metaphors are suggested. These criteria allow both evaluating existing metaphors
and searching for adequate metaphors when designing new specialized systems. Keywords: Computer Metaphor; Visualization; Interface | |||
| Displays Attentive to Unattended Regions: Presenting Information in a Peripheral-Vision-Friendly Way | | BIBAK | Full-Text | 23-31 | |
| Mon-Chu Chen; Roberta L. Klatzky | |||
| This study proposes that a visual attentive user interface should present
information in a peripheral-vision-friendly way, rather than degrading the
display resolution for unattended areas, as is sometimes practiced. It suggests
that information presented in unattended areas could advantageously be
perceived by our peripheral vision without compromising the primary task
performance. The paper will discuss an empirical study in which several
motion-based stimuli were examined in the periphery in a dual-task scenario. A
proposed new GPS navigation system design will then be described to
demonstrate the concept of peripheral-vision-friendliness. Keywords: Attentive User Interface; Peripheral Vision Friendly; Dual-Task Performance;
Peripheral Visual Design; GPS Navigation System | |||
| Screen Layout on Color Search Task for Customized Product Color Combination Selection | | BIBAK | Full-Text | 32-40 | |
| Cheih-Ying Chen; Ying-Jye Lee; Fong-Gong Wu; Chi-Fu Su | |||
| This article describes experimental investigations of the effects of color-name
display and screen layout on customized product color combination
selection. In the experiment, 6 interface designs were developed by
systematically varying 2 factors (interface layout type and color name
type) in order to explore the relations between interfaces to the massive
number of product color combinations and users' performance. Results from the
experiment show that the layout with itemized color chips, which is divided into
groups according to the customizable color module parts of the product, provides
the best grouping for customized product color combination selection in terms of
users' search time. Moreover, since users view the various product color combinations
directly, with or without the aid of color names, it is important that the
displayed colors are correct, because users make judgments and choices
according to the product colors on the screen. Keywords: Screen layout; Mass customization; Product color; Color name | |||
| Experimental Comparison of Adaptive vs. Static Thumbnail Displays | | BIBAK | Full-Text | 41-48 | |
| Pilsung Choe; Chulwoo Kim; Mark R. Lehto; Jan P. Allebach | |||
| Keyword search is a very important method to find information on Web sites
along with link-based browsing. How an information retrieval system displays
search results is very important because users spend most of their time in
finding, reading and understanding retrieved information. As an application of
information retrieval systems, a self-help print quality troubleshooting system
was introduced. As a method to show search results, displaying thumbnails is
very useful in print defect diagnosis because users don't have to read and
understand complex information in text. This study compared static thumbnail(s)
and adaptive thumbnails to display search results to diagnose print defects.
Results showed that the one-thumbnail static display was worse in terms of user
performance and preference. However, there was no significant difference between
the three-thumbnail static display and the adaptive display, although the
three-thumbnail static display was better than the adaptive display on average. Keywords: Adaptive display; static display; thumbnail display; keyword search; search;
information retrieval; diagnosis; troubleshooting; print defect diagnosis | |||
| Improving Document Icon to Re-find Efficiently What You Need | | BIBAK | Full-Text | 49-52 | |
| Changzhi Deng; Mingjun Zhou; Feng Tian; Guozhong Dai; Hongan Wang | |||
| It is common for documents to be represented by document icons in graphical
user interfaces. The document icon helps the user retrieve documents, but
it is difficult to distinguish a particular document within a collection of
documents the user has accessed. Our paper presents a document icon to which
users can add subjective values and marks. We then describe a system,
ex-explorer, with which users can browse and search the extended document
icons. We found that it is easy to re-find a document to which users have
added annotations or marks themselves. Keywords: document icon; annotation; mark; subjective value | |||
| The Design of a Computer Mouse Providing Three Degrees of Freedom | | BIBAK | Full-Text | 53-62 | |
| Daniel Fallman; Anneli Mikaelsson; Björn Yttergren | |||
| We present the process of designing and implementing a 3DOF mouse.
First, we provide a review of the current literature in the field. Then we
introduce a focus group workshop activity underlying the whole design process,
pointing us towards graphical design applications and 3D modeling tools. Third,
we present our prototype design process in some detail, especially noting the
important role we believe product semantics plays. We argue that 3DOF mice are
most useful for small but precise rotation movements. If the extra degree of
freedom provided by 3DOF mice over 2DOF mice is limited to such subtle
manipulation tasks, we believe users might be more willing to accept them. Keywords: 3D Interaction; input devices; multiple degree-of-freedom; 3DOF; computer
mouse | |||
| Facilitating Conditional Probability Problems with Visuals | | BIBAK | Full-Text | 63-71 | |
| Vince Kellen; Susy S. Chan; Xiaowen Fang | |||
| In tasks such as disease diagnosis, interpretation of evidence in criminal
trials and management of security and risk data, people need to process
conditional probabilities to make critical judgments and decisions. As
dual-coding theory and the cognitive theory of multimedia learning (CTML) would
predict, visual representations (VRs) should aid in these tasks. Conditional
probability problems are difficult and require subjects to build a mental model
of set inclusion relationships to solve them. Evidence from neurological
research confirms that mental model construction relies on visual spatial
processing. Prior research has shown conflicting accounts of whether visuals
aid in these problems. Prior research has also revealed that individuals differ
in their ability to perform spatial processing tasks. Do visuals help solve
these problems? Do visualization interface designers need to take into account
the nuances of spatial processing and individual differences? This study uses a
3x2 factorial design to examine the effects of subjects' spatial
abilities (high or low) and of visual versus text representations on user performance
and satisfaction. Keywords: Information visualization; Bayesian reasoning; conditional probabilities;
dual-coding; cognitive theory of multimedia learning; mental models; individual
differences; spatial ability | |||
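To make the task domain concrete, consider an illustrative base-rate problem of the kind such studies present (the numbers are hypothetical and not taken from this paper): a condition with 1% prevalence and a test with 90% sensitivity and an 8% false-positive rate. Bayes' rule gives

```latex
\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
            = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.08 \times 0.99} \approx 0.10 ,
\]
```

a counterintuitively low value, which is what makes visual support for the underlying set-inclusion relationships attractive.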
| Interface Design Technique Considering Visual Cohesion-Rate by Object Unit | | BIBA | Full-Text | 72-81 | |
| Chang-Mog Lee; Ok-Bae Chang | |||
| With the application development environment rapidly changing, the design of an interface that supports the complex interactions between humans and computers is required. In addition, profound knowledge of various fields is required to cover the requirements of different customers. Therefore, in this paper, we suggest an interface design technique that considers the cohesion rate of each object of the user interface. To accomplish this, specific and detailed models to embody the user interface are presented on the basis of the following classification: 1) event object model, 2) task object model, 3) transaction object model, 4) form object model. If this detailed modeling work is performed, the visual cohesion of the prototype user interface can be improved and, on this basis, even unskilled designers can construct an improved user interface. Moreover, the proposed method promotes understanding of the business process and reduces the frequency of system development. | |||
| A Color Adjustment Method for Automatic Seamless Image Blending | | BIBAK | Full-Text | 82-91 | |
| Xianji Li; Dongho Kim | |||
| In this paper we present a stable automatic system for image composition,
which can effectively control the color difference between two images and produce a
seamless composite image with color continuity. It is a user-friendly system
that reduces the user's manual tasks. We observe that the Poisson image editing
method of Perez et al. [8] automatically blends images with seamless boundaries.
However, the color of the user-selected region can change after applying this
method, so the object loses its original color tone after blending. To solve
this problem, we first detect the cases in which the object color changes
strongly. This is done by calculating the color temperatures of the two input images
and comparing their white balance with each other. Next, a distance ratio rule is
applied to control the pixels included in the region between the user-selected
boundary and the object boundary. Keywords: image composition; Poisson Image Editing; object color; color temperatures;
distance ratio rule | |||
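The white-balance comparison step mentioned in the abstract can be illustrated with a minimal sketch. The gray-world estimate and the threshold below are simplifying assumptions for illustration only; they are not the authors' actual color-temperature computation.

```python
import numpy as np

def gray_world_white_point(image):
    """Estimate a white point as the mean RGB of the image (gray-world assumption)."""
    return image.reshape(-1, 3).mean(axis=0)

def needs_color_adjustment(source, target, threshold=0.15):
    """Flag a large white-balance mismatch between the pasted object and the target image.

    `source` and `target` are float RGB arrays in [0, 1]; `threshold` is a
    hypothetical tolerance, not a value from the paper.
    """
    wp_src = gray_world_white_point(source)
    wp_tgt = gray_world_white_point(target)
    # Relative per-channel difference of the estimated white points.
    diff = np.abs(wp_src - wp_tgt) / (wp_tgt + 1e-6)
    return bool(diff.max() > threshold)
```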
| Interactive Visual Decision Tree Classification | | BIBAK | Full-Text | 92-105 | |
| Yan Liu; Gavriel Salvendy | |||
| Data mining (DM) modeling is a process of transforming information enfolded
in a dataset into a form amenable to human cognition. Most current DM tools
only support automatic modeling, during which users have little interaction with
computing machines other than assigning some parameter values at the beginning
of the process. Arbitrary selection of parameter values, however, can lead to
an unproductive modeling process. Automatic modeling also downplays the key
roles played by humans in current knowledge discovery systems. Classification
is the process of finding models that distinguish data classes in order to
predict the class of objects whose class labels are unknown. Decision tree is
one of the most widely used classification tools. A novel interactive visual
decision tree (IVDT) classification process is proposed in this research;
it aims to enhance users' understanding of the decision tree classification
process and to improve its effectiveness by
combining the flexibility, creativity, and general knowledge of humans with the
enormous storage capacity and computational power of computers. An IVDT for
categorical input attributes has been developed and experimented on twenty
subjects to test three hypotheses regarding its potential advantages. The
experimental results suggested that compared to the automatic modeling process
as typically applied in current decision tree modeling tools, the IVDT process can
improve the effectiveness of modeling in terms of producing trees with
relatively high classification accuracies and small sizes, enhance users'
understanding of the algorithm, and give them greater satisfaction with the
task. Keywords: visual data mining; interactive modeling; model visualization; data
visualization | |||
| Anchored Maps: Visualization Techniques for Drawing Bipartite Graphs | | BIBAK | Full-Text | 106-114 | |
| Kazuo Misue | |||
| A method of drawing anchored maps for bipartite graphs is presented. Suppose
that the node set of a bipartite graph is divided into set A and set B. On an
anchored map of the bipartite graph, the nodes in A, which are called
"anchors," are arranged on the circumference, and the nodes in B, which are
called "free nodes," are arranged at suitable positions in relation to the
adjacent anchors. This article describes aesthetic criteria that are employed
according to the purpose of drawing anchored maps and indices which are used to
arrange the anchors so that they satisfy the criteria. It also shows an example
of obtaining overviews of networks by using the developed technique. Keywords: graph drawing; anchored map; bipartite graph; knowledge mining | |||
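A minimal sketch of the basic anchored-map layout described above (anchors evenly spaced on a circle, free nodes placed near their adjacent anchors) might look as follows; the paper's aesthetic criteria and anchor-ordering indices are not reproduced here, and the barycenter placement is an assumption.

```python
import math

def anchored_map_layout(anchors, free_nodes, edges, radius=1.0):
    """Place anchors evenly on a circle and each free node at the barycenter
    of its adjacent anchors.

    anchors:    list of anchor ids (list order defines circumferential placement)
    free_nodes: list of free-node ids
    edges:      list of (free_node, anchor) pairs
    """
    pos = {}
    n = len(anchors)
    for i, a in enumerate(anchors):
        angle = 2 * math.pi * i / n
        pos[a] = (radius * math.cos(angle), radius * math.sin(angle))
    for f in free_nodes:
        adj = [a for (g, a) in edges if g == f]
        if adj:
            xs = [pos[a][0] for a in adj]
            ys = [pos[a][1] for a in adj]
            pos[f] = (sum(xs) / len(adj), sum(ys) / len(adj))
        else:
            pos[f] = (0.0, 0.0)  # isolated free node at the center
    return pos
```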
| ParSketch: A Sketch-Based Interface for a 2D Parametric Geometry Editor | | BIBAK | Full-Text | 115-124 | |
| Ferran Naya; Manuel Contero; Nuria Aleixos; Pedro Company | |||
| ParSketch is a software prototype to evaluate the usability and
functionality of a sketching interface aimed at defining 2D parametric
sections. Currently, ParSketch interprets strokes which can be recognized as
geometry (line, arc, circle, ellipse, or composed entities that are
automatically segmented into those basic entities), or graphic gestures
representing constraints (dimension, parallel, perpendicular, tangent,
concentric, horizontal or vertical). From the functionality point of view,
ParSketch compares to current commercial parametric CAD applications, as it
offers many of the features provided by such applications. A theoretical
analysis of the efficiency component of usability is provided that justifies
the potential capability of sketching interfaces to compete with classical WIMP
applications. Finally, a usability study is presented, which places special
emphasis on the satisfaction component of usability. Keywords: sketching; parametric drawing; usability of sketching interfaces | |||
| The Effects of Various Visual Enhancements During Continuous Pursuit Tracking Tasks | | BIBAK | Full-Text | 125-132 | |
| Jaekyu Park; Sung Ha Park | |||
| The present study investigated the effects of visual enhancements in a
continuous pursuit tracking task. Participants performed simulated tracking
tasks and were instructed to maintain a pointer on a moving horizontal bar of
an indicator display using a computer mouse. A within-subject factorial design
was implemented with three levels of visual enhancement and three levels of
task difficulty. Subjective ratings of workload (using modified Cooper-Harper
rating scale) and tracking errors were obtained as performance measures. ANOVA
results showed that the tracking error and subjective workload were
significantly affected by each of the independent variables (i.e., the types of
visual enhancement and task difficulties). The result implies that visual
enhancement cues can provide additional visual information about target location
during tracking tasks. Keywords: Tracking Task; Visual Display; Visual Enhancement | |||
| Stylus Enhancement to Enrich Interaction with Computers | | BIBAK | Full-Text | 133-142 | |
| Yu Suzuki; Kazuo Misue; Jiro Tanaka | |||
| We introduce a technique to enrich user interaction with a computer through
a stylus. This technique allows a stylus to be manipulated in the air to
operate applications in new ways. To translate the stylus manipulation into
application behavior, our approach attaches an accelerometer to
the stylus. Such a stylus allows control through new operations like rolling
and shaking, as well as through conventional operations like tapping or making
strokes. An application can use these operations to switch modes or change
parameters. We have implemented a number of applications, called the "Oh!
Stylus Series," that can be used with our proposed technique. Keywords: Stylus Enhancement; Interaction; Accelerometer; Behavior | |||
| An Experimental Evaluation of Information Visualization Techniques and Decision Style | | BIBAK | Full-Text | 143-150 | |
| Wan Adilah Wan Adnan; Nor Laila Md. Noor; Rasimah Aripin | |||
| This study aims to investigate the extent to which information visualization
(IV) techniques and decision style affect decision performance and user
preferences in a decision support environment. The study adopted an
experimental method. Findings from this study provide theoretical, empirical
and practical contributions. The results showed that there were significant
differences in decision performance and user preference across IV techniques
and decision style. The findings have important implications for decision
support system (DSS) designers, and raise important research issues for
future work. Keywords: information visualization techniques; decision style; human computer
interaction; decision support system | |||
| Enhancing the Map Usage for Indoor Location-Aware Systems | | BIBAK | Full-Text | 151-160 | |
| Hui Wang; Henning Lenz; Andrei Szabo; Joachim Bamberger; Uwe D. Hanebeck | |||
| Location-aware systems are receiving more and more interest in both academia
and industry due to their promising prospects in a broad category of
so-called Location-Based Services (LBS). The map interface plays a crucial role
in location-aware systems, especially for indoor scenarios. This paper
addresses the usage of map information in a Wireless LAN (WLAN)-based indoor
navigation system. We describe the benefit of using map information in multiple
algorithms of the system, including radio-map generation, tracking, semantic
positioning and navigation. Then we discuss how to represent or model the
indoor map to fulfill the requirements of intelligent algorithms. We believe
that a vector-based multi-layer representation is the best choice for indoor
location-aware systems. Keywords: Location-Aware Systems; WLAN Positioning; Map Representation | |||
| Freehand Sketching Interfaces: Early Processing for Sketch Recognition | | BIBAK | Full-Text | 161-170 | |
| Shu-xia Wang; Man-tun Gao; Le-hua Qi | |||
| Freehand sketching interfaces allow the user to directly interact with tasks
without worrying about low-level commands. The paper presents a method for
interpreting on-line freehand sketches and describes a human-computer interface
prototype system for freehand sketch recognition (FSR) that is designed to infer
the designer's intention and interpret the input sketch as more exact 2D
geometric primitives: straight lines, polylines, circles, circular arcs,
ellipses, elliptical arcs, hyperbolas and parabolas. Depending on whether a
stroke needs to be segmented, it is classified as a single primitive or a
composite primitive. Based on the open/closed characteristic and a
semi-invariant, the conic type and category of a freehand sketch are defined for
subdividing conic curves. The recognition approach for composite primitives
consists of three stages. The effectiveness of the algorithm is demonstrated
preliminarily by experiments. Keywords: Freehand sketching interface; Recognition; Segmentation; Stroke | |||
| Bilingual Mapping Visualizations as Tools for Chinese Language Acquisition | | BIBA | Full-Text | 171-180 | |
| Jens Wissmann; Gisela Susanne Bahr | |||
| We present the approach and prototype of a system that supports the acquisition of bilingual knowledge. In this paper we focus on the domain of vocabulary acquisition for Chinese. Mapping visualizations, especially Bilingual Knowledge Maps [1], can be used to foster acquisition of word knowledge, dictionary navigation, and testing. We developed style sheets that map between knowledge representation and visualization tailored for Chinese-English bilingual data. These are used, on the one hand, to generate maps that visualize knowledge in a pedagogically reasonable way; on the other hand, the user input and the knowledge base can be queried. This more sophisticated functionality is enabled by using Semantic Web techniques. These techniques further allow us to integrate different (possibly distributed) data sources that contain relevant relationships. | |||
| The Perceptual Eye View: A User-Defined Method for Information Visualization | | BIBAK | Full-Text | 181-190 | |
| Liang-Hong Wu; Ping-Yu Hsu | |||
| With growing volumes of data, exploring the relationships within such
huge amounts of data is difficult. Information visualization uses the human
perception system to assist users in analyzing complex relationships, and
graphical hierarchy trees are commonly used to present the relationships among the
data. Conventional information visualization approaches fail to consider human
factors: they only provide a fixed degree of detail to different users. However,
different users have different perceptions. A well-known information
visualization called 'Magic Eye View' uses a three-dimensional interaction to
allow the user to control the degree of detail he would like. However, it fails
to consider some important focus + context features such as the smooth
transition of the focus region and the global context. In this paper, we
propose a novel information visualization method, called the 'Perceptual Eye
View,' by which users may control the focus points three-dimensionally enabling
different users to view their user-defined degree of detail of information
space and to perceive based on their own knowledge and perception. The results
demonstrate that our proposed method improves the 'Magic Eye View' by providing
smooth transition of the focus region and the global context, which are
important focus+context features that the 'Magic Eye View' fails to consider. Keywords: Human-Computer Interaction; Information Visualization; Human Perception | |||
| A Discriminative Color Quantization Depending on the Degree of Focus | | BIBAK | Full-Text | 191-196 | |
| Hong-Taek Yang; Doowon Paik | |||
| In this paper, we propose a discriminative color quantization algorithm
that depends on the degree of focus of image regions. When we take pictures, we
usually focus on the object that we want to emphasize. This means that the focused
area of the photograph contains important information. If the focused area is
displayed with more colors, we can express the important information in more
detail. This paper proposes a color quantization method that determines the
focused area and assigns more colors to that area. Keywords: Color quantization; focus measure; focused area detection | |||
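The idea of spending more of the palette on the focused area can be sketched as follows. The Laplacian-variance focus measure and the proportional allocation are illustrative assumptions, not necessarily the method used in the paper.

```python
import numpy as np

def focus_measure(gray_region):
    """Simple sharpness proxy: variance of a finite-difference Laplacian."""
    lap = (np.roll(gray_region, 1, 0) + np.roll(gray_region, -1, 0)
           + np.roll(gray_region, 1, 1) + np.roll(gray_region, -1, 1)
           - 4 * gray_region)
    return float(lap.var())

def allocate_palette(regions, total_colors=256, min_colors=8):
    """Split a global color budget across regions in proportion to their focus score.

    Each region keeps at least `min_colors`, so the sum may slightly exceed
    `total_colors`; a real quantizer would renormalize.
    """
    scores = np.array([focus_measure(r) for r in regions]) + 1e-9
    weights = scores / scores.sum()
    counts = np.maximum(min_colors, (weights * total_colors).astype(int))
    return counts.tolist()
```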
| Getting Lost? Touch and You Will Find! The User-Centered Design Process of a Touch Screen | | BIBAK | Full-Text | 197-206 | |
| Bieke Zaman; Rogier Vermaut | |||
| Recent reforms in office concepts have led to new intensification strategies
that aim at more flexibility and efficiency. Hot desking is one of these new
working practices that reduces office space per worker. These new office
concepts, however, pose new challenges and problems to solve. This paper
describes the development phase of an innovative touch screen application for
location based services to overcome the problematic edge effects of hot desking
such as missing workspace awareness and poor communication. We followed a
user-centered design (UCD) process to develop and test the interface so that it
could be gradually modified and tailored to the demands and expectations of the
end users. First, a methodological overview of the different phases of the
UCD-process is given. Then, the results of each phase are discussed, focusing
on several interface elements. Finally, the most important and striking results
are summarized. Keywords: Touch screen interface; hot desking; user-centered design process; usability
testing; contextual inquiry; conceptual model; paper prototyping | |||
| CoConceptMap: A System for Collaborative Concept Mapping | | BIBAK | Full-Text | 207-213 | |
| Mingjun Zhou; Xiang Ao; Lishuang Xu; Feng Tian; Guozhong Dai | |||
| Concept mapping is a technique for visualizing the relationships between
different concepts, and collaborative concept mapping is used to model
knowledge and transfer expert knowledge. Because they lack certain features,
existing systems cannot support collaborative concept mapping effectively. In
this paper, we analyze the collaborative concept mapping process according to
the theory of distributed cognition, and argue for the functions effective systems
ought to include. A collaborative concept mapping system should have the
following features: visualization of the concept map, flexible collaboration styles,
support for natural interaction, knowledge management and history management.
Furthermore, we describe each feature in detail. Finally, a prototype system
has been built to fully explore the above technologies. Keywords: collaborative concept mapping; distributed cognition; pen-based user
interface | |||
| User Expectations from Dictation on Mobile Devices | | BIBA | Full-Text | 217-225 | |
| Santosh Basapur; Shuang Xu; Mark Ahlenius; Young Seok Lee | |||
| Mobile phones, with their increasing processing power and memory, are enabling a diversity of tasks. The traditional text entry method using the keypad is falling short in numerous ways. Some solutions to this problem include: QWERTY keypads on phones, external keypads, virtual keypads on table tops (Siemens at CeBIT '05) and, last but not least, automatic speech recognition (ASR) technology. Speech recognition allows for dictation, which facilitates text input via voice. Despite the progress, ASR systems still do not perform satisfactorily in mobile environments. This is mainly due to the complexity of capturing large vocabularies spoken by diverse speakers in various acoustic conditions. Therefore, dictation has its advantages but also comes with its own set of usability problems. The objective of this research is to uncover the various uses and benefits of using dictation on a mobile phone. This study focused on the users' needs, expectations, and their concerns regarding the new input medium. Focus groups were conducted to investigate and discuss current data entry methods, the potential use and usefulness of a dictation feature, users' reactions to errors from ASR during dictation, and possible error correction methods. Our findings indicate a strong requirement for dictation. All participants perceived dictation to be very useful, as long as it is easily accessible and usable. Potential applications for dictation were found in two distinct areas, namely communication and personal use. | |||
| Design Guidelines for PDA User Interfaces in the Context of Retail Sales Support | | BIBAK | Full-Text | 226-235 | |
| Rainer Blum; Karim Khakzar | |||
| The development of an in-store sales support system that focuses on the
"virtual try-on" of clothing is the main goal of the research project
"IntExMa". Within this scope we investigate to what extent mobile devices can
support the tasks of the retail sales staff in their daily personal sales
activities. In doing so, usability is regarded as the central quality criterion.
This paper addresses the resulting question of what characterizes a
user-friendly design of, in our case, a PDA. We introduce a compilation of seventeen
design principles and detail each with a range of exemplary rules to facilitate
practical applicability. Keywords: PDA; handheld; mobile; guidelines; interface; design | |||
| Influence of Culture on Attitude Towards Instant Messaging: Balance Between Awareness and Privacy | | BIBA | Full-Text | 236-240 | |
| Jinwei Cao; Andrea Everard | |||
| This research-in-progress paper investigates how attitudes towards privacy and awareness mediate the relationship between culture and attitude towards Instant Messaging (IM). A conceptual model is proposed to explain the relationships between culture and attitude towards privacy and between culture and attitude towards awareness. Attitudes towards privacy and awareness are then hypothesized to affect attitude towards IM. Related research about IM, user attitudes, and cultural dimensions is reviewed and a proposed survey study is described. | |||
| Usability Evaluation of Designed Image Code Interface for Mobile Computing Environment | | BIBAK | Full-Text | 241-251 | |
| Cheolho Cheong; Dong-Chul Kim; Tack-Don Han | |||
| Recently, image code interfaces and designed image codes, which can present
visual information such as shapes, colors, text, images, and textures within
the image code, have attracted increasing interest for use in mobile computing
environments. In this paper, we introduce designed image codes and their basic
decoding techniques; furthermore, we compare and analyze user preferences by
performing a user study. We also implement a high-fidelity prototype of an
image code interface based on a mobile computing environment and evaluate its
usability by performing a user evaluation. From the evaluation, it is observed
that the participants prefer a color-based image code, and this image code has
a better merit rating. Keywords: Image code; designed code; barcode; 2D code; color-based image code | |||
| The Effects of Gender Culture on Mobile Phone Icon Recognition | | BIBA | Full-Text | 252-259 | |
| Shunan Chung; Chiyi Chau; Xufan Hsu; Jim Jiunde Lee | |||
| Mobile phones have rapidly become the most important communication device in our daily life. According to a recent survey of The Directorate of Telecommunications, Ministry of Transportation and Communications in 2005, the penetration rate of telecom service subscribers in the Taiwan area is 97.37%. That is, on average, almost every Taiwanese citizen owns a mobile phone. This has resulted in extremely keen competition among mobile phone vendors. When compared with others, teenagers have long been viewed as the primary users in the Taiwanese mobile phone market. Regardless of vendors' various kinds of promotion strategies such as special price discounts or newly-added fancy functions, what really matters is whether this daily communication device has been designed according to the true needs and experience of this special age group. The small screen interface design is one of the newest research focuses of the Human-Computer Interaction domain. Due to the limited screen space, icons have been deemed as the dominant mode in the operational process of a mobile phone. The present study is dedicated to exploring the icon design of the mobile phone, especially for the teenage user group in Taiwan. | |||
| Designing for Mobile Devices: Requirements, Low-Fi Prototyping and Evaluation | | BIBAK | Full-Text | 260-269 | |
| Marco de Sá; Luís Carriço | |||
| This paper describes the design process of a set of ubiquitous applications
for critical scenarios (e.g., psychotherapy and education). Accordingly, we
address the problems that occurred on the various design stages, particularly
those that pertain to the mobility and ubiquity of the devices. Regarding
these, we detail the various solutions that were adopted along the way,
particularly on the data gathering and requirements' assessment, prototyping
and evaluation stages. We introduce a set of dimensions to the concept of
context and how it is utilized on the design of mobile applications. We
describe new prototyping techniques and explain how they improve usability
evaluation. Overall, we aim to share the learnt lessons and how they can be
used in mobile application's design. Keywords: low-fidelity prototyping; ubiquitous computing; mobile devices; usability
evaluation | |||
| Playback of Rich Digital Books on Mobile Devices | | BIBAK | Full-Text | 270-279 | |
| Carlos Duarte; Luís Carriço; Fernando Morgado | |||
| This paper presents the mobile version of Rich Book Player, a Digital
Talking Book player for mobile devices. The mobile version is based on a
desktop version with multimodal and adaptive features. The development of the
mobile version sought to retain the look and feel of the desktop version and as
much as possible of the features required for an advanced Digital Talking Book
player. We describe how the intrinsic characteristics of mobile devices
impacted the performance and interaction aspects of the application. Keywords: Mobile devices; Digital Talking Books; Speech output; Evaluation | |||
| Using Mobile Devices to Improve the Interactive Experience of Visitors in Art Museums | | BIBA | Full-Text | 280-287 | |
| José A. Gallud; María Dolores Lozano; Ricardo Tesoriero; Victor M. Ruiz Penichet | |||
| Many people use a PDA or a smart phone as a daily working tool. These devices allow us to communicate, to organize our lives, and so on. In this sense, the key question underlying this paper is whether this new technology could be used to enrich our experience when we visit museums or other cultural spaces. Museums and art galleries provide electronic guides in order to make the visit to the exhibition more pleasant. It is interesting to know how to use these new devices as a medium to guide and improve the visitors' experience. In this paper we describe a real system deployed in an emblematic museum in Spain, the Cutlery Museum of Albacete. Our approach uses a PDA -- offered to visitors -- that works jointly with a wireless network to show additional information about old knives, jack-knives and scissors which are physically displayed in the museum. The system supports four languages and incorporates special functions for disabled people. The users' satisfaction results collected during the last 18 months demonstrate the validity of our proposal. | |||
| Model-Based Approaches to Quantifying the Usability of Mobile Phones | | BIBA | Full-Text | 288-297 | |
| Dong-Han Ham; Jeongyun Heo; Peter Fossick; William Wong; Sanghyun Park; Chiwon Song; Mike Bradley | |||
| Several factors make it difficult to quantify the usability of mobile phones. Nevertheless, a quantified value of the usability could be used for several purposes, such as design innovation and benchmarking. This paper proposes three approaches (task-centred, usability indicator-based, and design area-based quantification) to quantifying the usability of mobile phones on the basis of a hierarchical model of usability factors. Each of them provides a process and rules for calculating the usability score of a mobile phone by applying weighting value assignment methods. Through two case studies, we could obtain empirical data to be used for determining the weighting values for quantification and confirm the usefulness of the proposed approaches. | |||
| Accelerated Rendering of Vector Graphics on Mobile Devices | | BIBAK | Full-Text | 298-305 | |
| Gaoqi He; Baogang Bai; Zhigeng Pan; Xi Cheng | |||
| With the great development of mobile communication and devices, graphics on
mobile devices is attracting more and more attention. Compared with static
bitmaps, vector graphics (VG) is better suited to mobile devices because of its
small file size and scalability to any display size. The emergence of the OpenVG
standard motivates research on VG. This paper focuses on the time-consuming
aspects of VG rendering and develops accelerated rendering algorithms. The layered
implementation structure and the algorithm for drawing one path are introduced
first. From the timing data obtained for the tiger sample, analysis methodologies
are constructed and results are presented. Optimization directions fall
into three major aspects: rasterization, stroking and tessellation. Accelerated
rendering methods are discussed, with experiments to validate a non-uniform
subdivision algorithm for Bézier curves. The tiger sample is rendered
with improved performance using the proposed algorithm. Keywords: vector graphics; OpenVG; non-uniform subdivision; rasterizer; tessellate | |||
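As an illustration of non-uniform (adaptive) subdivision of Bézier curves in general, the sketch below splits a cubic curve only where a flatness test fails; the tolerance and the flatness test are generic assumptions, not the authors' specific algorithm or its OpenVG integration.

```python
def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def is_flat(p0, p1, p2, p3, tol=0.25):
    """Crude flatness test: inner control points close to the chord p0-p3."""
    def dist(pt, a, b):
        ax, ay = a; bx, by = b; px, py = pt
        dx, dy = bx - ax, by - ay
        denom = (dx * dx + dy * dy) ** 0.5 or 1e-12
        return abs(dy * (px - ax) - dx * (py - ay)) / denom
    return max(dist(p1, p0, p3), dist(p2, p0, p3)) <= tol

def subdivide_cubic(p0, p1, p2, p3, tol=0.25):
    """Non-uniform subdivision: flat spans become single line segments,
    curved spans are split at t=0.5 via de Casteljau and recursed."""
    if is_flat(p0, p1, p2, p3, tol):
        return [p0, p3]
    p01, p12, p23 = midpoint(p0, p1), midpoint(p1, p2), midpoint(p2, p3)
    p012, p123 = midpoint(p01, p12), midpoint(p12, p23)
    p0123 = midpoint(p012, p123)
    left = subdivide_cubic(p0, p01, p012, p0123, tol)
    right = subdivide_cubic(p0123, p123, p23, p3, tol)
    return left[:-1] + right  # drop the duplicated split point
```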
| Pulling Digital Data from a Smart Object: Implementing the PullMe-Paradigm with a Mobile Phone | | BIBAK | Full-Text | 306-310 | |
| Steve Hinske | |||
| This paper presents the PullMe paradigm, an interaction technique for easily
initializing and confirming the transmission of digital data using a mobile
phone. The main idea and benefit is the substitution of less feasible
techniques such as manually selecting or confirming a service (e.g., by
entering a password) with a simple hand gesture. We describe a prototypical
implementation that utilizes an acceleration sensor and radio frequency
identification (RFID) technology integrated into a mobile phone. The
transmission of data is realized using Bluetooth. We furthermore discuss how
near-field communication (NFC) is likely to enable interaction patterns like
the PullMe paradigm in the near future. Keywords: PullMe paradigm; human-computer interaction; HCI; semantic mapping of
physical and virtual action; mobile phone; acceleration sensor; radio frequency
identification; RFID; near-field communication; NFC | |||
| Reading Performance of Chinese Text with Automatic Scrolling | | BIBAK | Full-Text | 311-319 | |
| Yao-Hung Hsieh; Chiuhsiang Joe Lin; Hsiao-Ching Chen; Ting-Ting Huang; James C. Chen | |||
| Auto-scrolling is useful when the reader wishes to move the text
continuously to where the reader's eyes are fixated on the visual display
without having to press the control button on the scrolling device all the
time. In this paper, we conducted an experiment to study the effects of scroll
speed in text error search tasks. The study considered three experimental
factors, scroll speed, error type, and article length. Reading performance and
fatigue were measured with the error search accuracy and subjective evaluation.
The result indicates that scrolling at a high speed would cause a decrease in
error identification, affecting the quality of reading. Keywords: Auto-Scrolling; Error Search; Reading Efficiency; Visual Fatigue | |||
| WAP Access Methods on Mobile Phones | | BIBA | Full-Text | 320-325 | |
| Zhang Hua; Cui Yoon Ping | |||
| With developments in telecommunication technology, the use of the Wireless Application Protocol (WAP) has dramatically increased all over the world. Users can browse the Internet anytime, anywhere, using small mobile devices. The number of WAP pages on the Internet also grew dramatically: in just eight months, it grew from almost zero to 4.4 million WAP pages in 2000 [1]. | |||
| Evaluation of Content Handling Methods for Tabletop Interface | | BIBAK | Full-Text | 326-335 | |
| Ryoko Ishido; Keigo Kitahara; Tomoo Inoue; Ken-ichi Okada | |||
| We focused on face-to-face collaborative learning in a classroom using
spatio-temporal contents, which is typically conducted after an outdoor class in
an elementary school. We have developed a tangible collaborative learning
support system that uses physical objects and associated spatio-temporal
contents. We have implemented a few methods to handle these contents. Two
experiments were conducted to confirm that the system provides better
information accessibility than the conventional pen-and-paper method. Keywords: CSCL; TUI; tabletop; physical objects | |||
| Interacting with a Tabletop Display Using a Camera Equipped Mobile Phone | | BIBAK | Full-Text | 336-343 | |
| Seokhee Jeon; Gerard Jounghyun Kim; Mark Billinghurst | |||
| Today's mobile phones have not only become the most representative device in
the new ubiquitous computing era but have also dramatically improved in terms of
their multi-modal sensing and display capabilities. This advance makes the
mobile phone an ideal candidate for a more natural interaction device in
ubiquitous computing environments. This paper proposes techniques which use
camera-equipped mobile phones for interacting with 2D and 3D applications on a
tabletop display environment. The camera acts as the main sensor for a
gesture-based interaction. Using the mobile phone with an interactive touch
screen allows the use of techniques that move beyond single hand/finger input
to improve task performance. The interaction performances of the proposed
techniques and design guidelines are also described in this paper. Keywords: Tabletop; Motion flow; Interaction techniques; Cell/Mobile phones | |||
| Mobile Video Editor: Design and Evaluation | | BIBAK | Full-Text | 344-353 | |
| Tero Jokela; Minna Karukka; Kaj Mäkelä | |||
| Mobile phones have evolved from voice-centric communication devices to
powerful personal multimedia devices. Among other multimedia features, they
enable the users to capture video clips with built-in digital cameras. However,
due to the continuous nature of video, it is often difficult to capture a video
clip exactly as intended -- in many cases the possibility to edit the clip
after capture would be useful. We describe the design of a video editor
application for mobile devices. We present the main user goals for video
editing in the mobile context based on a Contextual Inquiry study and an
application design that supports these goals. We demonstrate that video editing
on mobile devices is feasible and report a usability evaluation with
encouraging results. Keywords: Mobile devices; video editing; multimedia authoring; user interfaces; user
study; contextual inquiry; interaction design; usability | |||
| Perceived Magnitude and Power Consumption of Vibration Feedback in Mobile Devices | | BIBAK | Full-Text | 354-363 | |
| Jaehoon Jung; Seungmoon Choi | |||
| This paper reports a systematic study on the perceived magnitude of
vibrations generated from a vibration motor fastened on the user's thenar
eminence and its electric power consumption. The vibration motor is widely used
in mobile devices for vibration feedback due to its small size and inexpensive
price. However, a critical drawback of the vibration motor is that the
amplitude and frequency of vibrations generated from it are correlated due to
its operating principles that allow only one control variable (applied
voltage). Motivated by this fact, we have investigated a relationship between
the perceived magnitude of vibrations produced by the motor and its power
consumption with the applied voltage as a common parameter. The results showed
that using more power does not necessarily increase the sensation magnitude,
which indicates that vibrations of the same perceived magnitude can be rendered
while extending the life span of a mobile device battery. Keywords: Vibration feedback; vibration motor; perceived magnitude; power consumption;
mobile device | |||
| Application of a Universal Design Evaluation Index to Mobile Phones | | BIBAK | Full-Text | 364-373 | |
| Miyeon Kim; Eui S. Jung; Sungjoon Park; Jongyong Nam; Jaeho Choe | |||
| Universal design is considerably analogous to ergonomic design in that
it takes the capabilities and limitations of users into consideration during
the product development process. However, relatively few studies have been
devoted to reflecting the practical use of ergonomic principles in universal
design. This research attempts to develop a universal design evaluation index
for mobile phone design to quantify how well a product complies with the
principles of universal design. The research also emphasizes ergonomic
principles as a basis of evaluation. The evaluation lists were generated
by cross-checking among the personal, activity and product components.
Personal components consist of human characteristics including age, physique,
perceptual capacity, life style, etc. Activity components were derived from the
scenarios of mobile phone use while product components were composed of the
parts with which a user interacts. A universal design index was generated
systematically from the relationship matrices among the three components. The
index was then used to test its suitability by applying it to the evaluation of
mobile phones currently on the market. This study demonstrates a development
process through which evaluations can be made possible for universal design.
The research suggests an improved approach to the appraisal of how well mobile
phones are universally designed based on ergonomic principles. Keywords: Universal design; Mobile phone usability; Evaluation process | |||
| Understanding Camera Phone Imaging: Motivations, Behaviors and Meanings | | BIBAK | Full-Text | 374-383 | |
| Grace Kim; Wilson Chan | |||
| This paper explores the range and diversity of capture and share practices
associated with camera phones and camera phone images. We offer a high level
usage model that illustrates some of the key relationships among consumers'
behaviors, motivations and meanings and provides an initial framework for
building segmentation models as well as recommendations for specific product
and marketing strategies. Keywords: camera phones; mobile phones; mobile imaging; qualitative research; user
research; market research | |||
| The Design and Evaluation of a Diagonally Splitted Column to Improve Text Readability on a Small Screen | | BIBA | Full-Text | 384-393 | |
| Yeon-Ji Kim; Woohun Lee | |||
| Most previous studies comparing paper and computer screen readability show that screens are less readable than paper. There are many factors that could affect the readability of computer screens. However, exactly what factors reduce reading performance on computer screens is not clear. Therefore, this study has tried to find an alternative way to improve the readability of screen displays. A novel layout is designed to give readers a sense of rhythm by diagonally dividing up screen areas. The diagonal division layout was read significantly faster (24.4%) than a normal layout on the 5.5-inch display of an e-book reader. On a 3.5-inch display, it was read 13.0% faster. The reason the diagonal division layout cut down total reading time is that the right-upper part played a decisive role: the amount of text on each line decreases systematically in the right-upper part, and 31.6% of total reading time was shortened by the right-upper part. However, there were no significant differences in subjective satisfaction between the two layout conditions. | |||
| Development of Interactive Logger for Understanding User's Interaction with Mobile Phone | | BIBAK | Full-Text | 394-400 | |
| Daeeop Kim; Kun-Pyo Lee | |||
| User's mobility while using mobile devices requires new types of usability
testing methods different from conventional testing methods which are static,
verbal, quantitative, indoor, virtual and unnatural. The software IMOLO
(Interactive Mobile Logger) was developed for more practical usability testing
of mobile devices. IMOLO allows designers to easily convert their visual
interface designs for mobile device displays into a prototype. Conversion is
made interactively by simply dragging and dropping. Users can join usability
testing with real products in real context. All log data is automatically
transmitted to the researcher's server and is replayed exactly the same way the
user interacted with mobile devices. IMOLO was found to be useful especially in
the stage of validation test before product release. Keywords: Mobile phone prototyping; Mobile usability | |||
| An Improved Model to Evaluate Menu Hierarchies for Mobile Phones | | BIBAK | Full-Text | 401-407 | |
| Jeesu Lee; Doowon Paik | |||
| This study presents a GOMS-based model that can predict the performance time
of hierarchical menu interfaces. Existing GOMS-based models that predict
the performance time of hierarchical menu interfaces are
structured under the assumption that all item-selection tasks on the menu
are experienced tasks. This study presents a model that can predict the
performance time of experienced tasks and that of inexperienced tasks
separately. When this model is applied, more accurate prediction of the
performance time of mobile phone hierarchical menu interfaces that include
both experienced and inexperienced tasks becomes possible.
This model is designed by measuring the performance time of actual users and the accuracy of prediction is evaluated through an experiment. Keywords: GOMS; mobile phone; interface evaluation | |||
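To give a flavor of how a GOMS-style prediction can distinguish experienced from inexperienced menu selection, here is a minimal keystroke-level sketch. The operator times and the rule for inserting mental operators are illustrative placeholders, not the calibrated model from the paper.

```python
# Illustrative operator times in seconds (KLM-style placeholders, not the
# calibrated values from the paper).
KEYPRESS = 0.28   # K: press a phone key
MENTAL   = 1.35   # M: mentally locate/decide on an item

def predict_menu_time(path_lengths, experienced):
    """Predict selection time for a hierarchical menu task.

    path_lengths: number of keypresses needed at each menu level
    experienced:  if False, add a mental operator at every level (the user
                  must search the menu); if True, only one initial M.
    """
    keys = sum(path_lengths) * KEYPRESS
    mentals = (1 if experienced else len(path_lengths)) * MENTAL
    return keys + mentals

# Example: a three-level menu needing 2, 4 and 3 keypresses per level.
print(predict_menu_time([2, 4, 3], experienced=True))   # ~3.87 s
print(predict_menu_time([2, 4, 3], experienced=False))  # ~6.57 s
```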
| Support Zooming Tools for Mobile Devices | | BIBAK | Full-Text | 408-417 | |
| Kwang B. Lee | |||
| Mobile devices are quickly becoming powerful enough to run personal
computer applications. However, handling large amounts of visual information is
still a critical issue for them. This paper introduces several zooming tools,
such as a focus zoom, a file zoom and a search zoom, which are based on geometric
and semantic zooming methods as new viewing methods. These zooming tools are
developed to provide an efficient way of handling the viewing issue: displaying
large amounts of visual information on a small screen. In addition, the paper
introduces a usability testing method for mobile devices to find hidden issues in
the tools, and outlines further work to improve the usability of working on such devices. Keywords: Mobile Devices; Personal Digital Assistants (PDAs); Zoomable User Interfaces
(ZUIs); Popup and Shadow Zooming; Distorted View and Non-distorted View;
Geometric and Semantic Zooming | |||
| Design of a Pen-Based Electric Diagram Editor Based on Context-Driven Constraint Multiset Grammars | | BIBAK | Full-Text | 418-428 | |
| Sébastien Macé; Éric Anquetil | |||
| This paper deals with the computer-aided design of pen-based interfaces for
structured document composition. In order to take advantage of the interaction
with the user, the goal is to interpret the user's hand-drawn strokes
incrementally, i.e. directly as the document is being drawn. We present a
generic approach for this purpose: it is based on a new formalism,
Context-Driven Constraint Multiset Grammars (CDCMG), and its associated
incremental parser. CDCMG model how documents of a given nature are composed;
they can be applied to various kinds of documents. We demonstrate how the
formalism has been exploited to develop, in collaboration with a company that
distributes industrial pen-based solutions, a prototype for electric diagram
composition and editing. We also present an evaluation of the system.
Experimental results first highlight the time gained in comparison with more
classical user interfaces. They also demonstrate its user-friendliness and its usability.
visual languages; incremental parsing; software assessing | |||
| To Effective Multi-modal Design for Ringtones, Ringback Tones and Vibration of Cell Phones | | BIBA | Full-Text | 429-437 | |
| Taezoon Park; Wonil Hwang; Gavriel Salvendy | |||
| Multimedia content downloading services are one of the primary sources of revenue for wireless service providers besides basic voice calls. In this paper, consumers' attitudes toward existing and newly suggested customizable ringtones, ringback tones, and vibrations are explored from the results of a survey. Among the existing services, inexperienced users showed the highest willingness to use customizable ringtones, although experienced users are most satisfied by personalized ringtones. The attitude toward vibration services and push-type advertisement ringtones appeared to be negative. Since the attitude toward a new technology does not always translate into willingness to use new services, it is necessary to find the motivation that can bridge the gap between attitude and willingness. | |||
| Automatic Word Detection System for Document Image Using Mobile Devices | | BIBA | Full-Text | 438-444 | |
| Anjin Park; Keechul Jung | |||
| In the current age of ubiquitous computing with high-bandwidth networks, wearable and hand-held mobile devices with small cameras and wireless communication will be widespread in the near future. Thus, computer vision and image processing for mobile devices have recently attracted a lot of attention. In particular, approaches to detecting image text, which contains useful information for automatic annotation, indexing, and structuring of images, are an important prerequisite stage of recognition in dictionary applications using camera-equipped mobile devices. To detect image text on mobile devices with limited computational resources, recent works are based on two methodologies: the image text is detected not automatically but manually using a stylus pen to reduce the computational load, or a server is used to detect image text requiring many floating-point computations. The main disadvantage of the manual method is that users directly select tentative text regions, and recall and precision rates are determined by the selected regions. The second method, which automatically detects the image text, is difficult to perform in real time due to the transmission time between the mobile device and the server. Accordingly, this paper proposes a real-time automatic word detection system without server support. To minimize computational time, one word in the central region of the image is considered as the target of the system. The word region is tentatively extracted using edge density and window transition, and the tentatively extracted region is then verified by measuring the uniformity of the distribution among sub-windows of the extracted region. In the experiments, the proposed method showed high precision rates for one word in the central region of the image and fast computation times on mobile devices. | |||
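The edge-density cue mentioned above can be illustrated with a small sketch; the gradient operator and threshold are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def edge_density(gray, window):
    """Fraction of pixels in `window` (y0, y1, x0, x1) whose gradient magnitude
    exceeds a threshold -- a rough proxy for the presence of text strokes."""
    y0, y1, x0, x1 = window
    region = gray[y0:y1, x0:x1].astype(float)
    gy, gx = np.gradient(region)          # finite-difference gradients
    mag = np.hypot(gx, gy)
    return float((mag > 30.0).mean())     # threshold is illustrative
```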
| User Customization Methods Based on Mental Models: Modular UI Optimized for Customizing in Handheld Device | | BIBAK | Full-Text | 445-451 | |
| Boeun Park; Scott Song; Joonhwan Kim; Wanje Park; Hyunkook Jang | |||
| The ongoing conflict designers face between the Universal User Interface,
focusing on general predispositions, and the Customized User Interface,
adjusted to individualistic characteristics, is more prevalent than ever. One
reason for this enduring conflict is that mobile devices require that user
interfaces (UI) be optimized for each individual user across a global
marketplace. This issue inspired us to build a conceptual model of a UI, which
supports the maximization of customization and optimization by reflecting
personal characteristics. This Modular UI (each application is defined as its own
module and can be assembled and disassembled) was based on four premises:
what is reflected, how it is grouped, how it is provided, and its effect.
Usability testing of the UI was performed in three countries with 8 user groups
from each country. Web surveys and FGIs (Focus Group Interviews) with 8 user
groups from the three countries showed that this type of modular UI can
effectively optimize interactions for largely diverse groups of users. This
research on customization for user experience is significant because it can
provide users with an optimum interface that aligns with their unique set
of biological and cultural characteristics. This study also shows a need for
additional research analyzing cultural elements of optimized UI in order to
deepen our level of understanding of the influence of cultural factors on the
usability of handheld device UIs. Keywords: User Customization; Handheld Devices; Modular UI; Optimization; HCI | |||
| Fisheye Keyboard: Whole Keyboard Displayed on PDA | | BIBAK | Full-Text | 452-459 | |
| Mathieu Raynal; Philippe Truillet | |||
| In this article, we propose a soft keyboard with interaction inspired by
research on information visualisation. Our objective is to find a compromise
between readability and usability for a whole-character layout on a PDA soft
keyboard. The proposed interactions allow displaying all keys on a small screen
while making pointing easier for the user by expanding any given key as a
function of its distance from the stylus. Keywords: soft keyboard; personal digital assistant (PDA); fisheye view | |||
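A fisheye-style expansion of keys as a function of stylus distance can be sketched as follows; the linear falloff, radius and maximum scale are illustrative assumptions rather than the authors' actual distortion function.

```python
import math

def key_scale(key_center, stylus, max_scale=2.5, radius=60.0):
    """Scale factor for a key as a function of its distance from the stylus.

    Keys under the stylus grow toward `max_scale`; keys beyond `radius`
    pixels keep their normal size. Parameter values are illustrative.
    """
    d = math.hypot(key_center[0] - stylus[0], key_center[1] - stylus[1])
    if d >= radius:
        return 1.0
    # Linear falloff from max_scale at d=0 to 1.0 at d=radius.
    return 1.0 + (max_scale - 1.0) * (1.0 - d / radius)
```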
| Mobile Phone Video Camera in Social Context | | BIBAK | Full-Text | 460-469 | |
| Erika Reponen; Jaakko Lehikoinen; Jussi Impiö | |||
| Video recording is becoming available in various everyday situations, thanks
to the quickly spreading video capabilities of modern mobile phones. The decision to record is now often made spontaneously, as recording devices are constantly available without explicit planning. We discuss the effect of this change in
the social environment on the basis of a study where four groups of people used
digital video cameras in their everyday life. While this new way of
communicating enables new social patterns, it also raises new concerns for
privacy and trust. We discuss the relation of context and video recording
through a model of primary and secondary contexts. We also analyze
acceptability and transparency of video recording as functions of time. Keywords: Mobile phones; cameras; video; context; privacy | |||
| Developing a Motion-Based Input Model for Mobile Devices | | BIBA | Full-Text | 470-479 | |
| Mark Richards; Tim Dunn; Binh Pham | |||
| This paper discusses a new input model for camera-equipped mobile devices that is more efficient, intuitive and intelligent. The model takes advantage of the motion captured by the camera: the general movement of both device and user. These movements are mapped to a three-dimensional space, and input methods are devised depending on the current situational context. Such an interface lets users interact with their devices far more quickly, avoids navigation through nested menus, and allows the device to perform natural tasks without the user having to knowingly interact with it. | |||
| Designing Input Method of Hand-Held Device with International User Studies | | BIBAK | Full-Text | 480-485 | |
| Scott Song; Joonhwan Kim; Wanje Park; Boeun Park; Hyunkook Jang | |||
| In a small hand-held device, interaction design is commonly expressed through metaphors, and labeling can help users understand the meaning of those metaphors. Both elements are important for expressing information and functions appropriately within the physically limited space of the device. For global products, the labeled metaphors should be recognizable to a large number of users. This study conducted a user preference test in the USA, the United Kingdom and China to find the optimal arrangement of such input method design and labeling. Based on the test results, the study identified effective labeling metaphors that express the arrangement of appropriate input buttons and their functions in a hand-held device. The resulting metaphor was then applied to an actual product. Keywords: Hand-held Device; Input Method; International User Study; Cultural
Differences; User Experience Design; Metaphor; Interface design; Button Label | |||
| Positional Mapping Multi-tap for Myanmar Language | | BIBAK | Full-Text | 486-495 | |
| Ye Kyaw Thu; Yoshiyori Urano | |||
| This paper is an attempt to enable practical and efficient composition of text messages in the Myanmar language on a mobile phone. We propose a new key mapping idea (Positional Mapping) for the Myanmar language: a key mapping for mobile phones based on the writing positions of Myanmar characters. We compared the new mapping with our previously proposed Multi-tap keypad layout in terms of key strokes and users' tapping speed. Although Positional Mapping requires more key strokes for typing Myanmar consonants, the average tapping speed is 22.5% faster than with the Multi-tap model. The user studies also show that our Positional Mapping idea is simple and easy for users to memorize. Positional Mapping can be applied not only to the Myanmar language but also to other similar phonetically based languages such as Khmer, Thai, Hindi and Bangla. Keywords: Text Input; User Interface; Positional Mapping; Human Computer Interface in
Mobile; Mobile Phone Keypad Layout; Myanmar Language | |||
| Pen-Based User Interface Based on Handwriting Force Information | | BIBAK | Full-Text | 496-503 | |
| ZhongCheng Wu; LiPing Zhang; Fei Shen | |||
| Pen-based computing has attracted many researchers in recent years. At present, pen computers generally use only pen-tip position and pen pressure information. In fact, the force between the pen tip and the writing plate is a three-dimensional vector that carries richer information about the writing process. In this paper we first describe an innovative force-sensitive device (F-Tablet) for pen-based computing that can acquire the three perpendicular force components and the pen-tip position simultaneously. We then address the design of a pen-user interface based on this tablet. Experimental results show that the pen-user interface accounts for context awareness and some characteristics of user cognition, and that it can perform most functions of a keyboard and mouse. Keywords: Pen-based computing; Pen-computer; Pen-User Interface; handwriting | |||
| BetweenKeys: Looking for Room Between Keys | | BIBAK | Full-Text | 504-512 | |
| Youngwoo Yoon; Geehyuk Lee | |||
| A conventional keyboard has a large footprint because it must serve numerous functions with independent keys. In response to the demand for smaller keyboards, the present paper analyzes a practical method that overlaps two ordinary keys and may thereby reduce keyboard size. By analyzing overlap frequencies and key stroke timings in normal use, we investigated whether key overlaps are practically usable. Based on the results, four key pairs are recommended that are both practical and preferred by users. Hangul / English mode switching is described as an application. Keywords: keyboard; chord keyboard; key overlap; key stroke timing; Hangul / English
toggling | |||
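As a rough illustration of the kind of analysis the abstract mentions (not the authors' code), overlap frequencies can be counted from a key-event log; the tuple format (key, press_time, release_time) is an assumption.

```python
from collections import Counter
from itertools import combinations

def overlap_counts(events):
    """events: list of (key, press_time, release_time).
    Counts how often each unordered key pair was held down simultaneously."""
    counts = Counter()
    for (k1, p1, r1), (k2, p2, r2) in combinations(events, 2):
        if max(p1, p2) < min(r1, r2):          # press intervals intersect
            counts[frozenset((k1, k2))] += 1
    return counts

log = [("a", 0.00, 0.12), ("s", 0.10, 0.20), ("d", 0.30, 0.40)]
print(overlap_counts(log))   # {'a','s'} overlap once; 'd' overlaps nothing
```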
| Mobile Magic Hand: Camera Phone Based Interaction Using Visual Code and Optical Flow | | BIBAK | Full-Text | 513-521 | |
| Yuichi Yoshida; Kento Miyaoku; Takashi Satou | |||
| We propose the "Mobile Magic Hand" interface; it is an extension of our
previous visual code-based interface system. Once the user acquires the visual
code of interest, the user can then manipulate the related virtual
object/system without having to keep the camera centered on the visual code.
Our new interface does this by analyzing the optical flow captured by the
camera. For example, consider a visual code that represents a 3D object, such
as a dial. After selecting the code, the user can freely rotate and/or move the
virtual object without having to keep the camera pointed at the code. This
interface is much more user friendly and is more intuitive since the user's
hand gestures can be more relaxed, more natural, and more extensive. In this
paper, we describe "Mobile Magic Hand", some applications, and a preliminary
user study of a prototype system. Keywords: mobile; visual code; gestural interface | |||
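The paper tracks hand motion via optical flow once the visual code has been selected. Below is a rough stand-in, not the authors' method: it estimates the dominant image translation between two frames with FFT-based phase correlation, which is enough to drive a simple drag or rotate gesture.

```python
import numpy as np

def dominant_shift(prev, curr):
    """Estimate the (dy, dx) translation of `curr` relative to `prev`
    via phase correlation; a crude substitute for dense optical flow."""
    F1, F2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = np.conj(F1) * F2
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap large indices back to negative shifts
    if dy > prev.shape[0] // 2: dy -= prev.shape[0]
    if dx > prev.shape[1] // 2: dx -= prev.shape[1]
    return dy, dx

frame0 = np.random.default_rng(0).random((64, 64))
frame1 = np.roll(frame0, (3, -5), axis=(0, 1))   # simulated camera motion
print(dominant_shift(frame0, frame1))            # -> (3, -5)
```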
| Online Chinese Characters Recognition Based on Force Information by HMM | | BIBAK | Full-Text | 522-528 | |
| Mozi Zhu; Fei Shen; ZhongCheng Wu | |||
| Pen computing is drawing more and more researchers' attention. One of its most important problems is online Chinese Character Recognition (OCCR). Traditionally, pen-tip position information has been widely used for handwriting recognition. In this paper, a new method using force information is proposed for OCCR, in which force directions are extracted as stroke features. In our method, every stroke has its own Hidden Markov Model (HMM), and the process of building a stroke HMM is described in detail. Handwritten characters are recognized by following the stroke tree after every stroke has been recognized as described. Finally, experimental results on the handwriting of five persons are presented. Keywords: Force Information; HMM; OCCR; Stroke Recognition | |||
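A simplified, assumed illustration (not the authors' implementation) of the core idea that each stroke has its own discrete HMM over quantized force directions: a stroke's observation sequence is scored against every stroke model with the forward algorithm and assigned to the best-scoring one. The toy models and the 8-direction quantization are made up here.

```python
import numpy as np

def log_forward(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM
    (start: (S,), trans: (S,S), emit: (S,V); obs: list of symbol indices)."""
    alpha = np.log(start + 1e-12) + np.log(emit[:, obs[0]] + 1e-12)
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(trans + 1e-12),
                                    axis=0) + np.log(emit[:, o] + 1e-12)
    return np.logaddexp.reduce(alpha)

def classify_stroke(obs, models):
    """models: {stroke_name: (start, trans, emit)}. Returns best stroke label."""
    return max(models, key=lambda m: log_forward(obs, *models[m]))

# Toy example: 8 quantized force directions, two made-up stroke models.
rng = np.random.default_rng(0)
def rand_hmm(states=3, symbols=8):
    A = rng.random((states, states)); A /= A.sum(1, keepdims=True)
    B = rng.random((states, symbols)); B /= B.sum(1, keepdims=True)
    return np.ones(states) / states, A, B

models = {"horizontal": rand_hmm(), "vertical": rand_hmm()}
print(classify_stroke([0, 1, 1, 2], models))
```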
| Comparative Characteristics of a Head-Up Display for Computer-Assisted Instruction | | BIBAK | Full-Text | 531-540 | |
| Kikuo Asai; Hideaki Kobayashi | |||
| Computer-assisted instruction (CAI) using a head-up display (HUD) is a new
way of providing information on operating complicated equipment. A head-mounted
display (HMD) with a camera enables head-up interaction, presenting information
related to what a user is looking at. We previously examined a practical
HUD-based CAI system used to operate a transportable earth station. In our
trial, participants using the HUD-based CAI system performed better than those
using other media such as printed material and laptop PCs. However, it was not
clear which part of the system was responsible for the improved performance. To
clarify this, we conducted a laboratory experiment in which participants read articles and answered questions about them. The goals were to evaluate how
readable the display of the HMD is, how easy it is to search information using
the system, and how the system affects work efficiency. Participants using the
HUD system found the articles faster, but took longer to read the articles and
to answer the questions than participants using other media. Keywords: CAI; HUD; HMD; user study | |||
| Flight Crew Perspective on the Display of 4D Information for En Route and Arrival Merging and Spacing | | BIBAK | Full-Text | 541-550 | |
| Vernol Battiste; Walter W. Johnson; Nancy H. Johnson; Stacie Granada; Arik-Quang V. Dao | |||
| This paper introduces and describes a 3D Cockpit Situation Display (CSD)
that includes the display of ownship and traffic, flight path intent, terrain
and terrain alerting relative to current and proposed flight paths, conflict
alerting and 3D weather. The primary function of the 3D CSD is to support the
task of flight path management, although it is also designed to promote visual
momentum and task awareness across displays. In this paper, we will discuss the
approach to the design of the 3D CSD as well as describe the features of the
display and their importance to traffic, terrain and general situation
awareness. We will also list some of the pilot identified benefits of the 3D
display. Finally, we will report on flight crew ratings of the usefulness and
usability of the 3D display to support en route and arrival decision making. Keywords: Primary Flight Display (PFD); NAV Display (ND); 2D Display; 3D display;
Cockpit Situation Display (CSD); Cockpit Display Of Traffic Information (CDTI) | |||
| Designing a Direct Manipulation HUD Interface for In-Vehicle Infotainment | | BIBAK | Full-Text | 551-559 | |
| Vassilis Charissis; Martin Naef; Stylianos Papanastasiou; Marianne Patera | |||
| This paper introduces a novel design approach for an automotive direct
manipulation interface. The proposed design, as applied in a full-windshield
Head-Up Display system, aims to improve the driver's situational awareness by
considering information as it becomes available from various sources such as
incoming mobile phone calls, text and email messages. The vehicle's windshield
effectively becomes an interactive display area which allows the system to
increase the quality as well as throttle the quantity of information distilled
to the driver in typical driving situations by utilising the existing mobile
phone network. Opting for a simple approach to interaction, the interface elements are based on minimalist visual representations of real objects. This
paper discusses the challenges involved in the HUD design, introduces the
visual components of the interface and presents the outcome of a preliminary
evaluation of the system on a group of ten users, as performed using a driving
simulator. Keywords: HCI; direct manipulation; infotainment; HUD | |||
| Using Agent Technology to Study Human Action and Perception Through a Virtual Street Simulator | | BIBAK | Full-Text | 560-568 | |
| Chiung-Hui Chen; Mao-Lin Chiu | |||
| Human activities are the foundation of the social processes that drive the
urban system. The emergence of information technology provides opportunities to
extend the transformation of the physical city into the digital city or virtual
city. As navigation in virtual environments is evidently difficult and as many
virtual worlds have been designed to be used by untrained users that explore
the environment, navigation supports are critically needed. Furthermore, users
or participants within the digital cities are often foreign to the environment
without navigational aids. Therefore, this paper aims to build an agent-based system in a virtual environment to study user behaviors and interactions. The study indirectly collects information about the user's desires in order to build a model of user preference and produce simulated scenarios that more closely match it. Keywords: agent; behavior; simulation; virtual environment; street design | |||
| Visualizing Interaction in Digitally Augmented Spaces: Steps Toward a Formalism for Location-Aware and Token-Based Interactive Systems | | BIBAK | Full-Text | 569-578 | |
| Yngve Dahl; Dag Svanæs | |||
| Location and token-based methods of interaction form two broad sets of
interaction techniques for ubiquitous computing systems. Currently there are
few tools available that allow designers to pay attention to the physicality
that characterizes human-computer interaction with such systems, and how users
experience them. This paper reports on ongoing work that focuses on creating a
visual formalism that addresses these limitations. The current approach is
inspired by established storyboard techniques, and aims to complement de facto
modeling formalisms such as UML. Keywords: Ubiquitous computing; Modeling formalisms; Interaction design; Embodied
interaction; Visual design | |||
| Assessment of Perception of Visual Warning Signals Generated Using an Augmented Reality System | | BIBAK | Full-Text | 579-586 | |
| Marek Dzwiarek; Anna Luczak; Andrzej Najmiec; Cezary Rzymkowski; Tomasz Strawinski | |||
| One of the important measures for preventing industrial accidents is informing a machine operator about the appearance of a hazardous situation quickly and effectively enough. The main aim of the paper is to present and
discuss a methodology of proving -- by means of the perception assessment --
that the warning signals generated using the AR approach reveal the same
effectiveness as standard visual signals coming from an industrial signalling
device that is of common use in machinery. Twenty volunteers constituted a paid
subject population for the study. In the course of experimental task execution
the warning visual signals were generated using either a standard industrial
signalling device or augmented reality glasses. Although the experimental procedure has not yet been completed, very promising preliminary results have been obtained, indicating that both the objective and subjective assessment indicators are better for the AR warning signals. Keywords: safety of machinery; warnings; augmented reality | |||
| Force Field Based Expression for 3D Shape Retrieval | | BIBAK | Full-Text | 587-596 | |
| Xi Geng; Wenyu Liu; Hairong Liu | |||
| In this paper, we establish an algorithm to obtain 3D shape descriptors based on a novel force field model. Under this model, 3D models are treated as sets of surface particles, and the force interactions between particles are computed to constitute a spherical descriptor. During the force computation,
the mass of the particle is defined to maintain local information which
improves the discrimination of the spherical descriptor. The experimental
results show that this algorithm is valid for 3D shape matching with high
efficiency. Keywords: force field model; 3D shape retrieval; surface flatness | |||
| Comparing Two Head-Mounted Displays in Ultrasound Scanning | | BIBAK | Full-Text | 597-604 | |
| Juha Havukumpu; Jukka Häkkinen; Eija Grönroos; Pia Vähäkangas; Göte Nyman | |||
| Head-mounted displays have been tested in various medical fields. According
to some results, using a head-mounted display makes medical operations faster,
more effective and accurate than using a conventional table display. In this
study we aimed to examine midwives' experiences of using a head-mounted display
during an ultrasound scan. Our preliminary results indicate that a head-mounted
display in an ultrasound scan could work better than the conventional method
which is still in common use. We also noticed that the use of a monocular
head-mounted display was more comfortable than a see-through display. Keywords: Head-mounted displays; monocular display; see-through display; user
experience; ultrasound scan | |||
| Evaluating the Usability of an Auto-stereoscopic Display | | BIBAK | Full-Text | 605-614 | |
| Zhao Xia Jin; Ya Jun Zhang; Xin Wang; Thomas Plocher | |||
| A considerable number of different auto-stereoscopic display systems are
available on the market. Increased resolution of flat panel displays and
greatly reduced cost have made auto-stereoscopic displays practical to use in
applications for games, 3D television, the military, and industrial
manufacturing. However, the usability and qualitative user experience provided
by auto-stereoscopic 3D display has not been widely studied. This study sought
to evaluate the qualitative user experiences with auto-stereoscopic 3D displays
and their potential shortcomings by testing specific user tasks and comparing
the difference between a stereo 3D display and flat 3D display. The results
provide a good reference for the product application developer trying to select
a display system and for the user interface designer. Keywords: Autostereoscopic; 3D display; usability; stereo user interface | |||
| Aspiring for a Virtual Life | | BIBAK | Full-Text | 615-623 | |
| Hee-Cheol Kim | |||
| There has been a drastic change in the ways that computers are used. They
have evolved from being tools for dealing with reality, e.g., for recording
data and calculating complex equations, to being tools for fashioning
virtuality, including virtual reality systems, on-line games, and virtual
communities. This phenomenon, to some extent, stems from rapid technological
development. What is more important, however, is that this phenomenon is also
deeply rooted in the human longing for a virtual world. This paper argues for
the importance of understanding such a desire, and discusses, in this context,
how virtual reality may become a promising realm of the future, by setting out
a theoretical foundation to examine it. Keywords: Human computer interaction; mental representation; virtuality; virtual
reality; virtual space | |||
| Immersive Viewer System for 3D User Interface | | BIBAK | Full-Text | 624-633 | |
| Dongwuk Kyoung; Yunli Lee; Keechul Jung | |||
| 3D user interface research is growing rapidly with developments in virtual environments, virtual reality and augmented reality. Until recently, 3D user interfaces have not been widely favored because they are costly and rely on cumbersome devices. Nevertheless, the strong impact a 3D user interface has on the user motivated us to implement an immersive viewer system (Im-viewer system) to test the 3D user interface. The Im-viewer system uses a tiled display to provide an immersive image to the user, and it uses the proposed 3D volume reconstruction and representation as an input interface with the advantage of low computational cost. The input interface is used to control tiled display operations such as on/off and next/previous slide commands in the Im-viewer system. The experimental results show that the proposed 3D user interface techniques perform well on the Im-viewer
system. Keywords: 3D User Interface; Tiled Display; 3D Shape Reconstruction; Gesture
Recognition; Dimension Reduction | |||
| Resolving Occlusion Between Virtual and Real Scenes for Augmented Reality Applications | | BIBAK | Full-Text | 634-642 | |
| Lijun Li; Tao Guan; Bo Ren | |||
| In this paper, we propose a method to resolve the occlusion problem for a tabletop AR-based city planning system using stereo vision and accurate boundary depth recovery of foreground objects. First, we design a color- and brightness-based foreground subtraction approach to avoid the negative effect of shadows. Then, we obtain the depth information needed to render correct occlusion between virtual and real objects based on contour matching and depth interpolation. Some
experiments have been carried out to demonstrate the validity of the proposed
approach. Keywords: Augmented Reality; Occlusion; Stereo Vision; Epipolar Geometry; Depth
Recovery | |||
| Augmented Reality E-Commerce Assistant System: Trying While Shopping | | BIBAK | Full-Text | 643-652 | |
| Yuzhu Lu; Shana Smith | |||
| Traditional electronic commerce (e-commerce) is limited, because it cannot
provide enough direct information about products to online consumers. The
technology presented in this paper shows how Augmented Reality (AR) can be used
to help overcome the limitations and enhance e-commerce systems. An e-commerce
assistant tool was developed, using user-centered design principles. The tool
was developed as an Internet plugin, so it can be used on different kinds of
computers and handheld devices. A usability experiment was conducted to
compare the developed AR e-commerce assistant tool with traditional e-commerce
and Virtual Reality (VR) e-commerce systems. Results show that an AR e-commerce
system can provide more direct information about products than traditional or
VR e-commerce systems. Keywords: Augmented Reality; Electronic Commerce; User Centered Design | |||
| RealSound Interaction: A Novel Interaction Method with Mixed Reality Space by Localizing Sound Events in Real World | | BIBAK | Full-Text | 653-662 | |
| Mai Otsuki; Asako Kimura; Takanobu Nishiura; Fumihisa Shibata; Hideyuki Tamura | |||
| We developed a mixed reality (MR) system which merges the real and the
virtual worlds in both audio and visual senses. Our new approach "RealSound
Interaction" is based on the idea that the sound events in the real world can
work as interaction devices with an MR space. Firstly, we developed a sound
detection system which localizes a sound source. The system consisted of two
types of microphone arrays, fixed type and wearable type. Secondly, we
evaluated the accuracy of the system, and proposed three practical usages of
the sound events as interactive devices for MR attractions. Keywords: Mixed Reality; Sound Input; Microphone Array; Sound Source Localization;
Interactive Device | |||
| A New Model of Collaborative 3D Interaction in Shared Virtual Environment | | BIBAK | Full-Text | 663-672 | |
| Nassima Ouramdane-Djerrah; Samir Otmane; Malik Mallem | |||
| Recent advances in both Virtual Reality (VR) systems and Computer-Supported
Cooperative Work (CSCW) technologies have resulted in the appearance of the
Collaborative Virtual Environments (CVEs) systems supporting different forms of
collaboration and interaction between users. The collaboration in these systems
refers to the simultaneous interaction (collaborative interaction) of multiple
users on a virtual object in an immersive or semi-immersive Virtual Environment
(VE). However, in some cases, the collaborative interaction is reduced to a
simple communication between users. In this paper, we propose a new model of
collaborative interaction that supports group interaction in CVEs. Our model
defines the functional role and the functional clover of the 3D interaction.
This model is based on group awareness concepts (focus, nimbus and degree of
interaction) combined with 3D interaction paradigms (navigation, selection and
manipulation). The aim of our model is to manage and control the simultaneous
user actions. Keywords: 3D interaction; collaborative interaction; collaborative virtual
environment; virtual reality | |||
| Multi-finger Haptic Interface for Collaborative Tasks in Virtual Environments | | BIBAK | Full-Text | 673-680 | |
| María Oyarzábal; Manuel Ferre; Salvador Cobos; Mary Monroy; Jordi Barrio; Javier Ortego | |||
| Haptic devices allow a high level of immersion in virtual environments by
providing the sense of touch. We present a two-finger device that allows the
performance of power and precision tasks in virtual environments. We have also
developed a mathematical model of the human hand and a statistical procedure to
identify different hand gestures. Both have been implemented in the virtual
environment in order to have a mathematical model of haptic interactions, which
runs in real time so as to provide contact forces and object movements
depending on the manipulation commands received from the haptic device. Keywords: haptic interface; multi-finger; collaborative task; manipulation | |||
| Measuring Presence in Mobile 3D | | BIBAK | Full-Text | 681-688 | |
| Hyun Jong Ryu; Rohae Myung; Byongjun Lee | |||
| In this paper, we developed valid mobile presence measurements, and proposed
the factor structure of the resulting scale. The measurement items came
from previously published questionnaires in the area of VR and from the
experience of mobile 3D developers. We also added our concept factors. The 60
subjects experienced the mobile 3D game for about 40 mins. After finishing the
mobile 3D game, they completed the questionnaire immediately. Factor analysis
was performed on the data. The factors of mobile 3D presence were divided into 4 super-factors: condition, interface, attention, and feedback; these were further divided into 20 sub-factors. Keywords: mobile 3D game; presence; measurement; presence factors | |||
| IMPROVE: Designing Effective Interaction for Virtual and Mixed Reality Environments | | BIBA | Full-Text | 689-699 | |
| Pedro Santos; André Stork; Thomas Gierlinger; Alain Pagani; Bruno Araújo; Ricardo Jota; Luis Bruno; Joaquim A. Jorge; João Madeiras Pereira; Martin Witzel; Giuseppe Conti; Raffaele de Amicis; Iñigo Barandarian; Céline Paloc; Maylu Hafner; Don McIntyre | |||
| In this paper we present evaluation results of an innovative application designed to make collaborative design review in the architectural and automotive domains more effective. Within IMPROVE, a European research project in the area of advanced displays, we are combining high resolution multi-tile displays, TabletPCs and head-mounted displays with innovative 2D and 3D interaction paradigms to better support collaborative mobile mixed reality design reviews. Our research and development is motivated by application scenarios in the automotive domain involving FIAT Elasis from Naples, Italy and in the architectural domain involving Page/Park architects from Glasgow, Scotland. User evaluation took place in Glasgow (UK), Naples (ITA) and Darmstadt (GER), where we tested the integrated IMPROVE prototype application. The tests were based on several heuristics, such as ergonomic and psychomotor factors, and followed guidelines recommended by ISO 9241 to verify whether the developed interfaces were suitable for the application scenarios. Evaluation results show that there is a strong demand for more interactive design review systems, allowing users greater flexibility and greater choice of input and visualization modalities as well as their combination. | |||
| Evaluation of Wayfinding Aids Interface in Virtual Environment | | BIBAK | Full-Text | 700-709 | |
| Anna Wu; Wei Zhang; Bo Hu; Xiaolong Zhang | |||
| It is difficult for a navigator to find a way to a given target location in
an unfamiliar environment. Often, wayfinding guidance such as an overview map
is provided to assist the navigator. However, overview maps can only show
survey knowledge at one particular scale, and cannot provide other kinds of
spatial knowledge (e.g. procedural knowledge) or survey knowledge at different
scales. In this study, we compared effectiveness, efficiency and satisfaction
of three wayfinding aids, View-in-View Map (VVM), Animation Guide (AG) and
Human-System Collaboration (HSC) in support of navigation in virtual reality.
Our experimental results show that while an overview map still outperforms AG and HSC, AG works better for most people with ordinary spatial ability, and people with superior spatial ability tend to perform better using HSC. Keywords: Wayfinding; Virtual environments; Interactive techniques; Spatial cognition | |||
| A 3D Sketching Interacting Tool for Physical Simulation Based on Web | | BIBAK | Full-Text | 710-719 | |
| Ziyi Zheng; Lingyun Sun; Shouqian Sun | |||
| The sketching interface, as a user-friendly means of expression and communication, is not only an important medium for inputting 3D objects but also a significant step in visualizing a user's conceptual ideas. To bring early sketching interfaces to networked use, the paper defines several gesture rules for creating and editing 3D models. These gesture schemes support both regular and freeform modeling and can be embedded in a multi-user interface over a network. A brief introduction to sketch-based collaboration in a client-server architecture is given, and a tool combining this sketch interface with physical simulation functionality is presented. The experimental results show that the tool helps explore users' ideas and aids 3D collaboration in a networked environment. Keywords: Computer aided sketching; pen-based gesture interaction; sketch-based 3D
modeling; physical simulation; 3D collaboration | |||
| Visual and Auditory Information Specifying an Impending Collision of an Approaching Object | | BIBAK | Full-Text | 720-729 | |
| Liu Zhou; Jingjiang Yan; Qiang Liu; Hong Li; Chaoxiang Xie; Yinghua Wang; Jennifer L. Campos; Hong-jin Sun | |||
| Information about the impending collision of an approaching object can be
specified by visual and auditory means. We examined the discrimination
thresholds for vision, audition, and vision/audition combined, in the
processing of time-to-collision (TTC) of an approaching object. The stimulus
consisted of a computer-simulated car approaching the participants on flat ground, which disappeared at a certain point before collision. After the
presentation of two approaching movements in succession, participants pressed a
button to indicate which of the two movements would result in the car colliding
with the viewpoint sooner from the moment it disappeared. The results
demonstrated that most participants were sensitive to TTC information provided
by a visual source, but not when provided by an auditory source. That said,
auditory information provided effective static distance information. When both
sources of information were combined, participants used the most accurate
source of information to make their judgments. Keywords: visual; auditory; multisensory integration; time-to-collision;
motion-in-depth; looming | |||
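For readers unfamiliar with the quantity being discriminated above, here is a quick worked illustration of time-to-collision, assuming the simple constant-velocity form and the optical "tau" approximation (not specifics taken from the paper).

```python
def ttc_from_distance(distance_m, speed_mps):
    """Constant-velocity time-to-collision: TTC = d / v."""
    return distance_m / speed_mps

def ttc_from_optics(theta_rad, theta_dot_rad_s):
    """Visual 'tau' approximation: TTC ~= theta / (d theta / dt),
    using only the object's angular size and its rate of expansion."""
    return theta_rad / theta_dot_rad_s

print(ttc_from_distance(30.0, 10.0))   # 3.0 s until collision
print(ttc_from_optics(0.05, 0.02))     # 2.5 s until collision
```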
| Coin Size Wireless Sensor Interface for Interaction with Remote Displays | | BIBAK | Full-Text | 733-742 | |
| Ayman Atia; Shin Takahashi; Jiro Tanaka | |||
| Human gestures are typical examples of non-verbal communication and help people communicate smoothly [1]. However, using a camera to recognize gestures requires high processing power and suffers from recognition delays [2]. The distance between a large screen and the user can also be a problem; in pen-based interaction, for example, the user must stay close to the screen. Our main motivation is therefore to design a user interface that uses a cookie wireless sensor [3] as an input device. In this paper we describe the interface setting and a method for extracting motion and direction from the 3D accelerometer using tilting gestures. We then propose a method that allows users to define their own tilting positions and map them to certain directions, and describe a menu selection interface based on a pie menu for interaction with remote displays. An evaluation of the proposed interface in terms of accuracy, time and attached objects has been conducted. Keywords: Wireless sensor; interaction with large screen display; Human computer
interaction | |||
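A rough sketch of mapping a static 3-axis accelerometer reading to pitch/roll and then to one of four tilt directions for menu selection; the axis convention, thresholds and labels are assumptions, not the paper's.

```python
import math

def tilt_direction(ax, ay, az, thresh_deg=15.0):
    """Map a static accelerometer reading (in g) to a coarse tilt gesture."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    if pitch > thresh_deg:   return "forward"
    if pitch < -thresh_deg:  return "backward"
    if roll > thresh_deg:    return "right"
    if roll < -thresh_deg:   return "left"
    return "neutral"

print(tilt_direction(0.0, 0.4, 0.9))   # device tilted to the right
```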
| Hit Me Baby One More Time: A Haptic Rating Interface | | BIBAK | Full-Text | 743-747 | |
| Christoph Bartneck; Philomena Athanasiadou; Takayuki Kanda | |||
| As the importance of recommender systems increases, in combination with the
explosion in data available over the internet and in our own digital libraries,
we suggest an alternative method of providing explicit user feedback. We create
a tangible interface, which will not only facilitate multitasking but provide
an enjoyable way of completing an otherwise frustrating and perhaps tiresome
task. Keywords: explicit feedback; recommender system; tangible interface | |||
| Minimising Pedestrian Navigational Ambiguities Through Geoannotation and Temporal Tagging | | BIBAK | Full-Text | 748-757 | |
| Ashweeni Kumar Beeharee; Anthony Steed | |||
| The increasing power and ubiquity of smart devices such as mobile phones and
PDAs means that a visitor to a city now carries with them a device capable of
giving location-specific guiding and routing information. Whilst there have
been a number of studies on the use of photographs to supplement text and
map-based guiding applications for mobile devices, in this paper we want to
propose and give an initial exploratory study of a guiding system that utilises
geoannotation to mark photographs. In geoannotation, each photograph is
selected from a repository of photographs based on the content and its
relevance to the route. The photograph itself is then geoannotated with arrows
and other markers on the fly so as to give routing information. Because the
photograph in the database will not be taken from the location of the visitor
who needs routing information, we need to take care and design cues that are
unambiguous. The main contribution of this paper is the discussion of the
geoannotation technique, and some informal results from pilot trials on how it
helps in addressing certain navigational ambiguities arising in the use of
photographs in pedestrian navigation systems. Keywords: Pedestrian Navigation; Design; Human Factors; Ambiguities; Geoannotation | |||
| Paper Metaphor for Tabletop Interaction Design | | BIBAK | Full-Text | 758-767 | |
| Guillaume Besacier; Gaëtan Rey; Marianne Najm; Stéphanie Buisine; Frédéric Vernier | |||
| The aim of this paper is to explore new metaphors for interaction design on tabletop systems. Tabletop systems are shared horizontal surfaces for co-located collaboration, which raises distinctive problems when designing interactions. We propose two paper-based metaphors, the peeling metaphor and the slot metaphor, and then suggest a way of using them to design new interactions that solve some of the problems of tabletop systems: document organization, document transmission and document duplication. Keywords: Tabletop; interaction design; paper metaphor | |||
| Advanced Drivers Assistant Systems in Automation | | BIBA | Full-Text | 768-777 | |
| Caterina Calefato; Roberto Montanari; Fabio Tango | |||
| One of the current research areas in the automotive field aims at improving driving safety through the development of preventive support systems, also called ADAS (Advanced Driver Assistance Systems). These systems are able to detect a critical situation and to inform the driver in time so that a corrective maneuver can be performed. From the human factors point of view, driving is considered a complex cognitive task that can be summarized by four main sub-processes: perception, analysis, decision and action. Each phase presumes the achievement of the previous one; an exception occurs when humans skip the planning / decision phase and go directly from analysis / interpretation to action / execution (in an almost automatic way). Following the main literature on human-centered automation, this paper proposes how ADAS intervention can be designed without a negative impact on driving safety. In particular, a forward collision warning has been studied. For this study, the Levels Of Automation (LOA) classified by Parasuraman and Sheridan (2000) have been used, as well as studies in the domain of so-called Adaptive Automation (AA) (Kaber, Riley, Endsley 2001; Scerbo 1996), which allow the information to be adapted to the driver's workload and to the level of dangerousness of the context. | |||
| Implementing an Interactive Collage Table System with Design Puzzle Exploration | | BIBAK | Full-Text | 778-787 | |
| Teng-Wen Chang; Yuan-Bang Cheng | |||
| By using puzzles as both metaphor and mechanism, design puzzles provide an interesting computational method for design exploration and for representing design ideas via collage images. Using multi-touch technology such as FTIR, this research implements a physical control device called the Collage Table (CoTa) for directly manipulating design collages, building on the mechanism developed over the years from design puzzles. Focusing on the interactive behaviours of design collage, the exploration rules as well as the puzzle rules are further developed and elaborated in this paper. The exploration search mechanism and both the hardware and software parts of the CoTa system are also evaluated for the purpose of reification. A set of possible interactions is also documented in this paper. Keywords: interactive collage table; design collage; multi-touch and sketch; design
puzzle; design exploration | |||
| Designing Smart Living Objects -- Enhancing vs. Distracting Traditional Human-Object Interaction | | BIBAK | Full-Text | 788-797 | |
| Pei-Yu Chi; Jen-hao Chen; Shih-yen Liu; Hao-Hua Chu | |||
| To realize Weiser's vision of ubiquitous computing, a popular approach has
been to create so-called smart living objects, which are everyday objects in
our living environment augmented with digital technology. In this paper, we
survey different smart living objects and classify their design choices into
different types of digital enhancement. These design choices are about choosing
the relation between the object's digital enhancement and its traditional use
-- (1) whether the object's digital function enhances or distracts its original
function, and (2) whether the object's digital interaction matches or conflicts
with its original interaction. Finally, we formulate design heuristics stating that a new digital enhancement should consider the object's traditional function and interaction method, and should avoid conflicts between the digital enhancement and the traditional use. Keywords: smart object; smart living object; smart living space; human-computer
interaction; context-aware computing; ubiquitous computing | |||
| Drawing Type Tactile Presentation for Tactile Letter Recognition | | BIBAK | Full-Text | 798-807 | |
| Ju-Hui Cho; Minsoo Hahn | |||
| Tactile displays do not disturb other people and can transfer information discreetly through direct contact with a person's body. Tactile letter recognition means that users recognize ordinary language conveyed on the skin surface. We propose a drawing-type stimulus presentation for tactile letter recognition: instead of the dot stimuli of an array type, lines are drawn directly on the skin with a pen. We built a prototype of the drawing-type device. In tactile letter recognition, the tracing mode achieved a better average letter recognition rate than the static mode. However, it was still hard to recognize letters with the same number of strokes or a similar stroke style, because the tactile sense localizes and perceives stimuli poorly. To improve the recognition rate of confusable letters, we redesigned the stroke patterns into new sequence patterns with fewer strokes and more distinctive shapes. Keywords: Tactile letter recognition | |||
| MKPS: A Multi-level Key Pre-distribution Scheme for Secure Wireless Sensor Networks | | BIBAK | Full-Text | 808-817 | |
| Sung Jin Choi; Hee Yong Youn | |||
| Key distribution is one of the most challenging issues for secure
communication in wireless sensor networks. Even though the random key
pre-distribution approach is suitable for sensor nodes with limited power and resources, a shared key between a pair of nodes is not guaranteed to be found, and such pairs then cannot communicate with each other. This paper proposes a new
robust key pre-distribution scheme solving this problem while security is not
compromised even though the data exchanged between the nodes are tapped by an
adversary. This is achieved by using the keys assigned through LDU
decomposition of the symmetric matrix of a pool of keys. A general form
solution of L, D, and U matrix is also developed to minimize the time overhead
of LDU decomposition. Computer simulation reveals that the proposed scheme also
significantly improves the energy efficiency compared with the existing random
key pre-distribution scheme. Keywords: Energy efficiency; key pre-distribution; LDU decomposition; security;
wireless sensor network | |||
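This is not the paper's exact MKPS construction, but a hedged sketch of the underlying Blom-style idea the abstract points to: decompose a symmetric key-pool matrix A into L, D and L-transpose, give node i the i-th row of L*D and the i-th column of the transpose, and let any two nodes compute the same pairwise key A[i, j] locally. The matrix sizes and the SPD key pool are assumptions made so the plain LDL decomposition below is well defined.

```python
import numpy as np

def ldl(A):
    """Plain LDL^T decomposition of a symmetric positive-definite matrix."""
    n = A.shape[0]
    L, D = np.eye(n), np.zeros(n)
    for j in range(n):
        D[j] = A[j, j] - np.sum(L[j, :j] ** 2 * D[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
    return L, D

rng = np.random.default_rng(1)
M = rng.random((5, 5))
A = M @ M.T + 5 * np.eye(5)          # symmetric positive-definite "key pool"
L, D = ldl(A)
LD = L * D                           # row i of L*D is stored at node i
U = L.T                              # column j of L^T is stored at node j

i, j = 1, 3
key_ij = LD[i] @ U[:, j]             # computed locally at node i
key_ji = LD[j] @ U[:, i]             # computed locally at node j
assert np.isclose(key_ij, key_ji) and np.isclose(key_ij, A[i, j])
```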
| AGV Simulator and Implementer Design | | BIBAK | Full-Text | 818-826 | |
| Qiang Huang; TianHao Pan; WenHuan Xu | |||
| Vision navigation [1] has been a significant research area in recent years for the robotics industry. Various algorithms for obstacle detection and avoidance have been developed. Successful testing of these algorithms requires implementation on a realistic robot vehicle, which demands extra effort from researchers. The developed integrated autonomous guided vehicle [2] simulator and implementer (AGV-SI) emulates a realistic robot vehicle operating environment. Researchers can develop their algorithms in the commonly used language Matlab; by simply feeding the algorithm and the test environment settings into the AGV-SI, an evaluation result is obtained. With the AGV-SI, the user can also choose to implement the algorithm in practice by downloading it into a robot vehicle connected to the PC. With the support of the AGV-SI, a novel algorithm was developed that integrates an adaptive median filter [3], inverse perspective mapping [4] and edge detection techniques. Both simulation and practical implementation validate the feasibility of the algorithm. Keywords: AGV; computer vision; simulator; implementer; human robot interaction | |||
| Interactive Browsing of Large Images on Multi-projector Display Wall System | | BIBA | Full-Text | 827-836 | |
| Zhongding Jiang; Xuan Luo; Yandong Mao; Binyu Zang; Hai Lin; Hujun Bao | |||
| As the precision of data acquisition increases, large images that may occupy terabytes are becoming common in research and industry. Since multi-projector display wall systems can provide higher resolution, they have become a natural choice for displaying such large images. In this paper, we present a large-image viewing system designed for display wall systems. Our system does not need to download the whole image data to each rendering node; it enables users to browse out-of-core images in real time using data streaming techniques. In the system, the original out-of-core raw image is compressed and represented in a hierarchical, multi-resolution structure. We design a proxy architecture that interactively streams data from a remote data server to all rendering nodes. Our system allows users to interactively pan and zoom the large images through a versatile graphical user interface. | |||
| Wearable Healthcare Gadget for Life-Log Service Based on WPAN | | BIBAK | Full-Text | 837-844 | |
| Sang-Hyun Kim; Dong-Wan Ryoo; Changseok Bae | |||
| The advent of ubiquitous computing has been changing both the uses and the paradigm of services, emphasizing the importance of service personalization. In this paper, we propose a Wearable Healthcare Gadget based on a Wireless Personal Area Network (WPAN) to gather a user's health information, and a new approach to a life-log service using this gadget. The gadget is wearable, gathers physiological and environmental information that can feed the life-log service, and has processing and networking capability. Keywords: healthcare; gadget; ECG; accelerometer; GPS | |||
| Vision Based Laser Pointer Interaction for Flexible Screens | | BIBAK | Full-Text | 845-853 | |
| Nam Woo Kim; Seung Jae Lee; Byung-Gook Lee; Joon-Jae Lee | |||
| In recent years, high quality interaction devices have become very popular in our environment. Industry is also undergoing rapid change, and various technologies have been explored to enable these capabilities. Projection systems using beam projectors and laser pointers have become a common infrastructure for command input. Group meetings and other non-desk situations require that people be able to interact at a distance from a display surface. This paper presents new interaction techniques that use a laser pointer to interact directly with the display on a large screen. A camera detects the position of the pointing device (such as the laser pointer dot) on the screen, allowing the laser pointer to emulate the pointing actions of the mouse. The laser pointer thus behaves as an active point on the projected display with which the user can interact. This vision-based system is augmented with a natural interface that enables the user to interactively refine the suggested rectification, making it very easy for users to execute fast and continuous commands. The interaction model developed behaves like a "smart interaction system." The vision-based interaction system requires no special hardware and runs on a standard computer. Keywords: Vision-based interaction; determining the mouse interaction; recognize laser
spot; camera calibration; nonlinear mapping function | |||
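A minimal, assumed sketch of the two steps the abstract implies: find the brightest spot in the camera frame, then map it onto screen coordinates with a pre-computed 3x3 homography. The brightness threshold and the placeholder homography are assumptions; the calibration that produces H is not shown here.

```python
import numpy as np

def laser_spot(gray, min_brightness=220):
    """Return (x, y) of the brightest pixel if it looks like a laser dot."""
    y, x = np.unravel_index(np.argmax(gray), gray.shape)
    return (x, y) if gray[y, x] >= min_brightness else None

def camera_to_screen(pt, H):
    """Apply a 3x3 homography H (from calibration) to a camera point."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]

H = np.eye(3)                     # placeholder homography from calibration
frame = np.zeros((480, 640), np.uint8)
frame[100, 200] = 255             # fake laser dot
spot = laser_spot(frame)
if spot is not None:
    print(camera_to_screen(spot, H))   # cursor position on the display
```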
| Implementation of Multi-touch Tabletop Display for HCI (Human Computer Interaction) | | BIBAK | Full-Text | 854-863 | |
| Song-Gook Kim; Jang-Woon Kim; Chil-Woo Lee | |||
| In this paper, we describe the development of a multi-touch tabletop display system and the classification of hand gesture commands for interacting with it. We also analyze its suitability as an interactive tabletop in light of the respective input and output degrees of freedom, as well as the precision and completeness provided by each. Our system is based on the FTIR (Frustrated Total Internal Reflection) principle, and the hand gestures for the necessary instructions are predefined using position sensing and tracking of multi-touch points and the number of fingertips. The system consists of two beam projectors, a diffuser film, four infrared cameras and a large acrylic screen with attached infrared LEDs. In the recognition process, gesture commands are analyzed by comparison with the predefined gesture instructions according to the number of contacting fingertips and the Euclidean distance and angles between two bright spots. The proposed vision-based tabletop display system offers clear advantages for human-computer interaction, and the efficiency of the proposed method is demonstrated by controlling Google Earth. Keywords: Multi-touch; Tabletop display; Frustrated Total Internal Reflection (FTIR);
human computer interaction | |||
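An assumed toy illustration, not the authors' recognizer, of classifying a two-fingertip gesture from the change in Euclidean distance and angle between the contact points, as the abstract describes; the labels and tolerances are made up.

```python
import math

def two_finger_gesture(p1a, p2a, p1b, p2b, dist_tol=10.0, ang_tol_deg=10.0):
    """Compare two touch points before (a) and after (b) a movement."""
    da = math.dist(p1a, p2a)
    db = math.dist(p1b, p2b)
    ang_a = math.degrees(math.atan2(p2a[1] - p1a[1], p2a[0] - p1a[0]))
    ang_b = math.degrees(math.atan2(p2b[1] - p1b[1], p2b[0] - p1b[0]))
    if db - da > dist_tol:   return "zoom_in"
    if da - db > dist_tol:   return "zoom_out"
    if abs(ang_b - ang_a) > ang_tol_deg:
        return "rotate"
    return "pan"

print(two_finger_gesture((0, 0), (50, 0), (0, 0), (90, 0)))   # zoom_in
```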
| End User Tools for Ambient Intelligence Environments: An Overview | | BIBAK | Full-Text | 864-872 | |
| Irene Mavrommati; John Darzentas | |||
| New elements that are introduced by the nature of living and interacting
within an Ambient Intelligence (AmI) environment lead to new HCI paradigms.
While AmI User Interfaces are moving off the desktop and the GUI paradigm, and
become augmented and diffused within the ubiquitous environments, a new
generation of User Interface Design Tools to facilitate the design and
realization of AmI applications is emerging. Issues and specific shifts related to Human Computer Interaction in AmI environments, which affect the design of these tools, are outlined in this paper. The high-level characteristics of End User Tools that help users reason about, as well as manipulate, the behavior of the AmI environment are also outlined. Keywords: Human Computer Interaction; Ubiquitous Computing; End User Tools; Ambient
Intelligence Environments | |||
| Tangible Interaction Based on Personal Objects for Collecting and Sharing Travel Experience | | BIBAK | Full-Text | 873-882 | |
| Elena Mugellini; Elisa Rubegni; Omar Abou Khaled | |||
| The paper presents a case study which addresses the design of a system which
supports the recollection of memories and the creation of storytelling by
combining physical objects with digital resources. The purpose of our research
is twofold. The first aim is to investigate the experience of travelling
focusing on how objects and information support human activity. The second aim
is to explore the Tangible User Interfaces (TUIs) framework and the enabling
technology in order to support recalling and sharing of travel experience. Keywords: human-computer interaction; tangible user interface; travel activity;
personal object; souvenir; case study; RFID technology | |||
| Attentive Information Support with Massive Embedded Sensors in Room | | BIBAK | Full-Text | 883-892 | |
| Hiroshi Noguchi; Taketoshi Mori; Tomomasa Sato | |||
| We constructed an informational support system based on massive sensor data in a room. In the room, called the "Sensing Room", approximately 600 sensors are distributed. Pressure sensors are embedded in the floor, a table, chairs and a bed; switch sensors and electric current sensors are attached to furniture and electric appliances; and RFID tag readers are embedded in the room surfaces. The room monitors human activities without restricting the occupants. The information support system includes a steerable active projector on the ceiling, which can display information on all surfaces of the room, so the occupant can view information wherever he or she is in the room. Based on the captured activities, the information support system decides on the timing, position and content that fit the occupants' activities. In this way, the massive sensor data enables attentive support. We demonstrate notification, decision support and navigation with the informational support system. Keywords: Smart Room; Informational Support; Active Projector; Sensing Room; Massive
Embedded Sensors | |||
| A Novel Infrastructure of Digital Storytelling Theme Museums Based on RFID Systems | | BIBA | Full-Text | 893-900 | |
| Myunjin Park; Keechul Jung | |||
| This paper suggests a storytelling service infrastructure using RFID systems for theme museums, with relevant multimedia contents that provide an active experience space and draw more interest. The proposed storytelling system uses RFID tags, a wireless LAN, and a mobile device in the theme museum, and the relevant technical issues, including the overall architecture of the RFID system, are introduced in this paper. The proposed storytelling service infrastructure could be further applied to games and animation, letting people experience realistic scenes by providing additional story elements and virtual information related to each object through RFID, a wireless LAN and a remote storytelling server. | |||
| A Novel Human-Computer Interface Based on Passive Acoustic Localisation | | BIBAK | Full-Text | 901-909 | |
| Duc Truong Pham; Ze Ji; Ming Yang; Zuobin Wang; Mostafa Al-Kutubi | |||
| This paper describes work aimed at developing new tangible computer
interfaces that can be created out of almost any surface by detecting the
vibrations generated when a user's finger interacts with the surface. Two modes
of interaction have been considered: discrete impact and continuous scratching.
Two methods for localising the point of impact have been investigated: Time
Difference of Arrival (TDOA) and Location Template Matching (LTM). Tracking of
the continuous movement of a finger scratching on a surface has been
implemented by extending the TDOA method. These methods have been tested using
solid objects of different materials and shapes. Experimental results have
shown the potential of the presented technologies for real-time impact
localisation in this new form of human-computer interfaces. Keywords: HCI; time reversal; TDOA; Kalman filter; tangible acoustic interfaces | |||
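Not from the paper: a compact sketch of how a time difference of arrival between two surface-mounted sensors could be estimated by cross-correlating their signals; the sampling rate and the synthetic impulse are made up for illustration.

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Time delay (seconds) of sig_b relative to sig_a via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

fs = 48_000
impulse = np.zeros(1024); impulse[100] = 1.0
delayed = np.roll(impulse, 37)             # arrives 37 samples later at sensor B
print(tdoa(impulse, delayed) * 1e3, "ms")  # ~0.77 ms
```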
| Inhabitant Guidance of Smart Environments | | BIBA | Full-Text | 910-919 | |
| Parisa Rashidi; G. Michael Youngblood; Diane J. Cook; Sajal K. Das | |||
| With the convergence of technologies in artificial intelligence, human-computer interfaces, and pervasive computing, the idea of a "smart environment" is becoming a reality. While we all would like the benefits of an environment that automates many of our daily tasks, a smart environment that makes the wrong decisions can quickly become annoying. In this paper, we describe a simulation tool that can be used to visualize activity data in a smart home, play through proposed automation schemes, and ultimately provide guidance for automating the smart environment. We describe how automation policies can adapt to resident feedback, and demonstrate the ideas in the context of the MavHome smart home. | |||
| Application of Tangible Acoustic Interfaces in the Area of Production Control and Manufacturing | | BIBAK | Full-Text | 920-925 | |
| Wolfgang Rolshofen; Peter Dietz; Günter Schäfer | |||
| This article explores how physical objects like machine housings can be
transformed into natural, seamless, unrestricted touch interfaces. The
objective is to design multimodal tangible acoustic interfaces (TAI) that
bridge the gap between the virtual and physical worlds. The methods presented
are based on the principle that interacting with a physical object modifies its
surface acoustic patterns. By visualizing and characterizing such acoustic
patterns, it is possible to transform almost any object (for example, a
machine, wall, window, table top, giant screen, or arbitrary 3D objects) into
an interactive interface. Because of their numerous advantages over other
methods, acoustic-based interfaces will have potential for the whole computer
and information industry, as well as manufacturing. Keywords: Tangible Acoustic Interface; Acoustic Source Localization; Production
Control; Manufacturing | |||
| Cyberwalk: Implementation of a Ball Bearing Platform for Humans | | BIBA | Full-Text | 926-935 | |
| Martin C. Schwaiger; Thomas Thümmel; Heinz Ulbrich | |||
| This paper presents an advanced model of a treadmill using balls which are
actuated by a belt on a turntable. The platform is able to run at high speeds
which are close to normal walking speed and can withstand the load of a 100kg
human. Several different tests have been performed to prove the principle on
the one hand and to evaluate the upscalability of the system on the other hand.
First, a human walks on the platform at different speeds. The speed is increased until the unit becomes unstable at about 1.5 m/s. Methods for further
stabilization at higher speeds are discussed. Second, a vehicle simulates the
movements of a human in an urban environment at a downscaled level. The control
recenters the vehicle and the resulting accelerations on the vehicle are
calculated and upscaled.
These results are compared to the preliminary results of our partner Max Planck Society Tübingen where the research about the human perception of accelerations is done. | |||
| A Huge Screen Interactive Public Media System: Mirai-Tube | | BIBAK | Full-Text | 936-945 | |
| Akio Shinohara; Junji Tomita; Tamio Kihara; Shinya Nakajima; Katsuhiko Ogawa | |||
| We develop an interaction framework for huge public displays with multiple
users in public spaces such as concourses, lobbies in buildings, and rendezvous
spots. Based on this framework, we introduce an interactive system for public
spaces called Mirai-Tube. This system creates a scalable interactive media
space and has a scalable real-time recognizer. A Mirai-Tube system was
installed in the underground concourse of Minato Mirai Station in Yokohama. We
conducted a demonstration experiment from 1st Feb. to 31st Oct. 2004. This
trial represents the world's first experiment in a real public space in terms
of its scale and its time period. We evaluate our interactive media system from
three points of view: acceptability as a public media, how much attention the
public pays to it, and its understandability as an advertising media. This
paper describes the features, implementation, and operation of the interactive
media system and the results of the evaluation. Keywords: interactive public displays; ambient displays; interactive advertising;
pattern recognition; subtle interaction | |||
| Kitchen of the Future and Applications | | BIBAK | Full-Text | 946-955 | |
| Itiro Siio; Reiko Hamada; Noyuri Mima | |||
| A kitchen is a place where food is prepared and education and communication
activities relating to food are carried out. As it is a place that witnesses
more activity when compared to the other parts of the house, there are many
potential ubiquitous computing applications that can be installed in a kitchen.
We are developing a computer-augmented kitchen environment, the Kitchen of the
Future, that incorporates various computing elements into a standard kitchen
unit. In this paper, we describe an overview of the Kitchen of the Future
system and three applications, namely recording and replaying of a cooking process, videoconference-based cooking instruction, and interactive cooking
navigation. Keywords: Kitchen of the Future; Ubiquitous computing; computer-augmented kitchen;
home computing; computer-aided cooking; remote instruction | |||
| A Tangible Game Interface Using Projector-Camera Systems | | BIBAK | Full-Text | 956-965 | |
| Peng Song; Stefan Winkler; Jefry Tedjokusumo | |||
| We designed and implemented a tangible game interface using projector-camera
systems. The system offers a simple and quick setup and an economical design. The projection onto a paper board held by the user provides more direct viewing as well as more natural and flexible interaction than bulky HMDs or monitor-based
game interfaces. Homography calibration techniques are used to provide
geometrically compensated projections on the board with robustness and
accuracy. Keywords: Projector-camera systems; projector-based display; interface design;
tilt-board; homography calibration; augmented reality; tangible user interface
(TUI) | |||
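The abstract names homography calibration as the mechanism for geometric compensation. The OpenCV sketch below illustrates the basic operation, estimating a homography from point correspondences and pre-warping a frame; the marker coordinates and image sizes are invented for the example, and the paper's full projector-camera calibration and board-tracking pipeline is not shown.

```python
# Illustrative sketch of homography-based geometric compensation.
# Corner coordinates and image sizes are made-up placeholders.
import cv2
import numpy as np

# Corners of the hand-held board as detected in the camera image (pixels) ...
board_in_camera = np.array([[102, 87], [518, 64], [540, 421], [95, 430]], dtype=np.float32)
# ... and the corresponding corners of the content we want to appear on the board.
content_corners = np.array([[0, 0], [639, 0], [639, 479], [0, 479]], dtype=np.float32)

# Estimate the homography mapping content coordinates onto the board as seen by the camera.
H, _ = cv2.findHomography(content_corners, board_in_camera, cv2.RANSAC)

content = np.full((480, 640, 3), 255, dtype=np.uint8)  # placeholder game frame
cv2.putText(content, "game frame", (180, 240), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 0, 0), 3)

# Pre-warp the frame so that, once projected, it appears undistorted on the tilted board.
warped = cv2.warpPerspective(content, H, (800, 600))
cv2.imwrite("prewarped_frame.png", warped)
```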
| Context-Aware Mobile AR System for Personalization, Selective Sharing, and Interaction of Contents in Ubiquitous Computing Environments | | BIBAK | Full-Text | 966-974 | |
| Youngjung Suh; Youngmin Park; Hyoseok Yoon; Yoonje Chang; Woontack Woo | |||
| With advances in tracking and increased computing power, mobile AR systems
are popular in our daily life. Researchers in mobile AR technology have
emphasized the technical challenges involved in the limitations imposed from
mobility. They did not consider context-aware service with user-related
information annotation, even if in ubiquitous computing environment, various
contexts of both a user and an environment can be utilized easily as well as
effectively. Moreover, it is difficult to have access to pervasive but
invisible computing resources. At the same time, the more smart appliances
become evolved with more features, the harder their user interfaces tend to
become to use. Thus, in this paper, we propose Context-aware Mobile Augmented
Reality (CaMAR) system. It lets users interact with their smart objects through
personalized control interfaces on their mobile AR devices. Also, it supports
enabling contents to be not only personalized but also shared selectively and
interactively among user communities. Keywords: context-aware; mobile AR; personalization; selective sharing | |||
| Center or Corner? The Implications of Mura Locations on LCD Displays | | BIBAK | Full-Text | 975-981 | |
| Kuo-Hao Tang; Yueh-Hua Lee; Kuo Hsun Ku | |||
Taking Mura area and contrast into consideration, the SEMI Standard provides a guideline for Mura inspection. However, for an end user, who interacts with a computer under varying tasks and environments, Mura location on the screen may be an important factor affecting overall satisfaction with the LCD
display. Regression analysis from this study showed that both Mura level and
Mura location affect the perceived value of an LCD by a user. Further analysis
showed that this perceived value may change across different Mura Level ranges.
When Mura levels are high, the small correlation coefficient between the markdown value of a display due to Mura and the Mura location suggests that the display will not be accepted by customers regardless of Mura location. On the other hand,
if the Mura level is moderate, a higher correlation can be obtained, which
means customers are sensitive to the Mura location. Keywords: Mura; User evaluation; LCD display; Mura location | |||
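The analysis described here, regressing perceived markdown value on Mura level and location and then comparing the value/location correlation within Mura-level ranges, can be sketched as below. The synthetic data, variable coding, and coefficients are invented for illustration and do not reproduce the study's data.

```python
# Hedged sketch of the kind of analysis the abstract describes. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
mura_level = rng.uniform(0.0, 1.0, n)      # 0 = faint, 1 = severe (assumed scale)
mura_location = rng.uniform(0.0, 1.0, n)   # 0 = corner, 1 = center (assumed coding)
markdown = 10 + 40 * mura_level + 15 * mura_location * (mura_level < 0.6) + rng.normal(0, 3, n)

# Ordinary least squares: markdown ~ 1 + mura_level + mura_location
X = np.column_stack([np.ones(n), mura_level, mura_location])
coef, *_ = np.linalg.lstsq(X, markdown, rcond=None)
print("intercept, level, location coefficients:", coef)

# Correlation of markdown value with location, split by Mura-level range
for name, mask in [("moderate", mura_level < 0.6), ("high", mura_level >= 0.6)]:
    r = np.corrcoef(markdown[mask], mura_location[mask])[0, 1]
    print(f"{name} Mura levels: corr(markdown, location) = {r:.2f}")
```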
| A Taxonomy of Physical Contextual Sensors | | BIBAK | Full-Text | 982-989 | |
| Philippe Truillet | |||
In this article, we introduce a taxonomy of physical contextual sensors. Applications are becoming more and more interactive, mobile and pervasive, and context is a major issue for them. Knowledge about the user and the context in which the user interacts is crucial in design. Our objective is therefore to help designers choose the best sensors for the context they want to collect. We extended and refined a taxonomy of context by associating sensors with it. Keywords: physical sensors; taxonomy; design | |||
| Human-Robot Interaction in the Home Ubiquitous Network Environment | | BIBA | Full-Text | 990-997 | |
| Hirotada Ueda; Michihiko Minoh; Masaki Chikama; Junji Satake; Akihiro Kobayashi; Kenzabro Miyawaki; Masatsugu Kidode | |||
The situation-recognition ability of a robot is enhanced by connecting conversational robots to a home ubiquitous network. The robot's ability to explain situations is also enhanced by acquiring information through the network. Taken together, new developments in human-robot interaction can be expected. In this paper we describe a prototype system developed in an experimental house based on this concept, and then discuss a proof-of-concept study of actual life in the house. | |||
| Measuring User Experiences of Prototypical Autonomous Products in a Simulated Home Environment | | BIBAK | Full-Text | 998-1007 | |
| Martijn H. Vastenburg; David V. Keyson; Huib de Ridder | |||
| Advances in sensor technology, embedded processing power, and modeling and
reasoning software, have created the possibility for everyday products to sense
the environment and pro-actively anticipate user needs. There is however a risk
of creating environments in which people experience a lack of control. The aim
of this study is to explore the degree to which people are willing to delegate
control to a pro-active home atmosphere control system. The findings suggest
that participants are willing to delegate control to easy-to-use systems, and
they do not want to delegate control to complex and unpredictable systems. It
is argued that the willingness to delegate should not be considered fixed; rather, system initiative might depend on the situation at hand or change over time. Design research on mixed-initiative systems faces a
methodological challenge, in terms of measuring user experience of autonomous
prototypes in a controlled way, while still preserving the sense of a realistic
experience. The paper describes advantages and disadvantages of testing in a
simulated home environment versus testing in the field. Keywords: Smart environments; user studies; intelligent interfaces; mixed initiative;
interaction design; user adaptivity | |||
| Evaluation of Tangible User Interfaces (TUIs) for and with Children -- Methods and Challenges | | BIBAK | Full-Text | 1008-1017 | |
| Diana Yifan Xu; Janet C. Read; Emanuela Mazzone; Stuart MacFarlane; Martin Brown | |||
| In recent years, creating alternative computer environments, especially
Tangible User Interfaces (TUIs) for children, has become increasingly popular.
However, up till now, the evaluation of tangible technologies has been rather
scarce. This paper focuses on evaluating children's technologies that go beyond
the desktop computer. A qualitative case study on our TUI prototype will be
presented by using selected methods: Think Aloud (TA), Peer Tutoring (PT) and
Drawing Intervention (DI). We found some limitations to the methods, and lessons were learnt when the evaluation studies were carried out with children. The research contributes to paradigms such as the design and
evaluation of 'the disappearing computer' and 'tangible computing'. Keywords: Evaluation; Children; Tangible User Interface (TUI); Usability; Fun | |||
| Social Intelligence as the Means for Achieving Emergent Interactive Behaviour in Ubiquitous Computing Environments | | BIBAK | Full-Text | 1018-1029 | |
| Ioannis D. Zaharakis; Achilles D. Kameas | |||
| This work introduces a framework for modelling the main actors (human,
artefacts and services) in a symbiotic Ambient Intelligence environment. It also proposes an architectural scheme that associates the social behaviour exhibited during interaction, which is not an inherent characteristic of the participants, with the functional behaviour of the participants of a Ubiquitous Computing application. The overall approach is demonstrated by a specific example application that illustrates its concepts from a more technical point of
view. Keywords: Ambient Intelligence; Emergent Behaviour; Human-Computer Interaction; Social
Intelligence; Ubiquitous Environments | |||
| The Research on Human-Computer Interaction in Ambient Intelligence | | BIBAK | Full-Text | 1030-1039 | |
| Yong Zhang; Yibin Hou; Zhangqin Huang; Hui Li; Rui Chen; Haitao Shang | |||
Research on Ambient Intelligence (AmI) has so far been launched in Europe, and Human-Computer Interaction has become an important technical aspect of it. AmI is the integration of ubiquitous computing, ubiquitous communication and user interfaces, and its goal is to design and realize brand-new intelligent, personalized and connective systems or services. In this paper, we put forward an experimental system that embodies the essential characteristics of Ambient Intelligence. The architecture and context-aware mechanisms of this system are discussed briefly and some Human-Computer Interaction techniques are described in detail. With the research platform in place, some Human-Computer Interaction techniques are worthy of
further study in the future. Keywords: Ambient Intelligence; Human-Computer Interaction; Context-aware; Agent;
Human facial-orientation | |||
| The Universal Control Hub: An Open Platform for Remote User Interfaces in the Digital Home | | BIBAK | Full-Text | 1040-1049 | |
| Gottfried Zimmermann; Gregg C. Vanderheiden | |||
| This paper describes the application of an international user interface
standard in the digital home in a gateway-based approach: The Universal Control
Hub. ISO/IEC FDIS 24752 specifies the Universal Remote Console framework,
promoting a "user interface socket" and pluggable user interfaces. The
Universal Control Hub implements this standard in a way that facilitates the
operation of existing controlled devices and controller devices. It retrieves
user interfaces for the discovered target devices from resource servers that
hold registered resources from multiple parties for any target device. Thus an
open platform for user interfaces in the digital home is created that decouples
the user interface from the device. This approach is expected to lead to more
usable and more accessible user interfaces. Keywords: Remote user interfaces; task-based user interfaces; digital home; usability;
accessibility; Universal Control Hub; Universal Remote Console | |||
| An Investigation of Usability Evaluation for Smart Clothing | | BIBAK | Full-Text | 1053-1060 | |
| Haeng-Suk Chae; Ji-Young Hong; Hyun-Seung Cho; Kwang-Hee Han; Joohyeon Lee | |||
The purpose of this paper is to develop a usability evaluation for smart clothing. We propose evaluation factors and objects through a user-centered evaluation process. The basic idea is to learn the thoughts of wearable-device users; we also gathered opinions from an expert group. As a result, we adopted evaluation item categories. By examining empirical data obtained from Observation Evaluation (OE) and Wearability Evaluation (WE), we draw conclusions on the usability evaluation of smart clothing. The design process for creating successful wearable usability is no longer about providing technical success, but rather about creating an optimal user experience. Previous studies provide guidelines on the kinds of limiting processes that can affect the user's situation in wearable computing. In this paper, the usability of smart clothing is described, improving upon and reinforcing previous studies [2], [4], [5]. In addition, we provide a framework for usability that supports new types. First, we composed a systematic structure of experience factors from factors extracted in a scenario-based qualitative process, and built a factor-object matrix to check each usability factor and wearable object position. Second, we developed questions for each evaluation domain. Third, we conducted the evaluation by questionnaire, in both observational and wearable forms. Last, we carried out the analysis and structure evaluation, implementing a factor analysis that made it possible to assess the theoretical structure and delete meaningless questionnaire items. Keywords: Smart Clothing; Observation Evaluation (OE); Wearability Evaluation (WE);
Wearable Computer; Usability Evaluation; Comfort Rate Scale (CRS); Wearability | |||
| Textile Touch Visualization for Clothing E-Business | | BIBAK | Full-Text | 1061-1069 | |
| G. Cho; S. Jang; J. Chae; Kyeong-Ah Jeong; Gavriel Salvendy | |||
| The purpose of this study is to investigate the effect of textile touch
visualization on e-commerce web site. Two e-commerce web sites (Gap and
Anthropology) were selected and 160 female subjects took part in this study. To
visualize the tactile sensation, the clothing materials of each brand were measured by KES and the values were displayed in a bar chart. This visualized tactile information, together with lightness and washing method, was added to the original web site. Each brand thus had two web sites (one original, the other modified with the added information), and 40 participants answered the questionnaire on each web site. The questionnaire consisted of two parts, answered before and after touching the material. In the results, the modified web sites showed a significant difference between 'no visual information' and 'with visual information' on most of the questions. This may indicate that the modified web sites provide customers with more accurate tactile information about the clothing material. However, neither modified web site showed a significant difference between before and after touching, while the original web sites showed a significant difference. Keywords: textile touch visualization; Kawabata evaluation system; mechanical
property; primary hand value; converted primary hand value; tactile sensation;
visual information; internal consistency; purchasability; return rate; decision
making time | |||
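The study displays KES-derived hand values as a bar chart on the product page. The sketch below shows what such a chart could look like in code; the property names and numbers are invented placeholders, not the converted primary hand values actually used in the study.

```python
# Minimal sketch of displaying KES-style hand values as a bar chart on a product page.
# Property names and values are hypothetical placeholders.
import matplotlib.pyplot as plt

properties = ["Stiffness", "Smoothness", "Fullness/Softness"]  # example primary hand values
hand_values = [4.2, 6.8, 5.5]                                   # hypothetical 0-10 scale

fig, ax = plt.subplots(figsize=(5, 3))
ax.barh(properties, hand_values, color="#7a9cc6")
ax.set_xlim(0, 10)
ax.set_xlabel("Hand value (0-10)")
ax.set_title("Tactile profile of the garment fabric")
fig.tight_layout()
fig.savefig("tactile_profile.png", dpi=150)
```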
| A Development of Design Prototype of Smart Healthcare Clothing for Silver Generation Based on Bio-medical Sensor Technology | | BIBAK | Full-Text | 1070-1077 | |
| Hakyung Cho; Joohyeon Lee | |||
| Recently, "Smart Clothing" technology, along with a rapid change of
lifestyle and customers' needs, has been studied and developed for various
applications such as Entertainment, Business, and Health Care and Sports areas.
Among the number of types in "Smart Clothing" technologies, "Smart Clothing"
for health care is anticipated as one of the highly demanded products in future
market. The demand of "Smart Clothing" based on bio-medical sensor technology
will increase due to the entry of an aging society where a high demand of
development in both medical industry and medical welfare rises. This trend is
related to the new condition of technological and social transition built
through the rapid development of Ubiquitous environment. On the view of
customers' demand, "Smart Clothing" technology coincides with the macro-flow of
customers' general demand on clothing product. In domestic market, however, the
research and development of "Smart Clothing" based on a bio-medical sensor
technology are insufficient. In this research, with a consideration in clothing
suitability and prevalence rate, a prototype of "Smart Clothing" is firstly
developed for diagnosing basic vital signs of cardiac disorder and respiratory
disease. A prototype of smart healthcare clothing developed in this research,
maintaining similar appearance to common clothing, is equipped with
textile-based sensors and other devices which senses and transmits vital signs,
while keeping comfortable fits of clothing. When a person having it on, sensed
vital signals are transmitted to computer in hospital through wireless
transaction for a real-time monitoring process. It is also designed to send
back alarm command to the wearer's cellular phone when emergent condition is
detected in his body. The RFID tag, equipped on the inside of "Smart Clothing,"
stores wearer's medical history and personal data, so that a rescuer can
collect the data in emergency case and send them to hospital for faster and
efficient treatment for the rescue. An evaluation on usability and comfort of
the firstly derived prototype in this research were evaluated. Based on the
result from evaluation on the two aspects, the design prototype of sensor-based
smart healthcare clothing was revised. Keywords: Smart clothing for health care; Design prototype; Bio-medical sensor; Vital
signs; Usability | |||
| Design and Evaluation of Textile-Based Signal Transmission Lines and Keypads for Smart Wear | | BIBAK | Full-Text | 1078-1085 | |
| Jayoung Cho; Jihye Moon; Moonsoo Sung; Keesam Jeong; Gilsoo Cho | |||
The present paper was intended to demonstrate the applicability of surface-conductive fabrics as electronic textiles. First, we tested the
electrical durability of a Cu/Ni electro-less plated fabric reinforced by PU
(polyurethane) sealing. Using the fabric, we constructed textile-based signal
transmission lines and textile-based keypads. For performance tests, we
compared the output signals between the textile transmission lines and Cu
cables and evaluated textile-based keypads by means of operation force and
subjective operation feeling. PU sealing was effective in providing electrical durability for surface-conductive fabrics; the repeatedly laundered fabric showed an output signal almost identical to that of Cu and successfully operated an MP3 player. Subjective evaluation and operation force measurement showed
that the rubber dome switch keypad was preferred due to a low operation force
and less pressure on the skin when the keypad-mounted clothing is worn. The
paper suggested specific applications and evaluation methods of electronic
textiles as essential components for smart wear. Keywords: electronic textiles; smart wear; electro-less metal plating; conductive
fabric; electrical resistance; transmission lines; textile-based keypads;
switches; operation force | |||
| Display Button: A Marriage of GUI and PUI | | BIBAK | Full-Text | 1086-1095 | |
| Stanley Chung; Jung-Hyun Shim; Changsu Kim | |||
| GUI (graphical user interface) output and PUI (physical user interface)
input are two major concepts of user interface design for commercial products.
GUIs and PUIs have been normally implemented on individual devices such as a
screen and sets of buttons, respectively. However, recent research shows several cases of combining GUI and PUI into one user interface module, which can display visual information and collect user input simultaneously. With this combination, the expressiveness and flexibility of the GUI can compensate for the impassive face of physical buttons, and the straightforward and intuitive way of using a PUI, like pushing a button, can remedy the GUI's long-standing learnability and memorability issues. This study presents Display Button, a modular component combining GUI and PUI with minimal increase in production cost, and surveys its characteristics and merits in several contexts of consumer appliances. Display Button modules were applied both as additional components of a conventional display and as independent components. In order to design user interfaces of a mobile phone and a digital camera with Display Button modules, a pattern-oriented approach is used to ensure a smoother migration from traditional user interfaces to the novel use of Display Button. Display Button can be applied to various kinds of consumer electronics in many possible alternative combinations, and hence increase their market competitiveness through both a better end-user experience and lower production cost. Keywords: graphical user interface; physical user interface; display button; usability | |||
| Construction and Validation of a Neurophysio-technological Framework for Imagery Analysis | | BIBAK | Full-Text | 1096-1105 | |
| Andrew J. Cowell; Kelly S. Hale; Chris Berka; Sven Fuchs; Angela Baskin; David Jones; Gene Davis; Robin Johnson; Robin Fatch | |||
| Intelligence analysts are bombarded with enormous volumes of imagery, which
they must visually filter through to identify relevant areas of interest.
Interpretation of such data is subject to error due to (1) large data volumes,
implying the need for faster and more effective processing, and (2)
misinterpretation, implying the need for enhanced analyst/system effectiveness.
This paper outlines the Revolutionary Accelerated Processing Image Detection
(RAPID) System, designed to significantly improve data throughput and
interpretation by incorporating advancing neurophysiological technology to
monitor processes associated with detection and identification of relevant
target stimuli in a non-invasive and temporally precise manner. Specifically,
this work includes the development of innovative electroencephalographic (EEG)
and eye tracking technologies to detect and flag areas of interest, potentially
without an analyst's conscious intervention or motor responses, while detecting
and mitigating problems with tacit knowledge, such as anchoring bias, in real time to reduce the possibility of human error. Keywords: Augmented Cognition; electroencephalography; eye tracking; imagery analysis | |||
| A Study on the Acceptance Factors of the Smart Clothing | | BIBAK | Full-Text | 1106-1112 | |
| Ji-Young Hong; Haeng-Suk Chae; Kwang-Hee Han | |||
| This study aims to predict user acceptance of smart clothing. The present
research develops and validates new products for smart clothing. Studies
suggest that further analysis of the process be undertaken to better establish
properties for smart clothing, underlying structures and stability over
innovative technologies. The findings reported in this paper should provide useful methods for identifying user needs, and now offer a way to explain technology acceptance. Both qualitative and quantitative methods were applied in this study in order to find out user needs for smart clothing. We wrote scenarios and conducted both focus group interviews and a
survey to assess the user's interest. The purpose of the survey is to evaluate
the importance of the functions and to evaluate the degree of the participant's
feeling and attitude. Furthermore, we explore the nature and specific
influences of factors that may affect the user perception and usage. Keywords: smart clothing; wearable computing; user acceptance factor; SNFD; HCI; user
needs | |||
| A Wearable Computing Environment for the Security of a Large-Scale Factory | | BIBAK | Full-Text | 1113-1122 | |
| Jiung-yao Huang; Chung-Hsien Tsai | |||
| This paper studies the issue of using the wearable computer as a remote
sensing device of a large-scale factory. The infrastructure of ubiquitous
security environment to realize the remote sensing capability for the security
guard is presented in this paper. The paper also sketches out a wearable computing scenario for the security guard under such a ubiquitous security environment: through the help of the wearable computer, the security guard can remotely sense the security status of each building while patrolling a large-scale factory. To achieve a seamless remote sensing environment, we use wireless APs (Access Points) as the relay between the static sensor networks installed inside each building and the mobile sensing device worn by the security guard. The APs enable the wearable computer to seamlessly receive the security status of each building and at the same time upload the wearer's status to the security control center. Furthermore, this research adopts the
technology of embedded Linux to design a middleware for the wearable computer.
The proposed architecture of the wearable computer is scalable, flexible and
modular for the mobile computing system. Finally, this paper elaborates a
seamless connection approach for the wearable computer within a ubiquitous
security environment. Keywords: Wearable computer; Mobile Computing; Remote sensing; Ubiquitous security
environment | |||
| Modification of Plastic Optical Fiber for Side-Illumination | | BIBAK | Full-Text | 1123-1129 | |
| Min Ho Im; Eun Ju Park; Chang Heon Kim; Moo Sung Lee | |||
In this study, we investigated the effect of solvent etching and physical treatment on the sidelighting of POF, albeit qualitatively. Even though the two treatments were effective in making sidelight POF, the tensile properties of the POF decreased as a result of surface damage during the treatment. We also investigated how to overcoat side-illuminating POF so that it would not break during the weaving process used to make side-illuminating POF fabric. In view of clarity and interfacial adhesion with POF, AC-100, based on an acrylic polymer, was chosen as the overcoating material. However, the tensile strength of notched POF decreased even after overcoating, possibly due to the toluene used as a diluent, which is also a solvent for the POF core, i.e., PMMA. Keywords: plastic optical fiber; POF sidelight; overcoating | |||
| Exploring Possibilities of ECG Electrodes for Bio-monitoring Smartwear with Cu Sputtered Fabrics | | BIBAK | Full-Text | 1130-1137 | |
| Seeun Jang; Jayoung Cho; Keesam Jeong; Gilsoo Cho | |||
This article deals with a way of developing E-textiles using a sputtering method and with their possibilities as ECG electrodes for bio-monitoring smartwear. As the market for smartwear grows, research on E-textiles becomes more important. Among the various ways of providing conductivity on textiles, we selected sputtering technology. Through sputtering, we developed E-textiles with a thin Cu layer, about 2 micrometers thick, deposited on the surface of the fabrics. We then measured the electrical resistances, examined their performance as ECG (electrocardiogram) electrodes, and compared the ECG signals with those measured with general AgCl electrodes. As a result, the ECG signals from the Cu-sputtered electrodes showed great potential for textile-based electrodes, differing little from those of the commonly used AgCl electrodes. Keywords: bio-monitoring; smartwear; E-textiles; sputtering; Electrocardiogram (ECG) | |||
| Development of Educational Program for Quick Response System on Textile and Fashion E-Business | | BIBAK | Full-Text | 1138-1146 | |
| Kyung-Yong Jung; Jong-Hun Kim; Jung-Hyun Lee; Young-Joo Na | |||
It is normal in the textile and fashion industry to develop fashion products by predicting consumers' purchase needs. If the prediction fails, that is, if consumers do not purchase the product, problems arise: discount sales become inevitable or stock increases tremendously. A Quick Response System, by contrast, lets the company observe consumer needs consistently and establish manufacturing schedules rapidly, so that unnecessarily stocked products can be avoided. Consumer preferences are collected and analyzed through the data generated by a POS system and provided to the related manufacturers over the network in real time, so that the manufacturers can merchandise rapidly and produce and deliver products according to consumer needs. Thus, this study developed a POS-system education program covering the merchandising of apparel products by prediction and confirmation with shortened product lead times, and the cooperative system among apparel companies, retailers and manufacturers, using Internet technology in the textile and fashion industry. Keywords: Quick response system; Consumer's needs; POS-educational program; Textile
and fashion e-business | |||
| Preparation of Conductive Materials for Smart Clothing: Doping and Composite of Conducting Polymer | | BIBAK | Full-Text | 1147-1154 | |
| Jooyong Kim; Nowoo Park | |||
| Polyaniline (PANI) is a conjugated conducting polymer which can be doped by
either protonation with a protonic acid or by charge transfer with an oxidation
agent. The p-type polyaniline was obtained by protonic acid doping, while the n-type polyaniline was produced by doping with a strong reductant. Unlike the n-type polyaniline, the p-type polyaniline was stable in moisture at room temperature. In spite of CNTs' excellent capability for electrical and mechanical
improvements, it has been repeatedly pointed out that the phase separation
between the polymer matrix and CNTs leads to very limited applications. In this
research, CNTs were pre-treated by a concentrated nitric acid which produced
carboxylic acid groups at the defect sites for improving the dispersion ability
in the PANI matrix. Keywords: polyaniline; carbon nanotube; doping | |||
| A Feasibility Study of Sixth Sense Computing Scenarios in a Wearable Community | | BIBAK | Full-Text | 1155-1164 | |
| Seunghwan Lee; Hojin Kim; Sumi Yun; Geehyuk Lee | |||
| We propose a communication method for more abundant interaction using a
sixth sensory channel for diffusion of multimedia files in a wearable computing
environment. We suggest applications connecting people with the media files they possess as the most suitable applications for introducing the wearable computer to the public. Sixth sense computing is the name that we use for the
study of new possibilities that will be enabled by a wireless multimedia
channel between wearcomp users. Scenarios enabled by sixth sense computing are
developed and implemented in a wearable computer platform. We demonstrate
possible requirements for new types of interaction style, an interface for a
wearable computer application, and dynamic variation of society by varying
system parameters such as the media selection method for diffusion. Keywords: Wearable computer; ubiquitous computing; wearable community; sixth sense
computing; new interaction style | |||
| Wearable Computers IN the Operating Room Environment | | BIBAK | Full-Text | 1165-1172 | |
| Qi Ma; Peter Weller; Gerlinde Mandersloot; Arjuna Weerasinghe; Darren Morrow | |||
| High technology is a common feature in the modern operating room. While this
situation enables a wide range of patient related data to be collected and
analysed, the optimal viewing of this information becomes problematic. This
situation is particularly acute in a busy operating theatre or while the
clinician is moving around the hospital. The WINORE (Wearable computers IN the
Operating Room Environment) project is a possible solution to this dilemma. It
uses wearable computers and head mounted displays to provide an enhanced
delivery of patient information, wirelessly collected from a range of devices,
to surgeons, anaesthetists, and supervising clinicians. A crucial dimension to
the project is how the clinicians interface with the system given the
restrictions of sterile conditions and reduced dexterity due to operating
procedures. In this paper we present the WINORE project concept, the background
ideas and some findings from our trials. Keywords: Wearable Computer; Operating Room; Head Mounted Display | |||
| Coupling the Digital and the Physical in Therapeutic Environments | | BIBAK | Full-Text | 1173-1182 | |
| Patrizia Marti; Leonardo Giusti | |||
The Multi-sensory Room is an ongoing project aiming to develop non-pharmacological therapeutic protocols and IT solutions for the treatment of
dementia in institutionalized contexts. The project exploits the potential of
ambient technologies and tangible media for developing a therapeutic
environment to stimulate patients' residual cognitive, behavioral and physical
abilities. The Multi-sensory Room is currently in use in an Italian home care facility.
Initial trials and preliminary results are described in the paper. Keywords: Ambient technologies; Tangible media; Dementia Care; Therapeutic
environments | |||
| Functional Brain Imaging for Analysis of Reading Effort for Computer-Generated Text | | BIBAK | Full-Text | 1183-1192 | |
| Erin M. Nishimura; Evan D. Rapoport; Benjamin A. Darling; Jason P. Cervenka; Jeanine Stefanucci; Dennis Proffitt; Traci H. Downs; J. Hunter Downs | |||
| This paper discusses two functional brain imaging techniques, functional
magnetic resonance imaging (fMRI) and functional near-infrared (fNIR) imaging,
and their applications for quantitative usability analysis. This application is
demonstrated through a two-phase study on reading effort required for varying
degrees of font degradation. The first phase used fMRI to map cortical
locations that were active while subjects read fonts of varying quality. The
second phase used fNIR imaging, which showed higher levels of activity (and
thus greater cognitive effort) in the visual processing area of the brain
during a reading task with text presented in degraded fonts. The readability
analysis techniques demonstrated in this study also generalize to applications
requiring an objective analysis of interface usability. Keywords: quantitative usability analysis; functional brain imaging; functional
magnetic resonance imaging (fMRI); functional near-infrared (fNIR) | |||
| Smart Furoshiki: A Context Sensitive Cloth for Supporting Everyday Activities | | BIBAK | Full-Text | 1193-1199 | |
| Ryo Ohsawa; Kei Suzuki; Takuya Imaeda; Masayuki Iwai; Kazunori Takashio; Hideyuki Tokuda | |||
| This paper introduces a novel system for supporting everyday activities.
Recent research has proposed embedding computers and sensors in user environments so as to provide assistance in certain scenarios [1]. However, it is difficult for users to create such environments themselves. Our goal is to develop a
technology that will enable novice users to create such environments easily. In
order to achieve this goal, we have developed a sensorized cloth called "Smart
Furoshiki." Keywords: Furoshiki; Smart Cloth; RFID; Context Awareness | |||
| Information Display of Wearable Devices Through Sound Feedback of Wearable Computing | | BIBAK | Full-Text | 1200-1209 | |
| Park Young-hyun; Han Kwang-hee | |||
Functions in wearable devices have become varied and specialized during the development of wearable computing. However, the emergence of new functions has made it difficult and time-consuming for the devices to display their status, such as whether they are turned on or off. Moreover, relying solely on visual display could overload users' visual cognition. In this research, sounds were used to relay feedback on the status of wearable devices. We first verified the usefulness of adding sound feedback, and then confirmed the effect of each device-specific sound used as feedback. Keywords: wearable computing; information display; sound feedback; auditory icon;
earcon | |||
| An Evaluation Framework for the Design Concepts of Tangible Interface on New Collaborative Work Support System | | BIBAK | Full-Text | 1210-1219 | |
| Youngbo Suh; Cheol Lee; Joobong Song; Minjoo Jung; Myung Hwan Yun | |||
| This study aims to suggest a systematic evaluation framework to evaluate
design concepts of a new product at the conceptual design phase based on users'
requirements and tasks, development trends of relevant technologies, and the
CPV. The proposed framework to evaluate design concepts of a new product
consists of three phases. In phase 1, we identify and analyze users' needs,
functional requirements and their expected tasks by utilizing user
scenario-based analysis and hierarchical task analysis. In phase 2, by
deploying a relevant technology roadmap, we investigate technology alternatives
for satisfying the user needs or functional requirements. In phase 3, we
evaluate the design concepts using an evaluation checklist, which is based on the functional requirements derived from the relationship analysis, utilizing the CPV attribute as a quantifiable measure. A case study was conducted to evaluate
the design concepts of a new CSCW-based tangible interface that was recently
designed to support group decision making activities. Keywords: Conceptual Design; Concept Evaluation; CPV; Technology Trends Analysis;
Scenario; HTA | |||
| The Research of Using Image-Transformation to the Conceptual Design of Wearable Product with Flexible Display | | BIBAK | Full-Text | 1220-1229 | |
| Yung-Chin Tsao; Li-Chieh Chen; Shaio-Chung Chan | |||
A wearable computer offers job-critical information to people whose hands must be free for other work. By adopting flexible electronic technology, designs of such apparatus provide the features of high mobility and
durability. This paper takes the duties of the police as a case study. By
analyzing the activities of the police and the problems of currently used
apparatus, thirty-three problems were elicited. A Quantification Method Type IV
and a cluster analysis were employed to analyze the problem's structure and six
groups of problems were highlighted. The design specifications were derived
from the previous studies. Four conceptual designs of wearable computers were
proposed by transferring the soft and wearable images in the design processes.
Developing and evaluating the experimental prototypes will be undertaken for
further analysis. Keywords: flexible display; wearable computer; image-transformation | |||
| User Interaction Design for a Wearable and IT Based Heart Failure System | | BIBAK | Full-Text | 1230-1239 | |
| Elena Villalba; Ignacio Peinado; María Teresa Arredondo | |||
In Europe, Cardiovascular Diseases (CVD) are the leading cause of death, responsible for 45% of all deaths. Moreover, Heart Failure, the paradigm of CVD, mainly
affects people older than 65. In the current aging society, the European
MyHeart Project was created, whose mission is to empower citizens to fight CVD
by leading a preventive lifestyle and being able to be diagnosed at an early
stage. This paper presents the design of the user interaction of a Heart
Failure Management System, based on daily monitoring of Vital Body Signals,
with wearable and mobile technologies, for the continuous assessment of this
chronic disease. The user interaction in such systems plays a role of major
importance, enabling the usage of technical solutions which motivate people to
adopt healthy lifestyles. Keywords: user interaction; usability test; wearable systems; health monitoring;
personalized applications; personas; goal-oriented design | |||
| VortexBath: Study of Tangible Interaction with Water in Bathroom for Accessing and Playing Media Files | | BIBAK | Full-Text | 1240-1248 | |
| Jun-ichiro Watanabe | |||
| We describe an interface for places where people use water, such as the
kitchen, toilet, or bathroom. We developed a prototype that can be operated by
users by tangibly interacting with water. It enables image files and movie
files to be browsed by projecting the images onto water, it plays music files
with album jacket images, and it reads out information from the Internet using
speech-synthesis technology. We also describe a demonstration of how users can
access their media files by tangibly interacting with water, which differs from conventional ways such as pressing buttons on a remote control. Keywords: Water interaction; ambient display; tangible interaction; content browsing | |||