| BigKey: A Virtual Keyboard for Mobile Devices | | BIBAK | Full-Text | 3-10 | |
| Khaldoun Al Faraj; Mustapha Mojahid; Nadine Vigouroux | |||
| This paper describes the BigKey virtual keyboard for mobile devices, designed
to make the keys of a virtual keyboard easier to acquire. The tiny size of the
keys makes efficient selection difficult. To overcome this drawback, we propose
expanding the keys that correspond to the predicted next character entry. The
proposed solution facilitates the selection task by expanding the next entry;
moreover, the prediction system reduces the visual scanning time needed to find
the letter one is looking for. A user performance study showed that
participants were 25.14% faster and more accurate with the BigKey virtual
keyboard than with a normal virtual keyboard. Keywords: Virtual keyboard; text input; PDAs; expanding targets; letter prediction | |||
| TringIt: Easy Triggering of Web Actions from a Phone | | BIBAK | Full-Text | 11-20 | |
| Vinod Anupam | |||
| Much information that is of interest to mobile users is available on the
Web, yet is difficult to access for most users. We introduce a novel method for
users to interact with network-connected computers using their phones, and
describe a system called TringIt that implements the method. TringIt enables
users to trigger Web actions by simply dialing specific numbers -- an action
that we call a Phone Click. TringIt can be used out-of-the-box by any phone
user. The Phone Click is most useful from mobile phones that can receive
messages in response to the click. TringIt enables users to easily initiate
interaction with businesses and content owners by simply dialing numbers
discovered in offline media (e.g. print, TV, radio) as well as online media
(e.g. Web, SMS, MMS). It makes every mobile phone a more compelling
information, interaction and participation device. Keywords: Phone Click; Tring; Dial-to-Click; Call Triggered Messaging;
User-to-Application Signaling; SMS/MMS Click-through; Dial-able hyperlinks | |||
| Context Awareness and Perceived Interactivity in Multimedia Computing | | BIBAK | Full-Text | 21-29 | |
| Xiao Dong; Pei-Luen Patrick Rau | |||
| Context awareness and perceived interactivity are two factors that might
benefit mobile multimedia computing. This research takes mobile TV
advertisements as a scenario and verifies the impacts of perceived
interactivity and its interaction with context awareness. Seventy-two
participants were recruited and an experiment was conducted in order to
identify those impacts. The main findings were as follows: (1) advertisements
with high perceived interactivity are significantly more effective than
advertisements with low perceived interactivity; (2) the interaction of context
awareness and perceived interactivity has a significant influence on the
effectiveness of mobile TV advertising. Keywords: Context awareness; perceived interactivity; mobile TV advertising | |||
| Human Computer Interaction with a PIM Application: Merging Activity, Location and Social Setting into Context | | BIBAK | Full-Text | 30-38 | |
| Tor-Morten Grønli; Gheorghita Ghinea | |||
| Personal Information Managers exploit the ubiquitous paradigm in mobile
computing technology to integrate services and programs for business and
leisure. Recognizing that every situation is constituted by information and
events, this context varies with the situation users are in and the tasks they
are about to perform. The value of context as a source of information is widely
recognized, and for individual dimensions context has been conceptually
described and prototypes have been implemented. The novelty in this paper is a
new implementation of context that integrates three dimensions of context:
social information, activity information and geographical position. Based on an
application developed for Microsoft Windows Mobile, these three dimensions of
context are explored and implemented in an application for mobile telephone
users. The experiment conducted shows the viability of tailoring contextual
information in three dimensions to provide users with timely and relevant
information. Keywords: PIM; context; context-aware; Microsoft pocket outlook; ubiquitous computing;
HCI | |||
| CLURD: A New Character-Inputting System Using One 5-Way Key Module | | BIBAK | Full-Text | 39-47 | |
| Hyunjin Ji; Taeyong Kim | |||
| A character inputting system using one 5-way key module has been developed
for use in mobile devices such as cell phones, MP3 players, navigation systems,
and remote controllers. All Korean and English alphabet characters are
assembled by two key clicks, and because the five keys are adjacent to each
other and the user does not have to monitor his/her finger movements while
typing, characters can be generated extremely quickly and conveniently. Keywords: Character Input; Typing; 5-way Key Module; Mobile Device; Keyboard; Wearable
Computer | |||
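The two-click assembly scheme described above can be pictured as a lookup keyed by an ordered pair of the five keys. The mapping below is a hypothetical illustration only; the actual CLURD layout is defined in the paper.

```python
# Illustrative two-click decoder for a 5-way key module
# (hypothetical mapping, not the actual CLURD layout).
KEYS = ("C", "L", "U", "R", "D")  # Center, Left, Up, Right, Down

# Each character is addressed by an ordered pair of key presses.
CHAR_TABLE = {
    ("C", "L"): "a", ("C", "U"): "b", ("C", "R"): "c", ("C", "D"): "d",
    ("L", "C"): "e", ("L", "U"): "f", ("L", "R"): "g", ("L", "D"): "h",
    # ... remaining pairs would cover the rest of the alphabet
}

def decode(clicks):
    """Turn a flat sequence of key clicks into characters, two clicks per character."""
    text = []
    for first, second in zip(clicks[0::2], clicks[1::2]):
        text.append(CHAR_TABLE.get((first, second), "?"))
    return "".join(text)

print(decode(["C", "L", "L", "D"]))  # -> "ah" with this placeholder table
```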
| Menu Design in Cell Phones: Use of 3D Menus | | BIBAK | Full-Text | 48-57 | |
| Kyungdoh Kim; Robert W. Proctor; Gavriel Salvendy | |||
| The number of mobile phone users has been steadily increasing due to the
development of microtechnology and human needs for ubiquitous communication.
Menu design features play a significant role in cell phone design from the
perspective of customer satisfaction. Moreover, small screens of the type used
on mobile phones are limited in the amount of available space. Therefore, it is
important to achieve good menu design. A review of previous menu design studies
for human-computer interaction suggests that design guidelines for mobile
phones need to be reappraised, especially 3D display features. We propose a
conceptual model for cell phone menu design with 3D displays. The three main
factors included in the model are: the number of items, task complexity, and
task type. Keywords: cell phones; menu design; 3D menu; task complexity; task type | |||
| Mobile Interfaces in Tangible Mnemonics Interaction | | BIBA | Full-Text | 58-66 | |
| Thorsten D. Mahler; Marc Hermann; Michael Weber | |||
| The Tangible Reminder Mobile brings together tangible mnemonics with ambient displays and mobile interaction. Based on the Tangible Reminder Project, we present a new interface for mobile devices that is capable of viewing and editing data linked to real-world objects. An intelligent piece of furniture equipped with RFID sensors and digitally controlled lighting keeps track of appointments linked to real-world objects placed in its trays. The mobile interface now makes classic computer interaction with this ambient shelf entirely unnecessary. Instead, by implementing the toolglass metaphor, the mobile interface can be used to view and edit the data linked to objects. | |||
| Understanding the Relationship between Requirements and Context Elements in Mobile Collaboration | | BIBAK | Full-Text | 67-76 | |
| Sergio F. Ochoa; Rosa Alarcón; Luis A. Guerrero | |||
| The development of mobile collaborative applications involves several
challenges, and one of the most important is dealing with an ever-changing work
context. This paper describes the relationship between these applications'
requirements and the context elements that are typically present in mobile
collaborative work. The article presents a house of quality that illustrates
this relationship and shows the trade-offs involved in several design decisions. Keywords: Context Elements; Software Requirement; Mobile Collaboration; House of
Quality | |||
| Continuous User Interfaces for Seamless Task Migration | | BIBA | Full-Text | 77-85 | |
| Pardha S. Pyla; Manas Tungare; Jerome Holman; Manuel A. Pérez-Quiñones | |||
| In this paper, we propose the Task Migration framework that provides a vocabulary and constructs to decompose a task into its components, and to examine issues that arise when it is performed using multiple devices. In a world of mobile devices and multiple computing devices, users are often forced to interrupt their tasks, move their data and information back and forth among the various devices manually, recreate the interaction context, and then resume the task on another device. We refer to this break from the task at hand as a task disconnect. Our objective is to study how software can bridge this task disconnect, enabling users to seamlessly transition a task across devices using continuous user interfaces. The framework is intended to help designers of interactive systems understand where breaks in task continuity may occur, and to proactively incorporate features and capabilities to mitigate their impact or avoid such task disconnects altogether. | |||
| A Study of Information Retrieval of En Route Display of Fire Information on PDA | | BIBAK | Full-Text | 86-94 | |
| Weina Qu; Xianghong Sun; Thomas Plocher; Li Wang | |||
| This study focused on which display modality is most convenient for
firefighters to obtain information, comparing audio display, text display, and
a combined multimodal display. Can fire commanders effectively obtain key fire
information while en route to the fire, especially when sitting in a moving,
bumpy car? The tasks included free browsing, free recall, and information
search. The results showed that: (1) audio alone always took firefighters the
longest time to browse and search, but adding audio to the two combined
displays made information quicker to access and easier to remember; (2)
searching in a moving car took slightly longer than searching in the lab; (3)
text display remained a necessary and indispensable way to present information. Keywords: Information retrieval; Display; PDA; Free-browse; Free-recall; Search | |||
| A Mobile and Desktop Application for Enhancing Group Awareness in Knowledge Work Teams | | BIBAK | Full-Text | 95-104 | |
| Timo Saari; Kari Kallinen; Mikko Salminen; Niklas Ravaja; Marco Rapino | |||
| In this paper we present a first prototype for a mobile and desktop system
and application for enhancing group awareness in knowledge work teams. The
prototype gathers information from the interactions of the group within the
application and analyses it. Results are displayed to members of the group as
key indexes describing the activity of the group as a whole and the individual
members of the group. The advantages of using the prototype are expected to be
increased awareness within the group, possibly leading to positive effects on group
performance. Keywords: Group awareness; emotional awareness; knowledge work; mobile application;
desktop application | |||
| A Study of Fire Information Detection on PDA Device | | BIBAK | Full-Text | 105-113 | |
| Xianghong Sun; Weina Qu; Thomas Plocher; Li Wang | |||
| This study focused on how useful an en route information display system is
for firefighters' information access, situation understanding, and decision
making. We conducted a series of tests to investigate the efficiency of the
system and to compare different display modes, including audio, text, and their
combinations, in order to find the most appropriate one. The results showed
that: (1) audio alone always took firefighters the longest time to detect
information, but adding audio to the two combined displays (text + audio, and
text + third-level audio) made information quicker to access and easier to
remember; (2) the en route system can be used well either in a quiet, static
environment or in a moving, slightly bumpy environment, provided users receive
some training before using it. Keywords: Information detection; PDA; fire | |||
| Empirical Comparison of Task Completion Time between Mobile Phone Models with Matched Interaction Sequences | | BIBAK | Full-Text | 114-122 | |
| Shunsuke Suzuki; Yusuke Nakao; Toshiyuki Asahi; Victoria Bellotti; Nick Yee; Shin'ichi Fukuzumi | |||
| CogTool is a predictive evaluation tool for user interfaces. We wanted to
apply CogTool to an evaluation of two mobile phones, but, at the time of
writing, CogTool lacks the necessary (modeling baseline) observed human
performance data to allow it to make accurate predictions about mobile phone
use. To address this problem, we needed to collect performance data from both
novice users' and expert users' interactions to plug into CogTool. Whilst
novice users for a phone are easy to recruit, in order to obtain observed data
on expert users' performance, we had to recruit owners of our two target mobile
phone models as participants. Unfortunately, it proved to be hard to find
enough owners of each target phone model. Therefore we asked whether multiple
similar models that had matched interaction sequences could be treated as the
same model from the point of view of expert performance characteristics. In
this paper, we report an empirical experimental exercise to answer this
question. We compared identical target task completion times for experts across
two groups of similar models. Because we found significant differences in some
of the task completion times within one group of models, we would argue that it
is not generally advisable to consider multiple phone models as equivalent for
the purpose of obtaining observed data for predictive modeling. Keywords: Cognitive Model; CogTool; Evaluation; Human Centered Design; Human
Interface; Mobile Phone; Systematization; Usability Test | |||
| Nine Assistant Guiding Methods in Subway Design -- A Research of Shanghai Subway Users | | BIBAK | Full-Text | 125-132 | |
| Linong Dai | |||
| In big cities, passengers often have great difficulty recognizing subway
stations. Beyond improving station signage, and based on extensive field
research, we identified nine practical and effective methods to help passengers
identify subway stations. These nine methods involve visual, aural, and tactile
design. This paper also applies theories of human memory from cognitive
psychology to subway research. The methods are also applicable to other spatial
design within subways and even to general underground space design. Keywords: User Research; Subway Station; Quick-Identification | |||
| Pull and Push: Proximity-Aware User Interface for Navigating in 3D Space Using a Handheld Camera | | BIBA | Full-Text | 133-140 | |
| Mingming Fan; Yuanchun Shi | |||
| In 3D object manipulation and virtual-space navigation tasks, an efficient zoom operation is essential. The common method uses a combination of mouse and keyboard, which requires users to be familiar with the operation and takes considerable practice. This paper presents two methods that recognize the zoom operation by sensing the user's pull and push movements. Users only need to hold a camera in hand; when they pull or push their hands, our approach senses the change in proximity and translates it into a zoom operation. In user studies, we compared the recognition rates of the different methods and analyzed the factors that affect performance. The results show that our methods run in real time with high accuracy. | |||
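One simple way to sense a pull or push movement from a handheld camera, assuming feature points have already been tracked between consecutive frames, is to measure how much the tracked points spread apart or contract: an expanding point cloud suggests a push toward the scene (zoom in), a contracting one a pull (zoom out). This is an editorial sketch under that assumption, not the authors' recognition method.

```python
import numpy as np

def zoom_factor(prev_pts: np.ndarray, curr_pts: np.ndarray) -> float:
    """Estimate a zoom factor from tracked feature points in two consecutive frames.

    prev_pts, curr_pts: (N, 2) arrays of matching image coordinates.
    Returns > 1.0 when points spread apart (push / zoom in),
            < 1.0 when they contract (pull / zoom out).
    """
    prev_spread = np.linalg.norm(prev_pts - prev_pts.mean(axis=0), axis=1).mean()
    curr_spread = np.linalg.norm(curr_pts - curr_pts.mean(axis=0), axis=1).mean()
    return float(curr_spread / (prev_spread + 1e-9))

# Toy example: the same points scaled by 1.2 about their centroid -> factor ~1.2.
pts = np.array([[100.0, 100.0], [200.0, 120.0], [150.0, 200.0], [120.0, 180.0]])
scaled = (pts - pts.mean(axis=0)) * 1.2 + pts.mean(axis=0)
print(round(zoom_factor(pts, scaled), 2))  # ~1.2
```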
| A Study on the Design of Voice Navigation of Car Navigation System | | BIBAK | Full-Text | 141-150 | |
| Chih-Fu Wu; Wan-Fu Huang; Tung-Chen Wu | |||
| This study tries to find the design blind spots of the voice prompt function
in current car navigation systems and to make improvement suggestions. The
experimental plan was implemented through videotape analysis of the
voice-prompt mode, with reference to the Urban Road Classification Regulations
and the results of a questionnaire survey. Driving simulation tests were
conducted with 15 subjects, 13 road combinations, and 3 running speeds, and
different prompt modes run synchronously were also included. Compared with the
present mode (prompt timing determined by distance), the newly designed mode
(prompt timing determined by running speed) significantly improved driving
performance and reduced mental workload. When driving on a main artery with
fast and slow lanes, adding a changing-lane prompt with a clear sound to the
system can help increase the driving accuracy rate. Keywords: navigation systems; voice prompt function; driving accuracy rate | |||
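The difference between the two prompt modes comes down to when the prompt fires: a distance-triggered prompt fires at a fixed distance before the turn, whereas a speed-aware prompt fires when the estimated time to the turn drops below a fixed lead time, so faster driving produces earlier warnings. The thresholds below are illustrative assumptions, not values from the study.

```python
# Illustrative comparison of distance-triggered vs. speed-aware prompt timing.
DISTANCE_THRESHOLD_M = 300.0   # current mode: prompt at a fixed distance before the turn
LEAD_TIME_S = 12.0             # new mode: prompt a fixed time before the turn

def prompt_by_distance(distance_m: float) -> bool:
    return distance_m <= DISTANCE_THRESHOLD_M

def prompt_by_speed(distance_m: float, speed_mps: float) -> bool:
    # Fire when the time needed to reach the turn at the current speed is short enough.
    return speed_mps > 0 and (distance_m / speed_mps) <= LEAD_TIME_S

for speed_kmh in (40, 80):
    speed = speed_kmh / 3.6          # convert km/h to m/s
    d = 250.0                        # metres remaining before the turn
    print(speed_kmh, "km/h:",
          "distance-mode fires" if prompt_by_distance(d) else "distance-mode waits",
          "|",
          "speed-mode fires" if prompt_by_speed(d, speed) else "speed-mode waits")
```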
| Front Environment Recognition of Personal Vehicle Using the Image Sensor and Acceleration Sensors for Everyday Computing | | BIBAK | Full-Text | 151-158 | |
| Takahiro Matsui; Takeshi Imanaka; Yasuyuki Kono | |||
| In this research, we propose a method for detecting moving objects in front
of a Segway by detecting the Segway's running state. The running state of the
personal vehicle is detected with both an image sensor and an acceleration
sensor mounted on the Segway. When objects are moving in front of the Segway,
the image sensor captures the motion while the acceleration sensor shows a
different result. By analyzing this difference, our method successfully
distinguishes moving objects from the environment. Keywords: Segway; Image Sensor; Acceleration Sensor; Optical Flow | |||
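The core idea, comparing observed image motion against the motion implied by the acceleration sensor, can be sketched as follows: integrate acceleration to estimate the vehicle's own speed, predict the flow magnitude that self-motion alone would cause, and flag image regions whose observed flow deviates strongly. The scaling constant and threshold are editorial assumptions, not values from the paper.

```python
import numpy as np

def detect_moving_regions(flow_mag: np.ndarray, accel: np.ndarray, dt: float,
                          flow_per_speed: float = 5.0, threshold: float = 3.0) -> np.ndarray:
    """Flag image regions whose optical-flow magnitude is inconsistent with ego-motion.

    flow_mag: (H, W) per-region optical-flow magnitudes (pixels/frame).
    accel:    1D array of forward acceleration samples since the last frame (m/s^2).
    dt:       sampling interval of the acceleration sensor (s).
    flow_per_speed, threshold: illustrative constants (pixels per m/s, pixels).
    """
    ego_speed = float(np.sum(accel) * dt)             # crude integration of acceleration
    expected_flow = flow_per_speed * abs(ego_speed)   # flow that self-motion alone explains
    return np.abs(flow_mag - expected_flow) > threshold

# Toy example: a 3x3 grid of flow magnitudes with one fast-moving region.
flow = np.array([[1.0, 1.1, 0.9],
                 [1.0, 6.5, 1.0],
                 [0.9, 1.0, 1.1]])
accel_samples = np.full(10, 0.2)                      # gentle acceleration over 10 samples
print(detect_moving_regions(flow, accel_samples, dt=0.1))  # only the centre region is flagged
```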
| Common Interaction Schemes for In-Vehicle User-Interfaces | | BIBAK | Full-Text | 159-168 | |
| Simon Nestler; Marcus Tönnis; Gudrun Klinker | |||
| In this paper, different interaction schemes currently implemented by major
automotive manufacturers are identified and analyzed. Complete overviews of all
in-vehicle user-interface concepts are rare. This paper gives deeper insight
into the interaction schemes and user-interface concepts implemented in current
cars. Additionally, an expert review with 7 experts was performed to get a
first impression of which interaction schemes work well in the in-vehicle
context. In order to assess the suitability of the interaction schemes for the
development of usable in-vehicle user-interfaces, we performed different tests.
The results are reported in text and tables. Keywords: User Interface Design; In-vehicle information systems; IVIS | |||
| Dynamic Maps for Future Navigation Systems: Agile Design Exploration of User Interface Concepts | | BIBA | Full-Text | 169-178 | |
| Volker Paelke; Karsten Nebe | |||
| Maps have traditionally been used to support orientation and navigation. Navigation systems shift the focus from printed maps to interactive systems. The key goal of navigation systems is to simplify specific tasks, e.g. route planning or route following. While users of navigation systems need fewer skills in navigation-specific activities, e.g. reading maps or manual route planning, they must now interact with the user interface of the navigation device, which requires a different set of skills. Current navigation systems aim to simplify the interaction by providing interfaces that use basic interaction mechanisms (e.g. button based interfaces on a touch-screen), exploiting the fact that many users are already familiar with such techniques. In the presentation of the information most navigation systems employ map-like displays, possibly combined with additional information, again to exploit familiarity. While such an approach can help with early adoption, it can also limit usefulness and usability. There is, however, a large opportunity to improve the input, output and functionality of navigation systems. In this paper we expand a model of classical map-based communication to identify possibilities where "dynamic maps" can enhance map-based communication in navigation systems. We report on how an agile design exploration process was applied to examine the design space spanned by the new model and to develop system probes. We discuss the user feedback and its implications for future interface concepts for navigation systems. | |||
| Flight Searching -- A Comparison of Two User-Interface Design Strategies | | BIBAK | Full-Text | 179-188 | |
| Antti Pirhonen; Niko Kotilainen | |||
| The most usable user-interface is not necessarily the most popular. For
example, the extent to which an interaction is based on graphics can depend
highly on convention rather than usability. This study compares contemporary
flight search applications in order to investigate whether a more extensive use
of graphics can enhance usability. Two user-interfaces are compared: one
follows the ideal principles of graphical user-interfaces and direct
manipulation, while the second interface requires text to be entered with a
keyboard. The results of the comparison indicate that even an early prototype
of the graphics-based alternative performed better than the typical
formula-based search application on several usability measures. Keywords: Flight search; direct manipulation; graphical user interface | |||
| Agent-Based Driver Abnormality Estimation | | BIBA | Full-Text | 189-198 | |
| Tony Poitschke; Florian Laquai; Gerhard Rigoll | |||
| To enhance current driver assistance and information systems with the capability to recognize an individual driver's needs, we conceive a system based on fuzzy logic and a multi-agent framework. We investigate how useful information about the driver can be gained from typical vehicle data and apply that knowledge in our system. In a pre-stage, the system learns the driver's regular steering manner with the help of fuzzy inference models. By comparing his regular and current manner, the system recognizes whether the driver is possibly impaired and is getting into a risky situation. Furthermore, the steering behavior and traffic situation are continuously observed for similar patterns. Based on the obtained information, the system tries to adapt its assistance functions to the driver's needs. | |||
| Enhancing the Accessibility of Maps with Personal Frames of Reference | | BIBA | Full-Text | 199-210 | |
| Falko Schmid | |||
| The visualization of geographic information requires large displays. Even large screens can be insufficient to visualize, for example, a long route at a scale at which all decisive elements (such as streets and turns) and their spatial context can be shown and understood at once. This is critical as the visualization of spatial data is currently migrating to mobile devices with small displays. Knowledge-based maps, such as µMaps, are a key to the visual compression of geographic information: those parts of the environment that are familiar to a user are compressed while the unfamiliar parts are displayed in full detail. As a result, µMaps consist of elements of two different frames of reference: a personal and a geographic frame of reference. In this paper we argue for the integration of personally meaningful places in µMaps. Their role is to clarify the spatial context without enlarging the visual representation, and they serve as an experience-based key to the different scales (the compressed and uncompressed parts of the environment) of µMaps. | |||
| Augmented Interaction and Visualization in the Automotive Domain | | BIBAK | Full-Text | 211-220 | |
| Roland Spies; Markus Ablaßmeier; Heiner Bubb; Werner Hamberger | |||
| This paper focuses on innovative interaction and visualization strategies
for the automotive domain. To keep the increasing amount of information in
vehicles easily accessible and also to minimize the mental workload for the
driver, sophisticated presentation and interaction techniques are essential. In
this contribution, a new approach to interaction, so-called augmented
interaction, is presented. The idea is an intelligent combination of innovative
visualization and interaction technologies to reduce the mental transfer effort
the driver needs between displayed information, control movement, and reality.
Using contact-analog head-up displays, relevant information can be presented
exactly where it is needed. For control, touch technologies deliver a very
natural and direct way of interaction. However, to keep the eyes on the road,
the driver needs haptic feedback to operate a touchpad blindly. Therefore, the
touchpad presented in this contribution is equipped with a haptically
adjustable surface. Combining both technologies delivers an innovative way of
in-vehicle interaction.
It enables the driver to interact in a very direct way by sensing the
corresponding environment on the touchpad. Keywords: head-up display; touch; haptic feedback; interaction; automotive; augmented
reality | |||
| Proposal of a Direction Guidance System for Evacuation | | BIBAK | Full-Text | 221-227 | |
| Chikamune Wada; Yu Yoneda; Yukinobu Sugimura | |||
| In this paper, we propose a device that indicates the direction in which to
evacuate. Our proposed system, which presents the direction through tactile
sensation on the head, could be used in zero-visibility environments such as
rooms filled with smoke. This paper describes the feasibility of our proposed
system and identifies problems to be solved. Keywords: Evacuation; Smoke; Direction; Guidance; Tactile sensation | |||
| A Virtual Environment for Learning Airport Emergency Management Protocols | | BIBAK | Full-Text | 228-235 | |
| Telmo Zarraonandia; Mario Rafael Ruiz Vargas; Paloma Díaz; Ignacio Aedo | |||
| This paper presents a virtual environment designed to enhance the learning
of airport emergency management protocols. The learning is performed in an
informal manner, with each learner playing a different role in a particular
emergency simulation. Learners interact within the virtual environment,
managing the available information and following the steps prescribed for each
type of emergency in the Airport Emergency Plan of the Spanish Civil Defence
Organization. The simulation can be run in different modes of difficulty, and
can be used as a learning tool as well as an evaluation tool to measure the
accuracy of the learner's actions within the protocol. It can also support
stand-alone training, with some of the emergency roles played by the
computer. The virtual environment has been built using DimensioneX, an open
source multi-player online game engine. Keywords: Virtual environment; emergency; game engine; simulation | |||
| User Profiling for Web Search Based on Biological Fluctuation | | BIBAK | Full-Text | 239-247 | |
| Yuki Arase; Takahiro Hara; Shojiro Nishio | |||
| Because of the information flood on the Web, it has become difficult for
users to find the information they need. Although Web search engines assign
authority values to Web pages and show ranked results, this is not enough to
find information of interest easily, as users have to comb through reliable but
off-focus information. In this situation, personalization of Web search results
is effective. To realize such personalization, a user profiling technique is
essential; however, since users' interests are unstable and varied, the
technique should be flexible and tolerant of changes in the environment. In
this paper, we propose a user profiling method based on a model of organisms'
flexibility and environmental tolerance. We review previous user profiling
methods and discuss the adequacy of applying this model to user profiling. Keywords: User profile; Web search; biological fluctuation | |||
| Expression of Personality through Avatars: Analysis of Effects of Gender and Race on Perceptions of Personality | | BIBAK | Full-Text | 248-256 | |
| Jennifer Cloud-Buckner; Michael Sellick; Bhanuteja Sainathuni; Betty Yang; Jennie J. Gallimore | |||
| Avatars and virtual agents are used in social, military, educational,
medical, training, and other applications. Although there is a need to develop
avatars with human-like characteristics, many applications include avatars
based on stereotypes. Prabhala and Gallimore (2007) conducted research to
develop collaborative computer agents with personality. Using the Big Five
Factor Model of personality they investigated how people perceive personality
based on actions, language, and behaviors of two voice-only computer agents in
a simulation. However, these computer agents included no visual features in
order to avoid stereotypes. The current research extends the work of Prabhala
and Gallimore by investigating the effects of personality, race, and gender on
the perceived personality of avatars with animated faces. Results showed that
subjects were able to distinguish the different personalities, and that race
and gender significantly affected perceptions on a trait-by-trait basis. Keywords: avatar; virtual agent; personality; Big Five Factor | |||
| User-Definable Rule Description Framework for Autonomous Actor Agents | | BIBAK | Full-Text | 257-266 | |
| Narichika Hamaguchi; Hiroyuki Kaneko; Mamoru Doke; Seiki Inoue | |||
| In the area of text-to-video research, our work focuses on creating video
content from textual descriptions, or more specifically, creating
TV-program-like content from script-like descriptions. This paper discusses a
description framework for specifying rough action instructions in the form of a
script, from which detailed instructions controlling the behavior and actions
of autonomous video actor agents are produced. The paper also describes a
prototype text-to-video system and presents examples of instructions for
controlling an autonomous actor agent with the proposed descriptive scheme. Keywords: Autonomous Actor Agent; Digital Storytelling; Text-to-Video; TVML;
Object-Oriented Language | |||
| Cognitive and Emotional Characteristics of Communication in Human-Human/Human-Agent Interaction | | BIBAK | Full-Text | 267-274 | |
| Yugo Hayashi; Kazuhisa Miwa | |||
| A psychological experiment was conducted to capture the nature of
Human-Human and Human-Agent Interactions where humans and computer agents
coexist in a collaborative environment. Two factors were manipulated to
investigate the influences of the 'schema' about and the 'actual partner' on
the characteristics of communication. The first factor, expectation about the
partner, was controlled by the experimenter's instruction, manipulating with
which partner (human or computer agent) participants believed to be
collaborating. The second factor, the actual partner, was controlled by
manipulating with which partner (human or computer agent) participants actually
collaborated. The results of the experiments suggest that the degree of
refinement of the conversation, controlled as the actual-partner factor,
affected both the emotional and cognitive characteristics of communication,
whereas the schema about the partner affected only the emotional
characteristics of communication. Keywords: Collaboration; Human-Human Interaction; Human-Agent Interaction;
Communication | |||
| Identification of the User by Analyzing Human Computer Interaction | | BIBAK | Full-Text | 275-283 | |
| Rüdiger Heimgärtner | |||
| This paper describes a study analyzing users' interaction with a computer
system to show that users can be identified solely by analyzing their
interaction behavior. The user can be identified with a precision of up to
99.1% within one working session, and this classification rate can be improved
using additional interaction indicators. Moreover, this kind of protection
method, based on analyzing the user's interaction with the system, cannot be
circumvented, owing to the uniqueness of user interaction patterns. The method
and the results of the study are presented and discussed. Keywords: user interface; user identification; HCI analysis; interaction analysis;
interaction indicator; tool; theft protection; computer protection; culture;
user interface design; personalization; identification; recognition | |||
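The general idea, identifying a user from a vector of interaction indicators, can be sketched with a nearest-centroid classifier over previously recorded sessions. The indicators (mean click interval, keystroke rate, mouse-path curvature) and the classifier here are editorial assumptions; the paper's own indicators and classification method may differ.

```python
import numpy as np

# Hypothetical per-user profiles: mean vectors of interaction indicators
# (mean click interval in s, keystrokes per s, mouse-path curvature).
PROFILES = {
    "alice": np.array([0.42, 5.1, 0.30]),
    "bob":   np.array([0.65, 3.2, 0.55]),
    "carol": np.array([0.35, 6.4, 0.22]),
}

def identify(session_indicators: np.ndarray) -> str:
    """Return the profile whose centroid is closest to the observed session."""
    return min(PROFILES, key=lambda u: np.linalg.norm(PROFILES[u] - session_indicators))

observed = np.array([0.40, 5.3, 0.28])   # indicators measured in the current session
print(identify(observed))                # -> 'alice'
```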
| The Anticipation of Human Behavior Using "Parasitic Humanoid" | | BIBAK | Full-Text | 284-293 | |
| Hiroyuki Iizuka; Hideyuki Ando; Taro Maeda | |||
| This paper proposes the concept of the Parasitic Humanoid (PH), a wearable
robot that establishes intuitive interactions with its wearer rather than
relying on conventional, counter-intuitive methods such as key-typing. It
requires a different paradigm of interface technology, called a behavioral or
ambient interface, which harmonizes human-environment interactions so that they
naturally lead to a more suitable state, integrating information science and
biologically inspired technology. We re-examine the use of wearable computers
and devices from the viewpoint of behavioral information, and then show a
possible way to realize PH as a set of integrated wearable interface devices.
For PH to establish harmonic interaction with its wearer, mutually anticipated
interaction between computer and human is necessary. To establish this harmonic
interaction, we investigate social interaction through experiments on human
interaction in which subjects' inputs and outputs are restricted to a low
dimension at the behavioral level. The experimental results are discussed in
terms of attractor superimposition. Finally, we discuss an integrated PH system
for human support. Keywords: Ambient interface; parasitic humanoid; behavior-based Turing test; attractor
superimposition | |||
| Modeling Personal Preferences on Commodities by Behavior Log Analysis with Ubiquitous Sensing | | BIBA | Full-Text | 294-303 | |
| Naoki Imamura; Akihiro Ogino; Toshikazu Kato | |||
| When shopping, consumers exhibit specific behaviors toward preferred or favorite items in order to get more information about them, such as material and price. We have been developing a smart room that estimates their preferences and favorite items through observation with ubiquitous sensors such as RFID and Web cameras. We modeled the decision-making process in shopping with the AIDMA rule and detected specific behaviors, namely "See", "Touch" and "Take", to estimate the user's interest. We found that consumers can be classified by behavior patterns based on the frequency and duration of these behaviors. In our experiment we tested twenty-eight subjects with twenty-four T-shirts. We obtained a better precision ratio for each subject in estimating preferred and favorite items by discriminant analysis of his or her behavior log, combined with the behavior-pattern classification described above. | |||
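The See/Touch/Take aggregation underlying such a preference estimate can be pictured as a simple weighted sum over a sensor event log. The weights below are hypothetical; the paper itself uses discriminant analysis on the behavior log rather than fixed weights.

```python
from collections import defaultdict

# Hypothetical weights for the observed behaviors (illustration only).
WEIGHTS = {"see": 1.0, "touch": 2.0, "take": 4.0}

def preference_scores(events):
    """Aggregate (item, behavior, duration_s) sensor events into per-item scores."""
    scores = defaultdict(float)
    for item, behavior, duration in events:
        scores[item] += WEIGHTS[behavior] * duration
    return dict(scores)

log = [
    ("tshirt_03", "see", 4.0),
    ("tshirt_03", "touch", 2.5),
    ("tshirt_03", "take", 6.0),
    ("tshirt_11", "see", 1.5),
]
print(preference_scores(log))  # tshirt_03 scores far higher than tshirt_11
```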
| A System to Construct an Interest Model of User Based on Information in Browsed Web Page by User | | BIBA | Full-Text | 304-313 | |
| Kosuke Kawazu; Masakazu Murao; Takeru Ohta; Masayoshi Mase; Takashi Maeno | |||
| Nowadays, computers are expected to comprehend characteristics of the user, such as interests and preferences, in order to interact with the user. In this study, we built a system that constructs an interest model of the user from the Web pages the user has browsed, by extracting words and interword relationships. In this model, metadata is attached to words and interword relationships. There are six kinds of word metadata: personal name, corporate name, site name, commodity name, product name, and location name. Metadata on interword relationships is prepared to clarify the relationships among these words. The system visualizes the model as a map and provides functions to zoom and modify this map. We showed the efficacy of the system through an evaluation experiment. | |||
| Adaptive User Interfaces for the Clothing Retail | | BIBAK | Full-Text | 314-319 | |
| Karim Khakzar; Jonas George; Rainer Blum | |||
| This paper presents the results of a research project that identifies the
most important concepts for adaptive user interfaces in the context of
e-commerce, such as online shops, and evaluates these concepts using a
formalized method and standardized criteria. As a result, recommendations for
the design of adaptive user interfaces are derived. Keywords: Adaptive User Interfaces; Concepts; Evaluation; Retail Shops | |||
| Implementing Affect Parameters in Personalized Web-Based Design | | BIBAK | Full-Text | 320-329 | |
| Zacharias Lekkas; Nikos Tsianos; Panagiotis Germanakos; Constantinos Mourlas; George Samaras | |||
| Researchers used to believe that emotional processes are beyond the scope of
scientific study. Recent advances in cognitive science and artificial
intelligence, however, suggest that there is nothing mystical about emotional
processes. Affective neuroscience and psychology have reported that human
affect and emotional experience play a significant, and useful, role in human
learning and decision making. Emotions are considered to play a central role in
guiding and regulating learning, performance, behaviour and decision making, by
modulating numerous cognitive and physiological activities. Our purpose is to
improve learning performance and, most importantly, to personalize web-content
to users' needs and preferences, eradicating known difficulties that occur in
traditional approaches. Affect parameters are implemented, by constructing a
theory that addresses emotion and is feasible in Web-learning environments. Keywords: affect; emotions; mood; disposition; regulation; personalization;
decision-making; learning | |||
| Modeling of User Interest Based on Its Interaction with a Collaborative Knowledge Management System | | BIBAK | Full-Text | 330-339 | |
| Jaime Moreno-Llorena; Xavier Alamán Roldán; Ruth Cobos Pérez | |||
| SKC is a prototype system for knowledge management on the Web by means of
semantic information, without supervision, that tries to select the knowledge
contained in the system by paying attention to how it is used. This paper
explains the analysis of user activity to determine users' interest in the
knowledge elements of the system, and the application of this interest to
classifying users and identifying knowledge of interest to them, both inside
and outside SKC. As a result, a model of user interest based on interaction is
obtained. Keywords: user interest model; user interaction; user profiling; data mining;
knowledge management; CSCW | |||
| Some Pitfalls for Developing Enculturated Conversational Agents | | BIBAK | Full-Text | 340-348 | |
| Matthias Rehm; Elisabeth André; Yukiko I. Nakano | |||
| A review of current agent-based systems exemplifies that a Western
perspective is predominant in the field. But as conversational agents focus on
rich multimodal interactive behaviors that underlie face-to-face encounters, it
is indispensable to incorporate cultural heuristics of such behaviors into the
system. In this paper we examine some of the pitfalls that arise in developing
such systems. Keywords: Embodied Conversational Agents; Cultural Heuristics; Multimodal Interaction | |||
| Comparison of Different Talking Heads in Non-Interactive Settings | | BIBAK | Full-Text | 349-357 | |
| Benjamin Weiss; Christine Kühnel; Ina Wechsung; Sebastian Möller; Sascha Fagel | |||
| Six different talking heads have been evaluated in two consecutive
experiments. Two text-to-speech components and three head components have been
used. Results from semantic differentials show a clear preference for the most
human-like and expressive head. The analysis of the semantic differentials
reveals three factors each. These factors show different patterns for the head
components. Overall quality is strongly related to one factor, which covers the
quality aspect 'appearance'. Another factor found in both experiments comprises
'human-likeness' and 'naturalness' and is much less correlated with overall
quality. While subjects have been able to clearly separate all head components
with different factors of the semantic differential, only some of these factors
are relevant for explicit quality ratings. A good appearance seems to affect
the perception of sympathy and the ascription of reliability. Keywords: talking heads; evaluation; quality aspects; smart home domain | |||
| Video Content Production Support System with Speech-Driven Embodied Entrainment Character by Speech and Hand Motion Inputs | | BIBAK | Full-Text | 358-367 | |
| Michiya Yamamoto; Kouzi Osaki; Tomio Watanabe | |||
| InterActor is a speech-input-driven CG-embodied interaction character that
can generate communicative movements and actions for entrained interaction.
InterPuppet, on the other hand, is an embodied interaction character that is
driven by both speech input, like the InterActor, and hand motion input, like a
puppet. In this study, we apply InterPuppet to video content production and
construct a system to evaluate the content production. Self-evaluation of
long-term (5-day) video content production demonstrates the effectiveness of
the developed system. Keywords: Human communication; human interaction; embodied interaction; embodied
communication; video content | |||
| Autonomous Turn-Taking Agent System Based on Behavior Model | | BIBAK | Full-Text | 368-373 | |
| Masahide Yuasa; Hiroko Tokunaga; Naoki Mukawa | |||
| In this paper, we propose a turn-taking simulation system using animated
agents. To develop our system, we analyzed eye-gaze and turn-taking behaviors
of humans during actual conversations. The system, which can generate a wide
variety of turn-taking patterns based on the analysis, will play an important
role in modeling turn-taking behaviors such as gaze, head orientation, facial
expressions and gestures. The paper describes the system
concept, its functions, and implementations. The findings obtained from
investigations using the system will contribute to development of future
conversational systems in which agents and robots communicate with users in a
lively and emotional manner. Keywords: animated agents; turn-taking; nonverbal information; conversation | |||
| An Interoperable Concept for Controlling Smart Homes -- The ASK-IT Paradigm | | BIBAK | Full-Text | 377-386 | |
| Evangelos Bekiaris; Kostas Kalogirou; Alexandros Mourouzis; Maria Panou | |||
| This paper presents an interoperable home automation infrastructure that
offers new levels of mobility, accessibility, independence, comfort, and
overall quality of life. Building on previous experience with similar systems
and on existing gaps in realizing the full potential of automated support, both at home
and on the move, new concepts and objectives are defined for R&D on smart
homes. The paper outlines the proposed integrated and holistic solution,
discusses design and development issues, provides indicative evaluation results
emerging from a case study conducted in the European ASK-IT project, and
concludes by highlighting open issues and future steps. Keywords: Smart home; Ambient assisted living; Accessibility; Infomobility | |||
| Towards Ambient Augmented Reality with Tangible Interfaces | | BIBAK | Full-Text | 387-396 | |
| Mark Billinghurst; Raphael Grasset; Hartmut Seichter; Andreas Dünser | |||
| Ambient Interface research has the goal of embedding technology that
disappears into the user's surroundings. In many ways Augmented Reality (AR)
technology is complementary to this in that AR interfaces seamlessly enhance
the real environment with virtual information overlays. The two merge in
context-aware Ambient AR applications, which allow users to easily perceive
and interact with Ambient Interfaces through AR overlays of the real world. In
this paper we describe how Tangible Interaction techniques can be used for
Ambient AR applications. We present a conceptual framework for Ambient Tangible
AR Interfaces, a new generation of software and hardware development tools, and
methods for evaluating Ambient Tangible AR Interfaces. Keywords: Augmented Reality; Ambient Interfaces; Tangible Interfaces | |||
| Rapid Prototyping of an AmI-Augmented Office Environment Demonstrator | | BIBA | Full-Text | 397-406 | |
| Dimitris Grammenos; Yannis Georgalis; Nikolaos Partarakis; Xenophon Zabulis; Thomas Sarmis; Sokratis Kartakis; Panagiotis Tourlakis; Antonis A. Argyros; Constantine Stephanidis | |||
| This paper presents the process and tangible outcomes of a rapid prototyping activity towards the creation of a demonstrator, showcasing the potential use and effect of Ambient Intelligence technologies in a typical office environment. In this context, the hardware and software components used are described, as well as the interactive behavior of the demonstrator. Additionally, some conclusions stemming from the experience gained are presented, along with pointers for future research and development work. | |||
| Challenges for User Centered Smart Environments | | BIBAK | Full-Text | 407-415 | |
| Fabian Hermann; Roland Blach; Doris Janssen; Thorsten Klein; Andreas Schuller; Dieter Spath | |||
| Future smart environments integrate information on persons, ambient
resources and objects. Many rich visions of smart environments have been
developed, and current technological and market developments promise to bring
aspects of these visions into everyday life. The paper delineates the role
of mobile and decentralized communities, semantic technologies, and virtual
reality. Key challenges for a user centered development of smart environments
are discussed, in particular the controllability of personal identity data,
reliable user interfaces for autonomous systems, and seamless interaction in
integrated virtual and physical environments. Keywords: smart environments; adaptive systems; system autonomy; mixed reality; social
software; semantic technology; digital identity; privacy; user controllability | |||
| Point and Control: The Intuitive Method to Control Multi-device with Single Remote Control | | BIBAK | Full-Text | 416-422 | |
| Sung Soo Hong; Ju Il Eom | |||
| Remote controls are used to operate most consumer electronics devices in
today's home environment. As the number of electronic devices in the home
increases, each device's corresponding remote may be added as well, and users
frequently control several devices at one time. This makes it difficult to find
the desired remote control among many other controllers. To alleviate this
inconvenience, the technique of controlling multiple electronic devices with a
single remote, well known as universal remote control, has attracted attention.
Generally, when using a universal remote control, the user must input the key
code of the desired device; if she controls several devices interchangeably,
she may end up entering key codes for one device after another. This kind of
maneuver can be very tiresome and drastically reduces usability. This paper
proposes the hardware and software structure of Point and Control (PAC), which
uses the metaphor of pointing at a target to select the device the user intends
to control. With PAC, users can easily select and control the target device
among many candidates in real time with a simple gesture. Keywords: Remote control; Universal Remote control; IR LED; IR Image Sensor; Point and
Control; PAC; Multi device Control; Concurrent Control | |||
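The select-by-pointing flow can be pictured as follows: when the remote is pointed at a device, its sensor reads a device-specific beacon, the remote latches that device as the current target, and subsequent button presses are forwarded with that device's key code. All identifiers and codes below are hypothetical; this is not the PAC hardware protocol.

```python
# Illustrative select-by-pointing dispatcher (hypothetical IDs and key codes).
BEACON_TO_DEVICE = {0x1A: "tv", 0x2B: "dvd_player", 0x3C: "air_conditioner"}
DEVICE_KEY_CODES = {"tv": 0x20DF, "dvd_player": 0x31A0, "air_conditioner": 0x48B7}

class PointAndControlRemote:
    def __init__(self):
        self.target = None

    def on_beacon_detected(self, beacon_id: int):
        """Called when the remote's sensor sees a device beacon while pointing."""
        self.target = BEACON_TO_DEVICE.get(beacon_id, self.target)

    def on_button(self, button: str):
        """Forward the pressed button to the currently pointed-at device."""
        if self.target is None:
            return None
        return (DEVICE_KEY_CODES[self.target], button)  # would be emitted over IR

remote = PointAndControlRemote()
remote.on_beacon_detected(0x1A)        # user points at the TV
print(remote.on_button("volume_up"))   # -> (8415, 'volume_up'), i.e. the TV's key code
```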
| New Integrated Framework for Video Based Moving Object Tracking | | BIBA | Full-Text | 423-432 | |
| Md. Zahidul Islam; Chi-Min Oh; Chil-Woo Lee | |||
| In this paper, we describe a novel approach that improves a moving object tracking system based on a particle filter by combining shape similarity and color histogram matching in a new integrated framework. The shape similarity between a template and candidate regions in the video sequence is measured by the normalized cross-correlation of their distance-transform image maps. The observation model of the particle filter is based on shape from distance-transformed edge features combined with color information. The target object to be tracked forms the reference color window, whose histogram is calculated and used to compute the histogram distance during a deterministic search for the matching window. For both shape and color matching, the reference template window is created instantly by selecting any object in a video scene and is updated in every frame. Experimental results demonstrate the effectiveness of the proposed method. | |||
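A combined shape-and-color observation likelihood of this kind is often written as the product of a shape term and a color term, for instance exponentials of the shape-matching error and of the Bhattacharyya distance between color histograms. The functional form and coefficients below are illustrative assumptions consistent with common particle-filter practice, not the paper's exact formulation.

```python
import numpy as np

def bhattacharyya_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """Bhattacharyya distance between two normalized color histograms."""
    bc = np.sum(np.sqrt(h1 * h2))
    return float(np.sqrt(max(0.0, 1.0 - bc)))

def particle_weight(shape_error: float, hist_candidate: np.ndarray,
                    hist_reference: np.ndarray,
                    lambda_shape: float = 10.0, lambda_color: float = 20.0) -> float:
    """Combine shape and color cues into a single particle weight.

    shape_error: matching error of the candidate region against the template
                 (e.g., 1 - normalized cross-correlation of distance-transform maps).
    """
    d_color = bhattacharyya_distance(hist_candidate, hist_reference)
    return float(np.exp(-lambda_shape * shape_error) * np.exp(-lambda_color * d_color))

ref = np.array([0.5, 0.3, 0.2])                 # reference color histogram (normalized)
good = particle_weight(0.05, np.array([0.48, 0.32, 0.20]), ref)
bad = particle_weight(0.40, np.array([0.10, 0.10, 0.80]), ref)
print(good > bad)  # True: the well-matching particle gets the larger weight
```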
| Object Scanning Using a Sensor Frame | | BIBAK | Full-Text | 433-439 | |
| Soonmook Jeong; Taehoun Song; Gihoon Go; Key Ho Kwon; Jae Wook Jeon | |||
| This paper focuses on object scanning using sensors. The objects are
articles in daily use. Everyday objects, such as cups, bottles and vessels are
good models to scan. The sensor scan represents the objects as 3D images on the
computer monitor. Our research proposes a new device to scan real world
objects. The device is a square frame, similar to a picture frame, which is
empty except for the frame itself. Infrared sensors are arranged on the frame;
they detect the object and extract coordinates from it. These coordinates are
transmitted to the computer, where a 3D-creation algorithm renders them as a 3D
image. The operating principle is simple, similar to scanning a person at a
checkpoint: the user passes the object through the sensor frame, and the
corresponding 3D image of the real object is created. Thus, the user can easily
obtain a 3D image of the object. This approach uses low-cost infrared sensors
rather than a
high-cost sensor, such as a laser. Keywords: Sensor frame; 3D image; Scanning; Infrared sensor | |||
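The reconstruction idea, collecting one cross-section each time the object advances through the frame and stacking the slices along the motion axis, can be sketched as below, assuming each pass yields the (x, y) coordinates where the infrared sensors detect the object. This is an editorial sketch, not the paper's 3D-creation algorithm.

```python
import numpy as np

def stack_slices(slices, step: float = 1.0) -> np.ndarray:
    """Stack 2D cross-sections into a 3D point cloud.

    slices: list of (N_i, 2) arrays of (x, y) detections from the sensor frame,
            one per position as the object is passed through the frame.
    step:   distance the object advances between consecutive slices (arbitrary units).
    Returns an (M, 3) array of (x, y, z) points.
    """
    points = []
    for i, slice_xy in enumerate(slices):
        z = np.full((len(slice_xy), 1), i * step)   # depth of this slice along the pass
        points.append(np.hstack([np.asarray(slice_xy, dtype=float), z]))
    return np.vstack(points)

# Toy example: a cylinder-like object detected as a ring of 8 points per slice.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = np.column_stack([np.cos(angles), np.sin(angles)])
cloud = stack_slices([ring] * 5, step=2.0)
print(cloud.shape)  # (40, 3)
```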
| Mixed Realities -- Virtual Object Lessons | | BIBAK | Full-Text | 440-445 | |
| Andreas Kratky | |||
| The question of how to design and implement efficient virtual classroom
environments gains a new quality in the light of extensive digital education
projects such as the One Laptop Per Child (OLPC) initiative. At the core of
this consideration is not only the task of developing content for very
different cultural settings but also the necessity to reflect the effects of
learning processes that operate exclusively with digitally mediated content.
This paper traces the design of the project Venture to the Interior, an
interactive experience that presents selected objects from the collections of
the Museum of Natural History in Berlin and reflects them as building blocks
for the Enlightenment-idea of a building of knowledge. The project investigates
the role of objects as knowledge devices and the possibilities for a
translation of the didactic effects of experiential learning into virtual
environments. Keywords: Virtual Museums; Virtual Reality; Mixed Reality; Virtual Classroom; Distance
Learning; Photorealism | |||
| New Human-Computer Interactions Using Tangible Objects: Application on a Digital Tabletop with RFID Technology | | BIBAK | Full-Text | 446-455 | |
| Sébastien Kubicki; Sophie Lepreux; Yoann Lebrun; Philippe Dos Santos; Christophe Kolski; Jean Caelen | |||
| This paper presents a new kind of interaction between users and a tabletop.
The table described is interactive and associated with tangible and traceable
objects using RFID technology. As a consequence, some Human-Computer
Interactions become possible involving these tangible objects. The multi-agent
architecture of the table is also explained, as well as a case study based on a
scenario. Keywords: Human-Computer Interaction; RFID; tabletop; tangible objects; Multi-Agent
System | |||
| Context-Aware Cognitive Agent Architecture for Ambient User Interfaces | | BIBAK | Full-Text | 456-463 | |
| Youngho Lee; Choonsung Shin; Woontack Woo | |||
| An ambient user interface is a set of hidden intelligent interfaces that
recognize the user's presence and provide services for immediate needs. There
are several research activities on user interfaces and interactions that
combine VR/AR, ubiquitous computing/ambient interfaces, and artificial intelligence.
However, real-time and intelligent responses of user interfaces are still
challenging problems. In this paper, we introduce the design of Context-aware
Cognitive Agent Architecture (CCAA) for real-time and intelligent responses of
ambient user interfaces in ubiquitous virtual reality, and discuss possible
scenarios for realizing ambient interfaces. CCAA applies a vertically layered
two-pass agent architecture with three layers. The three layers are AR
(augmented reality) layer, CA (context-aware) layer, and AI layer. The two
passes interconnect the layers as an input or output. One of the passes of each
layer is an input path from a lower layer or environmental sensors describing a
situation. The other pass is an output path that delivers a set of appropriate
actions based on the understanding of the situation. This architecture enables
users to interact with ambient smart objects through an ambient user interface
with varying degrees of intelligence by exploiting context
and AI techniques. Based on the architecture, several possible scenarios about
recognition problems and higher level intelligent services for ambient
interaction are suggested. Keywords: Ambient user interface; ubiquitous virtual reality; context-awareness;
augmented reality | |||
| An Embodied Approach for Engaged Interaction in Ubiquitous Computing | | BIBAK | Full-Text | 464-472 | |
| Mark O. Millard; Firat Soylu | |||
| A particular vision of ubiquitous computing is offered to contribute to the
burgeoning, dominant interaction paradigm in human-computer interaction (HCI).
An engaged vision of ubiquitous computing (UbiComp) can take advantage of
natural human abilities and tendencies for interaction. The HCI literature is
reviewed to provide a brief overview of promising interaction styles and
paradigms in order to situate them within ubiquitous computing. Embodied
interaction is introduced as a key theoretical framework for moving UbiComp
forward as an engaged interaction paradigm. Keywords: ubiquitous computing; HCI; embodied interaction; tangible interaction | |||
| Generic Framework for Transforming Everyday Objects into Interactive Surfaces | | BIBAK | Full-Text | 473-482 | |
| Elena Mugellini; Omar Abou Khaled; Stéphane Pierroz; Stefano Carrino; Houda Chabbi Drissi | |||
| According to Mark Weiser, smart environments are physical worlds that are
richly and invisibly interwoven with sensors, actuators, displays, and
computational elements, embedded seamlessly in the everyday objects of our
lives. At present, however, turning everyday objects into interactive ones is a
very challenging issue, and this limits their widespread diffusion. In order to
address this issue we propose a framework for turning everyday objects, such as
a table or a mirror, into interactive surfaces that allow users to access and
manipulate digital information. The framework integrates several interaction
technologies, such as electromagnetic, acoustic and optical ones, and supports
rapid prototype development. Two prototypes, an interactive table and an
interactive tray, have been developed using the toolkit to validate the
proposed approach. Keywords: human-computer interaction; interactive surfaces; RFID; electromagnetic;
acoustic | |||
| mæve -- An Interactive Tabletop Installation for Exploring Background Information in Exhibitions | | BIBAK | Full-Text | 483-491 | |
| Till Nagel; Larissa Pschetz; Moritz Stefaner; Matina Halkia; Boris Müller | |||
| This paper introduces the installation mæve: a novel approach to
present background information in exhibitions in a highly interactive, tangible
and sociable manner. Visitors can collect paper cards representing the exhibits
and put them on an interactive surface to display associated concepts and
relations to other works. As a result, users can explore both the unifying
themes of the exhibition as well as individual characteristics of exhibits. On
basis of metadata schemata developed in the MACE (Metadata for Architectural
Contents in Europe) project, the system was put to use at the Architecture
Biennale to display the entries to the Everyville student competition. Keywords: Metadata; visualization; concept networks; tangible interface; exhibition;
user experience | |||
| Relationality Design toward Enriched Communications | | BIBA | Full-Text | 492-500 | |
| Yukiko I. Nakano; Masao Morizane; Ivan Tanev; Katsunori Shimohara | |||
| We have been conducting research on how to design relationality in complex systems composed of intelligent tangible or intangible artificial artifacts, using evolutionary computation and network science as methodologies. This paper describes the research concept, methodologies, and issues of relationality design. As one line of research on relationality, we investigate here the significance of the linkage between the real world and a virtual world in a learning system. | |||
| Ultra Compact Laser Based Projectors and Imagers | | BIBAK | Full-Text | 501-510 | |
| Harald Schenk; Thilo Sandner; Christian Drabe; Michael Scholles; Klaus Frommhagen; Christian Gerwig; Hubert Lakner | |||
| 2D micro scanning mirrors are presented which make use of a degressive
spring allowing to achieve an optical scan range of up to 112° x 84°,
optically. The scanning mirrors are deployed for highly miniaturized monochrome
and full color projectors as well as for laser imagers. The projectors allow
for projection with VGA resolution at 50 Hz frame rate. The laser imager
supports full color SVGA resolution at 30 Hz frame rate. Both the projector
and the imager are based on a single 2D scanner chip and thus could be combined
in a single ultra compact system for simultaneous imaging and projection with
high depth of focus. Keywords: scanner; projection; imager; MEMS; micro scanning mirror | |||
| Understanding the Older User of Ambient Technologies | | BIBAK | Full-Text | 511-519 | |
| Andrew Sixsmith | |||
| This paper reports on the user-driven research and development (R&D)
approach adopted with the EU-funded SOPRANO project
(http://www.soprano-ip.org/) to develop an "ambient assisted living" (AAL)
system to enhance the lives of frail and disabled older people. The paper
describes the conceptual framework and methods used within SOPRANO and briefly
presents some of the results from requirements capture, use case development
and initial prototype development. The focus of the research is on
understanding the potential user of the SOPRANO AAL system using a holistic
ecological model of person and context and using methods that aimed to explore
different experiential "realities". The results demonstrate the usefulness of
the approach for involving users in all stages of R&D and in generating and
evaluating ideas for prototype development. Keywords: Ambient assisted living; older people; user-driven research | |||
| Multi-pointing Method Using a Desk Lamp and Single Camera for Effective Human-Computer Interaction | | BIBAK | Full-Text | 520-525 | |
| Taehoun Song; Thien Cong Pham; Soonmook Jung; Ji-Hwan Park; Key Ho Kwon; Jae Wook Jeon | |||
| Multi-pointing has become an important research interest, and is used in
many computer applications to allow users to interact effectively with a
program. Multi-pointing is used as an input method, and can also be fun and
very user-friendly. However, in order to use the method, a complex and
expensive hardware configuration is required. This paper presents a new and low
cost method of multi-pointing based on a simple hardware configuration. Our
method uses dual hand recognition, a table lamp, and a single CMOS camera. The
table lamp provides a steady illumination environment for image processing, and
the CMOS camera is mounted to maintain good stability. A single camera is used
for dual hand recognition to achieve multi-pointing. Therefore, image
processing does not require intensive computing, which allows us to use a
stand-alone system (including a 32-bit RISC processor). The results of the
proposed method show that effective navigation of applications such as
Google Earth or Google Maps can be achieved. Keywords: Multi-Pointing; Hand Recognition; Human-Computer Interaction | |||
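Illustrative sketch only: the abstract above does not include code, but a single-camera, two-blob pointer detector of the general kind it describes could look like the following Python/OpenCV fragment. The color thresholds, camera index, and function names are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def find_pointers(frame_bgr, max_pointers=2):
    """Return up to `max_pointers` (x, y) centroids of the largest
    skin-colored blobs in a frame (hypothetical thresholds)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range in HSV; would need tuning for the lamp used.
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))
    mask = cv2.medianBlur(mask, 5)
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    pointers = []
    for c in contours[:max_pointers]:
        m = cv2.moments(c)
        if m["m00"] > 0:
            pointers.append((int(m["m10"] / m["m00"]),
                             int(m["m01"] / m["m00"])))
    return pointers

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # single CMOS camera (index assumed)
    ok, frame = cap.read()
    if ok:
        print(find_pointers(frame))    # e.g. two (x, y) pointer positions
    cap.release()
```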
| Communication Grill/Salon: Hybrid Physical/Digital Artifacts for Stimulating Spontaneous Real World Communication | | BIBAK | Full-Text | 526-535 | |
| Koh Sueda; Koji Ishii; Takashi Miyaki; Jun Rekimoto | |||
| One of the problems encountered in face-to-face communication involves
conversational imbalances among the participants caused by differences in
conversational interests and social positions. It is common for us not to be
able to communicate well with an unfamiliar person. On the other hand, old
customs in the real world, such as the Japanese tea ceremony, effectively use
physical artifacts to enable smoother conversation. In this project, we
designed two communication systems that facilitate casual communication using
physical/digital artifacts, such as a meal and text-chat, in order to clarify
that real world communication can be supported by digital technology. The first
system, called the "Communication Grill," connects a grill for cooking meat to
a chat system. The grill is heated by the chatting activity. Thus, people must
continue conversing to roast the meat. The second system is called the
"Communication Salon." It is a computer-enhanced tea ceremony with a chat
screen displayed at a tearoom. Using these systems, we conducted user
evaluations at SIGGRAPH and other open events. Based on the chat logs at these
events, we found that conversational topics gradually shifted from topics about
the systems to more general topics. An analysis of these chat logs revealed
that the participants began to communicate spontaneously using this system. Keywords: Augmented reality; Chat; Chat-augmented meal; merging virtual and real;
Communication Grill/Salon | |||
| Motion Capture System Using an Optical Resolver | | BIBAK | Full-Text | 536-543 | |
| Takuji Tokiwa; Masashi Yoshidzumi; Hideaki Nii; Maki Sugimoto; Masahiko Inami | |||
In this paper, we present a novel position measurement method that makes use
of a pair of planar light sources, created from IR-LED matrix arrays, and a
photo-detector. The light sources emit light at the same frequency but with
different phases, and their optical axes are set up orthogonally. As a result,
the combined signal is distributed through the space with a position-dependent
phase difference. Finally, the signal received by the
photo-detector is analyzed to determine the position. Keywords: Motion Capture; Position Detection | |||
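Illustrative sketch only: one standard way to recover a position-dependent phase from a single photo-detector is synchronous (quadrature) demodulation, shown below in Python. The carrier frequency, sampling rate, and the final phase-to-position calibration are hypothetical and are not taken from the paper.

```python
import numpy as np

def decode_phase(samples, fs_hz, f_carrier_hz):
    """Estimate the phase of a sampled photo-detector signal by
    projecting it onto quadrature references (synchronous detection)."""
    t = np.arange(len(samples)) / fs_hz
    i_ref = np.sin(2 * np.pi * f_carrier_hz * t)   # in-phase reference
    q_ref = np.cos(2 * np.pi * f_carrier_hz * t)   # quadrature reference
    i_amp = 2 * np.mean(samples * i_ref)
    q_amp = 2 * np.mean(samples * q_ref)
    return np.arctan2(q_amp, i_amp)                # phase in radians

# Hypothetical test: a 1 kHz signal sampled at 50 kHz with a known phase.
fs, fc, true_phase = 50_000, 1_000, 0.6
t = np.arange(5_000) / fs
signal = np.sin(2 * np.pi * fc * t + true_phase)
print(decode_phase(signal, fs, fc))  # ~0.6; a calibration would map phase to position
```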
| The Effects of an Anti-glare Sleeve Installed on Fluorescent Tube Lamps on Glare and Reading Comfort | | BIBAK | Full-Text | 544-553 | |
| Shiaw-Tsyr Uang; Cheng-Li Liu; Mali Chang | |||
| Our previous study has demonstrated the benefits of a reflective sleeve to
redirect lighting and to enhance luminous intensity of fluorescent tube lamps
in certain light projecting angles. A reflective sleeve is composed of a
plastic reflector and a transparent refractor. However, the intensive
centralized lighting may increase the possibilities of producing glare. In this
study, the transparent refractor of the sleeve is replaced with a diffuser to
compose an anti-glare sleeve. This study adopts measurement, optical software
simulation, and experiment methods to investigate the effects of an anti-glare
sleeve on redirecting lighting and reducing glare. The results demonstrate that
the luminous intensity of a fluorescent tube lamp towards viewed objects increases
after adopting an anti-glare sleeve. In addition, software simulation indicates
that an anti-glare sleeve increases light uniformity and reduces glare. The
subjective evaluation also shows that fluorescent tube lamps with anti-glare
sleeves produce less light reflection on various papers and allow more comfortable
reading. Keywords: Glare; Reading comfort; Fluorescent tube lamp; Lamp sleeve | |||
| Electromyography Focused on Passiveness and Activeness in Embodied Interaction: Toward a Novel Interface for Co-creating Expressive Body Movement | | BIBAK | Full-Text | 554-562 | |
| Takabumi Watanabe; Norikazu Matsushima; Ryutaro Seto; Hiroko Nishi; Yoshiyuki Miwa | |||
| In expressive body movement created by one person and his/her partner, a
sense of nonseparation, as if one's body and his/her partner's body are united,
is experienced. For such a relationship between the two, a process to feel
passiveness and activeness physically is important. The objective of this study
is to capture passiveness and activeness in bodily interaction. We focused on
myoelectric (ME) potentials, whose generation timing and amplitude differ
between voluntary and reactive movements. A measurement system using ME
potentials in bodily interaction was developed, and the technique was validated with our data. Keywords: embodied interaction; expressive body movement; passiveness and activeness;
surface EMG | |||
| An Integrated Approach to Emotion Recognition for Advanced Emotional Intelligence | | BIBAK | Full-Text | 565-574 | |
| Panagiotis D. Bamidis; Christos A. Frantzidis; Evdokimos I. Konstantinidis; Andrej Luneski; Chrysa D. Lithari; Manousos A. Klados; Charalampos Bratsas; Christos L. Papadelis; Costas Pappas | |||
| Emotion identification is beginning to be considered as an essential feature
in human-computer interaction. However, most of the studies are mainly focused
on facial expression classifications and speech recognition and not much
attention has been paid until recently to physiological pattern recognition. In
this paper, an integrative approach to emotional interaction is proposed,
fusing multi-modal signals. Subjects are exposed to pictures selected from the
International Affective Picture System (IAPS). A feature extraction procedure
is used to discriminate between four affective states by means of a Mahalanobis
distance classifier. The average classification rate (74.11%) was encouraging.
Thus, the induced affective state is mirrored through an avatar by changing its
facial characteristics and generating a voice message sympathising with the
user's mood. It is argued that multi-physiological patterning in combination
with anthropomorphic avatars may contribute to the enhancement of affective
multi-modal interfaces and the advancement of machine emotional intelligence. Keywords: Emotion; Affective Computing; EEG; Skin Conductance; Avatar; Mahalanobis;
classifier | |||
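Illustrative sketch only: the abstract names a Mahalanobis distance classifier over four affective states; a minimal, generic version of that classification step is sketched below in Python. The feature dimensionality, class labels, and covariance regularization are assumptions, not the authors' code.

```python
import numpy as np

class MahalanobisClassifier:
    """Assigns a feature vector to the class whose mean is closest
    in Mahalanobis distance (per-class covariance)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_, self.inv_covs_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.means_[c] = Xc.mean(axis=0)
            # Regularize to keep the covariance invertible for small samples.
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.inv_covs_[c] = np.linalg.inv(cov)
        return self

    def predict(self, X):
        preds = []
        for x in np.atleast_2d(X):
            diff = {c: x - self.means_[c] for c in self.classes_}
            dist = {c: diff[c] @ self.inv_covs_[c] @ diff[c]
                    for c in self.classes_}
            preds.append(min(dist, key=dist.get))
        return np.array(preds)

# Hypothetical usage: rows are fused physiological feature vectors,
# labels are four affective states (names assumed for illustration).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 5))
    y = np.repeat(["calm", "joy", "fear", "sad"], 20)
    clf = MahalanobisClassifier().fit(X, y)
    print(clf.predict(X[:3]))
```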
| Addressing the Interplay of Culture and Affect in HCI: An Ontological Approach | | BIBAK | Full-Text | 575-584 | |
| Emmanuel G. Blanchard; Riichiro Mizoguchi; Susanne P. Lajoie | |||
| Culture and affect are closely tied domains that have been considered
separately in HCI until now. After a careful review of research done in each of
those domains, we apply a formal ontology engineering approach to identify and
structure useful concepts for considering their interplay. Keywords: Affective Computing; Cultural Computing; Ontology Engineering; Awareness;
Adaptation | |||
| Love at First Encounter -- Start-Up of New Applications | | BIBAK | Full-Text | 585-594 | |
| Henning Breuer; Marlene Kettner; Matthias Wagler; Nathalie Preuschen; Fee Steinhoff | |||
Whereas most research on usability focuses on known applications, we explore
first encounters. When starting up new applications, expectancy, impression
management, initial dialogues and acquaintance, and ritualizing operations have
to be handled. We present the research approach and document short histories of
learning and fascination. Focussing on business users of mobile services we
conducted diary research and expert interviews, reviewed design guidelines, and
conducted a pattern-driven and resource-oriented innovation workshop. We
present insights and results from the synthesis of guidelines, and ideas
translated into experience prototypes. Keywords: Start-up; seven touchpoints; learnability; service innovation; creativity;
diary research; experience design | |||
| Responding to Learners' Cognitive-Affective States with Supportive and Shakeup Dialogues | | BIBAK | Full-Text | 595-604 | |
| Sidney K. D'Mello; Scotty D. Craig; Karl Fike; Arthur C. Graesser | |||
| This paper describes two affect-sensitive variants of an existing
intelligent tutoring system called AutoTutor. The new versions of AutoTutor
detect learners' boredom, confusion, and frustration by monitoring
conversational cues, gross body language, and facial features. The sensed
cognitive-affective states are used to select AutoTutor's pedagogical and
motivational dialogue moves and to drive the behavior of an embodied
pedagogical agent that expresses emotions through verbal content, facial
expressions, and affective speech. The first version, called the Supportive
AutoTutor, addresses the presence of the negative states by providing
empathetic and encouraging responses. The Supportive AutoTutor attributes the
source of the learners' emotions to the material or itself, but never directly
to the learner. In contrast, the second version, called the Shakeup AutoTutor,
takes students to task by directly attributing the source of the emotions to
the learners themselves and responding with witty, skeptical, and enthusiastic
responses. This paper provides an overview of our theoretical framework, and
the design of the Supportive and Shakeup tutors. Keywords: affect; emotion; affect-sensitive AutoTutor; ITS | |||
| Trust in Online Technology: Towards Practical Guidelines Based on Experimentally Verified Theory | | BIBAK | Full-Text | 605-614 | |
| Christian Detweiler; Joost Broekens | |||
| A large amount of research attempts to define trust, yet relatively little
research attempts to experimentally verify what makes trust needed in
interactions with humans and technology. In this paper we identify the
underlying elements of trust-requiring situations: (a) goals that involve
dependence on another, (b) a perceived lack of control over the other, (c)
uncertainty regarding the ability of the other, and (d) uncertainty regarding
the benevolence of the other. Then, we propose a model of the interaction of
these elements. We argue that this model can explain why certain situations
require trust. To test the applicability of the proposed model to an instance
of human-technology interaction, we constructed a website which required
subjects to depend on an intelligent software agent to accomplish a task. A
strong correlation was found between subjects' level of trust in the software
and the ability they perceived the software as having. Strong negative
correlations were found between perceived risk and perceived ability, and
between perceived risk and trust. Keywords: Trust; user modeling; empirical research | |||
| Influence of User Experience on Affectiveness | | BIBAK | Full-Text | 615-620 | |
| Ryoko Fukuda | |||
Affectiveness is frequently discussed based on the first impression of a
product's appearance. However, experience in using that product can also
influence affectiveness. In order to clarify the influence of user experience
on affectiveness, user perception of products should be investigated at several
phases of product use. In this paper, two experiments are presented: one
compared user perception before and after using products, and the other
investigated user perception during repeated use of products. The results
suggested that user experience can affect affectiveness in several forms. Keywords: user experience; affectiveness; attachment | |||
| A Human-Centered Model for Detecting Technology Engagement | | BIBA | Full-Text | 621-630 | |
| James Glasnapp; Oliver Brdiczka | |||
| This paper proposes a human-centered engagement model for developing interactive media technology. The human-centered engagement model builds on previous interaction models for publicly located ambient displays. It is designed from ethnographic observation with the aim of informing technological innovation from the perspective of the user. The model will be presented along with technological mechanisms to detect human behavior with the aim of responsive media technology development. | |||
| Relationship Learning Software: Design and Assessment | | BIBAK | Full-Text | 631-640 | |
| Kyla A. McMullen; Gregory H. Wakefield | |||
| Interface designers have been studying how to construct graphical user
interfaces (GUIs) for a number of years; however, adults are often the main
focus of these studies. Children constitute a unique user group, making it
necessary to design software specifically for them. For this study, several
interface design frameworks were combined to synthesize a framework for
designing educational software for children. Two types of learning,
relationships and categories, are the focus of the present study because of
their importance in early-child learning as well as standardized testing. For
this study the educational game Melo's World was created as an experimental
platform. The experiments assessed the performance differences found when
including or excluding subsets of interface design features, specifically
aesthetic and behavioral features. Software that contains aesthetic but lacks
behavioral features was found to have the greatest positive impact on a
child's learning of thematic relationships. Keywords: human computer interaction; educational technology; interactive systems
design; user interface design | |||
| Relationship Enhancer: Interactive Recipe in Kitchen Island | | BIBAK | Full-Text | 641-650 | |
| Tsai-Yun Mou; Tay-Sheng Jeng; Chun-Heng Ho | |||
HCI research on the kitchen has focused on creating new devices to
facilitate cooking tasks and eliminate mistakes. However, the kitchen is also a
place where family and friends create meaning and memories. Therefore, we
developed an interactive recipe embedded in a kitchen island that aims to enhance social
bonds and pleasure among people. The system utilizes tangible interaction for
creative recipes and keeps records of people's favorite foods. Groups of family
and friends participated in the study. The results indicate that there was
a cognition gap in people's understanding of each other's food preferences.
Participants agreed that the interactive recipe increased communication when
preparing food for others. Regarding creativity, the number of new dishes did
not increase through collaboration, but people showed more creative
dish ideas. On the other hand, individuals developed more dish variations but
more ordinary recipe designs. Keywords: HCI; recipe; communication; creativity; social interaction | |||
| ConvoCons: Encouraging Affinity on Multitouch Interfaces | | BIBAK | Full-Text | 651-659 | |
| Michael A. Oren; Stephen B. Gilbert | |||
| This paper describes the design of ConvoCons, a system to promote affinity
of group members working in a co-located multitouch environment. The research
includes an exploratory study that led to the development of ConvoCons as well
as the iterative evolution of the ConvoCon system, design trade-offs made, and
empirical observations of users that led to design changes. This research adds
to the literature on social interaction design and offers interface designers
guidance on promoting affinity and increased collaboration via the user
interface. Keywords: Multitouch; affinity; table computing; collaboration; virtual assembly;
creativity support | |||
| Development of an Emotional Interface for Sustainable Water Consumption in the Home | | BIBAK | Full-Text | 660-669 | |
| Mehdi Ravandi; Jon Mok; Mark H. Chignell | |||
| The design of an application to monitor, analyze and report individual water
consumption within a household is introduced. An interface design incorporating
just-in-time feedback, positive and negative reinforcement, ecological
contextualization, and social validation is used to promote behavior change.
Reducing water consumption behavior in the shower is targeted, as it is the
leading source of discretionary indoor water use in a typical home. In both
in-shower and out-of-shower scenarios, interface designs aim to address user
needs for information, context, control, reward, and convenience to reduce
water consumption. Keywords: Emotional design; Water Conservation; Home; Shower; Sustainability | |||
| Influences of Telops on Television Audiences' Interpretation | | BIBA | Full-Text | 670-678 | |
| Hidetsugu Suto; Hiroshi Kawakami; Osamu Katai | |||
| The influence of text information, known as "telops," on the viewers of television programs is discussed. In recent television programs, textual information, i.e., captions and subtitles, is abundant. Production of a television program is facilitated by using telops, and therefore, the main reason for using this information is the producers' convenience. However, the effect on audiences cannot be disregarded when thinking about the influence of media on humans' lives. In this paper, channel theory and situation theory are introduced, and channel theory is expanded in order to represent the mental states and attitudes of an audience. Furthermore, the influence of telops is considered by using a scene of a quiz show as an example. Some assumptions are proposed based on the considerations, and experiments are carried out in order to verify the assumptions. | |||
| Extracting High-Order Aesthetic and Affective Components from Composer's Writings | | BIBAK | Full-Text | 679-682 | |
| Akifumi Tokosumi; Hajime Murai | |||
A digital humanities technique for the network analysis of words within a text
is applied to capture the subtle and sensitive contents of essays written by a
contemporary composer of classical music. Based on analysis findings, the
possible contributions of digital humanities to affective technology are
discussed. This paper also provides a systematic view of digital humanities and
affective technology. Keywords: high-order cognition; emotion; music; art; network analysis; digital
humanities | |||
| Affective Technology, Affective Management, towards Affective Society | | BIBAK | Full-Text | 683-692 | |
| Hiroyuki Umemuro | |||
| In this paper, the term affective is defined as "being capable to evoke
affects in people's mind" or "being capable to deliberate affects to be evoked
in people's mind". This paper discusses the potential impact of the concept of
affectiveness on the development of technological products and services,
management, and the value systems of societies. Keywords: Affect; emotion; feeling; management; mood; quality; usability | |||
| Bio-sensing for Emotional Characterization without Word Labels | | BIBAK | Full-Text | 693-702 | |
| Tessa Verhoef; Christine L. Lisetti; Armando Barreto; Francisco Ortega; Tijn van der Zant; Fokie Cnossen | |||
| In this article, we address some of the issues concerning emotion
recognition from processing physiological signals captured by bio-sensors. We
discuss some of our preliminary results, and propose future directions for
emotion recognition based on our lessons learned. Keywords: Emotion Recognition; Affective Computing; Bio-sensing | |||
| An Affect-Sensitive Social Interaction Paradigm Utilizing Virtual Reality Environments for Autism Intervention | | BIBAK | Full-Text | 703-712 | |
| Karla Conn Welch; Uttama Lahiri; Changchun Liu; Rebecca Weller; Nilanjan Sarkar; Zachary Warren | |||
| This paper describes the design and development of both software to create
social interaction modules on a virtual reality (VR) platform and
individualized affective models for affect recognition of children with autism
spectrum disorders (ASD), which includes developing tasks for affect
elicitation and using machine-learning mathematical tools for reliable affect
recognition. A VR system will be formulated that can present realistic social
communication tasks to the children with ASD and can monitor their affective
response using physiological signals, such as cardiovascular activities
including electrocardiogram, impedance cardiogram, photoplethysmogram, and
phonocardiogram; electrodermal activities including tonic and phasic responses
from galvanic skin response; electromyogram activities from corrugator
supercilii, zygomaticus major, and upper trapezius muscles; and peripheral
temperature. This affect-sensitive system will be capable of systematically
manipulating aspects of social communication to more fully understand its
salient components for children with ASD. Keywords: Human-computer interaction; Physiological responses; Virtual Reality;
Autism; Affective model | |||
| Recognizing and Responding to Student Affect | | BIBAK | Full-Text | 713-722 | |
| Beverly Park Woolf; Toby Dragon; Ivon Arroyo; David G. Cooper; Winslow Burleson; Kasia Muldner | |||
| This paper describes the use of wireless sensors to recognize student
emotion and the use of pedagogical agents to respond to students with these
emotions. Minimally invasive sensor technology has reached such a maturity
level that students engaged in classroom work can use sensors while using a
computer-based tutor. The sensors, located on each of 25 students' chairs,
mice, monitors, and wrists, provide data about posture, movement, grip tension,
facially expressed mental states and arousal. This data has demonstrated that
intelligent tutoring systems can provide adaptive feedback based on an
individual student's affective state. We also describe the evaluation of
emotional embodied animated pedagogical agents and their impact on student
motivation and achievement. Empirical studies show that students using the
agents increased their math value, self-concept and mastery orientation. Keywords: intelligent tutoring systems; wireless sensors; student emotion; pedagogical
agents | |||
| Usability Studies on Sensor Smart Clothing | | BIBAK | Full-Text | 725-730 | |
| Haeng-Suk Chae; Woon Jung Cho; Soo Hyun Kim; Kwang-Hee Han | |||
This paper presents an approach to usability evaluation of sensor smart
clothing in which the methodology is divided into two categories: 1)
usability evaluation that gathers data from actual users of sensor smart
clothing, and 2) investigation of the weight values calculated for each evaluation
item. The results of the usability evaluation show that the SC (sensor controller)
influences the overall usability of sensor smart clothing. The effective items and
modules are social acceptance of the SC, wearability of the GC (general connector) &
PA (platform appearance), usefulness of the GC & PA, and maintenance (400) of the PA
& SC. To evaluate the sensor smart clothing, a task process was applied and
the components of the user response were investigated. This study was
performed to determine the effects of the properties of sensor smart clothing. Our
study suggests that usability evaluation may be important within the design process
of sensor smart clothing. Keywords: Smart Clothing; Usability; Evaluation; Sensor; Wearable Computing;
Wearability | |||
| Considering Personal Profiles for Comfortable and Efficient Interactions with Smart Clothes | | BIBAK | Full-Text | 731-740 | |
| Sébastien Duval; Christian Hoareau; Gilsoo Cho | |||
| Profiles describing the abilities and specificities of individual wearers
enable smart clothes to fundamentally and continuously personalize their
behavior, suggesting or selecting useful, comfortable and efficient services
and interaction modes. First, we suggest foundations for the design of personal
profiles for the general public based on perception, bodily characteristics,
culture, language, memory, and spatial abilities. Then, we sketch reactions
towards profiles for oneself and one's family based on a 2008 pilot study in
Japan. Accordingly, we discuss the creation, update, use and dissemination of
profiles, and finally perspectives for future social investigations. Keywords: General public; Interaction; Smart clothes; Sociology; Ubiquitous computing;
Personal Profile; User profile | |||
| Interaction Wearable Computer with Networked Virtual Environment | | BIBAK | Full-Text | 741-751 | |
| Jiung-yao Huang; Ming-Chih Tung; Huan-Chao Keh; Ji-Jen Wu; Kun-Hang Lee; Chung-Hsien Tsai | |||
| The goal of this research is to propose a technique to integrate the mobile
reality system into the legacy networked virtual environment. This research
comprises two essential research domains: one is the networked virtual
environment (NVE) and the other is mobile computing. With the proposed
technique, a user can use a mobile device to join a networked virtual
environment and interact with desktop users of the same virtual environment. To
achieve this goal, three technical issues have to be solved, including mobile
networking, resource shortage, and coordinate coordination. The paper presents
solutions to all of these issues. Further, a Mobility Supporting Server (MSS)
is proposed to implement the presented solutions in an existing networked virtual
environment in Taiwan, called the 3D virtual campus. The result of this experimental
research highlights the possibility of building a Multiplayer Mobile Mixed
Reality (M3R) environment in the near future. Keywords: Networked Virtual Environment (NVE); Mobile Computing; Mobile Supporting
Server; Multiplayer Mobile Mixed Reality | |||
| The Impact of Different Visual Feedback Presentation Methods in a Wearable Computing Scenario | | BIBA | Full-Text | 752-759 | |
| Hendrik Iben; Hendrik Witt; Ernesto Morales Kluge | |||
| Interfaces for wearable computing applications have to be tailored to task and usability demands. Critical information has to be presented in a way that allows fast absorption by the user while not distracting from the primary task. In this work we evaluated the impact of different information presentation methods on the performance of users in a wearable computing scenario. The presented information was critical to fulfilling the given task and was displayed on two different types of head mounted displays (HMD). Further, the representations were divided into two groups: the first group consisted of qualitative representations, while the second group focused on quantitative information. Only a weak significance could be determined for the effect the different presentation methods have on performance, but there is evidence that familiarity has an effect. A significant effect was found for the type of HMD. | |||
| Gold Coating of a Plastic Optical Fiber Based on PMMA | | BIBAK | Full-Text | 760-767 | |
| Seok Min Kim; Sung Hun Kim; Eun Ju Park; Dong Lyun Cho; Moo Sung Lee | |||
| We investigated the adhesion between gold thin film and poly (methyl
methacrylate) (PMMA) and poly (vinylidene fluoride-co-hexafluoropropylene)
(P(VDF-co-HFP)) substrates with the aim of imparting electrical conductivity to
plastic optical fibers (POFs). The two polymers were used as the core and the
cladding of the POF, respectively. A gold thin film of 50 nm thickness was deposited
by ion sputtering onto the polymers and also onto the POF. Several approaches, which
were well known to be effective in enhancing adhesive strength between gold and
polymers, were applied in this study: introduction of polar functionality on
the substrate surface by plasma treatment, buffer layer insertion, and physical
surface roughening. The variation of wettability and adhesion with plasma
conditions was investigated through water contact angle measurement and cross
hatch cut test. Even though the contact angles of the substrates decreased
after Ar or O2 plasma treatment, irrespective of the polymer type, the
adhesion of the polymers to the gold layer was very poor. A Ti buffer layer of 5 nm
thickness, deposited between the PMMA substrate and the gold layer, did not
improve the adhesion. However, P(VDF-co-HFP) substrates with a
rough surface of 13.44 nm RMS showed class 3B adhesion to gold in the cross
hatch tape test. The gold-coated POF showed an electrical conductivity of
1.35×10³ S·cm⁻¹ without significant optical loss. The result may
be used for developing a medical device capable of simultaneously applying
electrical and optical stimuli. Keywords: plastic optical fiber; POF; sidelight; overcoating | |||
| Standardization for Smart Clothing Technology | | BIBAK | Full-Text | 768-777 | |
| Kwangil Lee; Yong Gu Ji | |||
| Smart clothing is the next generation of apparel. It is a combination of new
fabric technology and digital technology, which means that the clothing is made
with new signal-transfer fabric technology installed with digital devices.
Since this smart clothing is still under development, many problems have
occurred due to the absence of the standardization of technology. Therefore,
the efficiency of technology development can be strengthened through industrial
standardization. This study consists of three phases. The first phase is
selecting standardization factors to propose a standardization road map. The
second phase is to research and collect related test evaluation methods of
smart clothing. For this, we selected two categories, which are clothing and
electricity/electron properties. The third phase is establishing a
standardization road map for smart clothing. In this study, test evaluations
have not yet been conducted and proved. However, this study shows how to
approach standardization. We expect that it will be valuable for developing
smart clothing technology and standardization in the future. Keywords: smart clothing; standardization; new fabric technology; clothing property;
electricity/electron property | |||
| Wearable ECG Monitoring System Using Conductive Fabrics and Active Electrodes | | BIBAK | Full-Text | 778-783 | |
| Su Ho Lee; Seok Myung Jung; Chung Ki Lee; Kee Sam Jeong; Gilsoo Cho; Sun Kook Yoo | |||
The aim of this paper is to develop a nonintrusive ECG monitoring system
based on active electrodes with conductive fabric. Our electrode can
measure the ECG signal without the electrolyte gel or adhesives that cause skin
trouble. For stable measurement of the ECG signal, a buffer amplifier with
high input impedance and a noise-bypassing shield made of conductive fabric were
developed. The system provides real-time ECG signal monitoring and wireless
communication using the ZigBee protocol. We show experimental results for
the developed wearable ECG monitoring system and demonstrate how they can be applied
to the design of nonintrusive electrodes with conductive fabric. Keywords: active electrode; conductive fabric; wearable; ZigBee; portable | |||
| Establishing a Measurement System for Human Motions Using a Textile-Based Motion Sensor | | BIBAK | Full-Text | 784-792 | |
| Moonsoo Sung; Keesam Jeong; Gilsoo Cho | |||
| We developed a human motion measurement system using textile-based motion
sensors whose electrical resistance changes with textile length. Eight body
locations were marked and used for measurement, based on previous studies
investigating the relationship between human muscles and activities. Five male
subjects participated in the experiment, walking and running while the
electrical resistance of each sensor was measured. Measuring and analyzing the
variations in the electrical resistances of our sensors allowed us to
successfully evaluate body postures and motions. Keywords: human motion; human posture; measurement; textile-based motion sensor;
electronic textile | |||
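Illustrative sketch only: sensors whose resistance varies with length typically need a calibration from resistance to posture; a minimal linear-calibration sketch is given below in Python. The resistance values, joint-angle targets, and linear model are hypothetical and not taken from the paper.

```python
import numpy as np

def calibrate(resistances_ohm, angles_deg):
    """Fit a linear map from sensor resistance to joint angle
    (hypothetical calibration; the paper's model may differ)."""
    slope, intercept = np.polyfit(resistances_ohm, angles_deg, deg=1)
    return lambda r: slope * np.asarray(r) + intercept

# Hypothetical calibration pairs for one knee-mounted textile sensor.
to_angle = calibrate([120.0, 150.0, 180.0], [0.0, 45.0, 90.0])
print(to_angle([135.0, 165.0]))  # estimated joint angles during walking
```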
| A Context-Aware AR Navigation System Using Wearable Sensors | | BIBA | Full-Text | 793-801 | |
| Daisuke Takada; Takefumi Ogawa; Kiyoshi Kiyokawa; Haruo Takemura | |||
| We have been developing a networked wearable AR system that determines the user's current context to provide appropriate annotations. This system allows for annotation management based on the relationship between annotations and the real environment along with data transfer routines that dynamically calculate annotations' priority to transfer just enough data from the annotation server to the wearable PC worn by the user. Furthermore, this system recognizes the user's activity to predict the kind and level of detail of annotations the user needs at a given time. This information can be used for dynamic annotation filtering and switching of rendering modes. | |||
| Emotional Smart Materials | | BIBAK | Full-Text | 802-805 | |
| Akira Wakita; Midori Shibutani; Kohei Tsuji | |||
To build affective and emotional interaction, we pay attention to the materials
of the interface. We introduce two non-emissive displays as examples illustrating our
concept. Fabcell is a fabric pixel that changes its color in a non-emissive
manner; a matrix arrangement of Fabcells enables information display with a fabric
texture. Jello Display is composed of gel blocks with moisture, coldness and
softness, whose unique look and feel enables an organic information display. These
kinds of haptic and organic information displays have the ability to add rich
affectivity to the artifacts used in our everyday lives. Keywords: smart material; affective computing; ubiquitous computing; tangible
interface | |||
| Novel Stretchable Textile-Based Transmission Bands: Electrical Performance and Appearance after Abrasion/Laundering, and Wearability | | BIBAK | Full-Text | 806-813 | |
| Yoonjung Yang; Gilsoo Cho | |||
| In this paper, we (1) compare the electrical performances and appearance
changes of two textile-based transmission bands after repeated abrasion and
laundering, and (2) evaluate their wearability with MP3 player jackets. The
bands were made with non-stretchable Teflon-coated stainless steel yarns, or
stretchable silicon-coated stainless steel yarns and spandex. The electrical
resistance of the bands after repeated abrasion and laundering was measured
with an RCL (resistance-capacitance-inductance) meter. The appearance changes
were observed using a digital microscope. For wear tests, five subjects
evaluated the degree of convenience while doing specific actions and other wear
sensations using questionnaires with a 7-point Likert-type scale. Both
non-stretchable and stretchable transmission bands were evaluated as excellent
on electrical performances. Appearance changes after abrasion were tolerable,
and there were neither exposure nor disconnection of stainless steel yarns.
Convenience and other wear sensations for the MP3 player jacket using
stretchable silicon-coated bands were evaluated as better than non-stretchable
Teflon-coated bands. Keywords: stretchable textile-based transmission band; silicon-coated stainless steel
multifilament yarn; abrasion; laundering; electrical resistance; image
analysis; MP3 player jacket; wear sensation | |||