| Qualitative Spatial Modelling of Human Route Instructions to Mobile Robots | | BIBAK | Full-Text | 1-6 | |
| Hui Shi; Cui Jian; Bernd Krieg-Brückner | |||
| This paper describes our work on improving interaction between humans and
mobile robots on navigation tasks. To represent and reason about humans' route
instructions, a user-centered qualitative spatial model -- the Conceptual Route
Graph -- is introduced. Three reasoning strategies based on this conceptual
model are discussed, which enable a mobile robot to generate different
clarification responses if spatial mismatches are detected in route
instructions. Moreover, results of an empirical study to evaluate and compare
their effects on people's navigation activities are presented. Keywords: qualitative spatial representation and reasoning, human-robot interaction,
mode confusion, user focus | |||
| A Preliminary Framework for Differentiating the Paradigms of Human-Technology Interaction Research | | BIBAK | Full-Text | 7-12 | |
| Hannu Karvonen; Pertti Saariluoma; Tuomo Kujala | |||
| The purpose of this paper is to clarify the differences between approaches
in the research field of human-technology interaction. We are especially
interested in individuating user psychology from the more traditional
paradigms. Therefore, we suggest a preliminary theoretical framework of
criteria for distinguishing and individuating the different interaction
research paradigms. The framework consists of five dimensions in which the
paradigms may vary from each other. In this paper, we also discuss how
ubiquitous computing is related to some of the dimensions. In addition, we
focus on defining the new elements user psychology can bring to the discussion
and analysis of human-technology interaction. To demonstrate the usage of the
framework, we apply it to differentiate user psychology from traditional HCI
research. Keywords: human-technology interaction, user psychology, human-computer interaction,
paradigms, psychological and metascientific foundations for designing
human-technology interaction | |||
| Multi-language Ontology-Based Search Engine | | BIBAK | Full-Text | 13-18 | |
| Leyla Zhuhadar; Olfa Nasraoui; Robert Wyatt; Elizabeth Romero | |||
| One of the first Multi-Language Information Retrieval (MLIR) systems was
implemented in 1969 by Gerard Salton who enhanced his SMART system to retrieve
multilingual documents in two languages, English and German. However, the
research field of MLIR is still struggling, since the majority of information
retrieval systems are monolingual and, more precisely, English-based, even though
only 6% of the world's population have English as their native language [14]. This
paper presents a Multi-Language Information Retrieval (MLIR) approach that
falls into the area of Domain Specific Information Retrieval (E-learning being
the domain). The approach we followed is a synergistic approach between (1)
Thesaurus-based Approach and (2) Corpus-based Approach. This research has been
implemented on a real platform called HyperManyMedia at Western Kentucky
University. Keywords: multi-language information retrieval; cross-language information retrieval;
search engine; ontology; elearning | |||
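The abstract above mentions a synergy between a thesaurus-based and a corpus-based approach without giving implementation details. The following minimal sketch, provided purely for illustration, shows one way a small bilingual thesaurus could expand a query before a simple corpus-statistics ranking; the thesaurus entries, toy corpus, and function names are assumptions and do not come from the HyperManyMedia system.

```python
# Hypothetical sketch: cross-language query expansion with a tiny bilingual
# thesaurus, followed by a simple corpus-based (term-frequency) ranking.
# Not the actual HyperManyMedia implementation; all data below is invented.
from collections import Counter

BILINGUAL_THESAURUS = {          # English -> Spanish synonym sets (assumed)
    "course": {"curso", "asignatura"},
    "lecture": {"clase", "conferencia"},
}

DOCUMENTS = {                    # toy corpus: doc id -> tokenized text
    "doc-en": "introductory course lecture notes".split(),
    "doc-es": "apuntes del curso y de la clase introductoria".split(),
}

def expand_query(terms):
    """Thesaurus-based step: add target-language synonyms to the query."""
    expanded = set(terms)
    for term in terms:
        expanded |= BILINGUAL_THESAURUS.get(term, set())
    return expanded

def rank_by_corpus_statistics(query_terms):
    """Corpus-based step: score documents by term-frequency overlap."""
    scores = {}
    for doc_id, tokens in DOCUMENTS.items():
        tf = Counter(tokens)
        scores[doc_id] = sum(tf[t] for t in query_terms)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    query = expand_query(["course", "lecture"])
    print(rank_by_corpus_statistics(query))   # both the EN and ES docs now match
```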
| Towards Cognitively Accessible Web Pages | | BIBAK | Full-Text | 19-24 | |
| Till Halbach | |||
| Considering the design of inclusive interfaces of static and dynamic
webpages, this work focuses on the group of users with cognitive/intellectual
disabilities, while simultaneously accounting for the needs of users with
mobility and sensory deficits. A number of specific universal design principles
are derived from a variety of cognitive disabilities, such as problems with
linguistics (text and language), learning and problem solving, orientation,
focus and attention span, memory, and visual comprehension. The principles have
been implemented and evaluated by means of persona testing, with results
showing that a much more universally accessible login mechanism for a web
service could be achieved compared to today's solution. Keywords: Cognitive disabilities, intellectual deficits, impairment, deficiencies,
accessibility, e-inclusion, universal design, web pages | |||
| Simulating On-the-Road Behavior Using a Driving Simulator | | BIBAK | Full-Text | 25-31 | |
| Andreas Riener | |||
| In this paper, we summarize the initial results with regard to the question
to what extent driving simulators can serve as inexpensive and easily
realizable environments for simulating on-the-road behavior. The aim of these
first studies was to determine whether real driving studies can be replaced
with simulator experiments and, furthermore, to identify parameters and/or
restrictions for a second experimental series with improved settings. We have
conducted two studies comparing the driver's reaction time in real and
simulated environments with the final goal to provide a universal metric
describing the differences in reaction time. The events were, in the case of
simulation, triggered trace-driven or, in the real driving experiment, manually
activated by the experimenter and notifications were forwarded to the driver
using either a visual, auditory, or haptic sensory channel. The comparison of
the two studies showed that (i) both settings provide similar results for the
order of average response using the three feedback modalities and (ii) the
simulator experiment yielded, for the measure of reaction time, results about
13% better than the real driving study (most likely because driving in a
real-world environment is much more challenging than driving in a simulator). Keywords: driving experiments, trace-driven simulation, driver-vehicle interaction
(DVI), feedback modalities, performance evaluation, user-centered design | |||
| Evaluating the Usability of Transactional Web Sites | | BIBAK | Full-Text | 32-37 | |
| Renato Otaiza; Cristian Rusu; Silvana Roncagliolo | |||
Most usability evaluation methods can be used to evaluate transactional web
applications; the problem is deciding which usability evaluation methods
provide the most information. A study has been carried out in order to develop
a methodology for the usability evaluation of transactional web applications.
The methodology was developed and validated through a number of case studies. Keywords: Usability Evaluations, Evaluation Methodology, Transactional Web
Applications | |||
| Human Tactile Ability to Discriminate Variations in Small Ridge Patterns through a Portable-Wearable Tactile Display | | BIBAK | Full-Text | 38-43 | |
| Nadia Garcia-Hernandez; Nikos G. Tsagarakis; Darwin G. Caldwell | |||
| This work presents a quantitative evaluation of subjects' tactile ability to
discriminate small virtual ridge patterns through a portable-wearable tactile
device. The virtual patterns have been recreated by controlling the vertically
moving pins of the device. Psychophysical experiments were performed to measure
subjects' thresholds for spatial variation discrimination of ridge patterns.
Moreover, for comparison purposes, further psychophysical experiments were
performed with real ridge patterns using a non-actuated version of the tactile
device and touching directly with the bare finger. During the experiments, the
exploration velocity was monitored. The present results help to understand,
compare and characterize the tactile display when rendering small ridge
patterns. The output of the presented study can also assist in the development
of new tactile systems. Keywords: tactile device, tactile discrimination, psychophysical experiments | |||
| The Influence of Telemanipulation-Systems on Fine Motor Performance | | BIBAK | Full-Text | 44-49 | |
| Lena Geiger; Michael Popp; Berthold Färber; Jordi Artigas; Philipp Kremer | |||
Extravehicular activities (EVAs) are a hazardous and expensive way to operate
in outer space. Telemanipulation systems are a possible support for, or
alternative to, manned missions for on-orbit servicing. Whether
or not such systems can actually achieve the efficiency of suited astronauts
remains a central issue in telemanipulation research. Both scenarios,
extravehicular activities as well as telemanipulation-systems, are restricted
by different environmental factors, especially in terms of tasks that require
fine motor skills. For suited astronauts, different factors, such as restricted
mobility and reduced tactile feedback through the gloves, as well as a
restricted field of view, impair fine motor skills. On the other hand, time
delay, limited degrees of freedom, and restricted haptic and visual feedback
are among the factors which may impair performance during work with
telemanipulation systems. In order to compare the efficiency of both
scenarios, a testbed equipped with typical mounting tasks was developed. An
experimental study showed that the testbed is a valid measure of fine motor
skills. In two follow-up studies, the influence of some factors debilitating
fine motor performance in telemanipulation-systems and simulated
extra-vehicular activities was analysed and compared. Keywords: fine motor skills; gloves; on-orbit servicing; telemanipulation | |||
| Multiple Parallel Vision-Based Recognition in a Real-Time Framework for Human-Robot-Interaction Scenarios | | BIBAK | Full-Text | 50-55 | |
| Tobias Rehrl; Alexander Bannat; Jürgen Gast; Frank Wallhoff; Gerhard Rigoll; Christoph Mayer; Zadid Riaz; Bernd Radig; Stefan Sosnowski; Kolja Kühnlenz | |||
Everyday human communication relies on a large number of different
communication mechanisms such as spoken language, facial expressions, body pose
and gestures, allowing humans to pass large amounts of information in a short
time. In contrast, traditional human-machine communication is often unintuitive
and requires specifically trained personnel. In this paper, we present a
real-time capable framework that recognizes traditional visual human
communication signals in order to establish a more intuitive human-machine
interaction. Humans rely on the interaction partner's face for identification,
which helps them to adapt to the interaction partner and utilize context
information. Head gestures (head nodding and head shaking) are a convenient way
to show agreement or disagreement. Facial expressions give evidence about the
interaction partners' emotional state and hand gestures are a fast way of
passing simple commands. The recognition of all these interaction cues is
performed in parallel, enabled by a shared-memory implementation. Keywords: real-time image processing, gesture recognition, human-robot interaction,
facial expressions | |||
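The abstract notes only that the recognizers run in parallel over a shared memory. A minimal sketch of that general pattern, using Python's multiprocessing with trivial stand-in recognizers, follows; the recognizer names and their outputs are illustrative assumptions, not the paper's vision modules.

```python
# Hedged sketch of parallel recognizers publishing into a shared memory,
# loosely following the pattern described in the abstract. The recognizers
# here are trivial stand-ins, not the paper's actual vision components.
import multiprocessing as mp
import time

def head_gesture_recognizer(shared, stop):
    while not stop.is_set():
        shared["head_gesture"] = "nod"          # would come from image analysis
        time.sleep(0.1)

def facial_expression_recognizer(shared, stop):
    while not stop.is_set():
        shared["facial_expression"] = "smile"   # would come from a face model
        time.sleep(0.1)

if __name__ == "__main__":
    manager = mp.Manager()
    shared = manager.dict()                     # the shared-memory blackboard
    stop = manager.Event()
    workers = [
        mp.Process(target=head_gesture_recognizer, args=(shared, stop)),
        mp.Process(target=facial_expression_recognizer, args=(shared, stop)),
    ]
    for w in workers:
        w.start()
    time.sleep(0.5)                             # let the recognizers publish
    print(dict(shared))                         # fused view of all cues
    stop.set()
    for w in workers:
        w.join()
```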
| Terrain-Aware Path Guided Mobile Robot Teleoperation in Virtual and Real Space | | BIBA | Full-Text | 56-65 | |
| Ray Jarvis | |||
| This paper concerns the development of a force feedback enhanced teleoperation system for outdoor robotic vehicles navigating in rough terrain where true-colour 3D virtual world models of the working environment, created from laser and colour image scans collected offline, can be explored by walk-throughs both before and during the robot navigation mission itself. In other words, the physical mission planned can be partially rehearsed in cyberspace. Further, during a mission, the location and orientation of the vehicle are continually determined and global collision-free paths to selected goal locations made available as advice to the operator, who can follow or ignore such advice at will. Live (real-time) 3D laser range data also provides an up-to-date scan of the volume immediately surrounding the vehicle as it moves so that dynamic obstacles can be avoided. Local terrain roughness is taken into account in the provision of local collision-free paths, the sub-goals of which are operator-determined. This live range data is matched with the pre-scanned range data to calculate the accurate robot vehicle localisation (position and orientation) which is provided continuously during the navigation mission. A force feedback 3D joystick reflects terrain roughness as a vibration in one axis and the other two axes are used to provide a 2D force to attract the operator towards following the local optimal collision-free path, but this attraction can be easily overridden by the operator. The instrumentation and methodologies used are presented, together with some preliminary experimental results. | |||
| Transparency Trade-Offs for a 3-Channel Controller Revealed by the Bounded Environment Passivity Method | | BIBAK | Full-Text | 66-72 | |
| Bert Willaert; Brecht Corteville; Dominiek Reynaerts; Hendrik Van Brussel; Emmanuel B. Vander Poorten | |||
| In this paper, the Bounded Environment Passivity method [1] is applied to a
3-channel controller. This method enables the design of teleoperation
controllers that show passive behaviour for interactions with a bounded range
of environments. The resulting tuning guidelines, derived analytically, provide
interesting tuning flexibility, which makes it possible to focus on different aspects of
transparency. As telesurgery is the motivation behind this work, the focus lies
on correctly reflecting the stiffness properties of the environment. A
comparison between the transparency and stability properties of this 3-channel
controller and the same properties of the Position-Force controller
demonstrates the interesting properties of the 3-channel controller. The
theoretical results are verified experimentally on a 1 d.o.f. master-slave
setup. Keywords: Teleoperation, human-robot interaction, transparency, passivity | |||
| Theatre as a Discussion Tool in Human-Robot Interaction Experiments -- A Pilot Study | | BIBAK | Full-Text | 73-78 | |
| Amiy R. Chatley; Kerstin Dautenhahn; Mick L. Walters; Dag S. Syrdal; Bruce Christianson | |||
| In the field of Human-Robot Interaction (HRI), a novel experimental
methodology is presented for carrying out studies which uses a theatrical
presentation with an actor interacting and cooperating with robots in realistic
scenarios before an audience. This methodology has been inspired by previous
research in Human-Computer Interaction. The actor also stays in role for a
post-theatre session, answering questions and encouraging the audience to
discuss their respective opinions and viewpoints relating to the HRI scenario
enactment. The development and running of a first exploratory pilot experiment
using the new Theatre HRI (THRI) methodology is presented and critically
reviewed. Based on this review and the associated findings from the audience
discussion session, it is concluded that the THRI
methodology is viable for performing HRI user studies. Keywords: Theatre, Human-Robot Interaction, User Evaluation, Robot Performance,
Usability, Scenarios | |||
| A Simulation Framework for Human-Robot Interaction | | BIBAK | Full-Text | 79-84 | |
| Norbert Schmitz; Jochen Hirth; Karsten Berns | |||
| The development of human-robot interaction scenarios is a strongly
situation-dependent as well as an extremely dynamic task. Humans interacting
with the robot react directly to observed stimuli, and changes in the
environment are unavoidable. Therefore it is impossible to test and verify
interaction scenarios in real environments in a repeatable manner. In this paper, we
propose a robot development framework that is able to simulate all required
modules of the robot, its sensor system as well as its environment including
persons. The simulation is able to represent all actuators of a humanoid robot
like body, head and arm movements as well as facial expressions. Besides the
simulation of actuators, all sensors are modeled directly in the framework. It
is possible to integrate cameras, microphones, distance sensors, and RFID tags
and readers. These sensors provide the input for the robot control system based
on the environmental situation including static elements like furniture and
walls as well as movable objects like humans. The implementation of human
movements is based on the H-Anim standard and a modeling tool which enables the
user to record and integrate self-designed motions. Keywords: Human-Robot Interaction, Simulation, Software Framework | |||
| Semi-automatically Configured Fission for Multimodal User Interfaces | | BIBAK | Full-Text | 85-90 | |
| Dominik Ertl; Jürgen Falb; Hermann Kaindl | |||
| Fission of several output modalities poses hard problems, and
(semi-)automatically configuring it is even more difficult. However, it is
important to address the latter in order to broaden the scope of providing user
interfaces semi-automatically. Our approach starts from a high-level discourse
model created by a human interaction designer. It is modality-independent, so a
modality-annotated discourse model is semi-automatically generated. Based on
it, our fission is semi-automatically configured. It currently supports the
output modalities of a graphical user interface, (canned) speech output, and a new modality
that we call movement as communication. The latter involves movements of a
semi-autonomous robot in 2D-space for reinforcing the communication of the
other modalities. Keywords: Multimodal fission, movement as communication, interaction design | |||
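The abstract describes configuring fission over the modalities GUI, canned speech, and movement as communication, without detailing the dispatch step. The sketch below illustrates the general idea of routing a modality-annotated communicative act to per-modality renderers; the annotation format and renderer functions are assumptions, not the paper's discourse model.

```python
# Hedged sketch of multimodal fission: a communicative act annotated with
# modalities is dispatched to per-modality renderers. The annotation format
# and renderer stubs are invented for illustration.
def render_gui(act):
    print(f"[GUI] showing: {act['content']}")

def render_speech(act):
    print(f"[SPEECH] playing canned utterance: {act['content']}")

def render_movement(act):
    print(f"[MOVEMENT] robot performs: {act.get('movement', 'approach user')}")

RENDERERS = {"gui": render_gui, "speech": render_speech, "movement": render_movement}

def fission(act):
    """Send one communicative act to every modality it is annotated with."""
    for modality in act["modalities"]:
        RENDERERS[modality](act)

if __name__ == "__main__":
    act = {"content": "Your order is ready.",
           "modalities": ["gui", "speech", "movement"],
           "movement": "drive towards the counter"}
    fission(act)
```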
| Enhanced Synthesized Text Reader for Visually Impaired Users | | BIBAK | Full-Text | 91-94 | |
| Jaka Sodnik; Grega Jakus; Sašo Tomazic | |||
| In this paper we propose a prototype of a spatialized text reader for
visually impaired users. The basic functions of the system are reading
arbitrary files, converting text into speech using different synthesized voices
and spatializing synthesized speech. Visually impaired users can thus listen to
the content of a file read by various synthesized voices at different spatial
positions. Some metadata (e.g. pre-inserted tags) has to be added to the file
before processing in order to define the voice, pitch, reading rate and
originating spatial position for any part of the content. We believe such a way
of electronic book reading can be a significant improvement for visually
impaired users compared to mundane and dull screen readers. The core of the
system is based on the Java platform, using the FreeTTS speech synthesizer and
the JOAL positioning library. The latter is improved by the external MIT Head-Related
Impulse Response (HRIR) library. The use of headphones is obligatory in order
to perceive spatial sound correctly. The system is a work in progress and is
currently under evaluation by twelve visually impaired test subjects. Keywords: Visually impaired users; speech synthesis; spatial positioning; HRIR library | |||
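The abstract states that pre-inserted tags define the voice, pitch, reading rate and spatial position for each part of the content. The following sketch illustrates one plausible way such tags could be parsed into per-segment rendering parameters; the tag syntax and the speak_spatialized stub are assumptions, not the actual FreeTTS/JOAL code.

```python
# Hedged sketch: parsing pre-inserted tags into (voice, pitch, rate, position)
# segments that a synthesizer/spatializer could consume. The tag syntax and
# the speak_spatialized() stub are invented for illustration.
import re

TAG = re.compile(r"<voice name=(\w+) pitch=([\d.]+) rate=([\d.]+) "
                 r"azimuth=(-?[\d.]+)>(.*?)</voice>", re.S)

def parse_tagged_text(text):
    """Yield one segment per tag: (voice, pitch, rate, azimuth_deg, content)."""
    for name, pitch, rate, azimuth, content in TAG.findall(text):
        yield name, float(pitch), float(rate), float(azimuth), content.strip()

def speak_spatialized(voice, pitch, rate, azimuth, content):
    # Placeholder: a real system would call the TTS engine and position the
    # resulting audio source (e.g., via OpenAL/HRIR convolution).
    print(f"{voice} @ {azimuth:+.0f} deg (pitch {pitch}, rate {rate}): {content}")

if __name__ == "__main__":
    book = ("<voice name=narrator pitch=1.0 rate=1.0 azimuth=0>Chapter one.</voice>"
            "<voice name=alice pitch=1.2 rate=1.1 azimuth=-40>Hello!</voice>")
    for segment in parse_tagged_text(book):
        speak_spatialized(*segment)
```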
| Facilitating the Design of Multi-channel Interfaces for Ambient Computing | | BIBAK | Full-Text | 95-100 | |
| José Rouillard; Xavier Le Pallec; Jean-Claude Tarby; Raphaël Marvie | |||
| Ambient computing is one of the more significant recent advances in
computer-human interactions. With the ambient intelligence paradigm, computers
become embedded in our natural surroundings. As they are context sensitive and
adaptable, they better provide smart services to humans. But ambient computing
requires communication between several heterogeneous components that are not
supposed to communicate with each other. This paper describes how we use a workflow
to facilitate the design of multichannel interfaces for ambient computing. Our
results show that different devices (such as Wiimote, multi-touch screen,
telephone, etc.) can be managed in order to activate real things (such as lamp,
fan, robot, webcam, etc.). A smart digital home case study illustrates a
possible implementation of our approach and shows how it allows some parts of
the ambient system to be easily redesigned just by modifying the workflow. Keywords: Pervasive computing, ubiquitous computing, ambient intelligence,
multi-channel interaction, workflow, smart digital home | |||
| Results of the Improvement on Synthesis System's Speech Quality for Spanish Using Adaptive Automatas | | BIBAK | Full-Text | 101-106 | |
| Rosalia Caya; Claudia Zapata | |||
This article presents the experimentation with, and results of, applying the
Adaptive Automata technique to a Spanish voice synthesizer with the aim of
improving it. It reviews the design and implementation of the proposed
solution; both aspects are fully explained in a previous work. It also
highlights the formal methods available to measure the improvement in voice
quality, especially the naturalness feature. Lastly, it presents the results of
experimentation with users for both versions, the original and modified system.
These results show that significant improvement can be made in understanding
the meaning of read texts by incorporating linguistic concepts through the use
of adaptive technology. Keywords: adaptive automata, natural language interfaces, speech synthesis | |||
| Detecting Self-Collisions Using a Hybrid Bounding Volume Algorithm | | BIBAK | Full-Text | 107-112 | |
| F. A. Madera; Stephen D. Laycock; Andy M. Day | |||
A discrete collision detection algorithm to detect self-collisions between
deformable objects is presented; it is built using a Bounding Volume Hierarchy
(BVH) and a feature-based method. The deformations are represented by the
features of the mesh, which lie within the bounding volumes, and consequently
the updating time for the BVH is reduced. The algorithm compares
the minimum bounded geometry, the 1-ring, with the other spheres of the
hierarchy in order to cull away Bounding Volumes (BV) that are far apart. The
3D objects utilised are surface-based and are deformed by warping, control
points of splines, and a mass-spring model. Keywords: collision detection, deformable models, computer graphics, Bounding Volume
Hierarchy | |||
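The abstract describes comparing the 1-ring, as the minimum bounded geometry, against the spheres of the hierarchy to cull distant bounding volumes. The sketch below illustrates that culling idea in simplified form with a toy sphere hierarchy; the data structures and numbers are assumptions, not the paper's implementation.

```python
# Hedged sketch of the culling idea: test the bounding sphere of a 1-ring
# neighbourhood against a sphere hierarchy and descend only on overlap.
# Simplified illustration, not the paper's actual algorithm.
import math

class SphereNode:
    def __init__(self, center, radius, children=None, features=None):
        self.center, self.radius = center, radius
        self.children = children or []      # child SphereNodes
        self.features = features or []      # mesh features at leaf level

def spheres_overlap(c1, r1, c2, r2):
    return math.dist(c1, c2) <= r1 + r2

def candidate_features(one_ring_center, one_ring_radius, node):
    """Collect leaf features whose bounding spheres overlap the 1-ring sphere."""
    if not spheres_overlap(one_ring_center, one_ring_radius,
                           node.center, node.radius):
        return []                           # cull this whole subtree
    if not node.children:
        return list(node.features)
    hits = []
    for child in node.children:
        hits += candidate_features(one_ring_center, one_ring_radius, child)
    return hits

if __name__ == "__main__":
    leaf_a = SphereNode((0, 0, 0), 0.5, features=["tri-12"])
    leaf_b = SphereNode((5, 0, 0), 0.5, features=["tri-87"])
    root = SphereNode((2.5, 0, 0), 3.5, children=[leaf_a, leaf_b])
    print(candidate_features((0.2, 0, 0), 0.4, root))  # -> ['tri-12']
```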
| FLEXIBLE RULES: A Player Oriented Board Game Development Framework | | BIBAK | Full-Text | 113-118 | |
| Fulvio Frapolli; Amos Brocco; Apostolos Malatras; Béat Hirsbrunner | |||
| When comparing digital board games with their traditional counterparts, it
becomes clear that certain features such as graphics, mundane task automation
or saving and restoring the state of the game have been greatly improved.
Nonetheless, the transition to a digital environment leads to a loss of the
flexibility that makes traditional board games inherently popular. While
modifying aspects of the game is straightforward in traditional board games,
achieving such a level of customization in the digital domain requires deep
knowledge of and access to the game source code. In this paper we focus on
board games and by means of an in-depth online survey we validate our previous
observation, namely that enhancements should be made to digital board games by
incorporating gaming facets found in the physical environment, e.g. support for
flexibility by means of house rules. To this end, we introduce a conceptual
model for the design of digital board games, which is supported by a set of
visual programming tools that enable game development according to the principles
set out by our proposed model. The set of tools, along with the underlying
intuitive model, comprises the FLEXIBLERULES framework, which enables and
facilitates flexible and extensible game design and development. Keywords: Games, Survey, Development Framework, Human-Computer Interaction | |||
| Design and Development of 3D Mobile Games | | BIBAK | Full-Text | 119-124 | |
| M. Zameer Jhingut; Ibtihaaj M. Ghoorun; Soulakshmee D. Nagowah; Raj Moloo; Leckraj Nagowah | |||
A major issue arising nowadays is the lack of entertainment due to rising
levels of stress. In this era of technology, where the mobile phone has become
more of a necessity than a luxury, a new form of distraction has emerged: that
of mobile games. Mobile games are one of the primary entertainment applications at
present. These games provide a means of relaxation and help to draw users'
attention away from routine tensions. However, mobile game development is more
difficult than desktop application development because of scarce resources.
Performance is one of the critical requirements for mobile games. With the
advancement in technology, mobile multiplayer games have started to evolve.
This paper discusses 3D mobile game development for both single-player and
multiplayer games, and also evaluates two different APIs, namely the MIDP 2.0
Game API and the M3G API. Keywords: 3D games, mobile games, MIDP 2.0 Game API, M3G API | |||
| Music Box: An Algorithm for Producing Visual Music | | BIBAK | Full-Text | 125-129 | |
| Lindsay Grace | |||
| This research proposes a method for producing music via visual composition
in a computer-game-like environment. This is accomplished through the
development of artificial intelligence software that applies the visual rules
of standard emergent behaviors to the algorithmic arrangement of musical tones.
This research presents the proposed system, defining the algorithm and
demonstrating its implementation. Keywords: User Interfaces, Music, computer graphics, computer games | |||
| A Game-Based 3D Simulation of Otranto in the Middle Ages | | BIBAK | Full-Text | 130-133 | |
| Lucio T. De Paolis; Giovanni Aloisio; Maria G. Celentano; Luigi Oliva; Pietro Vecchio | |||
In an educational sense, Virtual Reality shows its value when the user can
actively participate in the creation and development of his or her knowledge.
Based on the MediaEvo Project and the multiplayer educational game realized
within it, the paper shows that entertainment game platforms can also be used
to develop platforms for multi-channel and multi-sensory cultural edutainment.
Herein we present the process for collecting and processing data, the
methodology and the tools used in the work, and the multi-playing and
Artificial Intelligence models implemented in the project. At the moment the
MediaEvo Project is a work in progress. Keywords: 3D Game, Edutainment, Virtual Cultural Heritage | |||
| Behavior Analysis through Games Using Artificial Neural Networks | | BIBAK | Full-Text | 134-138 | |
| Didier Puzenat; Isabelle Verlut | |||
| This paper demonstrates that a human being using an interface can be
efficiently evaluated -- in real time -- by embedding basic measurements in the
interface and using a suitably trained artificial neural network. The approach
is introduced through video games but is suitable for any machine capable of
valuable measurements of user actions. Of course, the quality of the
"diagnosis" depends on the learnability of the task and on the size and
quality of the learning base. Typical applications include the detection of
fatigue, stress, emotions, or the influence of a drug or of medical treatments;
screening for a deficit or for adequacy to a task; etc. Two successful prototypes
are presented, one to predict the mental age of children through a set of
simple basic games, and the other to detect whether a subject is right-handed
or left-handed through a racing car simulation. Keywords: User interfaces, Games, Neural network applications, Cognitive science,
Psychology, Human factors | |||
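The abstract's core claim is that simple in-interface measurements, fed to a suitably trained network, can yield a real-time assessment of the user. As a simplified illustration, the sketch below trains a one-neuron logistic classifier on invented reaction-time and error features to predict a binary label such as handedness; the features, data and model size are assumptions, far smaller than the paper's prototypes.

```python
# Hedged sketch of the general idea: feed simple in-game measurements (here,
# invented reaction-time and steering-error features) to a small trainable
# model. A one-neuron logistic classifier stands in for the paper's network.
import math, random

def predict(weights, bias, features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))           # probability of "left-handed"

def train(samples, labels, epochs=2000, lr=0.5):
    random.seed(0)
    weights = [random.uniform(-0.1, 0.1) for _ in samples[0]]
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = predict(weights, bias, x) - y  # gradient of the log-loss
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

if __name__ == "__main__":
    # [mean reaction time (s), steering error]; label 1 = left-handed (toy data)
    X = [[0.42, 0.10], [0.45, 0.12], [0.61, 0.30], [0.58, 0.28]]
    y = [0, 0, 1, 1]
    w, b = train(X, y)
    print(round(predict(w, b, [0.60, 0.29]), 2))  # high probability on toy data
```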
| Hapto-visual Virtual Reality as a Tool in Psychophysical Research on Roughness Sensitivity | | BIBAK | Full-Text | 139-142 | |
| Marcos Hilsenrat; Miriam Reiner | |||
| In this study, we present a method for psychophysical measurements of
surface roughness sensitivity by indirect touch, using a 3D hapto-visual
virtual reality (VR) device. In a texture-difference-recognition test, subjects
glided a pen-like stylus over a virtual surface. The surface was divided into
five areas: one central, and four surrounding areas. The roughness of the
central area was kept constant throughout the experiment. In each run, three of
the four surrounding areas were kept at the same roughness as the central
surface, and one, randomly, was different. From run to run, surface roughness
was changed following a binary search paradigm. If a subject recognized the
portion of the surface with a different roughness, then the roughness was
reduced by half; if not, the roughness increased, and so on, until the desired
number of steps was achieved. This approach allows us to take advantage of the
programmable capabilities of a VR, so that the limit of aware recognition
between surfaces with different roughness can be measured with more precision
than in conventional methods. To illustrate this paradigm, results from a
preliminary study are presented. Keywords: haptics; psychophysics; indirect touch; roughness sensitivity | |||
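The abstract describes a binary-search-style adjustment of the roughness difference: halved after a correct recognition, increased otherwise. The sketch below is one possible reading of that procedure; the subject_recognizes stub, the increase factor of 1.5 and the fixed number of steps are assumptions standing in for real VR trials.

```python
# Hedged sketch of the binary-search paradigm described above: the roughness
# difference is halved after a correct recognition and increased otherwise.
# subject_recognizes() is a stand-in for running one real VR trial.
def subject_recognizes(roughness_delta, sensory_threshold=0.12):
    """Stub for a trial: here, success simply means the delta is perceivable."""
    return roughness_delta > sensory_threshold

def estimate_threshold(initial_delta=1.0, steps=8):
    delta = initial_delta
    smallest_recognized = None
    for _ in range(steps):
        if subject_recognizes(delta):
            if smallest_recognized is None or delta < smallest_recognized:
                smallest_recognized = delta
            delta /= 2.0           # recognized: halve the roughness difference
        else:
            delta *= 1.5           # missed: increase it again (factor assumed)
    return smallest_recognized

if __name__ == "__main__":
    print(f"estimated roughness threshold: {estimate_threshold():.3f}")
```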
| A Framework for Abstract Representation and Recognition of Gestures in Multi-touch Applications | | BIBAK | Full-Text | 143-147 | |
| Martin Thomas Görg; Michael Cebulla; Sandro Rodriguez Garzon | |||
| Tangible user interfaces allowing multiple simultaneous contacts are by now
well known as multi-touch devices. They add new dimensions to input
possibilities compared with traditional single-pointer devices. At the same
time, the complexity of input interpretation increases. In particular, the
recognition of predefined gestures requires sophisticated techniques to cope
with multiple points of interaction. In this paper we present a novel approach
for recognition of multi-touch gestures by means of a mathematical rule
calculus. It allows abstract definition of multi-touch gestures. We further
outline how to implement a framework to encapsulate the recognition algorithm,
and demonstrate its success through a practical example obtained from an
existing application. Keywords: Human-computer interface, human-machine interface, gesture recognition,
gesture representation, multi-touch | |||
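The abstract proposes defining multi-touch gestures abstractly through a rule calculus, which it does not spell out here. As a loose illustration of the spirit of rule-based gesture definitions, the sketch below expresses two gestures as predicates over touch traces; the predicates and thresholds are invented and are not the paper's calculus.

```python
# Hedged sketch of abstract gesture definitions as predicates over touch
# traces (lists of (x, y) samples per contact). Illustrative only.
import math

def is_pinch(trace_a, trace_b, shrink_ratio=0.7):
    """Rule: two contacts whose mutual distance shrinks below a ratio."""
    start = math.dist(trace_a[0], trace_b[0])
    end = math.dist(trace_a[-1], trace_b[-1])
    return end < shrink_ratio * start

def is_swipe_right(trace, min_dx=100):
    """Rule: one contact moving predominantly to the right."""
    dx = trace[-1][0] - trace[0][0]
    dy = abs(trace[-1][1] - trace[0][1])
    return dx > min_dx and dy < dx / 2

if __name__ == "__main__":
    finger1 = [(100, 100), (170, 110), (250, 120)]
    finger2 = [(300, 300), (260, 280), (220, 260)]
    print("pinch:", is_pinch(finger1, finger2))      # True: contacts converge
    print("swipe-right:", is_swipe_right(finger1))   # True: mostly rightwards
```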
| The Effectiveness of Commercial Haptic Devices for Use in Virtual Needle Insertion Training Simulations | | BIBAK | Full-Text | 148-153 | |
| Timothy Richard Coles; Nigel W. John | |||
A needle insertion is a widely performed procedure used to inject fluids, to
retrieve samples, or as an introducing conduit for more advanced
procedures. Needle insertions, like most medical procedures, pose an inherent
risk of complication to the patient. This risk has prompted the development of
a variety of haptic training simulators to aid in the education of
practitioners before they attempt needle procedures upon a human. A common
trend in needle insertion simulation is to use and possibly modify pre-existing
commercial haptic solutions, saving on development cost and time. This study
reviews current needle inserting simulations, focusing on the haptic hardware
solutions used. Five commercially available haptic devices (SensAble
Technologies Omni, Desktop and Premium 1.5 6 DOF, Novint's Falcon and Mimics
Mantis) are then tested to evaluate their effectiveness for use in needle
insertion simulations. A conclusion is drawn, with advice for those producing
needle insertion simulation solutions. Keywords: Needle insertion; Simulation; Training; Analysis; Evaluation; Haptic; Omni;
Desktop; Premium; Falcon; Mantis; Linkage; Tension | |||
| Vibrotactile Display of Music on the Human Back | | BIBAK | Full-Text | 154-159 | |
| Carmen Branje; Michael Maksimouski; Maria Karam; Deborah I. Fels; Frank Russo | |||
| We present an experiment designed to reveal characteristics of a tactile
display that presents vibrations representing music to the back of the body.
Based on the model human cochlea, a sensory substitution system aimed at
translating music into vibrations, we are investigating the use of larger
contactors (over 10 mm in diameter) as an effective means for the
detection of signals originating from music. Using the method of limits, we
measured the ability to discriminate the frequency of vibrotactile stimuli across a
wide range of frequencies common to western classical harmonic music.
Vibrotactile stimuli were presented to artificially deafened participants using
a large contactor applied to the back. Between 65 Hz (C2) and 1047 Hz (C6),
frequency difference limens (FDL) were consistently less than 1/3 of an octave
and as small as 200 cents. These findings suggest that vibrotactile information
can be used to support the experience of music even in the absence of sound,
and that voice coils are effective in presenting some characteristics of sound
as vibrations. Keywords: Tactile displays, sensory aids, psychology, user interfaces | |||
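The results above are reported in cents and fractions of an octave. For readers unfamiliar with the unit, the standard definition is that the interval between frequencies f1 and f2 is 1200·log2(f2/f1) cents, so 1/3 of an octave equals 400 cents; the short snippet below simply evaluates this formula on example values.

```python
# Worked example of the cent measure used above: the interval between two
# frequencies in cents is 1200 * log2(f2 / f1), so 1/3 octave = 400 cents.
import math

def cents(f1_hz, f2_hz):
    return 1200.0 * math.log2(f2_hz / f1_hz)

if __name__ == "__main__":
    print(round(cents(65.41, 130.81)))   # C2 -> C3: one octave, ~1200 cents
    # A frequency difference limen of 200 cents corresponds to a ratio of
    # 2 ** (200 / 1200) ~= 1.122, i.e. about a 12% change in frequency.
    print(round(2 ** (200 / 1200), 3))
```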
| Collaborative Educational System Analysis and Assessment | | BIBAK | Full-Text | 160-165 | |
| Ion Ivan; Cristian Ciurea; Daniel Milodin | |||
The paper presents definitions of collaborative systems, their classification,
and the study of collaborative systems in education. It describes the key
concepts of collaborative educational systems and lists their main properties
and quality characteristics. It analyzes an application for assessing the
orthogonality of text entities within a collaborative system in education,
represented by a virtual campus, and implements a metric for evaluating this
orthogonality. Keywords: collaborative system; education; virtual campus; evaluation; orthogonality | |||
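The abstract mentions a metric for the orthogonality of text entities but does not reproduce it. Purely as an assumed illustration of what such a metric could look like, the sketch below scores two texts as orthogonal when their bag-of-words cosine similarity is low; this choice of measure and the toy course descriptions are not taken from the paper.

```python
# Assumed illustration only: orthogonality of two text entities measured as
# 1 - cosine similarity over simple bag-of-words vectors.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def orthogonality(text_a, text_b):
    return 1.0 - cosine_similarity(text_a, text_b)

if __name__ == "__main__":
    course_a = "introduction to databases and query languages"
    course_b = "advanced databases and query optimization"
    course_c = "history of renaissance painting"
    print(round(orthogonality(course_a, course_b), 2))  # low: overlapping topics
    print(round(orthogonality(course_a, course_c), 2))  # high: distinct topics
```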
| OpenSurg: An Ibero-American Project for the Teaching in Medical Robotics | | BIBAK | Full-Text | 166-168 | |
| Jose María Sabater; Oscar Andrés Vivas Alban | |||
This article presents the Ibero-American OpenSurg project, which attempts to
develop several medical robotics applications based on open-source software.
The project expects that the developed software tools can be used extensively
for teaching medical procedures in Ibero-America. The objectives and advances
of this project are presented in this document. Keywords: medical robotics; open software; computer teaching techniques | |||
| A Mixed-Reality Training System for Teleoperated Biomanipulations | | BIBAK | Full-Text | 169-174 | |
| Leonardo Mattos; Darwin G. Caldwell | |||
| This paper presents a mixed-reality system for the training of operators
(biologists/neuroscientists) on fully teleoperated biomanipulations. These tasks
are traditionally performed via direct manual control of the biomanipulation
equipment while looking through the binoculars of a microscope. However, direct
manual control makes the conventional systems susceptible to even very small
operator errors, and extensive training is normally required to attain a
satisfactory proficiency. To improve this area, a fully teleoperated
biomanipulation system has been previously developed, but efficient operation
of that system also requires some training. Therefore, the system presented
here has been created to help new operators become familiar with the
teleoperated system environment, introducing them to the system controls and
joystick functions. Two mixed-reality training games were designed,
implemented and tested for this purpose: a "move-and-shoot" game focused on
precise positioning training, and a trajectory-following game intended to
develop precise motion control skills in new operators. Preliminary experiments
performed with 20 totally novice operators demonstrated that this new training
system is effective in terms of the initial development of control skills for
real teleoperated biomanipulations. Experimental metrics demonstrated an
exponential learning curve for these novice operators, who achieved good
performance values after only two practice runs on the system. In addition,
training here was proven safe and inexpensive, since no real cells, biochemical
products, or pipettes were needed for this initial training phase. Keywords: mixed-reality, teleoperation, biomanipulation, micromanipulation | |||
| Using Rotoscopy Technique to Assist the Teaching of Handwriting for Children with Dyspraxia | | BIBAK | Full-Text | 175-178 | |
| Muhammad Fakri Othman; Wendy Keay-Bright | |||
| The paper will report on work in progress that aims to give children with
dyspraxia a playful and physical experience in developing handwriting skills
using the specialist animation technique known as rotoscopy. This technique has
been investigated in order to identify whether motivation and engagement can be
increased by using a performance-led approach which would allow children to
practice handwriting using gross motor skills, for example bodily movement and
gesture. A user-centred methodology is being used to identify requirements of
dyspraxic children and a pedagogical context for the rotoscopy prototypes. Keywords: Rotoscopy, handwriting, dyspraxia, inclusive design | |||
| The 3A Interaction Model: Towards Bridging the Gap between Formal and Informal Learning | | BIBAK | Full-Text | 179-184 | |
| Sandy El Helou; Na Li; Denis Gillet | |||
| This paper discusses the adoption of bottom-up social software tools in
formal learning environments. This is believed to enhance the learning
experience of today's young generation, which is characterized by being
technology-savvy and keen on social networking. As a first step towards this objective, the 3A
interaction model that aims at aiding the design of personal and collaborative
learning platforms is presented. It accounts for interaction paradigms widely
used in Web 2.0 applications and builds on Distributed Cognition and Activity
Theory while remaining at the right level of abstraction to be easily
"translatable" into tangible applications supporting both formal and informal
learning. Keywords: collaborative learning, CSCW, CSCL, Web 2.0, interaction model, social
software | |||
| Training Undergraduate Students in User-Centered Design | | BIBAK | Full-Text | 185-190 | |
| Cynthia Y. Lester | |||
| Computer software is typically developed according to software engineering
methodologies. However, it has been noted that many software development
projects fail to achieve their goals. Further, it has been stated that some
estimates put the failure rate for producing a software product as high as 60
percent. Many of these problems can be attributed to poor communication between
customers and system developers or between end-users and developers. However,
many software development life cycles do not focus on understanding the
business needs of an organization or how organizational issues may influence
system development. As educators of the next generation of computing
professionals it is our responsibility to train students in development
methodologies where careful attention is paid to understanding the needs of
stakeholders. The aim of this paper is to present a conceptual framework for
training undergraduate students in user-centered design. Keywords: human computer interaction, software engineering, user-centered design,
undergraduate students | |||
| Model-Based Personalization within an Adaptable Human-Machine Interface Environment that is Capable of Learning from User Interactions | | BIBAK | Full-Text | 191-198 | |
| Sandro Rodriguez Garzon; Michael Cebulla | |||
| The paper describes a multimodal interface architecture that is capable of
automatic adaptation to the user by learning from the user's interaction
patterns. The introduced architecture consists of a general way to specify
multimodal HMI systems based on models with modes, transitions and guards as
well as a mechanism to apply adaptations during the runtime process.
Adaptations are defined as modifications of models which are specified during
the specification process and used during the runtime process to control the
program flow. To provide the developer with a flexible but secure way to define
personalization use cases in the form of adaptation rules, we introduce
modification boundaries, which are defined as an additional model based on the
same formalism as the models used for the program flow. The framework will be discussed by
means of an example: Analyzing the interaction of the user after a recognition
of a yet unknown speech command to infer and apply adequate modifications of
the model to connect the unknown speech command with a typical user
interaction. Keywords: Modeling, Adaptation, Personalization, Human-Machine Interface, Learning | |||
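The abstract describes models built from modes, transitions and guards, runtime adaptations, and modification boundaries that constrain them. The sketch below illustrates that combination on a toy scale: a learned transition for an unknown speech command is applied only if it stays within a declared boundary. The class name, modes and rule are assumptions, not the paper's formalism.

```python
# Hedged sketch of modes/transitions/guards plus a modification boundary that
# constrains runtime adaptation. Model content is invented for illustration.
class HMIModel:
    def __init__(self, transitions, boundary):
        # (mode, event) -> (guard predicate over a context dict, next mode)
        self.transitions = dict(transitions)
        self.boundary = boundary              # modes that adaptations may touch
        self.mode = "idle"

    def handle(self, event, context):
        guard, target = self.transitions.get((self.mode, event), (None, None))
        if target is not None and (guard is None or guard(context)):
            self.mode = target
        return self.mode

    def adapt(self, mode, event, target):
        """Apply a learned transition only if it stays inside the boundary."""
        if mode in self.boundary and target in self.boundary:
            self.transitions[(mode, event)] = (None, target)
            return True
        return False                          # rejected: outside the boundary

if __name__ == "__main__":
    model = HMIModel(
        transitions={("idle", "say:navigate"): (lambda ctx: ctx["gps"], "navigation")},
        boundary={"idle", "navigation", "music"},
    )
    print(model.handle("say:navigate", {"gps": True}))          # -> navigation
    model.mode = "idle"
    # Unknown command observed, then the user manually switches to "music":
    # a new transition is inferred and accepted because it stays in the boundary.
    print(model.adapt("idle", "say:play-something", "music"))   # True
    print(model.handle("say:play-something", {"gps": False}))   # -> music
    print(model.adapt("idle", "say:shutdown", "maintenance"))   # False (outside)
```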
| HyPhIVE: A Hybrid Virtual-Physical Collaboration Environment | | BIBAK | Full-Text | 199-204 | |
| Senaka Buthpitiya; Ying Zhang | |||
| Virtual world conferences have been shown to give users an increased sense
of presence in a collaboration as opposed to teleconferences, video-conferences
and web-conferences. Such telepresence encourages remote participants to engage
in the collaboration. Current virtual world collaboration applications rely on
mouse/keyboard interfaces to create pure-virtual collaborations. In this paper
we propose HyPhIVE, a system to address hybrid collaboration between the
physical world and virtual worlds. In hybrid collaboration scenarios, a group
of people collaborate in the real world and others join them remotely via a
virtual world. HyPhIVE uses non-intrusive mobile sensors to detect real world
users' collaboration context, such as their position, direction of gaze,
gestures, and voice. HyPhIVE projects the sensed real-world collaboration into
a virtual world in such a way that collaboration patterns are preserved. Remote users
join the collaboration using virtual world clients and interact with other
users' avatars. User studies have shown that HyPhIVE effectively projects real
world collaborations into a virtual world and it improves users' experience of
remote collaboration. Keywords: collaborative virtual environments, ubiquitous computing, mobile computing | |||
| Synergistic Annotation of Multimedia Content | | BIBAK | Full-Text | 205-208 | |
| Chris Creed; Peter Lonsdale; Robert Hendley; Russell Beale | |||
| We describe work in progress toward a new approach for multimedia annotation
in which the system and user work synergistically together. This work in
progress is particularly focused on enabling journalists to efficiently
annotate articles for submission to news agencies. Initial work on gathering
user requirements is detailed along with several interesting findings that
resulted from this process: capturing mood and emotion is needed as well as
descriptive content. Important areas for future research are also highlighted
and discussed. Keywords: video, annotation, intelligent, user study, multimedia, user centered design | |||
| A Combined Relevance Feedback Approach for User Recommendation in E-commerce Applications | | BIBAK | Full-Text | 209-214 | |
| Vincenzo Moscato; Antonio Picariello; Antonio M. Rinaldi | |||
| Recommender systems in e-commerce applications help consumers with
information useful to decide which products to purchase, suggesting products,
services, and information items to potential consumers. Nowadays, recommender
system interfaces are oriented more towards technical people than towards
ordinary consumers, who are not necessarily experts in statistics, scores and
so on. In this paper we propose to join the capabilities of relevance feedback
with recommendation strategies in a more useful architecture based on 3D
navigation systems. A general framework is described, together with novel
techniques oriented to an effective human-computer interaction. An example of
the proposed system is also discussed. Keywords: Human-Computer interaction; 3D interface; relevance feedback; recommendation
systems | |||
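The abstract proposes joining relevance feedback with recommendation but does not give the update rule. As one textbook way of realizing relevance feedback, the sketch below applies a Rocchio-style update to a user profile vector and re-ranks a toy catalogue; the weights, feature vectors and helper functions are assumptions and not necessarily the paper's architecture.

```python
# Hedged sketch: Rocchio-style relevance feedback moving a user profile vector
# towards liked items and away from disliked ones, then re-ranking a catalogue.
def add(u, v, scale=1.0):
    return {k: u.get(k, 0.0) + scale * v.get(k, 0.0) for k in set(u) | set(v)}

def dot(u, v):
    return sum(u.get(k, 0.0) * v.get(k, 0.0) for k in u)

def rocchio(profile, liked, disliked, alpha=1.0, beta=0.75, gamma=0.25):
    updated = {k: alpha * w for k, w in profile.items()}
    for item in liked:
        updated = add(updated, item, beta / max(len(liked), 1))
    for item in disliked:
        updated = add(updated, item, -gamma / max(len(disliked), 1))
    return updated

if __name__ == "__main__":
    catalogue = {
        "camera":  {"electronics": 1.0, "photo": 1.0},
        "lens":    {"photo": 1.0, "accessory": 1.0},
        "blender": {"kitchen": 1.0},
    }
    profile = {"electronics": 0.2}
    profile = rocchio(profile, liked=[catalogue["camera"]],
                      disliked=[catalogue["blender"]])
    ranking = sorted(catalogue, key=lambda i: dot(profile, catalogue[i]),
                     reverse=True)
    print(ranking)   # photo-related items ranked above the blender
```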