| Development of a Virtual Keyboard System Using a Bio-signal Interface and Preliminary Usability Test | | BIBAK | Full-Text | 3-9 | |
| Kwang-Ok An; Da-Hey Kim; Jongbae Kim | |||
| People with severe speech or language problems rely on augmentative and
alternative communication (AAC) to supplement existing speech or replace speech
that is not functional. However, many people with severe motor disabilities are limited in their use of AAC, because most AAC systems rely on mechanical input devices. In this paper, to overcome these limitations and offer a practical solution for disabled people, a virtual keyboard system using a bio-signal interface is developed. The developed system consists of a bio-signal interface, a training and feedback program, a connecting module, and a virtual keyboard. In addition, we evaluate how well subjects can control the system. The results of a preliminary usability test verify the usefulness of the system. Keywords: augmentative and alternative communication; bio-signal interface;
preliminary usability test; virtual keyboard system | |||
| Unifying Conceptual and Spatial Relationships between Objects in HCI | | BIBAK | Full-Text | 10-18 | |
| David Blezinger; Ava Fatah gen. Schieck; Christoph Hölscher | |||
| To design interfaces which occupy a continuous space of interaction, the
conceptual model of an interface needs to be transferred to a spatial model. To
find mappings between conceptual and spatial structure which are natural to
people, an experiment is undertaken in which participants organize objects in a
semi-circle of shelves around their body. We analyze how conceptual relationships between objects, such as categorical relationships and sequential relationships within task performance, are represented in the spatial configurations of objects chosen by the participants. In these configurations, a strong correlation between conceptual and spatial relationships between objects is observed. Keywords: HCI frameworks; spatial interface; conceptual model; information
architecture; navigation; object-based; task-based; spatial configuration;
spatial cognition; embodied interaction; categories; visual identity | |||
| Context-Aware Multimodal Sharing of Emotions | | BIBAK | Full-Text | 19-28 | |
| Maurizio Caon; Leonardo Angelini; Yong Yue; Omar Abou Khaled; Elena Mugellini | |||
| Computer-mediated interaction often lacks expressivity, in particular for emotion communication. Therefore, we present a concept for context-aware
multimodal sharing of emotions for human-to-computer-to-human interaction in
social networks. The multimodal inputs and outputs of this system are
distributed in a smart environment in order to grant a more immersive and
natural interaction experience. The context information is used to improve the timeliness and quality of feedback. We implemented an evaluation scenario and conducted an observational study with the participants during several events. We report our considerations at the end of this paper. Keywords: affective computing; multimodal interaction; computer mediated
communication; social sharing of emotions | |||
| Supportive User Interfaces for MOCOCO (Mobile, Contextualized and Collaborative) Applications | | BIBAK | Full-Text | 29-38 | |
| Bertrand David; René Chalon; Florent Delomier | |||
| Enhancing interaction with supplementary Supportive User Interfaces:
Meta-UIs, Mega-UIs, Extra-UIs, Supra-UIs, etc. is a relatively new challenge
for HCI. In this paper, we describe our view of supportive user interfaces for AmI applications taking into account Mobility, Collaboration and Contextualization. We describe the proposed formalisms and their working conditions: initially created for designers at the design stage, we consider that they can now also be used by end-users for dynamic adjustment of working conditions. Keywords: Interactive and collaborative model architectures; formalisms; Ambient
Intelligence; pervasive and ubiquitous computing; tangible UI | |||
| RFID Mesh Network as an Infrastructure for Location Based Services for the Blind | | BIBAK | Full-Text | 39-45 | |
| Hugo Fernandes; Jose Faria; Paulo Martins; Hugo Paredes; João Barroso | |||
| People with visual impairments face serious challenges while moving from one
place to another. This is a difficult challenge that involves obstacle avoidance, staying on sidewalks, finding doors, knowing the current location and staying on the desired course until the destination is
reached. While assistive technology has contributed to the improvement of the
quality of life of people with disabilities, people with visual impairment
still face enormous limitations in terms of their mobility. There is still an enormous lack of information available to assist them, as well as a lack of sufficient precision in estimating the user's location. This paper proposes an infrastructure to assist the estimation of the user's location with high precision using Radio Frequency Identification, providing seamless availability of location-based services for the blind, whether indoors or outdoors. Keywords: Computer-augmented environments; blind; navigation; RFID | |||
| An Ontology-Based Interaction Concept for Social-Aware Applications | | BIBAK | Full-Text | 46-55 | |
| Alexandra Funke; Sören Brunk; Romina Kühn; Thomas Schlegel | |||
| As the usage of mobile devices becomes more and more ubiquitous, access
to social networks such as Facebook and Twitter from those devices is
increasing at a fast rate. Many different social networking applications for
mobile devices exist but most of them only enable access to one social network.
As users are often registered in multiple social networks, they have to use
different applications for mobile access. Furthermore, most applications do not
consider the users' social context to aid them with their intentions. This
paper presents our idea to model the user's social context and intentions in
social networks within an ontology. Based on this ontology we describe an
interaction concept that allows publishing information in different social
networks in a flexible way. We implemented a prototype to show how our findings
can be presented. To conclude, we highlight some possibilities for the future
of ontology-based social-aware applications. Keywords: interaction; ontology; semantic modeling; social-aware; social media | |||
| Sensor-Based Adaptation of User Interface on Android Phones | | BIBAK | Full-Text | 56-61 | |
| Tor-Morten Grønli; Gheorghita Ghinea; Jarle Hansen | |||
| The notion of context-aware computing generally refers to the ability of devices to adapt their behavior to the surrounding environment, ultimately enhancing usability. Sensors are an important source of input information in any real-world context, and several previous research contributions look into this topic. In our research, we combine sensor-generated context information received from the phone itself with information retrieved from cloud-based servers. All data is integrated to create a context-aware mobile device, for which we implemented a new customized home screen application for Android-enabled devices. Thus, we are also able to remotely configure the mobile devices independently of the device type. This creates a new concept of context-awareness and engages the user in ways previously unavailable. Keywords: sensor; interface adaptation; Android | |||
| Perception and BDI Reasoning Based Agent Model for Human Behavior Simulation in Complex System | | BIBAK | Full-Text | 62-71 | |
| Jaekoo Joo | |||
| Modeling of human behaviors in systems engineering has been regarded as an
extremely complex problem due to the ambiguity and difficulty of representing
human decision processes. Unlike modeling of traditional physical systems, from
which active humans are assumed to be excluded, HECS has some peculiar
characteristics, which can be summarized as follows: 1) Environments and humans themselves are nondeterministic and dynamic, so there are many different ways in which they evolve. 2) Humans perceive a set of perceptual information taken locally from the surrounding environment and from other humans in it, which guides their actions toward goal achievement. In order to overcome the challenges due to the above characteristics, we present a human agent model for mimicking perception-based rational human behaviors in complex systems by combining the ecological concept of affordance with the Belief-Desire-Intention (BDI) theory. Illustrative models of fire evacuation simulation are developed to show how the proposed framework can be applied. The proposed agent model is expected to realize its potential and enhance simulation fidelity in analyzing and predicting human behaviors in HECS. Keywords: Human Behavior; Affordance theory; BDI theory; Agent-based Simulation;
Social Interaction | |||
| Long-Term Study of a Software Keyboard That Places Keys at Positions of Fingers and Their Surroundings | | BIBAK | Full-Text | 72-81 | |
| Yuki Kuno; Buntarou Shizuki; Jiro Tanaka | |||
| In this paper, we present a software keyboard called Leyboard that enables
users to type faster. Leyboard makes typing easier by placing keys at the
positions of fingers and their surroundings. To this end, Leyboard
automatically adjusts its key positions and sizes to users' hands. This design
allows users to type faster and more accurately than with ordinary software keyboards, whose keys cannot be felt. We have implemented a prototype and performed a long-term user study. The study demonstrated the usefulness of Leyboard and revealed its pros and cons. Keywords: Touch screen; text entry; software keyboard; long-term study | |||
| Fast Dynamic Channel Allocation Algorithm for TD-HSPA System | | BIBAK | Full-Text | 82-91 | |
| Haidong Li; Hai-Lin Liu; Xueyi Liang | |||
| In order to make full use of the channel, a new dynamic channel allocation algorithm for the TD-HSPA system is proposed. The proposed algorithm gives priority to the time slot distribution in uplink channels. This paper uses low-order modulation and coding in uplink channels, but high-order modulation and coding in downlink channels, so the transmission rates of the uplink and downlink are asymmetric. In this paper, we propose a criterion for sharing channels between the main and auxiliary frequencies when the voice channel is idle. As a result, the system capacity is increased by 50% compared with previous schemes. Simulation results show that the proposed algorithm can decrease the call blocking ratio and the packet dropping rate of data services, improve channel utilization efficiency, and increase the number of data users dramatically. Keywords: TD-HSPA; asymmetric transmission; frequency sharing; dynamic channel
allocation | |||
| Evaluating Intelligibility Usage and Usefulness in a Context-Aware Application | | BIBAK | Full-Text | 92-101 | |
| Brian Y. Lim; Anind K. Dey | |||
| Intelligibility has been proposed to help end-users understand context-aware
applications with their complex inference and implicit sensing. Usable
explanations can be generated and designed to improve user understanding.
However, will users want to use these intelligibility features? How much
intelligibility will they use, and will this be sufficient to improve their
understanding? We present a quasi-field experiment of how participants used the
intelligibility features of a context-aware application. We investigated how
many explanations they viewed, how that affected their understanding of the
application's behavior, and suggestions they had for improving its behavior. We
discuss what constitutes successful intelligibility usage, and provide
recommendations for designing intelligibility to promote its effective use. Keywords: Context-Awareness; Intelligibility; Explanations; User Study | |||
| Strangers and Friends | | BIBAK | Full-Text | 102-111 | |
| Nikita Mattar; Ipke Wachsmuth | |||
| We demonstrate how an artificial agent's conversational style can be adapted
to different interlocutors by using a model of Person Memory. While other
approaches so far rely on adapting an agent's behavior according to one
particular factor like personality or relationship, we show how to enable an
agent to take diverse factors into account at once by exploiting social
categories. This way, our agent is able to adapt its conversational style
individually to reflect interpersonal relationships during conversation. Keywords: embodied conversational agents; conversational style; social categories;
personality; relationships; situational context | |||
| suGATALOG: Fashion Coordination System That Supports Users to Choose Everyday Fashion with Clothed Pictures | | BIBAK | Full-Text | 112-121 | |
| Ayaka Sato; Keita Watanabe; Michiaki Yasumura; Jun Rekimoto | |||
| When deciding what to wear, we normally have to consider several things,
such as color and combination of clothes, as well as situations that might
change every day, including the weather, what to do, where to go, and whom to
meet with. Trying on many possible combinations can be very tedious; thus,
computer support would be helpful. Therefore, we propose suGATALOG, a fashion
coordination system that allows users to choose and coordinate clothes from
their wardrobe. Previous studies have proposed systems using computer images of
clothes to allow users to inspect their clothing ensemble. Our system uses
pictures of users actually wearing the clothes to give a more realistic
impression. suGATALOG compares several combinations by swapping top and bottom
images. In this paper, we describe the system architecture and its user
interface, as well as an evaluation experiment and a long-term trial test to
verify the usefulness of the system. Keywords: Fashion coordinate; Clothes; Life-log | |||
| Interacting with a Context-Aware Personal Information Sharing System | | BIBA | Full-Text | 122-131 | |
| Simon Scerri; Andreas Schuller; Ismael Rivera; Judie Attard; Jeremy Debattista; Massimo Valla; Fabian Hermann; Siegfried Handschuh | |||
| The di.me userware is a decentralised personal information sharing system with a difference: extracted information and observed personal activities are exploited to automatically recognise personal situations, provide privacy-related warnings, and recommend and/or automate user actions. To enable reasoning, personal information from multiple devices and online sources is integrated and transformed to a machine-interpretable format. Aside from distributed personal information monitoring, an intuitive user interface also enables the i) manual customisation of advanced context-driven services and ii) their semi-automatic adaptation across interactive notifications. In this paper we outline how average users interact with the current user interface, and our plans to improve it. | |||
| Design and Evaluation of Eco-feedback Interfaces to Support Location-Based Services for Individual Energy Awareness and Conservation | | BIBAK | Full-Text | 132-140 | |
| Yang Ting Shen; Po Chun Chen; Tay Sheng Jeng | |||
| Eco-feedback technology has been widely applied to energy conservation.
Eco-feedback technology is usually represented as any kind of interactive
device or interface targeted at revealing energy consumption in order to
promote users' energy awareness and then trigger more ecologically responsible
behaviors. In this paper, the primary goal is to help the individual user
understand his comparative energy consumption through the Eco-feedback energy
visualization. The energy information we provide is the comparison between the
historical average energy consumption and the instant energy consumption. Based
on the instant comparative energy consumption, the user can intuitively understand whether the current energy consumption is higher or lower than usual. We
develop the location-based individual energy consumption feedback system named
EME (Energy MEter). Integrated with the concepts of historical comparison and
incentives, three kinds of eco-feedback interface prototypes including the
Dichotomy type, the Accumulation type, and the Numeral type are designed and
deployed in practical field settings. A user study with both quantitative and qualitative surveys is conducted in order to find out which interface best links users with their energy consumption data. Keywords: Eco-feedback; Energy awareness; Energy conservation; Comparative energy
consumption | |||
| Fuzzy Logic Approach for Adaptive Systems Design | | BIBAK | Full-Text | 141-150 | |
| Makram Soui; Mourad Abed; Khaled Ghedira | |||
| Adaptive systems are a field in rapid development. Adaptation is an effective solution for reducing complexity when searching for information. This article presents how to personalize a user interface (UI) using fuzzy logic. Our approach is based on the definition of relations for the selection of appropriate and inappropriate UI components. These relations are based on the degree of certainty about the coincidence in meaning between metadata elements and user preferences. The proposed approach has been validated by applying it in the e-learning field. Keywords: Adaptation; Adaptive Systems; Fuzzy logic; Evaluation; User Interface (UI) | |||
| Semi-supervised Remote Sensing Image Segmentation Using Dynamic Region Merging | | BIBAK | Full-Text | 153-162 | |
| Ning He; Ke Lu; Yixue Wang; Yue Gao | |||
| This paper introduces a remote sensing image segmentation approach that uses semi-supervised learning and dynamic region merging. In remote sensing images, the
spatial relationship among pixels has been shown to be sparsely represented by
a linear combination of a few training samples from a structured dictionary.
The sparse vector is recovered by solving a sparsity-constrained optimization
problem, and it can directly determine the class label of the test sample.
Through a graph-based technique, unlabeled samples are actively selected based
on the entropy of the corresponding class label. Starting from an initial semi-supervised segmentation, many regions still need to be merged to obtain a meaningful segmentation. By treating region merging as a labeling problem,
image segmentation is performed by iteratively merging the regions according to
a statistical test. Experiments on two datasets are used to evaluate the
performance of the proposed method. Comparisons with the state-of-the-art
methods demonstrate that the proposed method can effectively investigate the
spatial relationship among pixels and achieve better remote sensing image
segmentation results. Keywords: Semi-supervised; Remote Sensing Image; Image segmentation; Dynamic region
merging | |||
| Correcting Distortion of Views into Aquarium | | BIBAK | Full-Text | 163-170 | |
| Yukio Ishihara; Makio Ishihara | |||
| In this paper, we discuss a way to correct light distortion of views into an
aquarium. When we see fish in an aquarium, they appear closer and also distorted due to light distortion. In order to correct the distortion, the light rays travelling in the aquarium directly towards an observer should hit him/her after emerging from the aquarium. The basic idea is to capture those light rays with a reference camera, then merge the rays into a single view, which is displayed
to the observer. An experiment in a real world environment shows that light
distortion of a view into an aquarium can be corrected using the multiple
reference camera views. Keywords: distortion correction; aquarium; light distortion | |||
| A Dense Stereo Matching Algorithm with Occlusion and Less or Similar Texture Handling | | BIBA | Full-Text | 171-177 | |
| Hehua Ju; Chao Liang | |||
| Due to image noise, illumination and occlusion, obtaining an accurate and dense disparity map with stereo matching is still a challenge. In this paper, a new dense stereo matching algorithm is proposed. The proposed algorithm first uses cross-based regions to compute an initial disparity map, which can deal with regions with little or similar texture. Secondly, an improved hierarchical belief propagation scheme is employed to optimize the initial disparity map. Then a left-right consistency check and the mean-shift algorithm are used to handle occlusions. Finally, a local high-confidence strategy is used to refine the disparity map. Experiments with the Middlebury dataset validate the proposed algorithm. | |||
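The left-right consistency check mentioned in this abstract is a standard occlusion-handling step rather than something specific to the paper; a minimal NumPy sketch (the integer disparity convention and the tolerance threshold are illustrative assumptions) might look like this:

```python
import numpy as np

def left_right_consistency(disp_left, disp_right, tol=1):
    """Mark pixels whose left and right disparities disagree as occluded.

    disp_left, disp_right: integer disparity maps of shape (H, W).
    tol: maximum allowed disparity difference (assumed threshold).
    Returns a boolean mask that is True for consistent (non-occluded) pixels.
    """
    h, w = disp_left.shape
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    # Column each left-image pixel maps to in the right image.
    right_cols = cols - disp_left
    valid = (right_cols >= 0) & (right_cols < w)
    # Disparity the right image assigns to the matched location.
    disp_from_right = np.zeros_like(disp_left)
    disp_from_right[valid] = disp_right[rows[valid], right_cols[valid]]
    return valid & (np.abs(disp_left - disp_from_right) <= tol)
```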
| Robust Face Recognition System Using a Reliability Feedback | | BIBAK | Full-Text | 178-185 | |
| Shotaro Miwa; Shintaro Watanabe; Makito Seki | |||
| In the real world there are a variety of lighting conditions, and there
exist many directional lights as well as ambient lights. These directional lights cause partially dark and bright regions on faces. Even if the auto-exposure mode of the camera is used, these uneven pixel intensities remain, and in some cases saturated and black pixels appear. In this paper we propose a robust face recognition system using reliability feedback. The system evaluates the reliability of the input face image using prior distributions of each recognition feature, and if the reliability of the image is not sufficient for face recognition, it captures multiple images by changing the exposure parameters of the camera based on an analysis of the saturated and black pixels. As a result, the system can accumulate similarity scores from a sufficient number of reliable recognition features across multiple face images. By evaluating the system in an office environment, we achieve a three times better EER than a system with auto-exposure control only. Keywords: Face Recognition; Prior Probability; Probabilistic Model | |||
| A Developer-Oriented Visual Model for Upper-Body Gesture Characterization | | BIBAK | Full-Text | 186-195 | |
| Simon Ruffieux; Denis Lalanne; Omar Abou Khaled; Elena Mugellini | |||
| This paper focuses on a facilitated and intuitive representation of
upper-body gestures for developers. The representation is based on the user
motion parameters, particularly the rotational and translational components of
body segments during a gesture. The developed static representation aims to provide a rapid visualization of the complexity of each body segment involved in the gesture. The model and algorithms used to
produce the representation have been applied to a dataset of 10 representative
gestures to illustrate the model. Keywords: natural interaction; human-computer interaction; multimodality;
visualization tools; developer-oriented | |||
| Annotate. Train. Evaluate. A Unified Tool for the Analysis and Visualization of Workflows in Machine Learning Applied to Object Detection | | BIBAK | Full-Text | 196-205 | |
| Michael Storz; Marc Ritter; Robert Manthey; Holger Lietz; Maximilian Eibl | |||
| The development of classifiers for object detection in images is a complex
task that comprises the creation of representative and potentially large
datasets from a target object by repetitive and time-consuming intellectual
annotations, followed by a sequence of methods to train, evaluate and optimize
the generated classifier. This is conventionally achieved by the usage and
combination of many different tools. Here, we present a holistic approach to
this scenario by providing a unified tool that covers the single development
stages in one solution to facilitate the development process. We prove this
concept by the example of creating a face detection classifier. Keywords: Model-driven Annotation; Image Processing; Machine Learning; Object
Detection; Workflow Analysis | |||
| A New Real-Time Visual SLAM Algorithm Based on the Improved FAST Features | | BIBA | Full-Text | 206-215 | |
| Liang Wang; Rong Liu; Chao Liang; Fuqing Duan | |||
| Visual SLAM is less dependent on hardware, so it attracts growing interest. However, visual SLAM, especially Extended Kalman Filter-based monocular SLAM, is computationally expensive and hard to run in real time. In this paper, we propose an algorithm that uses the Binary Robust Independent Elementary Features (BRIEF) descriptor to describe Features from Accelerated Segment Test (FAST) keypoints, aiming to improve feature point extraction and matching, and combines this with the 1-point random sample consensus (RANSAC) strategy to speed up EKF-based visual SLAM. The proposed algorithm improves the robustness of EKF-based visual SLAM and enables it to operate in real time. Experimental results validate the proposed algorithm. | |||
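The FAST-plus-BRIEF front end described above can be approximated with OpenCV's ORB detector, which pairs an oriented FAST keypoint detector with a BRIEF-style binary descriptor. The sketch below covers only the feature extraction and matching step, not the EKF or 1-point RANSAC back end; the file names are placeholders and ORB is a stand-in for the paper's exact detector/descriptor pairing.

```python
import cv2

# Load two consecutive frames (paths are placeholders).
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# ORB = oriented FAST keypoints + rotated BRIEF binary descriptors.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the natural metric for binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} cross-checked matches between the two frames")
```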
| A Coastline Detection Method Based on Level Set | | BIBAK | Full-Text | 216-226 | |
| Qian Wang; Ke Lu; Fuqing Duan; Ning He; Lei Yang | |||
| This paper proposes a level set based coastline detection method by using
the template initialization and local energy minimization. It can perform sea-land boundary detection in infrared channel images. This method improves on the traditional level set algorithm by using GSHHS information to optimize the initialization procedure, which can reduce the number of iterations and the numerical errors. Moreover, this method optimizes the regional energy functional and can achieve rapid coastline detection. Experiments on IR images from the FY-2 satellite show that the method is fast and accurate. Keywords: Edge detection; level set method; IR image processing | |||
| Tracking End-Effectors for Marker-Less 3D Human Motion Estimation in Multi-view Image Sequences | | BIBAK | Full-Text | 227-235 | |
| Wenzhong Wang; Zhaoqi Wang; Xiaoming Deng; Bin Luo | |||
| We propose to track the end-effectors of human body, and use them as
kinematic constraints for reliable marker-less 3D human motion tracking. In the
presented approach, we track the end-effectors using particle filtering. The
tracked results are then combined with image features for 3D full pose
tracking. Experimental results verified that the inclusion of end-effectors'
constraints improves the tracking performance. Keywords: end-effectors; motion tracking; particle filtering | |||
| Kernel Based Weighted Group Sparse Representation Classifier | | BIBAK | Full-Text | 236-245 | |
| Bingxin Xu; Ping Guo; C. L. Philip Chen | |||
| Sparse representation classification (SRC) is a new framework for
classification and has been successfully applied to face recognition. However,
SRC cannot classify data well when they lie in an overlapping feature space. In addition, SRC treats different samples equally and ignores the cooperation among samples belonging to the same class. In this paper, a kernel-based weighted group sparse classifier (KWGSC) is proposed. The kernel trick is not only used to map the original feature space into a high-dimensional feature space, but also serves as a measure to select the members of each group. The weights reflect the degree of importance of the training samples in different groups. Substantial experiments on benchmark databases have been conducted to investigate the performance of the proposed method in image classification. The experimental results demonstrate that the proposed KWGSC approach has higher classification accuracy than SRC and other modified sparse representation classifiers. Keywords: Group sparse representation; kernel method; image classification | |||
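For readers unfamiliar with the plain SRC framework that KWGSC extends, the sketch below shows the basic residual rule: sparsely code a test sample over a dictionary built from the training samples and assign the class whose columns reconstruct it best. It uses scikit-learn's Lasso as the sparse coder and is a baseline illustration only, not the proposed kernel-based weighted group sparse classifier.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    """Plain sparse-representation classification (SRC) for one test sample.

    X_train: (n_samples, n_features) training matrix (rows l2-normalised below).
    y_train: (n_samples,) class labels.
    x_test:  (n_features,) test sample.
    alpha:   l1 penalty weight (illustrative value).
    """
    D = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(D.T, x_test)              # x_test ~= D.T @ coef
    coef = coder.coef_
    residuals = {}
    for c in np.unique(y_train):
        mask = (y_train == c)
        recon = D[mask].T @ coef[mask]  # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x_test - recon)
    return min(residuals, key=residuals.get)
```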
| Kernel Fuzzy Similarity Measure-Based Spectral Clustering for Image Segmentation | | BIBAK | Full-Text | 246-253 | |
| Yifang Yang; Yuping Wang; Yiu-ming Cheung | |||
| Spectral clustering has been successfully used in the field of pattern
recognition and image processing. The efficiency of spectral clustering,
however, depends heavily on the similarity measure adopted. A widely used
similarity measure is the Gaussian kernel function where Euclidean distance is
used. Unfortunately, the Gaussian kernel function is parameter sensitive and
the Euclidean distance is usually not suitable to the complex distribution
data. In this paper, a novel similarity measure called the kernel fuzzy similarity measure is proposed first. Then this novel measure is integrated into spectral clustering to obtain a new clustering method: kernel fuzzy similarity based spectral clustering (KFSC). To alleviate the computational complexity of KFSC for image segmentation, the Nyström method is used in KFSC. Finally, experiments on three synthetic texture images are conducted, and the results demonstrate the effectiveness of the proposed algorithm. Keywords: spectral clustering; kernel fuzzy-clustering; image segmentation;
Nyström method | |||
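As a point of reference for the parameter-sensitive Gaussian-kernel affinity that KFSC is designed to replace, a standard spectral clustering run over pixel features can be written with scikit-learn as follows; the gamma value and two-cluster setting are illustrative assumptions, and neither the kernel fuzzy similarity measure nor the Nyström acceleration is reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_pixels(features, n_segments=2, gamma=1.0):
    """Cluster pixel feature vectors with a Gaussian (RBF) affinity.

    features: (n_pixels, n_features) array, e.g. intensity plus texture responses.
    gamma:    RBF width; spectral clustering is known to be sensitive to it.
    """
    sc = SpectralClustering(n_clusters=n_segments, affinity="rbf",
                            gamma=gamma, assign_labels="kmeans", random_state=0)
    return sc.fit_predict(features)

# Toy usage: two well-separated blobs stand in for two texture classes.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = segment_pixels(feats)
```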
| Depth Camera Based Real-Time Fingertip Detection Using Multi-view Projection | | BIBAK | Full-Text | 254-261 | |
| Weixin Yang; Zhengyang Zhong; Xin Zhang; Lianwen Jin; Chenlin Xiong; Pengwei Wang | |||
| We propose a real-time fingertip detection algorithm based on depth
information. It can robustly detect a single fingertip regardless of the position and direction of the hand. From the depth information of the front view, depth maps of the top view and side view are generated. Due to the difference between finger thickness and fist thickness, we use a thickness histogram to segment the finger from the fist. Among the finger points, the farthest point from the palm center is the detected fingertip. We collected over 3,000 frames of writing-in-the-air sequences to test our algorithm. Our experiments show that the proposed algorithm detects the fingertip robustly and accurately. Keywords: Kinect; depth image; finger detection; fingertip detection; multiview
projection | |||
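The final step described above, taking the fingertip to be the finger point farthest from the palm centre, is simple enough to sketch directly. The finger mask and palm centre are assumed to come from the thickness-histogram segmentation, which is not reproduced here.

```python
import numpy as np

def fingertip_from_mask(finger_mask, palm_center):
    """Return the finger pixel farthest from the palm centre.

    finger_mask: boolean (H, W) array, True for pixels segmented as finger.
    palm_center: (row, col) coordinates of the palm centre.
    """
    rows, cols = np.nonzero(finger_mask)
    if rows.size == 0:
        return None  # no finger detected in this frame
    d2 = (rows - palm_center[0]) ** 2 + (cols - palm_center[1]) ** 2
    idx = int(np.argmax(d2))
    return int(rows[idx]), int(cols[idx])
```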
| Evaluation of Hip Impingement Kinematics on Range of Motion | | BIBAK | Full-Text | 262-269 | |
| Mahshid Yazdifar; Mohammadreza Yazdifar; Pooyan Rahmanivahid; Saba Eshraghi; Ibrahim Esat; Mahmoud Chizari | |||
| Femoroacetabular impingement (FAI) is a mechanical mismatch between the femur and the acetabulum. It causes abnormal contact stress and potential joint damage. This problem is more common in people with high levels of physical activity, such as ballet dancers and athletes. FAI causes pain in the hip joint and consequently leads to a reduction in range of motion. This study investigates whether changing the kinematic parameters of a hip joint with impingement can improve range of motion. A hip joint model was created in a finite element environment, and the range of motion was then determined. The original boundary conditions were applied to the initial hip impingement model. Then the gap between the femur and the acetabulum in the model was gradually changed to evaluate the effect of changing kinematic factors on range of motion.
Mimics (Materialise NV) software was used to generate the surface mesh of three-dimensional (3D) models of the hip joint from computerised tomography (CT) images of the subject patients diagnosed with FAI. The surface mesh models created in Mimics were then exported to Abaqus (Simulia, Dassault Systèmes) to create finite element (FE) models suitable for mechanical analysis. The surface mesh was converted into a volumetric mesh using the Abaqus meshing modules. Material properties of the bones and soft tissues were defined in the FE model. The kinematic values of the joint during a normal sitting stance, obtained from motion capture analysis in the gait lab, were used as boundary conditions in the FE model to simulate the motion of the hip joint during a normal sitting stance and to find possible contact at the location of the FAI. The centre of rotation of a female hip model with impingement was changed and the range of motion was measured in Abaqus. The results were compared to investigate the effect of the centre of rotation on range of motion for a hip with femoroacetabular impingement. There was a significant change in range of motion when the gap between the femur and the acetabulum was changed: decreasing the distance between the femur and the acetabulum decreased the range of motion, and when this distance changed, the location of impingement shifted. When the distance between the femur and the acetabulum was increased, there was no noticeable change in the location of impingement. This study concludes that changing the kinematics of a hip with impingement changes the range of motion. Keywords: hip joint; femoroacetabular impingement; finite element; kinematics | |||
| Tracking People with Active Cameras | | BIBAK | Full-Text | 270-279 | |
| Alparslan Yildiz; Noriko Takemura; Yoshio Iwai; Kosuke Sato | |||
| In this paper, we introduce a novel method for tracking multiple people using
multiple active cameras. The aim is to capture as many targets as possible at
any time using a limited number of active cameras.
In our context, an active camera is a statically located PTZ (pan-tilt-zoom) camera. Using active cameras for tracking has not been researched thoroughly, since it is relatively easier to use an increased number of fully static cameras. However, we believe this is costly, and deeper research on the employment of active cameras is necessary. Our contributions include efficiently removing the need to detect each person individually and estimating the future states of the system using a simplified fluid simulation. Keywords: multiple view; tracking; active cameras | |||
| Classification Based on LBP and SVM for Human Embryo Microscope Images | | BIBAK | Full-Text | 280-288 | |
| Yabo Yin; Yun Tian; Weizhong Wang; Fuqing Duan; Zhongke Wu; Mingquan Zhou | |||
| Embryo transfer is an extremely important step in the process of in-vitro
fertilization and embryo transfer (IVF-ET). The identification of the embryo
with the greatest potential for producing a child is a very big challenge faced
by embryologists. Most current scoring systems for assessing embryo viability are based on doctors' subjective visual analysis of the embryos' morphological features, so they provide only a very rough guide to potential. A classifier, as a computer-aided method based on pattern recognition, can help to automatically and accurately select embryos. This paper presents a classifier based on the support vector machine (SVM) algorithm. Key characteristics are formulated using the local binary pattern (LBP) algorithm, which can eliminate inter-observer variation, thus adding objectivity to the selection process. The experiment is conducted with 185 embryo images, including 47 "good" and 138 "bad" embryo images. The results show that our proposed method is robust and accurate, with a classification accuracy of about 80.42%. Keywords: embryo microscope images; feature extraction; automatic classifier; local binary pattern; support vector machine | |||
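A generic LBP-histogram-plus-SVM pipeline of the kind described can be sketched with scikit-image and scikit-learn as follows; the LBP radius, number of sampling points, uniform-pattern histogram and RBF kernel are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def lbp_histogram(gray_image, n_points=8, radius=1):
    """Uniform LBP histogram of a grayscale image as a fixed-length feature."""
    lbp = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus the non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_embryo_classifier(images, labels):
    """images: list of 2-D grayscale arrays; labels: 'good'/'bad' strings."""
    X = np.array([lbp_histogram(img) for img in images])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf
```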
| Semantic Annotation Method of Clothing Image | | BIBAK | Full-Text | 289-298 | |
| Lu Zhaolao; Mingquan Zhou; Wang Xuesong; Fu Yan; Tan Xiaohui | |||
| Semantic annotation is an essential issue for image retrieval. In this
paper, we take online clothing product images as samples. In order to annotate the images, we first segment each image into regions and then remove background and noise information. Illumination and lighting interference are also considered. The clothing position and region are determined by rules, and the images are translated into features. Visual words are prepared both manually and by computational methods, and image features are then mapped to the different visual words. Pre-processing and post-processing steps using face recognition and background rule analysis are applied. Finally, some segmentation and annotation results are presented to discuss the method. Keywords: Semantic annotation; Image segmentation; Graph cut | |||
| Audio-Based Pre-classification for Semi-automatic Facial Expression Coding | | BIBA | Full-Text | 301-309 | |
| Ronald Böck; Kerstin Limbrecht-Ecklundt; Ingo Siegert; Steffen Walter; Andreas Wendemuth | |||
| The automatic classification of users' internal affective and emotional states is nowadays relevant for many applications, ranging from organisational tasks to health care. To develop suitable automatic technical systems, training material is necessary for an appropriate adaptation towards users. In this paper, we present a framework which reduces the manual effort in the annotation of emotional states. It mainly pre-selects video material containing facial expressions for detailed coding according to the Facial Action Coding System, based on audio features, namely prosodic and mel-frequency features. Further, we present the results of first experiments which were conducted to give a proof-of-concept and to define the parameters for the classifier, which is based on Hidden Markov Models. The experiments were done on the EmoRec I dataset. | |||
| Sentimental Eyes! | | BIBAK | Full-Text | 310-318 | |
| Amitava Das; Björn Gambäck | |||
| A closer look at how users perform search is needed in order to best design
a more efficient next generation sentiment search engine and understand
fundamental behaviours involved in online review/opinion search processes. The
paper proposes utilizing personalized search, eye tracking and sentiment
analysis for better understanding of end-user behavioural characteristics while
making a judgement in a Sentiment Search Engine. Keywords: Sentiment Analysis; Sentiment Search; Eye Tracking | |||
| Developing Sophisticated Robot Reactions by Long-Term Human Interaction | | BIBA | Full-Text | 319-328 | |
| Hiromi Nagano; Miho Harata; Masataka Tokumaru | |||
| In this study, we proposed an emotion generation model for robots that considers mutual effects of desires and emotions. Many researchers are developing partner robots for communicating with people and entertaining them, rather than for performing practical functions. However, people quickly grow tired of these robots owing to their simplistic emotional responses. To solve this issue, we attempted to implement the mutual effects of desires and emotions using internal-states, such as physiological factors. Herein, the simulation results verified that the proposed model expresses complex emotions similar to humans. The results confirmed that the emotions expressed by the proposed model are more complex and realistic than those expressed by a reference model. | |||
| An Awareness System for Supporting Remote Communication -- Application to Long-Distance Relationships | | BIBAK | Full-Text | 329-338 | |
| Tomoya Ohiro; Tomoko Izumi; Yoshio Nakatani | |||
| Recently, the methods of conducting long distance communication have
dramatically changed due to improvements in communication technology including
TV phones, e-mail, and SNS (Social Networking Services). However, people still
have difficulty enjoying sufficient long-distance communication because subtle nuances and atmosphere are difficult to convey to a distant place. For
example, there are many romantic partners with feelings of anxiety about
long-distance relationships. This is because an environment that allows the
partners to understand each other has not been sufficiently supported. The
purpose of this study is to help people separated by a long distance to
understand each other by enabling the sensing of a partner's feelings from the
partner's behavior. Our target is long-distance romantic partners. When people
feel, sense, or are conscious of another person's existence or state, this
ability or state is called "awareness". Awareness is nonverbal communication.
Awareness sharing among people is very important for managing relationships
successfully, especially for people separated by a long distance. This is
because a partner will develop feelings of unease if awareness sharing is not
adequate. Our approach is as follows. First, we examine what kinds of actions are useful for representing feelings of love. Next, we monitor these actions in partners. Third, we summarize the actions as quantitative indicators. The prototype system was evaluated through experiments in which three pairs of partners used the system for two weeks. The results verified the effectiveness of the system, as it promoted mutual communication. Keywords: long distance communication; nonverbal communication; awareness | |||
| Emotion Sharing with the Emotional Digital Picture Frame | | BIBAK | Full-Text | 339-345 | |
| Kyoung Shin Park; Yongjoo Cho; Minyoung Kim; Ki-Young Seo; Dongkeun Kim | |||
| This paper presents the design and implementation of an emotional digital picture frame system, which is designed for a group of users to share their
emotions via photographs with their own emotional expressions. This system
detects user emotions using physiological sensor signals in real-time and
changes audio-visual elements of photographs dynamically in response to the
user's emotional state. This system allows user emotions to be shared with
other users in remote locations. Also, it provides an emotional rule authoring tool that enables users to create their own audio-visual expressions to fit their emotions. In particular, the rendering elements of a photograph can
appear differently when another user's emotion is received. Keywords: Emotional Digital Picture Frame; Emotional Intelligent Contents; Emotional
Rule Authoring Tool | |||
| Vision Based Body Dither Measurement for Estimating Human Emotion Parameters | | BIBAK | Full-Text | 346-352 | |
| Sangin Park; Deajune Ko; Mincheol Whang; Eui Chul Lee | |||
| In this paper, we propose a new body dither analysis method for estimating various kinds of human intention and emotion. In previous research on quantitatively measuring human intention and emotion, many kinds of physiological sensors such as ECG, PPG, GSR, SKT, and EEG have been adopted. However, these sensor-based methods can be inconvenient because of the sensor attachment to the user. Also, the negative emotion caused by this can be a noise factor when measuring a particular emotion. To solve these problems, we focus on facial dither by analyzing successive image frames captured from a conventional webcam. For that, the face region is first detected from the captured upper-body image. Then, the amount of facial movement is calculated by subtracting adjacent image frames. Since the calculated successive values of facial movement form a 1D temporal signal, conventional temporal signal processing methods can be used to analyze it. The results of a feasibility test inducing positive and negative emotions showed that more facial movement occurred when positive emotion was induced than in the case of negative emotion. Keywords: Body dither measurement; Emotion recognition; Image subtraction | |||
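The face-detection and frame-subtraction steps described above can be sketched with OpenCV as follows; the Haar cascade detector, the webcam index and the fixed 128x128 face crop are standard OpenCV conveniences assumed for illustration, not details taken from the paper.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def facial_movement_signal(video_source=0, n_frames=300):
    """Return a 1-D signal of inter-frame facial movement from a webcam."""
    cap = cv2.VideoCapture(video_source)
    signal, prev_face = [], None
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))
        if prev_face is not None:
            # Movement = mean absolute difference of adjacent face crops.
            signal.append(float(np.mean(cv2.absdiff(face, prev_face))))
        prev_face = face
    cap.release()
    return np.array(signal)
```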
| Evaluating Emotional State during 3DTV Viewing Using Psychophysiological Measurements | | BIBAK | Full-Text | 353-361 | |
| Kiyomi Sakamoto; Seiji Sakashita; Kuniko Yamashita; Akira Okada | |||
| Using a 50-inch 3DTV, we experimentally estimated the relationship between
TV viewers' emotional states and selected physiological indices. Our
experiments show complex emotional states to be significantly correlated with
these physiological indices, which comprise near-infrared spectroscopy (NIRS),
representing central nervous system activity, and the low frequency/high
frequency ratio (LF/HF), representing sympathetic nervous system activity.
These are useful indices for evaluating emotional states that include "feeling
of involvement." Keywords: emotional states; physiological and psychological measurements; NIRS; HR
variability; 3DTV; TV viewing | |||
| Affect-Based Retrieval of Landscape Images Using Probabilistic Affective Model | | BIBAK | Full-Text | 362-371 | |
| Yunhee Shin; Eun Yi Kim; Tae-Eung Sung | |||
| We consider the problem of ranking web image search results using human affects. For this, a Probabilistic Affective Model (PAM) is presented for predicting affects from the color compositions (CCs) of images, and a retrieval system is then developed using it. The PAM first segments an image into seed regions, then extracts CCs among seed regions and their neighbors, and finally infers the numerical ratings of certain affects by comparing the extracted CCs with pre-defined, human-devised color triplets. The performance of the proposed system has been studied at an online demonstration site where 52 users searched 16,276 landscape images using affects; the results demonstrate its effectiveness in affect-based image annotation and retrieval. Keywords: Affect-based image retrieval; probabilistic affective model; meanshift
clustering; color image scale | |||
| A Study on Combinative Value Creation in Songs Selection | | BIBAK | Full-Text | 372-380 | |
| Hiroko Shoji; Jun Okawa; Ken Kaji; Ogino Akihiro | |||
| Recently, advances in information and communications technology have allowed
us to easily download our favorite songs from the Internet. A song is generally played in sequence with various other songs more often than it is listened to on its own. The evolution of devices, however, has increased the number of portable songs, making it frequently difficult to nicely combine multiple songs from this flood into a satisfactory playlist. There are many existing research works on song search and retrieval, such as song search systems using affective words and song recommendation systems that consider the user's preferences. These existing studies, however, are intended for "selecting a single song suited to the user's image" and never take into consideration a combination of multiple songs. Therefore, it is difficult for existing systems to automatically generate a desired playlist. Keywords: combination value; playlist; recommendation; onomatopoeia | |||
| The Influence of Context Knowledge for Multi-modal Affective Annotation | | BIBAK | Full-Text | 381-390 | |
| Ingo Siegert; Ronald Böck; Andreas Wendemuth | |||
| To provide successful human-computer interaction, automatic emotion recognition from speech has received greater attention, which also increases the demand for valid data material. Additionally, the difficulty of finding appropriate labels is increasing.
Therefore, labels which are manageable by evaluators and cover nearly all occurring emotions have to be found. An important question is how context influences the annotators' decisions. In this paper, we present our investigations of emotional affective labelling on natural multi-modal data, examining different contextual aspects. We explore different types of contextual information and their influence on the annotation process, investigating two specific contextual factors: the observable channels and knowledge about the interaction course. We find that knowledge about the previous interaction course is needed to assess the affective state, but that the presence of the acoustic and video channels can partially replace the lack of discourse knowledge. Keywords: emotion comparison; affective state; labelling; context influence | |||
| Generation of Facial Expression Emphasized with Cartoon Techniques Using a Cellular-Phone-Type Teleoperated Robot with a Mobile Projector | | BIBA | Full-Text | 391-400 | |
| Yu Tsuruda; Maiya Hori; Hiroki Yoshimura; Yoshio Iwai | |||
| We propose a method for generating facial expressions emphasized with cartoon techniques using a cellular-phone-type teleoperated android with a mobile projector. Elfoid is designed to transmit the speaker's presence to their communication partner using a camera and microphone, and has a soft exterior that provides the look and feel of human skin. To transmit the speaker's presence, Elfoid sends not only the voice of the speaker but also emotional information captured by the camera and microphone. Elfoid cannot, however, display facial expressions because of its compactness and a lack of sufficiently small actuator motors. In this research, facial expressions are generated using Elfoid's head-mounted mobile projector to overcome the problem. Additionally, facial expressions are emphasized using cartoon techniques: movements around the mouth and eyes are emphasized, the silhouette of the face and shapes of the eyes are varied by projection effects, and color stimuli that induce a particular emotion are added. In an experiment, representative face expressions are generated with Elfoid and emotions conveyed to users are investigated by subjective evaluation. | |||
| A Biofeedback Game for Training Arousal Regulation during a Stressful Task: The Space Investor | | BIBA | Full-Text | 403-410 | |
| Olle Hilborn; Henrik Cederholm; Jeanette Eriksson; Craig Lindley | |||
| Emotion regulation is a topic that has considerable impact on our everyday lives, among other things through emotional biases that affect our decision making. A serious game that was built in order to train emotion regulation is presented and evaluated here. The evaluation consisted of usability testing and then an experiment that targeted the difficulty of the game. The results suggested adequate usability and a difficulty that requires the player to engage in managing their emotions in order to have a winning strategy. | |||
| Responses Analysis of Visual and Linguistic Information on Digital Signage Using fNIRS | | BIBAK | Full-Text | 411-420 | |
| Satoru Iteya; Atsushi Maki; Toshikazu Kato | |||
| When customers receive recommended information through digital signage, it
is important not only to choose suitable commodities matching each customer's
preferences, but also to choose suitable information media to express their
features. This paper proposes a method to estimate their preferences on
information media by measuring brain activity. First step in order to achieve
our final goal, we disclose that there are significant differences in brain
activity in case subjects receive recommended information. The result of
analysis shows there are significant differences in brain activity, especially
visual cortex and language area. Keywords: fNIRS; Preference on Commodities and Information Parts; Information
Recommendation | |||
| A Method for Promoting Interaction Awareness by Biological Rhythm in Elementary School Children | | BIBAK | Full-Text | 421-430 | |
| Kyoko Ito; Kosuke Ohmori; Shogo Nishida | |||
| Recently, in Japan, education about the ability to make decisions as part of
a group composed of children with different ways of thinking has become more
important. Therefore, discussion activities have been adopted in elementary
school education. This study considers a method that supports discussion
activities by making children aware of the "state" (i.e., atmosphere, progress)
of their group during discussion, and of the ways they are influencing this
state themselves. We developed a system which allows us to visualize the
entrainment of the biological rhythm to present the group's state. An
experiment using this system was conducted to clarify whether the children were
aware of the group state during discussion, and how they were affected by this
awareness. We found that this system has the potential to support children when
considering ways of participating in the discussion. Also, it was found that
the system can act as an interface, encouraging children to think about the
importance of listening to others in the group. Keywords: Education support; Elementary school education; discussion activity;
interaction; biological rhythm | |||
| Internet Anxiety: Myth or Reality? | | BIBAK | Full-Text | 431-440 | |
| Santosh Kumar Kalwar; Kari Heikkinen; Jari Porras | |||
| The purpose of this paper is to determine if Internet anxiety is a myth or
reality using literature, questionnaires, and analysis of the collected data.
Results showed that the Internet anxiety phenomenon is mostly reality. By
placing strong emphasis on the existent Internet anxiety phenomenon, the HCI
community could constructively build effective tools and techniques to mitigate
users' anxiety. Keywords: Internet; anxiety; concept; qualitative; myth; reality | |||
| Brain Function Connectivity Analysis for Recognizing Different Relation of Social Emotion in Virtual Reality | | BIBAK | Full-Text | 441-447 | |
| Jonghwa Kim; Dongkeun Kim; Sangmin Ann; Sangin Park; Mincheol Whang | |||
| Social emotions are emotions that are induced by human social relationships when people interact with others. In this study, we aim to analyze brain functional connectivity in terms of different relations of social emotions. Brain functional connectivity can be used to observe neural responses through EEG coherence features during a cognitive process. In this study, EEG coherence is measured under different social emotion evocations. The auditory and visual stimuli for inducing social emotions were presented to participants for 20.5 s (±3.1 s). The participants were asked to imagine and describe a similar emotional experience after watching each video clip. The measured EEG coherence was grouped into two different social emotion categories, the information-sharing relation and the emotion-sharing relation, and compared using the results of subjective evaluation and independent t-tests. The information-sharing relation was related to brain connectivity at the right temporo-occipital position, associated with language memory. The emotion-sharing relation was related to brain connectivity between the left frontal and right parietal positions, associated with a visual information processing area. Keywords: Emotion; Social emotion; Emotion relation; EEG coherence; Brain function
connectivity | |||
| A Mobile Brain-Computer Interface for Freely Moving Humans | | BIBAK | Full-Text | 448-453 | |
| Yuan-Pin Lin; Yijun Wang; Chun-Shu Wei; Tzyy-Ping Jung | |||
| Recent advances in mobile electroencephalogram (EEG) systems featuring dry
electrodes and wireless telemetry have promoted the applications of
brain-computer interfaces (BCIs) in our daily life. In the field of
neuroscience, understanding the underlying neural mechanisms of unconstrained
human behaviors, i.e. freely moving humans, is accordingly in high demand. The
empirical results of this study demonstrated the feasibility of using a mobile
BCI system to detect steady-state visual-evoked potential (SSVEP) of the
participants during natural human walking. This study considerably facilitates
the process of bridging laboratory-oriented BCI demonstrations into mobile
EEG-based systems for real-life environments. Keywords: EEG; BCI; SSVEP; moving humans | |||
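SSVEP responses are commonly detected by comparing spectral power at the candidate stimulation frequencies; the sketch below uses a simple FFT-based power comparison (canonical correlation analysis is another widely used option) and is a generic illustration, not the detection method of the system described above.

```python
import numpy as np

def detect_ssvep(eeg, fs, candidate_freqs, n_harmonics=2):
    """Pick the stimulation frequency with the highest summed spectral power.

    eeg: (n_channels, n_samples) EEG segment (e.g. from occipital electrodes).
    fs: sampling rate in Hz.
    candidate_freqs: flicker frequencies of the SSVEP targets, in Hz.
    """
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    scores = []
    for f in candidate_freqs:
        score = 0.0
        for h in range(1, n_harmonics + 1):
            bin_idx = int(np.argmin(np.abs(freqs - h * f)))
            score += spectrum[:, bin_idx].mean()  # average power across channels
        scores.append(score)
    return candidate_freqs[int(np.argmax(scores))]
```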
| The Solid Angle of Light Sources and Its Impact on the Suppression of Melatonin in Humans | | BIBAK | Full-Text | 454-463 | |
| Philipp Novotny; Peyton Paulick; Markus J. Schwarz; Herbert Plischke | |||
| Our group conducted a preliminary study to examine the influence of
different sizes of light sources, and therefore different illuminance levels,
at the retina. Six participants were exposed to two lighting scenarios and
saliva samples were collected to determine melatonin levels throughout the
experiment. Melatonin levels were analyzed to compare the efficacy of each lighting scenario and its ability to suppress melatonin. Our data show a trend that both lighting scenarios are capable of suppressing melatonin. Moreover, the preliminary data show that the lighting scenario with the large solid angle is more effective at suppressing melatonin than the lighting scenario with the small solid angle. Further testing with a larger patient population will need to be done to establish the statistical significance of our findings. Our further studies will repeat this experiment with a larger test group and modify the time frame between the different lighting scenarios. Keywords: light; health; melatonin; suppression; optimal healing environment;
chronodisruption; circadian rhythm; shift work; dementia; light therapy | |||
| Facial Electromyogram Activation as Silent Speech Method | | BIBAK | Full-Text | 464-473 | |
| Lisa Rebenitsch; Charles B. Owen | |||
| A wide variety of alternative speech-free input methods have been developed,
including speech recognition, gestural commands, and eye typing. These methods
are beneficial not only for the disabled, but for situations where the hands
are preoccupied. However, many of these methods are sensitive to noise,
tolerate little movement, and require the input method to be the primary focus of the
environment. Morse code offers an alternative when background noise cannot be
managed. A Morse code-inspired application was developed employing
electromyograms. Several muscles were explored to determine potential electrode
sites that possessed good sensitivity and were robust to normal movement. The
masseter jaw muscle was selected for later testing. The prototype application
demonstrated that the jaw muscle can be used as a Morse "key" while being
robust to normal speech. Keywords: Silent Speech; Human computer interaction; User interfaces | |||
| The Impact of Gender and Sexual Hormones on Automated Psychobiological Emotion Classification | | BIBAK | Full-Text | 474-482 | |
| Stefanie Rukavina; Sascha Gruss; Jun-Wen Tan; David Hrabal; Steffen Walter; Harald C. Traue; Lucia Jerg-Bretzke | |||
| It is a challenge to make cognitive technical systems more empathetic to user emotions and dispositions. Among channels like facial behavior and nonverbal cues, psychobiological patterns of emotional or dispositional behavior contain rich information, which is continuously available and hardly controllable at will. However, within this area of research, gender differences
or even hormonal cycle effects as potential factors in influencing the
classification of psychophysiological patterns of emotions have rarely been
analyzed so far.
In our study, emotions were induced with a blocked presentation of pictures from the International Affective Picture System (IAPS) and Ulm pictures. For the automated emotion classification, in a first step five features from the heart rate signal were calculated, and in a second step these were combined with two features of the facial EMG. The study focused mainly on gender differences in automated emotion classification and to a lesser degree on classification accuracy with the Support Vector Machine (SVM) per se. We obtained diminished classification results for a gender-mixed population, and also diminished results when mixing young females across their hormonal cycle phases. Thus, we could show an improvement in accuracy rates when subdividing the population according to gender, which is discussed as a possibility for improving automated classification results. Keywords: emotion classification; gender; hormonal cycle; heart rate; facial EMG | |||
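A minimal version of the feature fusion and gender subdivision described above could look like the following scikit-learn sketch; the data layout, RBF kernel and five-fold cross-validation are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def classify_emotions(hr_features, emg_features, labels, genders):
    """Compare SVM accuracy on the mixed population vs. per-gender subgroups.

    hr_features:  (n_trials, 5) heart-rate features (NumPy array).
    emg_features: (n_trials, 2) facial-EMG features.
    labels:       (n_trials,) emotion labels.
    genders:      (n_trials,) gender code per trial, e.g. 'f' or 'm'.
    """
    X = np.hstack([hr_features, emg_features])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    results = {"mixed": cross_val_score(clf, X, labels, cv=5).mean()}
    for g in np.unique(genders):
        idx = genders == g
        results[g] = cross_val_score(clf, X[idx], labels[idx], cv=5).mean()
    return results
```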
| Evaluation of Mono/Binocular Depth Perception Using Virtual Image Display | | BIBAK | Full-Text | 483-490 | |
| Shys-Fan Yang-Mao; Yu-Ting Lin; Ming-Hui Lin; Wen-Jun Zeng; Yao-lien Wang | |||
| Augmented reality (AR) is a very popular technology in various applications.
It allows the user to see the real world, with virtual objects composited with
or superimposed upon the real world. The usability of interactive user
interface based on AR relies heavily on visibility and depth perception of
content, particularly for virtual image displays. In this paper, we performed several basic evaluations of a commercial see-through head-mounted display based on factors that can change depth perception: binocular or monocular viewing, viewing distance, eye dominance, content changed in shape or size, and indication by hand or by a reference object. The experimental results reveal many interesting features, which will serve as user interface design guidelines for similar see-through near-eye display systems. Keywords: augmented reality; virtual image display; see-through near-eye display; user
interface; depth perception | |||
| Visual Image Reconstruction from fMRI Activation Using Multi-scale Support Vector Machine Decoders | | BIBAK | Full-Text | 491-497 | |
| Yu Zhan; Jiacai Zhang; Sutao Song; Li Yao | |||
| The correspondence between the detailed contents of a person's mental state
and human neuroimaging has yet to be fully explored. Previous research
reconstructed contrast-defined images using a combination of multi-scale local
image decoders, where contrast for local image bases was predicted from fMRI
activity by sparse logistic regression (SLR). The present study extends this
research to probe into accurate and effective reconstruction of images from
fMRI. First, support vector machine (SVM) was employed to model the
relationship between contrast of local image and fMRI; second, additional
3-pixel image bases were considered. Reconstruction results demonstrated that
the time consumption in modeling the local image decoder was reduced to 1% by
SVM compared to SLR. Our method also improved the spatial correlation between
the stimulus and reconstructed image. This finding indicated that our method
could read out what a subject was viewing and reconstruct simple images from
brain activity at high speed. Keywords: Image Reconstruction; fMRI; Multi-scale; SVM | |||
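In the multi-scale decoding scheme described here, one decoder is trained per local image basis to predict that basis's contrast from the fMRI pattern; the simplified per-basis SVM setup below (binary contrast labels, a linear kernel and the data layout are assumptions for illustration) shows the general shape of such a decoder bank.

```python
import numpy as np
from sklearn.svm import SVC

def train_local_decoders(fmri, basis_contrasts):
    """Train one SVM per local image basis.

    fmri:            (n_trials, n_voxels) activation patterns.
    basis_contrasts: (n_trials, n_bases) binary contrast labels per local basis.
    Returns a list of fitted decoders, one per basis.
    """
    decoders = []
    for b in range(basis_contrasts.shape[1]):
        clf = SVC(kernel="linear")
        clf.fit(fmri, basis_contrasts[:, b])
        decoders.append(clf)
    return decoders

def reconstruct(fmri_trial, decoders, basis_images):
    """Sum each basis image weighted by its predicted contrast."""
    preds = np.array([d.predict(fmri_trial[None, :])[0] for d in decoders])
    return np.tensordot(preds, basis_images, axes=1)  # weighted sum of bases
```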
| Alterations in Resting-State after Motor Imagery Training: A Pilot Investigation with Eigenvector Centrality Mapping | | BIBAK | Full-Text | 498-504 | |
| Rushao Zhang; Hang Zhang; Lele Xu; Mingqi Hui; Zhiying Long; Yijun Liu; Li Yao | |||
| Motor training, including motor execution and motor imagery training, has
been indicated to be effective in the rehabilitation of mental disorders and in motor skill learning. In related neuroimaging studies, the resting state has been employed as a new perspective besides the task state to examine the neural mechanism of motor execution training. However, motor imagery training, as another part of motor training, has been little investigated. To address this issue, eigenvector centrality mapping (ECM) was applied to explore the resting state before and after motor imagery training. ECM computes eigenvector centrality to capture intrinsic neural architecture at the voxel level without any prior assumptions. Our
results revealed that the significant increases of eigenvector centrality were
in the precuneus and medial frontal gyrus (MFG) for the experimental group but
not for the control group. These alterations may be associated with the
sensorimotor information integration and inner state modulation of motor
imagery training. Keywords: Motor imagery; functional magnetic resonance imaging (fMRI); ECM; precuneus;
medial frontal gyrus (MFG) | |||