
Proceedings of the 2005 International Conference on Entertainment Computing

Fullname:ICEC 2005: 4th International Conference on Entertainment Computing
Editors:Fumio Kishino; Yoshifumi Kitamura; Hirokazu Kato; Noriko Nagata
Location:Sanda, Japan
Dates:2005-Sep-19 to 2005-Sep-21
Publisher:Springer Berlin Heidelberg
Series:Lecture Notes in Computer Science 3711
Standard No:DOI: 10.1007/11558651 hcibib: ICEC05; ISBN: 978-3-540-29034-6 (print), 978-3-540-32054-8 (online)
Papers:56
Pages:540
Links:Online Proceedings
  1. IFIP SG16 Chair's Welcome Address
  2. Interactive Digital Storytelling
  3. Graphics
  4. Advanced Interaction Design
  5. Social Impact and Evaluation
  6. Seamful/Seamless Interface
  7. Body and Face
  8. Robot
  9. Music and Sound
  10. Mixed Reality and Mobile
  11. Education
  12. Virtual Reality and Simulation
  13. Theory
  14. Posters and Demonstration

IFIP SG16 Chair's Welcome Address

A New Framework for Entertainment Computing: From Passive to Active Experience BIBAKFull-Text 1-12
  Ryohei Nakatsu; Matthias Rauterberg; Peter Vorderer
In this paper a new framework for entertainment computing is introduced and discussed. Based on existing models and concepts, the links and relationships between enjoyment, flow, presence, and different forms of experience are shown, and their contributions to the new framework are reviewed. To address the more fundamental and theoretical issues regarding entertainment, we draw on existing theories of information processing, enjoyment, and flow. Some possible and probably important conclusions for the design of new entertainment systems are drawn.
Keywords: Adaptivity; active experience; complexity; enjoyment; entertainment; flow; incongruity; information; integrated presence; learning; play

Interactive Digital Storytelling

Cultural Computing with Context-Aware Application: ZENetic Computer BIBAFull-Text 13-23
  Naoko Tosa; Seigow Matsuoka; Brad Ellis; Hirotada Ueda; Ryohei Nakatsu
We offer Cultural Computing as a method for cultural translation that uses scientific methods to represent the essential aspects of culture. Including images that heretofore have not been the focus of computing, such as images of Eastern thought and Buddhism, and the Sansui paintings, poetry and kimono that evoke these images, we projected the style of communication developed by Zen schools over hundreds of years into a world for the user to explore -- an exotic Eastern Sansui world. Through encounters with Zen Koans and haiku poetry, the user is constantly and sharply forced to confirm the whereabouts of his or her self-consciousness. However, there is no "right answer" to be found anywhere.
Automatic Conversion from E-Content into Animated Storytelling BIBAFull-Text 24-35
  Kaoru Sumi; Katsumi Tanaka
This paper describes a medium, called Interactive e-Hon, for helping children to understand contents from the Web. It works by transforming electronic contents into an easily understandable "storybook world." In this world, easy-to-understand contents are generated by creating 3D animations that include contents and metaphors, and by using a child-parent model with dialogue expression and a question-answering style comprehensible to children.
Key Action Technique for Digital Storytelling BIBAFull-Text 36-47
  Hiroshi Mori; Jun'ichi Hoshino
Building story-based interactive systems is important for entertainment and education. In a storytelling system, a user can change the discourse of the story by talking with a character. The problem is that scene goals in realistic situations are complex, and multiple goals often affect the choice of the current action. When a user asks the character something, the character needs to arbitrate among multiple goals based on their priorities, the current action plan, and the content of the conversation. In this paper, we propose a method for controlling the multiple temporal goals of a story character. The character controls its reactions to user interactions by using temporal key actions (TKA). A TKA is a temporal sub-goal with a time constraint and a priority value. When a TKA is newly added, the action sequences are interpolated using an action network. We created a story-based animation example in which the user can be a detective in a virtual London.

Graphics

A New Constrained Texture Mapping Method BIBAKFull-Text 48-58
  Yan-Wen Guo; Jin Wang; Xiu-Fen Cui; Qun-Sheng Peng
The validity of texture mapping is an important issue for point- or mesh-based surfaces. This paper presents a new constrained texture mapping method that is capable of ensuring the validity of the mapping. The method employs a "divide-and-rule" strategy to construct a direct correspondence between the respective patches of the texture image and the 3D mesh model with feature matching. The mesh model is segmented based on an "approximate shortest path". Further, a "virtual image" relaxation scheme is performed to refine the rendering effect. We show how mesh morphing can be conducted efficiently with our constrained texture mapping method. Experimental results demonstrate satisfactory effects for both texture mapping and mesh morphing.
Keywords: Texture mapping; Approximate shortest path; Morphing
Protect Interactive 3D Models via Vertex Shader Programming BIBAFull-Text 59-66
  Zhigeng Pan; Shusen Sun; Jian Yang; Xiaochao Wei
In 3D games, virtual museums and other interactive environments, 3D models are commonly used interactively. Many of these models are valuable and require protection from misuse such as unlawful exhibition and malicious distribution. A practical solution is to prevent the interactive user from reconstructing precise 3D models from the data stream between applications and 3D APIs (such as Direct3D and OpenGL) without affecting interaction. The scheme proposed in this paper protects 3D models via vertex shader programming: the data of the 3D models are first encrypted in the 3D application and then decrypted in the vertex shader.
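The abstract's encrypt-in-application, decrypt-in-shader idea can be illustrated with a toy sketch. This is not the authors' scheme: the keystream, the additive cipher, and the CPU-side "shader" stand in for whatever the real system does on the GPU; the point is only that plaintext vertex data never appear in the API data stream.

```python
# Toy illustration: vertex data are perturbed by a keyed stream in the
# application and restored only at render time (in the real system, in
# a vertex shader), so an eavesdropper on the Direct3D/OpenGL stream
# sees only the encrypted coordinates. All details here are invented.
import random

def keystream(key, n):
    rng = random.Random(key)          # deterministic per-key stream
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def encrypt(vertices, key):
    ks = keystream(key, len(vertices))
    return [v + k for v, k in zip(vertices, ks)]   # per-component offset

def decrypt_in_shader(enc, key):
    ks = keystream(key, len(enc))      # shader regenerates the stream
    return [v - k for v, k in zip(enc, ks)]

verts = [0.0, 1.0, 0.5, -0.25]
enc = encrypt(verts, key=42)
round_trip = decrypt_in_shader(enc, key=42)  # matches verts up to float rounding
```

A real implementation would also have to keep the key out of reach of the client, which is the harder part of the problem.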
An Optimized Soft 3D Mobile Graphics Library Based on JIT Backend Compiler BIBAFull-Text 67-75
  Bailin Yang; Lu Ye; Zhigeng Pan; Guilin Xu
Mobile devices are now among the most widespread devices with rendering capabilities. With their improved performance, displaying 3D scenes on them has become a reality. This paper implements an optimized software 3D mobile graphics library based on a JIT backend compiler, suited to the characteristics of mobile devices. To fully exploit the advantages of JIT technology, the paper improves the traditional rasterization model and proposes a hybrid rasterization model that integrates the advantages of both the per-scanline and per-pixel rasterization models. The backend compiler is the critical factor in running 3D application programs, so we implement a backend compiler for particular CPUs and propose corresponding optimization techniques. The experimental results indicate that our 3D graphics library achieves good performance.

Advanced Interaction Design

Frame Rate Control in Distributed Game Engine BIBAFull-Text 76-87
  Xizhi Li; Qinming He
Time management (or frame rate control) is the backbone system that provides feedback to a number of game engine modules so that the engine produces physically correct, interactive, stable and consistent graphics output. This paper discusses time-related issues in game engines and proposes a unified time management (more specifically, frame rate control) architecture that can easily be applied to existing game engines. The frame rate system has been used in our own distributed game engine and may also find applications in other multimedia simulation systems.
A Universal Interface for Video Game Machines Using Biological Signals BIBAFull-Text 88-98
  Keisuke Shima; Nan Bu; Masaru Okamoto; Toshio Tsuji
This paper proposes a universal entertainment interface for the operation of amusement machines, such as video game machines and radio-controlled toys. In the proposed interface system, biological signals are used as input; users can choose a specific biological signal and a configuration of signal measurement in accordance with their preference, physical condition (disabled or not), and degree of disability. From the input signals, the user's intended operation can be estimated with a probabilistic neural network (PNN), and control commands can then be determined accordingly. With the proposed interface, people, even those with severe physical disabilities, are able to operate amusement machines. To verify the validity of the proposed method, experiments were conducted with a video game machine.
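The PNN mentioned in the abstract is, in its classical form, a kernel-density classifier. The sketch below shows that general idea on invented biosignal feature vectors and invented command labels; it is not the authors' network or their features.

```python
# Minimal probabilistic neural network (PNN) classifier of the general
# kind the abstract mentions: each class score is the average of Gaussian
# kernels centred on that class's training samples, and the highest-
# scoring class wins. Features, labels and sigma are made-up examples.
import math

def pnn_classify(sample, training, sigma=0.5):
    """training: dict mapping label -> list of feature vectors."""
    best, best_score = None, -1.0
    for label, vecs in training.items():
        score = sum(
            math.exp(-sum((s - v) ** 2 for s, v in zip(sample, vec))
                     / (2 * sigma ** 2))
            for vec in vecs) / len(vecs)
        if score > best_score:
            best, best_score = label, score
    return best

train = {
    "left":  [[0.1, 0.9], [0.2, 0.8]],   # e.g. features of one muscle signal
    "right": [[0.9, 0.1], [0.8, 0.2]],
}
print(pnn_classify([0.15, 0.85], train))  # -> left
```

One attraction of a PNN here is that adding a new user-specific gesture only means adding training vectors, with no retraining step.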
Development of a System to Measure Visual Functions of the Brain for Assessment of Entertainment BIBAKFull-Text 99-105
  Akihiro Yagi; Kiyoshi Fujimoto; Tsutomu Takahashi; Atsushi Noritake; Masumi Iwai; Noriyuki Suzuki
The unique event-related brain potential (ERP) called the eye fixation related potential (EFRP) is obtained by averaging EEGs at the terminations of saccadic eye movements. The authors first review studies on EFRP in games and in ergonomics, and then introduce a new system for the assessment of visual entertainment using EFRP. The distinctive feature of the system is that the ERP can be measured under conditions where the subject moves his or her eyes. The system can analyze EEG data from many sites on the head and can display, in real time, topographical maps of the related brain activities. EFRP is classified into several components by latency; we developed a new system that displays topographical maps for three latency ranges in order to analyze psychological and neural activity in the brain in more detail. This system will be useful for the assessment of visual entertainment.
Keywords: ERP; eye movement; attention; game; movie
SportsVBR: A Content-Based TV Sports Video Browsing and Retrieval System BIBAFull-Text 106-113
  Liu Huayong; Zhang Hui
An advanced content-based sports video browsing and retrieval system, SportsVBR, is proposed in this work. Its main features are event-based sports video browsing and keyword-based sports video retrieval. The paper first defines the basic structure of the SportsVBR system, and then introduces a novel approach that integrates multimodal analysis, including visual stream analysis, speech recognition, speech signal processing and text extraction, to realize event-based selection of video clips. Experimental results on sports video of World Cup football games indicate that multimodal analysis is effective for video browsing and retrieval: users can quickly browse event-based video clips and input keywords according to a predefined sports vocabulary database. The system proves helpful and effective for overall understanding of sports video content.

Social Impact and Evaluation

Online Community Building Techniques Used by Video Game Developers BIBAFull-Text 114-125
  Christopher Ruggles; Greg Wadley; Martin R. Gibbs
Online fan communities are an important element in the market success of a videogame, and game developers have begun to recognize the importance of fostering online communities associated with their games. In this paper we report on a study that investigated the techniques used by game developers to maintain and promote online communities within and around their games. We found that game developers consider online communities to be important to the success of both single-player and online multiplayer games, and that they actively support and nourish these communities. Online community building techniques identified in the study are categorized and discussed. The results represent a snapshot of current developer thinking and practice with regard to game-based online communities. The study augments existing research concerning the relationship between design features, online community and customer loyalty in new media, Internet and game-related industries.
Aggregation of Action Symbol Sub-sequences for Discovery of Online-Game Player Characteristics Using KeyGraph BIBAFull-Text 126-135
  Ruck Thawonmas; Katsuyoshi Hata
KeyGraph is a visualization tool for discovering relations among text-based data. This paper discusses a new application of KeyGraph: the discovery of player characteristics in Massively Multiplayer Online Games (MMOGs). To achieve high visualization ability for this application, we propose a preprocessing method that aggregates players' action symbol sub-sequences into more informative forms. To verify whether this aim is achieved, we conducted an experiment in which human subjects were asked to classify the types of players in a simulated MMOG using KeyGraphs produced with and without the proposed preprocessing. The experimental results confirm the effectiveness of the proposed method.
Agreeing to Disagree -- Pre-game Interaction and the Issue of Community BIBAFull-Text 136-147
  Jonas Heide Smith
Playing online multiplayer games entails matching oneself with other players. To do so, players must typically employ various types of communication tools that are part of the game or of game-external matching services. But despite the centrality of these tools they receive little attention in discussions of game design and game HCI. This paper seeks to rectify this situation by presenting an in-depth analysis of two pre-game interaction systems which represent influential approaches. Whereas one of these games allows for high player control and thus inspires negotiation, the other allows player communication mainly to help players pass time between matches. The two approaches are discussed in the light of HCI researcher Jenny Preece's concept of "sociability" and zoologist Amotz Zahavi's demonstration of criteria for "honest signalling". The paper concludes with a discussion of the trade-off facing game designers between efficiency and community-supporting social interaction.
Keyword Discovery by Measuring Influence Rates on Bulletin Board Services BIBAKFull-Text 148-154
  Kohei Tsuda; Ruck Thawonmas
In this paper, we focus on the relations between comments on tree-style Bulletin Board Services (BBSs) and propose a method for discovering keywords by measuring influence rates on them. Our method is based on an extension of the Influence Diffusion Model (IDM) proposed by N. Matsumura et al. in 2002, which measures the influence a term in a comment diffuses to succeeding comments that include the term and reply to that comment. We additionally consider the influence a term diffuses over comments that include the term and all reply to the same comment, as well as over nearby comments that include the term, regardless of their reply relations. Evaluation results on tree-style BBS data related to Massively Multiplayer Online Games (MMOGs) show that the proposed method achieves higher precision and recall than IDM and a classical method based on term frequencies. Keywords discovered by the proposed method can thus be used effectively by MMOG publishers to incorporate users' needs into game content.
Keywords: Keyword Discovery; Tree-style BBSs; Comments; MMOGs
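The core IDM intuition described in the abstract, that a term gains influence each time it propagates from a comment into a reply that reuses it, can be sketched in a few lines. The thread structure, scoring and data below are illustrative only, not the authors' or Matsumura et al.'s exact formulation.

```python
# Toy sketch of the Influence Diffusion Model (IDM) idea: a term's
# influence grows each time it appears in both a comment and a direct
# reply to that comment. The extended variants in the paper (sibling
# replies, nearby comments) are omitted from this sketch.
from collections import defaultdict

def idm_scores(comments):
    """comments: list of dicts with 'id', 'parent' (or None), 'terms' (set)."""
    by_id = {c["id"]: c for c in comments}
    score = defaultdict(int)
    for c in comments:
        parent = by_id.get(c["parent"])
        if parent is None:
            continue
        # a term diffuses when the reply reuses it from its parent
        for term in c["terms"] & parent["terms"]:
            score[term] += 1
    return dict(score)

thread = [
    {"id": 1, "parent": None, "terms": {"quest", "lag"}},
    {"id": 2, "parent": 1,    "terms": {"lag", "server"}},
    {"id": 3, "parent": 2,    "terms": {"lag"}},
    {"id": 4, "parent": 1,    "terms": {"quest"}},
]
print(idm_scores(thread))  # 'lag' diffuses twice, 'quest' once
```

Ranking terms by such diffusion counts, rather than raw frequency, is what lets structurally central terms surface as keywords.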

Seamful/Seamless Interface

Seamful Design for Location-Based Mobile Games BIBAFull-Text 155-166
  Gregor Broll; Steve Benford
Seamful design is a new approach that reveals and exploits the inevitable technical limitations of Ubiquitous Computing technology rather than hiding them. In this paper we introduce its general ideas and apply them to the design of location-aware games for mobile phones. We present our own seamful trading game, "Tycoon", to explore the seams of this platform and show how to incorporate them into the design of mobile games. We evaluate how applications that use cell positioning on the mobile phone platform can exploit seams for better interaction, gameplay and usability.
A Display Table for Strategic Collaboration Preserving Private and Public Information BIBAFull-Text 167-179
  Yoshifumi Kitamura; Wataru Osawa; Tokuo Yamaguchi; Haruo Takemura; Fumio Kishino
We propose a new display table that allows multiple users to interact with both private and public information on a shared display in a face-to-face co-located setting. With this table users can create, manage and share information intuitively, strategically and cooperatively by naturally moving around the display. Users can interactively control private and public information space seamlessly according to their spatial location and motion. It enables users to dynamically choose negotiation partners, create cooperative relationships and strategically control the information they share and conceal. We see the proposed system as especially suited for strategic cooperative tasks in which participants collaborate while attempting to increase individual benefits, such as various trading floor-like and auction scenarios.
Gamble -- A Multiuser Game with an Embodied Conversational Agent BIBAFull-Text 180-191
  Matthias Rehm; Michael Wissner
In this article, we present Gamble, a small game of dice that is played by two users and an embodied conversational agent (ECA). With its ability to communicate and collaborate, an ECA is well suited to engaging users in entertaining social interactions, and Gamble is used as a test bed for such multiuser interactions. A description of the system's components and a thorough analysis of the agent's behavior control mechanisms are followed by insights gained from a first user study.
Touchable Interactive Walls: Opportunities and Challenges BIBAFull-Text 192-202
  Kelly L. Dempski; Brandon L. Harvey
Very large, high resolution, interactive screens -- also known as interactive walls -- can be used to deliver entertainment and advertising content that is qualitatively different from what is available in television, kiosk, or desktop formats. At a sufficient resolution and size, the touchable wall can offer the engaging interactivity of full-fledged entertainment software, but on a scale that enables new kinds of public experiences. This paper describes some of the opportunities enabled by what we believe to be a new computing medium in its own right. We also describe some of the new design challenges inherent in this medium, together with suggestions based on our own approach to those challenges.

Body and Face

Generic-Model Based Human-Body Modeling BIBAKFull-Text 203-214
  Xiaomao Wu; Lizhuang Ma; Ke-Sen Huang; Yan Gao; Zhihua Chen
This paper presents a generic-model based human-body modeling method which takes the anatomical structure of the human body into account. The generic model contains the anatomical structure of the bones and muscles of the human body. For a given target skin mesh, the generic model can be scaled according to the skin and then morphed to fit the shape of the target skin mesh. After an anchoring process, the layered model can be animated via key-framing or motion capture data. The advantage of this approach is its convenience and efficiency compared to existing anatomically-based modeling methods. Experimental results demonstrate the success of the proposed human-body modeling method.
Keywords: Anatomically-based modeling; human body modeling; generic model
Facial Expression Recognition Based on Two Dimensions Without Neutral Expressions BIBAFull-Text 215-222
  Young-Suk Shin; Young Joon Ahn
We present a new approach for recognizing facial expressions on two dimensions without detectable cues such as a neutral expression, which has essentially zero motion energy. To remove much of the variability due to lighting, a zero-phase whitening filter was applied. A principal component analysis (PCA) representation that excludes the first principal component was developed as the feature set for facial expression recognition regardless of neutral expressions. The results of facial expression recognition using a neural network model are compared with two-dimensional values of internal states derived from experimental subjects' emotion ratings of facial expression pictures. The proposed algorithm overcomes the limitations of expression recognition based on a small number of discrete categories of emotional expression, sensitivity to lighting, and dependence on cues such as a neutral expression.
Subjective Age Estimation System Using Facial Images BIBAFull-Text 223-229
  Naoyuki Miyamoto; Yumi Jinnouchi; Noriko Nagata; Seiji Inokuchi
We propose a relative estimation method for subjective age, the age people imagine themselves to be, using facial images and the subjects' chronological (real) ages. We conducted a rating-scale experiment in which facial images were presented to subjects as stimuli. Each subject rated, over a range of responses, whether an image looked older than themselves. Fitting an approximation curve to these responses, we define the zero-crossing point of the curve as the subjective age. The experimental results show that subjective age tends to lie in the negative direction (a tendency to estimate oneself as younger than one actually is). There are also differences between genders, between age groups, and between facial expressions such as neutral and smiling.
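The zero-crossing definition in the abstract can be made concrete with a small sketch. The straight-line fit, the rating scale and the data below are stand-ins; the authors fit an approximation curve whose exact form the abstract does not specify.

```python
# Sketch of the zero-crossing idea: fit a curve to "looks older than me"
# ratings as a function of the stimulus face's age, and take the age at
# which the fitted response crosses zero as the subjective age. A least-
# squares straight line stands in for the authors' approximation curve.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx        # y = slope * x + intercept

def subjective_age(stim_ages, ratings):
    slope, intercept = fit_line(stim_ages, ratings)
    return -intercept / slope            # x where the fitted rating is 0

# Ratings: negative = "looks younger than me", positive = "looks older".
ages    = [20, 30, 40, 50]
ratings = [-3, -1, 1, 3]
print(subjective_age(ages, ratings))     # crosses zero at 35.0
```

A subjective age below the subject's real age would then be the "negative direction" tendency the abstract reports.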
A Video Based Personalized Face Model Generation Approach for Network 3D Games BIBAFull-Text 230-238
  Xiangyong Zeng; Jian Yao; Mandun Zhang; Yangsheng Wang
We have developed a fast generation system for personalized 3D face models and plan to apply it in networked 3D games. The system uses one video camera to capture the player's frontal face image for 3D modeling and does not need calibration or extensive manual tuning. The 3D face model in games is represented by a 3D geometry mesh and a 2D texture image. The personalized geometry mesh is obtained by deforming an original mesh according to the relative positions of the player's facial features, which are automatically detected from the frontal image. The texture image is obtained from the same image. To save storage space and network bandwidth, only the feature data and texture data of each player are sent to the game server and then to the other clients. Finally, players can see their own faces in multiplayer games.

Robot

Live Feeling on Movement of an Autonomous Robot Using a Biological Signal BIBAFull-Text 239-247
  Shigeru Sakurazawa; Keisuke Yanagihara; Yasuo Tsukahara; Hitoshi Matsubara
Using Khepera Simulator software, we developed an autonomous robot with a simple neural network by applying the skin conductance response of an observer who was watching the movement of the agent. First, we found that the signals were generated when the observer felt that the robot faced a crucial phase, such as hitting a wall. We therefore used the signals as errors that were back-propagated to the network in the robot. Through questionnaires completed by the observer, the movement of this robot was compared with the movement of two other kinds of robots, in which random signals or switch signals (turned on at the robot's crucial phases) were used as errors instead of the skin conductance responses. From the results, we found that among the three kinds of robots, the movement of the robot driven by biological signals was the most similar to the movement of something alive. We believe that applications of biological signals can promote natural interactions between humans and machines.
Detection of Speaker Direction Based on the On-and-Off Microphone Combination for Entertainment Robots BIBAFull-Text 248-255
  Takeshi Kawabata; Masashi Fujiwara; Takanori Shibutani
An important function of entertainment robots is voice communication with humans. Realizing it requires accurate speech recognition and a mechanism for detecting the speaker's direction. The direct-noise problem is serious in such speech processing: a microphone attached to the robot body receives not only human voices but also motor and mechanical noises directly. These direct noises are often louder than distant voices and fatally degrade the speech recognition rate. Even if a microphone close to the user ("on-mic") is used for speech recognition, the body microphones ("off-mic") are still necessary for detecting the speaker's direction under severe direct-noise conditions. This paper describes a new method for detecting the speaker's direction based on the combination of on and off microphones. The system searches for the spectral elements of the "on-mic" voice in the other "off-mic" channels, and the segregated power ratio or the time delay between the "off-mic" channels is used to detect the speaker's direction. Experiments show that the proposed method effectively improves direction detection accuracy while the robot moves.
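The time-delay cue mentioned in the abstract is the classic inter-microphone lag. The sketch below estimates it by brute-force cross-correlation on synthetic signals; the real system first isolates the "on-mic" voice spectrum in each "off-mic" channel before comparing them, a step this sketch omits.

```python
# Rough sketch of the direction cue: the arrival-time delay of the same
# sound between two "off-mic" channels indicates the speaker's direction.
# Here the lag is found by maximizing a brute-force cross-correlation
# over a small window of candidate lags. Signals are synthetic.
def best_lag(a, b, max_lag=5):
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

sig     = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
delayed = [0, 0, 0, 0, 1, 2, 1, 0, 0, 0]  # same pulse, 2 samples later
print(best_lag(sig, delayed))  # -> 2
```

Given the microphone spacing and the speed of sound, such a lag converts directly into an angle of arrival.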
Robot Navigation by Eye Pointing BIBAFull-Text 256-267
  Ikuhisa Mitsugami; Norimichi Ukita; Masatsugu Kidode
We present a novel wearable system for robot navigation. In this system, a user can operate multiple robots in a very intuitive way: the user gazes at a robot and then gazes at its destination on the floor. As this system needs no equipment in the environment, the user can apply it anywhere on a flat floor with only the wearable system. In this paper, we show how to estimate the positions and orientations of the robots and the gazed position. We also describe implementation of the robot navigation system.
Virtual Human with Regard to Physical Contact and Eye Contact BIBAFull-Text 268-278
  Asami Takayama; Yusuke Sugimoto; Akio Okuie; Tomoya Suzuki; Kiyotaka Kato
In the future, virtual humans are expected to play important roles in the amusement field, such as characters, avatars and robots. For realizing a virtual human, computer graphics (CG) has advantages in cost and maintenance. However, a human feels slight discomfort with a robot face rendered in CG, because a robot head in the virtual world does not fit the environment of the real world. To resolve this problem, we have proposed a CG robot head with sensors that respond to the surrounding environment. Focusing on physical contact, this paper proposes a robot head using CG and a touch screen panel. We also propose robot eyes that realistically reflect the surrounding environment, toward the realization of eye contact. Experiments show that the robot's face changes according to environmental changes, as a human's does in the real world.
Power, Death and Love: A Trilogy for Entertainment BIBAKFull-Text 279-290
  Ben Salem; Matthias Rauterberg
In this paper we review the latest understandings about what emotions are and their roles in perception, cognition and action in the context of entertainment computing. We highlight the key influence emotions have in the perception of our surrounding world, as well as in the initiation of action. Further to this we propose a model for emotions and demonstrate how it could be used for entertainment computing. We then present a review of emotion based toys and show our own development in this area. We conclude our paper with a discussion on how entertainment systems would gain from a better and more comprehensive understanding of emotions.
Keywords: Emotion; power death love trilogy; empathy; interactive toy

Music and Sound

The MUSICtable: A Map-Based Ubiquitous System for Social Interaction with a Digital Music Collection BIBAKFull-Text 291-302
  Ian Stavness; Jennifer Gluck; Leah Vilhan; Sidney Fels
Popular acceptance of the mp3 digital music standard has greatly increased the complexity of organizing and playing large music collections. Existing digital music systems do not adequately support exploration of a collection, nor do they cater to multi-user interaction in a social setting. In this paper, we present the design of a ubiquitous system that utilizes spatial visualization to support exploration and social interaction with a large music collection. Our interface is based on the interaction semantic of influence, which allows users to affect and control the mood of music being played without the need to select a set of specific songs. This design is inspired, to some extent, by Gaver's work on ludic design. We implemented a prototype as a proof of concept of our design. User testing demonstrates that our system encourages participation and strengthens social cohesion. Our work contributes to interactive interface research in that it extends the utility of map-based visualization of digital music.
Keywords: User interface; music map; social interaction; entertainment; tabletop display; music classification; mp3
Painting as an Interface for Timbre Design BIBAFull-Text 303-314
  Michael Bylstra; Haruhiro Katayose
There is a challenge in designing a system for timbre design that is engaging for new users and enables experienced users to intuitively design a diverse range of complex timbres. This paper discusses some of the issues involved in achieving these aims and proposes that a timbre can be intuitively represented as an image. The design of TimbrePainter, a system that uses images painted with a mouse to specify the parameters of a harmonic additive synthesizer, is described.
ism: Improvisation Supporting Systems with Melody Correction and Key Vibration BIBAFull-Text 315-327
  Tetsuro Kitahara; Katsuhisa Ishida; Masayuki Takeda
This paper describes improvisation support for musicians who do not have sufficient improvisational playing experience. The goal of our study is to enable such players to learn the skills necessary for improvisation and to enjoy it. In achieving this goal, we have two objectives: enhancing their skill for instantaneous melody creation and supporting their practice for acquiring this skill. For the first objective, we developed a system that automatically corrects musically inappropriate notes in the melodies of users' improvisations. For the second objective, we developed a system that points out musically inappropriate notes by vibrating corresponding keys. The main issue in developing these systems is how to detect musically inappropriate notes. We propose a method for detecting them based on the N-gram model. Experimental results show that this N-gram-based method improves the accuracy of detecting musically inappropriate notes and our systems are effective in supporting unskilled musicians' improvisation.
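The N-gram-based detection step described in the abstract can be illustrated with a bigram sketch: train note-transition statistics on reference melodies, then flag notes whose conditional probability given the previous note is low. The training data, the lack of smoothing, and the threshold below are all placeholder choices, not the authors' model.

```python
# Illustrative sketch of N-gram-based detection of musically
# "inappropriate" notes: a note is flagged when its probability given
# the previous note, under bigram statistics learned from reference
# melodies, falls below a threshold. Notes are MIDI pitch numbers.
from collections import defaultdict

def train_bigrams(melodies):
    counts = defaultdict(lambda: defaultdict(int))
    for mel in melodies:
        for prev, cur in zip(mel, mel[1:]):
            counts[prev][cur] += 1
    return counts

def flag_notes(melody, counts, threshold=0.1):
    flags = []
    for prev, cur in zip(melody, melody[1:]):
        total = sum(counts[prev].values()) or 1   # avoid division by zero
        p = counts[prev][cur] / total
        flags.append(p < threshold)
    return flags  # one flag per note after the first

reference = [[60, 62, 64, 65, 67], [60, 62, 64, 62, 60]]
model = train_bigrams(reference)
print(flag_notes([60, 62, 61], model))  # 61 never follows 62 -> flagged
```

In the paper's two systems, such a flag would trigger either an automatic correction of the note or a vibration of the corresponding key.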
Physically-Based Sound Synthesis on GPUs BIBAFull-Text 328-333
  Qiong Zhang; Lu Ye; Zhigeng Pan
Modal synthesis is a physically-motivated sound modeling method that has been used successfully in many applications. However, if a large number of modes is involved in a simulated scene, synthesizing the sound in real time becomes an overwhelming task without special hardware support. An implementation based on commodity graphics hardware is proposed as an alternative solution, exploiting the parallelism and programmability of the graphics pipeline.
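For readers unfamiliar with modal synthesis, the computation being moved to the GPU is a sum of exponentially damped sinusoids, one per mode. The CPU sketch below shows that core loop; the mode frequencies, dampings and gains are invented example values, and the paper's contribution is evaluating many such modes in parallel on graphics hardware rather than serially as here.

```python
# Minimal CPU sketch of modal synthesis: each contact sound is a sum of
# exponentially damped sinusoids (one per resonant mode of the object).
# Per-sample cost grows with the mode count, which is what motivates a
# parallel GPU implementation. Mode parameters below are made up.
import math

def modal_synth(modes, duration=0.01, sr=44100):
    """modes: list of (frequency_hz, damping, amplitude) triples."""
    n = int(duration * sr)
    samples = []
    for i in range(n):
        t = i / sr
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in modes)
        samples.append(s)
    return samples

bell = [(440.0, 8.0, 1.0), (1170.0, 12.0, 0.4), (1940.0, 20.0, 0.2)]
out = modal_synth(bell)   # 10 ms of a bell-like strike
```

Because every mode and every sample is independent given the parameters, the loop maps naturally onto fragment or vertex programs.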
On Cognition of Musical Grouping: Relationship Between the Listeners' Schema Type and Their Musical Preference BIBAFull-Text 334-344
  Mitsuyo Hashida; Kenzi Noike; Noriko Nagata; Haruhiro Katayose
We assume that there are various musical grouping perceptions according to the degree of schemata, and that there are two dominant music grouping schemata: (a) an accent-oriented grouping schema and (b) a phrasing schema (musical expression referred to as the Rainbow type). To verify these hypotheses, we investigated how listeners' groupings change when the inner voice of Beethoven's Piano Sonata "Pathetique" is replaced with chords. We identified three listener groups: those with a strong (a) schema (type A), those for whom (a) is prior to (b) (type AF), and those for whom (b) is prior to (a) while paying attention to the inner voice (type FAI). We verified that type A listeners prefer rap music, rock music, listening in a lively place, listening to party music, and listening to lyrics, while type FAI listeners prefer Bach, Chopin, and listening alone and quietly.

Mixed Reality and Mobile

Augmented Reality Agents in the Development Pipeline of Computer Entertainment BIBAFull-Text 345-356
  István Barakonyi; Dieter Schmalstieg
Augmented reality (AR) has recently stepped beyond the usual scope of applications like machine maintenance, military training and production, and has been extended to the realm of entertainment including computer gaming. This paper discusses the potential AR environments offer for embodied animated agents, and demonstrates several advanced immersive content and scenario authoring techniques in AR through example applications.
Collaborative billiARds: Towards the Ultimate Gaming Experience BIBAFull-Text 357-367
  Usman Sargaana; Hossein S. Farahani; Jong Weon Lee; Jeha Ryu; Woontack Woo
In this paper, we identify the features that enhance the gaming experience in Augmented Reality (AR) environments: a tangible user interface, force feedback, audio-visual cues, collaboration, and mobility. We base our findings on lessons learnt from existing AR games, and we apply these results to billiARds, an AR system that provides force feedback in addition to visual and aural cues. billiARds supports interaction through a vision-based tangible AR interface. Two users can easily operate the proposed system while playing a collaborative billiARds game around a table, collaborating through both virtual and real objects. A user study confirmed that the resulting system delivers an enhanced gaming experience by supporting the five features highlighted in this paper.
Multi-dimensional Game Interface with Stereo Vision BIBAFull-Text 368-376
  Yufeng Chen; Mandun Zhang; Peng Lu; Xiangyong Zeng; Yangsheng Wang
A novel stereo vision tracking method is proposed to implement an interactive Human Computer Interface (HCI). First, a feature detection method is introduced to obtain the location and orientation of the feature accurately and efficiently. Second, a search method is applied that uses probability in the time, frequency, or color space to optimize the search strategy. The 3D information is then retrieved through calibration and triangulation. Compared with other methods, up to 5 degrees of freedom (DOFs) can be recovered from a single feature, including its coordinates in 3D space and its orientation. Experiments show that the method is efficient and robust enough for a real-time game interface.
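The calibration-and-triangulation step can be sketched under standard rectified pinhole-stereo assumptions (not necessarily the paper's exact pipeline): depth follows from disparity, and lateral position from back-projection.

```python
def triangulate(xl, yl, xr, focal_px, baseline_m):
    """Recover a 3D point from a rectified stereo pair.

    Assumptions (illustrative, not the paper's calibration): image
    coordinates are in pixels relative to the principal point, the
    cameras are rectified, and disparity = xl - xr > 0.
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point at infinity or mismatched correspondence")
    z = focal_px * baseline_m / disparity   # depth along the optical axis
    x = xl * z / focal_px                   # lateral position
    y = yl * z / focal_px                   # vertical position
    return x, y, z
```

The two extra DOFs beyond position would come from the feature's detected in-image orientation, which triangulation alone does not provide.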
Experiments of Entertainment Applications of a Virtual World System for Mobile Phones BIBAFull-Text 377-388
  Hiroyuki Tarumi; Kasumi Nishihara; Kazuya Matsubara; Yuuki Mizukubo; Shouji Nishimoto; Fusako Kusunoki
Using a virtual world system for GPS-phones, we have developed a small RPG-like game that provides information to tourists. Compared with other virtual systems for mobile terminals, the cost of our system is much lower because it requires only phones already on the market, with no additional devices. The game follows a famous Japanese tale, with the player cast as the hero. We recruited twenty subjects, who played for 35 minutes on average. Through evaluation sessions, we found that the system is rated highly as an entertainment system.

Education

A Tutoring System for Commercial Games BIBAFull-Text 389-400
  Pieter Spronck; Jaap van den Herik
In computer games, tutoring systems are used for two purposes: (1) to introduce a human player to the mechanics of a game, and (2) to ensure that the computer plays the game at a level of playing strength that is appropriate for the skills of a novice human player. Regarding the second purpose, the issue is not to produce occasionally a weak move (i.e., a give-away move) so that the human player can win, but rather to produce not-so-strong moves under the proviso that, on a balance of probabilities, they should go unnoticed. This paper focuses on using adaptive game AI to implement a tutoring system for commercial games. We depart from the novel learning technique 'dynamic scripting' and add three straightforward enhancements to achieve an 'even game', viz. high-fitness penalising, weight clipping, and top culling. Experimental results indicate that top culling is particularly successful in creating an even game. Hence, our conclusion is that dynamic scripting with top culling can implement a successful tutoring system for commercial games.
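Dynamic scripting selects rules into a script with probability proportional to learned weights; top culling then excludes rules whose weight has grown beyond a threshold, i.e. exactly the rules that proved too strong, weakening the AI toward an even game. A sketch with illustrative thresholds rather than the paper's parameter values:

```python
import random

def select_script(weights, script_size, cull_above=None, rng=random):
    """Weight-proportional rule selection for dynamic scripting.

    With top culling (cull_above set), rules whose weight exceeds the
    threshold are excluded from selection. Threshold and weights here
    are placeholders, not the paper's tuned values.
    """
    pool = [i for i, w in enumerate(weights)
            if cull_above is None or w <= cull_above]
    chosen = []
    for _ in range(min(script_size, len(pool))):
        total = sum(weights[i] for i in pool)
        r = rng.uniform(0, total)   # roulette-wheel draw
        acc = 0.0
        for i in pool:
            acc += weights[i]
            if r <= acc:
                chosen.append(i)
                pool.remove(i)
                break
    return chosen
```

Weight clipping would additionally cap each weight at a maximum after learning updates, whereas top culling leaves the weights intact and only filters the selection pool.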
Non-verbal Mapping Between Sound and Color-Mapping Derived from Colored Hearing Synesthetes and Its Applications BIBAFull-Text 401-412
  Noriko Nagata; Daisuke Iwai; Sanae H. Wake; Seiji Inokuchi
This paper presents an attempt at 'non-verbal mapping' between music and images. We use the physical parameters of key, height, and timbre for sound, and hue, brightness, and chroma for color, to clarify their direct correspondence. First we derive a mapping rule between sound and color from people with the special ability known as 'colored hearing'. Next we apply the mapping to ordinary people using a paired comparison test and key identification training, and we find phenomena similar to colored hearing among them. The experimental results suggest that ordinary people also have a latent ability to map between sound and color.
Design and Implementation of a Pivotal Unit in a Games Technology Degree BIBAFull-Text 413-421
  Shri Rai; Chun Che Fung; Arnold Depickere
This paper reports the development and delivery of the first Games Development and Programming unit at Murdoch University, Western Australia. Unlike other games courses that have been repackaged or remodeled from existing multimedia courses, the proposed course and units focus on meeting the needs of industry while maintaining a high academic standard. As such, great demands have been placed on the students. The unit objectives, structure, and examples of assignments from the first cohort of students are described in this paper. Experience has shown that the students were able to perform well with positive encouragement. The ability to work in a team also proved to be an important factor: it was shown to be related to the standard of the students' work, and it is an essential attribute expected by the industry.
Interactive Educational Games for Autistic Children with Agent-Based System BIBAFull-Text 422-432
  Karim Sehaba; Pascal Estraillier; Didier Lambert
This article addresses design issues relevant to the Autism project, which aims at developing computer games for the diagnosis and training of children with autism and accompanying mental disorders. The approach is placed in the broader context of interactive environments, of which computer games are a special case. Its distinguishing characteristic is the capability of user adaptation, which is based on a model maintained through observation of user interactions, the knowledge of therapists, and the case-based reasoning paradigm.

Virtual Reality and Simulation

User Experiences with a Virtual Swimming Interface Exhibit BIBAFull-Text 433-444
  Sidney Fels; Steve Yohanan; Sachiyo Takahashi; Yuichiro Kinoshita; Kenji Funahashi; Yasufumi Takama; Grace Tzu-Pei Chen
We created an exhibit based on a new locomotion interface for swimming in a virtual reality ocean environment as part of our Swimming Across the Pacific art project. In the exhibit we suspend the swimmer, using a hang-gliding and leg harness with pulleys and ropes, in an 8ft-cubic swimming apparatus. The virtual reality ocean world has sky, sea waves, splashes, an ocean floor, and an avatar representing the swimmer, who wears a tracked head-mounted display so that he can watch himself swim. The audience sees the swimmer hanging in the apparatus, overlaid on a video projection of his ocean-swimming avatar. The avatar mimics the real swimmer's movements, sensed by eight magnetic position trackers attached to the swimmer. Over 500 people tried swimming and thousands watched during two exhibitions. We report our observations of swimmers and audiences engaging in and enjoying the experience, leading us to identify design strategies for interactive exhibitions.
Toward Web Information Integration on 3D Virtual Space BIBAFull-Text 445-455
  Yasuhiko Kitamura; Noriko Nagata; Masataka Ueno; Makoto Nagamune
We report an implementation of GeneSys, a Web information integration system on a 3D virtual space. We have built Kwansei Gakuin Kobe Sanda Campus as a virtual space, called VKSC, in which character agents guide the user. VKSC reflects Web information about the weather, the school calendar, and the laboratories on campus, and the behaviour of the agents changes depending on this information. Conventional Web systems mainly aim at providing information and knowledge to users; GeneSys can additionally provide virtual experiences.
Ikebana Support System Reflecting Kansei with Interactive Evolutionary Computation BIBAKFull-Text 456-467
  Junichi Saruwatari; Masafumi Hagiwara
In this paper, we propose an ikebana support system that reflects kansei, employing interactive evolutionary computation. In conventional flower layout support systems, users cannot adjust the presented layouts; in the proposed system, users can both adjust and evaluate them. Moreover, new functions are implemented so that the system can learn users' preferences from their adjustments and evaluations. In addition, we handle the basic styles of ikebana, which differ among the many schools of Japanese ikebana. Evaluation experiments indicate that the proposed system can present layouts that satisfy both ikebana beginners and advanced learners.
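An interactive evolutionary computation loop replaces the automatic fitness function with the user's evaluations. A minimal sketch in which the layout encoding, mutation operator, and selection scheme are placeholders (the proposed system additionally learns preferences from the user's manual adjustments):

```python
import random

def iec_generation(population, user_scores, mutate, rng=random):
    """One generation of interactive evolutionary computation.

    The user, not a fitness function, scores each candidate layout;
    high-scoring candidates survive and are mutated to fill the next
    generation. All operators here are illustrative placeholders.
    """
    # Rank candidates by the user's scores, best first.
    ranked = sorted(zip(user_scores, range(len(population))), reverse=True)
    parents = [population[i] for _, i in ranked[:max(2, len(population) // 2)]]
    next_gen = list(parents)  # elitism: keep the best layouts as-is
    while len(next_gen) < len(population):
        next_gen.append(mutate(rng.choice(parents)))
    return next_gen
```

In the ikebana setting, a candidate would encode flower positions and angles, and the user's adjustments could be fed back as additional mutations biased toward the edited layout.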
Keywords: Evolutionary computation; ikebana; image scale

Theory

Effects of Team-Based Computer Interaction: The Media Equation and Game Design Considerations BIBAKFull-Text 468-479
  Daniel Johnson; John Gardner
The current paper applies media equation research to video game design. The paper presents a review of the existing media equation research, describes a specific study conducted by the authors, discusses how the findings of the study can be used to inform future game design, and explores how other media equation findings might be incorporated into game design. The specific study, discussed in detail in the paper, explores the notion of team formation between humans and computer team-mates. The results show that while highly experienced users will accept a computer as a team-mate, they tend to react more negatively towards the computer than to human teammates (a 'Black Sheep' Effect).
Keywords: Media Equation; Team Formation; Groups; Game Design
The Ethics of Entertainment Computing BIBAKFull-Text 480-487
  Andy Sloane
This paper investigates a number of issues relating to the development of entertainment computing (EC) and the home environment. The consumption of EC is closely related to the efforts of companies to market it. At the same time, many different factors affect the quality of life of the individual consumers who participate in the entertainment. There are a number of unresolved conflicts that are currently not addressed by the providers of EC software and the manufacturers of hardware. These conflicts are explored, and the ethics of an example scenario are discussed.
Keywords: Ethics; home; leisure; quality of life
Notes on the Methodology of Pervasive Gaming BIBAKFull-Text 488-495
  Bo Kampmann Walther
The paper introduces four axes of pervasive gaming (PG): mobility, distribution, persistence, and transmediality. Further, it describes and analyses three key units of PG (rules, entities, and mechanics) as well as discusses the role of space in PG by differentiating between tangible space, information embedded space, and accessibility space.
Keywords: Pervasive gaming; game rules; gameplay; game theory; ludology; game space; H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities; Performance, Human Factors, Theory
From Hunt the Wumpus to EverQuest: Introduction to Quest Theory BIBAFull-Text 496-506
  Espen Aarseth
The paper will explore how the landscape types and the quest types are used in various games, how they structure the gameplay, how they act as bones for the game-content (graphics, dialogue, sound) and how they sometimes form the base on which a story is imposed and related to the player. The question then becomes, how does the quest structure influence the story structure? How do the limitations of the quest combinations limit the kinds of story that are possible? How rich can the imposed story be, without breaking the gameplay? Are landscape and quest-structure the dominant factors in quest game design, to which the story-ambitions must defer? The main thesis of the paper is that if we understand the powerful but simple structure -- the grammar -- of quests (how they work, how they are used) we can understand both the limits and the potential of these kinds of games.

Posters and Demonstration

A Computerized Interactive Toy: TSU.MI.KI BIBAFull-Text 507-510
  Yuichi Itoh; Tokuo Yamaguchi; Yoshifumi Kitamura; Fumio Kishino
Young children often build various structures with wooden blocks; structures that are often used for pretend play, subtly improving children's creativity and imagination. Based on a traditional Japanese wooden block toy, Tsumiki, we propose a novel interactive toy for children, named "TSU.MI.KI", maintaining the physical assets of wooden blocks and enhancing them with automation. "TSU.MI.KI" consists of a set of computerized blocks equipped with several input/output devices. Children can tangibly interact with a virtual scenario by manipulating and constructing structures from the physical blocks, and by using input and output devices that are integrated into the blocks.
Multimodal Wayfinding in a Driving Simulator for the Schaire Internet Chair, a Networked Rotary Motion Platform BIBAFull-Text 511-514
  Kazuya Adachi; Ken'ichiro Iwai; Eiji Yamada; Michael Cohen
We are exploring IDSS (intelligent driver support systems), especially way-finding presented via spatial audio. ("Way-finding" refers to giving a driver directions, as with car navigation ["Car-Nabi"] GPS/GIS systems.) We have developed a networked driving simulator as a virtual-reality-based interface (control/display system), integrated with the Schaire rotary motion platform for azimuth display, stereographic display for 3D graphics, and spatial audio (sound spatialization) for way-finding cues.
Making Collaborative Interactive Art "Ohka Rambu" BIBAFull-Text 515-518
  Ryota Oiwa; Haruhiro Katayose; Ryohei Nakatsu
This paper describes an environment for editing and performing interactive media art/entertainment. The design background is to provide artistic/entertainment pieces, in which multiple people can participate without special sensory equipment. This paper introduces a gesture input function using color tags in the image and some matching functions to be used for writing a piece. This paper shows an example of interactive media art/entertainment, called "Ohka Rambu," and describes the usage and possibilities of the environment.
Agents from Reality BIBAFull-Text 519-522
  Kazuhiro Asai; Atsushi Hattori; Katsuya Yamashita; Takashi Nishimoto; Yoshifumi Kitamura; Fumio Kishino
A fish tank is established in cyberspace, modeled on the real world, in which swim autonomous fish agents generated from images captured in the actual world. The behavior of each fish is determined by an emotional model that reflects its personality in response to encountered events and user interactions.
AR Pueblo Board Game BIBAFull-Text 523-526
  Jong Weon Lee; Byung Chul Kim
This paper presents a new tangible interface for vision-based Augmented Reality (AR) systems. Tangible AR interfaces provide users with seamless interaction with virtual objects in AR systems, but they restrict users' motions. A new tangible AR interface is designed to overcome this limitation: two hexahedral objects are attached together, and the shape of the resulting interface removes the motion restrictions of existing tangible AR interfaces. Users can move and rotate the interface freely to manipulate virtual objects in an AR environment. This improvement is useful for applications that require unrestricted rotation of virtual objects. The Pueblo board game was developed to demonstrate the usability of the new tangible AR interface.
Aesthetic Entertainment of Social Network Interaction: Free Network Visible Network BIBAFull-Text 527-530
  Adrian David Cheok; Ke Xu; Wei Liu; Diego Diaz Garcia; Clara Boj Tovar
Free Network Visible Network is an active media system that uses the possibilities of new technologies to create new landscapes in public space by visualizing the data that flow between digital networks. It changes our perception of the world through the "invisible meanings" that surround us. Mixed reality technology and an Internet traffic listening system are adopted in this project to visualize, floating in space, the information exchanged between users of a network. People can experience, in a new and exciting way, colorful virtual objects representing the digital data flying around them. These virtual objects change their shape, size, and color according to the characteristics of the information circulating in the network. Through the objects' movement through space, users experience fun and aesthetic entertainment in observing social digital communications in their physical space and city streets.
Motion Illusion in Video Images of Human Movement BIBAFull-Text 531-534
  Kiyoshi Fujimoto; Akihiro Yagi
We found a novel motion illusion: when a video clip presents a moving person, the background image appears to move incorrectly. We investigated this illusion in psychophysical experiments using a movie display consisting of a human figure and a vertical grating pattern. The grating periodically reversed its light-dark phase, so its motion direction was ambiguous. However, when the human figure presented a walking gait in front of the grating, the grating appeared to move in the direction opposite to the walker's locomotion. This illusion suggests that human movements modulate the perception of video images, and that creators of entertainment images need to pay attention to background images in videos used in animation and computer graphics.
A Chat System Based on Emotion Estimation from Text and Embodied Conversational Messengers BIBAFull-Text 535-538
  Chunling Ma; Helmut Prendinger; Mitsuru Ishizuka
This short paper contains a preliminary description of a novel type of chat system that aims at realizing natural and social communication between distant communication partners. The system is based on an Emotion Estimation module that assesses the affective content of textual chat messages and avatars associated with chat partners that act out the assessed emotions of messages through multiple modalities, including synthetic speech and associated affective gestures.