
Proceedings of the 2015 Augmented Human International Conference

Fullname: Proceedings of the 6th Augmented Human International Conference
Editors: Suranga Nanayakkara; Ellen Yi-Luen Do; Jun Rekimoto; Jochen Huber; Bing-Yu Chen
Location: Singapore
Dates: 2015-Mar-09 to 2015-Mar-11
Publisher: ACM
Standard No: ISBN: 978-1-4503-3349-8; ACM DL: Table of Contents; hcibib: AH15
Papers: 64
Pages: 228
Links: Conference Website
  1. Wearable Interfaces
  2. Altered Experiences
  3. Haptics and Exoskeletons
  4. Augmenting Realities
  5. Learning and Reading
  6. Augmenting Sports... and Toilets!
  7. Posters & Demonstrations

Wearable Interfaces

Vision enhancement: defocus correction via optical see-through head-mounted displays BIBAFull-Text 1-8
  Yuta Itoh; Gudrun Klinker
Vision is our primary, essential sense for perceiving the real world. Human beings have long been keen to extend the limits of the eye by inventing various vision devices such as corrective glasses, sunglasses, telescopes, and night-vision goggles. Recently, Optical See-Through Head-Mounted Displays (OST-HMDs) have penetrated the commercial market. While traditional devices improve our vision by altering or replacing it, OST-HMDs can augment and mediate it. We believe that future OST-HMDs, combined with wearable sensing systems including image sensors, will dramatically improve our vision capability.
   To take a step toward this future, this paper investigates Vision Enhancement (VE) techniques via OST-HMDs. We aim to correct optical defects of the human eye, especially defocus, by overlaying a compensation image on the user's actual view so that the filter cancels the aberration. Our contributions are threefold. First, we formulate our method by taking the optical relationship between the OST-HMD and the human eye into consideration. Second, we demonstrate the method in proof-of-concept experiments. Lastly, and most importantly, we provide a thorough analysis of the results, including limitations of the current system, research issues that must be addressed to realize practical VE systems, and possible solutions to those issues for future research.
Exploring users' attitudes towards social interaction assistance on Google Glass BIBAFull-Text 9-12
  Qianli Xu; Michal Mukawa; Liyuan Li; Joo Hwee Lim; Cheston Tan; Shue Ching Chia; Tian Gan; Bappaditya Mandal
Wearable vision brings about new opportunities for augmenting humans in social interactions. However, along with it come privacy concerns and possible information overload. We explore users' needs and attitudes toward augmented interaction in face-to-face communication. In particular, we want to find out whether users need additional information when interacting with acquaintances, what information they want to access, and how they use it. Based on observations of user behavior in interactions assisted by Google Glass, we find that users in general appreciated the usefulness of wearable assistance for social interactions. We highlight a few key issues concerning how wearable devices affect user experience in social interaction.
PickRing: seamless interaction through pick-up detection BIBAFull-Text 13-20
  Katrin Wolf; Jonas Willaredt
We frequently switch between devices, and currently we have to unlock most of them. Ideally, such devices should be seamlessly accessible and not require an unlock action. We introduce PickRing, a wearable sensor that allows seamless interaction with devices by predicting the intention to interact with them through pick-up detection. A cross-correlation between the ring's and the device's motion is used as the basis for identifying the intention of device usage. In an experiment, we found that pick-up detection using PickRing costs neither additional effort nor time compared with the pure pick-up action, while it has more hedonic qualities and is rated as more attractive than a standard smartphone technique. Thus, PickRing can reduce the overhead of using devices by seamlessly activating mobile and ubiquitous computers.
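
The abstract's core mechanism lends itself to a short sketch. The following minimal Python illustration correlates ring and device accelerometer windows to score a pick-up; the window length, sampling rate, and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pickup_score(ring_accel: np.ndarray, device_accel: np.ndarray) -> float:
    """Peak normalized cross-correlation between ring and device motion.

    A high peak suggests both sensors moved together, i.e. the ring-wearing
    hand is the one picking the device up.
    """
    ring = (ring_accel - ring_accel.mean()) / (ring_accel.std() + 1e-9)
    dev = (device_accel - device_accel.mean()) / (device_accel.std() + 1e-9)
    xcorr = np.correlate(ring, dev, mode="full") / len(ring)
    return float(xcorr.max())

# Hypothetical usage: 1-second windows sampled at 100 Hz.
rng = np.random.default_rng(0)
shared_motion = rng.normal(size=100).cumsum()      # motion common to both
ring_window = shared_motion + rng.normal(scale=0.1, size=100)
device_window = shared_motion + rng.normal(scale=0.1, size=100)
if pickup_score(ring_window, device_window) > 0.8:  # illustrative threshold
    print("pick-up detected: activate device")
```
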
SkinWatch: skin gesture interaction for smart watch BIBAFull-Text 21-24
  Masa Ogata; Michita Imai
We propose SkinWatch, a new interaction modality for wearable devices. SkinWatch provides gesture input by sensing deformation of the skin under a wrist-worn wearable device, also known as a smart watch. Gestures are recognized by matching against training data, and two-dimensional linear input is also supported. The sensing part is small, thin, and stable, accepting accurate input via the user's skin. We also implement an anti-error mechanism to prevent unexpected input when the user moves or rotates his or her forearm. The whole sensor costs less than $1.50, and the sensor layer does not exceed 3 mm in height in this prototype. We demonstrate sample applications with a practical task using two-finger skin gesture input.

Altered Experiences

Improving work productivity by controlling the time rate displayed by the virtual clock BIBAFull-Text 25-32
  Yuki Ban; Sho Sakurai; Takuji Narumi; Tomohiro Tanikawa; Michitaka Hirose
The main contribution of this paper is establishing a method for unconsciously improving work productivity by controlling the time rate displayed by a virtual clock. It has recently become clear that work efficiency is influenced by various environmental factors. One way to increase work productivity is to improve the work rate within a given duration. Moreover, it has been shown that time pressure has the potential to enhance task performance and work productivity. Both the estimation of work rate over time and this time pressure are evoked by the sensation of time. In this study, we focus on the "clock" as a tool that gives everyone a shared recognition of the rate and length of time. We propose a method to improve a person's work productivity unconsciously by inducing a false sense of elapsed time through a virtual clock that visually displays a time rate differing from the real one. We conducted experiments to investigate the influence of changes in the displayed virtual time rate on time perception and work efficiency. The experimental results showed that by displaying an accelerated time rate, it is possible to improve work efficiency while time perception remains constant.
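
The mechanism itself is simple enough to sketch. The following hypothetical Python snippet shows a clock that advances at a scaled rate; the 1.1x rate is an illustrative placeholder, not a rate reported in the paper.

```python
import datetime
import time

class VirtualClock:
    """A clock whose displayed rate differs from real time.

    `rate` > 1.0 makes time appear to pass faster (e.g. 1.1 = 10% fast);
    the value here is illustrative only.
    """
    def __init__(self, rate: float):
        self.rate = rate
        self.start = time.monotonic()
        self.base = datetime.datetime.now()

    def now(self) -> datetime.datetime:
        elapsed = time.monotonic() - self.start
        return self.base + datetime.timedelta(seconds=elapsed * self.rate)

clock = VirtualClock(rate=1.1)
time.sleep(2)
print(clock.now().strftime("%H:%M:%S"))  # about 2.2 virtual seconds later
```
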
Gravitamine spice: a system that changes the perception of eating through virtual weight sensation BIBAFull-Text 33-40
  Masaharu Hirose; Karin Iwazaki; Kozue Nojiri; Minato Takeda; Yuta Sugiura; Masahiko Inami
The flavor of food is not limited to the sense of taste; it also changes according to information from other senses, such as hearing, vision, and touch, and through individual experience, cultural background, and so on. We propose "Gravitamine Spice", a system that focuses on the cross-modal interaction between our perceptions, mainly the weight of food we perceive when we lift the utensils. The system consists of a fork and a seasoning called "OMOMI". Users can change the weight of the food by sprinkling the seasoning onto it. Through this sequence of actions, users can enjoy different dining experiences, which may change the taste of their food or the feeling toward the food as they chew it.
DogPulse: augmenting the coordination of dog walking through an ambient awareness system at home BIBAFull-Text 41-44
  Christoffer Skovgaard; Josephine Raun Thomsen; Nervo Verdezoto; Daniel Vestergaard
This paper presents DogPulse, an ambient awareness system to support the coordination of dog walking among family members at home. DogPulse augments a dog collar and leash set to activate an ambient shape-changing lamp and visualize the last time the dog was taken for a walk. The lamp gradually changes its form and pulsates its lights in order to keep the family members aware of the dog walking activity. We report the iterative prototyping of DogPulse, its implementation and its preliminary evaluation. Based on our initial findings, we present the limitations and lessons learned as well as highlight recommendations for future work.
Snow walking: motion-limiting device that reproduces the experience of walking in deep snow BIBAFull-Text 45-48
  Tomohiro Yokota; Motohiro Ohtake; Yukihiro Nishimura; Toshiya Yui; Rico Uchikura; Tomoko Hashida
We propose "Snow Walking," a boot-shaped device that reproduces the experience of walking in deep snow. The main purpose of this study is reproducing the feel of walking in a unique environment that we do not experience daily, particularly one that has depth, such as of deep snow. When you walk in deep snow, you get three feelings: the feel of pulling your foot up from the deep snow, the feel of putting your foot down into the deep snow, and the feel of your feet crunching across the bottom of deep snow. You cannot walk in deep snow easily, and with the system, you get a unique feeling not only on the sole of your foot but as if your entire foot is buried in the snow. We reproduce these feelings by using a slider, electromagnet, vibration speaker, a hook and loop fastener, and potato starch.
The kraftwork and the knittstruments: augmenting knitting with sound BIBAFull-Text 49-52
  Enrique Encinas; Konstantia Koulidou; Robb Mitchell
This paper presents a novel example of the technological augmentation of a craft practice. By translating the skilled, embodied knowledge of knitting practice into the language of sound, our study explores how audio augmentation of routinized motion patterns affects an individual's awareness of her bodily movements and alters conventional practice. Four different instruments (The Knittstruments: The ThereKnitt, The KnittHat, The Knittomic, and The KraftWork) were designed and tested in four different locations. This research entails cycles of data collection and analysis based on the action and grounded theory methods of noting, coding, and memoing. Analysis of the collected data suggests that audio feedback substantially alters knitters' performance at both the individual and group level and encourages improvisation in the process of making. We argue that the use of Knittstruments can have relevant consequences for interface design, wearable computing, and artistic and musical creation in general, and we hope to provide a new and inspiring venue for designers, artists, and knitters to explore.

Haptics and Exoskeletons

Augmenting spatial skills with semi-immersive interactive desktop displays: do immersion cues matter? BIBAFull-Text 53-60
  Erin Treacy Solovey; Johanna Okerlund; Cassie Hoef; Jasmine Davis; Orit Shaer
3D stereoscopic displays for desktop use show promise for augmenting users' spatial problem solving tasks. These displays have the capacity for different types of immersion cues including binocular parallax, motion parallax, proprioception, and haptics. Such cues can be powerful tools in increasing the realism of the virtual environment by making interactions in the virtual world more similar to interactions in the real non-digital world [21, 32]. However, little work has been done to understand the effects of such immersive cues on users' understanding of the virtual environment. We present a study in which users solve spatial puzzles with a 3D stereoscopic display under different immersive conditions while we measure their brain workload using fNIRS and ask them subjective workload questions. We conclude that 1) stereoscopic display leads to lower task completion time, lower physical effort, and lower frustration; 2) vibrotactile feedback results in increased perceived immersion and in higher cognitive workload; 3) increased immersion (which combines stereo vision with vibrotactile feedback) does not result in reduced cognitive workload.
RippleTouch: initial exploration of a wave resonant based full body haptic interface BIBAFull-Text 61-68
  Anusha Withana; Shunsuke Koyama; Daniel Saakes; Kouta Minamizawa; Masahiko Inami; Suranga Nanayakkara
We propose RippleTouch, a low-resolution haptic interface that is capable of providing haptic stimulation to multiple areas of the body via a single-point-of-contact actuator. The concept is based on the low-frequency acoustic wave propagation properties of the human body. By stimulating the body with different amplitude-modulated frequencies at a single contact point, we were able to dissipate the wave energy in a particular region of the body, creating a haptic stimulation without direct contact. The RippleTouch system was implemented on a regular chair, in which four bass-range speakers were mounted underneath the seat and driven by a simple stereo audio interface. The system was evaluated to investigate the effect of the frequency characteristics of the amplitude modulation. Results demonstrate that we can effectively create haptic sensations at different parts of the body with a single contact point (i.e., the chair surface). We believe the RippleTouch concept can serve as a scalable solution for providing full-body haptic feedback in a variety of situations, including entertainment, communication, public spaces, and vehicular applications.
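
As a rough illustration of the actuation principle, here is a minimal Python sketch that generates an amplitude-modulated low-frequency signal for a seat-mounted speaker. The carrier and modulation frequencies are assumptions for illustration; the paper maps specific modulation frequencies to body regions empirically.

```python
import numpy as np

def am_haptic_signal(carrier_hz: float, mod_hz: float,
                     duration_s: float = 1.0, sr: int = 44100) -> np.ndarray:
    """Amplitude-modulate a low-frequency carrier for a bass speaker."""
    t = np.linspace(0, duration_s, int(sr * duration_s), endpoint=False)
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_hz * t))  # 0..1 envelope
    return envelope * carrier

# Hypothetical parameters: an 80 Hz carrier modulated at 5 Hz.
signal = am_haptic_signal(carrier_hz=80, mod_hz=5)
```
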
Optimal design for individualised passive assistance BIBAFull-Text 69-76
  Robert Peter Matthew; Victor Shia; Masayoshi Tomizuka; Ruzena Bajcsy
Assistive devices are capable of restoring independence and function to people suffering from musculoskeletal impairments. Traditional assistive exoskeletons can be divided into active and passive devices, depending on the method used to provide joint torques. The design of these devices often does not take into account the abilities of the individual, leading to complex designs, joint misalignment, and muscular atrophy due to over-assistance at each joint.
   We present a novel framework for the design of passive assistive devices whereby the device provides the minimal amount of assistance required to maximise the space that the user can reach. In doing so, we effectively remap the user's capable torque load over their workspace, exercising existing muscle while ensuring that key points in the workspace are reached. In this way, we hope to reduce the risk of muscular atrophy while assisting with tasks.
   We implement two methods for finding the necessary passive device parameters: one considers static loading conditions, while the second simulates the system dynamics using level set methods. This allows us to determine the set of points at which an individual can hold their arm stationary, the statically achievable workspace (SAW).
   We show the efficacy of these methods in a number of case studies, which show that individuals with pronounced or asymmetric muscle weakness can have their SAW restored, recovering a range of motion.
Design of a novel finger exoskeleton with a sliding six-bar joint mechanism BIBAFull-Text 77-80
  Mahasak Surakijboworn; Witaya Wannasuphoprasit
The objective of this paper is to propose a novel design for a finger exoskeleton. The merit of the work is that the proposed mechanism is expected to eliminate interference and translational force on the finger. The design consists of 3 identical joint mechanisms, each of which adopts a six-bar RCM as an equivalent revolute joint, incorporating 2 prismatic joints to form a closed-chain structure with a finger joint. Cable-and-hose transmission is designed to reduce the burden from prospective driving modules. As a result, the prototype coherently follows finger movement throughout the full range of motion for fingers of every size. This prototype is part of research that will be used in hand rehabilitation.

Augmenting Realities

A life log system that recognizes the objects in a pocket BIBAFull-Text 81-88
  Kota Shimozuru; Tsutomu Terada; Masahiko Tsukamoto
A novel approach has been developed for recognizing objects in pockets and for recording the events related to those objects. Information on putting an object into or taking it out of a pocket is closely related to user contexts. For example, when a house key is taken out of a pocket, the owner of the key is likely just getting home. We implemented an objects-in-pocket recognition device, which has a pair of infrared sensors arranged in a matrix, and life-log software to obtain time stamps of the events that occur. We evaluated whether or not the system could identify one of five objects (a smartphone, ticket, hand, key, and lip balm) using template matching. When one registered object (the smartphone, ticket, or key) was put in the pocket, our system recognized the object correctly 91% of the time on average. We also evaluated our system in one action scenario. With our system's time stamps, a user could easily remember what he took on a given day and when he used those items.
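
The classification step can be sketched as nearest-template matching over sensor frames. Everything below (the 4x4 matrix size, the sum-of-absolute-differences cost) is an illustrative assumption, not the paper's exact matcher.

```python
import numpy as np

def classify_pocket_object(frame: np.ndarray, templates: dict) -> str:
    """Return the template name with the lowest matching cost."""
    best, best_cost = None, float("inf")
    for name, tpl in templates.items():
        cost = np.abs(frame - tpl).sum()  # sum of absolute differences
        if cost < best_cost:
            best, best_cost = name, cost
    return best

# Hypothetical usage with a 4x4 IR sensor matrix.
rng = np.random.default_rng(1)
templates = {name: rng.random((4, 4)) for name in ("smartphone", "ticket", "key")}
frame = templates["key"] + rng.normal(scale=0.05, size=(4, 4))  # noisy reading
print(classify_pocket_object(frame, templates))  # -> "key"
```
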
VISTouch: dynamic three-dimensional connection between multiple mobile devices BIBAFull-Text 89-92
  Masasuke Yasumoto; Takehiro Teraoka
It has recently become remarkably common for people to own multiple mobile devices, although it is still difficult to use them effectively in combination. In this study, we constructed a new system called VISTouch that achieves a new operational capability and increases user interest in mobile devices by enabling multiple devices to be combined dynamically and spatially. Using VISTouch, for example, spatially connecting a smartphone to a horizontally positioned tablet that is displaying a map as viewed from above enables these devices to dynamically obtain their correct relative position. The smartphone displays images viewed from its position, direction, and angle in real time, acting as a window into the virtual 3D space. We applied VISTouch to two applications that used detailed information on the relative position in real space between multiple devices. These applications showed the potential of using multiple devices in combination.
LumoSpheres: real-time tracking of flying objects and image projection for a volumetric display BIBAFull-Text 93-96
  Hideki Koike; Hiroaki Yamaguchi
This paper proposes a method for real-time tracking of flying objects and image projection onto them, toward a particle-based volumetric 3D display. We first describe the concept of using high-speed cameras and projectors to build such a display. Our solution introduces a prediction model based on kinematic laws and uses Kalman filters to address latency issues within the projector-camera system. We conducted experiments to show the accuracy of the image projection. We also present an application of our method in entertainment, Digital Juggling.
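
The abstract names Kalman filtering with kinematic prediction; one plausible form is sketched below in Python: a gravity-driven Kalman filter that extrapolates a tracked ball forward by the projector-camera latency. The 2D state, matrices, frame rate, and noise values are all illustrative assumptions.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81])  # 2D sketch; a real system tracks in 3D
DT = 1 / 120.0                    # illustrative camera frame interval

# State: [x, y, vx, vy]; constant velocity plus gravity as a control input.
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
B = np.array([[0.5 * DT**2, 0],
              [0, 0.5 * DT**2],
              [DT, 0],
              [0, DT]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-4   # process noise (illustrative)
R = np.eye(2) * 1e-3   # measurement noise (illustrative)

def kf_step(x, P, z):
    """One predict/update cycle for a measured ball position z = [x, y]."""
    x = F @ x + B @ GRAVITY                      # predict ballistic motion
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)                      # correct with the camera
    P = (np.eye(4) - K @ H) @ P
    return x, P

def predict_ahead(x, latency_s):
    """Extrapolate to where the ball will be when the projector fires."""
    for _ in range(int(latency_s / DT)):
        x = F @ x + B @ GRAVITY
    return x[:2]
```
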
B-C-invisibility power: introducing optical camouflage based on mental activity in augmented reality BIBAFull-Text 97-100
  Jonathan Mercier-Ganady; Maud Marchal; Anatole Lécuyer
In this paper we introduce a novel and interactive approach for controlling optical camouflage, called "B-C-Invisibility power". We propose combining augmented reality and Brain-Computer Interface (BCI) technologies to design a system which, in a sense, provides the "power of becoming invisible". Our optical camouflage is realized on a PC monitor combined with an optical tracking system. A cut-out image of the user is computed from a live video stream and superimposed on a prerecorded background image using a transparency effect. The transparency level is controlled by the output of a BCI, enabling the user to control her invisibility directly with mental activity. The mental task required to increase/decrease the invisibility is related to a concentration/relaxation state. Results from a preliminary study, based on a simple video game inspired by the Harry Potter universe, notably showed that, compared to standard control with a keyboard, controlling the optical camouflage directly with the BCI can enhance the user experience and the feeling of "having a super-power".

Learning and Reading

Word out!: learning the alphabet through full body interactions BIBAFull-Text 101-108
  Kelly Yap; Clement Zheng; Angela Tay; Ching-Chiuan Yen; Ellen Yi-Luen Do
This paper presents Word Out, an interactive game for learning the alphabet through full-body interaction. Targeted at children 4-7 years old, Word Out employs the Microsoft Kinect to detect the silhouettes of players. Players are tasked with twisting and forming their bodies to match the shapes of the letters displayed on the screen. By adopting full-body interactions in games, we aim to promote learning through play, as well as to encourage collaboration and kinesthetic learning for children. Over two months, more than 15,000 children have played Word Out installed in two different museums. This paper presents the design and implementation of the Word Out game and preliminary analyses of a survey carried out at the museums, shares insights, and discusses future work.
Unconscious learning of speech sounds using mismatch negativity neurofeedback BIBAFull-Text 109-112
  Ming Chang; Hiroyuki Iizuka; Yasushi Naruse; Hideyuki Ando; Taro Maeda
Learning the speech sounds of a foreign language is difficult for adults and often requires significant training and attention. For example, native Japanese speakers are usually unable to differentiate between the "l" and "r" sounds in English; thus, words like "light" and "right" are hardly discriminated. We previously showed that the ability to discriminate similar pure tones can be improved unconsciously using neurofeedback (NF) training with mismatch negativity (MMN), but it was not clear whether this approach can improve discrimination of the speech sounds of words. We examined whether MMN neurofeedback is effective in helping native Japanese speakers discriminate "light" and "right" in English. Participants improved significantly in speech sound discrimination through NF training, without attending to the auditory stimuli or being aware of what was to be learned. Individual word sound recognition also improved significantly. Furthermore, our results indicate a lasting effect of NF training.
Use of an intermediate face between a learner and a teacher in second language learning with shadowing BIBAFull-Text 113-116
  Yoko Nakanishi; Yasuto Nakanishi
Shadowing is a language-learning method whereby a learner attempts to repeat, i.e., shadow, what he/she hears immediately. We propose displaying a computer-generated intermediate face between a learner and a teacher as an appropriate intermediate scaffold for shadowing. The intermediate face allows the learner to follow a teacher's face and mouth movements more effectively. We describe a prototype system that generates an intermediate face from real-time camera input and captured video. We also discuss a user study of the prototype system with crowd-sourced participants. The results of the user study suggest that the prototype system provided better pronunciation cues than video-only shadowing techniques.
Assessment of stimuli for supporting speed reading on electronic devices BIBAFull-Text 117-124
  Tilman Dingler; Alireza Sahami Shirazi; Kai Kunze; Albrecht Schmidt
Technology has introduced multimedia to tailor information more broadly to our various senses, but by no means has the ability to consume information through reading lost its importance. To cope with the ever-growing amount of textual information, different techniques have been proposed to increase reading efficiency: rapid serial visual presentation (RSVP) has been suggested to increase reading speed by effectively reducing the number of eye movements, and moving a pen, finger, or the entire hand across text is a common technique among speed readers to help guide eye movements. We adapted these techniques for electronic devices by introducing stimuli on text that guide users' eye movements. In two user studies, we sped up users' reading to 150% of their normal rate and evaluated the effects on text comprehension, mental load, eye movements, and subjective perception. Results show that reading speed can be effectively increased by using such stimuli while keeping comprehension rates nearly stable. We observed initial strain on mental load, which decreased significantly after a short while. Subjective feedback conveys that kinetic stimuli are better suited for long, complex text on larger displays, whereas RSVP was preferred for short text on small displays.
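
The RSVP pacing idea is easy to sketch. Below is a minimal Python illustration that presents words one at a time at a fixed rate; the 450 wpm figure stands in for "150% of a 300 wpm baseline" and is an assumption, as the study's stimulus design is more elaborate.

```python
import time

def rsvp(text: str, wpm: int = 450) -> None:
    """Present words one at a time at a fixed rate (basic RSVP)."""
    delay = 60.0 / wpm  # seconds per word
    for word in text.split():
        print(f"\r{word:^20}", end="", flush=True)
        time.sleep(delay)
    print()

rsvp("Reading speed can be increased while keeping comprehension stable")
```
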
How much do you read?: counting the number of words a user reads using electrooculography BIBAFull-Text 125-128
  Kai Kunze; Masai Katsutoshi; Yuji Uema; Masahiko Inami
We read to acquire knowledge. Reading is a common activity, performed in transit and while sitting, for example while commuting to work or at home on the couch. Although reading is associated with high vocabulary skills and even with increased critical thinking, we still know very little about effective reading habits. In this paper, we argue that, as a first step to understanding reading habits in real life, we need to quantify them with affordable and unobtrusive technology. Toward this goal, we present a system to track how many words a user reads using electrooculography sensors. Compared to previous work, we use active electrodes with a novel on-body placement optimized both for integration into glasses (or other head-worn eyewear) and for reading detection. Using this system, we present an algorithm capable of estimating the number of words read by a user, and evaluate it in a user-independent fashion in experiments with 6 users across 4 different devices (8" and 9" tablets, paper, and a laptop screen). We achieve an error rate as low as 7% (based on eye motions alone) for the word count estimation (std = 0.5%).
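
A crude stand-in for such a word-count estimator can be sketched from the horizontal EOG channel alone: count forward (reading) saccades via the signal derivative and scale by an empirical words-per-saccade factor. All constants below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def estimate_words_read(eog_h: np.ndarray, sr: int = 250,
                        words_per_saccade: float = 1.1) -> int:
    """Estimate words read from a horizontal EOG channel (sketch)."""
    d = np.diff(eog_h)                    # sample-to-sample change
    threshold = 3 * np.std(d)             # illustrative spike threshold
    forward = (d > threshold).astype(int) # samples inside forward saccades
    saccades = np.count_nonzero(np.diff(forward) == 1)  # rising edges only
    return int(saccades * words_per_saccade)
```
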

Augmenting Sports... and Toilets!

Designable sports field: sport design by a human in accordance with the physical status of the player BIBAFull-Text 129-136
  Ayaka Sato; Jun Rekimoto
We present the Designable Sports Field (DSF), an environment in which a "designer" designs a sports field in accordance with the physical intensity of the player. Sports motivate players to compete and interact with teammates. However, the rules are fixed; thus, people who lack experience or physical strength often do not enjoy playing. In addition, the levels of the players should preferably match. In coaching, on the other hand, a coach trains players according to their skills; however, being a coach requires considerable experience and expertise. We present a DSF application system called SportComposer. In this system, the "designer" and "player", roles that can be assumed even by amateur players, participate in the sport with different goals. The designer designs the sports field according to the physical status of the player, such as his/her heart rate, in real time. Thus, the player can play a physical game that matches his/her physical intensity. In experiments conducted in this environment, we tested the system with people ranging from a small child to adults who are not experts in sports, and confirmed that both the designer and player roles are functional and enjoyable. We also report findings from a demonstration conducted with 92 participants in a public museum.
Augmented dodgeball: an approach to designing augmented sports BIBAFull-Text 137-140
  Takuya Nojima; Ngoc Phuong; Takahiro Kai; Toshiki Sato; Hideki Koike
Ubiquitous computing offers enhanced interactive, human-centric experiences, including sporting and fitness-based applications. To enhance this experience further, we consider augmenting dodgeball by adding digital elements to a traditional ball game. Achieving this requires an understanding of the game mechanics with participating movable bodies. This paper discusses the design process of a ball- and player-centric interface that uses live data acquisition during gameplay for augmented dodgeball, which is presented as an application of augmented sports. Initial prototype testing shows that player detection can be achieved using a low-energy wireless sensor network such as that used with fitness sensors, together with a ball with an embedded sensor and proximity tagging.
A mobile augmented reality system to enhance live sporting events BIBAFull-Text 141-144
  Samantha Bielli; Christopher G. Harris
Sporting events broadcast on television or through the internet are often supplemented with statistics and background information on each player. This information is typically only available for sporting events followed by a large number of spectators. Here we describe an Android-based augmented reality (AR) tool built on the Tesseract API that can store and provide augmented information about each participant in nearly any sporting event. This AR tool provides for a more engaging spectator experience for viewing professional and amateur events alike. We also describe the preliminary field tests we have conducted, some identified limitations of our approach, and how we plan to address each in future work.
A teleoperated bottom wiper BIBAFull-Text 145-150
  Takeo Hamada; Hironori Mitake; Shoichi Hasegawa; Makoto Sato
In order to aid elderly and/or disabled people in cleaning and drying their posterior after defecation, a teleoperated bottom wiper is proposed. The wiper enables a person sitting on the toilet seat to wipe his/her bottom by specifying the wiping position and strength with a computer mouse and keyboard. The proposed teleoperation is novel in that the operator and target are the same person. The operator feels force feedback through the buttocks instead of the hands. The results of a user study confirmed that users could successfully wipe their buttocks with appropriate position and strength by teleoperation. Since it is controlled by the user, the teleoperated wiper is suitable for accommodating each user's preference of the moment.
The toilet companion: a toilet brush that should be there for you and not for others BIBAFull-Text 151-154
  Laurens Boer; Nico Hansen; Ragna L. Möller; Ana I. C. Neto; Anne H. Nielsen; Robb Mitchell
In this article we present the Toilet Companion: an augmented toilet brush that aims to provide moments of joy in the toilet room and, if necessary, stimulates toilet-goers to use the brush. Based upon the amount of time a user sits upon the toilet seat, the brush swings its handle with increasing speed: initially to draw attention to its presence, but over time to give a playful impression. Thereafter, the entire brush makes rapid up-and-down movements to persuade the user to pick it up. In use, it generates beeps in response to human handling, to provide a sense of reward and accompanying pleasure. Despite our aims of providing joy and stimulation, participants in field trials with the Toilet Companion reported experiencing the brush as undesirable, predominantly because the sounds produced by the brush made private toilet-room activities publicly perceivable. The design intervention thus challenged the social boundaries of the otherwise private context of the toilet room, opening up an interesting area for design-ethnographic research about the perception of space, where interactive artifacts can be mobilized to deliberately breach public, social, personal, and intimate spaces.

Posters & Demonstrations

EcoBears: augmenting everyday appliances with symbolic and peripheral feedback BIBAFull-Text 155-156
  Nick Nielsen; Sandra B. P. S. Pedersen; Jens A. Sørensen; Nervo Verdezoto; Nikolai H. Øllegaard
This paper introduces the EcoBears concept, which aims to augment household appliances with functional and aesthetic features to promote their use and longevity of use, and so prevent their disposal. The EcoBears also aim to support the communication of environmental issues in the home setting. We present the initial design and implementation of the EcoBears, which consist of two bear modules (a mother and her cub), as well as a preliminary concept validation and lessons learned to be considered in future work.
Lovable couch: mitigating distrustful feelings for couples by visualizing excitation BIBAFull-Text 157-158
  Takuya Iwamoto; Soh Masuko
The increasing percentage of unmarried individuals in Japan has triggered a decline in the birth rate. This is partially caused by modern lifestyles that involve long working hours, as well as increasing sex segregation in social interaction. Single people have fewer opportunities to build romantic relationships; therefore, speed-dating services have recently become popular. However, challenges remain in supporting dating interaction, especially in determining whether a potential couple feels affection for each other. Many people feel distrust and anxiety when approached by a dating partner, which makes them hesitate to move forward. In this work, we report our finding that visualizing excitement, aggregated from changes in users' heartbeats, can potentially help users determine whether their potential partners feel mutual affection during a date. We propose Lovable Couch, an approach that supports a dating session by visually actuating a sofa with the users' excitement measures as a way to mitigate users' anxiety.
Directional communication using spatial sound in human-telepresence BIBAFull-Text 159-160
  Shohei Nagai; Shunichi Kasahara; Jun Rekimoto
Communication is essential for working effectively with others. We communicate with each other to share our situations and what we are thinking, and voice is one of the most common channels. In previous research, we proposed LiveSphere, which shares the surrounding environment with a remote person and provides an immersive experience for effective collaboration. The system realizes "human-telepresence", where a person can inhabit another person's viewpoint and experience their environment. However, communication in human-telepresence has some problems. In this paper, we propose directional communication with spatial sound to alleviate these problems. We also report the results of a user study.
PukuPuCam: a recording system from third-person view in scuba diving BIBAFull-Text 161-162
  Masaharu Hirose; Yuta Sugiura; Kouta Minamizawa; Masahiko Inami
In this paper, we propose the "PukuPuCam" system, an apparatus for recording one's diving experience from a third-person view, allowing the user to recall the experience at a later time. PukuPuCam continuously captures the center of the user's viewpoint by attaching a floating camera to the user's body with a string. With this simple technique, it is possible to maintain the same viewpoint regardless of the diving speed or the underwater waves. Therefore, users can dive naturally without being conscious of the camera. The main aim of this system is to enhance diving experiences by recording the user's unconscious behaviour and interactions with the surrounding environment.
The augmented narrative: toward estimating reader engagement BIBAFull-Text 163-164
  Kai Kunze; Susana Sanchez; Tilman Dingler; Olivier Augereau; Koichi Kise; Masahiko Inami; Terada Tsutomu
We present the concept of bio-feedback-driven computing to design a responsive narrative which acts according to the reader's experience. We explore how to detect engagement and evaluate the usefulness of different sensor modalities. We find that temperature and blink frequency are best for estimating engagement and can classify engaging and non-engaging content, user-independently and without error, for a small user sample size (5 users).
Cyrafour: an experiential activity facilitating empathic distant communication among copresent individuals BIBAFull-Text 165-166
  Enrique Encinas; Robb Mitchell
Distant communication relies mostly on non-embodied representations of participants (e.g. textual in chats, photographic in videoconferencing, auditory in telephony) that lessen the sensory richness of conversational interactions. Cyrafour is a novel activity that explores the implications of using human avatars (cyranoids) for empathic interpersonal remote communication. An unscripted conversation between two individuals (the sources) is transmitted over radio waves and reproduced by two copresent subjects (the cyranoids) following certain conversational guidelines. In particular, the sources were invited to discuss a topic, play a conversation game, and comment on an opinionated video. All Cyrafour sessions were videotaped and participants were interviewed afterwards in order to support analysis and discussion. Cyrafour can be considered a playful embodied identity game in which cyranoids are simultaneously together in, and aside from, a conversation generated elsewhere. This puzzling circumstance seems to allow for an empathic embodiment of the meaning transmitted and appears to create a frame for further discussion of the topics raised.
FootNote: designing a cost effective plantar pressure monitoring system for diabetic foot ulcer prevention BIBAFull-Text 167-168
  Kin Fuai Yong; Juan Pablo Forero; Shaohui Foong; Suranga Nanayakkara
Diabetic Foot Ulcer (DFU) is one of the dangerous complications of Diabetes Mellitus, notoriously progressive and high in recurrence. Peripheral neuropathy, or damage to the nerves in the foot, is the culprit that leads to DFU. Much research and commercial development has attempted to mitigate the condition by establishing artificial feedback through in-shoe pressure-sensing solutions for patients. However, these solutions suffer from the inherent issues of analog sensors, prohibitive price tags, and inflexibility in the choice of footwear. We approached these problems by designing a prototype with fabric digital sensors. The data showed promising potential for assertion frequency tracking and user activity recognition. Although the bigger challenge lies ahead -- correlating the approximation given by digital sensors with analog pressure readings -- we have demonstrated that an inexpensive, more versatile and flexible solution based on digital sensors for DFU prevention is indeed feasible.
Mudra: a multimodal interface for braille teaching BIBAFull-Text 169-170
  Aman Srivastava; Sanskriti Dawle
This poster explores how multimodal interfaces could be used to teach Braille faster and more efficiently. Mudra, an interface for teaching Braille, has been made intuitive by incorporating speech recognition and tactile and audio feedback. A prototype of the interface has been developed using a mobile phone application, a Raspberry Pi-based single-cell refreshable Braille display, and an audio headset.
Telexistence drone: design of a flight telexistence system for immersive aerial sports experience BIBAFull-Text 171-172
  Hirohiko Hayakawa; Charith Lasantha Fernando; MHD Yamen Saraiji; Kouta Minamizawa; Susumu Tachi
In this paper, a new sports genre, "Aerial Sports", is introduced, in which humans and robots collaborate to enjoy space as a whole new field. By integrating a flight unit with the user's voluntary motion, everyone can enjoy crossing physical limitations such as height and physique. The user can dive into the drone by wearing an HMD and experience binocular stereoscopic visuals and the sensation of flight using his limbs effectively. The requirements and design steps for synchronizing visual information and physical motion in a flight system are explained, mainly for the aerial sports experience. These requirements can also be adapted to purposes such as search and rescue or entertainment, where coupled body motion has advantages.
Really eating together: a kinetic table to synchronise social dining experiences BIBAFull-Text 173-174
  Robb Mitchell; Alexandra Papadimitriou; Youran You; Laurens Boer
Eating is one of the most social of human activities; yet scant attention has been paid to coordinating meal completion speeds. Addressing this challenge, we present "Keep Up With Me", a novel augmented dining table designed to guide diners in keeping pace with each other. This mechatronic table incorporates a mechanism to gauge the relative weight of food on the dishes of dining partners. Actuators gradually raise the dish of a slower-eating partner and lower the dish of a faster eater by a corresponding amount. These discrete signals may iteratively bring the eating pace of dining companions back into mutual alignment. This table is offered as a contribution toward discussions around the subtle augmentation of dining and social experiences.
Extracting users' intended nuances from their expressed movements: in quadruple movements BIBAFull-Text 175-176
  Takanori Komatsu; Chihaya Kuwahara
We propose a method for extracting users' intended nuances from their expressed quadruple movements. Specifically, this method quantifies such nuances as a four-dimensional vector representation {sharpness, softness, dynamics, largeness}. We then show an example of a music application based on this method that changes the volume of assigned music tracks in accordance with each attribute of the vector representation extracted from the user's quadruple movements, like a music conductor.
Using point-light movement as peripheral visual guidance for scooter navigation BIBAFull-Text 177-178
  Hung-Yu Tseng; Rong-Hao Liang; Liwei Chan; Bing-Yu Chen
This work presents a preliminary study of utilizing point-light movement in scooter drivers' peripheral vision for turn-by-turn navigation. We examine six types of basic 1D point-light movement, and the results suggest that several of them can be easily picked up and comprehended by peripheral vision in parallel with an ongoing foveal vision task, and can be used to provide effective and distraction-free route-guiding experiences for scooter driving.
Non-invasive optical detection of hand gestures BIBAFull-Text 179-180
  Santiago Ortega-Avila; Bogdana Rakova; Sajid Sadi; Pranav Mistry
In this paper we present a novel type of sensing technology for hand and finger gesture recognition that utilizes light in the invisible spectrum to detect changes in the position and form of body tissues such as tendons and muscles. The proposed system can be easily integrated with existing wearable devices. Our approach not only enables gesture recognition but could potentially double as a means of performing a variety of health-related monitoring tasks (e.g. heart rate, stress).
DIY IR sensors for augmenting objects and human skin BIBAFull-Text 181-182
  Paul Strohmeier
Interaction designers require simple methods of creating ad-hoc sensors for prototyping interactive objects. Methods of creating custom sensing solutions commonly include various capacitive and resistive techniques. Near-infrared (IR) sensing solutions can be used as an alternative to these established methods. There are many situations in which IR sensors may be a preferred method of input, such as grasp detection and touch interactions on the skin. In this paper we outline the general approach for designing IR sensors and discuss the design and applications of two custom sensors.
Walking experience by real-scene optic flow with synchronized vibrations on feet BIBAFull-Text 183-184
  Takaaki Hayashizaki; Atsuhiro Fujita; Junki Nozawa; Shohei Ueda; Koichi Hirota; Yasushi Ikei; Michiteru Kitazaki
We developed a system for recording and re-experiencing walking. For recording, we captured stereo motion images from two cameras attached to a person's forehead, with synchronized acceleration data from the ankles. For playback, we presented 3-D motion images with binocular disparity on a head-mounted display, along with vibrations to the user's feet. The vibrations were generated from the sound of shoes while walking. In Experiment 1, users reported that the 3-D motion images with synchronized foot vibrations elicited stronger feelings of walking, leg motion, footsteps, and tele-existence than images without vibrations. In Experiment 2, participants' self-localization drifted in the direction of virtual walking after experiencing another walker's visual sight with the synchronized foot vibrations. These results suggest that our system gives users a somewhat active feeling of walking.
AR-HITOKE: visualizing popularity of brick and mortar shops to support purchase decisions BIBAFull-Text 185-186
  Soh Masuko; Ryosuke Kuroki
We propose a shopping support system, AR-HITOKE, that visualizes the popularity of brick-and-mortar shops by aggregating online and offline information, using augmented reality technology so that it can be understood intuitively. In the proposed method, 3D-animated human-shaped icons in queues and user comments are overlaid above a shop's location on a physical map. Popularity is expressed visually by adjusting the queue length depending on offline sales data. We also visualize user comments related to each shop, extracted from online reviews using our sentiment analysis framework. The proposed method offers new evaluation information for decision making in a physical environment and new online shopping experiences. Through an exhibition of the proposed system at an actual event, we found that users were able to recognize the popularity of shops intuitively.
Bio-Collar: a wearable optic-kinetic display for awareness of bio-status BIBAFull-Text 187-188
  Takuya Nojima; Miki Yamamura; Junichi Kanebako; Lisako Ishigami; Mage Xue; Hiroko Uchiyama; Naoko Yamazaki
Advances in sensor technology allow us to wear various sensors that detect bio-signals, such as body posture, body movement, heart rate and respiration rate. Compared with the many options of wearable sensors available, the options of display methods are limited. This paper proposes the Bio-Collar, which is a novel collar-shaped wearable bio-status display. The Bio-Collar indicates the wearer's bio-status through its color and kinetic motion.
Superimposed projection of ghosted view on real object with color correction BIBAFull-Text 189-190
  Naoto Uekusa; Takafumi Koike
We describe a spatial augmented reality system that enables the superimposed projection of an internal image on a real object with color correction. Our system is a projector-camera system consisting of a camera, a projector, and a PC. First, we generate an initial projection image from the internal CG image and a camera image of the real object. Next, we project this image onto the real object and again capture an image of the real object with the internal image. Finally, we update the projection image with color correction in the CIELUV color space and project it onto the real object. This system will make it possible to easily visualize internal structures on various objects.
Taste change of soup by the recreating of sourness and saltiness using the electrical stimulation BIBAFull-Text 191-192
  Yukika Aruga; Takafumi Koike
We change and amplify the taste of soup by stimulating the tongue electrically. Humans can feel electric taste at the moment of electrical tongue stimulation. Electric taste includes metal taste, saltiness, sourness, and bitterness. Giving electric taste while eating will enable the amplification of taste without increasing intake of sugar and salt. Our system recreates sourness and saltiness, and changes the taste of soup by giving electric taste while eating. The subject eats soup with a spoon which has an electrode attached to it. When the spoon touches his or her tongue, a circuit is formed and stimulates the tongue electrically. To evaluate this system, we performed an experiment where the subject evaluates the taste of soup when the system stimulates the tongue electrically. The experimental results show that anode stimulation amplifies acidity, saltiness and taste strength.
Body as display: augmenting the face through transillumination BIBAFull-Text 193-194
  Daniel Wessolek; Jochen Huber; Pattie Maes
In this paper we describe our explorations of the design space offered by augmenting parts of the human face, in this case the ears. Using light-emitting add-ons behind the ears, we aim to enhance social interactions. Scenarios range from indirect notifications of events, to messages directed at the wearer but communicated via a person face to face, to conveying information about the internal state of the wearer, such as loudness discomfort levels, concentration fatigue, or emotional strain.
Personally supported dynamic random dot stereogram by measuring binocular parallax BIBAFull-Text 195-196
  Shinya Kudo; Ryuta Okazaki; Taku Hachisu; Michi Sato; Hiroyuki Kajimoto
We present a novel approach that uses gaze tracking to support the experience of a Random Dot Stereogram (RDS). RDS is a method for producing an apparently noisy image that actually contains a stereoscopic scene, which becomes visible under a certain parallax of the eyes [1]. Although RDS requires an adjustment of eye convergence, many people have difficulty making this adjustment. We implemented a system with which most users can stably experience stereoscopic images from RDSs. We confirmed that the time users took to find stereoscopic scenes in dynamic RDSs (d-RDS) was significantly decreased compared with presenting d-RDSs with fixed parallax. We also demonstrate this system as a means of secure information display, where users input a password.
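
For context, a crude single-image RDS in the spirit of [1] can be generated in a few lines: pixels are linked across a disparity that shrinks where the hidden scene is nearer. The depth map, separation, and scaling below are illustrative; a proper implementation also handles hidden surfaces and oversampling.

```python
import numpy as np

def random_dot_stereogram(depth: np.ndarray, eye_sep: int = 80) -> np.ndarray:
    """Generate a single-image RDS from a depth map (0 = far, 1 = near)."""
    h, w = depth.shape
    rng = np.random.default_rng(0)
    img = rng.integers(0, 2, size=(h, w)).astype(np.uint8)  # random dots
    for y in range(h):
        for x in range(w):
            sep = int(eye_sep * (1 - 0.3 * depth[y, x]))  # nearer = smaller
            if x >= sep:
                img[y, x] = img[y, x - sep]  # link pixels across disparity
    return img * 255

# Hypothetical depth map: a raised square in the centre.
depth = np.zeros((200, 300))
depth[70:130, 110:190] = 1.0
rds = random_dot_stereogram(depth)
```
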
The mind-window: brain activity visualization using tablet-based AR and EEG for multiple users BIBAFull-Text 197-198
  Jonathan Mercier-Ganady; Maud Marchal; Anatole Lécuyer
In this poster we introduce a novel approach, called the "Mind-Window", for real-time visualization of brain activity. The Mind-Window enables one or multiple users to visualize the brain activity of another person as if her skull were transparent. Our approach relies on multiple tablet PCs that observers can move around the head of the observed person, who wears an EEG cap. A 3D virtual brain model is superimposed onto the head of the observed person using augmented reality, by tracking a 3D marker placed on top of the head. The EEG cap records the electrical fields emitted by the brain, and these are processed in real time to update the display of the virtual brain model. Several visualization techniques are proposed, such as an interactive cutting plane that can be manipulated with touch input on the tablet. The Mind-Window could be used for various applications, such as education, as a teaching tool for learning brain anatomy/activity and EEG features, e.g., electrode localization and electrical patterns.
POVeye: enhancing e-commerce product visualization by providing realistic image based point-of-view BIBAFull-Text 199-200
  Shogo Yamashita; Adiyan Mujibiya
We present POVeye, a method that helps users capture and create product visualizations that extensively represent a product's material color and texture. POVeye achieves this by providing realistic images captured from various angles, positioned correctly based on the calculated geometric centroid. As input, users simply provide a video or multiple images of the product, taken with any camera from arbitrary angles, without requiring any pre-calibration. POVeye provides an interface that shows object-centric camera positions alongside the image taken from the respective camera angle. Users can either manually browse through the automatically detected camera positions or visualize the product along an automatically detected view-angle path. POVeye leverages a Structure-from-Motion (SfM) approach to obtain the camera-object map. Our approach is distinct from other solutions in preserving realistic imaging conditions. We observe that visualizing products from different angles, which conveys information about light reflection and refraction, potentially helps users identify materials and further perceive the quality of a product.
Effective napping support system by hypnagogic time estimation based on heart rate sensor BIBAFull-Text 201-202
  Daichi Nagata; Yutaka Arakawa; Takatomi Kubo; Keiich Yasumoto
In daily life, lack of sleep is one of the main causes of poor concentration. To support effective napping, considered one of the good methods for recovering from insufficient sleep and enhancing a user's concentration, we propose hypnagogic time estimation using a heart rate sensor. Because heart rate sensors are already common, our method can be used widely and easily in daily life. Most existing sleep support systems aim to provide a comfortable wake-up by observing the sleep stage. Unlike these methods, we aim to provide an appropriate sleep duration by estimating the hypnagogic timing. Using various heart rate sensors, existing sleep support systems, and 64-channel electroencephalography, we investigated the relationship between various vital signs and sleep stages during a nap. Finally, we built a hypnagogic time estimation model using machine learning techniques.
Sharing the lights: exploration on teaching electronics for sensory augmentation development BIBAFull-Text 203-204
  Matthew Swarts; Nicholas Davis; Chih-Pin Hsiao; James Hallam
In the spring of 2014 a workshop on Sensory Augmentation was held at the National University of Singapore's Connective Ubiquitous Technology and Embodiments (CUTE) Center. During the workshop, three tutorials were presented followed by individual and team based projects. This paper takes a look at the tutorials developed for the workshop and suggests evaluation through enactive cognition theory.
Nested perspective: an art installation that intersects the physical and virtual social worlds BIBAFull-Text 205-206
  Nan-Ching Tai
This paper presents the design concept, theoretical foundation, and expected exhibition effect of the art installation SEE[N]. SEE[N] operates on the principle of linear perspective to construct a physical structure and offers perspectives of seeing from two different scales: seeing an individual and being seen by a social group. The installation engages the audience in active behavior, which is influenced by the newly developed social patterns. The audience begins by reading familiar symbols and is driven by curiosity to further explore the installation piece, concluding with gaining a full grasp of the image at the moment of taking a photo of the installation, attempting to share it on social networking sites.
Wearable text input interface using touch typing skills BIBAFull-Text 207-208
  Kazuya Murao
Many systems and devices for text input in wearable computing environments have been proposed and released thus far, but they are not commonly used, due to drawbacks such as slow input speed, long training periods, low usability, and low wearability. This paper proposes a wearable text input device that uses the touch typing skills users have already acquired on full-size keyboards. Users who have touch typing skills can input text without training.
Augmented non-visual distance sensing with the EyeCane BIBAFull-Text 209-210
  Galit Buchs; Shachar Maidenbaum; Amir Amedi
How can we sense distant objects without vision?
   Vision is the main distal sense used by humans; thus, distance and spatial perception are impaired for sighted individuals in the dark and for people with visual impairments.
   We suggest augmenting distance perception via other senses, such as auditory or haptic cues, and have created the EyeCane for this purpose. The EyeCane is a minimal Sensory Substitution Device that enables users to perform tasks such as distance estimation and obstacle detection and avoidance, non-visually, up to 5 m away.
   In the demonstration, visitors will receive a brief training with the device, and then use it to detect objects and estimate distances while blindfolded.
A wearable stimulation device for sharing and augmenting kinesthetic feedback BIBAFull-Text 211-212
  Jun Nishida; Kanako Takahashi; Kenji Suzuki
In this paper, we introduce a wearable stimulation device that is capable of simultaneously delivering functional electrical stimulation (FES) and measuring electromyogram (EMG) signals. We also propose dynamically adjustable stimulation over a wide range of frequencies (1-150 Hz), which allows the EMG-triggered FES device to be used in various scenarios. The developed prototype can be used not only as social playware for facilitating touch communication, but also as a tool for virtual experiences, such as the hand tremors of Parkinson's disease, and as an assistive tool for sports training. The methodology, preliminary experiments, and potential applications are described in this paper.
SHRUG: stroke haptic rehabilitation using gaming BIBAFull-Text 213-214
  Roshan Lalintha Peiris; Vikum Wijesinghe; Suranga Nanayakkara
This demonstration paper describes SHRUG, an interactive shoulder exerciser for rehabilitation. Firstly, the system's interactive and responsive elements provide just-in-time feedback to the patients and can also be used by the therapists to observe and personalise the rehabilitation program. Secondly, it has a gamified element, which is expected to engage and motivate the patient throughout the rehabilitation process. With this demonstration, the participants will be able to use the system and play the games introduced by SHRUG and observe the feedback.
Feel & see the globe: a thermal, interactive installation BIBAFull-Text 215-216
  Jochen Huber; Hasantha Malavipathirana; Yikun Wang; Xinyu Li; Jody C. Fu; Pattie Maes; Suranga Nanayakkara
"Feel & See the Globe" is a thermal, interactive installation. The central idea is to map temperature information in regions around the world from prehistoric, modern to futuristic times onto a low fidelity display. The display visually communicates global temperature rates and lets visitors experience the temperature physically through a tangible, thermal artifact. A pertinent educational aim is to inform and teach about global warming.
Towards effective interaction with omnidirectional videos using immersive virtual reality headsets BIBAFull-Text 217-218
  Benjamin Petry; Jochen Huber
Omnidirectional videos (ODVs), also known as panoramic videos, are an emerging new kind of media. ODVs are typically recorded with cameras that cover up to 360° of the recorded scene. Due to the limitations of human vision, ODVs cannot be viewed as-is. There is a large body of work that focuses on browsing ODVs on ordinary 2D displays, e.g. on an LCD using a desktop computer or on a smartphone. In this demonstration paper, we present a new approach for ODV browsing using an immersive, head-mounted system. The novelty of our implementation lies in decoupling navigation in time from navigation in space: navigation in time is mapped to gesture-based interactions, and navigation in space is mapped to head movements. We argue that this enables more natural ways of interacting with ODVs.
ChromaGlove: a wearable haptic feedback device for colour recognition BIBAFull-Text 219-220
  Pawel Wozniak; Kristina Knaving; Mohammad Obaid; Marta Gonzalez Carcedo; Ayça Ünlüer; Morten Fjeld
While colour-blindness is a disability that does not prevent those who have it from living fruitful lives, it does cause difficulties in everyday situations such as buying clothes. Colour-blind users may be helped by devices designed to integrate well with their daily routines. This paper introduces ChromaGlove, a wearable device that converts colour input into haptic output, thus enhancing the colour-sensing ability of the user. The device uses variable pulse widths on a vibration motor to communicate differences in hue. Data is obtained through an illuminated colour sensor placed on the palm. In the future, we plan to conduct studies that will show how well a haptic glove can be integrated into everyday actions.
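
The hue-to-pulse-width mapping can be sketched directly. The linear mapping and the bounds below are assumptions for illustration; the paper states only that hue differences are encoded as vibration pulse widths.

```python
import colorsys

def hue_to_pulse_ms(r: int, g: int, b: int,
                    min_ms: float = 50, max_ms: float = 500) -> float:
    """Map a sensed RGB colour's hue to a vibration pulse width (ms)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)  # h in 0..1
    return min_ms + h * (max_ms - min_ms)

print(hue_to_pulse_ms(200, 30, 30))  # reddish -> short pulses
print(hue_to_pulse_ms(30, 30, 200))  # bluish  -> long pulses
```
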
Mutual hand representation for telexistence robots using projected virtual hands BIBAFull-Text 221-222
  MHD Yamen Saraiji; Charith Lasantha Fernando; Kouta Minamizawa; Susumu Tachi
In this paper, a mutual body representation for telexistence robots that do not have physical arms is discussed. We propose a method of projecting the user's hands as a virtual superimposition that is seen not only by the user through an HMD, but also by remote participants, by projecting virtual hand images into the remote environment with a small projector aligned with the robot's eyes. These virtual hands are produced by capturing the user's hands from the first-person view (FPV) and segmenting them from the background. This method expands the physical body representation of the user and allows mutual body communication between the user and remote participants, while providing a better understanding of the user's hand motion and intended interactions in the remote place.
A(touch)ment: a smartphone extension for instantly sharing visual and tactile experience BIBAFull-Text 223-224
  Haruki Nakamura; Nobuhisa Hanamitsu; Kouta Minamizawa
Using social networks, users are able to share visual, auditory, and audio-visual information by making use of their smartphone's built-in camera and microphone. However, up until now, sharing the corresponding haptic experiences has not been possible. Here we present a haptic-information recording and display attachment for smartphones, 'a(touch)ment', that allows the user to instantly record (as shown in fig. 1) and share haptic experiences, for example in situations such as those illustrated in fig. 2.
CapacitiveMarker: novel interaction method using visual marker integrated with conductive pattern BIBAFull-Text 225-226
  Kohei Ikeda; Koji Tsukada
Visual markers have spatial limitations, requiring certain distances between the camera and the markers. Meanwhile, as capacitive multi-touch displays on mobile devices have become popular, many researchers have proposed interaction techniques using conductive patterns and a capacitive display. In this study, we propose a novel visual marker, "CapacitiveMarker", which can be recognized both by a camera and by a capacitive display. The CapacitiveMarker consists of two layered markers: a visual marker printed on a seal and a conductive pattern on a plastic film. We also propose a new interaction method using CapacitiveMarker and demonstrate it through applications.
Interactive instant replay: sharing sports experience using 360-degrees spherical images and haptic sensation based on the coupled body motion BIBAFull-Text 227-228
  Yusuke Mizushina; Wataru Fujiwara; Tomoaki Sudou; Charith Lasantha Fernando; Kouta Minamizawa; Susumu Tachi
We propose "Interactive Instant Replay" system that the user can experience previously recorded sports play with 360-degrees spherical images and haptic sensation. The user wears a HMD, holds a Haptic Racket and experience the first person sports play scene with his own coupled body motion. The system proposed in this paper could be integrated with existing television broadcasting data that can be used in large sports events such as 2020 Olympic, to experience the same sports play experience at home.