HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,542,961
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Nanayakkara_S* Results: 41 Sorted by: Date
Records: 1 to 25 of 41
[1] Augmented Winter Ski with AR HMD / Fan, Kevin / Seigneur, Jean-Marc / Guislain, Jonathan / Nanayakkara, Suranga / Inami, Masahiko Proceedings of the 2016 Augmented Human International Conference 2016-02-25 p.34
ACM Digital Library Link
Summary: At the time of writing, several affordable Head-Mounted Displays (HMDs) are about to be released to the mass market, most of them for Virtual Reality (VR, with the Oculus Rift, Samsung Gear...) but also for indoor Augmented Reality (AR) with the Hololens. We have investigated how to adapt an HMD such as the Oculus Rift for an outdoor AR ski slope. Rather than setting up physical obstacles such as poles, our system employs AR to render dynamic obstacles by different means. During the demo, skiers will wear a video-see-through HMD while trying to ski on a real ski slope where AR obstacles are rendered.

[2] Electrosmog Visualization through Augmented Blurry Vision / Fan, Kevin / Seigneur, Jean-Marc / Nanayakkara, Suranga / Inami, Masahiko Proceedings of the 2016 Augmented Human International Conference 2016-02-25 p.35
ACM Digital Library Link
Summary: Electrosmog is the electromagnetic radiation emitted by wireless technology such as Wi-Fi hotspots or cellular towers, and it poses a potential hazard to humans. Electrosmog is invisible, and we rely on detectors that report the level of electrosmog as a warning, such as a number. Our system detects the electrosmog level from the number of Wi-Fi networks and from connected cellular towers and their signal strengths, and presents it in an intuitive representation by blurring the vision of users wearing a Head-Mounted Display (HMD). The HMD displays the user's augmented surroundings in real time with blurriness, as though the electrosmog actually clouded the environment. For the demonstration, participants can walk around wearing a video-see-through HMD and observe their vision gradually blur as they approach our prepared dense wireless network.

[3] FingerReader: A Wearable Device to Explore Printed Text on the Go Accessibility at Home & on The Go / Shilkrot, Roy / Huber, Jochen / Ee, Wong Meng / Maes, Pattie / Nanayakkara, Suranga Chandima Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.2363-2372
ACM Digital Library Link
Summary: Accessing printed text in a mobile context is a major challenge for the blind. A preliminary study with blind people reveals numerous difficulties with existing state-of-the-art technologies, including problems with alignment, focus, accuracy, mobility and efficiency. In this paper, we present a finger-worn device, FingerReader, that assists blind users with reading printed text on the go. We introduce a novel computer vision algorithm for local-sequential text scanning that enables reading single lines or blocks of text, or skimming the text, with complementary, multimodal feedback. This system is implemented in a small finger-worn form factor that enables more manageable eyes-free operation with trivial setup. We offer findings from three studies performed to determine the usability of the FingerReader.

[4] zSense: Enabling Shallow Depth Gesture Recognition for Greater Input Expressivity on Smart Wearables Mid-Air Gestures and Interaction / Withana, Anusha / Peiris, Roshan / Samarasekara, Nipuna / Nanayakkara, Suranga Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.3661-3670
ACM Digital Library Link
Summary: In this paper we present zSense, which provides greater input expressivity for spatially limited devices such as smart wearables through a shallow-depth gesture recognition system using non-focused infrared sensors. To achieve this, we introduce a novel Non-linear Spatial Sampling (NSS) technique that significantly cuts down the number of required infrared sensors and emitters. These can be arranged in many different configurations; for example, the number of sensor-emitter units can be as few as one sensor and two emitters. We implemented different configurations of zSense on smart wearables such as smartwatches, smartglasses and smart rings. These configurations naturally fit the flat or curved surfaces of such devices, providing a wide scope of zSense-enabled application scenarios. Our evaluations reported over 94.8% gesture recognition accuracy across all configurations.
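The abstract does not disclose the recognition algorithm itself; as a purely hypothetical illustration, a shallow-depth gesture captured by a sparse set of non-focused IR sensors might be classified by nearest-template matching over a short window of sensor amplitudes (all gesture names and values below are invented, not taken from the paper):

```python
import math

# Hypothetical gesture templates: each gesture is a short time series of
# normalized readings from a single non-focused IR sensor.
TEMPLATES = {
    "swipe_left":  [0.1, 0.4, 0.9, 0.4, 0.1],
    "swipe_right": [0.9, 0.4, 0.1, 0.4, 0.9],
    "hover":       [0.5, 0.5, 0.5, 0.5, 0.5],
}

def classify(reading):
    """Return the template label nearest to `reading` (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda name: dist(TEMPLATES[name], reading))
```

A real system would of course learn from multiple sensor-emitter configurations rather than hand-written templates.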

[5] RippleTouch: initial exploration of a wave resonant based full body haptic interface Haptics and Exoskeletons / Withana, Anusha / Koyama, Shunsuke / Saakes, Daniel / Minamizawa, Kouta / Inami, Masahiko / Nanayakkara, Suranga Proceedings of the 2015 Augmented Human International Conference 2015-03-09 p.61-68
ACM Digital Library Link
Summary: We propose RippleTouch, a low-resolution haptic interface that is capable of providing haptic stimulation to multiple areas of the body via a single-point-of-contact actuator. The concept is based on the low-frequency acoustic wave propagation properties of the human body. By stimulating the body with different amplitude-modulated frequencies at a single contact point, we were able to dissipate the wave energy in a particular region of the body, creating a haptic stimulation without direct contact. The RippleTouch system was implemented on a regular chair, in which four bass-range speakers were mounted underneath the seat and driven by a simple stereo audio interface. The system was evaluated to investigate the effect of the frequency characteristics of the amplitude modulation system. Results demonstrate that we can effectively create haptic sensations at different parts of the body with a single contact point (i.e. the chair surface). We believe the RippleTouch concept would serve as a scalable solution for providing full-body haptic feedback in a variety of situations, including entertainment, communication, public spaces and vehicular applications.
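The driving signals described above are amplitude-modulated low-frequency tones. A minimal sketch of generating such a signal follows; the carrier and modulation frequencies are illustrative only, not values from the paper:

```python
import math

def am_signal(carrier_hz, mod_hz, duration_s, sample_rate=8000, depth=1.0):
    """Generate an amplitude-modulated sine tone as a list of floats in [-1, 1].

    The envelope (1 + depth*cos(2*pi*mod_hz*t)) / (1 + depth) scales a sine
    carrier, so the output never exceeds unit amplitude.
    """
    n = int(duration_s * sample_rate)
    out = []
    for i in range(n):
        t = i / sample_rate
        env = (1.0 + depth * math.cos(2 * math.pi * mod_hz * t)) / (1.0 + depth)
        out.append(env * math.sin(2 * math.pi * carrier_hz * t))
    return out
```

Such a buffer could be written to a stereo audio interface driving the seat speakers, with the modulation frequency chosen per target body region.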

[6] FootNote: designing a cost effective plantar pressure monitoring system for diabetic foot ulcer prevention Posters & Demonstrations / Yong, Kin Fuai / Forero, Juan Pablo / Foong, Shaohui / Nanayakkara, Suranga Proceedings of the 2015 Augmented Human International Conference 2015-03-09 p.167-168
ACM Digital Library Link
Summary: Diabetic Foot Ulcer (DFU) is one of the dangerous complications of Diabetes Mellitus that is notoriously progressive and high in recurrence. Peripheral neuropathy, or damage to the nerves in the foot, is the culprit that leads to DFU. Much research and commercial development has attempted to mitigate the condition by establishing an artificial feedback loop through in-shoe pressure-sensing solutions for patients. However, these solutions suffer from the inherent issues of analog sensors, prohibitive price tags and inflexibility in the choice of footwear. We approached these problems by designing a prototype with fabric digital sensors. The data showed promising potential for assertion frequency tracking and user activity recognition. Although the bigger challenge lies ahead -- to correlate the approximation by digital sensors with analog pressure readings -- we have demonstrated that an inexpensive, more versatile and flexible solution based on digital sensors for DFU prevention is indeed feasible.

[7] SHRUG: stroke haptic rehabilitation using gaming Posters & Demonstrations / Peiris, Roshan Lalintha / Wijesinghe, Vikum / Nanayakkara, Suranga Proceedings of the 2015 Augmented Human International Conference 2015-03-09 p.213-214
ACM Digital Library Link
Summary: This demonstration paper describes SHRUG, an interactive shoulder exerciser for rehabilitation. Firstly, the system's interactive and responsive elements provide just-in-time feedback to patients and can also be used by therapists to observe and personalise the rehabilitation program. Secondly, it has a gamified element, which is expected to engage and motivate the patient throughout the rehabilitation process. In this demonstration, participants will be able to use the system, play the games introduced by SHRUG and observe the feedback.

[8] Feel & see the globe: a thermal, interactive installation Posters & Demonstrations / Huber, Jochen / Malavipathirana, Hasantha / Wang, Yikun / Li, Xinyu / Fu, Jody C. / Maes, Pattie / Nanayakkara, Suranga Proceedings of the 2015 Augmented Human International Conference 2015-03-09 p.215-216
ACM Digital Library Link
Summary: "Feel & See the Globe" is a thermal, interactive installation. The central idea is to map temperature information for regions around the world, from prehistoric times through the present to projected futures, onto a low-fidelity display. The display visually communicates global temperature trends and lets visitors experience the temperature physically through a tangible, thermal artifact. A pertinent educational aim is to inform and teach about global warming.

[9] SparKubes: exploring the interplay between digital and physical spaces with minimalistic interfaces Physical -- virtual / Ortega-Avila, Santiago / Huber, Jochen / Janaka, Nuwan / Withana, Anusha / Fernando, Piyum / Nanayakkara, Suranga Proceedings of the 2014 Australian Computer-Human Interaction Conference 2014-12-02 p.204-207
ACM Digital Library Link
Summary: Tangible objects have seen ongoing integration into real-world settings, e.g. the classroom. These objects allow learners, for instance, to explore digital content in physical space and leverage the physicality of the interface for spatial interaction. In this paper, we present SparKubes, a set of stand-alone tangible objects that are coded with simple behaviors and do not require additional instrumentation or setup. This overcomes a variety of issues such as setting up a network connection and instrumenting the environment -- as long as one SparKube sees another, it "works". The contribution of this paper is three-fold: we (1) present the implementation of a minimalistic tangible platform as the basis for SparKubes, (2) depict a design space that covers a variety of interaction primitives and (3) show how these primitives can be combined to create and manipulate SparKube interfaces in the scope of two salient application scenarios: tangible widgets and the manipulation of information flow.

[10] I-Draw: towards a freehand drawing assistant Physical -- virtual / Fernando, Piyum / Peiris, Roshan Lalintha / Nanayakkara, Suranga Proceedings of the 2014 Australian Computer-Human Interaction Conference 2014-12-02 p.208-211
ACM Digital Library Link
Summary: In this paper we present I-Draw, a drawing tool to assist freehand drawing on physical surfaces. We explore the interaction design space that combines digital capabilities with the traditional drawing process. The I-Draw device has been conceptualised in terms of its interactive philosophy, features and affordances. We developed a proof-of-concept prototype of I-Draw and discuss future directions. We believe I-Draw will open new drawing possibilities between physical and digital spaces.

[11] EarPut: augmenting ear-worn devices for ear-based interaction User experience / Lissermann, Roman / Huber, Jochen / Hadjakos, Aristotelis / Nanayakkara, Suranga / Mühlhäuser, Max Proceedings of the 2014 Australian Computer-Human Interaction Conference 2014-12-02 p.300-307
ACM Digital Library Link
Summary: One of the pervasive challenges in mobile interaction is decreasing the visual demand of interfaces towards eyes-free interaction. In this paper, we focus on the unique affordances of the human ear to support one-handed and eyes-free mobile interaction. We present EarPut, a novel interface concept and hardware prototype, which unobtrusively augments a variety of accessories that are worn behind the ear (e.g. headsets or glasses) to instrument the human ear as an interactive surface. The contribution of this paper is three-fold. We contribute (i) results from a controlled experiment with 27 participants, providing empirical evidence that people are able to target salient regions on their ear effectively and precisely, (ii) a first, systematically derived design space for ear-based interaction and (iii) a set of proof-of-concept EarPut applications that leverage the design space and embrace mobile media navigation, mobile gaming and smart home interaction.

[12] SHRUG: stroke haptic rehabilitation using gaming Persuasion and health / Peiris, Roshan Lalintha / Janaka, Nuwan / De Silva, Deepthika / Nanayakkara, Suranga Proceedings of the 2014 Australian Computer-Human Interaction Conference 2014-12-02 p.380-383
ACM Digital Library Link
Summary: In this paper we present SHRUG, an interactive shoulder rehabilitation exerciser. With this work-in-progress system, we intend to (1) explore the effectiveness of providing interactive, just-in-time feedback to patients and therapists; (2) explore the effect of adding a gaming element on patients' motivation. The SHRUG prototype was developed in collaboration with rehabilitation therapists by augmenting their existing exercising system. We present the implementation details of the system and some initial reactions from the therapists on various aspects of the SHRUG prototype.

[13] Toward context-aware just-in-time information: micro-activity recognition of everyday objects Gaze and object recognition / Hettiarachchi, Anuruddha / Premalal, Anuruddha / Dias, Dileeka / Nanayakkara, Suranga Proceedings of the 2014 Australian Computer-Human Interaction Conference 2014-12-02 p.422-425
ACM Digital Library Link
Summary: Transferring computational tasks from user-worn devices to everyday objects allows users to focus freely on their regular non-computing tasks. Identifying micro-activities (short, repetitive activities that compose macro-level behavior) enables the understanding of subtle behavioral changes and the provision of just-in-time information without explicit user input. In this paper, we propose the concept of micro-activity recognition for augmented everyday objects and evaluate the applicability of machine learning algorithms that have previously been used for macro-level activity recognition. We outline a few proof-of-concept application scenarios that provide micro-activity-aware just-in-time information.
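As a toy illustration of the kind of machine-learning pipeline the abstract alludes to (the features, labels and values here are invented, not from the paper), micro-activities could be classified from windowed sensor data with a nearest-neighbour rule:

```python
def features(window):
    """Extract simple per-window features: mean level and peak-to-peak range."""
    return (sum(window) / len(window), max(window) - min(window))

def nn_predict(train, sample):
    """Nearest-neighbour prediction; `train` is a list of (features, label)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda fl: d2(fl[0], sample))[1]
```

A practical system would use richer features and a trained classifier, but the window-then-classify structure is the common shape of activity recognition.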

[14] PaperPixels: a toolkit to create paper-based displays Design and use / Peiris, Roshan Lalintha / Nanayakkara, Suranga Proceedings of the 2014 Australian Computer-Human Interaction Conference 2014-12-02 p.498-504
ACM Digital Library Link
Summary: In this paper we present PaperPixels, a toolkit for creating subtle and ambient animations on regular paper. The toolkit consists of two main components: (1) modularised plug-and-play elements (PaperPixels elements) that can be attached to the back of regular paper; (2) a GUI (graphical user interface) that allows users to stage the animation in a timeline format. A user simply draws on regular paper, attaches PaperPixels elements behind the regions that need to be animated, and specifies the sequence of appearing and disappearing by arranging icons on a simple GUI. Observations made during a workshop at a local maker faire showed the potential for PaperPixels to be integrated into many different applications, such as animated wallpapers and animated story books.

[15] Workshop on assistive augmentation Workshop summaries / Huber, Jochen / Rekimoto, Jun / Inami, Masahiko / Shilkrot, Roy / Maes, Pattie / Ee, Wong Meng / Pullin, Graham / Nanayakkara, Suranga Chandima Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.103-106
ACM Digital Library Link
Summary: Our senses are the dominant channel for perceiving the world around us, some more central than others, such as the sense of vision. Whether they have impairments or not, people often find themselves at the edge of sensorial capability and seek assistive or enhancing devices. We wish to put sensorial ability and disability on a continuum of usability for certain technology, rather than treat one or the other extreme as the focus.
    The overarching topic of the workshop proposed here is the design and development of assistive technology, user interfaces and interactions that seamlessly integrate with a user's mind, body and behavior, providing an enhanced perception. We call this "Assistive Augmentation".
    The workshop aims to establish conversation and idea exchange with researchers and practitioners at the junction of human-computer interfaces, assistive technology and human augmentation. The workshop will serve as a hub for the emerging community of assistive augmentation researchers.

[16] A wearable text-reading device for the visually-impaired Video showcase presentations / Shilkrot, Roy / Huber, Jochen / Liu, Connie / Maes, Pattie / Nanayakkara, Suranga Chandima Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.193-194
ACM Digital Library Link
Summary: Visually impaired people report numerous difficulties with accessing printed text using existing technology, including problems with alignment, focus, accuracy, mobility and efficiency. We present a finger-worn device, containing a camera, vibration motors and a microcontroller, that assists the visually impaired with effectively and efficiently reading paper-printed text in a manageable operation with little setup. We introduce a novel, local-sequential manner of scanning text which enables reading single lines or blocks of text, or skimming the text for important sections, while providing real-time auditory and tactile feedback.

[17] FingerReader: a wearable device to support text reading on the go Works-in-progress / Shilkrot, Roy / Huber, Jochen / Liu, Connie / Maes, Pattie / Nanayakkara, Suranga Chandima Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.2 p.2359-2364
ACM Digital Library Link
Summary: Visually impaired people report numerous difficulties with accessing printed text using existing technology, including problems with alignment, focus, accuracy, mobility and efficiency. We present a finger-worn device that assists the visually impaired with effectively and efficiently reading paper-printed text. We introduce a novel, local-sequential manner of scanning text which enables reading single lines or blocks of text, or skimming the text for important sections, while providing real-time auditory and tactile feedback. The design is motivated by preliminary studies with visually impaired people, and it is small-scale and mobile, which enables a more manageable operation with little setup.

[18] SpiderVision: extending the human field of view for augmented awareness 8. Super Perception / Fan, Kevin / Huber, Jochen / Nanayakkara, Suranga / Inami, Masahiko Proceedings of the 2014 Augmented Human International Conference 2014-03-07 p.47
ACM Digital Library Link
Summary: We present SpiderVision, a wearable device that extends the human field of view to augment a user's awareness of things happening behind their back. SpiderVision leverages a front and a back camera to enable users to focus on the front view while employing intelligent interface techniques to cue them about activity in the back view. The extended back view is blended in only when the scene captured by the back camera is analyzed to be dynamically changing, e.g. due to object movement. We explore factors that affect the blended extension, such as view abstraction and blending area. We contribute results of a user study that explores 1) whether users can perceive the extended field of view effectively, and 2) whether the extended field of view is considered a distraction. Quantitative analysis of the users' performance and qualitative observations of how users perceive the visual augmentation are described.
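The conditional blending described above can be sketched as frame differencing plus alpha blending. The flat-list frame representation, threshold and alpha value are simplifications for illustration, not the paper's implementation:

```python
def frame_change(prev, curr):
    """Mean absolute difference between two frames (flat lists of 0-255 ints)."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def blend_views(front, back_prev, back_curr, threshold=10.0, max_alpha=0.5):
    """Blend the back-camera view into the front view only when the back
    scene is changing (e.g. due to object movement); otherwise show the
    front view untouched."""
    alpha = max_alpha if frame_change(back_prev, back_curr) >= threshold else 0.0
    return [round((1 - alpha) * f + alpha * b) for f, b in zip(front, back_curr)]
```

A real pipeline would operate on camera images and vary alpha smoothly, but the static-vs-dynamic gating is the core idea.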

[19] SmartFinger: connecting devices, objects and people seamlessly Ubiquitous computing / Ransiri, Shanaka / Peiris, Roshan Lalintha / Yeo, Kian Peen / Nanayakkara, Suranga Proceedings of the 2013 Australian Computer-Human Interaction Conference 2013-11-25 p.359-362
ACM Digital Library Link
Summary: In this paper, we demonstrate a method to create a seamless information media 'channel' between the physical and digital worlds. Our prototype, SmartFinger, aims to achieve this goal with a finger-worn camera that continuously captures images for the extraction of information from our surroundings. With this metaphorical channel, we have created a software architecture that allows users to capture and interact with various entities in their surroundings. The interaction design space of SmartFinger is discussed in terms of smart-connection, smart-sharing and smart-extraction of information. We believe this work will create numerous possibilities for future exploration.

[20] SpeechPlay: composing and sharing expressive speech through visually augmented text Audio and speech / Yeo, Kian Peen / Nanayakkara, Suranga Proceedings of the 2013 Australian Computer-Human Interaction Conference 2013-11-25 p.565-568
ACM Digital Library Link
Summary: SpeechPlay allows users to create and share expressive synthetic voices in a fun and interactive manner. It promotes a new level of self-expression and public communication by adding expressiveness to plain text. Control of the prosody of the synthesized speech output is based on the visual appearance of the text, which can be manipulated with touch gestures. Users can create and modify content on their mobile phone (the SpeechPlay Mobile application) and publish and share their work on a large screen (SpeechPlay Surface). Initial user reactions suggest that the correlation between the visual appearance of a text phrase and the resulting audio was intuitive. While it is possible to make the speech output more expressive, users could also easily distort the naturalness of the voice in a fun manner. This could also be a useful tool for music composers and for training new musicians.
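The visual-appearance-to-prosody mapping could, as a hypothetical sketch, look like the following; the function name, parameters and the specific mapping (larger text raises pitch, bold raises volume) are invented for illustration, not taken from SpeechPlay:

```python
def prosody_from_style(base_pitch_hz, font_scale, bold=False):
    """Map visual text attributes to speech-synthesis prosody parameters.

    font_scale: rendered size relative to normal text (1.0 = unchanged).
    """
    return {
        "pitch_hz": round(base_pitch_hz * font_scale, 1),  # bigger text -> higher pitch
        "volume": 1.0 if bold else 0.7,                    # bold text -> louder speech
    }
```

The appeal of such a mapping is that touch gestures that resize or restyle text translate directly into audible changes.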

[21] StickEar: making everyday objects respond to sound Sensing / Yeo, Kian Peen / Nanayakkara, Suranga / Ransiri, Shanaka Proceedings of the 2013 ACM Symposium on User Interface Software and Technology 2013-10-08 v.1 p.221-226
ACM Digital Library Link
Summary: This paper presents StickEar, a system consisting of a network of distributed sticker-like sound-based sensor nodes, as a means of enabling sound-based interactions on everyday objects. StickEar encapsulates wireless sensor network technology in a form factor that is intuitive to reuse and redeploy. Each StickEar sensor node consists of a miniature-sized microphone and speaker that provide sound-based input/output capabilities. We discuss the interaction design space and the hardware design space of StickEar, which cut across domains such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and control of digital devices using sound. We implemented three applications to demonstrate the unique interaction capabilities of StickEar.
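The autonomous response to sound events mentioned above could be sketched as a windowed RMS threshold detector running on each node; the window size and threshold below are illustrative values, not StickEar's:

```python
import math

def rms(window):
    """Root-mean-square level of a window of audio samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_events(samples, window=4, threshold=0.3):
    """Return the start indices of non-overlapping windows whose RMS level
    exceeds the threshold, i.e. candidate sound events."""
    hits = []
    for i in range(0, len(samples) - window + 1, window):
        if rms(samples[i:i + window]) > threshold:
            hits.append(i)
    return hits
```

A detected event could then trigger a speaker response or a wireless notification to another node.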

[22] StickEar: augmenting objects and places wherever whenever Music and audio / Yeo, Kian Peen / Nanayakkara, Suranga Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.751-756
ACM Digital Library Link
Summary: Sticky notes provide a means of anchoring visual information to physical objects while having the versatility of being redeployable and reusable. StickEar encapsulates sensor network technology in the form factor of a sticky note with a tangible user interface, offering the same affordances of redeployability and reusability. It features a distributed set of network-enabled sound-based sensor nodes. StickEar is a multi-function input/output device that enables sound-based interactions for applications such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and control of digital devices using sound. In addition, multiple StickEars can interact with each other to perform novel input and output tasks. We believe this work provides non-expert users with an intuitive and seamless method of interacting with the environment and its artifacts through sound.

[23] StickEar: augmenting objects and places wherever whenever Video showcase presentations / Yeo, Kian Peen / Nanayakkara, Suranga Chandima Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.2893-2894
ACM Digital Library Link
Summary: Sticky notes provide a means of anchoring visual information to physical objects while having the versatility of being redeployable and reusable. StickEar encapsulates sensor network technology in the form factor of a sticky note with a tangible user interface, offering the same affordances of redeployability and reusability. It features a distributed set of network-enabled sound-based sensor nodes. StickEar is a multi-function input/output device that enables sound-based interactions for applications such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and control of digital devices using sound. In addition, multiple StickEars can interact with each other to perform novel input and output tasks. We believe this work provides non-expert users with an intuitive and seamless method of interacting with the environment and its artifacts through sound.

[24] FingerDraw: more than a digital paintbrush / Hettiarachchi, Anuruddha / Nanayakkara, Suranga / Yeo, Kian Peen / Shilkrot, Roy / Maes, Pattie Proceedings of the 2013 Augmented Human International Conference 2013-03-07 p.1-4
ACM Digital Library Link
Summary: Research in cognitive science shows that engaging in visual arts has great benefits for children, particularly when it allows them to bond with nature [7]. In this paper, we introduce FingerDraw, a novel drawing interface that aims to keep children connected to the physical environment by letting them use their surroundings as templates and color palette. The FingerDraw system consists of (1) a finger-worn input device [13] that allows children to upload visual content such as shapes, colors and textures that exist in the real world; (2) a tablet with a touch interface that serves as a digital canvas for drawing. In addition to real-time drawing activities, children can also collect a palette of colors and textures in the input device and later feed them into the drawing interface. Initial reactions from a case study indicated that the system could keep a child engaged with their surroundings for hours, drawing with the wide range of shapes, colors and patterns found in the natural environment.

[25] SmartFinger: an augmented finger as a seamless 'channel' between digital and physical objects / Ransiri, Shanaka / Nanayakkara, Suranga Proceedings of the 2013 Augmented Human International Conference 2013-03-07 p.5-8
ACM Digital Library Link
Summary: Connecting devices in the digital domain to exchange data is an essential everyday task. Additionally, our physical surroundings are full of valuable visual information. However, existing approaches for transferring digital content and for extracting information from physical objects require separate equipment. SmartFinger aims to create a seamless 'channel' between digital devices and the physical surroundings by using a finger-worn vision-based system. It is an always-available and intuitive interface for 'grasping' and semantically analyzing visual content from physical objects as well as for sharing media between digital devices. We hope that SmartFinger will lead to a seamless digital information 'channel' among all entities in the physical and digital worlds.