EMPress: Practical Hand Gesture Classification with Wrist-Mounted EMG and
Pressure Sensing
In-Air Gesture
/
McIntosh, Jess
/
McNeill, Charlie
/
Fraser, Mike
/
Kerber, Frederic
/
Löchtefeld, Markus
/
Krüger, Antonio
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.2332-2342
© Copyright 2016 ACM
Summary: Practical wearable gesture tracking requires that sensors align with
existing ergonomic device forms. We show that combining EMG and pressure data
sensed only at the wrist can support accurate classification of hand gestures.
A pilot study in which EMG electrode pressure varied unintentionally led us to
explore the approach in greater depth. The EMPress technique senses both
finger movements and rotations around the wrist and forearm, covering a wide
range of gestures, with an overall 10-fold cross validation classification
accuracy of 96%. We show that EMG is especially suited to sensing finger
movements, that pressure is suited to sensing wrist and forearm rotations, and
that their combination is significantly more accurate for a range of gestures than
either technique alone. The technique is well suited to existing wearable
device forms such as smart watches that are already mounted on the wrist.
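The headline result above is a 96% accuracy under 10-fold cross-validation over combined EMG and pressure features. As an illustration of that evaluation protocol only, the following sketch runs 10-fold cross-validation with synthetic data and a nearest-centroid classifier standing in for the paper's actual sensors and model (all feature dimensions and offsets are invented):

```python
import random

random.seed(0)

# Hypothetical stand-in for wrist-worn sensor features: each sample is a
# vector of EMG channel means plus pressure readings, with class-dependent
# offsets so the three gesture classes are separable.
def make_sample(label, emg_ch=4, press_ch=4):
    emg = [random.gauss(label * 1.5, 1.0) for _ in range(emg_ch)]
    press = [random.gauss(label * 1.0, 1.0) for _ in range(press_ch)]
    return emg + press, label

data = [make_sample(lbl) for lbl in range(3) for _ in range(40)]
random.shuffle(data)

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_predict(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(x, centroids[lbl]))

# 10-fold cross-validation: hold out each tenth of the data in turn,
# fit centroids on the rest, and average the held-out accuracies.
k = 10
fold_size = len(data) // k
accs = []
for f in range(k):
    test = data[f * fold_size:(f + 1) * fold_size]
    train = data[:f * fold_size] + data[(f + 1) * fold_size:]
    cents = {lbl: centroid([x for x, y in train if y == lbl])
             for lbl in range(3)}
    correct = sum(nearest_centroid_predict(x, cents) == y for x, y in test)
    accs.append(correct / len(test))

mean_acc = sum(accs) / k
```

The reported 96% figure comes from the paper's real sensor data; this toy setup merely shows how a single cross-validated accuracy number is produced.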
PowerShake: Power Transfer Interactions for Mobile Devices
How can Smartphones Fit Our Lives?
/
Worgan, Paul
/
Knibbe, Jarrod
/
Fraser, Mike
/
Plasencia, Diego Martinez
Proceedings of the ACM CHI'16 Conference on Human Factors in Computing
Systems
2016-05-07
v.1
p.4734-4745
© Copyright 2016 ACM
Summary: Current devices have limited battery life, typically lasting less than one
day. This can lead to situations where critical tasks, such as making an
emergency phone call, are not possible. Other devices, supporting different
functionality, may have sufficient battery life to enable this task. We present
PowerShake, an exploration of power as a shareable commodity between mobile
(and wearable) devices. PowerShake enables users to control the balance of
power levels in their own devices (intra-personal transactions) and to trade
power with others (inter-personal transactions) according to their ongoing
usage requirements. This paper demonstrates Wireless Power Transfer (WPT)
between mobile devices. PowerShake is simple to perform on the go; supports
ongoing/continuous tasks (transferring at 3.1 W); fits in a small form factor;
and complies with electromagnetic safety guidelines while providing
charging efficiency similar to other standards (48.2% vs. 51.2% for Qi). Based
on our proposed technical implementation, we run a series of workshops to
derive candidate designs for PowerShake enabled devices and interactions, and
to bring to light the social implications of power as a tradable asset.
Resonant Bits: Harmonic Interaction with Virtual Pendulums
Paper Session 2: Focus on Interaction
/
Bennett, Peter
/
Nolan, Stuart
/
Uttamchandani, Ved
/
Pages, Michael
/
Cater, Kirsten
/
Fraser, Mike
Proceedings of the 2015 International Conference on Tangible and Embedded
Interaction
2015-01-15
p.49-52
© Copyright 2015 ACM
Summary: This paper presents the concept of Resonant Bits, an interaction technique
for encouraging engaging, slow and skilful interaction with tangible, mobile
and ubiquitous devices. The technique is based on the resonant excitation of
harmonic oscillators and allows the exploration of a number of novel types of
tangible interaction including: ideomotor control, where subliminal
micro-movements accumulate over time to produce a visible outcome; indirect
tangible interaction, where a number of devices can be controlled
simultaneously through an intermediary object such as a table; and slow
interaction, with meditative and repetitive gestures being used for control.
The Resonant Bits concept is tested as an interaction method in a study where
participants resonate with virtual pendulums on a mobile device. The Harmonic
Tuner, a resonance-based music player, is presented as a simple example of
using resonant bits. Overall, our ambition in proposing the Resonant Bits
concept is to promote skilful, engaging and ultimately rewarding forms of
interaction with tangible devices that take time and patience to learn and
master.
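The core mechanism described above, resonant excitation of a harmonic oscillator by small periodic inputs, can be illustrated with a toy simulation: driving a damped oscillator at its natural frequency accumulates amplitude, while the same small input off-resonance barely moves it. All parameters below are invented for illustration, not taken from the paper:

```python
import math

def drive_oscillator(omega_n, omega_drive, steps=20000, dt=0.001, damping=0.05):
    """Integrate x'' = -omega_n^2 * x - c * x' + sin(omega_drive * t)
    with semi-implicit Euler; return the peak amplitude reached."""
    x, v = 0.0, 0.0
    peak = 0.0
    for i in range(steps):
        t = i * dt
        a = -omega_n ** 2 * x - damping * v + math.sin(omega_drive * t)
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Same forcing amplitude in both cases; only the driving frequency differs.
resonant = drive_oscillator(omega_n=2 * math.pi, omega_drive=2 * math.pi)
off = drive_oscillator(omega_n=2 * math.pi, omega_drive=3 * math.pi)
```

The resonant drive reaches a far larger peak amplitude than the off-resonance drive, which is the property the Resonant Bits interactions exploit: only deliberately timed, rhythm-matched gestures produce a visible outcome.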
VideoHandles: replicating gestures to search through action-camera video
Spatial gestures
/
Knibbe, Jarrod
/
Seah, Sue Ann
/
Fraser, Mike
Proceedings of the 2014 ACM Symposium on Spatial User Interaction
2014-10-04
p.50-53
© Copyright 2014 ACM
Summary: We present VideoHandles, a novel interaction technique to support rapid
review of wearable video camera data by re-performing gestures as a search
query. The availability of wearable video capture devices has led to a
significant increase in activity logging across a range of domains. However,
searching through and reviewing footage for data curation can be a laborious
and painstaking process. In this paper we showcase the use of gestures as
search queries to support review and navigation of video data. By exploring
example self-captured footage across a range of activities, we propose two
video data navigation styles using gestures: prospective gesture tagging and
retrospective gesture searching. We describe VideoHandles' interaction design,
motivation and results of a pilot study.
Quick and dirty: streamlined 3D scanning in archaeology
Multiple dimensions and displays
/
Knibbe, Jarrod
/
O'Hara, Kenton P.
/
Chrysanthi, Angeliki
/
Marshall, Mark T.
/
Bennett, Peter D.
/
Earl, Graeme
/
Izadi, Shahram
/
Fraser, Mike
Proceedings of ACM CSCW 2014 Conference on Computer-Supported Cooperative
Work and Social Computing
2014-02-15
v.1
p.1366-1376
© Copyright 2014 ACM
Summary: Capturing data is a key part of archaeological practice, whether for
preserving records or to aid interpretation. But the technologies used are
complex and expensive, resulting in time-consuming processes associated with
their use. These processes force a separation between ongoing interpretive work
and capture. Through two field studies we elicit more detail as to what is
important about this interpretive work and what might be gained through a
closer integration of capture technology with these practices. Drawing on these
insights, we go on to present a novel, portable, wireless 3D modeling system
that emphasizes "quick and dirty" capture. We discuss its design rationale in
relation to our field observations and evaluate this rationale further by
giving the system to archaeological experts to explore in a variety of
settings. While our device compromises on the resolution of traditional 3D
scanners, its support of interpretation through emphasis on real-time capture,
review and manipulability suggests it could be a valuable tool for the future
of archaeology.
m+pSpaces: virtual workspaces in the spatially-aware mobile environment
Body, space and motion
/
Cauchard, Jessica
/
Löchtefeld, Markus
/
Fraser, Mike
/
Krüger, Antonio
/
Subramanian, Sriram
Proceedings of the 14th Conference on Human-computer interaction with mobile
devices and services
2012-09-21
p.171-180
© Copyright 2012 ACM
Summary: We introduce spatially-aware virtual workspaces for the mobile environment.
The notion of virtual workspaces was initially conceived to alleviate mental
workload in desktop environments with limited display real-estate. Using
spatial properties of mobile devices, we translate this approach and illustrate
that mobile virtual workspaces greatly improve task performance for mobile
devices. In a first study, we compare our spatially-aware prototype (mSpaces)
to existing context switching methods for navigating amongst multiple tasks in
the mobile environment. We show that users are faster, make more accurate
decisions and require less mental and physical effort when using
spatially-aware prototypes. We furthermore prototype pSpaces and m+pSpaces, two
spatially-aware systems equipped with pico-projectors as auxiliary displays to
provide dual-display capability to the handheld device. A final study reveals
advantages of each of the different configurations and functionalities when
comparing all three prototypes. Drawing on these findings, we identify design
considerations to create, manipulate and manage spatially-aware virtual
workspaces in the mobile environment.
Augmenting spatial skills with mobile devices
Mobile computing & interaction
/
Boari, Doug
/
Fraser, Mike
/
Fraser, Danae Stanton
/
Cater, Kirsten
Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems
2012-05-05
v.1
p.1611-1620
© Copyright 2012 ACM
Summary: Mobile devices are increasingly providing novel ways for users to engage
with the spaces around them. However, there are few systematic studies of
enhancing spatial ability with mobile devices, and applications such as
turn-by-turn navigation systems have even been associated with a decline in
spatial skills. In this paper we present a study based on the 1971
Shepard-Metzler mental rotation test but performed on a mobile-phone handset
and a tablet PC. Our study extends the original experiment with the
incorporation of touch and tilt interaction techniques, in order to determine
if these affect the use and acquisition of spatial skills. Results suggest that
the task is performed faster, and with no significant difference in accuracy,
when participants rely on mental abilities rather than interaction techniques
to perform 3D rotations. We also find significant differences between tablet
and phone handset platforms under interactive conditions. We conclude that
applications on mobile devices could be designed to enhance rather than erode
spatial skills, by supporting the use of imagination to align real and virtual
content.
Surveying the extent of involvement in online academic dishonesty
(e-dishonesty) related practices among university students and the rationale
students provide: One university's experience
/
Sendag, Serkan
/
Duran, Mesut
/
Fraser, M. Robert
Computers in Human Behavior
2012-05
v.28
n.3
p.849-860
Keywords: Computer ethics
Keywords: Cyberethics
Keywords: Online academic integrity
Keywords: Media in education
© Copyright 2012 Elsevier Ltd.
Summary: This study reports data from a Midwestern university, investigating the
extent of involvement in online academic dishonesty practices (e-dishonesty)
among students and the rationale they provided. Involvement in and rationale
for e-dishonesty was studied across individual variables including academic
level, primary field of study, taking the university's academic integrity
tutorial, and taking online and hybrid courses. A total of 1153 students
participated in the study by completing a 44-item questionnaire. The findings
indicate that the extent of involvement in e-dishonesty practices was
significantly greater among freshmen than graduate students in most of the
subscales of the survey. In addition, the primary field of study showed a
significant relationship with both involvement in and rationale for
e-dishonesty. Students in education and the social sciences reported the
least involvement in e-dishonesty; engineering and physical sciences the most.
Completing the university's academic integrity tutorial did not significantly
affect e-dishonesty practices while taking online or hybrid courses had some
significant effect on e-dishonesty. The results highlight the need for early
intervention concerning academic integrity followed by an ongoing and
consistent effort throughout students' undergraduate and graduate experience.
ChronoTape: tangible timelines for family history
Fold unfold
/
Bennett, Peter
/
Fraser, Mike
/
Balaam, Madeline
Proceedings of the 6th International Conference on Tangible and Embedded
Interaction
2012
v.9
p.49-56
© Copyright 2012 ACM
Summary: An explosion in the availability of online records has led to surging
interest in genealogy. In this paper we explore the present state of
genealogical practice, with a particular focus on how the process of research
is recorded and later accessed by other researchers. We then present our
response, ChronoTape, a novel tangible interface for supporting family history
research. The ChronoTape is an example of a temporal tangible interface, an
interface designed to enable the tangible representation and control of time.
We use the ChronoTape to interrogate the value relationships between physical
and digital materials, personal and professional practices, and the ways that
records are produced, maintained and ultimately inherited. In contrast to
designs that support existing genealogical practice, ChronoTape captures and
embeds traces of the researcher within the document of their own research, in
three ways: (i) it ensures physical traces of digital research; (ii) it
generates personal material around the use of impersonal genealogical data;
(iii) it allows for graceful degradation of both its physical and digital
components in order to deliberately accommodate the passage of information into
the future.
Visual separation in mobile multi-display environments
Mobile
/
Cauchard, Jessica R.
/
Löchtefeld, Markus
/
Irani, Pourang
/
Schoening, Johannes
/
Krüger, Antonio
/
Fraser, Mike
/
Subramanian, Sriram
Proceedings of the 2011 ACM Symposium on User Interface Software and
Technology
2011-10-16
v.1
p.451-460
© Copyright 2011 ACM
Summary: Projector phones, handheld game consoles and many other mobile devices
increasingly include more than one display, and therefore present a new breed
of mobile Multi-Display Environments (MDEs) to users. Existing studies
illustrate the effects of visual separation between displays in MDEs and
suggest interaction techniques that mitigate these effects. Currently, mobile
devices with heterogeneous displays such as projector phones are often designed
without reference to visual separation issues; therefore it is critical to
establish whether concerns and opportunities raised in the existing MDE
literature apply to the emerging category of Mobile MDEs (MMDEs). This paper
investigates the effects of visual separation in the context of MMDEs and
contrasts these with fixed MDE results, and explores design factors for Mobile
MDEs. Our study uses a novel eye-tracking methodology for measuring switches in
visual context between displays and identifies that MMDEs offer increased
design flexibility over traditional MDEs in terms of visual separation. We
discuss these results and identify several design implications.
Cooperative gestures: effective signaling for humanoid robots
Paper session 2: affect from appearance & motion
/
Riek, Laurel D.
/
Rabinowitch, Tal-Chen
/
Bremner, Paul
/
Pipe, Anthony G.
/
Fraser, Mike
/
Robinson, Peter
Proceedings of the 5th ACM/IEEE International Conference on Human Robot
Interaction
2010-03-02
p.61-68
Keywords: affective robotics, cooperation, gestures, human-robot interaction
© Copyright 2010 ACM
Summary: Cooperative gestures are a key aspect of human-human pro-social interaction.
Thus, it is reasonable to expect that endowing humanoid robots with the ability
to use such gestures when interacting with humans would be useful. However,
while people are used to responding to such gestures expressed by other humans,
it is unclear how they might react to a robot making them. To explore this
topic, we conducted a within-subjects, video based laboratory experiment,
measuring time to cooperate with a humanoid robot making interactional
gestures. We manipulated the gesture type (beckon, give, shake hands), the
gesture style (smooth, abrupt), and the gesture orientation (front, side). We
also employed two measures of individual differences: negative attitudes toward
robots (NARS) and human gesture decoding ability (DANVA2-POS). Our results show
that people cooperate with abrupt gestures more quickly than smooth ones and
front-oriented gestures more quickly than those made to the side, people's
speed at decoding robot gestures is correlated with their ability to decode
human gestures, and negative attitudes toward robots are strongly correlated
with a decreased ability in decoding human gestures.
Taking shortcuts: embedded physical interfaces for spatial navigation
Tangible and embedded interaction -- in the lab and in the wild
/
Boari, Douglas
/
Fraser, Mike
Proceedings of the 3rd International Conference on Tangible and Embedded
Interaction
2009-02-18
p.189-196
Keywords: interactive embedded interfaces, physical embodiment, spatial navigation
© Copyright 2009 ACM
Summary: Designing for embodied physical interaction is just as important at a coarse
level of spatial navigation as in the minutiae of object exploration. We
created interactive embedded interfaces called 'Navitiles' that can be
suspended in a floor to support navigation of a building. Our design uses
capacitance and RFID sensors to determine users' location and LEDs to indicate
possible directions. We determine whether Navitile cues could help users
understand spatial relationships between points of interest. We based our study
on a previous experiment that used a simulated VR maze to test whether users
were able to exhibit 'shortcut' behaviour that would indicate the formation of
spatial maps. Our hypothesis was that the physicality of embodied spatial
navigation directed by the Navitiles in a real maze would enable users to
achieve similar spatial shortcut behaviours to those found in the virtual task.
We found significant evidence that sufficient spatial knowledge was acquired to
enable successful shortcut performance between unexplored routes. However,
further work is required to measure the effect of physical body movement on
spatial skills development.
Using actuated devices in location-aware systems
Mobile and tangible interaction
/
Fraser, Mike
/
Cater, Kirsten
/
Duff, Paul
Proceedings of the 2nd International Conference on Tangible and Embedded
Interaction
2008-02-18
p.19-26
Keywords: actuators, human-robot interaction, location awareness, physicality,
pointing, robotics
© Copyright 2008 ACM
Summary: Location-aware systems have traditionally left mobility to the user through
carrying, supporting and manipulating the device itself. This design choice has
limited the scale and style of device to corresponding weight and form
constraints. This paper presents a project introducing school children to
location aware systems. We observed that it is hard to notice, physically grasp
and simultaneously share these small personal devices in groups. These
behaviours are partly grounded in the physical device design, but also in the
location awareness model itself, which provides information 'right here' while
the children are looking around and about them. These observations lead us to
suggest the alternative model of pointing at locations so that they can be
noticed and experienced by groups in public places. We further build this
location model into the device itself by introducing actuated components from
robotics to make a location-aware device called 'Limbot' that can be physically
pointed. A preliminary study of the Limbot with the school children indicates
rich sharing behaviours, but that user control of actuation at all points is
critical to the ultimate success of our approach, and further exploration of
our location model is required.
The Distributed Work of Local Action: Interaction amongst virtually
collocated research teams
/
Tutt, Dylan
/
Hindmarsh, Jon
/
Shaukat, Muneeb
/
Fraser, Mike
Proceedings of the Tenth European Conference on Computer-Supported
Cooperative Work
2007-09-24
p.199-218
© Copyright 2007 Springer
Summary: Existing research on synchronous remote working in CSCW has highlighted the
troubles that can arise because actions at one site are (partially) unavailable
to remote colleagues. Such 'local action' is routinely characterised as a
nuisance, a distraction, subordinate and the like. This paper explores
interconnections between 'local action' and 'distributed work' in the case of a
research team virtually collocated through 'MiMeG'. MiMeG is an e-Social
Science tool that facilitates 'distributed data sessions' in which social
scientists are able to remotely collaborate on the real-time analysis of video
data. The data are visible and controllable in a shared workspace and
participants are additionally connected via audio conferencing. The findings
reveal that whilst the (partial) unavailability of local action is at times
problematic, it is also used as a resource for coordinating work. The paper
considers how local action is interactionally managed in distributed data
sessions and concludes by outlining implications of the analysis for the design
and study of technologies to support group-to-group collaboration.
Seconds matter: improving distributed coordination by tracking and
visualizing display trajectories
Distributed coordination
/
Fraser, Mike
/
McCarthy, Michael R.
/
Shaukat, Muneeb
/
Smith, Phillip
Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems
2007-04-28
v.1
p.1303-1312
© Copyright 2007 ACM
Summary: Pauses in distributed groupware activity can indicate anything from
technical latency through infrastructure failure to a participant's thoughtful
contemplation. Unraveling these ambiguities highlights mismatches between
unseen off-screen activities and on-screen cursor behaviors. In this paper we
suggest that groupware systems have typically been poor at representing
off-screen activities, and introduce the concept of display trajectories to
bridge the sensor gap between the display and its surrounding space. We
consider requirements for display trajectories using the distributed social
scientific analysis of video data as an example domain. Drawing on these
requirements, we prototype a freeform whiteboard pen tracking and visualization
technique around displays using ultrasound. We describe an experiment which
inspects the impact of display trajectories on remote response efficiency. Our
findings show that visualization of the display trajectory improves
participants' ability to coordinate their actions by one second per interaction
turn, reducing latency in organizing turn taking by a 'standard maximum'
conversation pause.
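The abstract describes tracking whiteboard pen movement around a display using ultrasound. A standard way to realise this is time-of-flight ranging from fixed receivers plus trilateration; the 2D sketch below shows the geometry (receiver layout and all coordinates are invented for illustration, not the described system's actual hardware):

```python
import math

def locate_pen(r1, r2, d1, d2):
    """Intersect two range circles to recover a pen position.
    r1, r2: known receiver (x, y) positions; d1, d2: measured distances.
    Returns the intersection point in front of the baseline (larger y)."""
    dx, dy = r2[0] - r1[0], r2[1] - r1[1]
    base = math.hypot(dx, dy)
    # Distance from r1 along the baseline to the foot of the chord.
    a = (d1 ** 2 - d2 ** 2 + base ** 2) / (2 * base)
    h = math.sqrt(max(d1 ** 2 - a ** 2, 0.0))
    mx, my = r1[0] + a * dx / base, r1[1] + a * dy / base
    # Two mirror solutions; keep the one in front of the baseline.
    p1 = (mx - h * dy / base, my + h * dx / base)
    p2 = (mx + h * dy / base, my - h * dx / base)
    return p1 if p1[1] >= p2[1] else p2

# Sanity check: synthesize distances from a known pen position and recover it.
pen = (0.3, 0.4)
rx1, rx2 = (0.0, 0.0), (1.0, 0.0)
d1 = math.hypot(pen[0] - rx1[0], pen[1] - rx1[1])
d2 = math.hypot(pen[0] - rx2[0], pen[1] - rx2[1])
est = locate_pen(rx1, rx2, d1, d2)
```

Repeating this per ultrasound ping yields the sequence of positions that forms a display trajectory; a real deployment would also need to handle timing synchronisation and measurement noise.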
Remote Collaboration Over Video Data: Towards Real-Time e-Social Science
/
Fraser, Mike
/
Hindmarsh, Jon
/
Best, Katie
/
Heath, Christian
/
Biegel, Greg
/
Greenhalgh, Chris
/
Reeves, Stuart
Computer Supported Cooperative Work
2006
v.15
n.4
p.257-279
Keywords: video analysis; e-social science; groupware; synchronous collaboration
© Copyright 2006 Springer
Summary: The design of distributed systems to support collaboration among groups of
scientists raises new networking challenges that grid middleware developers are
addressing. This field of development work, 'e-Science', is increasingly
recognising the critical need of understanding the ordinary day-to-day work of
doing research to inform design. We have investigated one particular area of
collaborative social scientific work -- the analysis of video data. Based on
interviews and observational studies, we discuss current practices of social
scientific work with digital video in three areas: Preparation for
collaboration; Control of data and application; and Annotation configurations
and techniques. For each, we describe how these requirements feature in our
design of a distributed video analysis system as part of the MiMeG project: our
security policy and distribution; the design of the control system; and
providing freeform annotation over data. Finally, we review our design in light
of initial use of the software between project partners; and discuss how we
might transform the spatial configuration of the system to support annotation
behaviour.
Designing the spectator experience
Public life
/
Reeves, Stuart
/
Benford, Steve
/
O'Malley, Claire
/
Fraser, Mike
Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems
2005-04-02
v.1
p.741-750
Summary: Interaction is increasingly a public affair, taking place in our theatres,
galleries, museums, exhibitions and on the city streets. This raises a new
design challenge for HCI - how should spectators experience a performer's
interaction with a computer? We classify public interfaces (including examples
from art, performance and exhibition design) according to the extent to which a
performer's manipulations of an interface and their resulting effects are
hidden, partially revealed, fully revealed or even amplified for spectators.
Our taxonomy uncovers four broad design strategies: 'secretive,' where
manipulations and effects are largely hidden; 'expressive,' where they tend to
be revealed enabling the spectator to fully appreciate the performer's
interaction; 'magical,' where effects are revealed but the manipulations that
caused them are hidden; and finally 'suspenseful,' where manipulations are
apparent but effects are only revealed as the spectator takes their turn.
Extending game participation with embodied reporting agents
/
Fielding, Dan
/
Fraser, Mike
/
Logan, Brian
/
Benford, Steve
Proceedings of the 2004 International Conference on Advances in Computer
Entertainment Technology
2004-09-02
p.100-108
© Copyright 2004 ACM
Summary: We introduce a multi-agent framework to generate reports of players'
activities within multi-player computer games so that other players who are
currently unable to participate can keep track of the activities of their
colleagues. We describe an initial implementation of our framework as an
extension to the Capture the Flag game within Unreal Tournament. We report the
results of a preliminary experiment that shows that embodied reporter agents
give varying coverage depending on deployment strategies used, and, in
particular, suggests that the dynamic assignment of reporter agents by an
editor agent can provide more effective coverage than static assignment
schemes. Finally, we explore future applications of this work including other
genres of games, the emergence of games as spectator sports, implications for
pervasive games as well as non-gaming applications.
Revealing delay in collaborative environments
/
Gutwin, Carl
/
Benford, Steve
/
Dyck, Jeff
/
Fraser, Mike
/
Vaghi, Ivan
/
Greenhalgh, Chris
Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems
2004-04-24
v.1
p.503-510
© Copyright 2004 ACM
Summary: Delay is an unavoidable reality in collaborative environments. We propose an
approach to dealing with delay in which 'decorators' are introduced into the
interface. Decorators show the presence, magnitude and effects of delay so that
participants can better understand its consequences and adopt their own natural
coping strategies. Two experiments with different decorators show that this
approach can significantly reduce errors in specific collaborative activities.
We conclude that revealing delays is one way in which groupware can benefit
from accepting and working with the reality of distributed systems, rather than
trying to maintain the illusion of copresent interaction.
Assembling history: Achieving coherent experiences with diverse technologies
/
Fraser, M.
/
Stanton, D.
/
Ng, K. H.
/
Benford, S.
/
O'Malley, C.
/
Bowers, J.
/
Taxen, G.
/
Ferris, K.
/
Hindmarsh, J.
Proceedings of the Eighth European Conference on Computer-Supported
Cooperative Work
2003-09-14
p.179-198
© Copyright 2003 Kluwer Academic Publishers
The augurscope: a mixed reality interface for outdoors
Contextual Displays
/
Schnadelbach, Holger
/
Koleva, Boriana
/
Flintham, Martin
/
Fraser, Mike
/
Izadi, Shahram
/
Chandler, Paul
/
Foster, Malcolm
/
Benford, Steve
/
Greenhalgh, Chris
/
Rodden, Tom
Proceedings of ACM CHI 2002 Conference on Human Factors in Computing Systems
2002-04-20
p.9-16
© Copyright 2002 Association for Computing Machinery
Summary: The augurscope is a portable mixed reality interface for outdoors. A
tripod-mounted display is wheeled to different locations and rotated and tilted
to view a virtual environment that is aligned with the physical background.
Video from an onboard camera is embedded into this virtual environment. Our
design encompasses physical form, interaction and the combination of a GPS
receiver, electronic compass, accelerometer and rotary encoder for tracking. An
initial application involves the public exploring a medieval castle from the
site of its modern replacement. Analysis of use reveals problems with lighting,
movement and relating virtual and physical viewpoints, and shows how
environmental factors and physical form affect interaction. We suggest that
problems might be accommodated by carefully constructing virtual and physical
content.
Unearthing Virtual History: Using Diverse Interfaces to Reveal Hidden
Virtual Worlds
Applications and Design Spaces
/
Benford, Steve
/
Bowers, John
/
Chandler, Paul
/
Ciolfi, Luigina
/
Flintham, Martin
/
Fraser, Mike
/
Greenhalgh, Chris
/
Hall, Tony
/
Hellström, Sten Olof
/
Izadi, Shahram
/
Rodden, Tom
/
Schnädelbach, Holger
/
Taylor, Ian
Proceedings of the 2001 International Conference on Ubiquitous Computing
2001-09-30
p.225-231
© Copyright 2001 Springer-Verlag
Summary: We describe an application in which museum visitors hunt for virtual history
outdoors, capture it, and bring it back indoors for detailed inspection. This
application provides visitors with ubiquitous access to a parallel virtual
world as they move through an extended physical space. Diverse devices,
including mobile wireless interfaces for locating hotspots of virtual activity
outdoors, provide radically different experiences of the virtual depending upon
location, task, and available equipment. Initial reflections suggest that the
physical design of such devices needs careful attention so as to encourage an
appropriate style of use. We also consider the extension of our experience to
support enacted scenes. Finally, we discuss potential benefits of using diverse
devices to make a shared underlying virtual world ubiquitously available
throughout physical space.
Collaboratively improvising magic: An approach to managing participation in
an on-line drama
/
Drozd, A.
/
Bowers, J.
/
Benford, S.
/
Greenhalgh, C.
/
Fraser, M.
Proceedings of the Seventh European Conference on Computer-Supported
Cooperative Work
2001-09-16
p.159-178
© Copyright 2001 Kluwer Academic Publishers
Exploiting Interactivity, Influence, Space and Time to Explore Non-Linear
Drama in Virtual Worlds
Designed Experiences/Experienced Designs
/
Craven, Mike
/
Taylor, Ian
/
Drozd, Adam
/
Purbrick, Jim
/
Greenhalgh, Chris
/
Benford, Steve
/
Fraser, Mike
/
Bowers, John
/
Jaa-Aro, Kai-Mikael
/
Lintermann, Bernd
/
Hoch, Michael
Proceedings of ACM CHI 2001 Conference on Human Factors in Computing Systems
2001-03-31
p.30-37
Keywords: entertainment applications, virtual reality
© Copyright 2001 ACM
Summary: We present four contrasting interfaces to allow multiple viewers to explore
3D recordings of dramas in on-line virtual worlds. The first is an on-line
promenade performance to an audience of avatars. The second is a form of
immersive cinema, with multiple simultaneous viewpoints. The third is a
tabletop projection surface that allows viewers to select detailed views from a
bird's-eye overview. The fourth is a linear television broadcast created by a
director or editor. A comparison of these examples shows how a viewing audience
can exploit four general resources - interactivity, influence, space, and time
- to make sense of complex, non-linear virtual drama. These resources provide
interaction designers with a general framework for defining the relationship
between the audience and the 3D content.
Orchestrating a Mixed Reality Performance
Designed Experiences/Experienced Designs
/
Koleva, Boriana
/
Taylor, Ian
/
Benford, Steve
/
Fraser, Mike
/
Greenhalgh, Chris
/
Schnadelbach, Holger
/
vom Lehn, Dirk
/
Heath, Christian
/
Row-Farr, Ju
/
Adams, Matt
Proceedings of ACM CHI 2001 Conference on Human Factors in Computing Systems
2001-03-31
p.38-45
Keywords: mixed reality, performance, traversable interfaces
© Copyright 2001 ACM
Summary: A study of a professional touring mixed reality performance called Desert
Rain yields insights into how performers orchestrate players' engagement in an
interactive experience. Six players at a time journey through an extended
physical and virtual set. Each sees a virtual world projected onto a screen
made from a fine water spray. This acts as a traversable interface, supporting
the illusion that performers physically pass between real and virtual worlds.
Live and video-based observations of Desert Rain, coupled with interviews with
players and the production team, have revealed how the performers create
conditions for the willing suspension of disbelief, and how they monitor and
intervene in the players' experience without breaking their engagement. This
involves carefully timed performances and "off-face" and "virtual"
interventions. In turn, these are supported by the ability to monitor players'
physical and virtual activity through asymmetric interfaces.