| Natural User Interfaces | | BIBA | Full-Text | 1 | |
| António Câmara | |||
| Recent developments in user-input technologies are changing the way we interact with digital screens. The mouse and the keyboard are being replaced by touch- and motion-based interfaces, increasingly known as Natural User Interfaces (NUI). YDreams has developed YVision, a platform that enables the development of natural user interfaces. YVision has a modular architecture matching YDreams technologies with the best of open-source third-party libraries. Our technologies emphasize the creation of smart interfaces using autonomous agents that go beyond the traditional reactive systems. YVision also includes computer vision algorithms for motion detection and the application of 3D depth sensing in rendering engines. NUI applications involve: data acquisition using various sensors that detect the user's motion and gestures; interpretation of sensor data; and presentation, the end visualization layer. YVision includes augmented reality capabilities as a visualization component, where images are captured from the real world and enhanced in real time with contextual information. Natural user interface applications, developed for both 2D and 3D depth sensing, will be presented for illustrative purposes. Applications include projects developed for clients such as Orange, Coca-Cola, Santander and Nike. Ongoing research projects focusing on digital signage and serious games will also be discussed. | |||
| The Future of Distributed Groups and Their Use of Social Media | | BIBA | Full-Text | 2 | |
| Mary Czerwinski | |||
| Distributed team field research has shown that shared group awareness, coordination and informal communication are the most common ways for teams to inform each other of progress. In addition, we have observed that poorly documented, informal communication causes a fragmented workday due to frequent interruptions and knowledge loss due to the passage of time and team attrition. Because informal communication has both advantages and disadvantages for information sharing, it merits deeper study to allow any proposed solution to preserve the good while reducing the bad. Over the past several years, we have conducted a series of studies at Microsoft Corporation and beyond to document the nature of group conversations and communications. Based on surveys, lab studies, field studies and interviews, we have begun to develop a suite of tools that allow groups, both co-located and distributed, to stay more aware of their colleagues' actions, get on board with a new team more efficiently, and engage with each other at optimal times. Examples of many of these tools will be discussed, as will our progress in transitioning these ideas into real products. | |||
| Opportunities for Proxemic Interactions in Ubicomp (Keynote) | | BIBAK | Full-Text | 3-10 | |
| Saul Greenberg | |||
| In this keynote presentation, I describe and illustrate proxemic
interactions as realized in several projects in my laboratory. My goal is to
advocate proxemics as a more natural way of mediating inter-entity interactions
in ubiquitous computing environments, while still cautioning about the many
pitfalls around its use. Keywords: Proxemic interactions; ubiquitous computing; interaction techniques; ubicomp
ecologies | |||
| Voice Games: Investigation Into the Use of Non-speech Voice Input for Making Computer Games More Accessible | | BIBAK | Full-Text | 11-29 | |
| Susumu Harada; Jacob O. Wobbrock; James A. Landay | |||
| We conducted a quantitative experiment to determine the performance
characteristics of non-speech vocalization for discrete input generation in
comparison to existing speech and keyboard input methods. The results from the
study validated our hypothesis that non-speech voice input can offer
significantly faster discrete input compared to a speech-based input method by
as much as 50%. Based on this and other promising results from the study, we
built a prototype system called the Voice Game Controller that augments
traditional speech-based input methods with non-speech voice input methods to
make computer games originally designed for the keyboard and mouse playable
using voice only. Our preliminary evaluation of the prototype indicates that
the Voice Game Controller greatly expands the scope of computer games that can
be played hands-free using just voice, to include games that were difficult or
impractical to play using previous speech-based methods. Keywords: Computer games; accessible games; speech recognition; non-speech
vocalization | |||
| GraVVITAS: Generic Multi-touch Presentation of Accessible Graphics | | BIBAK | Full-Text | 30-48 | |
| Cagatay Goncu; Kim Marriott | |||
| Access to graphics and other two-dimensional information is still severely
limited for people who are blind. We present a new multimodal computer tool,
GraVVITAS, for presenting accessible graphics. It uses a multi-touch display
for tracking the position of the user's fingers augmented with haptic feedback
for the fingers provided by small vibrating motors, and audio feedback for
navigation and to provide non-geometric information about graphic elements. We
believe GraVVITAS is the first practical, generic, low cost approach to
providing refreshable accessible graphics. We have used a participatory design
process with blind participants and a final evaluation of the tool shows that
they can use it to understand a variety of graphics -- tables, line graphs, and
floorplans. Keywords: graphics; accessibility; multi-touch; audio; speech; haptic | |||
| Designing a Playful Communication Support Tool for Persons with Aphasia | | BIBAK | Full-Text | 49-56 | |
| Abdullah Al Mahmud; Idowu I. B. I. Ayoola; Jean-Bernard Martens | |||
| Many studies have investigated ways to support communication with people
with aphasia. Here, a new concept is developed for people with non-severe
aphasia in a way that accesses the emotional and unaware layer of a
conversation and then communicates certain information to the partner, hence
introducing new dynamics and structure to a conversation. We present the
concept with detailed design and expert evaluation results. Keywords: Aphasia; Accessibility; Storytelling; Contextual interview; Assistive
technology | |||
| How to Make Numerical Information Accessible: Experimental Identification of Simplification Strategies | | BIBAK | Full-Text | 57-64 | |
| Susana Bautista; Raquel Hervás; Pablo Gervás; Richard Power; Sandra Williams | |||
| Public information services and documents should be accessible to the widest
possible readership. Information in newspapers often takes the form of
numerical expressions which pose comprehension problems for people with limited
education. A first possible approach to solve this important social problem is
making numerical information accessible by rewriting difficult numerical
expressions in a simpler way. To obtain guidelines for performing this task
automatically, we have carried out a survey in which experts in numeracy were
asked to simplify a range of proportion expressions, with three readerships in
mind: (a) people who did not understand percentages; (b) people who did not
understand decimals; (c) more generally, people with poor numeracy. Responses
were consistent with our intuitions about how common values are considered
simpler and how the value of the original expression influences the chosen
simplification. Keywords: numerical information; simplification strategies | |||
| Blind People and Mobile Keypads: Accounting for Individual Differences | | BIBAK | Full-Text | 65-82 | |
| Tiago João Vieira Guerreiro; João Oliveira; João Benedito; Hugo Nicolau; Joaquim A. Jorge; Daniel Gonçalves | |||
| No two persons are alike. We usually ignore this diversity as we have the
capability to adapt and, without noticing, become experts in interfaces that
were probably misadjusted to begin with. This adaptation is not always within the
user's reach. One neglected group is the blind. Age of blindness onset, age,
cognitive, and sensory abilities are some characteristics that diverge between
users. Regardless, all are presented with the same methods ignoring their
capabilities and needs. Interaction with mobile devices is highly visually
demanding, which widens the gap between blind people. Herein, we present studies
performed with 13 blind people, consisting of key acquisition tasks with 10
mobile devices. Results show that different capability levels have a significant
impact on user performance and that this impact is related to the device and
its demands. It is paramount to understand mobile interaction demands and
relate them with the users' capabilities, towards inclusive design. Keywords: Individual Differences; Mobile Accessibility; Blind; Mobile Device; User
Assessments | |||
| Elderly User Evaluation of Mobile Touchscreen Interactions | | BIBAK | Full-Text | 83-99 | |
| Masatomo Kobayashi; Atsushi Hiyama; Takahiro Miura; Chieko Asakawa; Michitaka Hirose; Tohru Ifukube | |||
| Smartphones with touchscreen-based interfaces are increasingly used by
non-technical groups including the elderly. However, application developers
have little understanding of how senior users interact with their products and
of how to design senior-friendly interfaces. As an initial study to assess
standard mobile touchscreen interfaces for the elderly, we conducted
performance measurements and observational evaluations of 20 elderly
participants. The tasks included performing basic gestures such as taps, drags,
and pinching motions and using basic interactive components such as software
keyboards and photo viewers. We found that mobile touchscreens were generally
easy for the elderly to use and a week's experience generally improved their
proficiency. However, careful observations identified several typical problems
that should be addressed in future interfaces. We discuss the implications of
our experiments, seeking to provide informal guidelines for application
developers to design better interfaces for elderly people. Keywords: Mobile; Smartphones; Touchscreens; Gestures; Aging; Elderly; Senior
Citizens; User Evaluation | |||
| BrailleType: Unleashing Braille over Touch Screen Mobile Phones | | BIBAK | Full-Text | 100-107 | |
| João Oliveira; Tiago João Vieira Guerreiro; Hugo Nicolau; Joaquim A. Jorge; Daniel Gonçalves | |||
| The emergence of touch screen devices poses a new set of challenges
regarding text-entry. These are more obvious when considering blind people, as
touch screens lack the tactile feedback they are used to when interacting with
devices. The available solutions to enable non-visual text-entry resort to a
wide set of targets, complex interaction techniques or unfamiliar layouts. We
propose BrailleType, a text-entry method based on the Braille alphabet.
BrailleType avoids multi-touch gestures in favor of a more simple single-finger
interaction, featuring few and large targets. We performed a user study with
fifteen blind subjects, to assess this method's performance against Apple's
VoiceOver approach. BrailleType, although slower, was significantly easier and
less error prone. Results suggest that the target users would have a smoother
adaptation to BrailleType than to other more complex methods. Keywords: Blind; Braille; Mobile Devices; Text-Entry; Touch screens | |||
| Potential Pricing Discrimination Due to Inaccessible Web Sites | | BIBAK | Full-Text | 108-114 | |
| Jonathan Lazar; Brian Wentz; Matthew Bogdan; Edrick Clowney; Matthew Davis; Joseph Guiffo; Danial Gunnarsson; Dustin Hanks; John Harris; Behnjay Holt; Mark Kitchin; Mark Motayne; Roslin Nzokou; Leela Sedaghat; Kathryn Stern | |||
| Although tools and design guidelines exist to make web sites accessible, a
majority of web sites continue to be inaccessible. When a web site offers
special prices that are available only on the web site (not the physical
store), and the web site itself is inaccessible, this can lead to
discriminatory pricing, where people with disabilities could end up paying
higher prices than people without disabilities who can access the web site and
take advantage of the online-only prices. This research examined whether 10 of
the top e-commerce web sites which offer online-only price specials are
accessible. The results revealed that there were multiple categories of
accessibility violations found on all of the evaluated web sites. Keywords: discrimination; web accessibility; disabilities; e-commerce | |||
| Measuring Immersion and Affect in a Brain-Computer Interface Game | | BIBAK | Full-Text | 115-128 | |
| Gido Hakvoort; Hayrettin Gürkök; Danny Plass-Oude Bos; Michel Obbink; Mannes Poel | |||
| Brain-computer interfaces (BCIs) have been widely used in medical
applications to facilitate making selections. However, whether they are
suitable for recreational applications is unclear as they have rarely been
evaluated for user experience. As the scope of the BCI applications is
expanding from medical to recreational use, the expectations of BCIs are also
changing. Although the performance of BCIs is still important, finding suitable
BCI modalities and investigating their influence on user experience demand more
and more attention. In this study a BCI selection method and a comparable
non-BCI selection method were integrated into a computer game to evaluate user
experience in terms of immersion and affect. An experiment with seventeen
participants showed that the BCI selection method was more immersive and
positively affective than the non-BCI selection method. Participants also
seemed to be more indulgent towards the BCI selection method. Keywords: Brain-computer interfaces; affective computing; immersion; games | |||
| Understanding Goal Setting Behavior in the Context of Energy Consumption Reduction | | BIBAK | Full-Text | 129-143 | |
| Michelle Scott; Mary Barreto; Filipe Quintal; Ian Oakley | |||
| Home energy use represents a significant proportion of total consumption. A
growing research area is considering how to help everyday users consume less.
However, simply determining how to best reduce consumption remains a
challenging task for many users. Drawing on goal setting theory, this paper
presents two lab studies (based on the presentation of detailed scenarios and
the solicitation of goal selections for the individuals depicted) in order to
better understand how users make such decisions. It reveals a preference for
goals that are perceived to be easy and specific, rather than those known to be
effective (e.g. those that reduce energy consumption) or generic. Goal setting
theory suggests that easy goals lead to low levels of commitment and
motivation, indicating that such choices may be doubly ineffective. Ultimately, this
paper contributes to a better understanding of users' goal selections and
argues this is a prerequisite to effectively supporting users in reducing
resource consumption. Keywords: Sustainability; Goal-Setting; Motivation; Energy Consumption | |||
| Designing a Context-Aware Architecture for Emotionally Engaging Mobile Storytelling | | BIBAK | Full-Text | 144-151 | |
| Fabio Pittarello | |||
| This work illustrates the design of a context-aware software architecture
supporting the narration of interactive stories for mobile users. The
peculiarity of this work is the use of an extended set of context dimensions,
including the surrounding environment and the user social network, for
enhancing the engagement and the emotional impact on the users experiencing the
story. Keywords: context-awareness; emotionally engaging interaction; mobile devices; social
network; storytelling | |||
| Towards Emotional Interaction: Using Movies to Automatically Learn Users' Emotional States | | BIBAK | Full-Text | 152-161 | |
| Eva Oliveira; Mitchel Benovoy; Nuno Ribeiro; Teresa Chambel | |||
| The HCI community is actively seeking novel methodologies to gain insight
into the user's experience during interaction with both the application and the
content. We propose an emotional recognition engine capable of automatically
recognizing a set of human emotional states using psychophysiological measures
of the autonomic nervous system, including galvanic skin response,
respiration, and heart rate. A novel pattern recognition system, based on
discriminant analysis and support vector machine classifiers is trained using
movies' scenes selected to induce emotions ranging from the positive to the
negative valence dimension, including happiness, anger, disgust, sadness, and
fear. In this paper we introduce an emotion recognition system and evaluate its
accuracy by presenting the results of an experiment conducted with three
physiologic sensors. Keywords: Affective computing; Emotion-aware systems; Human-centered design;
Psychophysiological measures; Pattern-recognition; Discriminant analysis;
Support vector machine classifiers; Movies classification and recommendation | |||
| Motion and Attention in a Kinetic Videoconferencing Proxy | | BIBAK | Full-Text | 162-180 | |
| David Sirkin; Gina Venolia; John C. Tang; George G. Robertson; Taemie Kim; Kori Inkpen; Mara Sedlins; Bongshin Lee; Mike Sinclair | |||
| Compared to collocated interaction, videoconferencing disrupts the ability
to use gaze and gestures to mediate interaction, direct reactions to specific
people, and provide a sense of presence for the satellite (i.e., remote)
participant. We developed a kinetic videoconferencing proxy with a swiveling
display screen to indicate which direction the satellite participant was
looking. Our goal was to compare two alternative motion control conditions, in
which the satellite participant directed the display screen's motion either
explicitly (aiming the direction of the display with a mouse) or implicitly
(with the screen following the satellite participant's head turns). We then
explored the effectiveness of this prototype compared to a typical stationary
video display in a lab study. We found that both motion conditions resulted in
communication patterns that indicate higher engagement in conversation, more
accurate responses to the satellite participant's deictic questions (i.e.,
"What do you think?"), and higher user rankings. We also discovered tradeoffs
in attention and clarity between explicit versus implicit control, a tension in
how motion toward one person can exclude other people, and ways that swiveling
motion provides attention awareness, even without direct eye contact. Keywords: Video-mediated communication; videoconferencing; gaze awareness; proxy;
telepresence | |||
| Making Sense of Communication Associated with Artifacts during Early Design Activity | | BIBAK | Full-Text | 181-198 | |
| Moushumi Sharmin; Brian P. Bailey | |||
| Communication associated with artifacts serves a critical role in the
creation, refinement, and selection of conceptual ideas. Despite the close
relationship between ideas and surrounding communication, effective integration
of these two types of design materials is not well supported by existing design
tools -- resulting in ad-hoc and ineffective strategies for managing
communication during the design process. In this paper, we report the results
of a contextual inquiry (N=15) aimed at understanding communication practices,
their role in the design process, and strategies utilized by designers to manage
and utilize communication outcomes in relation to artifacts. Our findings show
that more than 50% of early design activity consists of three categories of
communication (information seeking, brainstorming, and feedback), and that
communication practice varies as a function of expertise and organizational and
social factors. Additionally, novice and freelance designers exhibit greater
reliance on online forums to find suitable communication partners to generate
and refine ideas, whereas experts communicate with other experts or team members
for information collection and sharing. Keywords: Design; Artifacts; Communication; Ideation; User Study | |||
| Children's Interactions in an Asynchronous Video Mediated Communication Environment | | BIBAK | Full-Text | 199-206 | |
| Michail N. Giannakos; Konstantinos Chorianopoulos; Paul Johns; Kori Inkpen; Honglu Du | |||
| Video-mediated communication (VMC) has become a feasible way to connect
people in remote places for work and play. Nevertheless, little research has
been done with regard to children and VMC. In this paper, we explore the
behavior of a group of children, who exchanged video messages in an informal
context. In particular, we have analyzed 386 videos over a period of 11 weeks,
which were exchanged by 30 students of 4th and 5th grade from USA and Greece.
We found that the number of views and the duration of a video message
significantly depend on the gender of the viewer and creator. Most notably,
girls created more messages, but boys viewed their own messages more. Finally,
there are video messages with numerous views, which indicates that some videos
have content qualities beyond the communication message itself. Overall, the
practical implications of these findings indicate that the developers of
asynchronous VMC should consider functionalities for preserving some of the
video messages. Keywords: Asynchronous; Video-Mediated Communication; Children; Thread; Gender | |||
| Effects of Automated Transcription Delay on Non-native Speakers' Comprehension in Real-Time Computer-Mediated Communication | | BIBAK | Full-Text | 207-214 | |
| Lin Yao; Yingxin Pan; Danning Jiang | |||
| Real-time transcription generated by automated speech recognition (ASR)
technologies with a reasonably high accuracy has been demonstrated to be
valuable in facilitating non-native speakers' comprehension in real-time
communication. Besides errors, time delay often exists in automated
transcription due to technical problems. This study focuses on how the time
delay of transcription impacts non-native speakers' comprehension performance
and user experience. The experiment design simulated a one-way
computer-mediated communication scenario, where comprehension performance and
user experiences in 3 transcription conditions (no transcript; perfect
transcripts with a 2-second delay; and transcripts with a 10% word-error-rate
and a 2-second delay) were compared. The results showed that the participants
can benefit from the transcription with a 2-second time delay, as their
comprehension performance in this condition was improved compared with the
no-transcript condition. However, the transcription presented with delay was
found to have negative effects on user experience. In the final part of the
paper, implications for further system development and design are discussed. Keywords: Real-time transcription; Delay; Comprehension performance; User experience | |||
| Redundancy and Collaboration in Wikibooks | | BIBAK | Full-Text | 215-232 | |
| Ilaria Liccardi; Olivier Chapuis; Ching-man Au Yeung; Wendy E. Mackay | |||
| This paper investigates how Wikibooks authors collaborate to create
high-quality books. We combined Information Retrieval and statistical
techniques to examine the complete multi-year lifecycle of over 50 high-quality
Wikibooks. We found that: 1. The presence of redundant material is negatively
correlated with collaboration mechanisms; 2. For most books, over 50% of the
content is written by a small core of authors; and 3. Use of collaborative
tools (predicted pages and talk pages) is significantly correlated with
patterns of redundancy. Non-redundant books are well-planned from the beginning
and require fewer talk pages to reach high-quality status. Initially redundant
books begin with high redundancy, which drops as soon as authors use
coordination tools to restructure the content. Suddenly redundant books display
sudden bursts of redundancy that must be resolved, requiring significantly more
discussion to reach high-quality status. These findings suggest that providing
core authors with effective tools for visualizing and removing redundant
material may increase writing speed and improve the book's ultimate quality. Keywords: Collaborative writing; text redundancy; coordination mechanisms | |||
| Towards Interoperability in Municipal Government: A Study of Information Sharing Practices | | BIBAK | Full-Text | 233-247 | |
| Stacy F. Hobson; Rangachari Anand; Jeaha Yang; Juhnyoung Lee | |||
| Municipal governments rely heavily on the sharing of data between
departments as a means to provide high-quality and timely service to their
citizens. Common tasks such as parcel renovations require the involvement of
multiple departments such as Building, Planning, Zoning, Assessment and Tax to
achieve the ultimate goals. However, the software applications used to support
the work of these departments are provided by independent software vendors and
are not integrated with one another. Therefore, municipal employees rely
heavily on manual methods for data sharing. We conducted a study of 12
municipal governments to understand their information sharing needs and
practices. We focused on the interaction and information sharing within and
between municipal departments. Our findings can be used to shape future
research on e-government initiatives and interoperability of municipal
applications. Keywords: Information Sharing; Cooperative Work; Municipal Government; e-government | |||
| An Integrated Communication and Collaboration Platform for Distributed Scientific Workgroups | | BIBAK | Full-Text | 248-258 | |
| Christian Müller-Tomfelde; Jane Li; Alex Hyatt | |||
| We present the design, technologies and user study of an advanced
collaboration platform which integrates life-size video conferencing and group
interactions on a large shared workspace. The platform has been developed to
support the diagnostics and research scientists in an animal health laboratory
to work collaboratively across a physical containment barrier. We present the
design rationale for this enhanced shared workspace which allows the sharing of
a range of data and synchronous interactions on computer applications in this
complex work setting. This cannot simply be supported by the "board-room" type
of "telepresence" technology. We describe the technical solution which has
focused on the ergonomic aspect and, importantly, the integration of
communication and collaboration features in the shared workspace. The platform
has been under routine use and a user study has shown that these design
considerations are critical for supporting the distributed scientific
collaborations and may also be applicable to other scientific domains. Keywords: Human-Work Interaction Design; Interaction with Small or Large Displays;
Computer-Supported Cooperative Work | |||
| IdeaTracker: An Interactive Visualization Supporting Collaboration and Consensus Building in Online Interface Design Discussions | | BIBAK | Full-Text | 259-276 | |
| Roshanak Zilouchian Moghaddam; Brian P. Bailey; Christina M. Poon | |||
| With the rapid growth of open source and other geographically distributed
software projects, more interface design discussions are occurring online.
Participation in such discussions typically occurs via issue management systems
or similar interactive discussion forums. While such systems have a low
learning curve, they do not support key elements of design discussion such as
comparing alternatives, maintaining awareness of the arguments for and against
the alternatives, or building consensus. To better understand these and other
challenges, we conducted a study of online interface design discussions. The
study consisted of analyzing a large corpus of online discussion content and
conducting interviews with designer and developer participants. We discuss the
findings of our study and use them to motivate the implementation of an
interactive visualization tool -- IdeaTracker. The tool offers explicit support
for tracking and comparing ideas and gaining an abstract summary of the overall
discussion as well as specific alternatives. It also provides a voting system
to support consensus building. The tool extracts and visualizes useful
information from the discussions that would otherwise be hidden, without
interfering with the current method of participation. Our tool is compatible
with the issue management system of one open source project but can be extended
for others. Initial user feedback is positive and confirms the need for an
alternative visual representation of interface design discussions online. Keywords: Design; open source software; user interface; visualization | |||
| What You See Is What You (Can) Get? Designing for Process Transparency in Financial Advisory Encounters | | BIBAK | Full-Text | 277-294 | |
| Philipp Nussbaumer; Inu Matter | |||
| In this paper, we report on a study to establish process transparency in
service encounters of financial advisors and their clients. To support their
interaction, we implemented a cooperative software system for tabletops,
building on transparency patterns suggested by the literature. In evaluations,
however, we found that our design did not improve the perceived transparency
and comprehensibility. Introducing the IT artifact into the advisory encounter failed to
enhance the client's overall experience and even seemed to negatively influence
the client's perception of the advisory process. Using the representational
guidance of depicting the process and its activities as a navigable,
interactive map made clients believe that interactions with their advisor were
restricted to the system's functionality, thus expecting that what they see is
all they can get. Keywords: process transparency; collaboration; advisory; tabletops | |||
| A Framework for Supporting Joint Interpersonal Attention in Distributed Groups | | BIBAK | Full-Text | 295-312 | |
| Jeremy P. Birnholtz; Johnathon Schultz; Matthew Lepage; Carl Gutwin | |||
| Informal interactions are a key element of workgroup communication, but have
proven difficult to support in distributed groups. One reason for this is that
existing systems have focused either on novel means for gathering information
about the availability or activity of others, or on allowing people to display
their activities to others. There has not been sufficient focus on the
interplay between these activities. This interplay is important, however,
because mutual awareness and attention are the mechanisms by which people
negotiate the start of conversations. In this paper, we present the
OpenMessenger Framework, a system and design framework rooted in the assumption
that individual behaviors occur in anticipation of and/or in response to the
behavior of others. We describe both the system architecture, and specific
examples of the novel implementations it enables. These include techniques for
coupling gathering behaviors with display behaviors, and for integrating these
into user workspaces via peripheral displays and gaze tracking. Keywords: Awareness; Attention; Interaction; CMC; CSCW | |||
| Do Teams Achieve Usability Goals? Evaluating Goal Achievement with Usability Goals Setting Tool | | BIBAK | Full-Text | 313-330 | |
| Anirudha Joshi; N. L. Sarda | |||
| Do teams achieve important usability goals most of the time? Further, is
goal achievement uniform or are practitioners more mindful of some goals than
others? This paper presents an empirical study on usability goal achievement in
industry projects. We used the Usability Goal Setting Tool (UGT), a recommender
system that helps teams set, prioritize, and evaluate usability goals. The
practitioner creates profiles for the product and its users. Based on these
inputs, UGT helps the practitioner break down high-level usability goals into
more specific goal parameters and provides recommendations, examples, and
guidelines to assign weights to these parameters. UGT suggests strategies to
evaluate goal parameters after the design is ready and assign them scores. UGT
was used to collect data from 65 projects in the Indian software industry in
which participants assigned weights and scores to the goal parameters. The 30
goal parameters suggested by UGT were found to be internally reliable and to
have acceptable granularity and coverage. It was observed that goal parameter
weights and scores correlated, but only moderately. Another interesting
observation was that more than a third of the important goal parameters did not
score well. We identify eight goal parameters that are typically high-weighted
but have poor weight-score correlations. We call these "latent but important"
goal parameters. Design teams will do well to pay closer attention to these
goal parameters during projects. Keywords: Usability goals achievement; usability goal parameters; latent goals; design
tools; methods | |||
| Supporting Window Switching with Spatially Consistent Thumbnail Zones: Design and Evaluation | | BIBAK | Full-Text | 331-347 | |
| Susanne Tak; Joey Scarr; Carl Gutwin; Andy Cockburn | |||
| Computer users switch between applications and windows all day, but finding
the target window can be difficult, particularly when the total number of
windows is high. We describe the design and evaluation of a new window switcher
called SCOTZ (for Spatially Consistent Thumbnail Zones). SCOTZ is a window
switching interface which shows all windows grouped by application and
allocates more space to the most frequently revisited applications. The two key
design principles of SCOTZ are (1) predictability of window locations, and (2)
improved accessibility of recently and frequently used windows. We describe the
design and features of SCOTZ, and present the findings from qualitative and
empirical studies which demonstrate that SCOTZ yields performance and
preference benefits over existing window switching tools. Keywords: window switching; revisitation; spatial stability; predictability | |||
| Evaluating Commonsense Knowledge with a Computer Game | | BIBA | Full-Text | 348-355 | |
| Juan Fernando Mancilla-Caceres; Eyal Amir | |||
| Collecting commonsense knowledge from freely available text can reduce the cost and effort of creating large knowledge bases. For the acquired knowledge to be useful, we must ensure that it is correct, and that it carries information about its relevance and about the context in which it can be considered commonsense. In this paper, we design and evaluate an online game that classifies, using the input from players, text extracted from the web as either commonsense knowledge, domain-specific knowledge, or nonsense. A continuous scale is defined to classify the knowledge as nonsense or commonsense, which is later used during the evaluation of the data to identify which knowledge is reliable and which needs further qualification. When comparing our results to those of other similar knowledge acquisition systems, our game performs better with respect to coverage, redundancy, and reliability of the commonsense knowledge acquired. | |||
| Remote Usability Testing Using Eyetracking | | BIBAK | Full-Text | 356-361 | |
| Piotr Chynal; Jerzy M. Szymanski | |||
| In this paper we present a low-cost method of using eyetracking to perform
remote usability tests. Remote usability testing makes it possible to test users
in their natural environment. Eyetracking is one of the most popular techniques
for usability testing in the laboratory environment. We decided to try to use
this technique in remote tests, using a standard web camera with freeware
software. Our experiment showed that this method is not perfect, but it could
be a good addition to the standard remote tests, and a foundation for further
development. Keywords: Eyetracking; Usability; Remote Usability Testing; Human-Computer Interaction | |||
| A Means-End Analysis of Consumers' Perceptions of Virtual World Affordances for E-commerce | | BIBAK | Full-Text | 362-379 | |
| Minh Quang Tran; Shailey Minocha; Dave Roberts; Angus Laing; Darren Langdridge | |||
| Virtual worlds are three-dimensional (3D) persistent multi-user online
environments where users interact through avatars. The affordances of virtual
worlds can be useful for business-to-consumer e-commerce. Moreover, affordances
of virtual worlds can complement affordances of websites to provide consumers
with an enhanced e-commerce experience. We investigated which affordances of
virtual worlds can enhance consumers' experiences on e-commerce websites. We
conducted laddering interviews with 30 virtual world consumers to understand
their perceptions of virtual world affordances. A means-end analysis was then
applied to the interview data. The results suggest co-presence, product
discovery, 3D product experience, greater interactivity with products and
sociability are some of the key virtual world affordances for consumers. We
discuss theoretical implications of the research using dimensions from the
Technology Acceptance Model. We also discuss practical implications, such as
how virtual world affordances can be incorporated into the design of e-commerce
websites. Keywords: Consumer experience; e-commerce; interaction design; laddering interviews;
means-end analysis; qualitative research; user experience; virtual worlds | |||
| Improving Users' Consistency When Recalling Location Sharing Preferences | | BIBAK | Full-Text | 380-387 | |
| Jayant Venkatanathan; Denzil Ferreira; Michael Benisch; Jialiu Lin; Evangelos Karapanos; Vassilis Kostakos; Norman M. Sadeh; Eran Toch | |||
| This paper presents a study of the effect of one instance of contextual
cues, trajectory reminders, on the recollection of location sharing preferences
elicited using a retrospective protocol. Trajectory reminders are user
interface elements that indicate, for a particular location in a person's trail
across a city, the locations visited before and after it. The results of the study
show that reminding users where they have been before and after a specific
visited location can elicit more consistent responses in terms of stated
location sharing preferences for that location visit. This paper argues that
trajectory reminders are useful when collecting preference data with
retrospective protocols because they can improve the quality of the collected
data. Keywords: Location sharing preferences; consistency; retrospective protocols | |||
| Navigation Time Variability: Measuring Menu Navigation Errors | | BIBAK | Full-Text | 388-395 | |
| Krystian Samp; Stefan Decker | |||
| The subject of errors in menu studies is typically limited to reporting
error rates (i.e., the number of clicks missing target items) or even
completely neglected. This paper investigates menu navigation errors in more
depth. We propose the Navigation Time Variability (NTV) measure to capture the
total severity of navigation errors. The severity is understood as time needed
to recover from the errors committed. We present a menu study demonstrating the use
and value of the new measure. Keywords: Navigation Time Variability; errors; navigation; menus | |||
| Challenges in Designing Inter-usable Systems | | BIBAK | Full-Text | 396-403 | |
| Ville Antila; Alfred Lui | |||
| Interactive systems are increasingly interconnected across different devices
and platforms. The challenge for interaction designers is to meet the
requirements of consistency and continuity across these platforms to ensure the
inter-usability of the system. In this paper we investigate the current
challenges designers are facing in this emerging field of interactive
systems. Through semi-structured interviews with 17 professionals working on
interaction design in different domains, we probed into the current
methodologies and the practical challenges in their daily tasks. The identified
challenges include but are not limited to: the inefficiency of using low-fi
prototypes in a lab environment to test inter-usability and the challenges of
"seeing the big picture" when designing a part of an interconnected system. Keywords: Interaction design; cross-platform systems; inter-usability | |||
| Directed Cultural Probes: Detecting Barriers in the Usage of Public Transportation | | BIBAK | Full-Text | 404-411 | |
| Susanne Schmehl; Stephanie Deutsch; Johann Schrammel; Lucas Paletta; Manfred Tscheligi | |||
| In this paper we describe the application of a variation of cultural probing
for identifying barriers in the use of public transportation for target groups
with visual, cognitive or language-related handicaps. To be able to better
focus on the targeted aspect -- the barriers -- we applied modifications to the
traditional cultural probing approach: Users were encouraged to generate data
related to the targeted aspect. We found that this approach can produce focused
results that can be analysed quickly and can help to overcome obstacles related to
limitations in verbal skills or expressiveness of the user. Keywords: User requirements; Cultural Probes; Directed Cultural Probes; elderly;
immigrants; functional illiterates; public transportation | |||
| Image Retrieval with Semantic Sketches | | BIBAK | Full-Text | 412-425 | |
| David Engel; Christian Herdtweck; Björn Browatzki; Cristóbal Curio | |||
| With increasingly large image databases, searching in them becomes an ever
more difficult endeavor. Consequently, there is a need for advanced tools for
image retrieval in a webscale context. Searching by tags becomes intractable in
such scenarios as large numbers of images will correspond to queries such as
"car and house and street". We present a novel approach that allows a user to
search for images based on semantic sketches that describe the desired
composition of the image. Our system operates on images with labels for a few
high-level object categories, allowing us to search very fast with a minimal
memory footprint. We employ a structure similar to random decision forests
which provides a data-driven partitioning of the image space, enabling a search
in logarithmic time with respect to the number of images. This makes our system
applicable for large scale image search problems. We performed a user study
that demonstrates the validity and usability of our approach. Keywords: Content-Based Image Retrieval; Sketch Interface; Semantic Brushes; Real-Time
Application; User-Study | |||
| Mixer: Mixed-Initiative Data Retrieval and Integration by Example | | BIBAK | Full-Text | 426-443 | |
| Steven Gardiner; Anthony Tomasic; John Zimmerman; Rafae Aziz; Kathryn Rivard | |||
| Office administrators are frequently asked to create ad hoc reports based on
web accessible data. The web contains the desired data but does not allow
efficient access in the way the administrator needs, prompting a tedious and
labor-intensive task of retrieving and integrating the required data. Mixer is
a programming-by-demonstration (PBD) tool empowering administrators to
construct ad hoc reports from diverse web sources without tedious piecemeal
labor. Mixer's design builds on the exploration into end user conceptualization
of data retrieval tasks from our previous Wizard-of-Oz study [39], and
incorporates insights from mixed-initiative researchers into collaboration
between end users and software agents. This paper justifies the design
decisions that drive Mixer, focusing on general lessons for designers of
programming-by-demonstration systems targeting nonprogrammers. We evaluate
Mixer by performing a user study showing that administrators are able to
accomplish programming tasks without needing to understand programming concepts
for data retrieval and integration. Keywords: programming by demonstration; end user programming; mixed initiative; data
integration | |||
| Speaking to See: A Feasibility Study of Voice-Assisted Visual Search | | BIBAK | Full-Text | 444-451 | |
| Victor Kaptelinin; Herje Wåhlen | |||
| The paper presents the concept, implementation, and a feasibility study of a
user interface technique, named VAVS ("voice-assisted visual search"). VAVS
employs the user's voice input to assist the user in searching for objects of
interest in complex displays. User voice input is compared with attributes of
visually presented objects and, if there is a match, the matching object is
highlighted to help the user visually locate the object. The paper discusses
differences between, on the one hand, VAVS and, on the other hand, voice
commands and multimodal input techniques. An interactive prototype implementing
the VAVS concept and employing a standard voice recognition program is
described. The paper reports an empirical study, in which an object location
task was carried out with and without VAVS. It was found that the VAVS
condition was associated with higher performance and user satisfaction. The
paper concludes with a discussion of directions for future work. Keywords: Voice recognition; visual search; multimodal input; voice command | |||
| Analysing the Playground: Sensitizing Concepts to Inform Systems That Promote Playful Interaction | | BIBAK | Full-Text | 452-469 | |
| Stefan Rennick Egglestone; Brendan Walker; Joe Marshall; Steve Benford; Derek McAuley | |||
| Playful interaction is an important topic in HCI research, and there is an
ongoing debate about the fundamental principles that underpin playful systems.
This paper makes a contribution to this debate by outlining a set of
sensitizing concepts which have emerged from an analysis of interaction in the
playground; these help explain its appeal to children, and have been selected
for their potential to inspire the design of future playful systems. These
concepts have emerged from the analysis of material collected during a
structured workshop which was organized by the authors, and which was attended
by a group of experts. They have also been applied to the design of Breathless,
a playful interactive system which has recently been deployed by the authors,
and which represents an unusual evolution of the playground swing. The paper
concludes with a number of reflections inspired by Breathless. These have been
structured through the use of the concepts as an analytical tool. Keywords: Playground; playful interaction; sensitizing concepts | |||
| Comparative Feedback in the Street: Exposing Residential Energy Consumption on House Façades | | BIBAK | Full-Text | 470-488 | |
| Andrew Vande Moere; Martin Tomitsch; Monika Hoinkis; Elmar Trefz; Silje Johansen; Allison Jones | |||
| This study investigates the impact of revealing the changes in daily
residential energy consumption of individual households on their respective
house façades. While energy feedback devices are now commercially
available, still little is known about the potential of making such private
information publicly available in order to encourage various forms of social
involvement, such as peer pressure or healthy competition. This paper reports
on the design rationale of a custom-made chalkboard that conveys different
visualizations of household energy consumption, which were updated daily by
hand. An in-situ, between-subject study was conducted during which the effects
of such a public display were compared with two different control groups over a
total period of 7 weeks. The competitive aspects of the public display led to
more sustained behavior change and more effective energy conservation, as some
graphical depictions such as a historical line graph raised awareness about
consumption behavior, and the public character of the display prompted
discussions in the wider community. The paper concludes with several
considerations for the design of public displays, and of public displays of
household energy consumption in particular. Keywords: persuasive computing; public display; urban screen; visualization;
sustainability; interaction design; urban computing | |||
| Are First Impressions about Websites Only Related to Visual Appeal? | | BIBAK | Full-Text | 489-496 | |
| Eleftherios Papachristos; Nikolaos M. Avouris | |||
| This paper investigates whether immediate impressions about websites
influence only perceptions of attractiveness. The evaluative constructs of
perceived usability, credibility and novelty were investigated alongside visual
appeal in an experimental setting in which users evaluated 20 website
screenshots in two phases. The websites were rated by the participants after
viewing time of 500 ms in the first phase and with no time limit in the second.
Within-website and within-rater consistency were examined in order to determine
whether extremely short time periods are enough to quickly form stable opinions
about high level evaluative constructs besides visual appeal. We confirmed that
quick and stable visual appeal judgments were made without the need of
elaborate investigations and found evidence that this is also true for novelty.
Usability and credibility judgments were found to be less consistent but nonetheless
noteworthy. Keywords: Webpage design; aesthetic evaluation; credibility; visual appeal; perceived
usability | |||
| You Can Wear It, But Do They Want to Share It or Stare at It? | | BIBAK | Full-Text | 497-504 | |
| Arto Puikkonen; Anu Lehtiö; Antti Virolainen | |||
| Wearable technologies are often used for supporting our daily lives instead
of aiming to be entertaining. Yet it is in our daily lives that clothing is
used to highlight our personas and engage others. In this paper, we describe
what type of social acceptance issues might be worth considering when it comes
to entertaining and engaging wearable technology. Our user study with 10
participants was conducted by wearing a T-shirt that served as a display for an
online game. The participants wore the T-shirt in their everyday surroundings.
We gained a preliminary understanding of people's reactions and the suitability
of this type of wearable technology for everyday usage. Our results indicate
that established social boundaries for inappropriate attention influence the
spectator experience with performative wearable technologies. Keywords: Performative Wearable Devices; Social Interaction; Game Spectatorship | |||
| Design and Evaluation of Interaction Technology for Medical Team Meetings | | BIBAK | Full-Text | 505-522 | |
| Alex Olwal; Oscar Frykholm; Kristina Groth; Jonas Moll | |||
| Multi-disciplinary team meetings (MDTMs) are essential in health-care, where
medical specialists discuss diagnosis and treatment of patients. We introduce a
prototype multi-display groupware system, intended to augment the discussions
of medical imagery, through a range of input mechanisms, multi-user interfaces
and interaction techniques on multi-touch devices and pen-based technologies.
Observations of MDTMs, as well as interviews and observations of surgeons and
radiologists, serve as a foundation for guidelines and a set of implemented
techniques. We present a detailed analysis of a study where the techniques'
potential was explored with radiologists and surgeons of different specialties
and varying expertise. The results show that the implemented technologies have
the potential to bring numerous benefits to the team meetings with minimal
modification to the current workflow. We discuss how they can augment the
expressiveness and communication between meeting participants, facilitate
understanding for novices, and improve remote collaboration. Keywords: Medical team meetings; collaboration; single-display groupware;
multi-display groupware; multi-touch; pen; mobile Note: Errata for this article: http://link.springer.com/chapter/10.1007/978-3-642-23774-4_51 | |||
| How Technology Influences the Therapeutic Process: A Comparative Field Evaluation of Augmented Reality and In Vivo Exposure Therapy for Phobia of Small Animals | | BIBAK | Full-Text | 523-540 | |
| Maja Wrzesien; Jean-Marie Burkhardt; Mariano Alcañiz Raya; Cristina Botella | |||
| In Vivo Exposure Therapy (IVET) has been a recommended protocol for the
treatment of specific phobias. More recently, several studies have suggested
that Augmented Reality Exposure Therapy (ARET) is a potentially effective
technology in this field. The objective of this paper is to report the
preliminary results of a comparative analysis of ARET and IVET applied to the
treatment of phobia of small animals. To analyze participants' activity, we
have adopted a multidisciplinary and mixed perspective based on clinical and
user-centered approaches. These pilot results show that ARET and IVET are both
clinically effective. Both therapies produce a significant reduction in the
clinical outcome measures and allow the clients to interact with a real phobic
stimulus after the therapeutic session. The results also show some main
differences between technology-mediated therapy and traditional non-mediated
therapy. We discuss these results in terms of future design and evaluation
guidelines for Mental Health technologies. Keywords: Mental health; augmented reality; field evaluation | |||
| You've Covered: Designing for In-shift Handoffs in Medical Practice | | BIBAK | Full-Text | 541-558 | |
| Yunan Chen | |||
| Handoffs are moments of critical transition in which clinicians engage to
maintain continuous coverage of patient care. This paper reports on an
observational study of continuous coverage in an Emergency Department (ED),
where three types of handoffs that occur during the same shift were identified:
lunch breaks, ad hoc breaks and high workloads. The findings show these
"in-shift handoffs" are managed not only through temporal linear coordination,
but also through the local coordination among nurses working nearby. In-shift
handoffs are crucial to maintaining continuous coverage in hospital settings.
However, insufficient understanding of in-shift handoffs in Electronic Medical
Record (EMR) system design may lead to a separation of information and responsibility,
and an illusion of communication in patient care. The findings of this study
call for attention to in-shift handoffs in future system design and for
improving the traditional handoff process through the coordination of local
awareness during ED work. Keywords: In-shift Handoffs; Electronic Medical Record (EMR); Emergency Departments;
Non-working Moments; Design | |||
| A Taxonomy of Microinteractions: Defining Microgestures Based on Ergonomic and Scenario-Dependent Requirements | | BIBAK | Full-Text | 559-575 | |
| Katrin Wolf; Anja Naumann; Michael Rohs; Jörg Müller | |||
| This paper explores how microgestures can allow us to execute a secondary
task, for example controlling mobile applications, without interrupting the
manual primary task, for instance, driving a car. In order to design
microgestures iteratively, we interviewed sports- and physiotherapists while
asking them to use task related props, such as a steering wheel, a cash card,
and a pen for simulating driving a car, an ATM scenario, and a drawing task.
The primary objective here is to define microgestures that are easily
performable without interrupting or interfering with the primary task. Using expert
interviews, we developed a taxonomy that classifies these gestures according to
their task context. We also assessed the ergonomic and attentional attributes
that influence the feasibility and task suitability of microinteractions, and
evaluated the level of resources they require. Accordingly, we defined 21
microgestures that allow performing microinteractions within a manual, dual
task context. Our taxonomy poses a basis for designing microinteraction
techniques. Keywords: gestures; microinteractions; dual-task; multitask; interruption | |||
| Unifying Events from Multiple Devices for Interpreting User Intentions through Natural Gestures | | BIBAK | Full-Text | 576-590 | |
| Pablo Llinás; Manuel García-Herranz; Pablo A. Haya; Germán Montoro | |||
| As technology evolves (e.g. 3D cameras, accelerometers, multitouch surfaces,
etc.) new gestural interaction methods are becoming part of the everyday use of
computational devices. This trend forces practitioners to develop applications
for each interaction method individually. This paper tackles the problem of
interpreting gestures in scenarios with multiple interaction methods, by focusing
on the abstract gesture rather than on the technology or technologies used to
generate it. This article describes the Flash Library for Interpreting Natural
Gestures (FLING), a framework for developing multi-gestural applications
integrated and running in different gestural-platforms. By offering an
architecture for the integration and unification of different types of
interaction, FLING eases scalability while presenting an environment for rapid
prototyping by novice multi-gestural programmers. Throughout the article we
analyse the benefits of this approach, comparing it with state of the art
technologies, describe the framework architecture, and present several examples
of applications and experiences of use. Keywords: FLING framework; Multi-touch interface; multiple input peripherals;
application development | |||
| SimpleFlow: Enhancing Gestural Interaction with Gesture Prediction, Abbreviation and Autocompletion | | BIBA | Full-Text | 591-608 | |
| Mike Bennett; Kevin McCarthy; Sile O'Modhrain; Barry Smyth | |||
| Gestural interfaces are now a familiar mode of user interaction and gestural input is an important part of the way that users can interact with such interfaces. However, entering gestures accurately and efficiently can be challenging. In this paper we present two styles of visual gesture autocompletion for 2D predictive gesture entry. Both styles enable users to abbreviate gestures. We experimentally evaluate and compare both styles of visual autocompletion against each other and against non-predictive gesture entry. The best performing visual autocompletion is referred to as SimpleFlow. Our findings establish that users of SimpleFlow take significant advantage of gesture autocompletion by entering partial gestures rather than whole gestures. Compared to non-predictive gesture entry, users enter partial gestures that are 41% shorter than the complete gestures, while simultaneously improving the accuracy (+13%, from 68% to 81%) and speed (+10%) of their gesture input. The results provide insights into why SimpleFlow leads to significantly enhanced performance, while showing how predictive gestures with simple visual autocompletion impacts upon the gesture abbreviation, accuracy, speed and cognitive load of 2D predictive gesture entry. | |||
| The Perception of Sound and Its Influence in the Classroom | | BIBAK | Full-Text | 609-626 | |
| Sofia Reis; Nuno Correia | |||
| In this paper we describe a game to assess if the quantitative and graphical
perception of sound by students can influence how they behave in the classroom.
The game captures sound and shows the sound wave or the frequency spectrum,
integrated with an animated character, to students in real time. The quieter
the students are, the higher the score. A survey was conducted with teachers from
an elementary and secondary school to determine if they considered that noise,
caused by the students, was a problem. Most of the teachers considered that
students make too much noise. All the classes where the game was tested became
quieter, thus showing that when these students perceived, in a quantitative
way, how much their behavior was disruptive they were more inclined to be quiet
or, at least, to reduce the amount of noise. Keywords: game; persuasive technology; noise; classroom | |||
| Encouraging Initiative in the Classroom with Anonymous Feedback | | BIBAK | Full-Text | 627-642 | |
| Tony Bergstrom; Andrew Harris; Karrie Karahalios | |||
| Inspiring and maintaining student participation in large classes can be a
difficult task. Students benefit from an active experience as it helps them
better understand the course material. However, it's easy to stay silent.
Opportunities to participate in conversation allow students to question and
learn. The Fragmented Social Mirror (FSM) provides students with the ability to
anonymously initiate classroom dialog with the lecturer. The system encourages
participation by enabling expressive anonymous feedback to reduce evaluation
anxiety. The FSM further catalyzes participation by allowing for many
simultaneous participants. In this paper, we introduce the FSM as a classroom
device, discuss its design, and describe a pilot test of the interface. Initial
results indicate a promising direction for future feedback systems. Keywords: Social Mirrors; Classroom; Feedback; Anonymous | |||
| U-Note: Capture the Class and Access It Everywhere | | BIBAK | Full-Text | 643-660 | |
| Sylvain Malacria; Thomas Pietrzak; Aurélien Tabard; Eric Lecolinet | |||
| We present U-Note, an augmented teaching and learning system leveraging the
advantages of paper while letting teachers and pupils benefit from the richness
that digital media can bring to a lecture. U-Note provides automatic linking
between the notes of the pupils' notebooks and various events that occurred
during the class (such as opening digital documents, changing slides, writing
text on an interactive whiteboard...). Pupils can thus explore their notes in
conjunction with the digital documents that were presented by the teacher
during the lesson. Additionally, they can also listen to what the teacher was
saying when a given note was written. Finally, they can add their own comments
and documents to their notebooks to extend their lecture notes. We interviewed
teachers and deployed questionnaires to identify both teachers and pupils'
habits: most of the teachers use (or would like to use) digital documents in
their lectures but have problems in sharing these resources with their pupils.
The results of this study also show that paper remains the primary medium used
for knowledge keeping, sharing and editing by the pupils. Based on these
observations, we designed U-Note, which is built on three modules. U-Teach
captures the context of the class: audio recordings, the whiteboard contents,
together with the web pages, videos and slideshows displayed during the lesson.
U-Study binds pupils' paper notes (taken with an Anoto digital pen) with the
data coming from U-Teach and lets pupils access the class materials at home,
through their notebooks. U-Move lets pupils browse lecture materials on their
smartphone when they are not in front of a computer. Keywords: Augmented classroom; digital pen; digital lecturing environment; capture and
access; digital classroom | |||