HCI Bibliography: Search Results
Database updated: 2016-05-10. Searches since 2006-12-01: 32,646,481.
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Hughes_C* Results: 23 Sorted by: Date
[1] Providing Real-time Feedback for Student Teachers in a Virtual Rehearsal Environment Grand Challenge 3: Multimodal Learning and Analytics Grand Challenge 2015 / Barmaki, Roghayeh / Hughes, Charles E. Proceedings of the 2015 International Conference on Multimodal Interaction 2015-11-09 p.531-537
ACM Digital Library Link
Summary: Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in teachers' successful interpersonal communication with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used interactive teaching-rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data were collected during two experimental settings. These data include talk time and nonverbal behaviors of the virtual students, captured in log files; talk time and full-body tracking data of the participant; and video recording of the virtual classroom with the participant. Thirty-four student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings.
In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.

[2] Online News Videos: The UX of Subtitle Position Making Speech Accessible and Usable / Crabb, Michael / Jones, Rhianne / Armstrong, Mike / Hughes, Chris J. Seventeenth International ACM SIGACCESS Conference on Computers and Accessibility 2015-10-26 p.215-222
ACM Digital Library Link
Summary: Millions of people rely on subtitles when watching video content. The current change in media viewing behaviour involving computers has resulted in a large proportion of people turning to online sources as opposed to regular television for news information. This work analyses the user experience of viewing subtitled news videos presented as part of a web page. A lab-based user experiment was carried out with frequent subtitle users, focusing on determining whether changes in video dimension and subtitle location could affect the user experience attached to viewing subtitled content. A significant improvement in user experience was seen when changing the subtitle location from the standard position of within a video at the bottom to below the video clip. Additionally, participants responded positively when given the ability to change the position of subtitles in real time, allowing for a more personalised viewing experience. This recommendation for an alternative subtitle positioning that can be controlled by the user is unlike current subtitling practice. It provides evidence that further user-based research examining subtitle usage outside of the traditional television interface is required.

[3] Pilot Study for Telepresence with 3D-Model in Mixed Reality User Experience in Virtual and Augmented Environments / Jung, Sungchul / Hughes, Charles E. VAMR 2015: 7th International Conference on Virtual, Augmented and Mixed Reality 2015-08-02 p.22-29
Keywords: Telepresence; Mixed reality; Situational plausibility; Place illusion; Co-presence
Link to Digital Content at Springer
Summary: In this paper we present the results of an experiment investigating a participant's sense of presence by examining the correlation between visual information and physical actions in a mixed reality environment. There have been many approaches to measure presence in a virtual reality environment, such as the "Pit" experiment, a physiological presence experiment that used a person's fear of heights to test body ownership. The studies reported in these prior works were conducted to measure the extent to which a person feels physical presence in virtual worlds [1-3]. Here, we focus on situational plausibility and place illusion in mixed reality, where real and virtual content coexist [4]. Generally, the phenomenon we are studying is called telepresence: an aroused sensation of 'being together in the same real location' between users [5].

[4] Responsive design for personalised subtitles Learning and language / Hughes, Chris J. / Armstrong, Mike / Jones, Rhianne / Crabb, Michael Proceedings of the 2015 International Cross-Disciplinary Conference on Web Accessibility (W4A) 2015-05-18 p.8
ACM Digital Library Link
Summary: The Internet has continued to evolve, becoming increasingly media rich. It is now a major platform for video content, which is available to a variety of users across a range of devices. Subtitles enhance this experience for many users. However, subtitling techniques are still based on early television systems, which impose limitations on font type, size and line length. These are no longer appropriate in the context of a modern web-based culture.
    In this paper we describe a new approach to displaying subtitles alongside the video content. This follows the responsive web design paradigm enabling subtitles to be formatted appropriately for different devices whilst respecting the requirements and preferences of the viewer. We present a prototype responsive video player, and report initial results from a study to evaluate the value perceived by regular subtitle users.

[5] A case study to track teacher gestures and performance in a virtual learning environment Posters / Barmaki, Roghayeh / Hughes, Charles E. LAK'15: 2015 International Conference on Learning Analytics and Knowledge 2015-03-16 p.420-421
ACM Digital Library Link
Summary: As part of normal interpersonal communication, people send and receive messages with their body, especially with their hands. Gestures play an important role in teacher-student classroom interactions. In the domain of education, many research projects have focused on the study of such gestures either in real classrooms or in tutorial settings with experienced teachers. Novice teachers especially need to understand the messages they are sending through nonverbal communication as this can have a major effect on their ability to manage behaviors and deliver content. Such learning should optimally occur before experiencing the real classroom. To assist in this process, we have developed a virtual classroom environment -- TeachLivE -- and used it for teacher practice, reflection and assessment. This paper investigates the way teachers use gestures in the virtual classroom settings of TeachLivE. Biology and algebra teachers were evaluated in our study. Analysis of video recordings from the real and virtual environments suggests that algebra teachers gesture significantly more often than biology teachers. These results have implications for providing useful feedback to participant teachers.

[6] Good Enough Yet? A Preliminary Evaluation of Human-Surrogate Interaction Avatars and Virtual Characters / Abich, Julian, IV / Reinerman-Jones, Lauren E. / Matthews, Gerald / Welch, Gregory F. / Lackey, Stephanie J. / Hughes, Charles E. / Nagendran, Arjun VAMR 2013: 6th International Conference on Virtual, Augmented and Mixed Reality, Part I: Designing and Developing Virtual and Augmented Environments 2014-06-22 v.1 p.239-250
Keywords: human-robot interaction; human-surrogate interaction; communications; social psychology; avatar; physical-virtual avatar
Link to Digital Content at Springer
Summary: Research exploring the implementation of surrogates has included areas such as training (Chuah et al., 2013), education (Yamashita, Kuzuoka, Fujimon, & Hirose, 2007), and entertainment (Boberg, Piippo, & Ollila, 2008). It is important to determine which characteristics of a surrogate could influence a human's behavioral responses during human-surrogate interactions. The present work will draw on the literature about human-robot interaction (HRI), social psychology literature regarding the impact that the presence of a surrogate has on another human, and communications literature about human-human interpersonal interaction. The review will result in an experimental design to evaluate how various dimensions of the space of human-surrogate characteristics influence interaction.

[7] AMITIES: avatar-mediated interactive training and individualized experience system Avatars and robots in telepresence / Nagendran, Arjun / Pillat, Remo / Kavanaugh, Adam / Welch, Greg / Hughes, Charles Proceedings of the 2013 ACM Symposium on Virtual Reality Software and Technology 2013-10-06 p.143-152
ACM Digital Library Link
Summary: This paper presents an architecture to control avatars and virtual characters in remote interaction environments. A human-in-the-loop (interactor) metaphor provides remote control of multiple virtual characters, with support for multiple interactors and multiple observers. Custom animation blending routines and a gesture-based interface provide interactors with an intuitive digital puppetry paradigm. This paradigm reduces the cognitive and physical loads on the interactor while supporting natural bi-directional conversation between a user and the virtual characters or avatar counterparts. A multi-server-client architecture, based on a low-demand network protocol, connects the user environment, interactor station(s) and observer station(s). The associated system affords the delivery of personalized experiences that adapt to the actions and interactions of individual users, while staying true to each virtual character's personality and backstory. This approach has been used to create experiences designed for training, education, rehabilitation, remote presence and other related applications.

[8] Segmenting Instrumented Activities of Daily Living (IADL) Using Kinematic and Sensor Technology for the Assessment of Limb Apraxia Health and Medicine / Hughes, Charmayne M. L. / Parekh, Manish / Hermsdörfer, Joachim HCI International 2013: 15th International Conference on HCI: Posters' Extended Abstracts Part II 2013-07-21 v.7 p.158-162
Keywords: action segmentation; apraxia; activities of daily living
Link to Digital Content at Springer
Summary: In this paper we present a method of segmenting instrumented activities of daily living (IADL) using kinematic criteria coupled with sensor technology. To collect our training data we asked four neurologically healthy individuals to make a total of 60 cups of tea with a set order of ASs. We then evaluated our IADL segmentation technique in healthy individuals and patients with limb apraxia, and demonstrate that combining kinematic criteria with sensor data provides an accurate means of segmenting IADLs into relevant ASs.

[9] Application of Human Error Identification (HEI) Techniques to Cognitive Rehabilitation in Stroke Patients with Limb Apraxia Health, Well-Being, Rehabilitation and Medical Applications / Hughes, Charmayne M. L. / Baber, Chris / Bienkiewicz, Marta / Hermsdörfer, Joachim UAHCI 2013: 7th International Conference on Universal Access in Human-Computer Interaction, Part III: Applications and Services for Quality of Life 2013-07-21 v.3 p.463-471
Keywords: Human error identification; apraxia; activities of daily living
Link to Digital Content at Springer
Summary: The aim of this study was to consider the potential uses of human error identification (HEI) techniques in the development of a Personal Healthcare System (PHS) capable of delivering cognitive rehabilitation of activities of daily living (ADL) for stroke patients with limb apraxia (i.e., CogWatch). HEI techniques were able to predict a number of apraxic errors, as well as the associated consequences. The results of the present study indicate that HEI analysis is a useful tool in the design of cognitive systems that seek to reduce or eliminate errors in apraxic populations. The results will be implemented in the CogWatch system and will be used to develop error reduction strategies that prevent errors from occurring, and to provide post-error feedback to help the user correct their actions.

[10] Perceived Presence's Role on Learning Outcomes in a Mixed Reality Classroom of Simulated Students Virtual and Augmented Environments for Learning and Education / Hayes, Aleshia T. / Hardin, Stacey E. / Hughes, Charles E. VAMR 2013: 5th International Conference on Virtual, Augmented and Mixed Reality, Part II: Systems and Applications 2013-07-21 v.2 p.142-151
Keywords: Mixed Reality Classroom; Simulation; Presence; Suspension of Disbelief; Immersion; Engagement; Knowledge Acquisition; Virtual Learning
Link to Digital Content at Springer
Summary: This research is part of an ongoing effort on the efficacy and user experience of TLE TeachLivE™, a 3D mixed reality classroom with simulated students used to facilitate virtual rehearsal of pedagogical skills by teachers. This research investigated a potential relationship between efficacy, in terms of knowledge acquisition and transfer, and user experience in regard to presence, suspension of disbelief, and immersion. The initial case studies examining user experience of presence, suspension of disbelief, and immersion were used to develop a presence questionnaire revised from the work of Witmer and Singer (1998) to address the TLE TeachLivE™ mixed reality environment. The findings suggest that targeted practice, authentic scenarios, and suspension of disbelief in virtual learning environments may impact learning.

[11] Mixed Reality Space Travel for Physics Learning Virtual and Augmented Environments for Learning and Education / Hughes, Darin E. / Sabbagh, Shabnam / Lindgren, Robb / Moshell, J. Michael / Hughes, Charles E. VAMR 2013: 5th International Conference on Virtual, Augmented and Mixed Reality, Part II: Systems and Applications 2013-07-21 v.2 p.162-169
Keywords: STEM; mixed reality; whole-body learning; informal education; physics simulation
Link to Digital Content at Springer
Summary: In this paper we describe research being conducted on a mixed reality simulation called MEteor that is designed for informal physics learning in science centers. MEteor is a 30 x 10 foot floor area where participants use their bodies to interact with projected astronomical imagery. Participants walk and run across the floor to simulate how objects move in space, and to enact basic physics principles. Key to the success of this learning environment is an interface scheme that supports the central metaphor of "child as asteroid." Using video data collected in our studies we examine the extent to which feedback mechanisms and interface conventions strengthened the metaphorical connection, and we describe ways the interaction design can be improved for future iterations.

[12] ChronoLeap: The Great World's Fair Adventure Culture and Entertainment Applications / Walters, Lori C. / Hughes, Darin E. / Barrio, Manuel Gértrudix / Hughes, Charles E. VAMR 2013: 5th International Conference on Virtual, Augmented and Mixed Reality, Part II: Systems and Applications 2013-07-21 v.2 p.426-435
Keywords: STEAM; STEM; Immersive Education; virtual environments; virtual heritage; interdisciplinary; 1964/65 New York World's Fair
Link to Digital Content at Springer
Summary: ChronoLeap: The Great World's Fair Adventure utilizes the educational potential of immersive 3D virtual venues for children and early adolescents between 9 and 13. Virtual reality environments transport the mind beyond the 2D bounds of text or photographs; they engage the imagination and can be a powerful tool for conveying educational content [1]. ChronoLeap leverages these innate qualities and weaves together the individual threads of single disciplines into a multi-disciplinary tapestry of web-based exploration through the 1964/65 New York World's Fair. Through their myriad pavilions and exhibits, World's Fairs offer links to science, technology, engineering, mathematics, art and humanities topics. ChronoLeap provides an immersive 3D environment with highly accurate and detailed models, and merges it with games and themes designed to provide users with an educational STEAM environment. The project is a collaborative effort between the University of Central Florida, Queens Museum of Art and New York Hall of Science.

[13] CogWatch -- Automated Assistance and Rehabilitation of Stroke-Induced Action Disorders in the Home Environment Cognitive Issues in Health and Well-Being / Hermsdörfer, Joachim / Bienkiewicz, Marta / Cogollor, José M. / Russel, Martin / Jean-Baptiste, Emilie / Parekh, Manish / Wing, Alan M. / Ferre, Manuel / Hughes, Charmayne EPCE 2013: 10th International Conference on Engineering Psychology and Cognitive Ergonomics, Part II: Applications and Services 2013-07-21 v.2 p.343-350
Keywords: Apraxia; activities of daily living; rehabilitation; stroke; assistive technology
Link to Digital Content at Springer
Summary: Stroke frequently causes apraxia, particularly if it affects the left hemisphere. A major symptom of apraxia is the presence of deficits during the execution and organization of activities of daily living (ADL). These deficits may substantially limit the capacity of stroke patients to live independently in their home environment. Traditional rehabilitative techniques to improve ADL function revolve around physical and occupational therapy. This approach is labor intensive and constrains therapy to clinical environments. The CogWatch system provides a supplementary means of rehabilitation that is based on instrumented objects and ambient devices that are part of patients' everyday environment and can be used to monitor behavior and progress as well as re-train them to carry out ADL through persistent multimodal feedback.

[14] The economics of data: quality, value & exchange in web observatories WOW'13 technical presentations / Booth, Paul / Gaskell, Paul / Hughes, Chris Companion Proceedings of the 2013 International Conference on the World Wide Web 2013-05-13 v.2 p.1309-1316
ACM Digital Library Link
Summary: The aim of this paper is to present a requirement for assessing the quality of data and the development of efficient methods of valuing and exchanging data among Web Observatories. Using economic and business theory, a range of concepts is explored, including a brief review of existing business structures related to the exchange of goods, data or otherwise. The paper calls for a wider discussion by the Web Observatory community to begin to define relevant criteria by which data can be assessed and improved over time. The economic incentives are addressed as part of a price-by-proxy framework we introduce, which is supported by the need to strive for clear pricing signals and the reduction of information asymmetries. What is presented here is a way of establishing and improving data quality with a view to valuing data exchanges that does not require the presence of money in the transaction, yet it remains tied to revenue-generation models as they exist online.

[15] Establishing a baseline for text entry for a multi-touch virtual keyboard / Varcholik, Paul D. / LaViola, Joseph J., Jr. / Hughes, Charles E. International Journal of Human-Computer Studies 2012-10 v.70 n.10 p.657-672
Keywords: Multi-touch
Keywords: Text entry
Keywords: Speed
Keywords: Accuracy
Keywords: Text intensive applications
Link to Article at sciencedirect
Summary: Multi-touch, which has been heralded as a revolution in human-computer interaction, provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization -- features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as "everyday" computer interaction devices that support important text-entry-intensive applications such as word processing and spreadsheets. In this paper, we present two studies that begin to explore user performance and experience with entering text using multi-touch input. The first study establishes a benchmark for text entry performance on a multi-touch platform across input modes, comparing uppercase-only to mixed-case, single-touch to multi-touch, and copy to memorization tasks. The second study includes mouse-style interaction for formatting rich text to simulate a word processing task using multi-touch input. As expected, our results show that users do not perform as well in terms of text entry efficiency and speed using a multi-touch interface as with a traditional keyboard. Not as expected was the result that degradation in performance was significantly less for memorization versus copy tasks, and consequently willingness to use multi-touch was substantially higher (50% versus 26%) in the former case. Our results, which include preferred input styles of participants, also provide a baseline for further research to explore techniques for improving text entry performance on multi-touch systems.

[16] Geppetto: An Environment for the Efficient Control and Transmission of Digital Puppetry Virtual Humans and Avatars / Mapes, Daniel P. / Tonner, Peter / Hughes, Charles E. VMR 2011: 4th International Conference on Virtual and Mixed Reality, Part II: Systems and Applications 2011-07-09 v.2 p.270-278
Keywords: Digital puppetry; avatar; gesture; motion capture
Link to Digital Content at Springer
Summary: An evolution of remote-control puppetry systems is presented. These systems have been designed to provide high-quality trainer-to-trainee communication in game scenarios containing multiple digital puppets, with interaction occurring over long-haul networks. The design requirements were to support dynamic switching of control between multiple puppets; suspension of disbelief when communicating through puppets; sensitivity to network bandwidth requirements; and affordability for professional interactive trainers (Interactors). The resulting system uses a novel pose-blending solution guided by a scaled-down desktop-range motion capture controller as well as traditional button devices running on a standard game computer. This work incorporates aspects of motion capture, digital puppet design and rigging, game engines, networking, interactive performance, control devices and training.

[17] Why Can't a Virtual Character Be More Like a Human: A Mixed-Initiative Approach to Believable Agents Virtual Humans and Avatars / Zhu, Jichen / Moshell, J. Michael / Ontañón, Santiago / Erbiceanu, Elena / Hughes, Charles E. VMR 2011: 4th International Conference on Virtual and Mixed Reality, Part II: Systems and Applications 2011-07-09 v.2 p.289-296
Keywords: Mixed-initiative system; character believability; interactive storytelling; artificial intelligence; interactive virtual environment
Link to Digital Content at Springer
Summary: Believable agents have applications in a wide range of human-computer interaction-related domains, such as education, training, arts and entertainment. Autonomous characters that behave in a believable manner have the potential to maintain human users' suspension of disbelief and fully engage them in the experience. However, how to construct believable agents, especially in a generalizable and cost-effective way, is still an open problem. This paper compares the two common approaches for constructing believable agents -- human-driven and artificial intelligence-driven interactive characters -- and proposes a mixed-initiative approach in the domain of interactive training systems. Our goal is to provide the user with engaging and effective educational experiences through their interaction with our system.

[18] Automatic Scenario Generation through Procedural Modeling for Scenario-Based Training TRAINING / Martin, Glenn / Schatz, Sae / Bowers, Clint / Hughes, Charles E. / Fowlkes, Jennifer / Nicholson, Denise Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009-10-19 v.53 p.1949-1953
Link to HFES Digital Content
Summary: We discuss our current efforts at developing automatic scenario generation software. We begin by explaining the rationale, and then review successful previous efforts. We discuss the lessons-learned from the past work, and the conceptual pieces that are required to generate operationally-valid scenarios that support effective training. We then present the conceptual design of our scenario generation approach, which uses novel procedural modeling approaches to ensure operational and training requirements are adequately met.

[19] Evaluating the Potential of Cognitive Rehabilitation with Mixed Reality VR Applications / Beato, Nicholas / Mapes, Daniel P. / Hughes, Charles E. / Fidopiastis, Cali M. / Smith, Eileen M. VMR 2009: 3rd International Conference on Virtual and Mixed Reality 2009-07-19 p.522-531
Keywords: Mixed reality; post traumatic stress disorder; psychophysical sensing; medical rehabilitation; cognitive rehabilitation
Link to Digital Content at Springer
Summary: We describe the development and use of a mixed reality (MR) testbed to evaluate potential scenarios that may alleviate performance deficits in subjects who may be experiencing cognitive deficiencies, such as posttraumatic stress disorder (PTSD). The system blends real world sensory data with synthetic enhancements in the visual and aural domains. It captures user actions (movement, view direction, environment interaction, and task performance) and psychophysical states (engagement, workload, and skin conductivity) during an MR-enabled experience in order to determine task performance in the context of a variety of stimuli (visual and aural distracters in time-constrained activities). The goal is to discover triggers that affect stress levels and task performance in order to develop individualized plans for personal improvement.

[20] EDITED BOOK The Universal Access Handbook 2009 n.61 p.1034 CRC Press
ISBN: 978-1-4200-6499-5
www.crcpress.com/product/isbn/9780805862805
== Introduction to Universal Access ==
Universal Access and Design for All in the Evolving Information Society
	+ Stephanidis, C.
Perspectives on Accessibility: From Assistive Technologies to Universal Access and Design for All
	+ Emiliani, P. L.
Accessible and Usable Design of Information and Communication Technologies
	+ Vanderheiden, G. C.
== Diversity in the User Population ==
Dimensions of User Diversity
	+ Ashok, M.
	+ Jacko, J. A.
Motor Impairments and Universal Access
	+ Keates, S.
Sensory Impairments
	+ Kinzel, E.
	+ Jacko, J. A.
Cognitive Disabilities
	+ Lewis, C.
Age-Related Differences in the Interface Design Process
	+ Kurniawan, S.
International and Intercultural User Interfaces
	+ Marcus, A.
	+ Rau, P.-L. P.
== Technologies for Diverse Contexts of Use ==
Accessing the Web
	+ Hanson, V. L.
	+ Richards, J. T.
	+ Harper, S.
	+ Trewin, S.
Handheld Devices and Mobile Phones
	+ Kaikkonen, A.
	+ Kaasinen, E.
	+ Ketola, P.
Virtual Reality
	+ Hughes, D.
	+ Smith, E.
	+ Shumaker, R.
	+ Hughes, C.
Biometrics and Universal Access
	+ Fairhurst, M. C.
Interface Agents: Potential Benefits and Challenges for Universal Access
	+ André, E.
	+ Rehm, M.
== Development Lifecycle of User Interfaces ==
User Requirements Elicitation for Universal Access
	+ Antona, M.
	+ Ntoa, S.
	+ Adami, I.
	+ Stephanidis, C.
Unified Design for User Interface Adaptation
	+ Savidis, A.
	+ Stephanidis, C.
Designing Universally Accessible Games
	+ Grammenos, D.
	+ Savidis, A.
	+ Stephanidis, C.
Software Requirements for Inclusive User Interfaces
	+ Savidis, A.
	+ Stephanidis, C.
Tools for Inclusive Design
	+ Waller, S.
	+ Clarkson, P. J.
The Evaluation of Accessibility, Usability, and User Experience
	+ Petrie, H.
	+ Bevan, N.
== User Interface Development: Architectures, Components, and Tools ==
A Unified Software Architecture for User Interface Adaptation
	+ Savidis, A.
	+ Stephanidis, C.
A Decision-Making Specification Language for User Interface Adaptation
	+ Savidis, A.
	+ Stephanidis, C.
Methods and Tools for the Development of Unified Web-Based User Interfaces
	+ Doulgeraki, C.
	+ Partarakis, N.
	+ Mourouzis, A.
	+ Stephanidis, C.
User Modeling: A Universal Access Perspective
	+ Adams, R.
Model-Based Tools: A User-Centered Design for All Approach
	+ Stary, C.
Markup Languages in Human-Computer Interaction
	+ Paternò, F.
	+ Santoro, C.
Abstract Interaction Objects in User Interface Programming Languages
	+ Savidis, A.
== Interaction Techniques and Devices ==
Screen Readers
	+ Asakawa, C.
	+ Leporini, B.
Virtual Mouse and Keyboards for Text Entry
	+ Evreinov, G.
Speech Input to Support Universal Access
	+ Feng, J.
	+ Sears, A.
Natural Language and Dialogue Interfaces
	+ Jokinen, K.
Auditory Interfaces and Sonification
	+ Nees, M. A.
	+ Walker, B. N.
Haptic Interaction
	+ Jansson, G.
	+ Raisamo, R.
Vision-Based Hand Gesture Recognition for Human-Computer Interaction
	+ Zabulis, X.
	+ Baltzakis, H.
	+ Argyros, A.
Automatic Hierarchical Scanning for Windows Applications
	+ Ntoa, S.
	+ Savidis, A.
	+ Stephanidis, C.
Eye Tracking
	+ Majaranta, P.
	+ Bates, R.
	+ Donegan, M.
Brain-Body Interfaces
	+ Gnanayutham, P.
	+ George, J.
Sign Language in the Interface: Access for Deaf Signers
	+ Huenerfauth, M.
	+ Hanson, V. L.
Visible Language for Global Mobile Communication: A Case Study of a Design Project in Progress
	+ Marcus, A.
Contributions of "Ambient" Multimodality to Universal Access
	+ Carbonell, N.
== Application Domains ==
Vocal Interfaces in Supporting and Enhancing Accessibility in Digital Libraries
	+ Catarci, T.
	+ Kimani, S.
	+ Dubinsky, Y.
	+ Gabrielli, S.
Theories and Methods for Studying Online Communities for People with Disabilities and Older People
	+ Pfeil, U.
	+ Zaphiris, P.
Computer-Supported Cooperative Work
	+ Gross, T.
	+ Fetter, M.
Developing Inclusive e-Training
	+ Savidis, A.
	+ Stephanidis, C.
Training through Entertainment for Learning Difficulties
	+ Savidis, A.
	+ Grammenos, D.
	+ Stephanidis, C.
Universal Access to Multimedia Documents
	+ Petrie, H.
	+ Weber, G.
	+ Völkel, T.
Interpersonal Communication
	+ Waller, A.
Universal Access in Public Terminals: Information Kiosks and ATMs
	+ Kouroupetroglou, G.
Intelligent Mobility and Transportation for All
	+ Bekiaris, E.
	+ Panou, M.
	+ Gaitanidou, E.
	+ Mourouzis, A.
	+ Ringbauer, B.
Electronic Educational Books for Blind Students
	+ Grammenos, D.
	+ Savidis, A.
	+ Georgalis, Y.
	+ Bourdenas, T.
	+ Stephanidis, C.
Mathematics and Accessibility: A Survey
	+ Pontelli, E.
	+ Karshmer, A. I.
	+ Gupta, G.
Cybertherapy, Cyberpsychology, and the Use of Virtual Reality in Mental Health
	+ Renaud, P.
	+ Bouchard, S.
	+ Chartier, S.
	+ Bonin, M.-P.
== Nontechnological Issues ==
Policy and Legislation as a Framework of Accessibility
	+ Kemppainen, E.
	+ Kemp, J. D.
	+ Yamada, H.
Standards and Guidelines
	+ Vanderheiden, G. C.
eAccessibility Standardization
	+ Engelen, J.
Management of Design for All
	+ Bühler, C.
Security and Privacy for Universal Access
	+ Maybury, M. T.
Best Practice in Design for All
	+ Miesenberger, K.
== Looking to the Future ==
Implicit Interaction
	+ Ferscha, A.
Ambient Intelligence
	+ Streitz, N. A.
	+ Privat, G.
Emerging Challenges
	+ Stephanidis, C.

[21] Constraint-Directed Performance Measurement for Large Tactical Teams TRAINING: Methods for Assessing and Debriefing Team and Multiteam Performance in Distributed Simulation-Based Training / Fowlkes, Jennifer / Owens, Jerry / Hughes, Corbin / Johnston, Joan H. / Stiso, Michael / Hafich, Amanda / Bracken, Kevin Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting 2005-09-26 v.49 p.2125-2129
Link to HFES Digital Content
Summary: Large tactical teams must demonstrate integrative performance as tens to thousands of operators perform within highly dynamic, complex, and unpredictable environments. Developing methods that capture integrated performance and the achievement of team goals, while also allowing for and even embracing adaptive performance, is challenging. However, as Distributed Mission Training (DMT) systems continue to mature and increasingly represent important training opportunities in the military, diagnostic performance assessment systems are needed to ensure training quality. In this paper, we propose a methodological framework for team performance measurement that is responsive to the measurement challenges found within DMT systems. The approach is illustrated within a U.S. Navy research and development program called Debriefing Distributed Simulation-Based Exercises (DDSBE).

[22] Salient Characteristics of Virtual Trees VIRTUAL ENVIRONMENTS: Virtual Environments Posters / Sims, Valerie K. / Moshell, J. Michael / Hughes, Charles E. / Cotton, James E. / Xiao, Jiangjian Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting 2001-10-08 v.45 p.1935-1938
Link to HFES Digital Content
Summary: Recent research on the design of virtual environments has focused on the important perceptual characteristics of man-made "carpentered" environments, rather than on VEs of natural environments. The present research examines memory for characteristics of natural settings consisting of virtual trees. Participants viewed either a symmetrical or asymmetrical virtual tree and then re-created it using custom-designed tree editing software. Memory was more accurate for the symmetrical tree. Across trees, participants were most accurate re-creating gross structural dimensions of a tree such as height and leaf size, and were particularly inaccurate at re-creating the curvature of tree branches. Our conclusion is that the design of virtual environments should focus on accurately representing gross structural properties of trees, rather than on using high levels of detail to accurately portray trunk and branch curvature.

[23] The Virtual Academy: A Simulated Environment for Constructionist Learning Human-Virtual Environment Interaction / Moshell, J. Michael / Hughes, Charles E. International Journal of Human-Computer Interaction 1996 v.8 n.1 p.95-110
Summary: The Virtual Academy is an educational model based on multiage teams of students and adults working through the Internet to build and use virtual worlds for educational purposes. These collaborations are mediated by tools ranging from electronic mail to hypermedia and video links, and result in the creation of simulation-based role-playing adventure games within the ExploreNet software environment. ExploreNet is an Internet-based multimedia, multiuser domain constructed specifically for educational experimentation.
    This article describes the Virtual Academy Model, the ExploreNet software system, and an experiment conducted in the spring of 1995. The article describes the evolution of features of ExploreNet's user interface and their relevance to collaborative work by children.