HCI Bibliography: Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,396,171
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Johnston_M* Results: 25 Sorted by: Date
Records: 1 to 25 of 25
Interact: Tightly-coupling Multimodal Dialog with an Interactive Virtual Assistant Demonstrations / Selfridge, Ethan / Johnston, Michael Proceedings of the 2015 International Conference on Multimodal Interaction 2015-11-09 p.381-382
ACM Digital Library Link
Summary: Interact is a mobile virtual assistant that uses multimodal dialog to enable an interactive concierge experience over multiple application domains including hotel, restaurants, events, and TV search. Interact demonstrates how multimodal interaction combined with conversational dialog enables a richer and more natural user experience. This demonstration will highlight incremental recognition and understanding, multimodal speech and gesture input, context tracking over multiple simultaneous domains, and the use of multimodal interface techniques to enable disambiguation of errors and online personalization.

A Systems Approach to Diagnosing and Measuring Teamwork in Complex Sociotechnical Organizations General Sessions: GS2 -- General Sessions Lectures 1 / Duff, Sacha N. / Del Giudice, Katherine / Johnston, Matthew / Flint, Jesse / Kudrick, Bonnie Proceedings of the Human Factors and Ergonomics Society 2014 Annual Meeting 2014-10-27 p.573-577
doi 10.1177/1541931214581121
Link to HFES Digital Content
Summary: This paper presents a novel approach to diagnosing and measuring teamwork in complex sociotechnical systems. First, the underlying theoretical constructs that have inspired the development and use of a multi-level model to study team phenomena from a general systems perspective are presented. Next, in an attempt to theoretically ground the construct, 'flow state' will be presented as an isomorphic variable in a multi-level model, meaning it is represented similarly at the system, team, and individual level. Approaching processes embedded in organizations from this perspective allows diagnosis of the systemic influences that contribute most to the variance in performance, identification of pervasive latent systemic failures, and the development of a tailored taxonomy of behavioral teamwork dimensions, which can then be translated into metrics to measure teamwork within any observable complex process.

Using a Game to Evaluate Passenger Screener Fatigue and Sleepiness at Airport Screening Checkpoints System Development: SD3 -- David Meister Award: Best Technical Paper / Johnston, Matthew / McNeil, Mike / Del Giudice, Katherine / Kudrick, Bonnie Proceedings of the Human Factors and Ergonomics Society 2014 Annual Meeting 2014-10-27 p.2290-2294
doi 10.1177/1541931214581477
Link to HFES Digital Content
Summary: The sensitivity of a game-based neurocognitive test for detecting sleepiness and fatigue among workers at an airport passenger screening checkpoint (screeners) was examined. Screener fatigue and sleepiness were evaluated using both the game-based test and self-reports over the course of an eight-hour shift. The game-based evaluation compared pre- and post-shift performance on four games targeting fatigue-mediated cognitive processes: simple reaction time, spatial processing, logical relations, and mathematical processing. Self-reports of fatigue and sleepiness were also collected pre- and post-shift using a previously validated tool. Results revealed that screeners at the checkpoint experienced a significant increase in fatigue and sleepiness from pre- to post-shift, indicated by both game performance and the self-report tool. The results suggest that the game-based tool could be used to evaluate the impact of countermeasures to reduce screener fatigue at screening checkpoints.
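[Illustrative note] The pre- versus post-shift comparison described above lends itself to a paired analysis. Below is a minimal sketch of one such analysis; the reaction-time values are invented placeholders and the choice of a paired t-test is an assumption, not the authors' reported method.

    from scipy import stats

    # Illustrative placeholder values (NOT the study's data): mean simple
    # reaction time in ms for each screener, pre- and post-shift.
    pre  = [312, 298, 305, 330, 287, 341, 299, 310]
    post = [335, 321, 318, 352, 301, 360, 322, 329]

    # A paired test asks whether the within-screener pre-to-post change
    # differs from zero; significant slowing is consistent with fatigue.
    t, p = stats.ttest_rel(pre, post)
    print(f"t = {t:.2f}, p = {p:.4f}")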

Development of a System for Communicating Human Factors Readiness Business Integration / Johnston, Matthew / Del Giudice, Katie / Hale, Kelly S. / Winslow, Brent HIMI 2013: Human Interface and the Management of Information, Part III: Information and Interaction for Learning, Culture, Collaboration and Business 2013-07-21 v.3 p.475-484
Keywords: Human Factors Readiness; Risk Assessment; Acquisition Decision Support; Human System Integration
Link to Digital Content at Springer
Summary: While human factors has been recognized as a key component of research and development efforts, there is little systematic guidance on how to insert human factors evaluation outcomes into system development processes. The current effort proposes a systematic scale, comparable to existing Technology Readiness Level scales, to objectively quantify and track human factors readiness throughout the system development lifecycle. The resulting Human Factors Readiness Levels (HFRLs), iteratively developed with input from government and industry human factors practitioners across a variety of domains, prioritize each identified human factors issue based on its risk level and the status of any resolution. The overall scoring method uses a scale of 1 to 10, with a higher score indicating a higher level of human factors readiness. The HFRL scale has been integrated into a software tool, the System for Human Factors Readiness Evaluation (SHARE), that supports tracking and calculation of system-level HFRLs which can be quickly and easily shared to support acquisition decision making and product development, realizing return on investment through early identification, prioritization, and rectification of issues and avoiding expensive, late design changes.
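[Illustrative note] As a rough sketch of the scale described above, the code below scores a system from its per-issue HFRLs. The roll-up rule (taking the minimum across open issues) is an assumption for illustration; the abstract does not specify how SHARE aggregates issue scores.

    from dataclasses import dataclass

    @dataclass
    class HFIssue:
        name: str
        hfrl: int  # per-issue score on the paper's 1-10 scale; higher = more ready

    def system_hfrl(issues: list[HFIssue]) -> int:
        # Assumed roll-up: a system is only as ready as its least-ready
        # human factors issue (worst-case aggregation).
        return min(issue.hfrl for issue in issues)

    issues = [HFIssue("display glare under sunlight", 4),
              HFIssue("alarm audibility", 8)]
    print(system_hfrl(issues))  # -> 4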

A multimodal dialogue interface for mobile local search Demonstrations / Ehlen, Patrick / Johnston, Michael Proceedings of the 2013 International Conference on Intelligent User Interfaces 2013-03-19 v.2 p.63-64
ACM Digital Library Link
Summary: Speak4it uses a multimodal interface to perform mobile search for local businesses. Users combine simultaneous speech and touch to input queries or commands, for example, by saying, "gas stations", while tracing a route on a touchscreen. This demonstration will exhibit an extension of our multimodal semantic processing architecture from a one-shot query system to a multimodal dialogue system that tracks dialogue state over multiple turns and resolves prior context using unification-based context resolution. We illustrate the capabilities and limitations of this approach to multimodal interpretation, describing the challenges of supporting true multimodal interaction in a deployed mobile service, while offering an interactive demonstration on tablets and smartphones.
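[Illustrative note] As a toy illustration of the context-resolution idea in this summary, the sketch below fills in what a follow-up query leaves unspecified from the prior turn. It uses priority union (newer values win), a common relaxation of strict unification; the feature names are invented.

    def priority_union(query: dict, context: dict) -> dict:
        # Features the new query specifies win; everything else is inherited
        # from the prior turn. (Strict unification would instead fail when a
        # feature carries conflicting values in query and context.)
        return {**context, **query}

    context = {"category": "restaurant", "cuisine": "italian", "loc": "soho"}
    # Follow-up turn: "what about chinese" -- only cuisine is specified.
    print(priority_union({"cuisine": "chinese"}, context))
    # -> {'category': 'restaurant', 'cuisine': 'chinese', 'loc': 'soho'}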

Multimodal dialogue in mobile local search Demo session 2 / Ehlen, Patrick / Johnston, Michael Proceedings of the 2012 International Conference on Multimodal Interfaces 2012-10-22 p.303-304
ACM Digital Library Link
Summary: Speak4it is a multimodal, mobile search application that provides information about local businesses. Users can combine speech and touch input simultaneously to make search queries or commands to the application. For example, a user might say, "gas stations", while simultaneously tracing a route on a touchscreen. In this demonstration, we describe the extension of our multimodal semantic processing architecture and application from a one-shot query system to a multimodal dialogue system that tracks dialogue state over multiple turns. We illustrate the capabilities and limitations of an information-state-based approach to multimodal interpretation. We provide interactive demonstrations of Speak4it on a tablet and a smartphone, and explain the challenges of supporting true multimodal interaction in a deployed mobile service.

Multimodal interaction patterns in mobile local search Mobile interfaces & novel interaction / Ehlen, Patrick / Johnston, Michael Proceedings of the 2012 International Conference on Intelligent User Interfaces 2012-02-14 p.21-24
ACM Digital Library Link
Summary: Speak4it is a mobile search application that leverages multimodal input and integration to allow users to search for and act on local business information. We present an initial empirical analysis of user interaction with a multimodal local search application deployed in the field with real users. Specifically, we focus on queries involving multimodal commands, and analyze multimodal interaction behaviors seen in a deployed multimodal system.

Collecting multimodal data in the wild Demonstration session / Johnston, Michael / Ehlen, Patrick Proceedings of the 2012 International Conference on Intelligent User Interfaces 2012-02-14 p.339-340
ACM Digital Library Link
Summary: Multimodal interaction allows users to specify commands using combinations of inputs from multiple different modalities. For example, in a local search application, a user might say "gas stations" while simultaneously tracing a route on a touchscreen display. In this demonstration, we describe the extension of our cloud-based speech recognition architecture to a Multimodal Semantic Interpretation System (MSIS) that supports processing of multimodal inputs streamed over HTTP. We illustrate the capabilities of the framework using Speak4it, a deployed mobile local search application supporting combined speech and gesture input. We provide interactive demonstrations of Speak4it on the iPhone and iPad and explain the challenges of supporting true multimodal interaction in a deployed mobile service.
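[Illustrative note] The abstract says multimodal inputs are streamed over HTTP but does not document the wire format; the sketch below shows one plausible client-side request. The endpoint URL, field names, and audio file are hypothetical.

    import json
    import requests  # third-party: pip install requests

    URL = "https://example.com/msis/interpret"  # hypothetical endpoint

    gesture = {"mode": "touch", "type": "route",
               "points": [[40.748, -73.985], [40.753, -73.981]]}

    with open("utterance.wav", "rb") as audio:
        resp = requests.post(
            URL,
            files={"speech": audio},                # spoken "gas stations"
            data={"gesture": json.dumps(gesture)},  # traced route
            timeout=10,
        )
    print(resp.json())  # combined multimodal interpretation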

Youth searching online: an investigation of gender influence Posters / Johnston, Melissa P. Proceedings of the 2012 iConference 2012-02-07 p.494-497
ACM Digital Library Link
Summary: Questions relating to gender and technology are important cultural issues in our society, and the design of educational programs for children depends on accurate information about this aspect of our culture. The changing information landscape and highly technological environment of 21st century schools is one where the Internet has become a significant source of information to support class-based work. Yet there is little current research that specifically investigates how students search for information online and the various factors that can influence this process. One of these factors is gender. As technology's presence in our society increases, school librarians and educators need research to inform their instruction in preparing students to be effective online information seekers. This poster presents in-progress research investigating children's online information seeking behavior through the cultural lens of gender in order to further the understanding of how youth seek information online and aid school librarians' efforts in developing effective instruction.

Test-Retest Reliability of CogGauge: A Cognitive Assessment Tool for SpaceFlight Aerospace and Military Applications / Johnston, Matthew / Carpenter, Angela / Hale, Kelly S. EPCE 2011: 9th International Conference on Engineering Psychology and Cognitive Ergonomics 2011-07-09 p.565-571
Keywords: cognitive; decrement; assessment; diagnosis; reliability; stability
Link to Digital Content at Springer
Summary: The purpose of this study was to assess, at a preliminary level, the test-retest reliability of the math processing mini-game of CogGauge, a cognitive assessment tool for spaceflight. The focus of this assessment was the stability of test scores and the calculation of reliable change on test-retest scores obtained on a mathematical processing task. A sample of 18 neurotypical, non-concussed individuals, each holding at least a graduate or professional school degree, completed the task on two occasions separated by 7 days. Test-retest coefficients, reliable change difference scores (including adjustments for practice effects), and descriptive statistics are provided along with a discussion of the CogGauge tool.
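[Illustrative note] The abstract reports reliable change scores adjusted for practice effects but not its exact formulas. Below is a minimal sketch using the standard Jacobson-Truax index with a practice adjustment, which is an assumption about the authors' method; the input values are invented.

    import math

    def reliable_change(x1, x2, sd1, r_xx, practice=0.0):
        # Jacobson-Truax reliable change index with a practice-effect
        # adjustment: |RCI| > 1.96 suggests change beyond measurement
        # error at roughly the 95% level.
        sem = sd1 * math.sqrt(1 - r_xx)    # standard error of measurement
        se_diff = math.sqrt(2 * sem ** 2)  # standard error of the difference
        return ((x2 - x1) - practice) / se_diff

    # Illustrative values, not the study's data:
    print(reliable_change(x1=42.0, x2=47.5, sd1=6.0, r_xx=0.80, practice=1.5))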

OpenGesture: a low-cost authoring framework for gesture and speech based application development and learning analytics / Worsley, Marcelo / Johnston, Michael / Blikstein, Paulo Proceedings of ACM IDC'11: Interaction Design and Children 2011-06-20 p.254-256
ACM Digital Library Link
Summary: In this paper, we present an application framework for enabling education practitioners and researchers to develop interactive, multi-modal applications. These applications can be designed using typical HTML programming, and will enable a larger audience to make applications that incorporate speech recognition, gesture recognition and engagement detection. The application framework uses open-source software and inexpensive hardware that supports both multi-touch and multi-user capabilities.

Speech and multimodal interaction in mobile search Tutorials / Feng, Junlan / Johnston, Michael / Bangalore, Srinivas Proceedings of the 2011 International Conference on the World Wide Web 2011-03-28 v.2 p.293-294
ACM Digital Library Link
Summary: This tutorial highlights the characteristics of mobile search compared with its desktop counterpart, reviews state-of-the-art technologies for speech-based mobile search, and presents opportunities for exploiting multimodal interaction to optimize the efficiency of mobile search. It is suitable for students, researchers, and practitioners working in the areas of spoken language processing, multimodal interaction, and search, with an emphasis on a synergistic integration of these technologies for applications on mobile devices. We provide a detailed bibliography and sufficient literature for anyone interested to jumpstart work on this topic.

Multimodal local search in Speak4it Demos / Ehlen, Patrick / Johnston, Michael Proceedings of the 2011 International Conference on Intelligent User Interfaces 2011-02-13 p.435-436
ACM Digital Library Link
Summary: Speak4it is a consumer-oriented mobile search application that leverages multimodal input and output to allow users to search for and act on local business information. It supports true multimodal integration where user inputs can be distributed over multiple input modes. In addition to specifying queries by voice (e.g., "bike repair shops near the golden gate bridge"), users can combine speech and gesture; for example, "gas stations" + <route drawn on display> will return the gas stations along the specified route traced on the display. We describe the underlying multimodal architecture and some challenges of supporting multimodal interaction as a deployed mobile service.

The school librarian as a technology integration leader: enablers and barriers to leadership enactment Doctoral Colloquium Posters / Johnston, Melissa P. Proceedings of the 2011 iConference 2011-02-08 p.691-693
ACM Digital Library Link
Summary: This poster presents preliminary findings of in-progress research investigating current practice to identify what is enabling some school librarians to thrive as technology integration leaders and what is hindering others in order to guide school librarians in successfully enacting this role. The highly technological environment of 21st century schools has significantly redefined the role of the school librarian by presenting the opportunity to assume leadership roles through technology integration. The school librarian must evolve as a leader in order to address the needs of today's learners and ensure that they are equipped with the knowledge and skills they need to succeed, but the lack of research in this area has left school librarians ill prepared for the enactment of this role. This research, based on a distributed leadership theoretical foundation, seeks to identify and categorize the enablers and barriers experienced by school librarians in enacting a leadership role in technology integration.

Speak4it: multimodal interaction for local search Demo session / Ehlen, Patrick / Johnston, Michael Proceedings of the 2010 International Conference on Multimodal Interfaces 2010-11-08 p.10
ACM Digital Library Link
Summary: Speak4it is a consumer-oriented mobile search application that leverages multimodal input and output to allow users to search for and act on local business information. It supports true multimodal integration where user inputs can be distributed over multiple input modes. In addition to specifying queries by voice (e.g., "bike repair shops near the golden gate bridge") users can combine speech and gesture. For example, "gas stations" + <route drawn on display> will return the gas stations along the specified route traced on the display. We provide interactive demonstrations of Speak4it on both the iPhone and iPad platforms and explain the underlying multimodal architecture and challenges of supporting multimodal interaction as a deployed mobile service.
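[Illustrative note] One plausible way to resolve a query like "gas stations" + <route drawn on display> is geometric: keep the businesses within some radius of the traced polyline. The sketch below illustrates that idea; the thresholding approach is an assumption, not Speak4it's documented method.

    import math

    def dist_to_segment(p, a, b):
        # Distance from point p to segment ab (flat 2-D approximation,
        # adequate over the short spans of a drawn route).
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def along_route(places, route, radius):
        # Keep places within `radius` of any segment of the traced route.
        return [pl for pl in places
                if any(dist_to_segment(pl["xy"], route[i], route[i + 1]) <= radius
                       for i in range(len(route) - 1))]

    stations = [{"name": "A", "xy": (0.0, 0.1)}, {"name": "B", "xy": (5.0, 5.0)}]
    print(along_route(stations, route=[(0.0, 0.0), (1.0, 0.0)], radius=0.5))
    # -> only station A, which lies along the traced route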

Location grounding in multimodal local search Speech and language / Ehlen, Patrick / Johnston, Michael Proceedings of the 2010 International Conference on Multimodal Interfaces 2010-11-08 p.32
ACM Digital Library Link
Summary: Computational models of dialog context have often focused on unimodal spoken dialog or text, using the language itself as the primary locus of contextual information. But as we move from spoken interaction to situated multimodal interaction on mobile platforms supporting a combination of spoken dialog with graphical interaction, touch-screen input, geolocation, and other non-linguistic contextual factors, we will need more sophisticated models of context that capture the influence of these factors on semantic interpretation and dialog flow. Here we focus on how users establish the location they deem salient from the multimodal context by grounding it through interactions with a map-based query system. While many existing systems rely on geolocation to establish the location context of a query, we hypothesize that this approach often ignores the grounding actions users make, and provide an analysis of log data from one such system that reveals errors that arise from that faulty treatment of grounding. We then explore and evaluate, using live field data from a deployed multimodal search system, several different context classification techniques that attempt to learn the location contexts users make salient by grounding them through their multimodal actions.
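[Illustrative note] The classification problem described above can be caricatured as scoring candidate location sources and picking the most salient. The features, weights, and labels below are invented for illustration; the paper evaluates learned classifiers rather than hand-set scores.

    def salient_location(query_names_place, secs_since_gesture, secs_since_map_move):
        # Hand-set scores standing in for a learned classifier: recent
        # grounding actions (a drawn gesture, a map move) outweigh the
        # device's geolocation default.
        scores = {
            "stated_location": 1.0 if query_names_place else 0.0,
            "gesture":         1.0 / (1.0 + secs_since_gesture),
            "map_viewport":    0.5 / (1.0 + secs_since_map_move),
            "geolocation":     0.2,  # weak default prior
        }
        return max(scores, key=scores.get)

    # The user traced on the map 2 s ago and named no place:
    print(salient_location(False, secs_since_gesture=2, secs_since_map_move=30))
    # -> 'gesture'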

EPG: speech access to program guides for people with disabilities Posters and Demonstrations / Johnston, Michael / Stent, Amanda J. Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010-10-25 p.257-258
ACM Digital Library Link
Summary: Over the last 10 years, in-home entertainment options have expanded dramatically. However, interfaces to listing data are still very limited. For people with visual disabilities, or those with limited hand mobility, it can be difficult or impossible to use the "guide" provided by many cable and satellite television companies. In this demo, we present the assistive technology features of AT&T's Electronic Program Guide (EPG) prototype. These features include: speech input for listing search, speech commands for browsing search results, and text to speech for browsing search results. In addition, EPG uses commodity hardware and software to reduce barriers to entry.

Results from Empirical Testing of the System for Tactile Reception of Advanced Patterns (STRAP) PERCEPTION AND PERFORMANCE: PP3 / Johnston, Matthew / Hale, Kelly / Axelsson, Par Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting 2010-09-27 v.54 p.1335-1339
Link to HFES Digital Content
Summary: The System for Tactile Reception of Advanced Patterns (STRAP) is capable of displaying complex information through tactile actuators on a user's torso. Non-verbal communication requirements from a Military Operations in Urban Terrain (MOUT) task, together with tactile design guidelines, resulted in more than 60 distinct tactile symbols for communication and a context-free grammar. This empirical evaluation is the first step in validating the STRAP system as a complement to traditional communication methods such as military hand and arm signals and radio. Nine participants were trained on the entire tactile language to a 90% criterion and were asked to utilize a small subset of the vocabulary while completing room-clearing tasks in a virtual desktop simulation. The results show no significant difference in room-clearing performance between haptic and verbal communications, indicating that the STRAP system shows promise as a complementary communication device. Improvements to both the tactile display and symbols are discussed as means to improve recognition of haptic commands and overall system utility.
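[Illustrative note] The STRAP grammar itself is not given in the abstract; the toy context-free grammar below is an invented illustration of how tactile symbols could compose into commands, with a naive recursive recognizer.

    # Invented toy grammar, not STRAP's actual symbol inventory.
    GRAMMAR = {
        "SENTENCE":  [["SUBJECT", "ACTION"], ["SUBJECT", "ACTION", "DIRECTION"]],
        "SUBJECT":   [["team"], ["point_man"]],
        "ACTION":    [["move"], ["halt"], ["clear_room"]],
        "DIRECTION": [["left"], ["right"]],
    }

    def derives(symbol, toks):
        # True if `symbol` can derive exactly the symbol sequence `toks`
        # (naive exhaustive split; fine for a grammar this small).
        if symbol not in GRAMMAR:
            return list(toks) == [symbol]
        for rhs in GRAMMAR[symbol]:
            if len(rhs) == 1 and derives(rhs[0], toks):
                return True
            if len(rhs) == 2 and any(
                    derives(rhs[0], toks[:i]) and derives(rhs[1], toks[i:])
                    for i in range(1, len(toks))):
                return True
            if len(rhs) == 3 and any(
                    derives(rhs[0], toks[:i]) and derives(rhs[1], toks[i:j])
                    and derives(rhs[2], toks[j:])
                    for i in range(1, len(toks))
                    for j in range(i + 1, len(toks))):
                return True
        return False

    print(derives("SENTENCE", ["team", "move", "left"]))  # True
    print(derives("SENTENCE", ["left", "team"]))          # False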

A Survey of Nurses Self-reported Prospective Memory Tasks: What Must they Remember and What do they Forget POSTERS: POS3 -- Posters 3 / Fink, Nicole / Pak, Richard / Bass, Brock / Johnston, Michael / Battisto, Dina Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting 2010-09-27 v.54 p.1600-1604
Link to HFES Digital Content
Summary: Although a nurse's job is inundated with prospective memory (PM) demands, and studies show that PM failures are a key component of adverse medical events, only one study has examined prospective memory in nursing (Grundgeiger, Sanderson, MacDougall, & Venkatesh, 2009). The purpose of the current study was to complement existing research with self-reports from 25 nurses on the PM tasks they must remember and those they forget. Results revealed that nurses most frequently perform episodic tasks, and these tasks can be further classified to better explain when nursing PM demands arise and what the demands consist of. A more specific categorization of nursing PM tasks enables researchers to focus on specific design solutions. We provide examples of such re-design recommendations intended to alleviate PM demands.

Building multimodal applications with EMMA Multimodal dialog / Johnston, Michael Proceedings of the 2009 International Conference on Multimodal Interfaces 2009-11-02 p.47-54
Keywords: gesture, multimodal, prototyping, speech, standards
ACM Digital Library Link
Summary: Multimodal interfaces combining natural modalities such as speech and touch with dynamic graphical user interfaces can make it easier and more effective for users to interact with applications and services on mobile devices. However, building these interfaces remains a complex and highly specialized task. The W3C EMMA standard provides a representation language for inputs to multimodal systems, facilitating plug-and-play of system components and rapid prototyping of interactive multimodal systems. We illustrate the capabilities of the EMMA standard through an examination of its use in a series of mobile multimodal applications for the iPhone.
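[Illustrative note] For concreteness, here is a minimal EMMA 1.0 document built in Python: one speech interpretation annotated with medium, mode, confidence, and tokens, following the W3C spec. The <query>/<location> payload elements are an application-specific example, not part of the standard.

    import xml.etree.ElementTree as ET

    EMMA = "http://www.w3.org/2003/04/emma"
    ET.register_namespace("emma", EMMA)

    root = ET.Element(f"{{{EMMA}}}emma", {"version": "1.0"})
    interp = ET.SubElement(root, f"{{{EMMA}}}interpretation", {
        "id": "int1",
        f"{{{EMMA}}}medium": "acoustic",
        f"{{{EMMA}}}mode": "voice",
        f"{{{EMMA}}}confidence": "0.82",
        f"{{{EMMA}}}tokens": "pizza near chelsea",
    })
    # Application payload (example elements, not defined by EMMA itself):
    ET.SubElement(interp, "query").text = "pizza"
    ET.SubElement(interp, "location").text = "chelsea"

    print(ET.tostring(root, encoding="unicode"))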

Robust gesture processing for multimodal interaction Multimodal interfaces II (oral session) / Bangalore, Srinivas / Johnston, Michael Proceedings of the 2008 International Conference on Multimodal Interfaces 2008-10-20 p.225-232
Keywords: finite-state methods, local search, mobile, multimodal interfaces, robustness, speech-gesture integration
ACM Digital Library Link
Summary: With the explosive growth in mobile computing and communication over the past few years, it is possible to access almost any information from virtually anywhere. However, the efficiency and effectiveness of this interaction is severely limited by the inherent characteristics of mobile devices, including small screen size and the lack of a viable keyboard or mouse. This paper concerns the use of multimodal language processing techniques to enable interfaces combining speech and gesture input that overcome these limitations. Specifically we focus on robust processing of pen gesture inputs in a local search application and demonstrate that edit-based techniques that have proven effective in spoken language processing can also be used to overcome unexpected or errorful gesture input. We also examine the use of a bottom-up gesture aggregation technique to improve the coverage of multimodal understanding.
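[Illustrative note] The edit-based idea the paper carries over from spoken language processing can be caricatured as follows: map an unexpected gesture-symbol sequence onto the closest sequence the grammar expects. The symbol inventory and expected patterns below are invented examples.

    def edit_distance(a, b):
        # Classic Levenshtein dynamic program over symbol sequences.
        d = [[i + j if i == 0 or j == 0 else 0 for j in range(len(b) + 1)]
             for i in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return d[len(a)][len(b)]

    # Gesture patterns the multimodal grammar expects (invented examples).
    EXPECTED = [("point",), ("line",), ("area",), ("point", "point")]

    def repair(gesture_symbols):
        # Choose the expected pattern reachable with the fewest edits.
        return min(EXPECTED, key=lambda e: edit_distance(gesture_symbols, e))

    print(repair(("point", "point", "scribble")))  # -> ('point', 'point')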

Results from Pilot Testing a System for Tactile Reception of Advanced Patterns (STRAP) PERCEPTION AND PERFORMANCE: PP7 - The Role of Perception in the Design of Military Systems / Fuchs, Sven / Johnston, Matthew / Hale, Kelly S. / Axelsson, Par Proceedings of the Human Factors and Ergonomics Society 52nd Annual Meeting 2008-09-22 v.52 p.1302-1306
Link to HFES Digital Content
Summary: This paper presents pilot study results on the learnability and effectiveness of the System for Tactile Reception of Advanced Patterns (STRAP), which is capable of displaying complex information through tactile actuators on the user's torso. Information requirements from dismounted soldier communications and tactile design guidelines resulted in 56 distinct tactile symbols. To ease the cognitive demands of decoding, information presentation was formalized by developing construction rules for tactile symbols and a context-free grammar for compiling tactile sentences. The pilot study trained two participants on the tactile language. Results showed they were able to reach a 90% criterion in less than 3.5 hours. Furthermore, once the language was learned, participants were able to receive and comprehend complex commands comprising multiple tactile symbols under varying levels of workload with some success.

The Physiological Assessment of VE Training System Fidelity VIRTUAL ENVIRONMENTS: VE1 - Human Interfaces for Virtual Environments / Jones, David L. / Greenwood-Ericksen, Adams / Hale, Kelly / Johnston, Matthew Proceedings of the Human Factors and Ergonomics Society 52nd Annual Meeting 2008-09-22 v.52 p.2107-2111
Link to HFES Digital Content
Summary: This paper describes the Training Effectiveness Evaluation with neurophysiological metrics: Fidelity Assessment of VE Training Systems (TEE-FAST) framework, which offers a comprehensive assessment of physical, functional and psychological fidelity of VE training systems. TEE-FAST evaluates at the cue level how information is presented to users and how users respond, both behaviorally and physiologically, in both a VE training environment and the related operational (or live training) environment. The differences in cue presentation and user responses across VE and live tasks are evaluated to determine how effective a VE training system is at targeting specified training goals. Evaluation outcomes provide targeted design guidance regarding training utility related to each specified training goal, as well as redesign recommendations to enhance VE system fidelity and training utility.

Context-Sensitive Help for Multimodal Dialogue / Hastie, Helen Wright / Johnston, Michael / Ehlen, Patrick Proceedings of the 2002 International Conference on Multimodal Interfaces 2002-10-14 p.93
ACM Digital Library Link
Summary: Multimodal interfaces offer users unprecedented flexibility in choosing a style of interaction. However, users are frequently unaware of or forget shorter or more effective multimodal or pen-based commands. This paper describes a working help system that leverages the capabilities of a multimodal interface in order to provide targeted, unobtrusive, context-sensitive help. This Multimodal Help System guides the user to the most effective way to specify a request, providing transferable knowledge that can be used in future requests without repeatedly invoking the help system.

EDITED BOOK Readings in Intelligent User Interfaces / Maybury, Mark T. / Wahlster, Wolfgang 1998 p.736 Morgan Kaufmann Publishers
ISBN: 1-55860-444-8
Intelligent User Interfaces: An Introduction
I. MULTIMEDIA INPUT ANALYSIS
"Put-That-There": Voice and Gesture at the Graphics Interface
	+ Bolt, R. A.
Synergistic Use of Direct Manipulation and Natural Language
	+ Cohen, P. R.
	+ Dalrymple, M.
	+ Moran, D. B.
Natural Language with Integrated Deictic and Graphic Gestures
	+ Neal, J. G.
	+ Thielman, C. Y.
	+ Dobes, Z.
Integrating Simultaneous Input from Speech, Gaze, and Hand Gestures
	+ Koons, D. B.
	+ Sparrell, C. J.
	+ Thorisson, K. R.
The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at Is What You Get
	+ Jacob, R.
II. MULTIMEDIA PRESENTATION DESIGN
Automating the Generation of Coordinated Multimedia Explanations
	+ Feiner, S. K.
	+ McKeown, K. R.
Planning Multimedia Explanations Using Communicative Acts
	+ Maybury, M. T.
Plan-Based Integration of Natural Language and Graphics Generation
	+ Wahlster, W.
	+ Andre, E.
	+ Finkler, W.
Presentation Design Using an Integrated Knowledge Base
	+ Arens, Y.
	+ Miller, L.
	+ Sondheimer, N. K.
Automatic Generation of Technical Documentation
	+ Reiter, E.
	+ Mellish, C.
	+ Levine, J.
On the Knowledge Underlying Multimedia Presentations
	+ Arens, Y.
	+ Hovy, E.
	+ Vossers, M.
III. AUTOMATED GRAPHICS DESIGN
Automating the Design of Graphical Presentations of Relational Information
	+ Mackinlay, J. D.
Data Characterization for Intelligent Graphics Presentation
	+ Roth, S. F.
	+ Mattis, J.
A Task-Analytic Approach to the Automated Design of Graphic Presentations
	+ Casner, S. M.
Automated Generation of Intent-Based 3D Illustrations
	+ Seligmann, D.
	+ Feiner, S.
Interactive Graphic Design Using Automatic Presentation Knowledge
	+ Roth, S. F.
	+ Kolojejchick, J.
	+ Mattis, J.
IV. AUTOMATED LAYOUT
A Grid-Based Approach to Automating Display Layout
	+ Feiner, S. K.
Automatic Generation of Formatted Text
	+ Hovy, E.
	+ Arens, Y.
Constraint-Based Graphical Layout of Multimodal Presentations
	+ Graf, W. H.
An Empirical Study of Algorithms for Point-Feature Label Placement
	+ Christensen, J.
	+ Marks, J.
	+ Shieber, S.
Grammar-Based Articulation for Multimedia Document Design
	+ Weitzman, L.
	+ Wittenburg, K.
V. USER AND DISCOURSE MODELING
User Modeling via Stereotypes
	+ Rich, E.
Intelligent Interfaces as Agents
	+ Chin, D. N.
User and Discourse Models for Multimodal Communication
	+ Wahlster, W.
KN-AHS: An Adaptive Hypertext Client of the User Modeling System BGP-MS
	+ Kobsa, A.
	+ Muller, D.
	+ Nill, A.
Planning Text for Advisory Dialogues: Capturing Intentional and Rhetorical Information
	+ Moore, J. D.
	+ Paris, C. L.
Planning Interactive Explanations
	+ Cawsey, A.
Natural Language and Exploration of an Information Space: The ALFresco Interactive System
	+ Stock, O.
The Application of Natural Language Models to Intelligent Multimedia
	+ Burger, J. D.
	+ Marshall, R. J.
VI. MODEL-BASED INTERFACES
Steamer: An Interactive Inspectable Simulation-Based Training System
	+ Hollan, J. D.
	+ Hutchins, E. L.
	+ Weitzman, L. M.
A Knowledge-Based User Interface Management System
	+ Foley, J.
	+ Gibbs, C.
	+ Kim, W.
ITS: A Tool for Rapidly Developing Interactive Applications
	+ Wiecha, C.
	+ Bennett, W.
	+ Boies, S.
Beyond Interface Builders: Model-Based Interface Tools
	+ Szekely, P.
	+ Luo, P.
	+ Neches, R.
Model-Based Automated Generation of User Interfaces
	+ Puerta, A. R.
	+ Eriksson, H.
	+ Gennari, J. H.
Automatic Generation of a User Interface for Highly Interactive Business-Oriented Applications
	+ Vanderdonckt, J.
VII. AGENT INTERFACES
Agents That Reduce Work and Information Overload
	+ Maes, P.
Embedding Critics in Design Environments
	+ Fischer, G.
	+ Nakakoji, K.
	+ Ostwald, J.
Multimodal Interaction for Distributed Interactive Simulation
	+ Cohen, P.
	+ Johnston, M.
	+ McGee, D.
Speech Dialogue with Facial Displays: Multimodal Human-Computer Conversation
	+ Nagao, K.
	+ Takeuchi, A.
Animated Conversation: Rule-Based Generation of Facial Expression, Gesture and Spoken Intonation for Multiple Conversational Agents
	+ Cassell, J.
	+ Pelachaud, C.
	+ Badler, N.
VIII. EVALUATION
A Morphological Analysis of the Design Space of Input Devices
	+ Card, S. K.
	+ Mackinlay, J. D.
	+ Robertson, G. G.
Wizard of Oz Studies -- Why and How
	+ Dahlback, N.
	+ Jonsson, A.
	+ Ahrenberg, L.
User-Centered Modeling for Spoken Language and Multimodal Interfaces
	+ Oviatt, S. L.
PARADISE: A Framework for Evaluating Spoken Dialogue Agents
	+ Walker, M.
	+ Litman, D.
	+ Kamm, C.