HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,520,324
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Lecolinet_E* Results: 66 Sorted by: Date
<<First <Previous Permalink Next> Last>> Records: 1 to 25 of 66 Jump to: 2016 | 15 | 14 | 13 | 12 | 11 | 10 | 09 | 08 | 07 | 06 | 05 | 03 | 01 | 00 | 99 | 98 | 96 |
[1] Shared Interaction on a Wall-Sized Display in a Data Manipulation Task Displays and Shared Interactions / Liu, Can / Chapuis, Olivier / Beaudouin-Lafon, Michel / Lecolinet, Eric Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.2075-2086
ACM Digital Library Link
Summary: Wall-sized displays support small groups of users working together on large amounts of data. Observational studies of such settings have shown that users adopt a range of collaboration styles, from loosely to closely coupled. Shared interaction techniques, in which multiple users perform a command collaboratively, have also been introduced to support co-located collaborative work. In this paper, we operationalize five collaborative situations with increasing levels of coupling, and test the effects of providing shared interaction support for a data manipulation task in each situation. The results show the benefits of shared interaction for close collaboration: it encourages collaborative manipulation, it is more efficient and preferred by users, and it reduces physical navigation and fatigue. We also identify the time costs caused by disruption and communication in loose collaboration and analyze the trade-offs between parallelization and close collaboration. These findings inform the design of shared interaction techniques to support collaboration on wall-sized displays.

[2] Finding Objects Faster in Dense Environments Using a Projection Augmented Robotic Arm Human-Robot Interaction / Gacem, Hind / Bailly, Gilles / Eagan, James / Lecolinet, Eric Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part III 2015-09-14 v.3 p.221-238
Keywords: Guidance techniques; Augmented arm; Steerable pico-projector
Link to Digital Content at Springer
Summary: Locating an object in an unfamiliar and dense physical environment, such as a control room, supermarket, or warehouse, can be challenging. In this paper, we present the Projection-Augmented Arm (PAA), a motorized robotic arm augmented with a pico-projector to help users localize targets in such environments. The arm moves and displays a projected spotlight on the target. We present the results of a study showing that the PAA helps users locate target objects in a dense environment more quickly. We further study how the visibility of the moving projected spotlight, versus the physical movement of the projection arm, influences user performance and search strategy, finding that (1) information about the orientation of the arm has a stronger impact on performance than the moving spotlight projected on the search space; (2) the orientation of the arm is useful (24% improvement), especially when the target is behind the user (26% improvement); and (3) users' strategies relied mainly on the arm when it was visible.

[3] Glass+Skin: An Empirical Evaluation of the Added Value of Finger Identification to Basic Single-Touch Interaction on Touch Screens Tangible and Tactile Interaction / Roy, Quentin / Guiard, Yves / Bailly, Gilles / Lecolinet, Éric / Rioul, Olivier Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part IV 2015-09-14 v.4 p.55-71
Keywords: Input modality; Multitouch; Finger identification; Evaluation methodology; Throughput; Information theory
Link to Digital Content at Springer
Summary: The usability of small devices such as smartphones or interactive watches is often hampered by the limited size of command vocabularies. This paper is an attempt at better understanding how finger identification may help users invoke commands on touch screens, even without recourse to multi-touch input. We describe how finger identification can increase the size of input vocabularies under the constraint of limited real estate, and we discuss some visual cues to communicate this novel modality to novice users. We report a controlled experiment that evaluated, over a large range of input-vocabulary sizes, the efficiency of single-touch command selections with vs. without finger identification. We analyzed the data not only in terms of traditional time and error metrics, but also in terms of a throughput measure based on Shannon's theory, which we show offers a synthetic and parsimonious account of users' performance. The results show that the larger the input vocabulary needed by the designer, the more promising the identification of individual fingers.
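The throughput measure mentioned above is grounded in Shannon's information theory: the information actually transmitted per selection (the mutual information between intended and selected commands) divided by selection time. The paper's exact formulation is not reproduced in this listing, so the following is only a minimal sketch of such a measure; the function names and the confusion-matrix data are illustrative.

```python
import math

def mutual_information(confusion):
    """Mutual information I(X;Y) in bits, from a confusion matrix of raw
    counts: rows = intended commands, columns = selected commands."""
    total = sum(sum(row) for row in confusion)
    px = [sum(row) / total for row in confusion]                  # P(intended)
    py = [sum(row[j] for row in confusion) / total
          for j in range(len(confusion[0]))]                      # P(selected)
    mi = 0.0
    for i, row in enumerate(confusion):
        for j, count in enumerate(row):
            if count:
                pxy = count / total
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

def throughput(confusion, mean_selection_time_s):
    """Bits of command information transmitted per second."""
    return mutual_information(confusion) / mean_selection_time_s

# Hypothetical 2-command data: error-free selections carry exactly 1 bit
perfect = [[50, 0], [0, 50]]
print(throughput(perfect, 1.0))  # 1.0
```

Errors shrink the mutual information, so this measure penalizes both slow and inaccurate selection in a single number, which is what lets it give "a synthetic and parsimonious account" of performance across vocabulary sizes.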

[4] Physical Loci: Leveraging Spatial, Object and Semantic Memory for Command Selection Interaction in 3D Space / Perrault, Simon T. / Lecolinet, Eric / Bourse, Yoann Pascal / Zhao, Shengdong / Guiard, Yves Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.299-308
ACM Digital Library Link
Summary: Physical Loci, a technique based on the ancient method-of-loci mnemonic, allows users to quickly learn a large command set by leveraging spatial, object and verbal/semantic memory to create a cognitive link between individual commands and nearby physical objects in a room (called loci). We first report on an experiment showing that, for learning 25 items, Physical Loci outperformed a mid-air Marking Menu baseline. A long-term retention experiment with 48 items then showed that recall was nearly perfect one week later and, surprisingly, independent of whether the command/locus mapping was one's own choice or somebody else's. A final study suggested that recall performance is robust to alterations of the learned mapping, whether systematic or random.

[5] SuperVision: Spatial Control of Connected Objects in a Smart Home WIP Theme: Ubicomp, Robots and Wearables / Ghosh, Sarthak / Bailly, Gilles / Despouys, Robin / Lecolinet, Eric / Sharrock, Rémi Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.2079-2084
ACM Digital Library Link
Summary: In this paper, we propose SuperVision, a new interaction technique for the distant control of objects in a smart home. This technique enables users to point towards an object, visualize its current state and select a desired functionality. To achieve this: 1) we present a new remote control that contains a pico-projector and a slider; 2) we introduce a visualization technique that allows users to locate and control objects in adjacent rooms by using their spatial memory. We further present a few example applications that convey the possibilities of this technique.

[6] A design space of guidance techniques for large and dense physical environments Systèmes mixtes / Gacem, Hind / Bailly, Gilles / Eagan, James / Lecolinet, Eric Proceedings of the 2014 Conference of the Association Francophone d'Interaction Homme-Machine 2014-10-28 p.9-17
ACM Digital Library Link
Summary: Finding an object in a physical environment is difficult if the environment contains many objects, especially if it is large and dense. We propose a design space that describes and compares existing guidance techniques according to four dimensions: output modality, physicality, granularity and spatial information. Output modality can be visual, audio or tactile. Guidance information can be displayed using physical objects or virtual artifacts. Granularity indicates whether the technique serves to navigate towards the vicinity of the target or to precisely localize the target. Finally, spatial information is either exocentric or egocentric. This design space aims at providing an overview of the domain and helping designers and researchers to understand the key properties of these techniques. It also enables their comparison and the generation of new techniques by highlighting unexplored areas.

[7] A design space for three-dimensional curve edition Techniques d'interaction: dimensions > 2 / Jacob, Thibaut / Bailly, Gilles / Lecolinet, Eric / Foulon, Raphael / Corteel, Etienne Proceedings of the 2014 Conference of the Association Francophone d'Interaction Homme-Machine 2014-10-28 p.105-112
ACM Digital Library Link
Summary: The design and editing of 3D curves is involved in a wide array of applications such as CAD, multimedia content editing, or landscape and road generation. This diversity has spread 3D-curve-related work across different communities such as SIGCHI or SIGGRAPH. In this article, we introduce a design space that gathers existing techniques in the field of 3D curve creation and editing. This design space is built around two axes, system and language, in order to describe and compare existing techniques.

[8] SuperVision: spatial control of connected objects in smart-home Travaux en cours (TeC) / Ghosh, Sarthak / Bailly, Gilles / Despouys, Robin / Lecolinet, Eric / Sharrock, Rémi Proceedings of the 2014 Conference of the Association Francophone d'Interaction Homme-Machine 2014-10-28 p.201-206
ACM Digital Library Link
Summary: In this paper, we propose SuperVision, a novel interaction technique for controlling distant connected objects in a smart home. Users point at an object with their remote control to visualize its state and select its functionalities. To achieve this goal, 1) we present a novel remote control augmented with a video-projector and a slider; 2) we introduce a visualization allowing users to see through walls in order to control objects in the line of sight as well as objects in other rooms; 3) we describe applications relying on this interaction technique.

[9] Belly gestures: body centric gestures on the abdomen / Vo, Dong-Bach / Lecolinet, Eric / Guiard, Yves Proceedings of the 8th Nordic Conference on Human-Computer Interaction 2014-10-26 p.687-696
ACM Digital Library Link
Summary: Recent HCI research has shown that the body offers an interactive surface particularly suitable to eyes-free interaction. While researchers have mainly focused on the arms and the hands, we argue that the surface of the belly is especially appropriate. The belly offers a fairly large surface that can be easily reached with both hands in any circumstance, including walking or running. We report on a study that explored how users perform one-handed gestures on their abdomen. Users adopt different mental spatial orientations depending on the complexity of the gesture they have to draw (drawing a digit vs. a simple directional stroke). When provided with no visual orientation cues, they often draw gestures following symmetries relative to a horizontal or vertical axis. The more complex the gesture, the less stable its orientation. Focusing on directional strokes, we found that users are able to draw almost linear gestures, despite the fact that the abdomen is not perfectly planar, and perform particularly well in cardinal directions. The paper ends with guidelines that may inform the design of novel interaction techniques.

[10] Multi-finger chords for hand-held tablets: recognizable and memorable Multitouch interaction / Wagner, Julie / Lecolinet, Eric / Selker, Ted Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.2883-2892
ACM Digital Library Link
Summary: Despite the demonstrated benefits of multi-finger input, today's gesture vocabularies offer a limited number of postures and gestures. Previous research designed several posture sets, but did not address the limited human capacity for retaining them. We present a multi-finger chord vocabulary, introduce a novel hand-centric approach to detect the identity of fingers on off-the-shelf hand-held tablets, and report on the detection accuracy. A between-subjects experiment comparing a "random" to a "categorized" chord-command mapping found that users retained categorized mappings more accurately over one week than random ones. Given the logical posture-language structure, people adopted logical memorization strategies, such as 'exclusion', 'order', and 'category', to minimize the amount of information to retain. We conclude that structured chord-command mappings support learning, short-, and long-term retention of chord-command mappings.

[11] Effects of display size and navigation type on a classification task Interactive surfaces and pervasive displays / Liu, Can / Chapuis, Olivier / Beaudouin-Lafon, Michel / Lecolinet, Eric / Mackay, Wendy E. Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.4147-4156
ACM Digital Library Link
Summary: The advent of ultra-high resolution wall-size displays and their use for complex tasks require a more systematic analysis and deeper understanding of their advantages and drawbacks compared with desktop monitors. While previous work has mostly addressed search, visualization and sense-making tasks, we have designed an abstract classification task that involves explicit data manipulation. Based on our observations of real uses of a wall display, this task represents a large category of applications. We report on a controlled experiment that uses this task to compare physical navigation in front of a wall-size display with virtual navigation using pan-and-zoom on the desktop. Our main finding is a robust interaction effect between display type and task difficulty: while the desktop can be faster than the wall for simple tasks, the wall gains a sizable advantage as the task becomes more difficult. A follow-up study shows that other desktop techniques (overview+detail, lens) do not perform better than pan-and-zoom and are therefore slower than the wall for difficult tasks.

[12] Watchit: simple gestures and eyes-free interaction for wristwatches and bracelets Papers: displays everywhere / Perrault, Simon T. / Lecolinet, Eric / Eagan, James / Guiard, Yves Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems 2013-04-27 v.1 p.1451-1460
ACM Digital Library Link
Summary: We present WatchIt, a prototype device that extends interaction beyond the watch surface to the wristband, and two interaction techniques for command selection and execution. Because the small screen of wristwatch computers suffers from visual occlusion and the fat finger problem, we investigated the use of the wristband as an available interaction resource. Not only does WatchIt use a cheap, energy efficient and invisible technology, but it involves simple, basic gestures that allow good performance after little training, as suggested by the results of a pilot study. We propose a novel gesture technique and an adaptation of an existing menu technique suitable for wristband interaction. In a user study, we investigated their usage in eyes-free contexts, finding that they perform well. Finally, we present techniques where the bracelet is used in addition to the screen to provide precise continuous control over list scrolling. We also report on a preliminary survey of traditional and digital jewelry that points to the high frequency of watches and bracelets in both genders and gives a sense of the tasks people feel like performing on such devices.

[13] Augmented letters: mnemonic gesture-based shortcuts Papers: touch interaction / Roy, Quentin / Malacria, Sylvain / Guiard, Yves / Lecolinet, Eric / Eagan, James Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems 2013-04-27 v.1 p.2325-2328
ACM Digital Library Link
Summary: We propose Augmented Letters, a new technique aimed at augmenting gesture-based techniques such as Marking Menus [9] by giving them natural, mnemonic associations. An Augmented Letters gesture consists of the initial letter of the command name, sketched by hand in the Unistroke style and affixed with a straight tail. We designed a tentative touch-device interaction technique that supports fast interactions with large sets of commands, is easily discoverable, improves users' recall at no speed cost, and supports a fluid transition from novice to expert mode. An experiment suggests that Augmented Letters outperform Marking Menus in terms of user recall.

[14] Bezel-Tap gestures: quick activation of commands from sleep mode on tablets Papers: mobile gestures / Serrano, Marcos / Lecolinet, Eric / Guiard, Yves Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems 2013-04-27 v.1 p.3027-3036
ACM Digital Library Link
Summary: We present Bezel-Tap Gestures, a novel family of interaction techniques for immediate interaction on handheld tablets, regardless of whether the device is awake or in sleep mode. The technique rests on the close succession of two input events: first a bezel tap, whose detection by accelerometers wakes an idle tablet almost instantly, then a screen contact. Field studies confirmed that the probability of this input sequence occurring by chance is very low, ruling out concerns about accidental activation. One experiment examined the optimal size of the vocabulary of commands for all four regions of the bezel (top, bottom, left, right). Another experiment evaluated two variants of the technique, both of which allow two-level selection in a hierarchy of commands, the initial bezel tap being followed by either two screen taps or a screen slide. The data suggest that Bezel-Tap Gestures may serve to design large vocabularies of micro-interactions with a sleeping tablet.
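The two-event sequence described above (an accelerometer-detected bezel tap closely followed by a screen contact) can be sketched as a small detector. This is not the authors' implementation; the class name, spike threshold and time window are assumptions chosen for illustration.

```python
BEZEL_TAP_WINDOW_S = 0.5       # hypothetical: max delay between tap and touch
ACCEL_SPIKE_THRESHOLD_G = 2.0  # hypothetical: acceleration spike for a bezel tap

class BezelTapDetector:
    """Arms on an accelerometer spike (the bezel tap) and fires only if a
    screen contact follows within the time window."""

    def __init__(self):
        self._armed_at = None

    def on_accel(self, t, magnitude_g):
        # A sharp acceleration spike is taken as a bezel tap; in the paper's
        # design this same event also wakes the sleeping tablet.
        if magnitude_g >= ACCEL_SPIKE_THRESHOLD_G:
            self._armed_at = t

    def on_touch(self, t, x, y):
        """Return the touch point if it completes a Bezel-Tap gesture,
        otherwise None (an ordinary touch, ignored by this detector)."""
        if self._armed_at is not None and t - self._armed_at <= BEZEL_TAP_WINDOW_S:
            self._armed_at = None
            return (x, y)
        return None
```

Requiring both events within a short window is what makes accidental activation so unlikely: neither a stray touch nor a bump of the device alone triggers a command.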

[15] Design and evaluation of finger-count interaction: Combining multitouch gestures and menus / Bailly, Gilles / Müller, Jörg / Lecolinet, Eric International Journal of Human-Computer Studies 2012-10 v.70 n.10 p.673-689
Keywords: Menu techniques
Keywords: Multi-touch
Keywords: Multi-finger interaction
Keywords: Two-handed interaction
Link to Article at sciencedirect
Summary: Selecting commands on multi-touch displays is still a challenging problem. While a number of gestural vocabularies have been proposed, these are generally restricted to one or two fingers or can be difficult to learn. We introduce Finger-Count gestures, a coherent set of multi-finger and two-handed gestures. Finger-Count gestures are simple, robust, expressive and fast to perform. In order to make these gestures self-revealing and easy to learn, we propose the Finger-Count menu, a menu technique and teaching method for implicitly learning Finger-Count gestures. We discuss the properties, advantages and limitations of Finger-Count interaction from the gesture and menu technique perspectives as well as its integration into three applications. We present alternative designs to increase the number of commands and to enable multi-user scenarios. Following a study which shows that Finger-Count is as easy to learn as radial menus, we report the results of an evaluation investigating which gestures are easier to learn and which finger chords people prefer. Finally, we present Finger-Count for in-the-air gestures. Thereby, the same gesture set can be used from a distance as well as when touching the surface.

[16] S-Notebook: augmenting mobile devices with interactive paper for data management Interactive posters / Pietrzak, Thomas / Malacria, Sylvain / Lecolinet, Éric Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012-05-22 p.733-736
ACM Digital Library Link
Summary: This paper presents S-Notebook, a tool that makes it possible to "extend" mobile devices with augmented paper. Paper is used to overcome the physical limitations of mobile devices by offering additional space to annotate digital files and to easily create relationships between them. S-Notebook allows users to link paper annotations or drawings to anchors in digital files without having to learn pre-defined pen gestures. The system stores metadata such as the spatial or temporal location of anchors in the document, as well as the zoom level of the view. Tapping on notes with the digital pen brings up the corresponding documents as they were displayed when the notes were taken. A given piece of augmented paper can contain notes associated with several documents, possibly at several locations. The annotation space can thus serve as a simple way to relate various pieces of one or several digital documents to each other. When the user shares his notes, the piece of paper becomes a tangible token that virtually contains digital information.

[17] Watchit: simple gestures for interacting with a watchstrap Video presentations / Perrault, Simon / Malacria, Sylvain / Guiard, Yves / Lecolinet, Eric Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing Systems 2012-05-05 v.2 p.1467-1468
ACM Digital Library Link
Summary: We present WatchIt, a new interaction technique for wristwatch computers, a category of devices that badly suffers from a scarcity of input surface area. WatchIt considerably increases this surface by extending it from the touch screen to the wristband. The video shows a mockup of how simple gestures on the external and/or internal bands may allow the user to scroll a list (one-finger slide), to select an item (tap), and to set a continuous parameter like the volume of music playing (two-finger slide), avoiding the drawback of screen occlusion by the finger. Also shown is the prototype we are currently using to investigate the usability of our new interaction technique.

[18] JerkTilts: using accelerometers for eight-choice selection on mobile devices Poster session / Baglioni, Mathias / Lecolinet, Eric / Guiard, Yves Proceedings of the 2011 International Conference on Multimodal Interfaces 2011-11-14 p.121-128
ACM Digital Library Link
Summary: This paper introduces JerkTilts, quick back-and-forth gestures that combine device pitch and roll. JerkTilts may serve as gestural self-delimited shortcuts for activating commands. Because they only depend on device acceleration and rely on a parallel and independent input channel, these gestures do not interfere with finger activity on the touch screen. Our experimental data suggest that recognition rates in an eight-choice selection task are as high with JerkTilts as with thumb slides on the touch screen. We also report data confirming that JerkTilts can be combined successfully with simple touch-screen operation. Data from a field study suggest that inadvertent JerkTilts are unlikely to occur in real-life contexts. We describe three illustrative implementations of JerkTilts, which show how the technique helps to simplify and shorten the sequence of actions to reach frequently used commands.
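One plausible way to classify a back-and-forth pitch/roll deviation into eight choices (four cardinal tilts plus four diagonals) is to bin its direction into 45-degree sectors. This is a sketch under assumed thresholds and labels, not the authors' recognizer.

```python
import math

def classify_jerktilt(peak_pitch_deg, peak_roll_deg, min_amplitude_deg=15.0):
    """Map the peak pitch/roll deviation of a quick back-and-forth tilt to one
    of eight commands: four cardinal tilts and four diagonals. Returns None
    when the deviation is too small to count as a deliberate gesture."""
    amplitude = math.hypot(peak_pitch_deg, peak_roll_deg)
    if amplitude < min_amplitude_deg:
        return None
    # Direction of the tilt, measured in the pitch/roll plane
    angle = math.degrees(math.atan2(peak_pitch_deg, peak_roll_deg)) % 360
    # Bin into eight 45-degree sectors centered on the cardinal directions
    sector = int((angle + 22.5) // 45) % 8
    labels = ["roll+", "roll+pitch+", "pitch+", "roll-pitch+",
              "roll-", "roll-pitch-", "pitch-", "roll+pitch-"]
    return labels[sector]
```

Because the gesture is self-delimited (it returns to the rest attitude) and uses only the accelerometer, a classifier of this shape runs in parallel with, and independently of, touch-screen input.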

[19] Gesture-aware remote controls: guidelines and interaction technique Oral session 3: gesture and touch / Bailly, Gilles / Vo, Dong-Bach / Lecolinet, Eric / Guiard, Yves Proceedings of the 2011 International Conference on Multimodal Interfaces 2011-11-14 p.263-270
ACM Digital Library Link
Summary: Interaction with TV sets, set-top boxes or media centers strongly differs from interaction with personal computers: not only does a typical remote control suffer strong form factor limitations but the user may well be slouching in a sofa. In the face of more and more data, features, and services made available on interactive televisions, we propose to exploit the new capabilities provided by gesture-aware remote controls. We report the data of three user studies that suggest some guidelines for the design of a gestural vocabulary and we propose five novel interaction techniques. Study 1 reports that users spontaneously perform pitch and yaw gestures as the first modality when interacting with a remote control. Study 2 indicates that users can accurately select up to 5 items with eyes-free roll gestures. Capitalizing on our findings, we designed five interaction techniques that use either device motion, or button-based interaction, or both. They all favor the transition from novice to expert usage for selecting favorites. Study 3 experimentally compares these techniques. It reveals that motion of the device in 3D space, associated with finger presses at the surface of the device, is achievable, fast and accurate. Finally, we discuss the integration of these techniques into a coherent multimedia menu system.

[20] Promesses et contraintes de la joaillerie numérique interactive: un aperçu de l'état de l'art Interagir sans clavier ni souris / Perrault, Simon T. / Bailly, Gilles / Guiard, Yves / Lecolinet, Eric Proceedings of the 2011 Conference of the Association Francophone d'Interaction Homme-Machine 2011-10-24 p.14
ACM Digital Library Link
Summary: The miniaturization of electronic components paves the way for new interaction techniques for wearable computing. We briefly review interactive digital jewelry, an emerging subfield. We report the data of a limited poll about the way people perceive the prospect of digital jewelry. We then consider the constraints and the promise of digital jewelry, and finally classify the current solutions, which generally resort to gestural interaction.

[21] Un espace de caractérisation de la télécommande dans le contexte de la télévision interactive Interagir sans clavier ni souris / Vo, Dong-Bach / Bailly, Gilles / Lecolinet, Eric / Guiard, Yves Proceedings of the 2011 Conference of the Association Francophone d'Interaction Homme-Machine 2011-10-24 p.17
ACM Digital Library Link
Summary: Initially designed in the nineteen seventies as a mere zapping tool, the traditional TV remote control obviously no longer responds to the multifarious needs of today's interactive television. Designing new remote-control devices is a challenge that the HCI community has started to face. The paper indicates the various directions currently being investigated by researchers. It starts with an attempt to characterize the specific context of interactive TV, then offers a tentative account of the design and evaluation space of interest. There is little doubt that the traditional remote control may still be improved and augmented. It is unlikely, however, that it will long survive the crisis it has been undergoing since the beginning of the digital era, given the emerging plethora of alternative interaction possibilities based on a variety of new interfacing logics, which the paper reviews.

[22] U-Note: Capture the Class and Access It Everywhere HCI in the Classroom / Malacria, Sylvain / Pietrzak, Thomas / Tabard, Aurélien / Lecolinet, Eric Proceedings of IFIP INTERACT'11: Human-Computer Interaction 2011-09-05 v.1 p.643-660
Keywords: Augmented classroom; digital pen; digital lecturing environment; capture and access; digital classroom
Link to Digital Content at Springer
Summary: We present U-Note, an augmented teaching and learning system leveraging the advantages of paper while letting teachers and pupils benefit from the richness that digital media can bring to a lecture. U-Note provides automatic linking between the notes in pupils' notebooks and various events that occurred during the class (such as opening digital documents, changing slides, writing text on an interactive whiteboard...). Pupils can thus explore their notes in conjunction with the digital documents that were presented by the teacher during the lesson. Additionally, they can also listen to what the teacher was saying when a given note was written. Finally, they can add their own comments and documents to their notebooks to extend their lecture notes. We interviewed teachers and deployed questionnaires to identify both teachers' and pupils' habits: most of the teachers use (or would like to use) digital documents in their lectures but have problems sharing these resources with their pupils. The results of this study also show that paper remains the primary medium used by pupils for keeping, sharing and editing knowledge. Based on these observations, we designed U-Note, which is built on three modules. U-Teach captures the context of the class: audio recordings, the whiteboard contents, together with the web pages, videos and slideshows displayed during the lesson. U-Study binds pupils' paper notes (taken with an Anoto digital pen) with the data coming from U-Teach and lets pupils access the class materials at home, through their notebooks. U-Move lets pupils browse lecture materials on their smartphone when they are not in front of a computer.

[23] Comparing Free Hand Menu Techniques for Distant Displays Using Linear, Marking and Finger-Count Menus Interacting with Displays / Bailly, Gilles / Walter, Robert / Müller, Jörg / Ning, Tongyan / Lecolinet, Eric Proceedings of IFIP INTERACT'11: Human-Computer Interaction 2011-09-05 v.2 p.248-262
Keywords: Finger-Counting; Depth-Camera; Public display; ITV; Menus
Link to Digital Content at Springer
Summary: Distant displays such as interactive public displays (IPD) or interactive television (ITV) require new interaction techniques, as traditional input devices may be limited or missing in these contexts. Free hand interaction, as sensed with computer vision techniques, is a promising interaction technique. This paper presents the adaptation of three menu techniques for free hand interaction: Linear menus, Marking menus and Finger-Count menus. The first study, based on a Wizard-of-Oz protocol, focuses on Finger-Counting postures in front of interactive television and public displays. It reveals that participants choose the most efficient gestures neither before nor after the experiment. The results are used to develop a Finger-Count recognizer. The second experiment shows that all techniques achieve satisfactory accuracy. It also shows that Finger-Count requires more mental demand than the other techniques.

[24] Flick-and-brake: finger control over inertial/sustained scroll motion Works-in-progress / Baglioni, Mathias / Malacria, Sylvain / Lecolinet, Eric / Guiard, Yves Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011-05-07 v.2 p.2281-2286
ACM Digital Library Link
Summary: We present two variants of Flick-and-Brake, a technique that allows users not only to trigger motion by touch-screen flicking but also to subsequently modulate scrolling speed by varying the pressure of a stationary finger. These techniques, which further exploit the metaphor of a massive wheel, provide the user with online friction control. We describe a finite-state machine that models a variety of flicking interaction styles, with or without pressure control. We report the results of a preliminary user study suggesting that, for medium- to long-distance scrolling, the Flick-and-Brake techniques require less gestural activity than standard flicking. One of the two variants of the technique is faster, but no less accurate, than state-of-the-art flicking. Users also reported that they preferred Flick-and-Brake over the standard flick and judged it more efficient. We indicate some pending issues raised by the results of this preliminary investigation.
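The finite-state machine mentioned above is not spelled out in this listing; a minimal sketch of a flick-and-brake machine might look as follows, with the state names, thresholds and friction model all being assumptions rather than the paper's design.

```python
# States: surface idle, finger dragging it, coasting after a flick, or braked
IDLE, DRAGGING, COASTING, BRAKING = "idle", "dragging", "coasting", "braking"

class FlickAndBrake:
    """A flick starts inertial scrolling (COASTING); putting a stationary
    finger back down brakes it (BRAKING), with finger pressure modulating
    the deceleration."""
    FLICK_SPEED = 300.0   # hypothetical px/s: releases above this are flicks
    BASE_FRICTION = 0.98  # per-frame velocity retention with no finger down

    def __init__(self):
        self.state, self.velocity = IDLE, 0.0

    def touch_down(self):
        if self.state == COASTING:
            self.state = BRAKING                 # stationary finger brakes
        else:
            self.state, self.velocity = DRAGGING, 0.0

    def touch_up(self, release_speed=0.0):
        if self.state == DRAGGING and release_speed >= self.FLICK_SPEED:
            self.state, self.velocity = COASTING, release_speed  # a flick
        elif self.state == BRAKING and abs(self.velocity) >= 1.0:
            self.state = COASTING                # lifted before a full stop
        else:
            self.state, self.velocity = IDLE, 0.0

    def step(self, pressure=0.0):
        """Advance one animation frame; pressure in [0, 1] adds friction."""
        if self.state in (COASTING, BRAKING):
            self.velocity *= self.BASE_FRICTION * (1.0 - 0.9 * pressure)
            if abs(self.velocity) < 1.0:
                self.state, self.velocity = IDLE, 0.0
        return self.velocity
```

The braking state is what distinguishes the technique from plain flicking: instead of waiting for friction or re-flicking, the user steers the deceleration continuously with pressure.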

[25] Visualisation interactive de données temporelles: un aperçu de l'état de l'art Articles de recherche longs (Long Research Papers) / Adjanor, Kangnikoé / Lecolinet, Eric / Guiard, Yves / Ribière, Myriam Proceedings of the 2010 Conference of the Association Francophone d'Interaction Homme-Machine 2010-09-20 p.97-104
ACM Digital Library Link
Summary: Many visualization systems have been designed and developed to address the ever-growing mass of temporal data. The multiple aspects of time (linear vs cyclic, instant vs interval, different units etc.) have been represented in different manners in existing visualization systems. A design space is thus needed to analyse and compare different visual representations used in those systems. In this article we propose a framework to describe and analyze existing temporal visual representations with emphasis on three factors: time, data and user task.