HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,905,772
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: Islam_M* Results: 17 Sorted by: Date
The Flat Finger: Exploring Area Touches on Smartwatches Fingers and Technology / Oakley, Ian / Lindahl, Carina / Le, Khanh / Lee, DoYoung / Islam, M. D. Rasel Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.4238-4249
ACM Digital Library Link
Summary: Smartwatches are an emerging device category featuring highly limited input and display surfaces. We explore how touch contact areas, such as lines generated by flat fingers, can be used to increase input expressivity in these diminutive systems in three ways. Firstly, we present four design themes that emerged from an ideation workshop in which five designers proposed concepts for smartwatch touch area interaction. Secondly, we describe a sensor unit and study that captured user performance with 31 area touches and contrasted it against standard targeting performance. Finally, we describe three demonstration applications that instantiate ideas from the workshop and deploy the most reliably and rapidly produced area touches. We report generally positive user reactions to these demonstrators: the area touch interactions were perceived as quick, convenient, and easy to learn and remember. Together, this work characterizes how designers can use area touches in watch UIs, which area touches are most appropriate, and how users respond to this interaction style.

RoadRank: Traffic Diffusion and Influence Estimation in Dynamic Urban Road Networks Short Papers: Databases / Anwar, Tarique / Liu, Chengfei / Vu, Hai L. / Islam, Md. Saiful Proceedings of the 2015 ACM Conference on Information and Knowledge Management 2015-10-19 p.1671-1674
ACM Digital Library Link
Summary: With the rapidly growing population in urban areas, urban road networks are expanding at a fast rate. The frequent movement of people on them leads to traffic congestion. Congestion originates in crowded road segments and diffuses towards other parts of the urban road network, creating further congestion. This behavior motivates the need to understand the influence that individual road segments exert on others in terms of congestion. In this work, we propose RoadRank, an algorithm that computes an influence score for each road segment in an urban road network and ranks the segments by their overall influence. It is an incremental algorithm that continually updates the influence scores over time as it is fed the latest traffic data at each time point. The method starts by constructing a directed graph called the influence graph, which is then used to iteratively compute the influence scores using probabilistic diffusion theory. We show promising preliminary experimental results on real SCATS traffic data from Melbourne.
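The abstract above describes an iterative, diffusion-style influence computation over a directed influence graph. As a rough illustration only, here is a PageRank-style sketch in Python; the update rule, damping factor, and edge weights are assumptions, since the paper's exact formulation is not given in this summary:

```python
# Hypothetical sketch: iterative influence scores on a road "influence graph".
# Not the authors' algorithm -- a generic weighted PageRank-style update.

def influence_scores(edges, n_iter=50, damping=0.85):
    """edges maps a road segment to a list of (downstream segment, traffic weight)."""
    nodes = set(edges)
    for targets in edges.values():
        nodes.update(t for t, _ in targets)
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}           # uniform initialization
    for _ in range(n_iter):
        new = {v: (1.0 - damping) / n for v in nodes}
        for u, targets in edges.items():
            total = sum(w for _, w in targets) or 1.0
            for v, w in targets:                   # diffuse u's score downstream,
                new[v] += damping * score[u] * (w / total)  # proportionally to weight
        score = new
    return score

# Rank segments by overall influence:
g = {"A": [("B", 2.0), ("C", 1.0)], "B": [("C", 1.0)], "C": []}
scores = influence_scores(g)
ranked = sorted(scores, key=scores.get, reverse=True)
```

In this toy graph, segment C receives diffusion from both A and B and so ranks highest.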

Understanding the Semantics of Web Interface Signs: A Set of Ontological Principles Designing the Social Media Experience / Islam, Muhammad Nazrul / Islam, A. K. M. Najmul DUXU 2015: Fourth International Conference on Design, User Experience, and Usability, Part III: Interactive Experience Design 2015-08-02 v.3 p.46-53
Keywords: Ontology; Web interface sign; Web usability; User interface design; Usability evaluation
Link to Digital Content at Springer
Summary: Interface signs are the communication artifacts of web interfaces with which users interact. Examples of interface signs are small images, navigational links, buttons, and thumbnails. Although intuitive interface signs are crucial elements of a good user interface (UI), prior research has ignored them in the UI design and usability evaluation process. This paper argues that ontology (the set of concepts and skills needed to understand the referential meaning of an interface sign) mapping is critical for intuitive sign design. A lightweight experiment with six participants and twelve signs was carried out to demonstrate the importance of ontology mapping in understanding the semantics of interface signs. The paper concludes with some practical implications and suggestions for future research.

Beats: Tapping Gestures for Smart Watches Smartwatch Interaction / Oakley, Ian / Lee, DoYoung / Islam, MD. Rasel / Esteves, Augusto Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.1237-1246
ACM Digital Library Link
Summary: Interacting with smartwatches poses new challenges. Although capable of displaying complex content, their extremely small screens poorly match many of the touchscreen interaction techniques dominant on larger mobile devices. Addressing this problem, this paper presents beating gestures, a novel form of input based on pairs of simultaneous or rapidly sequential and overlapping screen taps made by the index and middle finger of one hand. Distinguished simply by their temporal sequence and relative left/right position, these gestures are designed explicitly for the very small screens (approx. 40 mm square) of smartwatches and to operate without interfering with regular single-touch input. This paper presents the design of beating gestures and a rigorous empirical study that characterizes how users perform them -- in a mean of 355 ms and with an error rate of 5.5%. We also derive thresholds for reliably distinguishing between simultaneous (under 30 ms) and sequential (under 400 ms) pairs of screen touches or releases. We then present five interface designs and evaluate them in a qualitative study in which users report valuing the speed and ready availability of beating gestures.
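The timing thresholds reported above (under 30 ms for simultaneous, under 400 ms for sequential touch pairs) can be illustrated with a small classifier; the function name and the treatment of slower pairs as independent taps are assumptions for the example:

```python
# Illustrative classifier for a pair of touch timestamps, using the
# thresholds from the abstract. Names and the "independent" fallback
# are assumptions, not the paper's implementation.

def classify_pair(t1_ms, t2_ms, simultaneous_ms=30, sequential_ms=400):
    gap = abs(t2_ms - t1_ms)
    if gap < simultaneous_ms:
        return "simultaneous"   # both fingers land together (a chord)
    if gap < sequential_ms:
        return "sequential"     # rapid, overlapping one-two tap
    return "independent"        # too slow: treat as ordinary single taps
```

For example, two touches 20 ms apart classify as simultaneous, 100 ms apart as sequential.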

End-to-End High Speed Forward Error Correction Using Graphics Processing Units Computational Awareness for Telecommunication/Energy-Efficient Systems / Islam, Md Shohidul / Kim, Jong-Myon MUSIC 2013: Mobile, Ubiquitous, and Intelligent Computing 2013-09-04 p.47-53
Keywords: Real-time wireless communication; multiple bit error FEC; extended Hamming code; GPU
Link to Digital Content at Springer
Summary: Forward error correction (FEC) is an efficient error recovery mechanism for wireless networks in which erroneous packets are corrected at the destination node. More importantly, real-time, high-speed wireless networks require fast error recovery to ensure quality of service (QoS). Since graphics processing units (GPUs) offer a massively parallel computing platform, we propose a GPU-based parallel error control mechanism using an extended Hamming code that supports single-bit as well as multiple-bit error correction. We compare the performance of the proposed GPU-based approach with the equivalent sequential algorithm running on a traditional CPU for error strengths t such that 1 ≤ t ≤ 7. Experimental results demonstrate that the proposed GPU-based approach outperforms the sequential approach in terms of execution time. Moreover, the proposed parallel implementation yields a significant reduction in computational complexity, from O(n³) for the sequential algorithm to O(n) for the GPU-based approach, leading to a tremendous speedup.
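The paper applies an extended Hamming code with multi-bit correction on a GPU; as a minimal CPU-side illustration of the underlying single-error-correcting idea, here is the classic textbook Hamming(7,4) encoder/decoder (not the authors' extended variant):

```python
# Textbook Hamming(7,4): 4 data bits, 3 parity bits, corrects any single
# bit error. Shown only to illustrate the FEC principle from the abstract.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4            # parity over positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 1-based position of the flipped bit
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]   # extract the data bits
```

Flipping any one of the seven codeword bits still decodes to the original data, which is what makes per-codeword decoding embarrassingly parallel and thus a good GPU fit.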

Accelerating Adaptive Forward Error Correction Using Graphics Processing Units Frontier Computing -- Theory, Technologies and Applications / Islam, Md Shohidul / Kim, Jong-Myon MUSIC 2013: Mobile, Ubiquitous, and Intelligent Computing 2013-09-04 p.591-597
Keywords: High-speed real-time wireless communication; packet corruption; AFEC; Hamming code; GPU
Link to Digital Content at Springer
Summary: The demand for error-free, high-speed, real-time wireless communication is mounting day by day. Adaptive forward error correction (AFEC) is an error control mechanism in which corrupted packets are automatically corrected at the destination end. Graphics processing units (GPUs) offer a highly parallel computing platform, and in this paper we propose a GPU-based AFEC approach for fast error recovery. We develop a massively parallel AFEC algorithm on the GPU and compare its performance with an equivalent serial algorithm running on a traditional CPU. Experimental results demonstrate that the proposed GPU-based AFEC approach greatly outperforms the sequential approach, yielding a significant reduction in execution time while improving buffer utilization. In addition, the proposed GPU-based approach achieves an average speedup of 74X over the sequential CPU algorithm while reducing the computational complexity from O(n³) for the sequential algorithm to O(n) by using the single instruction multiple data (SIMD) based GPU.

Towards Exploring Web Interface Sign Ontology: A User Study HCI Design Approaches, Methods and Techniques / Islam, Muhammad Nazrul HCI International 2013: 15th International Conference on HCI: Posters' Extended Abstracts Part I 2013-07-21 v.6 p.41-45
Keywords: Semiotics; web usability; user interface design; web sign ontology
Link to Digital Content at Springer
Summary: The smallest elements of a web user interface (UI), such as navigation links, buttons, icons, labels, thumbnails, and symbols, are defined in this paper as interface signs. The term ontology refers to the set of concepts and skills a user should possess in order to understand the meaning of an interface sign. Designers should be aware of web interface sign ontology to design user-intuitive web interface signs and to understand what presupposed knowledge end users draw on to interpret them. The objective of this research is to reveal the set of ontologies present in web UIs and the complexity different ontological signs pose for interpreting the meaning of web interface signs from a semiotics perspective. Towards these research goals, a user study was replicated with 26 participants. So far, a preliminary analysis has been performed on 13 participants' data, and this work-in-progress paper reports the preliminary outcomes.

Towards Determinants of User-Intuitive Web Interface Signs Design Philosophy / Islam, Muhammad Nazrul DUXU 2013: 2nd International Conference on Design, User Experience, and Usability, Part I: Design Philosophy, Methods, and Tools 2013-07-21 v.1 p.84-93
Keywords: Semiotics; interface sign; web usability; user interface design; web sign ontology
Link to Digital Content at Springer
Summary: User interfaces of web applications encompass a number of objects, such as navigation links, buttons, icons, labels, thumbnails, and symbols, which are defined in this paper as interface signs. Designing interface signs to be intuitive to users is widely accepted to have a significant effect on enhancing web usability. Interface sign design principles are semiotic by nature, as semiotics is the doctrine of signs. Thus, the fundamental objective of this study is to reveal the determinants of user-intuitive interface signs for enhancing web usability from a semiotics perspective. To attain this objective, an extensive user study was conducted with twenty-six participants following a semi-structured interview approach. The preliminary results provide a number of determinants, and their attributes, for properly interpreting the meaning of interface signs.

Thematic organization of web content for distraction-free text-to-speech narration Screen reader usage / Islam, Muhammad Asiful / Ahmed, Faisal / Borodin, Yevgen / Ramakrishnan, I. V. Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012-10-22 p.17-24
ACM Digital Library Link
Summary: People with visual disabilities, especially those who are blind, have digital content narrated to them by text-to-speech (TTS) engines (e.g., with the help of screen readers). Naively narrating web pages with TTS engines, particularly ones consisting of several diverse pieces (e.g., news summaries, opinion pieces, taxonomy, ads), without organizing them into thematic segments makes it very difficult for a blind user to mentally separate out and comprehend the essential elements in a segment, and the effort to do so can cause significant cognitive stress. One can alleviate this difficulty by segmenting web pages into thematic pieces and then narrating each of them separately. Extant segmentation methods typically segment web pages using visual and structural cues. The use of such cues, without taking into account the semantics of the content, tends to produce "impure" segments containing extraneous material interspersed with the essential elements. In this paper, we describe a new technique for identifying thematic segments by tightly coupling visual, structural, and linguistic features present in the content. A notable aspect of the technique is that it produces segments with very little irrelevant content. Another interesting aspect is that the clutter-free main content of a web page, as produced by the Readability tool and the "Reader" feature of the Safari browser, emerges as a special case of the thematic segments created by our technique. We provide experimental evidence of the effectiveness of our technique in reducing clutter. We also describe a user study with 23 blind subjects on its impact on web accessibility.

Accessible skimming: faster screen reading of web pages Interactions II / Ahmed, Faisal / Borodin, Yevgen / Soviak, Andrii / Islam, Muhammad / Ramakrishnan, I. V. / Hedgpeth, Terri Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012-10-07 v.1 p.367-378
ACM Digital Library Link
Summary: In our information-driven web-based society, we are all gradually falling "victims" to information overload [5]. However, while sighted people are finding ways to sift through information faster, Internet users who are blind are experiencing an even greater information overload. These users access computers and the Internet using screen-reader software, which reads the information on a computer screen sequentially using computer-generated speech. While sighted people can learn to quickly glance over headlines and news articles online to get the gist of the information, people who are blind have to use keyboard shortcuts to listen through the content narrated by a serial audio interface. This interface gives them no opportunity to know what content to skip and what to listen to. So, they either listen to all of the content or listen to the first part of each sentence or paragraph before skipping to the next one. In this paper, we propose an automated approach to facilitate non-visual skimming of web pages. We describe the underlying algorithm, outline a non-visual skimming interface, and report on the results of automated experiments, as well as on our user study with 23 screen-reader users. The results of the experiments suggest that we have been moderately successful in designing a viable algorithm for automatic summarization that can be used for non-visual skimming. In our user studies, we confirmed that people who are blind could read and search through online articles faster and were able to understand and remember most of what they read with our skimming system. Finally, all 23 participants expressed genuine interest in using non-visual skimming in the future.

Tightly coupling visual and linguistic features for enriching audio-based web browsing experience Poster session: information retrieval / Islam, Muhammad Asiful / Ahmed, Faisal / Borodin, Yevgen / Ramakrishnan, I. V. Proceedings of the 2011 ACM Conference on Information and Knowledge Management 2011-10-24 p.2085-2088
ACM Digital Library Link
Summary: People who are blind use screen readers for browsing web pages. Since screen readers read out content serially, a naive readout tends to mix irrelevant and relevant content thereby disrupting the coherency of the material being read out and confusing the listener. To address this problem we can partition web pages into coherent segments and narrate each such piece separately. Extant methods to do segmentation use visual and structural cues without taking the semantics into account and consequently create segments containing irrelevant material. In this paper, we describe a new technique for creating coherent segments by tightly coupling visual, structural, and linguistic features present in the content. A notable aspect of the technique is that it produces segments with little irrelevant content. Preliminary experiments indicate that the technique is effective in creating highly coherent segments and the experiences of an early adopter who is blind suggest that it enriches the overall browsing experience.

Upper Body Gesture Recognition for Human-Robot Interaction Gaze and Gesture-Based Interaction / Oh, Chi-Min / Islam, Md. Zahidul / Lee, Jun-Sung / Lee, Chil-Woo / Kweon, In-So HCI International 2011: 14th International Conference on Human-Computer Interaction, Part II: Interaction Techniques and Environments 2011-07-09 v.2 p.294-303
Link to Digital Content at Springer
Summary: This paper proposes a vision-based human-robot interaction system for a mobile robot platform. The mobile robot first finds a person who wants to interact with it. Once it finds a subject, the robot stops in front of him or her and interprets his or her upper body gestures. We represent each gesture as a sequence of body poses, and the robot recognizes four upper body gestures: "Idle", "I love you", "Hello left", and "Hello right". A key-pose-based particle filter determines the pose sequence, with key poses sparsely collected from the pose space. A pictorial-structure-based upper body model represents the key poses, which are used to build an efficient proposal distribution for the particle filtering. Thus, the particles are drawn from the key-pose-based proposal distribution for effective prediction of the upper body pose. The Viterbi algorithm estimates the gesture probabilities with a hidden Markov model. The experimental results show the robustness of our upper body tracking and gesture recognition system.

Assistive web browsing with touch interfaces Posters and Demonstrations / Ahmed, Faisal / Islam, Muhammad Asiful / Borodin, Yevgen / Ramakrishnan, I. V. Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010-10-25 p.235-236
ACM Digital Library Link
Summary: This demonstration proposes a touch-based directional navigation technique on touch interfaces (e.g., iPhone, MacBook) for people with visual disabilities, especially blind individuals. Such interfaces, coupled with TTS (text-to-speech) systems, open up intriguing possibilities for browsing and skimming web content with ease and speed. Apple's seminal VoiceOver system for iOS is an exemplar of bringing touch-based web navigation to blind people, but it has two major shortcomings, the "fat finger" and "finger fatigue" problems, which this paper addresses with two proposed approaches. A preliminary user evaluation of a system incorporating these ideas suggests that they can be effective in practice.

Mixture model based label association techniques for web accessibility AI and toolkits / Islam, Muhammad Asiful / Borodin, Yevgen / Ramakrishnan, I. V. Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010-10-03 p.67-76
Keywords: aural web browser, blind user, context, mixture models, screen reader, web accessibility, web forms
ACM Digital Library Link
Summary: An important aspect of making the Web accessible to blind users is ensuring that all important web page elements such as links, clickable buttons, and form fields have explicitly assigned labels. Properly labeled content is then correctly read out by screen readers, a dominant assistive technology used by blind users. In particular, improperly labeled form fields can critically impede online transactions such as shopping, paying bills, etc. with screen readers. Very often labels are not associated with form fields or are missing altogether, making form filling a challenge for blind users. Algorithms for associating a form element with one of several candidate labels in its vicinity must cope with the variability of the element's features, including the label's location relative to the element, its distance to the element, etc. Probabilistic models provide natural machinery for reasoning with such uncertainties. In this paper we present a Finite Mixture Model (FMM) formulation of the label association problem. The variability of feature values is captured in the FMM by a mixture of random variables drawn from parameterized distributions. Then, the most likely label to be paired with a form element is computed by maximizing the log-likelihood of the feature data using the Expectation-Maximization algorithm. We also adapt the FMM approach for two related problems: assigning labels (from an external Knowledge Base) to form elements that have no candidate labels in their vicinity, and quickly identifying clickable elements such as add-to-cart, checkout, etc., used in online transactions even when these elements have no textual captions (e.g., image buttons without alternative text). We provide a quantitative evaluation of our techniques, as well as a user study with two blind subjects who used an aural web browser implementing our approach.
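The association idea above -- picking the candidate label that maximizes likelihood under per-feature distributions -- can be illustrated with a toy example. The Gaussian feature models and parameters below are invented for illustration; the actual system learns a Finite Mixture Model over richer features with Expectation-Maximization:

```python
# Toy sketch: score candidate labels for a form field by log-likelihood
# under simple per-feature Gaussians and pick the argmax. All models and
# parameters here are made up; not the paper's learned FMM.
import math

def gaussian_logpdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def best_label(candidates, dist_model=(20.0, 15.0), side_model=(1.0, 0.5)):
    """candidates: list of (label_text, pixel_distance, is_left_or_above)."""
    def loglik(c):
        _, dist, side = c
        # Independence across features is assumed for this toy example.
        return gaussian_logpdf(dist, *dist_model) + gaussian_logpdf(side, *side_model)
    return max(candidates, key=loglik)[0]
```

A nearby label on the expected side ("Name", 18 px away, left of the field) then beats a distant one ("Submit", 120 px away).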

Multi-modal sensing smart spaces embedded with WSN based image camera Workshop on Workflow and Event Analysis for Assistive Environments / Hwang, Sun-Min / Kim, Kyu-Jin / Islam, Md. Motaharul / Huh, Eui-Nam / Huang, W. / Foo, V. / Tolstikov, A. / Aung, Aung / Jayachandran, M. / Biswas, J. Proceedings of the 3rd International Conference on PErvasive Technologies Related to Assistive Environments 2010-06-23 p.63
Keywords: feature extraction, multi-modal, recognition, sensor image camera
ACM Digital Library Link
Summary: In this paper we discuss the use of low-frame-rate image cameras on a WSN to gather micro-context information in smart homes and smart living spaces for the elderly. These simple devices are an attractive alternative to their more heavy-duty counterparts since they can gather ambient image data at a rate amenable to the space they are in, without much infrastructural support or modification. We propose their use in a multi-modal sensing environment where information from other ambient sensors can be mixed and matched to provide intelligence about the space and the activities of the subjects within it. Their light weight and ease of mobility make them a good candidate for a multi-modal sensing smart space. In this paper we introduce our work on the architecture of the smart space and the implementation of feature extraction using the image camera.

Hearsay: a new generation context-driven multi-modal assistive web browser WWW 2010 demos / Borodin, Yevgen / Ahmed, Faisal / Islam, Muhammad Asiful / Puzis, Yury / Melnyk, Valentyn / Feng, Song / Ramakrishnan, I. V. / Dausch, Glenn Proceedings of the 2010 International Conference on the World Wide Web 2010-04-26 v.1 p.1233-1236
Keywords: assistive browser, audio interface, blind users, multi-modal, screen reader, web accessibility
ACM Digital Library Link
Summary: This demo will present HearSay, a multi-modal non-visual web browser, which aims to bridge the growing Web Accessibility divide between individuals with visual impairments and their sighted counterparts, and to facilitate full participation of blind individuals in the growing Web-based society.

New Integrated Framework for Video Based Moving Object Tracking Ambient Interaction / Islam, Md. Zahidul / Oh, Chi-Min / Lee, Chil-Woo HCI International 2009: 13th International Conference on Human-Computer Interaction, Part III: Ambient, Ubiquitous and Intelligent Interaction 2009-07-19 v.3 p.423-432
Link to Digital Content at Springer
Summary: In this paper, we describe a novel approach that improves a moving-object tracking system based on a particle filter by combining shape similarity and color histogram matching in a new integrated framework. The shape similarity between a template and estimated regions in the video sequence is measured by the normalized cross-correlation of their distance-transform image maps. The observation model of the particle filter is based on shape from distance-transformed edge features combined with color information. The target object to be tracked forms the reference color window, and its histogram is calculated and used to compute the histogram distance while performing a deterministic search for the matching window. For both shape and color matching, the reference template window is created instantly by selecting any object in a video scene and is updated in every frame. Experimental results are presented to show the effectiveness of the proposed method.
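The color-matching half of the observation model above can be illustrated with a histogram comparison. The Bhattacharyya distance used here is a common choice in color-histogram tracking, though the abstract does not name the paper's exact distance measure, and the bin count is an assumption:

```python
# Illustrative color-matching step for histogram-based tracking: build a
# normalized histogram for a window and compare two windows with the
# Bhattacharyya distance. Bin count and 8-bit value range are assumptions.
import math

def normalized_hist(pixels, bins=8):
    hist = [0] * bins
    for v in pixels:                     # v: intensity/hue value in [0, 256)
        hist[v * bins // 256] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def bhattacharyya_distance(h1, h2):
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))   # overlap coefficient
    return math.sqrt(max(0.0, 1.0 - bc))                 # 0 = identical, 1 = disjoint
```

A candidate window whose histogram is close to the reference window's (distance near 0) gets a high observation weight in the particle filter.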